Week 3 Summary: 11 – 15 May 2009


Kevin – Identifying trends and signatures, and their meanings, could be useful.  Identify what's useful in a developing storm at initiation.

Chris – Liked the trends in Google Earth.

Steve – VLMA still needs correlation between its display and other signatures and severe weather events.  The benefit of a satellite-based sensor in areas already covered by regular LMA sensors is unknown.

Kristen – LMAs are line-of-sight, so they'd need to be sited smartly in complex topography.

John – Would anticipate what its coverage would be in poor radar coverage areas, and also in coastal areas for advance warning of lightning for mariners and beach-goers.  Concurs that more research is needed.

What do they want to see?  Trending, and manual intervention to define the trending area (e.g., a distance-speed tool or a box).


John – Anticipates the usefulness of 3DVAR and multiple radars, but also a big education challenge.  Would like to see vertical cross-sections.  Would prefer a 2-min update if they could get some vertical structure.

Steve – Would like cross-sections from multi-radar data, including CASA.  Wednesday night was a significant stepping stone, much as Red Rock was in 1991.  Would like to time-match the 88D, CASA, and PAR in one 4-panel.

Kevin – CASA saw the circulations 3–5 min ahead of the 88D in Gracemont.  Kevin walked into the WFO concerned about whether they were seeing it, but they already had the TOR drafted.  WFO forecasters also walked into the HWT to view the CASA data.

Concerning the RFD winds in Anadarko, CASA really showed the winds, and the 3DVAR picked up on them as well.  KTLX showed no RFD winds.

Chris – Concurs with the benefits of rapid update.  Drawback is the attenuation.

CASA wishlist:  Can a manual scanning override be accomplished?  This is a workload strategy that could be accomplished by having someone managing scanning while the other interprets storm structure.  The software would need to be made simple.

Would an attenuation product be useful?  Unknown at this time.


Kevin:  Liked the mid- and upper-level resolution.  Midlevel mesos tightened up more quickly in PAR than in the 88D.  Did not see any adverse effects of adaptive scanning.  Would perhaps like to see a more frequent full volume scan in explosive initiation environments.  The edge of the sector had somewhat more velocity problems.

Chris:  A lot more features were visible on PAR.

Steve:  Data quality was better in this year's experiment.

John:  Anticipates much better detection of descending cores in low-shear events.  Super-res is worth having even if it takes more time.

What tradeoff are the forecasters willing to make if one is required to see clear boundaries around the storm?  They are willing to consider a tradeoff, depending on how important it is to see the boundaries.

Kevin:  Is CASA network refractivity conceivable?  In the Feb 10 case, refractivity could've explained why the following supercells weren't tornadic.  Could it compensate for sacrificing scanning frequency for sensitivity?

Chris:  Wondered about the impact of the 2.1-deg beamwidth on the edges of sectors.  There was some impact, but it would be limited in the future with a hardware upgrade.

Adam wondered which scanning strategies were most useful.  Not much difference; all were good, and 15 elevations was sufficient.

Kevin:  Changing the PRF could be made easier to do, but he liked the immediate feedback in the PAR.


Steve:  What would be best?  Get it into AWIPS in general.

Kevin:  Not a pleasant thought, thinking about not using the products.  MESH; 30 and 50 dBZ heights above 0 C and -20 C; reflectivity at 0 and -20 C.  Kevin doesn't look at rotation tracks as much as others in the office, though they use them for post-mortems.

The MESH and rotation tracks were used to track motion.  Kevin noticed how big the polygons are compared to the actual tracks.

Chris:  MESH and reports seem to coincide.  Rotation tracks showed the strongest tracks were co-located with tornadoes.

John:  MESH and rotation tracks were useful to call in and verify reports.

Merging LMA with multi-radar data?

General comments:

Steve liked the jobsheets for WDSSII.  Should be done on day 1.

Need better AWIPS localization to bring up map products on 4-panels without procedures.

Intro seminars – Good; no death by PowerPoint.

AWIPS introduction was an added bonus.

Give forecasters a chance to customize.  They could bring their own procedures, and could use the 'alter functions' to change model type or radar ID.

International visitors could benefit from an hour or two of software spinup.

Get SVS capability.

Additional “lost” notes found in an archived draft never published until now:

LMA considerations:  Don't know what VLMA intensity to consider in warning issuance.  Screen real estate issues.  Would like the source points.  Swears that supercell ID can be done with VLMA and 0.5-deg radar data.  From a Canadian point of view, if a radar goes out there's no extra data.  VLMA was very useful for seeing lightning frequency going up just before the meso increased north of Norman; MESH would increase after the lightning peaked.  VLMA would indicate splits about to occur, and might give more confidence about which updraft might become dominant.  Reflectivity may be quicker to develop, but there are a lot of teaser cores, whereas VLMA gives a larger view.  The one-minute update in VLMA was really helpful for getting the warning out more quickly.

PAR considerations:  So much to see and tremendous detail; they spent the whole night looking at base data.  Can animate a cross-section to see the cores going up and then making the plunge.  Were able to change the PRF and see the changes almost instantly.  The scanning strategies were pumping data so fast that they were not finished looking at the upper tilts by the time new low tilts came in.  Super-res at high tilts revealed things not seen before.

CASA:  3DVAR was very useful, showing a meso in all physical dimensions.  Could see the meso forming on the forward flank of the storm.  The WFO came in to see the CASA display of the mesocyclone wrapping up.  Could see vortex holes and multiple velocity couplets.  Adaptive scanning was easy.  They looked at individual CASA radars, but the composite CASA was used most often.  3DVAR does have a 10-min latency, so it was used as confirming evidence.  A long-range concern is how many people are needed to monitor a CWA full of CASA radars.  How should multiple base velocity fields be displayed?  3DVAR is one way; shear and divergence products are another.  Composite vs. single radar in CASA: the single radar showed a much more pronounced hook, but then there are attenuation issues that the composite helps overcome.

No EMs called last night concerning CASA.

Need procedures.  Or better, need to be able to load specific maps even without procedures.  Everything was geared toward issuing warnings earlier.

Jim LaDue (EWP Weekly Coordinator, 11-15 May 2009)

Tags: None

Week 2 Summary: 4 – 8 May 2009


-Trends from MR/MS grids provide an excellent tool for the mesoanalyst and for situational awareness, given the capability to see long-term trends

-Averaging trends in a sector/area to see the “average storm behavior” might be helpful

-Grids of the time change fields might be helpful (i.e., the change in MESH for a storm in the past 10 minutes)

-Probability of lightning not useful with the put-get type advection; a forecast swath would be much better to see all areas over which the storm might move in the next N minutes

-Divergence fields for MARC signature

-Need to investigate how radar coverage affects MR/MS grids

-Send MR/MS grids to WDTB to help expose them to more forecasters?

-Forecasters seem to have their own preferences for what levels and echos they would like to see (i.e., maybe dBZ @ -30C instead of -20C).  MR/MS set up needs to be configurable on the fly.

-Will need a salesman at each office for MR/MS.  They need to be knowledgeable and experienced with MR/MS so that they can sell it to other forecasters.
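The forecast-swath idea above (showing all areas a storm might move over in the next N minutes, rather than advecting a field to a single future position) can be illustrated with a minimal sketch. The function name, grid cells, and numbers below are purely illustrative assumptions, not part of any MR/MS product:

```python
# Hypothetical sketch of a "forecast swath": union all positions the storm
# may occupy over the next N minutes, instead of one advected snapshot.

def forecast_swath(centroid, motion, minutes, radius_km, step_min=1.0):
    """Return a set of (x, y) 1-km cells the storm may cover in `minutes`.

    centroid  -- (x, y) current storm position in km
    motion    -- (u, v) storm motion in km/min
    minutes   -- forecast horizon N
    radius_km -- assumed storm radius (uncertainty buffer)
    """
    cells = set()
    t = 0.0
    while t <= minutes:
        cx = centroid[0] + motion[0] * t
        cy = centroid[1] + motion[1] * t
        r = int(radius_km)
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                if dx * dx + dy * dy <= radius_km * radius_km:
                    cells.add((round(cx) + dx, round(cy) + dy))
        t += step_min
    return cells

# A storm at the origin moving east at 1 km/min sweeps a 30-km swath.
swath = forecast_swath((0, 0), (1.0, 0.0), minutes=30, radius_km=5)
```

A real implementation would grow the buffer with lead time to reflect motion uncertainty; the fixed radius here keeps the sketch short.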


-Too many panels; all the information took away from the experience with CASA data

-Idea for WG: move to tabs to help sort radar data?

-For the wind study, it was suggested to get experienced forecasters (i.e., forecasters from offices that experience derechos or whose main climatological threat is wind) to evaluate how useful the CASA data is for the wind threat.


-More trends of the data

Overall/General Comments:

-One team member on Google Earth and one team member on AWIPS worked well

-AWIPS for canned cases

-Visiting forecaster: Can we get flexible shifts for real time events?

Kiel Ortega (EWP Weekly Coordinator, 4-8 May 2009)


Week 1 Summary: 27 April – 1 May 2009


We just wrapped up the first week of the 2009 EWP Spring Program. It was a very productive week, and our visiting forecasters were able to get some experience with all four experiments either through archive cases or real-time events.  We had real-time events each of the 4 operations days as broad southwesterly flow, coupled with ample low-level moisture, was present all week.


Steve Cobb – NWS WFO Lubbock TX (LUB)

Suzanne Fortin – NWS WFO Pleasant Hill MO (EAX)

IOP Summary:

Monday – Expected to work the PAR and CASA data on a developing line of severe storms in Central Oklahoma, but they died early, and we were left with little significant live data to utilize. Thus, the evening was spent mostly with archive case analysis.

Tuesday – A Multi-Radar/Multi-Sensor (MRMS) algorithm IOP in the latter half of the shift, centered on two isolated supercells in Southeast New Mexico.

Wednesday – A MRMS algorithm IOP in the latter half of the shift centered on severe and tornadic storms between Lubbock and Childress TX.

Thursday – A late IOP for a single isolated supercell in Western Oklahoma, as viewed by the PAR. MRMS algorithm products were also used in conjunction with the PAR data to issue warnings.

LMA Discussion:

The forecasters felt it was useful to compare the LMA products to the other multi-radar/sensor products. In fact, the same was said when using the PAR data to issue warnings. The forecasters were more comfortable with an integrated approach – one that includes all the experimental data.

A comment initiated a short discussion on whether we should be trying to issue experimental lightning warnings.

Multi-radar/Sensor Algorithms Discussion:

The MRMS products increased their ability to diagnose the storms versus using just the base data alone. The forecasters were quite pleased with the hands-on demo of each of the MRMS products that Greg gave on Monday. This greatly helped them understand what each of the products meant, how to use them for warning decision making, and how to properly combine various products. On the latter point, the forecasters commented that NSSL should develop a few default AWIPS procedures, with multi-parameter and multi-panel image loads available to new forecasters each week. One forecaster felt it would be good if some of the future forecasters got to practice with an event that moved over one radar, with the other radars “filling in” the 3D MRMS grids. Each forecaster concentrated on their “favorite products” and thus did not evaluate each and every product individually. This is not a bad thing, and is good to know!

Finally, one forecaster commented that the introduction of these new products to operations should be done very carefully. If not, forecasters might find that the products fall too far outside their comfort zones, and will push the new products aside. These first impressions can sometimes last a while.

PAR and CASA Discussion:

There wasn’t much additional discussion on PAR and CASA during our Friday morning session, since they were adequately covered in the Thursday debriefing. The underlying theme with both the PAR and CASA data was that the data refresh rate was occasionally too fast to manage, yet having the more-frequent updates allowed the forecasters to better diagnose the evolution of the severe weather and tornadic signatures.

Project logistics Discussion:

The forecasters noted that having the WDSSII MRMS data in AWIPS helped with the analysis immensely, and they were grateful that we facilitated this in the testbed this year.

They noted that it was nice to be able to use the WDSSII GUI (‘wg’), which is like peering “under the hood” of the more-familiar (to WFO mets) Four-dimensional Stormcell Investigator (FSI). They commented that some of the ‘wg’ features might be incorporated into a future build of the FSI. One suggestion was to provide linked cursors between the FSI and AWIPS D2D.

One forecaster noted that any forecaster might have a slightly difficult time adjusting to issuing warnings for a County Warning Area (CWA) with which they are unfamiliar, since there is a wide range of “comfort zones” with each forecaster and/or each WFO. They also suggested asking forecasters to email their AWIPS procedures ahead of time so they can be loaded on the HWT machines.

The value of Google Earth to illustrate multi-parameter trends was mentioned.

The forecasters felt the schedule was not too demanding, although they hoped that the NWSEO could allow for some flexibility in the shift schedule to accommodate the storms’ schedules.

Having the cognizant scientist “mentors” provide another overview of the products during the 30-minute pre-IOP spin-up was found to be very useful. One forecaster also suggested that we provide an “Area Weather Update” during the 30-minute spin-up to orient the “new forecast shift” to the situation. That forecaster also wanted the ability to issue polygon-based Special Weather Statements (SPS), which could be used for Significant Weather Updates.

The forecasters liked the discussions, as learning comes best from discussion.

Friday Brown-bag lunch seminar abstracts/titles:

Our visiting forecasters each opted to not provide a seminar this week, and thus the brown-bag lunch was canceled.

Final thoughts from the weekly coordinator:

I’ve discovered that being the overall experiment operations coordinator, plus being the weekly coordinator for week 1, was a little too much – there were many experiment logistics loose ends that needed to be tied up and fires to put out. Next year, I will do the weekly coordinator stint a little later in the experiment period. Otherwise, I think we have been much better prepared this spring compared to 2008, even given our big transition to AWIPS, and we’re ready to roll for the next 5 weeks of the experiment.

Greg Stumpf (EWP Weekly Coordinator, 27 Apr – 1 May 2009)


Week 6 Summary: 2-6 June 2008

Visiting forecasters this week were George Phillips (SOO, WFO Topeka), Jon Hitchcock (Forecaster, WFO Buffalo), and Chris Sohl (Lead Forecaster, WFO Norman). In addition, Milovan Radmanovac from the Weather Service in Serbia also participated. John Ferree (OCWWS) was an observer. Other participants who supported testing included Angelyn Kolodziej, Kevin Scharfenberg, Travis Smith, Greg Stumpf, Jerry Brotzge, Dave Priegnitz, Kevin Manross, Pam Heinselman, Rick Hluchan, David Pepyne. Liz Quoetone and Kiel Ortega were Weekly Coordinators.

Weather during the week was predominantly outside the Oklahoma Testbed. Forecasters spent the first half of each shift on training or simulations for Phased Array and CASA. Monday through Wednesday evening IOPs were Probabilistic Warning events generally associated with weather in the central and southern Plains. Thursday both teams did the Prob Warn exercise from Grand Forks. This was followed by the only live event to make it into the testbed. Storms reached the PAR network and were viewable (albeit at longer ranges) for a couple hours. Storms ultimately reached the CASA network with less than an hour remaining in the shift.

General observations:

Prob Warn

Participants got very comfortable with the technological end of this process such that by the end of the week, they were putting out multiple threat areas for the same storm and keeping track of things. Both groups used AWIPS to interrogate storms and WDSS II for assigning the actual threat area. This seemed like a good way to keep the technology from getting overwhelming, as well as involving both members of the team. Some discussion involved how the users would interpret these probabilities. The idea was that most would have a trigger point for various actions based on the probabilities. However, once the forecaster becomes aware of those trigger points, does this factor into the assigned probabilities (even though we are technically not supposed to consider any societal aspects of this)?

Forecasters experimented with probabilities, with one team issuing a 100% hail threat for 60 minutes with no degradation. This was associated with a long-lived HP supercell, and obviously confidence was high. The group did some wind threats but speculated that these could be much more complicated in squall-line situations (which nature did not afford us during the live events).


Scientists captured more thorough findings from the group but in general, everyone found the temporal improvements to be beneficial. While the improved resolution was also a plus, it was felt that this was somewhat offset by the release of the Super Res products in the 88D. Some mentioned the sector scanning for CASA as not very useful. Participants talked about the small scale (time and space) features that you could see with each radar and the benefits of perhaps getting a warning out earlier, but also weighing this against the inclination to overwarn on features which are transient and not associated with severe weather. A definite learning curve.

The Friday wrap-up session was followed by two talks:

“December 19, 2004 Southern Lake Michigan Single Band” Jon Hitchcock, Buffalo NY

“Radar Network and Hail Suppression System in Serbia” Milovan Radmanovac, Serbia

Liz Quoetone (EWP Weekly Coordinator, 2-6 June)


Week 5 Summary: 27-30 May 2008

In week 5, the Experimental Warning Program reached out to Canada and the Pacific Northwest, bringing in meteorologists with quite unique perspectives on thunderstorms and warning operations. Our forecaster/evaluators included Brad Colman, Meteorologist in Charge at the Seattle Washington NWSFO, Eric Stevens, Science and Operations Officer at the Fairbanks, Alaska NWSFO, and Mark Melsness of Environment Canada in Winnipeg. Adding some local experience to the group, we had Kevin Brown, Senior Forecaster at the Norman, Oklahoma NWSFO. And I am Patrick Burke, General Forecaster at the Norman NWSFO; I served as Weekly Coordinator, but also as an evaluator for Thursday’s operations.

Although the Memorial Day Holiday shortened Week 5 to three and a half days, the group was able to work on all three experimental data platforms, including plenty of live data – particularly for probabilistic warnings.

Tuesday initially showed some promise for a Central OK intensive operations period (IOP), so we ran with a game plan to complete PAR and CASA training. The group sat down for the first time in front of WDSSII to practice data interrogation using a live supercell near Altus, OK. Kevin and Eric then viewed PAR data for about an hour, with this storm just near the edge of the domain. Unfortunately, stable air overspread central OK, and all the thunderstorms propagated away from the PAR and CASA domain. Participants then turned to archive cases to round out the evening.

Wednesday brought an opportunity for the groups to trade places on the PAR and CASA archives before moving smoothly into Probabilistic Warnings for the remainder of the day. Upslope flow pushed mid and upper 50s dewpoints onto the parched high plains of eastern New Mexico, while mid and upper level winds showed a gradual increase downstream from a trough over southern California. The situation proved favorable for severe storms. Coordinating with the SHAVE project to find relatively dense verification swaths, Brad and Mark issued probabilistic warnings for hail on multiple storms, and eventually one low-probability warning for tornadoes. Meanwhile, Kevin and Eric inherited a long-lived eastward moving supercell which paralleled Interstate 40 from near Albuquerque to Tucumcari. The team issued probabilities for hail and tornadoes for this and a second cell which followed in the same path. Both cells received traditional tornado warnings from the Albuquerque NWSFO. Kevin and Eric eventually added a probabilistic swath for severe hail when the lead supercell took on high-precipitation character with an extensive rear flank downdraft. It was very impressive to see how comfortable the teams became with issuing multiple threats for multiple storms within hours of first being introduced to the experiment.

Thursday presented the best opportunity yet this spring for participants to test probabilistic warnings during an outbreak of long-lived tornadic supercells. After our map discussion, which included a categorical High Risk for severe weather in Nebraska, we chose to put Kevin, Eric, and Mark straight to work on the Prob-Warn archive case; it is important to have as many forecasters as possible provide feedback on this one particular case so that meaningful statistics may be derived. We said goodbye to Brad, who left as planned so he could attend to other obligations. Thus, when we jumped on the live Prob-Warn operations at 2130 UTC, Kevin was paired with Mark, and Eric with myself.

Both teams inherited severe storms already in progress, and though it was not the original intent, storms aligned such that it was beneficial for Team 1 to work within the Goodland NWSFO CWA, and Team 2 within the Hastings CWA. At one point this resulted in a unique opportunity to coordinate the passing of a probabilistic swath across CWA borders. Another storyline developed as teams tested the workload by issuing probabilities for hail, tornado, and straight line winds for each of 3 different storms, resulting in 9 threat areas. Threats from one storm often overlapped those of another storm, and the teams took to giving their warnings meaningful names based on the location of the initial warning.

Operations were also enhanced by the Situation Display which showed live video streaming from storm chasers in both CWAs. The Hastings team worked a storm that appeared to be producing a significant tornado at Kearney, NE, while the Goodland team received occasional tornado reports from Sheridan to Rooks Counties. The Goodland storms found deeper moisture and began producing more significant tornadoes near Jewell and Beloit, KS, just after the IOP ended. We allowed enough time to hold a short debriefing late that evening while replaying the event through WDSSII on the Situation Display.

Friday allowed us to hold a more thorough round table discussion of the Thursday event, and also reflect on previous days’ events. Despite the holiday-shortened week, forecasters gained experience with PAR, CASA, and Probabilistic Warnings, and worked one of the most data-productive events of the season on Thursday. Before concluding for the week, we were treated to a brown bag lunch at which our participants presented the following:

Mark Melsness: “Precipitable Water Verification of a Ground-Based Sounder using RAOBs at Winnipeg.”

Eric Stevens: “Impact of Snow Cover on October Surface Temperatures in Fairbanks” (a.k.a. “Falling off the Cliff”)

Kevin Brown: “Influence of Radar Beam Ducting on Warning Decisions”

PAR Discussion:


o Multi-panel (even greater than 4) would be useful

o For looping, would like to select from a range of time resolutions, since a 1-min update is not as important 60 minutes in the past. Perhaps a hybrid loop that has 5-min resolution transitioning to 1-min upon nearing the present.

o Interlaced 0.5 deg data is good


o Would like improved azimuthal resolution (beam width) at long range

o Like opportunity to scan even faster in 45 deg sectors


o Tuesday’s Elk City supercell split was recognizable in PAR earlier than 88D

o For other features, KFDR was best simply because of location

o Very useful for intensity trends

o Numerous features that might prompt warnings from 88D perspective, but many are transient. Felt it was a luxury to watch them evolve and warn only on those that showed consistent upward trend in intensity. Will need much training on this concept to avoid dramatic increase in false alarm rate

o Could combine PAR with Prob-Warn thinking, and a good approach could be broad area of lower prob warning with pinpointed short-duration higher prob warning for these transient features.
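The hybrid-loop suggestion above (coarse temporal resolution in the distant past, 1-min frames near the present) could be sketched as follows; the function and parameter names are illustrative assumptions, not part of any existing display software:

```python
# Illustrative sketch: pick loop frames so the last `fine_window` minutes
# keep 1-min resolution while older frames thin to 5-min resolution.

def hybrid_loop_frames(ages_min, fine_window=10, fine_step=1, coarse_step=5):
    """Select frame ages (minutes before present) for a hybrid loop."""
    selected = []
    for age in sorted(ages_min, reverse=True):  # oldest first
        if age <= fine_window:
            if age % fine_step == 0:
                selected.append(age)
        elif age % coarse_step == 0:
            selected.append(age)
    return selected

# A 60-minute archive of 1-min frames thins to 5-min steps beyond 10 min ago.
frames = hybrid_loop_frames(range(0, 61))
```

A display could recompute this selection each time a new frame arrives, so the 1-min window always tracks the present.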

CASA Discussion:


o Please add contours of wind speed to 3DVAR product

o 3D isosurface of wind speed, looped in time, would be useful

o Four-Dimensional Storm Investigator could be used in lieu of RHI


o Adaptive scanning was too hard to follow. Forecasters preferred the 2.0° scan, which covers a full 360°. Greg Stumpf mentioned that WDSSII has a merger that can put all the data into a 3D grid, update only those portions (sectors) of the grid with live data, and then time-to-space displace the older data. The 3DVAR product can do the same.

o Dual-pol could be useful for non-precipitation returns, like smoke, volcanic ash, during “big bubble no trouble” conditions (i.e., “severe clear”)

o Some were overwhelmed at the resolution, especially spatial. Need 88D-like data working side by side to maintain awareness of the big picture.


o In Alaska, CASA would be more useful as a gap filler under 88D beam over population centers, rather than as an overlapping adaptive network. He gave the example of Delta Junction, AK, where there are “competing” mountain-valley circulations (Chinook v. Bora) which the CASA radar could help diagnose.

o Could be useful for Olympic venues (although for Vancouver, will be using mobile radars).
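The merger behavior described earlier (updating only live sectors of a shared grid and time-to-space displacing older data along the storm motion) might be sketched roughly as below. This is an illustrative toy with assumed names and units, not the actual WDSSII merger:

```python
# Hedged sketch of time-to-space displacement: shift each older radar
# sample along the storm motion by its age, so a composite built from
# sectors scanned at different times stays spatially coherent.

def displace(obs, motion, now):
    """Shift observations to where their echoes should be at time `now`.

    obs    -- list of (x_km, y_km, t_min, value) radar samples
    motion -- (u, v) storm motion in km/min
    now    -- current time in minutes
    """
    displaced = {}
    for x, y, t, value in obs:
        age = now - t
        key = (round(x + motion[0] * age), round(y + motion[1] * age))
        # When displaced cells collide, keep the freshest sample.
        if key not in displaced or t > displaced[key][0]:
            displaced[key] = (t, value)
    return {k: v for k, (t, v) in displaced.items()}

obs = [(10, 5, 0, 55.0),   # 4-minute-old sector sample
       (12, 5, 4, 60.0)]   # fresh sample at the same displaced location
grid = displace(obs, motion=(0.5, 0.0), now=4)
```

Note how the stale sample is advected 2 km east and then superseded by the fresh one, which is the coherence property the merger provides.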



o Need an alert system for warnings nearing expiration, or perhaps for storms bleeding outside the warning swath. Something akin to the AVNFPS could be ideal.


o Would like options for applying a more advanced approach to swath creation, such as different motion uncertainty and buffer to the left versus right

o Would be nice to be able to multi-select several warnings and group-adjust variables like the motion vector.


o Need to add information about intensity (e.g. Hail Size, Wind Magnitude)

o Workload is an issue. Varying degrees of comfort handling multiple threats on multiple storms. Could let algorithms assist, especially with hail which tends to be easiest to detect. Forecasters want ability to QC any warning the algorithm suggests before sending to public

o Like the ability to issue low-prob threats as this more continuously conveys trends in forecaster thinking compared to legacy warnings

o Gridded probabilities offer ability to derive output with many grades of sophistication for various users

· Related Discussion

o Greg Stumpf: What if a storm begins to turn right after issuing a storm-based polygon? Do you issue a new polygon? Does it overlap the old? Do you cancel the old polygon?

o Patrick: The gridded warning concept is preferred, as it allows you to nudge the motion vector when needed.

o Patrick related his experience taking over warnings on the 5/24/08 Oklahoma supercell. With two tornado warnings in effect for the same county, each labeled directionally (e.g., Eastern Noble County), he cancelled the western warning (essentially, the threat was advecting east out of the first warning and into the second). A television station only picked up on the cancellation headline, and not the continuation of the eastern warning mentioned in the text. Thus, he had to issue another SVS 1 minute later to reiterate this. This was a good example of a non-meteorological condition affecting warning judgment. He feels that the PW system would have dealt with this a lot better.


· Organizers made experience very easy.

· Enjoyable.

· Nice mix of lecture and hands-on experience.

Patrick Burke (EWP Weekly Coordinator, 27-30 May)


Week 4 Summary: 19-23 May 2008

This week, visiting participants included Steve Hodanish (WFO Pueblo, CO), Jonathon Howell (WFO Memphis, TN), Ryan Knutsvig (WFO Elko, NV), Steve Rogowski (WFO Sterling, VA), and Dave Patrick (Environment Canada, Winnipeg) as full-time participants. Other participants this week included Mike Magsig (WDTB), Paul Schlatter (WDTB), Don Rude (U. Virginia), Jerry Brotzge (OU) and an NSSL support team of Kevin Manross, Kiel Ortega, Kristin Kuhlman, Pam Heinselman, Dave Preignitz, Angelyn Kolodziej, and Greg Stumpf. Travis Smith (NSSL) was the Weekly Coordinator.

Monday, May 19:

There was very little threat of significant severe weather anywhere in the CONUS today, which made it an ideal training day for our visitors. The afternoon consisted of general orientation, map discussion, CASA & PAR orientation, and WDSSII training. In the evening, the forecasters participated in probabilistic warning guidance training on some real-time data SW of Brownsville, TX (issuing warning guidance for Mexico!)

Tuesday, May 20:

PAR and CASA playback cases in the afternoon, followed by a probabilistic warning IOP starting around 4pm CDT. The area of interest was the Carolinas and Georgia. A detailed summary of the post-event discussion for this case can be found in the blog entry “20 May 2008 – Summary”.

Wednesday, May 21:

Similar to yesterday – PAR and CASA playback cases in the afternoon, followed by an IOP around 6pm for NE Colorado. A detailed summary of the post-event discussion for this case can be found in the blog entry “21 May 2008 – Summary”.

Thursday, May 22:

HP Supercell event in Oklahoma, outside the PAR domain, but we worked it anyway to try to get the forecasters a real-time event.

Friday, May 23, Summary Discussion

CASA – no real-time operations this week.

  • Dave: really like the assimilated data display, saw it as advantageous to look at velocity data as vectors as opposed to the radial velocity display.
  • SteveH: 2D Wind display was a bonus.
  • Several forecasters like the idea of a 2D wind field presentation.
  • SteveH: would like to see data out to 60 km, or some way to see the big picture.
  • SteveR: not confident in the sectored scanning.
  • Dave: wants to make sure he can see low-level boundaries.
  • Need a visual cue for the cross-sections.
  • Dave: cross-section is overkill.
  • SteveH: can do the same thing better with rapid scan of sector
  • Dave: would like RHI capabilities.


  • SteveH: 100% wind/hail for the HP supercell was a no-brainer; tornado was not so clear. Noisy, range-folded. Reflectivity was excellent, however, and the FSI cross-sections were very useful. Thinks it will be a huge improvement once the bugs are worked out. Was most impressed by the pulse-storm playback case – can see core development aloft (rather than by luck with the 88D).
  • Jonathan – good tool to update warnings as well. Can see more variation in intensity, see microbursts, etc.
  • No one uses detection algorithms in their offices, except for the gridded MESH. SteveH: can better interpret the raw data than algorithms. Need to be able to trust the algorithms (like MESH).
  • Dave: likes algorithms, if they are trustworthy.
  • Everyone liked the pulse storm case – “Pulse storms are a pain…!” – SteveH.
  • Jonathan: can see between volume scans.
  • This could help with longevity of warnings.
  • “The fire hose is coming, but that’s why they teach the fireman how to hold the hose!” – Steve H
  • Jonathan – not going to be a big change, things will look the same as the 88D (same conceptual models).
  • 3D will be valuable in the future – need to be integrated into operational system – lots of things need to be learned about it.
  • Scan strategies? SteveH: does it need to be adaptive? It already scans really quickly.
  • There are trade-offs with high-resolution temporal versus spatial sampling.
  • SteveR: higher resolution at lower tilts would be nice.
  • They would like to show some simulated cases to their staff. Could do in FSI. Have other training priorities, but could be worked in.

Probabilistic warning guidance:

  • Ryan: new way of thinking. More sophisticated users will love it. SteveH: thinks it is straightforward for forecasters.
  • Jonathan – protecting life is the primary objective; need to be clear on call to action (yes/no answer for most users)
  • SteveR: we could do both methods –
  • SteveH: need to train forecasters on what the probabilities mean, but training the public is even more important.
  • SteveH doesn’t like the future trend prediction. Didn’t know what to do with it (forecasting the short-term intensity variations is very difficult)
  • Need organizational software

May 30 playback case:

  • Mark is very comfortable with a high level of automation, even for low-end warnings, so long as the forecaster has final approval. Canadian way of doing things.
  • Eric thought the workload for the prob warn playback case was about right.

General final thoughts:

  • SteveR: exceeded expectations
  • SteveH: tour of SPC would be nice
  • All: Good pace – they were here to work, so no problems.
  • SteveH: more training on probabilistic warning tools. Had a good experience.
  • Dave: was action-filled.
  • Jonathan: send out WDSSII to offices beforehand.
  • Jonathan: had a very good experience
  • They would come back next year.
  • SteveH: wants 30-minute training sessions.

Travis Smith (EWP Weekly Coordinator, 19-23 May)


Week 3 Summary: 12-16 May 2008


  • Dave Hotz, NWS MRX
  • Dan Miller, NWS DLH
  • Dan Porter, NWS ABQ
  • Ron Przybylinski, NWS LSX

Week three of the 2008 EWP looked to be off to a dull start, weather wise. Fortunately, this wasn’t the case and we gained some good feedback from our participants. You are referred to the Daily Summaries for details of each day’s events, but a quick synopsis follows:

Monday (12 May): Training day. Not really any weather across the whole CONUS, and with anticipated SVR in the CASA/PAR domains, a good day to hit training for all the projects.

Tuesday (13 May): Storms developed along a cold front draped along and to the E of the I-44 corridor. Had good PAR exercise (and even attempted PROBWARN with PAR data given Dan M. and Ron P.’s familiarity with WDSSII). Alas, only received a few pixels worth of radar data in the CASA domain as the cap held that far south.

Wednesday (14 May): Did some CASA/PAR playback cases earlier in the day and then ran a PROBWARN IOP for W and C TX with two teams. One team took the storms near and south of I-20, while the other team focused on storms developing in Mexico and moving into the Eagle Pass, TX area.

Thursday (15 May): Similar to Wed. – ran CASA playback and then an abbreviated PROBWARN IOP near Eagle Pass, TX again.

Weekly Debrief…


DH: High temporal res is a big plus; however, forecasters have to rearrange their conceptual models to adapt to continuity differences (“temporally noisy”).

RP: Storm structure evolution (updraft/downdraft) was much easier to see on the live case.

There was a tendency to jump the gun, since a signature could be there and gone in minutes, versus 5-min volume scans.

RP/DP: Evolution is so fast – how do we shift to deal with this in a warning sense?

RP: PAR would really help with rapidly evolving QLCS tornadoes.

How do we deal with tornadoes that are there and gone before there would have been an 88D signature?

Might there be more false alarms now that we see more of these transient features?

DP: In ABQ, they got a lot of 2-minute F0 tornadoes that occur between volume scans. Would PAR cause a spike in warning frequency? And will warnings always be too late?

DM: At far ranges, PAR still doesn’t solve the radar horizon and beam-spreading problems.

DM: Next year, WFOs will be used to super-res data, and PAR may not match up and be less desirable?


Had no real-time events, so some discussion on case playback, adaptive scanning, QC.

RP: Needs 360° surveillance on the three lowest elevation scans; currently only the 2.0° heartbeat scan is a full 360°.

RP: Some of the sectors were missing portions of the storms. And the sectors changed a lot – not a lot of consistency.

DP: Did a good job when storms were moving into the network, but once inside network, it got inconsistent.

Jerry B: What if there wasn’t 88D data to supplement? RP still thinks we need the lower three 360° scans.

DP: Would be nice to have 20-25 VCPs; let the software choose, but let the user override.

DH: Will we use it for a full network, or just to fill in gaps? If the former, need to look at all scans. What happens when you have 120 CASA radars? Need to look at a composite, not individual radars.

DP: Significant attenuation, and dealiasing problems on leading edge of storms.

DM: DQ has improved significantly from last year.

DP: Gives you greater confidence than seeing data on 88D.

DM: Doesn’t see this as a replacement for the 88D, but as a supplement/value-added gap filler below the 88D. The radar horizon in one corner of his CWA is 12-14 kft. E.g., he had a lake-effect situation with 28” of snow where all the echo was below the lowest elevation, and satellite view of the area was blocked by cirrus. Only had reports and webcams.

JB: Were RHIs helpful? Yes.

JB: Were 3DVAR winds useful? Yes.

DP: Some concern about DQ – were the wind fields correct? Didn’t know.

KM showed 5/7 data from last week, including 3DVAR. Forecasters reviewed this case this week.

RP: See great potential.

DP: Can’t use it for hail threat (doesn’t go high), but for wind features (TVS, boundaries, microbursts, etc).

JB: Were you overwhelmed with amount of data? No.

DP: Not the amount of data, but annoyed by how the sector scans keep flipping around. But liked the one-minute update.

DP: Would like to have a VAD profile.

DH: Precip estimates.

JB: These are dual-pol radars, but we didn’t show any of it.


Thursday 5/15 debriefing:

Eagle Pass real-time IOP summary. We started by looking at the on-demand hail tracks and SHAVE calls (it turns out we didn’t know they were making the calls in real time).

NWS polygons are a lot bigger than our threat areas.

DM: Default polygons are usually too narrow. The upstream (back) end of the default WarnGen polygon is a fixed length – not tied to threat-area size – and the back end is one time step in the future, so it must be dragged back.

Contours of our warning grids would be nice, but keep the grids so that they can be sampled. Option to do both.

Need better display management of our warnings.

DM: Doesn’t like how hotkeys are moving away from keypad.

DP: Interesting to see what was going to happen with the outflow boundary.

There was a lot of earlier talk about having low-probability large areas when individual threats were still less certain.

DM: How do we define that we are “doing better”? Is it warning size? Better service?

Travis: Note that back edge moves forward, but front edge does not due to current software limitations. We will reprocess the data and move the front edge.

DH: How much lead time is too much? Greg says that each user requires a different lead time. KM: Allows us to cover a lot of bases (different users) that we’re not covering. DH: Prefers that the forecaster determines when the warning gets to the public via the TV station, rather than the TV station taking the data and deciding when to “light up the county” based on probs and time of arrival.

TS: We will meet with non-met experts to get their opinions on this.

Good discussion about probwarn and how various users could benefit (or get confused) with the added information.

Probwarn Archive Case playback:

Noted differences in threat area location and size, motion vector, direction uncertainty, probability trends.

DM: What about editing the current threat grid instead of the contours? Greg asks: how do you set motion vectors for the grid points? TS: Could automate some of this with algorithms to help maintain threat areas.

DH: Would be nice to update all three threats on one storm with same motion. Perhaps add the storm motion vector to the contour editor, instead of sliders.

DM: When you pick up and drag threat, record the movement and automatically calculate the motion vector based on that! Togglable, so that forecaster can override this.
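DM’s drag-to-motion suggestion is simple enough to sketch. The Python snippet below is only an illustration of the idea – the function and argument names are invented, not WDSS-II or AWIPS code: record where the threat-area centroid was picked up and dropped, divide by the elapsed time, and let an explicit forecaster-supplied vector override the result (the toggle DM asked for).

```python
def motion_from_drag(old_centroid, new_centroid, dt_seconds, override=None):
    """Estimate a storm motion vector (u, v) in m/s from the displacement
    of a threat-area centroid dragged from old_centroid to new_centroid
    (x, y in meters) over dt_seconds.

    If the forecaster supplies an explicit `override` vector, it wins –
    the togglable behavior DM suggested."""
    if override is not None:
        return override
    if dt_seconds <= 0:
        raise ValueError("time between positions must be positive")
    u = (new_centroid[0] - old_centroid[0]) / dt_seconds
    v = (new_centroid[1] - old_centroid[1]) / dt_seconds
    return (u, v)

# A drag of 600 m east and 300 m north over one minute implies 10 m/s
# eastward and 5 m/s northward motion.
```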

Brad: Asks forecasters about how much time they are spending on storm interrogation doing it this way. DH: Less time.

DM: This week – less time, but if integrated into AWIPS2, and with more experience, would get better.

DM and RP were doing the D2D on one workstation and WDSSII on the other – a two-person team.

RP: favors two-person teams.

DP: No, but should in ABQ. But in GFK, they did have one person keeping track of warnings, and one person on the meteorology (radar analysis and NSE). But sometimes also had a dedicated meso person.

Tornado probabilities differed widely between forecasters.

DM: His tornado threat area took into account RFD and new cycle locations.

Experiment logistics:

RP: Consider a sliding shift schedule. Greg: tough to do with schedules.

DH: Would be nice to intermesh with EFP’s briefings. Could utilize their discussions as an update for rest of the day.

What about running three experiments? They like the cross-pollination.

Was one week enough? Two weeks might be better (would have to deal with EO and WFO management).

Number of participants: seems to be a good number. Any fewer, and there might not be enough diversity and chance to interact.

DH: Thinks an even number is best – teams of two.

DM: Get stuff into AWIPS/2 as much as possible. Will save a lot of spin up time on training.

AWIPS2 may hinder what we can do.

DH: Appreciate the willingness to listen to different viewpoints.

Appreciate the opportunity.

RP: Mind has changed; sees some potential. Maybe not in its current form, but something else down the line.

Kevin Manross (EWP Weekly Coordinator, 12-16 May)


Week 2 Summary: 5-9 May 2008

Overview: Monday – Thursday

Week 2 of the EWP was relatively active, with probabilistic warning activities occurring on Monday, Tuesday and Thursday, while both CASA and PAR evaluations occurred on Wednesday. This week featured visiting forecasters Bill Rasch (NWS Billings) and Craig Shoemaker (NWS Tucson). A visiting meteorologist from Environment Canada, Bryan Tugwood, also spent the entire week with the EWP team. Other meteorologists from Environment Canada, including Ria Alsen and David Schmidt, watched EWP experiments. Cynthia Whittier, from WDTB, also participated in the EWP. Greg Stumpf and Kristen Kuhlman served as ProbWarn scientists; Jim LaDue served as EWP coordinator. CASA scientists included Don Rude and Jerry Brotzge.

Monday evening featured a supercell with large hail near GCK that merged with a line to the east and became an outflow-dominated multicell event. The hail threat decreased as the large-scale wind threat increased. The team, consisting of Bill, Craig, Cynthia and Bryan, focused on hail threats given their unfamiliarity with the probwarn software. The challenge of the day was getting used to new software and then dealing with threat areas as isolated multi- and supercells coalesced into a larger complex. Due to the visitors’ unfamiliarity with the software, Greg and Jim decided that all visitors would work together as one team while one of the probwarn scientists operated WDSS-II. An instance of D2D proved very helpful for storm interpretation.

Tuesday evening featured a probwarn exercise in west Texas. Monday’s big team was split in two with Bill and Kristen taking Midland’s CWA while Craig and Bryan tackled the Lubbock CWA.

The southern team concentrated on hail and tornado threat areas for mainly isolated convective modes including one supercell southeast of MAF and a splitting storm in southeast NM. The splitting storm provided some challenge to the team as to how to handle the threat areas as the split occurred. They decided to keep the original threat area with the right mover and issue a new one for the left moving component. But the sequence in which they decided to do the edits resulted in a period when the left mover was not covered by a threat area.

The northern team focused on hail and tornado threat areas for a relatively complicated cluster of small multicells, some of which merged together. Bryan decided to issue much larger threat areas than the southern team did, grouping three areas of relatively higher storm density. His reasoning was that the cells appeared to pulse in ways that could not be anticipated.

Wednesday evening’s severe storm threat included central Oklahoma, so PAR and CASA activities were scheduled. An upper-level low with an accompanying surface low tracked across Oklahoma City. The CASA network featured numerous outflow-dominated small multicell line segments with strong downbursts. Bryan and Craig worked with the CASA scientists while Kristen worked with Bill on the PAR. The CASA team enjoyed observing two radars, Lawton and Rush Springs, take direct hits from 60 kt and greater downbursts. To the north of these outflow-dominated multicells, a boundary with strong vertical vorticity was tracked by the PAR sector. Kristen routinely updated the PAR sector in order to keep up with these storms as they rapidly crossed azimuths toward OKC. At least two TVS circulations were monitored by the PAR as they produced tornadoes near Lake Overholser and then up in Edmond. Meanwhile, the north edge of the outflow-dominant multicells in the CASA network also generated significant vertical vorticity, from which a TVS and a tornado subsequently formed just south of the National Weather Center. The tornado was visible from the center, but only the TDWR could adequately track the TVS.

Thursday evening’s threat did not include the PAR and CASA domains and so the probabilistic warning activity was the sole experiment. The question was whether to set up the experiment in VA or NC where line and isolated supercell convection was forecast to occur or whether to focus on the supercell threat in western Kansas? This time, the SHAVE experiment was active and the probabilistic warning team wanted to coincide with them in order to get enhanced report density. Since the SHAVE team already picked a supercell west of Garden City, KS by the time the probabilistic warning activity ensued, we chose the same supercell. Now instead of two teams tracking multiple threats in two geographic areas, the two teams picked the same area and split up operations by threat type. Cynthia and Craig covered the hail threat while Bill and Bryan grappled with the tornado threat.

There was a leading HP supercell with an almost certain large-hail probability and an uncertain tornado threat. This supercell spawned numerous short-lived mesocyclones that became embedded in rain during their maximum intensities. This behavior challenged the tornado warning team in designing their threat areas. Many chasers were providing live streaming video coverage through the SevereStudios website, but despite this plethora of visual ground truth, there was always the possibility of a rain-obscured tornado. Therefore, the tornado warning team issued probabilities that just exceeded their legacy warning threshold of 50%. A later squall line started to exhibit strong vertical vorticity, so the team decided a large, low-probability threat area was the best solution unless a specific vortex intensified.

The following is a highlight of observations regarding each of the three experiments over the week:

PAR discussion:

  • The PAR allowed circulation features to be more easily tracked through the motion of reflectivity patterns. The static velocity images did not appear as clean as the 88D’s, but the quick updates allowed for easier tracking of vortices.
  • Bill appreciated the opportunity to react more quickly as a function of more rapidly updating data, especially with the vortices in OKC.
  • Bill was thrown off by the interlaced 0.5 deg scan. A recent fix puts this interlaced scan into a different product.
  • Both Bill and Craig appreciated the rapid scanning for midlevel downburst precursor signatures. They thought there could be an additional 3-4 minutes of lead time.
  • They thought an additional interlaced, user-selectable mid-level scan would provide more benefit for downburst precursor signatures.
  • Bill thought he didn’t have enough time to do a full volumetric analysis. Would there need to be some automated products to assist?

CASA discussion:

  • Bill believes the adaptive scanning appeared very intelligent during archive and playback cases.
  • Bill thinks the CASA network is very good for fire weather issues when quickly changing winds were occurring.
  • Craig found potential value for the Tucson CWA, such as dust storms, non-mesocyclonic tornadoes, even dust devils. He thinks a CASA network would be a valuable complement to 88Ds in the West to monitor boundary-layer action.
  • Bill wanted to see high elevation tops in CASA and he wondered if the 88D data could be overlaid.
  • Jerry wondered if a 3-D grid of CASA and 88D data built with w2merger would satisfy Bill’s wish. However, such a merger would be a challenge given the vastly different resolutions of the two datasets.
  • The RHIs were a hit when they were in the right place. Many times they weren’t in the right place according to Bill and Craig.
  • CASA volumes were easier to keep up with, according to Bill. However, he thought he lost track of the big picture owing to the rapid-paced data.
  • The 3DVAR 2-D wind analysis was useful to the forecasters, though it was a bit smoothed. They want this display in WDSS-II, and plans are for that to occur.
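To picture the resolution problem Jerry raised with merging CASA and 88D data, here is a toy sketch of one plausible approach: resample both radars onto a common analysis grid and prefer the higher-resolution CASA data where it exists. This is purely illustrative Python with invented names; it is not how w2merger actually works.

```python
def merge_radar_grids(casa, nexrad, nx, ny):
    """Toy merger of two radar analyses on a shared nx-by-ny grid.

    `casa` and `nexrad` are dicts mapping (i, j) grid cells to
    reflectivity (dBZ), assumed already resampled to the common grid.
    Prefer CASA (finer resolution at low levels) where it has coverage,
    fall back to the 88D, and leave NaN where neither radar sees."""
    merged = [[float("nan")] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            if (i, j) in casa:
                merged[j][i] = casa[(i, j)]
            elif (i, j) in nexrad:
                merged[j][i] = nexrad[(i, j)]
    return merged
```

A real merger would weight overlapping observations by beam geometry and data quality rather than taking a hard preference, which is part of why the resolution mismatch is genuinely hard.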

Probwarn discussion:

  • Bill thought he had a better handle on the location of threat areas than on assigning probabilities, though he felt he had a reasonably good handle on the probabilities as well.
  • Assigning hail probabilities was easier than assigning tornado probabilities.
  • The team appreciated high resolution reports from CASA, especially live chaser video. However they also realized that unreported events could occur inside of this dense observation network.
  • They felt that if the same storm as Thursday’s supercell approached a city, their legacy warning probability threshold would go down. On the other hand, they felt the denser reporting network would raise confidence, partially negating the first consideration.
  • The size of threat areas increased as workload increased, in order to group multiple cells in close proximity. It also increased with uncertainty about where the threat would materialize (as reported in the GFK scenario).
  • The expiration timing of swaths is uncertain. They tended to stick with traditional valid times.

General issues with both PAR/CASA:

Basic Logistics

  • Craig appreciated the day 1 arrangement of a large team then easing to smaller teams the next day as his experience with software and probwarn decision making improved.
  • Bill appreciated the WDSS-II training on day 1 though the ambient noise level was distracting. Do we shut the garage door or move training to the Dev Lab? Bill also liked the one group training.
  • Bryan and Craig liked the 1-9 pm shift. They were fatigued at the end of the shift, especially on Wednesday.
  • Bryan thought the surveys were redundant in that his conclusions were the same each time.

Jim LaDue (EWP Weekly Coordinator, 5-9 May)


Week 1 Summary: 28 April – 2 May 2008


David Blanchard (NWS WFO Flagstaff, AZ)

Mike Cammarata (NWS WFO Columbia, SC)

Andy Edman (NWS Western Region HQ)

Ken Cook (NWS WFO Wichita, KS)


The first regular week of EWP concluded today with our end of week de-briefing. It was mostly a quiet week across the CONUS for severe weather. We had two days (Tue and Wed) with no severe weather IOPs during our 1-9pm shift, so we used that time having the forecasters run through a number of archive case playbacks for all three experiments.

On Monday, we unfortunately were met with the prospects of an early significant severe weather event over eastern VA and NC before our forecasters could become trained on the various HWT systems; thus we missed out on working an IOP for that event. We also learned that our Monday orientation schedule needed some tweaking and “compression” so that we could have the forecasters sufficiently trained on WDSSII and introduced to whichever experiment would be running during the day’s IOP before the IOP began. This means a Monday DY1 map discussion, a shorter orientation seminar (there was a lot of repeat information in the various experiment introduction seminars), and a group WDSSII training session in the HWT with all visitors on a workstation simultaneously, all to be concluded by 3:15 pm (the start of the EFP Monday briefing). Then, at 3:15 pm, there would be three scenarios:

1. Gridded Warning IOP to begin between 5-6 pm.

2. Central OK IOP to begin between 5-6 pm (PAR/CASA).

3. Early IOP to begin at 3:15 pm.

In the event of either scenario 1 or 2, we would introduce the experiment du jour and provide some training before the IOP. The introduction seminars are held outside the HWT ops area so as not to interfere with the EFP 3:15 pm briefing. Training would return to the HWT at 4 pm and continue until the IOP. For scenario 2, we would split the intro/training into two groups of participants, and those same two groups would work the IOP event respectively. For scenario 1, all visitors would participate in the gridded warning introduction, training, and IOP. For scenario 3 – baptism by fire!

Thursday was our only real-time IOP day. The storms formed in central OK but quickly moved out of the CASA network, so we put both of our visiting forecasters on the PAR station. Mike Magsig, our guest forecaster, decided to work gridded warnings solo. We concluded that there may be situations like this in the future, and that we could consider a PAR/gridded warning scenario. However, if a central OK event was forecast on Monday, we wouldn’t be able to train all the visitors on all three systems, so a gridded warning IOP might have to be run mainly by the gridded warning scientists with the forecasters observing.

End-of-week notes concerning the PAR experiment:

There was some difficulty in using the display to view the PAR data. Navigation of virtual volumes was not supported; the interlaced 0.5° tilt breaks into two virtual volumes. The display couldn’t keep up with the rapid refresh rate of the data. (Note that both of these issues had been solved by Week 3.) There was a recommendation to add a display setting that would update the entire volume scan at once, since 30- and 60-second refresh rates are probably as fast a rate as the forecaster can consume in real time.

There were some PAR data quality issues that also affected real-time operations. The reflectivity data above 1.8° was very noisy and there was bin smearing at higher tilts. Velocity data was also pretty noisy, although the same issues were also apparent on KTLX.

The display couldn’t animate the data fast enough or with a sufficient animation period for the forecasters to extract “88D-comparable” trends in the data. The PAR data refresh rate was too fast, and the display could never loop as nicely as the animated gifs presented during the training sessions.

There was also discussion that the rapid update of data presents a lot of information overload, and that there needs to be some serious discussion on how to manage all that information. But this is just speaking from the PAR data alone. There was a lot more information coming into the HWT via the Situational Awareness Display (live television broadcasts), amateur radio, and other data sources (KTLX, TDWR, CASA radars). This extra information started to become very distracting to the forecasters.

We then debated the use of the SAD during PAR and CASA live ops. CASA’s objective was to see how the CASA radar data could complement other data sources, whereas the PAR scientists want to isolate the use of PAR data for warning decision making. This presents an issue in the HWT, since we operate both experiments simultaneously and the SAD was designed to provide additional information to all experiments. Some suggestions were to lower the volume of the television audio, or to have the weekly coordinator listen to it on a Bluetooth headset. There was also a suggestion to treat the PAR archive cases “in isolation” (no other data sources), and PAR live cases as complementary to the entire suite of data sources. Finally, we noted a lot of interest in the PAR and SAD displays, to the point where too many folks were crowding that area. We suggest that the coordinator keep that area mostly clear of people, keeping the crowds and noise level to a minimum.

End-of-week notes concerning the CASA experiment:

We had no live CASA operations during the first week. However, the CASA scientists collected live data on an overnight squall line without the availability of the forecasters. The forecasters only evaluated archive data during the first week.

Data from the overnight case were shown. It was hard to see gust fronts and boundaries in the reflectivity, but they showed up better in the dual-pol data (ZDR).

Some feedback from the archive case playback: The RHIs may not add much value over dynamic vertical cross-sections and CAPPIs. The forecasters mostly concentrated on the one-minute heartbeat 2.0° elevation scans, which are always a full 360° scan, since the sector scans didn’t always get a full volume on the storms. Also, the complete storm volume is not observed with CASA data, so it must be complemented with nearby 88D data.

End-of-week notes concerning the Gridded Probabilistic Threat Area experiment:

Due to the bad weather timing, there was not a good opportunity to have the visiting forecasters run a gridded warning live IOP. Their experience was gained primarily through training and the archive case playback. Nonetheless, they did provide some useful feedback, some of which was included in earlier blog entries.

One of the biggest issues had to do with the learning curve on the WDSSII display, and the knobology differences compared to AWIPS/D2D. It really helped to have a knowledgeable NSSL scientist sitting with the warning forecasters to help with WDSSII. Most commented that these technology issues could go away if the software was fully integrated into D2D.

One suggestion was to xhost a D2D to the PW workstations, so that the forecasters could use it for their radar analysis if they didn’t feel comfortable with WDSSII. In this setup, WDSSII would only be used to issue and monitor the warnings.

Other software suggestions: Add the FSI hotkeys to wg; CurrentThreatAreas should be a contour which is easier to see over the radar data; add the warning vector with tick marks as an overlay like in WarnGen.

In terms of operations, there was some discussion about sectorizing operations. It could be done by different storm areas, or by different threat types. Both of these concepts of operations will be tested in week 2.

There were concerns about starting an IOP without much of a “situational awareness warm-up”. The pre-IOP activities are usually about archive case playback and training, and we aren’t really watching the weather situation too closely.

I’ll include some notes on our discussions about adding probabilities from the live blogs: How do we calibrate the probabilities? Perhaps we can integrate the verification into the NGWT from the get-go – lesson learned from WRH experience with GFE (see their white paper). Other items for thought – how would the GRPA metrics be modified for probabilistic warnings? How will we handle calls to action and other meta-data in the warnings? When should the general public be told “to duck”? Finally, how we can objectively calibrate forecasters to the verification and to each other, so that there is a consistent answer for each warning?
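As one concrete reading of the calibration question above, a reliability table is the usual starting point: bin the issued probabilities and compare each bin’s mean forecast probability against the observed event frequency. The sketch below is a hypothetical Python illustration – the data shapes are assumptions, not an NGWT interface.

```python
def reliability_table(forecast_probs, outcomes, n_bins=10):
    """Group (probability, verified?) pairs into n_bins equal-width bins
    and return, for each non-empty bin, a tuple of
    (mean forecast probability, observed frequency, sample count).

    Well-calibrated forecasters produce rows that lie near the diagonal:
    mean forecast probability ~ observed frequency."""
    bins = [[] for _ in range(n_bins)]
    for p, hit in zip(forecast_probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # p == 1.0 goes in the top bin
        bins[idx].append((p, hit))
    table = []
    for pairs in bins:
        if pairs:
            mean_p = sum(p for p, _ in pairs) / len(pairs)
            obs = sum(h for _, h in pairs) / len(pairs)
            table.append((mean_p, obs, len(pairs)))
    return table
```

Comparing such tables across forecasters is one objective way to check whether they are calibrated to the verification and to each other.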

General feedback:

The forecasters felt that interviews would be better than the written surveys used for PAR and CASA. There was some concern that the post-event written surveys are limited in that the interviewer can’t ask follow-up questions. The participants are too tired after the IOP and don’t always feel alert enough to write after the event. Also, some forecasters might not be as good with written communication. It was noted that CASA voice-records the conversations. The gridded warning experiment uses the live blog to record discussion notes. Other suggestions included stopping archive case playback at certain times for interview questions, similar in concept to WDTB DLOC classes. Therefore, it was also suggested that the cognizant scientists experience how WDTB runs their DLOC training sessions to get a taste of how they capture feedback.

Also, the archive case playback/training sessions will take at least 2 hours, and more on the first day when the introduction seminar is also given.

A few additional suggestions were provided to improve the spring experiment for future participants. They included providing menus for local restaurants when we do “food runs”, and adding a “snack honor bar” or asking the SPC and WFO if we can share theirs.

Greg Stumpf (EWP Weekly Coordinator, 28 April – 2 May)


Shakedown Week Summary: 21 – 25 April 2008

Our first week of “operations” has completed, our operational shakedown week. We conducted our first end-of-week debriefing with a large group of NWC participants. Some of the highlights:

We discussed how to hold these end-of-week debriefings. They will include:

  1. A daily debriefing of Thursday operations.
  2. Scientific and technological discussion about the various projects.
  3. Logistical concerns of the visiting participants.
  4. Overall discussion on how we might improve the experience for all participants.

We also had a discussion on how we will introduce the visitors to the experiment at the orientation seminar. One particular item of note was to point out that we are running a research experiment, and that we should all expect bugs, kinks, and wrinkles. The forecaster/evaluators should look beyond those and focus on the bigger picture, to the future of NWS warning operations and technologies.

We finished the meeting with a lengthy discussion on the gridded probabilistic warning experiment.

One suggestion was to note at the orientation seminar that the forecasters should divorce themselves from the non-meteorological factors that affect their warning decision making, for example:

  1. Not getting verification due to low population.
  2. Letting certain users or subsets of users dictate your meteorological decisions.
  3. Turn off the county and city overlays…

…and focus on the science and meteorology!

Some discussion on what metrics we might measure included:

  1. Lead time for each event at different probability thresholds versus deterministic polygon. Can use LSRs, or hail tracks/rotation tracks for time of arrival/departure.
  2. Kristin talked to Harold Brooks, who suggested the forecasters, when issuing their warnings, treat our initial probability plateau as the probability of an event within the initial threat area, and not within x distance from the grid point. We can do that kind of analysis after the fact with the collected data.
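Metric 1 could be computed along these lines: for each verified event, find the first time the warning probability at the event location reached a given threshold, and difference that from the event time. This Python sketch is illustrative only; the data shapes are assumptions, not part of the experiment software.

```python
def lead_time_minutes(prob_history, event_time, threshold):
    """Lead time for one verified event at one probability threshold.

    prob_history: list of (time_minutes, probability) samples for the
    grid point (or threat area) where the event verified, e.g. from the
    collected warning grids. Returns minutes of lead time, or None if
    the threshold was never reached before the event."""
    for t, p in sorted(prob_history):
        if t > event_time:
            break  # samples after the event can't provide lead time
        if p >= threshold:
            return event_time - t
    return None
```

Evaluating this at several thresholds (say 30%, 50%, 70%) and comparing against the issuance time of the deterministic polygon gives exactly the comparison metric 1 proposes, with LSRs or hail/rotation tracks supplying the event times.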

Finally, we discussed the kinds of feedback we might seek from the forecasters. I’ve summarized the discussion:

1. Evaluate the concept of continuously advecting threat areas

  • How do we define the initial threat area?
  • Equitable lead time
  • Canceling out back of threat with time
  • Maintaining the threats during lifetime of storm

2. Provide feedback on the science of adding uncertainty information to warnings.

  • What baseline probabilities relate to today’s storm-based warnings?
  • How do we calibrate these probabilities over time?
  • Using algorithms to offer probabilistic guidance as a “first guess”?
  • How does enhanced verification (SHAVE reports) affect your WDM?

3. Assess the scientific and technological concepts before they are implemented into the NWS Next-Generation Warning Tool (NGWT).

Kiel added a “check box” to the polygon GUI that a forecaster could check when they think the current threat area they are warning on has reached their “internal criteria” for issuing a present-day storm-based warning. There was mixed reaction to that idea, but most folks generally felt it was OK to leave it there.

Here are some images from the week. Most of the folks working this week were our future weekly coordinators and cognizant scientists, getting training on the various systems and acting as “forecaster/evaluators”. Their comments and suggestions have been very helpful.

Greg Stumpf (EWP Weekly Coordinator, 21-25 April)
