Week 1 Summary: 28 April – 2 May 2008


David Blanchard (NWS WFO Flagstaff, AZ)

Mike Cammarata (NWS WFO Columbia, SC)

Andy Edman (NWS Western Region HQ)

Ken Cook (NWS WFO Wichita, KS)


The first regular week of EWP concluded today with our end-of-week debriefing. It was a mostly quiet week across the CONUS for severe weather. We had two days (Tue and Wed) with no severe weather IOPs during our 1-9 pm shift, so we used that time to have the forecasters run through a number of archive case playbacks for all three experiments.

On Monday, we were unfortunately met with the prospect of an early significant severe weather event over eastern VA and NC before our forecasters could be trained on the various HWT systems, so we missed out on working an IOP for that event. We also learned that our Monday orientation schedule needed some tweaking and “compression” so that, before the IOP began, the forecasters could be sufficiently trained on WDSSII and introduced to whichever experiment would be running during the day’s IOP. This means a Monday DY1 map discussion, a shorter orientation seminar (there was a lot of information repeated in the various experiment introduction seminars), and a group WDSSII training session in the HWT with all visitors on a workstation simultaneously, all to be concluded by 3:15 pm (the start of the EFP Monday briefing). Then, at 3:15 pm, there would be three possible scenarios:

1. Gridded Warning IOP to begin between 5-6 pm.

2. Central OK IOP to begin between 5-6 pm (PAR/CASA).

3. Early IOP to begin at 3:15 pm.

In the event of scenario 1 or 2, we would introduce the experiment du jour and provide some training before the IOP. The introduction seminars are held outside the HWT ops area so as not to interfere with the EFP 3:15 pm briefing. Training would return to the HWT at 4 pm and continue until the IOP. For scenario 2, we would split the intro/training into two groups of participants, and those same two groups would work the IOP event respectively. For scenario 1, all visitors would participate in the gridded warning introduction, training, and IOP. For scenario 3 – baptism by fire!

Thursday was our only real-time IOP day. The storms formed in central OK but quickly moved out of the CASA network, so we put both of our visiting forecasters on the PAR station. Mike Magsig, our guest forecaster, decided to work gridded warnings solo. We concluded that there may be situations like this in the future, and that we could consider a combined PAR/gridded warning scenario. However, if a central OK event were forecast on Monday, we wouldn’t be able to train all the visitors on all three systems, so a gridded warning IOP might have to be run mainly by the gridded warning scientists, with the forecasters observing.

End-of-week notes concerning the PAR experiment:

There was some difficulty using the display to view the PAR data. Navigation of virtual volumes was not supported; the interlaced 0.5° tilt breaks the data into two virtual volumes. The display also couldn’t keep up with the rapid refresh rate of the data. (Note that both of these issues were solved by Week 3.) There was a recommendation to add a display setting that would update the entire volume scan at once, since 30- and 60-second refresh rates are probably as fast a rate as a forecaster can consume in real time.

There were also some PAR data quality issues that affected real-time operations. The reflectivity data above 1.8° was very noisy, and there was bin smearing at the higher tilts. The velocity data was also quite noisy, although the same issues were apparent on KTLX.

The display couldn’t animate the data fast enough, or with a long enough animation period, for the forecasters to extract “88D-comparable” trends in the data. The PAR data refresh rate was too fast, and the display could never loop as smoothly as the animated GIFs presented during the training sessions.

There was also discussion that the rapid update of data can create information overload, and that there needs to be some serious discussion on how to manage all of that information. And that is from the PAR data alone: there was much more information coming into the HWT via the Situational Awareness Display (live television broadcasts), amateur radio, and other data sources (KTLX, TDWR, CASA radars). This extra information began to distract the forecasters.

We then debated the use of the SAD during PAR and CASA live ops. CASA’s objective was to see how the CASA radar data could complement other data sources, whereas the PAR scientists wanted to isolate the use of PAR data for warning decision making. This presents an issue in the HWT, since we operate both experiments simultaneously and the SAD was designed to provide additional information to all experiments. Some suggestions were to lower the volume of the television audio, or to have the weekly coordinator listen to it on a Bluetooth headset. There was also a suggestion to treat the PAR archive cases “in isolation” (no other data sources) and the PAR live cases as complementary to the entire suite of data sources. Finally, we noted a lot of interest in the PAR and SAD displays, to the point where too many folks were crowding that area. We suggest that the coordinator keep that area mostly clear of people, keeping the crowds and noise level to a minimum.

End-of-week notes concerning the CASA experiment:

We had no live CASA operations during the first week. However, the CASA scientists collected live data on an overnight squall line without the forecasters available. The forecasters only evaluated archive data during the first week.

Data from the overnight case were shown. Gust fronts and boundaries were hard to see in the reflectivity data, but they showed up better in the dual-pol data (ZDR).

Some feedback from the archive case playback: The RHIs may not add much value over dynamic vertical cross sections and CAPPIs. The forecasters mostly concentrated on the one-minute “heartbeat” 2.0° elevation scans, which are always a full 360° scan, since the sector scans didn’t always capture a full volume of the storms. Also, the complete storm volume is not observed with CASA data, so it must be complemented with nearby 88D data.

End-of-week notes concerning the Gridded Probabilistic Threat Area experiment:

Due to the timing of the bad weather, there was not a good opportunity to have the visiting forecasters run a live gridded warning IOP. Their experience was gained primarily through training and the archive case playback. Nonetheless, they did provide some useful feedback, some of which was included in earlier blog entries.

One of the biggest issues was the learning curve on the WDSSII display and the knobology differences compared to AWIPS/D2D. It really helped to have a knowledgeable NSSL scientist sitting with the warning forecasters to help with WDSSII. Most commented that these technology issues would go away if the software were fully integrated into D2D.

One suggestion was to xhost a D2D to the PW workstations, so that the forecasters could use it for their radar analysis if they didn’t feel comfortable with WDSSII. In this setup, WDSSII would only be used to issue and monitor the warnings.

Other software suggestions: add the FSI hotkeys to wg; draw the CurrentThreatAreas as a contour, which is easier to see over the radar data; add the warning vector with tick marks as an overlay, as in WarnGen.

In terms of operations, there was some discussion about sectorizing operations, either by different storm areas or by different threat types. Both of these concepts of operations will be tested in Week 2.

There were concerns about starting an IOP without much of a “situational awareness warm-up”. The pre-IOP activities usually involve archive case playback and training, so we aren’t watching the weather situation very closely beforehand.

I’ll include some notes from the live blogs on our discussions about adding probabilities: How do we calibrate the probabilities? Perhaps we can integrate verification into the NGWT from the get-go – a lesson learned from the WRH experience with GFE (see their white paper). Other items for thought: how would the GRPA metrics be modified for probabilistic warnings? How will we handle calls to action and other metadata in the warnings? When should the general public be told “to duck”? Finally, how can we objectively calibrate forecasters to the verification and to each other, so that there is a consistent answer for each warning?
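To make the calibration question above a bit more concrete, here is a minimal sketch of one standard way to check probabilistic forecasts against verification: a Brier score plus reliability bins (mean issued probability vs. observed event frequency per bin). This is purely illustrative – the probabilities and outcomes below are hypothetical, and this is not the EWP/NGWT verification code.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def reliability_bins(forecasts, outcomes, n_bins=10):
    """Per probability bin: (mean forecast prob, observed frequency, count)."""
    bins = [[] for _ in range(n_bins)]
    for f, o in zip(forecasts, outcomes):
        i = min(int(f * n_bins), n_bins - 1)  # clamp f == 1.0 into last bin
        bins[i].append((f, o))
    result = []
    for b in bins:
        if b:
            fs = [f for f, _ in b]
            obs = [o for _, o in b]
            result.append((sum(fs) / len(fs), sum(obs) / len(obs), len(b)))
    return result

# Hypothetical warning probabilities vs. verification (1 = event occurred)
probs = [0.9, 0.8, 0.7, 0.2, 0.1, 0.6, 0.3, 0.9]
obs = [1, 1, 0, 0, 0, 1, 0, 1]
print(brier_score(probs, obs))
for mean_p, freq, n in reliability_bins(probs, obs):
    print(mean_p, freq, n)
```

A well-calibrated forecaster would show per-bin observed frequencies close to the mean issued probabilities; comparing these curves between forecasters is one way to check for the “consistent answer” mentioned above.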

General feedback:

The forecasters felt that interviews would be better than the written surveys used for PAR and CASA. There was some concern that post-event written surveys are limited in that the interviewer can’t ask follow-up questions. The participants are tired after the IOP and don’t always feel alert enough to write after the event; also, some forecasters might not be as strong with written communication. It was noted that CASA voice-records the conversations, and the gridded warning experiment uses the live blog to record discussion notes. Other suggestions included stopping the archive case playback at certain times to ask interview questions, similar in concept to the WDTB DLOC classes. It was therefore also suggested that the cognizant scientists observe how WDTB runs their DLOC training sessions to get a taste of how they capture feedback.

Also, the archive case playback/training sessions will take at least two hours, and more on the first day, when the introduction seminar is also given.

A few additional suggestions were provided to improve the spring experiment for future participants, including providing menus from local restaurants for our “food runs” and adding a “snack honor bar” (or asking the SPC and the WFO if we can share theirs).

Greg Stumpf (EWP Weekly Coordinator, 28 April – 2 May)
