Forecaster Thoughts – George Phillips (2008 Week 6)

While PAR, CASA and probabilistic warnings are quite a ways down the road, I appreciated the fact that they are obtaining considerable input this far in advance.  This is, of course, the way every significant change in technology/operations should be tested, with input received from operational people.

PAR – We only had one day where real storms impacted the PAR coverage area.  On the other days we played back archived cases.  While working the real weather day, the PAR didn’t seem to help a great deal while I was the one investigating the storms early in the event.  Very strong wind fields on that day (June 5th) led to multiple dealiasing failures, making the early part of the real-time case especially difficult.  As the event progressed and storms moved closer to the radar, rotation could sometimes be seen earlier on the PAR than on the 88D.

On the playback cases, the high temporal resolution would have helped greatly with the issuance of warnings for pulse storms, and would have provided more lead time in a tornado case.  If the high-frequency updates from the PAR were coupled with a display like GR2AE, the ability to see updraft/core development and downdraft/core descent would greatly help in visualizing what was going on with storms, and could easily help in understanding when warnings are or aren’t warranted based on their evolution.

Another advantage of the PAR was obtaining time continuity for questionable-quality data.  Let’s say on the 88D you see an intriguing velocity signature in an important part of the storm, but it doesn’t quite look right.  You may have to wait for another volume scan (4-5 minutes) before you can tell whether the signature is real or a dealiasing failure and make a decision based on it.  With the PAR, you have time continuity in very short order and can usually evaluate data quality much more quickly.

On the challenging side was the fact that we are not used to such high-frequency updates.  Transient features that may or may not mean anything from a warning perspective are seen much more frequently.  It will take a while to mentally calibrate the warning decision-making (WDM) process to such high temporal resolution updates.  Concern was expressed about possible data overload, as volume scans could come in at 30-second or 1-minute intervals.  While this is a valid concern, good visualization software would certainly help with this situation.

CASA – These radars are southwest of Norman and are only about 30 km from each other.  Once again, only one day had real weather that impacted the radars, and that was late in the shift on the last day, so real-time evaluation was not extremely useful during that week.

We played back a few cases using the CASA radars, and they showed some of their strengths.  In particular, with wind storms, the actual winds are often at some large angle to the 88D radar beam.  Or the 88D is showing strong winds with a storm, but the 0.5-degree beam is intercepting the storm at 8,000 ft.  Are those strong winds making it to the surface?  With CASA radars spaced relatively close together, sampling the lower atmosphere is easy, and the likelihood of obtaining a good estimate of the winds as they approach (or move away from) one or more of the radars is also good.
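As a rough illustration of why the 88D's lowest tilt overshoots the low-level wind field at long range, the sketch below uses the standard 4/3 effective-earth-radius beam-height approximation; the 150 km range is an assumed value chosen for illustration, not a figure from the evaluation.

    # Rough beam-height sketch (illustrative; the 150 km range is an assumed value).
    # Standard 4/3 effective-earth-radius approximation for the beam-center height.
    import math

    def beam_height_m(range_m, elev_deg, earth_radius_m=6.371e6):
        """Approximate height of the beam center above the radar at a given slant range."""
        re = (4.0 / 3.0) * earth_radius_m     # effective earth radius
        elev = math.radians(elev_deg)
        return math.sqrt(range_m**2 + re**2 + 2.0 * range_m * re * math.sin(elev)) - re

    h = beam_height_m(150e3, 0.5)             # 0.5-degree tilt at ~150 km range
    print(f"{h:.0f} m (~{h * 3.281:.0f} ft)")  # roughly 2,600 m, i.e. ~8,600 ft above the radar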

Also, being able to sample the lower atmosphere at high resolution means that velocity and reflectivity signatures of small-scale features should show up much better and more frequently.  We saw this in an example case with a mini-supercell associated with a tropical system: it had a nice little hook and a decent velocity couplet on the CASA display, while the 88D showed it as a blob with no real velocity signature until after the tornado had touched down.

Of course, at a 3 cm wavelength attenuation occurs frequently, so any future CASA network would seem to need to be a supplement to a network of 10 cm radars.

Probabilistic “Warnings” – Ever issued a deterministic warning and wished 10 minutes later that you could cancel it or reorient it, but were concerned about the verification implications, or the possible consequences if you were wrong?  In the era of probabilistic warnings, one simply decreases/increases the probabilities, or reorients the track to produce a different area of probabilities.
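As a purely hypothetical sketch of what "just adjusting the probabilities" could look like in software (the object and field names below are my own illustration, not anything from the experiment):

    # Hypothetical sketch: adjust a probabilistic threat area in place instead of
    # cancelling and reissuing a deterministic warning.  All names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class ThreatArea:
        storm_id: str
        hazard: str           # e.g. "tornado", "wind", "hail"
        motion_deg: float     # direction the threat area is translated toward
        probability: float    # forecaster-assigned probability, 0-1

        def adjust_probability(self, new_prob):
            # Raise or lower the probability as confidence changes.
            self.probability = max(0.0, min(1.0, new_prob))

        def reorient(self, new_motion_deg):
            # Re-aim the threat area if the storm motion or area of concern changes.
            self.motion_deg = new_motion_deg % 360.0

    # Ten minutes later, instead of cancelling, the forecaster dials the
    # probability down and nudges the track:
    threat = ThreatArea("storm_A", "tornado", motion_deg=240.0, probability=0.6)
    threat.adjust_probability(0.25)
    threat.reorient(255.0)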

We did this each day in real time for various CWAs across the Plains, and on the last day with an archived case that all the participants in the EWP went through.

This actually worked better than I had expected.  But one could see that following more than two storms around with probabilities for tornadoes, winds and hail quickly became a workload problem.  Of course, there are also challenges with reasonably assigning probabilities, since that is not something we are used to doing.

For the archived case, we had VERY limited environmental information.  Assigning tornado probabilities without good environmental information was very frustrating, and it really emphasized the importance of having this data.

There are a number of problems with the current warning system.  How we would transition from what we do now to this method is not entirely clear, and how some of our users would react to this change is also unclear.  However, one can see that sophisticated users could obtain useful information that they currently don’t have.  Frequent updates to threat areas have the potential to give an earlier heads-up to people downstream of ongoing severe storms than issuing periodic warnings does.

George Phillips (NWS Topeka KS – Week 6 Participant)
