Forecaster Thoughts – Steve Nelson (2010 Week 5 – CASA)

Figure 1. Hazardous Weather Testbed in action during the 10 May 2010 tornado outbreak.

In March 2010, I was asked to participate in the CASA (Collaborative Adaptive Sensing of the Atmosphere) portion of the 2010 Spring Experimental Warning Program (EWP) in the Hazardous Weather Testbed (HWT) at the National Weather Center (NWC) in Norman, OK (Figure 1).  CASA operates a dense network of four X-band (3 cm) radars between Oklahoma City and Lawton, OK.  These radars have only a 30 nm effective range but overlap to provide multiple-radar analyses of reflectivity and velocity.  For more information on CASA, see http://www.casa.umass.edu/ or the CASA IP1 Wiki at http://casa.forwarn.org/wiki/.  The purpose of the CASA EWP experiment is to have experienced forecasters evaluate real-time and case studies of CASA radar data.
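To illustrate the idea of combining overlapping short-range radars, here is a minimal sketch of compositing reflectivity at a single grid point. The inverse-distance weighting and the `merge_reflectivity` helper are hypothetical illustrations of the concept, not CASA's actual merging algorithm.

```python
import numpy as np

def merge_reflectivity(values_dbz, distances_km, max_range_km=55.0):
    """Composite reflectivity estimates for one grid point from several
    overlapping radars, weighting each radar by its proximity.

    values_dbz   : reflectivity estimates (dBZ) from each radar seeing the point
    distances_km : distance (km) from each radar to the grid point

    Inverse-distance weighting is a made-up example scheme; 55 km roughly
    matches the ~30 nm effective range mentioned above.
    """
    values = np.asarray(values_dbz, dtype=float)
    dists = np.asarray(distances_km, dtype=float)
    in_range = dists <= max_range_km
    if not in_range.any():
        return float("nan")            # no radar sees this point
    # Nearer radars (less attenuation, finer resolution) count more.
    weights = 1.0 / np.maximum(dists[in_range], 1.0)
    return float(np.average(values[in_range], weights=weights))
```

With two radars at 10 km and 50 km reporting 50 and 40 dBZ, the composite leans heavily toward the closer radar's estimate.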

Beginning the week before my arrival at the NWC, I became increasingly excited because of consistent model forecasts of severe weather in Oklahoma.  I even stayed up the night before my departure to view the SPC outlook for May 10 – a High Risk of severe thunderstorms and large tornadoes in Oklahoma!  When I arrived at the OKC airport at noon on Monday, I immediately began coordinating my arrival with Jerry Brotzge and Brenda Philips (CASA Principal Investigators) via phone and text messages.  Brenda’s flight had also landed at OKC around noon, so we drove down together.  We had just enough time to grab a quick lunch to go and arrived at the HWT around 1:30 pm, where we immediately began reviewing the latest information.  Central Oklahoma was still under the gun, and storms were developing along the dryline to the northwest of the CASA testbed area.  I don’t think I had finished my lunch yet when Brenda told me it was time to make a forecast!  We used Twitter and NWSChat as our primary media for disseminating our forecasts, warnings, and updates.  After pegging a time of 5 pm for activity to reach the testbed area, I watched the event unfold with one supercell after another developing along and ahead of the dryline.  Unfortunately, all of them seemed to develop just outside of the testbed area.

Around 5:15 pm, one left-moving supercell split off to the northeast and moved inside the network.  This storm contained an unusually strong anticyclonic mesocyclone (a mesoanticyclone?) and hook configuration (Figure 2).  When asked if I would issue a tornado warning on that storm, I replied, “No, because anticyclonic mesocyclones rarely produce tornadoes.”  At 5:25 pm, a Tornado Warning was issued by WFO OUN for this storm.  It turned out that an EF1 anticyclonic tornado with a six-mile-long track had touched down at 5:18 pm near Bray, OK, and that another pair of tornadoes (one anticyclonic, the other cyclonic) touched down near I-35 and Wayne, OK.
Around this time, two LP-like supercells were approaching Moore and Norman.  As the Norman storm approached, I saw SPC forecasters run to the west windows.  Being a conscientious, safety-minded NWS meteorologist, I also ran to the window and observed a rapidly rotating funnel nearly over the National Weather Center (Figure 3).  The tornado grew in size as it tracked east along Highway 9 and even damaged a few NWC employees’ homes.  As storms moved east and away from the network that evening, we closed operations for the day.  Between 2 and 8 pm, 31 tornadoes were confirmed across the state [http://www.srh.noaa.gov/oun/?n=events-20100510-tornadotable].  An exciting start to the experiment, to say the least!  In the following days, there were several close calls with severe storms near the network during and after operations, but none as significant as the May 10 event.

Figure 2. Anticyclonic hook depicted on 2.0 deg reflectivity from KRSP CASA X-band radar at 2221Z 10 May 2010.
Figure 3. Tornado 300 yards south of the National Weather Center on 10 May 2010. Photo by Kevin Kloesel.

The orientation planned for Monday took place on Tuesday.  During the rest of the week, I went through several displaced real-time simulations using 88D data only, then repeated them using 88D and CASA radar data, multi-radar wind analyses, and high-resolution model forecasts.  The simulations included the Anadarko, OK tornado of 14 May 2009 and the Rush Springs, OK tornado during the early morning of 2 April 2010.  Without knowing any details of either case, I was challenged to issue timely warnings based on CASA radar data.  Without going into detail, CASA radars use adaptive scanning strategies that depend on the coverage and intensity of storms.  Data at any one elevation angle can arrive as frequently as every 30 seconds.  Trying to mentally process data from four CASA radars the same way we process data from a single 88D was an exercise in futility.  I do not believe manual interrogation of such high-resolution radar data is a realistic option for the warning forecasters of the future.

The Rush Springs, OK tornado case was very eye-opening and showed the tremendous potential of CASA radar technology to detect smaller tornadoes.  Figure 4 shows a side-by-side comparison of KTLX and KRSP CASA reflectivity at the time of the tornado.  Many areas east of the Mississippi River are prone to these smaller tornadoes, which develop more rapidly than those from supercells.  Trapp and Weisman (2005) showed how tornadoes spin up in the comma-head portion and along the leading edge of quasi-linear convective systems (QLCS).  Tornado warning lead time and accuracy are lower for both QLCS and tropical cyclone storms than for supercells.  A local study done at the Peachtree City WFO in 2009 showed that 13 of 16 unwarned F2-or-greater tornado events resulted from QLCS storms.  A Hollings Scholar is also studying QLCS tornado climatology and warning accuracy this summer at the Peachtree City WFO.  So far we have determined that the initial lead time for tornadoes from QLCS storms across the mid-South and Southeast averages about 25% of that from supercells (3-5 minutes vs. 20 minutes).
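As a quick sanity check on that comparison, the ratio works out roughly as follows. The individual lead-time values here are made-up placeholders standing in for the averages quoted above, not data from the study itself.

```python
# Hypothetical lead times (minutes) standing in for the quoted averages:
# QLCS tornadoes ~3-5 minutes, supercell tornadoes ~20 minutes.
qlcs_leads = [3.0, 4.0, 5.0]
supercell_lead = 20.0

avg_qlcs = sum(qlcs_leads) / len(qlcs_leads)   # 4.0 minutes
ratio = avg_qlcs / supercell_lead              # 0.20, i.e. on the order of 25%
print(f"QLCS lead time is {ratio:.0%} of the supercell average")
```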

Figure 4. Radar reflectivity from the 2 April 2010 Rush Springs, OK tornado. The image on the left is from the KTLX 88D at 1057Z; the middle and right images are from the KRSP CASA X-band radar at 1058Z and 1100Z, respectively.

During the week, I was able to pick the brains of some of the scientists.  I shared presentations and concerns from the 14-15 March 2008 tornado and 21 September 2009 flash flood events in north Georgia.  With Jerry Brotzge, I discussed recently published WAF research on unwarned tornadoes; he showed how such missed events can be correlated with smaller tornadoes (as just mentioned).  I plan on collaborating further in the future.

I will certainly remember the experience I had at the EWP this year and look forward to the day when technology like this is deployed operationally.

Steve Nelson (Science and Operations Officer, NWS Peachtree City/Atlanta GA – 2010 Week 5 Evaluator)


Week 5 EWP Summary: 10-14 May 2010

Week #5 of EWP2010 wrapped up with continued CASA experimentation.

CASA:

During the week, CASA hosted the following National Weather Service participant:  Steve Nelson (WFO Atlanta, GA).

Steve was joined by CASA participants Ellen Bass, Jerry Brotzge, Kevin Kloesel and Brenda Philips.  During the week, several additional CASA staff and students, including Don Rude, Brendan Hogan, and Cedar League, also were in Oklahoma visiting emergency managers in the field during the real-time events.

The week got off to a fast start with a local tornado outbreak, with no time for training or preparing Steve for what to expect, how to use the WDSS-II system, or the communications protocol with emergency managers.  Monday (May 10) saw a total of at least 12 tornadoes touch down in the region, but only one confirmed tornado within the CASA testbed.  That tornado was on the ground for ~10 minutes but was unique – anticyclonic, from a left-moving supercell.  The strong magnitude of the low-level velocity shear from CASA prompted Steve to issue an (in-house) warning on the storm, despite the anticyclonic rotation and unusual location relative to the parent supercell.  The rotation was also observed by KTLX and TDWR, but at a higher elevation.  The tornado was confirmed and classified as an EF-1.  Other tornadic supercells initiated within the testbed but moved east prior to tornado formation.

Steve spent the remainder of the week reviewing both weaker real-time data and a series of case study events.  A strong cold front passed through the testbed at 4 am Wednesday morning (May 12), spawning at least one area of rotation detected by CASA radar KCYR.  A small area of damage was reported to the NWS, which coincided with this observed vortex.  Several of the archived cases show similar results from supercell events.  Data from the May 13, 2009 event revealed several areas of rotation detected by CASA.  The squall line observed April 2, 2010 contained at least one embedded supercell, which spawned two strong vortices, each with coincident damage at the ground.  The high spatial and temporal resolution of the CASA data allows the complete development and evolution of such vortices to be tracked, and warnings to be issued, further in advance.

Additional tools, such as 3DVAR and real-time NWP (a.k.a. warn-on-forecast), were run operationally during the real-time events.  For the May 13 event, Steve first observed rotation within the 3DVAR display.  Similarly, the forecast for May 13 gave Steve much greater insight into what mode of storms to expect.  Likewise, during the May 10 event, the real-time NWP forecast predicted long-track supercells, with location and timing very similar to what was observed.

Steve recognized a number of benefits and some challenges with the collaborative and adaptive radar network design.  The low-level scanning abilities and high spatial and temporal resolution provided significant benefits, particularly in observing and warning on strong low-level winds and tornadoes.  The greatest challenge was handling data overload.  CASA data were available on both AWIPS and WDSS-II, and Steve used both systems in real time.  The primary need moving forward will be the development of visualization tools capable of easily displaying multiple radars and merged products.  The ability to move quickly and easily between products and radars will be critical to using the valuable information available from these new systems.

The CASA experiment has concluded for the spring.

PARISE:

PARISE has concluded its activities in the testbed for the spring.  In several weeks, a quick end-of-experiment summary will be prepared by the PARISE principal scientists.

A LOOK AHEAD:

Beginning 17 May, we will begin the second phase of our spring activities with two new experiments, a) an evaluation of experimental Multiple-Radar/Multiple-Sensor (MRMS) severe weather algorithm products, and b) an evaluation of GOES-R convective initiation and lightning mapping products.

Greg Stumpf, EWP2010 Operations Coordinator

Jerry Brotzge, CASA Scientist


Forecaster Thoughts – Bill Martin (2010 Week 4 – CASA)

I spent last week in Norman at NSSL and the Hazardous Weather Testbed helping to evaluate how the CASA radar network can be used in operations.  In addition to the radar people in Norman, I got to work with systems engineers from the University of Virginia who are studying how forecasters make use of information and software tools.

As you may recall, the CASA radar network consists of four relatively low-powered radars in southwest Oklahoma designed to work collaboratively.  A much larger network has been envisioned; I was told that a national network would require 10,000 such radars.  The close spacing of the radars allows them to see the lower levels of the atmosphere much better than 88Ds and to be closer to targets, giving them considerably better resolution than a typical 88D (though an 88D has better resolution for targets close to it).  The collaborative aspects of the network include things like dual-Doppler analysis: a large network would be able to give 2-D wind vector fields instead of just toward-and-away radial velocities, which would reduce the intellectual load of radar interpretation quite a bit.  Disadvantages of the network include attenuation and data quality problems (which would be mitigated by a larger network), and cost.  Each of the prototypes cost around $250K plus maintenance, though this would presumably come down with mass production, if it ever came to that.
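The dual-Doppler idea can be sketched with the textbook two-radar retrieval: each radar measures only the component of the wind along its beam, and two such components from different viewing angles determine the full horizontal vector. This is a simplified illustration (purely horizontal flow, negligible elevation angle, and a made-up `dual_doppler_wind` helper), not the actual CASA analysis code.

```python
import numpy as np

def dual_doppler_wind(vr1, az1_deg, vr2, az2_deg):
    """Retrieve the horizontal wind (u = eastward, v = northward) at a point
    seen by two radars, given each radar's radial velocity (m/s, positive
    away from the radar) and the azimuth (degrees from north) from the
    radar to the point.
    """
    a1, a2 = np.radians([az1_deg, az2_deg])
    # Each radar sees vr = u*sin(az) + v*cos(az)  ->  a 2x2 linear system.
    # Note: the system becomes ill-conditioned when the two beams are
    # nearly parallel, which is why radar spacing/geometry matters.
    A = np.array([[np.sin(a1), np.cos(a1)],
                  [np.sin(a2), np.cos(a2)]])
    u, v = np.linalg.solve(A, np.array([vr1, vr2], dtype=float))
    return u, v
```

For example, a pure 10 m/s westerly wind gives a 10 m/s radial velocity to a radar looking due east at the point (azimuth 90°) and zero radial velocity to one looking due north (azimuth 0°); solving recovers u = 10, v = 0.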

CASA radars are sometimes considered gap-filling radars, and they could certainly fill this role.  However, gap-filling radars have been available from vendors for some time, and the CASA project was designed to be more than that through its collaborative properties.  Core funding for CASA has come from the NSF and has another two years to run.  After this, progress may come more slowly as funding for different aspects of CASA becomes diffuse, unless a source of new funding is identified.  I was involved in CASA from the beginning, having attended the NSF review that originally funded the project (as a graduate student).

As there was no active weather the week I was there, most of the time in the Testbed was spent playing back archived cases and issuing experimental warnings based on the CASA data, in addition to the usual data.

Some of the interesting issues that came up:

–The systems engineering people were fascinated by the fact that all the forecasters they had evaluated used the available information differently.  I’m not sure if that is good or bad.  On the one hand, it is good to have variety so that new ideas can come to light; on the other hand, for some things there are probably “best” ways to proceed.

–WDSS-II versus AWIPS.  The WDSS-II software was used to visualize data.  It was much more sluggish and difficult to use than D2D.  The FSI that we use as a plug-in to D2D is a subset of WDSS-II.  For operations, we need fast and highly responsive access to data, so I recommended WDSS-II be redesigned to be more efficient.  They had recently gotten D2D to work with real-time CASA data, and it was good to have both available so I could show them that software for looking at radar data can actually be zippy.

–Having high-resolution data routinely available allows tornadoes to be discriminated based on reflectivity signatures.  I believe this would be a relatively new concept in operations.  The reflectivity “donut” associated with tornadoes that is seen by high-resolution research radars has been recognized for some years as verification of a tornado.  “Donuts” or similar features were seen in all tornado cases available with CASA; such features are rarely seen by 88Ds due to their typically lower resolution, though with super-res data I suspect tornado reflectivity features are now seen more often in 88Ds.  The TVS algorithm we currently use relies only on velocity information, and many forecasters do likewise; however, it is becoming clear that greatly improved detection can be achieved by considering both velocity and reflectivity signatures.

–Data overload.  CASA radars produce a volume scan every minute; there are four CASA radars to look at, plus 2-D wind analyses and short-term forecasts, in addition to all the usual things.  It is very difficult to keep up with all these data sources and simultaneously make warning decisions.  The data overload problem is a recognized issue with many new data streams.  Possible solutions include greatly improved algorithms to handle some, or most, of the analysis, and putting all the data from different sources into some sort of combined 4-D space that can be perused (similar to the FAA’s 4-D cube).  With a 4-D cube concept, a short-term forecast can be combined with the data in the same 4-D space to show an extrapolation (similar to the warn-on-forecast concept).

–Using CASA radars did help quite a bit in issuing warnings because of improved resolution of features, because of seeing closer to the ground, and because of better time resolution.  Having a dense network of CASA radars (with good software tools for analysis) would be quite an advance.  Of course, doubling the density of the 88D network might achieve many of the same goals, and it is really a question of cost-effectiveness.

A couple other things I learned on the trip:

–The MPAR (Multi-function Phased Array Radar) is scheduled for a large increase in funding next year.  This is mostly to prove the concept of dual-pol phased array, which hasn’t been done before.  A phased-array radar network is envisioned as a potential replacement for the 88D network.  This one network would be used by multiple agencies, including the NWS, the FAA for air traffic control, and DHS.  For this concept to be palatable to the NWS, the replacement for the 88D network should be at least close in performance to the current 88D network, and this includes dual-pol.

–NOAA is developing a roadmap for radar which extends through 2025.  I suspect this is fairly fluid, but ideas include MPAR, gap-filling radars, and integrating private sector radars (TV stations), as well as assimilating radar data for warn-on-forecast.  The only thing really firm is the dual-pol deployment over the next 3 years.

Bill Martin (Science and Operations Officer, NWS Glasgow MT – 2010 Week 4 Evaluator)


Week 4 EWP Summary: 3-7 May 2010

Week #4 of EWP2010 wrapped up with continued CASA experimentation.

CASA:

During the week, CASA hosted the following National Weather Service participant:  Bill Martin (WFO Glasgow, MT).

A lack of severe weather during week #4 once again allowed ample time for careful review of archived case studies.  Monday was spent reviewing CASA operations, forecaster-emergency manager communications, and familiarization with CASA visualization and scanning tools.  The remainder of the week was used for reviewing archived events and running through Don Rude’s comparison cases.

Many archived events were reviewed in displaced real time, allowing for more “true-to-life” operational scenarios.  Several cases were reviewed twice – first with NEXRAD and TDWR data only, and then a second time including NEXRAD, TDWR, and CASA data and products.  CASA products included 3DVAR and NWP.  Topics raised throughout the week included emergency manager operations, data overload, multi-radar visualization, multi-sensor products, and NWP (“Warn-on-Forecast”) output.  One important issue going forward is how forecast information can be integrated easily and seamlessly into NWS operations.  Ideally, a data cube including all multi-sensor data would be available that shows real-time 3D analysis (at multiple scales) as well as projections into the future.  Otherwise, data overload could escalate as more radars and radar products are added.  As the case studies again showed, the low-level sampling and spatial resolution demonstrated by the CASA testbed highlight many storm details that may be critical for future improvements in our warning capability.

The CASA experiment continues for one more week in the testbed before wrapping up for the spring.

PARISE:

PARISE has concluded its activities in the testbed for the spring.  In several weeks, a quick end-of-experiment summary will be prepared by the PARISE principal scientists.

A LOOK AHEAD:

Beginning 17 May, we will begin the second phase of our spring activities with two new experiments, a) an evaluation of experimental Multiple-Radar/Multiple-Sensor (MRMS) severe weather algorithm products, and b) an evaluation of GOES-R convective initiation and lightning mapping products.

Greg Stumpf, EWP2010 Operations Coordinator

Jerry Brotzge, CASA Scientist


Forecaster Thoughts – Ernie Ostuno (2010 Week 3 – PARISE)

First I want to say that my overall impression of PARISE 2010 is that it was a very well-run and enjoyable exercise. Seldom have I found simulated severe weather to be so much fun. 🙂

Here’s what I observed, and remembered most:

The main benefit of the PAR was the increased temporal resolution.  This was most apparent in the Tropical Storm Erin case study, where small, rapidly evolving mesocyclones were sampled often enough to show the rapid increases in low-level rotation.  In Michigan, we often see these types of mesocyclones in the warm season and have trouble issuing warnings on them with any lead time.  One issue that should be studied from a social science perspective is how the PAR data, particularly the increased temporal resolution, will affect warning decisions by forecasters who will be seeing detail in storm evolution that they are not familiar with.  Will it increase lead times and false alarms?  Can we measure this?  Can we sufficiently train warning forecasters on the new data before PAR is fielded?  I’m also concerned that we might be looking at case studies that were not fully investigated on the ground.  Is it possible that some of these storms produced hail, wind, or even tornadoes that were not documented?

I noted a couple of PAR data quality issues.  There was one case where sidelobe contamination masked the evolution of an outflow boundary.  There were a few cases where improperly dealiased data masked a velocity couplet, but this also illustrated the importance of increased temporal resolution, since one bad scan meant a loss of only two minutes of the storm’s evolution, versus an equivalent 8- or 10-minute gap in the 88D data.

I understand that the PAR “library” of events is probably rather limited at this time, but I would like to see a case study of a line of convection with short, bowing segments and small, shallow, rapidly evolving circulations, which makes up one of our most common severe weather types in Michigan, especially in the cool season.

Let me end by saying thanks to all of you who were responsible for putting together such a great experience for me as a warning forecaster, and for all your efforts in seeking and documenting our feedback!

Ernie Ostuno (Lead Forecaster, NWS Grand Rapids MI – 2010 Week 3 Evaluator)


Forecaster Thoughts – Ryan Sharp (2010 Week 3 – PARISE)

First off, thanks for the chance to come up there and see this great new tool that hopefully will be available to all forecasters soon.

In my opinion, the experiment during our week was well run, with the least amount of stress put upon us as forecasters.  We had plenty of time to assess the situation for each case study and only one hour of “stress”-ful time.  But even that time went by fast, as the weather was relatively easy to focus on.  By the nature of the 90-degree wedge, forecasters had already sectorized the workload.  It would be interesting to see a case study with a long line of storms with multiple areas of severe weather within that wedge to see how we would have done.  Hopefully you got a good amount of detail with the Norman tornado outbreak of a few days ago.

One concern I had on day 1 was having to use a new interface to handle the data.  I liked the WDSS-II, as it had a similar feel to FSI.  As the week progressed, I got used to using the tools it had available, but I still would have liked to have access to all of the usual procedures I have for storm interrogation.  Not sure what the future is for this type of interface within NWS, but I would like to see the experiment going towards whatever we will be using.  I will have to say that when I returned to the office and dealt with the subsequent severe weather we had over the last couple of weeks, I did go back to using FSI again to help with storm interrogation as a result of my WDSS-II usage.

It would be interesting to study radar fatigue in dealing with 1-minute data.  Since I have been in the Weather Service, we’ve gone from 5-6 minute updates to 4-minute updates with the 12-series VCPs.  I have noticed this change resulting in less chance to get away from the desk to use the bathroom or get something to eat.  Getting new data every minute, or even more often, would mean a lot more time spent interrogating with no downtime in between.  The easy workaround is to make sure you rotate the people working radar.  Our office employs a convective weather operations plan (CWOP) that could be changed around to make sure the event coordinator is mindful of making this type of rotation more quickly.

Speaking of the CWOP, it was fun to work as a team of forecasters making decisions on the warnings.  At our last severe weather round table meeting back in March, a “Cadillac” model of operations was introduced into our plan where we would have teams of forecasters making the warning decisions.  We would only do this in situations where we have lots of staffing preceding a moderate to high risk day with enough notice to “gather the forces”.  When the team decides a warning/statement is warranted, the second forecaster would deal with all of the text needed while the first forecaster maintains radar watch.  I thought the experiment went well, and I understood the need you had to hear what our thoughts were in making the warning decisions at the times we were making the warning decisions…to see what products we were using most to make our decisions.  However, future tests may need to drop back down to one forecaster analyzing all of the data.  Then you would have to do a quick debrief afterward to find out what thoughts went into the warn/no-warn decisions.

Again, I really appreciate the opportunity I had to come over there and see the National Weather Center for the first time as well as work with the next/next generation of radar data.  After my FSU upbringing, I became jealous of all that OU’s school of meteorology has to offer with the co-located offices in that building.  I could certainly see why Norman is such a popular place for meteorologists.

Ryan Sharp (Lead Forecaster, NWS Louisville KY – 2010 Week 3 Evaluator)


Forecaster Thoughts – Kathleen Torgerson (2010 Week 3 – PARISE)

The higher temporal sampling of the PAR data finally provides a more fluid perspective of storm evolution, which was exciting to observe!  This proved particularly beneficial for the fast-evolving storm cases, where 1-minute volume scans finally gave warning decision makers a fighting chance at some warning lead time, especially for rapidly evolving tornadic storms.  Another benefit of the higher temporal sampling of PAR data was the ability to diagnose feature persistence, particularly with rapidly evolving mesocyclones.  With the 4- to 5-minute volume scans of the current WSR-88D design, a rapidly evolving tornadic mesocyclone may be captured by one volume scan, after which the warning decision maker is left to speculate: is this feature going to be persistent or transient?  Should I warn, or should I wait for one more volume scan?  If I wait, will I sacrifice potentially life-saving lead time?  Or, if the feature is not persistent, will I reduce my False Alarm Rate (FAR) and potentially increase credibility in the public’s eye by not having warned for a null event?  In cases like these, the environmental cues (how conducive the environment is to producing tornadoes) are relied on more heavily to anticipate storm evolution and tip the scales toward warning or not warning.  Of course, all storms within a similar environment will not necessarily produce the same result, and hence the weakness of this strategy.  With the PAR data, the higher temporal sampling gave us several more volume scans to assess the persistence of storm features and gain a better understanding of the storm evolution.  This resulted in greater confidence in the warning decision-making process, even if the outcome (would the storm successfully produce a tornado, and if so, would it be observed and reported?) was not entirely certain.
I believe the challenge of achieving greater warning lead-time without a higher FAR will not entirely go away in the higher-temporal resolution world of PAR.  But our ability to diagnose storm processes will certainly improve, and through a learning experience, our warning decisions will improve with it.
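The value of those extra volume scans for judging persistence can be shown with a toy sampling experiment. The rotation trace and threshold below are invented for illustration, not data from any PARISE case.

```python
import numpy as np

# Invented rotational-velocity trace (m/s) of a transient mesocyclone that
# stays strong for about 8 minutes within a 21-minute window.
t = np.arange(0, 21)                                  # time, minutes
vrot = np.where((t >= 6) & (t <= 13), 25.0, 10.0)     # burst of rotation

def strong_scans(threshold_ms, update_interval_min):
    """Count volume scans, at the given update interval, that catch the
    rotation at or above the threshold."""
    return int(np.sum(vrot[::update_interval_min] >= threshold_ms))

print(strong_scans(20.0, 1))   # PAR-like 1-minute updates -> 8 scans
print(strong_scans(20.0, 5))   # 88D-like 5-minute updates -> 1 scan
```

With 1-minute updates, the burst is sampled eight times, enough to judge persistence; with 5-minute updates, it appears in a single scan, leaving the warn-or-wait question unresolved.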

I found my storm interrogation technique evolving over the duration of the project away from the “All-Tilts” perspective of examining each elevation scan as it arrived in the database.  Instead, I found myself gravitating toward watching loops at critical levels within the storm in order to identify key storm-scale processes important to my warning decision making (such as intensifying/deepening mesocyclones, RFDs, and hail cores aloft).  Our current AWIPS/WSR-88D technique of storm analysis through all-tilts would be a daunting task in a PAR environment with 1-minute volume scans.  To analyze and interpret this vast amount of data rapidly, our techniques for radar interrogation will probably need to evolve toward viewing data in four dimensions.  I could envision new radar display capabilities for looping 3D isosurfaces of radar parameters such as reflectivity, velocity, shear, and eventually the dual-pol parameters.  Our computer systems will need to be fast enough to display and loop this larger quantity of data, and the GUI interfaces intuitive and efficient enough to modify display characteristics rapidly without negatively affecting system performance.  Warning operator fatigue with higher temporal resolution PAR data is certainly something to be concerned about, especially with widespread events across the entire forecast area.  But with the right tools and interrogation techniques, I believe this could be overcome, and the benefits this data could provide in understanding storm-scale evolution and enhancing NWS warning operations could be far-reaching!

Participating in PARISE was such an enjoyable and exciting experience!  My thanks goes to everyone who put this project together and gave field forecasters the opportunity to participate.  I was truly inspired by your attentiveness to our feedback and your desire to understand our experiences with this new data set.  After having experienced how powerful PAR data was for warning decision making, I hope this system can be fielded as soon as possible.

Kathleen Torgerson (Lead Forecaster, NWS Pueblo CO – 2010 Week 3 Evaluator)


Week 3 EWP Summary: 26-30 April 2010

SUMMARY:

Week #3 of EWP2010 wrapped up with continued PARISE and CASA experimentation.  Another week without severe weather in Central Oklahoma during the EWP operational shifts left participants going through exercises with archived data.  A summary of each experiment follows:

CASA:

During the week, CASA hosted the following National Weather Service participant:  Ron Przybylinski (WFO St. Louis, MO)

A lack of severe weather during week #3 once again allowed ample time for careful review of archived case studies.  Monday was spent reviewing CASA operations, forecaster-emergency manager communications, and familiarization with CASA visualization and scanning tools.  The remainder of the week was used for reviewing archived events and running through Don Rude’s comparison cases.

Many archived events were reviewed in displaced real-time, allowing for more “true-to-life” operational scenarios.  Topics raised throughout the week included emergency manager operations, data overload, multi-radar visualization, multi-sensor products, and NWP (“Warn-on-Forecast”) output.    A number of details arose within the case studies that confirmed conceptual models, and yet many details may require some refining of those models.  Data collected from CASA may shed new light on the damage caused by small-scale (non-tornadic?) vortices and raise new questions about how to warn on such features.  One key takeaway: Many storm features simply are not visible in the current WSR-88D network configuration. The low-level sampling and spatial resolution, as demonstrated by the CASA testbed, highlight many storm details that may be critical for future improvements in NWS warning capability.

The CASA experiment continues for two more weeks in the testbed before wrapping up for the spring.

PARISE:

During the week, PARISE hosted the following National Weather Service participants:  Ernie Ostuno (WFO Grand Rapids, MI), Jennifer Palucki (WFO Albuquerque, NM), Ryan Sharp (WFO Louisville, KY), Kathy Torgerson (WFO Pueblo, CO).

With this final set of four NWS forecasters, the PARISE experiment repeated its exercise from Weeks 1 and 2.  As in those weeks, there were no real-time PAR data collection opportunities, so all exercises were conducted with archived case data sets.

With this third week, PARISE has concluded its activities in the testbed for the spring.  In several weeks, a brief end-of-experiment summary will be prepared by the PARISE principal scientists.

A LOOK AHEAD:

On 17 May, we will begin the second phase of our spring activities with two new experiments: a) an evaluation of experimental Multiple-Radar/Multiple-Sensor (MRMS) severe weather algorithm products, and b) an evaluation of GOES-R convective initiation and lightning mapping products.

Greg Stumpf, EWP2010 Operations Coordinator

Jerry Brotzge, CASA Scientist

Pamela Heinselman, PAR Scientist

Monday Kickoff – 2010 Week 3 PARISE

Week 3 of EWP2010 commences on Monday April 26.  Our National Weather Service participants will be:

CASA:  Ron Przybylinski (WFO St. Louis, MO)

PARISE:  Ernie Ostuno (WFO Grand Rapids, MI), Jennifer Palucki (WFO Albuquerque, NM), Ryan Sharp (WFO Louisville, KY), Kathy Torgerson (WFO Pueblo, CO)

The long range outlook suggests that we should finally see some severe weather activity in Central Oklahoma this week, particularly toward the end of the week.  Crossing fingers!

Greg Stumpf, EWP2010 Operations Coordinator

Forecaster Thoughts – Andrea Lammers (2010 Week 2 – PARISE)

The best quality of PAR is the increased temporal resolution.  Updates every minute (or sometimes every 43 seconds) on the PAR sure beat the 4-minute updates on the 88Ds.  From the testing that we did, I truly believe these faster updates will lead to better warning decision making.  When comparing the 88D’s 4-minute updates side-by-side with the PAR’s 1-minute updates, it was quite obvious that storm features and mesoscale evolutions can be better detected by the PAR.  As discussed in my session, the specific storm threats/types that might benefit the most from this faster data are pulse storms, quick spin-up tornadoes, and microbursts.  However, I think warning decisions on all storm types will be improved by the PAR as well.

I definitely think the NWS should choose to implement PARs as soon as possible!  I think sharing PARs with the aviation community is a good idea to help with funding; however, the system should be thoroughly tested to make sure it can handle such a workload.  Watching planes and severe storms with one-minute updates seems like quite a job, but maybe I’m underestimating the PAR.  As for the PAR design, a cylindrical shape seems reasonable.  However, it would be very nice to have sensors on top of the cylinder to fill in the cone of silence.

PAR data is awesome, and I think forecasters in the field will really, really love this new technology!  I think forecasters of various skill levels and technological expertise will be able to adapt quickly to the new PAR data.  I think my PAR experiment group proved this point: we were a pretty diverse group, but all of us seemed to adapt well after just 3 or 4 days of working with PAR data.  Of greater concern might be forecaster fatigue… We seemed to get tired more quickly working with the PAR data, as we were constantly processing new data every minute as opposed to every 4 minutes.  It was doable, but PAR radar operators may need to take more breaks or work radar for shorter periods than current 88D operators do.

Again, thanks for this opportunity to test our future technology.  I can’t wait to hear thoughts from the other PAR test groups!

Andrea Lammers (Lead Forecaster, NWS Louisville KY – 2010 Week 2 Evaluator)