Forecaster Thoughts – Ryan Sharp (2010 Week 3 – PARISE)

First off, thanks for the chance to come up there and see this great new tool that hopefully will be available to all forecasters soon.

In my opinion, the experiment during our week was well run, with minimal stress put upon us as forecasters.  We had plenty of time to assess the situation for each case study and only one hour of “stressful” time.  But even that hour went by fast, as the weather was relatively easy to focus on.  By the nature of the 90-degree wedge, forecasters had already sectorized the workload.  It would be interesting to see a case study with a long line of storms and multiple areas of severe weather within that wedge to see how we would have done.  Hopefully you got a good amount of detail from the Norman tornado outbreak of a few days ago.

One concern I had on day 1 was having to use a new interface to handle the data.  I liked WDSS-II, as it had a similar feel to FSI.  As the week progressed, I got used to the tools it had available, but I still would have liked access to all of the usual procedures I have for storm interrogation.  I am not sure what the future holds for this type of interface within the NWS, but I would like to see the experiment move toward whatever we will actually be using.  I will say that when I returned to the office and dealt with the severe weather we had over the last couple of weeks, I went back to using FSI to help with storm interrogation as a result of my WDSS-II usage.

It would be interesting to study radar fatigue when dealing with 1-minute data.  Since I have been in the weather service, we have gone from 5-6 minute updates to 4-minute updates with the 12 series of VCPs.  I have noticed that even this change leaves less chance to get away from the desk to use the bathroom or get something to eat.  Getting new data every minute, or even less than a minute, would mean spending much more time interrogating with no down time in between.  The easy workaround is to make sure you rotate the people working radar.  Our office employs a convective weather operations plan (CWOP) that could be adjusted so the event coordinator is mindful of making this type of rotation more quickly.
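For a rough sense of that pacing change, here is a minimal back-of-the-envelope sketch; the intervals are the update rates mentioned in these posts, with a 5.5-minute average assumed for the legacy VCPs.

```python
# Rough interrogation-load comparison across update intervals.
# Interval values are assumptions taken from the rates discussed here.
UPDATE_INTERVALS_S = {
    "legacy VCPs (5-6 min)": 330,   # assume ~5.5 min average
    "VCP12 (4 min)": 240,
    "PAR (1 min)": 60,
    "PAR oversampled (43 s)": 43,
}

SECONDS_PER_HOUR = 3600

for name, interval_s in UPDATE_INTERVALS_S.items():
    scans_per_hour = SECONDS_PER_HOUR / interval_s
    print(f"{name:24s} ~{scans_per_hour:5.1f} new volume scans per hour")
```

At one-minute updates a forecaster faces roughly five times the scan arrivals of VCP12, which is the heart of the rotation argument above.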

Speaking of the CWOP, it was fun to work as a team of forecasters making warning decisions.  At our last severe weather round-table meeting back in March, a “Cadillac” model of operations was introduced into our plan in which teams of forecasters make the warning decisions.  We would only do this in situations where we have plenty of staffing preceding a moderate to high risk day, with enough notice to “gather the forces”.  When the team decides a warning or statement is warranted, the second forecaster handles all of the text while the first forecaster maintains radar watch.  I thought the experiment went well, and I understood your need to hear our thoughts at the moment we were making the warning decisions, to see which products we were using most.  However, future tests may need to drop back down to one forecaster analyzing all of the data, followed by a quick debrief afterward to find out what thoughts went into the warn/no-warn decisions.

Again, I really appreciate the opportunity to come over there and see the National Weather Center for the first time, as well as work with the next-next generation of radar data.  After my FSU upbringing, I became jealous of all that OU’s School of Meteorology has to offer, with the co-located offices in that building.  I could certainly see why Norman is such a popular place for meteorologists.

Ryan Sharp (Lead Forecaster, NWS Louisville KY – 2010 Week 3 Evaluator)


Forecaster Thoughts – Kathleen Torgerson (2010 Week 3 – PARISE)

The higher temporal sampling of the PAR data finally provides a more fluid perspective of storm evolution, which was exciting to observe!  This proved particularly beneficial for the fast-evolving storm cases, where 1-minute volume scans gave warning decision makers a fighting chance at achieving some warning lead time, especially for rapidly evolving tornadic storms.  Another benefit of the higher temporal sampling was the ability to diagnose feature persistence, particularly with rapidly evolving mesocyclones.  In the 4 to 5 minute volume scans of the current WSR-88D design, a rapidly evolving tornadic mesocyclone may be captured by a single volume scan, after which the warning decision maker is left to speculate: is this feature going to be persistent or transient?  Should I warn, or should I wait for one more volume scan?  If I wait, will I sacrifice potentially life-saving lead time?  Or, if the feature is not persistent, will I reduce my False Alarm Rate (FAR) and potentially increase credibility in the public’s eye by not having warned for a null event?  In cases like these, environmental cues (how conducive is the environment to producing tornadoes?) are relied on more heavily to anticipate storm evolution and tip the scales toward warning or not warning.  Of course, storms within a similar environment will not necessarily produce the same result, and therein lies the weakness of this strategy.

With the PAR data, the higher temporal sampling gave us several more volume scans to assess the persistence of storm features and gain a better understanding of the storm evolution.  This resulted in greater confidence in the warning decision-making process, even if the outcome (would the storm successfully produce a tornado, and if so, would it be observed and reported?) was not entirely certain.  I believe the challenge of achieving greater warning lead time without a higher FAR will not entirely go away in the higher temporal resolution world of PAR.  But our ability to diagnose storm processes will certainly improve, and through a learning experience, our warning decisions will improve with it.
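As a minimal sketch of that persistence argument (assuming, purely for illustration, a six-minute mesocyclone lifetime):

```python
# How many volume scans observe a feature of a given lifetime?
# The lifetime and scan intervals below are illustrative assumptions.

def scans_observing(feature_lifetime_s: float, update_interval_s: float) -> int:
    """Count complete volume scans that fit within the feature's lifetime."""
    return int(feature_lifetime_s // update_interval_s)

meso_lifetime_s = 6 * 60  # hypothetical rapidly evolving mesocyclone

for radar, interval_s in (("WSR-88D (4.5 min)", 270), ("PAR (1 min)", 60)):
    n = scans_observing(meso_lifetime_s, interval_s)
    print(f"{radar}: ~{n} scan(s) available to judge persistence")
```

One scan leaves only the warn-or-wait dilemma described above; six scans allow a persistence trend to be seen.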

I found my storm interrogation technique evolving through the duration of the project, away from the “All Tilts” perspective of examining each elevation scan as it arrived in the database.  Instead, I found myself gravitating toward watching loops at critical levels within the storm in order to identify key storm-scale processes important to my warning decision making (such as intensifying/deepening mesocyclones, RFDs, and hail cores aloft).  Our current AWIPS/WSR-88D all-tilts approach to storm analysis would be a daunting task in a PAR environment with 1-minute volume scans.  To analyze and interpret this vast amount of data rapidly, our techniques for radar interrogation will probably need to evolve toward viewing data in four dimensions.  I could envision new radar display capabilities for looping 3D isosurfaces of radar parameters such as reflectivity, velocity, shear, and eventually the dual-pol parameters.  Our computer systems will need to be fast enough to display and loop this larger quantity of data, and the GUI interfaces intuitive and efficient enough to modify display characteristics rapidly without degrading system performance.  Warning operator fatigue with higher temporal resolution PAR data is certainly something to be concerned about, especially with widespread events across the entire forecast area.  But with the right tools and interrogation techniques, I believe this could be overcome, and the benefits this data could provide in understanding storm-scale evolutions and enhancing NWS warning operations could be far reaching!
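As a sketch of what a looping-isosurface capability might compute each volume scan, here is a minimal example using scikit-image's marching cubes on a synthetic reflectivity volume; the field and the 50 dBZ level are invented for illustration, and a real display would repeat this for every 1-minute volume to build the loop.

```python
import numpy as np
from skimage import measure

# Build a fake reflectivity volume (dBZ) with a single storm-like core.
z, y, x = np.mgrid[0:20, 0:60, 0:60].astype(float)
refl = 60.0 * np.exp(-(((x - 30) ** 2 + (y - 30) ** 2) / 200.0
                       + (z - 5) ** 2 / 50.0))

# Extract the 50 dBZ isosurface; a display would render and loop these meshes.
verts, faces, _, _ = measure.marching_cubes(refl, level=50.0)
print(f"50 dBZ isosurface: {len(verts)} vertices, {len(faces)} faces")
```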

Participating in PARISE was such an enjoyable and exciting experience!  My thanks goes to everyone who put this project together and gave field forecasters the opportunity to participate.  I was truly inspired by your attentiveness to our feedback and your desire to understand our experiences with this new data set.  After having experienced how powerful PAR data was for warning decision making, I hope this system can be fielded as soon as possible.

Kathleen Torgerson (Lead Forecaster, NWS Pueblo CO – 2010 Week 3 Evaluator)


Forecaster Thoughts – Andrea Lammers (2010 Week 2 – PARISE)

The best quality of PAR is the increased temporal resolution.  Updates every minute (or sometimes every 43 seconds) on the PAR sure beat the 4-minute updates on the 88Ds.  From the testing we did, I truly believe these faster updates will lead to better warning decision making.  When comparing the 88D 4-minute updates side by side with PAR 1-minute updates, it was quite obvious that storm features and mesoscale evolutions can be better detected by the PAR.  As discussed in my session, the specific storm threats/types that might benefit most from this faster data are pulse storms, quick spin-up tornadoes, and microbursts.  However, I think warning decisions on all storm types will be improved by the PAR as well.

I definitely think the NWS should implement PARs as soon as possible!  Sharing PARs with the aviation community is a good idea to help with funding; however, it should be thoroughly tested to make sure the system can handle such a workload.  Watching planes and severe storms with one-minute updates seems like quite a job, but maybe I’m underestimating the PAR.  As for the PAR design, a cylindrical shape seems reasonable.  However, it would be very nice to have sensors on top of the cylinder to fill in the cone of silence.

PAR data is awesome, and I think forecasters in the field will really, really love this new technology!  I think forecasters of various skill levels and technological expertise will be able to adapt quickly to the new PAR data.  My PAR experiment group proved this point: we were a pretty diverse group, but all of us seemed to adapt well after just 3 or 4 days of working with PAR data.  Of greater concern might be forecaster fatigue… We seemed to tire more quickly working with the PAR data, as we were constantly processing new data every minute as opposed to every 4 minutes.  It was doable, but PAR radar operators may need more frequent breaks or shorter radar shifts than current 88D operators.

Again, thanks for this opportunity to test our future technology.  I can’t wait to hear thoughts from the other PAR test groups!

Andrea Lammers (Lead Forecaster, NWS Louisville KY – 2010 Week 2 Evaluator)


Forecaster Thoughts – Brian Montgomery (2010 Week 2 – PARISE)

I wanted to re-express my gratitude to you, Dr. Pam Heinselman, and the rest of the team for an amazing week with PAR!  I am confident these experiments will prove that this technology will not only be revolutionary but will also provide faster life-saving information.  Here are some additional thoughts:

1.  The experiment was well coordinated, with an exceptional set of enthusiastic graduate students who made this experience quite rewarding.  My recommendation would be to add one more full day of activities.  This would allow forecasters to become more familiar with the software, visit the SPC and the Norman Forecast Office, and perhaps increase the potential to catch a severe weather episode (assuming another experiment during convective season).
2.  Forecaster fatigue was discussed during the experiment, since PAR offers a high frequency of updates.  While we all strive to get as much lead time as possible, the human factor of mental breaks is a necessity.  As of this writing, with VCP12, we have become accustomed to 4-5 minute update intervals.  I would recommend an experiment of working a full event to find the “breaking point” at which a forecaster needs to step away for that mental break.
3.  While we had some software glitches during our visit, WDSS-II does offer a glimpse of where this display technology is going.  I was delighted to see the enhancements over the original display and our current legacy D2D AWIPS software.  Some integration of software technologies, including GR2Analyst, may provide additional flexibility with PAR.
4.  I was pleased to see the emphasis on base data!  While algorithms can aid in where forecasters should focus, this is usually “after the fact” and detracts from the warn-on-forecast philosophy.  In fact, with PAR, the concept of these algorithms will likely grow beyond TVSs, mesos, and hail toward the future “Warn On Forecast” presented to us by David Stensrud (and Dustan Wheatley).  This future research is rather exciting, as meteorologists, emergency managers, media, and decision makers will be able to provide a better prognosis in a rapidly changing environment.  Now all we need is for Moore’s Law to catch up computationally and provide this enhanced awareness.

Again, this was a phenomenal opportunity to work with everyone, and I would be honored to continue where we left off back in April.

Brian Montgomery (Lead Forecaster, NWS Albany NY – 2010 Week 2 Evaluator)


Forecaster Thoughts – Michael Scotten (2010 Week 1 – PARISE)

I want to thank Pam, Greg, Daphne, Heather, and others for giving me this great opportunity to use and evaluate PAR data.  This was an awesome experience!  I truly believe this technology will help meteorologists make better warning decisions in the near future.

The higher temporal PAR data, with 40 to 80 second updates of the lowest elevation slices, appears to be the biggest advantage for NWS forecasters.  It will definitely change future warning decisions.  Meteorologists will be able to detect the microscale evolution of hooks, bows, and appendages to quickly pinpoint tornado spin-ups and microburst winds.  The tropical case, with its rapid tornadogenesis, stood out as the most compelling argument for higher temporal data: the PAR caught each tornado occurrence, while the current, much lower temporal resolution WSR-88D data did not.  As a result of using higher temporal data, meteorologists will likely uncover more small-scale phenomena such as tornadoes, especially weak ones, and microburst winds, and detect tornadoes and damaging winds faster as well.  These benefits will help us better understand severe local storms.

There may be a few possible drawbacks to the higher temporal PAR data.  For some radar operators, it may be overwhelming, particularly in quickly evolving weather situations.  Also, the uncovering of more small-scale phenomena may skew tornado and damaging wind climatologies upward.  When higher temporal data first arrives at local forecast offices, false alarms may increase as more warnings are issued for newly discoverable phenomena.  Down the road, however, fewer warnings with much smaller spatial areas will likely follow, leading to better customer service.

I also enjoyed working with new and creative PAR adaptive sampling strategies.  In particular, I preferred the Oversampled_VCP_within_120km_only scanning strategy, with the quickest update times of the lowest elevation slices.  New scanning strategies will lead to better sampling of varying weather phenomena.  For example, perhaps in the near future, differing scanning strategies could be used for detecting mesocyclones: one specializing in interrogating one or two discrete supercells, versus a wider sampling for several mesocyclones embedded within a large-scale QLCS.  This will only help the NWS further fulfill its mission of saving lives.
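A hypothetical sketch of that adaptive-strategy idea: pick a scanning strategy from storm mode and range. Every name and threshold below is invented for illustration, except Oversampled_VCP_within_120km_only, which is the strategy named above.

```python
def choose_strategy(storm_mode: str, range_km: float) -> str:
    """Toy adaptive-scanning dispatch; not an actual PAR interface."""
    if storm_mode == "discrete_supercell" and range_km <= 120:
        # Fast, focused low-level updates on one or two cells of interest.
        return "Oversampled_VCP_within_120km_only"
    if storm_mode == "qlcs":
        # Wider sampling to cover several embedded mesocyclones.
        return "wide_sector_full_vcp"      # hypothetical strategy name
    return "default_surveillance_vcp"      # hypothetical strategy name

print(choose_strategy("discrete_supercell", 95.0))
```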

We need this technology now!

Michael Scotten (Lead Forecaster, NWS Memphis TN – 2010 Week 1 Evaluator)


Forecaster Thoughts – Pete Wolf (2009 Week 5)

Sincerely appreciate the opportunity to attend HWT EWP the week of June 1st… definitely a worthwhile experience.  Viewed phased-array radar (PAR), lightning mapping array (LMA), the CASA radar concept, and multi-radar/multi-sensor (MRMS) algorithms, and was given a peek at the PHI probabilistic warning program. Here is a summary of input provided (by myself and other participants) on each of these technologies based on EWP efforts that week…

LMA:
1) Viewed several real-time cases. Found this data potentially useful in the warning program in areas where radar data are sparse (e.g. the western U.S.). Had a lower-resolution GLM version, similar to the planned output from GOES-R, to review…and it appeared sufficient in these areas despite the lower resolution.
2) Interesting observation…some cells produced CG strikes first, others produced intracloud lightning before CG, while others produced intracloud with no CG at all. Is there something about these different lightning patterns that can tell us something about storm structure or environment? Perhaps a future research topic.
3) When it comes to overall warning decision-making, LMA does not indicate anything that radar reflectivity structure doesn’t show. In fact, there is often a short lag between reflectivity trends and subsequent lightning trends.
4) Noted some problems with data beyond 100 km from the center of the LMA sensor network…data seemed better closer to the center. In some instances, cells produced impressive 1-minute CG strike rates in NLDN data (nearly continuous CG), while the LMA showed very little, despite the cells apparently being within range.
5) We had 5-minute averaged LMA data in AWIPS at the HWT…the lower resolution of this data proved less useful than the high-resolution real-time data available online. For LMA data to be useful, the data in AWIPS will need to be the highest resolution possible.

PAR:
1) Viewed data from < 1 minute volume scans for an impressive inland tropical storm event in OK.
2) Very helpful in assessing mini-supercells and tropical-cyclone tornado features. Velocity enhancement signatures (VESs) stood out well, and in one instance, we could watch strong low-level convergence evolve into rotation, allowing a possible warning with decent lead time before the TVS.  In dealing with problem issues such as mini-supercells and TC tornadoes, PAR shows promise!
3) The resolution of the data was similar to current 88D super-res. The faster volume scans provided considerably more data to look at. This offers both a positive and a negative in the warning process. Positive: easier to monitor evolutions and view small scale features, with potential for a few extra minutes of warning lead time.  Negative: Easier to “wait one more scan” knowing it was less than a minute away, compared to the longer 88D volume scans…which could minimize the potential increase in lead time.
4) Viewing rapid changes to small scale features requires research into understanding what we’re looking at. At times, I was fascinated at what I was seeing, but had no idea what exactly was occurring, and what it meant with regard to severe weather potential (e.g. was it increasing or decreasing). This also led to a feeling of “wait one more scan” to try to understand what was going on.

CASA:
1) Viewed a few different cases. Viewed data at scan rates of 1 minute or less.
2) Key benefit is greater coverage of low-level radar data, with more frequent data updates. Certainly beneficial when it comes to tornado threat detection.
3) Same positive and negative as for PAR above (#3).
4) Not a stand-alone warning tool…needs to be augmented with 88D data. CASA is best as a tool that provides enhanced low-level radar coverage when it is needed (e.g. tornado or downburst potential). CASA radars have significant attenuation problems, as was seen in the cases viewed. This is less of a problem if they are used in conjunction with (rather than in place of) available 88D radar data.

MRMS:
1) Viewed several real-time cases covering the western, central, and eastern U.S.
2) Numerous algorithm fields available, with considerable redundancy (most focused on hail, tornado/meso, not much for wind threat).
3) Viewed situations when MRMS is very useful, such as for storms moving within the “cone of silence” of one radar, and for prioritizing numerous storms on display. At other times (e.g. few storms on radar), does not provide anything more than other displays (e.g. VIL).
4) Found the Maximum Expected Size of Hail (MESH) product quite useful in the real-time events. The MESH utilized was an improved version of the one available now in the field.
5) Several reflectivity layer products (e.g. reflectivity depth above -10°C, 50 dBZ echo top, etc.) were also potentially useful in the warning process.
6) MESH tracks and MESO tracks useful, particularly in post-event verification efforts.
7) Was interested in the time-height series of 18 dBZ, 30 dBZ, and 50 dBZ echo tops overlaid on one display. Can storm intensity changes be related to periods when these lines press closer together versus times when they spread further apart? (A small sketch of this idea follows below.)
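Here is that sketch; all heights are made-up illustrative values.

```python
import numpy as np

# Vertical spread between overlaid echo-top lines over time (heights in km).
times_min = np.array([0, 5, 10, 15, 20])
top_18dbz = np.array([11.0, 11.5, 12.0, 12.5, 12.5])
top_30dbz = np.array([8.0, 9.0, 10.5, 11.5, 12.0])
top_50dbz = np.array([4.0, 5.5, 7.5, 9.5, 10.5])

# Use the outermost pair as a simple measure of how tightly the lines pack.
spread_km = top_18dbz - top_50dbz
for t, s in zip(times_min, spread_km):
    print(f"t={t:2d} min: 18-50 dBZ spread = {s:.1f} km")
# A shrinking spread (the 50 dBZ top climbing toward the 18 dBZ top) is the
# "lines pressing together" pattern asked about in item 7.
```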

PHI:
1) Came in early one day to get a peek at the probabilistic warning program, in which the warning areas move with the threat (rather than being fixed areas). This requires a change in philosophy, which can be challenging in the NWS.
2) Was impressed with the program…it could graphically illustrate complex situations (merging cells, splitting cells, etc.) much more easily than trying to explain the threat areas in lengthy text products. In addition, by “advecting” the threat area, one can always provide maximum lead time downstream of the warned storm (rather than waiting for the storm to approach the end of a polygon before issuing a new polygon warning); see the sketch after this list.
3) Did not view the program as terribly complex, nor one that would involve considerable forecaster workload.  In fact, if text product generation and dissemination can be mostly automated (since most products are worded similarly), workload could actually decrease, particularly in major events, allowing more time for data analysis (important if the above technologies reach operations).
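As a minimal sketch of the “advecting” threat-area idea from item 2 (the polygon coordinates and storm motion are arbitrary illustrative values):

```python
def advect_polygon(vertices, u_km_per_min, v_km_per_min, minutes):
    """Translate each (x, y) vertex (km) by the storm motion over `minutes`."""
    dx = u_km_per_min * minutes
    dy = v_km_per_min * minutes
    return [(x + dx, y + dy) for x, y in vertices]

threat = [(0, 0), (10, 0), (10, 15), (0, 15)]  # initial threat area (km)
print(advect_polygon(threat, u_km_per_min=0.8, v_km_per_min=0.3, minutes=10))
```

Because the polygon slides with the storm, lead time downstream stays constant instead of shrinking as the storm nears a fixed polygon’s edge.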

Also gave a presentation on my Probabilistic-Augmented Warning System (PAWS) concept, suggesting it as a “middle-step” toward what is proposed with the PHI program. While viewed positively, the general viewpoint was to focus effort on PHI. There was no estimate as to when the PHI concept might become an operational entity for the NWS, though I would guess a 5-10 year range if not later (much later if introduced with Warn on Forecast concept in which it is linked to high-res numerical model output).

Key Concluding Thoughts:
1) EWP attendance was very beneficial…and I would encourage others to get involved.  While there, I noted that the SOO community could provide a resource for additional evaluation, even if not located in Norman.
2) The technology evaluated was impressive and offers much…not only for operations, but also for research, especially when viewing high-resolution output with rapid-scan updates. Despite this, getting the money for operational implementation could be a tough sell, especially if “selling” requires demonstrating an impact on verification scores. Despite a favorable view of the technology in operations and research, I did not sense that it would positively impact verification scores that much. For example, sub-minute radar scans offer the potential of adding a few extra minutes of lead time to a warning. However, they also make it easier to wait another scan, since it is only one minute away (rather than five), which would diminish this potential gain. I also watched SHAVE verification efforts during some of the events worked…it was amazing to see the difference in their report coverage versus that of the WFO…a demonstration of the inaccuracy of our verification scores (our future funding is based on inaccurate numbers?).
3) Implementation of these technologies at the WFO will result in data overload, which gets even worse if you add high-res model output that could utilize these datasets. Of course, more data isn’t always better data. The forecaster will need to learn how to process the data (better prioritizing) and know when to stop looking at data and make the warning decision. Further automation (through algorithms) will be necessary to help forecasters process the data load. (This makes concepts such as PAWS and PHI very important, by placing forecaster focus on data analysis rather than product design.)  In other words…SOO job security! 😉 Dual-pol will be a good start in this regard, as it will add products to the warning decision process.  Training needs to be developed, perhaps with the WES machine, that allows the SOO to evaluate the impact of adding more and more data (e.g. more products, faster update times, etc.) on forecaster warning decision making.  At the conclusion of the EWP, I suggested the need for continued leadership from the WDTB in this regard.

Again, I thought this was a beneficial experience, and appreciate the opportunity to participate.  Thanks….

Pete Wolf (NWS Jacksonville FL – 2009 Week 5 Evaluator)


Forecaster thoughts – Bill Martin (2009 Week 5)

Attending the EWP was an excellent experience, and I appreciate all the time and effort that people put into it.  The EWP kept the attendees very busy with a series of relatively intense exercises.  One’s skill at issuing warnings is much enhanced by issuing so many warnings in such a short period of time.  I wish my forecasters had the opportunity to go through these exercises as well.  In fact, one of the things I carried away from the EWP was the power of hands-on training of this kind.  We try to use WES cases in the field for training purposes to get something like this effect, but the experience in Norman is superior to what we can do with the WES.

Both the CASA radars and the PAR were found to be valuable for their rapid update capabilities.  We were able to get routine 1-minute volume scans from both.  In several instances, warnings were issued several minutes earlier than would have been possible with 88D radars.  Also, in some cases, the high resolution of the CASA radars helped identify severe features that might have gone unnoticed in 88D data.  If anything like a national network of CASA radars is ever developed, we will need to decide whether we want to warn for every little vortex these radars are able to detect.  CASA radars do suffer from beam attenuation problems, though.  The original concept was for CASA to use phased-array antennas, but this has not yet been achieved.  Also, the evaluation of a larger CASA array is probably needed and, I’m told, is planned.

On the CASA radars, there is a sense in the field that they are primarily valuable as potential gap-filling radars.  However, this was not the original intent of the CASA program, and, in fact, gap-filling radars have been available for decades from a number of vendors.  The ground-breaking collaborative properties of a CASA network are not widely appreciated.  Still, gap-filling radars are much needed in the West, probably more so than collaborative or phased-array radars.  If new money becomes available for more radars, solving the gap problem may be more of a priority than an innovative new technology.

I found the LMA to be pretty interesting.  The thousands of VHF sources detected from lightning channels are mapped into a vertically integrated lightning density product.  When color-contoured, this product looks similar to a radar composite reflectivity map, with comparable resolution.  Any electrically active storm can be imaged.  It was also possible to look at 3D images of the lightning channels for storms, but this was just too much information.  The LMA also provides 1-minute updates.  What is still being learned is how to associate the severity of a storm with its lightning density history.  With some experience, we were able to expedite warnings on storms based on a lightning pulse.  I found the ability to image storms from lightning to be a fascinating concept, and the LMA to be a valuable companion to radar when deciding whether to issue a warning.  As I work in a CWA with large radar gaps, having LMA data would be particularly valuable.  Even with good radar coverage, the detailed LMA data help to fill in the picture we get from radar, and at a small cost.  The cost of a nationwide LMA capability would seem to be a small fraction of that of a radar network, making the LMA attractive from a cost-benefit standpoint.
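As a minimal sketch of that vertically integrated density product, with random synthetic points standing in for real VHF source locations:

```python
import numpy as np

# Bin LMA VHF source locations onto a horizontal 1-km grid, collapsing the
# vertical dimension; counts per column approximate the density product.
rng = np.random.default_rng(0)
x_km = rng.normal(loc=50, scale=5, size=5000)  # east-west source positions
y_km = rng.normal(loc=30, scale=5, size=5000)  # north-south source positions

density, _, _ = np.histogram2d(
    x_km, y_km, bins=(100, 60), range=((0, 100), (0, 60))
)
print("peak VHF sources in a single 1-km column:", int(density.max()))
```

Color-contouring such a grid each minute would give the composite-reflectivity-like display described above.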

The MRMS algorithms run off existing operational data sources.  For one real-time case we warned on, the storm went right over the top of the radar, so the multi-radar approach proved valuable.  One of the new MRMS algorithms estimates hail size.  We found it to be pretty good, and it tended to agree well with reports; it was at least as good as the “VIL of the day” concept.  The “rotation track” product was also useful.  As MRMS is derived from available data, some of the products duplicate what might be easier to see another way.  Storm heights, for example, might be better found by consulting a cross-section in FSI.

We considered the problem of information overload when integrating new data sources into operations.  For any of these technologies to succeed, they need to make it easier for a forecaster to issue a product.  Having to evaluate a flood of new data every minute may be paralyzing to some.  Even though dense new data sources carry more information for producing more precise warnings, they need to be integrated into operations in a user-friendly way.  This points to a need for better algorithms that analyze some of these data streams in the background.

There are currently no set plans that I am aware of to expand CASA, PAR, LMA, or MRMS nationally.  Of these, MRMS would be the easiest to implement, as it only requires fielding algorithms.  The LMA may also be cost-effective.  CASA and PAR are both expensive technologies, each at least as expensive as the current 88D network.

Bill Martin (NWS Glasgow MT – 2009 Week 5 Evaluator)


Forecaster Thoughts – Mike Vescio (Week 4)

This is an excerpt from the “President’s Message” column of the July 2009 issue of the National Weather Association (NWA) Newsletter.

I would like to use this edition of the President’s Message to cover a few topics. In mid-May I had the opportunity to spend a week in Norman, OK, participating in the Experimental Warning Program (EWP) as part of the NOAA Hazardous Weather Testbed (HWT). When I was a forecaster at the Storm Prediction Center I earned the nickname “the dry slot” because of my tendency to suppress convection, and it proved true again this year as Oklahoma experienced a week of beautiful, cloud-free weather (much to my dismay!).  Fortunately, in the EWP you can focus on any part of the country, and there was one good severe weather day in Nebraska where we could issue test warnings.  Also, we went through a number of case studies from earlier in the year that were truly fascinating. The purpose of the EWP is to learn how emerging technologies can improve the warning process. I can only describe what was available to the visiting scientists and forecasters as being like a kid in a candy store.  There was access to Phased Array Radar data with 60-second update times, the highly sensitive Collaborative Adaptive Sensing of the Atmosphere (CASA) radars, the Oklahoma Lightning Mapping Array, and the Warning Decision Support System – Integrated Information (WDSS-II) algorithms and display interface.  The job of the participants was to determine how these tools improved the convective warning process, and let me assure you that they did! We will be having a series of invited talks about this technology at the NWA Annual Meeting in Norfolk so that these exciting datasets can be shared with everyone.

Mike Vescio (NWS Pendleton OR – 2009 Week 4 Evaluator; and NWA President)


Forecaster Thoughts – Kevin Brown (2009 Week 3)

Luckily, we were able to see real-time data from the LMA, CASA, and PAR systems, along with the WDSS-II multi-radar algorithms. Compared to the archived cases, real-time operations provided a much better picture of the challenges operational forecasters will face in base data interpretation, primarily due to the more realistic warning environment. Below are some of the main points I would like to make about each system:

LMA:  With its quick updates, the 1 km data aided in locating areas of updraft intensification and deviant-motion trends. The trend data were also a good indicator of storm strength, especially in radar-sparse areas.  Future research will hopefully lead to additional tools for storm forecasting and warning decision making.

CASA:  Although rapid updates from multiple radars can be overwhelming at times, the increased temporal and spatial resolution is well worth it.  To alleviate the rapid fire of new radar scans from 4 separate radars, the multi-radar composite was utilized with success.  Especially in areas with sparse or distant radar coverage, this system should easily increase probabilities of detection and lead times for severe weather events.  However, due to the detection of features not previously seen on the 88D, an increase in false alarm rate is also expected.  I also found that the sector scanning strategies took away from base data interpretation; perhaps being able to manually control which sector is scanned would help in these situations.

PAR:  The rapid scans, along with higher resolutions above the traditional “split cuts”, were outstanding.  This helped increase confidence in mesocyclone and core strengths.  The ADAPTS strategy appeared to work quite well without compromising the base data moments.  Hopefully, in the not-so-distant future, additional panels will be added to alleviate the beam broadening on the edges of the current panel and allow a greater radar coverage area.

Multi-radar/multi-sensor algorithms:  In my experience, the use of MESH and reflectivities at 0°C and -20°C increases warning confidence tremendously.  Other diagnostic products, such as MESH Swath, Rotation Tracks, etc., are also great tools to help with warning confidence and warning polygon construction.  These algorithms are needed especially in areas where radar coverage is sparse or distant with respect to your targets.  After using the multi-radar data in this workshop, and also in real-time operations at my office, I feel the products greatly enhance warning and non-warning decisions.

Kevin Brown (NWS Norman OK – 2009 Week 3 Evaluator)


Forecaster Thoughts – Chris Wielki (2009 Week 3)

The EWP was a great experience, and I couldn’t really ask for much more. As a meteorologist from outside the United States, I was somewhat uncertain what to expect, but what I found impressed me. The technologies we used throughout the week were useful, and the exchange of ideas between researchers and meteorologists is something we too should strive for.

The highlight was Wednesday, when several tornadic supercells developed over Oklahoma, with one passing through the CASA domain. The 3DVAR analysis set the stage for the supercell entering the CASA domain and had us thinking about tornado potential before it appeared on the CASA radars. When the cell moved into the CASA domain, the rotation was apparent and we were able to send the warning quickly. The strong winds that followed the tornado were easy to identify in the CASA dataset, and warnings could likely be narrowed down as confidence increases with the use of these X-band radars. The WSR-88D data also showed the TVS, but only after the CASA data did, showing an obvious potential to improve lead times.

The PAR data and scanning strategies had no apparent faults, and with the improved resolution and update frequency, I feel features would be singled out earlier in an event and, once again, lead times would increase. The LMA data has potential, but I felt I would have to get more comfortable with it and develop a conceptual model of what to expect with larger supercells. Picking out a peak in the LMA data would prove difficult, but the graphical tool in Google Earth was useful and could increase confidence in warnings already issued. There may be potential to use the LMA data for storm splits, since it would show the two updrafts; however, cases showing a split were limited. Last of all were the multi-radar/multi-sensor algorithms. Returning to my office without products such as MESH, reflectivity at -20°C, and rotation tracks will prove to be a difficult experience.

Chris Wielki (Prairie and Arctic Storm Prediction Center, Edmonton, AB – 2009 Week 3 Evaluator)
