Forecaster Thoughts – Brian Montgomery (2010 Week 2 – PARISE)

I want to re-express my gratitude to you, Dr. Pam Heinselman, and the rest of the team for an amazing week with PAR!  I am confident these experiments will show that the future of this technology is not only revolutionary, but will also provide faster life-saving information.  Here are some additional thoughts:

1.  The experiment was well coordinated, with an exceptional set of enthusiastic graduate students who made this experience quite rewarding.  My recommendation would be to add one more full day of activities.  This would allow forecasters to become more familiar with the software, visit the SPC and the Norman Forecast Office, and perhaps increase the potential to catch a severe weather episode (assuming another experiment during convective season).
2.  Forecaster fatigue was discussed during the experiment, since PAR offers a high frequency of updates.  While we all strive for as much lead time as possible, the human factor of mental breaks is a necessity.  At the time of writing, under VCP12, we are accustomed to 4-5 minute update intervals.  I would recommend an experiment of working a full event to find the “breaking point” at which a forecaster needs to step away for that mental break.
3.  While we had some software glitches during our visit, WDSS-II does offer a glimpse of where this display technology is going.  I was delighted to see the enhancements over the original display and our current legacy D2D AWIPS software.  Some integration of software technologies, including GR2Analyst, may provide additional flexibility with PAR.
4.  I was pleased to see the emphasis on base data!  While algorithms can aid in directing forecasters’ focus, this is usually “after the fact,” and it takes away from the warn-on-forecast philosophy.  In fact, with PAR, the concept of these algorithms will likely grow beyond TVS’s, MESO’s, and hail toward the future “Warn On Forecast” presented to us by David Stensrud (and Dustan Wheatley).  This future research is rather exciting, as meteorologists, emergency managers, media, and decision makers will be able to provide a better prognosis in a rapidly changing environment.  Now all we need is for computational power, per Moore’s Law, to catch up and provide this enhanced awareness.

Again, this was a phenomenal opportunity to work with everyone, and I would be honored to continue where we left off back in April.

Brian Montgomery (Lead Forecaster, NWS Albany NY – 2010 Week 2 Evaluator)

Week 1 PARISE Summary: 13-16 April 2010

The goal of the 2010 PARISE is to gain an understanding of the impact of temporal sampling on warning decision making and warning lead time. During the first week, 13–16 April 2010, four NWS forecasters from the Southern Region, Eastern Region, and Central Region helped us address this goal by applying their warning decision expertise to five different playback cases sampled by the National Weather Radar Testbed Phased Array Radar (NWRT PAR). The forecasters worked each case in teams of two.

Before each case, forecasters developed situational awareness of the forcing mechanisms and near-storm environment in which the storms developed. This situational awareness allowed them to form a conceptual model of the storm type and severe weather threats they anticipated. Then they applied their warning decision expertise to interrogate the NWRT PAR data using the Warning Decision Support System – Integrated Information, and to issue warnings using a WarnGen tool similar to the one in AWIPS.  After each case, the teams discussed their warning decision process with a PAR scientist.  Each case culminated with an overview of any severe weather that occurred, so that forecasters could self-evaluate their warning process.

The experiment wrapped up with a group discussion of participants’ experiences with NWRT PAR data and the PARISE as a whole. Forecasters said that they had a good experience and will encourage others at their offices to participate next year. They enjoyed getting to work with rapid-update data and experiencing how those data may change their current warning decision process. Forecasters also enjoyed getting to work with researchers and people from other NWSFOs.

Following analysis of the data collected during PARISE 2010, findings will be shared initially via conference papers in Fall 2010.

Pamela Heinselman, NSSL, PARISE Project Scientist

Week 1 CASA Summary: 12-16 April 2010

A primary goal for CASA during 2010 is to document the improvement in warning operations through the use of enhanced forecaster-emergency management communications.  The use of NWS personnel in the HWT allows us to test and refine how best to utilize CASA’s high-spatial-resolution (100 m) and high-temporal-resolution (1-minute) data, in coordination with field emergency managers (EMs).  Particular areas of interest include the concepts of data overload, multi-sensor products, and the development of warn-on-forecast products and displays.

The lack of severe weather in Central Oklahoma during Week #1 allowed CASA to refine its warning and communications operations.  Les Lemon (WDTB) spent much of the week with CASA PIs Brenda Philips and Ellen Bass refining the communication technologies to be used between the forecaster and EM.  Twitter and NWS Chat will be the primary means of communication, with additional use of Skype by some EMs and spotters.  Les spent Monday getting trained, learning about the latest products from CASA (e.g., new NWP forecasts available every 10 minutes!) and refined products (real-time 3DVAR analysis).  Tuesday through Thursday were spent reviewing and revising the interaction among the CASA forecasters and EMs.  Several EMs and spotters visited the HWT and practiced with various communication technologies.  Les spent Wednesday morning with Jerry Brotzge, Keith Brewster, Brenda, and Ellen in discussions on ways to improve the presentation of output from the real-time forecasts – in essence, how do we present “Warn-on-Forecast” information to forecasters already overwhelmed with data?   Friday ended with precipitation in the HWT – an opportunity to review CASA products one last time…but now with echoes!

Jerry Brotzge, Univ. Oklahoma, EWP2010 CASA Scientist

Forecaster Thoughts – Michael Scotten (2010 Week 1 – PARISE)

I want to thank Pam, Greg, Daphne, Heather, and others for giving me this great opportunity to use and evaluate PAR data.  This was an awesome experience!  I truly believe this technology will help meteorologists make better warning decisions in the near future.

The higher-temporal-resolution PAR data, with 40- to 80-second updates of the lowest elevation slices, appear to be the biggest advantage for NWS forecasters.  This will definitely change future warning decisions.  Meteorologists will be able to detect the microscale evolution of hooks, bows, and appendages to quickly pinpoint tornado spinups and microburst winds.  The tropical case with rapid tornadogenesis stood out as the most compelling argument for higher temporal resolution: the PAR data caught each tornado occurrence, while the current, much coarser WSR-88D data did not.  As a result of using higher-temporal-resolution data, meteorologists will likely uncover more small-scale phenomena such as tornadoes, especially weak ones, and microburst winds.  Faster detection of tornadoes and damaging winds will occur as well.  These benefits will help us better understand severe local storms.
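To make the update-interval argument concrete, here is a rough back-of-the-envelope sketch; all numbers are hypothetical (roughly a 60-second PAR low-level update versus a 4.5-minute WSR-88D volume), not measurements from the experiment:

```python
# Sketch (hypothetical numbers): count how many low-level scans fall
# within a brief tornado's lifetime for different update intervals.

def scans_during_event(event_start_s, event_length_s, update_interval_s):
    """Count scans (at t = 0, interval, 2*interval, ...) that fall
    within [event_start, event_start + event_length)."""
    count = 0
    t = 0
    while t < event_start_s + event_length_s:
        if t >= event_start_s:
            count += 1
        t += update_interval_s
    return count

# A 3-minute spinup beginning 10 s after a scan:
event_start, event_length = 10, 180

par_scans = scans_during_event(event_start, event_length, 60)      # ~PAR low-level rate
wsr88d_scans = scans_during_event(event_start, event_length, 270)  # ~VCP12 volume rate

print(par_scans, wsr88d_scans)  # prints: 3 0
```

In this toy scenario the rapid updates sample the spinup three times while it falls entirely between two of the slower volume scans, which is the kind of gap the tropical case illustrated.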

There may be a few drawbacks to the higher-temporal-resolution PAR data.  For some radar operators, the data may be overwhelming, particularly in quickly evolving weather situations.  Also, the uncovering of more small-scale phenomena may skew tornado and damaging wind climatologies upward.  When higher temporal data first arrive at local forecast offices, false alarms may increase as more warnings are issued for newly discovered phenomena.  Down the road, however, fewer warnings and much smaller warning areas will likely follow, leading to better customer service.

I also enjoyed working with new and creative PAR adaptive sampling strategies.  In particular, I preferred the Oversampled_VCP_within_120km_only scanning strategy, with the quickest update times for the lowest elevation slices.  New scanning strategies will lead to better sampling of varying weather phenomena.  For example, perhaps in the near future, differing scanning strategies could be used: one specialized to interrogate one or two discrete supercells for mesocyclone detection, versus wider sampling for several mesocyclones embedded within a large-scale QLCS.  This will only help the NWS further fulfill its mission of saving lives.

We need this technology now!

Michael Scotten (Lead Forecaster, NWS Memphis TN – 2010 Week 1 Evaluator)

EWP2010 Week 1 Underway

Week 1 of the EWP is underway. PARISE is chugging along with its four forecasters and collecting data on archive cases. Because the participants will be reviewing the same cases each week, we won’t be providing any case information here (or we will spoil the secrets!). The CASA project has been in a shakedown this week, and the PIs have been adding a few new details to the project, which will be revealed in the CASA project plan and briefing to be posted soon on the EWP2010 page. Les Lemon has been very helpful during the week, and will probably carry over into Week 2 to serve as additional help.

Weather-wise, Central Oklahoma has been very quiet, although some convection is anticipated this afternoon and evening. Severe weather is unlikely. Due to PARISE’s unique schedule and experiment requirements, its teams will not be observing real-time data this evening. However, the CASA folks will be observing the data in a “low-key” sense and using the opportunity to continue shaking down the technology and concept of operations.

We will post end-of-week summaries for both PARISE and CASA on Friday or Saturday.

Greg Stumpf, EWP2010 Operations Coordinator

Almost ready for EWP2010 “Phase I”

We are only 4 days away from beginning “Phase I” of the EWP2010 spring experiment. Phase I consists of our PAR and CASA experiments. We’ve had some successes over the past 2 weeks:

1. The PARISE shakedown on 4/6 went well with guest evaluators from the WFO OUN and the WDTB.
2. The WDSS-II version of WarnGen is almost ready to go. It looks and feels a lot like the AWIPS WarnGen.
3. The HWT AWIPS and ORPG have been upgraded. Bonus: CASA data are now displayable in AWIPS in the HWT!
4. The participant selections for all 9 weeks have been made and will be entered into the EWP Google Calendar.

We will be welcoming our first set of evaluators next week. They are:

CASA: Les Lemon (WDTB)
PAR: Mark Bacon (ILM); Jim Caruso (ICT); Jeff Cupo (FAATC-OKC); Mike Scotten (MEG)

One note:  do not expect to see many details on the blog during Phase I of EWP2010, unless there are real-time events in Central Oklahoma.  The evaluators will be going through a number of archive cases, and we don’t want to reveal details about those cases to participants who will be coming in future weeks.  In addition, the PAR experiment is being conducted as a social science experiment, and much of the data are considered confidential.

Greg Stumpf, EWP2010 Operations Coordinator

2010 Planning Underway

With the first significant tornado of 2010 hitting Oklahoma yesterday, it’s time to resurrect the EWP Blog. We are underway getting the EWP spring experiment going for 2010. This year we will be looking at a lot of new data sets, and the experiment period is longer than ever (9 weeks). We are also planning double the participation of NWS folks this year. Here is the experiment schedule (by week):

Phase I:

12 Apr – 16 Apr: PAR, CASA
19 Apr – 23 Apr: PAR, CASA
26 Apr – 30 Apr: PAR, CASA
3 May – 7 May: CASA
10 May – 14 May: CASA

Phase II:

17 May – 21 May: GOES-R, LMA, MRMS
24 May – 28 May: GOES-R, LMA, MRMS
31 May – 4 June: No operations (Memorial Day week)
7 June – 11 June: GOES-R, LMA, MRMS
14 June – 18 June: GOES-R, LMA, MRMS

In addition, during the latter half of the experiment, we may be ready to introduce the participants to some of the early radar data assimilation work being done for the Warn-On-Forecast program.

More information about each project is available here:  http://ewp.nssl.noaa.gov/2010plans.pdf

The invitation for participants has been sent to all six NWS Regions and we expect to make our decisions by 29 March.

Check back here for more updates.

Greg Stumpf, EWP 2010 Operations Coordinator

The EWP2009 Thank You Post

After we wrapped up daily operations in the HWT on 12 June 2009, Norman was hit by a weak tornado. While only minor damage occurred, and there were no injuries or deaths, what a way to end our 6-week experiment! This is this year’s EWP Thank You post, expressing our gratitude to the many participants of the Experimental Warning Program’s 2009 spring experiment. This year’s experiment was just as successful as the 2007 and 2008 experiments, and it could not have been carried out without the hard work and long hours of our team of participants.

The biggest expression of thanks goes to our IT Coordinator, Kevin Manross, who put in more hours than anyone else to pull off the experiment. As you will see below, Kevin wore many hats again this year.

Next, we’d like to thank our primary- and co-Weekly Coordinators for keeping operations on track each week: Kiel Ortega, Dale Morris, Jim LaDue, Patrick Burke, Travis Smith, Liz Quoetone, Paul Schlatter, Greg Stumpf, and Kevin Manross.

The cognizant scientists brought their expertise to the experiment to help guide live operations and playback of archive cases for each of the experiments.

For the Multi-Radar/Multi-Sensor (MRMS) application experiment, they included NSSL/CIMMS principal investigators Travis Smith and Greg Stumpf, with additional help from NSSL scientists Arthur Witt and Kevin Manross.

For the Lightning Mapping Array (LMA) experiment, they included NSSL/CIMMS scientist Kristin Kuhlman, the principal investigator, along with visiting scientists Geoffrey Stano (NASA-Huntsville) and Eric Bruning (Univ. Maryland/NESDIS).

For the Phased Array Radar (PAR) experiment, Dr. Pamela Heinselman captained the ship, along with these folks from NSSL, CIMMS, and OU: David Priegnitz, Ric Adams, Arthur Witt, Rick Hluchan, Adam Smith, and Jennifer Newman.

The Collaborative Adaptive Sensing of the Atmosphere (CASA) experiment was again led by Brenda Phillips (U. Mass.), Jerry Brotzge (OU), and Ellen Bass (U. VA). In addition, we had help from Don Rude (U. VA), David Westbrook (U. Mass.), Cedar League (Univ. Colorado – Colorado Springs), Rachel Butterworth (OU), Corey Potvin (OU), and Vivek Mahale (OU).

We had NSSL IT help from Jeff Brogden, Robert Toomey, Charles Kerr, Valliappa Lakshmanan, Vicki Farmer, Karen Cooper, Paul Griffin, Brad Sagowitz, Brian Schmidt, and Joe Young.

We were also graciously provided AWIPS help from NWS Warning Decision Training Branch scientists Ben Baranowski and Darrel Kingfield.

There were a number of guest evaluators from the NWC who provided expertise: from WDTB, Les Lemon and Veronica Davis; from SPC/CIMMS (and the GOES-R Proving Ground), Chris Siewert; and from Florida State University, Scott Rudlosky.

Undergraduate students who supported our SHAVE efforts were: James Miller (coordinator), Anthony Bain, Jessica Erlingis, Steve Irwin, Erika Kohler, Tiffany Meyer, Corey Mottice, Nicole Ramsey, and Brandon Smith.

The EWP leadership team of Travis Smith and David Andra, along with the other HWT management committee members (Steve Weiss, Jack Kain, Mike Foster, Joe Schaefer, and Jeff Kimpel), and Dr. Stephan Smith, chief of the MDL Decision Assistance Branch, were instrumental in providing the necessary resources to make the EWP spring experiment happen.

Finally, we express a multitude of thanks to our National Weather Service and international operational meteorologists who traveled to Norman to participate as evaluators in this experiment (and we also thank their local and regional management for providing the personnel). They are:

Steve Cobb (WFO Lubbock, TX)

Suzanne Fortin (WFO Pleasant Hill, MO)

Gino Izzi (WFO Chicago, IL)

Jeff Michalski (WFO Seattle, WA)

Tom Ainsworth (WFO Juneau, AK)

Chris Wielki (Environment Canada, Edmonton, AB)

Rebecca Schneider (Environment Canada, Montreal, QC)

John Billet (WFO Wakefield, VA)

Kevin Brown (WFO Norman, OK)

Steve Hodanish (WFO Pueblo, CO)

James Cummine (Environment Canada, Winnipeg, MB)

Sarah Wong (Environment Canada, Toronto, ON)

Matthew Kramar (WFO Sterling, VA)

Mike Vescio (WFO Pendleton, OR)

Rob Handel (WFO Peachtree City, GA)

Bill Martin (WFO Glasgow, MT)

Pete Wolf (WFO Jacksonville, FL)

Bill Ward (NWS Pacific Region HQ, Honolulu, HI)

Daniel Nietfeld (WFO Omaha, NE)

Gail Hartfield (WFO Raleigh, NC)

Steve Keighton (WFO Blacksburg, VA)

Dan Miller (WFO Duluth, MN)

Jenni Rauhala (Finnish Meteorological Institute, Helsinki)

Many thanks to everyone, including those we may have inadvertently left off this list. Please let us know if we missed anyone. We can certainly edit this post and include their names later.

Greg Stumpf (EWP Operations Coordinator)

Forecaster Thoughts – Pete Wolf (2009 Week 5)

I sincerely appreciate the opportunity to attend the HWT EWP the week of June 1st… definitely a worthwhile experience.  I viewed phased-array radar (PAR), the lightning mapping array (LMA), the CASA radar concept, and multi-radar/multi-sensor (MRMS) algorithms, and was given a peek at the PHI probabilistic warning program. Here is a summary of input provided (by myself and other participants) on each of these technologies, based on EWP efforts that week…

LMA:
1) Viewed several real-time cases. Found this data potentially useful in the warning program in areas where radar data are sparse (e.g. the western U.S.). Had a lower-resolution GLM version, similar to planned output from GOES-R, to review…it appeared sufficient in these areas despite the lower resolution.
2) Interesting observation…some cells produced cg strikes first, others produced inter-cloud before cg, while others produced inter-cloud with no cg at all. Is there something about these different lightning patterns that can tell us something about storm structure or environment? Perhaps a future research topic.
3) When it comes to overall warning decision-making, LMA does not indicate anything that radar reflectivity structure doesn’t show. In fact, there is often a short lag between reflectivity trends and subsequent lightning trends.
4) Noted some problems with data beyond 100km from center of LMA sensor network…data seemed better closer to the center of network. In some instances, had cells producing impressive 1-minute cg strike rates on NLDN data (nearly continuous cg), while the LMA showed very little, despite apparently being located within range.
5) We had 5-minute averaged LMA data in AWIPS at the HWT…lower resolution of this data proved to be less useful than the high-resolution real-time data available online. For LMA data to be useful, the data in AWIPS will need to be the highest-resolution possible.

PAR:
1) Viewed data from < 1 minute volume scans for an impressive inland tropical storm event in OK.
2) Very helpful in assessing mini-supercells and tropical-cyclone tornado features. Velocity enhancement signatures (VES’s) stood out well, and in one instance, could watch strong low-level convergence evolve into rotation, allowing a possible warning with decent lead time before the TVS.  In dealing with problem issues such as mini-supercells and TC tornadoes, PAR shows promise!
3) The resolution of the data was similar to current 88D super-res. The faster volume scans provided considerably more data to look at. This offers both a positive and a negative in the warning process. Positive: easier to monitor evolutions and view small scale features, with potential for a few extra minutes of warning lead time.  Negative: Easier to “wait one more scan” knowing it was less than a minute away, compared to the longer 88D volume scans…which could minimize the potential increase in lead time.
4) Viewing rapid changes to small scale features requires research into understanding what we’re looking at. At times, I was fascinated at what I was seeing, but had no idea what exactly was occurring, and what it meant with regard to severe weather potential (e.g. was it increasing or decreasing). This also led to a feeling of “wait one more scan” to try to understand what was going on.

CASA:
1) Viewed a few different cases. Viewed data at scan rates of 1 minute or less.
2) Key benefit is greater coverage of low-level radar data, with more frequent data updates. Certainly beneficial when it comes to tornado threat detection.
3) Same positive and negative as for PAR above (#3).
4) Not a stand-alone warning tool…needs to be augmented with 88D data. CASA best as a tool that provides enhanced low-level radar coverage when it is needed (e.g. tornado or downburst potential). CASA radars have significant attenuation problems, this was seen in cases viewed. Less of a problem if used in conjunction with (rather than in place of) available 88D radar data.

MRMS:
1) Viewed several real-time cases covering the western, central, and eastern U.S.
2) Numerous algorithm fields available, with considerable redundancy (most focused on hail, tornado/meso, not much for wind threat).
3) Viewed situations when MRMS is very useful, such as for storms moving within the “cone of silence” of one radar, and for prioritizing numerous storms on display. At other times (e.g. few storms on radar), does not provide anything more than other displays (e.g. VIL).
4) Found the Maximum Expected Size of Hail (MESH) product quite useful in the real-time events. The MESH utilized was an improved version over what’s available now in the field.
5) Several reflectivity layer products (e.g. reflectivity depth above -10C, 50dBZ echo top, etc.) were also potentially useful in the warning process.
6) MESH tracks and MESO tracks were useful, particularly in post-event verification efforts.
7) I was interested in the time-height series of 18dBZ, 30dBZ, and 50dBZ echo tops overlaid on one display. Can storm intensity changes be related to periods when these lines press closer together, vs. times when the lines spread further apart?
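The question in point 7 could be explored with a simple proxy: track the vertical spread between the overlaid echo-top traces and watch its trend over time. A minimal sketch on made-up heights (all values hypothetical, not from the experiment):

```python
# Sketch (synthetic data): measure how "compressed" the 18/30/50 dBZ
# echo-top traces are at each time step. A shrinking spread means the
# lines are pressing together; a growing spread means they are diverging.

# Hypothetical echo-top heights (km) at 5-minute intervals.
tops_18dbz = [11.0, 11.5, 12.0, 12.0, 11.5]
tops_50dbz = [5.0, 7.0, 9.5, 8.0, 6.0]

# Spread between the highest and lowest trace at each time.
spread = [a - c for a, c in zip(tops_18dbz, tops_50dbz)]

# Negative change = traces pressing together (hypothesized intensification).
trend = [round(b - a, 2) for a, b in zip(spread, spread[1:])]

print(spread)  # [6.0, 4.5, 2.5, 4.0, 5.5]
print(trend)   # [-1.5, -2.0, 1.5, 1.5]
```

Correlating a series like `trend` against observed intensity changes over many storms would be one way to test the hypothesized relationship.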

PHI:
1) Came in early one day to get a peek at the probabilistic warning program, in which the warning areas move with the threat (rather than being fixed areas). This requires a change in philosophy, which can be challenging in the NWS.
2) Was impressed at the program…could graphically illustrate complex situations (merging cells, splitting cells, etc) much easier than trying to explain the threat areas in lengthy text products. In addition, by “advecting” the threat area, one can always provide maximum lead time downstream of the warned storm (rather than waiting for storm to approach end of polygon before issuing new polygon warning).
3) Did not view program as terribly complex, nor one that would involve considerable forecaster workload.  In fact, if text product generation and dissemination can be mostly automated (since most products are worded similarly), workload could actually decrease, particularly in major events, allowing more time for data analysis (important if the above technologies are provided in operations).

Also gave a presentation on my Probabilistic-Augmented Warning System (PAWS) concept, suggesting it as a “middle-step” toward what is proposed with the PHI program. While viewed positively, the general viewpoint was to focus effort on PHI. There was no estimate as to when the PHI concept might become an operational entity for the NWS, though I would guess a 5-10 year range if not later (much later if introduced with Warn on Forecast concept in which it is linked to high-res numerical model output).

Key Concluding Thoughts:
1) EWP attendance very beneficial…and would encourage others to get involved.  While there, I expressed the fact that the SOO community could provide a resource for additional evaluation, even if not located in Norman.
2) The technology evaluated was impressive, and offers much…not only in the area of operations, but also research, especially when viewing high-resolution output with rapid scan updates. Despite this, getting the money for operational implementation could be a tough sell, especially if “selling” requires a demonstration of verification score impact. Despite a favorable view of the technology in operations and research, I did not sense that it would positively impact verification scores that much. For example, < 1 minute radar scans offer the potential of adding a few extra minutes of lead time to a warning. However, they also make it easier to wait another scan, since the next one is only 1 minute away (rather than 5 minutes), which would diminish this potential gain. I also watched SHAVE verification efforts during some of the events we worked…amazing to see the difference in their report coverage vs. that of the WFO…a demonstration of the inaccuracy of our verification scores (our future funding is based on inaccurate numbers?).
3) Implementation of these technologies at the WFO will result in data overload, which gets even worse if you add high-res model output that could utilize these datasets. Of course, more data isn’t always better data. The forecaster will need to learn how to process the data (better prioritizing), and know when to stop looking at data and make the warning decision. Further automation (through algorithms) will be necessary to help forecasters process the data load. (This makes concepts such as PAWS and PHI very important, by placing forecaster focus on data analysis and not on product design.)  In other words…SOO job security! 😉 Dual-pol will be a good start in this regard, as it will add products to the warning decision-process.  Training needs to be developed, perhaps with the WES machine, that allows the SOO to evaluate the impact of adding more and more data (e.g. more products, faster update times, etc.) to forecaster warning decision-making.  At the conclusion of the EWP, I suggested the need for continued leadership from the WDTB in this regard.

Again, I thought this was a beneficial experience, and appreciate the opportunity to participate.  Thanks….

Pete Wolf (NWS Jacksonville FL – 2009 Week 5 Evaluator)

Forecaster thoughts – Bill Martin (2009 Week 5)

Attending the EWP was an excellent experience, and I appreciate all the time and effort that people put into it.  The EWP kept the attendees very busy with a series of relatively intense exercises.  One’s skill at issuing warnings is much enhanced by issuing so many warnings in such a short period of time.  I wish my forecasters had the opportunity to go through these exercises as well.  In fact, one of the things I carried away from the EWP was the power of hands-on training of this kind.  We try to use WES cases in the field for training purposes to get something like this effect, but the experience in Norman is superior to what we can do with the WES.

Both the CASA radars and the PAR were found to be valuable for their rapid update capabilities.  We were able to get routine 1-minute volume scans from both.  In several instances, warnings were issued several minutes earlier than would have been possible with 88D radars.  Also, in some cases, the high resolution of the CASA radars helped identify severe features that might have gone unnoticed in 88D data.  If anything like a national network of CASA radars is ever developed, we will need to decide whether we want to warn for every little vortex these radars are able to detect.  CASA radars do suffer from beam attenuation problems, though.  The original concept was for CASA to use phased-array antennas, but this has not been achieved as yet.  Also, the evaluation of a larger CASA array is probably needed and, I’m told, is planned.

On the CASA radars, there is a sense in the field that CASA radars are primarily valuable as potential gap-filling radars.  However, this is not the original intent of the CASA program, and, in fact, gap filling radars have been available for decades from a number of vendors.  The ground-breaking collaborative properties of a CASA network are not widely appreciated.  Still, gap-filling radars are much needed in the west, probably more so than collaborative or phased array radars.  If new money is available for more radars, solving the gap problem may be more of a priority than an innovative new technology.

The LMA I found to be pretty interesting.  The thousands of VHF sources detected from lightning channels are mapped into a vertically integrated lightning density product.  When color-contoured, this product looks similar to a radar composite reflectivity map, with comparable resolution.  Any electrically active storm can be imaged.  It was also possible to look at 3D images of the lightning channels for storms, but this was just too much information.  The LMA also provides 1-minute updates.  What is still being learned is how to associate the severity of a storm with its lightning density history.  With some experience, we were able to expedite warnings on storms based on a lightning pulse.  The ability to image storms from lightning I found to be a fascinating concept, and the LMA proved a valuable companion to radar when deciding whether to issue a warning.  As I work in a CWA with large radar gaps, having LMA data would be particularly valuable.  Even with good radar coverage, the detailed LMA data help to fill in the picture we get from radar, and at a small cost.  The cost of a nationwide LMA capability would seem to be a small fraction of that of a radar network, making the LMA attractive from a cost-benefit standpoint.
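The vertically integrated density product described above amounts to binning VHF source locations on a horizontal grid and ignoring height. A minimal sketch, assuming hypothetical source points and a 1-km grid (not the actual LMA processing chain):

```python
import numpy as np

# Sketch (hypothetical points/grid): collapse VHF source locations from
# lightning channels into a vertically integrated density map by binning
# on (x, y) and ignoring height.

rng = np.random.default_rng(0)
# Fake VHF sources clustered around a "storm" near (30 km, 40 km).
x = rng.normal(30.0, 5.0, size=5000)  # east-west position (km)
y = rng.normal(40.0, 5.0, size=5000)  # north-south position (km)

# 1-km bins over a 100 km x 100 km domain.
density, xedges, yedges = np.histogram2d(
    x, y, bins=100, range=[[0, 100], [0, 100]]
)

# Color-contouring `density` yields a map resembling composite
# reflectivity, with each pixel counting the sources in that column.
print(density.shape)  # (100, 100)
```

A real product would also accumulate over a short time window (e.g. 1 minute) and apply quality control to the located sources.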

The MRMS algorithms are run off of existing operational data sources.  For one real-time case we warned on, the storm went right over the top of the radar, so the multi-radar approach proved valuable.  One of the new MRMS algorithms is for hail size.  We found this to be pretty good; it tended to agree well with reports and was at least as good as the “VIL of the day” concept.  The “rotation track” product was also useful.  As MRMS is derived from available data, some of the products duplicate what might be easier to see another way.  Storm heights, for example, might be better found by consulting a cross-section in FSI.

We considered the problem of information overload in integrating new data sources into operations.  For any of these technologies to succeed, they need to make it easier for a forecaster to issue a product.  Having to evaluate a flood of new data every minute may be paralyzing to some.  Even though dense new data sources have more information for producing more precise warnings, they need to be integrated in some user-friendly way into operations.  This leads to a need for better algorithms for analyzing some of these data streams in the background.

There are no set plans that I am aware of currently to expand CASA, PAR, LMA, or MRMS nationally.  Of these, MRMS would be the easiest to implement as it only requires the fielding of algorithms.  The LMA may also be cost-effective.  CASA and PAR are both expensive technologies, each one at least as expensive as the current 88D network.

Bill Martin (NWS Glasgow MT – 2009 Week 5 Evaluator)
