Forecaster Thoughts – Kevin Donofrio (2011 Week 2)

I share some of Steve Keighton’s comments on the structure/logistics of the EWP. Here are a few more general comments. I think collaboration with the EFP could be very useful; I just did not find it to be so on the one day that I was not on a full-blown warning shift. I would also find it useful to either (a) not be completely focused on the timeliness of warnings and focus more on the analysis of the products, or (b) possibly have certain forecasters focus on certain products at various times in the warning process. That said, it is good to see which products came to the forefront and which ones did not (and maybe should have!), and to truly mimic a warning environment. Having grown accustomed to three D2D screens in a warning environment, I did not use quite as many products as I would have in a typical environment, and I was also creating procedures on the fly, which I would not normally do in a warning situation. I would be really interested to see the effect Dual Polarization has on screen real estate, and whether it would affect the experimental products chosen for use in the warning decision/awareness process. I think we all tried to strike a balance between attempting to simulate and get warnings out in “real” time vs. giving each new tool more attention.

Below are a few key points about specific products from my experience. These comments have been shared with the staff at WFO Portland, and will be shared with all Western Region SOOs.

* The OUN WRF was a “hot model”, in that it appeared to break the cap too quickly. That said, it did clue us in to potential scenarios, and seemed to have a decent grasp on the expected convective mode, given convective initiation. It would be nice to have this run in an ensemble mode as well, as it basically served as another tool to compare to the EMC WRF, NSSL WRF, and RUC-based HRRR.

* The 3DVAR Multi-Radar Real-Time Data Assimilation products, while fairly new, hold a lot of promise. The products that I found useful included the updraft (instantaneous and track) and rotation products. The updraft product, while sometimes misplaced, was still very valuable. The downdraft products were a bit noisy, not only in multicell situations but even in supercell cases. These products aided in assessing whether updrafts were strengthening or weakening, and the rotation track also aided in more accurately following storm motion. While I did not rely on these products alone, and always confirmed what I was seeing with traditional radar analysis, they did provide a clearer situational awareness picture, and it was fairly easy to create procedures to integrate them into the warning decision process.

* The convective initiation tools, while useful, are still contaminated by cirrus. They did provide some lead time before significant radar echoes appeared, though in rapidly developing cases the lead time was not much. I did find that once radar operations got more intense, I ignored these products more than I wanted to, but that was mostly a workload issue. Also, on the days when convective initiation did not occur, these tools provided a correct null result. I liked the CIMSS product a bit better, as it provided tiered output (CI possible, CI likely, CI occurring) versus a yes/no from UAH. The UAH algorithm was too sensitive, and the CIMSS product seemed not sensitive enough; this is by design, as the CIMSS product aims for a low false alarm rate and UAH for a high probability of detection.

* We got to look at multicellular and more borderline cases as well, which was very useful, rather than focusing only on “supercell” cases. The products didn’t do too badly, though they seemed to perform best in more traditional supercell cases.

* The pseudo-GLM was very useful in that it focused attention on storm intensification and was able to pick up on flash rates much earlier than the CG network. Though I did not see this for many borderline cases, it would be useful when the forecaster is not sure whether particular areas are electrified, particularly when no CG strikes are showing.

* I relied heavily on the multi-radar, multi-sensor products when in a warning environment. I wouldn’t say they made the warning decision for me, but they served as a great situational awareness tool for deciding where to focus my attention. Of particular use were the -10°C and -20°C reflectivity products, but my favorites were the 50 dBZ height above/below a user-specified level (such as the -20°C level) and MESH (multi-radar maximum estimated hail size, both instantaneous and track). These products were great for estimating hail size when combined with radar tools (a rough sketch of the height computation follows this list). These products are available on wdssii.nssl.noaa.gov as KML files (they are adding the Western domain soon, if it is not there already), or on a neat Google map in development at wdssii.nssl.noaa.gov/maps.
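
To make the “50 dBZ height above a temperature level” product concrete, here is a minimal Python sketch of how such a height could be computed for one grid column, assuming a simple bottom-up reflectivity profile and a known -20°C height. It illustrates the concept only; it is not the MRMS implementation.

```python
import numpy as np

def echo_top_relative_height(refl_dbz, gate_hgts_km, ref_level_km, thresh=50.0):
    """Height (km) of the highest gate meeting the reflectivity threshold,
    relative to a reference level such as the model -20 C height.
    Positive values mean the 50 dBZ echo extends above that level."""
    hits = np.where(refl_dbz >= thresh)[0]
    if hits.size == 0:
        return None  # no 50 dBZ echo in this column
    return gate_hgts_km[hits.max()] - ref_level_km

# Illustrative column: 50+ dBZ up to 8 km, with the -20 C level at 7.2 km
refl = np.array([55.0, 58.0, 60.0, 57.0, 54.0, 52.0, 50.0, 40.0])
hgts = np.linspace(2.0, 9.0, refl.size)  # gate heights, km AGL (bottom-up)
print(echo_top_relative_height(refl, hgts, ref_level_km=7.2))  # ~0.8 km above -20 C
```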

Thank you again for allowing me to participate in the EWP this year. This was my first experience in an experimental warning program, and I hope to leverage it to spread the word on the promise of new warning decision tools, and hopefully to participate in another Experimental Warning Program in the future.

Kevin Donofrio (General Forecaster, NWS Portland OR – EWP2011 Week 2 Participant)

Forecaster Thoughts – Steve Keighton (2011 Week 2)

* Overall, another very valuable experience for me, and an important program to get feedback from forecasters during the development process. Thanks for the opportunity!

* A general suggestion: either require participants to review materials on the products ahead of time (through recorded presentations), and then have a much shorter period on the first day for SMEs to summarize and answer questions, leaving more time that day to get familiar with products and procedures during a WES; or, more ideally, have participants come for a two-week period so they can get especially comfortable with the background on the products, spend more time really working with each one, and perhaps have concentrated time evaluating one product at a time initially. I realize this would reduce the number of overall participants, which isn’t good either, so it is just something to consider. I felt there just wasn’t quite enough time to really get to know and work with each of the products; we were only getting there on the last day. A more focused effort on each product, before being given the flexibility to use any or all of them to help make warning decisions or raise S.A., would be better, since I felt I was trying to make sure I frequently looked at all of them while also trying to keep up with warning decisions using traditional products/methods.

* Another general suggestion is to consider different interactions with the EFP side than what we did. I felt it was generally a waste of time to work with the CI group on the one day I did that, and I would prefer to work with the severe weather desk and get a sense for the various models and high-res ensembles they were evaluating. We got to evaluate the OUN WRF, but that’s just one potential solution.

* I was most impressed with the promise of the 3DVAR products in terms of giving a somewhat more complete picture of storm structure/intensity (4D really, with the tracks of max updraft and rotation very helpful for making trends stand out). These were particularly helpful in combination with traditional radar products, especially with 5-minute updates, though latency is the one concern. I am not likely to make a warning decision based solely on these, but they certainly add confidence to decisions being considered with traditional (usually base radar) products. The future potential to advect these fields forward using RUC-based forecasts will obviously add another dimension to their utility. Specific comments on the various products were made via the survey forms, but here I will just mention the need to improve the downdraft product (focus on lower levels), perhaps the updraft product too (various levels), the value of an updraft helicity product, and that access in AWIPS to the 2D winds would be a great benefit (a rough sketch of a rotation diagnostic computed from such 2D winds follows this list).

* I was a little discouraged with the CI products, but I do see the potential and would like to have spent more time with them in different environments. My impression is that they will be most helpful in otherwise clear conditions with CI expected on a boundary like the dryline, and much less useful in moist “airmass” type environments and, obviously, when cirrus is present (these last two scenarios are fairly common here in the mid-Atlantic region). I am a little concerned about two different efforts/groups developing different CI products; each seemed to have some advantages. In my limited experience that one week, I was more impressed with the UAH version (more detections), but I think the multi-tier output of the UW-CIMSS version is needed.

* The OUN WRF (as other high-res, convection-resolving models have shown) certainly has a lot to offer in terms of helping to anticipate storm mode and, to some degree, evolution, but not necessarily locations or timing (timing is probably worst). We still need a collection or ensemble of these to get a better feel for the range of timing and locations, but looking at the details of a single model and its trends can still provide helpful info for overall S.A., and ultimately leads to quicker decisions if you are ready to anticipate certain structures/evolutions. Despite some errors in timing and placement on the last active day we were there (May 19), the signals were good enough to help prepare for some upscale evolution during the evening, and the model did suggest early convective initiation (which some other models did not have).

* Finally, I love the new WDSS-II map web page for the MRMS products (even though we weren’t specifically evaluating those). I used these frequently, with some of the same advantages as the 3DVAR products, such as trends, but the rapid update for these, since they are multi-radar, is a unique advantage. It will be easier for me to introduce these products to my staff using the web page (http://wdssii.nssl.noaa.gov/maps), but I would also like to work with ER SSD to get these into AWIPS via LDM (when I get a chance!).

* I just realized I totally left out any comments on the PGLM data! In part that is because I really did not get a chance to spend much time evaluating it. In one case, one of the key ground sensors dropped out, giving the impression that the flash density dropped when it really didn’t. Also, for this event the NLDN CG data was intermittent, so it was not a good chance to compare the two. Again, with this data set I think it would be especially important to focus solely on this product for most of an afternoon/evening, or during a DRT case. I still don’t have a really good feel for how total lightning relates to the CG data we are used to tracking, or its relationship to severe weather. I think it would be very important to spend at least one day on this one product to get some worthwhile feedback. Sorry I don’t have more on this.
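
To make the rotation idea in the 3DVAR bullet above concrete, here is a minimal sketch of a vertical-vorticity diagnostic computed from a 2D wind analysis on a uniform grid. It is a toy under stated assumptions (uniform Cartesian grid spacing), not the actual 3DVAR product chain.

```python
import numpy as np

def vertical_vorticity(u, v, dx_m=1000.0):
    """Vertical vorticity zeta = dv/dx - du/dy (s^-1) from a 2D wind
    analysis (u, v in m/s) on a uniform grid with spacing dx_m meters.
    Rows are y (south-to-north), columns are x (west-to-east)."""
    dvdx = np.gradient(v, dx_m, axis=1)
    dudy = np.gradient(u, dx_m, axis=0)
    return dvdx - dudy

# Toy example: solid-body rotation u = -omega*y, v = omega*x -> zeta = 2*omega
omega = 0.005  # s^-1
y, x = np.mgrid[-5000:5001:1000, -5000:5001:1000].astype(float)
zeta = vertical_vorticity(-omega * y, omega * x)
print(zeta.mean())  # ~0.01 s^-1
```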

Steve Keighton (Science and Operations Officer, NWS Blacksburg VA – EWP2011 Week 2 Participant)

Forecaster Thoughts – Marcus Austin (2010 Week 9 – MRMS/GOES-R)

The last week of EWP2010 was a successful one, with ample opportunities to test and evaluate the latest technologies soon to become available to operational meteorologists. As a SCEP at NWS Tallahassee, FL, I had little previous warning experience. Initially, the program was a bit overwhelming, with so many new products to evaluate and only one short week to do so; however, by the latter part of the week, certain products became favored over others and the process of developing situational awareness became more natural and comfortable. Our week was the last of the program and was geared toward the GOES-R and MRMS products. My comments on each of these can be found below. We had a very busy couple of days on June 16th and 17th, with a long-lived tornadic supercell over South Dakota on the 16th and a widespread tornado outbreak on the 17th. These two days were challenging, and the products in development at the HWT were very useful in the warning decision process. I’d like to thank all who were involved in putting the program together as well as those who stayed around for support. It was a lot of fun and I hope I can make it out again soon.

MRMS Products

The Multi-Radar Multi-Sensor algorithms were an excellent tool during the warning process. In particular, those which emphasized reflectivity at the 0°C, -10°C, and -20°C isotherms were instrumental in assessing hail potential. These, combined with products such as the 50 dBZ echo top height above -20°C and the layer-average reflectivity at the various isotherms, highlighted major storms versus those that posed a lesser severe threat. In terms of MESH versus MESHb, storm reports showed that MESH outperformed MESHb for more classic supercellular/tornadic storms, while MESHb worked well in multicell/linear MCS-type events where updraft strength was not as robust. These tools were also useful for delineating severe-warned areas: looking at reflectivity well above the freezing level allowed forecasters to notice cores that had not yet descended and might produce severe weather downstream, which improved overall polygon size and orientation with respect to future threats.
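
For readers new to the isothermal products: the core operation is interpolating a reflectivity column to the level where the environmental temperature crosses a given isotherm. Here is a minimal sketch under simple assumptions (bottom-up column arrays, temperature decreasing monotonically with height); it is an illustration, not the MRMS code itself.

```python
import numpy as np

def refl_at_isotherm(refl_dbz, temps_c, iso_c=-20.0):
    """Reflectivity interpolated to an isothermal surface.
    refl_dbz, temps_c: column arrays ordered bottom-up; assumes
    temperature decreases monotonically with height."""
    # np.interp needs increasing x, so work with negated temperatures
    return float(np.interp(-iso_c, -temps_c, refl_dbz))

refl = np.array([55.0, 52.0, 48.0, 40.0, 30.0])     # dBZ, bottom-up
temps = np.array([10.0, 0.0, -10.0, -20.0, -30.0])  # deg C at the same gates
print(refl_at_isotherm(refl, temps, iso_c=-20.0))   # 40.0 dBZ at the -20 C level
```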

In terms of analyzing tornado potential, the rotation tracks and the low- and mid-level shear products were good references when orienting polygons. With these tools, storm rotation trends could be derived and tornado warnings could more accurately reflect the individual threat from the tornado. These products complemented base reflectivity and velocity data well and provided extra guidance on the location and motion of potential tornadic circulations.

Overall, the MRMS products were most useful during the warning phase of the experiment. They provided a good quick look when analyzing traditional radar data and a confidence booster when issuing warnings for severe hail and tornadoes. Products geared toward reflectivity thresholds provided an overview of which cells tended to be more severe and how they were growing/decaying over time. This prevented possible oversight of minor severe storms while focusing on those that were producing the most severe weather at the time. I hope to see these products move into the operational realm so more forecasters have an opportunity to test them in real severe weather scenarios.

GOES-R Products

The GOES-R tools were mainly geared toward convective activity. A convective initiation algorithm was developed to discern areas of likely impending thunderstorm development. This product would be very useful, especially for aviation interests when making short-term changes to TAFs to reflect thunderstorm threats. It would also be a good reference for putting out watches/mesoscale discussions when forecasting short-term probability of severe weather given a volatile thunderstorm environment. Unfortunately, we were unable to effectively evaluate this product as the shifts took place in the afternoon after most convection had already initiated. Verification was carried out the day after to see how severe reports lined up with the CI detections. Overall, I feel that visible satellite would likely clue me in on convective initiation, but the tool performed well in retrospect.

In addition to these, overshooting top (OT)/cloud-top cooling and enhanced-V signature products were developed to indicate the likelihood of severe weather for particular storm cells. OT detections were widespread, with very few enhanced-V signatures detected. Enhanced-V detections were always associated with severe weather, mainly in supercell thunderstorms. Some overshooting tops evident on visible satellite were missed, but overall the algorithm performed fairly well. The real question is how useful such a product would be in a warning situation. I would not feel comfortable warning solely on an OT or enhanced-V detection without some base radar data or perhaps MRMS imagery to back it up. These products would have been more useful before the event began, and in a regional sense, to get a feel for where the strongest convection was occurring or where the greatest potential for severe weather would develop. One glaring limitation of the GOES-R products is that they only work under clear skies with no cirrus present. Overall, I found these products to be more interesting than useful in terms of issuing warnings; however, I was not able to evaluate them thoroughly due to time limitations.

Marcus Austin (Student Career Employment Program, NWS Tallahassee FL – 2010 Week 9 Evaluator)

Forecaster Thoughts – Pat Spoden (2010 Week 8 – MRMS/GOES-R)

I was part of a group of 5 new forecasters who reported for duty at 1PM on Monday, June 7, 2010. It was week 8 of the EWP, with each week bringing in new forecasters. The first couple of hours were spent training on the new products we were being asked to evaluate. The objective was to determine which, if any, of the newer products would be useful in a short-fused warning situation. Our feedback would be given via surveys at the end of each day and during debriefs at the start of the following day. Real-time reports were supplied by students and the SHAVE (Severe Hazards Analysis & Verification Experiment), if applicable, during the week.

Each spring, NSSL and SPC jointly put together the Hazardous Weather Testbed (HWT) in Norman, Oklahoma. The two main program areas are the Experimental Forecast Program (EFP) and the Experimental Warning Program (EWP). The EFP focuses on the weather prediction models while the EWP tests concepts and technology and focuses on short-fused warnings.

Severe weather was expected Monday afternoon over the Central Plains. I have to admit that on the first day I was relying more on the products with which I was comfortable rather than the new products. However, I did try to compare their output to the real-time reports we were given and to my “crutch” products. Several of us did participate in a “canned” case using the total lightning products with storms over northern Oklahoma. The lightning data was helpful, yet confusing, because there were several lightning jumps as the updrafts increased. These storms were made up of multiple updrafts, so it was difficult to determine which updraft was key in forecasting severe weather.

During the debrief on Tuesday, I was not overly confident that the “new” products were going to be of much help. But, I thought, it was early, and I needed to give them a chance. Plus, it was a slightly different environment than back at my office and that probably had an impact on my analysis so far.

Thankfully, more severe weather was expected that day. The software allowed us to create a virtual forecast operation just about anywhere. We were not allowed to see what the actual forecast office was doing, so that we would not be biased. The focus Tuesday afternoon and evening was over Kansas. I really worked to bring more of the new products into use, and my comfort level was rising. We were able to make use of the convective initiation products as long as cirrus was not in the area; they did show convection quickly developing over Texas, where the skies were generally clear.

It became clear to me that reflectivity on the -20 degree C surface was extremely helpful and I began to bring in more of the estimated hail size (MESH). These were the MRMS (Multiple Radar/Multiple Sensor) products. The MRMS products took advantage of all radars and were helpful as storms moved over the KICT WSR-88D’s “cone of silence” where one could see the storm increasing in intensity, but probably not to severe levels. Many of the products would allow a forecaster to pick out the strongest storms rather quickly. While there were many products available to us, not all of them appeared helpful.

On Wednesday, the debrief was more vocal as we began to find more and more useful products, and examples were given of where certain products excelled and where they were not helpful. There were a few storms in the Alabama lightning network area, so that was the area of focus that day. We saw several instances where the total lightning picked up on storms before the AWIPS lightning mapper program did. One could see the utility of this in the future, bringing with it a potential for lightning statements and potentially lightning-based warnings. We continued to test all of the products available to us.

By Thursday, I was excited about the prospect of testing more of the new products. It seemed that everyone’s confidence was increasing dramatically using the newer products, especially the MRMS products. We were in luck as supercells were developing outside of Denver. We could see via the situational awareness display that VORTEX 2 was going to be on those storms. They provided us with real-time written reports and live video of the storms. I thought how nice this would be to have on a regular basis back in Paducah.

My partner and I were warning on the supercells. I relied heavily on the MESH and both the 0-2 km and 3-6 km 30-minute rotational track products, which clearly pointed out the areas of concern. I did look at the traditional products, but with the reverse of the weighting I used back on Monday. As the event moved on, one supercell became several. This is where the MRMS products shined. We created a split-screen situational awareness display, with the MESH and our warnings on one side and the rotational tracks and our warnings on the other. From this, you could quickly look up and ensure that everything was covered, and with that configuration we were not too concerned about becoming overly focused on any one storm.

Friday was an early day, arriving at the weather building at 10 AM. We reviewed the week’s events and discussed ways to improve them. Some ideas were as simple as changing color scales, while others were to allow more “on-the-fly” changes to look at reflectivity on different temperature surfaces and different dBZ cores. Before we left, Dan Nietfeld, SOO at WFO OAX, gave a presentation on how they handle tornado warnings in Omaha.

My experience at the EWP was fantastic and I hope to go back again. I learned a tremendous amount about what may be available to the field shortly and, hopefully, helped the researchers look at different ideas and improve upon all of the work that has already been done. I could clearly see how several of these products would immediately help back at the office. While we are blessed with good radar coverage at Paducah, these products would help us even further in critical situations. I could recall past situations where we would have benefited from having these products available, and I knew I was going to miss having them once I got back to Paducah. Thankfully, they are available as test products on the WDSS-II website: http://wdssii.nssl.noaa.gov/.

Pat Spoden (SOO, NWS Paducah, KY – 2010 Week 8 Evaluator)

Forecaster Thoughts – Frank Alsheimer (2010 Week 8 – MRMS/GOES-R)

I took part in the EWP for a week during June. I was able to experimentally use satellite and lightning products from the GOES-R Proving Ground applications as well as algorithms and products from the Multi-Radar/Multi-Sensor project. I will talk about the benefits and weaknesses of both.

GOES-R products — The products I got to experiment with were the convective initiation, overshooting tops, and the pseudo-lightning. I see both promise and limitations, but some of the limitations will be rectified once GOES-R becomes reality.

Convective Initiation — During the week of the experiment, I found myself only occasionally being able to use the product. The inability to detect initiation through cirrus is a major drawback, significantly limiting the number of signals. However, some of that may have been due to issues with the satellite images themselves, on which the algorithm is based. Especially noticeable was a period around 00Z when we got only 3 images in an hour’s time. That really made it hard for the algorithm to do its job up to its potential, and therefore made it difficult to give it a true workout for when it would be operational with GOES-R. In theory, the product would have use in operations before a convective event begins.

Overshooting Tops — There were a few more opportunities during the week to see this algorithm in action. While it did a good job in detecting many of the overshooting tops during the events I worked, I did not get a whole lot of additional lead time over just using traditional radar interrogation. This is another case, however, where more frequent (at least every 5 minutes) images from the GOES-R satellite may show more benefit to the product.

Pseudo Lightning — I found this product to be complementary to, and in a few cases superior to, the ground-based lightning detection networks to which we currently have access. There was one specific real-time case I remember where the total lightning product actually gave lead time, over both traditional radar interrogation and the ground-based lightning network, in identifying a cell that had become electrically active. This is very important since many lightning fatalities are recorded with the first strike. It will also prove very beneficial as we get more into decision support services, especially to support the safety of responders to incidents who are exposed to lightning hazards.

Multi-Radar/Multi-Sensor — I found some of these products more beneficial than others. I will talk about each grouping of products individually.

Gridded Hail Detection Algorithm (HDA) products — The bias-corrected version of the MESH algorithm created a product that was far superior to the current MESH algorithm associated with individual radars during the week I participated in the test. Once I got used to the product, I used it as a primary tool in the warning decision process and would definitely use it regularly were it available in the AWIPS system at my office. The non-bias-corrected product was not quite as reliable, but it was still nice to have a product that updated more frequently than any VCP we have available today and that helped mitigate the “cone of silence” issue we have with individual radars. I used the 30-minute swaths occasionally when following a supercell, but did not find a lot of real-time use for the 120-minute swath products.

Hail/Lightning/Convective diagnostic products — The most beneficial of these were the reflectivity products at specific temperature altitudes, especially -20°C. The two-minute updates of these products helped to identify rapidly intensifying convective cores. The 50 dBZ echo tops, as well as the height of the 50 dBZ echo above specific temperature levels (i.e., 0°C and -20°C), were also beneficial, although 60 dBZ would likely have been a better threshold to access. I didn’t find much use for the VIL, VIL Density, and LRA products, although I have to say I didn’t really use them a whole lot once I found some of the other products I liked better.

Derived Shear Products — I found some cases where these products were helpful and others where they were not. They have a tendency to show increased values as one gets closer to an RDA because of the weighting process, which is a bit of an issue at times (although it’s an issue on the individual radars as well). Similar to the HDA products, I thought the 30-minute tracks had some benefit for tracking purposes, but not so much the 120-minute.

Cloud-To-Ground Lightning Products — I found the density product useful, as it gave a discrete value which could be compared both to the trend of the cell in question and to other cells. It would occasionally be better than the individual-strike product we currently get in AWIPS because it was easier to discern the lightning frequency near an individual cell.
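
For context on what a density product computes, here is a minimal sketch that bins point CG strikes into grid-cell counts over an accumulation window. The grid spacing and window below are arbitrary illustration choices, not the experimental product’s actual configuration.

```python
import numpy as np

def cg_strike_density(strike_lats, strike_lons, lat_edges, lon_edges):
    """Count CG strikes per grid cell over the accumulation window."""
    counts, _, _ = np.histogram2d(strike_lats, strike_lons,
                                  bins=[lat_edges, lon_edges])
    return counts  # shape: (n_lat_cells, n_lon_cells)

# Toy example: four strikes binned onto a 2x2 grid
lats = np.array([32.1, 32.2, 32.8, 32.9])
lons = np.array([-81.9, -81.8, -81.2, -81.1])
grid = cg_strike_density(lats, lons,
                         lat_edges=np.array([32.0, 32.5, 33.0]),
                         lon_edges=np.array([-82.0, -81.5, -81.0]))
print(grid)  # [[2. 0.]
             #  [0. 2.]]
```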

Frank Alsheimer (Science and Operations Officer, NWS Charleston SC – Week 8 Evaluator)

Forecaster Thoughts – David Blanchard (2010 Week 6 – MRMS/GOES-R)

[Ed. – David’s post is taken from his daily journal notes during his week at the HWT.]

2010_0517

I’m in Norman, Oklahoma, for EWP2010 to forecast severe weather scenarios using new technologies and software. This should be both exciting and challenging and I’m looking forward to the experience.

Much of the day was spent with an overview of the various products we will be using and testing, and there is much that is new and potentially very useful. By early evening we switched into forecast mode and loaded up real-time data of the ongoing convection and severe weather in southwest Texas and southeast New Mexico. Not surprisingly, there were a few software glitches, but there always are with these types of programs, and we just worked our way through them. Eventually we were able to view the multi-radar data fields and the new satellite tools, including the convective initiation and overshooting-top products. Because these are new tools that we have not used before, it takes some time to learn how to use them and how they can improve severe weather warnings.

2010_0518

We started with a debriefing of yesterday’s events over the southern High Plains. Next we briefed on today’s expected weather, which should include supercells, with tornadoes possible, over the southern and central High Plains.

We received a brief overview of some of the new satellite products, including the simulated satellite imagery generated from model output. It uses NSSL 4-km WRF data to simulate all IR bands and produces results that are very similar to true satellite data. It is, however, very compute-intensive and requires many hours to generate, so it’s unlikely that we will see this product on an operational workstation in the near future.

After the EFP weather briefing, we began forecasting operations for the day. Our group of four broke into two groups of two, with our group forecasting for AMA and the other for PUB. Within a short time we had convective initiation, but we were unable to use the satellite CI products (or any others) because of excess cirrus cloud obscuring the low cloud. We switched to MRMS products and began the forecast. These new products are a challenge to use at first, as with any new product, but they have great potential value. It requires that we load and use these tools alongside the more conventional tools as we prepare our warnings.

There are simply too many products to attempt to view and use them all. One needs to judiciously choose a few and work with those during the forecast and warning session; selecting too many will result in information overload. I suspect that the products I selected and that worked for me today may not be the same in another event. For today, I found the MESH and ROTATIONAL products to be useful, as well as the REFLECTIVITY -20C product, for warning on large hail and tornadoes.

2010_0519

The debriefing today included comparisons between the warnings we issued and those issued by the NWS offices. Our team was warning for AMA and the other team for PUB. There were only a few supercells, and these quickly became severe and then tornadic, so the warning decisions were fairly easy. I’m not sure that any conclusions can be drawn from this event, since we were competing against forecasters familiar with the area, and consequently our warnings were usually a few minutes behind theirs in issuance time.

Today poses a Moderate to High Risk across portions of the central Plains, and both teams will be forecasting for the OUN warning area. We sectorized by storms as necessary. The strongest storms of the day were generally north of the I-40 corridor, and the other team handled most of these. We did warn on one storm in that area; our warning came almost 30 minutes later than the one issued by the WFO, but I believe ours was the more timely of the two, as MRMS parameters suggested the storm was neither severe nor tornadic when the WFO first warned on it. It’s possible that their upgrade to a warning was predicated upon the evolution of an earlier storm that quickly became tornadic. All or most of the storms in this area eventually became tornadic, and VORTEX2 was operating in the area, which allowed us to receive timely reports of large hail and tornadoes.

We warned on a storm that began near Lawton, then moved toward Chickasha, and eventually moved across south Norman. It was very slow to evolve, and we delayed warnings until MRMS and base-state data convinced us that the storm had finally become severe. The first warning was a SVR, then an extension, then finally a TOR, an extension, and then back down to a SVR. No tornadoes were reported by experienced chasers and spotters, but large hail was reported 3 miles south of NWC. I think holding back on the warning was an improvement in FAR for individual counties and cities early in the life cycle of the storm.

2010_0520

With the front now stalled across Texas, we focus on that section of the country, using KFTW as a localization. Severe storms form on the front and we warn on them using the new radar products, not too different from the previous days. But there is one interesting feature: an outflow boundary and fine line moving westward. The AzShear and RotationTracks products both show this feature well, and it could serve as an initiation point for convection later in the afternoon. By the time we break, however, it has not yet done so.

In the evening we switch to KHSV so that we can use the pseudo-GLM (Geostationary Lightning Mapper), a tool we have not yet used; it maps all lightning channels, including CG, CC, and IC. The most interesting thing we notice is that there is substantial lightning being detected in the trailing stratiform region. The echo line is oriented north-south, and during the evening a few low-reflectivity notches develop on the rearward side, followed by very strong winds on the leading edge of the convection. It appears that rear inflow jets (RIJ) are developing under the mesoscale anvil. This reminds me of some PRE-STORM events.

2010_0521

Today we discuss and debrief on all the week’s activities. There is general consensus that the radar products are good tools and become more useful with use. We’re not yet certain of the value of the satellite products, since they aren’t telling us much more than we can see with other products, but there may be events in which they outperform the radar, so other groups may see some value that we didn’t get to experience.

David Blanchard (Lead Forecaster, NWS Flagstaff AZ – 2010 Week 6 Evaluator)

Forecaster Thoughts – Darren Van Cleave (2010 Week 6 – MRMS/GOES-R)

I was privileged to be invited as a participant in the GOES-R/MRMS portion of the 2010 Experimental Warning Program, week 6 (mid-May). We had a fairly busy week, with tornadoes and other severe weather occurring in the Amarillo and Pueblo WFOs along with the hometown Norman WFO (the Amarillo and Norman experiment days happened to feature VORTEX2 providing live on-site data on the tornadoes). Going into the program, I was expecting to be more interested in the GOES-R side of the experiment; however, as the week progressed, it became apparent that the MRMS system was the more promising innovation at the present time. The GOES-R tools proved difficult to fully analyze because their namesake satellite, with its quicker routine scan intervals, had yet to be launched. GOES product issues and other technical glitches aside, the week was successful and I very much enjoyed my stay. Here is a brief collection of my thoughts on each of the experimental tools:

MRMS

This system was by far the most impressive tool premiered during the week. The technology has been around for several years, but this was the first I had heard of it. The concept is simple: avoid radar-data overload and simplify warning operations by combining ancillary radars (TDWR, CASA) and neighboring WFO radars into one streamlined product. This provides an excellent way to analyze multiple radars at the same time, provided a WFO has overlapping radar coverage. Traditional cell diagnostics such as POSH (probability of severe hail) can then be constructed from this base data, along with new products that take advantage of the isothermal plotting capabilities (i.e. reflectivity plotted on an isothermal surface).
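
The merging step itself can be sketched in a few lines. The following is a conceptual illustration only, assuming each radar’s reflectivity has already been remapped onto a common grid and using a simple Gaussian distance weight; the real MRMS weighting also considers factors such as beam height and data age.

```python
import numpy as np

def merge_reflectivity(refl_stack, dist_km, roi_km=150.0):
    """Distance-weighted merge of co-gridded reflectivity from several radars.
    refl_stack, dist_km: shape (n_radars, ny, nx); NaN in refl_stack marks
    grid cells that a given radar does not cover."""
    weights = np.exp(-(dist_km / roi_km) ** 2)           # nearer radar, larger weight
    weights = np.where(np.isnan(refl_stack), 0.0, weights)
    wsum = weights.sum(axis=0)
    total = (weights * np.nan_to_num(refl_stack)).sum(axis=0)
    # Cells seen by no radar stay NaN; guard the division for those cells
    return np.where(wsum > 0.0, total / np.where(wsum > 0.0, wsum, 1.0), np.nan)
```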

One readily apparent drawback of the MRMS system we previewed was the sheer number of different analyses available. It was nearly impossible in the time allotted to adequately test, or even plot, all of the fifty or so products listed for us. I settled into using about 5 of the products and was able to try about 15 over the course of the experiment. Continued experimentation (such as the EWP program) should help whittle this down to a more manageable list when MRMS products are made available to WFOs.

I found several of the MRMS products to be very helpful in forecasting severe hail. The traditional MESH and POSH algorithms available through MRMS performed well, both in highlighting the onset of severe hail and in following its track. Curiously, the bias-corrected MESH performed the worst, being off-track by an appreciable margin for many of the storms. Reflectivity was available on isothermal surfaces; the reflectivity at the 0C and -20C surfaces in particular was handy for issuing warnings for severe hail. For tornado warnings, the 0-1 km azimuthal shear and 30-minute rotation tracks both provided valuable information on low-level rotation and tornadic history. I found that the rotation track tool gave a good first guess for shaping the path of the warning polygon in situations where the track forecast was more difficult.

One drawback of relying extensively on MRMS products is the slight data latency of approximately 2 minutes. Warning decisions which require up-to-the-minute radar data would be hampered by waiting on the next available data, which might be around 2 minutes late in comparison to the WFO radar itself. I suppose in this regard, MRMS data is probably more useful in tracking and updating existing warnings than in issuing new ones.  [Note:  The latency is a result of the experimental nature of the AWIPS set up.  An operational system, and hopefully our future EWP system, should have reduced latency.  -Stumpf]

Additionally, I suspect that WFOs which lack overlapping radar coverage probably wouldn’t experience the full benefit of the MRMS system. In particular, it seems that the low-level shear products will suffer, since some of the required elevation scans might not be available at greater distances from the radar.  [Note:  The 0-2 km AGL azimuthal shear and rotation tracks products always use data from the 0.5 degree elevation scan even if it is above the 0-2 km AGL layer.  -Stumpf]

GOES Overshooting Top/Enhanced-V Algorithm & U. Wisc Convective Initiation Product

We were provided with an overshooting top algorithm which located the colder clouds of an overshooting cloud top along with the associated “enhanced-V” signature. We were also given a convective initiation product which provided four discrete values indicating the likelihood of convection over a given area. I’ve lumped the two tools together in this review because it was difficult to gauge the usefulness of either in warning operations, due to the current GOES scan interval of 15 minutes. New convection, and even overshoots, were often easily diagnosed by radar within the time required for a new scan. To make matters worse, the scheduled afternoon calibration and full-disk scans created occasional 30-minute gaps in the imagery, further hampering the tools. Needless to say, these wide gaps in the imagery updates rendered the products difficult to evaluate. However, when the GOES-R satellite is launched, the algorithms will receive 5-minute imagery at all times of day (up to 30-second imagery in rapid scan mode), which should greatly enhance the utility of these products. Until that time, I would say the jury is still out.
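
As an aside, here is a hypothetical sketch of what tiered (rather than yes/no) output looks like. The predictor variable and thresholds below are invented for illustration only; they are not the UW-CIMSS algorithm’s actual criteria.

```python
def ci_category(cloud_top_cooling_k_per_15min):
    """Map a cooling-rate signal to tiered CI likelihoods.
    NOTE: variable and thresholds are made up for illustration."""
    rate = cloud_top_cooling_k_per_15min
    if rate <= -16.0:
        return "CI occurring"
    if rate <= -8.0:
        return "CI likely"
    if rate <= -4.0:
        return "CI possible"
    return "no CI signal"

print(ci_category(-10.0))  # "CI likely"
```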

GOES-R Geostationary Lightning Mapper (GLM)

The GLM was one tool which was not actually available for the EWP, and was instead mimicked with other data to give a rough estimate of how it might behave. In the future, GLM data will give forecasters a unique look at storm activity by providing the total flash rate via a visible channel on the GOES-R satellite. This provides much more information than the current cloud-to-ground lightning data provided by Vaisala (NLDN), not to mention the benefits of public-use lightning data instead of Vaisala’s proprietary data. As previously mentioned, for the purposes of our experiment it was intended to use a pseudo-GLM (GLM output being imitated by real total lightning data) in warning operations. Unfortunately, this also required the operations to take place in locations which featured the total (3D) lightning-mapping instrumentation, which was rarely the case for our week of operations. On the one day we did have pseudo-GLM data available, the storms were sub-severe. Other weeks of operation probably worked better for analyzing the GLM, so I would defer to participants of those weeks for more information on this tool.

Darren Van Cleave (Meteorologist Intern, NWS Rapid City SD – 2010 Week 6 Evaluator)

Forecaster Thoughts – Steve Nelson (2010 Week 5 – CASA)

Figure 1. Hazardous Weather Testbed in action during the 10 May 2010 tornado outbreak.

In March 2010, I was asked to participate in the CASA (Collaborative Adaptive Sensing of the Atmosphere) portion of the 2010 Spring Experimental Warning Program (EWP) in the Hazardous Weather Testbed (HWT) at the National Weather Center (NWC) in Norman, OK (Figure 1). CASA operates a dense network of four X-band (3 cm) radars between Oklahoma City and Lawton, OK. These radars have only a 30 nm effective range, but they overlap to provide multiple-radar analyses of reflectivity and velocity. For more information on CASA, see http://www.casa.umass.edu/ or the CASA IP1 Wiki at http://casa.forwarn.org/wiki/. The purpose of the CASA EWP experiment is to have experienced forecasters evaluate real-time and case-study CASA radar data.

During the week before my arrival at the NWC, I became increasingly excited because of consistent model forecasts of severe weather in Oklahoma. I even stayed up the night before my departure to view the SPC outlook for May 10: a High Risk of severe thunderstorms and large tornadoes in Oklahoma! When I arrived at the OKC airport at noon on Monday, I immediately began coordinating my arrival with Jerry Brotzge and Brenda Philips (CASA Principal Investigators) via phone and text messages. Brenda’s flight had also landed at OKC around noon, so we drove down together. We had just enough time to grab a quick lunch to go, and arrived at the HWT around 1:30 pm, where we immediately began reviewing the latest information. Central Oklahoma was still under the gun, and storms were developing along the dryline to the northwest of the CASA testbed area. I don’t think I had finished my lunch yet when Brenda told me it was time to make a forecast! We used Twitter and NWSChat as our primary media for disseminating our forecasts, warnings, and updates. After pegging a time of 5 pm for activity to reach the testbed area, I watched the event unfold with one supercell after another developing along and ahead of the dryline. Unfortunately, all of them seemed to develop just outside of the testbed area. Around 5:15 pm, one left-moving supercell split off to the northeast and moved inside the network. This storm contained an unusually strong anticyclonic mesocyclone (a mesoanticyclone?) and hook configuration (Figure 2). When asked if I would issue a tornado warning on that storm, I replied, “No, because anticyclonic mesocyclones rarely produce tornadoes.” At 5:25 pm, a Tornado Warning was issued by WFO OUN for this storm. It turned out that an EF1 anticyclonic tornado with a six-mile-long track had touched down at 5:18 pm near Bray, OK, and another pair of tornadoes (one anticyclonic and the other cyclonic) occurred near I-35 and Wayne, OK. Around this time, two LP-like supercells were approaching Moore and Norman. As the Norman storm approached, I saw SPC forecasters run to the west windows. Being a conscientious, safety-minded NWS meteorologist, I also ran to the window and observed a rapidly rotating funnel nearly over the National Weather Center (Figure 3). The tornado grew in size as it tracked east along Highway 9 and even damaged a few NWC employees’ homes. As storms moved east and away from the network that evening, we closed operations for the day. Between 2 and 8 pm, 31 tornadoes were confirmed across the state [http://www.srh.noaa.gov/oun/?n=events-20100510-tornadotable]. An exciting start to the experiment, to say the least! In the following days, there were several close calls with severe storms near the network during and after operations, but none as significant as the May 10 event.

Figure 2. Anticyclonic hook depicted on 2.0 deg reflectivity from the KRSP CASA X-band radar at 2221Z 10 May 2010.
Figure 3. Tornado 300 yards south of the National Weather Center on 10 May 2010. Photo by Kevin Kloesel.

The orientation planned for Monday took place on Tuesday. During the rest of the week, I went through several displaced real-time simulations using 88D data only, then repeated them using 88D and CASA radar data, multi-radar wind analyses, and high-resolution model forecasts. The simulations included the Anadarko, OK tornado of 14 May 2009 and the Rush Springs, OK tornado during the early morning of 2 April 2010. Without knowing any details of either case, I was challenged to issue timely warnings based on CASA radar data. Without going into detail, CASA radars use adaptive scanning strategies that depend on the coverage and intensity of storms; data at any one elevation angle can be as frequent as every 30 seconds. Trying to mentally process data from four CASA radars the same way we process data from one 88D was an exercise in futility. I do not believe manual interrogation of such high-resolution radar data is a realistic option for the warning forecaster of the future.

The Rush Springs, OK tornado case was very eye-opening and showed the tremendous potential of CASA radar technology to detect smaller tornadoes. Figure 4 shows a side-by-side comparison of KTLX and KRSP CASA reflectivity at the time of the tornado. Many areas east of the Mississippi River are prone to these smaller tornadoes, which develop more rapidly than those from supercells. Trapp and Weisman (2005) showed how tornadoes spin up in the comma-head portion and along the leading edge of quasi-linear convective systems (QLCS). Tornado warning lead time and accuracy are lower for both QLCS and tropical cyclone storms than for supercells. A local study done at the Peachtree City WFO in 2009 showed that 13 out of 16 unwarned F2-or-greater tornado events resulted from QLCS storms. A Hollings Scholar is also studying QLCS tornado climatology and warning accuracy this summer at the Peachtree City WFO. So far we have determined that the initial lead time for tornadoes from QLCS storms across the mid-South and Southeast averages about 25% of that from supercells (3-5 minutes vs. 20 minutes).

Figure 4. Radar reflectivity from the 2 April 2010 Rush Springs, OK tornado. The image on the left is from the KTLX 88D at 1057Z; the middle and right images are from the KRSP CASA X-band radar at 1058Z and 1100Z, respectively.

During the week, I was able to pick the brains of some scientists. I shared presentations and concerns from the 14-15 March 2008 tornado and 21 September 2009 flash flood events in north Georgia. With Jerry Brotzge, I discussed research on unwarned tornadoes recently published in WAF; he showed how such missed events can be correlated with smaller tornadoes (as just mentioned). I plan on collaborating further in the future.

I will certainly remember the experience I had at the EWP this year and look forward to the day when technology like this is deployed operationally.

Steve Nelson (Science and Operations Officer, NWS Peachtree City/Atlanta GA – 2010 Week 5 Evaluator)

Forecaster Thoughts – Bill Martin (2010 Week 4 – CASA)

I spent last week in Norman at NSSL and the Hazardous Weather Testbed helping to evaluate how the CASA radar network can be used in operations. In addition to the radar people in Norman, I got to work with systems engineers from the Univ. of Virginia who are studying how forecasters make use of information and software tools.

As you may recall, the CASA radar network consists of four relatively low-powered radars in southwest Oklahoma designed to work collaboratively. A much larger network has been envisioned; I was told that a national network would require 10,000 such radars. The close spacing of the radars allows them to see the lower levels of the atmosphere much better than 88Ds can, and allows them to be closer to targets and thus have considerably better resolution than a typical 88D (though an 88D has better resolution for targets close to it). The collaborative aspects of the network include things like dual-Doppler analysis. A large network would be able to give 2-D wind vector fields, instead of just toward/away radial values, which would reduce the intellectual load of radar interpretation quite a bit. Disadvantages of the network include attenuation and data quality problems (which would be mitigated by a larger network), and cost. Each of the prototypes cost around $250K plus maintenance, though this would presumably come down with mass production, if it ever came to that.
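
The dual-Doppler idea reduces to a small linear solve: each radar measures only the component of the wind along its beam, and two non-parallel beams determine the horizontal wind. Here is a minimal sketch, neglecting the vertical-velocity contribution and beam-geometry details such as elevation angle.

```python
import numpy as np

def dual_doppler_uv(vr1, az1_deg, vr2, az2_deg):
    """Horizontal wind (u, v) in m/s from two radial velocities measured
    along beams with azimuths az1, az2 (degrees, meteorological convention;
    positive radial velocity = motion away from the radar)."""
    a1, a2 = np.radians(az1_deg), np.radians(az2_deg)
    # Each radar sees vr = u*sin(az) + v*cos(az)
    A = np.array([[np.sin(a1), np.cos(a1)],
                  [np.sin(a2), np.cos(a2)]])
    return np.linalg.solve(A, np.array([vr1, vr2]))

# A 20 m/s westerly seen by beams pointing east (90 deg) and north (0 deg)
print(dual_doppler_uv(20.0, 90.0, 0.0, 0.0))  # -> [20.  0.]
```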

CASA radars are sometimes considered as gap-filling radars, and they could certainly fill this role.  However, gap-filling radars have been available from vendors for some time, and the CASA project was designed to be more than that through collaborative properties.  Core funding for CASA has been from the NSF and has another 2 years to run.  After this, progress may come more slowly as funding for different aspects of CASA becomes diffuse, unless a source for new funding is identified.  I was involved in CASA from the beginning, having attended the NSF review that originally funded the project (as a graduate student).

As there was no active weather the week I was there, most of the time in the Testbed was spent playing back archived cases and issuing experimental warnings based on the CASA data, in addition to the usual data.

Some of the interesting issues that came up:

– The systems engineering people were fascinated by the fact that all the forecasters they had evaluated used the available information differently. I’m not sure if that is good or bad. On the one hand, it is good to have variety so that new ideas can come to light; on the other hand, for some things there are probably “best” ways to proceed.

– WDSS II versus AWIPS. The WDSS II software was used to visualize data. It was much more sluggish and difficult to use than D2D. FSI, which we use as a plug-in to D2D, is a subset of WDSS II. For operations, we need fast and highly responsive access to data, and I recommended WDSS II be redesigned to be more efficient. They had recently gotten D2D to work with real-time CASA data, and it was good to have both available so I could show them that software for looking at radar data can actually be zippy.

– Having high-resolution data routinely available allows tornadoes to be discriminated based on reflectivity signatures. I believe this would be a relatively new concept in operations. The reflectivity “donut” associated with tornadoes that is seen in high-resolution research radars has been recognized for some years as verification of a tornado. “Donuts” or similar features were seen in all tornado cases available with CASA; such features are rarely seen in 88Ds due to their typically lower resolution, though with super-res data I suspect tornado reflectivity features are now seen more often in 88Ds. The TVS algorithm we currently use relies only on velocity information, and many forecasters do likewise; however, it is becoming clear that greatly improved detection can be achieved by considering both velocity and reflectivity signatures.

– Data overload. CASA radars give a volume scan every minute, there are four CASA radars to look at, and there are 2-D wind analyses and short-term forecasts to look at as well, in addition to all the usual things. It is very difficult to keep up with all these data sources and simultaneously make warning decisions. The data overload problem is recognized as an issue with many new data streams. Possible solutions include greatly improved algorithms to handle some or most of the analysis, and putting all the data from different sources into some sort of combined 4-D space that can be perused (similar to the FAA’s 4-D cube). With a 4-D cube concept, a short-term forecast can be combined with the data in the same 4-D space to show an extrapolation (similar to the warn-on-forecast concept).

– Using CASA radars did help quite a bit in issuing warnings, because of the improved resolution of features, the views closer to the ground, and the better time resolution. Having a dense network of CASA radars (with good software tools for analysis) would be quite an advance. Of course, doubling the density of the 88D network might achieve many of the same goals, and it is really a question of cost-effectiveness.

A couple of other things I learned on the trip:

– The MPAR (Multi-function Phased Array Radar) is scheduled for a large increase in funding next year. This is mostly to prove the concept of dual-pol phased array, which hasn’t been done before. A phased-array radar network is envisioned as a potential replacement for the 88D network. This one network would be used by multiple agencies, including the NWS, the FAA for air traffic control, and DHS. For this concept to be palatable to the NWS, the replacement for the 88D network would need to be at least close in performance to the current 88D network, and this includes dual-pol.

– NOAA is developing a roadmap for radar which extends through 2025. I suspect this is fairly fluid, but ideas include MPAR, gap-filling radars, and integrating private-sector radars (TV stations), as well as assimilating radar data for warn-on-forecast. The only thing really firm is the dual-pol deployment over the next 3 years.

Bill Martin (Science and Operations Officer, NWS Glasgow MT – 2010 Week 4 Evaluator)

Forecaster Thoughts – Ernie Ostuno (2010 Week 3 – PARISE)

First I want to say that my overall impression of PARISE 2010 is that it was a very well-run and enjoyable exercise. Seldom have I found simulated severe weather to be so much fun. 🙂

Here’s what I observed, and remembered most:

The main benefit of the PAR was the increased temporal resolution. This was most apparent in the Tropical Storm Erin case study, where small, rapidly evolving mesocyclones were sampled often enough to show the rapid increases in low-level rotation. In Michigan, we often see these types of mesos in the warm season and have trouble issuing warnings with any lead time on them. One issue that should be studied from a social science perspective is how the PAR data, particularly the increased temporal resolution, will affect warning decisions by forecasters who will be seeing detail in storm evolution that they are not familiar with. Will it increase lead times and false alarms? Can we measure this? Can we sufficiently train warning forecasters on the new data before PAR is fielded? I’m also concerned that we might be looking at case studies that were not fully investigated on the ground. Is it possible that some of these storms produced hail, wind, or even tornadoes that were not documented?

I noted a couple of PAR data quality issues. There was one case where sidelobe contamination masked the evolution of an outflow boundary. There were a few cases where improperly dealiased data masked a velocity couplet, but this also illustrated the importance of increased temporal resolution, since one bad scan meant a loss of only two minutes of the storm’s evolution, versus an equivalent 8- or 10-minute gap in the 88D data.

I understand that the PAR “library” of events is probably rather limited at this time, but I would like to see a case study of a line of convection with short, bowing segments and small, shallow, rapidly evolving circulations, which is one of our most common severe weather types in Michigan, especially in the cool season.

Let me end by saying thanks to all of you who were responsible for putting together such a great experience for me as a warning forecaster, and for all your efforts in seeking and documenting our feedback!

Ernie Ostuno (Lead Forecaster, NWS Grand Rapids MI – 2010 Week 3 Evaluator)
