Tornado Outbreak

I am posting late this week. It has been a wild ride in the HWT. The convection initiation desk has been active and Tuesday was no exception. The threat for a tornado outbreak was clear. The questions we faced for forecasting the initiation of storms were:
1. What time would the first storms form?
2. Where would they be?
3. How many episodes would there be?

This last question requires a little explanation. We always struggle with the criteria that denote convection initiation. Likewise, we struggle with how to define the multiple areas and multiple times at which deep moist convection initiates. This type of problem is “eliminated” when you issue a product for a long enough time period. Take the convective outlook, for example. Since the risk is defined for the entire convective day, you can account for the uncertainty in time by drawing a larger risk area and subsequently refining it. But as you narrow down your time window (from 1 day to 3 hours or even 1 hour), the problems can become significant.

In our case, the issue for the day was compounded because the dryline placement in the models was significantly east of the observed position by the time we started making our forecast. We attempted to account for this and, as such, had to adopt a feature-relative perspective of CI along the dryline. However, the mental picture you assemble of the CI process (location, timing, number of episodes, number of storms) is tied not just to the boundaries you are considering, but also to the presumed environment in which the storms will form.

The feature-relative environment, then, would necessarily be in error, because we simply do not have enough observations to correct for the model error. We did realize that the shallow moisture shown on the morning soundings was not going to be the environment in which our storms formed. Surface dew points were higher, holding near 68°F in the warm sector. We later confirmed this with soundings at LMN, which showed the moist layer deepening with time.

So we knew we had two areas of initial storm formation: one in the Oklahoma panhandle and into Kansas, along the cold front to the west and the triple point to the east; the other along the dryline in Oklahoma and Texas. We had to decide how far south storms would initiate. As we were figuring all of this out, we had to rely on current satellite imagery, since that was the only tool accounting for the correct dryline placement, and estimate how far east the dryline might travel, or mix out to, in order to make the forecast.

Sure enough, the warm sector had multiple cloud streets ahead of the dryline. Our 4-km model suite is not really capable of resolving cloud streets, but we still needed to make our forecast roughly 1-2 hours before CI. So in a sense we were not making a forecast so much as a longer, more uncertain nowcast (probably not abnormal given the inherent unpredictability of warm-season convection). Most people put the first storm in Kansas and would end up being quite accurate in placement. Some of us went ahead of the dryline in west-central Oklahoma and were also correct.

There was one more episode in southern Oklahoma and then another in Texas later on. This case will require some careful analysis to verify the forecast beyond subjective assessments. Today we got to see some of the potential objective methods via the DTC, which showed MODE plots of this case. The object identification of reflectivity via a neighborhood approach, along with the merging and matching of objects, was quite interesting and should foster vigorous discussion.
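For readers unfamiliar with the approach, here is a minimal sketch of that identification step, assuming a 2-D reflectivity grid; the kernel radius and dBZ threshold are illustrative assumptions, not the settings actually used in the experiment:

```python
import numpy as np
from scipy import ndimage

def identify_objects(refl, radius=5, threshold=40.0):
    """MODE-style object identification (simplified sketch).

    Smooth the reflectivity field with a circular neighborhood
    kernel, then threshold and label the contiguous regions.
    refl: 2-D reflectivity grid (dBZ); radius in grid points.
    NOTE: radius and threshold here are illustrative assumptions.
    """
    refl = np.asarray(refl, dtype=float)

    # Circular (disc) convolution kernel, normalized to sum to 1.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x**2 + y**2 <= radius**2).astype(float)
    kernel /= kernel.sum()

    # Neighborhood smoothing, then thresholding to define objects.
    smoothed = ndimage.convolve(refl, kernel, mode="constant")
    mask = smoothed >= threshold

    # Connected-component labeling: each object gets an integer id.
    labels, n_objects = ndimage.label(mask)
    return labels, n_objects
```

Merging and matching then operate on these labeled objects, comparing attributes such as centroid location, area, and orientation between the forecast and observed fields.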

Last but not least, the number of models we interrogated continued to increase, yet we felt confident in our understanding of this wide variety of models using all of the visualization tools, including the more rapid web-based plots and the sub-hourly convectively active fields. We are getting quite good at distilling information from this very large dataset. There are so many opportunities for quantifying model skill that we will be busy for a long time.

It was interesting to be under the threat of tornadoes and in their forecast path. It was quite a day, especially since the remnant of the hook echo moved over Norman, showering the area with debris picked up by the Goldsby tornado. The NWC was roughly 3-5 miles from that tornado’s dissipation point.

Tuesday

We talked about Monday evening’s weather in Montana and how the forecasts went.

The NMM had a high false alarm rate but was the only model to correctly predict the severe storm in Idaho. The ARW missed the storm by 200 km to the southeast. Another comparison we always make here at the HWT is the 0Z vs. the 12Z runs of the NMM. For this day, I think the 12Z NMM was much better than the 0Z. Steve thought it was “somewhat” better. The 12Z run had fewer false alarms in central Montana and captured the Idaho storm area better.

The probability matched mean is an interesting way of summarizing an ensemble of forecasts. It draws on the spread of the ensemble but keeps the sharpness of an individual run. I’m not sure, but I was told to check out Ebert/McBride for a reference on this.
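For reference, here is a minimal sketch of how a probability matched mean can be computed, assuming the members are gridded fields stacked in a NumPy array; this follows the commonly described recipe (assign the pooled, sorted member values onto the rank-ordered ensemble mean), not any particular operational implementation:

```python
import numpy as np

def probability_matched_mean(members):
    """Probability matched mean of an ensemble (sketch only).

    members: array of shape (n_members, ny, nx).
    The result keeps the spatial pattern of the ensemble mean
    but the amplitude distribution of the individual members.
    """
    n_mem, ny, nx = members.shape
    ens_mean = members.mean(axis=0)

    # Pool all member values, sort descending, and keep every
    # n_mem-th value so the pool has exactly ny*nx values.
    pooled = np.sort(members.ravel())[::-1][::n_mem]

    # Rank the ensemble-mean grid points from largest to smallest
    # and assign the pooled values in that same order.
    order = np.argsort(ens_mean.ravel())[::-1]
    pm = np.empty(ny * nx)
    pm[order] = pooled
    return pm.reshape(ny, nx)
```

The averaging step places precipitation where the ensemble agrees, while the reassignment step restores the realistic peak amplitudes that plain averaging smooths away.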

The Monday forecast was pretty insignificant, so we also talked about the high-end derecho of Friday, May 8. There were some differences between the 1-km and 4-km CAPS solutions for this event. (I forget what they were.)

David Ahijevych

Monday

Here’s my summary of yesterday’s (Monday’s) activity at the HWT.

This is a very quiet week for mid-May.

The DTC has been well-received. I presented Jamie’s verification .ppt and gave it away to several interested people. The need for objective verification is great: there are so many models and so little time to analyze everything after the fact. Mike Coniglio led two discussions of Friday’s MODE verification output. The CAPS model without radar data assimilation lagged behind the CAPS model with radar data assimilation. MMI was similar for the two models, but the MODE centroid distance was a distinguishing factor: the model that lagged behind had a greater centroid distance. This wouldn’t have been possible to quantify with conventional verification metrics.
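As a rough illustration of that attribute, here is a hypothetical sketch of the centroid-distance calculation, assuming labeled object grids from connected-component labeling; MODE’s actual interest value combines many such attributes (area ratio, overlap, orientation) via fuzzy logic:

```python
import numpy as np
from scipy import ndimage

def centroid_distances(fcst_labels, obs_labels):
    """Distance from each forecast object's centroid to the
    nearest observed object's centroid (simplified sketch;
    not MODE's actual code).

    fcst_labels/obs_labels: integer grids where 0 is background
    and 1..N label distinct objects. Distances in grid units.
    """
    fcst_centroids = ndimage.center_of_mass(
        (fcst_labels > 0).astype(float), fcst_labels,
        range(1, int(fcst_labels.max()) + 1))
    obs_centroids = ndimage.center_of_mass(
        (obs_labels > 0).astype(float), obs_labels,
        range(1, int(obs_labels.max()) + 1))
    if not obs_centroids:
        return []

    # Match each forecast object to its closest observed object.
    return [min(np.hypot(fy - oy, fx - ox) for oy, ox in obs_centroids)
            for fy, fx in fcst_centroids]
```

A systematic lag in one model shows up directly as larger centroid distances, which is exactly the kind of signal that conventional gridpoint metrics blur away.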

We also subjectively evaluated the Friday storms over the central U.S. The 0Z NMM had a false-alarm storm in the morning that disrupted the afternoon forecast. The simulated squall line was much weaker than the observed one. This was not as much of a problem with the NSSL model. The 12Z NMM was not a whole lot better with convective mode and individual storm evolution, but its 0-2 h and 6-12 h forecasts had better storm placement than the older 0Z NMM.

As an aside, ARW runs with Thompson microphysics have less intense simulated radar reflectivity than observed.

For Monday afternoon and evening’s severe weather forecast, we chose Billings, MT as the center point. It was the only place with a possibility of severe weather. We broke up into two teams and came up with a less-than-5% chance. Two actual reports fell northwest of our predicted zone, in northern Idaho. Radar indicated some small storms in our predicted zone.

Dave Ahijevych

Model Evaluation Tools

I would like to thank all of the HWT personnel for a fun and interesting week (May 10-15). The experience was well worth it. How quickly I (being in the research community) have lost touch with the daily challenges that an operational forecaster faces! It was good to get back to those roots with a little hand analysis of maps!

I would like to thank you for engaging with the DTC and helping us to evaluate MET/MODE during the Spring Experiment. It is great to have eyes on this on a daily basis, giving us good feedback on how the tools are performing. It seemed that while I was there, the participants were encouraged by the performance of MODE and its ability to capture objectively what forecasters felt subjectively. This is a great first step toward more meaningful forecast evaluations, which we hope will ultimately feed back into improved forecasts by removing systematic biases.

Please feel free to visit the DTC’s HWT page at: http://www.dtcenter.org/plots/hwt/

You were all great hosts. Thanks again!

Posted by Jamie W.