The Denver Hailstorm, 8 May 2017

If you have an interest in severe and unusual weather, you probably already know all about the hailstorm that struck Denver on Monday afternoon, shattering windows and damaging vehicles and roofs across the metro. Indeed, it made for quite the exciting Monday in the Spring Forecasting Experiment.

During the morning forecast discussion, participants noted that good forcing was present over Colorado, along with dewpoints considered sufficient for severe convection by Colorado standards (in the 50s). The moisture was modified Gulf moisture, arriving in Colorado by way of the Rio Grande thanks to the surface front that was the focus of most of last week’s severe convection. Also noted was the unidirectional shear, as seen on the 1200 UTC (7:00 AM CDT) hodograph from Albuquerque, which was upstream of Denver at 250 mb and 500 mb.


Verification Determination

Verification is a huge part of the Spring Forecasting Experiment. Each day, we make multiple forecasts on different time scales (this year ranging from daylong outlooks to hourly probabilistic forecasts), and the first activity participants undertake on Tuesday through Friday is an evaluation of the previous day’s forecasts. Additionally, in the afternoon, participants evaluate numerical guidance by comparing model output to observations.

Selecting how to use observations for verifying some of the more nebulous aspects of severe convective weather is one of the challenges of designing the SFE. With some fields, it is easy enough to compare the simulated with the observed – take reflectivity, for example:


Snow Forecasting Experiment??

Strange considerations can crop up in the SFE. In previous years we have forecast in areas of low radar coverage such as the mountain west, determined which side of the U.S.-Mexico border a storm would form on, and dealt with the severity of convection coming onshore from the Gulf. However, remnants of last weekend’s storm threw a highly unusual wrinkle into the forecast….


CLUEing in on Spring Forecasting Experiment 2017

It’s nearly the beginning of May (even if it doesn’t feel like it in Norman, OK, with a current windchill of 38°F!) and that means that another Spring Forecasting Experiment is about to be underway. This year the Community Leveraged Unified Ensemble (CLUE) is even more vast than last year’s, comprising 81 members from organizations such as NSSL, CAPS, OU, NOAA’s Earth Systems Research Laboratory/Global Systems Division (ESRL/GSD), NCAR, and GFDL. These members will provide forecasts 36 h to 60 h in length, depending on the subset of the ensemble being considered.


The SFE at the American Meteorological Society’s Annual Meeting

A view of Puget Sound from the Washington State Convention Center, home of the 2017 AMS Annual Meeting

During the week before last, over 4500 meteorologists convened in Seattle, Washington for the 97th American Meteorological Society (AMS) annual meeting. As always, I left this meeting with a plethora of new ideas, enthusiasm for the field, and at least a dozen papers added to my to-read pile. However, I also noticed a number of talks which mentioned the Spring Forecasting Experiment, including results from past experiments and hints of what’s to come in SFE 2017.


A Late November Outbreak

Greetings from the off-season!

While SFE 2017 (!) is a ways off yet, preparations are already underway for many of the collaborators that provide products to the experiment. Development of the ensembles and guidance tested in the SFEs often occurs across a number of years, as tweaks suggested by prior experiments are implemented alongside new product development.

For example, in SFE 2015 four sets of tornado probabilities were evaluated. While all of the probabilities used 2-5 km updraft helicity (UH) from the NSSL-WRF ensemble, they differed in the environmental criteria used to filter the UH (i.e., if a simulated storm from a member was moving into an unfavorable environment, it was less likely to form a tornado and therefore the ensemble probabilities were lowered). These probabilities showed an overforecasting bias in the seasonally aggregated statistics, and the bias was consequential enough to be noted in subjective participant evaluations. The most typical rating for the probabilities was a 5 or 6 on a scale of 1-10, leaving much room for improvement.
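The member-fraction probability described above can be sketched in a few lines. This is a hypothetical illustration, not the SFE's actual code: the function name, array layout, and the UH threshold of 75 m²/s² are all assumptions made for the example.

```python
import numpy as np

def uh_probability(member_uh, threshold=75.0):
    """Fraction of ensemble members whose 2-5 km updraft helicity
    (m^2/s^2) exceeds a threshold at each grid point.

    member_uh: array of shape (n_members, ny, nx)
    threshold: illustrative UH value, not an official SFE setting
    """
    exceed = member_uh >= threshold   # boolean flag per member, per point
    return exceed.mean(axis=0)        # member fraction -> probability

# 10 members at a single grid point; 6 exceed the threshold -> 0.6
uh = np.array([80, 90, 76, 100, 85, 77, 10, 5, 0, 50],
              dtype=float).reshape(10, 1, 1)
print(uh_probability(uh))
```

Environmental filtering would then reduce these fractions wherever a member's storm moves into an unfavorable environment.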

To improve these tornado probabilities, a set of climatological tornado frequencies, given a right-moving supercell and a significant tornado parameter (STP) value, as calculated by forecasters at the SPC, were brought to bear on the problem. Applying the climatological frequencies grounds the probabilities in reality. With the prior probabilities, if 6 of 10 ensemble members had a simulated storm passing over the same spot, the forecast probability would be 60%. The updated probabilities instead consider the magnitude of the STP in the environment each member’s storm is moving into. For example, if a member’s storm is moving into an environment with an STP of 2.0, that member is assigned the climatological frequency of a storm producing a tornado in that situation. The probabilities are then averaged across all members. Assuming that 6 of 10 members have the storm moving into an environment with STP = 2.0, the probability would be 60% × the climatological frequency of a tornado given STP = 2.0. This approach lowers the probabilities, and thus reduces the overforecasting.
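The averaging step above can be sketched as follows. The climatology function here is an invented monotone curve, not the SPC's actual frequency table, and the function names are hypothetical; the point is only to show how the climatological frequency caps each member's contribution.

```python
import numpy as np

def climo_tornado_freq(stp):
    """Illustrative stand-in for the SPC climatological frequency of a
    tornado given a right-moving supercell and an STP value. The real
    relationship comes from SPC forecasters, not this toy curve."""
    return np.clip(0.1 * np.asarray(stp, dtype=float), 0.0, 1.0)

def stp_based_probability(member_has_storm, member_stp):
    """Average over members: each member with a simulated storm
    contributes the climatological tornado frequency for the STP its
    storm is moving into; members without a storm contribute zero."""
    contribs = np.where(member_has_storm,
                        climo_tornado_freq(member_stp), 0.0)
    return contribs.mean(axis=0)

# 6 of 10 members have a storm moving into STP = 2.0:
has_storm = np.array([True] * 6 + [False] * 4)
stp = np.full(10, 2.0)
# result is 0.6 * climo_tornado_freq(2.0), i.e. lower than the raw 60%
print(stp_based_probability(has_storm, stp))
```

Because every member's contribution is bounded by the climatological frequency, the averaged probability can never exceed it, which is exactly the deflation effect described above.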

The new set of probabilities will be tested in SFE 2017. However, these probabilities have been worked on for over a year, and are already available daily on the NSSL-WRF ensemble’s website.

While the statistics for all of the tornado probabilities discussed herein were aggregated over the peak of tornado season (i.e., April-June), the end of November 2016 brought tornadoes to the southeastern United States, and with them, the chance to test the new probabilities. We’ll focus specifically on 29 November 2016, a day that saw 44 filtered tornado local storm reports (LSRs):

The Storm Prediction Center had a good handle on this scenario, showcasing the potential for severe weather across some of the affected region four days in advance. At 0600 UTC on the day of the event, their “enhanced” area covered much of the hardest-hit areas, with the axis of the outlook a bit skewed from the axis of the LSRs. The outlook and LSRs are shown below.

The 0600 UTC outlook is shown here because that is when the probabilities described above become available – our hope is that someday forecasters can use these probabilities as a “first guess” that condenses multiple severe storm parameters from the ensemble into one graphic. The SPC’s probabilistic tornado forecast from 0600 UTC encompassed all of the tornado reports, but was a bit too far west initially. Ideally, the ensemble tornado forecasts would resemble the SPC’s forecast:

When we consider the UH-based probabilities, there’s a pocket of high probabilities, between 25% and 30%, in an area close to the highest density of tornado reports. However, not all of the reports are encompassed by the probabilities, and there is an extraneous blob of 5% risk over the DC/Maryland area. The 10% corridor of the probabilities extends further north than the SPC’s, but overall this was a decent forecast, if a bit high in that “bulls-eye” of probabilities.

Let’s compare this to the STP-based probabilities:

These probabilities have a much lower magnitude, but still encompass most of the tornado reports within the 10% contour. The 2% contour also extends westward into Louisiana, capturing the tornado report that the prior probabilities missed. Overall, this forecast is more like the SPC’s outlook, and better reflects what happened on the 29th.

Will we see the same trends into the spring? Aggregated seasonal statistics from spring 2014-2015 seem to suggest yes. However, the opportunity to get participant reflection and evaluation on these probabilities and this methodology awaits – and I, for one, am excited to see what new insights our participants will bring.


SFE 2016 Wrap Up

Well, last week concluded SFE 2016. This season was a particularly interesting one. While we always deal with some marginal cases and mesoscale forcing as the mechanism for severe convection, this year seemed to feature many of those cases. Many days throughout the experiment were a bit difficult to forecast conceptually, even the high-end days such as 26 May. While the full-period forecasts were easier, breaking the full period into specific four-hour chunks proved challenging: those forecasts had to capture both the initiation (or intensification, if convection was ongoing) of severe storms and the motion and evolution of those storms (i.e., would supercells form and merge into an MCS? Would morning convection reintensify?). Each of those elements is a forecast challenge on its own, but we combined them into one.

In a way, it’s ideal that we faced so many of these environments. We’ve seen in past SFEs that when the CAMs are strongly forced, they often do quite well at pinpointing the location and intensity of severe convection. Where do they have the most difficulty? Under weaker forcing, when remnant outflow boundaries and mesoscale details have a large influence on the day’s convection. To have a 65-member CAM ensemble in the CLUE operating during these environments may give us unparalleled insight into what CAM ensemble design characteristics perform best under uncertain circumstances, and can augment the deterministic guidance that is already operational. While we may have come into most days looking at only a small area where CAPE, shear, and a lifting mechanism were present, this set of days will provide us with many case studies of realistic, less-than-ideal circumstances.

As always, a huge thanks goes out to our participants, who hailed from multiple countries and states. We gathered a number of subjective impressions from these participants on various subsets of the CLUE, illustrating forecaster and researcher insights about how these CAMs may best be applied. In the case of the isochrones, this year’s comments will help design a better, more user-friendly product and introduction to the concept for next year.

Two great challenges lie ahead: the verification and analysis of the massive amount of data generated and collected during SFE 2016, and the planning of SFE 2017. Such is the cycle of an annual experiment – the work is never done. Onward!


Chopping the FAR

Afternoons in the SFE are composed of three main parts: A Day 2 forecast, evaluations of various aspects of the CAMs, and updates to the morning forecasts. Sometimes, very little new information contributes to these updates, particularly if convection has not initiated by the time of the update. Other days, convective initiation or intensification has occurred, and we have a much better concept of how the convection will evolve. Yesterday was an excellent example of how the afternoon updates can improve upon the morning forecasts, once we get a sense of the evolution.
