Here’s my summary of yesterday’s (Monday’s) activity at the HWT.

This is a very quiet week for mid-May.

The DTC has been well-received. I presented Jamie’s verification .ppt and gave it away to several interested people. The need for objective verification is great: there are so many models and so little time to analyze everything after the fact. Mike Coniglio led two discussions of Friday’s MODE verification output. The CAPS model without radar data assimilation lagged behind the CAPS model with radar data assimilation. MMI was similar for the two models, but the MODE centroid distance was a distinguishing factor: the model that lagged behind had greater centroid distance. This wouldn’t have been possible to quantify with conventional verification metrics.
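For readers unfamiliar with object-based verification, the centroid-distance idea can be sketched in a few lines. This is only an illustrative toy, not the actual MODE implementation in MET (which first smooths, thresholds, and matches objects); the function names and the 4-km grid spacing here are my own assumptions.

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary reflectivity object, in grid points."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def centroid_distance(fcst_mask, obs_mask, grid_km=4.0):
    """Distance in km between forecast and observed object centroids."""
    fr, fc = centroid(fcst_mask)
    orow, ocol = centroid(obs_mask)
    return grid_km * np.hypot(fr - orow, fc - ocol)

# Toy case: the forecast object is displaced 5 grid points east of the observed one
obs = np.zeros((50, 50), dtype=bool)
obs[20:25, 10:15] = True
fcst = np.zeros((50, 50), dtype=bool)
fcst[20:25, 15:20] = True
print(centroid_distance(fcst, obs))  # 20.0 km displacement on a 4-km grid
```

A conventional gridpoint score would count this displaced-but-correct storm as a double penalty (miss plus false alarm); the centroid distance quantifies the displacement directly.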

We also subjectively evaluated the Friday storms over the central U.S. The 0Z NMM had a false-alarm storm in the morning that disrupted the afternoon forecast. The simulated squall line was much weaker than observed. This was not as much of a problem with the NSSL model. The 12Z NMM was not a whole lot better with convective mode and individual storm evolution, but its 0-2 h and 6-12 h forecasts had better storm placement than the older 0Z NMM.

As an aside, ARW runs with Thompson microphysics have less intense simulated radar reflectivity than observed.

For Monday afternoon and evening’s severe weather forecast, we chose Billings, MT as the center point. It was the only place to have the possibility of severe weather. We broke up into 2 teams and came up with a less than 5% chance. Two actual reports were northwest of our predicted zone in northern Idaho. Radar indicated some small storms in our predicted zone.

Dave Ahijevych

Model Evaluation Tools

I would like to thank all of the HWT personnel for a fun and interesting week – May 10-15. The experience was well worth it. How quickly I (being in the research community) have lost touch with the daily challenges that an operational forecaster faces. It was good to get back to those roots with a little hand analysis of maps!

I would like to thank you for engaging with the DTC and helping us to evaluate MET/MODE during the Spring Experiment. It is great to have eyes looking at this on a daily basis to give us some good feedback on how the tools are performing. It seemed that while I was there the participants were encouraged by the performance of MODE and its ability to capture objectively what forecasters felt subjectively. This is a great first step toward more meaningful forecast evaluations, which we hope will ultimately feed back to improve overall forecasts by removing systematic biases.

Please feel free to visit the DTC’s HWT page at:

You were all great hosts. Thanks again!

Posted by Jamie W.

Recap of Week 2 from a Forecaster’s Perspective

After spending a week at the HWT, I must say I’m encouraged to see how far the NWP world has come in recent years. For instance, in an effort to keep my mind occupied on my flight to Norman last week, I thought it would be neat to read a paper produced by the SPC on the Super Tornado Outbreak of 1974. If I remember correctly, the old LFM model had a grid spacing of 190.5 km! After reading this and then coming to the HWT and seeing model output on a scale as fine as 1 km, I was absolutely amazed. This is a testament to all the model developers out there who work diligently on a daily basis to produce better models for forecasters in the field. If nothing else, the HWT opportunity made me realize and appreciate the efforts of the model developers more so than I had ever done previously.

Although these models can provide increased guidance for basic severe wx forecasting, such as convective mode and intensity, the models only show output (simulated refl, updraft helicity, etc.) on a very small scale. If taken at face value, critical forecasting decisions can be made without an adequate handle on the overall synoptic and mesoscale pattern. Thus, even with all the high-resolution model output, one must still interrogate the atmosphere using a forecast-funnel methodology in an effort to develop a convective-mode framework to work from. Sadly, if high-resolution model output is taken at face value without any “behind the scenes” work beforehand, I can see many blown/missed forecasts, as forecasters would be forecasting “blind.” Many factors must be taken into account when developing a convective forecast, and unfortunately just looking at the new high-res model output will likely lead to more questions than answers. To answer these questions, a detailed analysis done beforehand can allow one to see why a particular model may be producing one thing as opposed to another. Looking back at some of the old severe wx forecasting handbooks, one thing remains clear: much can be gained on the developing synoptic/mesoscale patterns through pattern recognition. Some of the old bow echo/derecho papers (Johns and Hirt, 1987) and a whole list of others have reiterated the fact that much can be gained by recognizing the overall synoptic pattern. How many times last week were the models producing a bow-type signature during the overnight hours? Situations like these commonly need deep vertical shear, and unfortunately not much shear was available for organized cold pools when the H50 flow was only 5-10 knots. This is just one instance where having a good conceptual model in the back of your mind can assist in the forecasting process.

As for the models, more often than not, I was pleased by the 4-km AFWA runs. For the activity that developed on Tuesday (05/12), the 00/12 UTC AFWA runs had a better handle on the low-level moisture intrusion up the Palo Duro Canyon just SE of AMA. A supercell resulted, which led to several wind/hail reports. A look back at the Practically Perfect Forecast based on updraft helicity the following day had a bullseye centered over the area based on the AFWA output. This is more than likely a testament to different initial conditions, as the AFWA run utilizes the NASA LIS data. This can pay huge dividends for offices along the TX Caprock, where these low-level moisture intrusions have been documented to assist in tornadogenesis across the canyon locations, along with a backed wind profile (meso-low formation).

Posted by Chris G.

Spring Experiment Week 3 Participants

The Spring Experiment organizers would like to welcome the following participants to Week 3 of the 2009 NSSL/SPC Spring Experiment:

Dave Ahijevych (NCAR/DTC, Boulder, CO)
Lance Bosart and Tom Galarneau (University at Albany-SUNY)
Geoff Manikin (NOAA/NWS/NCEP EMC, Camp Springs, MD)
Morris Weisman (NCAR, Boulder, CO)
Jon Zeitler (NOAA/NWS San Antonio/Austin, TX)

Anatomy of a Well Forecast Bow Echo, Part II

A Cautionary Note about Deterministic Guidance from High-Resolution NWP Models (posted by GregC on behalf of David Bright).

Figure 1. 13-hour WRF-NMM forecast of simulated reflectivity (1 km AGL) valid at 1300 UTC 8 May 2009 (left), and verifying observed base reflectivity and severe thunderstorm warning polygons valid at 1300 UTC 8 May 2009 (right). [Image not found]

The 13-hour WRF-NMM forecast of the Missouri Bow Echo (see earlier post with this title) is remarkable in both its accuracy and structure, particularly given the severity of the event. As model forecasts go, it appears to be a perfect piece of numerical guidance. But shifting the grid about 450 miles to the east, the exact same 13-hour WRF-NMM completely missed the MCS (albeit less severe) moving through eastern Tennessee. So while there is little doubt that the model provided an essentially perfect prediction of the intense bow echo over southern Missouri, in a purely deterministic sense, the same model provided little-to-no short-term convective guidance with respect to convective mode and QPF over much of Tennessee.

The information provided by these high-resolution NWP models is revolutionary, and will likely lead to a quantum increase in high-impact services provided by the NWS. But let’s be careful not to oversell the capabilities of a single, deterministic model forecast. In order to fully realize the potential of future NWS forecasting and warning services, an ensemble of convective-resolving models will be required to address the uncertainty that accompanies all weather forecasts. The HWT has evaluated convective-resolving models over a large portion of the CONUS for the past several years, and it is encouraging to see the improvements these models have made in high-impact convective guidance and in their ability to predict intense, realistic convective structures such as the bow echo over southern Missouri. But a single high-resolution NWP forecast, regardless of its ability to reproduce intense convective structures, is unlikely to meet the future uncertainty requirements of the entire NWS at all times and locations. That said, the development and evaluation of these convection resolving models is and will continue to be an essential part of future high-impact, life saving, decision support services provided by the NWS, likely realized through a blend of deterministic guidance, well constructed ensemble systems, and related ensemble interrogation tools.

Forecast experiment–May 12 thoughts

The forecast experiment centered on ABI (Abilene, TX) today to catch development along the dryline and other moisture gradients, including a warm front eroding the morning’s stratus northward. After analyzing and discussing the 12Z sounding plan views, we had about an hour on the schedule to create the 20-00 UTC and 00-04 UTC forecasts. It turns out we took a bit more than 90 minutes, since there were three scenarios we considered: (a) convective initiation over the mountainous terrain of southwest Texas, and its coverage and mode as forcing moved east; (b) the same initiation, coverage, and mode problem but over the South Plains and panhandle of Texas and extreme southwest Oklahoma; and (c) what to make of some model members’ convection forecasts over southeastern Texas.

The forecast team I participated in wrestled with forcing mechanisms, since the shear profiles were less than robust over the central and southern portions of west Texas but looked to be favorable in the panhandle, South Plains, and extreme southwest Oklahoma. We (there were six of us on the team today) settled on two initiation scenarios: the first in southwest Texas in the 20-21 UTC time frame as a shortwave moved toward El Paso, and a second in the part of Texas between Amarillo and Lubbock as a second jet streak moved into the area. The third area was discounted based on standard theories of organized severe convection.

For ensemble displays, we used

  1. The probability of 40 dBZ reflectivities. This helped focus our attention on the areas of concern and timing scenarios. This is probably the quickest way to assess the result of each model’s integration rather than interrogating multiple plan views, soundings, and postage stamp images of significant fields.
  2. Spaghetti outlines of 40 dBZ model-derived reflectivity. This is a noisy but useful depiction.
  3. Max reflectivity from the ensemble members to assess, in a very rough manner, storm intensity, and
  4. Max updraft helicity to assess the likelihood of severe weather.
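Displays 1, 3, and 4 above are all simple reductions across the member dimension of the ensemble. A minimal sketch of the exceedance-probability and ensemble-max computations (the toy arrays and function names are my own; this is not the actual HWT display code):

```python
import numpy as np

def exceedance_probability(members, threshold=40.0):
    """Fraction of members with reflectivity >= threshold (dBZ) at each grid point."""
    return (members >= threshold).mean(axis=0)

def ensemble_max(members):
    """Pointwise maximum over members -- a rough envelope of storm intensity."""
    return members.max(axis=0)

# Toy 3-member "ensemble" of simulated reflectivity (dBZ) on a 2x2 grid
members = np.array([
    [[45.0, 10.0], [30.0, 55.0]],
    [[42.0, 20.0], [10.0, 50.0]],
    [[10.0, 15.0], [20.0, 60.0]],
])  # shape (3, 2, 2): member, y, x

prob = exceedance_probability(members)  # 2/3 at (0,0), 0 at (0,1) and (1,0), 1.0 at (1,1)
```

Because each field is reduced over the member axis only, the result stays on the model grid and can be contoured directly, which is why it is a quick first look compared with paging through per-member plan views.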

From these we went to individual high-res (1-4 km) models and the observations to modify the threat area. It’s interesting to note that most participants prefer to use a single high-res model or a familiar lower-res one (e.g., 12 km WRF-NMM) to assess potential mode and then use the ensembles to place mental “possibilities” around what I’d call the “individual’s most probable” forecast.

The deterministic and ensemble runs suggested that the southern storms would likely be isolated and probably end shortly after 01 UTC. In the north we believed more organized convection, possibly a couple of clusters, was likely due to the presence of better 0-6 km shear. There was some question as to how far east the convection would progress by 04 UTC, with the 1 km models suggesting propagation as far east as I-35. The operational SPC forecaster acting as our team’s guide tempered our enthusiasm with a little climatology, so the east edge of our forecast area was kept a little west (upstream) of the most aggressive model’s 04 UTC position for convection.

We believed there was a significant hail threat over this area given the shear, mid-level lapse rates, and NAM-KF model soundings suggesting analogs of 2+” hail cases. SPC’s significant hail parameter on its mesoanalysis page also centered a threat in this area.

As I write this, we were not far enough west with our initiation and the severe threat is continuing a bit north of the area we anticipated. Tomorrow’s review ought to be quite enlightening, assuming we can keep our brains focused on the review and not on Wednesday’s expected event!

— Bruce E, forecasting on the West team today

Anatomy of a Well Forecast Bow Echo


Above is an example of one of the forecasts from the Spring Experiment models from Friday. This bow echo moved across southwest Missouri early Friday morning and these images are centered on Joplin, MO (JLN). On the left is the 13h forecast from the WRF-NMM 4km model initialized at 00Z 08-May-2009 and valid at 13Z. On the right is the verifying 1km base reflectivity image with the model fields for winds overlaid on the radar. The barbs in each of the images are the model’s instantaneous 10m winds in knots (with the grid skipped to lessen the clutter). The isotachs are plotted from the WRF “history variables” for maximum U,V 10m winds (no grid skip). These are the maximum 10m wind speeds in the model over the past hour ending at 13Z.

Instantaneous 10m winds in the model at 13z, near the rotating bow head, are at least 50 knots. The maximum model 10m winds over the past hour range from 60-70 knots near and north of the weak echo channel and around the comma-head of the bow.

This was only one of several exceptional forecasts of this feature from the models being evaluated in this year’s Spring Experiment. To see more output on this case and more, check out the Spring Program website here:


WRF/CAPS Reflectivity Loops on V2 Domain

Plots of WRF/CAPS 1km AGL simulated reflectivity are now available using “V2” as the centerpoint in the URL. The following V2 domain image combinations should work as of today’s 00Z and 12Z model runs:

At 00Z:

At 12Z, you can compare prior runs:
NMM4 / NMM4_12 / AFWA4 / AFWA4_12
CAPS1 / NCAR3 / NCAR3_12 / NMM4_12

Mix and match the models to build the loop you need based on the URLs above. You can also add &date=YYYYmmdd to review another date.

The 00Z NCAR run goes out to forecast hour 48, while the other 00Z runs go to forecast hour 36. The loop URLs above have a start hour of 12Z and go out 24 h. This can be adjusted by using the &starthr and &frames variables in the URL.

The 12Z NMM goes to forecast hour 36, while the NCAR and AFWA versions of the WRF only go out 24 h. These loops also start at 12Z and go out 24 h. Note that some imagery will drop out when looping beyond the available forecasts from that model. Imagery may also be unavailable if model data fails to arrive prior to the execution of the cron job for the image generation script. These loop pages should not be considered operational and will not always be available.
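Putting the query variables above together, a loop URL can be assembled programmatically. The base path and the name of the model-list parameter below are placeholders of my own (the original links have not survived in this post); only &date, &starthr, and &frames come from the text above.

```python
from urllib.parse import urlencode

# Placeholder base URL -- the real loop-page address from the post is not preserved here.
BASE = "https://example.invalid/looper"

def loop_url(models, date="20090512", starthr=12, frames=24):
    """Build a thumbnail-loop URL from the query variables described above."""
    params = {
        "models": ",".join(models),  # hypothetical name for the model-list parameter
        "date": date,                # &date=YYYYmmdd, per the post
        "starthr": starthr,          # loop start hour
        "frames": frames,            # number of hourly frames
    }
    return BASE + "?" + urlencode(params)

url = loop_url(["NMM4", "NMM4_12", "AFWA4", "AFWA4_12"])
```

Encoding the parameters this way (rather than string concatenation) keeps the model list and dates safely escaped as the loops are mixed and matched.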

Thanks to Ryan for building such an adaptable web-based interface that allows us to build these thumbnail image loops!