The SFE at the American Meteorological Society’s Annual Meeting

A view of Puget Sound from the Washington State Convention Center, home of the 2017 AMS Annual Meeting

During the week before last, over 4500 meteorologists convened in Seattle, Washington, for the 97th American Meteorological Society (AMS) annual meeting. As always, I left this meeting with a plethora of new ideas, enthusiasm for the field, and at least a dozen papers added to my to-read pile. I also noticed a number of talks that mentioned the Spring Forecasting Experiment (SFE), including results from past experiments and hints of what’s to come in SFE 2017.

Explicit mentions of the Spring Forecasting Experiment appeared in the abstracts and titles of four talks and four posters, and references to the Experiment were peppered throughout several other talks. Dr. Bill Lapenta, director of the National Centers for Environmental Prediction, even gave the SFE and CLUE a shout-out during his talk for approaching the ensemble configuration problem in a large-scale, methodical manner. Additionally, data from SFEs as far back as 2010 and 2011 were examined in a study by Eric Loken to determine whether computational resources should be directed toward refining the grid spacing of current models to finer than 4 km, or whether those resources would be better spent running additional ensemble members.

Adam Clark gave a rundown of the design of the Community Leveraged Unified Ensemble (CLUE) and preliminary results from SFE 2016, including results from the eight experiments contained within the CLUE. Comparisons between the ARW, NMMB, and mixed-core subsets of the ensemble showed that for updraft helicity (UH), the mixed-core subset had the highest ROC areas and Fractions Skill Scores, with no difference in reliability among the subsets. For quantitative precipitation forecasts (QPF), the ARW subset generally performed better than the NMMB subset, with a bias closer to 1.0 and higher equitable threat scores up to a threshold of approximately an inch. More results can be found in SFE 2016’s summary document. Steve Weiss gave a talk on the calibrated probabilistic hazard guidance used for the Day 1 individual hazard forecast process. These forecasts used a blend of the SREF for environmental information and the SSEO for explicit storm attributes to determine individual hazard threats. While I was unable to attend this talk at the time (too many interesting talks being given simultaneously!), I will be catching up on it once the recorded talks are released.
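
A quick aside on the QPF verification scores mentioned above: both frequency bias and the equitable threat score (ETS) come from a 2x2 contingency table of forecast versus observed exceedances of a threshold, such as an inch of precipitation. The sketch below shows how the two scores are computed; the counts are invented purely for illustration and have no connection to the actual CLUE verification.

```python
import numpy as np

def contingency_scores(hits, false_alarms, misses, correct_negatives):
    """Frequency bias and equitable threat score (ETS) from a 2x2 contingency table."""
    total = hits + false_alarms + misses + correct_negatives
    # Frequency bias: ratio of forecast "yes" points to observed "yes" points
    bias = (hits + false_alarms) / (hits + misses)
    # Hits expected by random chance, given the forecast and observed frequencies
    hits_random = (hits + false_alarms) * (hits + misses) / total
    # ETS: hits above chance, relative to all non-trivial outcomes
    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    return bias, ets

# Illustrative (made-up) counts of grid points exceeding a 1-inch QPF threshold
bias, ets = contingency_scores(hits=120, false_alarms=80, misses=60, correct_negatives=9740)
print(f"bias = {bias:.2f}, ETS = {ets:.2f}")
```

A bias above 1.0 means the threshold was forecast at more points than it was observed, which is why a bias closer to 1.0 is one sign of a better-behaved QPF ensemble.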

The experimental hail size guidance evaluated during SFE 2016 also had a presence at the AMS annual meeting, with both a talk and a poster focusing on the hail forecasting methods examined in the experiment. Rebecca Adams-Selin gave a talk about HAILCAST, a one-dimensional hail growth model integrated into WRF-ARW. HAILCAST has been coupled with models running in the SFE since 2014, and each year improvements have been made to more accurately forecast hail size at the ground. These improvements have also incorporated more realism into the model, such as accounting for wet and dry growth regimes. The machine-learning hail forecasting method tested in SFE 2016 was the subject of a poster by Joseph Nardi, which compared the three hail forecasting methods present in the experiment. A case study from SFE 2016 demonstrated not only the performance of the forecasts, but also how the forecasts appeared to participants on a daily basis.
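
The poster describes the actual method, but as a rough, hypothetical illustration of the general idea behind machine-learning hail guidance (fitting a statistical model that maps storm-object attributes from model output to hail sizes), here is a minimal sketch using scikit-learn. The feature names and data are invented and do not represent the SFE 2016 configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_storms = 500

# Hypothetical storm-object attributes extracted from model output (invented names and values)
X = np.column_stack([
    rng.uniform(25, 300, n_storms),   # max 2-5 km updraft helicity (m^2/s^2)
    rng.uniform(5, 60, n_storms),     # max updraft speed (m/s)
    rng.uniform(40, 70, n_storms),    # max simulated reflectivity (dBZ)
])

# Synthetic "observed" hail sizes (inches), loosely tied to the features for demonstration only
y = 0.25 + 0.004 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 0.2, n_storms)

# Train on most storms, predict on the rest
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], y[:400])
predicted = model.predict(X[400:])
print("example predicted hail sizes (in):", np.round(predicted[:5], 2))
```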

Initial results from the sensitivity study performed by Brian Ancell and Brock Burghardt during SFE 2016 were also the subject of a talk, describing both the process of producing the analyses and the subjective participant feedback collected approximately once a week. Preliminary results indicate that upper-air dynamical fields, such as 500 hPa heights, were generally more useful than those at lower levels, which were occasionally quite noisy. Two detailed cases can be found in the preliminary findings report. Forecasters liked having the sensitivity fields available to them, but there was a learning curve to their interpretation, and forecasters didn’t think the fields were operationally useful in their current format. As a result, work is underway to explore using the sensitivity fields to prune erroneous members from a large ensemble, hopefully leading to a more accurate ensemble for the operational forecaster.
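
The talk covered far more detail than this, but the core calculation behind ensemble sensitivity is typically a linear regression of a scalar forecast response onto an earlier-time model field across the ensemble members. The sketch below uses invented arrays in place of real model output and is only meant to show the general form of that computation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, ny, nx = 50, 40, 60

# Invented ensemble data: an earlier-time field (e.g., 500-hPa height) for each member,
# and a scalar forecast response J per member (e.g., area-summed updraft helicity later on)
field = rng.normal(5700.0, 30.0, size=(n_members, ny, nx))
J = 0.5 * field[:, 20, 30] + rng.normal(0.0, 10.0, size=n_members)

# Ensemble sensitivity: regression slope dJ/dx at each grid point, i.e. the covariance
# of J with the field divided by the field's variance across the members
field_anom = field - field.mean(axis=0)
J_anom = J - J.mean()
cov = (J_anom[:, None, None] * field_anom).mean(axis=0)
var = field_anom.var(axis=0)
sensitivity = cov / var  # units: response units per field unit

print("peak absolute sensitivity:", float(np.abs(sensitivity).max()))
```

Regions of large sensitivity point to where errors in the earlier field most strongly affect the later forecast, which is the basis for ideas like pruning ensemble members whose analyses look wrong in those regions.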

The AMS annual meeting also held some sneak peeks at SFE 2017. For example, the CLUE is planned to grow beyond its 2016 size of 64 members, allowing for even more experimentation, and collaboration on those members has already begun. Additionally, Montgomery Flora presented a poster looking at the NSSL Experimental Warn-on-Forecast System for Ensembles (NEWS-e), which occasionally ran in the afternoons during SFE 2016. The study used several cases from SFE 2016 to try to isolate the effect of initial condition uncertainty on supercell forecasts. While NEWS-e was not a formal part of SFE 2016, it was discussed in the 2016 Operations Plan and may play a larger role in SFE 2017. Another poster, by Christopher Melick, described a technique that may help determine convective mode in the presence of UH, which would be a great boon to determining individual hazard threats. This work may also eventually be featured in the Spring Forecasting Experiment. Work related to the SFE goes on year-round, developing new techniques and learning from prior experiments, and these two studies exemplify the type of exploratory science that is inspired by and enriches the SFE.

Finally, I should note that all of the talks discussed herein were recorded and will be available to the public sometime in late February. Stay tuned, and start getting excited for the 2017 Spring Forecasting Experiment!

While we don’t see lenticular clouds like these during the SFEs, the unique meteorology of the Pacific Northwest was a treat for those at the conference