Cells in Western Kansas are Moisture Starved

The cells that have been trying to initiate since 2 PM have struggled, with only weak reflectivity returns noted. Instability is available for convective updrafts, as shown by the GOES Theta-E Difference product in the bottom-right panel of the image below; the blue colors signify an unstable airmass. Some other inhibiting factor is preventing these updrafts from organizing further, most likely poor moisture at the present time. The GOES Vertical PW product shows little change in moisture between the low levels and the mid levels of the atmosphere in western Kansas, evident from the light pink/purple colors in the upper-right panel. As these cells move east over the next 2-3 hours, they will encounter low-level air with a bit more moisture available, evident from the darker pink/purple colors in the product. This could have an impact on the cells, and we could see the storms become better sustained.

moisture

Hampshire


EWP2013 7 May 2013 1930 UTC Mesoscale Discussion

Broad upper ridging resides over the Southern Plains, with cyclonic flow on both the East and West Coasts. In the wake of a large upper low currently positioned over the Mid-Atlantic, robust moisture has struggled to return, yielding less-than-stellar severe parameters. Still, the presence of low- to mid-50s dewpoints combined with favorable wind fields will yield a chance of a few severe storms, mainly in the form of high-based multicell and supercell structures.

A subtle surface trough/boundary extends from the higher terrain of eastern Colorado eastward into western Kansas, with a pronounced dryline over far western KS extending down into much of western and central TX. In the presence of appreciable daytime heating, these boundaries are expected to serve as a focus for convective development from now through around 22-23Z. Greater storm coverage is expected farther north, within the area of backed surface flow near the subtle trough/dryline intersection; this corresponds to the DDC and GLD CWAs. Farther south, increasing mid-level heights will tend to limit convection, though isolated to widely scattered thunderstorms may develop by around 23Z to 00Z. Large hail, some very large, and damaging winds will be the primary hazards, with significant wind gusts possible due to inverted-V type soundings within the meager low-level moisture and deeply mixed atmosphere. Tornadoes do not appear to be much of a concern, but if one were to occur, it would be most likely nearer the backed winds over northern/western KS in GLD's forecast area, though the lack of wind shear, both in the low levels and aloft, may tend to mitigate this potential.

Mesoscale models (WRF-ARW and WRF-NMM) are in good agreement in developing isolated to scattered supercells across much of western KS, as can be seen below. Lesser activity can be noted in the TX panhandles and points southward.

highresmods_050713

Likewise, the CIRA NSSL-WRF simulated IR satellite imagery (upper-left panel below) depicts thunderstorm development by around 21Z over much of the DDC/GLD/AMA CWAs. One difference is the more widespread nature of thunderstorms farther south, which the above high-res runs do not support. The OUN WRF simulated reflectivity (lower-left panel) also supports this more active solution. At the moment, it appears to be overconvecting somewhat, especially considering the lack of strongly backed sfc flow and increasing heights aloft.

4paneloun+cira

In the next few hours, storms are expected to develop first over west KS, then, in a more isolated nature, over parts of the TX panhandle and southwest TX. The CI product below has already shown areas of moderate to high CI potential within a line of agitated cumulus along the dryline and sfc trough, with a strong CI and instantaneous CTC signal over eastern CO.

CI_Initiation050713

Initial thoughts are to set up shop in GLD and adjacent DDC this afternoon, with potential to migrate southward if more discrete activity develops along southern parts of the dryline. This meshes well with the EFP Severe Probabilities outlook shown below.

EFP_RiskArea_050713

We will keep a close eye on LUB late this afternoon and evening, especially since we may be able to sample some of the PGLM capabilities within the LMA there.

Austin/Frank


EWP STATUS FOR 7 MAY 2013: 1-9 p.m. SHIFT

On Tuesday, forecast parameters look favorable for the development of severe thunderstorms in west Texas. Strong directional shear will combine with weak to moderate instability to support a risk for elevated supercells. Given the lack of quality low-level moisture, the tornado threat should be minimal. However, steep lapse rates and adequate moisture should still result in a large hail threat. Additionally, a significant damaging wind threat may develop, as forecast thermodynamic profiles show the classic “inverted-V” shape.

Despite limited upper-level forcing, severe thunderstorms should develop by mid-afternoon. The combination of upslope flow and seasonably strong diabatic heating should result in the elimination of the cap by that time. Storms will likely organize into supercells by mid-evening, with a transition to a mesoscale convective system possible later on. However, forcing from an approaching upper-tropospheric shortwave trough will be limited before 06Z, so storms may be diurnally driven.
CWAs Likely to See Operations: Amarillo and Lubbock

-G. Garfield, Week 1 Coordinator


Mesoscale Outlook 5/7 00Z

A cluster of storms continues ahead of a diffuse surface boundary located across west-central North Carolina into southwest Virginia. Convergence remains the primary convective driver, in association with an upper-level low continuing to rotate across eastern Tennessee.

sfc_boundary
Current radar imagery overlaid with a surface analysis showing the surface boundary.

Cold temperatures aloft, as shown by H50 temperatures near -20C, and moderately steep lapse rates will continue to support a marginal threat for large hail through 01Z. Near-surface instability will decrease in the next couple of hours as diurnal heating comes to an end and low-level theta-e values decrease, as depicted by the CIMSS Nearcast tool. We tend to side with the simulated satellite imagery showing storms coming to an end by 03Z across the Blacksburg County Warning Area as instability wanes.

nearcast_conv_stability
CIMSS Nearcast product showing instability decreasing this evening.
sim_satellite_03z
Simulated satellite imagery at 03Z as storms come to an end.

Hampshire/Guseman


EWP2013 – Mesoscale Outlook 2015 UTC

Convection is ongoing across north-central North Carolina westward into the Appalachians. The upper low is currently centered over eastern Tennessee. The most robust thunderstorms of the afternoon have developed across northwest North Carolina along a boundary extending from NW North Carolina into eastern Kentucky.

Image1
Current radar imagery overlaid with surface dewpoints. Storms are initiating along and north of the boundary.
0-3_lapse_rates
21Z forecast low-level lapse rates from the NAM.

The environment is not overly conducive to sustaining severe convection. Surface-based CAPE values across northern North Carolina are roughly 1000 J/kg, with 0-6 km shear of 40-50 knots. Low-level lapse rates will approach 6-7.5 C/km by 21Z, with mid-level lapse rates around 6-7 C/km. The main threat will be marginally severe hail, but 0-3 km helicity values between 150-200 m2/s2 could support a low-end tornado threat. However, as storms move away from the boundary, this threat will decrease.

The EFP has placed a 5% probability of severe storms over the previously mentioned area, while a slight risk of severe storms is forecast by SPC in the small area shown in the image below.

2
Orange color represents a 5% probability of severe weather issued by the EFP.
5-6-13_SPC_DY1
19z Day 1 Outlook from SPC.

As a lobe of energy wraps around the upper low and moves northward into northern NC, the storms that have already initiated should be able to continue northward into the Blacksburg, VA CWA, while some continuing activity is possible across the northwestern extent of the Raleigh, NC CWA.

3
Simulated IR Satellite Imagery at 22z.

Hampshire/Guseman


Welcome to the Experimental Warning Program 2013 spring experiment (EWP2013)

Monday 6 May 2013 begins the first week of our three-week spring experiment of the 2013 NSSL-NWS Experimental Warning Program (EWP2013) in the NOAA Hazardous Weather Testbed at the National Weather Center in Norman, OK. There will be five primary projects geared toward WFO applications:

1) the development of “best practices” for using Multiple-Radar/Multiple-Sensor (MRMS) severe weather products in warning operations,

2) an evaluation of a dual-polarization Hail Size Discrimination Algorithm (HSDA),

3) an evaluation of model performance and forecast utility of the OUN WRF when operations are expected in the Southern Plains,

4) an evaluation of the Local Analysis and Prediction System (LAPS) Space and Time Multiscale Analysis System (STMAS), and

5) an evaluation of multiple CONUS GOES-R convective applications, including pseudo-geostationary lightning mapper products when operations are expected within the Lightning Mapping Array domains (OK, w-TX, AL, DC, FL, se-TX, ne-CO).

We will also be coordinating with and evaluating the EFP’s probabilistic severe weather outlooks as guidance for our warning operations. Operational activities will take place Monday through Friday each week.

For the week of 6-10 May, our distinguished NWS guests will be Marc Austin (WFO Norman, OK), Hayden Frank (WFO Boston, MA), Jonathan Guseman (WFO Lubbock, TX), Nick Hampshire (WFO Fort Worth, TX), Andy Hatzos (WFO Wilmington, OH), and Jonathan Kurtz (WFO Norman, OK). The GOES-R Program Office, the NOAA Global Systems Division (GSD), and NWS WFO Huntsville’s Applications Integration Meteorologist (AIM) Program have generously provided travel stipends for our participants from NWS forecast offices nationwide.

Visiting scientists this week will include Lee Cronce (Univ. Wisconsin), Geoffrey Stano (NASA-SPoRT), Isidora Jankov (NOAA/GSD), and Amanda Terberg (NWS Aviation Weather Center GOES-R Liaison).

Gabe Garfield will be the weekly coordinator. Clark Payne (WDTB) will be our “Tales from the Testbed” Webinar facilitator. Our support team also includes Darrel Kingfield, Kristin Calhoun, Travis Smith, Chris Karstens, Greg Stumpf, Kiel Ortega, Karen Cooper, Lans Rothfusz, Aaron Anderson, and David Andra.

Each Friday of the experiment (10 May, 17 May, 24 May), from 12:00-12:40 p.m. CDT, the WDTB will host a weekly Webinar called “Tales From the Testbed”. These will be forecaster-led, and each forecaster will summarize their biggest takeaway from their week of participation in EWP2013. The Webinars are intended for anyone with an interest in what we are doing to improve NWS severe weather warnings. New for EWP2013, there will be pre-specified weekly topics. This is meant to keep the material fresh for each subsequent week and to maintain audience participation levels throughout the experiment. The weekly schedule:

Week 1:  GOES-R; pGLM

Week 2:  MRMS, HSDA

Week 3:  EFP outlooks, OUN WRF, LAPS

One final post-experiment Webinar will be delivered to the National Weather Association and the Research and Innovation Transition Team (RITT) in June.  This Webinar will be a combined effort of both sides of the Hazardous Weather Testbed (EFP and EWP).

Here are several links of interest:

You can learn more about the EWP here:

http://hwt.nssl.noaa.gov/ewp/

NOAA employees can access the internal EWP2013 page with their LDAP credentials.

https://secure.nssl.noaa.gov/projects/ewp2013/

Stay tuned to the blog for more information, daily outlooks, live blogging, and end-of-week summaries as we get underway on Monday 6 May!

Greg Stumpf, CIMMS/NWS-MDL, EWP2013 Operations Coordinator


Experimental Warning Thoughts: Contents

Since a blog presents stories in reverse chronological order, newcomers to the blog will find my most recent stories first, even though they are intended to be later chapters. So, here is a chronological table of contents for Experimental Warning Thoughts. I’ll update this and bump it to the top every once in a while.

Introduction

Warning Verification Pitfalls:

Part 1: Getting started

Part 2: 2x2x2!

Part 3: Let’s go fishing

Geospatial Verification Technique: Getting on the Grid

Creating Verification Numbers:

Part 1: The “Grid Point” Method

Part 2: Grid point scores for one event

Part 3: The “Truth Event” method

Part 4: Truth Event scores for one event

Part 5: Distribution of “Truth Event” statistics

The Benefits of Geospatial Warning Verification

Examining warning practices for QLCS tornadoes

Limitations of WarnGen polygons:

Part 1: Our storm escaped!

Part 2: Slide to the right

Part 3: But my threat area isn’t a 20 km square

Precise Threat Area Identification and Tracking:

Part 1: Let’s get digital

Part 2: Threats-In-Motion (TIM)

Part 3: How good can TIM be?

Warning Verification Pitfalls (continued):

Part 4: Inflating Probability Of Detection

Part 5: Inflating Lead Time


Warning Verification Pitfalls Explained – Lack of Reports Can Inflate Lead Time

As promised in an “aside” in this blog entry, I will finally cover the issue of how using point observations can lead to a misrepresentation of the lead time of a warning.

Consider that one warning is issued, and a single severe weather report is received for that warning.  We have a POD = 1 (one report is warned, zero are unwarned), and an FAR = 0 (one warning is verified, zero warnings are false).  Nice!

How do we compute the lead time for this warning?  Presently, this is done by simply subtracting the warning issuance time from the time of the first report on the storm.  From this earlier blog post:

t_warningBegins = time that the warning begins

t_warningEnds = time that the warning ends

t_obsBegins = time that the observation begins

t_obsEnds = time that the observation ends

LEAD TIME (lt): t_obsBegins − t_warningBegins   [HIT events only]
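
For concreteness, here is a minimal sketch of that subtraction in Python (the function name and example times are purely illustrative, not part of any official verification software):

```python
from datetime import datetime

def initial_lead_time(t_warning_begins, t_obs_begins):
    """Official lead time in minutes: first report time minus warning
    issuance time. Defined for HIT events only."""
    return (t_obs_begins - t_warning_begins).total_seconds() / 60.0

# Hypothetical example: warning issued at 2030 UTC; first report at 2055 UTC.
lt = initial_lead_time(datetime(2013, 5, 7, 20, 30),
                       datetime(2013, 5, 7, 20, 55))
print(lt)  # 25.0 minutes
```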

Let’s look at our scenario from the previous blog post:

lead time perceived

For ease of illustration, I’m using the spatial scale to represent the time scale.  The warning begins at some time t_warningBegins, and the report occurs at a later time t_obsBegins.  The lead time is shown spatially in the figure, and in this case, it appears that the warning was issued with some appreciable lead time before the event at the reporting location occurred.

However, as we explained in the previous blog post, reports represent only a single sample of the severe weather event in space and time.  How can we be certain that the report above represents the location and time of the very first instance that the storm became severe?  In all but probably rare cases, it does not, and the storm became severe at some time prior to the time of that report.  This tells us that for pretty much every warning (hail and wind events at least), the computed lead times are erroneously large!  Reality looks more like this:

lead time actual

ADDENDUM (1/10/2013):  Here is another way to view this so that the timeline of events is better illustrated.  In this next example, a warning is issued at t=0 minutes on a storm that is not yet severe, but is expected to become severe in the next 10-20 minutes, hence hopefully providing that amount of lead time.  Let’s assume that the red contour in the storm indicates the area over which hail >1″ is falling, and that when red appears, the storm is officially severe.  As the storm moves east, I’ve “accumulated” the severe hail locations into a hail swath (much like the NSSL Hail Swath algorithm works using multiple-radar/multiple-sensor data).  Only two storm reports were received on this storm, one at t=25 minutes after the warning was issued, and another at t=35 minutes.  That means this warning verified (it was not a false alarm), and both reports were warned (two hits, no misses).  The lead times for the reports were 25 and 35 minutes respectively, but official warning verification uses the lead time to the first report, known as the initial lead time.  Therefore, the lead time recorded for this warning would be 25 minutes, which is very respectable.  However, in this case, the storm actually became severe at t=10 minutes.  The lead time between the start of the warning and the start of severe weather was 15 minutes shorter than that officially recorded.

leadtimeloop

Picture Swath
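
The arithmetic behind the addendum, sketched out (times are in minutes after warning issuance, matching the example above):

```python
warning_issued = 0        # warning issued at t=0 minutes
report_times = [25, 35]   # the only two reports received
severe_onset = 10         # storm first produced >1" hail at t=10

# Official verification uses only the first report ("initial lead time").
perceived_lead_time = min(report_times) - warning_issued  # 25 minutes

# The lead time to the true onset of severe weather is much shorter.
actual_lead_time = severe_onset - warning_issued          # 10 minutes

print(perceived_lead_time - actual_lead_time)  # overstated by 15 minutes
```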

How can we be more certain of the actual lead times of our warnings?  Either by gathering more reports on the storm (which isn’t always entirely feasible, although that may be improving with new weather crowdsourcing apps like mPING), or by using proxy verification based on a combination of remotely-sensed data (like radar data) and actual reports.  Again, more on this later…

Greg Stumpf, CIMMS and NWS/MDL


Warning Verification Pitfalls Explained – Report Density Can Inflate POD

I’m back after a too-lengthy absence from this blog.  I’ve been thinking about some experimental warning issues again lately, and have a few things to add regarding some more pitfalls of our current warning verification methodology.  I hinted at these in past posts, but would like to expand upon them.

Have you ever been amazed that some especially noteworthy severe weather days can produce record numbers of storm reports?  Let’s take this day for example, 1 July 2012:

120701_rpts_filtered.gif

Wow!  A whopping 522 wind reports and 212 hail reports.  That must have been an exceptionally bad severe weather day.  (It actually was the day of the big Ohio Valley to East Coast derecho from last July, a very impactful event.)

But what makes a storm report?  Somebody calls in, or uses some kind of software (e.g., Spotter Network), to report that the winds were X mph or the hail was Y inches in diameter at some location and time within the severe thunderstorm.  But the severe weather event actually impacts an area surrounding the location from which the report was generated, and it occurs over a time interval spanning the lifetime of the storm.  It is highly unlikely that a hail report represented only a single stone falling at that location, or that the wind report represented a single wind gust local to that single location, with no other severe wind gusts anywhere else or at any other time during the storm.  Each of these reports represents only a single sample of an event that covers a two-dimensional area over a time period.

If you recall from this blog entry, the official Probability Of Detection (POD) is computed as the number of reports that were within warning polygons divided by the total number of reports (inside and outside polygons).  It’s easy to see that to effectively improve an office’s overall POD for a time period (e.g., one year), they only need to increase the number of reports that are covered by the warning polygons issued by that office during that time period.  One way to do this is to cast a wide net, and issue larger and longer-duration warning polygons.  But another way to artificially improve POD is to simply increase the number of reports within storms via aggressive report gathering.  Let’s consider a severe weather event like this one:

hail1

Look at all those (presumably) severe-sized hail stones.  We can make a report on each one, at the time each fell.  After about an hour of counting and collecting (before they all melted), this observer found 5,462 hail stones that were greater than 1″ in diameter.  Beautiful – the Probability Of Detection is going to go way up!  We can also count all the damaged trees to add hundreds of wind reports.  Do you see the problem here?  Are you getting tired of my extrapolations to infinity?  Yes, there is literally an infinite number of severe weather reports that can be gleaned from this event (technically, only a finite number of severe-size hail stones fell in this storm, but who’s really counting that gigantic number?).  But let’s scale this back.  Here’s a scenario in which a particular warning is verified two different ways:

adding-reports5

Each warning polygon verifies, so there are no false alarms.  For the scenario on the top, one hit is added to all reports for the time period (maybe a year’s worth of warnings), but for the bottom scenario, seven hits are added to the statistics.
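
A minimal sketch of how those extra hits move the needle; the yearly totals here are invented purely for illustration:

```python
def pod(hits, misses):
    """Probability Of Detection: warned reports / all reports."""
    return hits / (hits + misses)

# Hypothetical yearly totals for an office before this event:
hits, misses = 80, 20
print(pod(hits, misses))      # 0.800

print(pod(hits + 1, misses))  # top scenario: one warned report      -> ~0.802
print(pod(hits + 7, misses))  # bottom scenario: seven warned reports -> ~0.813
```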

But wait, doesn’t the NWS Verification Branch filter storm reports that are in close proximity in space and time when computing warning statistics?  Wouldn’t those seven hits be reduced to a smaller number?  They use a filter of 10 miles and 15 minutes to avoid my hypothetical over-reporting scenario.  But that really doesn’t address the issue entirely.  One can still try to fill every 10-mile and 15-minute window with a hail or wind report in order to maximize their POD.  But if you think about it, that’s not really a bad idea.  In essence, you are filling a grid with a 10-mile and 15-minute resolution with as much information known about the storm as possible.  But this works only if you also call into every 10-mile/15-minute grid point inside and outside every storm.  Forecasters rarely do this (and realistically can’t), because of workload issues, and because only one report within a warning polygon is all that is needed to keep that warning from being labeled a false alarm (again, cast the wide net so that one can increase the chance of getting a report within the warning).
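
For illustration, a greedy space-time proximity filter might look like the sketch below. This is an assumption about the mechanics, not the actual NWS implementation; the haversine distance and the keep-first pass are my own choices:

```python
from math import radians, sin, cos, asin, sqrt

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in statute miles (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * asin(sqrt(a))

def filter_reports(reports, max_miles=10.0, max_minutes=15.0):
    """Keep a report only if no already-kept report falls within BOTH the
    distance and time windows. reports: list of (lat, lon, minutes) tuples."""
    kept = []
    for lat, lon, t in sorted(reports, key=lambda r: r[2]):
        if all(distance_miles(lat, lon, klat, klon) > max_miles
               or abs(t - kt) > max_minutes
               for klat, klon, kt in kept):
            kept.append((lat, lon, t))
    return kept
```

Note that a filter like this would collapse my seven-report scenario back toward one hit, but only within each 10-mile/15-minute window; nothing stops reports spaced just beyond those windows from counting separately.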

CORRECTION (1/10/2013):  I just learned that the 10-mile/15-minute filtering was only done in the era of county-based warning verification, and is not done for storm-based verification.  Therefore, my arguments against the current verification methodology, in which hit rates and POD can be stacked by gathering more storm reports, are further bolstered.  More information is in the NWS Directive on forecast and warning verification.

If we knew exactly what was happening within the storm at all times and locations at every grid point (in our case, every 1 km and 1 minute), we’d have a very robust verification grid to use for the geospatial warning verification methodology.  But we really don’t know exactly what is happening everywhere all the time, because it is nearly impossible to collect all those data points.  The Severe Hazards Analysis and Verification Experiment (SHAVE) is attempting to improve on report density in time and space.  But their resources are also finite, and they don’t have the staffing to call into every thunderstorm.  Their high-resolution data set is very useful, but limited to only the storms they’ve called.  What could we do to broaden the report database so that we have a better idea of the full scope of the impact of every storm?  One concept is proxy verification, in which some other remotely-sensed method is used to make a reasonable approximation of the coverage of severe weather within a storm, like so:

use_swath_instead1

This set of verification data will have a degree of uncertainty associated with it, but the probability of the event isn’t zero, and it is thus useful.  It is also very amenable to the geospatial verification methodology already introduced in this blog series.  More on this later…
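
A toy sketch of how a probabilistic proxy grid could feed the geospatial scores; the grid size, confidence value, and rectangles below are all invented for illustration:

```python
import numpy as np

# 100 x 100 cells of a 1-km verification grid at one analysis time.
truth = np.zeros((100, 100))               # proxy probability of severe weather
warned = np.zeros((100, 100), dtype=bool)  # cells covered by a warning polygon

truth[40:60, 30:80] = 0.8    # remotely-sensed hail swath, 80% confidence
warned[35:65, 25:70] = True  # the rasterized warning polygon

# Probability-weighted grid-point scores:
hits = truth[warned].sum()     # truth weight inside the warning
misses = truth[~warned].sum()  # truth weight outside the warning

print("grid POD:", hits / (hits + misses))    # 0.80
print("grid FAR:", 1.0 - hits / warned.sum()) # ~0.53
```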

Greg Stumpf, CIMMS and NWS/MDL



Forecaster Thoughts: Cloud Top Cooling Products for Aviation

From an email conversation:

Kristen Schuler (CWSU, Kansas City, MO) writes:

“So right now the cloud top cooling and sat cast products are a great way to predict severe storms /areas of severe hail potential. It would be nice to have a product that forecasts echo tops exceeding FL400…something that poses a significant impact to aviation. Is that possible? What are your thoughts?”

Wayne Feltz (UW-CIMSS, Madison, WI) replies:

“Yes, we received this same feedback from another CWSU forecaster out of Houston.  I have attached a paper (currently in review with major revisions, so it should be published soon) where this relationship has already been established for 18, 30, and 50 dBZ echo top heights.  I think we can do more to provide a better correlation between expected echo top height and CTCR.  We also want to compare CTCR and cloud top growth (feet or meters per 15 minutes) to see how this relationship fares.

“The relationship in the paper is shown in Figure 7.  We plan on making a separate CTC training module, but with a focus on aviation meteorologist decision support issues rather than a severe thunderstorm warning focus.”

Greg Stumpf, EWP2012 Operations Coordinator
