Test. First Blog Post

May 19th, 20:29Z – First Image

Good afternoon all. Grant H. signing on for the first blog post of the first day of the HWT. In learning about the GOES Convective Initiation (CI) tool, I felt this might be a good place to start. CI seems most useful in hunting down areas where convection will begin over the first two hours of an event. I can see this being good from the warning coordinator/mesoscale forecaster perspective when making decisions on where and how to divide up or sectorize radar meteorologists in operations. The area in blue over eastern Nebraska has a very low probability of initiation, around 10 to 20 percent. Meanwhile, the cumulus field over southeastern Wyoming in the Cheyenne area is reaching up to 50%. This helps focus attention on the locations of interest. I do see some problems here in that the algorithm has trouble identifying areas that transition from stratus to cumulus, or from cirrus with cumulus embedded underneath, such as in northwestern Nebraska… very little CI color is showing up there, other than the sheared area in this image.

Grant H.


Starting This Monday – The PHI and Big Spring Experiments (Week 3)

Monday 19 May 2014 begins the third week of our four-week spring experiment of the 2014 NSSL-NWS Experimental Warning Program (EWP2014) in the NOAA Hazardous Weather Testbed at the National Weather Center in Norman, OK. There will be two primary projects geared toward WFO applications: 1) a test of a Probabilistic Hazards Information (PHI) prototype, as part of the FACETs program, and 2) an evaluation of multiple experimental products (formerly referred to as “The Spring Experiment”). The latter project – known as “The Big Experiment” – will have three components: a) an evaluation of multiple CONUS GOES-R convective applications, including satellite and lightning products; b) an evaluation of the model performance and forecast utility of two convection-allowing models (the variational Local Analysis and Prediction System and the Norman WRF); and c) an evaluation of a new feature-tracking tool. We will also be coordinating with and evaluating the Experimental Forecast Program’s probabilistic severe weather outlooks as guidance for our warning operations. Operational activities will take place Monday through Friday.

For the week of 19-23 May, our distinguished NWS guests will be Joshua Boustead (WFO Omaha, NE), Linda Gilbert (WFO Louisville, KY), Grant Hicks (WFO Glasgow, MT), Julie Malingowski (WFO Grand Junction, CO), and Trisha Palmer (WFO Peachtree City, GA). Additionally, we will be hosting a weather broadcaster to work with the NWS forecasters at the forecast desk. This week, our distinguished guest will be Danielle Vollmar of WCVB-TV (Boston, MA). If you see any of these folks walking around the building with a “NOAA Spring Experiment” visitor tag, please welcome them! The GOES-R program office, the NOAA Global Systems Division (GSD), and the National Severe Storms Laboratory have generously provided travel stipends for our participants from NWS forecast offices and television stations nationwide.

Visiting scientists this week will include Steve Albers (GSD), John Cintineo (Univ. of Wisconsin/CIMSS), Ashley Griffin (Univ. of Maryland), Chris Jewett (Univ. of Alabama – Huntsville), James McCormick (Air Force Weather Agency), Chris Schultz (Univ. of Alabama – Huntsville), and Bret Williams (Univ. of Alabama – Huntsville).

Darrel Kingfield will be the weekly coordinator. Lance VandenBoogart (WDTB) will be our “Tales from the Testbed” webinar facilitator. Our support team also includes Kristin Calhoun, Gabe Garfield, Bill Line, Chris Karstens, Greg Stumpf, Karen Cooper, Vicki Farmer, Lans Rothfusz, Travis Smith, Aaron Anderson, and David Andra.

Here are several links of interest:

You can learn more about the EWP here:

https://hwt.nssl.noaa.gov/

NOAA employees can access the internal EWP2014 page with their LDAP credentials:

https://hwt.nssl.noaa.gov/ewp/internal/2014/

 
Gabe Garfield
CIMMS/NWS OUN
2014 EWP Operations Coordinator


Week 2 Summary

This week, the EWP had forecasters from the Louisville, Buffalo, and Norman WFOs, as well as a broadcast meteorologist from WUSA (the CBS affiliate in Washington, DC), participate in the Big Spring Experiment. Operations on Monday began in the Davenport and St. Louis CWAs. Throughout the week, operations slowly shifted eastward as we evaluated the products on severe weather developing along an eastward-moving cold front. These operations included the Detroit, Cleveland, Wilmington, Charleston WV, Pittsburgh, and Sterling CWAs. One group on Thursday operated in the Shreveport CWA, where marginal severe weather occurred as an upper-level disturbance moved through a region characterized by weak low-level moisture but steep lapse rates and only marginal instability. This unique environment posed some interesting forecast challenges, so it was neat to see how the various satellite products and the OUN WRF performed.

Participants were able to use all of the demonstration products this week, which included GOES-R and lightning products, LAPS fields, and, finally on Thursday, the OUN WRF model. There were many good blog posts written throughout the week highlighting the use of all of these products in various situations across the US. Below is some end-of-the-week feedback on each product from this week's participants:

GOES-R

Simulated Satellite Imagery:

  • This gave me a heads-up on where clouds would move. There isn't great guidance for sky grids, so I would look at this to see where stratus is moving, etc., if it was verifying well.
  • I think it is especially effective on the large scale because it picks up on large scale features well.

NearCast System:

  • I liked and used it because it is observed thermodynamic data, of which there is very little
  • This added value to my forecast process. For example, in West Virginia no boundary was evident at the surface, but there was a boundary in NearCast, and that is where convection fired. That sold me.
  • I do not like to rely on NWP data, so this was nice.
  • I really liked seeing the gradients; most of the storms developed in theta-e difference minima or moisture maxima, or along gradients (see the sketch after this list).
  • There were a few cases where you saw decreasing moisture moving in, which was not picked up in the models, and it did have a big effect on storm development.
  • In Wilmington, dry air moved in and storms decreased, but they actually did increase a bit later, so it was kind of inconclusive in this case.
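
Several of the comments above center on gradients in the NearCast theta-e difference field. As a rough illustration of how such boundaries could be highlighted objectively, here is a minimal Python sketch that computes the gradient magnitude of a gridded theta-e difference field. The grid, its spacing, and the function name are hypothetical assumptions for illustration, not part of the NearCast system itself.

```python
import numpy as np

def boundary_strength(theta_e_diff, dx_km=10.0):
    """Gradient magnitude of a theta-e difference field (K per km).

    theta_e_diff : 2-D array (K), e.g. low-layer minus mid-layer theta-e.
    dx_km        : grid spacing in km (placeholder assumption).
    """
    dfdy, dfdx = np.gradient(theta_e_diff, dx_km)
    return np.hypot(dfdx, dfdy)

# Hypothetical field with a sharp west-to-east moisture/instability ramp
field = np.tile(np.linspace(330.0, 345.0, 50), (40, 1))
strength = boundary_strength(field)
print(strength.max())  # strongest gradient in the grid, K/km
```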

GOES-R Convective Initiation

  • Sometimes it was giving 30-45 minutes of lead time; other times it provided no lead time.
  • It was more useful in rapid scan mode.
  • I was very impressed with its performance, but sometimes the lead time just wasn't worth it.
  • I thought this product was really great during the daytime, but I do not see it being at all useful at night as it was very inaccurate.
  • It was sometimes hard to get a sense of what the probabilities meant. If I used it, I would get rid of everything under 50%; I just don't like that much clutter. (A sketch of this kind of filtering follows this list.)
  • It was particularly erratic around the Appalachian mountains.
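
One way to act on the "get rid of everything under 50%" suggestion above would be to mask the lower CI probabilities before display. Here is a minimal sketch, assuming the probabilities arrive as a simple percent-valued grid; the function and variable names are illustrative, not the actual GOES-R CI product interface.

```python
import numpy as np

def declutter_ci(probs_pct, threshold=50):
    """Mask CI probabilities (percent) below `threshold` so only the
    higher-confidence pixels remain on the display."""
    return np.ma.masked_less(probs_pct, threshold)

ci = np.array([[10, 20, 55],
               [ 0, 45, 80]])
print(declutter_ci(ci))  # pixels under 50% are masked out
```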

Prob Severe Model

  • It works awesome in hail situations. I am a fan of it for hail detection and determining which storms will produce hail
  • It does have issues with linear storm modes.
  • The best part for me was the mouseover sampling and being able to look at the predictors. It really enhances your situational awareness.
  • It would be nice to color code the growth rates in the readout
  • I noticed a lot of satellite growth rates that were older than an hour; that made me lose confidence in the signal.
  • I think it did increase my confidence in hail events, because I saw a clear progression in probabilities.
  • When I saw over 80%, I had great confidence that that storm would become severe
  • I do think it could give additional lead time to warnings
  • I am fine with including the lower probs because the display is not obtrusive, and I like seeing the progression to higher probs.
  • The survey questions were good
  • With what you have now, for hail, I would use this product today.
  • It gives you a good idea of which storm(s) you should be interrogating
  • All participants agreed they would use this in their local WFO.
  • Broadcaster: I would use this on the air. If there were a lot of cells, I would point to this storm [with the higher probs] and say that that is the cell to watch. Would not necessarily show probs, but could show colors, etc.

Overshooting Top Detection

  • This was not useful for me.
  • I see this being most useful when incorporated into another product. This would be a great benefit
  • We were unable to use it at night, when it is harder to see OTs and when many more OTs are often detected as storms mature.

PGLM

  • I really like the total lightning data
  • I’ve never used total lightning, but I do like it.

Lightning Jump Algorithm

  • I think I could use this in a warning environment.
  • I don’t mind the sigma values as indicators.
  • An outline (like ProbSevere's) might be better than the blob.
  • It might be good to incorporate the LJ product in the prob severe tool
  • I don’t see the zero sigma being necessary
  • I told AWIPS-II to blink sigma values that were greater than 2.

Tracking Tool

  • There are too many circles on the screen, too much clutter.
  • I would prefer to have one circle that you just put on the cell, and it gives you the meteogram.
  • Entering the cell id # to track the storm might be a good idea
  • I don’t really mind the circles, but I just can’t see myself using this in a warning situation.
  • I can see this being used after the fact, looking at a storm, but not in real-time. It is too labor intensive.
  • I like the graph itself, but the actual functionality is bad.

  • It is difficult to move the circle track to align with the track of the storm, especially when many images are loaded. Also, sometimes it does not track at first, so you have to move it around to get it to track. Finally, changing the size of the circles is frustrating, as making some circles bigger makes others smaller.

GOES-14 SRSOR (1-minute imagery)

  • It’s great
  • I saw subtle boundaries that I wouldn’t otherwise see
  • We want quicker satellite updates, it’s a no-brainer
  • No worry about information overload with this
  • I would prefer to view the raw data, but I do see it being useful as input to other products as well.

LAPS

  • It seemed to do pretty well with storm mode.
  • Timing of convection was poor.
  • I used reflectivity and CAPE. I would like to see max wind speed (10 m).
  • I would like to use this in lake effect snow situations.

OUN WRF

  • Initially, it produced a little too much convection, but throughout the day, it caught up.
  • The model picked up on gravity waves, which was neat to see.
  • It was interesting to see the progression from single cells to clusters/line segments.
  • Storm mode was good, exact location was just a bit off.
  • I am content with the products that are available.
  • 10 m max wind speed was interesting to look at. In one case, it worked out quite well.

Other:

  • I thought the training was good.
  • The week was very well organized, well done, and I liked that we stuck to the schedules; it made things very easy.
  • I liked the relaxed environment
  • Less structure was good, it gave us freedom to see what works well for us.

– Bill Line, SPC/HWT Satellite Liaison and Week 2 EWP Coordinator


Late observation on vLAPS on Thursday

It appears that the vLAPS 800×800 caught up and ended up being well-resolved by the early evening hours! Unfortunately, since I wasn’t tracking it all day, I don’t know how or why the max base reflectivity improved. Here’s the vLAPS at 22Z on Thursday:

[Image: LAPSmaxref22ZMay15]

The model picked up very well on the high reflectivity southwest of DC at the same time, and it did a decent job with the cell north of DC, though a bit underdone. However, the max reflectivity further to the south into southern Virginia appears to be overdone.

[Image: Baseref22ZMay15]

So perhaps this means that the model is off spatially in its forecast, with higher reflectivity values shifted too far to the south.


Simulated IR imagery vs. convective cells in DC

The simulated IR imagery showed the cold front’s areas of convection on the leading edge of the storm in our target area of Virginia and Maryland on Thursday afternoon.

[Image: simulatedIR20ZMay15]

It matched up spatially with what we were seeing in reality on the rapid scan GOES IR imagery at the same time stamp, 20Z.

[Image: GOESIR20ZMay15]

The area in blue on the simulated IR indicates cloud tops colder than -60C. This shading doesn’t show up on the real IR image at all, but the cloud tops do have temperatures below -50C in the same convective regions. It looks like the simulated IR is going to overdo the convection, especially for the southern cell over central Virginia, but I thought I’d keep an eye on it to see if a convective cell spawned a severe warning in that area.
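
As a side note, this kind of eyeball comparison (simulated tops colder than -60C vs. observed tops below -50C) could be made quantitative by counting the pixels colder than each threshold. Below is a minimal sketch, assuming co-located brightness-temperature grids in degrees C; the data here are synthetic placeholders, not the actual imagery.

```python
import numpy as np

def cold_top_fraction(bt_c, threshold_c):
    """Fraction of the scene with cloud tops colder than threshold_c.
    bt_c: 2-D brightness-temperature grid in degrees C."""
    return float(np.mean(bt_c < threshold_c))

# Synthetic stand-ins for co-located simulated and observed IR grids
simulated = np.random.default_rng(0).normal(-45.0, 12.0, (100, 100))
observed = np.random.default_rng(1).normal(-40.0, 10.0, (100, 100))

print(cold_top_fraction(simulated, -60.0))  # simulated coverage < -60 C
print(cold_top_fraction(observed, -50.0))   # observed coverage < -50 C
```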

The simulated image valid at 21Z shows the strongest convection has shifted further east and is concentrated into one cell.

[Image: simulatedIR21ZMay15]

That cold cloud-top maximum in northern Virginia is much smaller and not nearly as cold in the real GOES IR image from 21Z.

[Image: GOESIR21Z]

During this time period, a line of strong to severe thunderstorms was pushing through the DC Metro area.

[Image: Baseref21ZMay15]

Comparing the radar data to the simulated IR at the same time stamp, it appears that the small clusters of convective cells were not well resolved by this product. In fact, the clusters of storms to the south and west of DC were either completely missed by the simulated imagery, or the placement was off by about 50 miles (clouds too far to the southwest to be a match for the convective cells).

This was a day where we had limited tools for severe weather forecasting in the DC Metro area. The threat for hail and tornadoes was very low, and the ProbSevere, convective initiation, overshooting top, and PGLM products were rendered nearly useless by the lack of convection and lightning for them to work with.


FINALLY! A Lightning Jump Detection

[Image: LJDA_LWX-D2D]

Finally, on our last day of EWP operations, we were able to capture a weak lightning jump with the Lightning Jump Detection Algorithm. The jump was detected in a discrete cell that was lifting north across the western edge of the District of Columbia around 2109Z. The jump from 0 sigma to 1 sigma (one standard deviation) shows up as the green blotch in the image above, overlaid on the Flash Extent Density product, which measures total lightning in the storm. At this time, the flash extent density was 10 flashes per km^2, overlaid on 0.5-degree KLWX reflectivity of around 52 dBZ.

The Tracking Meteogram Tool was used to see the evolution of the lightning jump, reflectivity, and Flash Extent Density versus time. The take-home is that a lightning jump, i.e., a rapid increase in flash density within a storm, correlates with rapid intensification of the storm. Note that between 2106Z and 2108Z the Flash Extent Density rapidly increased, or “jumped,” from 1 flash/km^2 to 10 flashes/km^2, which triggered the Lightning Jump Detection Algorithm to increase from 0 to 1 sigma. During this time, reflectivity increased from 20 dBZ to greater than 55 dBZ in 8-9 minutes. This cell was also somewhat low-topped, with echo tops only reaching around 32 kft. Please keep in mind that this is a weak example of just how rapidly a cell can intensify, since the jump was only 1 sigma.
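
For readers unfamiliar with the sigma levels mentioned here, the published “2-sigma” lightning jump concept compares the latest rate of change of the total flash rate (DFRDT) against the variability of the recent DFRDT history. The Python sketch below is a simplified illustration of that idea, not the operational algorithm; the flash-rate values are invented.

```python
import numpy as np

def sigma_level(flash_rates, dt_min=2.0):
    """Simplified sigma-level calculation for a lightning jump.

    flash_rates : total flash rate (flashes/min) sampled every dt_min
                  minutes, most recent value last.
    Returns the latest DFRDT divided by the standard deviation of the
    preceding DFRDT history.
    """
    dfrdt = np.diff(flash_rates) / dt_min  # flash-rate change per minute
    history, current = dfrdt[:-1], dfrdt[-1]
    sigma = np.std(history)
    return current / sigma if sigma > 0 else 0.0

# Invented flash-rate history ending in a rapid increase ("jump")
rates = [2, 3, 2, 4, 3, 5, 14]
print(round(sigma_level(rates), 1))  # values >= 2 would flag a jump
```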

Shawn Smith


ProbSevere Underestimates Storm in N AR on May 15

The storm below produced golf-ball-size hail (around 1.75 in in diameter) and had 50 dBZ up to 31,157 ft MSL per the SRX radar (114 nm to the west-southwest). ProbSevere indicated only a 13% probability of severe, with 1049 J/kg of CAPE, 25.4 kt of EBShear, and 0.60 in of MESH. The lack of nearby radar data, with the LZK (Little Rock) WSR-88D inoperable, may have significantly impacted the ProbSevere algorithm.

ProbSevere again seems to be underestimating the severe potential and expected hail size. The environment was characterized by low-topped severe storms with a mid/upper-level trough overhead. The 12Z LZK sounding is the last image below.
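
Published descriptions of ProbSevere characterize it as a naive Bayesian combination of environmental and radar/satellite predictors, which helps explain how a degraded MESH input from a distant radar can drag the final probability down. The sketch below is a toy naive-Bayes combination with invented Gaussian likelihoods; it is emphatically not the real ProbSevere training data or formulation.

```python
import numpy as np

# Toy naive-Bayes combination of severe-weather predictors. The
# likelihood parameters below are invented for illustration only.

def likelihood_ratio(x, mu_severe, mu_null, sd):
    """Gaussian likelihood ratio P(x | severe) / P(x | not severe)."""
    def g(val, mu):
        return np.exp(-0.5 * ((val - mu) / sd) ** 2)
    return g(x, mu_severe) / g(x, mu_null)

def prob_severe(cape, shear_kt, mesh_in, prior=0.05):
    lr = (likelihood_ratio(cape, 2500.0, 800.0, 1200.0) *
          likelihood_ratio(shear_kt, 45.0, 20.0, 15.0) *
          likelihood_ratio(mesh_in, 1.2, 0.3, 0.5))
    odds = (prior / (1.0 - prior)) * lr
    return odds / (1.0 + odds)

# Inputs from the storm in this post: the low MESH from the distant
# radar keeps the combined probability low despite hail at the ground.
print(round(prob_severe(1049.0, 25.4, 0.60), 2))
```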

[Images: 2151UTCCIMSSProbSevereSRXRef051514, 12UTCLZKSounding051514]

Michael Scotten


Nearcast Tool – Convective Coverage

[Image: Nearcast_22Z]

Above is the NearCast imagery at 22Z. In this imagery, a pretty apparent boundary is evident across portions of Missouri into northwest Arkansas. Along this boundary, convection was much more widespread than it was further south. Further south, despite better instability, there is no indication of any boundary, which likely explains the more scattered nature of the convection there. Operationally, seeing this boundary in the NearCast model would give me higher confidence in convective coverage further north versus further south.

[Image: 22Z National radar imagery (super hi-res :P)]

 
