Week 2 Summary

This week, the EWP had forecasters from the Louisville, Buffalo, and Norman WFOs, as well as a broadcast meteorologist from WUSA (the DC CBS affiliate), participate in the Big Spring Experiment. Operations on Monday began in the Davenport and St. Louis CWAs. Throughout the week, operations slowly shifted eastward as we evaluated the products against severe weather developing along an eastward-moving cold front. These operations included the Detroit, Cleveland, Wilmington, Charleston WV, Pittsburgh, and Sterling CWAs. One group on Thursday operated in the Shreveport CWA, where marginal severe weather occurred as an upper-level disturbance moved through a region characterized by weak low-level moisture but steep lapse rates and only marginal instability. This unique environment posed some interesting forecast challenges, so it was neat to see how the various satellite products and the OUN WRF performed.

Participants were able to use all of the demonstration products this week, which included GOES-R and lightning products, LAPS fields, and, finally on Thursday, the OUN WRF model. There were many good blog posts written throughout the week highlighting the use of all of these products in various situations across the US. Below is some end-of-week feedback on each product from this week's participants:


Simulated Satellite Imagery

  • This gave me a heads up on where clouds would move. There isn’t great guidance for sky grids, so I would look at this to see where stratus was moving, etc., if it was verifying well.
  • I think it is especially effective on the large scale because it picks up on large scale features well.

NearCast System

  • I liked and used it because it is observed thermodynamic data, of which there is very little.
  • This added value to my forecast process. For example, in West Virginia no boundary was evident at the surface, but there was a boundary in NearCast, and that is where convection fired. That sold me.
  • I do not like to rely on NWP data, so this was nice.
  • I really liked seeing the gradients; most of the storms developed in theta-e difference minima or moisture maxima, or along gradients.
  • There were a few cases where you saw decreasing moisture moving in, which was not picked up in the models, and it did have a big effect on storm development.
  • In Wilmington, dry air moved in and storms decreased, but they actually did increase later, so it was kind of inconclusive in this case.

GOES-R Convective Initiation

  • Sometimes it was giving 30-45 minutes of lead time; other times it provided no lead time.
  • It was more useful in rapid scan mode.
  • I was very impressed with its performance, but sometimes the lead time just wasn’t worth it.
  • I thought this product was really great during the daytime, but I do not see it being at all useful at night as it was very inaccurate.
  • It was sometimes hard to get a sense of what the probs meant. If I used it, I would get rid of everything under 50%. I just don’t like that much clutter.
  • It was particularly erratic around the Appalachian Mountains.

Prob Severe Model

  • It works great in hail situations. I am a fan of it for hail detection and determining which storms will produce hail.
  • It does have issues with linear storm modes.
  • The best part for me was the mouseover sampling and being able to look at the predictors. It really enhances your situational awareness.
  • It would be nice to color code the growth rates in the readout.
  • I noticed a lot of satellite growth rates that were older than an hour; that made me lose confidence in the signal.
  • I think it did increase my confidence in hail events, because I saw a clear progression in probabilities.
  • When I saw over 80%, I had great confidence that the storm would become severe.
  • I do think it could give additional lead time to warnings
  • I am fine with including the lower probs because the display is not obtrusive, and I like seeing the progression to higher probs.
  • The survey questions were good
  • With what you have now, for hail, I would use this product today.
  • It gives you a good idea of which storm(s) you should be interrogating
  • All participants agreed they would use this in their local WFO.
  • Broadcaster: I would use this on the air. If there were a lot of cells, I would point to this storm [with the higher probs] and say that that is the cell to watch. Would not necessarily show probs, but could show colors, etc.

Overshooting Top Detection

  • This was not useful for me.
  • I see this being most useful when incorporated into another product. That would be a great benefit.
  • We were unable to use it at night, when it is harder to see OTs visually and when many more OTs are often detected as storms mature.


Total Lightning

  • I really like the total lightning data
  • I’ve never used total lightning, but I do like it

Lightning Jump Algorithm

  • I think I could use this in a warning environment.
  • I don’t mind the sigma values as indicators.
  • An outline (like prob severe) might be better than the blob.
  • It might be good to incorporate the LJ product into the prob severe tool.
  • I don’t see the zero sigma being necessary.
  • I told AWIPS-II to blink sigma values that were greater than 2.

Tracking Tool

  • There are too many circles on the screen, too much clutter.
  • I would prefer to have one circle that you just put on the cell, and it gives you the meteogram.
  • Entering the cell ID # to track the storm might be a good idea.
  • I don’t really mind the circles, but I just can’t see myself using this in a warning situation.
  • I can see this being used after the fact, looking at a storm, but not in real-time. It is too labor intensive.
  • I like the graph itself, but the actual functionality is bad.

  • It is difficult to move the circle-track to align with the track of the storm, especially when many images are loaded. Also, sometimes it does not track at first, so you have to move it around to get it to track. Finally, changing the size of the circles is frustrating, as making some circles bigger makes others smaller.

GOES-14 SRSOR (1-minute imagery)

  • It’s great
  • I saw subtle boundaries that I wouldn’t otherwise see
  • We want quicker satellite updates, it’s a no-brainer
  • No worry about information overload with this
  • I would prefer to view the raw data, but I do see it being useful as input to other products as well


  • It seemed to do pretty well with storm mode.
  • Timing of convection was poor.
  • I used reflectivity and CAPE. I would like to see max wind speed (10 m).
  • I would like to use this in lake effect snow situations.


  • Initially, it produced a little too much convection, but throughout the day, it caught up.
  • The model picked up gravity waves, which was neat to see.
  • It was interesting to see the progression from single cells to clusters/line segments.
  • Storm mode was good, exact location was just a bit off.
  • I am content with the products that are available.
  • 10 m max wind speed was interesting to look at. In one case, it worked out quite well.


  • I thought the training was good.
  • The week was very well organized and well done, and I liked that we stuck to the schedules; it made things very easy.
  • I liked the relaxed environment
  • Less structure was good, it gave us freedom to see what works well for us.

– Bill Line, SPC/HWT Satellite Liaison and Week 2 EWP Coordinator
