“HWT in Review” by Grant Hicks (GGW)

So what was the purpose of this trip? I found myself asking that question up to the last minute before entering the office on the first day. Instruction as to what and how we'd be evaluating anything was rather lacking until we got there, but this turned out to be a little by design. The HWT crew wanted us evaluating things a bit cold and out of our element, to switch up our mindsets and kick us out of our comfort zones. They kept us with just the vaguest idea of what we were getting into in order to keep us on our toes. The thought was that this would open our minds to new ideas for using the new stuff they had for us on Day 1 in a warning environment.

Ultimately, there were a few things that we did need to maintain focus on and evaluate. A few questions stood out. Where were these new techniques, algorithms, and products, which the presenters had shown us in prearrival training, most useful in our operations or someone else's? Were there any immediately noticeable problems, or could we find circumstances where they break down? And how well did they work for the aspects of forecast/warning ops that each "test subject" was used to working in? I do refer to us as "test subjects," as it seemed at times like we were under as much scrutiny as the stuff we were evaluating.

Each morning the four forecasters (us) would huddle at a round table surrounded by a large group of about twenty master's- and PhD-level scientists, each with their own axe to grind (read: project to evaluate). We were quizzed about the previous day: what went right, what didn't, and how these new items would affect or help operations. It was a lot like being stuck in a goldfish bowl at the front of a premier aquarium as the first exhibit on display. Those people really want to see which way a person swims… and you know me… I enjoy upside down, sideways, and at a diagonal.

As for the stuff we were evaluating, there were eleven items in all. I'll start with the simulated satellite technique.

#1. Simulated Satellite Imagery (Technique). The WRF has the ability to simulate satellite imagery for both IR and WV. This technique allows a forecaster to match up current WV and/or IR satellite imagery with what is in the model and compare the results. The idea is that where the satellite matches the model at initialization, the forecaster can place high confidence in the model; where they do not match, confidence in that part of the model solution is low and the forecaster should choose something else to use for the forecast.

The technique did just that. In many places where low stratus or cirrus was over- or underrepresented compared to reality, the model solution was immediately called into question, while places where the model handled these features correctly brought high confidence in the immediate forecast for the next 6 hours. Personally, I would like to see this technique extended to other models as well, such as the NAM, HRRR, and GFS. A forecaster could then initiate a forecast by finding the guidance that best matched current reality.
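To make the comparison concrete, here is a minimal sketch (my own construction, not an operational tool) of how that confidence check could be scripted, assuming simulated and observed brightness temperatures are already on a common grid; all variable names and values are hypothetical.

```python
# Score model-vs-observed satellite agreement: compare simulated and observed
# IR brightness temperatures over a region and use the error as a confidence cue.
import numpy as np

rng = np.random.default_rng(0)
observed_tb = 230 + 30 * rng.random((50, 50))             # observed IR Tb (K)
simulated_tb = observed_tb + rng.normal(0, 4, (50, 50))   # model-simulated Tb (K)

rmse = np.sqrt(np.mean((simulated_tb - observed_tb) ** 2))
bias = simulated_tb.mean() - observed_tb.mean()
print(f"RMSE {rmse:.1f} K, bias {bias:+.1f} K")
# Small RMSE: the model initialized close to reality, so lean on it.
# Large RMSE: low confidence in that part of the solution; look elsewhere.
```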

WRF data, including this simulated satellite imagery in IR and WV, is available for our GGW office right now. However, we have to request the data from Western Region to get it.

#2. GOES-R Convective Initiation. This tool is becoming part of a set of algorithms for the satellite "cradle to grave" concept of storm interpretation. This algorithm works during the cradle phase of a storm's life cycle. The tool used initial growth rates to help pick out winners and losers among storms before their development. Once a storm initiates it is discarded, allowing the forecaster to focus on the next storm. Unfortunately, this tool did not so much pick out individual winners and losers for initiation as it very quickly picked up on whole large areas for development. So, while missing its intended purpose, it did seem to find an equally welcome home with the mesoscale forecaster, warning coordinator, or SPC forecaster. One of the other problems was that the tool had a hard time graphically pointing toward which storms were becoming active, due to the rainbow color chart at the top of its probability scale, while doing a really good job of showing which ones were not active, thanks to the monochromatic blue at the low end of the chart. If a dichromatic color chart such as ProbSevere's (blue, fading to white, fading to red) were used, the CI algorithm would probably stand out a lot better for determining where large areas of storm development are expected, which is the unintended but useful purpose of this algorithm. This would draw the eye immediately from areas of non-interest to areas of interest.
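For what it's worth, the color-table suggestion is easy to prototype. The sketch below (mine, with invented data) builds a diverging blue-white-red ramp with matplotlib so that only high CI probabilities visually pop:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# Dichromatic ramp: quiet blue at low probability, loud red at high probability.
ci_cmap = LinearSegmentedColormap.from_list(
    "ci_diverging", ["#2166ac", "#ffffff", "#b2182b"])

prob = np.random.default_rng(1).random((40, 40))   # fake CI probability field
plt.imshow(prob, cmap=ci_cmap, vmin=0.0, vmax=1.0)
plt.colorbar(label="CI probability")
plt.title("Diverging color table: the eye goes straight to the red")
plt.show()
```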

#3. Overshooting Top Detection. This is another algorithm from the cradle-to-grave concept, focused on the mature stage of the thunderstorm life cycle. The algorithm picks out where overshooting tops (OTs) are on storms and places a red dot directly over each one, drawing the eye in. The point is that if an OT exists for a storm, then the storm is very likely to have severe weather. I could tell right away that warning operations at WFOs were probably not going to be the main focus of this algorithm, as radar can tell long before a storm has an OT that it is severe. This tool will see its main use in places where radar is not the primary sensor or does not exist, such as over the oceans or the Great Lakes. It will probably be a helpful product for the AWC or CWSUs in routing traffic. The algorithm also had some issues with identifying OTs: it missed about a third of them and also incorrectly identified OTs where there weren't any. Its hit/miss/false-alarm scores will probably improve with the onset of the one-minute super rapid scan satellite era.
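As a rough illustration of that hit/miss/false-alarm bookkeeping, here is a small sketch using the standard contingency-table scores; the counts are invented to roughly match the "missed about a third" experience above.

```python
# Hypothetical OT detections scored against manually identified truth.
hits, misses, false_alarms = 20, 10, 6

pod = hits / (hits + misses)                 # probability of detection
far = false_alarms / (hits + false_alarms)   # false alarm ratio
csi = hits / (hits + misses + false_alarms)  # critical success index
print(f"POD {pod:.2f}, FAR {far:.2f}, CSI {csi:.2f}")
# One-minute imagery should raise POD by catching short-lived tops
# that a 15-minute scan simply never sees.
```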

#4. GOES-14 One-Minute Super Rapid Scan. What more is there really to say? Where a met used to get only 15-minute images to piece together between the constant radar updates, now there will be satellite data at a temporal frequency higher than the normal radar can keep up with. This will help with identifying boundary placement and evolution and storm initiation. When radar goes down this could quickly become a go-to product, with plenty of potential for cradle-to-grave storm algorithm development.

#5. NearCast System. This is an algorithm developed to follow areas of instability on satellite and extrapolate them into the future. It finds the theta-e near the surface and the theta-e in the mid-levels and takes the difference; areas with the greatest difference were primed for some form of storm development. The algorithm also allowed extrapolation of low-level precipitable water fields/boundaries into the future, which was useful as well. The big problem with the theta-e difference field was determining exactly what it was showing. On some occasions a theta-e difference max would pass through an area and initiate storms, but then exit the area, leaving a monster cell behind it. Other times it seemed like a theta-e difference max was feeding directly into a single storm. Still other times the storms seemed to simply ride along a moving boundary of high theta-e difference. While I have no doubt that it is important as an ingredient in storm development, determining what it's best used for in operations will be a subject of future debate, with many different methods of use likely cropping up from the operational and academic communities involved in mesoscale forecasting.
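For readers who want to see the ingredient itself, here is a minimal sketch (mine, not the NearCast code) of a theta-e difference for a single point, using Bolton's (1980) approximation for equivalent potential temperature; the levels and values are invented for illustration.

```python
import numpy as np

def theta_e(p_hpa, t_c, td_c):
    """Equivalent potential temperature (K), Bolton (1980) eq. 43."""
    t_k = t_c + 273.15
    e = 6.112 * np.exp(17.67 * td_c / (td_c + 243.5))   # vapor pressure (hPa)
    r = 1000.0 * 0.622 * e / (p_hpa - e)                # mixing ratio (g/kg)
    t_l = 2840.0 / (3.5 * np.log(t_k) - np.log(e) - 4.805) + 55.0  # LCL temp (K)
    return (t_k * (1000.0 / p_hpa) ** (0.2854 * (1.0 - 0.00028 * r))
            * np.exp((3.376 / t_l - 0.00254) * r * (1.0 + 0.00081 * r)))

low = theta_e(850.0, 20.0, 16.0)    # warm, moist low levels
mid = theta_e(500.0, -12.0, -30.0)  # cool, dry mid levels
print(f"theta-e difference: {low - mid:.1f} K")  # large positive = primed
```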

#6. ProbSevere Algorithm. This is an algorithm that pulls from many different and new sources to determine the chance that a storm has become severe, then labels the storm with a color-coded band as it reaches a high probability of severe output. I think this algorithm was more highly calibrated for hail than for winds. Unfortunately, the time spent on wind evaluation was close to nil, and I wish I had a couple more shots at this. However, the new hail size algorithm was far superior to the old one, with sizes far closer to the verified reports. The colored bands also help pull the warning operator's eye to the storms of focus on the map at any moment in time. In fact the color bands could quickly alert a met to focus on a storm that would otherwise get less attention, such as a heavy-hail-producing left split when the right split is typically more favored for hail and severe weather. This helps with quick prioritization when multiple cells are popping up. Monitoring the color trend on the bands also drew the eye to which storms were most likely to produce severe weather in the next few minutes, which storms were not, and which storms were slowly dying.

#7. Multi-Radar/Multi-Sensor (MRMS). What this did was grab all the scans from different surrounding radars and create a composite image. This included multiple WSR-88Ds, CASAs, phased array, and Canadian radars. It also allowed underlying algorithms access to more data/scans when determining the strength of a cell, as they no longer relied on only one radar's worth of data. This is probably the main reason for the overall improvement of the hail size algorithm. With MRMS radar coming to AWIPS 2, the GGW office will finally get Canadian radars in AWIPS, which are sorely needed and arriving far later than they ever should have. This program could probably benefit from using the 3D viewer that exists in GR2Analyst. That viewer allows for varying levels of transparency corresponding to dBZ color charts, so a transparent outer ghost of a storm is present, showcasing the inner movements. The movements can quickly show when a hail core has been cut off from its moisture supply/support and is about to collapse in a microburst or hail drop. If the GR2 3D view were combined with the multiple sources of MRMS, mets would have access to a truly complete 3D storm environment for interrogation.
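The merging idea is simple to sketch. Below is a toy example (mine; real MRMS blending uses distance- and time-weighted averaging rather than a plain maximum) that assumes three radars already remapped to a common grid, with NaN marking bins a radar cannot see:

```python
import numpy as np

# dBZ from three radars on a shared grid; np.nan = no coverage there.
radar_a = np.array([[35.0, np.nan], [50.0, 20.0]])
radar_b = np.array([[40.0, 55.0], [np.nan, 15.0]])
radar_c = np.array([[np.nan, 52.0], [45.0, 25.0]])

# Every grid point gets the best view any radar has of it.
composite = np.nanmax(np.stack([radar_a, radar_b, radar_c]), axis=0)
print(composite)   # [[40. 55.] [50. 25.]]
```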

#8. PGLM Total Lightning. In the era of GOES-R, the numbers of optical flashes corresponding to lightning are going to be recorded. A direct correlation between optical flashes and ground-based instruments has been found, which allows for calculation of total lightning. This includes finding the density of flash initiations and the density of flash extents, produced on a coarse square grid, in addition to the normal CG data. The satellite data may not exactly match what we see now from ground-based instruments, but it will be very close. One of the biggest advantages will be the ability to see a storm in a different way that is completely independent of radar. So, if radar goes down, this will be a great thing to bring up.
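To illustrate the coarse square pattern, here is a toy sketch (my construction, not PGLM code) of a flash extent density: count how many flashes touch each grid square over a time window. The flash footprints are invented.

```python
import numpy as np

# Hypothetical flash footprints: the (x, y) grid cells each flash touched.
flashes = [
    [(3, 4), (3, 5), (4, 5)],   # one flash spanning three cells
    [(3, 4)],                   # a small single-cell flash
    [(7, 2), (7, 3)],
]

density = np.zeros((10, 10), dtype=int)
for flash in flashes:
    for x, y in set(flash):     # count each flash once per cell it touches
        density[y, x] += 1

print(density[4, 3])            # 2: two flashes touched cell (3, 4)
```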

#9. Lightning Jump Detection Algorithm. This is slightly different from the coarser square flash density products above. This algorithm uses either radar or satellite to fit a higher-resolution, storm-following band around ongoing lightning, similar to ProbSevere. The usefulness here is that the bands are color-coded to correspond to how much change there is in lightning activity, which is normalized and placed on a standard deviation scale. It turns out that when total lightning spikes and the standard deviations rise above 3, the chances for severe weather spike right along with it, and there is typically a 15 to 20 minute delay before this weather impacts the ground. This is a sweet spot of information for the warning meteorologist and really makes this the star algorithm of the HWT show. It was demonstrated on shift that this algorithm, with satellite, can still allow a meteorologist to warn without radar and have lead time. It also gives a met another method for picking out winners and losers in convective initiation mode for the first severe storms of a developing cluster. Low-resolution versions of this algorithm could be tricked into indicating a jump of 3 or more standard deviations when two storms were very close together, but when a higher resolution was overlaid it could immediately expose the false alarm.
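The sigma-level idea reduces to comparing the latest change in total flash rate against the variability of recent changes. Here is a minimal sketch, assuming one-minute total-lightning flash rates for a single tracked storm (the operational algorithm differs in detail, and the numbers are invented):

```python
import numpy as np

def jump_sigma(flash_rates):
    """flash_rates: total flashes/min for consecutive minutes, newest last."""
    dfrdt = np.diff(flash_rates)           # rate of change of the flash rate
    history, latest = dfrdt[:-1], dfrdt[-1]
    sigma = np.std(history)                # variability of recent changes
    return latest / sigma if sigma > 0 else 0.0

rates = [8, 10, 9, 11, 12, 13, 30]         # hypothetical flash-rate history
level = jump_sigma(rates)
if level >= 3:                             # the 3-sigma threshold cited above
    print(f"lightning jump: {level:.1f} sigma; severe possible in ~15-20 min")
```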

#10. Tracking Tool. Right up front, this tool is still in development and should be given some more time before a final judgment is rendered. Rather than have an algorithm follow a cell and place data in a SCAN meteogram, it gives the meteorologist the ability to define the placement of the cell, where the cell came from, and how large the cell is. The tool has usefulness when there is one storm of importance for which the met can take the time to do all the adjustments. However, there are many more negatives at the moment, some of which simply come from the fact that the algorithm still needs work to run reliably. Currently this tracking tool has a tendency to crash AWIPS 2 CAVE or hang it up for long periods of time. I also worry that during large storm outbreaks, where a radar met cannot focus on one storm, this tool will quickly become useless, along with the normal subtleties that typically disappear in those situations. The tool may end up being useful for things other than warning ops, such as tracking, timing, and trending the strength of a nonlinear wave in WV satellite imagery. As for the meteograms generated from the tool, these may be more useful if they can be popped out of the D2D display as a separate window to be manipulated or moved around the screen, similar to SCAN's method.

#11. vLAPS. This takes the traditional LAPS analysis and extrapolates it about 2 hours into the future. How exactly this is done I'm not totally sure. What I can say for this product is that the CAPE fields were particularly impressive, showing all kinds of detail from afternoon non-thunderstorm convective cells, to changes due to lakes and rivers, to outflow boundaries from collapsing storms. The level of detail and visualization for CAPE has yet to be seen by this forecaster in any other model and seems on par with, if not above, the HRRR itself. I wish I had a few more hours to play with the many other output parameters at the edge of normal convective-mode use. Alas, by the time I realized they existed and began formulating methods for testing them, vLAPS either began having internal run problems or was operating on a domain divorced from the field office I was shadowing.

For the program as a whole, I would like to recommend that scheduled hours be moved up about an hour, maybe two. This would allow the mets time to play with some of the toys in a pre-storm environment for evaluation. Almost half the tools up for evaluation needed to be run prior to warning mode to catch their effectiveness, and almost every time we started loading up ops in a CWA, the storms were firing or had already fired. I also have to say that being part of the Experimental Forecast Program's briefing each morning before beginning our own ops was very enlightening.

The program overall was an amazing blast to be part of, and I genuinely enjoyed helping to make the future tech better with what little input I was able to offer. I look forward to doing it again, but hope to wait a couple of years to allow the current tech to cycle through other opinions and a fresh wave of tech and ideas to overwhelm me yet again, if I'm ever allowed back.


Grant Hicks

General Forecaster

Glasgow, MT NWS


P.S.

For those who stayed with me this far, there are PowerPoints with the weeks in review, called "Tales from the Testbed," available here.

http://hwt.nssl.noaa.gov/ewp/internal/2014/


GGW for the represent during Week 3, W00t W00t!

http://hwt.nssl.noaa.gov/spring_experiment/tales/2014-wk3/



Forecaster Thoughts – James McCormick (2013 Week 3)

1.  Introduction

I was granted permission to attend the Hazardous Weather Testbed Warning Forecast Experiment at the National Weather Center in Norman, Oklahoma for the week of May 20 – 24, 2013.  In particular, my focus was to be on high resolution analysis and model forecasting products as well as their applications for forecasting and warning for deep moist convection.  Increasing my knowledge of storm analysis and forecasting helps with the detailed verification of various FSMT forecast products and brings cutting edge research back to our squadron.  In return, I provide feedback and discussion about the success of products to the National Severe Storms Laboratory (NSSL).

The week, as one can imagine given the dates and location, did not go as planned.  Strong tornadoes, including an EF-4 near Shawnee, had affected central Oklahoma on the day before the testbed; these storms created a need for storm surveyors from the testbed.  On Monday, shortly after the first shift began, a devastating EF-5 tornado developed just to the northwest of Norman, affecting the city of Moore particularly hard.  The event took a significant physical toll on many of the people working for the testbed, as I believe every NSSL employee with the testbed took part in surveys during the week, and it took an emotional toll on everybody in the project as well.  I know people worked very, very long, hard days this week, and that work is greatly appreciated and admired.

We did the best we could with the resources available to us for the week.

2.  Pre-Course Material

I read all of the material describing the experimental products to be tested prior to attending the testbed.  I appreciate all of the material being prepared, and I think, given the reality that I don't have an AWIPS machine to work through exercises on, I got up to speed very quickly even in the chaotic conditions of the week.

It had also helped that I had attended the testbed last year, and I had a general idea of what to expect from the project layout and what type of products to expect.

3.  Schedule

3.1 Monday, May 20

Monday’s shift began at 1 PM.  I arrived at the Weather Center at noon to have lunch with colleagues participating in the other half of the experiment.

At 1 PM, we met as a group in the development lab.  We briefly did introductions and quickly ran through the goals and the product sets we hoped to evaluate during the week.  We knew convection was going to fire rather early in the day in Oklahoma, perhaps by 2 PM.  Mr. Gabe Garfield gave a quick discussion of the expected weather conditions.  A stout atmospheric cap was quickly eroding with rapid daytime heating, and the atmospheric conditions were very favorable for severe weather, with thunderstorms likely to begin developing at any moment.

The plans for the day quickly became very ragged.  Storms fired quickly just west of Interstate 35 in central Oklahoma before we were even really settled into the lab.  I worked with Mr. Jeremy Wesely for the first portion of the afternoon.  We looked quickly at the OUN WRF products, which suggested immediate initiation.  The model reflectivity developed a very large cell in central Oklahoma that appeared to be a left-moving storm, which clued me in to the threat of hail.

Three storms very quickly fired, so we only spent a few moments in forecast mode.  Among the analysis products we were able to see early on was a strong reflectivity core at the -20 C level.  Though we weren't issuing warnings for this early portion of the testbed, we certainly would have started with the storms forming west of the OKC metro area.  A storm southwest of Norman quickly grew dominant, in line with the suggestion of the OUN WRF, and began moving toward the metro area.  We noticed the updates had stopped, as power glitches began to affect data flow at this point.

The northern supercell quickly developed a large wall cloud.  A tornado warning including Norman was issued by the OUN weather office shortly afterwards.   Seeing the storm moving to our northwest, we decided to continue to work instead of seeking shelter.  By 2:40 PM the storm was producing a tornado to the northwest of Norman.  Screens in the room were tuned to live coverage of this tornado, and everybody, both on our side of the room and the SPC experiment side, quickly grew gravely concerned with the unfolding tragedy to the north.  Several people began calling loved ones and taking care of personal business.  Warnings for this particular storm, and now the storms to the south, were clearly and obviously tornado warnings.  After a few moments of shell shock (I can't think of another word), we continued to work.  I was particularly impressed with the calm, resolved demeanor in the room even in the face of the enormous tragedy and personal stress.  Power glitches in the area due to the thunderstorm continued to affect the data flow, and it was only around 3:30 PM that data began coming into our systems.  The data, an hour or so behind, allowed for a delayed real-time analysis of the Moore tornadic supercell.  By this time, the devastation to our north was quite apparent.

The first product that jumped off of the screen at us was the tornado debris signature.  As one might expect, a violent tornado hitting a major metropolitan area creates a lot of debris, and the radar algorithm quickly picked that signature up, carrying it through the city.  We certainly did not expect to see this signature in other storms, but it was interesting to see the levels reached by the Moore tornado.

We also took a look at low- and mid-level rotation tracks.  Mid-level rotation increased dramatically near the city of Bridge Creek.  If one wasn't paying attention to the supercell and tornadic potential of this storm before, they certainly needed to be after the rotation passed Bridge Creek.

We also began paying more attention to the southern storms around this one, which were also producing severe weather.  A cloud-top cooling (hereafter CTC) rate of -28 C was noted in Wichita County, TX, at 1902Z.  The radar data would later show a 2.5 to 3 inch hail icon with that storm, and the storm would be responsible for golf-ball-sized hail by 1920Z.  In an environment where storms were developing rapidly and producing severe weather unusually quickly, the CTC product allowed for 18 minutes of lead time on the first hail report.  Baseball-sized hail would later be reported with this storm.  We also noted a tornadic debris signature in Jefferson County, OK, as the storm moved to the east.  While I don't know if any tornado was ever officially reported with this storm, it sure seemed likely that this storm was producing a tornado, and the tornado debris product would have helped confirm confidence in a tornado warning.

Around 5 PM, I was able to take over my own forecast station with the departure of one of the local testbed participants, working next to Jeremy.  We were switched to the Fort Worth coverage area to follow the supercell traveling along the Red River, moving out of the Norman counties.  The other group continued to work severe storms north and east of Oklahoma City.

Looking at the mid-level rotation tracks, analysis showed the supercell making a hard right turn.  The volatile environment suggested all modes of severe weather from supercell thunderstorms, and the rotation track product indeed confirmed that this storm was still a vigorous supercell.

I took a break around 6 PM just to breathe.  The tornado must have hit national news around this time because my phone went off several times.  I didn't spend much time with the phone; we were asked to keep cell lines free for emergency calls in the area because the entire infrastructure in central Oklahoma had been compromised, with several towers damaged or destroyed, but I called my mom at home, as I felt it was very important that she knew that both I and my brother-in-law were safe.  This was also around the time when the initial CNN fatality count was reported, which hit the entire office pretty hard.  Again, I was really impressed with how the office kept working even in the presence of such difficult news.  We took no breaks for dinner; Greg graciously ordered pizza to the office.

I began to look again at the OUN WRF to see if any other activity was to be expected.  The OUN WRF lit up with storms down to I-20 as the night progressed.  While the coverage was a bit much, another strong thunderstorm producing several tornadoes and large hail was located very close to I-20, so the geographical extent of convection in the OUN WRF did well for this particular event.  (This storm was located just to the south of our radar product area, so we did not focus on it initially.)  I did issue one severe thunderstorm warning for a small updraft west of the DFW metro area, though that storm came down quickly with no reports.  I issued the warning based on cloud ice values similar to other vigorous updrafts, and I canceled it after a few scans when it became clear the updraft had collapsed.  I would issue a couple of warnings on the southern supercell, even without experimental data, just to make sure I knew how to use the software properly and to finish the day on a somewhat normal note.

After a long and difficult day, we dismissed after completing a short survey a little after 8:30 PM.

3.2   Tuesday, May 21

Tuesday’s shift began after lunch, at 2 PM, with a strongly worded moderate risk in northeastern Texas.  I arrived at noon to have lunch with colleagues and to catch the briefing from the other testbed, the Experimental Forecast Program group.  Their overview focused on a wind threat in northeastern Texas.  Mr. Andrew Zimmerman and I were paired as forecasters for the Shreveport office; we decided to sectorize our forecasts based on Interstate 20.  I would forecast and warn for south of the Interstate; he would cover all products for north of the interstate.  Because we were the eastern CWA, we were able to take a look at some forecast products.  The other group was the Fort Worth CWA; they had convection fire very quickly and immediately went into analysis and warning mode.

The convective pattern went in three main rounds per the OUN-WRF:

1) An east-west oriented convective line pushing north along the Louisiana/Arkansas border.  Elevated in nature due to its relation to a lifting boundary, we expect these storms to be primarily a hail threat.  The updraft helicity product indicates some potential for these storms to be supercellular, raising the threat for destructive hail.

2) A pre-frontal squall line pushing in from eastern Texas, west to east.  Based on the OUN-WRF, a strong signal for rotation within updrafts in the line is noted.  Strong winds and potential tornadoes are the threats with this squall line, and we expect it to be our biggest severe weather maker.   We also note a more isolated storm well to the southwest of our CWA, down by San Antonio, with good supercell signatures in the helicity product.  That storm would have the best potential in the model to produce all forms of severe weather, but it is well out of our CWA.  We also noted some negative helicities within the line, suggesting that it wouldn't be a pure squall line but would have some embedded supercells as well, with an environment favoring splitting.  This would introduce a hail threat in the squall line while diminishing the tornado and straight-line wind threats a little bit.

3) A cold-frontal squall line pushing in from eastern Texas, west to east, a couple of hours after the first squall line.  This line is expected to be primarily a straight-line wind producer.

I then began to review the thunderstorms pushing north in southern Arkansas.  We considered a tornado warning for a supercell in Miller County based on base reflectivity appearance, but we decided against it.  While the OUN-WRF had suggested supercells, we immediately noted a strong outflow boundary pushing south of these storms.  We consulted some of the MRMS products to help with this decision.  Some low-level shear was noted on the shear track history, but nothing real was noted on the mid-level track.  Any tornado threat we might have considered early was likely gone.  We kept watching, but the environment just didn't seem favorable for surface-based storms.  No severe weather was reported with the storm we chose to leave unwarned.  One storm developed a large hail signature, but it was north and east of our county warning area.

We quickly coordinated with the Experimental Forecast Program; their final graphics had a very strong risk for winds in northeastern Texas, as they indicated in their briefing.

With the elevated storms moving harmlessly away from our CWA, we turned our attention to the weather in the Fort Worth CWA to see what would be coming our way.  We noted a couple of very strong “Cloud Top Cooling” product values – one of -27 C and one of -41 C, each for 15 minute periods.  Each of these storms was well back into the Fort Worth CWA, so we would not be handling warning responsibilities for either, but we were clued into the fact that some pretty explosive storms might be developing out to our west.

The first squall line approached our CWA in far southeastern Oklahoma, on the far edge of our CWA at the far edge of radar coverage.  Very small values were noted in the MESH algorithm, and vertically integrated ice values were smaller than those of the Texas storms the previous evening.  We also noted a little bit of potential for large hail in the HSDA product, the dual-pol radar product designed to detect large hail, but we chose not to warn for these storms.  Either by good analysis or by the luck of these storms being in extremely rural, river-valley regions, we received no reports from them.

We also noted that outflow from the initial Arkansas storms was pushing definitively to the south.  In contrast to what mesoscale models were suggesting, this feature would cause any storm to its north to become elevated, decreasing the threat for all modes of weather.

Storms later approached my sector from the west.  I issued a handful of warnings, and I actually found that the MESH product was doing a good job of matching up with wind damage reports in addition to the hail reports.  We also got damage reports from an advancing outflow boundary, but we didn't consider those reports to be of particular interest to the testbed detail.  As the threat shifted to wind with the approaching squall line from the west, we began to get nasty wind signatures aloft.  I began issuing broader severe thunderstorm warnings for wind for the entire line.  The atmosphere, somewhat stabilized by the outflow, still seemed capable of producing damaging winds.  In one regard, the storm structure indicated by the OUN-WRF succeeded; in another, no threat for embedded supercells existed because of the outflow, a weakness that even a short-range model could not account for.

At 7:10 PM, with the severe threat turning into a marginal wind threat, we decided to switch regions of the country, up to New York, where storms were a bit more isolated and had the potential to produce hail.  Andrew and I were placed in the Binghamton CWA.  We again sectorized our coverage area, this time along the north/south line of I-81.  I took responsibility for storms east of the highway.  By 7:30 PM, I was confident enough in hail signatures northeast of the radar to issue my first warning.  I warned on the radar Hail Size Discriminator in addition to the MRMS products.  Giant hail was suggested at 7:55 PM.  I continued the warnings for the next 90 minutes as the storm evolved and moved eastward.  I didn't receive a single report from any of the storms I warned for, though I do not regret the warnings.  I immediately checked the Binghamton office warning history after the shift ended; they too had been warning for these storms.  I really, really believe that these storms produced severe hail and that the HSDA algorithm did a nice job of analyzing the storm detail.  We just didn't get the reports at night in rural New York.  That's my story and I'm sticking with it.  I was admittedly a little bit exasperated by the end of this event, not having ground truth to say, for the sake of the testbed, whether or not the products had correctly indicated severe weather.   Such is life.

Again we did not take a dinner break; Gabe graciously delivered from a local restaurant, which was much appreciated.

We took a short break around 9 PM as a CNN reporter and camera came to our office following an interview in the Weather Forecast Office.  Mr. Travis Smith was interviewed about our testbed, discussing some of the products we were testing on Monday during the Moore tornado.

As of 9:15 PM, I let all of my severe thunderstorm warnings go, happily handing the storm off to the Albany CWA desk.  Shortly afterwards, we finished for the evening, completing surveys and wrapping up for the night.

3.3   Wednesday, May 22

On Wednesday, I participated on the Mesoscale Analysis desk with Mr. Andrew Zimmerman.  Our job was to analyze the performance of high resolution products, monitoring the local storm environments and providing updates to the warning desks about the environments that their CWAs would be experiencing.  We issued mesoscale discussions at roughly 3 PM, 5 PM, and 7 PM.

We were promptly greeted with the unpleasant news that the GOES-13 satellite had suffered a major failure, and that we would be without all of our satellite based products for the rest of the week unless we could completely rely on the western domain, which would require a forecast day in Montana or the intermountain west.  (We would have no such day.)  Our forecast interests again took us back to New York.

We were particularly interested in evaluating the LAPS products on this day to see how our mesoscale environment was being represented.  Temperatures were quickly warming into the upper 70s and lower 80s in central New York.  Storms formed along the theta-e maximum just south of Lake Ontario, where a lake breeze was also pushing slightly southward.  An initial squall line was pushing out of the area through the eastern portion of the Albany CWA and into New England; we were more interested in the potential for convection behind the initial line.  The SPC mesoanalysis confirmed a deep stable layer in the convective outflow.

Two storms would develop to the north, one early in the forecast period, one a couple of hours later.  Both would be severe, both would track just south of Lake Ontario, and both would be warned for by the respective CWA desks.  We kept noting the theta-E maximum to the north, well represented by the LAPS imagery.  At the Mesoscale Discussion desk, we focused more on the potential for convection elsewhere, as the environment remained fairly similar along the path of the northern supercells.  We also knew that the respective warning desks would likely be more focused on the ongoing storms, benefitting more from having another set of eyes on where new storms might form.

We grew more and more concerned as the day went on about the dry air filtering into the rest of the Buffalo and Binghamton CWAs from the southwest.  We kept seeing dry air signals represented in the environment, and kept noting that convection was going to struggle anywhere in western New York outside of that boundary in the northern portions of the respective CWAs.  We also noted that the simulated satellite imagery cleared out convection to the east and left a drier, stable environment over much of western New York.  Andrew very astutely noted that the drier air meant that south of the northern supercells, the main severe threat was transitioning quickly to a downburst wind threat instead of a hail threat.

That trend really defined the day.  I really thought the LAPS products did a great job of holding fast to the idea that dry air, characterized by 30-degree dew point depressions, was firmly in place, and that any convection trying to go up would struggle mightily.  We kept seeing storms try to fire on radar but never sustain themselves.  (It was noted that we dearly missed the chance to evaluate the satellite-based convective initiation products from UAH.  We would have loved any help we could have gotten with the developing cumulus clouds.)  Even with other high-resolution models suggesting convective development in western New York, the LAPS products really made it clear that nothing substantial was to be expected.  I also thought the LAPS products did a terrific job of representing moisture boundaries.  If I had a recommendation, I would love to see a dew point depression trend chart; how the DPD changes from hour to hour is quite interesting to me and would show areas where dry air is mixing out the boundary moisture.

We kept waiting for the upper level forcing to arrive to give elevated convection a chance to develop, but that forcing arrived after our shift ended.  The most significant activity in terms of development was terrain-induced convection in Pennsylvania moving towards the southern counties of the Binghamton area.  We did note a little area of higher moisture in the far southern counties, but these storms barely approached the CWA as the shift was ending.  I do believe a couple of warnings were issued, though I don’t know if we stayed long enough for verification.  We ended around 8 PM after completing surveys.

3.4   Thursday, May 23

On Thursday, I again participated on the Mesoscale Analysis desk, this time with Ms. Ashlie Sears.  Our shift began after lunch.  I again had a chance to catch the briefing from the Experimental Forecast Program, which covered the Texas panhandle and a very uncertain convective evolution in the presence of very light wind shear aloft.  A mesoscale complex was expected to develop, but which direction the convection would ultimately move, we did not yet know.  Different models, including the AFWA ensemble members, were showing different solutions.

Forecast interests for our group were back in the plains, this time in the Texas panhandle.  Again, GOES-13 remained out of operations, meaning satellite data was limited, but we were far enough west that we were able to use some of the products in a limited capacity.  For the fourth time this week, convection had already fired by the time we got to the lab and set up, which made model analysis and pre-convective environment identification somewhat difficult.  We did quickly run through the OUN WRF data to see what we could gain from the model runs.  Scattered storms were forecast along the Colorado/New Mexico front range, though nothing terribly robust.  To the east, convection was shown by 19Z, which verified, with stronger helicity signals near I-40 suggesting a supercell threat with all modes of severe weather possible.  The OUN WRF projected this complex to move eastward into Oklahoma, maintaining a severe threat almost to the OKC metro area.

Our first storm of interest quickly came out of the Lubbock area, where a cloud-top cooling value of -17 C/15 minutes was noted.  With GOES-13 still out, we were using satellite products completely reliant on the western imagery, which made values a bit different than usual given that we were on the edge of the domain where we could use the imagery at all.  Our LAPS mesoscale analysis showed a rich theta-e maximum through the I-27 corridor along the boundary, and convective initiation occurred very close to this maximum (within a county).  LAPS updraft helicity was more bullish in western Texas than what we saw in the OUN-WRF; I thought maybe this product's forecast was too aggressive, as it appeared to expect mesocyclones along the entire dryline during initial convective development.  Both the OUN-WRF and the LAPS products gave us the idea that the storms that went up initially would threaten with all forms of severe weather, and the Lubbock-area storm quickly delivered each of those threats, including extremely damaging winds in excess of 100 MPH.

The first storm in the AMA area developed a massive circular outflow that extended in all directions, including back to the northwest along the dryline.  We noted that it took the LAPS an hour or so to catch on to the colder, more stable air infiltrating the AMA area.  What was seemingly a prime area for convective development had become much more stable due to mesoscale influence.  After a couple of weak convective attempts on the outflow boundary, a storm showed a -18 C/15 minutes cloud-top cooling value in Potter County.  This storm did become severe for a couple of scans.  Chad Gravelle had asked us to watch the CTC product near radars; this storm was very near the AMA radar, and the algorithm performed quite well.  This storm, with large-scale winds working directly in opposition to the motion of the low-level forcing, had a motion of nearly zero, contrary to the projection of a progressive complex that several models had advertised.  The storm lasted for a couple of warnings before dissipating; away from the forcing of the outflow boundary, it could not survive in the outflow-cooled air.

We again wrote a mesoscale discussion around 5:30 PM, which became very complicated because storms in each of the three CWAs of interest were behaving differently.  Basically, we broke the MD into three sections to reflect the local influences in each region.  After this MD, outflow continued to ruin the AMA environment, the lone supercell continued to do its thing in the San Angelo area as it slowly weakened, and terrain convection in the Midland area struggled to organize away from its source in the hills.  Ms. Sears and I kept an eye on the various environments, but little changed over the last couple of hours.  Again, outflow really wrecked the "expected" severe weather environment, especially north of the Highway 82 corridor.  I love the OUN-WRF, but a forecaster really has to let go of any preconceived notions of what the day "should" look like in the face of rapidly changing conditions that models are still hopelessly outmatched against.

We ended the day a bit early to save images and collect our thoughts for the webinar on Friday.  We were each assigned an individual topic to discuss for 2-3 minutes during the webinar.  I was assigned to talk about using MRMS products in the Shreveport area on Tuesday evening.  After collecting images, we completed the last surveys for the week, and then we dismissed for the evening.

3.5   Friday, May 24

Friday began at 9 AM with a detailed discussion and review of the week’s events.  We each gave considerations about some of our favorite products and how we used those products in simulated forecasting and analysis environments.  Thoughts and considerations of products were discussed with points of contact for each product.  We then went through a practice run of our webinar before the ‘real deal’ went live at noon.  After our presentation, the group took questions from the live listening audience from around the country.  We dismissed shortly after 1 PM.  I spent the afternoon and evening with colleagues, enjoying the quiet weather at a grill out.

4.  Final Comments/Random Thoughts

Given what happened on Monday, simply getting through the week – much less as productively and seamlessly as the testbed went – seemed like a minor miracle.

I thought ordering dinner in the office was a great way to keep working while getting a bite to eat.

5.  Thanks and Acknowledgements

I would like to thank UCAR and NG for granting me permission to attend this testbed.

I would like to extend my deepest thanks to all of the workers at the testbed, including Mr. Greg Stumpf, Mr. Gabe Garfield, Mr. Darrell Kingfield, Mr. Jim Ladue, and Ms. Kristin Calhoun, for continuing the great work of the testbed in the midst of such extraordinary tragedy and chaos.  I know that each of these individuals worked extremely hard, in the face of storm surveys, to keep the testbed running as smoothly as possible.  There are no words for how well each of these individuals did their jobs this week.  I also want to thank all of the testbed members for their professional, considerate behavior during the week.  We all hurt from the tragedy in Moore, but we were all able to continue to work all week long.

James McCormick
UCAR Associate Scientist I
Aviation Hazards Team
16th Weather Squadron, Air Force Weather Agency.
Offutt AFB, Nebraska



Forecaster Thoughts – Ashlie Sears (2013 Week 3)

The 2013 EWP was held over three weeks in May, with six participants per week attending from Eastern, Central, and Southern Regions. I had the opportunity to attend the third and final week. Unfortunately, many issues arose during my week that affected the initial timeline/plan laid out for us. The first of two EF5 tornadoes to hit the OKC area ravaged Moore, OK on Monday. We were also limited in the tools we could test Wednesday and Thursday with the loss of GOES-14.

The testbed was arranged so that two forecasters staffed a meso desk for the duration of each day's shift. They would provide hourly updates on the environment or determine if the area of concern needed to be shifted. There were then two teams of two who were given responsibility for different CWAs, providing warnings for their respective areas.

Timeline for the Week

Sunday – Arrived in Oklahoma.

Monday – Introduction to the testbed, overview of the week, and discussion of expectations. The Moore tornado occurred at the beginning of this shift, and with many of the people helping to run the testbed affected by the storm (e.g., their kids in its path), along with the technical issues we were dealing with, there was a lot of confusion on the first day, limiting what we were able to accomplish. We were unable to use any of the testbed products in real time to analyze the tornado as it occurred, though we were able to go back later in the evening and analyze the day.

Tuesday – Warning operations over the Fort Worth and Shreveport CWAs early on, followed by the Albany CWA in the evening.

Wednesday – Warning operations over the Buffalo and Binghamton CWAs.

Thursday – Warning operations over the Amarillo, Lubbock, San Angelo, and Midland CWAs.

Friday – Week wrap-up, with a 30-minute national call discussing things learned during the week.

Lessons Learned

With a shortened testing period this year, each week's group was asked to analyze all the tools being tested while focusing on two primary areas. We presented nationally in the "Tales from the Testbed" webinar on the last day, covering these two specific topics and discussing best practices and how we found we could utilize the tools in our own office operations. The focus for my week covered the Multi-Radar Multi-Sensor (MRMS) products and the Hail Size Discrimination Algorithm (HSDA). In addition to the tools/experiments, we were asked to provide feedback on the usage of AWIPS 2 and the whole warning generation process.

Mesoscale Analysis Tools

There were two tools we were asked to utilize in determining the mesoscale situation. Unfortunately, these tools are restricted to a domain covering Oklahoma, northern Texas, and the general vicinity. However, talking with the representative for the LAPS forecast, I was informed that they would eventually like to expand to the rest of the country, depending on the success they find in the southern Central Plains.

The OUN-WRF and the LAPS 1 km and 3 km forecasts cannot be utilized up here in the Northeast. However, the LAPS 2.5 km analysis is available to be used at this time. I found during our warning situations that using the theta-e values from LAPS was a best practice in figuring out where convection was going to initiate and continue to form. The ability of AWIPS 2 to let multiple layers be easily overlaid, and the easy zooming in and out without constantly having to reload the frame, allowed the forecaster to analyze the situation at a much quicker pace and obtain a much clearer picture. I found it beneficial to overlay the analyzed 2.5 km CAPE values with the latest reflectivity images, then compare how the LAPS 1 and 3 km runs forecast these parameters over the next few hours. Overall, the LAPS had the better grasp of the type and location of storm development, though its timing was off by up to 1-2 hours. It also runs at 15-minute intervals, allowing a more up-to-date picture of what is occurring. The OUN-WRF had a better handle on the timing of development, but was off on the location.

I would really like to see the LAPS forecasts in our region of the country, as they proved quite useful in determining the most likely area of convection. In addition, in discussing other potential uses, the LAPS rep informed me they have found it useful in forecasting the rain/snow line during winter events, producing fairly accurate forecasts, which would also be very useful in the Northeast.

Convective Initiation Tools

Unfortunately, I wasn't able to test these tools as thoroughly as the other experiments. On Tuesday, we arrived after initiation had begun, so we were unable to do any pre-event analysis. On Wednesday, we lost these tools when the GOES-14 satellite was down and our warning domain was outside GOES-13's view. Luckily, on Thursday we were able to analyze the CI tools for the storms that occurred over northern Texas, allowing some familiarity, though not as much as I had hoped. One side note I would like to make: I have been analyzing these tools, available via the web, to see how each handles CI here in the Northeast. For the couple of convective events we have had in the past few weeks, I have found that the CI tool has given about 1 hour of lead time for areas farther inland, but it is still having issues capturing the potential for convective development along the coast. One issue these tools do have is cirrus contamination, and this was noted in several areas we were analyzing during the testbed.

Warning Tools

The experiments that were the focus of the week were the MRMS and the HSDA. The incorporation of the MRMS and HSDA will be very beneficial in warning operations. These products combine data from multiple sources, performing QC on the data and then producing a final result that depicts a clearer image of what is actually occurring, compared to using just one radar/source.

One of the more interesting aspects of the MRMS was the ability to use something other than reflectivity and velocity to create your warning polygon. For the issuance of tornado warnings, using either the 30-minute or 60-minute rotation track allowed the forecaster to see the trend of rotation, its strength, and the path it was taking. While we might otherwise need four or more panels to see rotation throughout the vertical column, using the rotation track products let us see general rotation strength in the lower levels as well as in the mid-levels just by looking at two screens. The color coding of the product also gave us an indication of the severity of the situation and caught our attention more easily than diagnosing an all-tilts product. The same concept applied to the MESH tracks, which let the forecaster see the track of the core producing severe hail while looking at just one product (or a few, depending on the time scale) versus having to have an all-tilts display up, which takes space and time. One other aspect I found very beneficial with the MRMS was the ability to set up a four-panel screen with reflectivity at the 0°C, -10°C, and -20°C levels compared to the lowest-level reflectivity scan. This allowed for a quicker diagnosis of whether severe hail was being produced within the hail growth zones. All the while, because these products incorporate many different sources, the forecaster also does not have to trouble themselves with bringing up all the surrounding offices' radars in addition to the local radar. This could help save memory within AWIPS, allowing the program to run quicker, and cut down general diagnosing time for the forecaster.
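As a rough illustration of that four-panel idea, here is a toy sketch (my construction, not MRMS code) of sampling a merged reflectivity column at the 0°C, -10°C, and -20°C levels; the sounding and reflectivity profile are invented.

```python
import numpy as np

heights_km = np.arange(1.0, 13.0)       # 3D mosaic levels (km AGL)
temps_c = 20.0 - 7.0 * heights_km       # fake sounding, ~7 C/km lapse rate
refl_dbz = np.array([55, 58, 60, 57, 52, 48, 45, 40, 30, 20, 10, 5.0])

for iso in (0, -10, -20):
    # np.interp needs ascending x, so reverse the (descending) temperatures.
    h = np.interp(iso, temps_c[::-1], heights_km[::-1])   # isotherm height
    z = np.interp(h, heights_km, refl_dbz)                # dBZ at that height
    print(f"{iso:+4d} C near {h:.1f} km: {z:.0f} dBZ")
# Strong reflectivity (>50 dBZ) spanning the hail growth zone is a red flag.
```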

The HSDA products also proved quite valuable in determining the hail potential of a storm. Once it is determined that an area of a storm is possibly producing hail, the HSDA then informs you of the potential size. We had great success in verifying large versus giant hail in the storms of the southern Central Plains. Unfortunately, we were unable to use the product for the storms here in the Northeast due to a misunderstanding of its use. But I would be curious to see how well storms up here verify against the HSDA as well.

AWIPS2

Overall, I had great success utilizing AWIPS 2 in the warning operations during the testbed. The one main issue that occurred when I was producing warnings was that if a warning polygon happened to overlap into a different CWA when I first constructed it, then even after hitting the Warned/Hatched Area button, the full polygon overlapping into the other CWA was still issued. This was presented to the IT rep during the testbed, to hopefully be corrected. Otherwise, the functionality of AWIPS 2 really did speed up the analysis as well as the warning process.

Ashlie Sears
General Forecaster
NWS Upton NY (New York City)
2013 Week 3 Evaluator


Forecaster Thoughts – Rebecca Mazur (2013 Week 2)

The Experimental Warning Program provides an invaluable opportunity to test the utility of new products and methods in an environment that mimics standard severe weather operations.   In no other way can newly developed products be assessed properly, with strengths and weaknesses identified, in such a fast-paced and high-stress environment.  From my experience, and from hearing from other forecasters and researchers involved in the EWP, it was a truly invaluable experience, not only helping research meteorologists transition their products to warning operations, but letting forecasters see the future of warning operations.

The products we tested included high-resolution mesoscale and storm-scale models for analyzing the near-storm environment, and new storm interrogation products for diagnosing storm severity.  Many of the products proved extremely useful for understanding and forecasting storm behavior from the ambient conditions.  In particular, utilizing the OUN WRF and LAPS high-resolution instability fields gave good indications of why some storms intensified while others weakened or remained unchanged.  These products also helped the forecaster quickly adjust their mindset from thinking it would be a fairly marginal severe weather day to being on high alert for potentially dangerous storms.  In regards to certain storm interrogation products, MRMS rotation tracks were very useful in making warning decisions during the supercell cycling process and when analyzing cyclone strength, more so than utilizing base radar products alone.  In addition, MRMS echo top heights increased situational awareness for rapidly developing thunderstorms likely to produce large hail.  Based on personal experience working in a WFO with not-so-ideal radar coverage, the echo top heights product will be extremely useful, as it is imperative to utilize multiple radars for proper storm interrogation.  This product will be a sort of "one stop shop" for large hail diagnosis in a storm, rather than constantly trying to keep up with all-tilts and mid/upper max reflectivity products from multiple radars.  Of particular use will be the identification of potential large hail producers within storm clusters in the low-shear, high-instability environments common in the high plains in mid to late summer.

One final suite of products to mention is the GOES-R products. We looked at convective initiation and growth potential, in addition to the GOES-R airmass product.  Incorporating these products into the EWP gave the forecasters hands-on involvement with the future of satellite data.  Some forecasters had not had much experience with the GOES-R products, compared to folks like myself who are heavily involved in the Proving Ground.  The EWP therefore provided them the ability to see what the future of satellite meteorology will entail, which falls in line with the GOES-R Proving Ground objectives.

Not only was the opportunity to test new products and methods a gratifying experience, but interacting with other forecasters from various parts of the country, in addition to the EWP facilitators, proved invaluable as well.  In our agency, it's a rare occasion to actually spend time with or work alongside other meteorologists within NOAA/NWS to share forecast tools and methodologies and to discuss agency-related topics and the current and future state of the science.  There is much to be gained by sharing ideas and tools across WFO boundaries, in addition to developing new relationships within the agency.  For me personally, I was able to gauge where I and my office stood against other WFOs in performance and involvement with emerging technology.  I learned a lot during my week at the EWP, and I was able to bring back some new ideas and concepts to my office to better serve the people of southeast Wyoming and the western Nebraska panhandle.

Thank you for giving me the chance to participate in the 2013 EWP!

Rebecca Mazur
General Forecaster
NWS Cheyenne WY
2013 Week 2 Evaluator

Forecaster Thoughts – Michael Scotten (2013 Week 2)

EWP2013 was a great opportunity to test experimental products and procedures, as well as to share ideas with operational and research meteorologists. Thank you for giving me this opportunity!

The derived satellite imagery products were very useful in warning operations and near-term forecasting. In one exceptional case, the UAH CI product provided 20-30 minutes of lead time for storm development, 75-90 minutes for the onset of severe weather, and 120 minutes for tornado occurrence in Montague County, Texas during the historic tornado outbreak across north Texas on Wednesday, May 15. Also on this date, the UW NearCast vertical theta-e difference maxima east of a dryline depicted the location of initial storm development in north Texas, which was a little unexpected, as convection had been forecast to develop farther west, closer to the dryline. On Thursday, May 16, storms struggled to develop and intensify within a minimum in the UW NearCast satellite-derived 700-300 mb precipitable water across southeast Colorado and southwest Kansas. Having consistent UW-CTC depictions of -10°C/15 min or less for two or more scans can give 30-90 minutes of lead time for thunderstorm development, even in weakly forced environments. Furthermore, GOES-R CIRA simulated IR images can be valuable for depicting short waves and mid/upper level moisture, which can enhance near-term situational awareness. These simulated images would be handy to import into GFE/IFPS via a smartTool or procedure for making near-term forecast updates.
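
The “two or more consecutive scans” heuristic is easy to express in code. Below is a minimal sketch, assuming a hypothetical per-storm time series of UW-CTC cooling rates (one value per satellite scan); the function name and defaults are illustrative only.

```python
def ctc_alert(cooling_rates, threshold=-10.0, n_consecutive=2):
    """Flag a developing storm once the cloud-top cooling rate
    (deg C per 15 min; more negative means faster cooling) meets the
    threshold on n_consecutive successive satellite scans.

    cooling_rates : per-scan cooling rates for one cloud object.
    Returns the index of the scan that completes the run, or None.
    """
    run = 0
    for i, rate in enumerate(cooling_rates):
        run = run + 1 if rate <= threshold else 0
        if run >= n_consecutive:
            return i
    return None

# The alert fires on the third scan (index 2), the second consecutive
# scan at or below -10 deg C per 15 min:
print(ctc_alert([-4.0, -12.5, -16.0, -11.0]))  # -> 2
```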

For issuing warnings, MRMS data can be especially valuable. In particular, I liked using the MRMS-derived reflectivity at the -20°C level for detecting large hail, especially at long ranges from radar sites or where radar coverage was diminished. MESH confirms the presence of large hail and complements reflectivity products, increasing forecast confidence when making critical warning decisions.
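
For context, MESH is commonly computed from the Severe Hail Index (SHI) of Witt et al. (1998). The single-column sketch below follows that published formulation; the array inputs and function name are hypothetical, and real MRMS processing operates on full 3-D merged grids.

```python
import numpy as np

def mesh_mm(z_dbz, heights_m, h0_m, hm20_m):
    """Simplified single-column MESH after Witt et al. (1998).

    z_dbz     : reflectivity profile (dBZ), one value per level
    heights_m : level heights (m), ascending
    h0_m      : height of the 0 deg C (melting) level
    hm20_m    : height of the -20 deg C level
    """
    z = np.asarray(z_dbz, float)
    h = np.asarray(heights_m, float)

    w_z = np.clip((z - 40.0) / 10.0, 0.0, 1.0)    # 0 below 40 dBZ, 1 above 50
    e_dot = 5e-6 * 10.0 ** (0.084 * z) * w_z      # hail kinetic-energy flux
    w_t = np.clip((h - h0_m) / (hm20_m - h0_m), 0.0, 1.0)  # cold-layer weight

    integrand = w_t * e_dot                        # trapezoidal integration
    shi = 0.1 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(h))
    return 2.54 * np.sqrt(max(shi, 0.0))           # MESH in mm
```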

High-resolution model data will only improve and become increasingly important for near-term forecasting in the years to come. The 1 km LAPS data are tremendous at depicting storm-scale subtleties in weather elements such as CAPE, LI, and surface wind. In one example, storms intensified as they moved into an area of higher LAPS-derived CAPE. The OUN WRF model output was also beneficial, especially in determining storm mode, timing, and location several hours prior to convective development.
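
As a refresher on what those instability fields represent, here is a minimal sketch of CAPE and lifted index calculations from parcel and environment profiles; the inputs are hypothetical, and analyses like LAPS of course derive these fields internally.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def cape_jkg(tv_parcel_k, tv_env_k, heights_m):
    """CAPE (J/kg): integrate positive parcel buoyancy with height.
    Inputs are virtual temperature profiles (K) on common ascending levels."""
    tvp = np.asarray(tv_parcel_k, float)
    tve = np.asarray(tv_env_k, float)
    h = np.asarray(heights_m, float)
    buoy = np.maximum(G * (tvp - tve) / tve, 0.0)   # keep positive area only
    return float(np.sum(0.5 * (buoy[1:] + buoy[:-1]) * np.diff(h)))

def lifted_index(t_env_500_c, t_parcel_500_c):
    """LI (deg C): environment minus lifted-parcel temperature at 500 hPa.
    More negative values indicate greater instability."""
    return t_env_500_c - t_parcel_500_c
```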

These products are fantastic for warning and near-term forecasters to have, improving situational awareness and ultimately supporting better decisions. I would like to see them implemented at a national level. Increased POD and lead time, as well as decreased FAR for storm-based warnings, will likely result from using these innovative products. Incorporating research and new products into operations will be vital in the next several years to help ensure the NWS provides the best service possible.

Michael Scotten
Senior Forecaster
NWS Norman/Oklahoma City
2013 Week 2 Evaluator

Forecaster Thoughts – Joey Picca (2013 Week 2)

During the week of May 13-17, I was able to participate in the 2013 NOAA Hazardous Weather Testbed Experimental Warning Program (EWP). The week is characterized by a significant collaborative effort among NWS forecasters, NSSL scientists, and principal investigators from other research institutes, and it aims to evaluate and improve experimental mesoscale and storm-scale products. Among those evaluated were LAPS and OUN-WRF hi-res model output, GOES simulated imagery and sounder data, convective initiation probabilities, cloud-top cooling rates, multi-radar multi-sensor (MRMS) rotational and hail tracks, and dual-polarization hail size and tornadic debris algorithms.

Each day, following a pre-operations briefing, the six forecasters were split into three teams of two: generally one team on a mesoscale forecast desk and the other two on warning desks for two different CWAs. Indeed, one of the great features of the EWP is the way it replicates warning operations at an actual forecast office. As storms begin to develop, forecasters are tasked with issuing real-time (but experimental) warnings using the experimental products, in addition to more conventional products and real-time spotter reports via NWSChat, etc. The NWS forecasters therefore work in an environment in which they can provide very valuable feedback to the principal investigators, who can then further improve the products.

Without question, operations on the Wednesday of my week, when a tornado outbreak occurred across the Fort Worth CWA, provided our best opportunity to evaluate the products thoroughly. We entered the day expecting severe weather, but primarily large hail and high winds, not an outbreak of tornadoes. However, as the afternoon progressed, the experimental mesoscale and hi-res model data started to alert us to a higher threat than we originally thought. For example, the OUN-WRF clearly displayed tracks of higher updraft helicity values progressing across North Central Texas during the evening. As storms developed and the event began to unfold, multiple tornadic storms provided a significant challenge for warning operations, especially considering that only two of us were working the FWD warning desks. However, the MRMS rotational/azimuthal shear and estimated hail size tools greatly improved the efficiency and ease of our analysis by letting us check storm trends relatively quickly.
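
For readers unfamiliar with updraft helicity, it is typically defined (e.g., in Kain et al. 2008) as the integral of vertical velocity times vertical vorticity over the 2-5 km layer. A minimal single-column sketch, with hypothetical inputs, follows:

```python
import numpy as np

def updraft_helicity(w_ms, zeta_s, heights_m, z_bot=2000.0, z_top=5000.0):
    """2-5 km updraft helicity (m^2 s^-2) for one model column:
    integrate vertical velocity (m/s) times vertical vorticity (1/s).
    Large values flag rotating updrafts, i.e. likely supercells."""
    w = np.asarray(w_ms, float)
    zeta = np.asarray(zeta_s, float)
    h = np.asarray(heights_m, float)
    layer = (h >= z_bot) & (h <= z_top)
    wz, hh = (w * zeta)[layer], h[layer]
    return float(np.sum(0.5 * (wz[1:] + wz[:-1]) * np.diff(hh)))
```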

During the “Tales from the Testbed” webinar at the end of the week, I made sure to emphasize how the new tools improve a forecaster’s ability to synthesize the enormous amount of data involved in severe weather operations. Even in a case where the event is more significant than originally anticipated, a forecaster can therefore stay ahead of the situation. Additionally, the principal investigators were quite receptive to our suggestions for further improvements, so I fully expect these tools to be further developed over the coming months and years. I was quite honored to be chosen for this program, and I believe it is a wonderful opportunity for collaboration and the betterment of future warning operations.

Joey Picca
Meteorologist Intern
NWS Upton NY (New York City)
2013 Week 2 Evaluator

Forecaster Thoughts – Andy Hatzos (2013 Week 1)

Andy wrote a very detailed and illustrated summary of his visit to the EWP.  It is available here:

ftp://24.172.201.242/ewp_review_hatzos.pdf

Aside from the above, he added a few more quick items:

1) I struggled a bit on the mesoscale desk, and a part of that was finding that my data-heavy AWIPS I procedures (which work fine here) were almost unloadable on AWIPS II. Because of that, I had to simplify and rebuild a bunch of things on the fly. For next year’s experiment, it may be worth asking participants to create their procedures in advance on an AWIPS II system (ADAM) before sending them on to Oklahoma. That way, they could know for sure that what they’re sending works alright on AWIPS II.

2) The GOES RGB Airmass product seemed to get somewhat overlooked. I don’t recall seeing it in the training presentations (though we had a WES case on it), and it wasn’t emphasized heavily at the EWP. I’d love to see it covered a bit more next year, on the same level as some of the other GOES-R Proving Ground products. The RGB Airmass is admittedly difficult to interpret, but rather interesting to try to use once you get an idea of what you’re looking for. I definitely think it’s worth a closer look.

Finally, I just wanted to say thanks again for the invitation to attend. I really enjoyed the week and I hope I was able to help make some progress toward everything the EWP was working for.

Andy Hatzos
General Forecaster
NWS Wilmington OH
2013 Week 1 Evaluator

Forecaster Thoughts – Jonathan Guseman (2013 Week 1)

Here is what I took away from the experiment last week.

Of the MRMS products, Maximum Estimated Size of Hail (MESH) was the most useful, since many of the storms we evaluated were primarily hailers. It did underestimate hail size by around 0.50 inches some of the time, especially within colder airmasses (e.g., the cutoff low that lingered across the eastern CONUS early last week). However, it proved effective most of the time across the plains. Plotting the 30-, 60-, or 120-minute accumulated MESH was nice for extrapolating a storm’s track and subsequently finessing the extent of a warning polygon (a sketch of the idea follows below). Several other products were also useful, including the height of the 50 or 60 dBZ core above the 0 or -20°C level, as well as isothermal reflectivity products.
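
The accumulated-MESH swath is essentially a running maximum over a time window. A minimal sketch, assuming a hypothetical list of MESH grids on a fixed grid (newest last), might look like this:

```python
import numpy as np

def mesh_swath(mesh_grids_mm, window_scans):
    """Running-maximum MESH 'accumulation' over the last N scans,
    mimicking 30/60/120-minute swath products: each pixel keeps the
    largest MESH seen in the window, tracing the storm's hail track."""
    stack = np.stack(mesh_grids_mm[-window_scans:])   # (n_scans, ny, nx)
    return stack.max(axis=0)
```

With roughly 2-minute MRMS updates, a 60-minute swath corresponds to a window of about 30 scans.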

When evaluating the HSDA, POD was very high, but FAR was also high. The spatial extent of giant hail was overdone at times, but a hail event was rarely, if ever, missed.
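
As a reminder of what those verification scores mean, here is a tiny sketch; the event counts are made up for illustration and are not the experiment’s actual statistics.

```python
def pod(hits, misses):
    """Probability of detection: fraction of observed events detected."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: fraction of detections with no observed event."""
    return false_alarms / (hits + false_alarms)

# Hypothetical HSDA-like behavior: nearly every hail event caught,
# but giant hail indicated too broadly:
print(round(pod(hits=45, misses=3), 2))          # 0.94 -> high POD
print(round(far(hits=45, false_alarms=30), 2))   # 0.40 -> high FAR
```
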
The OUN WRF showed utility in forecasting convective initiation in the short term, but it missed Wednesday’s event, as it forecast a complex to develop near the Red River Valley. Switching to the RAP for initial conditions will likely benefit the model. The variational LAPS analysis was useful in the 0-3 hour timeframe, providing 15-minute forecasts in the short term. Lower-resolution extended guidance was also available but not used as much.

Lightning jumps using total lightning data were evaluated with the Flash Extent Density product. The 8 km resolution is pretty coarse for evaluating discrete areas, but the product does give a good general idea of where to focus. The idea of drawing a polygon to evaluate a specific area of total lightning data will benefit the Total Lightning Trending tool (a sketch of the jump test itself follows below).

Simulated IR and WV imagery look very similar to real-time data and are very useful for getting a feel for convective evolution throughout the day. They underestimate the spatial extent of features such as MCCs quite a bit, but the idea is certainly there. Fields like convective instability and PWAT difference from the NearCast product can be evaluated to aid short-term convective forecasting.

The UAHCI product can be sporadic at times, showing low probabilities of CI where flat cumulus fields exist, but the larger values usually do well, especially along boundaries (e.g., the dryline). Cloud-top cooling via the UWCTC product is also very helpful in diagnosing storms capable of becoming severe. Cooling of -20°C/15 min has been shown to provide extended lead times, sometimes on the order of one hour. Both the UAHCI and UWCTC products suffer from cirrus contamination. They both struggled on one shift when convection was ongoing early, but they were very helpful prior to CI where skies were mostly clear.
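
A common formulation of the lightning-jump test is the 2-sigma algorithm of Schultz et al. (2009): flag a jump when the latest change in a storm's total flash rate exceeds twice the standard deviation of its recent rate-of-change history. A minimal sketch, with a hypothetical per-storm flash-rate series (exactly what a user-drawn polygon would supply), follows:

```python
import numpy as np

def lightning_jump(flash_rates, sigma_mult=2.0):
    """2-sigma lightning-jump test after Schultz et al. (2009).

    flash_rates : total flash rates (flashes/min) for one storm or
                  polygon, one value per period, oldest first; needs
                  several periods of history to be meaningful.
    """
    rates = np.asarray(flash_rates, float)
    dfrdt = np.diff(rates)                   # rate of change per period
    history, latest = dfrdt[:-1], dfrdt[-1]
    return latest > sigma_mult * np.std(history)

# Steady storm, then rapid electrification in the final period:
print(lightning_jump([10, 12, 11, 13, 12, 14, 35]))  # -> True
```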

Thanks again for the opportunity to participate and hopefully I’ll see you in upcoming experiments!

Jonathan Guseman
General Forecaster
NWS Lubbock, TX
2013 Week 1 Evaluator

Forecaster Thoughts: Cloud Top Cooling Products for Aviation

From an email conversation:

Kristen Schuler (CWSU, Kansas City, MO) writes:

“So right now the cloud top cooling and sat cast products are a great way to predict severe storms /areas of severe hail potential. It would be nice to have a product that forecasts echo tops exceeding FL400…something that poses a significant impact to aviation. Is that possible? What are your thoughts?”

Wayne Feltz (UW-CIMSS, Madison, WI) replies:

“Yes, we received this same feedback from another CWSU forecaster out of Houston.  I have attached a paper (currently in review with major revisions, so it should be published soon) where this relationship has already been established for 18, 30, and 50 dBZ echo top heights.  I think we can do more to provide a better correlation between expected echo top height and CTCR.  We also want to compare CTCR and cloud top growth (feet or meters per 15 minutes) to see how this relationship fares.

“The relationship in the paper is shown in Figure 7.  We plan on making a separate CTC training module, but with a focus on aviation meteorologist decision support issues rather than a severe thunderstorm warning focus.”
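
Establishing such a relationship amounts to regressing observed echo tops against cloud-top cooling rates. Purely to illustrate the mechanics (the numbers below are invented and merely stand in for the paired samples behind the paper’s Figure 7):

```python
import numpy as np

# Invented paired samples: cooling rate (deg C / 15 min; more negative
# is stronger) vs. the storm's later-observed 18-dBZ echo top (kft).
ctcr = np.array([-4.0, -8.0, -12.0, -16.0, -20.0, -26.0])
echo_top_kft = np.array([28.0, 33.0, 36.0, 40.0, 43.0, 47.0])

# Least-squares linear fit: echo_top ~ a * ctcr + b
a, b = np.polyfit(ctcr, echo_top_kft, 1)

def expected_echo_top_kft(rate):
    """Predict echo top from a cooling rate using the fitted line."""
    return a * rate + b

# Aviation-style question: will a storm cooling at -18 C / 15 min
# likely top FL400 (~40 kft)?
print(expected_echo_top_kft(-18.0) > 40.0)   # -> True for this toy fit
```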

Greg Stumpf, EWP2012 Operations Coordinator

Forecaster Thoughts – Chris Sohl (2011 Week 4)

I think that both operational forecasters and program developers benefit from the opportunity to interact with each other that the EWP 2011 program provided. Forecaster participants are introduced to new tools that are becoming available. Not only do they have an opportunity to make a preliminary evaluation of each tool, but also to explore how it might be incorporated into an operational setting. It was a plus having folks knowledgeable about the new tools on hand to answer questions and to suggest possible ways forecasters might use them. This interaction should result in a better product by the time the new tools are delivered to the entire field.

Some of the datasets explored in EWP 2011 included convective initiation schemes and storm top growth. Based on initial impressions from only a few days of working with the data, the UAH CI product seemed to have a greater FAR for CI than the UW product, which itself seemed too conservative. While a high FAR might at first glance seem like poorer performance, I think the UAH product may still provide useful information (for example, a sense of how the cap strength might be evolving).

In the short amount of time I had to look at the satellite-derived theta-e/moisture fields, I saw enough to keep me interested in spending more time evaluating these products. The opportunity to discuss possible product display methodologies with Ralph Petersen was helpful.

The 3D-VAR dataset looked very interesting and seems to have potential to provide useful information. There were some cases where the strongest updrafts appeared to be in the trailing part of the storm, and it would be interesting to see whether that behavior was strictly an artifact of the algorithm or a function of the variability of updraft strength at various levels in the storm (one simple way to quantify it is sketched below). I would also like more opportunity to examine some of the other fields (vorticity, etc.) in several different storms, to see if there might be a signal that could give the forecaster a heads-up on what kind of short-term storm evolution to expect.
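
One simple, hypothetical way to put a number on the “trailing updraft” question: measure how far the analysis’s strongest column-max updraft sits from the storm’s low-level reflectivity centroid, scan after scan. The grids and the 20 dBZ threshold below are illustrative assumptions, not part of the actual 3D-VAR system.

```python
import numpy as np

def updraft_offset_km(w_colmax, refl_lowlevel_dbz, dx_km=1.0):
    """Distance (km) from the low-level reflectivity centroid to the
    column-maximum updraft, for one storm-centered analysis grid.
    A persistently large, rearward offset across scans would hint at
    a systematic artifact rather than real updraft variability."""
    iy_w, ix_w = np.unravel_index(np.argmax(w_colmax), w_colmax.shape)
    wgt = np.clip(refl_lowlevel_dbz - 20.0, 0.0, None)  # weight echo > 20 dBZ
    ys, xs = np.mgrid[0:wgt.shape[0], 0:wgt.shape[1]]
    iy_c = (ys * wgt).sum() / wgt.sum()                 # assumes a storm exists
    ix_c = (xs * wgt).sum() / wgt.sum()
    return float(np.hypot(iy_w - iy_c, ix_w - ix_c) * dx_km)
```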

I appreciate that some of the participating organizations continue to make much of their data available online following the conclusion of the spring experiment. Not only does this keep me from forgetting about a new product six months later, it also allows me to further explore how I might better incorporate the new datasets into my shift operations. A further review of a product that initially seemed to have minimal operational value may show it provides more utility than I originally thought.

Chris Sohl (Senior Forecaster, NWS Norman OK – EWP2011 Week 4 Participant)