Summary of Operations
Team 1: Pelczynski and Anderson (Norman, OK)
Team 2: Fowle and Satterfield (Wichita, KS)
Team 1: Pelczynski and Anderson (Hastings, NE)
Team 2: Fowle and Satterfield (North Platte, NE)
Team 1: Fowle and Anderson (Louisville, KY; Springfield, MO; Cheyenne, WY)
Team 2: Pelczynski and Satterfield (Boulder, CO)
Team 1: Fowle and Anderson (Boulder, CO)
Team 2: Pelczynski and Satterfield (Pueblo, CO; Huntsville, AL)
Comments on Experimental Products:
– Forecasters really liked the CAPE analysis; it helped them locate boundaries.
– Forecasters felt that the model forecast didn’t add to their skill, but the analysis did.
– Forecasters would like to see the supercell composite and significant tornado parameters in future versions of vLAPS (as well as mesoanalysis products from SPC).
– Forecasters believe vLAPS “overconvects” less than other models; they caution other forecasters not to throw out the forecast.
– Forecasters like the re-locatable domain, especially on big risk days. However, the domain wasn’t quite large enough to capture every event (i.e., the 200 x 200 domain is a bit small).
– Forecasters like having a theta-e forecast.
– Forecasters believe the model is good in a qualitative sense. However, the first run convected too early, though later runs caught up with reality.
– Forecasters thought the placement of developing convection was good.
– The first two hours of the high-res models were not as good; simulated IR cloud brightness can be used to check how the model is doing.
– Forecasters would like to see a time ensemble.
– Forecasters like the model going out to 8 hours, because it gives the model time to spin up.
– Forecasters suggest the use of “nudging” to improve the initialization.
– Forecasters suggest coordination for high-res modeling. There seems to be some redundancy in the models.
– The model missed convection in one case because it missed the cirrus shield.
– Forecasters like the product to help them with the big picture (e.g., shortwaves). It increases their confidence in their forecast.
– That said, forecasters find it hard to put confidence in details. How much do you trust the models?
– Forecasters suggest displaying a combination of SimSat with reflectivity to see what features are associated with the cloud.
– Forecasters feel SimSat is very valuable.
– They would like to see SimSat for the HRRR.
– Forecasters think it is an easy way to spot errors in the model.
– Forecasters note that precipitable water / theta-e helped to show where CI would occur (i.e., on strong gradients). They used the visible satellite in combination with those products to see where the boundary would progress. This worked well on at least one occasion.
– Another forecaster mentioned using the NearCast theta-e product in comparison with vLAPS CAPE. They used it to spot boundaries / instability.
– Forecasters note that NearCast is good as a qualitative tool (i.e., where should I focus?).
– The NearCast is good to use before convection, but is not as useful after (given that storms have already fired, and so CI is already established).
– One forecaster preferred the theta-e difference product. She noted that it is better than anything at her office, and that it is nice to overlay a theta-e image on satellite or radar. She thinks it’s helpful from a forecasting standpoint because it shows where CI is most likely. After convection formed, she didn’t look at it, but it was good for the 3 hours before CI. She also mentioned that she prefers the NearCast product to the SPC theta-e product (the latter is too noisy).
– NearCast picked up subtle gradients in moisture. In one instance, this corresponded to showers that went up in Colorado.
– One forecaster mentioned that this product could be useful for cold-air damming or sea breezes.
– One forecaster would like to see a change in the color scale.
– One forecaster didn’t see a lot of utility in precipitable water at such high resolution; they tended to focus on theta-e and theta-e difference. Other forecasters disagreed, however.
– Some forecasters think this is a calibration issue. That is, they don’t use theta-e difference very often, so they are not sure what it means. Perhaps, instead of theta-e difference, use CAPE, deep moisture convergence, or frontogenesis. They believe that new algorithms could be helpful.
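For reference, the theta-e quantity behind the NearCast and vLAPS displays can be approximated as below. This is a hedged sketch using a common textbook approximation, not the exact formulation used by either product; the constants and the formula choice are assumptions.

```python
import math

def theta_e(temp_k, press_hpa, mixing_ratio):
    """Approximate equivalent potential temperature (K).

    temp_k:        air temperature (K)
    press_hpa:     pressure (hPa)
    mixing_ratio:  water-vapor mixing ratio (kg/kg)
    """
    R_D_OVER_CP = 0.286  # dry-air gas constant over specific heat
    LV = 2.5e6           # latent heat of vaporization (J/kg)
    CP = 1004.0          # specific heat of dry air (J/(kg K))
    # Potential temperature via Poisson's equation.
    theta = temp_k * (1000.0 / press_hpa) ** R_D_OVER_CP
    # Add the latent-heat contribution of the water vapor.
    return theta * math.exp(LV * mixing_ratio / (CP * temp_k))
```

Because moister air yields a higher theta-e, strong theta-e gradients mark the moisture boundaries along which the forecasters were anticipating CI.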
– Forecasters indicate that cloud obscuration – i.e., high cirrus – hindered the product at times.
– Forecasters prefer to look at high values of CI only (strong signals).
– Forecasters would like a quantitative value of growth available (like Cloud Top Cooling), rather than a simple probability. It would add more value to their interrogation. (Something to add to the cursor readout, perhaps?)
– Our broadcaster indicated that he could see great value in the CI product for TV.
– One forecaster mentioned that on one day during Week 4, it didn’t fit their conceptual model of how to use the product.
– A forecaster noted that it worked well outside of the cirrus shield. In that case, the CI product was valuable.
– One forecaster mentioned that the output is a little too cluttered – that it confused more than it helped.
– Forecasters think ProbSevere is a good tool – a very good “safety net”.
– They would like to see a little more calibration on some of the thresholds. Right now, it seems to them to be a hail tool.
– This tool could be very helpful for broadcasters, who may be working alone.
– Forecasters note that the color curve in the 10-40% range is tough to discern. It’s good for storms that are developing – but not as good for storms that have already developed.
– One forecaster notes that the colors could be problematic for color blind folks. They suggest potentially using line thickness as a way to convey probability.
– ProbSevere is good for slowly-developing storms; good for hail; poor for wind. Should the product be referred to as ProbHail? It’s not as useful in rapidly growing convection (it just verifies the warning). The 6-minute lag associated with the product makes it harder to make judgments in the case of quickly developing storms.
– Broadcaster likes it from a broadcast standpoint: it helps a broadcaster to multi-task.
– ProbSevere is good as a confirmation tool or regional situational awareness tool, and it could be helpful for updating warnings.
– The forecasters would like to see ProbSevere separated into hail, wind, and tornado probabilities.
– They can envision a new 4-panel: probability of tornado, wind, hail, and total severe.
– The cursor readout was nice, but one of the forecasters didn’t understand the glaciation rate.
– One forecaster didn’t like the cursor readout.
– Another forecaster liked to see the extra information; he suggests that the cursor readout is a matter of personal preference.
– Forecasters saw overshooting tops on visible satellite before the algorithm picked them up.
– They believe that the temporal resolution is too low.
– Different people have different uses for it. WFO forecasters like it for the big picture, but won’t interrogate.
– Every broadcaster would love it and would find it very helpful.
PGLM / Lightning Jump
– Biggest winner of the Week 4 products.
– “Everyone’s favorite” – Kathleen Pelczynski
– The lightning jump algorithm helped tremendously in warning operations. It was very helpful to have 1-min lightning jump updates while waiting for radar volume scans. These frequent updates certainly impacted warning decisions.
– Forecasters related anecdote where the lightning data helped issue warning early in an explosive environment.
– Broadcasters are very concerned with lightning; if it has lightning, they consider it severe, even without hail.
– One forecaster is still not sure about the calibration of the sigma jumps and would suggest more in-depth lightning training; many meteorologists don’t understand the dynamics (i.e., how does it work?).
– Beneficial if it works, but it takes a lot of time to use.
– It would be more valuable if you could use it in one click (not enough time for it otherwise).
– VR shear / time heights tracking might be useful as well.
– Forecasters don’t feel like the 1 pm EFP briefing was helpful: “fighting to stay awake.” They did not consider it important for what they were doing.
– Forecasters also felt that the briefing was intended for the EFP (it didn’t cover the synoptic scale much).
– Forecasters felt their time would have been better spent looking at AWIPS.
– They suggest that we start earlier than 1 pm.
– Regarding training, the broadcaster suggests that other broadcasters get a couple hours of AWIPS training. He also says it’s good to mix it up with forecasters; he found that really valuable.
Week 4 Coordinator