Live Blog – 30 April 2008 (6:23pm)

Notes on Gridded Warning Archive Case

Stream-of-consciousness notes from Mike Cammarata’s exercise:

Getting used to the software. Mike has some experience with this from earlier this week.

“Calibrated” ourselves with the MESH (Maximum Expected Size of Hail) product.

Desire to hide/show products more easily.

Would like to know when the next update is expected in the displaced real-time case.

The long list of products (mostly warning output grids) became cumbersome to deal with.

At one point, Mike realized that we were encompassing more than the *current* threat area, and made that adjustment.

----------------------------------------

Group discussion: Mike Cammarata and Patrick Marsh, warning participants (Kristin/Kevin M. pw coords.)

A more informed decision could be made with better technology and guidance tools: for example, storm-following loops and cross-sections (even automated ones).

Issuing took longer because of polygon drawing (hard to get used to the different knobology).

How did we feel about issuing probabilities? The probability values felt very arbitrary. Mike: “At what level of risk are we going to have a tornado?”

Discussion ensued about the difference between achieving GPRA (Government Performance and Results Act) goals and the current warning paradigm.

Discussion about a significant call to action. (Are probabilities the best way to convey one? For a tornado?)

Every decision maker has an individual cost-loss ratio for each decision they make.
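The classic cost-loss model behind that remark can be sketched in a few lines. This is an illustration, not something from the discussion: a user with protection cost C and potential loss L comes out ahead by acting whenever the forecast probability p exceeds C/L, so different users rationally act at different probability thresholds. The function name and numbers below are invented for the example.

```python
def should_act(p: float, cost: float, loss: float) -> bool:
    """Cost-loss decision rule: act (shelter, cancel, shut down) when the
    expected loss avoided exceeds the cost of acting, i.e. p * loss > cost,
    which is the same as p > cost / loss."""
    return p > cost / loss

# A cheap action guarding against a huge loss is worth taking at low
# probability; an expensive action needs a much higher probability.
print(should_act(0.10, cost=1_000, loss=100_000))   # threshold 0.01 -> True
print(should_act(0.10, cost=50_000, loss=100_000))  # threshold 0.50 -> False
```

This is why a single yes/no warning cannot serve every user optimally, while a calibrated probability lets each user apply their own threshold.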

Andy feels that the “public” needs to know when to be told to “duck”.

The big issue is how we can objectively calibrate forecasters, both against the verification and against each other, so that each warning gets a consistent answer.
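One common way to check that kind of calibration is a reliability tabulation: bin the issued probabilities and compare each bin’s mean forecast probability with the fraction of warnings in that bin that verified. The sketch below is a hypothetical illustration (the function, bin count, and sample data are all invented), not a tool from the experiment.

```python
from collections import defaultdict

def reliability(forecasts, outcomes, nbins=5):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if the event verified.
    Returns {bin index: (mean forecast probability, observed frequency)}.
    A well-calibrated forecaster has mean probability ~= observed frequency
    in every bin."""
    bins = defaultdict(lambda: [0.0, 0, 0])  # [sum of probs, hits, count]
    for p, o in zip(forecasts, outcomes):
        b = min(int(p * nbins), nbins - 1)   # clamp p == 1.0 into top bin
        bins[b][0] += p
        bins[b][1] += o
        bins[b][2] += 1
    return {b: (s / n, hits / n) for b, (s, hits, n) in sorted(bins.items())}

# Toy sample: eight probabilistic warnings and whether each verified.
probs    = [0.1, 0.1, 0.3, 0.3, 0.7, 0.7, 0.9, 0.9]
verified = [0,   0,   0,   1,   1,   1,   1,   1]
for b, (mean_p, freq) in reliability(probs, verified).items():
    print(f"bin {b}: mean forecast {mean_p:.2f}, observed frequency {freq:.2f}")
```

Running the same tabulation per forecaster would show whether two forecasters attach the same meaning to, say, a 40% tornado probability, which is the consistency the discussion was after.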

Kevin Manross and Greg Stumpf (Gridded Warning Cognizant Scientists)
