Improved w2dualpol

As discussed in our previous post, we streamlined the WDSS-II ORPG processing into one algorithm, w2dualpol. In addition to this streamlining, we have found two ways to make w2dualpol run faster. First, the algorithm can find a “capping tilt” and process only the tilts at or below that elevation. Second, if you are interested in only rain rates, the algorithm can determine the lowest elevation unblocked by terrain and process only the tilts at or below it.

The capping tilt is determined by reading in successive tilts of the radar until a tilt is found in which no pixel has a reflectivity greater than a user-defined threshold, set with the -m flag. The algorithm considers this tilt the cap and does not read any tilts above it; it then runs, reading in all tilts up to and including the capping tilt. This continues until either a) another capping tilt is found below the current capping tilt, or b) a pixel is found in the current capping tilt with a reflectivity greater than the threshold specified by the -m flag.

If a new capping tilt is found below the current capping tilt, the algorithm then reads only the tilts up to and including the new capping tilt. If a pixel in the current capping tilt is found to exceed the reflectivity threshold, the algorithm resumes reading all tilts until it finds another tilt in which the reflectivity does not exceed the threshold in any pixel, and that tilt is declared the new cap.

By specifying a threshold with the -m flag, you are essentially telling the algorithm that you are not interested in any echoes below this threshold. You are also assuming that if no pixel in a particular tilt exceeds the threshold, then no pixel in the tilts above it does either.
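For the curious, here is a minimal sketch of the capping-tilt search in Python (this is an illustration, not the actual w2dualpol source; it assumes the tilts arrive as a list of 2-D reflectivity arrays ordered from lowest to highest elevation):

import numpy as np

def find_capping_tilt(tilts, threshold):
    """Return the index of the first tilt in which no pixel exceeds threshold."""
    # threshold corresponds to the value given with the -m flag
    for i, refl in enumerate(tilts):
        if not np.any(refl > threshold):
            return i          # this tilt caps the volume; skip everything above it
    return len(tilts) - 1     # no cap found: every tilt must be processed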

Finally, if you are interested in only rain rates, you can set the -E flag to further reduce the time it takes this algorithm to run. Rain rates are determined by examining the pixels nearest to the ground. In a perfectly flat world, we could read in and process only the lowest tilt from the radar and greatly reduce the processing time. However, we know that our world is not perfectly flat, and that many radars have terrain blocking some of the radials at the lower tilts. For the radials blocked by terrain, we need to find the next-lowest unblocked radial. Therefore, we have devised a method to determine the lowest tilt unblocked by terrain for each radar. Once this tilt is determined, only data at or below that tilt needs to be processed.

You can specify that you would like to process only up to the lowest unblocked tilt by setting the -E flag to the lowest elevation angle scanned by your radar (0.5 in the case of the WSR-88D network). This needs to be specified so the algorithm knows at which elevation to start looking for the lowest unblocked tilt.

So, if you are interested in only rain rates from a radar in the WSR-88D network, your command would look like:

w2dualpol -i /data/KTLX/code_index.fam -o /data/KTLX -s KTLX -m 10 -E 0.5 -T /data/terrain/KTLX.nc --outputProducts=RREC

Notice that along with the -E flag, the -T flag is also set, specifying the terrain file for your radar. Additionally, the -m flag is set to 10, specifying that the capping elevation be set as the lowest elevation in which no pixels exceed 10 dBZ.

It should be noted that if you’re interested in processing some products at all elevations (say, HCA and MSIG), but the rain rates at only the lowest unblocked elevation, you will want to run two iterations of w2dualpol in order to process the data in the least amount of time.

First:

w2dualpol -i /data/KTLX/code_index.fam -o /data/KTLX -s KTLX -O 1 -m 10 -T /data/terrain/KTLX.nc --outputProducts=DHCA,MSIG

and then:

w2dualpol -i /data/KTLX/code_index.fam -o /data/KTLX -s KTLX -m 10 -E 0.5 -T /data/terrain/KTLX.nc --outputProducts=RREC

Through the implementation of the capping elevation and the lowest unblocked elevation, we are able to halve the processing time of w2dualpol. This means the processing time is now about a quarter of that of the original ports of the ORPG algorithms discussed in the previous post.


Streamlined ORPG dual-pol processing with w2dualpol

WDSS-II contains ports of several NWS Open Radar Product Generator (ORPG) dual-pol algorithms. Through the use of these ports, it is possible to do a multitude of things, including running the hydrometeor classification algorithm (HCA) and computing instantaneous rainfall rates from the dual-pol variables and the HCA. Previously, the only way to do this in WDSS-II required the use of multiple algorithms. The workflow for computing rain rates from dual-pol observations was:

w2dp_preproc → w2dp_hca → w2dp_rainrates

w2dp_preproc takes the base dual-pol outputs from ldm2netcdf, which tend to be noisy and difficult to interpret or use in algorithms, and recombines them from a 0.5 degree azimuthal resolution to a 1.0 degree azimuthal resolution. Through this recombination, the noise of the dual-pol products is reduced. Additionally, this algorithm creates quality flags for each product, which are required by the HCA algorithm.
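As a toy illustration of the recombination step (the real preprocessor is considerably more sophisticated, and also produces the quality flags mentioned above), averaging adjacent half-degree radials in Python looks roughly like this, assuming a 720-radial sweep:

import numpy as np

def recombine_azimuths(sweep):
    # sweep: (720, ngates) array of 0.5-degree radials
    # returns: (360, ngates) array of smoother 1.0-degree radials
    return np.nanmean(sweep.reshape(360, 2, -1), axis=1)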

Next, w2dp_hca reads in all of the output from w2dp_preproc and, you guessed it, runs the HCA. The hydrometeor classifications, as well as some of the output from w2dp_preproc, are then passed into w2dp_rainrates to determine instantaneous rainfall rates for each dual-pol variable and the HCA. Unfortunately, this workflow is not quite fast enough to run in real time, so we sought a way to speed it up.

This speedup is achieved by combining w2dp_preproc, w2dp_hca, and w2dp_rainrates into one algorithm: w2dualpol. With w2dualpol, you can process as much or as little data as you want. By default, w2dualpol will run all the way through the computational stream discussed above. However, you can specify where in the computational stream you want w2dualpol to stop using the -O flag:

0: Preproc – stop after the preprocessor calculations
1: HCA – stop after the HCA calculations
2: RainRates – stop after the rain rate calculations (default)

Additionally, you can specify the products you want written out with the --outputProducts flag. So, for example, if you’re only interested in the HCA, your command line would look like:

w2dualpol -i /data/KTLX/code_index.fam -o /data/KTLX -s KTLX -O 1 --outputProducts=DHCA

Or, if you’re interested in the HCA and the rain rates computed from the HCA, you can run:

w2dualpol -i /data/KTLX/code_index.fam -o /data/KTLX -s KTLX --outputProducts=DHCA,RREC

With these changes, we were able to process data in less than half the time it took to put the data through the three algorithms discussed above. We then made additional improvements to w2dualpol that sped it up even further. These improvements are discussed in our next blog post.


Coordinates in WDSS-II

There seems to be some confusion about the geolocation of WDSS-II’s grids.  First of all, most of the grids produced by WDSS-II algorithms are in Plate Carree (or equirectangular) projection for the reasons ably set forth in this cartoon.

Suppose you were to ask w2merger to make you a grid from radar data and you specify the top (northwest) corner with -t and bottom (southeast) corner with -b and spacing with -s as (35,-97), (34.97, -96.97) and (0.01,0.01) respectively.  You would then get this 3×3 grid:

The grid you get if you ask w2merger for -t "35 -97" -b "34.97 -96.97"

I have found that if you consider that all pixels occupy a definite area of the earth, the above representation becomes very logical. It is also intuitive in that there are 3 pixels between 35 degrees and 34.97 degrees at a spacing of 0.01 degrees.

In the netcdf files output by WDSS-II, you will find that the northwest corner and the grid spacing for the above grid would be encoded as (35,-97) and (0.01,0.01).

So, are the pixels in WDSS-II defined by their northwest corners? Unfortunately, no. To see why, you have to take into account that while a pixel occupies a certain area, it has only one value. Which location within the pixel does that value correspond to? The value of a bin is the average value within the region covered by that bin.

The answer to the second question leads to some tricky semantics. Before we get to those, let’s move on from the world geographic system to the projected coordinate system of the grid itself (see ArcGIS for an explanation of the difference). Because the projection in question is Plate Carree, the transformation is a simple linear one between pixel coordinates and latitude-longitude, but such a transformation exists. For this coordinate system, the (0,0) point is the center of the northwest grid point. This is needed so that we can think of a pixel’s value as being the average value within the bin if we somehow had infinite resolution. The grid’s coordinate system, to put a picture to it, is like this:

Projected coordinate system

A couple of things may warrant noting. The first coordinate (the slower-changing one) is the latitude direction and the second coordinate (the faster changing one) is the longitude one. In other words, grid values are written starting from the northwest corner in rows.  Confusingly, the first coordinate (the “vertical” one if you are staring at an image) is called the x-axis in the sparse-grid netcdf format (“pixel_x”) and is the coordinate we ask for first on all command-lines that ask for a position or length.  [Side note: This is because my background is in image processing and linear algebra where this right-handed coordinate system is common. By the time I figured out that meteorologists and computer graphics used “x” for the “horizontal” dimension, it was too late and there was too much code written with the matrix notation firmly in place.]

The two definitions above are very intuitive, and if you don’t think much about it, you will probably end up doing the right thing. But just to make sure you are thinking about this the right way, try to answer this question.  Given the grid above, what is the value at the location denoted by a star in this diagram?

To get the value at the location denoted by the star, you would interpolate between the values at the 4 pixels closest to the star. The weights of each of these values would depend on the distance between the star and the centers of the corresponding pixels.

To find the nearest neighbor to a point (lat,lon), you would start by computing floor( (nwlat-lat)/deltalat ) to get the first pixel coordinate and floor( (lon-nwlon)/deltalon ) to get the second pixel coordinate. If you wished to interpolate, you would also compute the ceil() in addition to the floor(), so that you get the four pixels in question. Then, you would compute the distance of the star from the pixel centers to get distance weights, and compute a weighted average of the four values.
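Here is that recipe as a minimal Python sketch (bilinear weights are used as one common realization of the distance weighting; the grid is indexed with the first coordinate increasing southward and the second increasing eastward, as described above):

import math

def nearest_neighbor(grid, nwlat, nwlon, dlat, dlon, lat, lon):
    i = math.floor((nwlat - lat) / dlat)    # first (latitude) pixel coordinate
    j = math.floor((lon - nwlon) / dlon)    # second (longitude) pixel coordinate
    return grid[i][j]

def interpolate(grid, nwlat, nwlon, dlat, dlon, lat, lon):
    fi = (nwlat - lat) / dlat               # fractional pixel coordinates
    fj = (lon - nwlon) / dlon
    i0, j0 = math.floor(fi), math.floor(fj)
    i1, j1 = math.ceil(fi), math.ceil(fj)
    wi, wj = fi - i0, fj - j0               # weights from distance to pixel centers
    return ((1 - wi) * (1 - wj) * grid[i0][j0] + (1 - wi) * wj * grid[i0][j1] +
            wi * (1 - wj) * grid[i1][j0] + wi * wj * grid[i1][j1])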


Radar coverage

What fraction of the US population is within x kilometers of a weather radar? We can answer this question for the lower 48 states from the w2merger caches, a grid of US population density, and digital elevation (terrain) data. The WDSS-II program is called coverageStats and I ran it like this:

coverageStats -i $HOME/.w2mergercache/_55.000_-130.000_500.000___NMQWD_0.010_0.010___33_3500_7000/ -E conus/conusterrain.nc -P conus/nap10ag.asc.gz -o `pwd`/coverage -h 0:5 --verbose

I am defining a place as covered by a weather radar if it is scanned by that radar at a height below 5 km above ground level. The population density data is in Esri Grid format from Columbia University. The digital elevation data is from the USGS (the gtopo30 dataset) and has been converted to netcdf using the WDSS-II tool topoBreak. Radar coverage information comes from the MRMS CONUS 1 km resolution cache (created using createCache).

Here’s what the result looks like:

70% of the US population is covered by a weather radar that is less than 100 km away. A little less than 20% of the US population is not covered, or is covered only by a radar beam that is at a height of more than 5 km. Note that these numbers take beam blockage into account.

Here’s a map of what area is covered at what height.

One thing to realize is that, because I started with the MRMS cache, parts of Canada are included in these statistics.

If you want to try out different assumptions (What if I drop radar X? Do not consider Department of Defense radars? Use a height of 3 km instead of 5 km? etc.), feel free to run coverageStats yourself.


ldm2netcdf now handles SAILS correctly

The implementation of the Supplemental Adaptive Intra-Volume Low-Level Scan (SAILS) on the 88D radars presented a problem for the WDSS-II ingestor ldm2netcdf, because it relied on VCP definitions stored in XML configuration files. Those XML files defined which elevation matched up with each tilt. However, SAILS can insert a supplemental 0.5 degree scan into the existing VCP at any time, and without changes, ldm2netcdf would incorrectly label that new 0.5 degree tilt as the next tilt expected from its VCP XML file.

To solve this problem, ldm2netcdf now processes Message 5 in the Level-II data stream (the RDA’s Volume Coverage Pattern data) to map each incoming tilt to the correct elevation. The new 0.5 degree elevations get correctly labeled and saved just like any other 0.5 degree tilt.

Algorithms listening to 0.5 elevations will be notified of these new tilts just like normal.  Algorithms that listen to all tilts will insert them into the constantly updating virtual volume as the latest 0.5-degree tilt of data for that elevation. So, with the change to ldm2netcdf, downstream algorithms such as w2qcnndp, w2vil, w2merger, etc. deal with the SAILS tilt transparently.

If you do not want the SAILS elevations to be inserted into the data stream, you can specify the ‘-e’ option on the command line of ldm2netcdf to separate out the extra SAILS tilts. The SAILS tilts will then be saved into a separate directory, such as Reflectivity_SAILS or AliasedVelocity_SAILS. We do not recommend this, as you are essentially throwing away the extra information.
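Should you want to separate them anyway, a hypothetical invocation would look something like this (only the -e option comes from this post; the -i, -o, and -s arguments are assumed to follow the same conventions as the other tools shown on this blog):

ldm2netcdf -i /data/ldm/KTLX -o /data/KTLX -s KTLX -e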

Finally, we took this opportunity to eliminate some outdated command line options in ldm2netcdf. First is the ‘-D’ option for dealiasing: the dealiasing code in ldm2netcdf is very old, and the dealias2d command provides much better results. Second is the ‘-c’ option for compositing, since w2vil does a much better job of creating composites.

The new changes are being tested and will be rolled out when all the kinks are worked out.

Reading in HRRR files

There are many occasions in which it is useful to read model data into WDSS-II. Often, this data comes from the RUC/RAP model, and gribToNetcdf makes it very simple to get this data into a format readable by most of the other WDSS-II algorithms. However, if you are interested in using data from the higher-resolution HRRR model, reading in the data is not quite so straightforward.

In order to get HRRR data into a format readable by WDSS-II, you must first create a configuration file specifying which meteorological variables you are interested in. The reason for the configuration file is simply to save time and space: each HRRR file contains over 100 variables, and if you’re interested in only a few of them, why waste valuable processing time and space reading in data you are not interested in?

While this all may seem rather cumbersome at first, it’s actually not too difficult. Just follow the steps below, and you will be working with HRRR data in no time.

  1. First of all, you need to make sure you have two environment variables set correctly. JAVA_HOME needs to be set to the location of your Java installation, and WDSSII_INSTALL_DIR needs to be set to the wdssiijava directory.
  2. Copy the file wdssiijava/example/griddataingest.xml into the current directory, and rename it hrrr.xml.
  3. Edit hrrr.xml. The inputDir and outputDir variables must be changed to match where your data is. The filenamePatterns variable must also be changed to match something in your input files (e.g., if all of your files are named 20130106_****, you could set filenamePatterns equal to “2013”). Finally, the listVariables variable needs to be uncommented and set to “true”. (A sketch of these entries appears after this list.)
  4. Run: ./w2java.sh org.wdssii.ncingest.GridDatasetIngest ./hrrr.xml. This will create your configuration file, varList.xml. This file will contain all of the meteorological variables in your HRRR data.
  5. Edit varList.xml. For each variable you are interested in utilizing, change its entry from “false” to “true”.
  6. Edit hrrr.xml. Set listVariables back to “false”.
  7. Run ./w2java.sh org.wdssii.ncingest.GridDatasetIngest ./hrrr.xml once more, and voila, your netcdf HRRR files will be waiting for you in the directory that you specified.
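To make steps 3 and 6 concrete, the relevant entries in hrrr.xml might look something like the sketch below. The variable names come from the steps above, but the exact element layout here is a guess; treat the copied griddataingest.xml as the authoritative template.

<inputDir>/data/hrrr/grib2</inputDir>
<outputDir>/data/hrrr/netcdf</outputDir>
<filenamePatterns>2013</filenamePatterns>
<listVariables>true</listVariables>  <!-- step 3; set back to false in step 6 -->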

For future processing, you will simply need to change the inputDir and outputDir variables in hrrr.xml. If you are interested in processing different or additional variables, simply change those variables from “false” to “true” in varList.xml.

In order to use wdssiijava, you need to have Java 7 or higher installed and in your PATH.

VCP dependence removed from WDSS-II

In the past, in order to create a volumetric product in WDSS-II, it was required that the VCP used by the radar be known. This was not a problem for users working with data from the WSR-88D network, but for those utilizing data from outside of that network, a few extra steps were required, including the creation of a “fake” VCP file that contained the levels at which the radar had scanned.

However, the WSR-88D network recently adopted two new concepts in its scanning strategies. The Automated Volume Scan Evaluation and Termination (AVSET) concept allows a site to skip its higher-elevation scans when no storms are detected. The Supplemental Adaptive Intra-Volume Low-Level Scan (SAILS) concept gives radars in the 88D network the capability to insert a supplemental 0.5 degree scan at any time.

While AVSET and SAILS have many advantages, the combination of these concepts has made using the VCP of a radar to help build volumetric products unreliable. Therefore, rather than depending on the VCP to build virtual volumes, we have taken the VCP dependence out of all of our products. This means that when working with data from outside the WSR-88D network, including data from outside the US, users no longer need to create these “fake” VCP files, nor does the VCP need to be defined in the data. Users simply need to be sure that an appropriate expiry time for each scan is specified (using the ExpiryInterval attribute in the netcdf files) to ensure that old data ages off in a timely fashion.
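If you are generating such netcdf tilts yourself, a minimal Python sketch for stamping the attribute might look like this (it assumes ExpiryInterval is a global attribute holding the scan’s validity period in seconds; check files written by WDSS-II itself for the exact type and units):

from netCDF4 import Dataset

# Hypothetical tilt file; mark it to age off after 10 minutes.
with Dataset("Reflectivity_20130106-000000.netcdf", "a") as nc:
    nc.setncattr("ExpiryInterval", 600)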

Algorithms affected include:  w2vil and w2circ.

How to get the -t and -b given a radar name

Lots of WDSS-II algorithms (w2merger, w2pngconvert, etc.) ask for a top-left and bottom-right corner.  Perhaps you are processing single radar data in a script and want to automatically determine what to specify for -t and -b …

Here’s a sequence of UNIX commands that you can use to obtain the -t and -b given a radar name:

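# Grab the line that follows the radar's entry in radarinfo.xml (it holds the
# latitude and longitude), strip the XML punctuation, and pad by 4 degrees: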
TOP=`grep -A 2 $RADAR ~/WDSS2/w2config/misc/radarinfo.xml | head -2 | tail -1 | sed 's/[=\"]/ /g' | awk '{print $3+4,$5-4}'`
BOT=`grep -A 2 $RADAR ~/WDSS2/w2config/misc/radarinfo.xml | head -2 | tail -1 | sed 's/[=\"]/ /g' | awk '{print $3-4,$5+4}'`

Try it with your favorite radar.  I have it in bash syntax, but obviously, you can put this in pretty much any script.

The -4 and +4 at the end indicate that we are going 4 decimal degrees (approximately 400 km) from the radar center. Obviously, you can/should change that depending on your needs.
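For example, with RADAR=KTLX you could feed the corners straight into a w2merger call (the -t, -b, and -s conventions are described in the Coordinates post above; the -i and -o values here are just placeholders):

w2merger -i /data/KTLX/code_index.fam -o /data/grids -t "$TOP" -b "$BOT" -s "0.01 0.01"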


Cleaner, crisper rotation tracks

We are really happy when public safety agencies use the imagery from http://ondemand.nssl.noaa.gov/ to show the impact of the recent tornadoes in Illinois.  Hey, that’s our stuff, we want to shout. It reminds us of why we do what we do.

But there’s a lot of noise on those accumulation products, noise that can be removed by the use of Multiple Hypothesis Tracking (MHT).  We couldn’t do MHT on-demand or in real-time because it is so slow, but just a few weeks ago, we figured out a way to do it faster with not much of a tradeoff in noise removal.

We repeated the analysis of the Illinois outbreak using the faster method and boy, is it faster! We can process an hour of data in 5 minutes!  (Here are the cleaned-up rotation tracks for the Illinois tornadoes. It is a KML file, so view it in Google Earth.) MHT will be implemented on the ondemand website in a few days.  So, the next time you see folks sharing Rotation Tracks images, they will be cleaner and crisper.

Rotation Tracks today
Rotation Tracks with MHT (very, very slow)
Optimized MHT
