Snow Case (23 Jan 2016)

Case overview

Reason for interest: Very strong, high-impact snow storm

This was a classic set-up for a major winter storm that impacted the mid-Atlantic region of the United States and was, in general, well forecast several days in advance by large-scale prediction models. The system developed near the Gulf Coast, with Canadian air already in place over the mid-Atlantic and Appalachian regions. The system strengthened rapidly as it moved slowly up the coast, producing significant amounts of snow, sleet, and freezing rain. Maximum snowfall amounts of 30–42 inches (76–107 cm) occurred in the mountains near the VA/WV/MD border.

Surface analysis with radar reflectivity:

Regional snowfall amounts:


Set Up Environment

To run the snow storm case, first set up the environment for this case study.

If you have not already done so, navigate to the top-level experiment directory (where you have downloaded the container-dtc-nwp directory) and set the environment variables PROJ_DIR and PROJ_VERSION.

For tcsh:
cd /home/ec2-user
setenv PROJ_DIR `pwd`
setenv PROJ_VERSION 4.1.0

For bash:
cd /home/ec2-user
export PROJ_DIR=`pwd`
export PROJ_VERSION="4.1.0"
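
As a quick sanity check (optional, and not part of the original workflow), you can confirm that both variables are set before continuing:

# Optional check: both variables should print non-empty values
echo "PROJ_DIR=${PROJ_DIR}"
echo "PROJ_VERSION=${PROJ_VERSION}"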

Then, you should set up the variables and directories for the snow storm case:

For tcsh:
setenv CASE_DIR ${PROJ_DIR}/snow

For bash:
export CASE_DIR=${PROJ_DIR}/snow

Then, in either shell:
mkdir -p ${CASE_DIR}
cd ${CASE_DIR}
mkdir -p wpsprd wrfprd gsiprd postprd pythonprd metprd metviewer/mysql

Extra step for singularity users

Users of the singularity containerization software will need to set the TMPDIR variable, which controls where singularity writes temporary files at runtime:

For tcsh:
setenv TMPDIR ${PROJ_DIR}/snow/tmp

For bash:
export TMPDIR=${PROJ_DIR}/snow/tmp

Then, in either shell:
mkdir -p ${TMPDIR}

Run NWP Initialization Components

The NWP workflow begins by creating the initial and boundary conditions for running the WRF model. This will be done in two steps using the WPS (geogrid.exe, ungrib.exe, metgrid.exe) and WRF (real.exe) programs.

Initialization Data

Global Forecast System (GFS) forecast files, initialized at 00 UTC on 20160123 (23 January 2016) and extending out to 24 hours in 3-hr increments, are provided for this case.
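
If you want to confirm the input data are in place before running, a simple listing of the data directory downloaded earlier should show the GFS files along with the other case datasets (the exact subdirectory layout depends on how the case data were unpacked):

# Optional check: case input data downloaded earlier (layout may vary)
ls ${PROJ_DIR}/data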

Model Domain

The WRF domain we have selected covers the contiguous United States. The exact domain is shown below:

SELECT THE APPROPRIATE CONTAINER INSTRUCTIONS FOR YOUR SYSTEM BELOW:

Step One (Optional): Run Python to Create Image of Domain

A Python script has been provided to plot the computational domain that is being run for this case. If desired, run the dtcenter/python container to execute Python in docker-space using the namelist.wps in the local scripts directory, mapping the output into the local pythonprd directory.

docker run --rm -it -e LOCAL_USER_ID=`id -u $USER` \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-v ${PROJ_DIR}/data/shapefiles:/home/data/shapefiles \
-v ${CASE_DIR}/pythonprd:/home/pythonprd \
--name run-snow-python dtcenter/python:${PROJ_VERSION} \
/home/scripts/common/run_python_domain.ksh

A successful completion of the Python plotting script will result in the following file in the pythonprd directory. This is the same domain image shown at the top of this page.

ls ${CASE_DIR}/pythonprd/
WRF_domain.png

Step Two: Run WPS

Note that the commands below run the same container twice, calling a different script each time: first to run WPS (geogrid.exe, ungrib.exe, metgrid.exe) and then to run real.exe.

Using the previously downloaded data (in ${PROJ_DIR}/data), while pointing to the namelists in the local scripts directory, run the dtcenter/wps_wrf container to run WPS in docker-space and map the output into the local wpsprd directory.

docker run --rm -it -e LOCAL_USER_ID=`id -u $USER` \
-v ${PROJ_DIR}/data/WPS_GEOG:/data/WPS_GEOG -v ${PROJ_DIR}/data:/data \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-v ${CASE_DIR}/wrfprd:/home/wrfprd -v ${CASE_DIR}/wpsprd:/home/wpsprd \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
--name run-dtc-nwp-snow dtcenter/wps_wrf:${PROJ_VERSION} /home/scripts/common/run_wps.ksh

Once WPS begins running, you can watch the log files being generated in another window by setting the ${CASE_DIR} environment variable and tailing the log files:

tail -f ${CASE_DIR}/wpsprd/run_*.log

Type CTRL-C to exit the tail utility.

A successful completion of the WPS steps will result in the following files (in addition to other files) in the wpsprd directory:

ls ${CASE_DIR}/wpsprd/
geo_em.d01.nc
FILE:2016-01-23_00
FILE:2016-01-23_03
FILE:2016-01-23_06
FILE:2016-01-23_09
FILE:2016-01-23_12
FILE:2016-01-23_15
FILE:2016-01-23_18
FILE:2016-01-23_21
FILE:2016-01-24_00
met_em.d01.2016-01-23_00:00:00.nc
met_em.d01.2016-01-23_03:00:00.nc
met_em.d01.2016-01-23_06:00:00.nc
met_em.d01.2016-01-23_09:00:00.nc
met_em.d01.2016-01-23_12:00:00.nc
met_em.d01.2016-01-23_15:00:00.nc
met_em.d01.2016-01-23_18:00:00.nc
met_em.d01.2016-01-23_21:00:00.nc
met_em.d01.2016-01-24_00:00:00.nc
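
As an additional optional check, each of the nine 3-hourly times from 2016-01-23_00 through 2016-01-24_00 should have a corresponding metgrid output file:

# Optional check: expect 9 met_em files for the 0-24 h, 3-hourly period
ls ${CASE_DIR}/wpsprd/met_em.d01.*.nc | wc -l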

Step Three: Run real.exe

Using the previously downloaded data (in ${PROJ_DIR}/data) and the output from WPS in Step Two, while pointing to the namelists in the local scripts directory, run the dtcenter/wps_wrf container, this time to run real.exe in docker-space, and map the output into the local wrfprd directory.

docker run --rm -it -e LOCAL_USER_ID=`id -u $USER` \
-v ${PROJ_DIR}/data:/data -v ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-v ${CASE_DIR}/wrfprd:/home/wrfprd -v ${CASE_DIR}/wpsprd:/home/wpsprd \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
--name run-dtc-nwp-snow dtcenter/wps_wrf:${PROJ_VERSION} /home/scripts/common/run_real.ksh

The real.exe program should take less than a minute to run, but you can follow its progress as well in the wrfprd directory:

tail -f ${CASE_DIR}/wrfprd/rsl.out.0000

Type CTRL-C to exit the tail utility.
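
When real.exe finishes, the initial and lateral boundary condition files it creates (wrfinput_d01 and wrfbdy_d01) should be present in the wrfprd directory (assuming the run script leaves them in the mounted directory), and the end of the log should report a successful completion. A quick optional check:

# Optional check: real.exe output files and the tail of its log
ls ${CASE_DIR}/wrfprd/wrfinput_d01 ${CASE_DIR}/wrfprd/wrfbdy_d01
tail -n 5 ${CASE_DIR}/wrfprd/rsl.out.0000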

Note that the commands below run the same container twice, calling a different script each time: first to run WPS (geogrid.exe, ungrib.exe, metgrid.exe) and then to run real.exe.

Step One (Optional): Run Python to Create Image of Domain

A Python script has been provided to plot the computational domain that is being run for this case. If desired, run the dtcenter/python container to execute Python in singularity-space using the namelist.wps in the local scripts directory, mapping the output into the local pythonprd directory.

singularity exec -B ${PROJ_DIR}/data/shapefiles:/home/data/shapefiles \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-B ${CASE_DIR}/pythonprd:/home/pythonprd \
../python_${PROJ_VERSION}.sif /home/scripts/common/run_python_domain.ksh

A successful completion of the Python plotting script will result in the following file in the pythonprd directory. This is the same domain image shown at the top of this page.

ls ${CASE_DIR}/pythonprd/
WRF_domain.png

Step Two: Run WPS

Using the previously downloaded data (in ${PROJ_DIR}/data), while pointing to the namelists in the local scripts directory, run the dtcenter/wps_wrf container to run WPS in singularity-space and map the output into the local wpsprd directory. 

singularity exec -B ${PROJ_DIR}/data/WPS_GEOG:/data/WPS_GEOG -B ${PROJ_DIR}/data:/data \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-B ${CASE_DIR}/wpsprd:/home/wpsprd -B ${CASE_DIR}/wrfprd:/home/wrfprd \
../wps_wrf_${PROJ_VERSION}_for_singularity.sif /home/scripts/common/run_wps.ksh

Note: If you are running on NCAR's Cheyenne you will need to modify the singularity exec command above to include a bind mount for "-B /glade:/glade" as well.

Once WPS begins running, you can watch the log files being generated in another window by setting the ${CASE_DIR} environment variable and tailing the log files:

tail -f ${CASE_DIR}/wpsprd/run_*.log

Type CTRL-C to exit the tail utility.

Step Three: Run real.exe

Using the previously downloaded data (in ${PROJ_DIR}/data) and the output from WPS in Step Two, while pointing to the namelists in the local scripts directory, run the dtcenter/wps_wrf container, this time to run real.exe in singularity-space, and map the output into the local wrfprd directory.

singularity exec -B ${PROJ_DIR}/data:/data \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-B ${CASE_DIR}/wpsprd:/home/wpsprd -B ${CASE_DIR}/wrfprd:/home/wrfprd \
../wps_wrf_${PROJ_VERSION}_for_singularity.sif /home/scripts/common/run_real.ksh

Note: If you are running on NCAR's Cheyenne you will need to modify the singularity exec command above to include a bind mount for "-B /glade:/glade" as well.

The real.exe program should take less than a minute to run, but you can follow its progress as well in the wrfprd directory:

tail -f ${CASE_DIR}/wrfprd/rsl.out.0000

Type CTRL-C to exit the tail utility.

A successful completion of the WPS steps will result in the following files (in addition to other files) in the wpsprd directory:

ls ${CASE_DIR}/wpsprd/
geo_em.d01.nc
FILE:2016-01-23_00
FILE:2016-01-23_03
FILE:2016-01-23_06
FILE:2016-01-23_09
FILE:2016-01-23_12
FILE:2016-01-23_15
FILE:2016-01-23_18
FILE:2016-01-23_21
FILE:2016-01-24_00
met_em.d01.2016-01-23_00:00:00.nc
met_em.d01.2016-01-23_03:00:00.nc
met_em.d01.2016-01-23_06:00:00.nc
met_em.d01.2016-01-23_09:00:00.nc
met_em.d01.2016-01-23_12:00:00.nc
met_em.d01.2016-01-23_15:00:00.nc
met_em.d01.2016-01-23_18:00:00.nc
met_em.d01.2016-01-23_21:00:00.nc
met_em.d01.2016-01-24_00:00:00.nc

Run Data Assimilation

The next step in the NWP workflow is to run GSI data assimilation to improve the initial conditions for the WRF model run. GSI (gsi.exe) updates the wrfinput file created by real.exe.

SELECT THE APPROPRIATE CONTAINER INSTRUCTIONS FOR YOUR SYSTEM BELOW:

Using the previously downloaded data (in ${PROJ_DIR}/data), while pointing to the namelist in the local scripts directory, run the dtcenter/gsi container to run GSI in docker-space and map the output into the local gsiprd directory:

docker run --rm -it -e LOCAL_USER_ID=`id -u $USER` -v ${PROJ_DIR}/data:/data \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-v ${CASE_DIR}/wrfprd:/home/wrfprd -v ${CASE_DIR}/wpsprd:/home/wpsprd -v ${CASE_DIR}/gsiprd:/home/gsiprd \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
--name run-dtc-gsi-snow dtcenter/gsi:${PROJ_VERSION} /home/scripts/common/run_gsi.ksh

As GSI runs, the output files will appear in the local gsiprd/ directory. Please review the contents of that directory to interrogate the data.

Once GSI begins running, you can watch the log file being generated in another window by setting the ${CASE_DIR} environment variable and tailing the log file:

tail -f ${CASE_DIR}/gsiprd/stdout

Type CTRL-C to exit the tail.

A successful completion of the GSI step will result in the following files (in addition to other files) in the gsiprd directory:

ls ${CASE_DIR}/gsiprd/
anavinfo
berror_stats
diag_*
fit_*
fort*
gsiparm.anl
*info
list_run_directory
prepbufr
satbias*
stdout*
wrf_inout
wrfanl.2016012300
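
If the netCDF utilities happen to be installed on your host (an assumption; ncdump is not required by the workflow), you can peek at the header of the updated analysis file to confirm it was written:

# Optional check: inspect the GSI analysis file header (requires ncdump)
ncdump -h ${CASE_DIR}/gsiprd/wrf_inout | head -n 20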

Using the previously downloaded data (in ${PROJ_DIR}/data), while pointing to the namelist in the local scripts directory, create a container using the gsi image to run GSI in singularity-space and map the output into the local gsiprd directory:

singularity exec -B ${PROJ_DIR}/data:/data \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-B ${CASE_DIR}/gsiprd:/home/gsiprd -B ${CASE_DIR}/wrfprd:/home/wrfprd \
../gsi_${PROJ_VERSION}.sif /home/scripts/common/run_gsi.ksh

Note: If you are running on NCAR's Cheyenne you will need to modify the singularity exec command above to include a bind mount for "-B /glade:/glade" as well.

As GSI runs, the output files will appear in the local gsiprd/ directory. Please review the contents of that directory to interrogate the data.

Once GSI begins running, you can watch the log file being generated in another window by setting the ${CASE_DIR} environment variable and tailing the log file:

tail -f ${CASE_DIR}/gsiprd/stdout

Type CTRL-C to exit the tail.

A successful completion of the GSI step will result in the following files (in addition to other files) in the gsiprd directory:

ls ${CASE_DIR}/gsiprd/
anavinfo
berror_stats
diag_*
fit_*
fort*
gsiparm.anl
*info
list_run_directory
prepbufr
satbias*
stdout*
wrf_inout
wrfanl.2016012300



Run NWP Model

To integrate the WRF forecast model through time, we use the wrf.exe program and point to the initial and boundary condition files created in the previous initialization, and optional data assimilation, step(s).

SELECT THE APPROPRIATE CONTAINER INSTRUCTIONS FOR YOUR SYSTEM BELOW:

Using the previously downloaded data (in ${PROJ_DIR}/data), while pointing to the namelists in the local scripts directory, run the dtcenter/wps_wrf container to run WRF in docker-space and map the output into the local wrfprd directory.

Note: Read the following two options carefully and decide which one is right for you to run; you DO NOT need to run both. Option One runs with 4 processors by default, and Option Two allows for a user-specified number of processors using the "-np #" option.

Option One: Default number (4) of processors

By default WRF will run with 4 processors using the following command: 

docker run --rm -it -e LOCAL_USER_ID=`id -u $USER` \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-v ${CASE_DIR}/wrfprd:/home/wrfprd -v ${CASE_DIR}/wpsprd:/home/wpsprd \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
--name run-dtc-nwp-snow dtcenter/wps_wrf:${PROJ_VERSION} /home/scripts/common/run_wrf.ksh

Option Two: User-specified number of processors

If you run into trouble on your machine when using 4 processors, you may want to run with fewer (or more!) processors by passing the "-np #" option to the script. For example the following command runs with 2 processors:

docker run --rm -it -e LOCAL_USER_ID=`id -u $USER` \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-v ${CASE_DIR}/wrfprd:/home/wrfprd -v ${CASE_DIR}/wpsprd:/home/wpsprd \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
--name run-dtc-nwp-snow dtcenter/wps_wrf:${PROJ_VERSION} /home/scripts/common/run_wrf.ksh -np 2

As WRF runs, the NetCDF output files will appear in the local wrfprd/ directory. Please review the contents of that directory to interrogate the data.

Once WRF begins running, you can watch the log file being generated in another window by setting the ${CASE_DIR} environment variable and tailing the log file:

tail -f ${CASE_DIR}/wrfprd/rsl.out.0000

Type CTRL-C to exit the tail.

A successful completion of the WRF step will result in the following files (in addition to other files) in the wrfprd directory:

ls ${CASE_DIR}/wrfprd/wrfout*
wrfout_d01_2016-01-23_00_00_00.nc
wrfout_d01_2016-01-23_01_00_00.nc
wrfout_d01_2016-01-23_02_00_00.nc
wrfout_d01_2016-01-23_03_00_00.nc
...
wrfout_d01_2016-01-23_22_00_00.nc
wrfout_d01_2016-01-23_23_00_00.nc
wrfout_d01_2016-01-24_00_00_00.nc
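
A simple optional way to confirm the forecast ran to completion is to count the hourly output files (25 for a 24-h forecast with hourly output, f00 through f24) and look at the end of the log:

# Optional check: expect 25 wrfout files and a clean finish in rsl.out.0000
ls ${CASE_DIR}/wrfprd/wrfout_d01_* | wc -l
tail -n 3 ${CASE_DIR}/wrfprd/rsl.out.0000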

Using the previously downloaded data in ${PROJ_DIR}/data, while pointing to the namelists in the local scripts directory, create a container from the wps_wrf image to run WRF in singularity-space and map the output into the local wrfprd directory.

Note: Read the following two options carefully and decide which one is right for you to run; you DO NOT need to run both. Option One runs with 4 processors by default, and Option Two allows for a user-specified number of processors using the "-np #" option.

Option One: Default number (4) of processors

By default WRF will run with 4 processors using the following command: 

singularity exec -B ${PROJ_DIR}/data:/data \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-B ${CASE_DIR}/wpsprd:/home/wpsprd -B ${CASE_DIR}/wrfprd:/home/wrfprd \
../wps_wrf_${PROJ_VERSION}_for_singularity.sif /home/scripts/common/run_wrf.ksh

Note: If you are running on NCAR's Cheyenne you will need to modify the singularity exec command above to include a bind mount for "-B /glade:/glade" as well.

Option Two: User-specified number of processors

If you run into trouble on your machine when using 4 processors, you may want to run with fewer (or more!) processors by passing the "-np #" option to the script. For example the following command runs with 2 processors:

singularity exec -B ${PROJ_DIR}/data:/data \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-B ${CASE_DIR}/wpsprd:/home/wpsprd -B ${CASE_DIR}/wrfprd:/home/wrfprd \
../wps_wrf_${PROJ_VERSION}_for_singularity.sif /home/scripts/common/run_wrf.ksh -np 2

Note: If you are running on NCAR's Cheyenne you will need to modify the singularity exec command above to include a bind mount for "-B /glade:/glade" as well.

As WRF runs, the NetCDF output files will appear in the local wrfprd/ directory. Please review the contents of that directory to interrogate the data.

Once WRF begins running, you can watch the log file being generated in another window by setting the ${CASE_DIR} environment variable and tailing the log file:

tail -f ${CASE_DIR}/wrfprd/rsl.out.0000

Type CTRL-C to exit the tail.

A successful completion of the WRF step will result in the following files (in addition to other files) in the wrfprd directory:

ls ${CASE_DIR}/wrfprd/wrfout*
wrfout_d01_2016-01-23_00_00_00.nc
wrfout_d01_2016-01-23_01_00_00.nc
wrfout_d01_2016-01-23_02_00_00.nc
wrfout_d01_2016-01-23_03_00_00.nc
...
wrfout_d01_2016-01-23_22_00_00.nc
wrfout_d01_2016-01-23_23_00_00.nc
wrfout_d01_2016-01-24_00_00_00.nc

Postprocess NWP Data

After the WRF model is run, the output is run through the Unified Post Processor (UPP) to interpolate model output to new vertical coordinates (e.g., pressure levels) and compute a number of diagnostic variables that are output in GRIB2 format.

SELECT THE APPROPRIATE CONTAINER INSTRUCTIONS FOR YOUR SYSTEM BELOW:

Using the previously created netCDF wrfout files in the wrfprd directory, while pointing to the namelist in the local scripts directory, run the dtcenter/upp container to run UPP in docker-space to post-process the WRF data into grib2 format, and map the output into the local postprd directory:

docker run --rm -it -e LOCAL_USER_ID=`id -u $USER` \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-v ${CASE_DIR}/wrfprd:/home/wrfprd -v ${CASE_DIR}/postprd:/home/postprd \
--name run-snow-upp dtcenter/upp:${PROJ_VERSION} /home/scripts/common/run_upp.ksh

As UPP runs, the post-processed GRIB output files will appear in the local postprd/ directory. Please review the contents of that directory to interrogate the data.

UPP runs quickly for each forecast hour, but you can see the log files generated in another window by setting the ${CASE_DIR} environment variable and tailing the log files:

tail -f ${CASE_DIR}/postprd/unipost*.out

Type CTRL-C to exit the tail.

A successful completion of the UPP step will result in the following files (in addition to other files) in the postprd directory:

ls ${CASE_DIR}/postprd/wrfprs*
wrfprs_d01.00
wrfprs_d01.01
wrfprs_d01.02
wrfprs_d01.03
...
wrfprs_d01.23
wrfprs_d01.24
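
If the wgrib2 utility happens to be installed on your host (an assumption; it is not required by the workflow), you can inventory one of the GRIB2 files to confirm the pressure-level fields were written:

# Optional check: list the first few GRIB2 records of the final forecast hour (requires wgrib2)
wgrib2 ${CASE_DIR}/postprd/wrfprs_d01.24 | head -n 10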

Using the previously created netCDF wrfout data in the wrfprd directory, while pointing to the namelists in the local scripts directory, create a container using the upp image to run UPP in singularity-space and map the output into the local postprd directory:

singularity exec -B ${PROJ_DIR}/data:/data \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-B ${CASE_DIR}/postprd:/home/postprd -B ${CASE_DIR}/wrfprd:/home/wrfprd \
../upp_${PROJ_VERSION}.sif /home/scripts/common/run_upp.ksh

Note: If you are running on NCAR's Cheyenne you will need to modify the singularity exec command above to include a bind mount for "-B /glade:/glade" as well.

As UPP runs, the post-processed GRIB output files will appear in the local postprd/ directory. Please review the contents of that directory to interrogate the data.

UPP runs quickly for each forecast hour, but you can see the log files generated in another window by setting the ${CASE_DIR} environment variable and tailing the log file:

tail -f ${CASE_DIR}/postprd/unipost*.out

Type CTRL-C to exit the tail.

A successful completion of the UPP step will result in the following files (in addition to other files) in the postprd directory:

ls ${CASE_DIR}/postprd/wrfprs*
wrfprs_d01.00
wrfprs_d01.01
wrfprs_d01.02
wrfprs_d01.03
...
wrfprs_d01.23
wrfprs_d01.24

Create Graphics

After the model output is post-processed with UPP, the forecast fields can be visualized using Python. The plotting capabilities include generating graphics for near-surface and upper-air variables as well as accumulated precipitation, reflectivity, helicity, and CAPE.

SELECT THE APPROPRIATE CONTAINER INSTRUCTIONS FOR YOUR SYSTEM BELOW:

Pointing to the scripts in the local scripts directory, run the dtcenter/python container to create graphics in docker-space and map the images into the local pythonprd directory:

docker run --rm -it -e LOCAL_USER_ID=`id -u $USER` \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-v ${PROJ_DIR}/data/shapefiles:/home/data/shapefiles \
-v ${CASE_DIR}/postprd:/home/postprd -v ${CASE_DIR}/pythonprd:/home/pythonprd \
--name run-snow-python dtcenter/python:${PROJ_VERSION} /home/scripts/common/run_python.ksh

After Python has been run, the image output files (.png) will appear in the local pythonprd/ directory.

ls ${CASE_DIR}/pythonprd/*.png
10mwind_d01_f*.png
250wind_d01_f*.png
2mdew_d01_f*.png
2mt_d01_f*.png
500_d01_f*.png
maxuh25_d01_f*.png
qpf_d01_f*.png
refc_d01_f*.png
sfcape_d01_f*.png
slp_d01_f*.png

The images may be visualized using your favorite display tool.
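
For example, if ImageMagick is available on your system (an assumption), you could page through the composite reflectivity images or combine them into a simple animation; the output GIF name below is arbitrary:

# Optional examples (require ImageMagick): view the images or build an animated GIF
display ${CASE_DIR}/pythonprd/refc_d01_f*.png
convert -delay 50 ${CASE_DIR}/pythonprd/refc_d01_f*.png ${CASE_DIR}/pythonprd/refc_loop.gif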

Pointing to the scripts in the local scripts directory, create a container using the python singularity image to create graphics in singularity-space and map the images into the local pythonprd directory:

singularity exec -B ${PROJ_DIR}/data/shapefiles:/home/data/shapefiles \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-B ${CASE_DIR}/postprd:/home/postprd -B ${CASE_DIR}/pythonprd:/home/pythonprd \
../python_${PROJ_VERSION}.sif /home/scripts/common/run_python.ksh

Note: If you are running on NCAR's Cheyenne you will need to modify the singularity exec command above to include a bind mount for "-B /glade:/glade" as well.

After Python has been run, the image output files (.png) will appear in the local pythonprd/ directory.

ls ${CASE_DIR}/pythonprd/*.png
10mwind_d01_f*.png
250wind_d01_f*.png
2mdew_d01_f*.png
2mt_d01_f*.png
500_d01_f*.png
maxuh25_d01_f*.png
qpf_d01_f*.png
refc_d01_f*.png
sfcape_d01_f*.png
slp_d01_f*.png

The images may be visualized using your favorite display tool.


Run Verification Software

After the model output is post-processed with UPP, it is run through the Model Evaluation Tools (MET) software to quantify its performance relative to observations. State variables, including temperature, dewpoint, and wind, are verified against both surface and upper-air point observations, while precipitation is verified against a gridded analysis.

SELECT THE APPROPRIATE CONTAINER INSTRUCTIONS FOR YOUR SYSTEM BELOW:

Using the previously downloaded data (in ${PROJ_DIR}/data), while pointing to the output in the local scripts and postprd directories, run the dtcenter/met container to run the verification software in docker-space and map the statistical output into the local metprd directory:

docker run --rm -it -e LOCAL_USER_ID=`id -u $USER` \
-v ${PROJ_DIR}/data:/data \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-v ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-v ${CASE_DIR}/postprd:/home/postprd -v ${CASE_DIR}/metprd:/home/metprd \
--name run-snow-met dtcenter/nwp-container-met:${PROJ_VERSION} /home/scripts/common/run_met.ksh

MET will write a variety of ASCII and netCDF output files to the local metprd/ directory. Please review the contents of its subdirectories (grid_stat, pb2nc, pcp_combine, and point_stat) to interrogate the data.

ls ${CASE_DIR}/metprd/*
grid_stat/grid_stat*.nc
grid_stat/grid_stat*.stat
pb2nc/prepbufr*.nc
pcp_combine/ST2*.nc
pcp_combine/wrfprs*.nc
point_stat/point_stat*.stat
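
The .stat files are whitespace-delimited ASCII with a header row, so a quick optional way to glance at the surface verification output is:

# Optional check: header line plus the first few rows of the point_stat output
head -n 5 ${CASE_DIR}/metprd/point_stat/point_stat*.stat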

Using the previously downloaded data (in ${PROJ_DIR}/data), while pointing to the output in the local scripts and postprd directories, create a container using the nwp-container-met image to run the verification software in singularity-space and map the statistical output into the local metprd directory:

singularity exec -B ${PROJ_DIR}/data:/data \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/common:/home/scripts/common \
-B ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123:/home/scripts/case \
-B ${CASE_DIR}/postprd:/home/postprd -B ${CASE_DIR}/metprd:/home/metprd \
../nwp-container-met_${PROJ_VERSION}.sif /home/scripts/common/run_met.ksh

Note: If you are running on NCAR's Cheyenne you will need to modify the singularity exec command above to include a bind mount for "-B /glade:/glade" as well.

MET will write a variety of ASCII and netCDF output files to the local metprd/ directory. Please review the contents of its subdirectories (grid_stat, pb2nc, pcp_combine, and point_stat) to interrogate the data.

ls ${CASE_DIR}/metprd/*
grid_stat/grid_stat*.nc
grid_stat/grid_stat*.stat
pb2nc/prepbufr*.nc
pcp_combine/ST2*.nc
pcp_combine/wrfprs*.nc
point_stat/point_stat*.stat

Visualize Verification Results

The METviewer software provides a database and display system for visualizing the statistical output generated by MET. After starting the METviewer service, a new database is created into which the MET output is loaded. Plots of the verification statistics are created by interacting with a web-based graphical interface.

SELECT THE APPROPRIATE CONTAINER INSTRUCTIONS FOR YOUR SYSTEM BELOW:

In order to visualize the MET output using the METviewer database and display system, you first need to launch the METviewer container.

cd ${PROJ_DIR}/container-dtc-nwp/components/metviewer
docker-compose up -d
Note: you may need to wait 1-2 minutes prior to running the next command, as some processes starting up in the background may be slow.
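
To confirm the METviewer-related containers came up (optional; the exact container names depend on the docker-compose configuration), list the running containers:

# Optional check: the METviewer and MySQL containers should be listed as running
docker ps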

The MET statistical output then needs to be loaded into the MySQL database for querying and plotting by METviewer:

docker exec -it metviewer /scripts/common/metv_load_all.ksh mv_snow

The METviewer GUI can then be accessed with the following URL copied and pasted into your web browser:

http://localhost:8080/metviewer/metviewer1.jsp

Note, if you are running on AWS, run the following commands to reconfigure METviewer with your current IP address and restart the web service:

docker exec -it metviewer /bin/bash
/scripts/common/reset_metv_url.ksh
exit

The METviewer GUI can then be accessed with the following URL copied and pasted into your web browser (where IPV4_public_IP is your IPv4 Public IP from the AWS “Active Instances” web page):

http://IPV4_public_IP:8080/metviewer/metviewer1.jsp

Click the "Load XML" button in the top-right corner.

The METviewer GUI can be run interactively to create verification plots on the fly. However, to get you going, two sample plots are provided. Do the following in the METviewer GUI:

Note, if you are running on AWS, you will need to pull the example XML files from the Git repository and have them available on your local machine. To do this, simply right-click on the link provided and save the file to the desired location.
  • Click "OK" to load the XML to the GUI and populate all the required options.
  • Click the "Generate Plot" button on the top of the page to create the image.

Next, follow the same steps to create a plot of 10-meter wind components with this XML file:

Feel free to make changes in the METviewer GUI and use the "Generate Plot" button to make new plots. For example, the following plot was created by changing the "Independent Variable" field to "VALID_HOUR", including times from 0 through 12 hours, and changing the X label appropriately:

Note: Use of METviewer with Singularity is only supported on AWS!

In order to visualize the MET output using the METviewer database and display system, you first need to build a Singularity sandbox from the Docker image using the '--sandbox' and '--fix-perms' options. This step creates a metv4singularity directory.

cd ${PROJ_DIR}/container-dtc-nwp/components/metviewer/METviewer
singularity build --sandbox --fix-perms --force metv4singularity docker://dtcenter/nwp-container-metviewer-for-singularity:${PROJ_VERSION}

Next, start the Singularity instance as 'writable' and call it 'metv':

singularity instance start --writable metv4singularity metv
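
To verify the instance started (optional), list the running Singularity instances; 'metv' should appear in the output:

# Optional check: the 'metv' instance should be listed
singularity instance list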

Then, initialize and start MariaDB and Tomcat:

singularity exec --writable instance://metv bash init_singularity.sh
Note: When the script is done running, you may need to press some key (e.g., Enter) to get back to the prompt. At this point the image is ready!

Then, navigate to the scripts area and run a shell in the Singularity container:

cd ${PROJ_DIR}/container-dtc-nwp/components/scripts/common
singularity shell instance://metv

Now it is time to load the MET output into a METviewer database. As a note, the metv_load_singularity.ksh script requires two command-line arguments: 1) the name of the METviewer database (e.g., mv_snow) and 2) the ${CASE_DIR} path:

./metv_load_singularity.ksh mv_snow ${CASE_DIR}
Note, if you need to stop the instance:

singularity instance stop metv

 

The METviewer GUI can then be accessed with the following URL copied and pasted into your web browser (where IPV4_public_IP is your IPv4 Public IP from the AWS “Active Instances” web page):

http://IPV4_public_IP:8080/metviewer/metviewer1.jsp

Click the "Load XML" button in the top-right corner.

The METviewer GUI can be run interactively to create verification plots on the fly. However, to get you going, two sample plots are provided. Do the following in the METviewer GUI:

Note, if you are running on AWS, you will need to pull the example XML files from the Git repository and have them available on your local machine. To do this, simply right-click on the link provided and save the file to the desired location.
  • Click "OK" to load the XML to the GUI and populate all the required options.
  • Click the "Generate Plot" button on the top of the page to create the image.

Next, follow the same steps to create a plot of 10-meter wind components with this XML file:

Feel free to make changes in the METviewer GUI and use the "Generate Plot" button to make new plots.

You can also create plots via the METviewer batch plotting capability (i.e., not the METviewer GUI). A script to run the two supplied METviewer XMLs provides an example of how to create plots. Note that you must be in your METviewer singularity shell to run it, as shown below:

singularity shell instance://metv
cd ${PROJ_DIR}/container-dtc-nwp/components/scripts/snow_20160123/metviewer
./metv_plot_singularity.ksh ${CASE_DIR}

The output goes to ${CASE_DIR}/metviewer/plots, and you can use the display utility to view the images.
