Session 2
In this session, you will:
- Change the predefined grid resolution
- Change the physics suite and the external models for generating ICs and LBCs
- Define a custom regional ESG grid
1. Changing the Predefined Grid
Starting with the configuration file from Session 1, we will now generate a new experiment that runs a forecast on a CONUS domain with 3km nominal resolution (as opposed to the 25km from Session 1). Since this is only for demonstration purposes, we will also decrease the forecast time from 48 hours to 6 hours. The steps are as follows:
1. For convenience, in your bash shell set SR_WX_APP_TOP_DIR to the path in which you cloned the UFS-SRW App, e.g. (replace the path below with your own clone location):
SR_WX_APP_TOP_DIR="/path/to/ufs-srweather-app"
Then set the following variables:
HOMErrfs="${SR_WX_APP_TOP_DIR}/regional_workflow"
USHDIR="$HOMErrfs/ush"
2. Change location to USHDIR and make a backup copy of your config.sh from the Session 1 experiment:
cd $USHDIR
cp config.sh config.sh.session_1
3. Edit config.sh as follows:
a. Change the name of the experiment directory (which is also the name of the experiment) to something that properly represents the new settings:
EXPT_SUBDIR="test_CONUS_3km_GFSv15p2"
b. Change the name of the predefined grid to the CONUS grid with 3km resolution:
PREDEF_GRID_NAME="RRFS_CONUS_3km"
c. Change the forecast length to 6 hours:
FCST_LEN_HRS="6"
4. [OPTIONAL] In order to have cron automatically relaunch the workflow without the user having to edit the cron table, add the following lines to the end of (or anywhere in) config.sh:
USE_CRON_TO_RELAUNCH="TRUE"
CRON_RELAUNCH_INTVL_MNTS=03
During the experiment generation step below, these settings will create an entry in the user’s cron table to resubmit the workflow every 3 minutes until either all workflow tasks complete successfully or there is a failure in one of the tasks. (Recall that creation of this cron job was done manually in Session 1.)
5. Generate a new experiment using this configuration file:
a. Ensure that the proper workflow environment is loaded (from USHDIR):
source ../../env/wflow_cheyenne.env
b. Generate the workflow:
./generate_FV3LAM_wflow.sh
After this completes, for convenience, in your shell define the variable EXPTDIR to the directory of the experiment created by this script:
EXPT_SUBDIR="test_CONUS_3km_GFSv15p2"
EXPTDIR=${SR_WX_APP_TOP_DIR}/../expt_dirs/${EXPT_SUBDIR}
6. If you did not include the parameters USE_CRON_TO_RELAUNCH and CRON_RELAUNCH_INTVL_MNTS in your config.sh (the optional step above), create a cron job to automatically relaunch the workflow as follows:
a. Open your cron table for editing:
crontab -e
This will open the cron table for editing (in vi).
b. Insert a line of the following form at the end of the file (the schedule and command shown are an illustration consistent with the 3-minute relaunch interval used above):
*/3 * * * * cd /path/to/expt_dirs/my_expt_name && ./launch_FV3LAM_wflow.sh
The path to my_expt_name must be edited manually to match the value in EXPTDIR.
7. Monitor the status of the workflow:
cd $EXPTDIR
rocotostat -w FV3LAM_wflow.xml -d FV3LAM_wflow.db -v 10
Continue to monitor until all workflow tasks complete successfully. Note that the status of the workflow is updated only every 3 minutes, because that is the frequency at which the cron job relaunches the workflow.
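If the watch utility is available on your system (an assumption; it is standard on most Linux machines), you can let the status table refresh itself at the same 3-minute interval instead of rerunning the command by hand. Press Ctrl-C to stop:
watch -n 180 'rocotostat -w FV3LAM_wflow.xml -d FV3LAM_wflow.db -v 10'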
If you don’t want to wait 3 minutes for the cron job (e.g. because your workflow tasks complete in a much shorter time, which is the case for this experiment), you can relaunch the workflow and update its status yourself at any time by calling the launch_FV3LAM_wflow.sh script from your experiment directory and then checking the log file it appends to. This is done as follows:
./launch_FV3LAM_wflow.sh
tail -n 40 log.launch_FV3LAM_wflow
The launch_FV3LAM_wflow.sh script does the following:
- Issues the rocotorun command to update and relaunch the workflow from its previous state.
- Issues the rocotostat command (same as above) to obtain the table containing the status of the various workflow tasks.
- Checks the output of the rocotostat command for the string “FAILURE”, which indicates that at least one task has failed.
- Appends the output of the rocotorun and rocotostat commands to a log file in $EXPTDIR named log.launch_FV3LAM_wflow.
- Counts the number of cycles that have completed successfully.
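Because the launch script appends its output to log.launch_FV3LAM_wflow, a quick optional check for problems is to scan that log for the same "FAILURE" string the script itself looks for (a minimal convenience command, not part of the workflow):
grep -n "FAILURE" log.launch_FV3LAM_wflow || echo "No FAILURE strings found in the log so far."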
8. Once all workflow tasks have successfully completed, plot and display the output using a procedure analogous to the one in Session 1 but modified for the shorter forecast time. The steps are:
- Purge all modules and load the one needed for the plotting:
module purge
module load ncarenv
ncar_pylib /glade/p/ral/jntp/UFS_SRW_app/ncar_pylib/python_graphics
- Change location to where the python plotting scripts are located:
cd $USHDIR/Python
- Call the plotting script. Since this takes some time, here we will plot only the 0th and 6th hour forecasts as follows (note that this is a one-line command):
python plot_allvars.py 2019061500 0 6 6 $EXPTDIR /glade/p/ral/jntp/UFS_SRW_app/tools/NaturalEarth
This takes about 6 minutes to complete. The plots (in png format) will be placed in the directory $EXPTDIR/2019061500/postprd. If you wish, you can generate plots for all forecast hours by changing the second "6" in the call above to a "1", i.e. (again a one-line command):
python plot_allvars.py 2019061500 0 6 1 $EXPTDIR /glade/p/ral/jntp/UFS_SRW_app/tools/NaturalEarth
Note: If you have gone back and forth between experiments in this session, before issuing the python plotting command above, make sure that you have (re)defined EXPTDIR in your shell to the one for the correct experiment. You can check its value using
echo $EXPTDIR
- As the plots appear in that directory, if you have an X-windows server running on your local machine, you can display them as follows:
cd $EXPTDIR/2019061500/postprd
display name_of_file.png &
where name_of_file.png should be replaced by the name of the file you’d like to display.
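If you are not sure which plot files were produced, a simple way to list them and open one is shown below (purely illustrative; the second command just opens whichever file happens to sort first):
cd $EXPTDIR/2019061500/postprd
ls -1 *.png
display "$(ls -1 *.png | head -n 1)" &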
2. Changing the Physics Suite and the External Models for Generating ICs and LBCs
Starting with the configuration file created above for the 3km run with the GFS_v15p2 physics suite, we will now generate a new experiment that runs a forecast on the same predefined 3km CONUS grid but instead uses the RRFS_v1alpha physics suite. This requires changing the external models from which the initial conditions (ICs) and lateral boundary conditions (LBCs) are generated: instead of the FV3GFS (used in the previous experiment), we will use the HRRR for ICs and the RAP for LBCs, because several of the fields needed by the RRFS_v1alpha suite are available in the HRRR and RAP output files but not in those of the FV3GFS. We will also demonstrate how to change the frequency with which the lateral boundaries are specified. The steps are as follows:
- Change location to USHDIR and make a backup copy of your config.sh from the experiment in Session 2.1:
cd $USHDIR
cp config.sh config.sh.session_2p1
- Edit config.sh as follows:
- Change the name of the experiment directory to something that properly represents the new experiment:
EXPT_SUBDIR="test_CONUS_3km_RRFS_v1alpha"
- Change the name of the physics suite:
CCPP_PHYS_SUITE="FV3_RRFS_v1alpha"
- Change the time interval (in hours) with which the lateral boundaries will be specified (using data from the RAP):
LBC_SPEC_INTVL_HRS="3"
Note that the 6-hour forecast we are running here will require two LBC files from the RAP: one for forecast hour 3 and another for hour 6. We specify the names of these files below via the parameter EXTRN_MDL_FILES_LBCS. Note also that the boundary conditions for hour 0 are included in the file for the ICs (from the HRRR).
- Change the name of the external model that will provide the fields from which to generate the ICs to "HRRR", and specify the name of the HRRR file and the base directory in which it is located (note the "X" at the end of the path name):
EXTRN_MDL_NAME_ICS="HRRR"
EXTRN_MDL_SOURCE_BASEDIR_ICS="/glade/p/ral/jntp/UFS_SRW_app/staged_extrn_mdl_files/HRRRX"
EXTRN_MDL_FILES_ICS=( "hrrr.out.for_f000" )
- Change the name of the external model that will provide the fields from which to generate the LBCs to "RAP", and specify the names of the RAP files and the base directory in which they are located (again, note the "X" at the end of the path name):
EXTRN_MDL_NAME_LBCS="RAP"
EXTRN_MDL_SOURCE_BASEDIR_LBCS="/glade/p/ral/jntp/UFS_SRW_app/staged_extrn_mdl_files/RAPX"
EXTRN_MDL_FILES_LBCS=( "rap.out.for_f003" "rap.out.for_f006" )
The files rap.out.for_f003 and rap.out.for_f006 will be used to generate the LBCs for forecast hours 3 and 6, respectively.
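For reference, after these edits the set of parameters changed or added in config.sh relative to the Session 2.1 experiment should look like this (all values exactly as set in the steps above):
EXPT_SUBDIR="test_CONUS_3km_RRFS_v1alpha"
CCPP_PHYS_SUITE="FV3_RRFS_v1alpha"
LBC_SPEC_INTVL_HRS="3"
EXTRN_MDL_NAME_ICS="HRRR"
EXTRN_MDL_SOURCE_BASEDIR_ICS="/glade/p/ral/jntp/UFS_SRW_app/staged_extrn_mdl_files/HRRRX"
EXTRN_MDL_FILES_ICS=( "hrrr.out.for_f000" )
EXTRN_MDL_NAME_LBCS="RAP"
EXTRN_MDL_SOURCE_BASEDIR_LBCS="/glade/p/ral/jntp/UFS_SRW_app/staged_extrn_mdl_files/RAPX"
EXTRN_MDL_FILES_LBCS=( "rap.out.for_f003" "rap.out.for_f006" )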
- Generate a new experiment using this configuration file:
- Ensure that the proper workflow environment is loaded and generate the experiment directory and corresponding rocoto workflow:
source ../../env/wflow_cheyenne.env
cd $USHDIR
./generate_FV3LAM_wflow.sh
- After this completes, for convenience, in your shell redefine the variable EXPTDIR to the directory of the current experiment:
EXPT_SUBDIR="test_CONUS_3km_RRFS_v1alpha"
EXPTDIR="${SR_WX_APP_TOP_DIR}/../expt_dirs/${EXPT_SUBDIR}"
- If in Session 2.1 you did not set the parameters USE_CRON_TO_RELAUNCH and CRON_RELAUNCH_INTVL_MNTS in your config.sh, you will have to create a cron job to automatically relaunch the workflow. Do this using the same procedure as in Session 2.1.
- Monitor the status of the workflow:
cd $EXPTDIR
rocotostat -w FV3LAM_wflow.xml -d FV3LAM_wflow.db -v 10
Continue to monitor until all the tasks complete successfully. Alternatively, as in Session 2.1, you can use the launch_FV3LAM_wflow.sh script to relaunch the workflow (instead of waiting for the cron job to relaunch it) and obtain its updated status as follows:
cd $EXPTDIR
./launch_FV3LAM_wflow.sh
tail -n 40 log.launch_FV3LAM_wflow
- Once all workflow tasks have successfully completed, you can plot and display the output using a procedure analogous to the one described in Session 2.1. Here, we will demonstrate how to generate difference plots between two forecasts on the same grid. The two forecasts we will use are the one in Session 2.1 that uses the GFS_v15p2 suite and the one here that uses the RRFS_v1alpha suite. The steps are:
- Purge all modules and load the one needed for the plotting:
module purge
module load ncarenv
ncar_pylib /glade/p/ral/jntp/UFS_SRW_app/ncar_pylib/python_graphics
- Change location to where the python plotting scripts are located:
cd $USHDIR/Python
- For convenience, define variables containing the paths to the two forecast experiment directories:
EXPTDIR_2p1=${SR_WX_APP_TOP_DIR}/../expt_dirs/test_CONUS_3km_GFSv15p2
EXPTDIR_2p2=${SR_WX_APP_TOP_DIR}/../expt_dirs/test_CONUS_3km_RRFS_v1alpha
- Call the plotting script. Since this takes some time, here we will plot only the 0th and 6th hour differences, as follows (note that this is a one-line command):
python plot_allvars_diff.py 2019061500 0 6 6 ${EXPTDIR_2p2} ${EXPTDIR_2p1} /glade/p/ral/jntp/UFS_SRW_app/tools/NaturalEarth
The difference plots (in png format) will be placed under the first experiment directory specified in the call above, which in this case is EXPTDIR_2p2. Thus, the plots will be in ${EXPTDIR_2p2}/2019061500/postprd. (Note that specifying EXPTDIR_2p1 first in the call above would overwrite the plots generated in Session 2.1.) If you wish, you can generate difference plots for all forecast hours by changing the second "6" in the call above to a "1".
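Since the difference script reads from both experiment directories, it can be worth a quick optional check that each contains post-processed output for this cycle before plotting (these are the same 2019061500/postprd directories referenced above):
ls ${EXPTDIR_2p1}/2019061500/postprd | head
ls ${EXPTDIR_2p2}/2019061500/postprd | head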
- As the difference plots appear in that directory, you can display them as follows:
cd ${EXPTDIR_2p2}/2019061500/postprd
display name_of_file.png &
3. Defining a Custom Regional ESG Grid
In this part, we will demonstrate how to define a custom regional ESG (Extended Schmidt Gnomonic) grid instead of using one of the predefined ones. Note that in addition to this grid, we must define a corresponding write-component grid. This is the grid on which the fields are provided in the output files. (This remapping from the native ESG grid to the write-component grid is done because tasks downstream of the forecast may not be able to work with the native grid.)
To define the custom ESG grid and the corresponding write-component grid, three groups of parameters must be added to the experiment configuration file config.sh:
- ESG grid parameters
- Computational parameters
- Write-component grid parameters
Below, we will demonstrate how to add these parameters for a new 3 km sub-CONUS ESG grid. For reference, note that for the predefined grids, these parameters are set in the script $USHDIR/set_predef_grid_params.sh.
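If you would like to see how these parameters are set for the predefined 3 km CONUS grid used earlier (purely for comparison; adjust the number of context lines as needed), one way is to grep that script, e.g.:
grep -A 40 'RRFS_CONUS_3km' $USHDIR/set_predef_grid_params.sh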
First, change location to USHDIR and make a backup copy of your config.sh from the experiment in Session 2.2:
cd $USHDIR
cp config.sh config.sh.session_2p2
For this experiment, we will start with the configuration file used in Session 2.1 and modify it. Thus, first copy it into config.sh:
cp config.sh.session_2p1 config.sh
Now edit config.sh as follows:
- Change the name of the experiment directory to something that properly represents the purpose of this experiment:
EXPT_SUBDIR="test_custom_ESGgrid"
- Comment out the line that defines the predefined grid, i.e. change the line
PREDEF_GRID_NAME="RRFS_CONUS_3km"
to
#PREDEF_GRID_NAME="RRFS_CONUS_3km"
- Ensure that the grid generation method is set to "ESGgrid", i.e. you should see the line
GRID_GEN_METHOD="ESGgrid"
right after the line for PREDEF_GRID_NAME that you commented out in the previous step. If not, make sure to include it. The other valid value for this parameter is "GFDLgrid", but we will not demonstrate adding that type of grid here; ESG grids are preferred because they provide more uniform grid size distributions.
- Ensure that QUILTING is set to "TRUE":
QUILTING="TRUE"
This tells the experiment to remap output fields from the native ESG grid onto a new orthogonal grid known as the write-component grid and then write the remapped fields to a file using a dedicated set of MPI processes.
- Add the definitions of the ESG grid parameters after the line for QUILTING:
- Define the longitude and latitude (in degrees) of the center of the grid:
ESGgrid_LON_CTR="-114.0"
ESGgrid_LAT_CTR="37.0"
- Define the grid cell size (in meters) in the x (west-to-east) and y (south-to-north) directions:
ESGgrid_DELX="3000.0"
ESGgrid_DELY="3000.0"
- Define the number of grid cells in the x and y directions:
ESGgrid_NX="420"
ESGgrid_NY="300"
- Define the number of halo points around the grid with a “wide” halo:
ESGgrid_WIDE_HALO_WIDTH="6"
This parameter can always be set to "6" regardless of the other grid parameters.
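As a quick optional sanity check (simple shell arithmetic, nothing the workflow requires), the nominal extent of the native grid implied by these cell sizes and cell counts is:
echo "x-extent: $(( 420 * 3000 / 1000 )) km, y-extent: $(( 300 * 3000 / 1000 )) km"
which corresponds to an approximately 1260 km by 900 km domain.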
- Add the definitions of computational parameters after the definitions of the ESG grid parameters:
- Define the physics time step (in seconds) to use in the forecast model with this grid:
DT_ATMOS="45"
This is the largest time step in the model; it is the time step on which the physics parameterizations are called. In general, DT_ATMOS depends on the horizontal resolution of the grid: the finer the grid, the smaller the time step must be to avoid numerical instabilities.
- Define the MPI layout:
LAYOUT_X="16"
LAYOUT_Y="10"
These are the numbers of MPI processes in the x and y directions into which the grid is decomposed.
- Define the block size. This is a machine-dependent parameter that does not have a default value. For Cheyenne, set it to "32":
BLOCKSIZE="32"
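For reference (again just shell arithmetic), the layout above decomposes the forecast grid across LAYOUT_X * LAYOUT_Y MPI processes:
echo "forecast MPI processes: $(( 16 * 10 ))"
i.e., 160 processes for the forecast itself, not counting the write-component tasks defined below.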
- Add the definitions of the write-component grid parameters right after the computational parameters. The workflow supports three types of write-component grids: regional lat-lon, rotated lat-lon, and Lambert conformal. Here, we demonstrate how to set up a Lambert conformal grid. (Note that there are scripts to calculate the write-component grid parameters from the ESG grid parameters, but they are not yet in user-friendly form. Thus, here, we use an approximate method to obtain the write-component parameters.) The steps are:
- Define the number of write-component groups (WRTCMP_write_groups) and the number of MPI processes per group (WRTCMP_write_tasks_per_group):
WRTCMP_write_groups="1"
WRTCMP_write_tasks_per_group=$(( 1*LAYOUT_Y ))
Each write group consists of a set of dedicated MPI processes that writes the fields on the write-component grid to a file on disk while the forecast continues to run on a separate set of processes. These parameters may have to be increased for grids having more grid points.
- Define the type of write-component grid. Here, we will consider only a Lambert conformal grid:
WRTCMP_output_grid="lambert_conformal"
- Define the longitude and latitude (in degrees) of the center of the write-component grid. The most straightforward way to define these is to set them to the coordinates of the center of the native ESG grid:
WRTCMP_cen_lon="${ESGgrid_LON_CTR}"
WRTCMP_cen_lat="${ESGgrid_LAT_CTR}"
- Define the first and second standard latitudes associated with a Lambert conformal coordinate system. For simplicity, we set these to the latitude of the center of the ESG grid:
WRTCMP_stdlat1="${ESGgrid_LAT_CTR}"
WRTCMP_stdlat2="${ESGgrid_LAT_CTR}"
- Define the grid cell size (in meters) in the x (west-to-east) and y (south-to-north) directions on the write-component grid. We simply set these to the corresponding quantities for the ESG grid:
WRTCMP_dx="${ESGgrid_DELX}"
WRTCMP_dy="${ESGgrid_DELY}"
- Define the number of grid points in the x and y directions on the write-component grid. To ensure that the write-component grid lies completely within the ESG grid, we set these to 95% of ESGgrid_NX and ESGgrid_NY, respectively. This gives:
WRTCMP_nx="399"
WRTCMP_ny="285"
- Define the longitude and latitude (in degrees) of the lower left (southwest) corner of the write-component grid. Approximate values (from a linearization that is more accurate the smaller the horizontal extent of the grid) can be obtained using the following formulas (for reference):
WRTCMP_lon_lwr_left = WRTCMP_cen_lon - (degs_per_meter/(2*cos_phi_ctr))*WRTCMP_nx*WRTCMP_dx
WRTCMP_lat_lwr_left = WRTCMP_cen_lat - (degs_per_meter/2)*WRTCMP_ny*WRTCMP_dy
where
cos_phi_ctr = cos((pi_geom/180)*WRTCMP_cen_lat)
degs_per_meter = 180/(pi_geom*radius_Earth)
Here, pi_geom ≈ 3.14 and radius_Earth ≈ 6371e+3 m. Substituting these values along with the write-component grid parameters set above, we get
WRTCMP_lon_lwr_left="-120.74"
WRTCMP_lat_lwr_left="33.16"
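As an optional sanity check, the 95% grid-point counts and the approximate lower-left corner coordinates above can be re-evaluated directly from the shell; the short awk program below simply plugs the values chosen in this section into the formulas given earlier (nothing here is read by the workflow):
awk 'BEGIN {
  pi_geom = 3.14159265; radius_Earth = 6371.0e3;   # constants used in the formulas above
  cen_lon = -114.0; cen_lat = 37.0;                # ESGgrid_LON_CTR, ESGgrid_LAT_CTR
  nx = 420*95/100; ny = 300*95/100;                # 95% of ESGgrid_NX, ESGgrid_NY
  dx = 3000.0; dy = 3000.0;                        # WRTCMP_dx, WRTCMP_dy
  degs_per_meter = 180/(pi_geom*radius_Earth);
  cos_phi_ctr = cos((pi_geom/180)*cen_lat);
  lon_ll = cen_lon - (degs_per_meter/(2*cos_phi_ctr))*nx*dx;
  lat_ll = cen_lat - (degs_per_meter/2)*ny*dy;
  printf "WRTCMP_nx=%d  WRTCMP_ny=%d\n", nx, ny;
  printf "WRTCMP_lon_lwr_left=%.2f  WRTCMP_lat_lwr_left=%.2f\n", lon_ll, lat_ll;
}'
This should reproduce the values WRTCMP_nx="399", WRTCMP_ny="285", WRTCMP_lon_lwr_left="-120.74", and WRTCMP_lat_lwr_left="33.16" used above.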
For simplicity, we leave all other parameters in config.sh the same as in the experiment in Session 2.1. We can now generate a new experiment using this configuration file as follows:
- Ensure that the proper workflow environment is loaded and generate the experiment directory and corresponding rocoto workflow:
source ../../env/wflow_cheyenne.env
cd $USHDIR
./generate_FV3LAM_wflow.sh
- After this completes, for convenience, in your shell redefine the variable EXPTDIR to the directory of the current experiment:
EXPT_SUBDIR="test_custom_ESGgrid"
EXPTDIR="${SR_WX_APP_TOP_DIR}/../expt_dirs/${EXPT_SUBDIR}"
- Relaunch the workflow using the launch_FV3LAM_wflow.sh script and monitor its status:
cd $EXPTDIR
./launch_FV3LAM_wflow.sh
tail -n 80 log.launch_FV3LAM_wflow
- Once all workflow tasks have successfully completed, plot and display the output using a procedure analogous to the one in Session 2.1. (Before doing so, make sure that the shell variable EXPTDIR is set to the directory for this experiment; otherwise, you might end up generating plots for one of the other experiments!)