HWRF Online Tutorial V3.9a | Experiment Configuration

Setting up HWRF Experiment

The configuration of an HWRF experiment is controlled through the configuration (*.conf) files in the hwrfrun/parm/ directory. Each of these configuration files has sections with headers contained in square brackets, e.g., [config] and [dir]. Within each section, related variables are set. Four configuration files are required by HWRF; the files listed below set the default values for all of the configuration options used by the HWRF system:

hwrf.conf
hwrf_basic.conf
hwrf_input.conf
system.conf

In general, there is no need to edit these files directly. To choose a user-defined configuration without overwriting these four files, it is best to create additional configuration files that override the default values at run time. A general description of these files is as follows, with instructions for configuring the tutorial run to follow.
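As a sketch of that approach, a small override file can be created and later passed to the HWRF launcher along with the defaults. The file name and the variables chosen here are illustrative, not prescribed by the release:

```shell
# Hypothetical override file; any variable set here takes precedence
# over the defaults when this file is supplied at run time.
cat > my_overrides.conf <<'EOF'
[config]
forecast_length=12
scrub=no
EOF

# Inspect the result.
cat my_overrides.conf
```

Keeping overrides in a separate file like this leaves the four default conf files untouched, so the experiment remains reproducible against the released defaults.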

Overview of hwrf.conf
The variables contained in hwrf.conf primarily define namelist options for the components of HWRF. Please refer to the descriptions of the variables within hwrf.conf, along with the Users' Guides associated with each component of HWRF.

Overview of hwrf_basic.conf
This file contains the configuration options that control the HWRF workflow, i.e., whether to run a given component (such as ocean coupling, vortex relocation, or data assimilation), the length of the forecast, and so on. This file also sets up the structure of the output directories. A few notable directories will be referenced throughout the tutorial:

WORKhwrf={CDSCRUB}/{RUNhwrf}/{vit[YMDH]}/{vit[stormid3]} -- Main working directory
HOMEhwrf={CDSAVE}/{EXPT} -- Main HWRF installation top directory
com={CDSCRUB}/{RUNhwrf}/com/{vit[YMDH]}/{vit[stormid3]} -- COM directory for communication between cycles

Users should not change the definitions of the variables above.

Overview of hwrf_input.conf

This file defines the default input data directory structure and the locations of input data on many of the NOAA operational and research computers. Users can follow the format of this file to define their own input data locations. The hwrf_v3.9a_release.conf file, explained below, includes a section that is sufficient for using data staged in a specific area on disk. Users who wish to adopt a different input data directory structure may define it in an additional conf file, either by adding a new section or by editing the existing [comm_hist] section. While the input data can be placed anywhere that is locally available to the compute nodes, users are advised not to change the input file naming convention.
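For illustration, a user-defined input section in an additional conf file might mirror the layout of [comm_hist]. The section name and path below are hypothetical, and a real section must also supply the per-dataset entries and file naming patterns found in hwrf_input.conf:

```
# Hypothetical section -- the name must match whatever fcst_catalog selects.
[my_input_data]
inputroot=/path/to/locally/staged/data
```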

The choice of which set of input data is actually used in an experiment is determined by the variable fcst_catalog in the file system.conf. To use the test datasets provided by DTC, users should set this variable to comm_hist, the name of the corresponding section. The user must also set the path to the input data by editing the variable inputroot within the [comm_hist] section of hwrf_v3.9a_release.conf. This is explicitly described in the instructions below.
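Putting the two settings together, a user conf file would contain something like the following. The inputroot path is a placeholder; the tutorial's actual value appears in the hwrf_v3.9a_release.conf listing later on this page:

```
[config]
fcst_catalog=comm_hist

[comm_hist]
# Placeholder -- replace with the location of the staged datasets.
inputroot=/path/to/staged/datasets
```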

Overview of system.conf

This file defines the top-level output directory structure and a handful of other variables used for running HWRF. Community users running HWRF on Cheyenne need to copy or link the example file system.conf.cheyenne to system.conf.

 

Instructions for Tutorial

The following personalized configuration files set a few variables that are required by the computational platform on which HWRF is running.

Begin the configuration step by entering the parm directory, where the conf files will be edited.

cd $SCRATCH/hwrfrun/parm

Link the Cheyenne version of system.conf.

ln -sf system.conf.cheyenne system.conf

Overview of hwrf_v3.9a_release.conf
For the HWRF v3.9a public release, there is a fifth configuration file, hwrf_v3.9a_release.conf, which sets configuration options that require changes by the user, or that are required to be set based on the capabilities of HWRF v3.9a. The relevant variables are explained below.

Please check hwrf_v3.9a_release.conf to verify that each of the lines annotated below with a --> comment matches the tutorial configuration:

[config]
disk_project=dtc-hurr
fcst_catalog=comm_hist
archive=none
publicrelease=yes
run_ensemble_da=no
scrub=no
forecast_length=12

[dir]
inputroot=/glade/p/ral/jnt/HWRF/datasets_v3.9a/Matthew --> This points to the HWRF input datasets
syndat=/glade/p/ral/jnt/HWRF/datasets_v3.9a/SYNDAT-PLUS
outputroot=/glade/scratch/{ENV[USER]} --> This is the output directory
CDNOSCRUB={outputroot}/noscrub
CDSCRUB={outputroot}/pytmp
CDSAVE=/glade/scratch/{ENV[USER]}/HWRF_v3.9a --> This is the top of the HWRF directory tree

[comm_hist]
inputroot=/glade/p/ral/jnt/HWRF/datasets_v3.9a/Matthew --> This points to the HWRF input datasets

To turn off scrubbing for any component of HWRF, set scrub=no in that component's section of the configuration; this prevents the output files from being deleted.
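As an example, scrubbing can be disabled for a single piece of the workflow by adding scrub=no to that piece's section in a user conf file. The section name below is illustrative only; check the parm/*.conf files for the real section names:

```
# Hypothetical per-component override; [runwrf] is an example section name.
[runwrf]
scrub=no
```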
The [comm_hist] section provides the directory structure and naming conventions for the input data, where inputroot defines the parent data directory path.
The [exe] section provides the paths to the compiled executables.

Note: With the values above set, the following variables expand to point to specific locations on disk. The variables are available to the Python scripts by default, but can also be used as Linux environment variables by setting them to the values found in ${COMIN}/storm1.holdvars.txt. For this tutorial, they were included in the .cshrc file that was set up when you first logged into Cheyenne.

HOMEhwrf = /glade/scratch/{ENV[USER]}/HWRF_v3.9a/hwrfrun
WORKhwrf = /glade/scratch/{ENV[USER]}/pytmp/hwrfrun/2016100400/14L
COMIN = /glade/scratch/{ENV[USER]}/pytmp/hwrfrun/com/2016100400/14L
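To illustrate the holdvars mechanism, such a file can be sourced from a Bourne-type shell so the variables become environment variables. The file content below is a mock-up, since the real ${COMIN}/storm1.holdvars.txt is generated by HWRF and contains many more settings:

```shell
# Mock stand-in for ${COMIN}/storm1.holdvars.txt (generated by HWRF).
cat > storm1.holdvars.txt <<'EOF'
export WORKhwrf=/glade/scratch/someuser/pytmp/hwrfrun/2016100400/14L
export COMIN=/glade/scratch/someuser/pytmp/hwrfrun/com/2016100400/14L
EOF

# Source the file to export the variables into the current session.
. ./storm1.holdvars.txt
echo "$WORKhwrf"   # prints /glade/scratch/someuser/pytmp/hwrfrun/2016100400/14L
```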

For more information about the HWRF system, please consult the HWRF Users' Guide.
If you are looking at this page while compiling hwrf-utilities, check back to see whether the compilation was successful.