Download and Build
Defining your workspace
The first step is to create an environment variable that contains the path to the directory in which you will build and run the SCM.
Go to the directory in which you will build and run the SCM by running the command below, substituting /path/to/my/build/and/run/directory with your actual directory name:
cd /path/to/my/build/and/run/directory
Next, define an environment variable that contains the path.
For bash (Bourne-Again Shell):
export SCM_WORK=`pwd`
For C-Shell:
setenv SCM_WORK `pwd`
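As a quick sanity check, you can confirm that the variable is set and points at a real directory. This sketch uses /tmp as a stand-in for your actual build-and-run directory:

```shell
# Stand-in for your actual build-and-run directory.
cd /tmp
export SCM_WORK=`pwd`
echo "SCM_WORK=$SCM_WORK"
# Confirm the variable points at an existing directory.
test -d "$SCM_WORK"
```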
Download the Tutorial Files
Files that are used throughout the exercises in this tutorial are available for download. Download these to the $SCM_WORK directory:
wget https://github.com/NCAR/ccpp-scm/releases/download/v7.0.0/tutorial_files.tar.gz
Alternatively, if wget is not available, use curl:
curl -OL https://github.com/NCAR/ccpp-scm/releases/download/v7.0.0/tutorial_files.tar.gz
Then extract the files and remove the archive:
tar -xvf tutorial_files.tar.gz
rm -f tutorial_files.tar.gz
Determining how you will use the SCM
There are two supported ways to build and run the Single Column Model: the standard build on a computational platform with the required software stack, and within a Docker container. Follow one of the options below:
Option 1. Using a computational platform with the required software stack
The SCM can be built on most modern UNIX-based operating systems, including both macOS and Linux. It has a few prerequisites, including a Fortran compiler compatible with the Fortran 2008 standard, Python version 3.8 or later, CMake version 3.14 or later, and a few external libraries which may need to be installed:
- NetCDF-C/NetCDF-Fortran
- NCEP libraries BACIO, SP, and W3EMC
- Python modules f90nml and netcdf4
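A quick way to confirm the required Python modules are available is to check whether they are importable; this is a minimal sketch (note that the pip package netcdf4 is imported as netCDF4):

```python
# Check that the Python prerequisites are importable without importing them.
import importlib.util

status = {}
for mod in ("f90nml", "netCDF4"):
    # find_spec returns None when the module is not installed.
    status[mod] = importlib.util.find_spec(mod) is not None

for mod, found in status.items():
    print(f"{mod}: {'OK' if found else 'missing'}")
```

Any module reported as missing can typically be installed with pip (e.g., pip install f90nml netcdf4).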
This tutorial will assume that you already have these prerequisites installed. Machines that already have the prerequisite software include:
- NCAR Derecho
- NOAA Hera, Jet
- MSU Orion, Hercules
For more details on software prerequisites, including instructions for building on a custom platform, see the SCM Users Guide.
Obtaining the code
The source code for the CCPP and SCM is provided through GitHub.com. This tutorial accompanies the v7.0.0 release tag, which contains the tested and supported version for general use.
Clone the v7 release code using
git clone --recursive -b release/public-v7 https://github.com/NCAR/ccpp-scm
The --recursive option in this command clones the release/public-v7 branch of the authoritative NCAR SCM repository (ccpp-scm) and all of its submodules/sub-repositories (ccpp-physics, ccpp-framework, and CMakeModules).
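If the clone was interrupted, the submodule directories can end up empty. The sketch below reports whether each one was populated; the submodule paths are assumptions about the repository layout, and the script should be run from the top-level ccpp-scm directory:

```shell
# Report whether a submodule directory was actually populated by the clone.
check_submodule() {
    if [ -d "$1" ] && [ -n "$(ls -A "$1" 2>/dev/null)" ]; then
        echo "$1: populated"
    else
        echo "$1: empty -- try 'git submodule update --init --recursive'"
    fi
}

# Assumed submodule paths within the ccpp-scm checkout.
for sub in ccpp/physics ccpp/framework CMakeModules; do
    check_submodule "$sub"
done
```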
Setting up the environment in a preconfigured platform
Computational platforms that meet the system requirements and have the prerequisite software prebuilt and installed in a central location are referred to as preconfigured platforms. Examples of preconfigured platforms are the Hera NOAA high-performance computing machine and the NCAR Derecho system (using the Intel and GNU compilers). The SCM repository contains modulefiles for all preconfigured platforms that can be loaded to set up the environment as needed. These modules set the needed environment variables, particularly $PATH, so that the SCM will build correctly.
If using a preconfigured platform (such as Hera or Derecho), it may be necessary to run on a compute node. For example, on Derecho:
qinteractive
To load the needed module, ensure you are in the top-level ccpp-scm directory, and run the following commands:
module use scm/etc/modules/
module load [machine]_[compiler]
The last command will depend on which machine you are on and which compiler you are using. For example, on the NCAR Derecho machine with GNU compilers:
module load derecho_gnu
Setting up the environment in a non-preconfigured platform
If you are not using a preconfigured platform, you need to install spack-stack yourself following the instructions found in Section 4.2.4 of the CCPP SCM User and Technical Guide v7-0-0.
After performing the installation and setting environment variables bacio_ROOT, sp_ROOT, and w3nco_ROOT to the location where spack-stack is installed, continue following the instructions in this tutorial.
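A minimal sketch of setting those variables, assuming a hypothetical installation prefix (/opt/spack-stack below is a placeholder; substitute your own paths):

```shell
# Hypothetical spack-stack installation prefix -- substitute your own.
STACK_PREFIX=/opt/spack-stack

# Point each *_ROOT variable at the corresponding library location.
export bacio_ROOT=$STACK_PREFIX/bacio
export sp_ROOT=$STACK_PREFIX/sp
export w3nco_ROOT=$STACK_PREFIX/w3nco

# Confirm the variables are set.
env | grep _ROOT
```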
Staging additional datasets
You need to download the lookup tables (large binaries, 324 MB) for the Thompson microphysics package, the GOCART climatological aerosols, and other datasets. The aerosol data is very large (~12 GB) and is needed only when the first digit of the aerosol flag (iaer) in the physics namelist is 1, as in the GFS_v17_p8_ugwpv1 and HRRR_gf suites. From the top-level ccpp-scm directory, run:
./contrib/get_all_static_data.sh
./contrib/get_thompson_tables.sh
./contrib/get_aerosol_climo.sh
Building the code
Following the commands below, you will run cmake to query system parameters and execute the CCPP prebuild script, which matches the physics variables (between what the host model, the SCM, can provide and what is needed by the physics schemes in the CCPP) and generates the physics caps needed to use them. You will then run make to build the SCM executable.
cd $SCM_WORK/ccpp-scm/scm
mkdir -p bin && cd bin
cmake ../src
make -j4
A successful build will produce the executable file $SCM_WORK/ccpp-scm/scm/bin/scm.
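As a quick check that the build succeeded, a small shell sketch like the following can confirm the executable was produced (the path follows the layout above):

```shell
# Print whether the SCM executable exists and is runnable at the given path.
check_build() {
    if [ -x "$1" ]; then
        echo "build OK: $1"
    else
        echo "executable not found: $1"
    fi
}

check_build "$SCM_WORK/ccpp-scm/scm/bin/scm"
```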
Option 2. Using a Docker container
In order to run a precompiled version of the CCPP SCM in a container, Docker needs to be available on your machine. Please visit https://www.docker.com to download and install the version compatible with your system. Docker frequently releases updates to the software; it is recommended to apply all available updates. NOTE: In order to install Docker on your machine, you need to have root access privileges. More information about getting started can be found at https://docs.docker.com/get-started and in Section 4.5 of the CCPP SCM User and Technical Guide.
In the first exercise you will use a prebuilt Docker image available on Docker Hub. In subsequent exercises you will use the same image to rebuild your own SCM executable with various modifications.
Using a prebuilt Docker image
A prebuilt Docker image is available on Docker Hub under the dtcenter organization. In order to use it, execute the following from the terminal where Docker is run:
docker pull dtcenter/ccpp-scm:v7.0.0
To verify that the image exists afterward, run the command:
docker images
Proceed to “Set up the Docker image”.
Set up the Docker image
Next, you will set up an output directory so that output generated by the SCM and its scripts within the container can be used outside of the container using the following steps:
- Set up a directory that will be shared between the host machine and the Docker container. When set up correctly, it will contain output generated by the SCM within the container for manipulation by the host machine. For Mac/Linux, create a directory of your choice, e.g.:
mkdir -p /path/to/my/output/directory
For Windows, you can try to create a directory of your choice to mount to the container, but it may not work or may require more configuration, depending on your particular Docker installation. We have found that Docker volume mounting in Windows can be difficult to set up correctly. One method that worked for us was to create a new directory under our local user space, and specify the volume mount as below.
In addition, with Docker Toolbox, double-check that the mounted directory has correct permissions. For example, open VirtualBox, right-click on the running virtual machine, and choose "Settings". In the dialog that appears, make sure that the directory you're trying to share shows up under "Shared Folders" (and add it if it does not), and make sure that the "auto-mount" and "permanent" options are checked.
- Set an environment variable to point to the directory that was set up in the previous step.
- To use the SCM in the container interactively, run non-default configurations, create plots, or even develop code, issue the following command:
Note that this command will not actually run the SCM, but will put you within the container space and within the bin directory of the SCM with a pre-compiled executable. At this point, you can run the scripts as described in the following sections.
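The interactive run command described above can be sketched as follows; the image tag, container name, mount target, and the OUT_DIR variable are all assumptions here, so adjust them to match your setup and the image you pulled:

```shell
# Hypothetical shared output directory (set up in the earlier step).
OUT_DIR=$HOME/scm-output

# Build the docker run command as a string so it can be inspected first:
#   --rm   remove the container on exit (it is ephemeral)
#   -it    interactive terminal session
#   -v     bind-mount the shared output directory into the container
DOCKER_CMD="docker run --rm -it -v ${OUT_DIR}:/home --name run-ccpp-scm dtcenter/ccpp-scm:v7.0.0"

echo "$DOCKER_CMD"     # inspect the command before running it
# eval "$DOCKER_CMD"   # uncomment to actually start the container
```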
A couple things to note when using a container:
- The run scripts should be used with the -d option if output is to be shared with the host machine
- Since the container is ephemeral, if you do development you should push your changes to a remote git repository to save them (i.e. a fork on GitHub.com).