The architectural layers are organized into a hierarchical call tree structure, corresponding to separate functionalities, and can be used to identify the areas of responsibility of the various development groups.
In many models, such as in the FY17 operational GFS and in FV3GFS, the selection of physical parameterizations is available from a Fortran namelist. The processing that is handled within the atmospheric model must necessarily be consistent with the choice of physical parameterizations. For example, the numbers of hydrometeors and the contributions of various tendencies are coordinated.
When various options are available for a category of physical parameterization, such as multiple convective schemes, the interface between the atmospheric model and the physics becomes cluttered. The calls to the schemes become surrounded by computations that serve partly as pre-processing for the physics scheme and partly as post-processing for the atmospheric model or the subsequent scheme. It is precisely this interstitial processing, the small computations performed before and after each scheme, that the CCPP makes explicit.
The Suite Definition File (SDF) is a text file read by the model at run time. It is used to specify the physical parameterization suite, and includes information about the number of parameterization groupings, which parameterizations are part of each group, the order in which the parameterizations should be run, and whether subcycling will be used to run any of the parameterizations with shorter timesteps.
In addition to the six or so major parameterization categories (radiation, boundary layer, deep convection, resolved moist physics, and so on), the SDF can also include an arbitrary number of additional interstitial schemes in between the parameterizations to prepare or postprocess data. In many models this interstitial code is not visible to the model user, but in the SDF both the physical parameterizations and the interstitial processing are listed explicitly.
The SDF also invokes an initialization step, which is run only once when the model is first initialized.
Finally, the name of the suite is listed in the SDF.
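As a concrete illustration, an SDF with an initialization step, two groups, and one subcycled scheme might look like the sketch below. This is only a sketch: the element layout follows the capabilities described above (suite name, initialization, groups, subcycling), and all suite and scheme names are hypothetical.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical SDF sketch; suite and scheme names are illustrative -->
<suite name="example_suite">
  <init>example_suite_init</init>
  <ipd part="1">
    <subcycle loop="1">
      <scheme>example_radiation</scheme>
      <scheme>example_radiation_post</scheme>
    </subcycle>
  </ipd>
  <ipd part="2">
    <!-- loop="2" runs this scheme twice per physics time step -->
    <subcycle loop="2">
      <scheme>example_microphysics</scheme>
    </subcycle>
  </ipd>
</suite>
```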
By default, this suite name is used to compose the name of the shared library that contains the code for the physical parameterizations and that must be dynamically linked at run time.
A definition file for the SDF (i.e., a .xsd file) is provided with the IPDe code, and the syntax for the SDF is explained in the Use Cases section.
The CCPP uses a simple way to verify that the correct data is being transferred between the host application and the physical parameterizations. Within the host application cap, the variables passed to the CCPP are clearly identified with metadata. Since the CCPP-compliant physical parameterizations also have their input and output variables clearly identified, any discrepancies are automatically flagged.
In the host application, variables are added to the cdata list, along with their metadata, through calls to the subroutine ccpp_field_add. For example, to add the variable surf_t to the list cdata with the metadata name 'surface_temperature' and units of K, the following call would be made:
call ccpp_field_add(cdata, 'surface_temperature', surf_t, ierr, 'K')
The calls to ccpp_field_add in the host application cap are auto-generated by a script that reads a documentation table listing the variables and their metadata, and inserts the calls in the cap before compilation. The format of this table is very similar to the one for the physics caps, the only difference being that intent in/out does not apply. The host application cap takes the variables in the data structure native to the host application and creates a list of pointers (cdata above) to each of these variables and some of their metadata. As shown in the example above, a Fortran subroutine is provided in the IPDe for constructing this list.
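Schematically, the relevant portion of an auto-generated host application cap might resemble the sketch below. Only the ccpp_field_add call mirrors the example above; the module name ccpp_api, the derived type ccpp_t, and the variable names are assumptions for illustration.

```fortran
! Hypothetical excerpt of a host application cap; only ccpp_field_add
! follows the call shown earlier, all other names are illustrative.
subroutine example_host_cap(surf_t, q_v, ierr)
  use ccpp_api, only: ccpp_t, ccpp_field_add  ! assumed module name
  real,         intent(inout) :: surf_t(:)    ! surface temperature [K]
  real,         intent(inout) :: q_v(:,:)     ! specific humidity [kg kg-1]
  integer,      intent(out)   :: ierr
  type(ccpp_t), save :: cdata                 ! list of pointers + metadata

  ! Register each host variable with its metadata standard name and units
  call ccpp_field_add(cdata, 'surface_temperature', surf_t, ierr, 'K')
  call ccpp_field_add(cdata, 'specific_humidity',   q_v,    ierr, 'kg kg-1')
end subroutine example_host_cap
```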
The Interoperable Physics Driver (IPD) for the CCPP expands the functionality of the IPDv4. The augmentation of the IPDv4 for use with the CCPP is termed IPDe (IPD extension), and the augmented IPD will be referred to as IPDv5. It is also possible to use IPDe as a stand-alone driver, but IPDv5 has been used to implement the CCPP in FV3GFS.
The extended IPD handles user-defined ordering of the physical parameterizations,
allowing experimentation with, for example, calling SHOC before or after the deep convection scheme.
The expanded functionality also allows users to group the parameterizations in sets, returning to the
atmospheric model (for example, for communications or advection) in between calls to the physics.
The number of groups of schemes, and which parameterizations belong in each one, is defined through
ipds in the SDF. Another capability enabled by the expanded functionality is subcycling,
which allows any subset of parameterizations to be called on a smaller time step than the rest of
the physics, e.g. for computational stability.
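The grouping capability can be pictured as a loop over the suite's parameterization groups, with control returning to the atmospheric model between calls. In this sketch, the loop bound ipds_max and the error handling are assumptions for illustration.

```fortran
! Hypothetical sketch of running the physics group by group;
! the member name ipds_max is an assumption for illustration.
do i = 1, cdata%suite%ipds_max
  call ccpp_run(cdata%suite%ipds(i), cdata, ierr)
  if (ierr /= 0) exit
  ! control returns to the atmospheric model here, e.g. for
  ! communication or advection, before the next group is run
end do
```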
The structure and syntax of the SDF are described in later sections, as well as in the Use Cases section.
It should be noted that, with the CCPP and IPDe, it is not necessary for the host application to
have a suite driver (e.g.,
GFS_physics_driver.F90 in FV3GFS), as all the parameterization schemes,
as well as any pre- and post-interstitial computations, are called directly from the CCPP as defined in the SDF.
call ccpp_run(cdata%suite%ipds(1), cdata, ierr)
The above code snippet shows the IPD code required to call all of the physical parameterizations
(plus the interstitial processing) defined in the named suite's SDF. This code is called from
within the host application cap. The
cdata argument contains all of the data and metadata provided by
the host application cap, and includes sufficient information to call the physical parameterizations.
This eliminates the need for complex conditional statements (if SAS do this, else if RAS do that, else ...).
The CCPP layer is composed of CCPP-compliant physical parameterizations. To enable the CCPP capabilities (user-defined ordering, grouping, and subcycling of parameterizations), certain coding practices must be followed. As will be discussed more thoroughly in the next section, there are compliance requirements for the physical parameterization source code. Additionally, there are best-practice considerations when constructing local collections of physics schemes, such as repository management, hosting services, and contributing to tests.
With the CCPP, physical parameterizations can be made interoperable and used in more than one host application without changes to their source code. Using the Fortran/C interoperability features introduced in Fortran 2003, every physical parameterization is called with an identical set of arguments (the list of pointers to all available data), and the subroutine call itself is manufactured from the character-string information provided in the run-time SDF.
Clearly, the argument list for a radiation scheme would be markedly different from that of a boundary layer scheme. To allow the IPD to see the same interface for all of the physics schemes, an interfacing file, the physics cap, is required for each physics scheme.
To minimize the coding work required of scientists, caps for the CCPP-compliant physics
schemes are constructed automatically, at build-time, based on a standardized table that describes
the arguments of the entry point subroutine of a scheme. This is a simple example of a Computer-Assisted
Software Engineering (CASE) tool, in which text is an input to a program that then manufactures
source code. The text-based CASE tool information also serves as documentation. It is important to
note that a single cap for each category of scheme (deep convection, radiation, microphysics, etc.)
would not be acceptable. Twenty or more years ago, all schemes could perhaps be binned individually
with this simple taxonomy, but with some modern schemes encompassing multiple process functionalities
(e.g., SHOC, CLUBB), a per-scheme-category cap is not general enough.
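An argument table of the kind read by the cap-generating script might look like the sketch below. The scheme name and the exact column set are hypothetical; only the pairing of a local variable with a metadata name, units, and intent mirrors the surf_t example given earlier.

```fortran
!! \section arg_table_example_scheme_run   (hypothetical scheme name)
!! | local var name | longname            | units | rank | type    | intent |
!! |----------------|---------------------|-------|------|---------|--------|
!! | surf_t         | surface_temperature | K     | 1    | real    | inout  |
!! | ierr           | error_flag          | none  | 0    | integer | out    |
```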