
CCPP-SCM Version 4.0

Release Date:

The CCPP-SCM Version 4.0 was released on March 16, 2020.

Release Notes

The Developmental Testbed Center is pleased to announce the Common Community Physics Package (CCPP) v4.0.0 public release on March 10, 2020. The CCPP contains a library of physical parameterizations (CCPP-Physics) and the framework that connects it to host models (CCPP-Framework). This release also includes the CCPP Single Column Model (SCM) v4.0.0.

Five experimental cases are available for use with the CCPP SCM: BOMEX maritime shallow convection, LASSO continental shallow convection, ASTEX stratocumulus-to-cumulus transition, ARM SGP Summer 1997 continental deep convection, and TWP-ICE maritime deep convection.

The CCPP-Physics is envisioned to contain parameterizations used in the NOAA Unified Forecast System (UFS) for weather through seasonal prediction timescales, as well as developmental schemes under consideration for upcoming operational implementations. This release contains suite GFS_v15p2, an updated version of the operational GFS v15 implemented on June 12, 2019; it replaces suite GFS_v15. Three developmental suites are included in this release: csawmg has minor updates, GSD_v1 is an update over the previously released GSD_v0, and GFS_v16beta is the target suite for implementation in the upcoming operational GFS v16 (it replaces suite GFSv15plus). Additionally, there are two new suites, GFS_v15p2_no_nsst and GFS_v16beta_no_nsst, which are variants that treat the sea surface temperature more simply. The CCPP Scientific Documentation describes the suites and their parameterizations in detail.
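Each of the suites named above is defined by a suite definition file that the CCPP-Framework parses to assemble the ordered list of schemes. As a rough, abbreviated sketch (the scheme names and ordering shown here are illustrative, not the actual contents of GFS_v15p2), such a file looks like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Abbreviated, illustrative suite definition file. Real suite files list
     the full ordered set of schemes for each group. -->
<suite name="GFS_v15p2" version="1">
  <group name="physics">
    <subcycle loop="1">
      <scheme>GFS_suite_interstitial_1</scheme>
      <scheme>sfc_nst</scheme>
      <!-- ... remaining schemes ... -->
    </subcycle>
  </group>
</suite>
```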

There are some important changes in the CCPP-Framework in this release. The format of the metadata used to communicate variables between the physics and the host model has changed to accommodate more information and to be more extensible. To better meet the needs of the various host models using the CCPP, the dynamic build option has been discontinued in favor of the static option, with the potential for multiple suites to be defined at compile time. Finally, the capability to automatically convert units of selected variables, when the physics and the host model use different units, has been added.
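For reference, the new metadata format stores variable attributes in tables with an ini-like syntax, one entry per variable. A minimal sketch (the scheme name, variable name, and attribute values below are illustrative, not taken from a specific released scheme):

```
[ccpp-arg-table]
  name = example_scheme_run
  type = scheme
[temp]
  standard_name = air_temperature
  long_name = air temperature
  units = K
  dimensions = (horizontal_loop_extent,vertical_layer_dimension)
  type = real
  kind = kind_phys
  intent = inout
  optional = F
```

The standard_name and units entries are what enable the new automatic unit conversion: when the host model declares the same standard_name with different units, the framework can insert the conversion for selected variables.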

Major changes in the SCM include the ability to use CCPP v4.0.0, support added for NOAA’s Hera HPC platform and Docker containers, and inclusion of the static build. Minor changes are the use of CMake for integration with CCPP and adoption of the new CCPP metadata format. There has also been a name change. This code is now known as the CCPP SCM instead of the GMTB SCM, although filenames in the code have not been changed yet.

Not all suites are supported for use with all hosts. The DTC provides support for four suites for use with the UFS (GFS_v15p2, GFS_v15p2_no_nsst, GFS_v16beta, and GFS_v16beta_no_nsst) and six suites for use with the CCPP SCM (GFS_v15p2, GFS_v15p2_no_nsst, GFS_v16beta, GFS_v16beta_no_nsst, csawmg, and GSD_v1). For access to the SCM and CCPP code and documentation, please visit the CCPP website at https://dtcenter.org/community-code/common-community-physics-package-ccpp, where you will find a Users’ Guide, a list of known issues, frequently asked questions, technical documentation, and scientific documentation. For information about the UFS, including its use with the CCPP, please visit https://ufscommunity.org/.

For questions or comments about the CCPP and the SCM, please contact our helpdesk at gmtb-help@ucar.edu. When using the CCPP with the UFS, you can also direct your questions to the UFS community forum.

Known Issues and Fixes

The variable nio_tasks_per_group has been changed from a scalar to an array in a recent commit. It should be specified, for example, as

&namelist_quilt
  poll_servers = T
  nio_tasks_per_group = 4,4,4
  nio_groups = 4,
/

This means that 4*4 + 4*4 + 4*4 = 48 processors will be used for I/O, so you need to adjust your total number of processors accordingly.
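The processor count follows directly from the namelist settings; as a quick sanity check of the arithmetic (a sketch, assuming each of the nio_groups I/O groups uses the listed number of tasks for every entry of nio_tasks_per_group, as in the worked total above):

```python
# Values from the namelist example above.
nio_tasks_per_group = [4, 4, 4]
nio_groups = 4

# Assumption: every I/O group runs the listed number of tasks for each array
# entry, so the total I/O processor count is the sum of the entries times the
# number of groups: 4*4 + 4*4 + 4*4 = 48.
total_io_procs = nio_groups * sum(nio_tasks_per_group)
print(total_io_procs)  # 48
```

Subtract this total from your processor allocation to get the count available for compute tasks.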