The overarching goals are multifaceted. For the physics developers, the intent is to reduce the overall time spent porting new physics into the atmospheric model. For the model developers and maintainers, a known interface built with modern Fortran constructs helps isolate the model from the individual schemes. For those involved directly with the production of the forecasts themselves, the new system is designed to impose no additional run-time cost compared to the existing model setup. To assist in verifying that the model's forecast skill is not degraded, the development has been staged to allow incremental bit-wise comparisons throughout the process.
Each physical parameterization has its own internal assumptions about how it functions, such as the index ordering or the role of the first vertical index (the surface or the top of the model). Since there are no rules governing this aspect of physical parameterizations (and the CCPP imposes no such regulation), copying of data will almost always be required within the physical parameterizations. In the code planned for the CCPP v1 release, no effort has been made to automate conversions of index ordering. Going forward, it would be helpful to have metadata both in the host application cap and in the entry point of each physical parameterization to indicate index ordering. This information could then be used to automate the necessary conversions.
The physics driver uses Fortran and C pointers to move data. The construction of the list of these pointers is handled within the host application cap; the pointers therefore reflect the data and work-sharing strategy of the host application.
The Fortran 2003 standard provides a standardized method for interfacing the C and Fortran languages. In IPDe (the augmentation of IPDv4 for use with the CCPP), C code is used to obtain a pointer to a named Fortran subroutine, so no source code changes are needed in IPDe when new parameterizations are added to the CCPP. This works well with both the Intel and GNU compilers, and can be extended to other compilers in the future if needed.
One of the larger obstacles to porting physical parameterizations is the overhead required to implement the interstitial code that hooks the atmospheric model up to a physics scheme. The atmospheric model must manipulate its data to match the largely undocumented requirements of each physics scheme, and each scheme in turn assumes certain processing of data by the model; together these make physics porting difficult.
To address these concerns, model and physics scheme developers have been inserting one-off driver code before and after various physics calls. This swelling of interstitial software is not easy to unravel when trying to determine what was introduced for the model and what was introduced for the physics.
Several likely future requirements are also included in the overall considerations for the design.
As horizontal resolution becomes finer, the necessary time steps for dynamical stability are reduced. Even within some physical parameterizations, there are stability concerns for processes such as fall speeds. Physics schemes that target sub-kilometer scales may need to be called multiple times within a larger dynamics time step. There could be two schemes that need to interact iteratively within a single dynamics step. Subcycling is a supported CCPP capability that is not utilized by the FY17 GFS physics suite.
For performance considerations, it is possible that some subset of the physics schemes (such as radiation) could be split off and computed on other processing units. This would be a requirement for concurrent processing, allowing partitioned sets of processors to work on separate physics calls at the same time. Another performance consideration would be recognizing a priori where a scheme needs to run on the computational domain; a land model, for example, would not need to run over the ocean. A scheme's resolution might also differ from that of the other physics schemes or of the model dynamics, which introduces the concept of regridding into the model. Neither concurrent processing nor regridding is supported by this system; however, neither is precluded. If the atmospheric model has the necessary machinery in place for concurrency or regridding, and the atmosphere cap calls the IPD with those assumed processing requirements, the IPD will function correctly.
There is a concerted effort to keep the IPD an intra-component coupler, without the use of available inter-component coupling software. Any need for a full inter-component coupler can be handled above the physics driver. The IPD is designed to be a lightweight and portable pass-through layer that also serves as a driver to run the schedule of parameterizations in a suite.
To enhance portability, it is preferable that the physics schemes do not invoke I/O libraries and that I/O functionalities be handled by the host application. However, it is noted that several operational schemes currently invoke NCEP libraries. The IPD does not restrict these functionalities.