Transitions Newsletter

Issue 32 | Winter 2023

Lead Story

Using the CCPP SCM as a Teaching Tool

Contributed by Cristiana Stan, Professor of Climate Dynamics, Department of Atmospheric, Oceanic and Earth Sciences, George Mason University, Fairfax, VA

The climate graduate programs at George Mason University offer an Earth System Modeling course. The course is divided into two subtopics: theory and practicum. The theoretical session offers lectures introducing students to the physical and dynamical components of an Earth system model, their interactions, and how these components are used to predict the behavior of weather and climate. When I became the instructor of the Earth System Modeling course, I added a module to the practicum session that provides students with the technical skills to contribute to the development of an Earth system model. For this module, I opted for the single-column model (SCM) approach. Given the number of Earth system models developed in the U.S. alone, and our aim to familiarize students with more than one model, the Earth system model and the SCM used in the course were chosen from two different modeling groups.

As a researcher, I have experience running various Earth system models, yet I had never had the opportunity to work with an SCM. My decision to select this model, developed by the DTC as a simple host model for the Common Community Physics Package (CCPP), was influenced by my current work with the NOAA Unified Forecast System (UFS), which uses the CCPP for the majority of the physical parameterizations in its atmospheric component. I was further motivated by the detailed user and technical guide that accompanies the public release of the CCPP-SCM code.

I was cautiously optimistic about successfully porting the code to the Mason high-performance computing (HPC) clusters, which are not among the preconfigured platforms on which the CCPP-SCM code has been tested. If even one step in the list of instructions fails, it can have a domino effect on the subsequent steps. To my surprise, step after step completed successfully. The biggest challenge in porting the code was building the three libraries that are part of the UFS hpc-stack package. Thankfully, the developers of the UFS hpc-stack have done an excellent job of providing a system for building the software stack. Building the libraries required an entire day of suspense, yet their successful completion was well worth the wait.


In addition to the relatively easy process of porting and compiling the code, there are other attractive elements in the CCPP-SCM framework that expand its appeal as a teaching tool. It offers a relatively large library of physical parameterizations (or physics suites) that have been scientifically validated, and provides a variety of pre-processed forcing data. These allow students to design experiments to understand the behavior of physical parameterizations in different environments, and explore the limitations of the approach.

Students discussing an instructional slide

Following the developer instructions, which I adapted to work with Mason's HPC cluster, students quickly installed their own copies of the CCPP-SCM and were ready to work on the practical application. The goal of the assignment was to understand the similarities and differences in the behavior of a cloud parameterization scheme when tested over land and over ocean environmental conditions. The variety of observations included with the package allows students to focus on the science without spending time finding the data sets required to drive the SCM.
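To give a flavor of the kind of scripting involved, the sketch below drives two SCM experiments, one forced by a land case and one by an ocean case, with the same physics suite. It is a minimal Python illustration: the run script name, command-line flags, and the case and suite names follow the public release documentation and should be checked against the installed version.

```python
import subprocess

# Minimal sketch of scripting two CCPP-SCM experiments with the same
# physics suite: one forced by a land case and one by an ocean case.
# The run script, flags, and case/suite names below follow the public
# release documentation and may differ in other versions.
suite = "SCM_GFS_v16"
cases = {"land": "arm_sgp_summer_1997_A",   # ARM Southern Great Plains (land)
         "ocean": "twpice"}                 # Tropical Warm Pool-ICE (ocean)

for label, case in cases.items():
    # Each call produces a separate output file that students compare to
    # study the cloud scheme's behavior over land versus ocean conditions.
    subprocess.run(["./run_scm.py", "-c", case, "-s", suite], check=True)
    print(f"Completed {label} run: case={case}, suite={suite}")
```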

The outcome of the assignment exceeded my expectations. Students set up their own numerical experiments without any help from me. This was a rewarding experience for me as the instructor and for the students, who gained confidence that they can master a model that allows them to zoom in on the complexity of an Earth system model. Next, students will learn how to run a full Earth system model; the NCAR CESM will be used for that purpose.

 


Director's Corner

Hui-Ya Chuang

Contributed by Dr. Hui-Ya Chuang, NOAA EMC and DTC SAB co-chair

I was excited when the DTC nominated me to serve on the DTC Science Advisory Board in 2020, and then felt very honored to be asked to become a co-chair in 2022. As one of the first group of EMC staff sent to work with the DTC on bridging the gap between research and operations, I have watched the DTC grow into an organization that has accomplished its mission of bridging that gap, not only by providing community support for several operational software packages, but also by facilitating collaboration through the development of the Common Community Physics Package (CCPP) and METplus.

I was the developer and code manager for NCEP's operational Unified Post Processor (UPP) for a decade and, through collaboration with the DTC, we made the UPP a community post-processing software package. The UPP was developed to become a common post processor for all of NOAA's operational atmospheric models. This effort was initiated by requests from NOAA forecasters, because it allows them to make fair comparisons of model output derived using the same algorithms. The UPP supports post processing of all operational atmospheric models, including the GFS, GEFS, NAM, RAP/HRRR, and HWRF, as well as the to-be-implemented HAFS and RRFS.

Hui-Ya Chuang

It has always been my belief that the UPP benefited greatly from more than a decade of collaboration with the DTC, and I am grateful that the DTC has been providing UPP community support. The DTC was instrumental in helping EMC develop a software management plan during the early stage of the collaboration, as well as in updating the UPP to make it portable across several platforms. These efforts enabled the UPP to become community software that users from different communities can run on their own computers and contribute updates to, following the code-management plan. Additionally, the portable UPP has made it easy for EMC to transition the UPP to new operational supercomputers every three years. For more than 10 years, the DTC worked with EMC to bring operational UPP updates and bug fixes to the community, and also contributed bug fixes from the community back to operations through public releases and semi-annual code merges with EMC. Finally, I cannot thank the DTC enough for developing and continuing to update the UPP documentation. EMC's post-processing group is small, so the DTC's help with community support and documentation has been much appreciated. This documentation has been very helpful in providing guidance to EMC's collaborators interested in advancing post-processing algorithms. Thanks to the DTC's efforts, the UPP is widely used by international weather communities.


While serving on the DTC Science Advisory Board, I have witnessed the many challenges the DTC has navigated, such as when NOAA asked it to spin up EPIC to take over community software support and to refocus itself mainly on testing and evaluation (T&E). Although I am concerned about the continuity of UPP community support, I am delighted to see that the DTC has been stepping up to the challenge of becoming a T&E powerhouse while winding down its software support. I was glad I could provide advice on operational aspects during this transition.

 


Who's Who

Mike Kavulich

Mike Kavulich grew up in suburban Connecticut, the oldest of three children with a younger brother and sister. From his earliest memories he had an obsession with the weather: giving weather reports to his class in preschool, watching and re-watching weather documentaries recorded from TV on VHS, and tracking hurricanes on a copied paper hurricane tracking map taped to the wooden basement door. Because he did not have cable or internet, he would call his grandmother every day in late summer and have her turn on the Weather Channel at ten minutes before the hour to get the latest coordinates of active tropical cyclones, marking their locations with pins on the hurricane tracking chart in his basement. When he was 10, a summer thunderstorm downed trees across his town and produced a brief tornado. Mapping the paths of each downed tree and limb in his backyard convinced him that the tornado had struck there too, though the National Weather Service never called for his consultation. With this knowledge in hand, it should not be a surprise that he always knew he wanted to study the weather. Though he achieved that dream, it turned out quite a bit different from how young Mike imagined it.


His high school physics teacher encouraged him to broaden his horizons beyond the atmosphere, so rather than taking a direct academic path to meteorology, he attended Worcester Polytechnic Institute in Massachusetts for a degree in Physics. Despite developing a love for all branches of physics, it was the atmosphere that kept drawing him back: he wrote his senior thesis on the physics of sand-dune movement in the thin atmosphere of Mars. This led him to graduate studies at Texas A&M, where he studied the energy budget of storms in the Martian atmosphere, earning a master's in Atmospheric Science in 2011. After graduation, he was hired as an associate scientist in the NCAR Mesoscale and Microscale Meteorology Laboratory. There he worked on the WRF Data Assimilation (WRFDA) project, creating documentation and tutorials and eventually doing software development, not just for WRFDA but for the whole WRF system.

In 2017, Mike moved to the DTC as a member of NCAR's Research Applications Laboratory (RAL) Joint Numerical Testbed, where he now works on a variety of projects related to the Unified Forecast System (UFS), the next generation of weather prediction models for NOAA. He develops and improves the software framework for the UFS as well as its individual components, contributing to the build system, the workflow, and the Common Community Physics Package used by the UFS weather model. While most of his job could more accurately be described as software engineering than meteorology, he is grateful to be immersed in such a weather-centric environment. In addition to water-cooler discussions with his more scientific colleagues, he finds an outlet for his meteorological pursuits by making his own forecasts for his hobbies of skiing, hiking, and storm chasing.

Mike Kavulich at Hidden Lake Pass in Glacier National Park, Montana

Mike is married to Dr. Christina Holt, a former DTC member herself, who now works at NOAA in the Global Systems Laboratory. The two of them have lived a nomadic lifestyle since May 2021, traveling the country and living and working out of their camper van, Vincent. In September 2022 they met their goal of visiting all 48 states in the continental U.S. Among countless other goals, they hope to eventually visit all 63 National Parks, or however many there are by the time they finish!

Mike, Christina, and Vincent at Guadalupe Mountains National Park in western Texas

 


Bridges to Operations

Informing NCEP Legacy Operational Model Retirement Through Scorecards

Contributed by Michelle Harrold (NCAR and DTC) and Jeff Beck (NOAA GSL, CU CIRES and DTC)

NOAA is undergoing a massive, community-driven initiative to unify the NCEP operational model suite under the Unified Forecast System (UFS) umbrella. A key component of this effort is transitioning from legacy systems to unified Finite-Volume Cubed-Sphere (FV3)-based deterministic and ensemble operational global and regional systems. For the UFS, the goal is to consolidate operational models around a common software framework, reduce the complexity of the NCEP operational suite, and maximize available HPC resources, which is especially imperative with the shift toward ensemble-based operational systems. As such, a number of current operational systems are slated to be retired; however, before systems can be phased out, the upcoming systems need to perform on par with or better than the systems they are replacing.

To address this evaluation requirement, the DTC was charged with creating performance summary “scorecards” to inform model developers, key stakeholders, and decision makers on the retirement readiness of legacy systems, as well as to highlight areas that can be targeted for improvement in future versions of UFS-based operational systems. Scorecards are a graphical synthesis tool that allows users to objectively assess statistically significant differences between two models (e.g., the UFS-based Rapid Refresh Forecast System (RRFS) and one of the operational systems) for user-defined combinations of forecast variables, thresholds, and levels for select verification measures.


This exercise focused on evaluating the UFS-based Global Forecast System (GFS) against the North American Mesoscale (NAM) model and Rapid Refresh (RAP) model, as well as the UFS-based Global Ensemble Forecast System (GEFS) against the Short-Range Ensemble Forecast (SREF) system. The eventual goal is to replace the NAM and RAP with the GFS for medium-range forecasting, and to replace the SREF with the GEFS as a medium-range ensemble-based system. The scorecards were created with the METplus Analysis Suite using verification output from April 2021 through March 2022; the verification output was provided by NOAA/EMC (special thanks to Logan Dawson and Binbin Zhou at EMC for facilitating the data transfer!). The provided verification output allowed for deterministic, ensemble, and probabilistic grid-to-grid and grid-to-point evaluations over four seasonal aggregations.
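The scorecard logic itself is straightforward: for each cell (a combination of variable, level, threshold, and lead time), the paired statistics from the two systems are differenced and tested for statistical significance, and the cell is colored by the sign of any significant difference. The short Python sketch below illustrates this idea with synthetic numbers; it is not the METplus Analysis Suite code, and the variables, lead times, and significance test shown here are placeholders chosen only for illustration.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Synthetic per-case RMSE values for a candidate and a legacy system;
# in the real workflow these statistics come from METplus output.
rng = np.random.default_rng(0)
variables = ["T2m", "RH2m", "U10", "V10"]
leads = [6, 24, 48, 72]
cells = {(v, l): (rng.normal(2.0, 0.3, 90), rng.normal(2.1, 0.3, 90))
         for v in variables for l in leads}

score = np.zeros((len(variables), len(leads)))
for i, v in enumerate(variables):
    for j, l in enumerate(leads):
        cand, legacy = cells[(v, l)]
        _, p = stats.ttest_rel(cand, legacy)   # paired test across cases
        if p < 0.05:                           # flag only significant cells
            # Lower RMSE is better, so a negative mean difference favors the candidate.
            score[i, j] = -np.sign((cand - legacy).mean())

fig, ax = plt.subplots()
im = ax.imshow(score, cmap="RdBu", vmin=-1, vmax=1)
ax.set_xticks(range(len(leads)), [f"{l} h" for l in leads])
ax.set_yticks(range(len(variables)), variables)
ax.set_title("Illustrative scorecard (+1 = candidate significantly better)")
fig.colorbar(im, ax=ax)
fig.savefig("scorecard_sketch.png")
```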

Key results indicated that the GFS is promising relative to the NAM, but the RAP is still largely competitive. When evaluating the GFS against the NAM, precipitation is consistently better forecast by the GFS. For upper-air fields, the GFS generally performs as well as or better than the NAM; however, convective-season upper-air forecasts could be a target for improvement in the GFS, as the NAM was most competitive during this period. When evaluating the GFS against the RAP, surface-based and low-to-mid-level verification for the GFS is generally on par with or worse than the RAP. The GFS scores well aloft and with cold-season precipitation, but focused improvement should be directed toward warm-season precipitation. When comparing the GEFS with the SREF, the GEFS performance was slightly better overall in the fall and winter seasons, but worse in the spring and summer seasons. Results have been shared through presentations at the weekly NOAA/EMC Model Evaluation Group (MEG) meeting as well as at a UFS Short-Range Weather/Convective-Allowing Model (SRW/CAM) Application Team meeting. Examples of the scorecards created during this evaluation are provided in Figures 1 and 2.

 

Figure 1. Scorecard of Gilbert Skill Score (GSS) for 3-h accumulated precipitation at specified thresholds and forecast lead times for the 00 UTC initializations over the period July 1, 2021 - Sept. 30, 2021. Results indicate that after the 12-h forecast lead time, when there are statistically significant differences, the GFS outperforms the NAM.

Figure 2. Scorecard of Continuous Ranked Probability Score (CRPS), bias, and root-mean-square error (RMSE) for 2-m temperature, 2-m relative humidity, U-component of the wind, V-component of the wind, pressure reduced to mean sea level (PRMSL), and total cloud for various forecast lead times for the 12/15 UTC initializations over the period April 1, 2021 - June 30, 2021. Results are mixed, with the GEFS having slightly worse performance overall; however, the GEFS does frequently outperform the SREF at the 06-, 24-, 30-, 48-, 54-, 72-, and 78-h forecast lead times.

 


Visitors

How do TC-specific Planetary Boundary Layer (PBL) physics impact forecasts across scales and UFS applications?

Visitor: Andrew Hazelton
Contributed by Andrew Hazelton (University of Miami CIMAS/NOAA AOML), Weiwei Li (NCAR and DTC), and Kathryn Newman (NCAR and DTC)

One of the most important aspects of numerical modeling is the series of approximations made to represent certain physical processes, known as “parameterizations.” These approximations of critical atmospheric phenomena can make a huge impact on the solutions that a model provides, so making these parameterizations more accurate across a variety of applications is a major goal of numerical weather prediction (NWP).

One of the primary goals of this 2022 DTC Visitor Project was to examine how planetary boundary layer (PBL) physics changes affect atmospheric prediction across a variety of scales and applications, with a focus on tropical cyclones (TCs) and synoptic weather. This was done through two avenues of research.

Hurricane Laura Runs

One task for this project was to examine how model physics affect TC forecast skill across a variety of scales and different Unified Forecast System (UFS) applications. The case chosen for this analysis was Hurricane Laura (2020). Two different Hurricane Analysis and Forecast System (HAFS) versions (both with 2-km grid spacing, but with differing microphysics and PBL physics) exhibited a notable left bias in track (orange and green lines in Figure 1). The UFS Short-Range Weather (SRW) Application runs at 3-km grid spacing using two similar physics suites (red and blue lines) showed a similar leftward bias. However, two SRW runs at 13-km grid spacing gave relatively accurate track forecasts using the same two physics suites. This indicates that the behavior of the model physics at higher resolution might be part of the problem, and it motivates us to further examine the ways that model physics impact the atmospheric flow at different resolutions, as explored in the next section.

Figure 1: Track forecasts for Hurricane Laura initialized at 00 UTC August 25, 2020 for two 2-km HAFS configurations (orange and green), two 3-km SRW configurations (red and dark blue), and two 13-km SRW configurations (magenta and light blue).

GFS With Modified PBL Physics

We collaborated with Dr. Sundararaman Gopalakrishnan and Dr. Xiaomin Chen to implement a modification to the turbulent kinetic energy (TKE)-based eddy-diffusivity mass-flux (EDMF-TKE) PBL physics in HAFS (known as the tc_pbl) to better represent turbulent mixing in the TC boundary layer (e.g., Chen et al. 2022, 2023; Hazelton et al. 2022). Several of these changes were based on large-eddy simulations (LES) conducted by Dr. Chen, and results showed improvements to TC structure and intensity in HAFS. We wanted to see how these changes impact large-scale atmospheric prediction. To accomplish this, we ran a month (September 2022) of forecasts with the Global Forecast System (GFS), the global component of the UFS, which uses a physics configuration generally similar to HAFS-A, at 25-km resolution. Figure 2 shows the 500-hPa anomaly correlation from the default (black) and modified (red) forecasts. The modifications produce slightly lower global skill. This tells us that further work is needed to unify these changes to the PBL physics so that they improve forecast skill not only for TC applications, but also for other worldwide prediction regimes and applications.
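For context, the anomaly correlation shown in Figure 2 is the standard centered pattern correlation between forecast and analysis anomalies with respect to a climatology. The function below is a minimal, generic Python illustration of that metric (area weighting omitted); it is not the EMC global verification code that produced the figure.

```python
import numpy as np

def anomaly_correlation(forecast, analysis, climatology):
    """Centered anomaly correlation coefficient (ACC) for a single
    forecast field, e.g. 500-hPa geopotential height on a lat/lon grid.
    Area weighting is omitted here for brevity."""
    f_anom = forecast - climatology
    a_anom = analysis - climatology
    # Subtract the domain means to obtain the "centered" form of the ACC.
    f_anom -= f_anom.mean()
    a_anom -= a_anom.mean()
    return np.sum(f_anom * a_anom) / np.sqrt(np.sum(f_anom**2) * np.sum(a_anom**2))
```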

Figure 2: Anomaly Correlation of geopotential height (500 hPa) for September 2022 GFS runs using default (black) and modified (red) EDMF-TKE PBL physics.

The DTC visitor program provided an excellent opportunity to meet and collaborate with other scientists working on various aspects of UFS, at DTC and EMC, and gain a better understanding of the types of model physics evaluation and testing being performed across a variety of scales and applications. We are especially appreciative of the guidance and support provided by DTC collaborators Brianne Nelson from NCAR, Linlin Pan from CIRES/GSL, Man Zhang from CIRES/GSL, and Evelyn Grell from CIRES/PSL in setting up this project. Kate Friedman and Mallory Row from EMC were very helpful in running the GFS and the global verification.

Ongoing and future work on this topic includes applying the modified PBL physics (tc_pbl) to the UFS SRW application to see how it impacts TC and other forecasts on both 3-km and 13-km scales in that configuration. We also plan to examine how the “scale-awareness” (adjustments for the grid size) is being handled in HAFS, and whether modifications to this adjustment can improve the model physics and TC forecasts.

Andrew Hazelton

 


Community Connections

JEDI project adopts and contributes to CCPP variable naming standard

Contributed by Steven Vahl (UCAR and JCSDA) and Dominikus Heinzeller (UCAR and JCSDA)

In September of 2022, the JCSDA (Joint Center for Satellite Data Assimilation) officially adopted the CCPP standard names, originally developed for use with the Common Community Physics Package, as the model variable names to be used within the JEDI (Joint Effort for Data assimilation Integration) software.

The JEDI software is employed by many Earth observing systems and requires agreed-upon names to be used for the quantities being input and computed. It is critical that these names are understood identically between systems to prevent code errors, misuse, and duplication. Within the JEDI software, variables are used in two different broad contexts: as variables for Earth observations, and as variables for different Earth system models. Earlier in 2022, a team led by Greg Thompson at JCSDA developed a naming standard for the JEDI observation variables. This team surveyed the available existing variable naming standards and found that none of them were adequate for JEDI needs, and so they developed a new observation variable naming standard, name-by-name.


Later, Steven Vahl (JCSDA) was tasked with organizing the effort to develop or adopt a naming standard for the JEDI model variables. Dom Heinzeller, formerly part of the CCPP development team, brought the variable naming standard developed for use with the CCPP to the team's attention. Since there were no other known viable model variable naming standards, the decision came down to either adopting the CCPP standard or extending the naming standard begun by the JEDI observation team. A meeting of the JEDI community was held to discuss the proposal to adopt the CCPP naming standard. The advantages of adopting it were that 1) it already contained some of the needed model variable names, 2) it had a list of rules for creating new names, and 3) it had a GitHub-based pull-request process for adding names and rules that would hopefully minimize the number of meetings needed. The primary disadvantage was that the names of some quantities would differ from the standard names already used for the same quantities by the JEDI observation variables. Ultimately, it was decided that within the JEDI software there is only one place where these two kinds of variables are used close to one another, and even there the context in which a variable is being used (observation or model) is clear, so the appropriate naming standard for the context can be applied. It should be noted that there is a conceptual difference in how the CCPP standard names are used within JEDI: while the CCPP uses the standard names only in its metadata tables, JEDI uses them directly in code and configuration files.
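As an illustration of that difference, a CCPP physics scheme declares its variables in a metadata table along the lines of the sketch below, where the standard_name field is the hook into the naming dictionary. The scheme name, local variable, and attribute values shown here are placeholders rather than an excerpt from an actual scheme; JEDI, in contrast, references the same standard names directly in its source code and YAML configuration.

```
[ccpp-arg-table]
  name = example_scheme_run
  type = scheme
[temp]
  standard_name = air_temperature
  long_name = model layer mean temperature
  units = K
  dimensions = (horizontal_loop_extent,vertical_layer_dimension)
  type = real
  kind = kind_phys
  intent = inout
```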

Recently, the first few JEDI model variable names were added to the CCPP naming standard via a pull request. More such pull requests will follow in 2023, and the new standardized names will be adopted universally inside the JEDI code.

XKCD.com by Randall Munroe (https://xkcd.com/927/)

 


Did you know?

DTC will be involved in a number of upcoming events


➤  Upcoming Event: The First UFS Physics Workshop

When: May 16-18, 2023
Location: Boulder, CO; exact venue to be determined (either NOAA/ESRL or NCAR).
Registration is open: See the link above to sign up.

The UFS Physics Working Group is organizing the first in what is envisioned to be an annual event to discuss the latest advances in research and development that should be considered in further UFS physics development and implementation to address research and operation needs of the UFS community.  The focus for the first workshop, which will consist of presentations and breakout discussions, will be cloud microphysics.

➤  Upcoming Event: AMS 32nd Conference on Weather Analysis and Forecasting / 28th Conference on Numerical Weather Prediction / 20th Conference on Mesoscale Processes

When: 17-21 July 2023
Where: Madison, Wisconsin and online.
Abstract Submission Deadline: 17 March 2023 for all three conferences (new date)

➤  Upcoming Event: The Unifying Innovations in Forecasting Capabilities Workshop (UIFCW), a UFS Collaboration Powered by EPIC 

When: Monday, July 24, 2023 – Friday, July 28, 2023
Where: Boulder, CO
Objective: To understand how academia, industry, and operations work together to enhance the Unified Forecast System, and to leverage this knowledge to accelerate contributions and measure their success.

Registration is open. Abstract Submissions will open in March and close in May 2023.
See link for more details about #UIFCW2023.

https://twitter.com/noaaepic

Did you know there is a DTC Community account on Twitter? Follow along at https://twitter.com/DTC_Community.