Transitions Newsletter

Issue 18 | Winter 2019

Lead Story

The Community Leveraged Unified Ensemble in the NOAA/Hazardous Weather Testbed Spring Forecasting Experiments

The Community Leveraged Unified Ensemble, or CLUE, is an unprecedented collaboration between academic and government research institutions to help guide NOAA’s operational environmental modeling at the convection-allowing scale. The CLUE is produced during the annual NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiment (SFE), where the primary goal is to document performance characteristics of experimental Convection-Allowing Modeling systems (CAMs). The HWT SFE is co-organized by NOAA’s National Severe Storms Laboratory (NSSL) and the Storm Prediction Center (SPC).

Since 2007, the number of CAM ensembles examined in the HWT has increased dramatically, from one 10-member CAM ensemble in 2007 to six CAM ensembles totaling about 70 members in 2015. Handling these large and complex datasets drove major advances in the tools for creating, importing, processing, verifying, analyzing, and visualizing the forecasts. After the 2015 SFE, however, it was clear that progress toward identifying optimal CAM ensemble configurations was being inhibited because the contributing CAM systems were designed independently, with different research goals, which made it difficult to isolate performance characteristics. Furthermore, a December 2015 report by the international UCACN Model Advisory Committee, charged with developing a unified NOAA modeling strategy to advance the US to world leadership in numerical modeling, recommended that:

  1. The NOAA environmental modeling community requires a rational, evidence-driven approach towards decision-making and modeling system development,
  2. A unified collaborative strategy for model development across NOAA is needed, and
  3. NOAA needs to better leverage the capabilities of the external community.

In the spirit of these recommendations, and recognizing the need for more controlled experiments, SFE organizers developed the concept of the CLUE system. Beginning with the 2016 SFE, the ensemble design effort was much more coordinated. All collaborators agreed on a set of model specifications (e.g., model version, grid spacing, domain size, and physics). Forecasts contributed by each group could then be combined to form one large, carefully designed ensemble, which constitutes the CLUE. The CLUE design for each year has been built around already existing funded projects led by external HWT collaborators. Thus, HWT partners can run experimental systems that meet the expectations of their funded projects while, at the same time, contributing to something much bigger.

During the last three SFEs, the CLUE configurations have enabled experiments focused on several aspects of CAM ensemble design, including the impact of single, mixed, and stochastic physics; data assimilation strategies; multi-model versus single-model designs; the forecast skill of FV3; microphysics sensitivities; and the impact of ensemble size. Collaborators have included the Center for Analysis and Prediction of Storms at the University of Oklahoma (OU), the National Center for Atmospheric Research, the University of North Dakota, the Multi-scale Data Assimilation and Predictability Laboratory at OU, NSSL, NOAA’s Global Systems Division of the Earth System Research Laboratory, and NOAA’s Geophysical Fluid Dynamics Laboratory.

The Developmental Testbed Center (DTC) has been a major contributor to the CLUE effort. To date, two DTC Visitor Program projects have used CLUE data to examine the impact of radar data assimilation and of mixed versus single physics. Furthermore, the DTC has led much of the configuration design and verification of the CLUE stochastic physics experiments. Finally, CLUE data are being used in a Model Evaluation Tools (MET) development project directed toward producing a CAM verification scorecard. Ultimately, the CLUE is a positive step toward improving US modeling and is already providing helpful insight for designing future operational systems, impacting broad sectors of the weather enterprise, including NOAA’s efforts to develop a Weather-Ready Nation.

Hazardous Weather Testbed Spring Forecasting Experiments. Photo by James Murnan, NSSL.
Contributed by Adam Clark and Jamie Wolff

 


Director's Corner

NOAA’s emerging effort in community modeling

Ricky Rood, University of Michigan

I am the Co-chair, along with Hendrik Tolman, of the Unified Forecast System – Steering Committee (UFS-SC), one of the governance bodies in NOAA’s emerging community modeling effort. The overall goal of the UFS activity is to have a unified forecast system that can be configured to meet the many applications in NOAA’s product suite.

Ricky Rood, University of Michigan

The UFS-SC is a review, coordination, and decision-making body, with the major milestones and schedules of the Environmental Modeling Center (EMC) applications as a foundational consideration. The UFS-SC approves strategic direction and strategic plans for the UFS and recommends the content and development path of the production suite. More information on the UFS-SC can be found at: https://www.earthsystemcog.org/projects/ufs-sc/.


For many years the forecasts of the National Weather Service (NWS) have been criticized as being of lower quality than those of the European Centre for Medium-Range Weather Forecasts. Much of the public criticism has been based on hurricane track forecasts from the global, medium-range forecast system. In both weather and climate modeling, the Europeans have been better able to organize and focus their activities. As I wrote in the Washington Post in 2013 (“To be the best in weather forecasting: Why Europe is beating the U.S.”), our research community is fragmented. We find it difficult to overcome this fragmentation and focus research advances on operational applications.

The forecast mission of the NWS is far more complex than the medium-range, global forecast system. Looking to the future, it will require models that couple atmospheric, oceanic, land, chemical, and cryospheric processes. These models will need to operate, for some applications, at cloud-resolving scales. The U.S. modeling research community has world-class expertise in all of these fields, and leveraging this expertise into operational systems to improve environmental security is a major motivation.

NOAA has taken some important steps to improve its operational modeling.  The Strategic Implementation Plan (SIP) has become the foundation of the evolving modeling system. The plan is reviewed and updated on a regular basis. It is central to the Steering Committee’s deliberations, linking the operational requirements to research outside of NWS’s programmatic domain. 

The selection of the FV3 dynamical core and its cubed-sphere grid as a primary algorithm of the atmospheric model was an important scientific and computational step (see “The Weather Service just took a critical first step in creating a new U.S. forecasting model”). The selection was also an important organizational and managerial step; it helps focus activities in atmospheric physics on predictive problems in a controlled scientific environment.

Another decision by NOAA has an important strategic benefit. The NOAA Environmental Modeling System (NEMS) is a community-based software infrastructure that supports multiple applications. For more than a decade, NOAA has participated with other agencies in evolving this infrastructure and integrating model components into a common architecture (“The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability,” BAMS, 2016). This approach supports community participation in a software environment that advances scientific integrity and robustness, and it allows sharing of algorithms and intellectual expertise.

NOAA has also been seeking to formalize a relationship with the National Center for Atmospheric Research (NCAR), which is a recognized leader in community modeling. With these decisions at hand, NOAA has improved its position to develop its world-class suite of predictive products.

The community governance is still evolving. Indeed, it was recognized from the beginning that NOAA’s growing role as a partner in the U.S. community modeling culture could not be defined by a hierarchical management structure. The UFS-SC operates at the interfaces of NOAA’s operational and research missions and the broader research community. The organizational and cultural changes required to deliver the UFS will take time; there are gaps to fill and barriers to overcome. Never before in my career have the decisions been made and the leadership aligned in a way that makes such advances in modeling possible. This is an exciting time, with reasons to be optimistic.

Contributed by Ricky Rood.

Richard B. Rood is a Professor at the University of Michigan. At NASA, he managed modeling, data assimilation, and high-performance computing to provide products for NASA’s observational missions and campaigns. He was detailed to the Office of Science and Technology Policy from 1998 to 2001 to develop strategies for modeling and computing.

 


Who's Who

Tracy Hertneky

Tracy Hertneky "floats through the air with the greatest of ease" -- or does she? This daring young woman, pictured flying on aerial silks (one of her many hobbies is being an aerialist, as in Cirque du Soleil), doesn’t actually like to fly and would choose teleportation as her superpower. She dreams of visiting places from New Zealand to Belgium (she loves the beer) and of attending the Summer Olympics. Plus, the ability to teleport would certainly make travel more efficient and help in her role as a new mom to Juliet, who was born in January “and has the best laugh ever!”

This Colorado native always wanted to become a civil engineer, just like her grandpa. While Tracy was NOT inspired by the tornado warning that forced her to grab her dog and guinea pig and hide in the bathroom, she WAS fascinated by the weather. She realized it was a career option when she got older, and she earned a B.S. in Meteorology with a math minor from Metropolitan State College of Denver in 2012 and an M.S. from the University of Colorado at Boulder in 2014.

Tracy first started at NCAR as a part-time student assistant working with radar data in 2008. Now her time is split among several (mainly DTC) projects that provide unique learning experiences and allow her to think a little differently. More than half of her time is spent on numerical weather prediction testing and evaluation, such as the Model Evaluation for Research Innovation Transition (MERIT) project, which focuses on creating a testing framework that developers and the research community can use to assess and improve upon model deficiencies, with the ultimate goal of improving operational numerical weather prediction. The rest of her time is spent on release testing, documentation, and user support for the Unified Post Processor (UPP). “Sometimes this leads to digging through code and getting messy,” she says.

Tracy’s proudest moments have been being an invited keynote speaker at the European radar conference in 2014 and holding her daughter for the first time. She and her husband Andrew spend as much time with Juliet as possible; she is growing way too fast! Tracy loves to hike, camp, backpack, kayak, snowboard, and play tennis when the weather permits. When she slows down, she likes to work on stained glass, sew, cook, crochet, read, and build towers so Juliet can knock them down.

Tracy Hertneky, NCAR

 


Bridges to Operations

Use of Model Evaluation Tools in NWS QPF Verification

The National Weather Service (NWS) Meteorological Development Laboratory (MDL) is developing an automated, nationally consistent and centralized service that verifies Quantitative Precipitation Forecasts (QPF). This QPF Verification Service (QPFVS)  will provide objective assessments of the predictive skill of numerical model guidance and official NWS forecasts to help increase the accuracy of quantitative precipitation forecasts. QPFVS will be implemented as a component of a larger gridded verification system with custom front-ends to serve various user communities, such as aviation weather, public weather, and water management/hydrology.

MDL uses the Model Evaluation Tools (MET) software package from the Developmental Testbed Center (DTC) to generate verification results for QPFVS. The MET software has a robust set of verification techniques (station, grid, ensemble, object-oriented) and metrics that meet the NWS requirements for QPFVS and is well supported by extensive documentation and a responsive help desk. The NWS Weather Prediction Center (WPC) and Environmental Modeling Center (EMC) also use MET software, which allows MDL to ensure consistency in techniques and verification scores across the NWS.

QPFVS is accessed via a web-based Graphical User Interface (GUI) and includes datasets from the National Digital Forecast Database (NDFD), National Blend of Models (NBM), High-Resolution Rapid Refresh (HRRR), and Global Forecast System (GFS). For verification, QPFVS uses the UnRestricted Mesoscale Analysis (URMA) six-hour Quantitative Precipitation Estimation (QPE06) gridded analysis as truth. The forecasts and analysis are displayed on a flexible zoom-and-roam interface.

The QPFVS Statistics page can be used to query a database to generate plots and tables of verification scores. The plots are interactive, allowing users to interrogate and save graphics for reports and presentations and to download tabular verification data in CSV (Comma Separated Values) format. The current version, QPFVS v1.0, contains gridded verification scores; station-based verification and additional data sources are planned for QPFVS v2.0.

Figure 1. QPFVS Viewer allows users to view forecasts and verifying analysis within the same map panel with the ability to zoom and roam through the entire grid. The images preload for quick manipulation and viewing.

QPFVS leverages the MET Docker container to produce gridded statistics in real time. To generate gridded verification, QPFVS first uses MET to convert NDFD forecasts and guidance to the common URMA grid definition. The forecasts, guidance, and analysis are then processed through additional MET programs to generate gridded verification statistics at various geographic scales (i.e., national, regional, and local), which are stored in a database. The QPFVS GUI allows users to easily build a custom query of the database with choices such as location(s) of interest, data source(s), and date range.
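For readers curious how such a pipeline can be driven, the sketch below strings the relevant MET tools together inside a Docker container with a short Python wrapper. It is only an illustration of the general pattern, not MDL's production scripts; the container image tag, file paths, field name/level, and configuration file name are all assumptions.

    # Minimal sketch of a QPFVS-style gridded verification step using MET in Docker.
    # Assumes the container image exposes the MET binaries on its PATH.
    import subprocess

    MET_IMAGE = "dtcenter/met:latest"   # assumed Docker Hub image name/tag
    DATA_DIR = "/path/to/qpfvs/data"    # host directory holding forecasts, URMA QPE, configs

    def run_met(args):
        """Run one MET tool inside the container with the data directory mounted at /data."""
        cmd = ["docker", "run", "--rm", "-v", f"{DATA_DIR}:/data", MET_IMAGE] + args
        subprocess.run(cmd, check=True)

    # Step 1: interpolate the NDFD 6-h QPF to the common URMA grid
    # (BUDGET interpolation is a typical choice for precipitation fields).
    run_met([
        "regrid_data_plane",
        "/data/ndfd_qpf.grib2",          # input forecast
        "/data/urma_qpe06.grib2",        # "to_grid": borrow the URMA grid definition
        "/data/ndfd_qpf_on_urma.nc",     # regridded output
        "-field", 'name="APCP"; level="A06";',
        "-method", "BUDGET", "-width", "2",
    ])

    # Step 2: verify against the URMA QPE06 analysis. The Grid-Stat config would set
    # thresholds (e.g., >=0.25 in), masking regions for national/regional/local scales,
    # and the statistics written to the QPFVS database.
    run_met([
        "grid_stat",
        "/data/ndfd_qpf_on_urma.nc",
        "/data/urma_qpe06.grib2",
        "/data/GridStatConfig_qpfvs",
        "-outdir", "/data/grid_stat_out",
    ])

In practice the Grid-Stat configuration file carries most of the verification choices, so a wrapper like this stays small and the same pattern repeats for each data source and geographic scale.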

MET output of forecasts, guidance, and analysis data on the common URMA grid is also converted into Georeferenced Tagged Image File Format (GeoTIFF) images. Additional features include the ability to view time series of QPF data at individual grid points.
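As a further illustration of the grid-point time-series idea, the hypothetical snippet below samples one such GeoTIFF at a single longitude/latitude; the file name and the use of the rasterio library are assumptions for the example, not a description of the QPFVS code.

    # Sample a QPF GeoTIFF at one point (illustrative only).
    import rasterio

    lon, lat = -105.27, 40.01  # example point (Boulder, CO)

    with rasterio.open("ndfd_qpf_on_urma.tif") as src:
        # src.index expects coordinates in the dataset's CRS; this assumes a
        # geographic (lat/lon) GeoTIFF. Reproject the point first if it is not.
        row, col = src.index(lon, lat)
        value = src.read(1)[row, col]   # band 1 holds the QPF field in this sketch

    print(f"6-h QPF at ({lat}, {lon}): {value}")

Repeating such a lookup across forecast hours yields the point time series exposed in the GUI.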

The MET software and team have been very helpful in establishing QPFVS v1.0. MDL anticipates MET will continue to be useful in meeting additional QPFVS requirements, including station-based verification, probabilistic verification, and object-oriented verification.

Contributed by Tabitha Huntemann and Dana Strom.

Figure 2. QPFVS can display the verification metrics in a multitude of ways. Pictured above is a performance diagram for the month of October 2017 for all grid points where a forecast or an observation was >= 0.25”.

 


Visitors

Towards a better understanding of the vertical aerosol distribution in the atmosphere

Visitor: Barbara Scherllin-Pirscher

On 14 April 2010, increasing volcanic activity, including explosive eruptions, was observed at the Icelandic volcano Eyjafjallajökull. The volcano was largely unknown to the general public until then. On that particular day, however, the volcano started ejecting fine ash into the atmosphere, which was advected toward continental Europe. Major disruptions of air traffic across western and northern Europe were necessary to ensure aviation safety. Several countries closed their airspace, affecting approximately 10 million travelers around the world and causing enormous economic losses.

Barbara Scherllin-Pirscher, DTC Visitor

The widespread disruption of air traffic could have been more strategically localized and reduced had the horizontal and vertical dispersion of the ash plume been predicted with higher accuracy. At present, there is limited information about the vertical distribution of aerosols, since aerosol observations are mainly surface-based in-situ measurements or vertically integrated measurements such as Aerosol Optical Depth (AOD). Ground-based LIght Detection And Ranging (LIDAR) measurements and LIDAR measurements from satellites can be used to close this gap. The aim of my DTC visiting scientist project is to implement the assimilation of vertically resolved LIDAR measurements in the Gridpoint Statistical Interpolation (GSI) data assimilation system. This addition is expected to lead to improved knowledge of the vertical distribution of aerosols in the atmosphere and to improved model forecasts.
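For context (notation varies by author), the AOD at wavelength λ is simply the vertical integral of the aerosol extinction coefficient,

    \tau(\lambda) = \int_{0}^{\infty} \beta_{\mathrm{ext}}(\lambda, z)\, \mathrm{d}z ,

so it carries no information about where in the column the aerosol resides. Vertically resolved LIDAR backscatter constrains the integrand itself rather than only its integral, which is what makes it valuable for ash-plume and aerosol forecasting.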

Figure 1. Global maps of AOD at 550 nm from CRTM (top) and AOD difference between CRTM and MERRA (bottom) on 17 April 2010. Positive differences correspond to CRTM AOD values larger than those for MERRA.

The backbone of high-quality aerosol data assimilation is a good radiative transfer model. We implemented the simulation of aerosol extinction and backscattering coefficients in the Community Radiative Transfer Model (CRTM), which is embedded in the GSI, and tested its performance. Global fields of Modern Era Retrospective-analysis for Research and Applications (MERRA) aerosol concentrations were used as input to calculate Aerosol Optical Properties (AOP). Figure 1 (top) shows AOD at 550 nm on 17 April 2010. The highest aerosol loads were found over the Sahara in North Africa as well as in East Asia. High AOD in the northwestern part of Russia was caused by aerosols from the Eyjafjallajökull eruption. Comparing CRTM and MERRA (Fig. 1, bottom) reveals higher AOD in the CRTM over the oceans but lower values over continental regions with high dust loads.
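To make the link between the vertically resolved quantities and the AOD maps concrete, the short sketch below integrates an illustrative extinction profile to a column AOD with the trapezoidal rule; the numbers are invented for the example, and this is not the CRTM calculation.

    import numpy as np

    # Illustrative aerosol extinction profile at 550 nm (1/m) on a height grid (m).
    z = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0]) * 1000.0
    beta_ext = np.array([1.5e-4, 1.0e-4, 6.0e-5, 2.0e-5, 5.0e-6, 1.0e-6])

    # Column AOD = vertical integral of extinction, approximated by the trapezoidal rule.
    aod_550 = np.sum(0.5 * (beta_ext[:-1] + beta_ext[1:]) * np.diff(z))
    print(f"Column AOD at 550 nm: {aod_550:.3f}")  # dimensionless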

Figure 2. Vertically-resolved CALIPSO measurements of the backscattering coefficient at 532 nm from an overpass over North Africa (top) ​​​and the North Atlantic (bottom) on 17 April 2010.

To investigate these features, we selected a set of vertically resolved LIDAR measurements from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite obtained over North Africa and the North Atlantic (Fig. 2). The comparison between the models and the observations (Fig. 3) reveals a better performance of the MERRA AOP over the dusty region in North Africa but a slightly better performance of the CRTM over the ocean. Preliminary results suggest CRTM deficiencies in modeling the non-spherical shape of dust aerosols but a better parameterization of the hygroscopic growth of sea salt. These features, however, need to be investigated in more detail.

Figure 3. Vertical profiles of the backscattering coefficient difference of CALIPSO and MERRA (median difference in red, mean difference in yellow) as well as of CALIPSO and CRTM (median difference in blue, mean difference in green) over North Africa (top) and the North Atlantic (bottom).

My visit to NOAA and NCAR is coming to an end, but the project is not yet finished, and I will continue my work from Austria. I’m very grateful to Benjamin Johnson (UCAR/JCSDA) and Mariusz Pagowski (NOAA/ESRL/GSD and CIRES/CU Boulder), who cordially hosted me during my DTC visiting scientist project, shared their knowledge, and supported me whenever necessary. This project would not have been possible without the financial support of the DTC Visiting Scientist Program and the Horizon 2020 project EUNADICS-AV (No. 723986).

Contributed by Barbara Scherllin-Pirscher.

 


Community Connections

The DTC helps the research community enhance the GSI/EnKF operational data assimilation system

The Gridpoint Statistical Interpolation (GSI) and Ensemble Kalman Filter (EnKF) are operational data assimilation systems, open to contributions from scientists and software engineers in both operations and research. The development and maintenance of the NOAA GSI/EnKF data assimilation systems are coordinated and managed by the Data Assimilation Review Committee (DARC), which incorporates all major GSI/EnKF data assimilation development teams in the United States within a unified community framework. DARC established a code review and transition process for all GSI/EnKF developers; it reviews proposals for code commits to the GSI/EnKF repository and ensures that coding standards are met and required tests are passed. Once DARC approves, the contributed code is committed to the GSI/EnKF code repository and becomes available for operational implementation and public release.

The Developmental Testbed Center (DTC) Data Assimilation (DA) team serves as a bridge between the research and operational communities by making the operational data assimilation system available as a community resource and by providing a mechanism to commit innovative research to the operational code repository. Prospective code contributors can contact the DTC DA helpdesk to prepare and integrate their code, document its expected impact, and ensure that any proposed change meets GSI coding standards. They can also apply to the DTC Visitor Program to support their DA research and code transition.

Since the code transition procedures were established, the DTC DA team has helped many researchers contribute code to the repository:

  • NOAA/GSD and NCAR MMM scientists improved chemical initial conditions for WRF-Chem and GOCART forecasts by using WRF-Chem and GOCART fields as the background to analyze surface measurements of fine particulate matter (PM2.5) and MODerate resolution Imaging Spectroradiometer (MODIS) total Aerosol Optical Depth at a wavelength of 550 nm. These functions are available through the DTC GSI release and have been used by many researchers, including Barbara Scherllin-Pirscher from the Central Institute for Meteorology and Geodynamics, Vienna, Austria. Scherllin-Pirscher is a DTC visitor working to further enhance GSI chemical analysis by assimilating vertical light detection and ranging (LIDAR) measurements to improve the vertical aerosol representation in WRF-Chem forecasts.

  • The DTC hosted Mengjuan Liu from the Shanghai Meteorological Service to study how GSI can improve surface data analysis. Liu found that the conventional observation operator can introduce large representativeness errors when surface conditions are inhomogeneous, such as along coastlines. An improved forward model for surface observations along the coastline was developed and added to the GSI repository. This new forward observation operator was used in the recent operational Rapid Refresh/High-Resolution Rapid Refresh (RAP/HRRR) update.

  • The DTC Visitor Program hosted Ting-Chi Wu from CIRA/CSU to add the capability to assimilate solid-water content path (SWCP) and liquid-water content path (LWCP), satellite-retrieved hydrometeor observations from the Global Precipitation Measurement (GPM) mission derived with the Goddard PROFiling algorithm (GPROF).

  • The DTC Visitor Program hosted Karina Apodaca from CIRA/CSU to incorporate two new lightning flash rate observation operators suitable for the Geostationary Operational Environmental Satellite (GOES)/Global Lightning Mapper (GLM) instrument in the GSI variational data assimilation framework.  One operator accounts for coarse resolution and simplified cloud microphysics in the global model to evaluate the impact of lightning observations on the large-scale environment around and prior to storm initiation. Another forward operator for use with non-hydrostatic, cloud-resolving models permits the inclusion of precipitating and non-precipitating hydrometeors as analysis control variables.

The GSI/EnKF code commit procedures established by DARC and the DTC successfully move innovative contributions into the repository. The DTC and its Visitor Program are a great resource for the research community to introduce new techniques and model components to advance numerical weather prediction technology.

Contributed by Ming Hu and Chunhua Zhou.

 


Did you know?

The Formal FV3GFS Evaluation

Implementing the GFS global model with the FV3 dynamical core upgrade into the National Centers for Environmental Prediction (NCEP) Production Suite will be the first step toward a Unified Forecast System (UFS). But before any new code is delivered to NCEP Central Operations, the NCEP Director must decide whether the implementation should occur, based on recommendations from the Environmental Modeling Center (EMC) and from customers and stakeholders in the field at the conclusion of a formal evaluation period. EMC’s Model Evaluation Group (MEG) is part of the Verification, Post Processing, and Product Generation Branch in EMC and provides independent validation of new or revised models. To assist the field in making recommendations on a new model or major upgrade, the MEG is tasked with providing information on the model and its performance through webinars and online material, and then making its own formal recommendation.

As the cornerstone of the five-month evaluation period, the MEG created a single, central web site where users could view data, obtain files, get detailed information about the model, and see statistics from both the real-time parallel testing and three years of retrospective runs. The MEG devoted eleven of its weekly webinars to the evaluation, covering expectations for the evaluation, issues with new products, resolution of problems seen in the parallel output, statistical and subjective assessments of model performance on standard metrics, and high-impact cases involving hurricanes, winter storms, major rainfall, severe weather, and extreme hot and cold temperatures. The collaboration with partners across the weather enterprise to evaluate the GFS with the FV3 core was unprecedented in comprehensiveness and scope. Implementation of the GFS with the FV3 upgrade is currently scheduled for January 2019.

Figure 1. The 500 hPa day 5 anomaly correlation scores for forecasts covering a 3-year retrospective period from the FV3 version of the GFS (red) compared to the operational GFS (black).
Figure 2. Graphics taken from the retrospective assessments page of the MEG website for the GFS upgrade. This image compares 96-hour Hurricane Joaquin forecasts from the GFS with the FV3 core (upper left) and the then-operational GFS (upper right) for the 1200 UTC cycle on 4 October 2015, with a difference field in the lower left and the verifying GFS analysis (lower right, contours) along with the difference between the analysis and the GFS forecast with the FV3 upgrade (contoured).
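For readers unfamiliar with the score in Figure 1, the (uncentered) anomaly correlation between a forecast f, the verifying analysis a, and a reference climatology c over grid points i is

    \mathrm{AC} = \frac{\sum_{i} (f_i - c_i)(a_i - c_i)}{\sqrt{\sum_{i} (f_i - c_i)^2}\, \sqrt{\sum_{i} (a_i - c_i)^2}} ,

with the caveat that operational centers typically apply latitude (area) weighting and may use the centered form, in which the mean anomalies are removed before the sums are taken.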

 


PROUD Awards

Kate Fossell, Associate Scientist IV, NCAR/MMM and DTC

Kate Fossell is an Associate Scientist IV in MMM at NCAR, contributing to two DTC projects: the Unified Post Processor (UPP) and the Numerical Weather Prediction (NWP) cloud container projects.

Kate has worked with the DTC for 10 years with much of her time dedicated to co-leading the UPP project and its growing team through its transition to the Earth Prediction Innovation Center (EPIC) in 2023. During this period, she built a strong collaborative relationship with the UPP team at the NOAA Environmental Modeling Center (EMC), establishing a solid code management plan to shepherd multiple community innovations into the software package and support EMC's recent refactoring project. She worked tirelessly to ensure that DTC's support of the UPP was valuable to the community and DTC partners. She has co-led the UPP task with great skill, ensuring the transition to EPIC transpired efficiently and seamlessly.

Kate also demonstrated her outstanding leadership skills when she volunteered to lead DTC's NWP container and cloud project to its completion, stepping in when the prior lead stepped down for other pursuits.

In addition to her excellent leadership style and mentorship, Kate brings strong technical skills to her DTC projects, pursuing in-depth knowledge of the underlying code and demonstrating a drive to keep expanding her skills. The icing on the cake is Kate's positive and friendly approach to her interactions with DTC staff and collaborators. She obviously cares about her team members, as shown by the way she advocates for them and provides them with growth opportunities to further their own technical and leadership skills.

Kate has exemplified the mission and values of the DTC on a daily basis. Aside from her contributions to DTC projects, she played an integral role in organizing the 2022 DTC retreat, providing a welcoming and inclusive atmosphere for all. Even with the sunsetting of the DTC’s support of UPP, Kate’s impact on the UPP and other projects will live on. Thank you Kate for all of your hard work and support over the past decade!
