News | Director's Corner

Director's Corner

How the DTC has helped build a modeling community

Winter 2024

As I near the end of my 31+ year career with NOAA, this is a wonderful (and timely) opportunity to reflect on the last (almost) 6 years of model development and DTC engagement. The last time I provided my perspective for a Director’s Corner was during the Summer of 2018. At that time, I had recently joined NOAA’s National Weather Service (NWS) and was excited at the prospect of moving toward community-based model development to enhance NWS’ operational numerical weather prediction (NWP) systems. The DTC has played an essential role in that development community, and we would not be where we are today without their dedication, skill, and attention to model improvement. I also had the privilege of chairing the DTC’s Executive Committee, which gave me a chance to develop relationships with my colleagues at NOAA’s Global Systems Laboratory in the Office of Oceanic and Atmospheric Research (OAR), the US Air Force, and at NCAR.

The DTC has played an essential role in that development community, and we would not be where we are today without their dedication, skill, and attention to model improvement.

When I arrived, the DTC was already well on its way to supporting community collaboration through its numerous activities. They understood the challenge of creating a single modeling system (quite distinct from a single model!) that served the needs of both the research and operational communities. Leveraging their existing relationships with the model development community, the DTC focused on supporting the development of a UFS-based convection-allowing model (CAM) out of the gate, as well as establishing a longer-term vision for support across all UFS applications. They also anticipated the need to use computing resources in the cloud, developing containers for a number of applications.

Over the last few years, the DTC has been able to shift its focus more (but not totally!) toward testing and evaluation. I think this is one of the most important elements in a successful community modeling enterprise. Perhaps the DTC activity I quote most frequently is the 2021 UFS Metrics Workshop. The DTC led the community in developing a set of agreed-upon metrics across the UFS applications, along with the associated verification datasets and gaps. The community they engaged was large, including NOAA (both research labs and operational offices), NASA, NCAR, DoD, and DoE. To bring all of these organizations together was an immense achievement in my book, and the output from the workshop will be used for years to come.

One important facet of unifying around a common set of metrics is finding a common way to calculate them when evaluating model output. The DTC has been a leader in the development and release of the METplus suite of evaluation tools. At EMC, we have based the EMC Verification System (EVS) on the DTC’s METplus work, producing the first-ever unified system for quantifying performance across all of NOAA’s operational NWP systems, from short-term, high-impact weather and ocean-wave forecasts to long-range climate forecasts. The information will help quantify model performance with much more data than previously available, in a single location.

Atmospheric physics is at the heart of the performance of NWP systems and drives some of the most critical forecast parameters we deliver. The DTC develops, manages, and provides the Common Community Physics Package (CCPP) to simplify the application of a variety of physical parameterizations to NWP systems. The CCPP Framework, along with a library of physics packages that can be used within that framework, takes the concept and flexibility of framework-based modeling systems and applies it to the milieu of possible physics algorithms. This has become an essential part of EMC’s development of operational NWP systems. Indeed, the first implementation of the CCPP in an operational environment was with the new UFS-based Hurricane Analysis and Forecast System (HAFS), which became operational in June 2023.

I am extraordinarily grateful for my years-long association with the DTC. Their leadership in establishing a community-modeling paradigm with the UFS has been extraordinary, and the future of NWP across the enterprise looks quite bright because of what the DTC does. Thank you!


Brian Gross, Director of EMC

Dr. Steven Lack and Dr. Bonnie Brown

Autumn 2023

The U.S. Air Force’s partnership with the DTC dates back to 2003, and one of the main areas of collaboration has been the advancement of scientific and technical research into a community-backed set of tools for model verification known as the Model Evaluation Tools (MET) and its subsequent evolution into the METplus suite of tools. The USAF has provided financial support to the DTC, as well as in-kind participation in its governance, and in return benefits from the expansion of this tool set across many operational numerical weather centers, including the UK Met Office and its partners, NOAA/NWS, and others.

The USAF operates both global and regional atmospheric models, including deterministic and probabilistic solutions, leveraging external collaborations and state-of-the-science solutions. The products that are produced must be verified and the results communicated effectively to Air Force leadership and to the downstream warfighters generating actionable insights daily. Over the last few years, the 557th Weather Wing (the Air Force’s operational modeling center) has moved its modeling operations to a new high-performance computing environment at Oak Ridge National Lab and is now leveraging the METplus framework on that system. Previous instantiations of operational model verification involved a series of scheduled calls to older versions of MET via many scripts, resulting in a cluttered repository and hard-to-trace orchestration. In the new environment, a development, security, and operations (DevSecOps) approach using a Continuous Integration and Continuous Deployment (CI/CD) model has been employed for all software development, including these new Model Verification Tools (MVTK).

In future operations, containerized versions of METplus will become the standard for upgrading our model verification software stack across multiple computing platforms and a streamlined end-to-end approach leveraging METdatabase and METviewer.

The Cylc workflow engine used by the 557 WW, the UK Met Office, and their partners is an orchestration tool for modeling that can initiate jobs based on events or at specific times. Using Cylc as the backbone for MVTK allows routine triggering of the necessary observation and model pre-processing METplus jobs upon file receipt. Subsequently, the main statistics modules contained in MET are run, and the resultant aggregate output files are sent to a shared file repository. Figures are generated using home-grown Python scripts and are scheduled in Cylc at desired intervals, including near real-time and monthly/seasonal aggregation. In future operations, containerized versions of METplus will become the standard for upgrading our model-verification software stack across multiple computing platforms, with a streamlined end-to-end approach leveraging the METplus analysis tools and METviewer. The USAF looks forward to continuing to work with the DTC to benefit from enhancements to METplus.

Example of the Cylc workflow for the calculation of the Commission for Basic Systems (CBS) score (left) and our systematic processing of PrepBUFR files for use in MET for other verification scores (right).

Our previous partnerships with the DTC can be viewed as three-pronged: support to 557 WW users and their requests for capabilities (described above), cybersecurity, and veracity testing. Creating a cybersecure METplus that can be run in Air Force computing environments has proven to be a big lift, but we hope that the hard work of DTC personnel will benefit all users of METplus with a more secure codebase. While the 557 WW continuously verifies operationally running models, our acquisition corps, planners, and leadership often want to know what benefit new innovations are bringing to the Air Force Weather Enterprise; this is where veracity testing by the DTC plays a role. In the past we have exercised options to evaluate new methods for forecasting dust and flooding. We are very excited to include all three activities from our previous support to the DTC in our next contract, covering fiscal years 2024-26.

Dr. Steven Lack and Dr. Bonnie Brown

Israel Jirak

Summer 2023

DTC plays an important role in model development and evaluation for NOAA and the broader Unified Forecast System (UFS) community.  My first interaction with DTC was back in 2010 in the Hazardous Weather Testbed (HWT), when the Method for Object-Based Diagnostic Evaluation (MODE) was being tested for convective applications.  MODE is part of DTC’s Model Evaluation Tools (MET) and serves as an innovative alternative to traditional grid-based metrics for model evaluation.  MODE offers a unique perspective on evaluating different object attributes (e.g., size, shape, orientation, etc.) and offers the flexibility to match forecast objects to observed objects, which is important for convection-allowing model (CAM) simulations where “close” forecasts (as defined by the user) can be counted as a hit.  Another notable DTC project on model evaluation in the HWT was the development of the initial scorecard for CAMs.  This project explored the creation of a short list of the most relevant weather parameters used for evaluating CAMs that should be included in a scorecard when comparing model performance (e.g., prior to operational implementation).

DTC also contributes to model development activities for NOAA and the broader community.  For example, DTC has made significant contributions to the Common Community Physics Package (CCPP) to ensure that the latest physics parameterization schemes are made available to the UFS community.  Staff at DTC have also tested and implemented stochastic physics schemes in legacy modeling systems that enabled these options to be ported into the UFS framework, which is an important step to increasing forecast diversity in a single-model, single-physics ensemble design.  Another ongoing project at DTC involves the exploration of initial-condition uncertainty in a CAM ensemble to improve probabilistic forecasts from next-generation UFS systems, like the Rapid Refresh Forecast System (RRFS).

DTC is well positioned and poised to continue expanding its role in contributing to model evaluation and development activities in NOAA and the UFS community.

Hazardous Weather Testbed (HWT), where the Method for Object-Based Diagnostic Evaluation (MODE) was being tested for convective applications

After serving on the DTC Science Advisory Board for the past three years, I see that DTC is well positioned and poised to continue expanding its role in contributing to model evaluation and development activities in NOAA and the UFS community.  My experience with the operational side of NOAA suggests that DTC has tremendous potential in assisting with model development, testing, and evaluation of the next-generation operational forecast systems. In my opinion, DTC can expand its role in the research-to-operations space by actively engaging and participating in real-time evaluation activities at other NOAA testbeds, whether that be contributing a model run with physics upgrades or generating verification statistics for next-day evaluation.

Israel Jirak

Jennifer Mahoney

Spring 2023

The DTC serves a vital role in advancing NOAA’s modeling efforts and the community-based Unified Forecast System (UFS). The DTC is research to operations (R2O) and operations to research (O2R) in action. Through model evaluations and support to the community, the impact of the DTC on the delivery of new models and techniques to operations and the community has been profound. I would like to highlight a few achievements here.

First, the partnerships cultivated through the DTC are far reaching and have been highly successful in connecting the weather community with the operational prediction community. The DTC Visitor Program is an excellent example of this successful collaboration.

Secondly, the DTC is fostering significant scientific improvements in NOAA’s operational models. For instance, the DTC contributed foundational infrastructure to the Short-Range Weather (SRW) Application, which provides the foundation for the Rapid Refresh Forecast System (RRFS), NOAA’s newest community-based model, headed for operational implementation in 2024. To engage the weather community in the development of the RRFS, the DTC spearheaded the development of code-management protocols and a user workflow, which were adopted by the Earth Prediction Innovation Center (EPIC) and provided to the research community. These technologies are being used to accelerate RRFS development toward the finish line.

The future of the DTC is bright. With the renewed emphasis on testing and evaluation of model improvements, innovations into NOAA’s models are sure to accelerate.

In another example, the DTC was instrumental in preparing the Hurricane Analysis and Forecast System (HAFS) for operational implementation, which is scheduled for operations in the 2023 hurricane season. Using process-based diagnostics to explore model performance, the DTC staff provided information that improved the representation of physical processes in HAFS and provided insight for advancing forecast skill. The figure below highlights one of the comparisons performed in the study.

The DTC is also facilitating improvements in FAA operational weather products. In one example, DTC staff evaluated the use of the hybrid sigma-pressure vertical coordinate in the Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR) operational models. Based on their analyses, spurious noise over complex terrain in the models was reduced, resulting in more coherent mountain-wave features. By improving these mountain-wave features, improvements were realized in the turbulence forecasts produced by the NWS in support of aviation operational decisions (Beck et al., 2020: An Evaluation of a Hybrid, Terrain-Following Vertical Coordinate in the WRF-based RAP and HRRR Models).

3-h accumulated rainfall from the Multi-Radar/Multi-Sensor (MRMS) system (top left) and from HAFS configurations using the GFDL microphysics (HAFSA) and the Thompson microphysics (HAFSS). (Reference: Pan, L., K. Newman, M. Biswas, and B. Nelson, 2023: Impacts of Different Physics Suites on the Hurricane Analysis and Forecast System Performances, 103rd AMS Annual Meeting, Denver, CO.)

Tools and technologies are also developed by DTC staff. For instance, the popular Common Community Physics Package (CCPP) was designed and developed to facilitate the implementation and transition of physics innovations into NOAA’s operational atmospheric models. Moreover, the CCPP has become a core component of all UFS applications and has been released to the community to advance numerical model research; it is on track for transition to the NWS as part of the HAFS system; and it is undergoing pre-operational testing at the Naval Research Laboratory.

The future of the DTC is bright. With the renewed emphasis on testing and evaluation of model improvements, innovations into NOAA’s models are sure to accelerate.

Acknowledgement: I would like to thank Jeff Beck, Linlin Pan, Man Zhang, and Ligia Bernardet for their helpful input.

Jennifer Mahoney is one of the Developmental Testbed Center (DTC) Executive Committee members and a proud sponsor of the DTC, in addition to her current role as Director of the Global Systems Laboratory at NOAA.

Hui-Ya Chuang

Winter 2023

I was excited when the DTC nominated me to serve on the DTC Science Advisory Board in 2020 and then felt very honored to be asked to become a co-chair in 2022. As one of the first group of EMC staff sent to work with the DTC on bridging the gap between research and operations, I have watched the DTC grow into an organization that has accomplished its mission to bridge that gap, not only by providing community support for several operational software packages, but also by facilitating collaboration through the development of the Common Community Physics Package (CCPP) and METplus.

I was the developer and code manager for NCEP’s operational Unified Post Processor (UPP) for a decade and, through collaboration with the DTC, we made the UPP a community post-processing software package. The UPP was developed to become a common post processor for all of NOAA’s operational atmospheric models. This effort was initiated at the request of NOAA forecasters, as it allows them to perform a fair comparison of model output when all fields are derived using the same algorithms. The UPP supports post processing of all operational atmospheric models, including GFS, GEFS, NAM, RAP/HRRR, and HWRF, as well as the to-be-implemented HAFS and RRFS.

Hui-Ya Chuang

It has always been my belief that the UPP benefited greatly from more than a decade of collaboration with the DTC, and I am grateful that the DTC has been providing UPP community support. The DTC was instrumental in helping EMC develop a software management plan during the early stage of collaboration, as well as updating the UPP to make it portable across several platforms. These efforts enabled the UPP to become community software that users from different communities can run on their own computers and contribute updates to, based on the code-management plan. Additionally, the portable UPP has made it easy for EMC to transition the UPP to different operational supercomputers every three years. For more than 10 years, the DTC worked with EMC to bring operational UPP updates and bug fixes to the community, and also contributed bug fixes from the community back to operations through public releases and semi-annual code merges with EMC. Finally, I cannot thank the DTC enough for developing and continuing to update the UPP documentation. EMC's post-processing group was small, so the DTC’s help with community support and documentation was much appreciated. This documentation has been very helpful in providing guidance to EMC’s collaborators who were interested in working on the advancement of post-processing algorithms. With the DTC’s effort, the UPP is widely used by international weather communities.

The DTC was instrumental in helping EMC develop a software management plan during the early stage of collaboration, as well as updating UPP to make it portable on several platforms.

While serving on the DTC Science Advisory Board, I’ve witnessed the many challenges the DTC navigated, such as when NOAA asked them to spin up EPIC to take over community software support and to transition themselves to focus mainly on testing and evaluation. Although I am concerned about the continuity of UPP community support, I am delighted to see that the DTC has been stepping up to the challenge of transitioning to this new focus, becoming a T&E powerhouse while winding down software support. I was glad I could provide advice on operational aspects during their transition.

DTC Contributions to Other NOAA testbeds and the US Air Force Weather Enterprise

Christopher Melick

Autumn 2022

The DTC was established in 2003 as a multi-agency effort with funding from the National Oceanic and Atmospheric Administration (NOAA), the US Air Force, and the National Center for Atmospheric Research (NCAR), and it has made its mark as the “clearinghouse” for testing and evaluation (T&E) activities within the meteorological and associated Earth-science community. As such, it provides a fundamental bridge between research and operations where cutting-edge ideas can be explored and vetted to enhance understanding of NWP physical processes and improve forecast verification with various techniques and metrics. The DTC's mission priorities overlap with those of its fellow NOAA testbeds, which are unique collaborative spaces where researchers and forecasters work together to improve weather products and services. Thus, the DTC’s role and involvement in partner NOAA testbeds have broadened for more than a decade.

The proper infusion of science into T&E activities at the DTC has been guided on an annual basis through review, evaluation, and recommendations from the DTC Science Advisory Board (SAB). The SAB convenes experts from across the weather enterprise who help shape the strategic direction and objectives of the DTC.

The DTC’s participation in NOAA’s Hazardous Weather Testbed (HWT) Spring Forecasting Experiments (SFE) happened to coincide with my role as a facilitator of the SFE in 2010 and 2011 while working at the NOAA/NWS Storm Prediction Center (SPC). The DTC’s role was to conduct objective verification of the experimental model forecasts and provide the results as feedback to the participants in the evaluation process (Clark et al. 2012). For this evaluation, the DTC applied its locally developed Model Evaluation Tools (MET). Traditionally, the SFE examines experimental high-resolution (convection-allowing) models and ensembles that can explicitly simulate convective mode and provide details about potential hazards (tornadoes, severe hail, or strong straight-line winds). While the storm-attribute fields often appear realistic, conventional grid-point verification methods routinely penalize high-resolution forecasts when small offsets in time and space exist between observed and forecast event objects. As a result, more emphasis has been placed on spatial techniques that avoid the inherent “double penalty” problem (i.e., standard verification metrics produce both a miss and a false alarm for what subjectively appears to be a reasonable forecast). Under this framework, the DTC applied a more reliable practice of object-oriented verification using MET’s Method for Object-based Diagnostic Evaluation (MODE). Some results from the 2010 SFE using MODE are available in Clark et al. (2012). Alternatively, starting with the 2012 SFE, neighborhood-type evaluations also tended to be a popular choice with the participants, as the statistics (e.g., the Fractions Skill Score) were made available in near real time the next day, along with spatial plots on dedicated webpages (Melick et al. 2012).
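The double-penalty effect is easy to reproduce. The sketch below is purely illustrative (plain NumPy, not MET or MODE): a forecast storm of the correct size and intensity, displaced by two grid points, scores zero hits while racking up both misses and false alarms under grid-point verification.

```python
import numpy as np

def contingency_counts(fcst, obs, thresh=1.0):
    """Grid-point contingency counts: (hits, misses, false alarms) at a threshold."""
    f = fcst >= thresh
    o = obs >= thresh
    hits = int(np.sum(f & o))
    misses = int(np.sum(~f & o))          # observed event, no forecast event
    false_alarms = int(np.sum(f & ~o))    # forecast event, no observed event
    return hits, misses, false_alarms

# Observed "storm": a single rain object covering grid columns 4-5.
obs = np.zeros((1, 10))
obs[0, 4:6] = 5.0

# Forecast "storm": same size and intensity, shifted two grid points east.
fcst = np.zeros((1, 10))
fcst[0, 6:8] = 5.0

print(contingency_counts(fcst, obs))  # (0, 2, 2): no hits, penalized twice
```

A subjectively good forecast of a well-shaped storm is scored as badly as forecasting no storm at all plus a spurious one, which is exactly the behavior that object-based and neighborhood methods are designed to avoid.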

Scott Air Force Base -- Weather team support

MET was developed over several years, has been continually maintained and expanded by the DTC, and has served the academic, government, and private sectors, as well as the international community. The DTC provides a consistent code repository for MET with robust documentation, training, and support, all of which are continually improving. These capabilities and resources are essential for sustaining both research and operational purposes. In 2017, I transitioned to become a meteorologist for the 16th Weather Squadron (within the 557th Weather Wing) at Offutt Air Force Base, Nebraska. An extensive collaborative history exists between the Air Force and the DTC, as the Air Force has funded the DTC on specific T&E projects related to improving the understanding of weather phenomena and, ultimately, environmental intelligence on a global level. Some of the noteworthy byproducts from these investigations have been incorporated into MET software upgrades. In the past, MODE had been used by the 16th Weather Squadron for case-study evaluations, although not on a routine basis. The adoption of the remainder of the MET tools into an operational context for objective verification has been gradual over the last five years. Our experience with the DTC has been very encouraging, as there is often a quick turnaround when troubleshooting problems with the software suite and providing solutions (as well as identifying shortcomings that are often addressed in subsequent releases).

The proper infusion of science into T&E activities at the DTC has been guided on an annual basis through review, evaluation, and recommendations from the DTC Science Advisory Board (SAB). The SAB convenes experts from across the weather enterprise who help shape the strategic direction and objectives of the DTC. For my second year on the DTC SAB, I was honored to serve as one of the chairs during the fall of 2022. During the three days of meetings in September and October, my responsibilities included guiding SAB discussions and giving a presentation on my career insights regarding the DTC, including suggestions on limitations in the field that could be addressed in the future. Finally, with feedback from the other members of the board, I oversaw the development of our final report delivering constructive critique and advice to the DTC. I am extremely grateful on both a personal and professional level for this experience, and I view it as a rewarding way to serve the US Air Force and promote its interests and goals with respect to weather challenges.

Christopher Melick, PhD, USAF and DTC SAB co-chair

The era of funky grids, influence of interpolation, and the importance of community verification codes

Marion Mittermaier

Summer 2022

Model intercomparison has always come with challenges, not least decisions such as which domain and grid to compare models on and which observations to compare them against. The latter also involves the often “hidden” decision about which interpolation to use. We interpolate our data almost without thinking about it and forget just how influential that decision might be. Let’s also add the software into the mix, because in reality we need to. In 2019 the UK Met Office made the decision to adopt METplus as the replacement for all components of verification (model development, R2O, and operational). Almost three years into the process, we know that despite having two robust and well-reviewed software systems, the two do not produce the same results when fed the same forecasts and observations, yet both are correct! The reasons can be many and varied, from the machine architecture we run them on, to the languages used (C++ and Fortran), and even whether we use the GRIB or netCDF file format.

It brought to mind the World Meteorological Organisation (WMO) Commission for Basic Systems (CBS) exchange of verification scores (deterministic and ensemble) where, despite having some detailed instructions of what statistical scores to compute and how to compute them, each global modelling centre computes them using their own software. In essence the scores are comparable, but they are not as comparable as we might believe. The only way they would be comparable on all levels is if the forecasts and observations were put through the same code with all inputs processed identically. Community codes, therefore, form a vital link in model-intercomparison activities, which is a point we may not have thoroughly considered. In short, common software provides confidence in the process and the outcomes.

The next-generation Model for Prediction Across Scales (MPAS) grid


Cue the “cube-sphere revolution,” as I have come to call it. Europe, in particular, has been in the business of funky grids for a long time (think of Météo France’s stretched grid, the Met Office’s variable-resolution grid, and the German Weather Service’s (DWD) icosahedral grid). The next-generation Model for Prediction Across Scales (MPAS) and FV3 make use of unstructured grids (Voronoi and cube-sphere meshes, respectively), and the Met Office’s future dynamical core is also based on the cube-sphere. Most of these grids remove the singularity at the poles, primarily to improve the scaling of codes on new HPC architectures. These non-regular grids (in the traditional sense) bring new challenges. Yes, most users don’t notice any difference, because forecast products tend to be interpolated onto a regular grid before users see or interact with them. However, model developers want to assess model output and verification on the native grid, because interpolation (and especially layers of it) can be very misleading. For example, if the cube-sphere output is first interpolated onto a regular latitude-longitude raster coordinate system (to make it more manageable), that’s the first layer of interpolation. If these interpolated regular gridded fields are then fed into verification software such as METplus and further interpolation to observations is requested, that’s the second layer of interpolation. In this instance, the model output has been massaged twice before it is compared to the observations. This is no longer verification of the raw model output. However, shifting the fundamental paradigm of our visualisation and verification codes away from a structured, regular one is a challenge. It means supporting the new UGRID (Unstructured Grid) standard, which is fundamentally very different. Just as the old regular-grid models are not scalable on new HPC architectures, the processing of unstructured grids doesn’t scale well with the tools we’ve used thus far (e.g., Python’s matplotlib for visualisation). New software libraries and codes are being developed to keep pace with the changes in the modelling world.
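How much damage a second layer of interpolation can do is easy to show with a toy one-dimensional example (illustrative NumPy only, not any particular verification code): a sharp peak on the native grid survives direct interpolation to an observation point but is clipped once an intermediate regridding step, whose points don’t align with the native grid, is inserted in between.

```python
import numpy as np

# Native-grid "model field" with a sharp peak at x = 1.
native_x = np.array([0.0, 1.0, 2.0, 3.0])
native_f = np.array([0.0, 10.0, 0.0, 10.0])

# Layer 1: regrid onto an intermediate grid whose points (spacing 0.75)
# do not coincide with the native grid points.
interm_x = np.linspace(0.0, 3.0, 5)
interm_f = np.interp(interm_x, native_x, native_f)

# Layer 2: interpolate to the observation location x = 1.0.
obs_x = 1.0
direct = np.interp(obs_x, native_x, native_f)   # one interpolation step
twice = np.interp(obs_x, interm_x, interm_f)    # two interpolation steps

print(direct, twice)  # direct preserves the 10.0 peak; twice clips it to ~6.7
```

The “verified” value at the observation point depends not on the model, but on how many times, and onto which grids, the field was resampled on the way to the comparison.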

As the models change and advance, our tools must change as well. This can’t happen without significant investment, and can be a big ask for any single institution, further underlining the importance of working together on community codes. The question is then how fast we can adapt and develop the community codes so that we can support model development in the way we need to.

Marion Mittermaier

The NOAA Testbeds and Proving Grounds: A coordinated effort to transition research to operations, services and applications

Andrea Ray and J.J. Brost

Spring 2022

The Test Bed and Proving Ground Coordination Committee (TBPGCC) is composed of representatives from the 12 NOAA Testbeds (TBs) and Proving Grounds (PGs), including a member from the DTC.

There are 12 NOAA Testbeds and Proving Grounds. Collectively and individually, they facilitate the orderly transition of research capabilities to operational implementation, and thus, are crucial mechanisms for transitioning research into operations at NOAA and other partners, and ultimately into societal benefits.

Collectively and individually, they facilitate the orderly transition of research capabilities to operational implementation and other applications, often called R2X, and thus are crucial for transitioning research into operations at NOAA and other partners, and ultimately into societal benefits. The TBPGCC strives to build collaborations and synergies among TBs, and PGs where appropriate, to ameliorate the realities of organizational and funding stovepipes and to take advantage of common opportunities. The TBPGCC holds an annual workshop and monthly meetings to carry out these endeavors.

Just as the DTC is a distributed facility where the NWP community can test and evaluate new models and techniques for use in research and operations, the TBPGs are working relationships for developmental testing, in a quasi-operational framework, among researchers and operational scientists/experts (measurement specialists, forecasters, IT specialists, etc.). Typically, this includes partners in academia, the private sector, and government agencies, whose activities aim to improve or enhance operations in the context of user needs. TBPG activities are two-way interactions involving both R2O and O2R, and they are iterative: any particular project is generally tested in multiple ways, often with more than one TBPG involved. Metrics for evaluation and acceptance for operational use vary among testbeds and proving grounds, but they generally include evaluation of the research product. First, the tool, method, or analysis is evaluated for how it impacts the skill or quality of the resulting products or services. Second, it is evaluated for whether it is efficient and effective, both in terms of the forecasters’ process to create products and in terms of use by customers. Third, it is evaluated for compatibility with operational hardware, software, data, and communication.

Picture of the Operations Proving Ground (OPG) Lab, Kansas City, MO.


The network has been engaging in new ways to collaborate over the past several years. Beginning in 2019, the annual meeting shifted from primarily science-sharing talks and project updates from each TBPG to a format with sessions framed as learning exchanges about successes, challenges, and lessons learned. Sessions also focused on the role and challenges of testbeds in external efforts such as the Unified Forecast System (UFS), artificial intelligence (AI), and cloud computing. This format change improved our collaborative efforts and fostered more open and honest discussion among TBPGs. The 2021 meeting also included the first-ever session on social and behavioral sciences, as well as a session about working in the virtual environment. Several testbeds had included some virtual aspects in the past, but the pandemic forced all experiments to go completely virtual. While the virtual environment has had benefits, such as allowing people to participate who might not be able to travel or leave their positions for a week, it has also had challenges: inherently face-to-face interactions, especially experiments that involve using and sharing equipment and facilities, have been seriously hampered. The meeting included sessions focusing on coordination of other TBPGs with the Operations PG, the Satellite PG, and the emerging Fire Weather TB, as well as on funding issues. The push of research funding currently exceeds the capacity of TBPGs to evaluate efforts. Furthermore, while TBPGs are excited about the advent of the UFS and opportunities to use cloud computing and AI, generally flat funding limits their potential to take advantage of these for R2X, especially during the transition from the current NCEP production suite to the UFS.


NOAA Test Beds and Proving Grounds


The TBPGCC plans to expand its coordination of planning in the upcoming year, look for collaboration opportunities over the next two to three years, and speak as a group on topics such as UFS development.

Andrea Ray and J.J. Brost

DTC Transferal of Code, and Certain Tasks and Responsibilities to EPIC

Bill Mahoney

Winter 2022

The DTC is at an interesting pivot point in its 19-year history, as it has the opportunity to redirect its efforts back to its original imperative of advanced numerical weather prediction (NWP) testing and evaluation. This opportunity is due to the establishment of the Earth Prediction Innovation Center (EPIC). As stated in the DTC charter, “The DTC was initiated in 2003 as a means for the NWP community to test and evaluate new models, technologies, and techniques for use in research and operations and serves as a bridge between research and operations to facilitate the activities of both halves of the NWP community in pursuit of their own objectives…”. Likewise, the goal of EPIC is to “Accelerate scientific research and modeling contributions through continuous and sustained community engagement to produce the most accurate and reliable operational modeling systems in the world.” Both the DTC and EPIC provide a development environment, access to high-performance computing, a code repository, observations and tools, and delivery of data with the latest codes, utilities, documentation, and user support.

Advancing the nation’s weather prediction capabilities and improving model performance skill requires that the DTC and EPIC work in a tightly coordinated fashion.

From a distance, it appears that there is now some overlap between the DTC and EPIC, given that the DTC has been performing services similar to those envisioned for a fully established EPIC. In fact, the DTC’s role over several years expanded to include a larger fraction of user support and code management. With these activities now shifting to EPIC, the DTC can rebalance its portfolio to focus its unique expertise on the critical areas of hierarchical testing and evaluation of community-contributed model enhancements.

The DTC leadership team has worked hard over the last two years to ensure that, as EPIC is ramped up, the tasks previously performed by the DTC and planned for EPIC transition smoothly to EPIC. Given the DTC’s successful history and extensive experience in user support, community engagement, code management, and documentation, among other areas, it is imperative that EPIC take advantage of this experience. As this transition of tasks takes place, it is critical that the DTC and EPIC collaborate closely to define their respective roles in the advancement of NWP capabilities and clearly articulate these roles to the user community. Advancing the nation’s weather prediction capabilities and improving model skill requires that the DTC and EPIC work hand-in-glove, in a tightly coordinated fashion. Annual work plans need to be synchronized to the greatest extent possible to optimize the roles of both centers, and the leadership and governing bodies of both the DTC and EPIC need to ensure their tasking is complementary. As stated by Maoyi Huang, EPIC Program Manager, Weather Program Office, NOAA, “DTC’s experience and expertise in code management, and user and developer support for the UFS weather code base and applications are indispensable assets to the community.” The community’s ability to effectively utilize the expertise and capabilities of both the DTC and EPIC depends on the two centers working in unison and in a complementary manner.

William "Bill" Mahoney


Bill Mahoney is the Director of the Research Applications Laboratory at the National Center for Atmospheric Research. He serves as an Executive Committee Member of the DTC.  

Participating in Research-to-Operations process through the DTC

Michael Toy

Autumn 2021

The DTC plays a vital role in the development of the Unified Forecast System (UFS), particularly in the areas of physics parameterization development, and testing and evaluation (T&E). In these ways, the DTC is facilitating research to operations (R2O) efforts from a broad scientific community. Recently, I became a member of this community as a model-physics parameterization developer, coming from a background in experimental dynamical core development, the intricacies of which had become very familiar to me.

Being tasked to add a new physical parameterization module (code that describes the grid-scale evolution of forecast variables due to subgrid-scale, i.e., unresolved, physical processes) to a highly complex model like the UFS Weather Model was quite daunting. The Common Community Physics Package (CCPP), developed by the DTC, made the development process more manageable by providing a modular structure for physics parameterizations: one codes more or less as one wishes, and a skillfully designed software interface connects the code to the dynamical core of the UFS Weather Model (i.e., FV3, the Finite-Volume Cubed-Sphere), or that of any other model. The CCPP makes it easy to test different combinations of parameterizations, and to combine aspects of separate schemes to find optimal configurations for future versions of the operational UFS.
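The core idea of interchangeable schemes behind a common interface can be sketched conceptually in a few lines of Python. To be clear, this is only an illustration of the design pattern: the real CCPP consists of Fortran schemes connected through metadata-driven interfaces, and the scheme names, variables, and tendency formulas below are entirely invented.

```python
# Conceptual sketch of CCPP-style interchangeable physics.
# Illustrative only: the real CCPP is Fortran with metadata tables,
# and these schemes and numbers are hypothetical.

def boundary_layer_scheme_a(state):
    """One hypothetical PBL parameterization: returns a temperature tendency."""
    return {"dT_dt": -0.1 * state["T"]}

def boundary_layer_scheme_b(state):
    """An alternative scheme with the same interface, so it can be swapped in."""
    return {"dT_dt": -0.2 * state["T"] + 0.01}

def run_physics_suite(state, suite, dt):
    """The 'host model' calls each scheme through the common interface,
    never needing to know scheme internals."""
    for scheme in suite:
        tendencies = scheme(state)
        state["T"] += tendencies["dT_dt"] * dt
    return state

state = {"T": 280.0}
# Swapping schemes is just a change to the suite list, mirroring how CCPP
# suites are assembled from interchangeable parameterizations.
state = run_physics_suite(state, [boundary_layer_scheme_a], dt=1.0)
print(state["T"])
```

Testing a different combination of parameterizations then amounts to passing a different suite list, which is the spirit of what the CCPP's suite definition files enable without touching the host model.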

Michael Toy, NOAA

Serving as the DTC Science Advisory Board (SAB) chair at the annual SAB meeting in September was an honor. The SAB consists of scientists in the UFS community from government laboratories, the Department of Defense, academia, and the private sector. It was good to hear first-hand perspectives from stakeholders, not all of whom are model developers, and their guidance on the DTC’s strategic direction and objectives. There is optimism that the transfer of code management and support to the Earth Prediction Innovation Center (EPIC) will free the DTC’s resources and expertise for more science-related activities, such as further T&E development, facilitating physics parameterization innovations, and increased training and outreach to the broader UFS community. There is also interest in seeing the METplus verification system become more widely used; this will certainly be facilitated by the recent addition of new tutorials on the DTC website and the upcoming training workshop, as well as the acceptance of METplus into NCEP operations. From the discussions the SAB had during the September 2021 annual meeting, it is evident that the DTC has been instrumental in the success the UFS has had so far, and will be in the future.

DTC Science Advisory Board 2021

DTC Outlook for the Future

Louisa Nance

Summer 2021

Over the years, a major component of the DTC’s approach to facilitating the transition of research innovations into operations has been its support for distributed development of community software that includes the capabilities of the current operational systems.  This software support has provided the foundation for conducting testing and evaluation that can inform the research to operations (R2O) process.  The DTC’s software support responsibilities have involved, to varying degrees, the Gridpoint Statistical Interpolation (GSI) and Ensemble Kalman Filter (EnKF) data assimilation systems, the Weather Research and Forecasting (WRF) model, the Hurricane WRF (HWRF), the Unified Post Processor (UPP), the advanced Model Evaluation Tools (METplus), and more recently the Common Community Physics Package (CCPP) with its companion Single-Column Model (SCM) capability and the Unified Forecast System (UFS) Medium Range Weather (MRW), Short Range Weather (SRW) and Hurricane Applications. 

With the award of the contract for the EPIC, the DTC is on the cusp of an exciting evolution.

These software support activities range from code management efforts, including code reviews and regression testing, to user and developer support activities with documentation, Users’ Guides, help-desk support, and training.  Supporting all aspects of an end-to-end numerical weather prediction (NWP) system requires substantial resources; in some years, as much as 75% of the DTC’s annual budget has been dedicated to its software support activities, leaving limited resources for actually testing innovations, a role that should be central to a testbed.

With the award of the contract for the Earth Prediction Innovation Center (EPIC), the DTC is on the cusp of an exciting evolution.  EPIC represents a significant infusion of funding that will address the code management and support needs for the UFS.  The UFS is envisioned to be a community-based, coupled, comprehensive Earth modeling system that supports applications spanning local to global domains and predictive time scales from sub-hourly analyses to seasonal predictions.  Over the next few years, the DTC will be working closely with the EPIC Program Office to transition aspects of its software support activities to the EPIC contractor.  Stepping back from these software support activities will free up resources to increase the DTC’s investment in testing and evaluation activities, which will be a dream come true.  While the DTC will be stepping back from some of its software support activities, we envision the DTC will continue to play a key role in the development and support of two software tools that are central to a hierarchical testing framework (HTF) - CCPP and METplus.  The DTC is looking forward to working closely with the NWP community to engage in in-depth testing and evaluation activities that will include more diagnostic information on model performance to inform the development process.

Louisa Nance

Louisa Nance is the Director of the Developmental Testbed Center. She has been with the DTC since its inception in 2003, as its first official hire when Bob Gall was the Director and Steve Koch the Deputy Director. She has served in many leadership roles in the DTC, including DTC Assistant Director.

Reflections on community-wide physics development for operations

Lisa Bengtsson

Spring 2021


My involvement in model physics development and coordination for operational use, in both Europe and the U.S., has afforded me an interesting and rewarding perspective on the unique philosophies and approaches taken along the path of research to operations on both sides of the Atlantic. In many ways, the challenges faced by large organizations or collaborative consortiums when developing and selecting physics for operations are very similar in Europe and the U.S. For instance, when scientists have invested a significant portion of their careers in their physics development, agreeing on best practices often becomes emotional and personal, and the process of selecting physics for operations can even become territorial. On the other hand, owing to disparate policies regarding open-source model code, license agreements, and memorandums of understanding (MoUs) in European regional model development, the approaches to physics parameterization research to operations (R2O) also diverge significantly between Europe and the U.S.

It is extremely important that communication and trust is established among the involved parties, and I strongly believe that we are on the right track during this exciting time for NWP in the US.

The U.S. has been immensely successful at convening academia in numerical weather prediction research in general, and in physics development in particular, thanks to the WRF community. To a large extent, this is also being realized in the development of physics parameterizations for the UFS through the introduction of the Common Community Physics Package (CCPP). In Europe, on the other hand, closed source code and various MoUs among participating nations have made it seriously challenging to involve academia in developing relatively recent versions of model code, particularly in the regional model community. The European Centre for Medium-Range Weather Forecasts (ECMWF) has been more successful than the European limited-area model community in this regard with the release of OpenIFS, an open version of the Integrated Forecast System (IFS), although this scaled-down academic version still lags behind operational releases. However, the European approach whereby one suite is developed, improved, and continuously refined in close collaboration with the developers of the different components (again drawing the parallel to ECMWF) is compelling, and also notably successful. It works because the research scientists developing the cloud microphysics scheme sit next door to the scientists working on convection and planetary boundary layer (PBL) schemes, and they all work in the same building as the operational branch. A similar approach is seen in the regional model community in Europe, where model physics suites are developed and refined collaboratively within different consortia (e.g., HIRLAM, ALADIN, COSMO) under strict MoUs. In the U.S., the challenge with open source and community involvement is instead the coordination and organization piece. The know-how needed to combine different physics schemes that interact in complex ways, contributed from all over the world, into a functioning suite suitable for operations poses an array of challenges.
In a large organization such as NOAA this coordination does not take place by simply walking down the hall to see your favorite convection-parameterization developer, because schemes are contributed by a wide variety of universities, research labs, and operational centers. Can we combine the best of both worlds? 

Our approach in the UFS R2O project has been to work under a new paradigm with regard to physics development for operations at NOAA: EMC and the OAR labs, as well as the DTC, work closely together on a common suite aimed at operations. A frustration associated with operational implementation is the feeling of always having to “put out fires,” chasing systematic errors, instead of having the liberty of thinking long term to include advanced process descriptions. In this project we aim to find a balance between the two, with a two-stream approach addressing both short-term needs and long-term goals. The long-term goal is a seamless (across space and time scales) physics parameterization suite that unifies, to the extent possible, the processes that represent vertical mixing and cloud formation. The key here is to approach the challenge as a unified representation of physical processes, with a well-thought-out conceptualization of the final suite, and to avoid patching together existing schemes as an afterthought. However, we recognize the limitations in scope of the R2O project as such; funding is only available on a year-to-year basis, and we thus hope that this project can become a long-term, sustained activity so that our goals of developing a next-generation physics suite can be realized. Another challenge we face is the extremely rapid development of the rest of the coupled model system that constitutes the UFS, such as a new land-surface scheme; coupled ocean, ice, and waves; chemistry; stochastic physics; and fully coupled data assimilation. For our project to be successful, it is extremely important that communication and trust be established among the involved parties, and almost a year in, I strongly believe that we are on the right track during this exciting time for NWP in the U.S.


Lisa Bengtsson is a scientist/meteorologist at the University of Colorado/NOAA Earth System Research Lab. She shared her perspective on her work with the UFS R2O physics development project for this Director’s Corner article. She co-leads this sub-project with Jian-Wen Bao, PSL, and in close collaboration with Fanglin Yang, EMC.

The Age of EPIC: What does this mean for DTC?

Gretchen Mullendore

Winter 2021

In 2017, Congress instructed NOAA to establish the Earth Prediction Innovation Center (EPIC) to advance “observational, computing, and modeling capabilities to support substantial improvement in weather forecasting and prediction of high impact weather events.”  What will the details of EPIC look like?  EPIC has not yet been formed, so the community doesn’t know, and it’s not yet clear that NOAA knows either.  Hopefully, it can achieve its goal to improve accessibility to the operational code and data so that scientists across the entire weather enterprise community, both research and operations, can collaborate more easily, leading to more timely identification of forecasting challenges and adoption of forecasting improvements.

Both UFS and EPIC need to coordinate closely with academia, research labs, and private companies, and across multiple agencies. Fortunately, DTC already does this in its everyday work.

This goal sounds familiar to anyone who has worked with DTC, because this is a significant portion of the core work that DTC has been doing successfully for many years.  When browsing the research and tool development performed by (and supported by) DTC, one finds numerous examples of impactful R2O ranging from parameterization development to widely adopted model validation strategies and tools. So, with EPIC poised to take on workflow and tool development traditionally done by DTC, what will DTC’s new role be in the Age of EPIC?

In preparation for this inevitable change, DTC has begun with a critical examination of its own software portfolio, and steps have been taken to ramp down support of some products. Ongoing products will still be supported, but are being shifted toward more community-based support and development strategies. Likewise, research and support of testing and evaluation (T&E) activities have ramped up, fostering new partnerships.

DTC is also playing a significant role in supporting the Unified Forecast System (UFS).  The UFS is a key component of NOAA’s effort to share code with the community and implement model improvements, with the envisioned EPIC framework acting as its software and platform support.  In addition to supporting UFS workshops, DTC is helping to develop and manage the Common Community Physics Package (CCPP), which includes a framework and physics schemes for the atmospheric portion of the UFS, and METplus, which is used for validation and verification in the UFS system.

The UFS-R2O Project is a subset of the UFS supported by NOAA that focuses on the transfer of innovations into operations, see the lower part of the funnel. EPIC is a new NOAA initiative that will be providing infrastructure and user support. (Image from NOAA/NWS/OSTI-Modeling.)


To build a successful entity, both UFS and EPIC need to coordinate closely with academia, research labs, and private companies, and across multiple agencies. Fortunately, DTC already does this in its everyday work.  NOAA should leverage DTC’s deep expertise in model development and verification, broadening software accessibility, and building R2O/O2R collaborative teams to ensure EPIC’s success.

As a professor at the University of North Dakota (UND), I had the pleasure of working with DTC in multiple capacities.  From 2015 to 2016, a graduate student and I collaborated with DTC through the DTC Visitor Program on a project focused on convective-allowing model verification using MODE. It was a great introduction to the many useful tools developed and supported by DTC.  I collaborated again through the Big Weather Web project and the development of a containerized version of WRF, which is still used in numerics classes at UND.  Most recently, I served for two years as an academic member of the DTC Science Advisory Board.  Now that I am the Director of MMM, I look forward to continued collaborations in my new role.

Gretchen Mullendore, NCAR's MMM Laboratory Director

USAF Perspective on DTC’s role in meeting future challenges and opportunities

Mike Farrar, USAF

Autumn 2020

The U.S. Air Force (USAF) partnership with the DTC has provided superb value to the USAF over the years, with collaboration dating back to 2003.  Early in the relationship, the DTC benefited the USAF-NCAR partnership through WRF development and its transition to USAF operations.  In recent years, the DTC has focused primarily on USAF needs for model evaluation through continued development of the Model Evaluation Tools (MET), along with project efforts related to model testing and evaluation (T&E).  Throughout this period, the DTC and the other DTC partners have been valuable teammates, exemplifying the true model (pun intended) of interagency cooperation.

Moving into the next decade, the science and application of environmental modeling face numerous evolving challenges and opportunities, and the DTC will play a central role in several of them.  One challenge is the proliferation of emergent commercial weather data (CWD) sources available for Research and Development (R&D), as well as for data assimilation (DA) with operational weather forecast models.  These new data sources are being released under much shorter timelines than those from traditional government observing systems, necessitating that we incorporate the data more rapidly into our model DA systems, which will require us to test at a faster pace and evaluate model performance (enhanced or degraded) for each new data type. The DTC has a golden opportunity to partner with the Joint Center for Satellite Data Assimilation (JCSDA), a peer organization working on next-generation DA architectures and science improvements, to enhance our collective ability to test more rapidly and evaluate the utility of new data types for DA in the sponsoring agencies’ modeling systems.

"Who better than the DTC to lead the way."

In addition to standard verification statistics, which are the expected output of model evaluation studies (e.g., Observing System Experiments, or OSEs), the sponsoring agencies will also need to measure the value added by each new data type in monetary terms.  Given our limited budgets, which are unlikely to increase significantly, we will probably not be able to purchase new data without a trade-off somewhere else; in other words, to purchase new data, we will probably have to draw budget away from another valid requirement.  But how do we decide which element is more important and the best use of our limited funds?  One objective measure would be to select the options that provide the most value per dollar.  Because most of the sponsor agencies will face decisions of this type, building and operating a model T&E capability would be a promising opportunity for the DTC to support its interagency partners in managing the flood of CWD headed our way.
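A value-per-dollar comparison of candidate data buys could be as simple as dividing the measured skill improvement from an OSE by the data's annual cost and ranking the results. The sketch below illustrates the idea only; the data sources, skill gains, and costs are entirely hypothetical, and any real evaluation would rest on agency-specific metrics.

```python
# Hypothetical ranking of candidate commercial-weather-data purchases by
# forecast-skill improvement per dollar. All names and numbers are invented
# for illustration; real figures would come from OSEs and procurement data.

candidates = [
    {"name": "radio occultation",  "skill_gain": 0.030, "cost_musd": 10.0},
    {"name": "smallsat radiances", "skill_gain": 0.020, "cost_musd": 4.0},
    {"name": "aircraft obs",       "skill_gain": 0.012, "cost_musd": 2.0},
]

def value_per_dollar(c):
    """Skill improvement per million dollars spent."""
    return c["skill_gain"] / c["cost_musd"]

# Rank the candidates, best value first.
ranked = sorted(candidates, key=value_per_dollar, reverse=True)
for c in ranked:
    print(f'{c["name"]}: {value_per_dollar(c):.4f} skill per $M')
```

Under these made-up numbers, the largest raw skill gain is not the best buy, which is precisely the kind of trade-off the text describes.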

USAF operates valuable assets at risk from environmental threats. This underlines the need for high-quality environmental forecast information. (U.S. Air Force photo by Airman 1st Class William Rio Rosado)

Another pending opportunity for the DTC is to take the lead in model T&E for the future of interagency model collaboration envisioned by the nascent Earth Prediction Innovation Center (EPIC). The initial NOAA-managed contract for EPIC is still in its early stages, and the first increment focuses on software engineering, the software development environment, and related tools and applications.  As such, EPIC will need advanced model T&E capabilities and tools to incorporate into the new national collaborative environment, and who better than the DTC to lead the way in that area?  One clear example is MET, a capability in which the USAF is already heavily invested.  By continuing to expand the capabilities of MET to meet the needs of multiple partner agencies, and by working to include MET in EPIC’s new software development environment, the DTC can expand its role and become one of the cornerstone, essential partners in the U.S. modeling community.

The USAF can point to several successful DTC-partnered projects that have benefited USAF operations and contributed to improved capabilities for our interagency partners.  Even considering those past successes, however, we feel the DTC’s best days are ahead, and we in the USAF look forward to being a part of the new efforts yet to come.

Dr. Mike Farrar, Chief Scientist for Weather, HQ USAF/A3W, the USAF representative on the DTC Executive Committee.

Community Modeling and the Transition from Research to Operations and Operations to Research

Jim Kinter

Summer 2020

This is an exciting era in Earth system prediction research and operational forecasting. Researchers are gaining access to a powerful set of tools that are deployed or soon to be used to produce the nation’s weather forecasts, climate outlooks, and more. Operational forecasters are gaining more direct access to the latest breakthroughs and innovations in modeling and prediction research. Here’s why I am so enthusiastic.

First, an anecdote. When I was studying for my PhD, my advisor, Kikuro Miyakoda, who led our group at the Geophysical Fluid Dynamics Laboratory (GFDL), was developing the “E-physics” experimental subgrid physical parameterization in a global atmospheric model. After a successful demonstration predicting the extreme winter of January 1977 (Miyakoda et al. 1983), the E-physics parameterization transitioned in 1985 from research to operations (R2O) in the Medium Range Forecast (MRF) model at the National Meteorological Center (NMC), thanks to a painstaking collaborative effort by GFDL and NMC scientists. Shortly after I joined Shukla’s group at the University of Maryland, the group signed an MOU with Bill Bonner and John Brown of NMC to use the MRF for predictability research. With the support of NMC scientists including Joe Gerrity, Joe Sela, Bob Kistler, and Glenn White, we ran the model on the NMC computer and then transferred the JCL, Fortran code, and test data on 9-track tape to computers outside NOAA. That transition from operations to research (O2R), without documentation and with lots of operations-specific arcana, required a heroic effort from several members of our group, notably Larry Marx and Ed Schneider, and led to the first publication using the MRF by a university group (Kinter et al. 1988).

If NOAA makes wise, balanced investments, the US will regain the leadership role in numerical weather and climate prediction worldwide. 

Fast forward 30+ years to the current generation of R2O and O2R. Amazingly, the paradigm of the 1980s – special arrangements between groups, heroic efforts to port undocumented code, etc. – was still in force until just a few years ago. However, in 2016, a paradigm shift occurred when NOAA adopted a fresh, unified strategy for modeling. This new way of doing business is a bold experiment in conducting research and development in a truly collaborative way, within the constraints imposed by operational imperatives and marching to the cadence of public releases and operational implementations. The strategy is unified both because a single modeling platform is envisioned for all forecast applications and because we have formed a single collaborative community to address the scientific, technical, and operational challenges. Engaged and willing participants from inside and outside NOAA are thinking strategically about R2O with a 3-5 year vision and 1-2 year goals. The Earth Prediction Innovation Center is being conceived to provide the desperately needed O2R community support for codes and workflows, to enable experimentation with the operational codes of tomorrow, and to lower the barriers to R2O transition.

Moving forward, NOAA has an opportunity to build upon this foundation by making critical investments in scientific research, dedicated high-performance computing for research and development, and software and user support. If NOAA makes wise, balanced investments, the US will regain the leadership role in numerical weather and climate prediction worldwide. 

Jim Kinter

A Look Ahead

Dorothy Koch

Spring 2020


At the National Weather Service’s Office of Science and Technology Integration (OSTI), we’re currently planning milestones for the coming year with our partners. OSTI manages a broad portfolio of research and development efforts designed to improve operational forecast capabilities, and our partners in these efforts are in both the research and operational communities. We aim to understand forecaster priorities, and to communicate operational requirements so that community members can come up with innovative solutions. Building connections across these diverse communities has meant learning new skills. We are accustomed to rigorous testing and evaluation processes before operational transitions, but are still figuring out how to ensure that researchers working in federal laboratories, academic organizations and industry can easily get, run, and improve our forecasting applications. We’re currently working with organizations that have experience in this area, such as the DTC and community modeling groups.

It’s a time of great change and opportunity; it will take some time to establish effective communication and infrastructure across forecaster and researcher communities, but once established, we will be in a powerful position to accelerate the improvement of the nation’s forecast systems.

An essential element of our strategy is to shift from running a “quilt” of many separate, diverse software applications to running a single Unified Forecast System, the UFS.  The UFS can produce different kinds of forecast guidance (e.g., medium-range weather prediction, hurricane forecasts, subseasonal-to-seasonal predictions, severe weather, air quality) as different configurations of a single overall system with shared infrastructure and scientific components. The unified approach reduces the amount of software that needs to be maintained and increases physical consistency across timescales. We plan to retire or “sunset” many legacy codes and absorb their functionality into the UFS. Since there are dozens of codes in the quilt, it will take some years to fully transition to the streamlined, unified system.

One of the recent milestones we are particularly excited about is the release of the UFS medium-range weather application to the research community in March 2020. This ties together some of our big goals: it is the very first public release of the UFS and the first release that has the research customer as its primary target. It is based on the Global Forecast System v15 software that was transitioned to operations in June 2019. The UFS organization is making major changes in how code is managed and released. All the code is on GitHub, so it is easy to access. There are teams ensuring that it is portable to computer platforms outside of NOAA and that users can get their support questions answered. Although this first release is necessarily limited in scope and options, it puts some of the key elements in place for the planned Earth Prediction Innovation Center (EPIC), a program anticipated to launch from the Weather Program Office (formerly the Office of Weather and Air Quality) later in FY2020. EPIC will make even greater strides toward improved software usability and more effective “Research to Operations and Operations to Research,” or R2O2R.

There are a host of updates to operational codes planned for the coming year as well. We are expecting the final implementation of the High-Resolution Rapid Refresh short-range weather application (HRRRv4), using WRF-ARW, in spring 2020. A coupled space weather Whole Atmosphere Model - Ionosphere Plasmasphere Electrodynamics (WAM-IPEv1) is planned for implementation in early 2021, as well as an update to the Global Ensemble Forecast System (GEFSv12) in fall 2020. GEFSv12 includes separate aerosol and wave components, and will be a coupled UFS implementation. Those are just a few examples; there are about ten implementations planned in all for FY20. We are hoping to finalize the selections and schedule in the next few weeks.

It’s a time of great change and opportunity; it will take some time to establish effective communication and infrastructure across forecaster and researcher communities, but once established, we will be in a powerful position to accelerate the improvement of the nation’s forecast systems.

Dorothy Koch, National Weather Service's Office of Science and Technology Integration (OSTI).

A NOAA-NCAR Collaboration Toward Unified Modeling

Chris Davis

Autumn 2019

What does unified modeling mean to you? Perhaps an obvious meaning is that “unified” implies
one single model: one choice for dynamics, physics, data assimilation, and postprocessing.
Everyone uses the same code: universities, private companies, research labs, and operational
prediction centers. It is a grand vision. As a community, we are not there yet.

I argue that there is a more practical, and potentially more useful definition of unified
modeling: codes are easily shared and interoperable.

Why do I say this? At first, this approach might seem to perpetuate the lack of coordination in
modeling efforts across the US, which has prevented our field from achieving its potential. But
the solution to better coordination is probably not one model for everything.

Consider this question: if all researchers and agencies began with exactly the same code today,
how long would it take for that code to diverge? The answer is probably a few days or less. A
single code is not a realizable state without a mechanism to ensure compatibility and to
effectively manage innovations. The real question is: how do we create an infrastructure where
all components can be interchanged with minimal effort? A common framework, with agreed-upon
best practices in software engineering, is essential to minimize the time from innovation
to testing and evaluation, and potentially to operations.

A co-developed infrastructure defines the core of the NOAA-NCAR collaboration toward unified
modeling, as described in the Memorandum of Agreement (MoA) finalized in 2018. The MoA’s
seven areas of infrastructure were outlined in the Spring 2019 edition of DTC Transitions. Co-
development requires reconciling requirements among different agencies or institutions
through careful design and the use of accepted standards such as the Earth System Modeling
Framework (ESMF) and the National Unified Operational Prediction Capability (NUOPC). Co-
development recognizes that multiple organizations, in this case NOAA and NCAR, have similar
modeling goals, and wish to reduce duplication of effort by aligning resources.

The vision for unified modeling can be summarized by examples. A researcher at a university
develops a new representation of cloud physics that is designed to work across a wide range of
length scales. They do their initial development using the Weather Research and Forecasting
(WRF) model but want to test this code in the Unified Forecast System (UFS). This switch
becomes trivial from a software engineering perspective because the code followed standards
for compatibility established as part of the Common Community Physics Package. Another
example is a researcher who wishes to isolate the ocean dynamics in the Modular Ocean Model
(MOM6) from the UFS by prescribing the atmospheric forcing. Third, an operational researcher
wants to adapt object-based evaluation methods to forecast output on seasonal time scales.
These are all examples that are made very tractable through unified modeling, and as
envisioned through a compatible infrastructure. Notably, the vision includes a pathway for
innovations that are initially completely separate from a particular codebase. Thus, revolution,
in addition to evolution, is possible.

It is important to remember that the goal of unified modeling places some additional
responsibility on developers to follow software engineering best practices and design
specifications. Unification also makes no explicit reference to the science needed to make
things work. With the right infrastructure, most of the time can be spent on valuable scientific
analysis instead of wrangling with gnarly portability issues. The vision for unified modeling is
thus not a single code, but a system that emphasizes scientific collaboration. Such collaboration
will be essential to overcome the challenge of predicting our complex Earth system.

Chris Davis, NCAR

The Operational FV3 Era

Clark Evans

Spring 2019

We stand at the dawn of the operational FV3 era, representing the first step in the implementation of NOAA's Next-Generation Global Prediction System (NGGPS). With this new era come many challenges, but also many opportunities to work collaboratively to improve the operational modeling system. How can NOAA best work with the community to leverage these opportunities and develop a truly world-class weather and climate prediction system? A few ideas:

Develop a decadal vision for world-class numerical weather prediction.

As FV3 becomes operational, now is the time for NOAA and the community to look forward, collaboratively, through the 2020s. Already, the field is advancing toward global convection-allowing weather prediction and ensemble data assimilation, fully coupled atmosphere-ocean modeling, and unified physical parameterizations. These are all likely to be ready for operations by 2030. Getting there, however, requires collaboration across the weather enterprise. I encourage NOAA to identify the visionaries within the field who can bring the community together to develop a vision for the operational modeling system of 2030, establish infrastructures and promote collaborations in support of this vision, and advocate for the long-term investments necessary to see it to reality.

Support applied research at lower readiness levels that have a high potential for transformative advancements.

Many of NOAA's collaborative funding programs target applied research with intermediate-to-high readiness levels and anticipated pathways to operations within two to five years. The advances resulting from such research are undoubtedly valuable. Yet, outside of limited base funding, there is arguably insufficient support for applied research at lower readiness levels, with higher risk but correspondingly higher potential reward. This poses a significant opportunity cost: the few funded initiatives remain under development for lengthy periods of time, and many worthy endeavors never see daylight because the resources to advance their development do not exist. Regular funding opportunities for early-stage applied research tied to the decadal vision would help to develop a truly world-class weather and climate prediction system.

Support collaboration across the operational, university, and research communities.

Fortunately, there are many examples of effective collaborations between NOAA and partners/stakeholders across the weather enterprise, with the NGGPS initiative representing one such example - albeit one that started with significant disagreements on what its focus should be, and continues today without perhaps as much buy-in across all interested parties as is needed. Unfortunately, NOAA's ability to support these collaborations is resource-constrained; as new opportunities have come, so too have existing opportunities been lost. Establishing a decadal vision for world-class numerical weather prediction may provide NOAA leadership with a cohesive plan that can be used to advocate for additional support for collaborative research.

To be sure, there are many examples of successful collaborations between NOAA and the broader community toward advancing its operational modeling systems, and the DTC is both an example and a crucial facilitator of such collaborations. With some tweaks, however, I feel that NOAA's collaborations can become even more effective. As a community, let's begin to have the needed conversations to help make this a reality.

Clark Evans, Professor and Chair of University of Wisconsin-Milwaukee Atmospheric Science Program

NOAA’s emerging effort in community modeling

Ricky Rood, University of Michigan

Winter 2019

I am the Co-chair, along with Hendrik Tolman, of the Unified Forecast System – Steering Committee (UFS-SC), one of the governance bodies in NOAA’s emerging community modeling effort. The overall goal of the UFS activity is to have a unified forecast system that can be configured to meet the many applications in NOAA’s product suite.

Ricky Rood, University of Michigan

The UFS-SC is a review, coordination, and decision-making body, with the major milestones and schedules of the Environmental Modeling Center (EMC) applications as a foundational consideration. The UFS-SC approves strategic direction and strategic plans for the UFS and recommends the content and development path of the production suite. More information on the UFS-SC can be found at:

The selection of the FV-3 dynamical core and the cubed-sphere grid as a primary algorithm of the atmospheric model was an important scientific and computational step.

For many years the forecasts of the National Weather Service (NWS) have been criticized as being of lower quality than those of the European Centre for Medium-Range Weather Forecasts. Much of the public criticism has been based on hurricane track forecasts from the global, medium-range forecast system. In both weather and climate modeling, the Europeans have been better able to organize and focus their activities. As I wrote in the Washington Post in 2013, in “To be the best in weather forecasting: Why Europe is beating the U.S.”, our research community is fragmented. We find it difficult to overcome this fragmentation and focus research advances on operational applications.

The forecast mission of the NWS is far more complex than the medium-range, global forecast system. Looking to the future it will require models that couple atmospheric, oceanic, land, chemical, and cryospheric processes. These models will need to operate, for some applications, at cloud-resolving scales.  The U.S. modeling research community holds excellence in all of these fields, and leveraging this expertise into operational systems to improve environmental security is a major motivation.

NOAA has taken some important steps to improve its operational modeling.  The Strategic Implementation Plan (SIP) has become the foundation of the evolving modeling system. The plan is reviewed and updated on a regular basis. It is central to the Steering Committee’s deliberations, linking the operational requirements to research outside of NWS’s programmatic domain. 

The selection of the FV-3 dynamical core and the cubed-sphere grid as a primary algorithm of the atmospheric model was an important scientific and computational step. (See “The Weather Service just took a critical first step in creating a new U.S. forecasting model.”) The selection was also an important organizational and managerial step; it facilitates the focus of activities in atmospheric physics on predictive problems in a controlled scientific environment.

Another decision by NOAA has an important strategic benefit. The NOAA Environmental Modeling System (NEMS) is a community-based software infrastructure that supports multiple applications. For more than a decade, NOAA has participated with other agencies in evolving this infrastructure and integrating model components into a common architecture (“The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability,” BAMS 2016). This approach supports community participation in a software environment that advances scientific integrity and robustness. It allows sharing of algorithms and intellectual expertise.

NOAA has also been seeking to formalize a relationship with the National Center for Atmospheric Research (NCAR), which is a recognized leader in community modeling. With these decisions at hand, NOAA has improved its position to develop its world-class suite of predictive products.

The community governance is still evolving. Indeed, it was recognized from the beginning that NOAA’s rising role as a partner in the U.S. community modeling culture could not be defined by a hierarchical management structure. The UFS-SC has its role at the interfaces of NOAA’s operational and research missions and the broader research community. The organizational and cultural changes required to deliver the UFS will take time; there are gaps to fill and barriers to overcome. Never before in my career have the decisions been made and the leadership aligned in such a way to make such advances in modeling possible. This is an exciting time with reasons to be optimistic.

Contributed by Ricky Rood.

Richard B. Rood is a Professor at the University of Michigan. At NASA, he managed modeling, data assimilation, and high-performance computing to provide products for NASA’s observational missions and campaigns. He was detailed to the Office of Science and Technology Policy from 1998 – 2001 to develop strategies for modeling and computing.

NOAA Embraces Community Modeling to Advance Weather Prediction

Brian Gross, Acting Director EMC

Summer 2018

NOAA and the National Weather Service (NWS) are embarking on a new strategy of community engagement for developing the numerical weather prediction models that provide NWS forecasters with the best possible guidance. By engaging the broader numerical modeling community, NWS will leverage the vast modeling expertise that resides therein, since each model developer offers a unique perspective about modeling challenges and possible solutions.

EMC is excited at the prospect of leveraging the modeling expertise in the numerical modeling community to improve NOAA guidance, forecasts, and other products and services.


The goal of this community effort is to evolve the Next Generation Global Prediction System (NGGPS) toward a national unified Earth system modeling framework for operations and research, to the mutual benefit of both. The Unified Forecast System (UFS) will function on temporal scales from subseasonal to seasonal (S2S) prediction on the order of months down to short-term weather prediction on the order of hours to days. The UFS will also work across spatial scales, from global-scale predictions down to high-resolution, convection-resolving local/regional scales. Operational implementations of the UFS will be guided by the NWS National Centers for Environmental Prediction's Environmental Modeling Center (EMC), which leads the integration of research innovations into operational models.

The UFS is being developed by NOAA, other federal partners, and the broader research and academic community to build the best national modeling system possible. The definition of “community” is important, and not all community efforts will be identical. We are learning from prior and ongoing community modeling efforts (such as WRF, CESM, WW3, MOM6, etc.) and are adopting best practices that meet our specific situation.

NOAA recognizes that the UFS must support the needs of both operations and research. Without that linkage, the incentives will not be there for the research community to help make improvements that will benefit operational predictions, nor will operational innovations feed back into the models used for research. Building a community model involves both give and take from the operational and research sides. Lessons learned, such as from the Developmental Testbed Center (DTC), have shown us that the community will expect sufficient training, full support (including help desk), and vetting of scientific advances. Also, through NGGPS, the Joint Technology Transfer Initiative, and other coordinated programs, NOAA has opportunities for partners to engage through recurring Federal Funding Opportunities.

Working groups that span the specific modeling areas needed for the UFS began meeting in Spring 2017 to develop three-year plans that identify key partners and provide a set of milestones to benchmark progress. The collective input of those groups resulted in the publication of the first Strategic Implementation Plan (SIP) in November 2017.  It is a dynamic process, and these working groups are working on updates to the SIP, with version 2 expected later this year.

To effectively coordinate the activities of the community partners, as well as to manage the collaborative projects of those partners described in the SIP, a robust governance structure is being put in place. Our governance approach is based on a commitment by core development partners, informed practices, and community values. A UFS Steering Committee, composed of both NOAA and non-NOAA members, is already working to provide technical guidance to the Working Groups. A Technical Oversight Board is about to be established to provide support for the programmatic elements across the NOAA Line Offices.

EMC is excited at the prospect of leveraging the modeling expertise in the numerical modeling community to improve NOAA guidance, forecasts, and other products and services. Better predictions can come from ensembles of model runs; better models can come from the assembled intellectual might of the entire modeling community. If you’d like to become engaged, please contact Tim Schneider. While a UFS web portal is being developed, up-to-date information can be found on the NGGPS web pages.

Brian Gross, Acting Director EMC

DTC Plays an Increased Role in Assisting R2O

Georg Grell

Spring 2018

As chief of the Model Development Branch (MDB) at NOAA’s Global Systems Division (GSD), I am honored to work with many of the scientists from the DTC. The DTC has been a vital partner for the development, testing, and evaluation of the Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR) models, and I look forward to continuing this valuable collaboration as the community works together to define and develop the next-generation modeling system. The work performed in my branch at GSD is strongly linked to the five task areas that the DTC supports: Regional Ensembles, Hurricanes, Data Assimilation, Verification, and the Global Model Test Bed (GMTB). Each of these areas will also be an important component of the Next Generation Global Prediction System (NGGPS), currently under development.

The goals of the NWS NGGPS include not only generating a much-advanced global modeling system with state-of-the-art non-hydrostatic dynamics, physics, and data assimilation, but also fostering much broader involvement of other labs and the academic community. This opens an opportunity for a new role for the DTC. While the initial objective for NGGPS was the selection of a dynamic core, other tasks are now of high priority. Among these, the selection of the advanced physics parameterizations and the involvement of the community may be the most challenging. The selection of an advanced physics suite should be completed by the end of 2018, to allow rigorous testing and tuning to be performed with the complete package. It is recognized that all of the physics schemes will likely need further testing, development, and calibration prior to, during, and after implementation. The DTC is ideally positioned to contribute to this task.

Model physics describe the grid-scale changes in forecast variables due to sub-grid scale diabatic processes, as well as resolved-scale physical processes.  Physical parameterization development has been a critical driver of increased forecast accuracy of global and regional models, as more processes are accounted for with sophistication appropriate for the model resolution and vertical domain.  Key atmospheric processes that are parameterized in current global models include subgrid turbulent mixing in and above the boundary layer, cloud microphysics and ‘macrophysics’ (subgrid cloud variability), cumulus convection, radiative heating, and gravity wave drag.  Parameterizations of surface heat, moisture, and momentum fluxes over both ocean and land, subgrid mixing within the ocean due to top and bottom boundary layers, gravity waves and unresolved eddies, land surface and sea ice properties are also important on weather and seasonal time scales.

A NGGPS modeling workshop and meetings of the Strategic Implementation Plan (SIP) Working Groups were held in August of 2017 at the National Center for Weather and Climate Prediction in College Park, Maryland, to finalize the SIP document that describes projects for the development of the Unified Forecast System. The future of physics parameterization development, as described in this report, listed many of the current DTC tasks. A possible DTC priority could be to test (at multiple resolutions), evaluate, and perhaps even assist in tuning scale-aware and stochastic parameterizations for processes such as microphysics, cumulus convection, and gravity wave drag.

To achieve the goal of involving the broader community in the development, testing, and assessment of physical parameterizations, the GMTB was established as a new task area in the DTC. The GMTB, led by Ligia Bernardet and Grant Firl, is an ambitious project to make model development much more user-friendly, catalyzing partnerships between EMC and research groups in national laboratories and academic institutions. The GMTB aims to implement transparent and community-oriented approaches to software engineering, metrics, documentation, code access, and model testing and evaluation.

The initial charge to the GMTB was the development and community support of the Common Community Physics Package, a software and governance framework to facilitate Research to Operations (R2O) transitions of community contributions. Additionally, the GMTB is tasked with defining a hierarchy of tests (model configuration, initial conditions, etc.), iteratively exercising each candidate physics configuration over the tests, and providing assessments in an open manner - tasks which are also needed for the development of unified metrics. The GMTB has already been of great help in implementing the Grell-Freitas convective parameterization into the GFS physics package, which now runs in versions of FV3 and GFS.

The second essential project defined by the SIP physics working group is the design of unified metrics (or at least standardized metrics, depending on the application). The development of the Model Evaluation Tools (MET), designed to be a highly configurable, state-of-the-art suite of verification tools, is another component of DTC work. Current MET development efforts are focused on expanding its capabilities to capture the full range of EMC’s multiple verification packages.

With the existing task areas spanning ensemble work, stochastics, verification software development, community support, data assimilation, and GMTB, DTC will be able to play an increased role in assisting R2O transfer with respect to development and improvements in physical parameterizations. Once the advanced physics suite is selected, this will require gaining expertise in the different parameterizations that are chosen. DTC can conduct carefully controlled, rigorous testing, including the generation of objective verification statistics, and provide the operational community with an increased amount of guidance for selecting new NWP technologies with potential value for operational implementation.


Russ S. Schumacher

Autumn 2017

Faculty and graduate students at universities typically conduct basic research to better understand the fundamental workings of their area of interest, which in our field is the atmosphere. Transitioning these findings into practical applications, including operational weather forecasting, is then done by national labs and their cooperative institutes.  Yet in many cases, university researchers are working on problems that are directly relevant to operations, and have the potential (with a little help) to be considered for transition into the operational environment.  How to cross the many hurdles associated with this transition, however, is not a topic that is well understood in the academic lab setting, where the project may be developed by a faculty member and one or two graduate students.

Participants in the Flash Flood and Intense Rainfall experiment, forecast discussion at the Weather Prediction Center and Hydrometeorology Testbed during summer 2017.

I've collaborated closely with forecasters and forecast centers in the past, mainly on what might be called "operationally relevant" research -- work that can inform the forecast process but isn't immediately applicable.  This summer, I had my first real experience as a faculty member in formally testing a product that could be considered for operational transition. With support from NOAA's Joint Technology Transfer Initiative, we tested my Ph.D. student’s heavy rainfall forecasts at the Flash Flood and Intense Rainfall (FFaIR) experiment at the Weather Prediction Center (WPC) and Hydrometeorology Testbed during June and July of 2017.

Preparing for the experiment raised several issues of a scientific and technical nature that I was not really accustomed to having to think about in an academic setting. Some were fairly mundane, like “how do we generate files in the proper format for the operational computers to read them?” But others were more conceptual and philosophical: “How should a forecaster use this product in their forecast process? What should the relationship be between the forecast probabilities and observed rainfall/flooding? How can we quantify flooding rainfall in a consistent way to use in evaluating the forecasts?”

So why do I bring all of these experiences up in the “DTC Transitions” newsletter? Because one of the DTC’s key roles is to facilitate these types of research-to-operations activities for the broader community (including universities as well as research labs). One particular contribution that the DTC makes to this effort is the Model Evaluation Tools (MET), a robust, standardized set of codes that allow for evaluating numerical model forecasts in a variety of ways. For new forecast systems or tools to be accepted into operational use, they should demonstrate superior performance over the existing systems, and the only way to establish this is through thorough evaluation of their forecasts. Careful evaluation can also point to areas for additional research that can lead to further model improvements. The DTC also sponsors a visitor program that supports university faculty and graduate students to work toward operational implementation of their research findings.

Conducting research-to-operations activities in an academic setting will certainly fall outside the comfort zone of many university researchers.  Furthermore, we should be sure not to lose our focus on basic research, which is often best suited to academia.  But the fruits of that basic research are also often ready to take the next step to direct application and broader use, and I encourage fellow academics to test out taking that step, especially with the support and tools offered by the DTC.

Michael Farrar

Winter 2017

As the new Director of the Environmental Modeling Center (EMC), I have the pleasure of leading nearly two hundred world-class scientists, engineers, and other staff in developing, transitioning, improving, and maintaining a suite of global and regional environmental models to support the National Weather Service (NWS) mission of protecting life and property for our country. To meet the challenges of this mission, EMC has formed strategic partnerships with numerous community organizations over the years to collaboratively manage the existing suite of models, as well as to build the next generation of environmental models. The DTC has been a vital partner for developmental testing, verification, and community support activities related to large segments of EMC’s modeling suite. While the genesis of the DTC began with WRF and so originally focused on regional models and related applications, the DTC has begun to expand its efforts into global modeling in support of the Next Generation Global Prediction System (NGGPS). In particular, the DTC has formed a Global Model Test Bed (GMTB) to support NGGPS development on a variety of tasks, most notably to assist in the development and testing of the Common Community Physics Package and support for an associated interoperable physics driver.

While the first step of the NGGPS program will be to migrate the legacy spectral-model dynamic core of EMC’s Global Forecast System (GFS) to the Finite-Volume Cubed-Sphere version 3 (FV3) from NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL), this represents much more than just a dynamic model core change to the global model. Instead, it represents the first step toward migration of EMC’s modeling suite to a unified modeling system across both spatial (mesoscale/regional and global) and temporal (weather, sub-seasonal, and seasonal) scales. With their legacy of mesoscale/regional applications and the new global modeling work under the GMTB umbrella, the DTC is ideally situated to play an integral role in helping EMC and NOAA evolve our legacy modeling suite towards a unified modeling system.

The evolution towards a unified system is more than just a NOAA imperative; it is a National one. As such, I am happy to report that we recently brought the vast majority of NOAA organizations involved in model research, development, testing and operations together with several of our key national partners for a November meeting at the David Skaggs Research Center in Boulder, CO to start developing a short-term strategy for a National unified modeling system.  In addition to EMC (who organized and chaired the meeting), the other NOAA participants included 8 NOAA labs from the Office of Oceanic and Atmospheric Research (OAR) and the National Ocean Service (NOS); the NOS Center for Operational Oceanographic Products and Services (CO-OPS); the new NWS Office of Water Prediction (OWP); and 3 program offices from NWS and OAR.  Representatives of these NOAA organizations were joined by members of 3 Labs from the National Center for Atmospheric Research (NCAR), including the Research and Applications Lab (RAL) of which DTC is a part; the NASA Global Modeling and Assimilation Office (GMAO); the Naval Research Lab (NRL); and the Joint Center for Satellite Data Assimilation (JCSDA).  All these organizations came together with the intent to begin development of a Strategic Implementation Plan (SIP) that can orchestrate collaborative activities over the next 2-3 years under a single, coordinated “play book”. 

Following this successful meeting, the next steps will be the formation of working groups composed of experts from each of these organizations, other partner organizations, and members of the broader R&D community to tackle specific functional areas of the SIP, to include such areas as governance, system architecture, model dynamics, model physics, data assimilation, and post processing, to name a few.  The output of these working groups will be brought together in a public community workshop, targeted for Spring 2017, to pull together the first draft of a comprehensive, integrated plan.  The power of this approach will be to harness the collective resources of many of our country’s top R&D institutions along with other partners from the broader research community, who can all work together under a single, coordinated plan towards a common goal of building a truly National unified modeling system.  I and the other members of the Environmental Modeling Center look forward to working with the DTC and our other strategic partners as we work towards this common goal. 

Bill Kuo

Summer 2017

The Developmental Testbed Center (DTC) was established in 2003 with a mission to facilitate research-to-operations transitions in regional numerical weather prediction (NWP). The DTC fulfills this mission by (i) providing community support for regional operational systems, (ii) performing testing and evaluation of NWP innovations for possible operational implementation, and (iii) promoting interaction and collaboration between the research and operational NWP communities through special workshops, visitor programs, and the publication of this newsletter.

The Developmental Testbed Center (DTC) was established in 2003 with a mission to facilitate research-to-operations transitions in regional numerical weather prediction (NWP).

When the DTC was first established, the initial focus was the Weather Research and Forecasting (WRF) model. In fact, during the early days the DTC was called the “WRF” DTC. Over the years, the scope and activities of the DTC have expanded in response to the needs of operational centers and the research community. Today, the DTC provides support for five community systems: WRF, the UPP (Unified Post-Processor), HWRF (Hurricane WRF) together with the GFDL vortex tracker (a component of HWRF), the GSI-EnKF data assimilation system, and the MET (Model Evaluation Tools) verification system. This work has been valuable in encouraging the research community to use operational NWP systems for research applications, and has contributed to their continued improvement.

Currently, efforts are grouped into five task areas focused on the operational systems the DTC supports: Regional Ensembles, Hurricanes, Data Assimilation, Verification, and Global Modeling. The Global Modeling task was added in 2015, in response to a request from the NWS NGGPS (Next Generation Global Prediction System) Program Office to establish a Global Model Test Bed (GMTB). The development and community support of a Common Community Physics Package are the initial foci of the GMTB.

A key objective of the NGGPS program is upgrading the current operational Global Forecast System (GFS) to run as a unified, fully coupled model within the NEMS (NOAA Environmental Modeling System) infrastructure. This unified model is expected to improve hurricane track and intensity forecasts and extend weather forecasting out to 30 days, in addition to other objectives. The NGGPS program presents an exciting opportunity for the U.S. NWP community to collaborate on the development of a single modeling system, which can then be used to support both the research and operational sectors.

In the last issue of the DTC Transitions Newsletter, Environmental Modeling Center (EMC) Director Mike Farrar articulated how the migration of the legacy GFS spectral model dynamic core to the FV3 (Finite Volume Cubed Sphere Version 3) core represents a first step toward unified modeling. The consolidation of EMC’s modeling suite will concentrate resources and result in considerable savings.

The DTC Executive Committee has asked the DTC to develop a strategic vision for the next 10 years to ensure that it evolves hand-in-hand with the operational centers and aligns its activities in support of the unified modeling transition. This is a very welcome request, as it has been challenging to support multiple modeling systems with limited resources.

This summer, DTC staff will begin to develop this new vision. The overarching question is: “What is the role of the DTC in the NGGPS era with unified modeling?” How should a complicated, unified, fully coupled Earth system model with multiple components be supported to the community, and what is the DTC’s role in the support of such a system? With limited resources, where should the DTC focus its testing and evaluation efforts to most effectively facilitate the R2O transition? How effective are the DTC community engagement activities, and should they be revised to further support collaboration and interaction between the research and operational NWP communities?

Our goal is to develop a draft strategic vision by the end of the summer. The DTC Strategic Vision will be vetted with the DTC Science Advisory Board and the DTC Management Board, before it is submitted to the DTC Executive Committee for consideration.

Ralph Stoffler

Spring 2016

Ralph Stoffler, a member of the US government Senior Executive Service, is the USAF Director of Weather and the USAF Deputy Chief of Staff for Operations. In this capacity, he is responsible for the development of weather and space environmental doctrine, policies, plans, programs, and standards in support of Army and Air Force operations. He oversees and advocates for Air Force weather resources and monitors the execution of the weather program. He is the functional manager for 4,300 total-force weather personnel and interfaces with Air Force major commands and the U.S. Army regarding full exploitation of Air Force weather resources and technology. He is also an AF representative on the DTC Executive Committee. See the article on this page about USAF perspectives on the DTC and more.

Ralph Stoffler, Director of Weather, US Air Force

Tom Hamill

Winter 2016

I’d like to highlight some recent work in model diagnostics by my one-time colleague in NOAA’s Physical Sciences Division, Thomas Galarneau. Tom was funded under a DTC visitor program grant, with additional funding through the Hurricane Forecast Improvement Project. As a DTC visitor in 2013, Tom applied a new diagnostic method for quantifying the phenomena responsible for errors in tropical cyclone (TC) storm tracks to an inventory of recent hurricanes. (See a related article about Tom’s work in the DTC Newsletter Winter-Spring 2014, page 4.) Tom has since moved on to NCAR, and more recently to the University of Arizona.

Readers are likely aware that the prediction of tropical cyclone track, intensity, and genesis in the medium-range (120–180-h forecast leads) continues to be a formidable challenge for operational numerical weather prediction models.

While 36–84-h TC track predictions from the operational NCEP Global Forecast System (GFS) have improved significantly over the last ten years, predictions for forecast leads beyond 120 h have shown no such improvement. To examine and diagnose medium-range forecasts from the NCEP GFS, Tom and his NCAR colleagues Chris Davis and Bill Kuo developed the Real-Time Diagnosis of GFS Forecasts website, which provides analyses and diagnoses of GFS forecasts in real time. To complement other websites that show statistics of GFS forecast performance, this website provides synoptic charts and diagnostic calculations that link model physical processes to the evolution of the synoptic-scale flow in the GFS forecast. Analyses and diagnostics are generated four times daily, and they provide information on how the GFS has behaved over the previous 60-day period. A wide variety of charts can be generated on the website.

Consider one relatively simple example of a systematic error in the GFS identified with this website. Since its inception, continuous monitoring of GFS forecasts has revealed that the GFS systematically loses atmospheric water vapor over the tropical west Pacific in quiet regimes during the summer months. It appears that the GFS atmosphere overturns early in the forecast, producing a positive rainfall bias in the day-1 forecast over the tropics. The GFS atmosphere does not recover from the stabilization that follows this early burst of rainfall, owing to a low bias in surface fluxes. As a consequence, the GFS atmosphere continues to dry and stabilize through the medium range, resulting in a negative rainfall bias over the tropics by day 7. The drier conditions make it difficult for the GFS to accurately predict TC genesis in the medium range. Examination of rainfall forecasts over the last 60 days of 2015 shows that the dry bias in the tropics is also a problem during the cool season. The attached figure shows a systematic inability of medium-range GFS forecasts to maintain tropical rains associated with a westerly wind burst along the equator in the tropical west-central Pacific. With tropical cyclone formation and propagation affected by this bias, one can expect that tropical-midlatitude interactions in the eastern Pacific are affected as well, which of course can affect the accuracy of downstream forecasts over the US.

Identification of major systematic biases in our forecast model is the first step toward better predictions. I personally would like to see the DTC do even more in this arena, producing diagnostics that further aid the model developer in teasing out the underlying sources of model biases. Can the systematic errors Tom identified be attributed to convective parameterizations? To cloud-radiative feedbacks? The faster we can diagnose the ultimate potential sources of forecast bias, the more rapidly we can reallocate development resources to address them, thus speeding the rate of forecast system improvement. Tom and his colleagues’ pioneering work is a very admirable first step in this process, for which NOAA and the weather enterprise are grateful.

Tom Hamill is a research scientist interested in all aspects of ensemble weather and climate prediction, from data assimilation to stochastic parameterization to post-processing and verification. Tom is also currently a member of the DTC Management Board and a co-chair of the World Meteorological Organization’s Data Assimilation and Observing Systems working group.

Time-mean daily rainfall (shaded, in mm) for the 60-day period ending 1 Jan 2016, from (a) the CMORPH-derived analysis* and (b) GFS day-7 forecasts (0000 UTC initializations only). *NOAA’s Climate Prediction Center MORPHing (CMORPH) technique.

Paula Davidson

Autumn 2016

NOAA’s testbeds and proving grounds (NOAA TBPG) are an important link between research advances and applications, and especially NOAA operations. Some are long-recognized, like the Developmental Testbed Center (DTC), while others have been chartered more recently. With the 2015 launch of the Arctic Testbed in Alaska, twelve NOAA TBPG follow execution and governance guidelines to be formally recognized by NOAA. These facilities foster and host competitively-selected, collaborative transition testing projects to meet NOAA mission needs. Projects are supported through dedicated or in-kind facility support, and programmatic resources both internal and external to NOAA. Charters and additional information on NOAA TBPG, as well as summaries of recent coordination activities and workshops, are posted at the web portal. See www.testbeds.noaa.gov.


Along with adopting systematic guidelines for function, execution, and governance of NOAA TBPG, in 2011 NOAA instituted formal coordination among the TBPG to better leverage progress across the spectrum of testing and to provide a consistent voice and advocacy for programs and practices involving the TBPG. The coordination committee hosts annual workshops featuring collaborative testing on high-value mission needs, fosters practices consistent with rigorous, transparent testing and increased communication of test results, and provides a forum to advance program initiatives in transitions of research to operations and of operations to research.

NOAA’s TBPG conduct transition testing to demonstrate the degree of readiness of advanced research capabilities for operations and applications. Over the past two years, these facilities completed more than 200 transition tests, demonstrating readiness for NOAA operations for more than 70 candidate capabilities. More than half have already been deployed. Beyond the simple transition statistics, NOAA TBPG have generated a wealth of progress in developing science capabilities for use by NOAA and its partners through more engaged partnerships among researchers, developers, operational scientists, and end-user communities. Incorporating appropriate operational systems and practices in development and testing is a key factor in speeding the integration of new capabilities into service and operations.

DTC, in collaboration with public and private-sector partners, plays an increasingly important role in NOAA transitions of advanced environmental modeling capabilities to operations, and in rigorous testing to evaluate performance and potential readiness for NOAA operations. Readiness criteria include capability-specific metrics for objective and subjective performance, utility, reliability, and software engineering/production protocols. DTC facilitates R&D partners’ use of NOAA’s current and developmental community modeling codes in related research, leading to additional evaluation and incorporation of partner-generated innovations in NOAA’s operational models.

NOAA programs that have recently supported projects conducted at NOAA TBPG, and especially at DTC, include the Next Generation Global Prediction System (NGGPS), the Collaborative Science and Technology Applied Research Program, the Climate Program Office, the US Weather Research Program, and the Hurricane Forecast Improvement Program. Under NGGPS auspices, the DTC added a new unit for testing prototypes for NOAA’s next global prediction system. DTC’s contributions to the success of NGGPS will be the foundation for improved forecasts in critical mission areas such as high-impact severe/extreme weather in the 0-3 day time frame, in the 6-10 day time frame, and for weeks 3-4. As chair of NOAA’s TBPG coordinating committee, I am excited about the tremendous opportunity and capability that the DTC brings to these efforts to enhance NOAA’s science-based services.

Vijay Tallapragada, EMC


Summer 2016

My association with the Developmental Testbed Center (DTC) dates back to early 2008, when the NCEP operational Hurricane Weather Research and Forecast (HWRF) modeling system developed at the Environmental Modeling Center (EMC) was adopted to create a community modeling framework for hurricane model development supported by the Hurricane Forecast Improvement Project (HFIP). I enjoyed working with the DTC in various capacities as the Hurricane Team Leader at EMC and the Development Manager of HFIP.

Operational hurricane model development has undoubtedly been one of the most successful initiatives of the HFIP project, which enabled a process for effective transition of research to operations (R2O); the outcome is clearly evident in the tangible improvements in hurricane track and intensity forecast guidance from the operational HWRF demonstrated in real time over the past few years. The HWRF model has evolved into a unique high-resolution atmosphere-ocean-wave coupled system for all global tropical cyclones, serving various operational forecast agencies, researchers, and private industry. HWRF is the only operational hurricane model in the world that is freely distributed and supported to the research community through extensive documentation, in-person and online tutorials, and user guides.

While HWRF’s development is centralized at EMC, it incorporates contributions from a variety of scientists spread across several government laboratories and academic institutions. This distributed development scenario poses significant challenges: a large number of scientists need to learn how to use the model, operational and research codes need to stay synchronized to avoid divergence, and promising new capabilities need to be tested for operational consideration. The DTC’s contributions to HWRF are pivotal in the areas of code management, advanced support for model developers and general users, and extensive testing and evaluation of new innovations.

My recent transition to Chief of the Global Climate and Weather Modeling Branch (GCWMB) at EMC coincided with two other major initiatives – the National Weather Service (NWS) Next Generation Global Prediction System (NGGPS) project, and the expansion of the DTC’s role into global modeling through creation of the Global Model Test Bed (GMTB). Operational global modeling at NOAA/NCEP is of central importance to the entire US weather enterprise. The NCEP Global Forecast System (GFS) stands as the backbone of the whole operational production suite. GCWMB at NCEP/EMC is responsible for developing, implementing, and advancing the global modeling system, which spans weather to climate scales. With support from the NGGPS project, a new non-hydrostatic dynamic core is being selected to replace the current spectral model and serve the future needs of the NWS. A community-based NOAA Environmental Modeling System (NEMS) architecture will enable seamless development and integration of atmosphere, ocean, land, sea-ice, wave, and aerosol components for weather, sub-seasonal, and seasonal forecast capabilities. As with the HFIP project, the emphasis of NGGPS is to foster enhanced R2O capabilities for accelerated model development and transition to operations. The GMTB has embarked on designing a Common Community Physics Package (CCPP) within NEMS that allows for systematic evaluation of advanced physics.

There is enormous talent in the US NWP community that has largely been untapped. With a focused approach, we can bring together the best in the field by adopting the community modeling concept. I look forward to continuing to work with the DTC to strengthen the relationship between the operational and research communities, and to realizing the goals of NGGPS of becoming second to none in global weather prediction.

Frederick Toepfer

Summer 2015

The Next Generation Global Prediction System (NGGPS) project is a National Weather Service initiative to design, develop, and implement a next-generation global prediction system to take advantage of evolving high-performance computing architectures, to continue pushing toward the higher resolutions needed to increase forecast accuracy at shorter timeframes, and to address growing service demands for increased skill at extended ranges (weeks to months). The NGGPS project goals are: (1) expansion and implementation of critical weather forecasting research-to-operations (R2O) capabilities to accelerate the development and implementation of global weather prediction model advances and upgrades; (2) continued improvement in data assimilation techniques; and (3) improved software architecture and system engineering to support broad community interaction with the system. The introduction of community codes for all major components of the NGGPS and the establishment of a Global Model Test Bed (GMTB) to oversee community interaction with the codes are significant changes to the National Weather Service business model for advancing numerical weather prediction skill in the US. Over the next five years, contributions from a wide sector of the numerical weather prediction community, including NCEP, NOAA and other agency laboratories, the private sector, and universities, will be incorporated into an upgraded operational system to deliver a NGGPS that meets evolving national prediction requirements.

Major work areas in the NGGPS project include selecting and further developing a new atmospheric dynamic core and improving model physics to better describe phenomena at global to regional scales. Additional work will be conducted to accelerate the development and implementation of weather prediction model components such as ocean, wave, sea ice, land surface, and aerosol models, and to improve coupling between these various components of the model system.

The DTC will play an important role in the NGGPS project. The new GMTB has been established as an extension of the current DTC. The GMTB is initially funded by the NGGPS project to assist in the development and testing of a Common Community Physics Package, as well as to provide code management and support for an associated interoperable physics driver. The GMTB will also assist the NGGPS project in the development and testing of a sea ice model.

Active community participation in the development of the NGGPS is considered key to the success of the project. The DTC is perfectly positioned to assist in this aspect of the project. As the NGGPS Project Manager, I am excited about the DTC’s role, through the GMTB, in the project and anticipate a productive and successful relationship going forward.

Barb Brown

Winter 2015

The DTC was established a number of years ago to facilitate the transition of new capabilities in weather forecasting from research to operations (R2O), with a focus on the WRF model. Over the years, many things have changed – for example, the DTC now works with multiple sets of code (WRF, GSI, MET, HWRF, and so on), which will soon include global prediction systems – but that fundamental mission remains the same. The DTC accomplishes its goals through strong connections to the operational and research communities. These dual connections, forming the bridge between the communities, are what make the DTC unique.

In fact, the bridge is the key aspect of the DTC that has led to its success, and will lead to additional successes in the future as the DTC continues to grow and mature.

Connecting the research and operational communities through workshops (e.g., the recent physics workshop; see page 4), support and training on operational codes, and the DTC visitor program provides the keys to developing relationships that will lead to new successes in R2O. Moreover, the DTC’s independent testing and evaluation of new innovations developed by the research community, and its efforts to enable such testing by the research community (e.g., through the Mesoscale Model Evaluation Testbed, MMET), help speed the identification and transfer of new capabilities across that bridge. These key factors have the potential to lead to a vibrant and well-connected R2O process.

It has been a great pleasure for me to work closely with the DTC over the last six years as a member of the Management Board and as the Director of NCAR’s Joint Numerical Testbed Program (JNTP). I feel lucky to be part of this grand effort to improve forecasting for our nation through community activities, and will enjoy watching the success of the DTC in the years to come. It is with pleasure that I hand over the reins of the JNTP to Dr. Joshua Hacker, who will bring new leadership, ideas, and energy to the DTC effort.

Bob Gall

Summer 2014

I was part of the Developmental Testbed Center from its beginnings as part of the WRF (Weather Research and Forecasting) model development, which in turn was part of the US Weather Research Program (USWRP). The years are beginning to blur for me, but I believe the first discussions of a DTC in Boulder took place during an IWG (Interagency Working Group) meeting of the USWRP at NCAR on October 22, 2002. At that meeting, the vision for the DTC was stated as a facility that would:

• Provide for a rapid and direct transfer of new NWP research results into operational forecasting

• Evaluate strengths and weaknesses of new methods and models for NWP prior to consideration for operational implementation

• Evaluate strengths and weaknesses of current operational systems

And later we added:

• Do these in a way that doesn’t interfere with operations

More discussions followed, but the DTC project was basically underway by the next summer. I led the DTC from its inception until I left to become Development Manager of HFIP (the Hurricane Forecast Improvement Program) in 2009. During that time, Steve Koch and I gradually built up the project, both at NCAR and at GSD/ESRL in Boulder, as a joint agency effort. Louisa Nance was its first employee and has been with the program ever since. The program gradually expanded to what you see today, and Bill Kuo took over as Head after I left.

As we began to spin up the HFIP project in 2009, we realized early on that one of its goals needed to be making the operational NWP hurricane model system widely available. Only in that way could HFIP make effective use of ideas and technology from the community (broadly defined as university, government laboratory, and other folks). The mission of EMC is to develop, test, and implement the operational system—a full-time job for the HWRF group—and they are not equipped to deal extensively with making the codes available to the community.

The DTC, on the other hand, was equipped for this task, and thus early in the HFIP program we began to fund a significant program to make the HWRF available to the community.

It was our intention to focus the HFIP program on a single model system (like the original idea for the WRF system) as a way to make maximum progress in improving the hurricane forecast guidance system. From the beginning we felt that system needed to be HWRF, principally because that system was being developed at EMC at the time and there was a team there focused on all aspects of the model development (core, physics, and the initialization system) and how they all integrated together. The only other similar team in the US was the one running COAMPS-TC for NRL. Such a team was not in place for other hurricane NWP forecast systems in the US, such as the AHW (Advanced Hurricane WRF) being developed at NCAR. Since the central goal of HFIP is to develop the NCEP operational hurricane system into the best in the world, HWRF was the obvious choice. The DTC, which had an extensive knowledge base for making codes available to the community and for handling community interactions for HWRF, was also an obvious choice. HFIP has provided significant funding to the DTC over the last several years to set up and make available a code system in Boulder for HWRF, including documentation, and to work with EMC to keep that code coordinated with the most current operational HWRF codes. In addition, we provided funding for university projects to work with these codes. The end result, for HWRF, is a paradigm essentially equivalent to the original vision for both WRF and the DTC given above.

DTC Director

Winter 2014

One outcome of the DTC Science Advisory Board (SAB) meeting last September was the establishment of an annually rotating SAB Chair. As the first to assume the role under this change, I welcome the future opportunity to address the DTC from the perspective of a researcher. Taking over from last year’s chair, Mark Stoelinga, is no small task; Mark efficiently and effectively led our discussions and drew several points of consensus from among many disparate ideas and concerns. I encourage those interested to browse the report that resulted from our meeting last September (which can be found on the website). Here I step aside from Chair duties and instead put forth what I see as the DTC’s key strengths, challenges, and opportunities.

During my short two years on the SAB, the DTC has made noteworthy improvements in its ability to disseminate and support operational codes, such as the WRF-NMM and the GSI. Although many point out that the DTC’s primary goal is capability transfer from research to operations, that goal cannot be realized without first opening the doors for the research community to adopt operational codes and methods. The ARW is well established in the research community and may be difficult to supplant at any significant level. The GSI may prove more successful, and the DTC’s experience with and support of the GSI opens a plausible path to success in research to operations. Consider the following scenario: university investigators continue to expand their use of complex data assimilation codes, and university use of the ARW-compatible GSI grows commensurately. With NCEP and AFWA invested in the GSI for some years to come, a viable opportunity for research-to-operations transfer emerges. With convection-allowing forecasts offering several important science challenges, it is not a stretch to think that a university investigator with sufficient computational resources would find a RAP- or HRRR-like system desirable to facilitate research in the near future. NCEP and AFWA may see tangible contributions follow.

In the present environment defined by funding uncertainty, the DTC continues to face the challenge of balancing its core mission against funding opportunities that may redirect limited staff time. A step into global modeling could present one such difficulty. Suppose an opportunity emerges to begin testing global models. In the absence of increased total funding, staff would necessarily be diverted away from valuable mesoscale model testing, and from supporting operational codes to the community. It is unlikely that large numbers of university investigators will, in the next few years, be running operations-grade global models with regularity and resolution needed to inform operational centers. Maintaining focus on growing strength in mesoscale model testing, especially at convection-allowing scales, should position the DTC for future research to operations transition opportunities.

Finally, the rising importance of the HRRR and HRRRE, and the continued emphasis on probabilistic forecasting at AFWA, give the DTC its greatest opportunity to realize research to operational transitions. The HRRR, and nearly all of AFWA’s operational NWP, are based on the widely used ARW. As noted above, the recent success in hosting GSI workshops, and supporting the code to the research community, positions DTC for future success.

After what some perceive as struggles during its first few years, the DTC has laid a solid foundation. Independent testing of operational models continues to be valuable, particularly to AFWA; the visitor program remains popular and effective at offering operationally relevant problems to the community. By remaining focused on its strengths while continuing its work making operational codes available to the research community, the DTC should realize more future success in line with its core mission.

Kevin Kelleher

Kevin Kelleher, NOAA GSD

Autumn 2014

During my first 15 months as the ESRL Global Systems Division Director, I have learned about the DTC and its role in the modeling community. The DTC has made remarkable gains in supporting the WRF model within the community, which has contributed to the great success and usage of the model both nationally and internationally. I believe the DTC is unique in how it is funded and operated as a joint effort between NCAR and NOAA, along with partners from the Air Force. It is my observation that there is a significant effort to develop global models at resolutions traditionally associated with mesoscale/regional models. Therefore, it is a good time for the mission of the DTC to be reviewed and possibly updated so that the DTC has a viable and robust future, should global models eventually begin to replace mesoscale/regional models within NOAA NCEP operations, for example. At GSD, we have recently reorganized in response to these changes in the modeling community.

All of our modeling efforts are now under a single Branch, the Earth Modeling Branch, led by Dr. Stan Benjamin. Nearly all of the GSD DTC efforts now fall within this Branch, which includes researchers working on modeling scales from the storm scale through the global scale. Having convenient access to such a wide range of talented personnel should benefit DTC tasking in the future. In addition, in my role as a member of the DTC Management Board, I have begun to work closely with NWS NCEP/EMC and Dr. Bill Kuo toward improving the alignment of current NCEP operational needs with the DTC mission, capabilities, and services.

Welcome Message by Bill Kuo

Dear Colleagues,

Spring 2013

Welcome to the first issue of a quarterly newsletter for the Developmental Testbed Center (DTC). The research to operations (R2O) transition in numerical weather prediction (NWP) is a major challenge facing the U.S. meteorological community. The U.S. has the world's largest community working on weather research and numerical modeling, yet most of these research results do not directly benefit operational NWP. The DTC was established in 2003 with a mission to facilitate the transition of research innovations in regional modeling into operations.

An effective R2O process requires active participation of research and operational communities. With this quarterly newsletter, we hope to provide a forum for discussion of important issues facing the NWP community. We will also provide updates on DTC activities that are of interest to the community.

We welcome articles submitted for consideration for publication in upcoming issues.

Bill Lapenta

Summer 2013

Dear Colleagues,

The end-to-end modeling systems in the NOAA operational numerical guidance suite are scientifically based, and research results must and do cross the “Valley of Death” into operations. However, the operational and research communities need to make this journey more efficient and cost effective. That’s one reason why we have testbeds like the DTC. During 20 years as a research scientist at NASA, I had the opportunity to work closely with NWS forecasters in Huntsville and offices across the Southeast. When I accepted a job at NOAA with EMC, I thought my understanding of what it takes to work in an operational environment was sound, based on these earlier experiences. However, it soon became apparent that my perceptions about the transition of research into operations were woefully incomplete. I believe there are many ways NOAA can build a better transition process between research and operations, and I would like to share my thoughts on the upgrade process at EMC in future issues of the DTC Newsletter.

Bill Lapenta

Greetings from the Heartland!

Autumn 2013

The Air Force Weather Agency (AFWA) has long been a partner of the DTC, from its “unofficial” early days of “core testing” (remember those?) through its official charter membership signing in September 2009 to the present. Air Force Weather (AFW) recognized early the essential role this organization could, and would, play in bolstering a US Air Force terrestrial weather RDT&E enterprise facing ever-growing resource constraints. Partnering further with our NOAA National Centers for Environmental Prediction (NCEP) compatriots made this endeavor doubly beneficial to AFW mission needs. For over a decade, AFW has not had a terrestrial weather R&D lab to foster continuous environmental NWP advancements.

Among other research avenues, the DTC was seen as a leveraging mechanism to enable and smooth the transition of terrestrial weather advancements into the operational NWP tool used by the USAF…WRF. After slow going in the early years, over the last 4+ years the DTC has been the research to operations (R2O) enabler we expected, though not in the conventional sense many have of the DTC.

AFWA's highest priority mission for the DTC is reference-configuration testing and evaluation (T&E). T&E is the essential last step in AFW's R2O process. To facilitate this, the DTC has set up a nearly “functionally equivalent” design of AFWA's operational WRF configuration. Over the past four WRF community release cycles, the DTC has tested and evaluated several promising reference configurations of WRF against AFWA's operational configuration, providing the final actionable detail needed to decide whether a new scientific advancement has a positive operational impact worthy of implementation. This fidelity enables AFWA to greatly reduce its R2O timelines relative to relying solely on its own resources.

Furthermore, the modeling community benefits from these configuration tests, which build a performance baseline for tracking reference configurations. This baseline should guide scientists toward fruitful R&D tracks and steer them away from unfruitful approaches, ultimately providing further R2O efficiencies.

This T&E focus for R2O is why AFW has funded the Model Evaluation Tools (MET) package, developed and matured solely by the DTC. A standardized tool, for standardized tests, for our common R2O future…a great beginning and a valuable partnership: DTC, AFW, and NOAA/NWS.