Transitions Newsletter

Issue 3 | Autumn 2013

Lead Story

SREF and the Impact of Resolution and Physics Changes

Contributed by Isidora Jankov

As operational centers move inexorably toward ensemble-based probabilistic forecasting, the role of the DTC as a bridge between research and operations has expanded to include testing and evaluation of ensemble forecasting systems.

In 2010 the ensemble task area in the DTC was designed with the ultimate goal of providing an environment in which extensive testing and evaluation of ensemble-related techniques developed by the NWP community could be conducted. Because these results should be immediately relevant to the operational centers (e.g., NCEP/EMC and AFWA), the planning and execution of these DTC evaluation activities have been closely coordinated with those centers. All of the specific components of the ensemble system have been subject to evaluation, including ensemble design, post-processing, products, and verification. More information about the DTC Ensemble Task organization and goals can be found at: http://journals.ametsoc.org/doi/abs/10.1175/BAMS-D-11-00209.1

“It appears that finer resolution improves SREF forecast performance more than changes in microphysics.”

Recently, efforts of the DTC Ensemble team have included evaluation of the impact that changes in the National Centers for Environmental Prediction/Environmental Modeling Center (NCEP/EMC) Short-Range Ensemble Forecast (SREF) configuration have had on its performance. The focus has been on two areas: the impact of increased horizontal resolution and the impact of changes to the model microphysical schemes. In an initial experiment, SREF performance at 16 km horizontal grid spacing (the current operational setting) was compared with that of SREF run at a potential future grid spacing of 9 km. The second experiment focused on changes in the microphysical parameterizations.

In the current operational version of SREF, only one microphysical scheme (Ferrier) is used. That version has now been compared with results from an experimental ensemble configuration that includes two other microphysics options (WSM6 and Thompson). Although these preliminary tests have used only SREF members from one WRF core (WRF-ARW), future tests will add NMMB members to the analysis. The comparison ensemble systems each consisted of seven members: a control and two sets of three members with varying initial perturbations. This preliminary study was performed over the continental US domain for the transition month of May 2013. By good fortune, the period captured one of the most active severe weather months in recent history, promising an interesting dataset for future in-depth studies.
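For readers unfamiliar with this kind of design, the sketch below lays out one way such a seven-member ARW subset could be described. The member names, perturbation labels, and microphysics assignments are illustrative assumptions, not the actual SREF configuration; the mp_physics values follow common WRF namelist conventions.

    # Hypothetical layout of a seven-member WRF-ARW subset: one control plus
    # two sets of three initial-perturbation members, with the three
    # microphysics options discussed in the text spread across the members.
    # mp_physics values follow common WRF namelist conventions
    # (5 = Ferrier, 6 = WSM6, 8 = Thompson); the pairings are illustrative only.
    members = [
        {"name": "ctl", "perturbation": "none", "mp_physics": 5},  # Ferrier control
        {"name": "p1",  "perturbation": "+1",   "mp_physics": 5},
        {"name": "p2",  "perturbation": "+2",   "mp_physics": 6},  # WSM6
        {"name": "p3",  "perturbation": "+3",   "mp_physics": 8},  # Thompson
        {"name": "n1",  "perturbation": "-1",   "mp_physics": 5},
        {"name": "n2",  "perturbation": "-2",   "mp_physics": 6},
        {"name": "n3",  "perturbation": "-3",   "mp_physics": 8},
    ]

    for m in members:
        print(f"{m['name']:>3}  pert={m['perturbation']}  mp_physics={m['mp_physics']}")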

Verification for the set of runs was performed using the DTC’s Model Evaluation Tools (MET) for both single-value and probabilistic measures aggregated over the entire month of study. Some of the relevant results are illustrated in the accompanying figures, each of which displays arithmetic means from the corresponding ensemble system. The first figure shows box plots of bias-corrected root mean square error (BCRMSE) of 850 mb temperature at the analysis time and two lead times for the operational 16 km SREF (yellow), a parallel configuration with a different combination of microphysics (red), and the experimental 9 km configuration (purple). For this preliminary run, it appears that finer resolution improves SREF forecast performance more than changes in microphysics. Indeed, the pairwise differences between the 16 km and 9 km SREF forecasts shown in the second figure are statistically significant at the 24 hr lead time, albeit for a limited data sample. Additional detailed analyses of an expanded set of these data are under way.
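For reference, bias-corrected RMSE simply removes the mean (systematic) error before taking the root-mean-square of the remaining differences. The minimal sketch below, using made-up 850 mb temperature values rather than MET output, illustrates the computation.

    import numpy as np

    def bcrmse(forecast, observed):
        """Bias-corrected RMSE: the RMSE of the errors after the mean error
        (the bias) has been removed, equivalent to sqrt(MSE - bias**2)."""
        err = np.asarray(forecast, dtype=float) - np.asarray(observed, dtype=float)
        bias = err.mean()
        return float(np.sqrt(np.mean((err - bias) ** 2)))

    # Illustrative ensemble-mean 850 mb temperatures (K) versus analysis values.
    fcst = [285.1, 284.3, 286.0, 283.7, 285.6]
    anal = [284.6, 284.0, 285.2, 284.1, 285.0]
    print(f"BCRMSE = {bcrmse(fcst, anal):.2f} K")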

 


Director's Corner

Greetings from the Heartland!

Contributed by Steve Rugg

The Air Force Weather Agency (AFWA) has long been a partner of the DTC, from its “unofficial” early days of “core testing” (remember those?), through its official charter membership signing in September 2009, to today. Air Force Weather (AFW) recognized early the essential role this organization could, and would, play in bolstering a US Air Force terrestrial weather RDT&E enterprise facing ever-growing resource constraints. Partnering further with our NOAA National Centers for Environmental Prediction (NCEP) compatriots in this endeavor made it all the more beneficial to AFW mission needs. For over a decade, AFW has not had a terrestrial weather R&D lab to foster continuous environmental NWP advancements.

Among other research avenues, the DTC was seen as a leveraging mechanism to enable and smooth the transition of terrestrial weather advancements into the operational NWP tool used by the USAF…WRF. After slow going in the early years, over the last 4+ years the DTC has been the research to operations (R2O) enabler we expected, though not in the conventional sense many have of the DTC.

The highest priority mission AFWA has for the DTC is reference configuration testing and evaluation (T&E). T&E is an essential last step in AFW’s R2O process. To facilitate this, the DTC has set up a nearly “functionally equivalent” replica of AFWA’s operational WRF configuration. Over the past four WRF community release cycles, the DTC has tested and evaluated several promising reference configurations of WRF against AFWA’s operational configuration, providing the final actionable detail needed to decide whether a new scientific advancement has a positive operational impact worthy of implementation. Having this fidelity allows AFWA to greatly reduce its R2O timelines compared with relying solely on its own available resources.

Furthermore, the modeling community benefits from these configuration tests, which build a performance baseline for tracking reference configurations. This should guide scientists toward fruitful R&D tracks and steer them away from unfruitful approaches, ultimately providing further R2O efficiencies.

This T&E focus for R2O is why AFW has funded the Model Evaluation Tools (MET) package, developed and matured solely by the DTC. A standardized tool, for standardized tests, for our common R2O future…a great beginning and a valuable partnership: DTC, AFW, & NOAA/NWS.

 


Who's Who

Hui Shao

In many ways, Hui is the quintessential DTC lead. Stationed at NCEP’s Environmental Modeling Center but spending several weeks a year in Boulder, she lives R2O (data assimilation variety) day in and day out. Besides the frequent commutes, she is a long-distance veteran in another way, with undergraduate and master’s-level education in China, a PhD at Florida State, and now her dual appointment of sorts in DC and Boulder. There was some chance at Nanjing University that she would follow a different career: space science was her first choice, but the availability of meteorology courses led her toward meteorology instead. She credits two events during her studies as important points in her career. A DA seminar in Nanjing impressed her with the ‘forward/backward’ mathematical beauty of adjoint formulations, and during her first year in Tallahassee, regular meetings with a professor helped to bring out the ‘bigger picture’ of her dynamic meteorology courses.

At EMC and the DTC, Hui is most proud of helping to form a stronger and closer partnership between the two centers and creating a new pathway between the operations and research communities. The collaboration between the two centers has now expanded beyond data assimilation alone. Her vision of DA needs and requirements includes a strong sense that better ways to handle extremes are needed, and she is a firm believer that effective DA can’t be just about data but must have a physical grounding as well. If anyone is well placed to bring that vision into the R2O arena, it is Hui.

 


Bridges to Operations

Support for Operational DA at AFWA

Unlike some other forecast model components, a data assimilation (DA) system is usually built to be flexible in order to be run by different forecast systems at varying scales.

Its testing and evaluation must therefore be performed in the context of a specific application; in other words, it must be adaptable to different operational requirements as well as to research advances. Established in 2009, the DTC DA team began providing data assimilation support, testing, and evaluation for Air Force Weather Agency (AFWA) mesoscale applications across its global theaters. This task has become an important component of the DTC’s effort to accelerate transitions from research to operations (R2O). Between 2009 and 2011, the focus of extensive DA testing for AFWA at the DTC was to provide a rational basis for the choice of the next-generation DA system. Various analysis techniques and systems were selected by AFWA for testing, including WRF Data Assimilation (WRFDA), Gridpoint Statistical Interpolation (GSI), and the NCAR Ensemble Adjustment Kalman Filter. During this testing, the impacts of different data types, background error generation, and observation formats were also investigated.

“The developmental experiment outperformed the baseline”

Testing activity by the DTC DA team took a sharp turn in August 2012. To assist AFWA in setting up an appropriate configuration for its 2013 implementation of GSI, the DTC adapted its DA testbed to complement AFWA’s pre-implementation parallel tests in real time. In support of providing new code and configurations, the team now performs two types of tests for AFWA:

The baseline experiment is usually generated by running the current operational or parallel system at AFWA. Whenever an AFWA baseline is updated, the DTC checks its reproducibility (or similarity) using the DTC’s functionally similar testing environment to ensure that any subsequent tests are comparable and that there is no code divergence between research and operations. One such test conducted during the summer of 2013 (see figure on the next page) revealed that the wind analysis fits to observations in AFWA forecasts were not reproduced by the DTC, owing to an inadvertent AFWA code change affecting the reading of their own conventional data files. Other data assimilation components and applications (new configurations, techniques, observations, etc.) can also be tested in the DTC end-to-end DA testbed (see figure to the left).
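In practice, a reproducibility check of this kind comes down to comparing analysis fits to observations from the two systems. The sketch below is only an illustration of the idea, with an arbitrary tolerance and made-up wind-fit values; it is not the DTC’s actual procedure.

    import numpy as np

    def rms_fit_to_obs(analysis, observations):
        """RMS of analysis-minus-observation differences (the 'fit to obs')."""
        d = np.asarray(analysis, dtype=float) - np.asarray(observations, dtype=float)
        return float(np.sqrt(np.mean(d ** 2)))

    def baselines_comparable(afwa_fit, dtc_fit, rel_tol=0.02):
        """Flag baselines whose fits differ by more than rel_tol (2% here is
        an arbitrary illustrative threshold, not an operational criterion)."""
        rel_diff = abs(afwa_fit - dtc_fit) / afwa_fit
        return rel_diff <= rel_tol, rel_diff

    # Illustrative RMS fits (m/s) of wind analyses to conventional observations.
    ok, rel = baselines_comparable(afwa_fit=2.41, dtc_fit=2.58)
    print(f"baselines comparable: {ok} (relative difference {rel:.1%})")

A flagged discrepancy like the one in this toy example is the cue to dig into code or data-handling differences, as happened with the summer 2013 wind-fit case described above.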

 

During DTC real-time tests of the AFWA 2013 implementation, the AFWA GO index (a multivariate combined statistical score) dropped when the (then) AFWA parallel-run configuration was used. When the GO index exceeded 1 (i.e., before November), the developmental experiment (which used the DTC-suggested configuration) outperformed the baseline (here, GFS-initialized). The DTC configuration significantly improved the wind analyses in particular. Further retrospective tests narrowed down the contributing factors, and the DTC suggested that the North American Mesoscale (NAM) static background errors generated by NCEP be used. AFWA adopted this configuration for its first GSI implementation in its global coverage domains in July 2013.
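The GO index itself is AFWA’s own weighted, multivariate score, but the idea can be sketched generically: combine ratios of baseline to experiment errors across variables, levels, and lead times so that values above 1 indicate the experiment beats the baseline. The snippet below is that generic sketch, with invented numbers and equal weights; it is not the operational formula.

    import numpy as np

    def go_style_index(rmse_baseline, rmse_experiment, weights=None):
        """Schematic GO-like summary: weighted mean of squared RMSE ratios
        (baseline over experiment) across variables/levels/lead times.
        Values above 1 mean the experiment verifies better overall."""
        rb = np.asarray(rmse_baseline, dtype=float)
        rexp = np.asarray(rmse_experiment, dtype=float)
        w = np.ones_like(rb) if weights is None else np.asarray(weights, dtype=float)
        return float(np.sum(w * (rb / rexp) ** 2) / np.sum(w))

    # Invented RMSEs for, say, wind, temperature, and height at one lead time.
    baseline   = [2.6, 1.4, 12.0]
    experiment = [2.4, 1.4, 11.8]
    print(f"GO-style index = {go_style_index(baseline, experiment):.2f}")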

 


Community Connections

DTC Science Advisory Board Meeting

Contributed by Mark Stoelinga (Chair of DTC SAB) and Bill Kuo (DTC Director)

Given its mission to facilitate the research to operations transition in numerical weather prediction, the DTC has a mandate to stay connected with both the research and operational NWP communities.

As a means to that end, the DTC Science Advisory Board (SAB) was established to provide (i) advice on strategic directions, (ii) recommendations for new code or new NWP innovations for testing and evaluation, and (iii) reviews of DTC Visitor Program proposals and recommendations for selection.

The third meeting of the SAB (the first with the new members announced in an earlier DTC Newsletter) was held recently (25-27 September 2013) in Boulder. To stimulate communication between the research and operational NWP communities, the DTC invited Geoff DiMego and Vijay Tallapragada to present future plans for mesoscale and hurricane modeling at the National Centers for Environmental Prediction’s (NCEP) Environmental Modeling Center (EMC), and Mike Horner to give an Air Force Weather Agency (AFWA) modeling update. These informative presentations stimulated considerable discussion.

John Murphy, Chair of the DTC Executive Committee (EC), presented the NWS’s view on research to operations and the important role of the DTC. Col. John Egentowich, also a member of the DTC EC, spoke positively of the contributions of the DTC to Air Force weather prediction modeling. During summary and discussion periods, SAB members provided many valuable recommendations for the DTC to consider during planning of the DTC Annual Operating Plan (AOP) for 2014. These recommendations will be detailed in a DTC SAB Meeting Summary; several were of particular interest.

The SAB recommended that the DTC and EMC develop a community model-testing infrastructure at NCEP/EMC. The goal for such an infrastructure would be to allow visiting scientists easy access to EMC operational models, enabling them to collaborate with EMC scientists to perform model experiments using alternative approaches. Such an infrastructure would hopefully facilitate accelerated R2O transition in NWP.

The SAB voiced its belief that operational centers will have significantly more computing resources in the near future, putting nationwide high-resolution mesoscale ensemble forecasting within reach. Given that possibility, it recommended that the DTC help facilitate the transition to cloud-permitting-scale ensemble forecasting with multiple physics. The current members of the DTC Science Advisory Board can be found at http://www.dtcenter.org under governance.

 


Did you know?

Testbeds and the DTC

Contributed by Kathryn Newman

The Tropical Cyclone Modeling Team (TCMT) was formed as part of RAL’s Joint Numerical Testbed (JNT) in 2009 to help assess hurricane and tropical storm forecasts from experimental models. As such, its members interact with the DTC in two particular ways: by designing methods and products appropriate for tropical cyclone verification that can be incorporated into software maintained at the DTC, and by providing both real-time and retrospective performance measures for each year’s hurricane forecasts. At a working level, members of the TCMT are often contributing members of the DTC as well.

The intent of the yearly retrospective evaluations is to provide guidance to the National Hurricane Center as it chooses particular experimental forecast models to use for operational guidance during the upcoming hurricane season. In recent years these retrospective studies have focused on hurricane track and intensity forecasts from suites of comparison models forwarded by universities, research laboratories, and national centers. This evaluation is intended to help achieve the goals of NOAA’s Hurricane Forecast Improvement Project (HFIP), a program in which the DTC hurricane task is also involved.
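As a schematic of how such an evaluation can rank models, mean absolute intensity errors over a common set of cases can be compared directly. The model names, sample, and error values below are invented; this is not the TCMT’s actual methodology, which uses homogeneous samples and additional statistics.

    import numpy as np

    # Invented absolute intensity errors (kt) for one experimental model and
    # three reference models over the same retrospective forecast cases.
    errors = {
        "EXPR": [12.0, 8.5, 15.0, 9.0, 11.5],
        "REF1": [10.0, 9.0, 13.0, 8.5, 10.0],
        "REF2": [11.0, 7.5, 14.0, 9.5, 10.5],
        "REF3": [ 9.5, 9.5, 12.5, 9.0, 11.0],
    }

    # Rank models by mean absolute intensity error (smaller is better).
    mae = {name: float(np.mean(vals)) for name, vals in errors.items()}
    for rank, name in enumerate(sorted(mae, key=mae.get), start=1):
        print(f"{rank}. {name}: mean abs intensity error = {mae[name]:.1f} kt")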

 

The accompanying figure, to the right, illustrates some results from the 2013 Retrospective Exercise (covering the 2010-2012 hurricane seasons). In the figure, the rank of a single experimental model’s hurricane intensity forecasts is shown relative to those of the three top-performing models. As a general rule, while hurricane track forecasts have improved dramatically in recent years, better intensity forecasts remain elusive.