Impact of Vertical Advection Schemes of the Hydrometeors on the Simulated Hurricane Structure and Intensity

Summer 2018

Advection is a computationally expensive process in Numerical Weather Prediction (NWP) models. Therefore, time-sensitive operational forecast models sometimes sum up the hydrometeors, including cloud water, rainwater, ice and snow, prior to calling the advection scheme. For this configuration, the model only needs to calculate the advection of the total condensate. However, the impact of such a time-saving technique has not been systematically evaluated. With the release of HWRF 3.9a, a version of the operational HWRF microphysics scheme with separate hydrometeor advection became available to the research community, providing an excellent opportunity to study how simulated hurricane structures differ according to the advection schemes they use.  As a DTC Visitor, Shaowu Bao evaluated the impact of vertical advection schemes of the hydrometeors on the simulated hurricane structure and intensity in the Hurricane Weather Research and Forecasting (HWRF) model.
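To make the distinction concrete, here is a minimal one-dimensional sketch in Python (hypothetical profiles and a first-order upwind operator chosen purely for illustration; the operational HWRF uses its own advection discretization). T_ADV advects only the summed condensate, while S_ADV advects each species with its own profile.

```python
import numpy as np

def upwind_advect(q, w, dz, dt):
    """First-order upwind vertical advection of a column profile q by an
    updraft w (m/s, positive upward). Illustration only."""
    dqdz = np.zeros_like(q)
    dqdz[1:] = (q[1:] - q[:-1]) / dz          # one-sided (upwind) difference
    return q - dt * w * dqdz

# Hypothetical column profiles (kg/kg): cloud water aloft, rain water low down
nz, dz, dt, w = 50, 200.0, 10.0, 2.0
z = np.arange(nz) * dz
cloud = 1e-3 * np.exp(-((z - 6000.0) / 1500.0) ** 2)
rain  = 2e-3 * np.exp(-((z - 2000.0) / 1000.0) ** 2)

# T_ADV-style: advect the total condensate only
total_adv = upwind_advect(cloud + rain, w, dz, dt)

# S_ADV-style: advect each hydrometeor separately, then sum
sep_adv = upwind_advect(cloud, w, dz, dt) + upwind_advect(rain, w, dz, dt)

# For this linear operator the totals agree, but T_ADV must afterwards
# re-partition the advected total back into species, whereas S_ADV retains
# each species' own transported profile; that re-partitioning is where
# per-species transport errors can arise.
print(np.allclose(total_adv, sep_adv))       # True
```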

Hurricanes Matthew (2016), Hermine (2016), and Jimena (2015) were simulated using the operational HWRF 2017 with the advection of total condensate (hereinafter T_ADV) and with the advection of separate hydrometeors (hereinafter S_ADV). The results were then compared against infrared (IR) brightness temperature imagery from NOAA's Geostationary Operational Environmental Satellite GOES-13.

The most distinct difference between the T_ADV and S_ADV results was the simulated storm size. In Figure 1, deep blue IR brightness temperatures indicate cold cloud tops, while red-brown identifies the warm surface of the Earth under clear or mostly clear conditions. T_ADV and S_ADV produced similar storm locations and shapes that both matched the observations. However, S_ADV produced cloud coverage that was noticeably larger than that produced by T_ADV.

Figure 1: IR brightness temperature for Hurricane Hermine at 18Z 09/01/2016 for a) observed and 36-h forecast with b) total condensate advection and c) separate hydrometeor advection.

Our hypothesis is that the total condensate advection in T_ADV overestimates the upward advection of rainwater and underestimates that of cloud water. By correcting this problem, the S_ADV scheme transports more cloud water upward than T_ADV, leading to more diabatic heating from condensation and more angular momentum imported into the hurricane vortex. This explains why the hurricanes simulated by S_ADV are larger than those simulated by T_ADV. Our analysis of the angular momentum (Figure 2) and other fields confirmed this hypothesis.
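For reference, the azimuthally averaged absolute angular momentum shown in Figure 2 is commonly computed as M = r v_t + f r^2 / 2. A minimal sketch with a hypothetical wind profile follows (the exact HWRF diagnostic may differ):

```python
import numpy as np

def absolute_angular_momentum(radius_m, tangential_wind, lat_deg):
    """Absolute angular momentum per unit mass, M = r*v_t + 0.5*f*r**2."""
    omega = 7.292e-5                                   # Earth's rotation rate (1/s)
    f = 2.0 * omega * np.sin(np.radians(lat_deg))      # Coriolis parameter
    return radius_m * tangential_wind + 0.5 * f * radius_m ** 2

# Hypothetical radial profile of tangential wind for a storm centered at 20 N
radius = np.linspace(10e3, 300e3, 30)                          # m
v_t = 50.0 * (radius / 40e3) * np.exp(1.0 - radius / 40e3)     # m/s
M = absolute_angular_momentum(radius, v_t, lat_deg=20.0)
print(M[:3])   # m^2/s, increasing outward
```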

Figure 2: Pressure-radial cross-section of the azimuthally averaged angular momentum in T_ADV (left) and S_ADV (right) for the simulation of hurricane Matthew 2016 14L 2016100100 cycle valid at 96h.

Although in theory the separate advection of hydrometeors in S_ADV is more realistic than the advection of total condensate in T_ADV, this evaluation showed that S_ADV simulated much larger storms than T_ADV and than the observed hurricanes, and therefore degraded HWRF performance. Future work is needed to identify the adjustments in the model that may have masked the error related to total condensate advection, so that separate hydrometeor advection can achieve better forecast performance.

Shaowu found the two weeks spent at NCAR collaborating with DTC scientists to be a very pleasant and productive experience. He especially thanks Ligia Bernardet, Evan Kalina, Mrinal Biswas, Greg Thompson, and Louisa Nance, as well as Kathryn Newman. Without their help and support in setting up the model, providing input data, and analyzing the results, this project could not have been completed.


Shaowu Bao

Variational lightning data assimilation in GSI

Spring 2018
A contribution by Karina Apodaca and coauthor Milija Zupanski on the work they conducted with the DTC Visitor Program on variational lightning data assimilation. This article covers highlights of their journal article; see also the link at the end of this article.

The launch of new observing systems offers tremendous potential to advance the operational weather forecasting enterprise. However, “mission success” is strongly tied to the ability of data assimilation systems to process new observations. One example is making the most of new measurements of lightning activity by the Geostationary Lightning Mapper (GLM) instrument aboard the GOES-16 satellite. The GLM offers the possibility of monitoring lightning activity on a global scale. Even though its resolution is significantly coarser than that of ground-based lightning detection networks, its measurements of lightning can be particularly useful in less-observed regions such as elevated terrain or the open ocean. The GLM identifies changes in an optical scene that are indicative of lightning activity and produces “pictures” that can provide estimates of the frequency, location, and extent of lightning strikes. How can we capitalize on the information provided by these pictures of lightning events for the benefit of operational numerical weather prediction models, in particular at the NOAA/National Weather Service?

We enhanced the NCEP operational Gridpoint Statistical Interpolation (GSI) system within the Global Data Assimilation System (GDAS) by adding a new lightning flash rate observation operator and by following a variational data assimilation framework. Given the coarse resolution and simplified microphysics of the current operational global forecasting system, we designed this new lightning flash rate observation operator to update large-scale model fields such as humidity, temperature, pressure, and wind.

To start, we used surface-based Lightning Detection Network (LDN) data from the World Wide Lightning Location Network as a GLM proxy (Fig 1a). Real-Earth latitude, longitude, and timing of total lightning strikes were extracted in a way similar to what the GLM instrument measures. These data were then converted into BUFR (the binary data format required for assimilation by the GSI system) and ingested as a cumulative count of geolocated lightning strikes (Fig 1b). This lightning assimilation package has been prepared to handle actual GLM observations once they are well-curated, suitable for testing, and readily available to the public.
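As a rough illustration of the preprocessing described above, the sketch below bins hypothetical strike locations into a cumulative count per grid cell (Python/NumPy; the conversion of the result to BUFR is not reproduced here):

```python
import numpy as np

def grid_flash_counts(lats, lons, lat_edges, lon_edges):
    """Accumulate geolocated lightning strikes into a count per grid cell,
    the kind of cumulative flash count ingested by the assimilation."""
    counts, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
    return counts

# Hypothetical strikes over a 1-degree grid
rng = np.random.default_rng(0)
strike_lats = rng.uniform(30.0, 40.0, size=500)
strike_lons = rng.uniform(-110.0, -100.0, size=500)
lat_edges = np.arange(30.0, 41.0, 1.0)
lon_edges = np.arange(-110.0, -99.0, 1.0)

flash_counts = grid_flash_counts(strike_lats, strike_lons, lat_edges, lon_edges)
print(flash_counts.shape, flash_counts.sum())   # (10, 10) 500.0
```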

The lightning assimilation package has been fully incorporated into a version of the GSI system and is being evaluated by the GSI review committee. We are now verifying the effects of lightning observations on the forecast through global parallel experiments with the NCEP/4DEnVar system. Thus far, an assessment of the processing of lightning observations and their impact on the initial conditions of some of the dynamical fields of the GFS model seems promising. The analysis increments of temperature, wind, and specific humidity shown in Fig. 1 (c, d, e, and f) and the locations of the raw lightning strikes coincide with the high-precipitation contours in Fig. 2.

In preparation for the NOAA/NGGPS FV3-based Unified Modeling System, we hope to further develop this lightning capability for the GOES GLM instrument by following a hybrid (EnVar) methodology and by incorporating a cloud-resolving/non-hydrostatic-suitable observation operator for lightning flash rate that is also capable of updating cloud hydrometeor fields. Once GLM observations are available, we will evaluate their actual impact with the GSI system and assess their benefit for operational weather prediction at NCEP.

For more information, see the Joint Center for Satellite Data Assimilation Quarterly, No. 58, Winter 2018 (JCSDA): ftp://ftp.library.noaa.gov/noaa_documents.lib/NESDIS/JCSDA_quarterly/no_58_2018.pdf#page=12


Figure 1. (a) Raw lightning observations from the WWLLN network, (b) assimilated lightning flash rate, both valid at 12 UTC 27 August 2013. Analysis increments of (c) temperature (K), (d) u-component of wind (m/s), (e) v-component of wind (m/s), and (f) specific humidity (g/kg) from a GFS/GDAS lightning data assimilation experiment.

Figure 2. 24-h precipitation valid at 12 UTC 27 August 2013 (Courtesy: NWS). Note the region of maximum precipitation near the Arizona-Nevada border, which coincides with the region of a positive analysis increment in specific humidity shown in Fig. 1f. The assimilation of lightning observations has a positive impact on the initial conditions of the GFS model.

Are mixed physics helpful in a convection-allowing ensemble?

Autumn 2017

As a 2017 DTC visitor, William Gallus is using the Community Leveraged Unified Ensemble (CLUE) output from the 2016 NOAA Hazardous Weather Testbed Spring Experiment to study the impact of mixed physics in a convection-allowing ensemble.  Two of the 2016 CLUE ensembles were similar in their use of mixed initial and lateral boundary conditions (at the side edges of the model domain), but one of them also added mixed physics, using four different microphysics schemes and three different planetary boundary layer schemes.

Traditionally, ensembles have used mixed initial and lateral boundary conditions. These perturbations generally resulted in members that were equally likely to verify, a desirable quality in ensembles. However, as horizontal grid spacing was refined and the focus of forecasts shifted to convective precipitation, studies suggested that problems with insufficient spread might be alleviated through the use of mixed physics. Although spread often did increase, rules of well-designed ensembles were violated: members carried biases tied to their particular physics schemes, and in some cases certain members were more likely to verify than others. Improved approaches for generating mixed initial and lateral boundary conditions for use in high-resolution models now prompt the question: is there any advantage to using mixed physics in the design of an ensemble?



To explore the impact of mixed physics, the Model Evaluation Tools (MET) have been run for 18 members of the two ensembles for 20 cases that occurred in May and early June 2016. Standard point-to-point verification metrics such as Gilbert Skill Score (GSS) and bias are being evaluated for hourly and 3-hourly precipitation and hourly reflectivity. In addition, Method for Object-Based Diagnostic Evaluation (MODE) attributes are being compared among the nine members of each ensemble.
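Both metrics reduce to simple contingency-table arithmetic (interpreting the bias here as frequency bias for threshold verification). The sketch below is a generic threshold-based calculation in Python, not the MET implementation itself:

```python
import numpy as np

def gss_and_bias(forecast, observed, threshold):
    """Gilbert Skill Score (equitable threat score) and frequency bias for a
    single threshold, from the standard 2x2 contingency table."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    total = f.size
    hits_random = (hits + false_alarms) * (hits + misses) / total   # chance hits
    gss = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    bias = (hits + false_alarms) / (hits + misses)
    return gss, bias

# Hypothetical 1-h precipitation fields (mm) on a small grid
rng = np.random.default_rng(1)
fcst = rng.gamma(shape=0.5, scale=2.0, size=(100, 100))
obs = rng.gamma(shape=0.5, scale=2.0, size=(100, 100))
print(gss_and_bias(fcst, obs, threshold=2.5))
```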

Preliminary results suggest that more spread is present in the ensemble that used mixed physics, and that the median values of convective system precipitation and reflectivity are closer to the observed values. However, the improved medians are achieved by a few members with unreasonably large high biases being balanced by a larger set of members suffering from systematic low biases. Is such an ensemble the best guidance for forecasters?

Accumulated measures of error from each member would suggest that the ensemble using mixed physics performs more poorly.  The figure shows an example of the 90th percentile value of reflectivity among the systems identified by MODE as a function of forecast hour for the nine members examined in both ensembles.  Additional work is needed along with communication with forecasters to determine which type of ensemble has the most value for those who interpret the guidance.

MET output and how the ensembles depicted convective initiation are also being examined, along with an enhanced focus on systematic biases present in different microphysical schemes.  It is hoped that the results of this project will influence the design of the 2018 CLUE ensemble and that future operational ensembles used to predict thunderstorms and severe weather can be tailored in the best way possible.  This visit has been an especially nice one for Dr. Gallus since he had done several DTC visits about ten years ago when the program was new, so the experience feels a little like “coming home”!  The DTC staff are always incredibly helpful, and the visits are a great way to become familiar with many useful new research tools. Universities can become a bit like ghost towns in the summer, so he also enjoys the chance to get away to Boulder, with its more comfortable climate, great opportunities to be outdoors, numerous healthy places to eat, and opportunities to interact with the many scientists at NCAR.

Dr. Gallus is a meteorology professor at Iowa State University whose research has often focused on improved understanding and forecasting of convective systems. The CLUE output was provided by Dr. Adam Clark from NSSL, while observed rainfall, reflectivity, and storm rotation data were gathered by Jamie Wolff at the DTC, who is serving as his host, and Dr. Patrick Skinner from NSSL who is also working with CLUE output as a DTC visitor this year.   Dr. Gallus is also working closely with John Halley-Gotway at the DTC who has provided extensive assistance with model verification via the MET and METViewer tools.


The 90th percentile reflectivity values from forecast hour 6 through 30 for the nine members studied in the single physics ensemble (top) and the ensemble that includes mixed physics (bottom). Both ensembles use mixed initial and lateral boundary conditions. The red curve is the control member (common to both ensembles) and the black curve identifies the observed values.

Cloud Overlap Influences on Tropical Cyclone Evolution

Winter 2017

As visitors to the DTC in 2016, Michael Iacono and John Henderson of Atmospheric and Environmental Research (AER) used the Hurricane Weather Research and Forecasting model (HWRF) to investigate an improved way of representing the vertical overlap of partial cloudiness and showed that this process strongly influences the transfer of radiation in the atmosphere and can impact the evolution of simulated tropical cyclones.

Clouds are a critical element of Earth’s climate because they strongly affect both the incoming solar (or shortwave) energy, which fuels the climate system, and the thermal (or longwave) energy passing through the atmosphere. Understanding the way that clouds absorb, emit, and scatter radiation is essential to modeling their role effectively.

One limitation to simulating clouds accurately is the challenge of representing their variability on scales smaller than the typical grid spacing of a dynamical model such as HWRF. Individual cloud elements are often sub-grid in size, and radiative transfer through fractional clouds strongly depends on whether they are vertically correlated such as for deep, convective clouds, or uncorrelated such as for randomly situated shallow cumulus under high clouds.


Height by longitude (west to east) cross-section of longwave radiative heating rate through the eye of Hurricane Joaquin as simulated by HWRF using different cloud overlap methods. The x-axis spans roughly ten degrees of longitude across the HWRF inner domain, and the linear vertical scale extends from the surface to the model top at 2 hPa while emphasizing the troposphere.

Using the Rapid Radiative Transfer Model for Global Climate Models (RRTMG) radiation code in HWRF, the primary objective of this project is to examine the effect of replacing the default maximum-random (MR) cloud overlap assumption with an exponential-random (ER) method, which has been shown to be more realistic relative to radar measurements within vertically deep clouds. The MR approach forces a condition of maximal overlap throughout adjacent partly cloudy layers, while the ER method relaxes this restriction by allowing the correlation to transition exponentially from maximum to random with vertical distance through the cloudy layers.
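The quantity that distinguishes the ER method is an inter-layer cloud correlation that decays with vertical separation. A minimal sketch of that decay, assuming a single decorrelation length, follows (the actual RRTMG implementation uses this correlation to construct the cloud overlap, which is not reproduced here):

```python
import numpy as np

def overlap_correlation(dz, decorrelation_length):
    """Exponential-random overlap: the correlation between cloudy layers decays
    from 1 (maximum overlap) toward 0 (random overlap) with separation dz."""
    return np.exp(-dz / decorrelation_length)

# Layer separations (m) and an assumed ~2 km decorrelation length
separations = np.array([0.0, 500.0, 1000.0, 2000.0, 5000.0])
alpha = overlap_correlation(separations, decorrelation_length=2000.0)
# alpha = 1 reproduces maximum overlap for adjacent layers;
# alpha -> 0 recovers random overlap for widely separated layers.
print(alpha)
```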

A first step in assessing this change in HWRF is to show that it alters radiative heating rates enough to affect the development of a tropical cyclone (TC), since heating rates, along with surface fluxes, are the primary means by which a radiation code influences a dynamical model. For Hurricane Joaquin, a 2015 Atlantic basin storm with an unusual track that was challenging to predict, each overlap method causes longwave and shortwave heating rates to evolve very differently within and near the storm over multiple five-day HWRF forecast cycles. Over time, these changes modify the temperature, moisture and wind fields that exert a considerable influence on the predicted strength and movement of Joaquin.


HWRF five-day forecasts of Hurricane Joaquin track for the 2015 operational model (green) and the DTC/HWRF 2016 model using MR cloud overlap (blue) and ER cloud overlap (red) relative to the best-track observed position (white).

The full impact on TC track and intensity remains under investigation, since the cases studied to date respond very differently. Hurricane Joaquin track forecasts are dramatically altered in some forecast cycles, while more modest track changes are seen for storms embedded in strong steering flows such as East Pacific Hurricane Dolores from 2015 and Atlantic Hurricane Gonzalo from 2014. Intensity impacts are also case-dependent with improvement seen in some Joaquin forecast cycles and degraded intensity forecasts in other cases.

Our interaction with the DTC was a rewarding opportunity to acquire new insights on this topic, and we will pursue further research collaborations with the DTC and the NOAA/EMC Hurricane Team in the future.

Evaluating Convective Structure

Summer 2017

As a visitor to the DTC in 2016, University of North Dakota Ph.D. candidate Mariusz Starzec investigated the performance of regional summertime convective forecasts. In particular, he focused on model skill in predicting the coverage, morphology, and intensity of convection. Further emphasis was placed on how representative the simulated internal convective structure is of observed convection, using the reflectivity field as a proxy for convective processes.

Convection plays a major role in everyday weather and long-term climate. Any biases present in convective forecasts have important implications for the accuracy of operational forecasts, potential severe weather hazards, and climatic feedbacks. Validation of model forecasts is required to identify whether any of these biases exist.

For the DTC project, four months of forecasts from six 3-km Weather Research and Forecasting (WRF) model configurations were assessed, one of which was the High Resolution Rapid Refresh (HRRR). The WRF configurations consisted of combinations of varying microphysics schemes and model versions. The simulated reflectivity field was compared against the radar-observed reflectivity field, which provides an instantaneous snapshot of what is occurring in a convective system. More importantly, this approach allows the entire three-dimensional vertical structure of convective systems to be evaluated.


Total area of discrete objects above 45 dBZ with height for a variety of models simulations (colored) and observations (black).

Forecasts were analyzed using an object-based approach, where bulk attributes of discrete storm cells are emphasized instead of exact timing and location. Object counts and their respective areas with height were evaluated, along with the vertical distribution of reflectivity values within these objects. Overall, convective forecasts were generally more intense, contained more and larger objects, and covered more area than observed convection. The largest over-predictions occurred during the peak in the diurnal cycle. No major differences were found between model versions, although varying the microphysics caused large differences in the vertical distributions of object counts and areas.

Vertical distributions of reflectivity in forecasted and observed objects showed that simulated convection has a wider distribution of reflectivity values, especially aloft (>5 km). In general, reflectivity distributions were overly intense by 5-10 dBZ and reflectivity magnitudes in the melting layer were frequently and notably over-pronounced. A further inter-comparison of the model physics and versions revealed that although minor differences can be found near the surface at 1 and 2 km, major differences in convective structure can be found aloft.
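The CFADs compared below are essentially reflectivity histograms computed level by level and normalized. A minimal sketch with hypothetical data (Python/NumPy):

```python
import numpy as np

def cfad(reflectivity, dbz_edges):
    """Contoured Frequency by Altitude Diagram: for each level, the frequency
    of reflectivity values in each dBZ bin. reflectivity is (nlevels, npoints)."""
    nlev = reflectivity.shape[0]
    freq = np.empty((nlev, len(dbz_edges) - 1))
    for k in range(nlev):
        counts, _ = np.histogram(reflectivity[k], bins=dbz_edges)
        freq[k] = counts / max(counts.sum(), 1)   # normalize per level
    return freq

# Hypothetical reflectivity samples (dBZ) on 20 levels x 5000 in-object points
rng = np.random.default_rng(2)
refl = rng.normal(loc=35.0, scale=8.0, size=(20, 5000))
dbz_edges = np.arange(0.0, 70.0, 2.5)
print(cfad(refl, dbz_edges).shape)   # (20, 27)
```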


Contoured Frequency by Altitude Diagrams (CFADs) of reflectivity within 45 dBZ objects present at 2 km for a sample model dataset (left) and radar dataset (middle). The difference in frequency between the model and radar CFAD (right), where higher model frequency is red.

One of the findings of this project is that it is important to validate forecasts at multiple heights, as evaluation of model fields at a single level may not reveal biases present at other levels. More research into three-dimensional model verification is required, and new verification tools and algorithms that can accomplish such tasks are needed.

Mariusz was hosted by Tara Jensen and found that traveling to NCAR and collaborating with DTC was an invaluable learning experience, and he enjoyed getting to meet everyone and learn about their research. Outside of the DTC project, he had fun exploring around Boulder and hiking as many trails as possible in both the foothills and the Rockies.


Mariusz Starzec enjoying a trail with a backdrop of Mount Meeker and Longs Peak.

DTC Visitor Project

Spring 2016

This past winter, the DTC had the pleasure of hosting visiting scientist Dr. Liantang Deng of the Numerical Weather Prediction Center, China Meteorological Administration (CMA). His visit stemmed from the combined Weather Research and Forecasting (WRF) and Global/Regional Assimilation and PrEdiction System (GRAPES) modeling workshop in 2014, when Dr. Bill Kuo, Director of the DTC, helped facilitate interactions between the DTC and CMA. During his 2-month stay, he evaluated the GRAPES model by utilizing baseline data sets within the Mesoscale Model Evaluation Testbed (MMET), a framework established by the DTC to assist the research community in efficiently demonstrating the merits of new developments.


Composite reflectivity of the a) ARW and b) GRAPES model at forecast hour 36. The red oval is the location of the observed derecho as shown in the radar observation inset in the upper-left corner.

Dr. Deng’s testing focused on the historic derecho case of 29 June 2012, which impacted many states in the Midwestern and Mid-Atlantic regions. The GRAPES model domain and forecast period were set up similarly to the MMET baseline 12-km parent domain, which covers the full CONUS region and is integrated out to 84 hours; the initial and boundary conditions were derived from the GFS at 12 UTC on 28 June 2012. For comparison, Dr. Deng used the Advanced Research WRF (ARW) baseline, which was initialized with NAM and run with the operational Rapid Refresh (RAP) physics suite.

Post-processing of the GRAPES model output was conducted using the NCEP Unified Post-Processing (UPP) software. Even though UPP does not currently support the GRAPES model, Dr. Deng worked diligently to add and modify routines necessary for proper I/O, including addressing the vertical and horizontal grid-staggering, as well as addressing routines for select post-processed fields. These modifications are a welcome addition to UPP and can potentially be released as a community contribution in a future release. After the post-processing step, the DTC’s Model Evaluation Tools (MET) were used for the verification process and included statistical results for surface and upper-air point observations, as well as gridded precipitation observations.


Frequency Bias of 03-h accumulated precipitation over the Midwest region for the 36-h forecast lead time.

Although the GRAPES model was able to resolve a storm over the Midwest, the timing was behind and the location was too far north (Figure b). The strong leading edge observed in the storm did not form in the model, and the simulated storm weakened as it moved eastward. The ARW baseline, by comparison, failed to capture the event (Figure a), illustrating the impact that different physics suites and different initial and lateral boundary conditions have on the model forecasts. The ARW initiated a storm, but instead of strengthening as it moved eastward across the Midwest, it dissipated; GRAPES formed a storm but was not accurate in its timing, location, or formation of the strong leading edge. The 3-h accumulated precipitation frequency bias at the 36-h lead time over the Midwest (Figure below) shows a small high bias at the lowest thresholds, transitioning to a small low bias at higher thresholds, with the exception of the highest threshold. This plot shows that even though the timing and location of the storm were off, GRAPES did a decent job of forecasting the accumulated precipitation amounts (with a little overprediction at the low thresholds and underprediction at the higher thresholds).

Dr. Deng was very grateful to collaborate with the DTC and was surprised at how much he was able to accomplish in his short visit. We here at the DTC enjoyed working with him and would welcome him back for future visits to continue his work.


Implementation and Validation of a Geo-Statistical Observation Operator for the Assimilation of Near Surface Winds in GSI

Winter 2016

As a 2015 DTC visitor, Joël Bédard is working with Josh Hacker to apply a geo-statistical observation operator for the assimilation of near-surface winds in GSI for the NCEP Rapid Refresh (RAP) regional forecasting system.

Biases and representativeness errors limit the global influence of near-surface wind observations. Although many near-surface wind observations over land are available from the global observing system, they had not been used in data assimilation systems until recently, and many are still unused. Winds from small islands, sub-grid scale headlands, and tropical land areas are still excluded from the UK Met Office data assimilation system, while other operational systems simply blacklist wind observations from land stations (e.g., Environment Canada). Similarly, the RAP system uses strict quality control checks to prevent representativeness errors from degrading the near-surface wind analysis.

Model Output Statistics (MOS) methods are often used for forecast post-processing, and Bédard et al. previously evaluated MOS for use in data assimilation. Doing so increases the consistency between observations, analyses, and forecasts. They also addressed representativeness and systematic error issues by developing a geo-statistical observation operator based on a multiple grid-point approach called GMOS (Geophysical Model Output Statistics). The idea behind this operator is that the nearest grid points, or a simple interpolation of the surrounding grid points, may not represent conditions at an observing station, especially if the station is located in complex terrain or at a coastal site. On the other hand, among the surrounding grid points there are generally one or several that are more representative of the observing site. Thus, GMOS uses a set of geo-statistical weights relating the closest NWP grid points to the observation site. GMOS takes advantage of the correlation between resolved and unresolved scales to correct the stationary and isotropic components of the systematic and representativeness errors associated with local geographical characteristics (e.g., surface roughness or coastal effects). As a result, GMOS attributes higher weights to the most representative grid points and better represents the meteorological conditions at the site (see Figure).
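Conceptually, the operator replaces nearest-grid-point or bilinear interpolation with a statistically weighted combination of the surrounding grid points. The Python sketch below uses hypothetical weights for a single station (the real GMOS coefficients are fitted from historical data, as described above):

```python
import numpy as np

def gmos_operator(grid_values, weights):
    """Geo-statistical observation operator: map NWP values at the grid points
    surrounding a station to the station as a weighted sum, H(x) = sum_i w_i x_i."""
    return np.dot(weights, grid_values)

# Hypothetical 10-m wind speed (m/s) at the 4 grid points around a station
surrounding_winds = np.array([6.2, 4.8, 7.1, 5.5])

# Bilinear-style weights vs. hypothetical GMOS-style weights that favor the
# grid point most representative of a coastal or complex-terrain site
bilinear_weights = np.array([0.25, 0.25, 0.25, 0.25])
gmos_weights = np.array([0.05, 0.10, 0.70, 0.15])

print(gmos_operator(surrounding_winds, bilinear_weights))   # 5.9
print(gmos_operator(surrounding_winds, gmos_weights))       # ~6.59
```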

Near-surface wind observations from ~5000 SYNOP (surface synoptic observation) stations were assimilated along with the operational assimilation dataset in Environment Canada's global deterministic prediction system. Although results are encouraging, they are not statistically significant, as a large quantity of observations is already assimilated in the system (14 million observations per day). With the objective of making better use of near-surface wind observations and improving their impact on short-term tropospheric forecasts, this collaborative project aims to assimilate near-surface wind observations over land in the RAP system. To address the statistical significance issue, near-surface wind observations from all available surface stations located over the North American continent are considered (~20,000 SYNOP, METAR, and mesonet stations).

To date, the GMOS operator has been implemented in the GSI code, and the operator's statistical coefficients have been obtained using historical data. The evaluation runs are currently ongoing.


Figure: Comparison of the Numerical Weather Prediction model representation of the surface roughness and topographic height with the multipoint linear regression weights at the North Cape site: (a) subset of the GEM-LAM (2.5km) horizontal grid superimposed on the site map; (b) multipoint linear regression weights; (c) modelled surface roughness; (d) modelled topographic height. Figure from Bédard et al., 2013.

Object-based Verification Methods

Autumn 2016

As visitors to the DTC in 2015, Jason Otkin, Chris Rozoff, and Sarah Griffin explored using object-based verification methods to assess the accuracy of cloud forecasts from the experimental High Resolution Rapid Refresh (HRRR) model. Though the forecast accuracy could be assessed using traditional statistics such as root mean square error or bias, additional information about errors in the spatial distribution of the cloud field could be obtained by using more sophisticated object-based verification methods.

The primary objective of their visit to the DTC was to learn to use the Model Evaluation Tools’ Method for Object-Based Diagnostic Evaluation (MODE). Once they learned how MODE defines single objects and clusters of objects, they could use MODE output of individual objects and matched pairs to assess the forecast accuracy.

The team also wanted to develop innovative methods using MODE output to provide new insights. For example, they were able to calculate and compare how well certain characteristics of the forecast cloud object, such as its size and location, match those of the observed cloud object.

One outcome of their DTC visit was the development of the MODE Skill Score (MSS). The MSS uses the interest values generated by MODE, which characterize how closely the forecast and observed objects match each other, along with the size of the observed object, to portray the MODE output as a single number.
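The article does not give the exact formula, but one plausible reading of that description is an observed-size-weighted average of MODE interest values. The Python sketch below is hypothetical and is not necessarily the published MSS definition:

```python
import numpy as np

def mode_skill_score(interest_values, observed_object_areas):
    """A single-number summary of MODE output: interest values of matched
    forecast/observed object pairs weighted by observed-object size.
    Hypothetical sketch only; not necessarily the published MSS formula."""
    interest = np.asarray(interest_values, dtype=float)
    areas = np.asarray(observed_object_areas, dtype=float)
    return float(np.sum(interest * areas) / np.sum(areas))

# Hypothetical MODE output: three observed cloud objects and the interest
# values of their best-matching forecast objects (0 = no match, 1 = perfect)
print(mode_skill_score([0.9, 0.6, 0.0], [1200.0, 300.0, 150.0]))
```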

For their project, they assessed the 1-h experimental HRRR forecast accuracy of cloud objects occurring in the upper troposphere, where satellite infrared brightness temperatures are most sensitive. They used simulated Geostationary Operational Environmental Satellite (GOES) 10.7 μm brightness temperatures generated for each HRRR forecast cycle, and compared them to the corresponding GOES observations. Forecast statistics were compiled during August 2015 and January 2016 to account for potential differences in cloud characteristics between the warm and cool seasons.

Overall, the higher interest value scores during August indicate that the sizes of the forecast objects more closely match those of the observed objects, and that the spatial displacement between their centers of mass is smaller. They also found that smaller cloud objects have less predictability than larger objects, and that the size of the 1-h HRRR forecast cloud objects is generally more accurately predicted than their location.

The researchers hope this knowledge helps HRRR model developers identify reasons why a particular forecast hour or time period is more accurate than another. It could also help diagnose problems with the forecast cloud field to make forecasts more accurate.

Otkin, Rozoff, and Griffin were visiting from the University of Wisconsin-Madison Space Science and Engineering Center and Cooperative Institute for Meteorological Satellite Studies. They were hosted by Jamie Wolff of NCAR. The DTC visitor project allowed the team to discuss methods, insights, and results face-to-face. The team feels this project scratched the surface of how to use satellite observations and object-based verification methods to assess forecast accuracy, and that the door is open for future collaboration.

Contributed by Jason Otkin, Sarah Griffin, and Chris Rozoff.

Harnessing the Power of Evolution for Weather Prediction

Summer 2016

As a DTC Visitor in 2015, Paul Roebber explored an idea for generating ensemble weather predictions known as evolutionary programming (EP). The method relies on a gradually more restrictive cost function to produce and evaluate succeeding generations of a population of algorithms until a best ensemble solution is determined based on cross-validation. Roebber developed the approach to produce baseline prediction equations equivalent to linear or nonlinear multiple regression equations (a kind of model output statistics, or MOS), modified by if-then conditionals and using observations as well as numerical weather prediction (NWP) model output.
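The general flavor of such an evolutionary-programming loop can be sketched in a few lines of Python. The example below evolves a population of simple linear predictors against an RMSE cost with a shrinking mutation scale; the actual method evolves regression-like algorithms with if-then conditionals and a gradually tightening cost function, which is only caricatured here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training data: NWP predictors x and an observed predictand y
x = rng.normal(size=(500, 3))
y = x @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.5, size=500)

def cost(weights):
    """RMSE of a simple linear predictor; stands in for the EP cost function."""
    return np.sqrt(np.mean((x @ weights - y) ** 2))

population = rng.normal(size=(50, 3))          # initial candidate predictors

for generation in range(200):
    # Mutate: perturb each member; the perturbation shrinks over generations,
    # mimicking an increasingly restrictive selection pressure
    scale = 0.5 * (1.0 - generation / 200)
    offspring = population + rng.normal(scale=scale, size=population.shape)
    combined = np.vstack([population, offspring])
    # Select: keep the lowest-cost half of parents plus offspring
    costs = np.array([cost(w) for w in combined])
    population = combined[np.argsort(costs)[:50]]

best = population[0]
print("best weights:", np.round(best, 2), "RMSE:", round(float(cost(best)), 3))
```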

The prime objective of his DTC Visitor project was to explore possible improvements to the method. A first step, using the Yellowstone supercomputer, was to consider the relative contribution of large ensemble populations, numbering from 3,000 to as many as 500,000 possible members, to ensemble diversity. As illustrated in the figure below for 60 hour forecasts of minimum temperature, smaller as well as very large EP ensembles outperform the GFS 21-member ensemble MOS forecasts in both a deterministic (RMSE) and probabilistic (Brier Skill Score; BSS) sense, but the increase in ensemble size (indicated by the size of the bubbles) provides only minor additional skill.

Specific issues explored in the context of next-day heavy convective rainfall forecasting included the performance of the method regionally and locally, compared with multiple logistic regression (MLR) and artificial neural networks (ANN), and ensemble member selection for use in bias-calibration methods such as Bayesian Model Combination.

As illustrated in the performance diagram in the figure below for regional forecasts of rainfall in excess of 1.5 inches, MLR and EP demonstrate comparable skill, both superior to that of a trained ANN. The slightly different performance characteristics of the three methods (higher hits and false alarms versus lower hits and false alarms) suggest the possibility of combining the information in useful ways operationally. Insights gained from this work are leading to several collaborations with NOAA scientists related to adaptive systems and deep learning networks.

Diagnosing Tropical Cyclone Motion Forecast Errors in HWRF

Winter 2014

As a DTC visitor in 2013, Thomas Galarneau has applied a new diagnostic method for quantifying the phenomena responsible for errors in tropical cyclone (TC) storm tracks to an inventory of recent hurricanes.

The method is founded on the notion that errors in storm motion at relatively short lead times (12-48 h) lead to large position errors at later times. The objective of his DTC Visitor Project was to diagnose sources of error in TC motion forecasts from the HWRF model. Of particular interest was the impact of model errors in forecasts of the environmental steering flow at different stages of Atlantic Basin TC evolution. By isolating the vortex structure from the larger-scale flow in a TC-relative framework, he has been able to show (as in the scatterplot in the figure below) that during the northeastward-moving (post-curvature) phase, TC motion errors are generally southwestward. As illustrated in the TC-relative geographical plot of the figure below, this error can be attributed to a northeasterly environmental wind error larger than 1.0 m/s, which in turn appears to be associated with an anticyclonic error to the northwest, and a cyclonic error to the southeast, of the forecasted TC. Further details of his project will be available soon at http://www.dtcenter.org/visitors/year_archive/2013/ when DTC visitor reports are posted.
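A minimal sketch of the motion calculation underlying such diagnostics is given below, using hypothetical forecast and best-track positions (Python; the vortex-removal and steering-flow decomposition steps are not reproduced):

```python
import numpy as np

def motion_vector(lat, lon, dt_hours):
    """Approximate storm motion (eastward u, northward v, in m/s) from
    consecutive track positions given in degrees."""
    r_earth = 6.371e6
    dlat = np.radians(np.diff(lat))
    dlon = np.radians(np.diff(lon))
    mean_lat = np.radians(0.5 * (lat[:-1] + lat[1:]))
    u = r_earth * np.cos(mean_lat) * dlon / (dt_hours * 3600.0)
    v = r_earth * dlat / (dt_hours * 3600.0)
    return u, v

# Hypothetical 12-hourly forecast and best-track positions (deg N, deg E)
fcst_lat = np.array([25.0, 26.0, 27.3, 28.9])
fcst_lon = np.array([-70.0, -69.0, -67.8, -66.2])
best_lat = np.array([25.0, 26.2, 27.8, 29.8])
best_lon = np.array([-70.0, -68.8, -67.2, -65.2])

uf, vf = motion_vector(fcst_lat, fcst_lon, 12.0)
ub, vb = motion_vector(best_lat, best_lon, 12.0)
# Negative components indicate a southwestward motion error relative to best track
print("motion error u, v (m/s):", np.round(uf - ub, 2), np.round(vf - vb, 2))
```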



Cold Pools and the WRF

Summer 2013

Robert Fovell and Travis Wilson from the University of California, Los Angeles recently completed a visitor project titled “Improvements to modeling persistent surface cold pools in WRF,” aspects of which will be part of Travis’ PhD work. Travis spent nine months working at the DTC in Boulder, much of the time with his DTC host Jamie Wolff, and Rob visited for two weeks in March and June. A principal motivation for their study was the occasionally poor prediction in numerical models (including WRF) of the formation and breakup of fog in California’s Central Valley, and the possibility that better land surface models would improve those predictions. One significant result of their study is the development of a hybrid land surface model that combines the complexity of the Noah land surface model’s soil moisture formulation with the simplicity of a thermal diffusion (slab) heat transfer model. Some of their results were presented at the recent WRF Users Workshop in Boulder and can be found at http://www.mmm.ucar.edu/wrf/users/workshops/WS2013/ppts/4.4.pdf