Article

Comparison between Multi-Physics and Stochastic Approaches for the 20 July 2021 Henan Heavy Rainfall Case

1 CMA-GDOU Joint Laboratory for Marine Meteorology & South China Sea, Institute of Marine Meteorology, Guangdong Ocean University, Zhanjiang 524088, China
2 College of Ocean and Meteorology, Guangdong Ocean University, Zhanjiang 524088, China
3 Shenzhen Institute of Guangdong Ocean University, Shenzhen 518000, China
4 Institute of Urban Meteorology, China Meteorological Administration, Beijing 100089, China
5 School of Marine Sciences, Nanjing University of Information Science and Technology, Nanjing 210044, China
* Authors to whom correspondence should be addressed.
Current address: South China Sea Institute of Marine Meteorology, Guangdong Ocean University, Zhanjiang 524088, China.
Atmosphere 2022, 13(7), 1057; https://doi.org/10.3390/atmos13071057
Submission received: 27 May 2022 / Revised: 22 June 2022 / Accepted: 2 July 2022 / Published: 3 July 2022
(This article belongs to the Special Issue Meteorological Extremes in China)

Abstract
In this study, three model perturbation schemes, the stochastically perturbed parameter scheme (SPP), the stochastically perturbed physics tendency scheme (SPPT), and multi-physics parameterization (MP), were used to represent model errors in a regional ensemble prediction system (REPS). To study the effects of different model perturbation schemes on heavy rainfall forecasting, three sensitivity experiments using three different combinations of the schemes (EXP1: MP; EXP2: SPPT + SPP; and EXP3: MP + SPPT + SPP) were set up based on the Weather Research and Forecasting (WRF) V4.2 model for a heavy rainfall case that occurred in Henan, China during 20–22 July 2021. The results show that the model perturbation schemes can provide forecast uncertainties for this heavy rainfall case. The stochastic physics perturbation method improved the heavy rainfall forecast skill by approximately 5%, and EXP3 performed better than EXP1 or EXP2. The spread-to-root-mean-square-error ratios (spread/RMSE) of EXP3 were closer to 1 than those of EXP1 and EXP2; in particular, for the 10 m meridional wind, the spread/RMSE was 0.94 for EXP3 and approximately 0.85 for EXP1 and EXP2. EXP3 also performed better in the Brier score verification, with a Brier score 5% lower than those of EXP1 and EXP2 at the 25 mm rainfall threshold. The growth of the initial ensemble variances of the different model perturbation schemes was explored, and the results show that the perturbation energy of EXP3 developed faster, reaching 27.22 J/kg, whereas those of EXP1 and EXP2 were only 19.18 J/kg and 20.81 J/kg, respectively. The weak initial perturbation associated with the wind shear north of the heavy rainfall location developed most readily in EXP3.

1. Introduction

Forecast skill has improved in recent years with the development of mesoscale models; however, initial errors, model errors, and the chaotic nature of the atmosphere still limit forecasts of high-impact weather, particularly heavy rainfall. The growth rate of small initial errors was studied by Edward Lorenz [1], whose work motivated the idea of ensemble forecasting. Ensemble forecasting is an important method for addressing the uncertainties in numerical weather prediction and is an important field within it [2]. It estimates the error distribution of the initial state by generating a collection of initial values, each representing a possible true state of the atmosphere, and then produces a reasonable numerical weather forecast from the resulting set of integrations.
The sources of forecast uncertainty in ensemble forecast systems include initial-condition, boundary-condition, and model errors [3]. Ensemble forecasts generated by numerical weather prediction (NWP) systems that account only for initial-condition uncertainty tend to be underdispersive, unreliable, and overconfident. Many studies have shown that initial perturbations alone cannot comprehensively explain prediction uncertainties [4]; model uncertainty also plays an important role in ensemble prediction systems (EPS). Several perturbation strategies exist for mesoscale EPS: multi-model schemes [5,6,7,8], which combine several different models to describe the uncertainties of the model's dynamical and physical processes; and multi-physics parameterization ensemble schemes [9,10,11,12], which use different combinations of physics parameterizations to describe the varied physical processes. However, developing and maintaining different parameterization schemes is resource intensive, and each ensemble member has a different climatological mean error. An alternative is to add random perturbations during model integration to increase the ensemble spread. Three representative stochastic physics perturbation methods [13] are the stochastic kinetic-energy backscatter (SKEB), stochastically perturbed physics tendency (SPPT), and stochastically perturbed parameter (SPP) schemes.
The SPPT scheme perturbs the accumulated physics tendencies of potential temperature, wind, and humidity. It was first introduced into the European Centre for Medium-Range Weather Forecasts Integrated Forecasting System (ECMWF IFS) by Buizza [14]. The scheme assumes that model random errors associated with physical parameterizations can be simulated by multiplying the total parameterized tendencies by a random number sampled from a uniform distribution between 0.5 and 1.5. The results showed that the scheme increases the spread of the forecasting system and improves the probabilistic forecast skill for elements such as rainfall. Numerous operational forecasting systems now apply the SPPT scheme [15,16]. For example, the Canadian Center for Environmental Prediction and the Japan Meteorological Agency use the SPPT scheme in their global ensemble forecasts, and the French Meteorological Agency uses it in its regional ensemble forecast system. Romine [17] used the SPPT scheme to represent model errors and improve the reliability of ensemble forecasts, and the resulting ensemble experiments revealed that SPPT improved probabilistic forecast skill.
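As a conceptual illustration of the SPPT idea described above, the following minimal Python sketch multiplies a single parameterized tendency by a random factor drawn uniformly from [0.5, 1.5]. The function name and example tendency value are hypothetical, and a real SPPT implementation uses spatially and temporally correlated random patterns rather than independent draws at each point.

```python
import random

def sppt_tendency(tendency, rng, lo=0.5, hi=1.5):
    """Multiply a parameterized tendency by a random factor in [lo, hi]."""
    return tendency * rng.uniform(lo, hi)

rng = random.Random(0)
t_tend = 2.0e-4   # hypothetical temperature tendency from physics (K/s)
perturbed = sppt_tendency(t_tend, rng)

# the perturbed tendency always stays within 0.5-1.5 times the original
print(0.5 * t_tend <= perturbed <= 1.5 * t_tend)  # -> True
```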
Most physical processes, such as convection and boundary-layer exchange, occur on spatial scales too small to be resolved explicitly and must therefore be parameterized. Parameterization involves numerous empirical, tunable parameters whose values are typically chosen subjectively; the SPP scheme can describe these uncertainties [13]. The SPP scheme perturbs parameters in selected physics packages, such as the Grell–Freitas (GF) convection scheme, the Mellor–Yamada Nakanishi Niino (MYNN) boundary layer scheme, and the RUC land surface model (LSM). Perturbing parameters stochastically directly characterizes the model uncertainty arising from physical parameterization schemes. A random parameter (RP) scheme with stochastically perturbed parameters was first developed by the UK Met Office for its global ensemble forecasting system and subsequently applied to convective-scale ensemble forecasting systems [10]. ECMWF has also investigated stochastic parameter perturbations within the framework of the SPP scheme [18]. Xu et al. [19] conducted ensemble forecast experiments based on the SPP scheme and demonstrated that it can effectively improve both the overall ensemble probabilistic forecast skill and the ensemble rainfall probability forecast skill.
The performance of different stochastic physics perturbation schemes has been compared in recent studies. Four model-error schemes for probabilistic forecasts with the WRF-ARW mesoscale ensemble system were evaluated [20]; notably, including a model-error representation substantially increased forecast skill near the surface, and combining multiple model-error schemes produced the best-performing ensemble systems. Cai et al. [21] constructed a storm-scale ensemble forecast system based on the SPPT and SKEB schemes, combined them into a mixed perturbation scheme, and clarified the effect of the combined schemes on rainfall. Qiao et al. [22] implemented two stochastic methods in the Lin micro-physics scheme of the ARPS model, stochastically perturbing the temperature tendency and the intercept parameters of the micro-physics, and investigated an idealized supercell case. Zhang et al. [23] developed a mixed perturbation method combining a multi-physics combination with SKEB, and the results showed that this method performed best in perturbation-growth capability. A hybrid stochastically perturbed parameterization (HSPP) scheme was proposed and implemented in a convection-permitting limited-area ensemble forecasting system, and HSPP significantly increased the ensemble spread of temperature, humidity, wind speed, and pressure, especially in the lower levels of the atmosphere [24]. Zhang [25] developed three new implementation strategies for surface and model physics perturbations, evaluated them for 19 tropical cyclones (TCs) making landfall in China during 2014–2016, and demonstrated that these methods improved forecasts more for non-intensifying than for intensifying TCs.
Despite the increasing prevalence of SPP and SPPT in ensemble systems, less attention has been given to their benefits for extreme heavy rainfall forecasting. In this study, we conducted a convective-scale regional ensemble forecast study based on the WRF model for the extreme heavy rainfall event that occurred in the Henan region on 20–22 July 2021, using the SPP and SPPT schemes. We explored whether ensemble forecasts constructed with SPP and SPPT, applied to either a single physics parameterization or a multi-physics combination, are as good as or better than ensemble forecasts using the multi-physics combination alone. We also evaluated the performance of the combined perturbation scheme for this extreme case. The remainder of the paper is organized as follows: Section 2 introduces the data sources and verification methods used in this study, Section 3 presents the model setup and experimental schemes, Section 4 analyzes the ensemble forecast experiment results, and Section 5 presents the conclusions.

2. Data and Methods

2.1. Data

The Global Ensemble Forecast System (GEFS) of the National Centers for Environmental Prediction (NCEP) was used to provide the initial and boundary conditions for the regional ensemble forecasting experiments in this study. The GEFS provided 20 ensemble members at a resolution of 1° × 1°; the initial time of the model was 00:00 (UTC) on 20 July 2021, and the forecast lead time was 48 h.
The verification data for the rainfall were obtained from the meteorological station data of the China Meteorological Administration (CMA) with a time resolution of 1 h, including 1-h, 3-h, 6-h, 12-h, and 24-h accumulated rainfall. ERA5 (the fifth generation of ECMWF global climate reanalysis) combines vast amounts of historical observations into global estimates using advanced modeling and data assimilation systems and is one of the world's best reanalysis data sources. The surface and upper-air variables were verified against ERA5 with a time resolution of 1 h and a horizontal resolution of 0.25° × 0.25°. The ERA5 data used for this verification include meteorological variables at pressure levels and the single level, such as temperature, U wind, and V wind.

2.2. Methods

2.2.1. Spread and Root Mean Square Error (RMSE)

The spread is the standard deviation of the ensemble members about the ensemble mean; within a reasonable range, a larger spread gives the ensemble more potential to include the various possible states of the real atmosphere. When the spread is too small, the ensemble is underdispersive and overconfident; when the spread is too large, the forecast error grows and forecast reliability decreases. The RMSE measures the difference between the forecast field and the analysis field; the larger the value, the larger the forecast error. For an ideal ensemble forecast system, the RMSE and the spread should have the same magnitude and rate of change. Briefly, the reliability of an ensemble forecast system can be measured by whether the ensemble spread is equal or close to the error of the ensemble-mean forecast over the entire forecast period [26].
The spread can be written as follows:
$$S_{i,t} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(f_{i,t}^{n} - \bar{f}_{i,t}\right)^{2}}$$
where $f_{i,t}^{n}$ is the forecast at the i-th station and t-th time for the n-th ensemble member, $\bar{f}_{i,t}$ is the ensemble-mean forecast, and $N$ is the number of ensemble members.
The RMSE can be expressed as follows:
$$E_{t} = \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left(\bar{f}_{i,t} - obs_{i,t}\right)^{2}}$$
where $obs_{i,t}$ is the observation at the i-th station and t-th time, and $M$ is the number of stations.
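The two verification quantities above can be sketched in a few lines of Python; the function names and the toy temperature values below are illustrative only, not part of the study's verification code.

```python
import math

def spread(members):
    """Ensemble spread: standard deviation of members about the ensemble mean."""
    mean = sum(members) / len(members)
    return math.sqrt(sum((m - mean) ** 2 for m in members) / len(members))

def rmse(forecast_means, observations):
    """RMSE of the ensemble-mean forecast against station observations."""
    m = len(observations)
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast_means, observations)) / m)

members = [24.1, 25.3, 24.8, 25.9]   # hypothetical 2 m temperature forecasts (deg C)
print(round(spread(members), 3))                    # -> 0.661
print(round(rmse([25.0, 24.5], [24.6, 25.1]), 3))   # -> 0.51
```

In an ideal ensemble system, these two numbers would track each other over the forecast period, which is exactly the spread/RMSE ratio examined later in the paper.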

2.2.2. Percent Bias

Percent bias (PBIAS) measures the average tendency of the simulated values to be larger or smaller than the observed values. The optimal PBIAS value is 0.0, with small magnitudes indicating accurate model simulation. Positive values indicate an overestimation bias, whereas negative values indicate an underestimation bias. The calculation formula can be expressed as follows:
$$\mathrm{PBIAS} = \frac{\sum_{i=1}^{n}\left(P_{i} - O_{i}\right)}{\sum_{i=1}^{n} O_{i}} \times 100$$
where $n$ is the number of stations, $P_i$ is the ensemble forecast value, and $O_i$ is the observation at the i-th station. The result is expressed as a percentage (%) [27].
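The PBIAS formula translates directly into code; this is a minimal sketch with hypothetical forecast/observation pairs, not the study's implementation.

```python
def pbias(pred, obs):
    """Percent bias: positive -> overestimation, negative -> underestimation."""
    return 100.0 * sum(p - o for p, o in zip(pred, obs)) / sum(obs)

# hypothetical forecasts and observations at three stations
print(round(pbias([12.0, 8.0, 30.0], [10.0, 10.0, 25.0]), 2))  # -> 11.11
```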

2.2.3. Perturbation Energy

In this study, the difference total energy (DTE) was used to characterize the perturbations of the three ensemble forecast experiments. The total perturbation energy at a grid point can be defined as follows:
$$\mathrm{DTE} = \frac{1}{2}\left(u'^{2}_{i,j,k} + v'^{2}_{i,j,k} + \frac{C_p}{T_r}\,T'^{2}_{i,j,k}\right)$$
where $u'$, $v'$, and $T'$ represent the perturbations of the horizontal wind components and temperature, respectively, defined as the difference between an ensemble member's forecast and the ensemble average; $T_r$ is the reference temperature (270 K); $C_p$ is the specific heat of dry air at constant pressure (1005.7 J kg$^{-1}$ K$^{-1}$); $i$ and $j$ are the indices of the east–west and north–south grid points, respectively; and $k$ is the vertical layer index [28,29].
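The DTE formula at a single grid point can be sketched as follows; the perturbation values passed in are hypothetical.

```python
CP = 1005.7   # specific heat of dry air at constant pressure (J kg-1 K-1)
TR = 270.0    # reference temperature (K)

def dte(u_pert, v_pert, t_pert):
    """Difference total energy at one grid point from wind/temperature perturbations."""
    return 0.5 * (u_pert ** 2 + v_pert ** 2 + (CP / TR) * t_pert ** 2)

# hypothetical perturbations: 3 m/s and 4 m/s wind components, 1.5 K temperature
print(round(dte(3.0, 4.0, 1.5), 2))  # -> 16.69
```

Summing this quantity over all grid points (i, j, k) and averaging over members gives the domain perturbation energy values (in J/kg) reported in Section 4.3.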

2.2.4. Brier Score

The Brier score, a common method in ensemble forecasting, is a mean square probability error defined by Brier [30], describing the degree of deviation of the ensemble probability from the observed probability. The Brier score is always between 0.0 and 1.0, where a score of 1.0 indicates the worst accuracy of the ensemble forecast system, and a Brier score of 0.0 indicates a perfect forecasting skill. The calculation formula can be expressed as follows:
$$\mathrm{BS} = \frac{1}{N}\sum_{n=1}^{N}\left(P_{n} - O_{n}\right)^{2}$$
where $N$ is the number of stations, $P_n$ is the ensemble forecast probability of cumulative rainfall at the n-th station, and $O_n$ is the observed frequency at the n-th station. $O_n$ is 1 when the observation exceeds the set threshold, and 0 otherwise.
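A minimal sketch of the Brier score computation, with hypothetical exceedance probabilities and binary outcomes:

```python
def brier_score(probs, outcomes):
    """Brier score: mean squared difference between forecast probability and outcome."""
    n = len(probs)
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / n

# hypothetical forecast probabilities that 6-h rainfall exceeds a threshold at
# four stations, against whether it actually did (1) or not (0)
print(round(brier_score([0.8, 0.3, 0.6, 0.1], [1, 0, 1, 0]), 3))  # -> 0.075
```

A perfectly sharp, perfectly correct forecast scores 0.0; a forecast that always assigns the wrong category with full confidence scores 1.0.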

2.2.5. Bias Score

The bias score is a verification method that reflects the overall rainfall forecast performance. It represents the frequency of forecast events without considering forecast accuracy and is used to measure the forecast deviation of the model at a given rainfall magnitude. Numerically, the score equals the ratio of the number of grid points at which the forecast meets a given precipitation threshold to the number of grid points at which the observed precipitation does.
To calculate the bias score, the numbers of hits, misses, and false alarms at the stations were counted (Table 1). When the number of false alarms exceeds the number of misses (bias > 1), the event is over-forecast and the result is wetter than the actual conditions; when the number of misses exceeds the number of false alarms (bias < 1), the event is under-forecast and the result is drier than the observations. When bias = 1, the forecast deviation is 0, representing the highest forecast skill. The calculation formulas can be expressed as follows:
$$\mathrm{False\;alarms} = \mathrm{Prediction} - \mathrm{Hits}$$
$$\mathrm{Misses} = \mathrm{Observation} - \mathrm{Hits}$$
$$\mathrm{BIAS} = \frac{\mathrm{Hits} + \mathrm{False\;alarms}}{\mathrm{Hits} + \mathrm{Misses}}$$

2.2.6. Area Under the Curve

The area under the curve (AUC) is a model-selection metric for classification problems. The receiver operating characteristic (ROC) is a probability curve that shows how well the model distinguishes the given classes in terms of predicted probability. A typical ROC curve has the false positive rate (FPR) on the x-axis and the true positive rate (TPR) on the y-axis; the AUC is the area between the ROC curve and the x-axis. The larger this area, the better the model distinguishes the classes; the ideal AUC value is 1. The calculation formulas can be expressed as follows:
$$\mathrm{FPR} = \frac{\mathrm{False\;alarms}}{\mathrm{False\;alarms} + \mathrm{Correct\;negatives}}$$
$$\mathrm{TPR} = \frac{\mathrm{Hits}}{\mathrm{Hits} + \mathrm{Misses}}$$
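One common way to obtain the AUC from ROC points is trapezoidal integration over (FPR, TPR) pairs computed at several probability thresholds. The sketch below uses hypothetical contingency counts and is not the paper's implementation.

```python
def roc_point(hits, misses, false_alarms, correct_negatives):
    """(FPR, TPR) for one probability threshold's contingency counts."""
    fpr = false_alarms / (false_alarms + correct_negatives)
    tpr = hits / (hits + misses)
    return fpr, tpr

def auc(points):
    """Trapezoidal area under ROC points sorted by FPR, incl. (0,0) and (1,1)."""
    pts = sorted([(0.0, 0.0)] + points + [(1.0, 1.0)])
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# hypothetical contingency counts at two probability thresholds
pts = [roc_point(40, 10, 20, 80), roc_point(30, 20, 5, 95)]
print(round(auc(pts), 3))  # -> 0.84
```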

2.2.7. Talagrand Histogram

The Talagrand histogram provides information on the reliability of the ensemble forecast system as well as on its spread. Ideally, the Talagrand histogram is flat, indicating that the reliability of the ensemble forecast system is excellent and that the ensemble spread accurately reflects the forecast uncertainty. A "U"-shaped distribution indicates that the ensemble spread is too small (underdispersion); an "A"-shaped distribution indicates that the spread is too large (overdispersion); an "L"-shaped distribution indicates a systematic over-forecasting bias; and a "J"-shaped distribution indicates a systematic under-forecasting bias.
Consider an ensemble forecast system with N members and K grid points in the verification region. At each grid point j, the forecast of a meteorological variable by the i-th member can be written as $x_{i,j}^{f}$ and the corresponding observation as $x_{j}^{o}$, where f denotes the forecast, o denotes the observation, and i = 1, 2, 3, ..., N indexes the ensemble members. Sorting the N member forecasts defines N + 1 rank intervals; as the number of samples increases, the frequency $S_i$ of the observations falling in the i-th interval can be counted. If the total number of valid samples is M, the probability distribution P and the probability mean squared error Q of the observations over the intervals can be calculated, where $\bar{P}$ is the average frequency per interval. The Talagrand distribution (histogram) is then plotted from the probability distribution $P_i$ [2]:
$$P_{i} = \frac{S_{i}}{M}$$
$$Q = \frac{1}{N+1}\sum_{i=1}^{N+1}\left(\bar{P} - P_{i}\right)^{2}$$
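The rank counting underlying the Talagrand histogram can be sketched as follows: for each sample, the observation's rank among the sorted member forecasts selects one of the N + 1 bins. The ensemble values and observations below are hypothetical.

```python
def talagrand_counts(forecast_sets, observations):
    """Count how often the observation falls in each of the N+1 rank bins."""
    n = len(forecast_sets[0])
    counts = [0] * (n + 1)
    for members, obs in zip(forecast_sets, observations):
        rank = sum(1 for m in sorted(members) if m < obs)
        counts[rank] += 1
    return counts

# 4-member ensemble at three stations (hypothetical values)
ens = [[1.0, 2.0, 3.0, 4.0], [2.0, 2.5, 3.0, 3.5], [0.5, 1.0, 1.5, 2.0]]
obs = [2.5, 5.0, 0.1]
print(talagrand_counts(ens, obs))  # -> [1, 0, 1, 0, 1]
```

Dividing each count by the number of samples M gives the probabilities $P_i$ above, from which the histogram is drawn and Q computed.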

3. Model and Experiment

3.1. Model Settings

In this study, the regional ensemble forecast experiment used WRF model V4.2, with a horizontal resolution of 3 km × 3 km, 50 vertical layers, and a model top of 50 hPa. The simulated area spans 29.089°–38.986° N, 108.395°–117.719° E (Figure 1), covering Henan and parts of the surrounding provinces, with a total of 287 × 369 grid points. Twenty ensemble forecasts were used in this study, with initial and boundary data taken from the NCEP GEFS at a resolution of 1° × 1°. The boundary data required by the model were obtained from the GEFS, and the regional ensemble forecast fields were obtained by dynamical downscaling. The experimental period covered the two consecutive days from 20 to 22 July 2021, starting at 00:00 UTC on 20 July, with a forecast length of 48 h.

3.2. Experimental Scheme

Three sensitivity experiments using different model perturbation schemes were conducted in this study. The details are presented in Table 2.
Multiphysical parameterization scheme (EXP1: MP only). A combination of different physical parametrization schemes was used for each ensemble member. There were five micro-physics schemes used in this study: the Morrison double-moment, Thompson, WRF single-moment 6-class, Eta, and P3 schemes [32,33,34,35,36]. The revised MM5 and Eta similarity schemes were used for the surface layer schemes [37,38]. Three planetary boundary layer physics schemes, the Yonsei University, Mellor–Yamada–Janjic, and Mellor–Yamada Nakanishi Niino (MYNN) level 2.5 schemes [39,40,41], were combined with the micro-physics and surface layer schemes for each ensemble member. The combination details are presented in Table 3.
Combination of the SPP and SPPT schemes (EXP2: SPP + SPPT). The SPPT scheme applies a random pattern to perturb the accumulated physics tendencies (except those from micro-physics) of potential temperature, wind, and humidity. The SPP scheme applies a random pattern to perturb parameters in selected physics packages, namely the GF convection, MYNN boundary layer, and RUC LSM schemes. EXP2 combined a single, fixed physics configuration with the SPP and SPPT schemes: the Morrison double-moment scheme was used for mp_physics, the revised MM5 scheme for sf_sfclay_physics, and the MYNN level 2.5 scheme for bl_pbl_physics.
Combination of SPP and SPPT with multiple physical parameterization schemes (EXP3: SPP + SPPT + MP). The combination of different physical parameterization schemes was used to obtain physical parameterization schemes for different ensemble members (configurations of ensemble member parameterization schemes numbered 01–20 in Table 3), combined with the SPP and SPPT schemes, for the ensemble forecasting experiments.

4. Results

Ensemble forecasts provide probabilistic forecasts, making the ensemble system forecast experiments complex, and their forecast results require a range of probabilistic verification methods for evaluation. The accuracy of the ensemble forecast results can be expressed in two different aspects: the forecast skill, indicating whether the ensemble forecast results are close to the observed values, and the dispersion, indicating whether the ensemble forecast can accurately reflect the uncertainty of the numerical forecast model. To compare the forecast results of EXP1, EXP2, and EXP3 for the extreme heavy rainfall event occurring in the Henan region on 20 July 2021, this study evaluated the three experiments through a series of verification methods and compared the forecast performance of the three schemes.

4.1. Near-Ground Variable Forecast Verification

Figure 2 shows the three sets of ensemble forecast experiments for near-surface variables from 00:00 on 20 July 2021 to 00:00 on 22 July 2021. The spread of the 2 m temperature and 10 m U and V winds was larger in EXP1 and EXP3 than in EXP2. The RMSE of the 2 m temperature and 10 m U and V winds for EXP2 and EXP3 was lower than that of EXP1 (Figure 2d–f). This indicates that the stochastic physics perturbation method combined with the multi-physics parameterization scheme (EXP3) not only increased the spread but also had a low forecast error. For the 2 m temperature (Figure 2g), the spread-to-RMSE ratio (spread/RMSE) was greater for EXP1 and EXP3 than for EXP2. For the 10 m U wind (Figure 2h), the spread/RMSE of EXP3 was significantly greater than that of EXP1 and EXP2; in the first 30 h, the spread/RMSE values of EXP1 and EXP2 were comparable, and after 30 h, the spread/RMSE of EXP2 was higher than that of EXP1. For the 10 m V wind (Figure 2i), the spread/RMSE of EXP1 and EXP3 was significantly larger than that of EXP2.
Figure 3 shows Taylor diagrams of the 2 m temperature and 10 m U and V winds for the three sets of ensemble forecast experiments at 06:00 on 20 July 2021 (UTC). The RMSE of the 2 m temperature and 10 m U and V winds for EXP2 and EXP3 was lower than that of EXP1. For the 2 m temperature, the RMSEs of EXP2 and EXP3 were 1.795 and 1.814, while that of EXP1 was 2.028. For the 10 m U wind, the RMSEs of EXP2 and EXP3 were 1.278 and 1.233, while that of EXP1 was 0.096. For the 10 m V wind, the RMSEs of EXP2 and EXP3 were 1.206 and 1.189, while that of EXP1 was 1.144. The correlation and standard deviation of the 2 m temperature and 10 m U and V winds were higher in EXP1 and EXP3 than in EXP2.
Figure 4 shows the PBIAS of (a) the 2 m temperature, (b) the 10 m U wind, and (c) the 10 m V wind for the three sets of ensemble forecast experiments from 00:00 on 20 July 2021 to 00:00 on 22 July 2021. For the 2 m temperature, the PBIAS of EXP3 was significantly lower than that of EXP1 and EXP2 in the first 18 h; after 30 h, the PBIAS values of the three experiments were comparable. For the 10 m U and V winds, the PBIAS of EXP3 was significantly smaller than that of EXP1 and EXP2, indicating that EXP3 simulated the U and V winds better.
In summary, the stochastic physics perturbation method and the multi-physics parameterization scheme produced comparable forecasts of near-surface variables, and the stochastic physics perturbation method was superior for some variables, such as the 10 m U wind. The overall forecast performance of EXP3 was relatively good compared with EXP1 and EXP2, which also indicates that the model errors in the current ensemble forecast system are too complex to be represented by a single scheme alone.

4.2. High Altitude Variable Forecast Verification

Figure 5 shows the three sets of ensemble forecast experiments at 850 hPa from 00:00 on 20 July 2021 to 00:00 on 22 July 2021 (UTC). The spread of the 850 hPa temperature, U wind, and relative humidity was higher in EXP1 and EXP3 than in EXP2. The RMSE of the 850 hPa temperature for EXP1 and EXP3 was lower than that of EXP2 (Figure 5d), and the RMSE of the U wind and relative humidity for EXP1 was smaller than that of EXP2 and EXP3 (Figure 5e,f). For the 850 hPa temperature (Figure 5g), the spread/RMSE ratio was higher for EXP1 and EXP3 than for EXP2 and was approximately 1 for EXP1 and EXP3. For the U wind at 850 hPa (Figure 5h), the spread/RMSE of EXP1 and EXP3 was significantly larger than that of EXP2. For the 850 hPa relative humidity (Figure 5i), the spread/RMSE of EXP1 and EXP3 was higher than that of EXP2, and after 18 h, the spread/RMSE of EXP1 was higher than that of EXP2 and EXP3.
Figure 6 shows Taylor diagrams of the 850 hPa temperature, U wind, and relative humidity for the three sets of ensemble forecast experiments at 06:00 on 20 July 2021 (UTC). For the 850 hPa temperature, the RMSE was lower for EXP1 and EXP3 than for EXP2: 0.699 and 0.772 for EXP1 and EXP3, versus 0.786 for EXP2. For the U wind at 850 hPa, the RMSE of EXP2 and EXP3 was lower than that of EXP1. For the 850 hPa relative humidity, the three experiments were similar; the RMSE of EXP2 and EXP3 was lower than that of EXP1, with EXP3 performing best. The correlation of the 850 hPa temperature, U wind, and relative humidity was higher in EXP1 and EXP3 than in EXP2.
Figure 7 shows the PBIAS of (a) the 850 hPa temperature, (b) the U wind, and (c) the relative humidity for the three sets of ensemble forecast experiments at 06:00 on 20 July 2021 (UTC). For the 850 hPa temperature, the PBIAS was lower for EXP1 and EXP3 than for EXP2, with EXP3 the lowest. For the U wind at 850 hPa, the three experiments were comparable. For the 850 hPa relative humidity, the PBIAS of EXP3 was significantly lower than that of EXP1 and EXP2 in the first 18 h, showing that EXP3 performed better.
In conclusion, the stochastic physics perturbation method and the multi-physics parameterization scheme are comparable for forecasting near-surface variables, whereas the stochastic physics perturbation method is not superior to the multi-physics parameterization scheme for forecasting variables at 850 hPa. The SPP scheme perturbs parameters in specific physics packages (the GF convection scheme, MYNN boundary layer scheme, and RUC LSM); the land surface processes are therefore perturbed more strongly than the fields at 850 hPa, so the forecast improvement is larger at the surface than aloft.

4.3. Disturbance Energy

Figure 8 shows the evolution of the perturbation energy of the U wind and temperature at 500 hPa from 03:00 to 05:00 on 20 July 2021 (UTC). The perturbation energy of the three sets of ensemble forecast experiments increased gradually over this period and developed in northern Henan and southern Shanxi, regions associated with large wind speeds. After 3 h of forecasting, the perturbation energy development of the three experiments did not differ significantly, with values of 19.90 J/kg, 20.79 J/kg, and 21.85 J/kg for EXP1, EXP2, and EXP3, respectively. After 5 h of forecasting, the perturbation energy of EXP3 developed faster, reaching 27.22 J/kg, whereas those of EXP1 and EXP2 were only 19.18 J/kg and 20.81 J/kg, both smaller than that of EXP3. Moreover, the perturbation energy excited by the stochastic physics perturbation method or the multi-physics parameterization scheme alone is too weak to accurately describe the energy development of this intense rainfall event; the stochastic physics perturbation method combined with the multi-physics parameterization scheme (EXP3) can effectively address this problem.

4.4. Heavy Rainfall Forecast Verification

4.4.1. Analysis of 6-h Cumulative Rainfall Results

Figure 9 shows the observed and forecast 0–6 h accumulated rainfall for the three sets of ensemble forecasts initialized on 20 July 2021 (UTC). In the observations (a), the heavy rainfall from 00:00 to 06:00 (UTC) on 20 July was primarily located in the northern part of Henan Province, with the heavy rainfall center in Zhengzhou City and its surroundings and a maximum accumulated rainfall of approximately 185 mm. Compared with the observations, the heavy rainfall area of the 6-h ensemble-mean forecast of EXP1 (b) is consistent with the observation; however, the simulated rainfall magnitude is not ideal, and the heavy rainfall in the eastern part of Zhengzhou was not predicted. The 6-h ensemble-mean rainfall of EXP2 and EXP3 did not differ significantly from the observed rainfall areas, although the rainfall centers were displaced to the east relative to the observations, and neither experiment captured the rainfall magnitude. This also indicates that although ensemble averaging can balance the forecast biases among members, filtering out unpredictable information and improving forecast accuracy, it also filters out information that is beneficial to the forecast and obscures some useful forecast signals.

4.4.2. Brier Score

Figure 10 shows the Brier scores of the 6-h cumulative rainfall from 00:00 to 06:00 (UTC) on 20 July 2021. For rainfall magnitudes below 4 mm, the Brier scores for EXP2 and EXP3 were 0.164 and 0.148, respectively, significantly lower than the 0.298 of EXP1; EXP2 and EXP3 thus simulated small rainfall magnitudes better. For rainfall levels of 4–13 mm, the Brier scores of EXP2 and EXP3 were again lower than that of EXP1, with EXP3 lowest (0.043). For rainfall levels above 50 mm, the Brier score of EXP3 was lower than those of EXP1 and EXP2, although the differences were not significant.
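The Brier score (Brier [30]) is the mean squared difference between the forecast probability and the binary observation. A minimal sketch, assuming (as is common for ensembles, though the paper does not state its exact procedure) that the ensemble exceedance frequency serves as the forecast probability:

```python
import numpy as np

def brier_score(ens_rain, obs_rain, threshold):
    """Brier score of the ensemble probability of exceeding a rainfall threshold.

    ens_rain: (n_members, n_points) forecast 6-h accumulations
    obs_rain: (n_points,) observed 6-h accumulations
    Lower is better; 0 is a perfect probabilistic forecast.
    """
    p = (ens_rain > threshold).mean(axis=0)   # forecast probability per point
    o = (obs_rain > threshold).astype(float)  # binary occurrence
    return np.mean((p - o) ** 2)
```

For example, a forecast that assigns probability 0.5 everywhere to an event that always occurs scores 0.25, while a perfect ensemble scores 0.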

4.4.3. Bias Score

Figure 11 shows the bias scores of the 6-h accumulated rainfall from 00:00 to 06:00 (UTC) on 20 July 2021. As shown in the figure, for the small rainfall magnitude of 0–4 mm, the bias scores of all three ensemble forecast experiments were greater than 1, with values of 3.02, 1.725, and 1.717 for EXP1, EXP2, and EXP3, respectively; all three forecasts were wet compared with the observations. For rainfall levels of 4–13 mm, the bias scores of EXP2 and EXP3 were approximately 1, with values of 0.842 and 0.876; the bias score of EXP1, however, was 1.366, a wet forecast. For rainfall magnitudes of 13–100 mm, the 6-h rainfall bias scores of all three experiments were less than 1, and the forecasts were drier than the observations. For rainfall magnitudes above 100 mm, the 6-h rainfall bias scores of the three experiments were not ideal.
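The bias (frequency) score follows directly from the contingency counts defined in Table 1: Bias = (hits + false alarms) / (hits + misses). A hedged sketch, assuming gridded forecast and observed accumulations as inputs:

```python
import numpy as np

def bias_score(fcst, obs, threshold):
    """Frequency bias of exceeding a rainfall threshold.

    Built from the Table 1 contingency counts:
    (hits + false alarms) / (hits + misses).
    Values > 1 indicate a wet (over-forecasting) bias, < 1 a dry bias.
    """
    f = fcst > threshold
    o = obs > threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    return (hits + false_alarms) / (hits + misses)
```

A forecast identical to the observations yields exactly 1; forecasting the event everywhere when it occurs at half the points yields 2 (strongly wet), matching the interpretation used in the text.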

4.4.4. Area Under the Curve (AUC)

Figure 12 shows the ROC curves and AUC of the 6-h accumulated rainfall (>0.1 mm) from 00:00 to 06:00 (UTC) on 20 July 2021. The AUC values of all three ensemble forecast experiments were greater than 0.5, indicating skillful ensemble forecasts, and the three values were similar. The AUC of EXP3 (0.843) was greater than those of EXP1 (0.821) and EXP2 (0.830). The simulation of heavy rainfall by the model stochastic physical perturbation method or the multi-physics parameterization combination scheme alone is too weak to accurately describe the intense rainfall; the model stochastic physical perturbation method combined with the multi-physics parameterization combination scheme (EXP3) performs better simulations of heavy rainfall.
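One common way to build an ensemble ROC curve, sketched below as an illustration (the paper does not specify its exact procedure), is to sweep the probability threshold over the possible member counts, collect the hit rate and false-alarm rate at each level, and integrate with the trapezoidal rule:

```python
import numpy as np

def ensemble_roc_auc(ens_rain, obs_rain, threshold=0.1):
    """ROC area for an ensemble exceedance forecast.

    For each probability level t/N (at least t of N members exceeding the
    threshold), compute the hit rate and false-alarm rate against the
    binary observations, then integrate HR over FAR trapezoidally.
    AUC > 0.5 indicates a skillful probabilistic forecast.
    """
    n = ens_rain.shape[0]
    o = obs_rain > threshold
    k = (ens_rain > threshold).sum(axis=0)  # exceeding members per point
    n_event = max(int(o.sum()), 1)
    n_nonevent = max(int((~o).sum()), 1)
    hr, far = [0.0], [0.0]                  # strictest level: never forecast
    for t in range(n, 0, -1):               # progressively loosen criterion
        f = k >= t
        hr.append(np.sum(f & o) / n_event)
        far.append(np.sum(f & ~o) / n_nonevent)
    hr.append(1.0)                          # loosest level: always forecast
    far.append(1.0)
    # trapezoidal integration of the (FAR, HR) curve
    return sum(0.5 * (hr[i] + hr[i - 1]) * (far[i] - far[i - 1])
               for i in range(1, len(far)))
```

A perfect ensemble, in which every member reproduces the observed exceedances, traces the upper-left corner of the ROC diagram and gives an AUC of 1.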

4.4.5. Talagrand Histogram

Figure 13 shows the Talagrand (rank) histograms of the 6-h cumulative rainfall from 00:00 to 06:00 (UTC) on 20 July 2021. The ensemble spread produced by the multi-physics parameterization combination scheme used alone in EXP1 was insufficient. The spread increased in EXP2, which applied the model stochastic physical perturbation method alone. Moreover, the EXP3 ensemble forecasting system further increased the ensemble spread and improved the forecasting skill by combining the model stochastic physical perturbation method with the multi-physics parameterization combination scheme.
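A Talagrand (rank) histogram counts, for each observation, how many ensemble members fall below it: a flat histogram indicates adequate spread, while a U shape indicates under-dispersion. A minimal sketch, assuming arrays of shape (n_members, n_points):

```python
import numpy as np

def talagrand_histogram(ens, obs):
    """Rank histogram of an ensemble against observations.

    Each observation is assigned to one of the N + 1 bins formed by the
    sorted ensemble; the returned array holds the count per bin.
    """
    n = ens.shape[0]
    ranks = np.sum(ens < obs[None, :], axis=0)  # members below each obs
    return np.bincount(ranks, minlength=n + 1)
```

Observations that sit below (or above) every member pile up in the outermost bins, which is exactly the under-dispersion signature described for EXP1.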

5. Conclusions and Discussion

In this study, a regional ensemble forecast was conducted based on WRF v4.2 for an extremely heavy rainfall process that occurred in the Henan region on 20–22 July 2021. Three sets of ensemble forecast experiments were performed by applying the multi-physics parameterization (MP) combination method, the SPP scheme, and the SPPT scheme. The following conclusions were obtained:
(1)
For the heavy rainfall results, the 6-h rainfall areas of the three sets of ensemble forecast experiments were consistent with the observations. The model stochastic physical perturbation method combined with the multi-physics parameterization combination scheme (EXP3) performed best, with at least a 5% improvement over the multi-physics parameterization combination scheme alone (EXP1) and the model stochastic physical perturbation method alone (EXP2).
(2)
For the near-surface variables, the overall forecasting effectiveness of EXP3 was greater than that of EXP1 and EXP2, so EXP3 was better able to simulate the true state of the atmosphere. This indicates that the model errors in the current ensemble forecast system are extremely complex and are unlikely to be effectively represented by a single scheme alone.
(3)
For the upper-air variables, the model stochastic physical perturbation method was not superior to the combined multi-physics parameterization scheme. This implies that the land-surface perturbations are strong relative to the upper-air perturbations, so they improve the near-surface forecast more effectively than the upper-air forecast.
(4)
For the perturbation energy, the energy excited by EXP1 or EXP2 alone was too weak to accurately describe the energy development of this extreme heavy rainfall process; EXP3 effectively resolved this problem.
Ensemble forecasting estimates the error distribution of the initial condition via mathematical methods, obtaining a collection of initial values, each of which may represent the real state of the atmosphere. Therefore, ensemble forecasts can provide more information than a single deterministic forecast, and the forecasts of some ensemble members are closer to the observations; thus, the forecast information of individual members cannot be ignored. Although the ensemble-averaged forecast allows the forecast biases among individual members to offset one another, filtering out unpredictable information and improving forecast accuracy, it can also filter out information that is beneficial to the forecast results and obscure some useful forecast signals.
This paper discusses only the impact of the model perturbation schemes and the multi-physics process parameterization on the ensemble forecast, without a control experiment using a single physical parameterization scheme; the conclusions therefore require further confirmation. There are also shortcomings in the case selection: only one heavy rainfall process, in the Henan region during 20–22 July 2021, was examined, so the cases are too few, the heavy rainfall period is short, and the representativeness of the case is limited. Due to the limitation of computing resources, the horizontal resolution of the model is 3 km, and its ability to simulate strong convective weather processes is not satisfactory; the resolution could be increased in follow-up research. Likewise, the ensemble contains only 20 members, and the membership could be increased in future ensemble forecast experiments. Moreover, this paper discusses the respective advantages and disadvantages of the model perturbation schemes and the multi-physics parameterization scheme but does not determine which performs better overall; this question must be addressed in follow-up work.

Author Contributions

Conceptualization, Y.Z. and J.X.; Software, S.C.; Writing—original draft, D.S.; Writing—review & editing, H.Z. and S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This study was jointly supported by National Key R & D Program of China (2019YFC1510002), the National Natural Science Foundation of China (42130605 and 41705140), Guangdong Basic and Applied Basic Science Research Foundation (2019B1515120018), and Shenzhen Science and Technology Program (JCYJ20210324131810029).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The GEFS data used in this work are available from the National Centers for Environmental Prediction (https://nomads.ncep.noaa.gov/pub/data/nccf/com/gens/prod/, accessed on 20 July 2021). The ERA5 data are available from the ECMWF (https://cds.climate.copernicus.eu/cdsapp#!/dataset/, accessed on 23 March 2022).

Acknowledgments

The authors are very grateful to the editor and anonymous reviewers for their help and recommendations.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lorenz, E.N. A study of the predictability of a 28-variable atmospheric model. Tellus 1965, 17, 321–333.
  2. Tan, Y.; Chen, D.H. Study of Mesoscale Intense Precipitation Weather Ensemble Forecasting Techniques. 2006. Available online: https://oversea.cnki.net/KCMS/detail/detail.aspx?dbname=CMFD0506&filename=2006083048.nh (accessed on 26 May 2022).
  3. Wang, T.H.; Chen, D.H. Study of Mesoscale Model Uncertainty and Initial Value Perturbation. 2008. Available online: https://oversea.cnki.net/KCMS/detail/detail.aspx?dbname=CMFD2008&filename=2008125903.nh (accessed on 26 May 2022).
  4. Chen, C.H.; Wang, Y.; He, H.R.; Chen, X.G.; Zhuang, X.R.; Jiang, Y.Q. Review of the Ensemble Prediction Using Stochastic Physics. Adv. Meteorol. Sci. Technol. 2021, 11, 10.
  5. Hagedorn, R.; Doblas-Reyes, F.; Palmer, T.N. The rationale behind the success of multi-model ensembles in seasonal forecasting. I: Basic concept. Tellus 2005, 57, 219–233.
  6. Park, Y.; Buizza, R.; Leutbecher, M. TIGGE: Preliminary results on comparing and combining ensembles. Q. J. R. Meteorol. Soc. 2008, 134, 2029–2050.
  7. Candille, G. The multi-ensemble approach: The NAEFS example. Mon. Weather Rev. 2009, 137, 1655–1665.
  8. Clark, A.; Kain, J.; Stensrud, D. Probabilistic precipitation forecast skill as a function of ensemble size and spatial scale in a convection-allowing ensemble. Mon. Weather Rev. 2011, 139, 1410–1418.
  9. Bright, D.; Mullen, S. Short-range ensemble forecasts of precipitation during the southwest monsoon. Weather Forecast. 2002, 17, 1080–1100.
  10. Bowler, N.E.; Arribas, A.; Mylne, K.R.; Robertson, K.B.; Beare, S.E. The MOGREPS short-range ensemble prediction system. Q. J. R. Meteorol. Soc. 2008, 134, 703–722.
  11. Gebhardt, C.; Theis, S.; Krahe, P.; Renner, V. Experimental ensemble forecasts of precipitation based on a convection-resolving model. Atmos. Sci. Lett. 2008, 9, 67–72.
  12. Berner, J.; Ha, S.Y.; Hacker, J.P.; Fournier, A.; Snyder, C. Model Uncertainty in a Mesoscale Ensemble Prediction System: Stochastic versus Multiphysics Representations. Mon. Weather Rev. 2011, 139, 1972–1995.
  13. Jankov, I.; Berner, J.; Beck, J.; Jiang, H.; Olson, J.B.; Grell, G.; Smirnova, T.G.; Benjamin, S.G.; Brown, J.M. A Performance Comparison between Multi-Physics and Stochastic Approaches within a North American RAP Ensemble. Mon. Weather Rev. 2017, 145, 1161–1179.
  14. Buizza, R.; Miller, M.; Palmer, T. Stochastic representation of model uncertainties in the ECMWF ensemble prediction system. Q. J. R. Meteorol. Soc. 1999, 125, 2887–2908.
  15. Charron, M.; Pellerin, G.; Spacek, L.; Houtekamer, P.L.; Gagnon, N.; Mitchell, H.L.; Michelin, L. Toward random sampling of model error in the Canadian ensemble prediction system. Mon. Weather Rev. 2010, 138, 1877–1901.
  16. Bouttier, F.; Vié, B.; Nuissier, O.; Raynaud, L. Impact of stochastic physics in a convection-permitting ensemble. Mon. Weather Rev. 2012, 140, 3706–3721.
  17. Romine, G.S.; Schwartz, C.S.; Berner, J.; Fossell, K.R.; Snyder, C.; Anderson, J.L.; Weisman, M.L. Representing Forecast Error in a Convection-Permitting Ensemble System. Mon. Weather Rev. 2014, 142, 4519–4541.
  18. Ollinaho, P.; Lock, S.J.; Leutbecher, M.; Bechtold, P.; Beljaars, A.; Bozzo, A.; Forbes, R.M.; Haiden, T.; Hogan, R.J.; Sandu, I. Towards process-level representation of model uncertainties: Stochastically perturbed parametrizations in the ECMWF ensemble. Q. J. R. Meteorol. Soc. 2016, 143, 408–422.
  19. Xu, Z.Z.; Chen, J.; Wang, Y.; Li, H.Q.; Chen, F.J.; Fan, Y.E. Sensitivity test of random parameter perturbation method for mesoscale precipitation ensemble forecast. Acta Meteorol. Sin. 2019, 7, 849–868.
  20. Berner, J.; Fossell, K.R.; Ha, S.Y.; Hacker, J.P.; Snyder, C. Increasing the Skill of Probabilistic Forecasts: Understanding Performance Improvements from Model-Error Representations. Mon. Weather Rev. 2015, 143, 1295–1320.
  21. Cai, Y.C.; Min, J.Z.; Zhuang, X.R. Comparison of different stochastic physics perturbation schemes on a storm-scale ensemble forecast in a heavy rain event. Plateau Meteor. 2017, 36, 407–423.
  22. Qiao, X.S.; Wang, S.Z.; Min, J.Z. The impact of a stochastically perturbing microphysics scheme on an idealized supercell storm. Mon. Weather Rev. 2018, 146, 95–118.
  23. Zhang, H.B.; Fan, S.Y.; Chen, M. Study on a Synthetic Model Perturbation Method Based on SKEB and Multi-Physics for Regional Ensemble Forecast. Meteorol. Mon. 2019, 45, 17–28.
  24. Wastl, C.; Wang, Y.; Atencia, A.; Wittmann, C. A Hybrid Stochastically Perturbed Parametrization Scheme in a Convection-Permitting Ensemble. Mon. Weather Rev. 2019, 147, 2217–2230.
  25. Zhang, X. Impacts of New Implementing Strategies for Surface and Model Physics Perturbations in TREPS on Forecasts of Landfalling Tropical Cyclones. Adv. Atmos. Sci. 2022, 1–26.
  26. Zhang, K.F.; Wang, D.H.; Zhang, Y.; Wu, Z.Z.; Li, G.P. Study on impacts of dynamic downscaling and multi-physical parameterization scheme combination on ensemble forecast of annually first rainy season in south China. J. Trop. Meteorol. 2020, 36, 668–682.
  27. Yapo, P.O.; Gupta, H.V.; Sorooshian, S. Automatic calibration of conceptual rainfall-runoff models: Sensitivity to calibration data. J. Hydrol. 1996, 181, 23–48.
  28. Palmer, T.N.; Gelaro, R.; Barkmeijer, J.; Buizza, R. Singular vectors, metrics, and adaptive observations. J. Atmos. Sci. 1998, 55, 633–653.
  29. Ehrendorfer, M.; Errico, R.M.; Raeder, K.D. Singular-Vector Perturbation Growth in a Primitive Equation Model with Moist Physics. J. Atmos. Sci. 1999, 56, 1627–1648.
  30. Brier, G.W. Verification of forecasts expressed in terms of probability. Mon. Weather Rev. 1950, 78, 1–3.
  31. Hacker, J.P.; Snyder, C.; Ha, S.-Y.; Pocernich, M. Linear and nonlinear response to parameter variations in a mesoscale model. Tellus 2011, 63, 429–444.
  32. Morrison, H.; Thompson, G.; Tatarskii, V. Impact of Cloud Microphysics on the Development of Trailing Stratiform Precipitation in a Simulated Squall Line: Comparison of One- and Two-Moment Schemes. Mon. Weather Rev. 2009, 137, 991–1007.
  33. Thompson, G.; Field, P.R.; Rasmussen, R.M.; Hall, W.D. Explicit Forecasts of Winter Precipitation Using an Improved Bulk Microphysics Scheme. Part II: Implementation of a New Snow Parameterization. Mon. Weather Rev. 2008, 136, 5095–5115.
  34. Hong, S.-Y.; Lim, J.-O.J. The WRF single-moment 6-class microphysics scheme (WSM6). J. Korean Meteor. Soc. 2006, 42, 129–151.
  35. Rogers, E.; Black, T.; Ferrier, B.; Lin, Y.; Parrish, D.; Di Mego, G. Changes to the NCEP Meso Eta Analysis and Forecast System: Increase in resolution, new cloud microphysics, modified precipitation assimilation, modified 3DVAR analysis. NWS Tech. Proced. Bull. 2001, 488, 15.
  36. Morrison, H.; Milbrandt, J.A. Parameterization of cloud microphysics based on the prediction of bulk ice particle properties. Part I: Scheme description and idealized tests. J. Atmos. Sci. 2015, 72, 287–311.
  37. Jiménez, P.A.; Dudhia, J.; González-Rouco, J.F.; Navarro, J.; Montávez, J.P.; García-Bustamante, E. A revised scheme for the WRF surface layer formulation. Mon. Weather Rev. 2012, 140, 898–918.
  38. Janjic, Z.I. Nonsingular implementation of the Mellor-Yamada Level 2.5 Scheme in the NCEP Meso model. NCEP Off. Note 2002, 437, 61.
  39. Hong, S.-Y.; Noh, Y.; Dudhia, J. A new vertical diffusion package with an explicit treatment of entrainment processes. Mon. Weather Rev. 2006, 134, 2318–2341.
  40. Janjic, Z.I. The Step-Mountain Eta Coordinate Model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Mon. Weather Rev. 1994, 122, 927–945.
  41. Nakanishi, M.; Niino, H. Development of an improved turbulence closure model for the atmospheric boundary layer. J. Meteor. Soc. Jpn. 2009, 87, 895–912.
Figure 1. Simulation domain of the ensemble forecast experiments, showing terrain elevation.
Figure 2. Three sets of ensemble forecast experiments from 00:00 on 20 July 2021 to 00:00 on 22 July 2021: (a–c) 2 m temperature, 10 m U wind, and 10 m V wind spread; (d–f) 2 m temperature, 10 m U wind, and 10 m V wind RMS error; and (g–i) 2 m temperature, 10 m U wind, and 10 m V wind spread/RMS error.
Figure 3. Taylor diagrams of the three ensemble forecast experiments at 06:00 on 20 July 2021 (UTC) for the 2 m temperature, 10 m U wind, and 10 m V wind. The red dots represent the three experiments; numbers 1–3 correspond to EXP1, EXP2, and EXP3.
Figure 4. Three sets of ensemble forecast experiments from 00:00 on 20 July 2021 to 00:00 on 22 July 2021: (a) 2 m temperature, (b) 10 m U wind, and (c) 10 m V wind PBIAS.
Figure 5. Three sets of ensemble forecast experiments from 00:00 on 20 July 2021 to 00:00 on 22 July 2021 (UTC): (a–c) 850 hPa temperature, U wind, and relative humidity spread; (d–f) 850 hPa temperature, U wind, and relative humidity RMS error; (g–i) 850 hPa temperature, U wind, and relative humidity spread/RMS error.
Figure 6. Three sets of ensemble forecast experiments at 06:00 on 20 July 2021 (UTC): 850 hPa temperature, U wind, and relative humidity Taylor diagram. The red dots represent three sets of experiments; numbers 1–3 correspond to EXP1, EXP2 and EXP3.
Figure 7. Three sets of ensemble forecast experiments at 06:00 on 20 July 2021 (UTC): (a) 850 hPa temperature, (b) U wind, and (c) relative humidity PBIAS.
Figure 8. Variations in the perturbation kinetic energy of the U wind at 500 hPa from 03:00 to 05:00 (UTC) on 20 July 2021 for the three ensemble forecast experiments: (a–c) EXP1, (d–f) EXP2, and (g–i) EXP3.
Figure 9. Distribution of the 6-h cumulative rainfall (00:00–06:00 UTC on 20 July 2021) for the observations and the three ensemble forecast experiments. (a) Observation; (b) EXP1; (c) EXP2; (d) EXP3.
Figure 10. Six-hour cumulative rainfall Brier score from 00:00 to 06:00 on 20 July 2021.
Figure 11. Bias score chart of accumulated rainfall in 6 h from 00:00 to 06:00 on 20 July 2021.
Figure 12. AUC–ROC curve of accumulated rainfall in 6 h from 00:00 to 06:00 on 20 July 2021 (>0.1 mm).
Figure 13. Talagrand histogram distribution of cumulative rainfall in 6 h from 00:00 to 06:00 on 20 July 2021. (a) EXP1; (b) EXP2; (c) EXP3.
Table 1. List of forecasts and observations (contingency table).

                             Forecast Occurs    Forecast Does Not Occur    Observation Total
Observations occur           Hits               Misses                     Hits + Misses
Observations do not occur    False alarms       Correct negatives          False alarms + Correct negatives
Table 2. Summary of experiments.

EXP     Model-Error Representation    Reference
EXP1    MP                            Hacker et al. [31]
EXP2    SPP + SPPT                    Jankov et al. [13]
EXP3    MP + SPP + SPPT               Zhang et al. [25]
Table 3. Parameterization scheme configuration of the 20-member multi-physics combination.

Member    Microphysics    Surface Layer    Planetary Boundary Layer
01        WSM6            RMM5             YSU
02        Thompson        Eta              MYJ
03        Morrison        RMM5             YSU
04        Eta             RMM5             YSU
05        P3              Eta              MYNN
06        WSM6            Eta              MYJ
07        Thompson        RMM5             YSU
08        Morrison        RMM5             YSU
09        Eta             Eta              MYJ
10        P3              Eta              MYNN
11        WSM6            RMM5             YSU
12        Thompson        RMM5             YSU
13        Morrison        Eta              MYJ
14        Eta             RMM5             YSU
15        P3              RMM5             YSU
16        WSM6            Eta              MYNN
17        Thompson        Eta              MYNN
18        Morrison        RMM5             MYJ
19        Eta             Eta              YSU
20        P3              RMM5             MYJ
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Shao, D.; Zhang, Y.; Xu, J.; Zhang, H.; Chen, S.; Tu, S. Comparison between Multi-Physics and Stochastic Approaches for the 20 July 2021 Henan Heavy Rainfall Case. Atmosphere 2022, 13, 1057. https://doi.org/10.3390/atmos13071057
