Article

Integrating Reanalysis and Satellite Cloud Information to Estimate Surface Downward Long-Wave Radiation

1 Instituto Dom Luiz (IDL), Faculty of Sciences, University of Lisbon, Campo Grande, 1749-016 Lisbon, Portugal
2 Instituto Português do Mar e da Atmosfera (IPMA), Rua C do Aeroporto, 1749-077 Lisbon, Portugal
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(7), 1704; https://doi.org/10.3390/rs14071704
Submission received: 18 March 2022 / Revised: 29 March 2022 / Accepted: 30 March 2022 / Published: 1 April 2022

Abstract:
The estimation of downward long-wave radiation (DLR) at the surface is very important for understanding the Earth’s radiative budget, with implications for surface–atmosphere exchanges, climate variability, and global warming. Theoretical radiative transfer and observationally based studies identify the crucial role of clouds in modulating the temporal and spatial variability of DLR. In this study, a new machine learning algorithm that uses multivariate adaptive regression splines (MARS) and the combination of near-surface meteorological data with satellite cloud information is proposed. The new algorithm is compared with the current operational formulation used by the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) Satellite Application Facility on Land Surface Analysis (LSA-SAF). Both algorithms use near-surface temperature and dewpoint temperature along with total column water vapor from the latest European Centre for Medium-range Weather Forecasts (ECMWF) reanalysis, ERA5, and satellite cloud information from the Meteosat Second Generation. The algorithms are trained and validated using both ECMWF-ERA5 and DLR acquired from 23 ground stations as part of the Baseline Surface Radiation Network (BSRN) and the Atmospheric Radiation Measurement (ARM) user facility. Results show that the MARS algorithm generally improves DLR estimation in comparison with other model estimates, particularly when trained with observations. When considering all the validation data, root mean square errors (RMSEs) of 18.76, 23.55, and 22.08 W·m−2 are obtained for MARS, operational LSA-SAF, and ERA5, respectively. The added value of using the satellite cloud information is assessed by comparison with estimates driven by ERA5 total cloud cover, which show an increase of 17% in RMSE. The consistency of the MARS estimates is also tested against an independent dataset of 52 ground stations (from FLUXNET2015), further supporting the good performance of the proposed model.

1. Introduction

The downward long-wave radiation (DLR hereafter), defined as the irradiance reaching the surface in the infrared range between 4 and 100 µm, is an essential component of the Earth’s surface radiation budget [1,2,3]. DLR depends strongly on the vertical profiles of atmospheric temperature, water vapor (the largest contributor to the greenhouse effect [4]), and cloud cover. Therefore, accurate estimates of DLR are important for a wide range of applications dealing with climate variability [5]. Since DLR is a key component of the land surface radiative balance, it is essential for modeling and estimating the land surface turbulent fluxes (latent and sensible), which are relevant for predicting the effects that climate and land use changes have on water resources, ecosystems, and the agricultural sector [6]. Naud and Miller [7] have reported the high sensitivity of DLR to changes in water vapor in high-elevation regions, which are among the regions most sensitive to future climate change. This is of particular relevance since such remote regions usually lack measured DLR and, therefore, the information needed to determine possible triggers of warming rates. There are other applications in which DLR estimates are essential, such as the design of passive cooling systems in buildings, where measured values of DLR are usually absent [8].
In the past decades, several research works have estimated DLR based on empirical formulations. The earlier studies were conducted only for clear-sky conditions (e.g., [9,10,11,12,13]). Most of these formulations considered the two main modulators of DLR in clear-sky conditions: temperature and moisture. More recently, several studies explored the effects of clouds on the apparent sky emissivity and therefore on DLR, introducing cloud fraction parameterizations to estimate DLR under all-sky conditions (e.g., [14,15,16,17,18]). These new models brought more flexible and complex approaches to estimate DLR under different sky conditions, considering atmospheric profile databases, semi-empirical or multiple regression methods (e.g., [2,14,16,19,20]), hybrid systems that combine physical models and remotely sensed data (e.g., [17]), and, more recently, machine-learning techniques (e.g., [21,22,23,24,25,26,27,28,29,30]). The latter provide the capability to handle complex nonlinear statistical problems, particularly the nonlinear relation between DLR and its main modulators. These studies have shown satisfactory results when combining remotely sensed information with machine-learning algorithms for DLR estimation, such as extremely randomized trees (ERT) [24], random forest (RF) [25,27,29], and artificial neural networks (ANN) [30], surpassing the previous simpler methods. To date, however, only a restricted number of machine-learning studies have applied MARS to the estimation of DLR. For instance, Feng et al. [21] demonstrated the potential of MARS for the determination of daily and monthly DLR values under all-sky conditions, including regions of high and low altitude. However, despite their good results, the authors underlined the need to reduce the obtained bias and model overfitting. Zhou et al. [23,28] included MARS as part of a hybrid system that estimated DLR under clear-sky conditions using moderate-resolution imaging spectroradiometer (MODIS) thermal infrared top-of-atmosphere radiances and surface measurements of DLR. Although the proposed methodology (entirely dependent on satellite and ground data, without the use of NWP models) led to some deviations in the results, it showed an overall good performance when remote sensing-based DLR estimations are used. In another study, Jung et al. [31] used several machine-learning methods, including MARS, to combine energy flux measurements acquired from FLUXNET eddy covariance towers with MODIS and meteorological data and produce the FLUXCOM dataset. The resulting FLUXCOM estimates were found to be suitable for the quantification of global land–atmosphere interactions and for benchmarking land surface model simulations. Although there is no single regression model technique suitable for all situations, MARS algorithms have proven to have a good bias-variance trade-off (with fairly low bias and variance), being flexible enough to model non-linearity and handle a large number of input variables (i.e., more than two), whereas such dimensionality generates problems in other, simpler models [32].
In the present work, we present a novel synergistic approach that uses MARS to combine the European Centre for Medium-range Weather Forecasts (ECMWF) reanalysis with ground and remotely sensed information to estimate hourly DLR values under all-sky conditions. Although it is not possible to directly infer DLR from remotely sensed observations under overcast conditions [10], the combination of satellite information and numerical weather models has the advantage of providing accurate DLR values over large areas [14]. Moreover, this combination of data sources allows DLR to be estimated in remote, hard-to-access locations where the installation of measuring equipment is not viable. Despite these improvements, the precise determination of DLR following such approaches depends on factors that hinder its accuracy. For instance: (i) purely empirical methods are limited by their particular calibration conditions (frequently applicable to clear-sky conditions only); (ii) physical models depend on the quality and availability of the atmospheric profile databases used; and (iii) satellite-derived data often lack accuracy, especially because top-of-atmosphere observations are only indirectly related to DLR. The latter generally provide information on cloud fraction or type, which clearly influences DLR. However, more specific variables affecting DLR estimations, particularly cloud-base properties (height, temperature, and emissivity), are difficult to measure or model (e.g., [33,34,35]). It is therefore fundamental to create a robust synergistic approach that combines ground and remote sensing observations with numerical weather prediction (NWP) models to accurately quantify DLR at the surface.
The starting point of this study is the semi-empirical model presented by Trigo et al. [14], named here the LSA-SAF model, which currently runs operationally in near real-time at the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) Satellite Application Facility on Land Surface Analysis (LSA-SAF). More details regarding LSA-SAF, and the respective products, are available online (https://www.eumetsat.int/lsa-saf; accessed on 31 March 2022). The LSA-SAF currently provides DLR estimates as 30-min instantaneous fluxes (product MSDSLF, LSA-204) and as daily averages (product DIDSLF, LSA-206), both covering the Meteosat Second Generation (MSG) disk. The LSA-SAF model is based on a small set of atmospheric variables, namely total column water vapor, near-surface temperature, and near-surface dewpoint temperature, and is calibrated separately for clear and cloudy-sky conditions. It uses cloud cover information, provided by the spinning enhanced visible and infrared imager (SEVIRI) on board the MSG satellite series, to establish the sky classification. SEVIRI, the MSG main optical payload, was built to support several NWP and climate applications over the European and African regions [36], although South America is also covered. The LSA-SAF model calibration was based on downward infrared flux simulations provided by the Moderate Resolution Atmospheric Transmittance and Radiance Code (MODTRAN-4, [37]), using the Thermodynamic Initial Guess Retrieval (TIGR) atmospheric-profile database presented by Chevallier et al. [38]. The latter includes a subset of temperature and humidity profiles from the ECMWF ERA-40 reanalysis covering a wide range of atmospheric states (classified into dry cold, dry warm, and moist), which were fed to MODTRAN-4 to calculate DLR at the surface.
The LSA-SAF algorithm was derived by adjusting such DLR estimates to a semi-empirical function using only total water vapor content and screen variables (near-surface temperature and dew point). The model was then validated using an independent dataset, which showed that the algorithm is able to reproduce DLR values at the surface reasonably well under clear and cloudy skies, with low bias and root mean square errors. Furthermore, Trigo et al. [14] showed that combining satellite cloud information with bulk and screen variables led to competitive results when compared with ECMWF estimates, a result that highlighted the essential role of clouds in DLR. The simple LSA-SAF model was therefore shown to be a viable option to derive DLR over large areas. Nevertheless, after a long period in operation, the systematic comparison of DLR estimates from the LSA-SAF model with station observations suggests that there is still room for improvement. More details regarding the LSA-SAF model are available in the product user manual [39].
This work aims at establishing a simple and improved DLR algorithm for operational purposes. The resulting DLR product should be compatible with MSG cloud information to guarantee consistency among the different LSA-SAF products (e.g., downward solar radiation). To this end, a new and more flexible formulation is proposed to estimate DLR, making use of a machine learning algorithm based on multivariate adaptive regression splines (MARS). This new approach combines recursive partitioning and spline fitting in the form of a series of step (or hinge) functions and knots, as demonstrated by Friedman [40], replacing simpler regression methods such as the one used in the original calibration of the LSA-SAF algorithm. Similarly to Trigo et al. [14], the proposed method treats clear and cloudy conditions separately, thus allowing the training of two different models using ground-measured DLR fluxes from several in situ stations as reference. Cloud classification is based on satellite (MSG) observations, while each model uses ERA5 total column water vapor and screen variables (2-metre temperature and dew point) as independent variables. The validation of the new methodology consists of the assessment of DLR estimates from MARS against another set of ground stations previously excluded from the training process. The in situ measurements are provided by the Baseline Surface Radiation Network (BSRN, [41]) and the Atmospheric Radiation Measurement (ARM, [42]) user facility, all within the MSG-disk (as described in Section 2.1). Additionally, to assess the consistency of the proposed methodology, MARS estimates are also validated against an independent spatiotemporal dataset of 52 ground stations from FLUXNET2015. In this analysis, besides DLR estimates from the MARS model, other model estimates are also considered for comparison purposes, including the operational LSA-SAF model, a new LSA model calibrated with ERA5 and measured data, and ERA5 radiation fluxes.
The MARS and LSA models are additionally driven by ERA5 cloud information to assess the added value of the MSG cloud information in the estimation of DLR. More details regarding the MARS application, as well as a description of the other models used in this study, are presented in Section 2.2.
The remainder of this paper is structured as follows: Section 2 presents the data and methods, including a description of the LSA and MARS models; Section 3 provides the results of the validation of the MARS model against in situ measurements and other model estimates within the MSG-disk; Section 4 discusses the obtained results; conclusions and future perspectives are given in Section 5; additional information related to the analysis is provided in Appendices A–D.

2. Methodology

2.1. Observations and Reanalysis

The BSRN [41] has been operational since 1995 as part of the World Climate Research Programme, supported by the World Meteorological Organization (WMO) and others. As a surface radiation monitoring network, it has achieved significant improvements over the last decades, with simultaneous increases in the number of globally scattered stations and in the quality of ground-measured data. Although there are currently 57 operational stations installed over different surface types, measured data from a total of 77 stations is freely available (https://bsrn.awi.de/; accessed on 31 March 2022), with quality control procedures.
In this work, a total of 22 BSRN stations were used (Figure 1a), all located within the MSG-disk (i.e., longitude/latitude within +/− 75°E/N) and with available data within the 16-year period from 2004 to 2019. Three BSRN stations located in the MSG-disk are not considered for analysis, either because of representativeness issues of the measurements (Izaña, in Tenerife, Spain) or due to a complete absence of data (Ilorin, Nigeria; and Rolim de Moura, Brazil). Izaña is a relatively high-altitude station (2372.9 m), often above the clouds that cover most of the island; under these conditions, the MSG pixel is usually correctly classified as “cloud covered”, but DLR observations are characteristic of “clear sky” and therefore inconsistent with the satellite information. In addition to the BSRN stations, the Niamey station (13.4773°N; 2.1758°E) from the ARM user facility was also included (60-s downwelling irradiances from the sky radiation sensor SKYRAD60S). Niamey was selected due to its particular local atmospheric features, in particular the aerosol load [43], which can take the form of severe dust events, such as desert storms. More details concerning the Niamey mobile facility and radiation observations are available in Sengupta et al. [44].
Although quality control procedures had been previously applied, several outliers were found across the 23 stations (see Table 1), as well as a period of about five months at the SMS station during which measurements were made with malfunctioning equipment. Accordingly, the corresponding sets of data were removed from the analysis, increasing the number of gaps in the in situ observations. Temporal coverage varies widely across stations. For instance, the station with the most complete record (TAM) has only 0.83% of missing data during the 16-year period, while the station with the fewest records (BUD) has 99.49% of missing data. For comparison purposes, all DLR observations in this analysis were temporally aggregated to hourly frequency.
To further improve and reinforce the proposed validation procedure, an independent dataset of ground observations from the FLUXNET2015 network has also been considered (https://fluxnet.org/data/fluxnet2015-dataset/, [45]; accessed on 31 March 2022). To this end, half-hourly measurements from 52 ground stations were aggregated to hourly values and then used for validation purposes. The selected measured variable was the incoming longwave radiation (LW_IN_F_MDS), which is gap-filled using the Marginal Distribution Sampling (MDS) method, i.e., taking into account observations made under similar meteorological, physical, and temporal conditions [46]. It should be noted that only 52 stations (Table A8) were eligible for the present study, since they had to meet several requirements simultaneously: being within the MSG-disk, covering the 2004–2015 period, and providing representative values. Regarding the latter, a quality control check showed that several stations had measuring periods with MDS gap-filling applied that were characterized by a “poor-quality” flag (i.e., the lowest level); these were therefore removed from the analysis. In this context, out of 198 stations globally available, only 52 were suitable. Moreover, of these, 48 stations are located in Europe (Figure 1b). More details regarding the FLUXNET2015 stations and the respective validation results for the MARS model are given in Appendix D.
In addition to ground-measured data, several hourly fields of the most recent ECMWF reanalysis, ERA5 [47], were extracted from the Copernicus Data Store (CDS). The fields include total column water vapor (tcwv, mm), 2-metre temperature (t2m, K), 2-metre dewpoint temperature (d2m, K), total cloud cover (tcc), and downwelling surface thermal radiation (strd or DLR, W·m−2), with the latter produced through the McRad radiation scheme [48]. For comparison purposes, the ERA5 fields were interpolated to each measuring station’s location following a nearest neighbor approach. It is worth noting that, for the evaluation of the MARS and LSA models against observations, both t2m and d2m were adjusted to each station’s altitude considering a reference temperature lapse rate of −6.5 K/km [49]. Similarly, ERA5 radiation fluxes were adjusted considering a correction factor of −2.8 W/m2 per 100 m [13].
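The altitude adjustments above can be sketched in a few lines of Python. This is a minimal illustration of the stated corrections, not the authors' code; the function name, the grid/station altitudes, and the example values are hypothetical.

```python
# Sketch of the altitude adjustments described above: ERA5 2 m temperature
# and dewpoint are shifted with a -6.5 K/km lapse rate, and ERA5 DLR is
# corrected by -2.8 W/m^2 per 100 m of altitude difference.
LAPSE_RATE = -6.5e-3      # K per metre
DLR_CORRECTION = -2.8e-2  # W/m^2 per metre

def adjust_to_station(t2m, d2m, dlr, z_grid, z_station):
    """Adjust ERA5 near-surface fields from grid altitude z_grid (m)
    to station altitude z_station (m). All inputs are hypothetical."""
    dz = z_station - z_grid
    return (t2m + LAPSE_RATE * dz,
            d2m + LAPSE_RATE * dz,
            dlr + DLR_CORRECTION * dz)

# Example: a station 500 m above its nearest ERA5 grid cell
t2m_adj, d2m_adj, dlr_adj = adjust_to_station(288.0, 283.0, 320.0, 100.0, 600.0)
```

With a station 500 m above the grid cell, both temperatures drop by 3.25 K and the DLR flux by 14 W·m−2, consistent with the two correction factors cited in the text.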
Cloud mask information retrieved from the SEVIRI sensor on board MSG is also used in this work (at 15-min frequency) for the definition of sky conditions. As described by Derrien and Le Gléau [50], the MSG cloud mask was developed by the Satellite Application Facility on Support to Nowcasting and Very Short-Range Forecasting (NWC-SAF, https://www.eumetsat.int/nwc-saf; accessed on 31 March 2022), allowing the identification of cloud-free areas where different products can be computed (e.g., total precipitable water, land, or sea surface temperatures), as well as cloudy areas from which other products can be derived (e.g., cloud type or cloud top temperature/height). Several research works have shown the added value of MSG information for cloud detection (e.g., [14,51]). In particular, Trigo et al. [14] showed an overall good performance of the MSG cloud mask in cloud identification during the validation of DLR estimates. However, despite the satisfactory results, these authors also observed that, in regions under high aerosol load, the accuracy of the satellite cloud mask could contribute to a lower performance of the proposed method. In the context of DLR estimation, the present study uses the cloud fraction (denoted cf) retrieved from the SEVIRI sensor for the training and evaluation of both the MARS and LSA algorithms. For this purpose, 15-min cloud mask data are aggregated to hourly cf using an hourly rolling mean. The procedure allows the selection of pure situations of clear (cf = 0) and cloudy (cf = 1) conditions during a particular hour.
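The aggregation of the 15-min binary cloud mask into an hourly cloud fraction, and the selection of "pure" hours, can be illustrated with a small NumPy sketch. A simple per-hour block mean stands in here for the hourly rolling mean used in the study, and the data layout (four samples per hour) is an assumption.

```python
import numpy as np

def hourly_cloud_fraction(mask_15min):
    """Aggregate 15-min binary cloud-mask values (1 = cloudy, 0 = clear)
    into an hourly cloud fraction cf; assumes four samples per hour."""
    mask = np.asarray(mask_15min, dtype=float)
    return mask.reshape(-1, 4).mean(axis=1)  # mean over each hour's four slots

cf = hourly_cloud_fraction([1, 1, 1, 1,   # hour 1: fully cloudy -> cf = 1.0
                            0, 0, 0, 0,   # hour 2: fully clear  -> cf = 0.0
                            1, 0, 0, 0])  # hour 3: mixed        -> cf = 0.25
pure = (cf == 0.0) | (cf == 1.0)          # keep only "pure" clear/cloudy hours
```

Only the first two hours would be retained for training, since the third hour is neither fully clear nor fully cloudy.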

2.2. Models

Two algorithms that estimate DLR are evaluated in this study: (i) the current operational semi-empirical algorithm used by LSA-SAF (referred to as LSA hereafter) and (ii) a new MARS algorithm, a more flexible approach to defining the different atmospheric states under which DLR is calculated. The resulting models (LSA and MARS) are both driven by ERA5 atmospheric conditions (tcwv, t2m, and d2m) and satellite cloud cover from MSG/SEVIRI observations, and are calibrated using DLR observations from the BSRN and ARM stations. Additionally, DLR estimates from the ECMWF-ERA5 reanalysis (ERA5) and from the current LSA-SAF operational product (LSA_OPER) are also considered in the analysis. For completeness, to assess the value of using satellite cloud information to calculate DLR with both algorithms, the LSA and MARS models were also applied using ERA5 total cloud cover (denoted LSA* and MARS*, respectively) instead of the MSG cf. Table 2 summarizes the key characteristics of the models used in the analysis, including the MARS algorithm for the MARS and MARS* models, the LSA-SAF algorithm for the LSA and LSA* models, and the ERA5 reanalysis for the ERA5 model.
The LSA-SAF algorithm is presented in detail by Trigo et al. [14] and summarized in Appendix A. The piecewise regression approach used by LSA-SAF is based on three classes of atmospheric profiles, independent for clear and cloudy conditions, which were manually selected (Table A1). Similarly, MARS [40,52] is based on a weighted sum of piecewise functions, also known as basis functions, in which the MARS additive model follows the recursive partitioning regression form described by Friedman [40]. The resulting regression coefficients are then adjusted to find the best fit to the data. The selection of the basis functions is a fundamental process in MARS: it is an automatic two-stage building process, consisting of a forward and a backward step. Compared with the LSA-SAF algorithm, this automatic procedure to establish the piecewise regression is a key advantage of MARS. In this study, the MARS algorithm available in the py-earth python package (version 0.1.0, https://github.com/scikit-learn-contrib/py-earth; accessed on 31 March 2022) is used. As in the case of the LSA algorithm, two MARS sub-models are trained for clear and cloudy conditions, respectively, considering “pure types” identified by the MSG cf (0 or 1); the all-sky DLR is computed following Equation (A4).
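The hinge-and-knot structure of a fitted MARS model can be illustrated with a minimal NumPy sketch. This is only the functional form a trained py-earth model evaluates, not the model itself; the single predictor, the knot, and the coefficients below are hypothetical.

```python
import numpy as np

def hinge(x, knot, direction):
    """Hinge basis function: max(0, x - knot) if direction = +1,
    max(0, knot - x) if direction = -1."""
    return np.maximum(0.0, direction * (x - knot))

def mars_predict(x, intercept, terms):
    """Evaluate a MARS-style additive model:
    intercept plus a weighted sum of hinge basis functions."""
    y = np.full_like(np.asarray(x, dtype=float), intercept)
    for coef, knot, direction in terms:
        y += coef * hinge(x, knot, direction)
    return y

# Hypothetical one-predictor model: the response to t2m changes slope
# at a knot of 280 K, as the forward/backward passes might select.
terms = [(2.0, 280.0, +1),   # slope applied above the knot
         (-1.5, 280.0, -1)]  # slope applied below the knot
dlr = mars_predict(np.array([270.0, 280.0, 290.0]), 300.0, terms)
```

In practice py-earth selects the knots, directions, and coefficients automatically during the forward pass and prunes unnecessary terms in the backward pass; the sketch only shows how the resulting piecewise-linear model is evaluated.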
Both LSA and MARS models were calibrated with the same subsets of data, independently for clear and cloudy conditions, using the MSG cf. The models used ERA5 tcwv, t2m, and d2m as predictors and observed DLR (BSRN and ARM) as predictand. Since there are large differences in data availability between stations (Table 1), the models were calibrated with a randomly selected sample of 40% of each station’s full time series, limited to a maximum of 6 months of data. This procedure avoids the dominance of stations with longer records in the training dataset. In an initial phase, the MARS model was also tested with different combinations of the three predictors (tcwv, t2m, d2m). The results (not shown) indicated that using all three predictors provides the best outputs, although tcwv and t2m alone already generate reasonable results. Moreover, the addition of cf as an explicit input (predictor) in MARS was also tested, i.e., as an alternative to the two sub-models for clear and cloudy conditions. This approach did not perform as well as the two sub-models (not shown), most likely due to the binary nature of the cloud information, which is not optimal for the MARS model. Considering these preliminary tests, and for consistency with LSA, it was decided to keep the three predictors in MARS and two independent sub-models, i.e., one for clear and another for cloudy conditions.
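The per-station sampling described above (a random 40% of each series, capped at six months) can be sketched as follows. The interpretation of "6 months" as roughly 4380 hourly samples, and the function name, are assumptions for illustration.

```python
import numpy as np

# Cap on training samples per station: ~six months of hourly data (assumed)
MAX_TRAIN_HOURS = int(0.5 * 365 * 24)  # 4380

def sample_station(n_records, rng):
    """Draw a random 40% of a station's hourly records, without replacement,
    capped so that long-running stations do not dominate the training set."""
    n_take = min(int(0.4 * n_records), MAX_TRAIN_HOURS)
    return rng.choice(n_records, size=n_take, replace=False)

rng = np.random.default_rng(0)
idx_short = sample_station(1000, rng)    # short series: 40% -> 400 samples
idx_long = sample_station(120000, rng)   # long series: hits the 4380 cap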
The training of the LSA-SAF algorithm follows Trigo et al. [14], described briefly in Appendix A, with the resulting calibrated parameters shown in Table A2. For MARS, a repeated k-fold cross-validation procedure was used during the training phase. The process involves repeating the cross-validation procedure several times and computing the mean result across all folds. In the present study, 10 folds were used, since this value is typical in machine learning models (e.g., [53,54,55]) and was found to provide a good trade-off between low computational cost and low bias. The results of the training of both MARS and LSA models for clear and cloudy conditions are presented in detail in Appendix B, showing the respective model performance on the training and validation datasets.
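The repeated 10-fold procedure can be sketched without any machine-learning library: shuffle the data, split into ten folds, score each held-out fold, repeat with new shuffles, and average. A trivial mean predictor stands in for the MARS model here, and the scoring function is a simplification for illustration.

```python
import numpy as np

def repeated_kfold_rmse(y, k=10, repeats=3, seed=0):
    """Repeated k-fold cross-validation skeleton: the RMSE of a stand-in
    'model' (the training-set mean) is averaged over all folds and repeats."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(repeats):
        idx = rng.permutation(len(y))               # new shuffle each repeat
        for fold in np.array_split(idx, k):         # k held-out folds
            train = np.setdiff1d(idx, fold)
            pred = y[train].mean()                  # stand-in for a MARS fit
            scores.append(np.sqrt(np.mean((y[fold] - pred) ** 2)))
    return float(np.mean(scores))                   # mean score across folds

score = repeated_kfold_rmse(np.linspace(250.0, 450.0, 200))
```

In the actual training, each fold would refit the MARS model on the training split rather than use a constant predictor.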

2.3. Evaluation Metrics

The performance of the different DLR estimates obtained from each model is assessed through a series of conventional error metrics: the bias (µ), the root mean square error (RMSE), the standard deviation of the error or unbiased root mean square error (σ), and the temporal correlation coefficient (R):

$$\mu = \frac{1}{N}\sum_{i=1}^{N} d_i, \qquad d_i = y_i - o_i,$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} d_i^{\,2}},$$

$$\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N} \left(d_i - \mu\right)^2},$$

$$R = \frac{\sum_{i=1}^{N}\left(y_i-\bar{y}\right)\left(o_i-\bar{o}\right)}{\sqrt{\sum_{i=1}^{N}\left(y_i-\bar{y}\right)^2}\sqrt{\sum_{i=1}^{N}\left(o_i-\bar{o}\right)^2}},$$

where $y_i$ is the modelled value of the i-th sample (N is the number of samples), $o_i$ is the corresponding reference value, and $d_i$ is the difference between the modelled and reference values, with the overbar representing the temporal mean of a variable.
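The four metrics above translate directly into NumPy; the example values below are hypothetical.

```python
import numpy as np

def metrics(y, o):
    """Bias, RMSE, error standard deviation, and temporal correlation
    between modelled values y and reference (observed) values o."""
    d = y - o
    bias = d.mean()                                    # mu
    rmse = np.sqrt(np.mean(d ** 2))                    # RMSE
    sigma = np.sqrt(np.sum((d - bias) ** 2) / (len(d) - 1))  # unbiased RMSE
    r = np.corrcoef(y, o)[0, 1]                        # Pearson correlation
    return bias, rmse, sigma, r

y = np.array([300.0, 320.0, 340.0, 360.0])  # modelled DLR (W/m^2), hypothetical
o = np.array([295.0, 325.0, 335.0, 365.0])  # observed DLR (W/m^2), hypothetical
bias, rmse, sigma, r = metrics(y, o)
```

Note that for a vanishing bias, σ and the RMSE nearly coincide, differing only through the N − 1 normalization.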

3. Results

3.1. Model Evaluation

The models were evaluated considering the whole DLR observational dataset between 2004 and 2019; statistics obtained for the independent datasets used in model training and verification are shown in Appendix B. Since there is a large range of the temporal coverage among the stations, the evaluation was performed following two approaches: (i) merging all station data before the evaluation, computing the overall metrics, and displaying the results as density scatter plots; and (ii) computing the evaluation metrics for each station independently, displaying the distributions of the metrics as boxplots.
The overall performance of the models, i.e., merging all stations before the evaluation, is depicted in Figure 2. The corresponding density scatter plot for each model and sky condition (i.e., clear, cloudy, and all-sky) shows the model DLR as a function of the observations, along with the different metrics. The absolute biases are always below 2 W·m−2 for both LSA and MARS models. This is expected, since the models’ training aims at the minimization of systematic differences. For ERA5, biases are also small under clear sky, but they grow to −14.54 W·m−2 in cloudy conditions, resulting in an all-sky bias of −5.25 W·m−2. The root mean square error is dominated by the error variability (standard deviation of the error) in all models. This can be primarily attributed to temporal/spatial variability that is not captured by ERA5 or by the ERA5 predictors used in LSA and MARS. In terms of RMSE, MARS has the best performance in all sky conditions, with an all-sky error of 18.76 W·m−2, followed by LSA with 20.24, ERA5 with 22.08, and LSA_OPER with 23.55 W·m−2. Moreover, the linear correlations are always above 0.91 in all models and sky conditions, with MARS showing a consistently better performance, although differences are small.
The performance of each model can also be assessed considering different DLR ranges for all-sky conditions. The following ranges were established: (UL) upper limit, for values above 400 W·m−2, with a total of 100,521 samples; (ML) middle limit, for values between 200 and 400 W·m−2, with a total of 1,571,217 samples; and (LL) lower limit, for values below 200 W·m−2, with a total of 65,888 samples. Figure 1a gives a reasonable overview of the actual conditions represented by those ranges of DLR: values above 400 W·m−2 occur under warm and very moist conditions, such as those found in the tropics, while the other end of the range, with DLR below 200 W·m−2, is only found under very cold and dry conditions, such as high-latitude winters. Table 3 includes a summary of the results for all-sky conditions considering all the data (ALL) and the median (MED) of the metrics distribution when computed for each station. As complementary material to these results, detailed station-by-station statistics under all, clear, and cloudy-sky conditions are provided in Appendix C (Table A5, Table A6 and Table A7, respectively). The highest performance of all models is found within the “middle” range, with metrics similar to those computed for the entire dataset. This is expected due to the higher sampling, which impacts the training of the models prior to the evaluation. On the other hand, focusing on the upper and lower limits, a clear reduction of all models’ performance is found. MARS and LSA underestimate the most extreme conditions, namely at higher values of DLR, while LSA_OPER and ERA5 show a systematic underestimation of DLR in all conditions, with the exception of the ERA5 overestimation at lower values. The large biases of MARS and LSA in the upper and lower ranges lead to high RMSE, with LSA_OPER showing a better overall performance.
This is likely associated with the small sampling of these extreme conditions in the training dataset initially used. Despite the limitation in extreme conditions, these results are favorable to the MARS model. When pooling all stations together, the results are dominated by the stations with larger temporal extent. This could potentially hide some problematic stations (or regions), which is partially addressed in the following analysis.
The performance of each model in estimating hourly DLR in each station can be assessed through the distributions of the various metrics displayed, as shown in Figure 3 boxplots for all, clear, and cloudy-sky conditions (i.e., left, middle, and right column, respectively). Each boxplot has a reference at the top, corresponding to the median value (also shown in Table 3) found for each error metric (i.e., bias, standard deviation, RMSE and correlation coefficient) and each model. Additionally, Figure 3 shows the same boxplots for the LSA* and MARS* models, which will be discussed in the next subsection for the assessment of the cloud information in DLR estimation. The results are qualitatively consistent with the previous analysis, when all data was merged, with MARS always showing better adjustments to observations (being followed by LSA, LSA_OPER, and ERA5). However, quantitatively, the median of station metrics differs from the metric considering all the data. A clear example is the temporal correlations in cloudy conditions with median values ranging between 0.86 for MARS and 0.82 for ERA5 (Figure 3l), which varied between 0.94 for MARS (Figure 2c) and 0.92 for ERA5 (Figure 2l) when considering the full data. Similarly, for the RMSE, the all-sky median varies between 16.96 in MARS and 20.65 W·m−2 in ERA5 (Figure 3g), while it varied between 18.76 in MARS (Figure 2a) and 22.08 W·m−2 in ERA5 (Figure 2j) when considering the full data. Moreover, the graphical display of the metrics distribution also allows to clearly identify a better performance in clear conditions (RMSE and correlation) when compared with cloudy conditions in all models. This is associated with the different radiative impact of clouds, in particular cloud base, which is not considered in the LSA and MARS models, and limitations due to model uncertainty in ERA5. 
Finally, it is worth noting the presence of outliers in all estimates, due to several factors that can degrade model accuracy at a group of stations, leading to larger deviations from observations, as shown next.
In addition to these results, and as a parallel validation of the MARS model, we used a set of (52) stations from an independent network with DLR observations from FLUXNET2015 [45]. The results, shown in detail in Appendix D, demonstrate the consistency of the MARS model between the different networks. Similarly to Table 3, in Table A9 it is possible to observe that, despite an overall error increase in all models' metrics, MARS has the best performance. The best scores are found in the “middle range” for all models, where MARS presents the lowest bias and RMSE of 0.06 and 18.32 W·m−2, respectively. The same behavior is observed when considering all data from FLUXNET2015, as well as when using data from each individual station. As previously noted, larger errors are also found at the lower and upper limits in all models. It is important to note that such results, besides reinforcing the proposed methodology, suggest BSRN observations are more appropriate for the MARS training (and validation) within the MSG-disk than the FLUXNET2015 network. BSRN operates exclusively for continuous radiation measurements at the surface, where most sites provide both downward longwave and shortwave fluxes, following high standards in terms of instrument calibration and observation quality checks [41], while FLUXNET2015 targets a broader set of observations aimed at characterizing exchanges of energy, water, and carbon between the surface and the atmosphere, where available radiation plays an important role. Within FLUXNET2015, and despite the quality checks performed on measurements, the availability of a complete set of measured variables (including longwave and shortwave radiation fluxes) is considered crucial [45]; across-variable quality checks are regularly performed, but the BSRN standards for radiation flux observations may not always be followed.
In contrast to BSRN, the geographical distribution of FLUXNET2015 sites with acceptable-quality radiation fluxes within the MSG-disk is limited to Europe. These aspects are confirmed by the FLUXNET2015 validation, as reflected in the overall error increase in all models.
The evaluation procedure continues, now focusing on different case studies to highlight several aspects (positive and negative) of the different models. As previously mentioned, there are stations (most noticeably GVN, SMS, and SON) at which model estimates deviate further from observations. On the other hand, there are also stations (e.g., CAR and TAM) at which the models have an overall good correspondence with observations. The examples presented in Figure 4 depict the behavior of each model during a 36-h period at such stations, which also include the NIM station, due to particular atmospheric effects that occur in the region. The DLR time series of each station are shown at the hourly resolution for the different models and observations, together with the cloud information (in the bottom subplot) from ERA5 (tcc) and MSG (cf). For the best-performing cases (Figure 4a,b), the MARS model has the best adjustment to observations, while ERA5 produces the largest deviations. A suitable example is the CAR station (Figure 4a), where a good relation is found between the DLR variability of the observations and the MSG cf, reflected in the MARS, LSA, and LSA_OPER simulations, while the ERA5 DLR shows some deviations associated with tcc variability. At the TAM station (Figure 4b), the reduced cloud variability clearly leads to lower deviations between models and observations, with DLR values between 250–350 W·m−2 during this time of the year. When analyzing the NIM station (Figure 4c), all models underestimate DLR. Despite a slightly higher deviation in comparison with the LSA estimates, MARS shows a smoother variation than LSA, closer to the observed behavior. Regarding the worst-performing cases (Figure 4d–f), significant deviations are observed in all models. At the GVN station (Figure 4d), all models deviated from the observations, missing the increase in DLR at the start of the period and underestimating DLR in the following hours.
Such behavior can be explained by the fact that GVN is in Antarctica, at a very high latitude, near the MSG-disk limit (close to 80°), posing significant challenges to the identification of cloudy pixels under very high view angles and under circumstances that make it difficult to separate the signature of clouds from those of snow or ice in SEVIRI/MSG observations. At the SON and SMS stations, an overall overestimation of the observations by the models is visible, particularly at SMS (Figure 4f). In the case of the SON station, the measuring equipment is located at a relatively high altitude (about 3109 m), which, similarly to the Izaña station in Tenerife, may measure DLR values above clouds instead of recording values below cloud-base height. At SMS, the frequent occurrence of stratiform and shallow convective clouds [56] can lead to larger deviations, since DLR is more difficult to model under such conditions.

3.2. Impact of Satellite Information

Following the previous results describing the MARS and LSA performances, the added value of the satellite cloud information in the calculation of DLR in both the MARS and LSA algorithms is clearly seen in the MARS* and LSA* results (Figure 3). In comparison with all the other models, MARS* and LSA* stand out with an overall error increase in all metrics. This can be primarily associated with cloud misrepresentation in ERA5. However, MARS* still performs slightly better than LSA*, although with small differences. For instance, for all-sky conditions, correlations of 0.87 and 0.86 are found for MARS* and LSA*, respectively, while 0.88 is obtained with ERA5 (Figure 3j). These deviations are generally higher than those of the ERA5 estimates because both the MARS and LSA algorithms combine the clear and cloudy components linearly with tcc, which is not the case in ERA5. Following the results obtained for the different DLR ranges in the remaining models, a similar behavior of the LSA* and MARS* models is observed in Table 4. Despite an overall error increase due to the use of tcc to calculate DLR, the lowest deviations and highest correlations (0.88 and 0.89, respectively) occur in the “middle” range.

4. Discussion

The present work focuses on the estimation of DLR fluxes at the surface using MARS combined with hourly observations of DLR (from BSRN and ARM stations), ERA5 variables (tcwv, t2m, and d2m), and the MSG cf. The fact that ground and remotely sensed observations are used for model training under different sky conditions provides a novel approach to estimate DLR. As with all NWP models, despite an overall good result in comparison with the other model estimates, the proposed MARS model also has a few limitations. The main source of uncertainty is related to the adopted training procedure, particularly to the 23 ground stations used, which differ in data availability. This means that stations with larger samples induce a local bias dependency. Moreover, the selected stations are mainly distributed in Europe, which also creates a regional dependency. When using a spatiotemporally independent set of 52 ground stations from FLUXNET2015 to validate the MARS model over a period of about 11 years, it was possible to observe that, regardless of its limitations, the proposed methodology is consistent. Most of the FLUXNET2015 stations used are located in Europe (a total of 48 stations), which limits a more global assessment of the results. Nevertheless, the MARS model continues to demonstrate an overall better estimation of DLR when compared with the remaining models. Furthermore, we should keep in mind that in situ observations may also be subject to significant uncertainties. Other sources of error can result from the cf information used for the sky classification, whose quality depends on the satellite interpretation of clouds and the associated errors.
Considering all the available data, the validation of the MARS model (Figure 2) shows that, generally, and despite its limitations, using measured data (a total of about 5.86 years) for training produces better adjustments to observed values. This is the case for the MARS (Figure 2a–c) and LSA (Figure 2d–f) model estimates. In particular, MARS provides the best performance under the different sky conditions as a result of using an automatic piecewise regression method instead of the least-squares fitting method used in the LSA-SAF original algorithm. In comparison, the worst results are found with ERA5. As previously mentioned, the ERA5 negative bias in cloudy conditions might be partially explained by problems in the representation of clouds and their radiative effect. It is worth noting that this effect is observed not only during cloudy-sky periods (where larger deviations are found) but also during clear-sky periods (with a very low bias of −0.46 W·m−2), in which ERA5 likely assumes cloud occurrence in some situations when none is observed, leading to an overestimation that pulls the bias close to zero. Moreover, the separation between clear and cloudy conditions is performed using the satellite information, which can introduce some inconsistencies with respect to the actual observations (due to the satellite footprint and uncertainties in cloud detection) and the ERA5 atmospheric conditions (e.g., cloud-base errors). Therefore, the interpretation of the models' evaluation in clear versus cloudy conditions is not straightforward. Nevertheless, in the absence of accurate cloud information from NWP models, satellite information should be used instead of reanalysis data for model training purposes, as particularly shown by the improved DLR estimates of MARS in comparison with those found with MARS*.
An overall underestimation of DLR is similarly found with the LSA-SAF operational model (LSA_OPER), although with smaller deviations than in the ERA5 model. In this case, the poorer performance can be attributed to the original calibration carried out by Trigo et al. [13], where TIGR-like and MODTRAN-4 simulations (not observations) were used to calibrate the model parameters (Table A1). Additionally, a common feature of the original LSA-SAF algorithm is the ‘S’-shaped curve in its DLR estimates, as shown by the LSA_OPER results (Figure 2g–i), particularly visible under cloudy conditions. Since the plots of MARS, the newly calibrated LSA model, and ERA5 do not present this characteristic, it is likely an artifact introduced by the MODTRAN-4 estimates used in the fitting. It should be noted that, although MARS eliminates most of the previous error signatures of the LSA-SAF and ERA5 models, there is still room for improvement, particularly for cloudy conditions. In particular, how to further incorporate satellite observations related to, e.g., cloud type and cloud phase is still largely unexplored.
The results analyzed so far suggest an overall good performance of the considered models, but also reveal the presence of several outliers in all model estimates (Figure 3). Taking into account the information provided in Table A5, Table A6 and Table A7, significant deviations from observations are found at three stations (GVN, SMS, and SON), due to latitude or altitude effects, as well as measurement inaccuracies related to equipment malfunction. Figure 4 presents 36-h examples of the behavior of each model for a selected group of very different stations, chosen to represent the best and worst cases. Despite the good results at the CAR and TAM stations (Figure 4a,b), it is important to consider that the spatial sampling is not evenly distributed across the selected stations within the MSG-disk. In terms of spatial distribution, the use of observations for validation allows us to test the performance of the different model estimates over different climate regions, strengthening the validation of the proposed formulation. However, as previously mentioned, regional dependencies should be expected in regions with a higher number of stations (e.g., Europe), contributing to an overall bias reduction for the MARS and LSA models. Nevertheless, for the best cases, ERA5 continues to produce larger deviations due to deficiencies in cloud representation. When analyzing the NIM station (Figure 4c), a clear underestimation of the measured values (between 340–450 W·m−2) is produced by all models. This can be explained by the fact that the NIM station may be subject to high aerosol loads, usually desert dust, which can significantly increase the observed DLR compared with similar aerosol-free conditions and is not captured well by any model.
In particular, this station usually experiences a higher occurrence of extreme dust events (e.g., desert storms), resulting in larger deviations of the estimates from observations [44]. For the worst cases, the latitude effect in GVN (Figure 4d), the altitude effect in SON (Figure 4e), and the measurement inaccuracies in SMS (Figure 4f) result in very high errors between estimated and observed values, particularly in the former. At very high latitudes, LSA_OPER provides the largest deviation from observations, in accordance with one of the LSA-SAF model limitations stated by Trigo et al. [14], while the MARS and LSA models show a better approximation to observations, particularly MARS.

5. Conclusions

This work aimed at contributing a new and improved formulation for the estimation of downward long-wave radiation (DLR) at the surface. The new formulation combines hourly reanalysis, ground-based, and remotely sensed inputs to train a state-of-the-art machine learning algorithm based on multivariate adaptive regression splines (MARS). The use of satellite data not only allows better estimates of DLR with suitable temporal and spatial sampling under different sky conditions, but also provides wide spatial coverage at high resolution.
When compared with the Satellite Application Facility on Land Surface Analysis (LSA-SAF) algorithm, results showed that the MARS algorithm performs very well, providing better adjustments to observed DLR fluxes, with lower errors and higher correlations evident under all, clear, and cloudy-sky conditions. This is mainly because MARS replaces the least-squares fitting over a set of pre-defined atmospheric states implemented in the LSA-SAF with a more refined discretization that selects the best fit based on the maximum reduction in the sum-of-squares residual error. Systematic differences and an overall underestimation were found in both the LSA_OPER and ERA5 models, linked, respectively, to the original calibration with the Thermodynamic Initial Guess Retrieval (TIGR, [38]) atmospheric-profile database and the Moderate Resolution Atmospheric Transmittance and Radiance Code (MODTRAN-4, [37]) fluxes, and to the cloud representation by the total cloud cover retrieved from ERA5. The role of satellite information in the calculation of DLR was also evaluated using both the MARS and LSA models but considering ERA5 cloud information instead of the satellite cloud information to separate clear and cloudy situations (the MARS* and LSA* models). The results clearly showed the added value of using remotely sensed data instead of reanalysis cloud cover.
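The knot-selection idea behind MARS can be illustrated with a minimal sketch: a pair of mirrored hinge basis functions, max(0, ±(x − t)), is fitted by least squares at each candidate knot t, and the knot giving the maximum reduction in the residual sum of squares is retained. This toy single-variable, single-split search is only an illustration of the principle, not the trained model of the paper:

```python
import numpy as np

def hinge(x, knot, direction=+1):
    """MARS basis function: max(0, ±(x − knot))."""
    return np.maximum(0.0, direction * (x - knot))

def best_knot(x, y):
    """Greedy search for the hinge pair whose least-squares fit gives the
    maximum reduction in the residual sum of squares (RSS)."""
    best = None
    for t in np.unique(x)[1:-1]:  # interior candidate knots
        # Design matrix: intercept plus the two mirrored hinges at t
        A = np.column_stack([np.ones_like(x), hinge(x, t), hinge(x, t, -1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((A @ coef - y) ** 2)
        if best is None or rss < best[1]:
            best = (t, rss, coef)
    return best

# Toy data: piecewise-linear with a kink at x = 5
x = np.arange(10, dtype=float)
y = np.where(x < 5, 2.0 * x, 10.0 + 0.5 * (x - 5))
knot, rss, coef = best_knot(x, y)  # recovers the knot at x = 5
```

The full MARS procedure repeats this search over all predictors, adds basis functions forward, and then prunes them, but the selection criterion is the same RSS reduction.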
The evaluation analysis, performed within the MSG-disk (i.e., longitudes/latitudes within ±75°), showed that MARS provided the best results in comparison with the remaining models (LSA, LSA_OPER, and ERA5). In particular, the use of ground observations from the Baseline Surface Radiation Network (BSRN, [41]) and the Atmospheric Radiation Measurement (ARM, [42]) user facility to calibrate MARS led to improved adjustments with lower errors and higher correlations (in which sampling plays an important role). The validation showed that, when using all available data from the 23 stations, MARS obtains RMSEs of 18.76, 17.07, and 17.13 W·m−2 under all, clear, and cloudy conditions, respectively. Lower errors were found when considering the performance of the model at each measuring location, as shown by the median RMSE values for all, clear, and cloudy sky of 16.96, 15.44, and 16.00 W·m−2, respectively. Moreover, the systematic differences and overall underestimation exhibited by LSA_OPER and ERA5 were reduced or eliminated. The added value of using the satellite cloud information was assessed by comparison with estimates driven by the ERA5 total cloud cover, which showed an RMSE increase of 17%. Finally, the proposed methodology was further validated against independent observations gathered from 52 FLUXNET2015 ground stations over an 11-year period, showing that the MARS DLR estimates are closer to observations than those of the remaining models.
There is potential for using the proposed MARS formulation for operational purposes; however, a few improvements still need to be carried out in the future. These include: (i) the assessment of DLR estimates at the regional level, by producing regional maps and comparing MARS estimates with LSA-SAF product outputs (a fundamental step toward operationalizing MARS estimates); (ii) improvements of MARS estimates with enhanced input fields from ECMWF numerical weather prediction (e.g., increased resolution, better model physics, data assimilation); and (iii) other MARS model variants that make use of additional satellite products, such as measurements in thermal infrared bands and top-of-atmosphere radiances, as inputs for the training phase, similarly to Zhou et al. [23].
An application example for the estimation of hourly DLR values with the MARS model is made available in the Supplementary Materials, including Python code, the two calibrated MARS submodels (i.e., for clear and cloudy skies), and synthetic test data for a 24-h period.
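As a minimal sketch of how such an application combines the two submodels, the snippet below blends clear- and cloudy-sky estimates with the cloud fraction, following the all-sky combination described in Appendix A. The submodel outputs here are stand-in arrays: the on-disk format of the .sav files is not specified in this section, so deserializing them is left out and only the blending step is shown:

```python
import numpy as np

def blend_dlr(f_clear, f_cloudy, cf):
    """All-sky DLR as the cf-weighted sum of the clear- and cloudy-sky
    submodel outputs (the combination used in Equation (A4))."""
    cf = np.clip(np.asarray(cf, dtype=float), 0.0, 1.0)
    return cf * np.asarray(f_cloudy) + (1.0 - cf) * np.asarray(f_clear)

# Hypothetical submodel outputs standing in for the deserialized
# clear-sky and cloudy-sky MARS models applied to two hourly samples
f_clear = np.array([280.0, 300.0])   # W·m−2, clear-sky estimates
f_cloudy = np.array([330.0, 350.0])  # W·m−2, cloudy-sky estimates
cf = np.array([0.0, 0.5])            # MSG cloud fraction
dlr = blend_dlr(f_clear, f_cloudy, cf)  # → [280.0, 325.0]
```

For a fully clear pixel (cf = 0) the all-sky value reduces to the clear-sky estimate, and intermediate cloud fractions interpolate linearly between the two submodels.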

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs14071704/s1; Code S1: zip compressed archive with 3 files: (i) MARS clear-sky model: mars_bsrn_model_clear_sky.sav; (ii) MARS cloudy-sky model: bsrn_model_cloud_sky.sav and (iii) example python script to run the MARS model and calculate DLR at surface using synthetic input data: MARS_DLR_output_2020.py.

Author Contributions

Conceptualization, E.D. and I.F.T.; Methodology, E.D., I.F.T. and F.M.L.; Software, F.M.L.; Validation, F.M.L.; Formal Analysis, F.M.L.; Investigation, E.D., I.F.T. and F.M.L.; Resources, E.D. and I.F.T.; Data Curation, F.M.L.; Writing—Original Draft Preparation, F.M.L.; Writing—Review & Editing, F.M.L., E.D. and I.F.T.; Visualization, F.M.L.; Supervision, E.D. and I.F.T.; Project Administration, E.D.; Funding Acquisition, E.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was performed within the framework of the LSA-SAF (https://landsaf.ipma.pt/en/; accessed on 31 March 2022) project, funded by EUMETSAT, and by the European Union Horizon 2020 research and innovation program No 958927.

Data Availability Statement

Public and private datasets were analyzed in this study. However, data supporting the results are available from the corresponding author upon reasonable request.

Acknowledgments

The authors are thankful for the availability of Copernicus and ECMWF in providing the needed data extracted from ERA5 through the climate data store website (https://cds.climate.copernicus.eu; accessed on 31 March 2022), and access to the BSRN and ARM station data. F.M.L. acknowledges the funding by Fundação para a Ciência e a Tecnologia (FCT) grant number PTDC/CTA-MET/28946/2017 (CONTROL), and from European Union Horizon 2020 research and innovation program under grant agreement No 958927 (CoCO2).

Conflicts of Interest

The authors declare no conflict of interest. The sponsors had no role in the design, execution, interpretation, or writing of the study.

Appendix A. LSA-SAF Algorithm

In the LSA-SAF algorithm, DLR ($F$) is estimated through a bulk parameterization given by the following equations, as described by Trigo et al. [14]:

$$F = \sigma \, \epsilon_{sky} \, T_{sky}^{4},\tag{A1}$$

where $\sigma$ is the Stefan–Boltzmann constant, and $\epsilon_{sky}$ and $T_{sky}$ are the sky effective emissivity and the sky effective temperature, respectively. The former is given as a function of the total column water vapor (tcwv), as follows:

$$\epsilon_{sky} = 1 - \left[1 + \left(\frac{tcwv}{10}\right)\right] \exp\!\left(-\left(\alpha + \beta \, \frac{tcwv}{10}\right)^{m}\right),\tag{A2}$$

where $m$ takes the value that best adjusts to clear and cloudy conditions (0.5 and 1, respectively). For the latter, the following relation is used:

$$T_{sky} = T_{0} + \left(\delta \, \Delta d_{0} + \gamma\right),\tag{A3}$$

where $T_{0}$ is the 2-metre temperature corrected through the 2-metre observed dewpoint depression ($\Delta d_{0}$). The parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ in Equations (A2) and (A3) are fitted independently for cloudy-sky and clear-sky conditions, and the all-sky DLR is the sum of the clear ($F_{clear}$) and cloudy ($F_{cloudy}$) contributions weighted by the cloud fraction (cf):

$$DLR = cf \, F_{cloudy} + (1 - cf) \, F_{clear}.\tag{A4}$$

More information regarding the calibration procedure of the parameters in Equations (A2) and (A3) is presented by Trigo et al. [13]. The method follows a piecewise regression, independent for clear and cloudy conditions, considering three classes of profiles: (i) dry cold, with tcwv ≤ 10 mm and t2m < 270 K; (ii) dry warm, with tcwv ≤ 10 mm and t2m > 270 K; and (iii) moist, with tcwv > 8 mm. For the calibration phase of the operational LSA-SAF algorithm (LSA_OPER), atmospheric profiles from the TIGR-like database (namely tcwv, t2m, and d2m) and fluxes simulated with MODTRAN-4 were used. Moreover, the separation of clear and cloudy skies in the calibration database considers the total cloud cover (tcc), where clear and cloudy conditions are assigned for tcc = 0 and tcc > 0.9, respectively; a piecewise regression method is then applied to each set of clear and cloudy conditions separately. The parameters used by the current LSA-SAF operational algorithm (LSA_OPER) are presented in Table A1.
Table A1. Calibrated parameters for the LSA_OPER model [14], i.e., the LSA-SAF operational algorithm that makes use of the TIGR-like database (1992–1993) [38] for different atmospheric profiles under clear and cloudy sky.

| Profiles | Clear-sky α | β | γ | δ | Cloudy-sky α | β | γ | δ |
|---|---|---|---|---|---|---|---|---|
| Dry Cold | 0.653 | 4.796 | 1.253 | −0.739 | 0.968 | 2.257 | −0.236 | −0.877 |
| Dry Warm | 0.704 | 3.720 | 1.655 | −0.151 | 3.446 | 0.369 | 0.278 | −0.443 |
| Moist | 0.587 | 3.344 | 1.686 | −0.203 | 3.446 | 0.369 | 0.278 | −0.443 |
Table A2. Calibrated parameters for the LSA model using ERA5 inputs and observed DLR (BSRN and ARM) for the different atmospheric profiles under clear and cloudy sky.

| Profiles | Clear-sky α | β | γ | δ | Cloudy-sky α | β | γ | δ |
|---|---|---|---|---|---|---|---|---|
| Dry Cold | 2.289 | 4.992 | −2.368 | −1.129 | 1.804 | 3.026 | 0.436 | −0.991 |
| Dry Warm | 0.865 | 3.701 | 0.532 | −0.135 | 3.229 | 0.324 | 0.737 | −0.562 |
| Moist | 1.466 | 3.051 | 0.5709 | −0.187 | 3.229 | 0.324 | 0.737 | −0.562 |
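Equations (A1)–(A4) can be sketched in code as follows, using the LSA_OPER dry-cold parameters from Table A1. This is an illustrative implementation, not the operational code: the sample inputs are invented, tcwv is assumed in mm and temperatures in K, and the dewpoint depression is taken as Δd₀ = t2m − d2m:

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan–Boltzmann constant (W·m−2·K−4)

def dlr_flux(tcwv, t2m, d2m, alpha, beta, gamma, delta, m):
    """One sky regime of the LSA-SAF bulk parameterization (Eqs. A1–A3)."""
    w = tcwv / 10.0  # tcwv in mm scaled by 10, as in Eq. (A2)
    emissivity = 1.0 - (1.0 + w) * np.exp(-(alpha + beta * w) ** m)
    # Eq. (A3): 2-m temperature corrected via the dewpoint depression
    t_sky = t2m + delta * (t2m - d2m) + gamma
    return SIGMA * emissivity * t_sky ** 4

# Dry-cold profile, LSA_OPER parameters (Table A1); m = 0.5 clear, 1 cloudy
f_clear = dlr_flux(5.0, 260.0, 255.0, 0.653, 4.796, 1.253, -0.739, 0.5)
f_cloudy = dlr_flux(5.0, 260.0, 255.0, 0.968, 2.257, -0.236, -0.877, 1.0)
cf = 0.3                                    # cloud fraction
dlr = cf * f_cloudy + (1.0 - cf) * f_clear  # Eq. (A4), all-sky DLR
```

With these sample inputs, the cloudy-sky flux exceeds the clear-sky flux, consistent with the warming effect of clouds on DLR.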

Appendix B. Models Training

For the training of both the LSA and MARS models (i.e., for clear and cloudy conditions), the full dataset was divided into two components: training and verification. The training dataset was constructed by randomly selecting 40% of the full time series of each station, limited to a maximum of 6 months of data per station, with the remaining data used as the verification dataset. This corresponded to a total of 51,386 (5.87 years) and 932,282 (106.42 years) hourly samples for the training and verification periods, respectively. The bias and root mean square error (RMSE) for clear and cloudy conditions in the training and verification samples are presented in Table A3 and Table A4, respectively. The results show that MARS always performs better than LSA under both clear- and cloudy-sky conditions. During the training stage, MARS presents a very small bias (close to zero), whereas LSA shows a relatively large bias, with a larger deviation in the clear-sky training. The RMSE, besides being lower for MARS (although with smaller differences between models than found for the bias), does not vary significantly from clear to cloudy sky. A similar behavior is observed in the verification phase, despite an overall increase in the bias in both models, which is related to the larger verification sample and the uneven data availability across stations.
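The split described above (a random 40% of each station's hourly series, capped at roughly six months of data per station) can be sketched as follows; the function name, the cap of 6 × 30 × 24 hourly samples, and the toy record length are illustrative assumptions:

```python
import numpy as np

MAX_TRAIN_HOURS = 6 * 30 * 24  # ≈ six months of hourly samples (assumed cap)

def split_station(n_hours, train_frac=0.4, seed=0):
    """Randomly pick min(40% of the series, ~six months) of one station's
    hourly indices for training; the remainder goes to verification."""
    rng = np.random.default_rng(seed)
    n_train = min(int(train_frac * n_hours), MAX_TRAIN_HOURS)
    train_idx = rng.choice(n_hours, size=n_train, replace=False)
    verif_idx = np.setdiff1d(np.arange(n_hours), train_idx)
    return np.sort(train_idx), verif_idx

# A station with two years of hourly data: 40% would exceed the cap,
# so the training sample is limited to about six months
train, verif = split_station(2 * 365 * 24)
```

Pooling the per-station index sets then yields the training and verification datasets whose sizes are reported above.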
Table A3. Bias and root mean square error of LSA-SAF and MARS for the training and verification samples (#) in clear-sky conditions. Units are in W·m−2.

| Model | Training (#31820) bias | Training RMSE | Verification (#541360) bias | Verification RMSE |
|---|---|---|---|---|
| LSA | 0.24 | 20.41 | −0.27 | 18.99 |
| MARS | −0.00 | 19.71 | 0.12 | 18.00 |
Table A4. Bias and root mean square error of LSA-SAF and MARS for the training and verification samples (#) in cloudy-sky conditions. Units are in W·m−2.

| Model | Training (#19566) bias | Training RMSE | Verification (#390922) bias | Verification RMSE |
|---|---|---|---|---|
| LSA | 1.20 | 21.71 | −1.06 | 19.95 |
| MARS | −0.00 | 19.36 | −1.55 | 18.19 |

Appendix C. Evaluation Detailed Results

The following tables comprise all the statistical error metrics obtained. Table A5, Table A6 and Table A7 show the scores for each station in all, clear, and cloudy-sky conditions (respectively).
Table A5. Error metrics between the models (MARS, LSA, LSA_OPER, and ERA5) and the measuring stations (all-sky conditions) for 2004–2019. Bias (µ), standard deviation (σ), and root mean square error (RMSE) are in W·m−2; the temporal correlation coefficient (R) is given between 0–1.

| Station | µ MARS | µ LSA | µ LSA_OPER | µ ERA5 | σ MARS | σ LSA | σ LSA_OPER | σ ERA5 | RMSE MARS | RMSE LSA | RMSE LSA_OPER | RMSE ERA5 | R MARS | R LSA | R LSA_OPER | R ERA5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BRB | 5.72 | 5.77 | 10.33 | 1.46 | 9.36 | 10.10 | 10.84 | 11.22 | 13.26 | 14.03 | 17.09 | 14.99 | 0.92 | 0.91 | 0.91 | 0.88 |
| BUD | 0.40 | 1.14 | 6.66 | −1.80 | 7.93 | 7.44 | 8.13 | 9.42 | 9.98 | 9.49 | 12.22 | 12.61 | 0.91 | 0.92 | 0.91 | 0.86 |
| CAB | −0.41 | −0.39 | 0.00 | −7.09 | 11.84 | 13.29 | 13.65 | 13.96 | 15.60 | 17.33 | 17.99 | 20.65 | 0.93 | 0.91 | 0.91 | 0.89 |
| CAM | 0.44 | 0.87 | 2.79 | −4.58 | 15.38 | 16.61 | 15.88 | 16.96 | 19.87 | 21.39 | 21.03 | 23.33 | 0.87 | 0.85 | 0.85 | 0.82 |
| CAR | 3.60 | 4.09 | 5.86 | −8.04 | 9.74 | 10.60 | 11.09 | 11.38 | 13.57 | 14.89 | 15.98 | 17.89 | 0.96 | 0.95 | 0.95 | 0.94 |
| CNR | 0.34 | 3.96 | 4.75 | −0.11 | 12.84 | 13.43 | 14.22 | 15.83 | 16.88 | 18.52 | 20.01 | 21.07 | 0.91 | 0.90 | 0.89 | 0.86 |
| DAA | 5.71 | 5.45 | 6.38 | −4.24 | 11.62 | 12.69 | 12.77 | 10.46 | 16.27 | 17.44 | 18.07 | 16.28 | 0.94 | 0.93 | 0.93 | 0.93 |
| ENA | −8.78 | −7.89 | −3.89 | −6.24 | 14.35 | 14.34 | 13.96 | 16.78 | 20.12 | 20.15 | 19.07 | 22.69 | 0.84 | 0.84 | 0.83 | 0.77 |
| FLO | −6.96 | −7.09 | −1.79 | −7.28 | 10.80 | 10.95 | 11.00 | 12.43 | 15.73 | 15.92 | 14.73 | 18.14 | 0.91 | 0.91 | 0.91 | 0.88 |
| GAN | 3.52 | 5.90 | 14.38 | −9.07 | 15.18 | 18.55 | 18.99 | 13.87 | 21.77 | 25.44 | 29.06 | 22.53 | 0.90 | 0.87 | 0.86 | 0.91 |
| GOB | −7.61 | −6.99 | −6.68 | −8.86 | 11.35 | 12.17 | 12.72 | 11.62 | 18.72 | 19.07 | 19.53 | 19.54 | 0.88 | 0.88 | 0.88 | 0.88 |
| GVN | −1.70 | −7.07 | −34.06 | −10.88 | 18.49 | 20.28 | 21.99 | 18.75 | 23.23 | 26.68 | 42.63 | 27.26 | 0.88 | 0.85 | 0.85 | 0.86 |
| NIM | −9.96 | −9.00 | −4.59 | −13.84 | 11.88 | 12.01 | 14.58 | 14.63 | 18.48 | 18.04 | 18.72 | 23.71 | 0.93 | 0.94 | 0.94 | 0.92 |
| LIN | 3.59 | 2.38 | 0.78 | −3.88 | 12.76 | 14.03 | 15.68 | 13.88 | 16.96 | 18.14 | 20.34 | 20.05 | 0.93 | 0.91 | 0.90 | 0.90 |
| PAL | 0.69 | 0.14 | 0.77 | −4.62 | 12.27 | 13.43 | 14.05 | 14.20 | 16.19 | 17.49 | 18.60 | 20.87 | 0.92 | 0.91 | 0.90 | 0.88 |
| PAR | −1.20 | −3.02 | 4.90 | −4.58 | 9.09 | 8.34 | 8.28 | 10.38 | 11.29 | 10.72 | 11.30 | 13.45 | 0.78 | 0.82 | 0.83 | 0.71 |
| PAY | 0.60 | 5.22 | 2.72 | −6.25 | 13.02 | 13.18 | 15.58 | 17.01 | 17.24 | 18.67 | 22.06 | 23.97 | 0.92 | 0.91 | 0.89 | 0.85 |
| PTR | 7.89 | 9.75 | 16.61 | 8.06 | 9.79 | 10.17 | 10.15 | 12.87 | 14.87 | 16.21 | 21.04 | 18.69 | 0.86 | 0.85 | 0.86 | 0.77 |
| SBO | −6.85 | −4.39 | −4.37 | −5.81 | 13.82 | 14.13 | 14.90 | 13.71 | 19.44 | 19.66 | 20.70 | 19.52 | 0.88 | 0.87 | 0.85 | 0.88 |
| SMS | 22.07 | 21.70 | 25.10 | 21.12 | 20.34 | 22.17 | 21.93 | 20.40 | 35.04 | 36.19 | 38.23 | 34.65 | 0.85 | 0.83 | 0.83 | 0.85 |
| SON | 9.73 | 1.81 | −9.88 | −5.08 | 24.98 | 27.29 | 28.44 | 26.26 | 32.76 | 33.03 | 35.67 | 33.05 | 0.81 | 0.78 | 0.78 | 0.79 |
| TAM | −6.15 | −8.79 | −6.82 | −14.05 | 10.55 | 12.64 | 11.33 | 10.41 | 14.95 | 18.62 | 16.14 | 20.36 | 0.96 | 0.94 | 0.95 | 0.95 |
| TOR | −2.67 | −2.30 | −8.38 | −7.80 | 13.91 | 14.73 | 18.50 | 14.38 | 18.39 | 19.35 | 25.14 | 21.86 | 0.93 | 0.92 | 0.90 | 0.91 |
Table A6. Error metrics between the models (MARS, LSA, LSA_OPER, and ERA5) and the measuring stations (clear-sky conditions) for 2004–2019. Bias (µ), standard deviation (σ), and root mean square error (RMSE) are in W·m−2; the temporal correlation coefficient (R) is given between 0–1.

| Station | µ MARS | µ LSA | µ LSA_OPER | µ ERA5 | σ MARS | σ LSA | σ LSA_OPER | σ ERA5 | RMSE MARS | RMSE LSA | RMSE LSA_OPER | RMSE ERA5 | R MARS | R LSA | R LSA_OPER | R ERA5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BRB | 4.29 | 5.41 | 5.03 | 5.36 | 6.81 | 7.45 | 8.49 | 8.05 | 9.84 | 11.00 | 11.91 | 12.22 | 0.91 | 0.91 | 0.90 | 0.89 |
| BUD | −4.58 | −2.89 | 0.89 | −1.71 | 6.12 | 6.46 | 6.89 | 7.79 | 8.92 | 8.48 | 8.71 | 11.06 | 0.94 | 0.94 | 0.94 | 0.89 |
| CAB | −1.45 | −1.53 | −2.19 | 2.10 | 6.47 | 6.54 | 6.85 | 12.02 | 9.52 | 9.77 | 10.15 | 16.40 | 0.97 | 0.97 | 0.97 | 0.92 |
| CAM | 3.44 | 2.32 | 1.88 | 8.64 | 12.99 | 13.11 | 13.20 | 14.97 | 19.02 | 19.03 | 18.99 | 22.86 | 0.84 | 0.83 | 0.83 | 0.80 |
| CAR | 2.27 | 2.68 | 2.23 | −3.58 | 6.19 | 5.99 | 6.73 | 7.99 | 8.31 | 8.24 | 8.96 | 11.61 | 0.99 | 0.99 | 0.98 | 0.97 |
| CNR | 1.58 | 1.96 | 1.27 | 10.87 | 8.55 | 8.62 | 8.97 | 11.63 | 12.49 | 12.71 | 12.98 | 18.88 | 0.96 | 0.96 | 0.96 | 0.93 |
| DAA | 6.10 | 5.01 | 3.14 | −2.00 | 10.80 | 11.11 | 11.18 | 7.69 | 15.44 | 15.34 | 14.97 | 12.55 | 0.93 | 0.93 | 0.93 | 0.94 |
| ENA | −9.59 | −10.46 | −8.86 | 2.51 | 15.21 | 15.40 | 15.02 | 15.77 | 21.49 | 22.05 | 21.12 | 21.00 | 0.83 | 0.83 | 0.83 | 0.79 |
| FLO | −7.48 | −7.56 | −5.23 | −0.83 | 10.82 | 10.80 | 11.22 | 12.89 | 16.74 | 16.82 | 16.36 | 16.93 | 0.91 | 0.91 | 0.91 | 0.89 |
| GAN | −9.00 | −6.89 | −5.11 | −7.03 | 13.76 | 14.28 | 15.30 | 14.54 | 21.73 | 21.45 | 21.87 | 21.50 | 0.86 | 0.86 | 0.86 | 0.86 |
| GOB | −7.55 | −6.56 | −7.70 | −7.22 | 10.32 | 11.16 | 11.90 | 10.15 | 17.87 | 18.20 | 19.14 | 17.24 | 0.88 | 0.87 | 0.87 | 0.89 |
| GVN | −2.98 | −11.82 | −28.34 | −6.23 | 20.37 | 23.89 | 23.62 | 17.12 | 25.50 | 30.77 | 40.02 | 25.13 | 0.73 | 0.64 | 0.65 | 0.75 |
| NIM | −13.80 | −10.86 | −10.66 | −15.68 | 10.83 | 10.99 | 13.70 | 12.90 | 19.94 | 18.11 | 20.50 | 23.45 | 0.94 | 0.95 | 0.95 | 0.93 |
| LIN | 4.05 | 4.10 | 2.93 | 6.48 | 6.86 | 7.01 | 6.95 | 10.75 | 10.68 | 10.92 | 10.46 | 16.54 | 0.98 | 0.98 | 0.98 | 0.95 |
| PAL | 0.61 | 0.36 | −0.22 | 3.02 | 7.72 | 7.91 | 8.05 | 11.94 | 11.50 | 11.74 | 11.94 | 17.16 | 0.96 | 0.96 | 0.96 | 0.92 |
| PAR | 1.53 | −1.27 | 6.95 | 4.78 | 8.12 | 6.94 | 7.22 | 8.07 | 10.92 | 9.86 | 12.16 | 11.60 | 0.67 | 0.70 | 0.70 | 0.65 |
| PAY | 2.96 | 3.34 | 2.17 | 6.17 | 7.98 | 7.87 | 8.05 | 13.92 | 13.10 | 13.11 | 13.03 | 19.67 | 0.96 | 0.96 | 0.96 | 0.91 |
| PTR | 8.74 | 12.74 | 16.27 | 12.14 | 7.34 | 7.70 | 8.48 | 11.04 | 13.41 | 16.68 | 19.92 | 18.84 | 0.85 | 0.85 | 0.85 | 0.78 |
| SBO | −7.54 | −6.36 | −7.90 | −3.65 | 12.86 | 12.83 | 13.26 | 11.92 | 18.61 | 18.30 | 19.20 | 17.19 | 0.90 | 0.90 | 0.89 | 0.91 |
| SMS | 25.26 | 24.99 | 25.40 | 29.40 | 19.07 | 19.65 | 19.67 | 19.20 | 37.13 | 37.40 | 37.64 | 39.75 | 0.79 | 0.78 | 0.79 | 0.81 |
| SON | 12.64 | 8.16 | 0.12 | 9.71 | 25.32 | 26.27 | 26.30 | 26.00 | 35.46 | 34.25 | 33.05 | 34.50 | 0.71 | 0.71 | 0.72 | 0.74 |
| TAM | −2.67 | −4.41 | −7.61 | −11.25 | 8.53 | 8.93 | 9.33 | 7.14 | 11.43 | 12.17 | 14.15 | 15.20 | 0.96 | 0.96 | 0.96 | 0.97 |
| TOR | −2.22 | −2.36 | −5.75 | 1.69 | 10.64 | 10.63 | 10.91 | 13.48 | 16.22 | 16.08 | 17.27 | 19.24 | 0.95 | 0.95 | 0.96 | 0.94 |
Table A7. Error metrics between the models (MARS, LSA, LSA_OPER, and ERA5) and the measuring stations (cloudy-sky conditions) for 2004–2019. Bias (µ), standard deviation (σ), and root mean square error (RMSE) are in W·m−2; the temporal correlation coefficient (R) is given between 0–1.

| Station | µ MARS | µ LSA | µ LSA_OPER | µ ERA5 | σ MARS | σ LSA | σ LSA_OPER | σ ERA5 | RMSE MARS | RMSE LSA | RMSE LSA_OPER | RMSE ERA5 | R MARS | R LSA | R LSA_OPER | R ERA5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BRB | 3.30 | 1.59 | 9.20 | −9.94 | 8.37 | 9.04 | 9.33 | 11.73 | 11.44 | 12.08 | 15.33 | 17.96 | 0.71 | 0.64 | 0.62 | 0.62 |
| BUD | 5.35 | 4.01 | 11.75 | −6.80 | 7.96 | 7.30 | 7.03 | 10.53 | 11.33 | 10.00 | 14.66 | 14.86 | 0.83 | 0.83 | 0.84 | 0.63 |
| CAB | −2.74 | −2.99 | −3.17 | −15.53 | 10.19 | 11.57 | 12.34 | 13.88 | 14.12 | 15.97 | 17.61 | 23.71 | 0.90 | 0.88 | 0.87 | 0.87 |
| CAM | −0.65 | 0.19 | 2.94 | −13.56 | 12.36 | 13.67 | 12.98 | 15.42 | 17.20 | 18.85 | 18.17 | 24.27 | 0.84 | 0.83 | 0.83 | 0.83 |
| CAR | 1.09 | 0.67 | 4.45 | −17.54 | 11.11 | 12.78 | 12.65 | 14.36 | 15.38 | 17.50 | 18.16 | 25.45 | 0.90 | 0.87 | 0.86 | 0.87 |
| CNR | −2.92 | 3.69 | 4.49 | −12.52 | 11.33 | 12.5 | 13.70 | 15.38 | 15.80 | 17.54 | 20.34 | 23.32 | 0.86 | 0.83 | 0.81 | 0.83 |
| DAA | −0.10 | 0.08 | 8.82 | −17.90 | 13.54 | 17.71 | 16.71 | 15.34 | 18.20 | 23.23 | 23.57 | 26.65 | 0.90 | 0.85 | 0.86 | 0.89 |
| ENA | −10.56 | −7.51 | −0.98 | −17.58 | 10.98 | 9.55 | 9.61 | 14.11 | 17.84 | 15.21 | 13.36 | 25.28 | 0.81 | 0.84 | 0.84 | 0.75 |
| FLO | −7.35 | −8.22 | −1.46 | −13.58 | 8.25 | 7.53 | 7.49 | 10.73 | 12.82 | 12.78 | 9.86 | 19.42 | 0.86 | 0.88 | 0.88 | 0.81 |
| GAN | 6.77 | 12.47 | 23.50 | −12.51 | 14.42 | 20.12 | 19.22 | 13.12 | 21.07 | 28.25 | 33.82 | 22.67 | 0.93 | 0.92 | 0.92 | 0.93 |
| GOB | −9.19 | −11.47 | −1.60 | −25.76 | 13.65 | 12.30 | 12.05 | 16.55 | 19.73 | 21.17 | 17.07 | 33.16 | 0.84 | 0.83 | 0.84 | 0.82 |
| GVN | −2.29 | −4.06 | −39.48 | −17.53 | 14.04 | 14.65 | 16.32 | 20.61 | 18.76 | 19.76 | 44.43 | 31.10 | 0.86 | 0.85 | 0.85 | 0.77 |
| NIM | −0.76 | −4.19 | 6.61 | −11.05 | 11.78 | 14.27 | 12.31 | 16.87 | 16.00 | 19.23 | 17.45 | 24.68 | 0.79 | 0.67 | 0.73 | 0.70 |
| LIN | −0.27 | −1.98 | −6.62 | −12.94 | 11.15 | 11.91 | 14.79 | 13.53 | 15.18 | 16.54 | 21.41 | 22.27 | 0.90 | 0.89 | 0.87 | 0.88 |
| PAL | −2.19 | −3.03 | −3.53 | −13.81 | 10.50 | 11.09 | 12.60 | 14.38 | 14.26 | 15.52 | 18.21 | 23.39 | 0.90 | 0.88 | 0.86 | 0.86 |
| PAR | −5.96 | −5.65 | 1.32 | −13.61 | 6.99 | 7.05 | 6.34 | 7.60 | 10.48 | 10.38 | 7.93 | 16.66 | 0.54 | 0.46 | 0.59 | 0.49 |
| PAY | −4.00 | 3.42 | −2.22 | −16.68 | 11.07 | 11.75 | 16.22 | 16.95 | 16.13 | 16.88 | 23.52 | 27.53 | 0.89 | 0.88 | 0.85 | 0.83 |
| PTR | −3.92 | −3.20 | 6.48 | −7.99 | 8.84 | 8.19 | 7.77 | 12.73 | 12.04 | 11.14 | 12.06 | 18.07 | 0.65 | 0.61 | 0.66 | 0.56 |
| SBO | −4.71 | 4.08 | 11.83 | −19.26 | 15.42 | 18.68 | 18.15 | 16.46 | 20.48 | 24.02 | 25.58 | 28.74 | 0.83 | 0.75 | 0.77 | 0.80 |
| SMS | 13.41 | 10.98 | 16.51 | 6.54 | 16.26 | 17.80 | 18.12 | 16.11 | 25.61 | 26.29 | 29.20 | 23.01 | 0.78 | 0.73 | 0.73 | 0.81 |
| SON | 9.83 | −0.99 | −16.69 | −15.14 | 19.23 | 21.27 | 23.62 | 20.31 | 28.30 | 29.40 | 36.30 | 31.49 | 0.77 | 0.70 | 0.73 | 0.76 |
| TAM | −17.03 | −27.47 | −9.71 | −27.56 | 13.16 | 17.99 | 15.31 | 15.86 | 23.98 | 35.80 | 22.34 | 34.43 | 0.93 | 0.88 | 0.91 | 0.89 |
| TOR | −4.93 | −4.37 | −13.86 | −14.48 | 11.27 | 11.7 | 16.83 | 13.37 | 15.89 | 16.67 | 26.48 | 23.21 | 0.92 | 0.90 | 0.89 | 0.89 |

Appendix D. FLUXNET2015 Validation

The following results illustrate the use of the FLUXNET2015 dataset [45] for the validation of the MARS model. To this end, 30-min data from 52 ground stations within the MSG disk were aggregated to hourly values and then used for validation over an 11-year period between 2004 and 2015 (Table A8). Table A9 summarizes the distribution of the error metrics for different ranges of DLR under all-sky conditions.
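As an illustration of this validation step, the sketch below aggregates a toy 30-min DLR series to hourly means and scores hypothetical model estimates with the four metrics used throughout the paper (bias µ, standard deviation of the error σ, RMSE, and correlation R). All data values and variable names here are invented for the example; only the metric definitions follow the text.

```python
import numpy as np
import pandas as pd

def error_metrics(model, obs):
    """Bias (mu), standard deviation of the error (sigma), RMSE,
    and temporal correlation (R), as used in Tables 3, A7 and A9."""
    err = np.asarray(model) - np.asarray(obs)
    mu = err.mean()
    sigma = err.std(ddof=0)
    rmse = np.sqrt((err ** 2).mean())
    r = np.corrcoef(model, obs)[0, 1]
    return mu, sigma, rmse, r

# Hypothetical 30-min observed DLR (W m-2), aggregated to hourly means
idx = pd.date_range("2010-06-01", periods=8, freq="30min")
obs_30min = pd.Series(
    [310.0, 314.0, 320.0, 318.0, 325.0, 323.0, 330.0, 328.0], index=idx
)
obs_hourly = obs_30min.resample("1h").mean()  # four hourly values

# Toy model estimates: observations plus an invented error
model_hourly = obs_hourly + np.array([2.0, -1.0, 3.0, -2.0])
mu, sigma, rmse, r = error_metrics(model_hourly.values, obs_hourly.values)
```

In the paper these metrics are computed per station and per sky condition; the MED rows of the summary tables are then the medians of the per-station values.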
Table A8. List of the 52 stations from FLUXNET2015 within the Meteosat Second Generation (MSG) disk used for validation of the estimated downward long-wave radiation (DLR) at the surface. The name, acronym, location, geographical coordinates (°), elevation (m), availability (total number of years available between 2004 and 2015), and annual mean DLR (W·m−2) of each station are shown.
Station | Acronym | Location | Latitude and Longitude (°) | Elev. (m) | Avail. (Years) | Annual DLR (W·m−2)
Neustift | AT-Neu | Austria | 47.12°N; 11.32°E | 970 | 6.96 | 288.86
Brasschaat | BE-Bra | Belgium | 51.31°N; 4.52°E | 16 | 7.33 | 322.16
Lonzee | BE-Lon | Belgium | 50.55°N; 4.75°E | 167 | 7.38 | 320.71
Chamau | CH-Cha | Switzerland | 47.21°N; 8.41°E | 393 | 9.29 | 321.45
Davos | CH-Dav | Switzerland | 46.82°N; 9.86°E | 1639 | 7.70 | 273.69
Früebüel | CH-Fru | Switzerland | 47.12°N; 8.54°E | 982 | 8.80 | 306.25
Laegern | CH-Lae | Switzerland | 47.48°N; 8.36°E | 689 | 9.25 | 304.40
Oensingen grassland | CH-Oe1 | Switzerland | 47.29°N; 7.73°E | 450 | 4.87 | 326.28
Oensingen crop | CH-Oe2 | Switzerland | 47.29°N; 7.73°E | 452 | 10.66 | 326.24
Bily Kriz forest | CZ-BK1 | Czech Republic | 49.50°N; 18.54°E | 875 | 7.23 | 313.66
Bily Kriz grassland | CZ-BK2 | Czech Republic | 49.49°N; 18.54°E | 855 | 5.32 | 313.12
Trebon | CZ-wet | Czech Republic | 49.03°N; 14.77°E | 426 | 7.82 | 322.84
Anklam | DE-Akm | Germany | 53.87°N; 13.68°E | −1 | 4.52 | 320.87
Gebesee | DE-Geb | Germany | 51.10°N; 10.92°E | 162 | 11.00 | 307.30
Grillenburg | DE-Gri | Germany | 50.95°N; 13.51°E | 385 | 8.07 | 311.01
Hainich | DE-Hai | Germany | 51.08°N; 10.45°E | 430 | 8.84 | 310.49
Klingenberg | DE-Kli | Germany | 50.89°N; 13.52°E | 478 | 10.58 | 303.74
Lackenberg | DE-Lkb | Germany | 49.10°N; 13.31°E | 1308 | 3.60 | 296.89
Leinefelde | DE-Lnf | Germany | 51.33°N; 10.37°E | 451 | 5.96 | 306.42
Oberbärenburg | DE-Obe | Germany | 50.79°N; 13.72°E | 734 | 6.90 | 301.53
Rollesbroich | DE-RuR | Germany | 50.62°N; 6.30°E | 515 | 3.48 | 318.72
Selhausen Juelich | DE-RuS | Germany | 50.87°N; 6.45°E | 103 | 2.65 | 330.62
Schechenfilz Nord | DE-SfN | Germany | 47.81°N; 11.33°E | 590 | 2.44 | 323.40
Spreewald | DE-Spw | Germany | 51.89°N; 14.03°E | 61 | 4.33 | 322.56
Tharandt | DE-Tha | Germany | 50.96°N; 13.57°E | 385 | 10.74 | 311.30
Zarnekow | DE-Zrk | Germany | 53.88°N; 12.89°E | 0 | 1.61 | 328.96
Soroe | DK-Sor | Denmark | 55.49°N; 11.65°E | 40 | 6.81 | 314.17
Hyytiala | FI-Hyy | Finland | 61.85°N; 24.30°E | 181 | 4.25 | 306.04
Lompolojankka | FI-Lom | Finland | 67.99°N; 24.21°E | 274 | 2.93 | 282.32
Grignon | FR-Gri | France | 48.84°N; 1.95°E | 125 | 10.80 | 328.80
Le Bray | FR-LBr | France | 44.72°N; 0.77°W | 61 | 5.01 | 333.98
Puechabon | FR-Pue | France | 43.74°N; 3.60°E | 270 | 9.11 | 318.07
Guyaflux | GF-Guy | French Guiana | 5.28°N; 52.93°W | 48 | 2.00 | 411.47
Ankasa | GH-Ank | Ghana | 5.27°N; 2.69°W | 124 | 2.00 | 405.94
Borgo Cioffi | IT-BCi | Italy | 40.52°N; 14.96°E | 20 | 4.12 | 331.83
Castel d’Asso1 | IT-CA1 | Italy | 42.38°N; 12.03°E | 200 | 3.35 | 341.22
Castel d’Asso2 | IT-CA2 | Italy | 42.38°N; 12.03°E | 200 | 2.59 | 345.66
Castel d’Asso3 | IT-CA3 | Italy | 42.38°N; 12.02°E | 197 | 2.90 | 339.64
Collelongo | IT-Col | Italy | 41.85°N; 13.59°E | 1560 | 7.35 | 280.26
Ispra ABC-IS | IT-Isp | Italy | 45.81°N; 8.63°E | 210 | 2.00 | 335.75
Lavarone | IT-Lav | Italy | 45.96°N; 11.28°E | 1353 | 10.43 | 289.81
Monte Bondone | IT-MBo | Italy | 46.02°N; 11.05°E | 1550 | 8.97 | 282.15
Arca di Noe | IT-Noe | Italy | 40.61°N; 8.15°E | 25 | 9.50 | 349.74
Renon | IT-Ren | Italy | 46.59°N; 11.43°E | 1730 | 8.84 | 280.63
Roccarespampani 1 | IT-Ro1 | Italy | 42.49°N; 11.93°E | 235 | 1.00 | 310.57
Roccarespampani 2 | IT-Ro2 | Italy | 42.39°N; 11.92°E | 160 | 1.51 | 332.02
Torgnon | IT-Tor | Italy | 45.84°N; 7.58°E | 2160 | 5.81 | 274.69
Horstermeer | NL-Hor | Netherlands | 52.24°N; 5.07°E | 2 | 7.00 | 326.18
Loobos | NL-Loo | Netherlands | 52.17°N; 5.74°E | 25 | 10.95 | 343.79
Fyodorovskoye | RU-Fyo | Russia | 56.46°N; 32.92°E | 265 | 2.76 | 293.74
Stordalen grassland | SE-St1 | Sweden | 68.35°N; 19.05°E | 351 | 1.99 | 297.90
Mongu | ZM-Mon | Zambia | 15.44°S; 23.25°E | 1053 | 1.85 | 358.19
Table A9. Comparison of bias (µ), standard deviation of the error (σ), root mean square error (RMSE), and temporal correlation coefficient (R) between different models (MARS, LSA, LSA_OPER, and ERA5) and observations from all 52 FLUXNET2015 ground stations for all-sky conditions (2004–2015), in different conditions: considering all data (ALL); observations with values above 400 W·m−2 (UL); observations with values between 200–400 W·m−2 (ML); observations with values below 200 W·m−2 (LL); and the median of the distribution of the metrics computed independently for each station (MED). Units are in W·m−2, while correlations are given between 0 and 1.
Condition | MARS (µ / σ / RMSE / R) | LSA (µ / σ / RMSE / R)
ALL | 0.05 / 18.49 / 24.14 / 0.88 | 0.86 / 19.01 / 24.66 / 0.87
UL | −16.91 / 15.53 / 26.45 / 0.41 | −16.79 / 14.99 / 25.82 / 0.42
ML | 0.06 / 18.32 / 23.90 / 0.85 | 0.77 / 18.76 / 24.35 / 0.84
LL | 20.51 / 16.75 / 30.81 / 0.39 | 26.68 / 16.07 / 34.78 / 0.41
MED | 0.42 / 16.40 / 22.52 / 0.87 | 1.16 / 16.85 / 22.77 / 0.86
Condition | LSA_OPER (µ / σ / RMSE / R) | ERA5 (µ / σ / RMSE / R)
ALL | −1.60 / 20.90 / 27.24 / 0.86 | −8.71 / 18.93 / 26.71 / 0.87
UL | −9.94 / 15.94 / 23.07 / 0.41 | −20.90 / 17.74 / 30.77 / 0.37
ML | −1.63 / 21.05 / 27.40 / 0.82 | −8.92 / 18.76 / 26.55 / 0.84
LL | 9.95 / 16.37 / 24.82 / 0.46 | 15.32 / 18.24 / 28.79 / 0.38
MED | −2.02 / 18.83 / 25.88 / 0.85 | −8.08 / 16.29 / 23.82 / 0.86

References

  1. Cheng, J.; Liang, S.; Wang, W. Surface Downward Longwave Radiation. Compr. Remote Sens. 2018, 5, 196–216. [Google Scholar] [CrossRef]
  2. Iziomon, M.G.; Mayer, H.; Matzarakis, A. Downward Atmospheric Longwave Irradiance Under Clear and Cloudy Skies: Measurement and Parameterization. Atmos. Sol. Terr. Phys. 2003, 65, 1107–1116. [Google Scholar] [CrossRef]
  3. Wild, M.; Folini, D.; Schär, C.; Loeb, N.; Dutton, E.G.; König-Langlo, G. The Global Energy Balance from a Surface Perspective. Clim. Dyn. 2013, 40, 3107–3134. [Google Scholar] [CrossRef] [Green Version]
  4. Held, I.M.; Soden, B.J. Water Vapor Feedback and Global Warming. Annu. Rev. Energy Environ. 2000, 25, 441–475. [Google Scholar] [CrossRef] [Green Version]
  5. Intergovernmental Panel on Climate Change (IPCC). Climate Change 2001: The Scientific Basis; Houghton, J.T., Ding, Y., Griggs, D.J., Noguer, M., van der Linden, P.J., Dai, X., Maskell, K., Johnson, C.A., Eds.; Cambridge University Press: New York, NY, USA, 2001. Available online: https://www.ipcc.ch/site/assets/uploads/2018/07/WG1_TAR_FM.pdf (accessed on 14 January 2022).
  6. Bertoldi, G.; Rigon, R.; Tappeiner, U. Modelling Evapotranspiration and the Surface Energy Budget in Alpine Catchments. In Evapotranspiration—Remote Sensing and Modelling; IntechOpen: London, UK, 2012; Chapter 17. [Google Scholar] [CrossRef] [Green Version]
  7. Naud, C.M.; Miller, J.R.; Landry, C. Using Satellites to Investigate the Sensitivity of Longwave Downward Radiation to Water Vapour at High Elevations. Geophys. Res. Atmos. 2012, 117, D05101. [Google Scholar] [CrossRef]
  8. Chang, K.; Zhang, Q. Modeling of Downward Longwave Radiation and Radiative Cooling Potential in China. Renew. Sustain. Energy 2019, 11, 066501. [Google Scholar] [CrossRef]
  9. Dilley, A.C.; O’Brien, D.M. Estimating Downward Clear Sky Long-wave Irradiance at the Surface from Screen Temperature and Precipitable Water. R. Meteorol. Soc. 1998, 124, 1391–1401. [Google Scholar] [CrossRef]
  10. Prata, A.J. A New Long-Wave Formula for Estimating Downward Clear-Sky Radiation at the Surface. R. Meteorol. Soc. 1996, 122, 1121–1151. [Google Scholar] [CrossRef]
  11. Berdahl, P.; Fromberg, R. The Thermal Radiance of Clear Skies. Sol. Energy 1982, 29, 299–314. [Google Scholar] [CrossRef]
  12. Brutsaert, W. On a Derivable Formula for Long-wave Radiation from Clear Skies. Water Resour. Res. 1975, 11, 742–744. [Google Scholar] [CrossRef]
  13. Tuzet, A. A simple method for Estimating Downward Longwave Radiation from Surface and Satellite Data by Clear Sky. Remote Sens. 1990, 11, 125–131. [Google Scholar] [CrossRef]
  14. Trigo, I.F.; Barroso, C.; Viterbo, P.; Freitas, S.C.; Monteiro, I.T. Estimation of Downward Long-wave Radiation at the Surface Combining Remotely Sensed Data and NWP Data. Geophys. Res. Atmos. 2010, 115, D24118. [Google Scholar] [CrossRef]
  15. Bilbao, J.; De Miguel, A.H. Estimation of Daylight Downward Longwave Atmospheric Irradiance under Clear-Sky and All-Sky Conditions. Appl. Meteorol. Climatol. 2007, 46, 878–889. [Google Scholar] [CrossRef]
  16. Josey, S.A.; Pascal, R.W.; Taylor, P.K.; Yelland, M.J. A New Formula for Determining the Atmospheric Longwave Flux at Ocean Surface at Mid-High Latitudes. Geophys. Res. Oceans 2003, 108, 3108. [Google Scholar] [CrossRef]
  17. Diak, G.R.; Bland, W.L.; Mecikalski, J.R.; Anderson, M.C. Satellite-based Estimates of Longwave Radiation for Agricultural Applications. Agric. For. Meteorol. 2000, 103, 349–355. [Google Scholar] [CrossRef]
  18. Crawford, T.M.; Duchon, C.E. An Improved Parameterization for Estimating Effective Atmospheric Emissivity for Use in Calculating Daytime Downwelling Longwave Radiation. Appl. Meteorol. Climatol. 1999, 38, 474–480. [Google Scholar] [CrossRef]
  19. Formetta, G.; Bancheri, M.; David, O.; Rigon, R. Performances of Site Specific Parameterizations of Longwave Radiation. Hydrol. Earth Syst. Sci. 2016, 20, 4641–4654. [Google Scholar] [CrossRef] [Green Version]
  20. Cheng, C.-H.; Nnadi, F. Predicting Downward Longwave Radiation for Various Land Use in All-Sky Condition: Northeast Florida. Adv. Meteorol. 2014, 2014, 525148. [Google Scholar] [CrossRef]
  21. Feng, C.; Zhang, X.; Wei, Y.; Zhang, W.; Hou, N.; Xu, J.; Jia, K.; Yao, Y.; Xie, X.; Jiang, B.; et al. Estimating Surface Downward Longwave Radiation using Machine Learning Methods. Atmosphere 2020, 11, 1147. [Google Scholar] [CrossRef]
  22. Obot, N.I.; Humphrey, I.; Chendo, M.A.C.; Udo, S.O. Deep Learning and Regression Modelling of Cloudless Downward Longwave Radiation. Beni-Suef Univ. J. Basic Appl. Sci. 2019, 8, 23. [Google Scholar] [CrossRef] [Green Version]
  23. Zhou, W.; Wang, T.; Shi, J.; Peng, B.; Zhao, R.; Yu, Y. Remote Sensed Clear-Sky Surface Longwave Downward Radiation by Using Multivariate Adaptive Regression Splines Method. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018. [Google Scholar] [CrossRef]
  24. Cao, Y.; Li, M.; Zhang, Y. Estimating the Clear-Sky Longwave Downward Radiation in the Arctic from FengYun-3D MERSI-2 Data. Remote Sens. 2022, 14, 606. [Google Scholar] [CrossRef]
  25. Wang, T.; Shi, J.; Ma, Y.; Letu, H.; Li., X. All-Sky Longwave Downward Radiation from Satellite Measurements: General Parameterizations Based on LST, Column Water Vapor and Cloud Top Temperature. Photogramm. Remote Sens. 2020, 161, 52–60. [Google Scholar] [CrossRef]
  26. Yu, S.; Xin, X.; Liu, Q.; Zhang, H.; Li, L. An Improved Parameterization for Retrieving Clear-Sky Downward Longwave Radiation from Satellite Thermal Infrared Data. Remote Sens. 2019, 11, 425. [Google Scholar] [CrossRef] [Green Version]
  27. Zhou, W.; Shi, J.C.; Wang, T.X.; Peng, B.; Husi, L.; Yu, Y.C.; Zhao, R. New Methods for Deriving Clear-Sky Surface Longwave Downward Radiation Based on Remotely Sensed Data and Ground Measurements. Earth Space Sci. 2019, 6, 2071–2086. [Google Scholar] [CrossRef] [Green Version]
  28. Zhou, W.; Shi, J.C.; Wang, T.X.; Peng, B.; Zhao, R.; Yu, Y.C. Clear-Sky Longwave Downward Radiation Estimation by Integrating MODIS Data and Ground-Based Measurements. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 12, 450–459. [Google Scholar] [CrossRef]
  29. Zhou, Q.; Flores, A.; Glenn, N.F.; Walters, R.; Han, B. A Machine Learning Approach to Estimation of Downward Solar Radiation from Satellite-Derived Data Products: An Application over a Semi-Arid Ecosystem in the U.S. PLoS ONE 2017, 12, e0180239. [Google Scholar] [CrossRef] [Green Version]
  30. Wang, T.; Yan, G.; Chen, L. Consistent Retrieval Methods to Estimate Land Surface Shortwave and Longwave Radiative Flux Components under Clear-Sky Conditions. Remote Sens. Environ. 2012, 124, 61–71. [Google Scholar] [CrossRef]
  31. Jung, M.; Koirala, S.; Weber, U.; Ichii, K.; Gans, F.; Camps-Valls, G.; Papale, D.; Schwalm, C.; Tramontana, G.; Reichstein, M. The FLUXCOM Ensemble of Global Land-Atmosphere Energy FLUXES. Sci. Data 2019, 6, 74. [Google Scholar] [CrossRef] [Green Version]
  32. Nisbet, R.; Miner, G.; Yale, K. Advanced Algorithms for Data Mining. In Handbook of Statistical Analysis and Data Mining Applications, 2nd ed.; Academic Press: Cambridge, MA, USA, 2018; Chapter 8; pp. 149–167. [Google Scholar] [CrossRef]
  33. Wang, K.; Dickinson, R.E. Global Atmospheric Downward Longwave Radiation at the Surface from Ground-based Observations, Satellite Retrievals, and Reanalysis. Rev. Geophys. 2013, 51, 150–185. [Google Scholar] [CrossRef]
  34. Wang, W.; Liang, S. Estimation of High-spatial Resolution Clear-sky Longwave Downward and Net Radiation Over Land Surfaces from MODIS Data. Remote Sens. Environ. 2009, 113, 745–754. [Google Scholar] [CrossRef]
  35. Wild, M.; Ohmura, A.; Gilgen, H.; Morcrette, J.-J.; Slingo, A. Evaluation of Downward Longwave Radiation in General Circulation Models. Am. Meteorol. Soc. 2001, 14, 3227–3239. [Google Scholar] [CrossRef]
  36. European Space Agency. Meteosat Second Generation: The Satellite Development; ESA Publishing Division: Noordwijk, The Netherlands, 1999; ISBN 92-9092-634-1.
  37. Berk, A.; Anderson, G.P.; Acharya, P.K.; Hoke, M.L.; Chetwynd, J.H.; Bernstein, L.S.; Shettle, E.P.; Matthew, M.W.; Adler-Golden, S.M. MOD-TRAN4 Version 2 User’s Manual Air Force Res. Lab; Space Vehicles Directorate, Air Force Material Command: Hanscom Air Force Base, MA, USA, 2000; Available online: https://home.cis.rit.edu/~cnspci/references/berk2003.pdf (accessed on 14 January 2022).
  38. Chevallier, F.; Chédin, A.; Chéruy, F.; Morcrette, J.-J. TIGR-like Atmospheric-Profile Databases for Accurate Radiative-Flux Computation. R. Meteorol. Soc. 2000, 126, 777–785. [Google Scholar] [CrossRef]
  39. LSA-SAF. EUMETSAT Network of Satellite Application Facility on Land Surface Analysis: Down-Welling Longwave Flux (DSLF); Product User Manual, Issue 3.4, SAF/LAND/IPMA/PUM_DSLF/3.4; EUMETSAT Network of Satellite Application Facilities: Darmstadt, Germany, 2015. [Google Scholar]
  40. Friedman, J.H. Multivariate Adaptive Regression Splines. Ann. Stat. 1991, 19, 1–141. Available online: https://projecteuclid.org/journals/annals-of-statistics/volume-19/issue-1/Multivariate-Adaptive-Regression-Splines/10.1214/aos/1176347963.full (accessed on 14 January 2022).
  41. Driemel, A.; Augustine, J.; Behrens, K.; Colle, S.; Cox, C.; Cuevas-Agulló, E.; Denn, F.M.; Duprat, T.; Fukuda, M.; Grobe, H.; et al. Baseline Surface Radiation Network (BSRN): Structure and data description (1992–2017). Earth Syst. Sci. Data 2018, 10, 1491–1501. [Google Scholar] [CrossRef] [Green Version]
  42. Mlawer, E.J.; Turner, D.D. Spectral Radiation Measurements and Analysis in the ARM Program. Meteorol. Monogr. 2016, 57, 14.1–14.17. [Google Scholar] [CrossRef]
  43. Emetere, M.E.; Akinyemi, M.L. Documentation of Atmospheric Constants Over Niamey, Niger: A Theoretical Aid for Measuring Instruments. R. Meteorol. Soc. Meteorol. Appl. Sci. Technol. Weather Clim. 2017, 24, 260–267. [Google Scholar] [CrossRef]
  44. Sengupta, M. Atmospheric Radiation Measurement (ARM) User Facility: Sky Radiometers on Stand for Downwelling Radiation (SKYRAD60S); 2005-11-26 to 2007-01-07, ARM Mobile Facility (NIM) Niamey, Niger (M1); ARM Data Center: Oak Ridge, TN, USA, 2005. [Google Scholar] [CrossRef]
  45. Pastorello, G.; Trotta, C.; Canfora, E.; Chu, H.; Christianson, D.; Cheah, Y.-W.; Poindexter, C.; Chen, J.; Elbashandy, A.; Humphrey, M.; et al. The FLUXNET2015 Dataset and the ONEFlux Processing Pipeline for Eddy Covariance Data. Sci. Data 2020, 7, 225. [Google Scholar] [CrossRef]
  46. Reichstein, M.; Falge, E.; Baldocchi, D.; Papale, D.; Aubinet, M.; Berbigier, P.; Bernhofer, C.; Buchmann, N.; Gilmanov, T.; Granier, A.; et al. On the Separation of Net Ecosystem Exchange into Assimilation and Ecosystem Respiration: Review and Improved Algorithm. Glob. Chang. Biol. 2005, 11, 1424–1439. [Google Scholar] [CrossRef]
  47. Hersbach, H.; Bell, B.; Berrisford, P.; Hirahara, S.; Horanyi, A.; Muñoz-Sabater, J.; Nicolas, J.; Peubey, C.; Radu, R.; Schepers, D.; et al. The ERA5 Global Reanalysis. R. Meteorol. Soc. 2020, 146, 1999–2049. [Google Scholar] [CrossRef]
  48. Morcrette, J.-J.; Barker, H.W.; Cole, J.N.S.; Iacono, M.J.; Pincus, R. Impact of a New Radiation Package, McRad, in the ECMWF Integrated Forecasting System. Am. Meteorol. Soc. 2008, 136, 4773–4798. [Google Scholar] [CrossRef]
  49. Dutra, E.; Muñoz-Sabater, J.; Boussetta, S.; Komori, T.; Hirahara, S.; Balsamo, G. Environmental Lapse Rate for High-Resolution Land Surface Downscaling: An Application to ERA5. Earth Space Sci. 2020, 7, e2019EA000984. [Google Scholar] [CrossRef] [Green Version]
  50. Derrien, M.; Le Gléau, H. MSG/SEVIRI Cloud Mask and Type from SAFNWC. Remote Sens. 2005, 26, 4707–4732. [Google Scholar] [CrossRef]
  51. Derrien, M.; Le Gléau, H. Improvement of Cloud Detection Near Sunrise and Sunset by Temporal-Differencing and Region-Growing Techniques with Real-Time SEVIRI. Remote Sens. 2010, 31, 1765–1780. [Google Scholar] [CrossRef]
  52. Friedman, J.H.; Roosen, C.B. An Introduction to Multivariate Adaptive Regression Splines. Stat. Methods Med. Res. 1995, 4, 197–217. [Google Scholar] [CrossRef] [PubMed]
  53. Marcot, B.G.; Hanea, A.M. What is an Optimal Value of K in K-fold Cross-validation in Discrete Bayesian Network Analysis. Comput. Stat. 2020, 36, 2009–2031. [Google Scholar] [CrossRef]
  54. Wu, C.Z.; Goh, A.T.C.; Zhang, W.G. Study on Optimization of Mars Model for Prediction of Pile Drivability Based on Cross-validation. In Proceedings of the 7th International Symposium on Geotechnical Safety and Risk (ISGSR), Taipei, Taiwan, 11–13 December 2019; pp. 572–577, ISBN 978-981-11-27285-0. [Google Scholar]
  55. Zhao, Y.; Hasan, Y.A. Machine Learning Algorithms for Predicting Roadside Fine Particulate Matter Concentration Level in Hong Kong Central. Comput. Ecol. Softw. 2013, 3, 61–73. [Google Scholar]
  56. Palharini, R.S.A.; Vila, D.A. Climatological Behaviour of Precipitating Clouds in the Northeast Region of Brazil. Adv. Meteorol. 2017, 2017, 5916150. [Google Scholar] [CrossRef]
Figure 1. (a) Example of the annual mean (2020) downward long-wave radiation (DLR) at the surface estimated with the LSA-SAF operational algorithm within the Meteosat Second Generation (MSG) disk, including the location of each of the 23 Baseline Surface Radiation Network (BSRN) ground stations (green triangles); (b) zoom over the European region, including both the BSRN stations and the 48 FLUXNET2015 ground stations (cyan circles).
Figure 2. Scatter density plots of the observed DLR fluxes (W·m−2) on the horizontal axis versus the modelled fluxes on the vertical axis for all (left), clear (middle), and cloudy (right) sky conditions estimated with four models: MARS (a–c), LSA (d–f), LSA_OPER (g–i), and ERA5 (j–l). The evaluation metrics are shown in the top right corner of each plot, including the number of samples used (#). The data represented include all valid observations for the 23 ground stations in the period 2004–2019. Units are in W·m−2.
Figure 3. Distribution of the metrics computed for each station displayed as boxplots for all, clear, and cloudy-sky conditions (left, middle, and right columns, respectively): bias, μ (a–c); standard deviation of the error, σ (d–f); root mean square error, RMSE (g–i); and correlation coefficient, R (j–l). The red line and blue cross inside each boxplot identify the median and mean of the distribution, respectively, with the boxes extending from the 25th to the 75th percentiles and whiskers 1.5 times the interquartile range. Units are in W·m−2, while correlations are given between 0 and 1.
Figure 4. DLR (W·m−2) hourly time-series for different ground measuring stations: CAR (a), TAM (b), NIM (c), GVN (d), SON (e), and SMS (f). Each panel focuses on a 36-h window between 12 and 00 UTC for a different period. Four models are shown: MARS (blue), LSA (orange), LSA_OPER (green), and ERA5 (red), together with in situ observations (black). Hourly cloud cover (CC) information is added in the inset plots, with the total cloud cover (tcc) from ERA5 (blue dots) and the cloud fraction (cf) from MSG (black dots).
Table 1. List of the 23 stations within the Meteosat Second Generation (MSG) disk used for validation of the estimated downward long-wave radiation (DLR) at the surface. The name, acronym, network of origin, location, geographical coordinates (°), elevation (m), availability (total number of years available between 2004 and 2019), and annual mean DLR (W·m−2) of each station are shown.
Station | Acronym | Network | Location | Latitude and Longitude (°) | Elev. (m) | Avail. (Years) | Annual DLR (W·m−2)
Brasília | BRB | BSRN | Brazil | 15.60°S; 47.71°W | 1023 | 7.12 | 364.45
Budapest | BUD | BSRN | Hungary | 47.43°N; 19.18°E | 139 | 0.08 | 373.82
Cabauw | CAB | BSRN | Netherlands | 51.97°N; 4.93°E | 0 | 14.69 | 323.69
Camborne | CAM | BSRN | U.K. | 50.22°N; 5.32°W | 88 | 11.64 | 324.57
Carpentras | CAR | BSRN | France | 44.08°N; 5.06°E | 100 | 14.15 | 321.74
Cener | CNR | BSRN | Spain | 42.82°N; 1.60°W | 471 | 10.28 | 321.71
De Aar | DAA | BSRN | South Africa | 30.67°S; 23.99°E | 1287 | 6.25 | 303.88
Eastern North Atlantic | ENA | BSRN | Azores | 39.09°N; 28.03°W | 15.2 | 1.00 | 359.34
Florianopolis | FLO | BSRN | Brazil | 27.61°S; 48.52°W | 11 | 5.70 | 386.40
Gandhinagar | GAN | BSRN | India | 23.11°N; 72.63°E | 65 | 1.58 | 401.45
Gobabeb | GOB | BSRN | Namibia | 23.56°S; 15.04°E | 407 | 7.54 | 338.67
Neumayer | GVN | BSRN | Antarctica | 70.65°S; 8.25°W | 42 | 14.89 | 216.87
Niamey | NIM | ARM | Africa | 13.48°N; 2.18°E | 223 | 1.02 | 392.11
Lindenberg | LIN | BSRN | Germany | 52.21°N; 14.12°E | 125 | 13.99 | 315.06
Palaiseau | PAL | BSRN | France | 48.71°N; 2.21°E | 156 | 15.63 | 322.61
Paramaribo | PAR | BSRN | Suriname | 5.81°N; 55.22°W | 4 | 0.58 | 421.16
Payerne | PAY | BSRN | Switzerland | 46.82°N; 6.94°E | 491 | 15.70 | 315.05
Petrolina | PTR | BSRN | Brazil | 9.07°S; 40.32°W | 387 | 7.56 | 386.86
Sede Boqer | SBO | BSRN | Israel | 30.86°N; 34.78°E | 500 | 7.49 | 332.86
São Martinho da Serra | SMS | BSRN | Brazil | 29.44°S; 53.82°W | 489 | 6.04 | 327.19
Sonnblick | SON | BSRN | Austria | 47.05°N; 12.96°E | 3109 | 6.28 | 249.07
Tamanrasset | TAM | BSRN | Algeria | 22.79°N; 5.53°E | 1385 | 15.88 | 330.70
Toravere | TOR | BSRN | Estonia | 58.25°N; 26.46°E | 70 | 15.70 | 308.71
Table 2. List of models used in the analysis, including respective predictors, predictands, and cloud information, for the training and evaluation periods.
Model | Training predictors | Training cloud info. | Predictand | Training period | Evaluation predictors | Evaluation cloud info.
MARS | tcwv, t2m, d2m (ERA5) | cf (MSG) | DLR (BSRN, ARM) | 2004–2019 ¹ | tcwv, t2m, d2m (ERA5) | cf (MSG)
LSA | tcwv, t2m, d2m (ERA5) | cf (MSG) | DLR (BSRN, ARM) | 2004–2019 ¹ | tcwv, t2m, d2m (ERA5) | cf (MSG)
MARS* | tcwv, t2m, d2m (ERA5) | tcc (ERA5) | DLR (BSRN, ARM) | 2004–2019 ¹ | tcwv, t2m, d2m (ERA5) | tcc (ERA5)
LSA* | tcwv, t2m, d2m (ERA5) | tcc (ERA5) | DLR (BSRN, ARM) | 2004–2019 ¹ | tcwv, t2m, d2m (ERA5) | tcc (ERA5)
LSA_OPER | tcwv, t2m, d2m (ERA-40) | tcc (ERA-40) | DLR (MODTRAN-4) | 1992–1993 | tcwv, t2m, d2m (ECMWF operational NWP) | cf (MSG)
ERA5 | — | — | — | — | — | —
¹ Random selection of 6 months of data from each station.
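For intuition on what the MARS model in Table 2 fits, the sketch below builds MARS-style hinge basis functions, max(0, x − t), over toy predictors and solves for the coefficients by least squares. This is only a hand-rolled illustration of the basis expansion: the knot location, predictors, and synthetic "DLR" are invented, and a real MARS fit selects knots automatically via the forward/backward procedure of [40,52].

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy predictors loosely mimicking two of the MARS inputs:
# 2-m temperature t2m (K) and cloud fraction cf (0-1)
n = 500
t2m = rng.uniform(260.0, 310.0, n)
cf = rng.uniform(0.0, 1.0, n)

# Synthetic "DLR" with a kink at 280 K plus a cloud contribution
# (illustrative only; not the paper's fitted relationship)
dlr = 200.0 + 1.2 * np.maximum(0.0, t2m - 280.0) + 60.0 * cf

def hinge_basis(t2m, cf, knot=280.0):
    """Design matrix of MARS-style hinge functions: an intercept,
    the mirrored pair max(0, t2m - knot) / max(0, knot - t2m), and cf."""
    return np.column_stack([
        np.ones_like(t2m),
        np.maximum(0.0, t2m - knot),
        np.maximum(0.0, knot - t2m),
        cf,
    ])

X = hinge_basis(t2m, cf)
coef, *_ = np.linalg.lstsq(X, dlr, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - dlr) ** 2))
```

In the paper's setup the predictors are tcwv, t2m, d2m, and the cloud information (cf from MSG, or tcc from ERA5 for the starred variants), with observed DLR as the predictand.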
Table 3. Comparison of bias (µ), standard deviation of the error (σ), root mean square error (RMSE), and temporal correlation coefficient (R) between different models (MARS, LSA, LSA_OPER, and ERA5) and observations from all 23 ground stations for all-sky conditions (2004–2019), in different conditions: considering all data (ALL); observations with values above 400 W·m−2 (UL); observations with values between 200–400 W·m−2 (ML); observations with values below 200 W·m−2 (LL); and the median of the distribution of the metrics computed independently for each station (MED). Units are in W·m−2, while correlations are given between 0 and 1.
Condition | MARS (µ / σ / RMSE / R) | LSA (µ / σ / RMSE / R)
ALL | 0.65 / 13.86 / 18.76 / 0.95 | 0.48 / 15.00 / 20.24 / 0.94
UL | −9.13 / 12.02 / 18.54 / 0.61 | −11.65 / 13.74 / 21.51 / 0.54
ML | 0.51 / 13.53 / 18.35 / 0.92 | 0.77 / 14.78 / 19.96 / 0.91
LL | 18.82 / 14.52 / 26.97 / 0.69 | 12.21 / 15.70 / 24.47 / 0.70
MED | 0.40 / 12.27 / 16.96 / 0.91 | 0.87 / 13.29 / 18.52 / 0.91
Condition | LSA_OPER (µ / σ / RMSE / R) | ERA5 (µ / σ / RMSE / R)
ALL | −1.30 / 17.29 / 23.55 / 0.93 | −5.25 / 15.40 / 22.08 / 0.93
UL | −2.47 / 13.37 / 17.76 / 0.57 | −11.91 / 14.34 / 22.49 / 0.52
ML | −0.86 / 17.53 / 23.91 / 0.90 | −5.31 / 15.37 / 22.07 / 0.90
LL | −9.87 / 14.50 / 22.56 / 0.73 | 6.44 / 15.51 / 21.73 / 0.70
MED | 0.78 / 14.05 / 19.53 / 0.89 | −5.81 / 13.88 / 20.65 / 0.88
Table 4. Bias (µ), standard deviation of the error (σ), root mean square error (RMSE), and temporal correlation coefficient (R) between the MARS and LSA models' hourly estimates using ERA5 cloud information (MARS* and LSA*) and observations from all 23 ground stations for all-sky conditions (2004–2019). Different conditions are considered for the analysis: all data (ALL); observations with values above 400 W·m−2 (UL); observations with values between 200–400 W·m−2 (ML); observations with values below 200 W·m−2 (LL); and the median of the distribution of the metrics computed independently for each station (MED). Units are in W·m−2, while correlations are given between 0 and 1.
Condition | MARS* (µ / σ / RMSE / R) | LSA* (µ / σ / RMSE / R)
ALL | 3.07 / 16.68 / 22.05 / 0.93 | 3.21 / 17.79 / 23.16 / 0.92
UL | −9.39 / 13.87 / 20.59 / 0.53 | −11.26 / 14.55 / 22.14 / 0.51
ML | 3.07 / 16.44 / 21.73 / 0.89 | 3.58 / 17.73 / 23.02 / 0.88
LL | 22.01 / 15.92 / 30.25 / 0.70 | 16.48 / 16.85 / 27.61 / 0.71
MED | 2.73 / 15.54 / 20.94 / 0.87 | 3.59 / 15.64 / 21.32 / 0.86
Lopes, F.M.; Dutra, E.; Trigo, I.F. Integrating Reanalysis and Satellite Cloud Information to Estimate Surface Downward Long-Wave Radiation. Remote Sens. 2022, 14, 1704. https://doi.org/10.3390/rs14071704