# Evaluation of Analysis by Cross-Validation, Part II: Diagnostic and Optimization of Analysis Error Covariance


## Abstract


## 1. Introduction

Surface objective analyses of O_{3}, PM_{2.5}, PM_{10}, NO_{2}, and SO_{2} are produced by combining model forecasts with observations from the AirNow gateway and additional observations from Canada. These analyses are not used to initialize the air quality model, and we wish to evaluate them by cross-validation, that is, by leaving out a subset of observations from the analysis and using them for verification. Observations used to produce the analysis are called active observations, while those used for verification are called passive observations.

In this study, the evaluation is carried out for the analyses of O_{3} and PM_{2.5}.

## 2. Theoretical Framework

#### 2.1. Diagnostic of Analysis Error Covariance in Passive Observation Space

#### 2.2. A Complete Set of Diagnostics of Error Covariances in Passive Observation Space

**B** may not be trivial.

#### 2.3. Geometrical Interpretation

Figure 1 gives a Hilbert space representation of the scalar analysis and cross-validation problem, with an active observation (O), a background (B), and a passive observation (O_{c}). The origin T corresponds to the truth of the scalar quantity, and also corresponds to the zero of the central moment of each random variable, e.g., $y-\mathrm{E}[y]$, since each variable is assumed to be unbiased. We also assume that the background, active, and passive observation errors are uncorrelated with one another, so the three axes, ${\epsilon}^{o}$ for the active observation error, ${\epsilon}^{b}$ for the background error, and ${\epsilon}_{c}^{o}$ for the passive observation error, are orthogonal. The plane defined by the ${\epsilon}^{o}$ and ${\epsilon}^{b}$ axes is the space where the analysis takes place, and is called the analysis plane. Since we define the analysis to be linear and unbiased, only linear combinations of the form ${y}^{a}=k{y}^{o}+(1-k){y}^{b}$, where k is a constant, are allowed. The analysis A then lies on the line (B, O). The thick lines in Figure 1 represent the norm of the associated error. For example, the thick line along the ${\epsilon}^{o}$ axis depicts the (active) observation standard deviation ${\sigma}_{o}$, and similarly for the other axes and random variables. Since the active observation error is uncorrelated with the background error, the triangle ΔOTB is a right triangle, and by the Pythagorean theorem we have $\overline{{({y}^{o}-{y}^{b})}^{2}}:=\langle (O-B),(O-B)\rangle ={\sigma}_{o}^{2}+{\sigma}_{b}^{2}$. This is the usual statement that the innovation variance is the sum of the background and observation error variances. The analysis is optimal when the analysis error $\left|\right|{\epsilon}^{a}|{|}^{2}={\sigma}_{a}^{2}$ is minimum, in which case the line $(\mathrm{T},\mathrm{A})$ is perpendicular to the line (O, B).
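This scalar geometry is easy to verify numerically. The sketch below simulates an unbiased observation and background with independent errors and checks that the innovation variance is the sum of the two error variances, and that the optimal weight $k={\sigma}_{b}^{2}/({\sigma}_{o}^{2}+{\sigma}_{b}^{2})$ yields the minimum analysis error variance ${\sigma}_{a}^{2}={\sigma}_{o}^{2}{\sigma}_{b}^{2}/({\sigma}_{o}^{2}+{\sigma}_{b}^{2})$. The error standard deviations are illustrative values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
sigma_o, sigma_b = 3.0, 5.0   # illustrative error standard deviations

truth = 10.0                  # any constant truth works for unbiased errors
y_o = truth + sigma_o * rng.standard_normal(n)   # active observations
y_b = truth + sigma_b * rng.standard_normal(n)   # background values

# Innovation variance = sigma_o^2 + sigma_b^2 (Pythagoras in the analysis plane)
innov_var = np.var(y_o - y_b)

# Optimal weight k minimizing the analysis error variance
k = sigma_b**2 / (sigma_o**2 + sigma_b**2)
y_a = k * y_o + (1 - k) * y_b
analysis_err_var = np.var(y_a - truth)

# Theoretical optimal analysis error variance
sigma_a2 = sigma_o**2 * sigma_b**2 / (sigma_o**2 + sigma_b**2)
```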

Cross-validation is performed against the passive observation O_{c}. The passive observation error is perpendicular to the analysis plane; thus the triangle ΔO_{c}TA is a right triangle, so that $\overline{{({y}^{c}-{y}^{a})}^{2}}={\sigma}_{c}^{2}+{\sigma}_{a}^{2}$. The triangle ΔO_{c}TB is also a right triangle, so that $\overline{{({y}^{c}-{y}^{b})}^{2}}:=\langle {(O-B)}_{c},{(O-B)}_{c}\rangle ={\sigma}_{c}^{2}+{\sigma}_{b}^{2}$, which is the scalar version of Equation (9).

#### 2.4. Error Covariance Diagnostics in Active Observation Space for Optimal Analysis

#### 2.5. Error Covariance Diagnostics in Passive Observation Space for Optimal Analysis

For an optimal analysis, the same orthogonality argument applies in passive observation space to the points $\widehat{A}$, O_{c}, O, and B. We thus have $\mathrm{E}[{(\widehat{A}-B)}_{c}{(\widehat{A}-B)}_{c}^{T}]+\mathrm{E}[{(O-\widehat{A})}_{c}{(O-\widehat{A})}_{c}^{T}]=\mathrm{E}[{(O-B)}_{c}{(O-B)}_{c}^{T}]$. Combining this result with Equation (7) and using $\mathrm{E}[{(O-B)}_{c}{(O-B)}_{c}^{T}]={H}_{c}B{H}_{c}^{T}+{R}_{c}$, we then get

## 3. Results with Near Optimal Analyses

#### 3.1. Experimental Setup

Analyses of O_{3} and PM_{2.5} at 21 UTC for a period of 60 days (14 June to 12 August 2014) were performed using an optimum interpolation scheme combining the operational air quality model GEM-MACH forecast and the real-time AirNow observations (see Section 2 of [5] for further details). The analyses are made off-line, so they are not used to initialize the model. As input error covariances, we use uniform observation and background error variances, with $\tilde{R}={\sigma}_{o}^{2}I$ and $\tilde{B}={\sigma}_{b}^{2}C$, where $C$ is a homogeneous isotropic error correlation based on a second-order autoregressive model. The correlation length is first estimated by a maximum likelihood method, using error variances obtained from a local Hollingsworth–Lönnberg fit [11] (these variances serve only to obtain a first estimate of the correlation length). We then conduct a series of analyses, changing the error variance ratio $\gamma ={\sigma}_{o}^{2}/{\sigma}_{b}^{2}$ while respecting the innovation variance consistency condition ${\sigma}_{o}^{2}+{\sigma}_{b}^{2}=\mathrm{var}(O-B)$. This basically corresponds to searching for the minimum of the trace of Equation (7) while the trace of the innovation covariance consistency, $tr[R+HB{H}^{T}]=tr\{\mathrm{E}[(O-B){(O-B)}^{T}]\}$, is respected.
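The construction of these input covariances can be sketched as follows. The second-order autoregressive (SOAR) correlation $(1+r/L)\,e^{-r/L}$ is a standard closed form for this correlation model; the station layout and count below are hypothetical, while $L_c$, $\mathrm{var}(O-B)$, and $\gamma$ reuse the O_{3} **iter 0** values of Table 1.

```python
import numpy as np

def soar_correlation(dist, L):
    """Second-order autoregressive (SOAR) correlation: (1 + r/L) * exp(-r/L)."""
    r = dist / L
    return (1.0 + r) * np.exp(-r)

rng = np.random.default_rng(1)
ns = 40                                    # hypothetical number of stations
xy = rng.uniform(0, 1000.0, size=(ns, 2))  # hypothetical station coordinates (km)
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)

L_c = 124.0        # correlation length (km), O3 iter 0 value from Table 1
innov_var = 101.25 # var(O - B), O3 iter 0 value from Table 1

# Split the innovation variance according to a trial ratio gamma = sigma_o^2/sigma_b^2,
# respecting sigma_o^2 + sigma_b^2 = var(O - B)
gamma = 0.22
sigma_b2 = innov_var / (1.0 + gamma)
sigma_o2 = innov_var - sigma_b2

C = soar_correlation(dist, L_c)  # homogeneous isotropic correlation model
B = sigma_b2 * C                 # background error covariance in obs space (H = I here)
R = sigma_o2 * np.eye(ns)        # uniform, uncorrelated observation errors

# Optimum interpolation gain in observation space
K = B @ np.linalg.inv(B + R)
```

Scanning `gamma` while keeping `sigma_o2 + sigma_b2` fixed reproduces the constrained search described above; each trial value yields a different analysis to be scored against the passive observations.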

We refer to this first experiment as **iter 0**. No observation or model bias correction was applied, nor were the means of the innovations at the stations removed prior to performing the analysis. The variance statistics are first computed at each station using the 60-day members, and then averaged over the domain to give the equivalent of $tr\{\mathrm{E}[{(O-A)}_{c}{(O-A)}_{c}^{T}]\}$. We repeat the procedure by exhausting all possible permutations, in this case three. The mean value statistics for the three verifying subsets are then averaged. More details can be found in Section 2 and the beginning of Section 3 of Part I [5].
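The three-fold cross-validation bookkeeping can be sketched as follows; the data and the `analysis_at` stub are placeholders for the actual OI analysis, used only to show the partitioning and averaging over the three verifying subsets.

```python
import numpy as np

rng = np.random.default_rng(2)
ns = 30
obs = rng.normal(size=ns)   # hypothetical observations at the stations
bkg = rng.normal(size=ns)   # hypothetical background interpolated to the stations

def analysis_at(passive_idx, active_idx):
    """Placeholder for the OI analysis evaluated at the passive stations.
    Here it simply returns the background there; a real implementation would
    use the active observations to compute the analysis increment."""
    return bkg[passive_idx]

# Split the stations into 3 verifying subsets and exhaust all permutations
folds = np.array_split(rng.permutation(ns), 3)
scores = []
for k, passive in enumerate(folds):
    active = np.concatenate([f for j, f in enumerate(folds) if j != k])
    a_c = analysis_at(passive, active)
    scores.append(np.var(obs[passive] - a_c))  # var(O_c - A) for this subset

cv_score = np.mean(scores)  # averaged over the three verifying subsets
```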

In a second iteration, **iter 1**, we first re-estimate the correlation length by applying a maximum likelihood method, as in Ménard [15], using the **iter 0** error variances that are consistent with the optimal ratio $\widehat{\gamma}$ obtained in **iter 0** and with the innovation variance consistency. Then, with this new correlation length, we estimate a new optimal ratio $\widehat{\gamma}$ (**iter 1**), which turns out to be very close to the value obtained in **iter 0**. We recall that we use uniform error variances, both to keep things simple and because the optimal ratio is obtained by minimizing a domain-averaged variance $\mathrm{var}({O}_{c}-A)$. A summary of the error covariance parameters obtained for **iter 0** and **iter 1** is presented in Table 1.

The analysis improves from **iter 0** to **iter 1** in terms of ${\chi}^{2}/{N}_{s}$, but the value is still not equal to one. We thus refer to the analysis of **iter 1** as near optimal.

In principle, the procedure could be iterated beyond **iter 1**. Figure 3 displays iterates 0 to 4 of our estimation procedure for O_{3}. With one iteration update we nearly converge. A similar procedure was used in Ménard [15], where the variance and the correlation length (estimated by maximum likelihood) were estimated in sequence; that study showed that a slow and fictitious drift in the estimated variances and correlation length can occur when the correlation model is not the true correlation. Since similar considerations may arise here, we do not extend our iteration procedure beyond the first iterate.

#### 3.2. Statistical Diagnostics of Analysis Error Variance

The different estimates of the analysis error variance agree for **iter 1** (but not for **iter 0**). We also note that ${\chi}^{2}/{N}_{s}$ is closer to one for **iter 1**. These two facts indicate that the updated correlation length (**iter 1**) with uniform error variances comes closer to satisfying the innovation covariance consistency. The Hollingsworth and Lönnberg [13] method, however, is very sensitive and negatively biased when innovation covariance consistency is lacking.

This behavior is most evident in the near-optimal case (**iter 1**), although the distinction is not as clear for PM_{2.5}.

#### 3.3. Comparison with the Perceived Analysis Error Variance

The analysis error variance for the optimal ozone analysis (O_{3} **iter 1**) is displayed in Figure 4 (a similar figure for PM_{2.5} is given in the Supplementary Materials). We note that although the input statistics used for the analysis are uniform (i.e., uniform background and observation error variances, and a homogeneous correlation model), the computed analysis error variance at the active observation locations displays large variations, which is attributed to the non-uniform spatial distribution of the active observations.

Figure 5 shows the distribution (histogram) of the analysis error variance at the active observation locations for the optimal case O_{3} **iter 1** (panel **b**) and for the first experiment O_{3} **iter 0** (panel **a**) without optimization (a similar figure for PM_{2.5} is given in the Supplementary Materials).

For O_{3} **iter 1** the scalar analysis error variance gives 16.2, and for O_{3} **iter 0** we get 15.0, thus explaining the secondary maxima at the high end of the histograms.

For the near-optimal analyses O_{3} **iter 1** and PM_{2.5} **iter 1**, the perceived analysis error variance roughly agrees with all analysis error variances estimated with the diagnostics (Table 2).

For O_{3} **iter 0** and PM_{2.5} **iter 0**, there is a general disagreement between all estimated values. Looking more closely, however, we note that the agreement in the optimal case is not perfect. The perceived analysis error variance is about 20% lower than the best estimates $tr(H{\widehat{A}}_{MDJ}{H}^{T})/{N}_{s}$ and $tr(H{\widehat{A}}_{D}{H}^{T})/{N}_{s}$. The ${\chi}^{2}/{N}_{s}$ values in the “optimal” cases are slightly above one, indicating that the innovation covariance consistency is not exact and that some further tuning of the error statistics could be done. More on that matter is presented in Section 4.5.
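Residual-based estimates such as $tr(H{\widehat{A}}_{D}{H}^{T})/{N}_{s}$ can be formed from station time series of observation, background, and analysis values, using the Desroziers-type relation $H\widehat{A}H^{T}\approx \mathrm{E}[(A-B){(O-A)}^{T}]$ [14]. The sketch below uses a synthetic 60-day dataset and a stand-in analysis (a fixed 50/50 blend) purely to illustrate the computation; the real diagnostic would use the actual OI analysis fields.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, ns = 60, 25
# Synthetic 60-day series of observations and background at ns stations
O = rng.normal(size=(n_days, ns))
B = rng.normal(size=(n_days, ns))
A = 0.5 * O + 0.5 * B   # stand-in analysis; the real OI product in practice

amb = A - B             # analysis-minus-background residuals
oma = O - A             # observation-minus-analysis residuals

# Desroziers-type analysis error variance at each station:
# diag(H A_D H^T) ~ E[(A - B)(O - A)] in observation space
a_var_D = np.mean(amb * oma, axis=0)

# Domain-averaged estimate, the analogue of tr(H A_D H^T)/N_s
tr_A_D = np.mean(a_var_D)
```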

## 4. Discussion on the Statistical Assumptions and Practical Applications

#### 4.1. Representativeness Error with In situ Observations

#### 4.2. Correlated Observation-Background Errors

Figure 6 illustrates the geometry when the observation and background errors are uncorrelated (panel **a**) and when they are correlated (panel **b**).

#### 4.3. Estimation of Satellite Observation Errors with In situ Observation Cross-Validation

#### 4.4. Remark on Cross-Validation of Satellite Retrievals

This remark applies, for example, to satellite retrievals such as NO_{2} columns or AODs.

#### 4.5. Lack of Innovation Covariance Consistency and Its Relevance to the Statistical Diagnostics

For **iter 1** we obtained ${\chi}^{2}/{N}_{s}$ values of 1.36 for O_{3} and 1.25 for PM_{2.5} (see Table 1), indicating that the innovation covariance consistency is deficient, although less seriously than in experiment **iter 0**, where values of 2 and higher were obtained.
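The ${\chi}^{2}/{N}_{s}$ diagnostic measures whether the innovations are statistically consistent with the modelled innovation covariance $HB{H}^{T}+R$; it equals one when the consistency holds. A minimal sketch, with a deliberately misspecified covariance so that the diagnostic comes out above one, as in the experiments discussed here:

```python
import numpy as np

rng = np.random.default_rng(4)
ns = 20
# Modelled innovation covariance: S_model = H B H^T + R (illustrative diagonal form)
S_model = 2.0 * np.eye(ns)

# Draw innovations from a *different* true covariance to mimic misspecification
S_true = 2.6 * np.eye(ns)
d = rng.multivariate_normal(np.zeros(ns), S_true, size=5000)

# chi^2 = d^T (S_model)^{-1} d for each innovation vector; chi^2/N_s -> 1
# when the innovation covariance consistency holds
chi2 = np.einsum('ij,jk,ik->i', d, np.linalg.inv(S_model), d)
chi2_per_ns = chi2.mean() / ns
```

Here the true innovation variance exceeds the modelled one by a factor 1.3, so the diagnostic settles near 1.3 rather than 1, mimicking a deficient consistency.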

## 5. Conclusions

We have applied these diagnostics to analyses of O_{3} and PM_{2.5}. Our method applied the theory in a simplified way: first, by considering the averaged observation and background error variances and finding an optimal ratio $\gamma ={\sigma}_{o}^{2}/{\sigma}_{b}^{2}$ using as a constraint the trace of the innovation covariance consistency [15]; second, by using a single-parameter correlation model, whose correlation length we estimated by maximum likelihood [11], to obtain near-optimal analyses. We also did not attempt to account for representativeness error in the observations by, for example, filtering out observations that are close together. Despite these limitations, our results show that with near-optimal analyses all estimates of the analysis error variance roughly agree with each other, while they disagree strongly when the input error statistics are not optimal. This check on estimating the analysis error variance gives us confidence that the proposed method is reliable, and it provides an objective way to evaluate different analysis configurations, such as the type of background error correlation model, the spatial distribution of the error variances, and possibly the thinning of observations to circumvent effects of representativeness errors.

## Supplementary Materials

Figure S1: Analysis error variance for the optimal analysis case PM_{2.5} **iter 1**. The left panel shows the analysis error on the model grid and the right panel the analysis error at the active observation sites. Note that the color bars of the left and right panels are different; the maximum of the color bar for the left panel corresponds to ${\sigma}_{o}^{2}+{\sigma}_{b}^{2}$. Figure S2: Distribution (histogram) of the analysis error variance at the active observation locations: first analysis experiment PM_{2.5} **iter 0** (no optimization) on the left panel, and optimal analysis case PM_{2.5} **iter 1** on the right panel.

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## Appendix A. A Geometrical Derivation of the Desroziers et al. Diagnostic

## Appendix B. Diagnostics of Analysis Error Covariance and the Innovation Covariance Consistency

## References

1. Ménard, R.; Robichaud, A. The chemistry-forecast system at the Meteorological Service of Canada. In Proceedings of the ECMWF Seminar on Global Earth-System Monitoring, Reading, UK, 5–9 September 2005; pp. 297–308.
2. Robichaud, A.; Ménard, R. Multi-year objective analysis of warm season ground-level ozone and PM_{2.5} over North America using real-time observations and Canadian operational air quality models. Atmos. Chem. Phys. **2014**, 14, 1769–1800.
3. Robichaud, A.; Ménard, R.; Zaïtseva, Y.; Anselmo, D. Multi-pollutant surface objective analyses and mapping of air quality health index over North America. Air Qual. Atmos. Health **2016**, 9, 743–759.
4. Moran, M.D.; Ménard, S.; Pavlovic, R.; Anselmo, D.; Antonopoulus, S.; Robichaud, A.; Gravel, S.; Makar, P.A.; Gong, W.; Stroud, C.; et al. Recent advances in Canada’s national operational air quality forecasting system. In Proceedings of the 32nd NATO-SPS ITM, Utrecht, The Netherlands, 7–11 May 2012.
5. Ménard, R.; Deshaies-Jacques, M. Evaluation of analysis by cross-validation. Part I: Using verification metrics. Atmosphere **2018**, in press.
6. Daley, R. The lagged-innovation covariance: A performance diagnostic for atmospheric data assimilation. Mon. Weather Rev. **1992**, 120, 178–196.
7. Daley, R. Atmospheric Data Analysis; Cambridge University Press: New York, NY, USA, 1991; p. 457.
8. Talagrand, O. A posteriori evaluation and verification of analysis and assimilation algorithms. In Proceedings of the Workshop on Diagnosis of Data Assimilation Systems, November 1998; European Centre for Medium-Range Weather Forecasts: Reading, UK, 1999; pp. 17–28.
9. Todling, R. A complementary note to “A lag-1 smoother approach to system-error estimation”: The intrinsic limitations of residual diagnostics. Q. J. R. Meteorol. Soc. **2015**, 141, 2917–2922.
10. Hollingsworth, A.; Lönnberg, P. The statistical structure of short-range forecast errors as determined from radiosonde data. Part I: The wind field. Tellus **1986**, 38A, 111–136.
11. Ménard, R.; Deshaies-Jacques, M.; Gasset, N. A comparison of correlation-length estimation methods for the objective analysis of surface pollutants at Environment and Climate Change Canada. J. Air Waste Manag. Assoc. **2016**, 66, 874–895.
12. Janjic, T.; Bormann, N.; Bocquet, M.; Carton, J.A.; Cohn, S.E.; Dance, S.L.; Losa, S.N.; Nichols, N.K.; Potthast, R.; Waller, J.A.; et al. On the representation error in data assimilation. Q. J. R. Meteorol. Soc. **2017**.
13. Hollingsworth, A.; Lönnberg, P. The verification of objective analyses: Diagnostics of analysis system performance. Meteorol. Atmos. Phys. **1989**, 40, 3–27.
14. Desroziers, G.; Berre, L.; Chapnik, B.; Poli, P. Diagnosis of observation-, background-, and analysis-error statistics in observation space. Q. J. R. Meteorol. Soc. **2005**, 131, 3385–3396.
15. Ménard, R. Error covariance estimation methods based on analysis residuals: Theoretical foundation and convergence properties derived from simplified observation networks. Q. J. R. Meteorol. Soc. **2016**, 142, 257–273.
16. Kailath, T. An innovations approach to least-squares estimation. Part I: Linear filtering in additive white noise. IEEE Trans. Autom. Control **1968**, 13, 646–655.
17. Marseille, G.-J.; Barkmeijer, J.; de Haan, S.; Verkley, W. Assessment and tuning of data assimilation systems using passive observations. Q. J. R. Meteorol. Soc. **2016**, 142, 3001–3014.
18. Waller, J.A.; Dance, S.L.; Nichols, N.K. Theoretical insight into diagnosing observation error correlations using observation-minus-background and observation-minus-analysis statistics. Q. J. R. Meteorol. Soc. **2016**, 142, 418–431.
19. Caines, P.E. Linear Stochastic Systems; John Wiley and Sons: New York, NY, USA, 1988; p. 874.
20. Cohn, S.E. The principle of energetic consistency in data assimilation. In Data Assimilation; Lahoz, W., Khattatov, B., Ménard, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2010.
21. Mitchell, H.L.; Daley, R. Discretization error and signal/error correlation in atmospheric data assimilation: (I). All scales resolved. Tellus **1997**, 49A, 32–53.
22. Mitchell, H.L.; Daley, R. Discretization error and signal/error correlation in atmospheric data assimilation: (II). The effect of unresolved scales. Tellus **1997**, 49A, 54–73.
23. Joiner, J.; da Silva, A. Efficient methods to assimilate remotely sensed data based on information content. Q. J. R. Meteorol. Soc. **1998**, 124, 1669–1694.
24. Migliorini, S. On the equivalence between radiance and retrieval assimilation. Mon. Weather Rev. **2012**, 140, 258–265.
25. Chapnik, B.; Desroziers, G.; Rabier, F.; Talagrand, O. Properties and first application of an error-statistics tuning method in variational assimilation. Q. J. R. Meteorol. Soc. **2005**, 130, 2253–2275.
26. Ménard, R.; Deshaies-Jacques, M. Error covariance estimation methods based on analysis residuals and its application to air quality surface observation networks. In Air Pollution Modeling and Its Application XXV; Mensink, C., Kallos, G., Eds.; Springer International AG: Cham, Switzerland, 2017.
27. Skachko, S.; Errera, Q.; Ménard, R.; Christophe, Y.; Chabrillat, S. Comparison of the ensemble Kalman filter and 4D-Var assimilation methods using a stratospheric tracer transport model. Geosci. Model Dev. **2014**, 7, 1451–1465.
28. Efron, B. An Introduction to the Bootstrap; Chapman & Hall: New York, NY, USA, 1993; p. 436.

**Figure 1.** Hilbert space representation of a scalar analysis and cross-validation problem. The arrows indicate the directions of variability of the random variables, and the plane defined by the background and observation errors ${\epsilon}^{b}$, ${\epsilon}^{o}$ defines the analysis plane. The thick lines represent the norm associated with the different random variables. T indicates the truth, O the active observation, B the background, A the analysis, and O_{c} the passive observation.

**Figure 2.** Red line: variance of observation-minus-analysis of O_{3} in passive observation space. Blue line: variance of observation-minus-model.

**Figure 3.** Optimal estimates of ${\sigma}_{o}^{2}$ and ${\sigma}_{b}^{2}$, and maximum likelihood estimate of the correlation length ${L}_{c}$, for the first iterates. Blue is the optimal background error variance, green the optimal observation error variance, and red the correlation length (in km, with labels on the right side of the figure).

**Figure 4.** Analysis error variance for the ozone optimal analysis case O_{3} **iter 1**. (**a**) The analysis error on the model grid and (**b**) at the active observation sites. Note that the color bars of the two panels are different. The maximum of the color bar for panel (**a**) corresponds to ${\sigma}_{o}^{2}+{\sigma}_{b}^{2}$.

**Figure 5.** Distribution (histogram) of the ozone analysis error variance at the active observation locations. (**a**) First analysis experiment O_{3} **iter 0** (no optimization); (**b**) optimal analysis case O_{3} **iter 1**. Note that the scales are different between the two panels.

**Figure 6.** Geometrical representation of the analysis. (**a**) Observation errors uncorrelated with the background error; (**b**) correlated errors. T indicates the truth, O the observation, B the background, and $\widehat{\mathrm{A}}$ the optimal analysis.

**Table 1.** Summary of the error covariance parameters obtained for **iter 0** and **iter 1**.

| Experiment | ${L}_{c}$ (km) | $\langle {(O-B)}^{2}\rangle$ | $\widehat{\gamma}={\widehat{\sigma}}_{o}^{2}/{\widehat{\sigma}}_{b}^{2}$ | ${\widehat{\sigma}}_{o}^{2}$ | ${\widehat{\sigma}}_{b}^{2}$ | ${\chi}^{2}/{N}_{s}$ |
| --- | --- | --- | --- | --- | --- | --- |
| O_{3} iter 0 | 124 | 101.25 | 0.22 | 18.3 | 83 | 2.23 |
| O_{3} iter 1 | 45 | 101.25 | 0.25 | 20.2 | 81 | 1.36 |
| PM_{2.5} iter 0 | 196 | 93.93 | 0.17 | 13.6 | 80.3 | 2.04 |
| PM_{2.5} iter 1 | 86 | 93.93 | 0.22 | 16.9 | 77 | 1.25 |

| Experiment | Active $\mathrm{var}(\widehat{A}-B)$ | Active $tr(\mathbf{H}{\widehat{\mathbf{A}}}_{MDJ}{\mathbf{H}}^{T})/{N}_{s}$ | Active $tr(\mathbf{H}{\widehat{\mathbf{A}}}_{D}{\mathbf{H}}^{T})/{N}_{s}$ | Active $\mathrm{var}(O-\widehat{A})$ | Active $tr(\mathbf{H}{\widehat{\mathbf{A}}}_{HL}{\mathbf{H}}^{T})/{N}_{s}$ |
| --- | --- | --- | --- | --- | --- |
| O_{3} iter 0 | 60.29 | 22.69 | 9.61 | 24.33 | −6.03 |
| O_{3} iter 1 | 67.66 | 13.32 | 13.68 | 11.26 | 8.94 |
| PM_{2.5} iter 0 | 62.29 | 17.98 | 7.71 | 16.78 | −3.18 |
| PM_{2.5} iter 1 | 66.3 | 10.68 | 9.51 | 9.57 | 7.33 |

| Experiment | Passive $\mathrm{var}[{(\widehat{A}-B)}_{c}]$ | Passive $tr({\mathbf{H}}_{c}{\widehat{\mathbf{A}}}_{MDJ}{\mathbf{H}}_{c}^{T})/{N}_{s}$ | Passive $\mathrm{var}[{(O-\widehat{A})}_{c}]$ | Passive $\mathrm{var}[{(O-\widehat{A})}_{c}]-{\sigma}_{oc}^{2}$ |
| --- | --- | --- | --- |
| O_{3} iter 0 | 56.95 | 26.03 | 51.02 | 32.72 |
| O_{3} iter 1 | 52.04 | 28.95 | 48.95 | 28.75 |
| PM_{2.5} iter 0 | 62.29 | 22.65 | 38.09 | 24.49 |
| PM_{2.5} iter 1 | 66.3 | 24.62 | 38.28 | 21.38 |

| Experiment | Perceived $tr(\mathbf{H}{\widehat{\mathbf{A}}}_{P}{\mathbf{H}}^{T})/{N}_{s}$ |
| --- | --- |
| O_{3} iter 0 | 5.77 |
| O_{3} iter 1 | 11.60 |
| PM_{2.5} iter 0 | 4.37 |
| PM_{2.5} iter 1 | 8.21 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Ménard, R.; Deshaies-Jacques, M. Evaluation of Analysis by Cross-Validation, Part II: Diagnostic and Optimization of Analysis Error Covariance. *Atmosphere* **2018**, *9*, 70.
https://doi.org/10.3390/atmos9020070
