Uncertainty Quantification for Epidemic Risk Management: Case of SARS-CoV-2 in Morocco

In this paper, we propose a new method for epidemic risk modelling and prediction, based on uncertainty quantification (UQ) approaches. In UQ, we consider the state variables as members of a convenient separable Hilbert space, and we look for their representation in finite dimensional subspaces generated by truncations of a suitable Hilbert basis. The coefficients of the finite expansion can be determined by approaches established in the literature, adapted to the determination of the probability distribution of epidemic risk variables. Here, we consider two approaches: collocation (COL) and moment matching (MM). Both are applied to the case of SARS-CoV-2 in Morocco, as an epidemic risk example. For all the epidemic risk indicators computed in this study (number of detections, number of deaths, number of new cases, predictions and human impact probabilities), the proposed models were able to estimate the values of the state variables with precision, i.e., with very low root mean square errors (RMSE) between predicted values and observed ones. Finally, the proposed approaches are used to generate a decision-making tool for future epidemic risk management, or, more generally, a quantitative disaster management approach in the humanitarian supply chain.


Introduction
In Morocco, the number of cumulative confirmed cases of SARS-CoV-2 is currently 1,258,761, with 7209 individuals undergoing treatment in July 2022 [1,2]. The first cases, from Wuhan, were notified to the WHO on 31 December 2019 [3,4], while Morocco notified its first case on 2 March 2020 [1]. As in many countries of the world, it is valuable to understand the growth and the timing in responding to the logistic needs of the health system [5][6][7]. The aim of the present paper is to compare the different waves of virus patient numbers, to visualize the most critical distribution of contamination cases in Morocco, and to determine the cumulative distribution function of the numbers of confirmed cases and deaths.
As an approach, we use a mathematical model and an uncertainty quantification technique for epidemics that integrates the different stages of individuals (susceptible, dead, recovered, exposed and asymptomatic). We consider that our analysis highlights the importance of anticipation and timing to avoid overwhelming the health system [7][8][9].
More precisely, our analysis shows, in particular, that, while epidemic growth is exponentially fast, by respecting a certain logistic growth the epidemic might remain under control [6]. As a consequence, a non-negligible amount of expenses can be saved. Finally, a probabilistic approach to project forwards the cumulative number of virus-infected cases is provided.
Epidemic risk modeling and prediction is not new [25]. However, we aim to take into account the sources of uncertainty in the model and to test their influence on the accuracy of the results. The uncertainties can be of an aleatory or epistemic type, and can have different sources, namely, input data, model bias or numerical errors [26]. Uncertainties cannot be avoided, so the effective solution is to quantify them by calibrating the proposed models and by using these calibrated models for making predictions, estimations and/or approximations. SARS-CoV-2 modelling is fundamentally uncertain: on the one hand, for the asymptomatic cases, the available data are limited and sometimes based on approximations. On the other hand, the algorithms used for modelling are themselves uncertain: they are only an approximation of reality and include many unknowns (speed of propagation, virus mutation, etc.). They do not give enough information for the estimation of possible impacts and damages. All these uncertainties significantly influence the results of the modelling and must be considered.
Uncertainty quantification has gained massive attention from researchers in different fields, namely, applied mathematics, engineering, physics, management and economics. Uncertainty quantification in terms of state estimation of discrete/continuous time systems is a fundamental problem in the engineering sciences [27]. UQ can be defined as the process of quantifying all the sources of uncertainty linked to a system and testing their influence on the accuracy of the results in order to: (i) keep its desired behavior, (ii) avoid undesired states, faults and damages, and (iii) furnish a realistic prognosis of the future. UQ tools have been used in several studies to assess and manage risks. Hammadi et al., 2022 [27,28] suggested a model using uncertainty quantification for risks linked to the customs supply chain, based on the moment matching method and considering the seasonality of illicit traffic in five sites in Morocco. Lopez et al., 2017 [29] used methods of uncertainty quantification for differential equations in order to predict and model the propagation of influenza in a boarding school using three approaches: moment matching (MM), collocation (COL) and variational (VAR), including a discrete linear system. Abdo et al., 2017 [30] developed a model based on uncertainty quantification for atmospheric dispersion risk assessment and treatment, using five methods of uncertainty representation and characterization, namely: probability theory, interval analysis, the fuzzy approach, the mixed probabilistic-fuzzy approach and evidence theory. In our application, the aim is to develop an uncertainty quantification model for epidemic risk, taking into consideration the uncertainties linked to the input data. Based on the fitted models, both a determination of the cumulative distribution function and a prediction of the most significant indicators of SARS-CoV-2 in Morocco are provided.
The rest of the paper is organized as follows: Section 2 recalls the general framework of uncertainty quantification and fixes the notation employed in this text. Section 3 presents the two methods for the numerical determination of the coefficients of the expansion, with examples intended to validate the proposed approach. It is followed by Section 4, concerning the estimation of the unknown parameters of the model and a discussion of the numerical results. In Section 5, we propose a new approach for managing future epidemic risks. Finally, Section 6 presents the main conclusions and future work.

Uncertainty Quantification-Based Model
In this section, we introduce the theoretical framework of uncertainty quantification and the parameter setting of our proposed model for epidemic risk.

Essence of Uncertainty Quantification
Uncertainty quantification (UQ) is a practice for making a model-based prediction, estimation and/or approximation reliable. Uncertainties can be of different types: stochastic (aleatory) or epistemic. Stochastic uncertainties that come from the inherent randomness of natural phenomena can be generated by either sampling uncertainty or measurement uncertainty [26]. Epistemic uncertainties are those generated from the lack of information or knowledge about the phenomena; they may have different sources, namely: data, parameters, algorithmic or method uncertainty [26].
Many studies were conducted to handle uncertainties, and researchers developed various methods to improve the quality and efficiency of UQ (see, for instance, [31][32][33][34]). Among these methods, Monte Carlo simulation (MCS) may be considered the basic UQ approach, to be used when no other information is known. The MCS provides successive resolutions of a deterministic system incorporating uncertain parameters modeled by random variables. Historically, this technique was developed for situations with a lack of information, where only the generation of random variates is available. It generates random draws for the uncertain parameters, according to their probability distributions and their correlations. For each draw, a set of parameters is obtained and a deterministic calculation, following well-defined analytical or numerical models, is made. The statistics generated by this method are poor in accuracy [35], and significant results require a large number of samples, which makes the MCS prohibitive in terms of computational cost [36]. Improvements may be obtained by using Latin hypercube sampling (LHS). Such a sampling approach gives a better accuracy [35]. However, it has some limitations: even if the number of samples increases, the error estimates may not be enhanced [28].
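The contrast between MCS and LHS can be sketched on a toy problem (the response function g(s) = s² and the uniform input are our illustrative assumptions, not a model from this paper): stratifying the input range lets LHS reach a given accuracy with far fewer draws than plain sampling.

```python
import random
import statistics

def g(s):
    # Illustrative response function; any model output works here.
    return s ** 2

def mc_estimate(n, seed=0):
    # Plain Monte Carlo: n independent uniform draws on (0, 1).
    rng = random.Random(seed)
    return statistics.fmean(g(rng.random()) for _ in range(n))

def lhs_estimate(n, seed=0):
    # Latin hypercube sampling in one dimension: one draw per stratum
    # [i/n, (i+1)/n), which spreads the sample evenly over the input range.
    rng = random.Random(seed)
    strata = [(i + rng.random()) / n for i in range(n)]
    return statistics.fmean(g(s) for s in strata)

# Exact value of E[g(S)] for S ~ U(0, 1) is 1/3; for the same number of
# samples, the stratified estimate is much closer than the plain one.
print(abs(mc_estimate(200) - 1 / 3))
print(abs(lhs_estimate(200) - 1 / 3))
```

In higher dimensions, LHS stratifies each coordinate separately and pairs the strata randomly, which preserves this variance-reduction effect per input variable.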
To avoid this kind of problem, polynomial chaos expansion (PCE) [37] was introduced. The original PCE was based on the representation of random variables by a multivariate series of polynomials of Gaussian variables [28,38,39]. Nowadays, it has evolved to consider an arbitrary Hilbert basis and other distributions (see the examples in Section 3). Although PCE has proved to be less expensive and computationally more accurate than MCS and LHS, this approach needs previous knowledge of the distributions considered, which generates some difficulties, so that variations were introduced, tending to use samples instead of the random variables themselves.

The Model Situation
Uncertainty quantification models are oriented towards the determination of probability distributions (cumulative or density functions) of random variables. In its general model, UQ considers a system having entries X ∈ R^{k_X}, internal parameters U ∈ R^{k_U} and an output Z ∈ R^{k_Z}: Z is the system's response to the entry X, and the system is characterized by its parameters U. Both the parameters and the entry can be affected by uncertainty, so that they are grouped in a vector of uncertain variables S = (X, U) ∈ R^{k_S}, k_S = k_X + k_U. In the UQ framework, Z is a function of S, Z = Z(S), and some statistical information is available on the couple (S, Z), for instance, a finite sample of the couple. UQ also proposes methods for the situation where S remains totally or partially unobserved and only observations of Z are available, by introducing a convenient artificial random vector T, destined to replace the unobserved parts of S with observed ones.
The Hilbert approach in the UQ framework considers a probability space (Ω, P) and the separable Hilbert space H = L^2(Ω, P) with scalar product (X, Y)_H = E(XY). Let S ∈ H^{k_S} be such that Im(S) = I ⊂ (a, b)^{k_S} ⊂ R^{k_S} and Z : I → R^{k_Z} be such that Z = Z(S). Under suitable conditions [40], we can consider a Hilbert basis B = {ϕ_i}_{i∈N*} ⊂ L^2(a, b) (N* is the set of the strictly positive integers) and represent each component of Z = (Z_1, . . . , Z_{k_Z}) as an expansion [28,36,41,42]:

Z_ℓ = Σ_{i∈N*} z_{iℓ} ϕ_i(S), ℓ = 1, . . . , k_Z.    (1)

Notice that the random variables Z and S are defined on the probability space (Ω, P), while the Hilbert basis B is not probabilistic: it takes real vectors as arguments and produces real vectors as results. Such a choice is possible since the image of S is formed by real vectors. The literature proposes methods for the generation of a Hilbert basis by considering eigenfunctions of convenient operators; see, for instance, [43]. Most of the bases proposed in the literature are polynomial or trigonometrical. In this work, we shall use a polynomial basis.
The expansion (1) can be written in vectorial form as

Z = Σ_{i∈N*} ϕ_i(S) z_i, z_i = (z_{i1}, . . . , z_{ik_Z}) ∈ R^{k_Z}.    (2)

In practice, a truncation of the series is used:

P_n Z = Σ_{i=1}^{n} ϕ_i(S) z_i.    (3)

Examples of Hilbert bases can be found in the literature, namely, polynomial and trigonometrical bases. We can find works in the literature involving non-orthogonal, orthogonal, or orthonormal bases. In this work, we shall consider a non-orthogonal polynomial basis: P_n Z is a polynomial function of S.
The coefficients z_i can be assembled in a matrix z ∈ R^{n×k_Z}, having z_i as its line i:

z = (z_{iℓ}), 1 ≤ i ≤ n, 1 ≤ ℓ ≤ k_Z,    (4)

and we have

P_n Z = Φ(S) z, Φ(S) = (ϕ_1(S), . . . , ϕ_n(S)).    (5)

P_n Z corresponds to an orthogonal projection: let V_n be the linear subspace of H^{k_Z} given by

V_n = { Y ∈ H^{k_Z} : Y = Φ(S) y, y ∈ R^{n×k_Z} }.    (6)

Notice that V_n = [B_n]^{k_Z}, where [B_n] is the linear span of B_n = {ϕ_1(S), . . . , ϕ_n(S)}. The best approximation of Z in V_n is its orthogonal projection onto V_n, denoted P_n Z. We have P_n Z → Z for n → +∞.
The coefficients z must be determined by appropriate methods, for instance, by collocation or moment matching. The literature shows that such an approach allows the calculation of probabilities of events related to Z by using the distribution of P_nZ, with good results [44].
In the framework of the Hilbert space L^2(Ω, P)^{k_Z}, the determination of the orthogonal projection P_n Z involves the solution of the variational equation

(Z − P_n Z, Y)_H = 0, ∀ Y ∈ V_n.    (7)

Taking Y = Φ(S)y, we obtain

E((Z − Φ(S)z) Φ(S)y) = 0, ∀ y ∈ R^{n×k_Z},    (8)

so that

Σ_{j=1}^{n} E(ϕ_i(S) ϕ_j(S)) z_{jℓ} = E(Z_ℓ ϕ_i(S)), 1 ≤ i ≤ n, 1 ≤ ℓ ≤ k_Z.    (9)

Equation (9) can be transformed into a linear system, or a family of k_Z linear systems [32,44,45], but the determination of z by this method involves the determination of E(ϕ_i(S)ϕ_j(S)) and E(Z_ℓ ϕ_j(S)) for 1 ≤ i, j ≤ n, 1 ≤ ℓ ≤ k_Z, so that it requires knowledge of the joint distribution of the pair (S, Z). Such information can be unavailable, so that alternative approaches, based on samples, were developed [36,41,42].
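When the expectations in Equation (9) are estimated by sample means, the projection reduces to a small linear system. A minimal sketch (the monomial basis and the toy response Z = exp(S) are our illustrative assumptions, not the paper's basis of Equation (15)):

```python
import numpy as np

rng = np.random.default_rng(1)
m_s, n = 2000, 4

S = rng.uniform(0.0, 1.0, m_s)           # sample of the uncertain input
Z = np.exp(S)                            # toy model response Z = Z(S)

# Monomial basis phi_i(s) = s**(i-1), i = 1..n (illustrative choice).
Phi = np.vander(S, n, increasing=True)   # Phi[i, j] = S_i ** j, shape (m_s, n)

# Sample estimates of E(phi_i phi_j) and E(Z phi_i) in Equation (9).
A = Phi.T @ Phi / m_s
b = Phi.T @ Z / m_s

z = np.linalg.solve(A, b)                # expansion coefficients
PnZ = Phi @ z                            # projection evaluated on the sample

print(np.sqrt(np.mean((PnZ - Z) ** 2)))  # RMSE of the degree-3 fit
```

The matrix A is the sample Gram matrix of the basis; with an orthonormal basis it would tend to the identity and the system would decouple.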
If Z depends on time t, i.e., Z = Z(t), the coefficients z become functions of time [36,41,42]. In our application, S is a random vector representing the aleatory/epistemic uncertainties in the health system, and Z denotes the daily numbers of infected people and deaths by SARS-CoV-2 in Morocco.

Numerical Methods for UQ
In this section, we present two non-intrusive methods for the evaluation of the coefficients z ∈ R^{n×k_Z} defining the approximation P_nZ.
Indeed, we can find in the literature methods for their determination, which are usually classified as intrusive or non-intrusive.
Intrusive approaches request a numerical or theoretical model, which is modified: the coefficients are introduced in the model by replacing Z with P_nZ and generating modified equations by adapted methods, such as orthogonal projections. As an example, let us consider a model defined by a vectorial system of algebraic equations ψ(S, Z) = 0: we rewrite this equation as E(ϕ_i(S)ψ(S, P_nZ)) = 0, i = 1, . . . , n, which generates a system of nonlinear equations for the coefficients z. To exemplify, we can consider the simple model ψ(S, Z) = S − Z = 0. In this case, we obtain a linear system analogous to Equation (9). In a general situation, we obtain a nonlinear system, which must be solved by a convenient method to determine z. Such an approach is intrusive since it is necessary to intervene in the model.
Non-intrusive methods do not require model modifications; they can even be used when the model is not available, as they are most often based on samples. For instance, non-intrusive approaches can be used when ψ is unknown, but we have a sample S = {(S_i, Z_i) : 1 ≤ i ≤ m_s}. Examples of non-intrusive approaches are those furnished by moment matching (MM) and collocation (COL), which are used in this work.
In the next subsections, we are going to briefly present MM and COL, including simple examples destined to clarify for the reader the steps requested for their implementation, namely, the choice of the random variable S.
We present the results of this section as the cumulative distribution function (CDF) of the state variables Z. From the CDF, one may obtain any desired statistical information about the state variables, such as the mean value, the variance, or any given quantile of interest. If desired, the probability density function (PDF) can be determined by numerical derivation of the CDF; in this task, particle derivatives have been shown to be efficient [41,42,46]. In this work, the PDF is not requested, since we are mainly interested in statistical quantities and probabilities that can be determined from the CDF.
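Reading statistics off a CDF can be illustrated with the empirical CDF of a sample and its generalized inverse (the helper names below are ours):

```python
import numpy as np

def empirical_cdf(sample):
    # Step-function CDF built from a sample: F(x) = fraction of points <= x.
    xs = np.sort(np.asarray(sample, dtype=float))
    def F(x):
        return np.searchsorted(xs, x, side="right") / xs.size
    return F

def quantile(sample, p):
    # Smallest x with F(x) >= p (generalized inverse of the CDF).
    xs = np.sort(np.asarray(sample, dtype=float))
    return xs[int(np.ceil(p * xs.size)) - 1]

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 10_000)
F = empirical_cdf(sample)
print(F(0.0))                 # close to 0.5 for a centred Gaussian
print(quantile(sample, 0.5))  # close to 0.0 (the median)
```

Any probability of the form P(Z ≤ x) or any quantile of interest follows directly from F, which is why the PDF itself is not needed in this work.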

Approach by Collocation (COL)
Fitting the underlying model, for a given distribution of the couple (S, Z), consists of determining the coefficients z of the approximation. As previously observed, these coefficients are defined by a variational equation, which can be reduced to a linear system, but whose solution requests the knowledge of the joint distribution of the pair (S, Z). Thus, in practice, it may be necessary to use an approach based on a sample S = {(S_i, Z_i) : 1 ≤ i ≤ m_s}. COL is such an approach: it is an interpolation method, equivalent to the approximation of the means in Equation (9) by their estimations on the sample [42]. In COL, we consider that the approximation coincides with the real value on the sample:

P_n Z(S_i) = Z_i, 1 ≤ i ≤ m_s.    (10)

Let us assemble Z_1, . . . , Z_{m_s} into a matrix Z having as its line i the vector formed by the values of Z_i:

Z_{iℓ} = (Z_i)_ℓ, 1 ≤ i ≤ m_s, 1 ≤ ℓ ≤ k_Z.    (11)

Then z is the solution of a linear system:

F z = G,    (12)

with F ∈ R^{m_s×n}, G ∈ R^{m_s×k_Z}, given by

F_{ij} = ϕ_j(S_i), 1 ≤ i ≤ m_s, 1 ≤ j ≤ n,    (13)

G = Z.    (14)

This linear system involves m_s × k_Z equations for n × k_Z unknowns. For m_s > n, the number of equations is higher than the number of unknowns and the system is overdetermined: it must be solved by a generalized solution, such as, for instance, the minimum squares one. As pointed out previously, the minimum squares solution corresponds to the approximation of the means in Equation (9) by their estimations on the sample S. It can also be interpreted as a discrete version of the Hilbertian approximations on the finite dimensional subspaces V_n [36]. Other interpolation methods can be considered, such as the collocation by intervals method or spline approximations.
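A minimal sketch of the COL system F z = G solved in the minimum-squares sense for m_s > n (the monomial basis and the toy two-component response are our assumptions, used here as a stand-in for the paper's Equation (15)):

```python
import numpy as np

rng = np.random.default_rng(2)
m_s, n = 100, 6

S = rng.exponential(1.0, m_s)               # sampled uncertain input
Z = np.column_stack([np.sqrt(S), S * S])    # two output components (k_Z = 2)

F = np.vander(S, n, increasing=True)        # F[i, j] = phi_j(S_i), monomials
G = Z                                       # G has Z_i as its line i

# Overdetermined system F z = G (m_s > n): minimum-squares solution.
z, *_ = np.linalg.lstsq(F, G, rcond=None)

# Residual of the interpolation conditions P_n Z(S_i) = Z_i.
print(np.sqrt(np.mean((F @ z - G) ** 2)))
```

Note that both components share the same matrix F, so the k_Z systems are solved simultaneously, one column of z per component of Z.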
As a first example, let us consider a situation where k_Z = k_S = 2, Z = (S_1 + S_2, S_1 S_2), with S_1 ∼ N(0, 1), S_2 ∼ Exp(1), S_1 and S_2 independent. We consider a sample of 100 variates from (S, Z), generated by (S_{1i}, S_{2j}), 1 ≤ i, j ≤ 10 (Table 1). We use the polynomial basis of monomials

ϕ_i(S) = S_1^{j_1−1} S_2^{j_2−1}, i = j_1 + k_1 (j_2 − 1), 1 ≤ j_1 ≤ k_1, 1 ≤ j_2 ≤ k_2,    (15)

with n = k_1 k_2 (in one dimension, Equation (15) reduces to ϕ_i(S) = S^{i−1}). As previously explained, this choice is suggested by the fact that many Hilbert bases considered in the literature are polynomial. The index transformation used is classical in the framework of finite element methods and uncertainty quantification (see, for instance, [44,45]) and allows the determination of the coefficients by standard methods for the solution of linear systems of algebraic equations.
The results obtained with k_1 = k_2 = 3 are shown in Figure 1. We compared the prediction furnished by P_nZ on the data, on a uniform grid of 201 × 201 points on (a_1, b_1) × (a_2, b_2) and on a random sample of 217 × 235 points on (a_1, b_1) × (a_2, b_2) (we generated 250 variates of S_i, i = 1, 2, with the normal and exponential generators from Matlab; points outside the region were excluded from the sample). The errors are synthesized in Table 2. The marginal distributions were determined and are shown in Figure 2.

As a second example, let us consider k_Z = k_S = 1, with Z normally distributed, Z ∼ N(m_Z = 0.052, σ_Z^2 = 0.011). We assume that a sample of 100 variates of Z is available, but S is unknown. We consider two artificial choices for S: (i) a Gaussian variable N(0, 1) (independent from Z); (ii) an exponential random variable Exp(1). Using n = 7 and ϕ_i given by Equation (15), we obtain the coefficients in Table 3.

Table 3. Coefficients found in the second example by COL.

The cumulative distribution function associated with P_nZ is compared to the exact one in Figure 3; as we see, the approximation has a good quality in the two cases.

Approach by Moment Matching (MM)
Moment matching looks for an element P_nZ such that a finite set of its moments coincides with those of Z. In practice, a sample is used to evaluate the moments [27,36,41,42].
Let us consider a multi-index α = (α_1, . . . , α_{k_Z}) ∈ (N*)^{k_Z} and denote |α| = α_1 + · · · + α_{k_Z}. We consider J(k) = {α ∈ (N*)^{k_Z} : |α| ≤ k} and we set, for X = (X_1, . . . , X_{k_Z}) and α ∈ J(k):

M_α(X) = E(X_1^{α_1} · · · X_{k_Z}^{α_{k_Z}}).    (16)

Moment matching looks for an approximation P_nZ such that

M_α(P_nZ) = M_α(Z), ∀ α ∈ J(k).    (17)

Let us consider F_α : R^{n×k_Z} → R given by

F_α(y) = M_α(Φ(S)y).    (18)

Then, the coefficients z correspond to the solution of the nonlinear algebraic system

z ∈ R^{n×k_Z} and F_α(z) − M_α(Z) = 0, ∀ α ∈ J(k),    (19)

or, with the moments estimated on a sample,

z ∈ R^{n×k_Z} and F_α(z) − M̂_α(Z) = 0, ∀ α ∈ J(k),    (20)

where M̂_α denotes the empirical moment. Numerically, we can look for

z minimizing Σ_{α∈J(k)} d(F_α(z), M_α(Z)),    (21)

where d : R × R → R measures the mismatch between two moments. Examples of possible choices for d are:

d_1(a, b) = |a − b|, d_2(a, b) = (a − b)^2, d_{1,rel}(a, b) = |a − b|/|b|.    (22)

When k_Z = 1, the equations simplify: we have

E((P_nZ)^j) = E(Z^j), 1 ≤ j ≤ k,    (23)

or

z ∈ R^n and F_j(z) − M̂_j(Z) = 0, 1 ≤ j ≤ k,    (24)

and

z minimizing Σ_{j=1}^{k} d(F_j(z), M̂_j(Z)).    (25)

The main difficulty in moment matching is the numerical solution of Equations (19) and (20) or (21) (or (23)–(25) for k_Z = 1). Indeed, the equations are strongly nonlinear and y → F_α(y) is nonconvex, so that a global optimization algorithm must be employed.
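A sketch of the k_Z = 1 case, minimizing the d_2 mismatch of the first k sample moments (the monomial basis, the starting point and the use of SciPy's Nelder-Mead as an analogue of Matlab's fminsearch are our assumptions):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Data: a sample of Z whose generating mechanism is treated as unknown.
Z = rng.normal(0.052, np.sqrt(0.011), 400)

# Artificial input variable and a small monomial basis.
S = rng.normal(0.0, 1.0, 400)
n, k = 3, 4
Phi = np.vander(S, n, increasing=True)

target = np.array([np.mean(Z ** j) for j in range(1, k + 1)])

def moment_mismatch(z):
    # Sum of d_2 mismatches between the first k moments of P_n Z and of Z.
    PnZ = Phi @ z
    mom = np.array([np.mean(PnZ ** j) for j in range(1, k + 1)])
    return np.sum((mom - target) ** 2)

x0 = np.array([Z.mean(), Z.std(), 0.0])     # crude starting point
res = minimize(moment_mismatch, x0, method="Nelder-Mead",
               options={"xatol": 1e-12, "fatol": 1e-14, "maxiter": 5000})
print(res.fun)  # near zero once the moments are matched
```

In practice the objective is nonconvex, which is why the paper perturbs the local search stochastically; a single local run such as this one is only a sketch of that procedure.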
Theorem (Lévy): Let {X_n} be a sequence of random variables and φ_n(t) = E(e^{itX_n}) be the characteristic function of X_n. Let X be a random variable and φ(t) = E(e^{itX}) be its characteristic function. Then X_n → X in distribution if and only if φ_n(t) → φ(t) almost everywhere.
Notice that this theorem ensures only convergence in distribution, so that the variable P_nZ may not converge to Z.
As a first example, let us consider again the situation where k_Z = k_S = 2, Z = (S_1 + S_2, S_1 S_2), with S_1 ∼ N(0, 1), S_2 ∼ Exp(1), S_1 and S_2 independent. Assume that we do not have any information about S: only the values of Z are available. In such a situation, we do not know how many random variables must be used to describe the data and we do not know their distributions. This lack of information can be compensated by using more data: assume that we have a sample formed by 400 variates of Z and let us use moment matching. We shall try to represent each component of Z by using a single random Gaussian variable, i.e., Z_i ≈ P_nZ_i, a polynomial of S_i ∼ N(0, 1). To do this, we generate two independent samples of 400 variates from S_1 and S_2. Then we look for a representation of Z_i as a polynomial of S_i using moment matching. We considered four alternative methods of solution: solving (23), solving (24), minimizing (25) with d_1, and minimizing (25) with d_{1,rel}. We used n = 6 and tested k ∈ {8, 9, . . . , 12}. All these choices led to results analogous to those shown in Figure 4; the optimization was carried out by fminsearch (Matlab) with stochastic perturbations introduced via an output function, and the results shown correspond to k = 8. The coefficients found are shown in Table 4.
The cumulative distribution function associated with P_nZ is compared to the exact one in Figure 4. Again, the results are good.

As a second example, let us consider again Z normally distributed, Z ∼ N(m_Z = 0.052, σ_Z^2 = 0.011). A sample of 100 variates is used to estimate the cumulative distribution function (CDF) based on the moment matching method, as shown in Figure 5. An approximated solution is obtained by an expansion of degree n = 6 with ϕ_i(S) defined by Equation (15) and S ∼ N(0, 1). The coefficients obtained for k = 12 are shown in Table 5.

We observe that both methods (collocation and moment matching) produce good results. Collocation requests less data, but more information about the distributions, which must be known. Moment matching requests more data, but produces results even for unknown distributions. In addition, the computational cost is much larger for MM, which requests the use of global optimization procedures, while collocation involves the solution of a linear system, generally of small or medium size.

Numerical Analysis to SARS-CoV-2
To make the approximation and determine the cumulative distribution function (CDF) of an epidemic risk time series, we apply our approach by considering a polynomial basis {ϕ_i}_{i∈N*}; then we look for a finite expansion whose coefficients are determined by the collocation and moment matching methods. In the following sections, we focus on the case of SARS-CoV-2 in Morocco.

Data Analysis
The simulations were carried out from data in [1], from 2 March 2020 to 15 August 2022. The numerical tests were performed using Python with the pandas library, on a computer with the following characteristics: Intel(R) Xeon(R) Silver 4214R CPU @ 2.4 GHz, 64.0 GB of RAM, running a Windows system. According to the daily reports, we first analyze the data and perform some preprocessing before the simulations. We obtain various summary statistics (per day), giving the mean, standard deviation, minimum and maximum values, and the quantiles of the data (see Table 6). Then, Figure 6 illustrates the cumulative numbers of confirmed and death cases, and Figure 7 illustrates the daily numbers of confirmed cases. The average daily increase in the number of confirmed cases is 1403.0, and the average daily increase in the death count is 18.0.
Figures 8 and 9 illustrate the rolling averages of the five different waves. We see that wave 2 (alpha) lasted much longer (280 days). The largest peak was reached during wave 3 (delta), followed by omicron (wave 4), alpha (wave 2), wave 5 and wave 1.

In addition, let us define the fatality rate [19]:

Fatality rate = (number of death cases / number of confirmed cases) × 100.

The fatality rate is given in Figure 10, and its average value is 1.9268398. Due to the fact that the number of confirmed cases is much higher than the number of deaths, we can see that the fatality rate has started to fall again, which is a good sign.
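The preprocessing steps of this section can be sketched with pandas (the column names and the toy records below are our assumptions; the real data follow the daily reports in [1]):

```python
import pandas as pd

# Illustrative daily records; the real data come from the daily reports
# in [1] (the column names here are our assumption).
df = pd.DataFrame({
    "date": pd.date_range("2020-03-02", periods=6, freq="D"),
    "confirmed": [1, 2, 2, 5, 8, 10],
    "deaths": [0, 0, 1, 1, 2, 2],
})

# Summary statistics per column: mean, std, min, max and quantiles.
summary = df[["confirmed", "deaths"]].describe()

# Daily new cases and a rolling average, as used for the wave plots.
df["new_cases"] = df["confirmed"].diff().fillna(df["confirmed"])
df["rolling_new"] = df["new_cases"].rolling(3, min_periods=1).mean()

# Fatality rate = deaths / confirmed * 100.
df["fatality_rate"] = df["deaths"] / df["confirmed"] * 100

print(summary.loc["mean"])
print(df["fatality_rate"].iloc[-1])  # 20.0 for the toy numbers above
```

On the real series, the same `describe`, `diff` and `rolling` calls produce the per-day statistics of Table 6 and the smoothed wave curves of Figures 8 and 9.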

Results and Discussion
In the following section, we present the numerical results of our proposed model based on UQ to analyze and predict the propagation of SARS-CoV-2 in Morocco. Based on the data collected in [1], from 2 March 2020 to 20 December 2021, we estimate the main key pandemic parameters, especially the CDFs, and make predictions of the numbers of infected and deaths, to help propose concrete actions to control the pandemic crisis. We highlight the importance of anticipation and timing to avoid an overwhelmed health system, which could considerably impact the treatment of patients and the well-being of health care workers.
In the next subsections, the results are presented in the following manner: we first solve the state estimation problem using collocation and moment matching for the determination of the coefficients. Then, we calculate the CDFs of new infected cases and deaths as the main key pandemic parameters and test whether epistemic uncertainties affect the results obtained. We also present the root mean square error (RMSE) results of the two approaches to compare their performance. Finally, we determine the probabilities related to humanitarian logistics, which may be used to develop a new approach to the management of potential epidemic risks faced in the future in Morocco. The numerical tests were performed using Matlab [45]. The numerical experiments were executed on a computer with an Intel(R) Core i7 CPU @ 2.60 GHz, 24.0 GB of RAM, using a UNIX system.

Results Using Collocation
The effect of epidemic disease [44,47], such as SARS-CoV-2, is modeled here using a UQ model. It considers a fixed population with only three state variables: infected (I), recovered (R) and dead (D) people. "I" denotes the number of individuals who have been infected with the disease and are capable of spreading the disease to the susceptible category. "D" are those individuals who have been infected and then removed due to death. "R" represents individuals who have been infected and then removed due to treatment. Since we assume that the population is fixed, we only need to evaluate two of the three state variables, e.g., I and D.
We apply the collocation method with n = 6 and ϕ_i given by Equation (15), where S is exponentially distributed, Exp(1). The cumulative distribution functions (CDFs) of the infected individuals and deaths are computed. The estimated values obtained are illustrated in Figures 11 and 12.

We may see from Figures 11 and 12 that there is a good agreement between the estimated values of the CDFs and the true response of the model. The coefficients of the orthogonal projection P_nZ are given in Table 7.
The orthogonal projection PZ approximated from the given data follows from the coefficients in Table 7. Now, we suppose that there is a population, not detected or asymptomatic, that was nevertheless affected by SARS-CoV-2. This population is considered to be a random variable, so that the perturbed data read z̃ = z + ∆z. The aim of this part is to validate the results of the previous section and to test whether epistemic uncertainties, i.e., uncertainties in the data, affect the results obtained by the collocation method. The uncertainties ∆z in the detections are supposed to belong to the range [0, 3], for both the new case and the new death counts. By using the Hausdorff distance, we obtain the results in Figure 13. For that, we firstly review the computation of the Hausdorff distance [16]. Secondly, we apply this distance to compare the CDFs of the key parameters of SARS-CoV-2 in Morocco.
In mathematics, and more specifically in geometry, the Hausdorff distance is a topological tool that measures the distance between two subsets of an underlying metric space. This distance appears in two very different contexts: in the field of image processing and in mathematics. The Hausdorff distance is defined on a metric space (W, d). Let A and D be two non-empty compact subsets of W. We first define, for any subset X of W, the open r-neighbourhood of X as the set [48]:

V_r(X) = {w ∈ W : d(w, X) < r}.

The Hausdorff distance D_h(A, D) between A and D is defined as the smallest real number r such that the r-neighbourhood of A contains D and the r-neighbourhood of D contains A [16]. In other words:

D_h(A, D) = inf{r > 0 : D ⊆ V_r(A) and A ⊆ V_r(D)}.

The results presented in Figure 13 show that the uncertainties linked to the non-detection of individuals infected with, or deaths from, SARS-CoV-2 do not affect the CDFs computed by interpolation. Indeed, the CDFs under uncertainties keep the same appearance as the distribution functions in Figures 11 and 12. Consequently, the response of our proposed epidemic risk model is not affected by epistemic uncertainties, i.e., those generated by a lack of information or knowledge about the phenomena (data, parameters, etc.). The uncertainties that affect our study are therefore of a stochastic type, coming from the inherent randomness of natural phenomena; they are generated either by sampling or by measurement.
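The Hausdorff distance between finite point sets (such as sampled CDF curves) can be computed directly from the definition above; a small sketch:

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in the plane.

    Implements max(sup_a inf_b d(a,b), sup_b inf_a d(a,b)), which for
    finite sets coincides with the infimum-over-r definition.
    """
    def directed(X, Y):
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (0.0, 2.0)]
print(hausdorff(A, B))  # -> 2.0
```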

Results Using Moment Matching
As mentioned previously, the second alternative approach used for determining the coefficients of the expansion is moment matching. The corresponding results are exhibited in Figures 14 and 15.
The results presented in Figure 14 show a good quality of approximation of the cumulative distribution function of the number of individuals infected with SARS-CoV-2 for an expansion of degree n = 9 and the k = 5 first empirical moments, with the polynomial basis ϕ i defined in Equation (15) and S ∼ Exp(1). From the corresponding CDF, one may obtain any desired statistical information about the state variables, such as the mean value, the variance or any quantile of interest, which can be used to establish preventive measures to manage the pandemic crisis, especially to anticipate the growth and the timing in responding to the logistics needs of the health system. Governmental agencies and health institutions should be prepared in advance for the control of epidemic outbreaks. This means that they should have in place robust contingency plans addressing issues such as vaccines and commodities, the availability of emergency medical stocks and well-trained personnel, their appropriate deployment, the availability of different types of vehicles for the transportation of essential medical supplies, etc. Effective epidemic management requires the combination of managerial decisions, such as planning and resource allocation [49]. Our proposed model for epidemic risk provides useful insights for planning for, and mitigating against, potential risks in the future.
However, the lack of any significant policies on disaster planning and implementation issues for the suggested control and intervention strategies remains a major gap in governmental planning. Our work emphasizes the importance of logistics decisions at every stage of disaster and risk management.
The results presented in Figure 15 show an approximation for an expansion of degree n = 5 and the k = 7 first empirical moments using a Gaussian polynomial basis, id est, ϕ i given by Equation (15) and S ∼ N(0, 1). The estimated CDF of the number of deaths from SARS-CoV-2 presents a slightly worse solution, but it is still a reasonable approximation. One of the well-known drawbacks of the moment matching approach is that the quality of the approximation depends on the quality of the optimization: if the algorithm converges to an uninteresting optimum of the objective A, a poor approximation is obtained. This kind of approximation is connected to Lévy's theorem [36]. Such a theorem ensures convergence in distribution, which is a weak convergence involving the approximation of the cumulative distribution function of the variable, but not of the variable itself [28]. In practice, this means that moment matching is not expected to furnish a good approximation of the variables themselves, but only of their distributions. The coefficients of the orthogonal projection PZ are given in Table 8. We now present a comparative analysis between the two proposed methods, collocation and moment matching, for the SARS-CoV-2 case study. We investigate the performance of the two numerical methods for the determination of the coefficients of the expansion presented in Section 4: COL and MM. In the two approaches, we employed different values of ns in order to analyze its influence on the results. In both approaches, we generate the results of this example considering the two indicators of SARS-CoV-2. In order to quantify the accuracy of each approach, we evaluate the RMSE of the approximation given by each method considering α i = 10%. The results are summarized in Table 9. One may easily see from these results that COL provides precise results compared to MM.
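A sketch of the moment-matching determination of the coefficients, under illustrative assumptions: a monomial basis and synthetic data replace the paper's Equation (15) basis and the Moroccan counts, and a crude random-search descent replaces the paper's optimization algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in sample of an epidemic indicator (synthetic, for illustration).
data = rng.gamma(4.0, 10.0, size=500)

k, n = 5, 3                                   # moments matched, degree
target = np.array([np.mean(data**j) for j in range(1, k + 1)])

S = rng.exponential(1.0, size=20_000)         # latent variable S ~ Exp(1)
Phi = np.vander(S, n + 1, increasing=True)    # Phi[:, i] = S**i (assumed basis)

def moments(c):
    """First k Monte Carlo moments of Z = sum_i c_i * S**i."""
    Z = Phi @ c
    return np.array([np.mean(Z**j) for j in range(1, k + 1)])

def J(c):
    """Relative squared mismatch of the first k moments."""
    return float(np.sum(((moments(c) - target) / target) ** 2))

# Crude random-search descent; only a sketch of the optimization step.
c = np.zeros(n + 1)
c[0] = target[0]                              # start at the empirical mean
best = J0 = J(c)
step = 1.0
for it in range(2000):
    cand = c + step * rng.standard_normal(n + 1)
    val = J(cand)
    if val < best:                            # accept only improvements
        c, best = cand, val
    if (it + 1) % 500 == 0:
        step *= 0.5                           # shrink the search radius
```

As the text notes, the quality of the resulting CDF approximation depends entirely on how well this optimization converges.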
For the two approaches, the accuracy of the estimated values of the state variables increases as the sample size ns increases. Finally, it must be pointed out that the MM approach takes more computational time to furnish the results.
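The RMSE used throughout this comparison is the standard quantity, for example:

```python
import math

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

obs = [10.0, 12.0, 15.0]
print(rmse([10.0, 12.0, 15.0], obs))  # -> 0.0 (perfect fit)
print(rmse([11.0, 13.0, 16.0], obs))  # -> 1.0 (constant unit error)
```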

Forecast for the Pandemic Parameters
Here, we perform predictions of the pandemic parameters (from 02 March 2020 to 20 December 2021) for the case of Morocco using the collocation approach. Most importantly, with the model and parameters in hand, we can carry out simulations and forecast the potential tendency of the COVID-19 pandemic. For that, we firstly perform the predictions of the cumulative numbers of confirmed cases, susceptible cases and deaths for the same period. Secondly, we investigate the degree of dependency between these key variables by computing the correlation coefficient ρ, as well as the performance of the prediction, by computing the RMSE between the model predictions and the observations. In the following, the effect of SARS-CoV-2 is analyzed considering three state variables: susceptible (S), infected (I) and removed (R) people. "S" represents the number of individuals not yet infected with the disease at time t, i.e., those susceptible to the disease. "I" denotes the number of individuals who have been infected with the disease and are capable of spreading it to those in the susceptible category. "R" are those individuals who have been infected and then removed due to death [46]. The predictions of these state variables are given below.

Figures 16 and 17 present the results of the estimation of the two state variables, the number of infected cases and the number of death cases, using the collocation approach. We may see from these figures that the collocation approach was able to estimate "I" and "R" with precision, i.e., with very low RMSE values. The number of deaths (blue curve) reached its peak during the Delta wave (from 05 July 2021 to 14 November 2021); this can be justified by the fact that the Delta wave was more severe than the Alpha wave (the first wave of the pandemic in Morocco).

From Figure 18, the degree of dependency between the two variables, the number of infected people and the number of deaths, is examined by computing the correlation coefficient ρ. We can deduce that there is a strong dependence between the two state variables (ρ = 0.96), with a best delay of 9 days. Such dependency has a causal meaning, which may affect the efficiency of the preparedness strategy of the government and the health system confronted with such an epidemic crisis.

Figures 19 and 20 show the results of the estimation of the two state variables, the number of susceptible cases and the number of infected cases, using the collocation approach. The results indicate a good quality of estimation of "S" and "I", i.e., with very low RMSE values. The number of infected people (blue curve) reached its peak during the Delta wave (from 05 July 2021 to 14 November 2021). Figure 21 investigates the degree of dependency between the two variables, the number of susceptible individuals, "S", and the number of infected ones, "I", by computing the correlation coefficient ρ. The results show a lower dependence between these two state variables (ρ = 0.66) when compared with the correlation between "I" and "R".
Such dependency is not causal, i.e., in the broadest sense, "correlation" may indicate any type of association; here it refers to the degree to which a pair of variables are linearly related.
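The delayed-correlation analysis described above can be sketched as follows, on synthetic series standing in for the Moroccan data (here a 9-day shift is built into the example, so the scan should recover it):

```python
import numpy as np

def best_lag_correlation(x, y, max_lag):
    """Pearson correlation of x against y delayed by 0..max_lag days.

    Returns (best_lag, best_rho), i.e., the delay maximizing rho.
    """
    best = (0, -2.0)
    for lag in range(max_lag + 1):
        xs = np.asarray(x[: len(x) - lag], dtype=float)
        ys = np.asarray(y[lag:], dtype=float)
        rho = np.corrcoef(xs, ys)[0, 1]
        if rho > best[1]:
            best = (lag, rho)
    return best

# Synthetic check: y is proportional to x shifted by 9 days.
rng = np.random.default_rng(2)
x = rng.poisson(100, size=300).astype(float)
y = np.concatenate([rng.poisson(100, size=9), x[:-9] * 0.02]).astype(float)
lag, rho = best_lag_correlation(x, y, max_lag=20)
```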

Probabilities of the Epidemic Disaster Impact
The interest of this work is not only in looking for the cumulative distribution functions (CDFs), but also in describing the impacts of SARS-CoV-2's behavior according to the new infected cases and the new detected deaths. For that, a classification of the disaster impact was established according to the classification of the "National Risk Assessments: A Cross Country Perspective" (OECD, 2018) [50] for disaster evaluation.

The probabilities corresponding to Tables 10 and 11 are computed and summarized in Tables 12 and 13. According to these probabilities, it is clear that SARS-CoV-2 has a severe human impact. In terms of disaster management, especially when analyzing a disaster using the severity of its human impacts, the epidemic risk is evidently considered a critical disaster.
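The band probabilities of Tables 12 and 13 are, in essence, empirical frequencies of daily counts falling into severity classes; a sketch with hypothetical cut-offs (the paper's actual bands follow the OECD classification, and the series here is synthetic):

```python
import numpy as np

# Hypothetical severity bands for daily deaths; illustrative only.
bands = {"minor": (0, 10), "moderate": (10, 50), "severe": (50, np.inf)}

rng = np.random.default_rng(3)
daily_deaths = rng.poisson(30, size=365)  # stand-in daily series

# Empirical probability of each impact class.
probs = {
    name: float(np.mean((daily_deaths >= lo) & (daily_deaths < hi)))
    for name, (lo, hi) in bands.items()
}
print(probs)  # class probabilities summing to 1
```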
The probabilities obtained may be used to predict the tendency of a pandemic in a region in general, and to predict potential epidemic risks in the future. In fact, the principle is simple: it consists in drawing regions with replacement, taking into consideration the basic probabilities obtained by our model. This method is very effective and is applicable if the considered regions are independent in terms of risk/disaster occurrence (the independence of the regions may be tested by the conditional expectation). However, this method has limitations, because it gives good results only for long-term predictions. The results of this study show that SARS-CoV-2 does not follow a known, usual distribution and may even be a mixture of probability distributions, as shown by the cumulative distribution functions (CDFs). This is justified by the fact that epidemic disasters behave randomly according to climate seasonality, epidemiological and clinical parameters, population behaviors and their adherence to health system instructions. Our application goes beyond the use of uncertainty quantification tools to determine the distribution of the data: it aims to use the proposed model to predict behavior and effectively manage potential epidemic risks in the future. The proposed approach shows its effectiveness as an important decision-making tool for governments and organizations in terms of disaster management, especially pre-disaster preparedness.
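The draw-with-replacement principle can be sketched as follows, with hypothetical per-region probabilities (the region names and values are illustrative only, not the paper's estimates):

```python
import random

# Hypothetical per-region probabilities of a severe epidemic impact.
region_probs = {"Casablanca-Settat": 0.45,
                "Rabat-Sale-Kenitra": 0.30,
                "Marrakech-Safi": 0.25}

random.seed(4)
# Draw regions with replacement, weighted by the basic probabilities.
draws = random.choices(list(region_probs),
                       weights=list(region_probs.values()), k=10_000)

# Long-run frequencies approach the basic probabilities.
freq = {r: draws.count(r) / len(draws) for r in region_probs}
```

As noted above, this sampling scheme is meaningful only if the regions are independent in terms of risk occurrence, and it is reliable mainly for long-term predictions.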

Proposed Quantitative Epidemic Risk Management Approach
In this section, the aim is to propose a new approach for epidemic risk management based on mathematical modeling. Therefore, we move towards the concept of humanitarian logistics, a branch of logistics which deals with disaster preparedness and intervention (including epidemics). The proposed approach, shown in Figure 22, presents the interactive process of risk management deployed in disaster situations to support decision-making during each phase of the disaster management cycle.
Taking a decision at the right time during each phase leads to better preparation, better alerts, reduced vulnerability and/or the prevention of future disasters. The cycle in orange in the framework represents the risk management process, while the blue one represents the correspondence between the stages of risk management and those of the disaster management cycle.

The elements of the framework in Figure 22 include: mitigation actions, which refer to activities that avoid the negative impact of hazards and ways to minimize the disasters associated with them; a quantitative specification of the probability of apparition of the epidemic disaster and its impact degree; epidemic risk treatment; preparedness, i.e., activities and measures taken in advance to ensure an effective response to the impact of hazards, including prompt, timely and effective alerts as well as the temporary evacuation of people and material goods from threatened premises; monitoring and review; the response to the epidemic; damage assessment, damage treatment and risk reassessment; and the assessment of the current epidemic risk (COVID-19), compared with previous epidemic risk assessments, in order to develop mitigation and preparedness actions in the future.

Risk Management Process
Risk management consists of identifying, evaluating and controlling the disasters to which we are exposed. The process of risk management in a disaster context contains seven basic steps [28]:
(a) Setting the context: any process of risk management begins by establishing the context of what needs to be managed. At this step, we define the internal and external context, the objectives and the criteria of the epidemic disaster (rate of transmission, mortality, mutation, contamination conditions, treatment, vaccination, etc.). Setting the context helps to identify the parameters for the next steps of the risk management process.
(b) Disaster risk identification: risk identification allows decision-makers to be aware of events likely to create uncertainty. This step requires an effort to analyze and identify the sources of epidemic risk.
(c) Disaster risk analysis: once a specific epidemic risk has been identified, we determine its likelihood of occurrence and its impacts. The objective of the analysis is to better understand the disaster and how it might affect individuals and ordinary life.
(d) Disaster risk evaluation: the risk assessment is carried out using the mathematical model in order to estimate the key parameters of the epidemic and the cumulative distribution function, and to make predictions to anticipate the behavior of the disaster and, therefore, to better propose mitigation measures.
(e) Disaster risk prioritization: at this stage, we can decide whether the risk is acceptable, whether it requires treatment or whether it must be eliminated.
(f) Disaster risk treatment: this facilitates the elimination of the risk or its reduction to an acceptable level. We may accept the risk, which is the retention strategy adopted by an organization when the impact of the occurrence of an adverse event is considered to be low. However, there are cases in which accepting the epidemic disaster is not a choice, in particular when the cost of treating the risk is very high. The risk treatment can be matched with the response step in the disaster management cycle, where preventive and/or corrective plans to minimize the negative impacts of the disaster are implemented.
(g) Monitoring and review: this step is about continuously monitoring new and existing epidemic risks together with the corresponding plans, so that the risk management process is reviewed and updated as the epidemic mutates.

Disaster Management Process
Disaster management includes the activities of preparation, response, improvement and reconstruction; it is a part of what is called humanitarian logistics, which consists of bringing the right material to the right place, by the right person, in the right quantity and quality, at the right time, in the case of an emergency [49,51]. Disaster management encompasses all the activities and processes designed to be implemented before, at the time of, and after the disaster, with the aim of preventing or mitigating human and material damage [52]. So, the main objective of disaster management is to reduce or avoid possible damages from disasters, ensure good help to the people in need, and plan for a quick and efficient rehabilitation and recovery [53]. Disaster management emphasizes the coordination of efforts and the management of resources and interventions to deal with all aspects of emergencies, across the mitigation, preparedness, response and recovery stages. We define five steps of the disaster management cycle [49,51]:
a. Disaster assessment: a set of activities to analyze and evaluate previous disasters in order to define their criticality.
b. Mitigation: this step includes laws and mechanisms that reduce the vulnerability of the population and increase their resilience.
c. Preparedness: this step refers to the implementation of response plans in order to act against factors that society has failed to avoid or mitigate.
d. Response: the act of attending to the disaster.
e. Rehabilitation/Recovery: the final step of the cycle, which comes after the intervention against the disaster, with the objective of restoring the normal functioning of systems and the lives of victims.

Conclusions and Future Works
In this paper, a new approach for epidemic risk modeling based on uncertainty quantification (UQ) was presented, in order to effectively predict and manage disasters in the preparedness and response phases. This approach is based on the representation of random variables using polynomials: it consists in expanding the state variables in a polynomial basis and determining the deterministic coefficients of this expansion. For the determination of these coefficients, we presented two approaches: moment matching (MM) and collocation (COL). It is important to mention that the proposed approach provides not only an estimation of the state variables, but also their full probabilistic description (CDFs).
In the numerical analysis section, the case of SARS-CoV-2 in Morocco, as an example of pandemic risk, was analyzed using the COL and MM methods. The advantages and issues of each approach were pointed out. For instance, collocation was the more precise approach; however, it required some dependence between the variables. On the other hand, MM provided a reasonable estimation, but it required the solution of a complex optimization problem for the determination of the coefficients. More generally, the proposed epidemic risk model based on UQ was able to estimate the values of the main key parameters of SARS-CoV-2 with precision, i.e., with very low RMSE values.
From the point of view of quantitative risk management, the contribution of this work lies in considering the numbers of deaths, confirmed cases and susceptible cases in epidemic risk, for modelling and for predicting the future behavior of the disaster under uncertainties. In fact, previous studies generally focused on risk occurrence predictions and clinical views, and they did not take uncertainties into consideration for disaster modeling. The available data themselves are uncertain, due to the fraction of non-detected SARS-CoV-2 infections, so in this work this kind of uncertainty was taken into consideration in the model validation. In general, the method has shown its effectiveness and given us good quality results, which we used to calculate the probabilities of the risk impacts and thus to propose a new approach to risk management based on quantitative indicators of potential epidemic disasters.
This standard method for the knowledge and assessment of disasters is a fundamental element for the strengthening of risk management policies in Morocco and, more generally, worldwide. Its implementation will significantly improve the effectiveness of plans, whether for risk prevention, crisis management preparation, or post-disaster recovery and reconstruction, which are the subject of future works. These will be based on the use of optimization tools for facility location problems. In addition, other mathematical models will be developed for the prediction of natural and technological disasters, in particular Markov chains and neural networks. A comparison will then be made between the models for the final validation of the developed pre-disaster approach.
Data Availability Statement: The data collected were taken from official sources; the information relating to them is detailed in the references attached to the manuscript.