Towards Overcoming the Curse of Dimensionality: The Third-Order Adjoint Method for Sensitivity Analysis of Response-Coupled Linear Forward/Adjoint Systems, with Applications to Uncertainty Quantification and Predictive Modeling

Abstract: This work presents the Third-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM) for response-coupled forward and adjoint linear systems. The 3rd-ASAM enables the efficient computation of the exact expressions of the 3rd-order functional derivatives ("sensitivities") of a general system response, which depends on both the forward and adjoint state functions, with respect to all of the parameters underlying the respective forward and adjoint systems. Such responses are often encountered when representing mathematically detector responses and reaction rates in reactor physics problems. The 3rd-ASAM extends the 2nd-ASAM in the quest to overcome the "curse of dimensionality" in sensitivity analysis, uncertainty quantification and predictive modeling. This work also presents new formulas that incorporate the contributions of the 3rd-order sensitivities into the expressions of the first four cumulants of the response distribution in the phase-space of model parameters. Using these newly developed formulas, this work also presents a new mathematical formalism, called the 2nd/3rd-BERRU-PM ("Second/Third-Order Best-Estimated Results with Reduced Uncertainties Predictive Modeling") formalism, which combines experimental and computational information in the joint phase-space of responses and model parameters, including not only the 1st-order response sensitivities, but also the complete Hessian matrix of 2nd-order sensitivities and also the 3rd-order sensitivities, all computed using the 3rd-ASAM. The 2nd/3rd-BERRU-PM uses the maximum entropy principle to eliminate the need for introducing and "minimizing" a user-chosen "cost functional quantifying the discrepancies between measurements and computations," thus yielding results that are free of subjective user-interferences while generalizing and significantly extending the 4D-VAR data assimilation procedures.
Incorporating correlations, including those between the imprecisely known model parameters and computed model responses, the 2nd/3rd-BERRU-PM also provides a quantitative metric, constructed from sensitivity and covariance matrices, for determining the degree of agreement among the various computational and experimental data while eliminating discrepant information. The mathematical framework of the 2nd/3rd-BERRU-PM formalism requires the inversion of a single matrix of size N_r × N_r, where N_r denotes the number of considered responses. In the overwhelming majority of practical situations, the number of responses is much smaller than the number of model parameters. Thus, the 2nd/3rd-BERRU-PM methodology overcomes the curse of dimensionality which affects the inversion of Hessian matrices in the parameter space.


Introduction
The functional derivatives (also called "sensitivities") of results (also called "responses") are needed for many purposes, including: (i) understanding the model by ranking the importance of the various parameters; (ii) performing "reduced-order modeling" by eliminating unimportant parameters and/or processes; (iii) quantifying the uncertainties induced in a model response due to model parameter uncertainties; (iv) performing "model validation," by comparing computations to experiments to address the question "does the model represent reality?" (v) prioritizing improvements in the model; (vi) performing data assimilation and model calibration as part of forward "predictive modeling" to obtain best-estimate predicted results with reduced predicted uncertainties; (vii) performing inverse "predictive modeling"; (viii) designing and optimizing the system. As is well known, even the approximate determination of the first-order sensitivities ∂R/∂α_i, i = 1, …, N_α, of a model response R to N_α parameters α_i using conventional finite-difference methods would already require N_α large-scale computations with altered parameter values, which is unfeasible for large-scale models comprising many parameters. The computation of higher-order sensitivities by conventional methods is limited in practice by the so-called "curse of dimensionality," since the number of such large-scale computations increases exponentially with the order of the response sensitivities. For the exact computation of the first- and second-order response sensitivities to parameters, the "curse of dimensionality" has been overcome by the Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) conceived and developed by Cacuci [1][2][3].
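The cost argument above can be illustrated with a minimal sketch: for a hypothetical scalar response of three parameters (not a reactor model), central finite differences already require two model evaluations per parameter for the first-order sensitivities alone, and the count grows exponentially with the sensitivity order.

```python
# Minimal sketch (hypothetical three-parameter response, not a reactor model):
# central finite differences need two model runs per parameter just for the
# first-order sensitivities dR/dalpha_i.
def response(alpha):
    a, b, c = alpha
    return a * a + 2.0 * a * b + 3.0 * c

def fd_sensitivities(f, alpha, h=1.0e-6):
    grads, runs = [], 0
    for i in range(len(alpha)):
        up = list(alpha); up[i] += h
        dn = list(alpha); dn[i] -= h
        grads.append((f(up) - f(dn)) / (2.0 * h))
        runs += 2  # two "large-scale" model evaluations per parameter
    return grads, runs

grads, runs = fd_sensitivities(response, [1.0, 2.0, 3.0])
print(runs)  # 2 * N_alpha evaluations; for N_alpha = 21,976 this is unfeasible
```

Each additional sensitivity order multiplies the number of required perturbed-parameter runs, which is precisely the "curse of dimensionality" that the adjoint methodologies avoid.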
The unique capability of the 2nd-ASAM to compute comprehensively and efficiently the exact first- and second-order sensitivities of a response to parameters in a large-scale physical system has been demonstrated [4][5][6] by an application to a reactor physics system which comprises 21,976 first-order sensitivities and 482,944,576 second-order sensitivities.
Sections 2 and 3 of this work present the Third-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM) for coupled forward and adjoint linear systems, which evidently extends and generalizes the 2nd-ASAM. The 3rd-ASAM aims at the efficient computation of the exact expressions of the 3rd-order functional derivatives ("sensitivities") of a system response that depends on both the forward and adjoint state functions with respect to all of the parameters underlying the respective forward and adjoint systems. Such responses are often encountered when representing mathematically detector responses and reaction rates in reactor physics problems. The 3rd-ASAM will be applied to the reactor physics system analyzed in [4][5][6][7] to compute the exact magnitude of the 3rd-order sensitivities to the model parameters that were found in [4][5][6][7] to have unexpectedly large 2nd-order sensitivities. Furthermore, the 3rd-order sensitivities computed using the 3rd-ASAM are incorporated in the new formulas, presented in Section 4, for computing to 3rd-order the first four cumulants of the response distribution in the phase-space of model parameters. Section 5 presents a new mathematical formalism, which will be called the "Second/Third-Order Best-Estimated Results with Reduced Uncertainties Predictive Modeling (2nd/3rd-BERRU-PM)." Set in the joint phase-space of responses and parameters, the 2nd/3rd-BERRU-PM incorporates experimental and computational information, including the complete (as opposed to partial) vector of 1st-order response sensitivities, the complete Hessian matrix of 2nd-order sensitivities and also the 3rd-order sensitivities, all computed using the 3rd-ASAM presented in Section 3. Thus, the 2nd/3rd-BERRU-PM extends the "BERRU Predictive Modeling" [7], thereby generalizing and including, as particular cases, similar formulas used in other fields, e.g., [8][9][10].
The 2nd/3rd-BERRU-PM uses the maximum entropy (MaxEnt) principle [11] to eliminate the need for introducing and "minimizing" a user-chosen "cost functional quantifying the discrepancies between measurements and computations." Incorporating correlations, including those between the imprecisely known model parameters and computed model responses, the 2nd/3rd-BERRU-PM also provides a quantitative metric, constructed from sensitivity and covariance matrices, for determining the degree of agreement among the various computational and experimental data and helping eliminate discrepant information. Conclusions regarding the significance of this work's novel results in the quest to overcome the curse of dimensionality in sensitivity analysis, uncertainty quantification and predictive modeling are presented in Section 6.

Mathematical Description of the Physical System
A linear physical system is generally represented by means of N_u coupled operator equations of the form ∑_{j=1}^{N_u} L_{ij}(α) ϕ_j(x) = Q_i(α), i = 1, …, N_u, x ∈ Ω_x, in which the operators L_{ij}(α) act linearly on the state functions ϕ_j(x). The above system of equations can be written in matrix form as follows:

L(α) ϕ(x) = Q(α), x ∈ Ω_x. (1)

Matrices and vectors will be denoted using bold letters. Since the right-side of Equation (1) may contain distributions, the equality in this equation is considered to hold in the weak ("distributional") sense. Similarly, all of the equalities that involve differential equations in this work will be considered to hold in the weak/distributional sense. All vectors in this work are considered to be column vectors, and transposition will be indicated by a dagger (†) superscript. The vectors, matrices and operators appearing in Equation (1) are defined as follows:

1. α ≡ (α_1, …, α_{N_α})† denotes a N_α-dimensional column vector whose components are the physical system's imprecisely known parameters, which are subject to uncertainties; α ∈ E_α ⊂ R^{N_α}, where E_α denotes a subset of the N_α-dimensional real vector space R^{N_α}. The symbol "≡" will be used to denote "is defined as" or "is by definition equal to." The vector α ∈ E_α ⊂ R^{N_α} is considered to include any imprecisely known model parameters that may enter into defining the system's boundary in the phase-space of independent variables.

2. x ≡ (x_1, …, x_{N_x})† ∈ R^{N_x} denotes the N_x-dimensional phase-space position vector, defined on a phase-space domain denoted as Ω_x.

3. L(α) ≡ [L_1(α), …, L_{N_u}(α)]† denotes a N_u-component column vector. The components of L(α) are operators acting linearly on ϕ and nonlinearly on α. When L(α) contains differential operators, a set of boundary and/or initial conditions which define the domain of L(α) must also be given. Since L(α) is considered to act linearly on ϕ(x), the accompanying boundary and/or initial conditions must also be linear in ϕ(x). Such linear boundary and/or initial conditions are represented in the following operator form:

B_F(α) ϕ(x) − C_F(α) = 0, x ∈ ∂Ω_x. (3)

In Equation (3), the operator B_F(α) ≡ [B_{ij}(α); i = 1, …, N_B; j = 1, …, N_u] is a matrix comprising, as components, operators that act linearly on ϕ(x) and nonlinearly on α; the quantity N_B denotes the total number of boundary and initial conditions. The operator C_F(α) ≡ [C_1(α), …, C_{N_B}(α)]† is a N_B-dimensional vector comprising components that are operators acting, in general, nonlinearly on α. The subscript "F" in Equation (3) indicates boundary conditions associated with the "forward" system of equations.
In most practical situations the Hilbert space H_ϕ is self-dual. The operator L(α) admits an adjoint (operator), which will be denoted as L^+(α), and which is defined through the following relation for an arbitrary vector ψ(x) ∈ H_ϕ:

⟨ψ(x), L(α) ϕ(x)⟩_ϕ = ⟨L^+(α) ψ(x), ϕ(x)⟩_ϕ + {P[ϕ(x); ψ(x); α]}_{∂Ω_x}, (4)

where {P[ϕ(x); ψ(x); α]}_{∂Ω_x} denotes the corresponding bilinear boundary term, evaluated on the phase-space boundary ∂Ω_x. In Equation (4), the formal adjoint operator L^+(α) is the N_u × N_u matrix

L^+(α) ≡ [L^+_{ji}(α)], i, j = 1, …, N_u, (5)

comprising elements L^+_{ji}(α) obtained by transposing the formal adjoints of the operators L_{ij}(α). Thus, the system adjoint to Equations (1) and (3) has the following general representation in operator form:

L^+(α) ψ(x) = Q_A(α), x ∈ Ω_x, (6)
B_A(α) ψ(x) − C_A(α) = 0, x ∈ ∂Ω_x. (7)

The domain of L^+(α) is determined by selecting the adjoint boundary and/or initial conditions represented in operator form in Equation (7), where the letter "A" indicates "adjoint" and the letter "B" indicates "boundary and/or initial conditions." These adjoint boundary and/or initial conditions are selected so as to ensure that the boundary terms that arise when going from the left-side to the right-side of Equation (4) vanish, in conjunction with the forward boundary conditions given in Equation (3). The source term of Equation (6) is associated with the system's response which, in this work, is considered to be a scalar-valued nonlinear functional of the adjoint and forward fluxes, which will be denoted as R[ϕ(x), ψ(x); α]. Such responses are often encountered when representing mathematically detector responses and reaction rates in reactor physics problems, and can be generally represented in the following inner-product form:

R[ϕ(x), ψ(x); α] ≡ ∫_{Ω_x} S[ϕ(x), ψ(x); α] dx, (8)

where S(ϕ; ψ; α) denotes a suitably differentiable function of its arguments. The nominal solution of Equations (1) and (3) is denoted as ϕ^0(x), and is obtained by solving these equations at the nominal parameter values α^0. The superscript "zero" will henceforth be used to denote "nominal" or "expected" or "mean" values. Thus, the vectors ϕ^0(x) and α^0 satisfy the following equations:

L(α^0) ϕ^0(x) = Q(α^0), x ∈ Ω_x, (9)
B_F(α^0) ϕ^0(x) − C_F(α^0) = 0, x ∈ ∂Ω_x. (10)

Equations (9) and (10) represent the "base-case" or nominal state of the forward physical system.
Similarly, the "base-case" or nominal state of the adjoint physical system is given by the following equations:

L^+(α^0) ψ^0(x) = Q_A(α^0), x ∈ Ω_x, (11)
B_A(α^0) ψ^0(x) − C_A(α^0) = 0, x ∈ ∂Ω_x. (12)

The nominal value of the response, R[ϕ^0(x), ψ^0(x); α^0], is determined by using the nominal parameter values α^0, the nominal value of the forward function ϕ^0(x) obtained by solving Equations (9) and (10), and the nominal value of the adjoint function ψ^0(x) obtained by solving Equations (11) and (12).
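In the finite-dimensional (matrix) case, the defining relation in Equation (4) reduces to the familiar statement that the adjoint of a real matrix is its transpose, and the bilinear boundary term vanishes identically. The following sketch, with an arbitrary 2×2 matrix chosen purely for illustration, verifies this identity numerically.

```python
# Finite-dimensional analogue of Equation (4): for a real matrix L, the
# adjoint L^+ is the transpose, and <psi, L phi> = <L^+ psi, phi> exactly
# (no boundary terms arise in the matrix case).
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

L   = [[2.0, 1.0], [0.0, 3.0]]          # arbitrary illustrative operator
Lt  = [[L[j][i] for j in range(2)] for i in range(2)]  # adjoint = transpose
phi = [1.0, 2.0]                         # "forward" vector
psi = [4.0, 5.0]                         # "adjoint" vector

lhs = dot(psi, matvec(L, phi))           # <psi, L phi>
rhs = dot(matvec(Lt, psi), phi)          # <L^+ psi, phi>
print(lhs, rhs)                          # identical values
```

For differential operators the same manipulation produces the boundary terms of Equation (4), which is why the adjoint boundary conditions of Equation (7) must be selected to make those terms vanish.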

The Third-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM) for Coupled Linear Forward and Adjoint Systems: Another Step Towards Overcoming the Curse of Dimensionality in the Exact Computation of High-Order Response Sensitivities
The model parameters α_i are imprecisely known quantities, so their actual values may differ from their nominal values by quantities denoted as δα_i ≡ α_i − α^0_i, i = 1, …, N_α. Since the model parameters α and the state functions are related to each other through the forward and adjoint systems, it follows that variations δα ≡ (δα_1, …, δα_{N_α})† in the model parameters will cause corresponding variations δϕ ≡ (δϕ_1, …, δϕ_{N_u})† and δψ ≡ (δψ_1, …, δψ_{N_u})† in the forward and adjoint state functions. In turn, the variations δα, δϕ, and δψ cause a response variation R(ϕ^0 + δϕ; ψ^0 + δψ; α^0 + δα) around the nominal response value R[ϕ^0(x), ψ^0(x); α^0].

The First-Level Adjoint System (1st-LASS) for Computing Exactly and Efficiently the First-Order Model Response Sensitivities to Parameters
The total first-order sensitivity of the response R[ϕ(x), ψ(x); α] to variations δα ≡ (δα_1, …, δα_{N_α})† in the model parameters is given by the definition of the Gateaux (G-) differential, denoted as δR(ϕ^0; ψ^0; α^0; δϕ; δψ; δα), of R[ϕ(x), ψ(x); α] around the nominal values (ϕ^0; ψ^0; α^0). By definition, this G-differential is:

δR(ϕ^0; ψ^0; α^0; δϕ; δψ; δα) ≡ {d/dε R(ϕ^0 + ε δϕ; ψ^0 + ε δψ; α^0 + ε δα)}_{ε=0} = {δR(ϕ^0; ψ^0; α^0; δα)}_dir + {δR(ϕ^0; ψ^0; α^0; δϕ; δψ)}_ind, (13)

where the "direct-effect term" {δR(ϕ^0; ψ^0; α^0; δα)}_dir depends solely on the parameter variations δα and is generally defined as follows:

{δR(ϕ^0; ψ^0; α^0; δα)}_dir ≡ ∫_{Ω_x} [∂S(ϕ; ψ; α)/∂α]† δα dx, (14)

while the "indirect-effect term" {δR(ϕ^0; ψ^0; α^0; δϕ; δψ)}_ind depends solely on the variations δϕ and δψ in the forward and, respectively, adjoint functions, and is generally defined as follows:

{δR(ϕ^0; ψ^0; α^0; δϕ; δψ)}_ind ≡ ∫_{Ω_x} {[∂S(ϕ; ψ; α)/∂ϕ]† δϕ(x) + [∂S(ϕ; ψ; α)/∂ψ]† δψ(x)} dx. (15)

Since the nominal values of the forward and adjoint functions are known after having solved Equations (9) through (12), it follows that the direct-effect term {δR(ϕ^0; ψ^0; α^0; δα)}_dir can already be computed at this stage. The indirect-effect term {δR(ϕ^0; ψ^0; α^0; δϕ; δψ)}_ind can be computed only after having determined the functions δϕ and δψ. These functions are obtained by solving the 1st-Level Forward Sensitivity System (1st-LFSS), which is obtained by G-differentiating the original forward and adjoint transport equations and respective boundary conditions given in Equations (1), (3), (6) and (7). Performing these differentiations yields the following 1st-LFSS:

L(α) δϕ(x) = Q^(1)_1(ϕ; α; δα), L^+(α) δψ(x) = Q^(1)_2(ψ; α; δα), x ∈ Ω_x, (16)

together with the following boundary conditions:

δ[B_F(α) ϕ(x) − C_F(α)] = 0, δ[B_A(α) ψ(x) − C_A(α)] = 0, x ∈ ∂Ω_x. (17)

The source-terms Q^(1)_1(ϕ; α; δα) and Q^(1)_2(ψ; α; δα) in Equation (16) are defined as follows:

Q^(1)_1(ϕ; α; δα) ≡ {∂[Q(α) − L(α)ϕ]/∂α} δα, Q^(1)_2(ψ; α; δα) ≡ {∂[Q_A(α) − L^+(α)ψ]/∂α} δα. (18)

Solving the 1st-LFSS defined by Equations (16) and (17) is computationally expensive, since the 1st-LFSS would need to be solved anew for every variation δα_i, i = 1, …, N_α, in the model parameters, as each such variation would affect the source terms on the right-side of Equation (16).
The computationally expensive evaluation of the indirect-effect term by using the 1st-LFSS can be avoided by expressing this indirect-effect term {δR(ϕ^0; ψ^0; α^0; δϕ; δψ)}_ind in terms of the solution of the 1st-Level Adjoint Sensitivity System (1st-LASS), which is constructed by implementing the following sequence of steps: (i) Consider two vector-valued functions u^(1)(x) ≡ [u^(1)_1(x), u^(1)_2(x)]† and ψ^(1)(x) ≡ [ψ^(1)_1(x), ψ^(1)_2(x)]†, each having two N_u-dimensional vector-components. The components of these vectors are assumed to be square-integrable functions.
(ii) Introduce a Hilbert space, denoted as H^(1), endowed with the following inner product, denoted as ⟨u^(1)(x), v^(1)(x)⟩^(1), between two functions of the form defined in item (i), above:

⟨u^(1)(x), v^(1)(x)⟩^(1) ≡ ∑_{i=1}^{2} ⟨u^(1)_i(x), v^(1)_i(x)⟩_ϕ. (19)

(iii) In the Hilbert space H^(1), form the inner product of Equation (16) with a yet undefined vector-valued function ψ^(1)(x) ≡ [ψ^(1)_1(x), ψ^(1)_2(x)]† ∈ H^(1) to obtain the following relation, evaluated at (ϕ^0; ψ^0; α^0), in which the superscript "zero" is omitted to simplify the notation:

⟨ψ^(1)_1, L(α) δϕ⟩_ϕ + ⟨ψ^(1)_2, L^+(α) δψ⟩_ϕ = ⟨ψ^(1)_1, Q^(1)_1(ϕ; α; δα)⟩_ϕ + ⟨ψ^(1)_2, Q^(1)_2(ψ; α; δα)⟩_ϕ. (20)

(iv) Use the definition of the adjoint operator in the Hilbert space H^(1) to recast the left-side of Equation (20) as follows:

⟨ψ^(1)_1, L(α) δϕ⟩_ϕ + ⟨ψ^(1)_2, L^+(α) δψ⟩_ϕ = ⟨L^+(α) ψ^(1)_1, δϕ⟩_ϕ + ⟨L(α) ψ^(1)_2, δψ⟩_ϕ + P^(1)[δϕ; δψ; ψ^(1); α], (21)

where the bilinear concomitant P^(1)[δϕ; δψ; ψ^(1); α] is defined on the phase-space boundary x ∈ ∂Ω_x. The superscript "zero" denoting nominal values for the quantities (ϕ^0; ψ^0; α^0) was also omitted in Equations (20) and (21), in order to simplify the notation. Omitting henceforth the superscript "zero" denoting nominal values for the quantities (ϕ^0; ψ^0; α^0) should not cause any loss of clarity, since all quantities are to be evaluated/computed using the respective nominal values of the model parameters and using the nominal values of the forward and adjoint functions evaluated at preceding stages/steps at nominal parameter values.
(v) Identify the first two terms on the right-side of Equation (21) with the indirect-effect term defined in Equation (15), i.e., require that

L^+(α) ψ^(1)_1(x) = [∂S(ϕ; ψ; α)/∂ϕ]†, L(α) ψ^(1)_2(x) = [∂S(ϕ; ψ; α)/∂ψ]†, x ∈ Ω_x, (22)

and use Equation (21) in conjunction with the boundary conditions given in Equation (17) to construct the following 1st-Level Adjoint Sensitivity System (1st-LASS): (vi) The boundary conditions given in Equation (17) are now implemented in Equation (21), thereby reducing by half the number of unknown boundary-values in the bilinear concomitant P^(1)[δϕ; δψ; ψ^(1); α]. The boundary conditions for the adjoint functions ψ^(1)_1(x) and ψ^(1)_2(x) are chosen so as to eliminate the remaining unknown boundary-values of δϕ and δψ, and can be represented in operator form as follows:

B^(1)[ψ^(1)_1(x), ψ^(1)_2(x); α] = 0, x ∈ ∂Ω_x. (23)

(vii) In most cases, the above choice of boundary conditions for the 1st-level adjoint function ψ^(1)(x) will cause the bilinear concomitant P^(1)[δϕ; δψ; ψ^(1); α] to vanish. Even when it does not vanish, however, this bilinear concomitant will be reduced to a quantity, denoted here as P̂^(1)[ϕ; ψ; ψ^(1)_1; ψ^(1)_2; α; δα], which will contain only known values of its arguments. (viii) Use the 1st-LASS defined by Equations (22) and (23) together with Equations (20) and (21) to obtain the following expression for the indirect-effect term defined in Equation (15), in terms of the adjoint functions ψ^(1)_1(x) and ψ^(1)_2(x):

{δR(ϕ; ψ; α; δϕ; δψ)}_ind = ⟨ψ^(1)_1, Q^(1)_1(ϕ; α; δα)⟩_ϕ + ⟨ψ^(1)_2, Q^(1)_2(ψ; α; δα)⟩_ϕ − P̂^(1)[ϕ; ψ; ψ^(1)_1; ψ^(1)_2; α; δα]. (24)

As indicated in Equation (22), the function ψ^(1)_1(x) is obtained by solving the original adjoint equation with the source [∂S(ϕ; ψ; α)/∂ϕ]†, while the function ψ^(1)_2(x) is obtained by solving the original forward equation with the source [∂S(ϕ; ψ; α)/∂ψ]†. Thus, after the 1st-LASS is solved to determine these two adjoint functions, the "indirect-effect term" is computed efficiently and exactly by simply performing the integrations ("quadratures") indicated by the inner-product on the right-side of Equation (24).
Replacing Equations (24) and (14) in Equation (13) eliminates the appearance of the functions δϕ(x) and δψ(x) in the resulting expression. Consequently, the total 1st-order response sensitivity can be expressed in terms of the adjoint functions ψ^(1)_1(x) and ψ^(1)_2(x) as follows:

δR(ϕ; ψ; ψ^(1); α; δα) = ∫_{Ω_x} [∂S(ϕ; ψ; α)/∂α]† δα dx + ⟨ψ^(1)_1, Q^(1)_1(ϕ; α; δα)⟩_ϕ + ⟨ψ^(1)_2, Q^(1)_2(ψ; α; δα)⟩_ϕ − P̂^(1)[ϕ; ψ; ψ^(1)_1; ψ^(1)_2; α; δα] ≡ ∑_{i=1}^{N_α} R^(1)_i(ϕ; ψ; ψ^(1); α) δα_i. (25)

All of the quantities shown in Equation (25) are to be evaluated at the nominal values (ϕ^0; ψ^0; ψ^(1),0; α^0). The partial 1st-order response sensitivities, denoted in Equation (25) as R^(1)_i(ϕ; ψ; ψ^(1); α) ≡ ∂R/∂α_i, i = 1, …, N_α, of the response R(ϕ; ψ; α) to a generic parameter α_i are obtained by identifying the quantities that multiply the respective parameter variations δα_i in Equation (25).
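The computational pattern embodied in the 1st-LASS can be illustrated on a matrix analogue: for L(α)ϕ = Q(α) and a response R = ⟨s, ϕ⟩, a single adjoint solve L^+ψ^(1) = s yields the sensitivity dR/dα = ⟨ψ^(1), ∂Q/∂α − (∂L/∂α)ϕ⟩ for any parameter, without re-solving the forward system. The 2×2 operator and source below are hypothetical, chosen only so that the exact sensitivity is known in closed form.

```python
# Matrix analogue of the 1st-LASS: one adjoint solve gives dR/da exactly.
def solve2(M, b):
    # direct solution of a 2x2 linear system by Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a  = 2.0                                  # single imprecisely known parameter
L  = [[a, 1.0], [0.0, 2.0]]               # forward operator L(a)
Lt = [[L[j][i] for j in range(2)] for i in range(2)]   # adjoint operator
Q  = [a, 1.0]                             # forward source Q(a)
s  = [1.0, 1.0]                           # response functional: R = <s, phi>

phi = solve2(L, Q)                        # forward solve: L(a) phi = Q(a)
psi = solve2(Lt, s)                       # adjoint solve:  L^+(a) psi = s

dQ     = [1.0, 0.0]                       # dQ/da
dL_phi = [phi[0], 0.0]                    # (dL/da) phi, with dL/da = [[1,0],[0,0]]
sens   = dot(psi, [dQ[0] - dL_phi[0], dQ[1] - dL_phi[1]])
print(sens)                               # closed form: 0.5 / a**2 = 0.125
```

The same single adjoint solution psi would serve every parameter entering L and Q, which is the source of the efficiency of the adjoint methodology.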

The Second-Level Adjoint System (2nd-LASS) for Computing Exactly and Efficiently the Second-Order Model Response Sensitivities to Parameters
The second-order sensitivities of the response R(ϕ; ψ; α) with respect to the parameters α_i, i = 1, …, N_α, are obtained by determining the first-order G-differentials of the 1st-order sensitivities R^(1)_i(ϕ; ψ; ψ^(1); α) ≡ ∂R/∂α_i defined through Equation (25). For this purpose, it is convenient to use the decomposition

δR^(1)_i(ϕ; ψ; ψ^(1); α; δϕ; δψ; δψ^(1); δα) = {δR^(1)_i}_dir + {δR^(1)_i}_ind, (26)

where the "indirect-effect term" {δR^(1)_i}_ind depends on the variations δϕ, δψ, δψ^(1)_1 and δψ^(1)_2 in the forward and, respectively, adjoint functions and is defined as follows:

{δR^(1)_i}_ind ≡ ⟨∂R^(1)_i/∂ϕ, δϕ⟩_ϕ + ⟨∂R^(1)_i/∂ψ, δψ⟩_ϕ + ⟨∂R^(1)_i/∂ψ^(1)_1, δψ^(1)_1⟩_ϕ + ⟨∂R^(1)_i/∂ψ^(1)_2, δψ^(1)_2⟩_ϕ. (27)

The indirect-effect term defined in Equation (27) can be computed only after having obtained the functions δϕ, δψ, δψ^(1)_1 and δψ^(1)_2. The functions δϕ and δψ are the solutions of the 1st-LFSS defined by Equations (16) and (17). Furthermore, the functions δψ^(1)_1 and δψ^(1)_2 are the solutions of the following system of equations, obtained by G-differentiating the 1st-LASS:

L^+(α) δψ^(1)_1(x) = Q^(2)_1(ϕ; ψ; ψ^(1)_1; α; δϕ; δψ; δα), L(α) δψ^(1)_2(x) = Q^(2)_2(ϕ; ψ; ψ^(1)_2; α; δϕ; δψ; δα), x ∈ Ω_x, (28)

together with the boundary conditions obtained by G-differentiating Equation (23):

δB^(1)[ψ^(1)_1(x), ψ^(1)_2(x); α; δα] = 0, x ∈ ∂Ω_x, (29)

where the source-terms Q^(2)_1 and Q^(2)_2 comprise the G-differentials, with respect to ϕ, ψ, ψ^(1) and α, of the terms appearing in Equation (22). The system comprising Equations (16) and (28), together with the boundary conditions provided in Equations (17) and (29), constitutes the 2nd-Level Forward Sensitivity System (2nd-LFSS). Since the source-terms of the 2nd-LFSS depend on the parameter variations δα_i, it follows that the determination of the functions δψ^(1)_1 and δψ^(1)_2 is at least as expensive computationally as determining the functions δϕ and δψ by solving the 1st-LFSS. To avoid the need for solving the 2nd-LFSS, the indirect-effect term defined in Equation (27) will be expressed in terms of a 2nd-Level Adjoint Sensitivity System (2nd-LASS), which will be constructed by following the general principles introduced by Cacuci [1-3], comprising the following sequence of steps: (i) Define a Hilbert space, denoted as H^(2), having vector-valued elements of the form u^(2)(x) ≡ [u^(2)_1(x), …, u^(2)_4(x)]† ∈ H^(2), with components u^(2)_i(x) that are N_u-dimensional vectors.
In H^(2), define the inner-product, denoted as ⟨u^(2)(x), v^(2)(x)⟩^(2), of two functions u^(2)(x) ∈ H^(2) and v^(2)(x) ∈ H^(2) as follows:

⟨u^(2)(x), v^(2)(x)⟩^(2) ≡ ∑_{i=1}^{4} ⟨u^(2)_i(x), v^(2)_i(x)⟩_ϕ. (32)

Using the definition provided in Equation (32), construct the inner product of a vector ψ^(2)_i(x) ≡ [ψ^(2)_{1,i}(x), …, ψ^(2)_{4,i}(x)]† ∈ H^(2) with Equations (16) and (28) to obtain the following relation:

⟨ψ^(2)_i(x), A^(2)(α) δu^(2)(x)⟩^(2) = ⟨ψ^(2)_i(x), Q^(2)(ϕ; ψ; ψ^(1); α; δα)⟩^(2), (33)

where A^(2)(α) denotes the block-matrix operator acting on δu^(2)(x) ≡ [δϕ, δψ, δψ^(1)_1, δψ^(1)_2]† on the left-sides of Equations (16) and (28), and Q^(2) ≡ [Q^(1)_1, Q^(1)_2, Q^(2)_1, Q^(2)_2]† denotes the corresponding source-terms. (ii) Use the definition of the adjoint operator in the Hilbert space H^(2) to recast the left-side of Equation (33) as follows:

⟨ψ^(2)_i(x), A^(2)(α) δu^(2)(x)⟩^(2) = ⟨A^(2)+(α) ψ^(2)_i(x), δu^(2)(x)⟩^(2) + P^(2)[δϕ; δψ; δψ^(1); ψ^(2)_i; α], (34)

where the bilinear concomitant P^(2)[δϕ; δψ; δψ^(1); ψ^(2)_i; α] is defined on the phase-space boundary x ∈ ∂Ω_x. (iii) Identify the first term on the right-side of Equation (34) with the indirect-effect term defined in Equation (27) by requiring that the following system of equations be satisfied for i = 1, …, N_α:

A^(2)+(α) ψ^(2)_i(x) = [∂R^(1)_i/∂ϕ, ∂R^(1)_i/∂ψ, ∂R^(1)_i/∂ψ^(1)_1, ∂R^(1)_i/∂ψ^(1)_2]†, x ∈ Ω_x. (35)

(iv) The boundary conditions given in Equations (17) and (29) are implemented in Equation (34), thereby reducing by half the number of unknown boundary-values of the functions δϕ, δψ, δψ^(1)_1 and δψ^(1)_2. The boundary conditions for the 2nd-level adjoint functions ψ^(2)_{1,i}(x), …, ψ^(2)_{4,i}(x) are now chosen so as to eliminate the remaining unknown boundary-values of the functions δϕ, δψ, δψ^(1)_1 and δψ^(1)_2; they can be represented in operator form as follows:

B^(2)[ψ^(2)_{1,i}(x), …, ψ^(2)_{4,i}(x); α] = 0, x ∈ ∂Ω_x, i = 1, …, N_α. (36)

In most cases, the above choice of boundary conditions for the 2nd-level adjoint functions will cause the bilinear concomitant P^(2)[δϕ; δψ; δψ^(1); ψ^(2)_i; α] in Equation (34) to vanish. Even when it does not vanish, however, this bilinear concomitant will be reduced to a quantity, denoted here as P̂^(2)[ϕ; ψ; ψ^(1); ψ^(2)_i; α; δα], which will contain only known values of its arguments.
The system of equations comprising Equations (35) and (36) will be called the 2nd-Level Adjoint Sensitivity System (2nd-LASS) for the 2nd-level adjoint functions ψ^(2)_{1,i}(x), …, ψ^(2)_{4,i}(x), i = 1, …, N_α. These 2nd-level adjoint functions are obtained by solving the 2nd-LASS successively, by using two "forward" and two "adjoint" computations for each of the imprecisely known scalar model parameters.
(v) Use the 2nd-LASS defined by Equations (35) and (36) together with Equations (33) and (34) to obtain the following expression for the indirect-effect term defined in Equation (27), in terms of the 2nd-level adjoint functions ψ^(2)_{1,i}(x), …, ψ^(2)_{4,i}(x):

{δR^(1)_i}_ind = ⟨ψ^(2)_i, Q^(2)(ϕ; ψ; ψ^(1); α; δα)⟩^(2) − P̂^(2)[ϕ; ψ; ψ^(1); ψ^(2)_i; α; δα], i = 1, …, N_α, (37)

where Q^(2) ≡ [Q^(1)_1, Q^(1)_2, Q^(2)_1, Q^(2)_2]† denotes the source-terms of the 2nd-LFSS. As Equation (37) indicates, the indirect-effect term can be computed speedily by quadratures once the 2nd-level adjoint functions ψ^(2)_i(x) become available. (vi) Replace Equation (37) in Equation (26) to obtain the following expression for the total 2nd-order response sensitivity to model parameters:

δR^(1)_i(ϕ; ψ; ψ^(1); ψ^(2)_i; α; δα) ≡ ∑_{j=1}^{N_α} R^(2)_{ij}(ϕ; ψ; ψ^(1); ψ^(2)_i; α) δα_j, i = 1, …, N_α, (38)

where R^(2)_{ij}(ϕ; ψ; ψ^(1); ψ^(2)_i; α), i, j = 1, …, N_α, denotes the 2nd-order partial sensitivity of the response to the model parameters and is defined as follows:

R^(2)_{ij}(ϕ; ψ; ψ^(1); ψ^(2)_i; α) ≡ ∂²R(ϕ; ψ; α)/∂α_j ∂α_i, i, j = 1, …, N_α. (39)

Note that the 2nd-LASS is independent of parameter variations δα. Thus, the exact computation of all of the partial second-order sensitivities R^(2)_{ij}, i, j = 1, …, N_α, requires at most N_α large-scale (adjoint) computations using the 2nd-LASS, rather than O(N_α²) large-scale computations as would be required by forward methods. It is also important to note that by solving the 2nd-LASS N_α times, the "off-diagonal" 2nd-order mixed sensitivities R^(2)_{ij} = R^(2)_{ji} will be computed twice, in two different ways (using distinct 2nd-level adjoint functions), thereby providing an independent intrinsic (numerical) verification that the 1st- and 2nd-order sensitivities are computed accurately. In practice, it is useful to prioritize the computation of the 2nd-order sensitivities by using the rankings of the relative magnitudes of the 1st-order sensitivities as a "priority indicator": the larger the magnitude of the relative 1st-order sensitivity, the higher the priority for computing the corresponding 2nd-order sensitivities. Also, since vanishing 1st-order sensitivities may indicate critical points of the response in the phase-space of model parameters, it is also of interest to compute the 2nd-order sensitivities that correspond to vanishing 1st-order sensitivities. Thus, only the 2nd-order partial sensitivities of the response R[ϕ(x), ψ(x); α] which are deemed important will need to be computed.
Information provided by the 1st-order sensitivities might indicate which 2nd-order sensitivities could be neglected.
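The intrinsic symmetry-based verification mentioned above can be mimicked with a toy two-parameter response: the mixed second-order sensitivity ∂²R/∂α₁∂α₂ is computed in two different ways (by differentiating ∂R/∂α₁ with respect to α₂, and vice versa), and the two values must agree. The response below is hypothetical, and finite differences stand in for the adjoint computations.

```python
# Toy analogue of the "off-diagonal computed twice" consistency check:
# d2R/(da db) obtained via dR/da must equal the value obtained via dR/db.
def R(a, b):
    return a**2 * b + 3.0 * a * b**2   # hypothetical two-parameter response

h = 1.0e-4

def dR_da(a, b):
    return (R(a + h, b) - R(a - h, b)) / (2.0 * h)

def dR_db(a, b):
    return (R(a, b + h) - R(a, b - h)) / (2.0 * h)

a0, b0 = 1.0, 2.0
mixed_ab = (dR_da(a0, b0 + h) - dR_da(a0, b0 - h)) / (2.0 * h)  # via dR/da
mixed_ba = (dR_db(a0 + h, b0) - dR_db(a0 - h, b0)) / (2.0 * h)  # via dR/db
print(mixed_ab, mixed_ba)   # both approximate d2R/(da db) = 2*a0 + 6*b0 = 14
```

A significant disagreement between the two values would flag an error in one of the underlying first-order computations, which is exactly the role the duplicated mixed sensitivities play in the 2nd-ASAM.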

The Third-Level Adjoint System (3rd-LASS) for Computing Exactly and Efficiently the Third-Order Model Response Sensitivities to Parameters
The third-order sensitivities of the response R(ϕ; ψ; α) with respect to the model parameters α_i, i = 1, …, N_α, are obtained by determining the first-order G-differential of the 2nd-order sensitivities R^(2)_{ij}(ϕ; ψ; ψ^(1); ψ^(2)_i; α) computed in Section 3.2, which is given by the following expression:

δR^(2)_{ij} = {δR^(2)_{ij}}_dir + {δR^(2)_{ij}}_ind. (40)

The quantity {δR^(2)_{ij}}_ind depends on the variations δϕ, δψ, δψ^(1)_1, δψ^(1)_2 and δψ^(2)_{1,i}, …, δψ^(2)_{4,i} in the forward and adjoint functions, respectively, and is defined as follows:

{δR^(2)_{ij}}_ind ≡ ⟨∂R^(2)_{ij}/∂ϕ, δϕ⟩_ϕ + ⟨∂R^(2)_{ij}/∂ψ, δψ⟩_ϕ + ∑_{k=1}^{2} ⟨∂R^(2)_{ij}/∂ψ^(1)_k, δψ^(1)_k⟩_ϕ + ∑_{k=1}^{4} ⟨∂R^(2)_{ij}/∂ψ^(2)_{k,i}, δψ^(2)_{k,i}⟩_ϕ. (41)

The indirect-effect term defined in Equation (41) can be computed only after having computed the functions δψ^(2)_{1,i}, …, δψ^(2)_{4,i}, in addition to the functions δϕ, δψ, δψ^(1)_1 and δψ^(1)_2. Altogether, these functions are the solutions of Equations (16) and (28), augmented by the solutions of the system of equations obtained by G-differentiating the 2nd-LASS, which can be written in matrix form as follows:

A^(2)+(α) δψ^(2)_i(x) = Q^(3)_i(ϕ; ψ; ψ^(1); ψ^(2)_i; α; δϕ; δψ; δψ^(1); δα), x ∈ Ω_x, (42)

where A^(2)+(α) denotes the block-matrix operator of the 2nd-LASS defined in Equation (35), and the source-term Q^(3)_i collects the G-differentials of the terms appearing in Equation (35). The boundary conditions for the functions δϕ, δψ, δψ^(1)_1, δψ^(1)_2 and δψ^(2)_{1,i}, …, δψ^(2)_{4,i} are those provided in Equations (17) and (29), augmented by the boundary conditions obtained by G-differentiating Equation (36). The system comprising the 2nd-LFSS together with Equation (42) and these boundary conditions is called the 3rd-Level Forward Sensitivity System (3rd-LFSS). Since the source-terms of the 3rd-LFSS depend on the parameter variations δα_i, it follows that solving this system of equations is prohibitively expensive computationally. To avoid the need for solving the 3rd-LFSS, the indirect-effect term defined in Equation (41) will be expressed in terms of a 3rd-Level Adjoint Sensitivity System (3rd-LASS), which will be constructed by implementing the following sequence of steps: (i) Define a Hilbert space, denoted as H^(3), having vector-valued elements of the form u^(3)(x) ≡ [u^(3)_1(x), …, u^(3)_8(x)]†, with components u^(3)_i(x) that are N_u-dimensional vectors.
In H^(3), define the inner-product, denoted as ⟨u^(3)(x), v^(3)(x)⟩^(3), of two functions u^(3)(x) ∈ H^(3) and v^(3)(x) ∈ H^(3) as follows:

⟨u^(3)(x), v^(3)(x)⟩^(3) ≡ ∑_{i=1}^{8} ⟨u^(3)_i(x), v^(3)_i(x)⟩_ϕ. (63)

Using the definition provided in Equation (63), construct the inner product of a vector ψ^(3)_{ij}(x) ≡ [ψ^(3)_{1,ij}(x), …, ψ^(3)_{8,ij}(x)]† ∈ H^(3) with the equations comprising the 3rd-LFSS to obtain the relation denoted as Equation (64). (ii) Use the definition of the adjoint operator in the Hilbert space H^(3) to recast the left-side of Equation (64) as Equation (65), where the bilinear concomitant P^(3)[δϕ; δψ; δψ^(1); δψ^(2)_i; ψ^(3)_{1,ij}, …, ψ^(3)_{8,ij}; α] is defined on the phase-space boundary x ∈ ∂Ω_x. (iii) Identify the first term on the right-side of Equation (65) with the indirect-effect term defined in Equation (41) by requiring that the 3rd-level adjoint functions ψ^(3)_{1,ij}, …, ψ^(3)_{8,ij} satisfy the system of equations denoted as Equation (66), for i = 1, …, N_α; j = 1, …, i. (iv) The boundary conditions for the 3rd-level adjoint functions ψ^(3)_{1,ij}, …, ψ^(3)_{8,ij} are chosen so as to eliminate the remaining unknown boundary-values of the forward-sensitivity functions; they can be represented in operator form as follows:

B^(3)[ψ^(3)_{1,ij}(x), …, ψ^(3)_{8,ij}(x); α] = 0, x ∈ ∂Ω_x. (67)

In most cases, the above choice of boundary conditions for the 3rd-level adjoint functions will cause the bilinear concomitant P^(3) in Equation (65) to vanish. Even when it does not vanish, however, this bilinear concomitant will be reduced to a quantity, denoted here as P̂^(3)[ϕ; ψ; ψ^(1); ψ^(2)_i; ψ^(3)_{1,ij}, …, ψ^(3)_{8,ij}; α; δα], which will contain only known values of its arguments. The system of equations comprising Equations (66) and (67) will be called the 3rd-Level Adjoint Sensitivity System (3rd-LASS) for the 3rd-level adjoint functions ψ^(3)_{1,ij}, …, ψ^(3)_{8,ij}. (v) Use the 3rd-LASS together with Equations (64) and (65) to obtain the following expression for the indirect-effect term defined in Equation (41), in terms of the 3rd-level adjoint functions:

{δR^(2)_{ij}}_ind = ⟨ψ^(3)_{ij}, Q^(3)(ϕ; ψ; ψ^(1); ψ^(2)_i; α; δα)⟩^(3) − P̂^(3)[ϕ; ψ; ψ^(1); ψ^(2)_i; ψ^(3)_{1,ij}, …, ψ^(3)_{8,ij}; α; δα], i = 1, …, N_α; j = 1, …, i, (68)

where Q^(3) denotes the source-terms of the 3rd-LFSS.
(vi) Replace Equation (68) in Equation (40) to obtain the following expression for the total 3rd-order response sensitivity to model parameters:

δR^(2)_{ij}(ϕ; ψ; ψ^(1); ψ^(2)_i; ψ^(3)_{1,ij}, …, ψ^(3)_{8,ij}; α; δα) ≡ ∑_{k=1}^{N_α} R^(3)_{ijk}(ϕ; ψ; ψ^(1); ψ^(2)_i; ψ^(3)_{1,ij}, …, ψ^(3)_{8,ij}; α) δα_k, (69)

where the quantity R^(3)_{ijk} ≡ ∂³R/∂α_k ∂α_j ∂α_i denotes the 3rd-order partial sensitivity of the response to the model parameters, and is obtained by identifying the quantities that multiply the respective parameter variations δα_k. (vii) Note that the 3rd-LASS is independent of parameter variations δα. Thus, the exact computation of all of the partial third-order sensitivities R^(3)_{ijk}, i = 1, …, N_α; j = 1, …, i; k = 1, …, j, requires at most N_α(N_α + 1)/2 large-scale (adjoint) computations using the 3rd-LASS, rather than N_α(N_α + 1)(N_α + 2)/6 large-scale computations as would be required by forward methods. The practical computation of the 3rd-level adjoint functions ψ^(3)_{1,ij}, …, ψ^(3)_{8,ij} would be performed by using the same forward and adjoint solvers (i.e., computer codes) as used for solving the original forward and adjoint systems, namely Equations (1) and (6) subject to the corresponding boundary conditions, except that the right-sides of the respective solvers would have as "sources" the appropriate derivatives of R^(2)_{ij}, e.g., the terms ∂R^(2)_{ij}/∂ψ^(2)_{1,i} and ∂R^(2)_{ij}/∂ψ^(2)_{2,i}, respectively. Thus, solving the 3rd-LASS in order to determine the 3rd-level adjoint functions does not require any significant "code development," since the original forward and adjoint solvers (codes) do not need to be modified; only the right-sides (i.e., "sources") for these solvers/codes would need to be programmed accordingly.
Using the 3rd-LASS enables the computation of the 3rd-order sensitivities in the priority order set by the user, so that only the important 3rd-order partial sensitivities of the response R[ϕ(x), ψ(x); α] would be computed. Information provided by the first- and second-order sensitivities might indicate which 3rd-order sensitivities could be neglected.
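The operation counts quoted above are easy to tabulate. A minimal sketch, using the number of first-order sensitivities from the reactor physics benchmark of [4-6] as the largest test value:

```python
# Number of large-scale solves for all distinct 3rd-order sensitivities:
# at most N(N+1)/2 adjoint computations (3rd-LASS) versus
# N(N+1)(N+2)/6 forward computations (3rd-LFSS-type approaches).
def adjoint_solves(n):
    return n * (n + 1) // 2

def forward_solves(n):
    return n * (n + 1) * (n + 2) // 6

for n in (10, 100, 21976):   # 21,976 parameters as in the benchmark of [4-6]
    print(n, adjoint_solves(n), forward_solves(n))
```

The ratio between the two counts grows linearly with N, so the advantage of the adjoint route becomes overwhelming precisely for the large-scale systems of practical interest.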

Third-Order Expressions for the Cumulants of the Response Distribution in Parameter Space
The 3rd-ASAM presented in Section 3, above, provides the most efficient way for computing exactly the first-, second- and third-order sensitivities of a response that couples the forward and adjoint systems describing physical problems which are linear in the state-functions. The availability of these sensitivities enables the use of a third-order multivariate Taylor-series expansion of the response around the known nominal parameter values for quantifying the cumulants of the response distribution in the phase-space of model parameters. For a model's computed response, denoted as r^c_{i1}(α), where the superscript "c" denotes "computed" and the subscript i1 = 1, …, N_r denotes one of a total of N_r responses of interest, the third-order Taylor-series of r^c_{i1}(α) around the model parameters' mean (expected) values α^0 ≡ (α^0_1, …, α^0_{N_α}) is:

r^c_{i1}(α) = r^c_{i1}(α^0) + ∑_{i=1}^{N_α} [∂r^c_{i1}/∂α_i] δα_i + (1/2) ∑_{i=1}^{N_α} ∑_{j=1}^{N_α} [∂²r^c_{i1}/∂α_i ∂α_j] δα_i δα_j + (1/6) ∑_{i=1}^{N_α} ∑_{j=1}^{N_α} ∑_{k=1}^{N_α} [∂³r^c_{i1}/∂α_i ∂α_j ∂α_k] δα_i δα_j δα_k, (79)

where all of the derivatives are evaluated at the nominal parameter values α^0. In practice, the values of the parameters α_n are determined experimentally. Therefore, these parameters can be considered to be variates that behave stochastically, obeying a multivariate probability distribution function, denoted as p_α(α), which is seldom known in practice, particularly for large-scale systems involving many parameters. Considering that the multivariate distribution p_α(α) is formally defined on a domain D_α, the various moments (e.g., mean values, covariances and variances, etc.) of p_α(α) can be defined in a standard manner by using the following notation:

⟨u(α)⟩_α ≡ ∫_{D_α} u(α) p_α(α) dα, (80)

where u(α) is a continuous function of the parameters α.
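For a single Gaussian parameter, the moment-propagation formulas that follow from the third-order Taylor expansion above can be checked against a deterministic quadrature. The sketch below uses a hypothetical cubic response (so the third-order expansion is exact) and verifies the propagated mean against a 3-point Gauss-Hermite rule, which integrates polynomials of degree up to five exactly.

```python
from math import sqrt

# Single-parameter sketch: Taylor-propagated mean vs. Gauss-Hermite quadrature.
r0, s1, s2, s3 = 5.0, 2.0, 4.0, 1.0   # hypothetical r(a0) and its derivatives
sigma = 0.5                           # standard deviation of the parameter

def r(da):
    # cubic response written as its (exact) 3rd-order Taylor polynomial
    return r0 + s1 * da + 0.5 * s2 * da**2 + (s3 / 6.0) * da**3

# Propagated mean: E[r] = r(a0) + (1/2) d2r/da2 * var(a); the odd central
# moments of a Gaussian vanish, so s1 and s3 do not contribute to the mean.
mean_taylor = r0 + 0.5 * s2 * sigma**2

# 3-point Gauss-Hermite rule for N(0, sigma^2): nodes 0 and +/- sqrt(3)*sigma,
# weights 2/3 and 1/6; exact for polynomial integrands up to degree 5.
nodes   = [-sqrt(3.0) * sigma, 0.0, sqrt(3.0) * sigma]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
mean_quad = sum(w * r(x) for w, x in zip(weights, nodes))
print(mean_taylor, mean_quad)   # both equal 5.5 for these values
```

For multivariate, non-Gaussian parameter distributions the quadrature is no longer practical, which is why the closed-form cumulant expressions of this section are needed.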
Using the notation defined in Equation (80), the expected (or mean) value of a model parameter α_i, denoted as α^0_i, is defined in Equation (81); the covariance, cov(α_i, α_j), of two parameters α_i and α_j is defined in Equation (82); the variance, var(α_i), of a parameter α_i is defined in Equation (83); and the standard deviation, σ_i, of α_i is defined as σ_i ≜ [var(α_i)]^{1/2}. The correlation, ρ_{ij}, between two parameters α_i and α_j is defined in Equation (84). The 3rd-order moment, μ_3^{ijk}, of the multivariate parameter distribution function p_α(α) and the 3rd-order parameter correlation, t_{ijk}, are defined in Equation (85). The 4th-order moment, μ_4^{ijkl}, of the multivariate parameter distribution function p_α(α) and the 4th-order parameter correlation, q_{ijkl}, are defined in Equation (86).
Using Equation (79) together with Equations (81) through (85) yields the expression provided in Equation (87) for the expected (mean) value, denoted as E(r^c_{i1}), of a computed response. Using Equation (79) together with Equations (81) through (86) yields the expression provided in Equation (88) for the covariance, denoted as cov(r^c_{i1}, r^c_{i2}), of two responses r^c_{i1}(α) and r^c_{i2}(α), for i1, i2 = 1, ..., N_r. In particular, the variance of a response r^c_{i1}(α) is obtained by setting i1 = i2 in Equation (88). The covariance of a response r^c_{i1}(α) and a parameter α_l, for i1 = 1, ..., N_r and l = 1, ..., N_α, is denoted as cov(r^c_{i1}, α_l) and is given in Equation (89). The third-order cumulant of three responses, r^c_{i1}, r^c_{i2}, r^c_{i3}, for i1, i2, i3 = 1, ..., N_r, is obtained similarly by using Equation (79) together with Equations (81) through (86), and has the expression provided in Equation (90). In particular, the skewness of a single response is customarily denoted as γ_1(R), and is defined in Equation (91). For normally-distributed, uncorrelated parameters and a single response, the expression in Equation (90) simplifies considerably. As is well known, the skewness provides a quantitative measure of the asymmetries in the respective distribution.
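The moment formulas above can be sanity-checked for the simplest possible case: a single, normally distributed parameter and a quadratic (2nd-order Taylor) response model. The following Python sketch computes the exact mean, variance, and third central moment of such a response using the Gaussian moment identities; the sensitivity values S1, S2 and the standard deviation σ are illustrative, not taken from the paper:

```python
from math import prod

def gauss_moment(k: int, sigma: float) -> float:
    # E[x^k] for x ~ N(0, sigma^2): zero for odd k, sigma^k * (k - 1)!! for even k.
    if k % 2 == 1:
        return 0.0
    return sigma**k * prod(range(k - 1, 0, -2)) if k > 0 else 1.0

def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists (lowest degree first).
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def expect(p, sigma):
    # Exact expectation of a polynomial in the zero-mean Gaussian variate.
    return sum(c * gauss_moment(k, sigma) for k, c in enumerate(p))

# Quadratic response model r(da) = r0 + S1*da + 0.5*S2*da^2 (illustrative values)
r0, S1, S2, sigma = 10.0, 2.0, 0.6, 0.5
r = [r0, S1, 0.5 * S2]               # coefficients in powers of da

mean = expect(r, sigma)              # E[r] = r0 + 0.5*S2*sigma^2
centered = [r[0] - mean, r[1], r[2]]
var = expect(poly_mul(centered, centered), sigma)
mu3 = expect(poly_mul(poly_mul(centered, centered), centered), sigma)

# The 2nd-order sensitivity shifts the mean away from the computed value r0,
# and drives the third moment (hence the skewness), as stated in the text:
assert abs(mean - (r0 + 0.5 * S2 * sigma**2)) < 1e-12
assert abs(var - (S1**2 * sigma**2 + 0.5 * S2**2 * sigma**4)) < 1e-12
assert abs(mu3 - (3 * S1**2 * S2 * sigma**4 + S2**3 * sigma**6)) < 1e-12
```

Note that setting S2 = 0 makes mu3 vanish, which illustrates the statement below that neglecting second-order sensitivities of a response to normally distributed parameters nullifies the response skewness.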
The first-order sensitivities contribute the leading terms to the second-, third-, and fourth-order moments of the response distribution, thus providing the leading contributions to the response variances/covariances, skewness, and kurtosis. Obtaining the exact and complete set of first-order sensitivities of responses to model parameters is therefore of paramount importance for any analysis of a computational model.
The second-order sensitivities contribute the leading correction terms to the response's expected value (causing it to differ from the response's computed value). The second-order sensitivities also contribute to the response variances and covariances. If the parameters follow a normal (Gaussian) multivariate distribution, the second-order sensitivities contribute the leading terms to the response's third-order moment. Thus, neglecting the second-order response sensitivities to normally distributed parameters would nullify the third-order response correlations and hence would nullify the skewness of a response.

2nd/3rd-Order Best-Estimated Results with Reduced Uncertainties Predictive Modeling (2nd/3rd-BERRU-PM) in the Joint Phase-Space of Responses and Parameters
Cacuci [7] has summarized the scope of "BERRU-PM" as follows: "BERRU-PM commences by identifying and characterizing the uncertainties involved in every step in the sequence of the numerical simulation processes that ultimately lead to a prediction. This includes characterizing: (a) errors and uncertainties in the data used in the simulation (e.g., input data, model parameters, initial conditions, boundary conditions, sources and forcing functions), (b) numerical discretization errors, and (c) uncertainties in (e.g., lack of knowledge of) the processes being modeled. Under ideal circumstances, the result of this process is a probabilistic description of possible future outcomes based on all recognized errors and uncertainties." Consider a vector-valued variate x ≜ (x_1, ..., x_N)†, the components of which obey an unknown multivariate distribution p(x). Of course, the probability distribution function p(x) would need to be properly normalized, i.e., it must satisfy the constraint
∫_D p(x) dx = 1. (92)
Consider further that the moments of several known functions F_k(x) over the unknown distribution p(x), denoted as ⟨F_k⟩ and defined as
⟨F_k⟩ ≜ ∫_D F_k(x) p(x) dx, k = 1, ..., K, (93)
are also known. The problem of reconstructing a function from a finite number of its moments has been investigated for many decades in the mathematical and physical sciences. For the purposes of predictive modeling, the main goal is to determine a probability density function p(x) which is consistent with the knowledge expressed by Equation (93) and introduces no unwarranted information. Such a probability density function can be constructed using the method of maximum entropy ("MaxEnt"), which generates the most conservative estimate of a probability distribution with the given information and the most non-committal one with regard to missing information [11].
According to the MaxEnt principle, the unknown probability density function p(x) must satisfy the constraints expressed by Equation (93) while having its Boltzmann-Shannon-Gibbs (BSG) entropy (also referred to as the "information entropy") as large as possible. For a continuous distribution having a probability density function p(x), the expression of its information/BSG entropy is
S[p] ≜ −∫_D p(x) ln[p(x)/m(x)] dx, (94)
where m(x) is a prior density function that ensures form invariance under a change of variable. Intuitively, in a bounded domain, the most conservative distribution, i.e., the distribution of maximum entropy, is the one that assigns equal probability to all accessible states. Hence, the method of maximum entropy can be thought of as choosing the most "uniform" distribution p(x) that satisfies the given moment constraints expressed by Equation (93) while introducing no unwarranted information. Any probability density function satisfying the constraints which has smaller entropy will contain more information (less uncertainty), and would thus predict something stronger than warranted by our knowledge and/or assumptions. The probability density function with maximum entropy, satisfying the imposed constraints, is the one which should be least surprising in terms of the predictions it makes. Selecting the unique p(x) which maximizes the information entropy defined in Equation (94) while simultaneously satisfying the known constraints given in Equations (93) and (92) is a variational problem that can be solved by the well-known method of Lagrange multipliers, λ_k, k = 0, 1, ..., K, by constructing the following Lagrangian functional:
L[p(x)] ≜ −∫_D p(x) ln[p(x)/m(x)] dx − λ_0 [∫_D p(x) dx − 1] − Σ_{k=1}^{K} λ_k [∫_D F_k(x) p(x) dx − ⟨F_k⟩]. (95)
The critical point of L[p(x)] is obtained by solving the equation that results from setting the first Gateaux-differential of L[p(x)] to zero, namely
−ln[p(x)/m(x)] − 1 − λ_0 − Σ_{k=1}^{K} λ_k F_k(x) = 0. (96)
It follows from Equation (96) that
p(x) = m(x) exp(−1 − λ_0) exp[−Σ_{k=1}^{K} λ_k F_k(x)]. (97)
Replacing the result obtained in Equation (97) into Equation (92) and eliminating the Lagrange multiplier λ_0 from the resulting expression leads to the following expression for the probability density function p(x):
p(x) = (1/Z) m(x) exp[−Σ_{k=1}^{K} λ_k F_k(x)], (98)
where the normalization constant Z in Equation (98) is defined as follows:
Z ≜ ∫_D m(x) exp[−Σ_{k=1}^{K} λ_k F_k(x)] dx. (99)
In statistical mechanics, the normalization constant Z is called the partition function (or sum over states), and carries all of the information available about the possible states of the system.
The expected integral data are obtained by differentiating ln Z with respect to the Lagrange multipliers λ_k, which yields the relationships
⟨F_k⟩ = −∂ ln Z/∂λ_k, k = 1, ..., K. (100)
When the integral data ⟨F_k⟩ are not yet known, the uniform prior m(x) = 1 is the most appropriate to consider. In this case, the maximum entropy (MaxEnt) algorithm yields the uniform distribution, as would be required by the principle of insufficient reason; thus, the MaxEnt principle generalizes the principle of insufficient reason. The MaxEnt principle can be applied to both discrete and continuous distributions. The MaxEnt method has been shown [12] to be equivalent to constrained variational inference, thus establishing the link between MaxEnt and Bayesian approximations. The MaxEnt method has been used in many fields; enumerating these fields is beyond the scope of this work, which is limited to nuclear engineering applications. The pioneering application of the MaxEnt method to time-independent nuclear reactor physics problems was initiated in the 1970s [13][14][15][16][17].
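The partition-function relationship between Z and the constrained moments can be illustrated with the simplest textbook MaxEnt case: a single constraint F_1(x) = x on the half-line with uniform prior m(x) = 1, whose MaxEnt distribution is the exponential density. The following Python sketch (all numerical choices are illustrative) verifies both Z = 1/λ and the derivative relation ⟨F_1⟩ = −d ln Z/dλ numerically:

```python
from math import exp, log

def trapz(f, a, b, n=20000):
    # Simple trapezoidal quadrature (sufficient accuracy for this smooth integrand).
    h = (b - a) / n
    return ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))) * h

# MaxEnt distribution on [0, inf) with m(x) = 1 and one constraint F1(x) = x:
# p(x) = exp(-lam*x) / Z(lam), with Z(lam) = integral of exp(-lam*x) = 1/lam.
lam = 2.0
Z = trapz(lambda x: exp(-lam * x), 0.0, 40.0)          # tail beyond 40 is negligible
mean = trapz(lambda x: x * exp(-lam * x), 0.0, 40.0) / Z

# Finite-difference check of the derivative relation <F1> = -d ln Z / d lam
eps = 1e-5
Zp = trapz(lambda x: exp(-(lam + eps) * x), 0.0, 40.0)
Zm = trapz(lambda x: exp(-(lam - eps) * x), 0.0, 40.0)
dlnZ = (log(Zp) - log(Zm)) / (2 * eps)

assert abs(Z - 1 / lam) < 1e-5          # partition function
assert abs(mean - 1 / lam) < 1e-5       # constrained moment <x>
assert abs(-dlnZ - mean) < 1e-3         # moment recovered from ln Z
```

The same mechanism, with quadratic constraint functions, produces the Gaussian MaxEnt form used later in this section.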
The first application of the MaxEnt method to a time-dependent nuclear energy system was by Barhen et al. [18]. This work was subsequently extended by Cacuci and Ionescu-Bujor [19], who presented analytical formulas for the predicted mean values and covariance matrices of both the predicted model parameters and responses, thereby generalizing the previous results underlying data assimilation procedures for the geophysical sciences and linear Bayesian models. Other applications to nuclear energy systems are presented in [20][21][22][23][24][25][26][27].
To the author's knowledge, none of the analytical results published thus far comprehensively include all of the second- and third-order response sensitivities to all of the model's parameters. The end-results presented in this section extend the results published thus far in the open literature by presenting analytical formulas for both predicted model responses and parameters that include all of the second- and third-order sensitivities of the computed model responses to the model's parameters.

2nd/3rd-BERRU-PM: A Priori Information
This Subsection will present the mathematical form of the information that will be ultimately used for predicting the optimal, best-estimate mean values for both the model responses and model parameters, with reduced predicted uncertainties, in the combined parameter-response phase space.

Expected Values and Covariances of Measured Responses
Consider that N_r quantities of interest, henceforth called "system responses" and denoted as r^m_i, i = 1, ..., N_r (where the superscript "m" denotes "measured"), have been experimentally measured, yielding their expected values as well as their corresponding covariances (i.e., standard deviations and correlations). For the mathematical derivations to follow, it is convenient to consider the responses r^m_i to constitute the components of the N_r-dimensional column vector defined as follows:
r^m ≜ (r^m_1, ..., r^m_{Nr})†. (101)
Since the responses cannot be measured exactly, they are usually considered to be variates that follow an unknown multivariate distribution function of the observations, denoted as p_r(r), which is formally defined on a domain D_r. Methods for finding estimates of a measured quantity, and indicators of the quality of those estimates, depend on the assumed form of the unknown distribution function p_r(r) of the observations. The moments of the distribution of measured responses can be conveniently denoted by introducing the following notation for the expectation (or mean value) of a function w(r^m):
⟨w(r^m)⟩_r ≜ ∫_{Dr} w(r) p_r(r) dr. (102)
When the distribution is discrete, the integral in Equation (102) denotes a sum over the respective discrete probabilities. Using the notation introduced in Equation (102), the expectation (or mean value) of the experimentally measured responses r^m_i is formally defined as follows:
E(r^m_i) ≜ ⟨r^m_i⟩_r, i = 1, ..., N_r. (103)
The expected values of the measured responses are considered to constitute the components of the vector defined as:
E(r^m) ≜ [E(r^m_1), ..., E(r^m_{Nr})]†. (104)
The covariance matrix of measured responses is denoted as C_m and is defined as:
C_m ≜ ⟨[r^m − E(r^m)][r^m − E(r^m)]†⟩_r. (105)
For subsequent computations, it is convenient to consider that the expected values α^0_i of the components α_i of the N_α-dimensional vector of model parameters α ≜ (α_1, ..., α_{Nα})† are the components of the following vector of mean (expected) values:
α^0 ≜ (α^0_1, ..., α^0_{Nα})†. (106)
The variances and covariances defined in Equations (82) and (83) are considered to constitute the elements of a symmetric, positive-definite parameter covariance matrix of dimension N_α × N_α, denoted as C_α and defined as follows:
C_α ≜ ⟨(α − α^0)(α − α^0)†⟩_α. (107)
Consider that the values of the N_r experimentally measured responses r^m_i can be computed using a multi-physics model that comprises N_α model parameters α_n, n = 1, ..., N_α, which are related to the model's independent and dependent variables through the model's underlying equations, correlations, tables, etc. Of course, the computed response values will not coincide with the measured ones because, just like the experimentally measured responses, the model's parameters and the numerical solution of the underlying equations, and consequently the computed response values, are also subject to uncertainties. The computed responses r^c_k(α), k = 1, ..., N_r, are considered to be the elements of an N_r-dimensional vector defined as follows:
r^c(α) ≜ [r^c_1(α), ..., r^c_{Nr}(α)]†. (108)
The expectation values E[r^c_k(α)] given by Equation (87) are considered to be the components of the following vector of "expected values of the computed responses":
E(r^c) ≜ {E[r^c_1(α)], ..., E[r^c_{Nr}(α)]}†. (109)
The response covariances defined in Equation (88) are considered to be the components of an (N_r × N_r)-dimensional matrix denoted as C_r and defined as follows:
C_r ≜ [cov(r^c_k, r^c_l)], k, l = 1, ..., N_r. (110)
The covariances between the computed responses and the model parameters defined in Equation (89) are considered to be the components of an (N_r × N_α)-dimensional matrix denoted as C_rα and defined as follows:
C_rα ≜ [cov(r^c_k, α_n)], k = 1, ..., N_r; n = 1, ..., N_α. (111)
The joint covariance matrix, denoted as C_M, of the model parameters and model-computed responses comprises the blocks C_α, C_αr ≜ [C_rα]†, C_rα, and C_r. (112)

2nd/3rd-BERRU-PM: Analytical Expressions for Best-Estimate Results with Reduced Uncertainties for Responses and Parameters in the Joint Phase-Space of Responses and Parameters
Consider the joint probability function p(α, r) of the multivariates α and r^m, which is defined on the domain D ≜ D_α × D_r and is properly normalized such that
∫_D p(α, r) dα dr = 1. (113)
The exact form of p(α, r) is unknown, of course. Since the (multi)variates α and r^m are statistically independent of each other, it follows that p(α, r) = p_α(α) p_r(r). Therefore, the expected value of a function w(r^m) satisfies the relation ⟨w(r^m)⟩ = ⟨w(r^m)⟩_r (114), while the expected value of a function u(α) satisfies the relation ⟨u(α)⟩ = ⟨u(α)⟩_α (115). Therefore, the a priori information about the model parameters and the computed and measured responses can be conveniently summarized by considering that the physical system under consideration is described mathematically by a multivariate vector
x ≜ (α, r)†, (116)
obeying an unknown joint multivariate distribution function p(α, r) = p_α(α) p_r(r) (117), but having a known vector of expected values, denoted as x^0 (118), and a known covariance matrix, denoted as C (119). Applying the MaxEnt principle as described in Appendix A to the information provided in Equations (116) through (119) indicates that the MaxEnt form, denoted as p_2(x|x^0, C), of the unknown distribution p(α, r) has the following multivariate Gaussian form:
p_2(x|x^0, C) = [(2π)^{Nα+Nr} det(C)]^{−1/2} exp[−(1/2) Q(x)], Q(x) ≜ (x − x^0)† C^{−1} (x − x^0). (120)
The MaxEnt-Gaussian form shown in Equation (120) can also be written in the equivalent form shown in Equation (121), which highlights the "Bayesian" construction underlying the expression of the a posteriori joint maximum-entropy probability distribution function p_2(x|x^0, C) of the physical system's responses and parameters. In order for the measured and computed responses to represent the same physical quantity, it is necessary to impose the condition
r^c = r^m ≜ r, (122)
where the vector r represents both the computed and measured responses.
Determining the moments of p_2(x|x^0, C) for subsequent predictions requires evaluating integrals of the following form:
E[g(α, r)] = ∫_D g(x) exp[−h(x)] dx / ∫_D exp[−h(x)] dx. (123)
Expressions such as that shown in Equation (123) can be evaluated to a high (a priori controllable) degree of accuracy by using the saddle-point method (also called the Laplace approximation or steepest-descent method), which relies on evaluating the respective integrals at the so-called "saddle point(s)." For the integral in the denominator of Equation (123), the saddle point, denoted as x_D ≜ (α_D, r_D), is the point at which the gradient of the function h(x) vanishes, i.e., ∇_x h(x_D) = 0, so that h(x) can be expanded in the Taylor series
h(x) = h(x_D) + SOT(x − x_D), (124)
where the quantity "SOT" denotes terms of second- and higher-order in the components of (x − x_D). When the function g(x) varies slowly, it is simply evaluated at the respective saddle point, and the resulting expression of E[g(α, r)] in Equation (123) becomes
E[g(α, r)] ≅ g(α_D, r_D), (125)
where the saddle point (α_D, r_D) is defined as the point in phase-space at which the gradients of h(α, r) vanish, i.e.,
∇_α h(α_D, r_D) = 0, ∇_r h(α_D, r_D) = 0. (126)
When the function h(x) has a Taylor series containing powers higher than second-order in x, the gradient ∇_x h(x) may vanish at multiple saddle points, in which case the contributions from all of these saddle points would need to be taken into account when evaluating the integrals in Equation (123).
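The quality of the saddle-point evaluation can be illustrated in one dimension; the following Python sketch uses an illustrative quadratic exponent h(x) and a slowly varying weight g(x), and compares the ratio of integrals in the form of Equation (123) against the value of g at the point where the gradient of h vanishes:

```python
from math import exp, cos

def trapz(f, a, b, n=40000):
    # Trapezoidal quadrature; the integrands decay rapidly outside [a, b].
    h = (b - a) / n
    return ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))) * h

# h(x) = (x - 1)^2 / 2 has a single saddle point (vanishing gradient) at x_D = 1.
h = lambda x: 0.5 * (x - 1.0) ** 2
g = lambda x: cos(0.1 * x)           # slowly varying weight function

num = trapz(lambda x: g(x) * exp(-h(x)), -10.0, 12.0)
den = trapz(lambda x: exp(-h(x)), -10.0, 12.0)
exact = num / den                    # ratio-of-integrals form of Equation (123)
laplace = g(1.0)                     # saddle-point (Laplace) estimate: g evaluated at x_D

assert abs(exact - laplace) < 1e-2   # close, because g varies slowly near x_D
```

When g varies rapidly, or when h has several stationary points, the single-point estimate degrades, which is why the text notes that multiple saddle points must then be accounted for.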

Predicted Best-Estimate Expected Values for the Responses and Parameters in the Joint Phase-Space of Responses and Parameters
The MaxEnt Gaussian p_2(x|x^0, C) has a quadratic exponent and hence possesses a single saddle point, which is determined by requiring the first variation δQ(α_D, r_D; δα, δr) of the exponential term in Equation (121) to vanish at the saddle point x_D ≜ (α_D, r_D), namely
δQ(α_D, r_D; δα, δr) = 0, for all (δα, δr). (127)
Imposing the requirement indicated in Equation (127) while taking Equation (122) into account yields a system of equations whose solution provides the best-estimate predicted mean values for the responses and parameters, of the form
r_be = E(r^m) + C_m (C_m + C_r)^{−1} [E(r^c) − E(r^m)], (135)
α_be = α^0 − C_αr (C_m + C_r)^{−1} [E(r^c) − E(r^m)]. (136)
Since the matrices C_r and C_αr contain 2nd-order and 3rd-order sensitivities, the formulas presented in Equations (135) and (136) generalize all of the previous formulas of this type found in data assimilation/data adjustment procedures published to date (which contain at most first-order sensitivities).
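The structure of the best-estimate predictions and their covariance reductions, which are derived in the remainder of this section, can be illustrated with a deliberately tiny numerical sketch. All numbers below are illustrative, and the update expressions are assumed to take the standard data-adjustment form in which discrepancies between expected computed and measured responses are weighted by (C_m + C_r)^{−1}:

```python
# Toy setting: N_alpha = 2 parameters, N_r = 1 response, so (C_m + C_r) is 1x1.
alpha0 = [3.0, 5.0]                     # prior parameter means (illustrative)
C_alpha = [[0.04, 0.0], [0.0, 0.09]]    # prior parameter covariance matrix
C_ar = [0.02, -0.03]                    # cov(alpha_i, r^c): one column, since N_r = 1
C_r = 0.05                              # variance of the computed response
C_m = 0.02                              # variance of the measured response
E_rc, E_rm = 9.3, 9.0                   # expected computed and measured response values

d = E_rc - E_rm                         # response discrepancy (here a scalar)
w = 1.0 / (C_m + C_r)                   # the only "matrix inversion" needed: 1x1

r_be = E_rm + C_m * w * d               # best-estimate response (assumed standard form)
alpha_be = [alpha0[i] - C_ar[i] * w * d for i in range(2)]
var_r_be = C_m * (1.0 - w * C_m)        # Equation (137)-type response-variance reduction
var_a_be = [C_alpha[i][i] - C_ar[i] * w * C_ar[i] for i in range(2)]

assert var_r_be < C_m                                   # response variance is reduced
assert all(var_a_be[i] < C_alpha[i][i] for i in range(2))  # parameter variances reduced
assert E_rm < r_be < E_rc               # best estimate lies between measurement and computation
```

The sketch also makes the computational point discussed below concrete: only the N_r × N_r matrix (C_m + C_r) is ever inverted, here a trivial scalar, even though the parameter space is larger.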

Predicted Best-Estimate Covariances for the Responses and Parameters in the Joint Phase-Space of Responses and Parameters
The second-order moments of the posterior distribution p_2(x|x^0, C) comprise the covariances between the best-estimate responses, which are denoted as C_be_r, the covariances between the best-estimate parameters, which are denoted as C_be_α, and the covariances between the best-estimate parameters and responses, which are denoted as C_be_αr. The expression of the "best-estimate" posterior response covariance matrix C_be_r for the best-estimate responses r_be is derived by using the results given in Equations (133) and (135), to obtain
C_be_r ≜ ∫_D (r − r_be)(r − r_be)† p(α, r) dα dr = C_m [I − (C_m + C_r)^{−1} C_m]. (137)
As indicated in Equation (137), the initial covariance matrix C_m is multiplied by the matrix I − (C_m + C_r)^{−1} C_m, which means that the variances on the diagonal of the best-estimate matrix C_be_r will be smaller than the experimentally measured variances contained in C_m. Hence, the addition of new experimental information has reduced the predicted best-estimate response variances in C_be_r by comparison to the measured variances contained a priori in C_m. Since the components of the matrix C_r contain 2nd-order and 3rd-order sensitivities, the formula presented in Equation (137) generalizes all of the previous formulas of this type found in data assimilation/data adjustment procedures published to date (which contain at most first-order sensitivities).
The expression of the "best-estimate" posterior parameter covariance matrix C_be_α for the best-estimate parameters α_be is derived by using the result given in Equation (136), to obtain
C_be_α ≜ ∫_D (α − α_be)(α − α_be)† p(α, r) dα dr = C_α − C_αr (C_m + C_r)^{−1} C_rα. (138)
Both matrices C_α and C_αr (C_m + C_r)^{−1} C_rα are symmetric and non-negative definite. Therefore, the subtraction indicated in Equation (138) implies that the components of the main diagonal of C_be_α must have values smaller than (or at most equal to) the corresponding elements of the main diagonal of C_α. In this sense, the introduction of new computational and experimental information has reduced the best-estimate parameter variances on the diagonal of C_be_α. Since the components of the matrices C_α, C_αr, and C_r contain 2nd-order and 3rd-order sensitivities, the formula presented in Equation (138) generalizes all of the previous formulas of this type found in data assimilation/data adjustment procedures published to date (which contain at most first-order sensitivities).
The expression of the "best-estimate" posterior parameter-response covariance matrix C_be_αr between the best-estimate parameters α_be and the best-estimate responses r_be is derived by using the results given in Equations (135) and (136), to obtain
C_be_αr ≜ ∫_D (α − α_be)(r − r_be)† p(α, r) dα dr = C_αr (C_m + C_r)^{−1} C_m. (139)
As before, since the components of the matrices C_αr and C_r contain 2nd-order and 3rd-order sensitivities, the formula presented in Equation (139) generalizes all of the previous formulas of this type found in data assimilation/data adjustment procedures published to date (which contain at most first-order sensitivities).
The expression of the best-estimate covariance matrix C_be_rα is derived by performing a sequence of operations similar to that shown in Equation (139), to obtain
C_be_rα ≜ ∫_D (r − r_be)(α − α_be)† p(α, r) dα dr = C_m (C_m + C_r)^{−1} C_rα = [C_be_αr]†. (140)
It is important to note from the results shown in Equations (135) through (140) that the computation of the best-estimate parameter and response values, together with their corresponding best-estimate covariance matrices, only requires the computation of (C_m + C_r)^{−1}, which entails the inversion of a matrix of size N_r × N_r. This is computationally very advantageous, since N_r ≪ N_α (i.e., the number of responses is much smaller than the number of model parameters) in the overwhelming majority of practical situations.

Data Consistency Indicator
As will be shown in the following, the minimum value, Q_min, of the functional Q(α_be, r_be) appearing in the exponential of Equation (121) provides a "consistency indicator" which quantifies the mutual and joint consistency of the information available for model calibration. The minimum value Q_min of the functional Q(α_be, r_be) has the expression given in Equation (141). For subsequent matrix algebra, it is convenient to use the matrix defined in Equation (142). Using Equations (142) and (132) in Equation (141) yields the following result:
Q_min = d† (C_m + C_r)^{−1} d, where d ≜ E(r^c) − E(r^m). (143)
As the expression obtained in Equation (143) indicates, the quantity Q_min represents the square of the length of the vector d ≜ E(r^c) − E(r^m), measuring (in the corresponding metric) the deviations between the experimental and nominally computed responses. The quantity Q_min can be evaluated directly from the given data (i.e., the given parameters and responses, together with their original uncertainties) after having inverted the covariance matrix (C_m + C_r). It is also important to note that Q_min is independent of calibrating (or adjusting) the original data. As the dimension of d indicates, the number of degrees of freedom characteristic of the calibration under consideration is equal to the number N_r of experimental responses. In the extreme case of absence of experimental responses, no actual calibration takes place; an actual calibration (adjustment) occurs only when at least one experimental response is included.
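The consistency indicator can be evaluated with a few lines of Python; the covariance matrices and discrepancy vectors below are illustrative, and the comparison of Q_min against the N_r degrees of freedom follows the chi-square-like interpretation described above:

```python
def inv2(m):
    # Inverse of a 2x2 matrix (sufficient here, since N_r = 2).
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def q_min(d_vec, C_m, C_r):
    # Q_min = d^T (C_m + C_r)^{-1} d, with d = E(r^c) - E(r^m).
    S = [[C_m[i][j] + C_r[i][j] for j in range(2)] for i in range(2)]
    Sinv = inv2(S)
    return sum(d_vec[i] * Sinv[i][j] * d_vec[j] for i in range(2) for j in range(2))

C_m = [[0.04, 0.0], [0.0, 0.01]]      # measured-response covariance (illustrative)
C_r = [[0.02, 0.005], [0.005, 0.03]]  # computed-response covariance (illustrative)

q_consistent = q_min([0.05, -0.02], C_m, C_r)   # small discrepancies
q_discrepant = q_min([1.0, -0.8], C_m, C_r)     # large discrepancies

# With N_r = 2 degrees of freedom, Q_min of order N_r indicates mutually consistent
# information, while Q_min >> N_r flags discrepant data that should be examined.
assert q_consistent < 2.0 < q_discrepant
```

Because Q_min is computed before any adjustment takes place, it can serve as a screening step that identifies discrepant experiments prior to running the full calibration.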

Conclusions
This work has presented the Third-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM), which enables the efficient computation of the exact expressions of the 3rd-order functional derivatives ("sensitivities") of a general system response that depends on both the forward and adjoint state functions, with respect to all of the parameters underlying the respective forward and adjoint systems. Such responses are often encountered when representing mathematically detector responses and reaction rates in reactor physics problems. The 3rd-ASAM extends the 2nd-ASAM in the quest to overcome the "curse of dimensionality" in sensitivity analysis, uncertainty quantification and predictive modeling.
Very importantly, the computation of the 2nd-level adjoint functions ψ^(2)_{1,j}(x), ψ^(2)_{2,j}(x), ψ^(2)_{3,j}(x), ψ^(2)_{4,j}(x), and of the 3rd-level adjoint functions ψ^(3)_{1,ij}, ..., ψ^(3)_{8,ij}, is performed by using the same forward and adjoint solvers (i.e., computer codes) as used for solving the original forward and adjoint systems, namely Equations (1) and (6) subject to the corresponding boundary conditions. Thus, solving the 2nd-LASS and the 3rd-LASS does not require any significant "code development," since the original forward and adjoint solvers (codes) do not need to be modified; only the right-sides (i.e., "sources") for these solvers/codes need to be programmed accordingly. Of course, if the response depends only on the original forward or original adjoint function, then only half of the equations underlying the 2nd-ASAM and, correspondingly, the 3rd-ASAM need to be solved.
This work also presents new formulas that incorporate the contributions of the 3rd-order sensitivities into the expressions of the first four cumulants of the response distribution in the phase-space of model parameters. Using these newly developed formulas, this work also presents a new mathematical formalism, called the 2nd/3rd-BERRU-PM ("Second/Third-Order Best-Estimated Results with Reduced Uncertainties Predictive Modeling") formalism, which combines experimental and computational information in the joint phase-space of responses and model parameters, including not only the 1st-order response sensitivities, but also the complete Hessian matrix of 2nd-order sensitivities and the 3rd-order sensitivities, all computed using the 3rd-ASAM. The 2nd/3rd-BERRU-PM formalism uses the maximum entropy principle to eliminate the need for introducing and "minimizing" a user-chosen "cost functional quantifying the discrepancies between measurements and computations," thus yielding results that are free of subjective user-interferences while generalizing and significantly extending the 4D-VAR data assimilation procedures. By incorporating correlations, including those between the imprecisely known model parameters and computed model responses, the 2nd/3rd-BERRU-PM formalism also provides a quantitative metric, constructed from sensitivity and covariance matrices, for determining the degree of agreement among the various computational and experimental data while eliminating discrepant information. The mathematical framework of the 2nd/3rd-BERRU-PM formalism requires the inversion of a single matrix of size N_r × N_r, where N_r denotes the number of considered responses. In the overwhelming majority of practical situations, the number of responses is much smaller than the number of model parameters. Thus, the 2nd/3rd-BERRU-PM methodology overcomes the curse of dimensionality which affects the inversion of Hessian matrices in the parameter space.
It often occurs in practice that the variances μ_2^{ii}(x) are known but the covariances μ_2^{ij}(x), i ≠ j, are not. In this case, the covariance matrix C would a priori be diagonal. Consequently, only the Lagrange multipliers b_{ii} would be non-zero, so that the matrix B ≜ [b_{ij}] would also be a priori diagonal. In other words, in the absence of information about correlations, the maximum entropy algorithm indicates that the unknown covariances/correlations can be taken to be zero.