Fourth-Order Comprehensive Adjoint Sensitivity Analysis (4th-CASAM) of Response-Coupled Linear Forward/Adjoint Systems: I. Theoretical Framework

Abstract: The most general quantities of interest (called “responses”) produced by the computational model of a linear physical system can depend on both the forward and adjoint state functions that describe the respective system. This work presents the Fourth-Order Comprehensive Adjoint Sensitivity Analysis Methodology (4th-CASAM) for linear systems, which enables the efficient computation of the exact expressions of the 1st-, 2nd-, 3rd- and 4th-order sensitivities of a generic system response, which can depend on both the forward and adjoint state functions, with respect to all of the parameters underlying the respective forward/adjoint systems. Among the best known such system responses are various Lagrangians, including the Schwinger and Roussopoulos functionals, for analyzing ratios of reaction rates, the Rayleigh quotient for analyzing eigenvalues and/or separation constants, etc., which require the simultaneous consideration of both the forward and adjoint systems when computing them and/or their sensitivities (i.e., functional derivatives) with respect to the model parameters. Evidently, such responses encompass, as particular cases, responses that may depend just on the forward or just on the adjoint state functions pertaining to the linear system under consideration. This work also compares the CPU-times needed by the 4th-CASAM versus other deterministic methods (e.g., finite-difference schemes) for computing response sensitivities. These comparisons underscore the fact that the 4th-CASAM is the only practically implementable methodology for obtaining and subsequently computing the exact expressions (i.e., free of methodologically-introduced approximations) of the 1st-, 2nd-, 3rd- and 4th-order sensitivities (i.e., functional derivatives) of responses to system parameters, for coupled forward/adjoint linear systems.
By enabling the practical computation of any and all of the 1st-, 2nd-, 3rd- and 4th-order response sensitivities to model parameters, the 4th-CASAM makes it possible to compare the relative values of the sensitivities of various orders, in order to assess which sensitivities are important and which may actually be neglected, thus enabling future investigations of the convergence of the (multivariate) Taylor series expansion of the response in terms of parameter variations, as well as investigations of the range of validity of other important quantities (e.g., response variances/covariances, skewness, kurtosis, etc.) that are derived from the Taylor-expansion of the response as a function of the model’s parameters. The 4th-CASAM presented in this work provides the basis for significant future advances towards overcoming the “curse of dimensionality” in sensitivity analysis, uncertainty quantification and predictive modeling.


Introduction
The functional derivatives of results (customarily called "responses") produced by computational models of physical systems with respect to the model's parameters are customarily called "response sensitivities." This work presents the Fourth-Order Comprehensive Adjoint Sensitivity Analysis Methodology (4th-CASAM) for general response-coupled forward/adjoint systems. The 4th-CASAM enables the efficient computation of the exact expressions of the 1st-, 2nd-, 3rd- and 4th-order sensitivities of a generic system response, which depends on both the forward and adjoint state functions, with respect to all of the parameters underlying the respective systems. The qualifier "comprehensive" is used because this methodology provides exact expressions for the sensitivities of a system response not only to the system's internal parameters, but also to its (possibly uncertain) boundaries and internal interfaces in phase-space. The development of the 4th-CASAM for coupled forward/adjoint linear systems will provide the basis for significant advances towards overcoming the "curse of dimensionality" in sensitivity analysis, uncertainty quantification, as well as forward and inverse predictive modeling [22][23][24].
This work is structured as follows: Section 2 presents the generic mathematical formulation of the forward and adjoint equations that underlies the computational model of a linear physical system, having a response that depends nonlinearly on the forward and adjoint state functions and parameters. Section 3 presents the development of the novel 4th-CASAM, which is developed successively from the 1st-, 2nd- and 3rd-CASAM. Section 3 also presents, for comparison, the expressions and number of large-scale computations that would be needed if the 1st-, 2nd-, 3rd-, and 4th-order response sensitivities were computed by using finite-difference formulas (which would not only be impractical for large-scale systems, but would provide only approximate values for the respective sensitivities) and the Forward Sensitivity Analysis Methodology (FSAM), which would provide exact expressions for the respective sensitivities, but would also be prohibitively expensive in terms of computational costs for large-scale systems. Finally, Section 4 offers conclusions regarding the significance of this work's novel results in the quest to overcome the curse of dimensionality in sensitivity analysis, uncertainty quantification and predictive modeling.

Background: Mathematical Description of the Physical System
A physical system is modeled by using independent variables, dependent variables ("state functions"), as well as parameters which are seldom, if ever, known precisely. Without loss of generality, the model parameters can be considered to be real scalar quantities, having known nominal (or mean) values and, possibly, known higher-order moments or cumulants (i.e., variances/covariances, skewness, kurtosis), which are determined outside the model, e.g., from experimental data. These imprecisely known model parameters will be denoted as α_1, ..., α_{N_α}, where N_α denotes the total number of imprecisely known parameters underlying the model under consideration. These model parameters are considered to include imprecisely known geometrical parameters that characterize the physical system's boundaries in the phase-space of the model's independent variables. For subsequent developments, it is convenient to consider that these parameters are components of a "vector of parameters" denoted as α ≜ (α_1, ..., α_{N_α})†; α ∈ R^{N_α}, where R^{N_α} denotes the N_α-dimensional subset of the set of real scalars. The vector α ∈ R^{N_α} is considered to include any imprecisely known model parameters that may enter into defining the system's boundary in the phase-space of independent variables. The symbol "≜" will be used to denote "is defined as" or "is by definition equal to." Matrices and vectors will be denoted using bold letters. All vectors in this work are considered to be column vectors, and transposition will be indicated by a dagger (†) superscript.
The model is considered to comprise N_x independent variables which are denoted as x_i, i = 1, ..., N_x, and are considered to be components of the vector x ≜ (x_1, ..., x_{N_x})† ∈ R^{N_x}. The vector x ∈ R^{N_x} of independent variables is considered to be defined on a phase-space domain, denoted as Ω_x, and defined as follows: Ω_x(α) ≜ {λ_i(α) ≤ x_i ≤ ω_i(α); i = 1, ..., N_x}. The lower boundary-point of an independent variable is denoted as λ_i(α) (e.g., the inner radius of a sphere or cylinder, the lower range of an energy-variable, etc.), while the corresponding upper boundary-point is denoted as ω_i(α) (e.g., the outer radius of a sphere or cylinder, the upper range of an energy-variable, etc.). A typical example of boundaries that depend on imprecisely known parameters is provided by the boundary conditions needed for models based on diffusion theory, in which the respective "flux and/or current conditions" for the "boundaries facing vacuum" are imposed on the "extrapolated boundary" of the respective spatial domain. As is well known, the "extrapolated boundary" depends not only on the imprecisely known physical dimensions of the problem's domain, but also on the medium's microscopic transport cross sections and atomic number densities. The boundary of Ω_x(α), which will be denoted as ∂Ω_x(α), comprises the set of all of the endpoints λ_i(α), ω_i(α), i = 1, ..., N_x, of the respective intervals on which the components of x are defined, i.e., ∂Ω_x(α) ≜ {λ_i(α) ∪ ω_i(α), i = 1, ..., N_x}.
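The dependence of the extrapolated boundary on uncertain material parameters can be sketched concretely. The following minimal example (all names, densities and cross-section values are illustrative assumptions, not taken from this work) uses the standard diffusion-theory extrapolation distance z_ex ≈ 0.7104 λ_tr, where λ_tr = 1/Σ_tr is the transport mean free path, to show how the boundary location ω(α) itself varies with the model parameters:

```python
import numpy as np

def extrapolated_boundary(half_width, number_densities, sigma_tr):
    """Illustrative 'extrapolated boundary' omega(alpha) of diffusion theory.

    The extrapolation distance z_ex ~ 0.7104 * lambda_tr depends on the
    (imprecisely known) microscopic transport cross sections sigma_tr [cm^2]
    and atomic number densities [atoms/cm^3], so the boundary location is
    itself a function of the model parameters alpha.
    """
    Sigma_tr = np.dot(number_densities, sigma_tr)  # macroscopic transport xs [1/cm]
    lambda_tr = 1.0 / Sigma_tr                     # transport mean free path [cm]
    return half_width + 0.7104 * lambda_tr

# Hypothetical one-material slab: nominal parameters and a perturbed density
N0 = np.array([8.0e22])        # atoms/cm^3 (illustrative)
sig = np.array([3.0e-24])      # cm^2 (illustrative)
omega_nominal = extrapolated_boundary(10.0, N0, sig)
omega_perturbed = extrapolated_boundary(10.0, 1.01 * N0, sig)
print(omega_nominal, omega_perturbed)  # the boundary shifts with the parameters
```

A 1% increase in the number density shrinks the extrapolation distance, illustrating why ω_i(α) must be treated as an imprecisely known, parameter-dependent boundary.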
The model is considered to comprise N_ϕ dependent variables (also called "state functions"), denoted as ϕ_i(x), i = 1, ..., N_ϕ, which are considered to be the components of the "vector of dependent variables" defined as ϕ(x) ≜ [ϕ_1(x), ..., ϕ_{N_ϕ}(x)]†. A linear physical system is generally modeled by a system of N_ϕ linear operator-equations which can be generally represented as follows:

L(x; α) ϕ(x) = q_ϕ(x; α), x ∈ Ω_x(α),   (1)

where L(x; α) ≜ [L_ij(x; α)], i, j = 1, ..., N_ϕ, is a matrix of dimensions N_ϕ × N_ϕ, while q_ϕ(x; α) ≜ [q_{ϕ,1}(x; α), ..., q_{ϕ,N_ϕ}(x; α)]† is a column vector of dimension N_ϕ. The components L_ij(x; α) are operators that act linearly on the dependent variables ϕ_j(x) and are, in general, nonlinear functions of the imprecisely known parameters α ∈ R^{N_α}. The components q_{ϕ,i}(x; α) of q_ϕ(x; α), where the subscript "ϕ" indicates sources associated with the "forward" system of equations, are also nonlinear functions of α ∈ R^{N_α}. Since the right-side of Equation (1) may contain distributions, the equality in this equation is considered to hold in the weak ("distributional") sense. Similarly, all of the equalities that involve differential equations in this work will be considered to hold in the weak/distributional sense. When L(x; α) contains differential operators, a set of boundary and/or initial conditions which define the domain of L(x; α) must also be given. Since the complete mathematical model is considered to be linear in ϕ(x), the boundary and/or initial conditions needed to define the domain of L(x; α) must also be linear in ϕ(x). Such linear boundary and/or initial conditions are represented in the following operator form:

B_ϕ(x; α) ϕ(x) − c_ϕ(x; α) = 0, x ∈ ∂Ω_x(α).   (2)

In Equation (2), the operator B_ϕ(x; α) ≜ [b_ij(x; α)]; i = 1, ..., N_B; j = 1, ...
, N_ϕ, is a matrix comprising, as components, operators that act linearly on ϕ(x) and nonlinearly on α; the quantity N_B denotes the total number of boundary and initial conditions. The operator c_ϕ(x; α) ≜ [c_{ϕ,1}(x; α), ..., c_{ϕ,N_B}(x; α)]† is an N_B-dimensional vector comprising components that are operators acting, in general, nonlinearly on α. The subscript "ϕ" in Equation (2) indicates boundary conditions associated with the forward state function ϕ(x). In this work, capital bold letters will be used to denote matrices (whose components may be operators rather than just functions) while lower-case bold letters will be used to denote vectors. The nominal solution of Equations (1) and (2) is denoted as ϕ⁰(x), and is obtained by solving these equations at the nominal (or mean) values α⁰ of the model parameters. The superscript "zero" will henceforth be used to denote "nominal" (or, equivalently, "expected" or "mean") values. Thus, the vectors ϕ⁰(x) and α⁰ satisfy the following equations:

L(x; α⁰) ϕ⁰(x) = q_ϕ(x; α⁰), x ∈ Ω_x(α⁰),   (3)

B_ϕ(x; α⁰) ϕ⁰(x) − c_ϕ(x; α⁰) = 0, x ∈ ∂Ω_x(α⁰).   (4)
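A small discretized analogue of Equations (1)-(4) may help fix ideas. The sketch below is an illustrative assumption: a one-dimensional diffusion-type operator with homogeneous Dirichlet boundary conditions stands in for L(x; α) and B_ϕ(x; α); the function name and the parameters (a diffusion coefficient, an absorption cross section, a source strength) are hypothetical:

```python
import numpy as np

def assemble_forward_system(alpha, n=50, length=1.0):
    """Discretized analogue of Eqs. (1)-(2): L(alpha) phi = q_phi(alpha).

    alpha = (D, sigma_a, q0) are hypothetical parameters: a diffusion
    coefficient, an absorption cross section and a uniform source strength.
    Homogeneous Dirichlet conditions phi(0) = phi(length) = 0 play the
    role of the boundary conditions of Eq. (2).
    """
    D, sigma_a, q0 = alpha
    h = length / (n + 1)
    # Tridiagonal finite-difference matrix for -D phi'' + sigma_a phi
    main = (2.0 * D / h**2 + sigma_a) * np.ones(n)
    off = (-D / h**2) * np.ones(n - 1)
    L_mat = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return L_mat, q0 * np.ones(n)

alpha0 = (1.0, 0.5, 1.0)             # nominal parameter values alpha^0
L_mat, q = assemble_forward_system(alpha0)
phi0 = np.linalg.solve(L_mat, q)     # nominal forward solution, Eq. (3)
print(phi0.max())
```

Note how the matrix entries depend nonlinearly on the parameters while the system remains linear in ϕ, mirroring the structure of Equation (1).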
Linear systems differ fundamentally from nonlinear systems in that linear operators admit adjoint operators, whereas nonlinear operators do not. Furthermore, important model responses of linear systems can be functions of both the forward and the adjoint state functions, a situation that cannot occur for nonlinear problems. Therefore, linear physical systems cannot simply be considered to be particular cases of nonlinear systems (although they may be just that in particular cases), but need to be treated comprehensively in their own right.
Physical problems modeled by linear systems and/or operators are naturally defined in Hilbert spaces. Thus, for physical systems represented by Equations (1) and (2), the components ϕ_i(x), i = 1, ..., N_ϕ, are considered to be square-integrable functions and ϕ(x) ∈ H_0, where H_0 is the "original", or "zeroth-level" Hilbert space, as indicated by the subscript "zero". Subsequently in this work, higher-level Hilbert spaces, which will be denoted as H_1, H_2, etc., will also be introduced. Evidently, all of the elements of H_0 are N_ϕ-dimensional vectors that are functions of the independent variables x. For two elements ϕ(x) ∈ H_0 and ψ(x) ∈ H_0, the Hilbert space H_0 is endowed with an inner product that will be denoted as ⟨ϕ(x), ψ(x)⟩_0 and which is defined as follows:

⟨ϕ(x), ψ(x)⟩_0 ≜ ∫_{λ_1(α)}^{ω_1(α)} dx_1 ... ∫_{λ_{N_x}(α)}^{ω_{N_x}(α)} dx_{N_x} [ϕ(x) · ψ(x)].   (5)

In Equation (5), the product of integrals compactly denotes the respective multiple integral, while the dot indicates the "scalar product of two vectors" defined as follows:

ϕ(x) · ψ(x) ≜ Σ_{i=1}^{N_ϕ} ϕ_i(x) ψ_i(x).   (6)

In most practical situations the Hilbert space H_0 is self-dual. The operator L(x; α) admits an adjoint (operator), which will be denoted as L*(x; α), and which is defined through the following relation for an arbitrary vector ψ(x) ∈ H_0:

⟨ψ(x), L(x; α) ϕ(x)⟩_0 = ⟨L*(x; α) ψ(x), ϕ(x)⟩_0.   (7)

In Equation (7), the formal adjoint operator is the matrix

L*(x; α) ≜ [L*_ji(x; α)], i, j = 1, ..., N_ϕ,   (8)

comprising elements L*_ji(x; α) which are obtained by transposing the formal adjoints of the operators L_ij(x; α). Thus, the system adjoint to the linear system represented by Equations (1) and (2) has the following general representation in operator form:

L*(x; α) ψ(x) = q_ψ(x; α), x ∈ Ω_x(α),   (9)

B_ψ(x; α) ψ(x) − c_ψ(x; α) = 0, x ∈ ∂Ω_x(α).   (10)

The domain of L*(x; α) is determined by selecting the adjoint boundary and/or initial conditions represented in operator form in Equation (10), where the subscript "ψ" indicates adjoint boundary and/or initial conditions associated with the adjoint state function ψ(x).
These adjoint boundary and/or initial conditions are selected so as to ensure that the boundary terms that arise in the so-called "bilinear concomitant" when going from the left-side to the right side of Equation (7) vanish, in conjunction with the forward boundary conditions given in Equation (2).
The nominal solution of Equations (9) and (10) is denoted as ψ⁰(x), and is obtained by solving these equations at the nominal parameter values α⁰, i.e.,

L*(x; α⁰) ψ⁰(x) = q_ψ(x; α⁰), x ∈ Ω_x(α⁰),   (11)

B_ψ(x; α⁰) ψ⁰(x) − c_ψ(x; α⁰) = 0, x ∈ ∂Ω_x(α⁰).   (12)

In view of Equations (1) and (9), the relationship shown in Equation (7), which is the basis for defining the adjoint operator, also provides the following fundamental "reciprocity-like" relation between the sources of the forward and the adjoint equations, respectively:

⟨ψ(x), q_ϕ(x; α)⟩_0 = ⟨ϕ(x), q_ψ(x; α)⟩_0.   (13)

The functional on the right-side of Equation (13) represents a "detector response", i.e., a particle reaction-rate representative of the "count" of particles incident on a detector of particles, measuring the respective particle flux ϕ(x). Thus, the source term in Equation (11) is usually associated with the "result of interest" to be measured and/or computed, which is customarily called the system's "response".
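For a discretized system, the adjoint operator of Equation (7) reduces to the matrix transpose, and the reciprocity relation of Equation (13) can be verified numerically. The following sketch (illustrative, well-conditioned random matrices and sources, not this work's model) performs one forward and one adjoint solve and checks both relations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# A well-conditioned, nonsymmetric stand-in for the discretized operator L(x; alpha)
L_mat = 5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
q_phi = rng.random(n)                    # forward source of Eq. (1)
q_psi = rng.random(n)                    # adjoint source ("detector") of Eq. (9)

phi = np.linalg.solve(L_mat, q_phi)      # forward solve
psi = np.linalg.solve(L_mat.T, q_psi)    # adjoint solve: L* becomes the transpose

# Discrete analogue of Eq. (7): <psi, L phi> = <L* psi, phi>
print(psi @ (L_mat @ phi), (L_mat.T @ psi) @ phi)
# Discrete analogue of Eq. (13): <psi, q_phi> = <phi, q_psi>
print(psi @ q_phi, phi @ q_psi)
```

The second pair of printed values shows the "detector response" computed two equivalent ways: by weighting the forward flux with the adjoint source, or by weighting the adjoint function with the forward source.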
The system response generally depends on the model's state-functions and on the system parameters, which are considered to also include parameters that may specifically appear only in the definition of the response under consideration (but which may not appear in the definition of the model). Thus, the (physical) "system" is defined in this work to comprise both the system's computational model and the system's response. The system's response will be denoted as R[ϕ(x), ψ(x); α] and, in the most general case, is a nonlinear operator acting on the model's forward and adjoint state functions, as well as on imprecisely known parameters, both directly and indirectly through the state functions. The nominal value of the response, R[ϕ⁰(x), ψ⁰(x); α⁰], is determined by using the nominal parameter values α⁰, the nominal value ϕ⁰(x) of the forward state function [obtained by solving Equations (3) and (4)] and the nominal value ψ⁰(x) of the adjoint function [obtained by solving Equations (11) and (12)].
A particularly important class of system responses comprises (scalar-valued) functionals of the forward and adjoint state functions. Such responses occur in many fields, including optimization, control, model verification, data assimilation, model validation and calibration, predictive modeling, etc. For example, the well-known Lagrangian (Roussopoulos) functional is usually represented in the following form:

R_L[ϕ(x), ψ(x); α] ≜ ⟨ϕ(x), q_ψ(x; α)⟩_0 + ⟨ψ(x), q_ϕ(x; α) − L(x; α) ϕ(x)⟩_0.   (14)

In particular, the Schwinger "normalization-free Lagrangian" is usually represented in the following form [1,2]:

R_S[ϕ(x), ψ(x); α] ≜ ⟨ϕ(x), q_ψ(x; α)⟩_0 ⟨ψ(x), q_ϕ(x; α)⟩_0 / ⟨ψ(x), L(x; α) ϕ(x)⟩_0.   (15)

When Equation (1) takes on the eigenvalue (or "separated") form M[α(x)] ϕ(x) = μ N[α(x)] ϕ(x), which implies that the adjoint function ψ(x) is the solution of the corresponding adjoint eigenvalue problem M*[α(x)] ψ(x) = μ N*[α(x)] ψ(x), the well-known Rayleigh quotient is usually represented in the following form:

R_RQ[ϕ(x), ψ(x); α] ≜ ⟨ψ(x), M[α(x)] ϕ(x)⟩_0 / ⟨ψ(x), N[α(x)] ϕ(x)⟩_0.   (16)

Actually, any response can be represented in terms of functionals using the inner product underlying the Hilbert space in which the physical problem is formulated (in this case, H_0). For example, a measurement of a physical quantity can be represented as a response R_p[ϕ(x_p), ψ(x_p); α] which is located at a specific point, x_p, in phase-space. Such a response can be represented mathematically as a functional of the following form:

R_p[ϕ(x_p), ψ(x_p); α] = ∫_{Ω_x} R[ϕ(x), ψ(x); α] δ(x − x_p) dx,   (17)

where δ(x − x_p) denotes the multidimensional Dirac-delta functional. Furthermore, a function-valued (operator) response R_op[ϕ(x), ψ(x); α] can be represented by a spectral expansion (in multidimensional orthogonal polynomials or Fourier series) of the form:

R_op[ϕ(x), ψ(x); α] = Σ_{m_1} ... Σ_{m_{N_x}} c_{m_1...m_{N_x}} Π_{i=1}^{N_x} P_{m_i}(x_i),   (18)

where the quantities P_{m_i}(x_i), i = 1, ..., N_x, denote the corresponding spectral functions (e.g., orthogonal polynomials or Fourier exponential/trigonometric functions) and where the spectral (Fourier) coefficients c_{m_1...m_{N_x}} are defined as follows:

c_{m_1...m_{N_x}} ≜ { ∫_{Ω_x} R_op[ϕ(x), ψ(x); α] Π_{i=1}^{N_x} P_{m_i}(x_i) dx } / { ∫_{Ω_x} [Π_{i=1}^{N_x} P_{m_i}(x_i)]² dx }.   (19)

The coefficients c_{m_1...m_{N_x}} can themselves be considered as system responses, since the spectral polynomials P_{m_i}(x_i) are perfectly well known, while the expansion coefficients contain all of the dependencies of the respective response on the imprecisely known model and response parameters. Consequently, the sensitivity analysis of operator-valued responses can be reduced to the sensitivity analysis of scalar-valued responses. The expressions in both Equations (17) and (19) are functionals of the forward and adjoint state functions and can be represented in the following general form:

R[ϕ(x), ψ(x); α] ≜ ∫_{λ_1(α)}^{ω_1(α)} dx_1 ... ∫_{λ_{N_x}(α)}^{ω_{N_x}(α)} dx_{N_x} S[ϕ(x), ψ(x); α],   (20)

where S[ϕ(x), ψ(x); α] denotes a suitably Gateaux-(G-) differentiable function of the indicated arguments. Specific examples that illustrate the effects of the first- and second-order sensitivities of responses of the forms shown in Equations (14)-(20) to imprecisely known model and domain-boundary parameters are provided in [25][26][27][28][29][30][31][32][33][34].
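The reduction of an operator-valued response to scalar-valued spectral coefficients, cf. Equation (19), can be illustrated in one dimension with Legendre polynomials. In the sketch below, the response function R_op is a hypothetical stand-in; the coefficients c_m are computed by Gauss-Legendre quadrature and the truncated expansion is checked against the original function:

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical function-valued response R_op(x) on [-1, 1] (illustrative choice)
R_op = lambda x: np.exp(-x) * np.sin(2.0 * x)

# Gauss-Legendre nodes/weights for evaluating the inner-product integrals of Eq. (19)
nodes, weights = legendre.leggauss(32)

def spectral_coefficient(m):
    """c_m = integral(R_op * P_m) / integral(P_m^2); the denominator equals 2/(2m+1)."""
    Pm = legendre.Legendre.basis(m)(nodes)
    return np.sum(weights * R_op(nodes) * Pm) * (2.0 * m + 1.0) / 2.0

coeffs = np.array([spectral_coefficient(m) for m in range(16)])
R_rec = legendre.Legendre(coeffs)          # truncated spectral expansion
x_chk = np.linspace(-1.0, 1.0, 7)
max_err = np.max(np.abs(R_rec(x_chk) - R_op(x_chk)))
print(max_err)                             # small truncation error
```

Each scalar c_m carries the parameter dependence of the response, so performing sensitivity analysis on the coefficients suffices for the whole operator-valued response.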
The generic response defined in Equation (20) provides the basis for constructing any other responses of specific interest, and will therefore be used for the generic "Fourth-Order Comprehensive Sensitivity Analysis Methodology (4th-CASAM) for Linear Systems" to be developed in the remainder of this work. Note that the generic response defined in Equation (20) is, in general, a nonlinear function of all of its arguments, i.e., S[ϕ(x), ψ(x); α] is nonlinear in ϕ(x), ψ(x), and α.
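The stationarity property that makes Rayleigh-quotient-type responses attractive can also be checked numerically: for a self-adjoint generalized eigenvalue problem M ϕ = μ N ϕ (so that ψ = ϕ), a first-order error in the eigenvector produces only a second-order error in the quotient. The sketch below uses hypothetical symmetric positive-definite matrices M and N (illustrative assumptions, not this work's operators):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
# Hypothetical symmetric positive-definite matrices standing in for M and N
A = rng.standard_normal((n, n)); M = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); N = B @ B.T + n * np.eye(n)

# Smallest eigenpair of the generalized problem M phi = mu N phi
evals, evecs = np.linalg.eig(np.linalg.solve(N, M))
k = int(np.argmin(evals.real))
mu, phi = evals.real[k], evecs[:, k].real

def rayleigh(v):
    """Rayleigh quotient with psi = phi (self-adjoint case): <v, M v> / <v, N v>."""
    return (v @ (M @ v)) / (v @ (N @ v))

# A first-order perturbation of phi changes the quotient only to second order
errors = {}
for eps in (1e-2, 1e-4):
    v = phi + eps * rng.standard_normal(n)
    errors[eps] = abs(rayleigh(v) - mu)
print(errors)
```

Reducing the perturbation by two orders of magnitude reduces the error in the quotient by roughly four, which is the quadratic behavior expected of a stationary functional.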

The Fourth-Order Comprehensive Adjoint Sensitivity Analysis Methodology (4th-CASAM) for Linear Systems
The model parameters α_i are imprecisely known quantities, so their actual values may differ from their nominal values by quantities denoted as δα_i ≜ α_i − α_i⁰, i = 1, ..., N_α. Since the model parameters α and the state functions are related to each other through the forward and adjoint systems, it follows that variations δα ≜ (δα_1, ..., δα_{N_α})† in the model parameters will cause corresponding variations δϕ(x) ≜ [δϕ_1(x), ..., δϕ_{N_ϕ}(x)]† and δψ(x) ≜ [δψ_1(x), ..., δψ_{N_ϕ}(x)]† in the forward and, respectively, adjoint state functions. In turn, the variations δα, δϕ, and δψ cause a response variation R[ϕ⁰ + δϕ, ψ⁰ + δψ; α⁰ + δα] around the nominal response value R[ϕ⁰(x), ψ⁰(x); α⁰].

Computation of the First-Order Response Sensitivities
Since the closed-form analytical expression of the response R[ϕ(x), ψ(x); α] is not available in practice, the first-order sensitivities of the response cannot be computed directly from Equation (20), but need to be computed by other means. There are three methods for computing response sensitivities: (i) by using finite-difference formulas or statistical procedures to approximate Equation (20) in conjunction with "brute-force" re-computations using altered parameter values in Equations (1) and (2); (ii) by using the "forward sensitivity analysis methodology (FSAM)," which requires solving the differentiated forms of Equations (1), (2), (9) and (10), evaluated at the known nominal and response parameter values; (iii) by applying the "comprehensive adjoint sensitivity analysis methodology" which has been developed based on the pioneering work of Cacuci [4][5][6]. These methods will be discussed in Sections 3.1.1-3.1.3 below.

Finite-Difference Approximation Using Re-Computations with User-Modified Parameters
The first-order sensitivities of the response, R[ϕ(x), ψ(x); α], can be computed approximately using the well-known finite-difference formula presented below:

∂R(α)/∂α_j ≈ (R_{j+1} − R_{j−1}) / (2 h_j) + O(h_j²),   (21)

where R_{j±1} ≜ R(α_1⁰, ..., α_j⁰ ± h_j, ..., α_{N_α}⁰), and where h_j denotes a "judiciously-chosen" variation in the parameter α_j around its nominal value α_j⁰. The values R_{j+1} and R_{j−1} are obtained by re-solving the original forward and adjoint systems of equations repeatedly, using the changed parameter values (α_j⁰ ± h_j). The value of the variation h_j is chosen by "trial and error" for each parameter α_j, and varies from parameter to parameter. If the value of h_j is too large or too small, the result produced by Equation (21) will be in considerable error from the exact value of the derivative ∂R(α)/∂α_j, and this negative outcome may be exacerbated by the fact that the user may not be aware of this error, since the exact result for the respective sensitivity can be computed only by using the forward or the adjoint sensitivity analysis methods (to be presented in Sections 3.1.2 and 3.1.3). It is important to note that finite-difference formulas introduce their intrinsic "methodological errors," such as the error O(h_j²) indicated in Equation (21), which are in addition to, and independent of, the errors that might be incurred in the computation of R[ϕ⁰(x), ψ⁰(x); α⁰]. In other words, even if the computation of R[ϕ⁰(x), ψ⁰(x); α⁰] were perfect (error-free), the finite-difference formulas would nevertheless introduce their own, intrinsic, numerical errors into the computation of the sensitivity {dR(α)/dα_j}_{α⁰}. This statement is also valid for any of the statistical methods (e.g., Latin-hypercube sampling, etc.) that might be used in conjunction with re-computations.
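The sensitivity of Equation (21) to the choice of h_j can be demonstrated with a closed-form stand-in response (a hypothetical function, chosen only so that the exact derivative is known for comparison; in practice every evaluation of R requires complete forward and adjoint solves):

```python
import numpy as np

def response(alpha):
    """Hypothetical closed-form stand-in for R(alpha)."""
    return np.exp(alpha[0]) * np.sin(alpha[1])

def central_difference(j, h, alpha0):
    """Eq. (21): (R_{j+1} - R_{j-1}) / (2 h_j); two re-computations per parameter."""
    a_plus, a_minus = alpha0.copy(), alpha0.copy()
    a_plus[j] += h
    a_minus[j] -= h
    return (response(a_plus) - response(a_minus)) / (2.0 * h)

alpha0 = np.array([0.3, 1.2])
exact = np.exp(0.3) * np.cos(1.2)     # analytic dR/d(alpha_2), for reference only
errors = {h: abs(central_difference(1, h, alpha0) - exact)
          for h in (1e-1, 1e-3, 1e-8, 1e-12)}
for h, err in errors.items():
    print(f"h={h:8.0e}  error={err:.2e}")
# Too-large h incurs the O(h^2) truncation error of Eq. (21);
# too-small h amplifies floating-point round-off in the subtraction.
```

The printed errors trace the familiar U-shaped curve: neither end of the h-range is safe, and without the exact derivative (unavailable in practice) the user cannot tell where on that curve a given h_j sits.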

Forward Sensitivity Analysis Methodology (FSAM)
The aim of the Forward Sensitivity Analysis Methodology (FSAM) is to compute the response sensitivities exactly, without introducing methodological approximations/errors of its own, as is the case when using methods that need brute-force re-computations (e.g., finite-difference errors, statistical errors). The mathematical framework of the FSAM is set in the vector-space of the model parameters α. In this vector-space, the first-order partial sensitivity of R[ϕ(x), ψ(x); α] to a generic model parameter, α_j, can in principle be obtained by taking the first-order partial G-derivatives of the response and of the equations underlying the model with respect to the generic model/response parameter α_j, evaluated at the nominal parameter values α⁰. Recall that in a linear vector space comprising function-valued elements of the form η(x) ≜ [η_1(x), ..., η_K(x)]†, the most comprehensive, and practically computable, concept of a "partial derivative" of a nonlinear operator F[η_1(x), ..., η_K(x)] with respect to any one of its arguments, η_i, stems from the properties of the 1st-order partial Gateaux-(G-) variation, δF_i(η; h_i), which is defined as follows:

δF_i(η; h_i) ≜ {d/dε F[η_1(x), ..., η_i(x) + ε h_i(x), ..., η_K(x)]}_{ε=0},   (22)

where ε is a scalar (i.e., ε ∈ F, where F denotes the underlying field of real scalars) and where h_i(x) denotes an arbitrary variation in η_i(x).
The total 1st-order G-variation δF(η; h) for arbitrary variations h ≜ (h_1, ..., h_K)† is defined as follows:

δF(η; h) ≜ {d/dε F(η + ε h)}_{ε=0}.   (23)

The first-order G-variation δF(η; h) is an operator defined on the same domain as F(η), and has the same range as F(η). The existence of the G-variation δF(η; h) does not guarantee its numerical computability. Numerical methods most often require that δF(η; h) be linear in the variations h in a neighborhood (η + εh) around η. The necessary and sufficient conditions for the G-variation δF(η; h) of a nonlinear operator F(η) to be linear in the variations h in a neighborhood (η + εh) around η are as follows:

(i) F(η) satisfies a weak Lipschitz condition at η;   (24)

(ii) for two arbitrary vectors of variations h_1 and h_2, the operator F(η) satisfies the relation

F(η + ε h_1 + ε h_2) − F(η + ε h_1) − F(η + ε h_2) + F(η) = o(ε).   (25)

In practice, it is not necessary to investigate whether F(η) satisfies the conditions presented in Equations (24) and (25), since it is usually evident whether the right-side of Equation (23) is linear (or not) in the variations h. When the total variation δF(η; h) is linear in h, it is called the "total differential of F(η)" and is denoted as DF(η; h). In this case, the partial variation δF_i(η_1, ..., η_K; h_i) is called the "partial differential of F(η) with respect to η_i", and the following representation holds:

DF(η; h) = Σ_{i=1}^{K} [∂F(η_1, ..., η_K)/∂η_i] h_i,   (26)

where the quantities ∂F(η_1, ..., η_K)/∂η_i denote the partial derivatives of F(η) with respect to its arguments η_i. It will henceforth be assumed that all of the operators considered in this work satisfy the conditions presented in Equations (24) and (25), so that they admit partial G-derivatives and the representation shown in Equation (26) holds; in particular, the 1st-order sensitivities of the response are obtained as its partial G-derivatives with respect to the respective parameter α_j, for each j = 1, ..., N_α.
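The definitions in Equations (22), (23) and (26) can be checked numerically for a simple nonlinear operator. In the sketch below (the operator F and the variations are hypothetical), the G-variation is approximated by differentiating in the scalar ε, and its linearity in h, i.e., the representation of Equation (26), is verified against the analytic partial derivatives:

```python
import numpy as np

def F(eta):
    """A hypothetical nonlinear operator F(eta_1, eta_2, eta_3) (illustrative only)."""
    return np.sum(eta**3) + eta[0] * eta[1]

def g_variation(eta, h, eps=1e-6):
    """Eq. (23): delta F(eta; h) = d/d(epsilon) F(eta + epsilon*h) at epsilon = 0,
    approximated by a central difference in the scalar epsilon."""
    return (F(eta + eps * h) - F(eta - eps * h)) / (2.0 * eps)

eta = np.array([1.0, 2.0, 3.0])
h1 = np.array([0.1, -0.2, 0.3])
h2 = np.array([-0.4, 0.5, 0.1])

# Linearity in h, i.e., the representation of Eq. (26)
lhs = g_variation(eta, h1 + h2)
rhs = g_variation(eta, h1) + g_variation(eta, h2)

# Analytic partial derivatives dF/d(eta_i), summed as in Eq. (26), for comparison
grad = 3.0 * eta**2 + np.array([eta[1], eta[0], 0.0])
analytic = grad @ (h1 + h2)
print(lhs, rhs, analytic)
```

Even though F is nonlinear in η, its G-variation is linear in the variations h, which is precisely the property that makes the variation numerically computable term by term.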
Since the forward state function ϕ(x) is the solution of Equations (1) and (2), while the adjoint state function ψ(x) is the solution of Equations (9) and (10), it follows that both ϕ(x) and ψ(x) are implicit functions of the model parameters α ≜ (α_1, ..., α_{N_α})†. Hence, considering that ϕ(x) = ϕ(x; α), ψ(x) = ψ(x; α), and applying the definitions provided in Equations (20)-(22) yields the following expression for the 1st-order partial sensitivity {∂R[ϕ(x), ψ(x); α]/∂α_j}_{α⁰}, for each j = 1, ..., N_α:

{∂R[ϕ(x), ψ(x); α]/∂α_j}_{α⁰} = { ∂R[ϕ, ψ; α]/∂α_j + ⟨∂S/∂ϕ, ∂ϕ(x)/∂α_j⟩_0 + ⟨∂S/∂ψ, ∂ψ(x)/∂α_j⟩_0 }_{α⁰},   (27)

where ∂R[ϕ, ψ; α]/∂α_j denotes the direct partial derivative of the response with respect to α_j, computed while holding the state functions fixed (and therefore including, via the Leibniz rule, the contributions stemming from the α_j-dependence of the integration limits λ_i(α) and ω_i(α)). The functions ∂ϕ(x)/∂α_j and ∂ψ(x)/∂α_j are obtained by solving the system of equations obtained by taking the first partial G-derivatives of Equations (1), (2), (9) and (10) with respect to the model parameter α_j. Applying the definition provided in Equation (22) to Equations (1), (2), (9) and (10) yields, after cancelling the scalar parameter variation δα_j on both sides of the equal signs in the resulting expressions, the following First-Order Forward Sensitivity System (1st-OFSS):

L(x; α) [∂ϕ(x)/∂α_j] = ∂q_ϕ(x; α)/∂α_j − [∂L(x; α)/∂α_j] ϕ(x), x ∈ Ω_x,   (28)

B_ϕ(x; α) [∂ϕ(x)/∂α_j] = ∂c_ϕ(x; α)/∂α_j − [∂B_ϕ(x; α)/∂α_j] ϕ(x), x ∈ ∂Ω_x(α),   (29)

L*(x; α) [∂ψ(x)/∂α_j] = ∂q_ψ(x; α)/∂α_j − [∂L*(x; α)/∂α_j] ψ(x), x ∈ Ω_x,   (30)

B_ψ(x; α) [∂ψ(x)/∂α_j] = ∂c_ψ(x; α)/∂α_j − [∂B_ψ(x; α)/∂α_j] ψ(x), x ∈ ∂Ω_x(α),   (31)

where all quantities are to be evaluated at the nominal values α⁰. The system of equations comprising Equations (28) through (31) is called the 1st-OFSS because its solutions are the first-order derivatives ∂ϕ(x)/∂α_j and ∂ψ(x)/∂α_j of the forward and, respectively, adjoint state functions. Evidently, the 1st-OFSS would need to be solved 2N_α times, with different right-sides, in order to compute the 1st-order derivatives of the forward and, respectively, adjoint state functions with respect to all model parameters α_j, j = 1, ..., N_α. Subsequently, the solutions ∂ϕ(x)/∂α_j and ∂ψ(x)/∂α_j of the 1st-OFSS would be used in Equation (27) to compute the first-order response sensitivity {∂R[ϕ(x), ψ(x); α]/∂α_j}_{α⁰}, for each j = 1, ..., N_α, using quadrature formulas.
The quadrature computations are small-scale, inexpensive computations, but solving the 1st-OFSS entails large-scale computations, even though the same linear operators, namely L(x; α) and L*(x; α), are inverted each time (which means that the same codes/solvers can be reused, without needing additional programming for inverting these operators). Thus, the FSAM is advantageous to employ only if, in the problem under consideration, the number N_α of model parameters is considerably smaller than the number of responses of interest. This is rarely the case in practice, however, since most problems of practical interest are characterized by many model parameters and comparatively few responses.
It is evident from Equations (27)-(31) that the 1st-order partial sensitivities {∂R[ϕ(x), ψ(x); α]/∂α_j}_{α⁰} are functions of the quantities ∂ϕ(x)/∂α_j and ∂ψ(x)/∂α_j, as well as of the index j = 1, ..., N_α. Therefore, for the subsequent derivation of the 2nd- and higher-order sensitivities of the response with respect to the model parameters, it is convenient to introduce the following notation for the 1st-order response sensitivities obtained by using the FSAM:

R_j^{(1)}[ϕ(x), ψ(x); α] ≜ ∂R[ϕ(x), ψ(x); α]/∂α_j, j = 1, ..., N_α.   (32)

First-Order Comprehensive Adjoint Sensitivity Analysis Methodology (1st-CASAM)

The aim of the First-Order Comprehensive Adjoint Sensitivity Analysis Methodology (1st-CASAM) is to find an alternative way of expressing the contribution of the variations in the state functions (the so-called "indirect-effect term" contribution) to the total response sensitivity, so as to avoid the need for computing the derivatives ∂ϕ(x)/∂α_j and ∂ψ(x)/∂α_j of the state functions with respect to the model's parameters, as is the case when using the FSAM. In contradistinction to the FSAM, which is framed in the vector-space of the parameters α, the 1st-CASAM is framed in the combined phase-space of the parameters α and the state functions ϕ(x) and ψ(x).
The 1st-order Gateaux-(G-) variation, denoted as δR(ϕ⁰, ψ⁰, α⁰; δϕ, δψ, δα), of the response R[ϕ(x), ψ(x); α] for arbitrary variations δϕ(x), δψ(x), δα in the state functions and model parameters, in a neighborhood [ϕ⁰(x) + εδϕ(x), ψ⁰(x) + εδψ(x); α⁰ + εδα] around (ϕ⁰, ψ⁰, α⁰), where ε ∈ F is a real scalar (F denotes the underlying field of scalars), is defined as follows:

δR(ϕ⁰, ψ⁰, α⁰; δϕ, δψ, δα) ≜ {d/dε R[ϕ⁰ + εδϕ, ψ⁰ + εδψ; α⁰ + εδα]}_{ε=0} = {δR(ϕ⁰, ψ⁰, α⁰; δα)}_dir + {δR(ϕ⁰, ψ⁰, α⁰; δϕ, δψ)}_ind,   (33)

where the "direct-effect" term {δR(ϕ⁰, ψ⁰, α⁰; δα)}_dir depends only on the parameter variations δα and is defined as follows:

{δR(ϕ⁰, ψ⁰, α⁰; δα)}_dir ≜ { Σ_{j=1}^{N_α} (∂R[ϕ, ψ; α]/∂α_j) δα_j }_{α⁰},   (34)

with the partial derivatives ∂R/∂α_j being computed while holding the state functions fixed, while the "indirect-effect" term {δR(ϕ⁰, ψ⁰, α⁰; δϕ, δψ)}_ind depends only on the variations δϕ and δψ in the state functions, and is defined as follows:

{δR(ϕ⁰, ψ⁰, α⁰; δϕ, δψ)}_ind ≜ { ∫_{Ω_x} [ (∂S/∂ϕ) · δϕ(x) + (∂S/∂ψ) · δψ(x) ] dx }_{α⁰}.   (35)

In Equations (34) and (35), the notation {}_{α⁰} has been used to indicate that the quantity within the brackets is to be evaluated at the nominal values of the parameters and state functions. This simplified notation is justified by the fact that when the parameters take on their nominal values, the corresponding state functions implicitly also take on their nominal values, in view of Equations (3), (4), (11) and (12). This simplified notation will be used throughout this work.
The "direct effect" term δR ϕ 0 , ψ 0 , α 0 ; δα dir defined in Equation (34) depends directly on the parameter variations δα, and can be computed immediately since it does not depend on the variations δϕ and δψ(x). On the other hand, the "indirect effect" term δR ϕ 0 , ψ 0 , α 0 ; δϕ, δψ ind defined in Equation (35) depends indirectly on the parameter variations δα, through the variations δϕ(x) and δψ(x) in the forward state functions ϕ(x) and ψ(x), and can be computed only after having computed the values of the variations δϕ(x) and δψ(x).
The variations δϕ(x) and δψ(x) are the solutions of the system of equations obtained by taking the G-differentials of Equations (1), (2), (9) and (10), which can be written in the following matrix-vector form:

V^{(1)}(x; α) δu^{(1)}(x) = q_V^{(1)}(u^{(1)}; α; δα), x ∈ Ω_x,   (36)

B_V^{(1)}(u^{(1)}; α; δu^{(1)}; δα) = 0, x ∈ ∂Ω_x(α⁰),   (37)

where u^{(1)}(x) ≜ [ϕ(x), ψ(x)]†, δu^{(1)}(x) ≜ [δϕ(x), δψ(x)]†, V^{(1)}(x; α) ≜ diag[L(x; α), L*(x; α)], and where the source q_V^{(1)}(u^{(1)}; α; δα) comprises the two block-components {∂q_ϕ(α)/∂α − ∂[L(α)ϕ]/∂α} δα and {∂q_ψ(α)/∂α − ∂[L*(α)ψ]/∂α} δα. The matrices ∂q_ϕ(α)/∂α and ∂[L(α)ϕ]/∂α, which appear on the right-side of Equation (36), are defined as follows:

∂q_ϕ(α)/∂α ≜ [∂q_ϕ(α)/∂α_1, ..., ∂q_ϕ(α)/∂α_{N_α}],   (38)

∂[L(α)ϕ]/∂α ≜ [∂[L(α)ϕ]/∂α_1, ..., ∂[L(α)ϕ]/∂α_{N_α}],   (39)

with the matrices ∂q_ψ(α)/∂α and ∂[L*(α)ψ]/∂α being defined analogously. The system of equations comprising Equations (36) and (37) will be called the "1st-Level Variational Sensitivity System" (1st-LVSS) and its solution, δu^{(1)}(x), will be called the "1st-level variational sensitivity function." The superscript "(1)" used in Equations (36) and (37) indicates "1st-Level" (as opposed to "first-order") because the 1st-LVSS differs from the 1st-OFSS in that the 1st-LVSS involves the total first-order differential δu^{(1)}(x) of the state functions, whereas the 1st-OFSS involves the first-order partial derivatives of the state functions with respect to a single model parameter α_j. In principle, the 1st-LVSS could be solved for each possible component of the vector of parameter variations δα to obtain the functions δϕ(x) and δψ(x). Subsequently, the solutions δϕ(x) and δψ(x) of the 1st-LVSS could be used together with the known parameter variations δα in Equation (35) to compute the indirect-effect term {δR(ϕ⁰, ψ⁰, α⁰; δϕ, δψ)}_ind. Computing the indirect-effect term by solving the 1st-LVSS would require at least 2N_α large-scale computations (to solve the 1st-LVSS) for every independent component of the vector of parameter variations δα. Therefore, solving the 1st-LVSS is advantageous only if, in the problem under consideration, the number N_α of model and boundary parameters is considerably smaller than the number of responses of interest. This is rarely the case in practice, however, since most problems of practical interest are characterized by many model parameters and comparatively few responses.
The indirect-effect term δR(ϕ^0, ψ^0, α^0; δϕ, δψ)_ind defined in Equation (35) can be computed by applying the First-Order Comprehensive Adjoint Sensitivity Analysis Methodology (1st-CASAM), which avoids the need for computing the functions δϕ(x) and δψ(x). The principles underlying the 1st-CASAM are as follows:
1. Introduce a Hilbert space, denoted as H_1, comprising square-integrable vector-valued elements with components of the form η^(1)_i(x) ≡ [η^(1)_{i,1}(x), ..., η^(1)_{i,N_ϕ}(x)]†, i = 1, 2, and endowed with an inner product, denoted as ⟨η^(1), ξ^(1)⟩_1, between two elements η^(1)(x) and ξ^(1)(x) of this Hilbert space.
2. In the Hilbert space H_1, form the inner product of Equation (36) with a yet undefined vector-valued function a^(1)(x) ≡ [a^(1)_1(x), a^(1)_2(x)]† ∈ H_1 to obtain the relation given in Equation (43).
3. Using the definition of the adjoint operator in the Hilbert space H_1, recast the left-side of Equation (43) into the form given in Equation (44), where P^(1)[δu^(1)(x); a^(1)(x); α; δα]_{α^0} denotes the bilinear concomitant defined on the phase-space boundary x ∈ ∂Ω_x(α^0), and where A^(1)(α) is the operator formally adjoint to V^(1)(α).
4. Require the first term on the right-side of Equation (44) to represent the indirect-effect term defined in Equation (35), to obtain the relation given in Equation (46).
5. Implement the boundary conditions given in Equation (37) into Equation (44) and eliminate the remaining unknown boundary values of the functions δϕ and δψ from the expression of the bilinear concomitant P^(1)[δu^(1)(x); a^(1)(x); α]_{α^0} by selecting appropriate boundary conditions for the function a^(1)(x), so as to ensure that Equation (46) is well-posed while being independent of unknown values of δϕ, δψ and δα.
6. The boundary conditions thus chosen for the function a^(1)(x) are represented by Equation (48). The selection of these boundary conditions eliminates the appearance of the unknown boundary values of δu^(1)(x) in the bilinear concomitant and reduces it to a residual quantity that contains boundary terms involving only known values of ϕ(x), ψ(x), a^(1)(x), α and δα. This residual quantity will be denoted as P̂^(1)[ϕ; ψ; a^(1); α; δα]. In general, this residual quantity does not automatically vanish, although it may do so occasionally.
7. The system of equations comprising Equation (46) together with the boundary conditions represented by Equation (48) constitutes the 1st-Level Adjoint Sensitivity System (1st-LASS). The solution a^(1)(x) of the 1st-LASS will be called the 1st-level adjoint function. The 1st-LASS is called "first-level" (as opposed to "first-order") because it does not contain any differentials or functional derivatives, but its solution a^(1)(x) will be used below to compute the first-order sensitivities of the response with respect to the model parameters. This terminology will also be used in the sequel, when deriving the expressions for the 2nd- and 3rd-order sensitivities.
8. It follows from Equations (43) and (44) that the relation given in Equation (49) holds.
9. Recalling that the first term on the right-side of Equation (49) is, in view of Equation (46), the indirect-effect term δR(ϕ^0, ψ^0, α^0; δϕ, δψ)_ind, it follows from Equation (49) that the indirect-effect term can be expressed in terms of the 1st-level adjoint function a^(1)(x), as shown in Equation (50). As indicated by the last equality in Equation (50), the variations δϕ and δψ have been eliminated from the original expression of the indirect-effect term, which now depends instead on the 1st-level adjoint function a^(1)(x), obtained as indicated in Equation (46). It is very important to note that the 1st-LASS is independent of the parameter variations δα.
Hence, the 1st-LASS needs to be solved only once (as opposed to the 1st-LVSS, which would need to be solved anew for each parameter variation) to determine the 1st-level adjoint function a^(1)(x). Subsequently, the indirect-effect term is computed efficiently and exactly by simply performing the integrations over the 1st-level adjoint function a^(1)(x), as indicated on the right-side of Equation (50).
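The essence of the construction above can be condensed into two schematic relations. The notation is generic: A^(1), a^(1) and the residual concomitant P̂^(1) are from the text, while s^(1) and q^(1) merely stand in for the precise right-sides given in Equations (46) and (36), respectively.

```latex
% Schematic form of the 1st-LASS (cf. Equation (46)) and of the
% indirect-effect term (cf. Equation (50)); the hat denotes the
% residual bilinear concomitant, evaluated using known values only:
A^{(1)}(\alpha)\,a^{(1)}(x) = s^{(1)}\!\big(\varphi,\psi;\alpha\big),
\qquad
\{\delta R\}_{ind}
  = \big\langle a^{(1)},\, q^{(1)} \big\rangle_{1}
  - \hat{P}^{(1)}\!\big(\varphi,\psi;a^{(1)};\alpha;\delta\alpha\big).
```

The key structural point is that the left relation contains no parameter variations δα, so a single adjoint solve serves all parameters.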
As indicated in Equation (33), the total 1st-order sensitivity of the response R[ϕ(x), ψ(x); α] to the model parameters is obtained by adding the expression of the direct-effect term defined in Equation (34) to that of the indirect-effect term obtained in Equation (50), which yields the expression presented in Equation (51). The expression in Equation (51) no longer depends on the variations δϕ and δψ, which are expensive to compute, but instead depends on the 1st-level adjoint function a^(1)(x). In particular, this expression also reveals that the sensitivities of the response R[ϕ(x), ψ(x); α] to parameters that characterize the system's boundary and/or internal interfaces can arise from both the direct-effect and the indirect-effect terms. It also follows from Equation (51) that the total 1st-order sensitivity δR(ϕ^0, ψ^0; a^(1); α^0; δα) can be expressed in terms of the 1st-level adjoint function as shown in Equation (52), where the quantities S^(1)[j; ϕ(x), ψ(x); a^(1)(x); α] represent the 1st-order response sensitivities, evaluated at the nominal values of the state functions and model parameters. In particular, if the residual bilinear concomitant is non-zero, the functions S^(1)[j; ϕ(x), ψ(x); a^(1)(x); α] would contain suitably defined Dirac delta-functionals for expressing the respective non-zero boundary terms as volume integrals over the phase-space of the independent variables. Dirac delta-functionals would also be needed to represent, within S^(1)[j; ϕ(x), ψ(x); a^(1)(x); α], the terms containing the derivatives of the boundary end-points with respect to the model and/or response parameters.
In the particular case in which the model's response depends only on the forward state function ϕ(x), the sensitivities ∂R[ϕ(x); a^(1)_1(x); α]/∂α_j, j = 1, ..., N_α, would depend only on the 1st-level adjoint function a^(1)_1(x). The computational requirements of the various methods for obtaining the first-order sensitivities can be compared as follows:
A. The 1st-CASAM requires a single large-scale computation, for computing the first-level adjoint function a^(1)(x).
B. The Forward Sensitivity Analysis Methodology (FSAM) requires 2N_α large-scale computations: N_α large-scale computations for computing the first-order derivatives of the forward state function ϕ(α; x) and N_α large-scale computations for computing the first-order derivatives of the adjoint state function ψ(α; x).
C. The finite-difference (FD) method requires 4N_α large-scale computations: 2N_α large-scale computations for computing the forward state functions ϕ(α_j ± h_j; x) and 2N_α large-scale computations for computing the adjoint state functions ψ(α_j ± h_j; x).
If the response depends only on the forward state function, the 1st-LASS simplifies accordingly, and a similar simplification occurs in the boundary conditions represented by Equation (48). Likewise, if the response depends only on the adjoint state function, the corresponding simplifications occur in the 1st-LASS and in the boundary conditions represented by Equation (48). Hence, if the response depends only on the forward or only on the adjoint state function, then only half of the number of large-scale computations discussed in points A-C, above, would be needed.
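The operation counts in points A-C can be tabulated with a short sketch. The function below is illustrative only; the halving for a response depending on just one state function is applied to the FSAM and FD counts, since the 1st-CASAM already requires only a single adjoint solve.

```python
def first_order_cost(n_alpha: int, single_state: bool = False) -> dict:
    """Count the large-scale computations needed to obtain all N_alpha
    first-order sensitivities, per points A-C above.  If the response
    depends only on the forward (or only on the adjoint) state function,
    the FSAM and FD counts are halved, as discussed in the text."""
    half = 2 if single_state else 1
    return {
        "1st-CASAM": 1,                # one adjoint solve of the 1st-LASS
        "FSAM": 2 * n_alpha // half,   # derivatives of phi and of psi
        "FD": 4 * n_alpha // half,     # phi and psi at alpha_j +/- h_j
    }

# Example with a parameter count of the size encountered below:
print(first_order_cost(7477))
```

The gap between the single adjoint solve and the parameter-proportional counts is what makes the adjoint approach attractive as N_α grows.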
It is evident that the 1st-CASAM is the most efficient of all of the above methods for computing the first-order sensitivities of scalar-valued responses to parameters, and the relative superiority of the 1st-CASAM over the other methods increases as the number of model parameters increases. This efficiency was illustrated in [10][11][12][13][14][15], which presented the CPU-times needed to compute the first-order sensitivities of the response to the model parameters for an OECD/NEA reactor physics benchmark [16], which consists of a plutonium metal sphere surrounded by a spherical shell made of polyethylene. The benchmark's response of interest was the experimentally measured leakage of particles (neutrons) through the sphere's outer boundary, which depends only on the forward state function but not on the adjoint state function (adjoint flux); this response was therefore modeled using a particular form of the right-side of Equation (13) within the PARTISN neutron transport code [17]. To compute the 7477 non-zero 1st-order sensitivities of the benchmark's neutron leakage response to the benchmark's parameters, the 1st-CASAM needed one large-scale adjoint computation for obtaining the 1st-level adjoint function [the equivalent of the function a^(1)_1(x)], which required 85 CPU-seconds. In addition, the 1st-CASAM required 10 CPU-seconds for computing the 7477 integrals over the adjoint function [i.e., the equivalent of computing Equation (50)], which yielded the numerical values of all 7477 1st-order sensitivities. In contradistinction, the FSAM required 7477 large-scale computations for solving the respective 1st-OFSS [i.e., the equivalent of solving Equations (28) and (29)], which amounted to 694 CPU-hours. The finite-difference (FD) approximation would have required 1388 CPU-hours for obtaining approximate values of the 7477 1st-order sensitivities.

Computation of the Second-Order Sensitivities of R[ϕ(x), ψ(x); α]
Since this work ultimately aims at deriving the explicit expressions of the 4th-order sensitivities of the response R[ϕ(x), ψ(x); α] with respect to the model parameters, the proliferation of indices, superscripts and subscripts is unavoidable. Nevertheless, "subscripted subscripts" can be avoided by using subscripts of the form j1 = 1, ..., N_α; j2 = 1, ..., j1, where the index j1 will replace the index j (which was used in Section 3.1) to index the 1st-order sensitivities, and where the index j2 will be used (in addition to j1) to index the 2nd-order sensitivities. Furthermore, anticipating the notation to be used in subsequent subsections, the index j3 will be used (in addition to the indices j1 and j2) to index the 3rd-order sensitivities, and the index j4 will be used (in addition to j1, j2 and j3) to index the 4th-order sensitivities.
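The triangular indexing convention described above (j2 ≤ j1, j3 ≤ j2, j4 ≤ j3) can be sketched as a small generator; the function name is illustrative only.

```python
def sensitivity_indices(n_alpha: int, order: int):
    """Generate the distinct multi-indices (j1, j2, ..., jk) labelling the
    k-th order sensitivities, following the convention described above:
    j1 = 1..N_alpha, j2 = 1..j1, j3 = 1..j2, j4 = 1..j3."""
    def rec(prefix, upper, remaining):
        if remaining == 0:
            yield tuple(prefix)
            return
        for j in range(1, upper + 1):
            yield from rec(prefix + [j], j, remaining - 1)
    yield from rec([], n_alpha, order)

# For order 2 this produces the N_alpha*(N_alpha + 1)/2 distinct pairs:
print(list(sensitivity_indices(3, 2)))
# -> [(1, 1), (2, 1), (2, 2), (3, 1), (3, 2), (3, 3)]
```

Each tuple corresponds to one distinct mixed or unmixed sensitivity, so no sensitivity is ever generated twice.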
There are N_α(N_α + 1)/2 distinct second-order sensitivities of the response with respect to the model and response parameters. Section 3.2.1, below, summarizes the computational aspects of using finite-difference formulas, and Section 3.2.2 describes the Forward Sensitivity Analysis Methodology for computing the 2nd-order response sensitivities to the model parameters. Section 3.2.3 presents the 2nd-Order Comprehensive Adjoint Sensitivity Analysis Methodology (2nd-CASAM) for computing the 2nd-order response sensitivities, following the concepts originally developed by Cacuci [7][8][9].

Finite-Difference Approximation Using Re-Computations with User-Modified Parameters
The second-order response sensitivities could be computed approximately by "brute-force" re-computations using standard finite differences. Thus, the second-order unmixed sensitivities can be calculated using the finite-difference formula shown in Equation (56), where h_j1 denotes a "judiciously chosen" variation in the parameter α_j1 around its nominal value α^0_j1. The 2nd-order mixed sensitivities, ∂²R(α)/∂α_j1∂α_j2, can be calculated by using the finite-difference formula shown in Equation (57), where h_j1 and h_j2 denote the variations in the parameters α_j1 and α_j2, respectively. The values of the perturbed responses R(α_j1 ± h_j1; α_j2 ± h_j2) appearing in these formulas are obtained by repeatedly re-solving Equations (1), (2), (9) and (10) using the changed parameter values α_j1 ± h_j1 and α_j2 ± h_j2. The values of h_j1 and h_j2 must be chosen "judiciously", by "trial and error", for each of the parameters α_j1 and α_j2; otherwise, the result produced by Equation (57) will be erroneous, significantly removed from the exact value of the derivative ∂²R(α)/∂α_j1∂α_j2. These finite-difference formulas introduce their intrinsic "methodological errors" of order O(h_j1²) and/or O(h_j1², h_j2²), as indicated in Equations (56) and (57), which are in addition to, and independent of, the errors that might be incurred in the computation of R(α). In other words, even if the computation of R(α) were perfect (error-free), the finite-difference formulas would nevertheless introduce their own intrinsic numerical errors into the computation of ∂²R(α)/∂α_j1∂α_j2.
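A common stencil consistent with the O(h²) methodological errors quoted above is the central-difference stencil; the sketch below assumes it (the specific toy response is illustrative only, and Equations (56) and (57) give the paper's exact formulas).

```python
def d2_unmixed(R, a, j, h):
    """Central-difference estimate of d^2 R / d a_j^2, with intrinsic
    methodological error O(h^2) (cf. Equation (56))."""
    ap, am = list(a), list(a)
    ap[j] += h
    am[j] -= h
    return (R(ap) - 2.0 * R(a) + R(am)) / h**2

def d2_mixed(R, a, j1, j2, h1, h2):
    """Central-difference estimate of d^2 R / d a_j1 d a_j2
    (cf. Equation (57)); needs four re-computations of the response."""
    def shifted(s1, s2):
        b = list(a)
        b[j1] += s1 * h1
        b[j2] += s2 * h2
        return R(b)
    return (shifted(+1, +1) - shifted(+1, -1)
            - shifted(-1, +1) + shifted(-1, -1)) / (4.0 * h1 * h2)

# Toy response R(a) = a0^2 * a1, for which the exact derivatives are
# d2R/da0^2 = 2*a1 and d2R/da0 da1 = 2*a0:
R = lambda a: a[0] ** 2 * a[1]
print(d2_unmixed(R, [2.0, 3.0], 0, 1e-3))          # ~ 6.0
print(d2_mixed(R, [2.0, 3.0], 0, 1, 1e-3, 1e-3))   # ~ 4.0
```

Each mixed sensitivity costs four full model re-solves, which is the origin of the quadratic cost scaling discussed below.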

Forward Sensitivity Analysis Methodology (FSAM)
Within the framework of the FSAM, the second-order sensitivities ∂²R(α)/∂α_j1∂α_j2, j1, j2 = 1, ..., N_α, of the response to the model and response parameters are obtained by computing the partial G-derivatives of the first-order sensitivities represented by Equation (32), which yields the result shown in Equation (58). As indicated by the functional dependence of the last term in Equation (58), the evaluation of the second-order response sensitivities requires prior knowledge of the 1st- and 2nd-order derivatives of the state functions with respect to the model parameters. The functions ∂ϕ(x)/∂α_j1 and ∂ψ(x)/∂α_j1 are obtained by solving the 1st-OFSS. The functions ∂²ϕ/∂α_j1∂α_j2 and ∂²ψ/∂α_j1∂α_j2 are the solutions of the Second-Order Forward Sensitivity System (2nd-OFSS), which is obtained by taking the partial G-derivative of the 1st-OFSS with respect to a generic model parameter α_j2 [cf. Equation (60)]. Evidently, the 1st-OFSS would need to be solved 2N_α times, with different right-sides, in order to compute the 1st-order derivatives ∂ϕ(x)/∂α_j1 and ∂ψ(x)/∂α_j1 with respect to all model parameters, which entails large-scale computations. Furthermore, there are N_α(N_α + 1)/2 distinct 2nd-order functions ∂²ϕ/∂α_j1∂α_j2 and just as many distinct 2nd-order functions ∂²ψ/∂α_j1∂α_j2, which means that the 2nd-OFSS would need to be solved N_α(N_α + 1) times to compute these 2nd-order functions. Hence, the computation of the N_α(N_α + 1)/2 distinct 2nd-order response sensitivities to the model parameters using the FSAM would necessitate a total of N_α² + 3N_α large-scale computations, evidently highlighting the impact of the "curse of dimensionality" [Bellman, 1957].

The 2nd-Order Comprehensive Adjoint Sensitivity Analysis Methodology (2nd-CASAM)
The starting point for the development of the 2nd-CASAM is the expression of the 1st-order sensitivities in terms of the 1st-level adjoint function, namely Equation (53), rather than the expression of the 1st-order sensitivities produced by the FSAM [cf. Equation (32)]. Thus, the 2nd-order total sensitivity of the model response is obtained by applying the definition of the 1st-order G-differential to Equation (53), which yields a direct-effect term δR^(1)[j1; u^(1)(x); a^(1)(x); α; δα]_dir, defined in Equation (68), and an indirect-effect term δR^(1)[j1; u^(1)(x); a^(1)(x); α; δu^(1)(x); δa^(1)]_ind, defined in Equation (69). Note that the left-side of Equation (69) is to be computed at the nominal parameter and state-function values, but the corresponding indication (namely, the superscript "zero") has been omitted in Equation (69) in order to simplify the notation.
The system of equations represented by Equation (76), together with the boundary conditions represented by Equation (77), will be called the "2nd-Level Variational Sensitivity System" (2nd-LVSS). The solution, δu^(2)(x), of the 2nd-LVSS will be called the "2nd-level variational sensitivity function". The 2nd-LVSS differs from the 2nd-OFSS in that the 2nd-LVSS involves first-order differentials of the state functions, whereas the 2nd-OFSS involves the second-order partial derivatives of the state functions with respect to the model parameters α_j2 and α_j1, j1 = 1, ..., N_α; j2 = 1, ..., j1. In principle, the 2nd-LVSS could be solved for each possible component of the vector of parameter variations δα to obtain the 2nd-level variational function δu^(2)(x). Subsequently, δu^(2)(x) could be used together with the known parameter variations δα in Equation (69) to compute the indirect-effect term. Computing the indirect-effect term by solving the 2nd-LVSS would require at least 2N_α(N_α + 1) large-scale computations. Therefore, solving the 2nd-LVSS is advantageous only if, for the problem under consideration, the number N_α of model and boundary parameters is considerably smaller than the number of responses of interest; this is rarely the case in practice, since most problems of practical interest are characterized by many model parameters and comparatively few responses.
The system of equations represented by Equation (85), together with the boundary conditions represented by Equation (87), constitutes the 2nd-Level Adjoint Sensitivity System (2nd-LASS). The solution a^(2)(j1; x) ≡ [a^(2)_1(j1; x), a^(2)_2(j1; x), a^(2)_3(j1; x), a^(2)_4(j1; x)]† ∈ H_2, j1 = 1, ..., N_α, of the 2nd-LASS will be called the 2nd-level adjoint function. The results provided in Equations (82), (83) and (85) are employed in Equation (69) to obtain the expression of the indirect-effect term in terms of the 2nd-level adjoint functions a^(2)(j1; x), for j1 = 1, ..., N_α, as shown in Equation (88). Inserting the expression that defines the vector q^(2)(u^(2); α; δα) from Equation (78) into Equation (88), and adding the resulting expression for the indirect-effect term to the expression of the direct-effect term given in Equation (68), yields the total second-order G-differential of the response R[ϕ(x), ψ(x); α], in which R^(2)[j2; j1; u^(2)(x); a^(2)(j1; x); α] denotes the 2nd-order partial sensitivity of the response with respect to the model parameters, evaluated at the nominal parameter values α^0, for j1, j2 = 1, ..., N_α. Since the 2nd-LASS is independent of the parameter variations δα, the exact computation of all of the partial second-order sensitivities R^(2)[j2; j1; u^(2)(x); a^(2)(j1; x); α] requires at most N_α large-scale (adjoint) computations using the 2nd-LASS, rather than O(N_α²) large-scale computations as would be required by forward methods. In component form, the equations comprising the 2nd-LASS are solved sequentially for each j1 = 1, ..., N_α. It is important to note that by solving the 2nd-LASS N_α times, the "off-diagonal" 2nd-order mixed sensitivities ∂²R/∂α_j1∂α_j2 will be computed twice, in two different ways (i.e., using distinct 2nd-level adjoint functions), thereby providing an independent, intrinsic (numerical) verification that the 1st- and 2nd-order response sensitivities are computed accurately. The information provided by the 1st-order sensitivities usually indicates which 2nd-order sensitivities are important and which could be neglected. Therefore, it is useful to prioritize the computation of the 2nd-order sensitivities by using the rankings of the relative magnitudes of the 1st-order sensitivities as a "priority indicator": the larger the magnitude of the relative 1st-order sensitivity, the higher the priority for computing the corresponding 2nd-order sensitivities. Also, since vanishing 1st-order sensitivities may indicate critical points of the response in the phase-space of the model parameters, it is also of interest to compute the 2nd-order sensitivities that correspond to vanishing 1st-order sensitivities. In practice, only those 2nd-order partial sensitivities which are deemed important would need to be computed.
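The intrinsic verification described above rests on the symmetry of mixed partial derivatives: computing ∂²R/∂α_j1∂α_j2 in the two possible orders must yield the same value. The toy sketch below mimics this check with a simple analytic response standing in for the adjoint computations; it is illustrative only and does not reproduce the adjoint machinery.

```python
def mixed_second(R, a, j1, j2, h=1e-4):
    """Estimate d^2 R / d a_j1 d a_j2 by differentiating first with
    respect to a_j2 (inner central difference) and then a_j1 (outer)."""
    def dR(b, j):  # central first derivative with respect to a_j
        bp, bm = list(b), list(b)
        bp[j] += h
        bm[j] -= h
        return (R(bp) - R(bm)) / (2 * h)
    outer_p, outer_m = list(a), list(a)
    outer_p[j1] += h
    outer_m[j1] -= h
    return (dR(outer_p, j2) - dR(outer_m, j2)) / (2 * h)

R = lambda a: a[0] ** 3 * a[1] + a[1] ** 2   # toy response
first_way  = mixed_second(R, [1.5, 2.0], 0, 1)  # d/da0 then d/da1
second_way = mixed_second(R, [1.5, 2.0], 1, 0)  # d/da1 then d/da0
print(first_way, second_way)  # both approximate 3*a0^2 = 6.75
```

In the 2nd-CASAM the two evaluations use distinct 2nd-level adjoint functions, so their agreement is a genuinely independent consistency check rather than a trivial repetition.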
Dirac delta-functionals may need to be used in Equation (90) for expressing the non-zero terms in the respective residual bilinear concomitant and/or the terms containing derivatives with respect to the lower- and upper-boundary points, so that the expression of the partial second-order sensitivities R^(2)[j2; j1; u^(2)(x); a^(2)(j1; x); α] can be written in a form suitable as the starting point for computing the 3rd-order sensitivities.

Comparison of Computational Requirements for Computing the Second-Order Response Sensitivities with Respect to the Model Parameters
Numerical results have been presented in [10][11][12][13][14][15] for the 27,956,503 non-zero 2nd-order sensitivities that correspond to the 7477 non-zero first-order sensitivities of the PERP benchmark's leakage response to the benchmark's parameters. These numerical results were computed using the PARTISN neutron transport code [17] running on a DELL desktop computer (AMD FX-8350) with an 8-core processor. A typical large-scale (forward or adjoint) computation required ca. 160 s of CPU-time, while the computation of a single 2nd-order sensitivity, including reading the adjoint functions and performing the integration over the adjoint functions, required ca. 0.046 s of CPU-time. Altogether, the computation of all 27,956,503 non-zero 2nd-order sensitivities of the PERP leakage response using the 2nd-CASAM required ca. 929 CPU-hours, comprising 735 CPU-hours for the 14,843 large-scale computations (needed to compute the 2nd-level adjoint functions) and 194 CPU-hours for performing the integrations needed to compute the respective unmixed and mixed second-order sensitivities.
In contradistinction, the FSAM would have required ca. 38,000 CPU-hours; using finite-differences (FD) would have required over 147,000 CPU-hours to compute the 27,956,503 nonzero 2nd-order sensitivities. Evidently, the 2nd-CASAM is the only method that can be used in practice for computing accurately and efficiently the 2nd-order response sensitivities for large-scale systems involving many model parameters.

Computation of the Third-Order Sensitivities of R[ϕ(x), ψ(x); α]
There are N_α(N_α + 1)(N_α + 2)/3! distinct 3rd-order sensitivities of the response R[ϕ(x), ψ(x); α] with respect to the model and response parameters. The finite-difference formulas for computing these sensitivities are presented in Section 3.3.1. The Forward Sensitivity Analysis Methodology (FSAM) for computing the 3rd-order response sensitivities is presented in Section 3.3.2, and the 3rd-Order Comprehensive Adjoint Sensitivity Analysis Methodology (3rd-CASAM) is presented in Section 3.3.3. Section 3.3.4 presents a comparison of the computational requirements for computing the third-order response sensitivities with respect to the model parameters in the case of the OECD/NEA benchmark [16].
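The counts of distinct sensitivities quoted here and in the following sections are binomial coefficients, C(N_α + k - 1, k) for order k, which a few lines can verify (the example value of 180 parameters is taken from the benchmark discussed below).

```python
from math import comb

def distinct_sensitivities(n_alpha: int, order: int) -> int:
    """Number of distinct sensitivities of a given order with respect to
    N_alpha parameters: C(N_alpha + order - 1, order)."""
    return comb(n_alpha + order - 1, order)

n = 180  # e.g., the benchmark's 180 total microscopic cross sections
print(distinct_sensitivities(n, 3))  # equals n*(n+1)*(n+2)/3!

# Cross-check against the closed forms quoted in the text:
assert distinct_sensitivities(n, 3) == n * (n + 1) * (n + 2) // 6
assert distinct_sensitivities(n, 4) == n * (n + 1) * (n + 2) * (n + 3) // 24
```

The factorial growth of these counts with the order k is the combinatorial face of the "curse of dimensionality" that the CASAM framework is designed to mitigate.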
In the particular case when the response depends only on the forward function ϕ(x), i.e., if the response has the functional dependence R[ϕ(x); α], the quantities involving the adjoint state function ψ(x) and the corresponding adjoint components of the variational functions vanish, so the 3rd-LVSS reduces to the 4 × 4 matrix-equation shown in Equation (133).
A reduction of the 3rd-LVSS to a 4 × 4 matrix-equation, which would be similar to the reduction shown in Equation (133), would also occur for a response R[ψ(x); α] which depended only on the adjoint function ψ(x).
Since the 3rd-LVSS equations depend on the parameter variations δα, solving them is prohibitively expensive computationally for large-scale systems involving many parameters. The need for solving the 3rd-LVSS can be avoided by expressing the indirect-effect term δR^(2)(j2; j1)_ind defined in Equation (109) in an alternative way, namely in terms of the solutions of a 3rd-Level Adjoint Sensitivity System (3rd-LASS), which does not involve any variations in the state functions. The 3rd-LASS is constructed by implementing the same sequence of logical steps as used to construct the 1st-LASS and the 2nd-LASS, namely: 1. Define a Hilbert space, denoted as H_3, comprising vector-valued elements with components of the form η^(3)_i(x) ≡ [η^(3)_{i,1}(x), ..., η^(3)_{i,N_ϕ}(x)]†, i = 1, ..., 8, and endowed with an inner product, denoted as ⟨η^(3), ξ^(3)⟩_3, between two elements η^(3)(x) ∈ H_3 and ξ^(3)(x) ∈ H_3.
The 3rd-LASS, together with the results provided in Equations (135) and (136), is employed in Equation (109) to obtain the expression of the indirect-effect term δR^(2)[j2; j1; u^(3)(j1; x); α; δu^(2); δa^(2)]_ind in terms of the 3rd-level adjoint function, as shown in Equation (149). As the identity in the last line of Equation (149) indicates, the dependence of the indirect-effect term on the variations δu^(2) and δa^(2)(j1) in the state functions has been eliminated, having been replaced by the dependence on the 3rd-level adjoint function a^(3)(j2; j1; x).

Comparison of Computational Requirements for Computing the Third-Order Response Sensitivities with Respect to Model Parameters
The largest third-order sensitivities of the leakage response of the OECD/NEA reactor physics benchmark [16] have been computed in [19][20][21], where it was shown that these large sensitivities (which reached relative values of the order of 10^4) involved the benchmark's 180 imprecisely known microscopic total cross sections. Using a DELL computer (AMD FX-8350) with an 8-core processor, the CPU-time for a typical adjoint computation (which represents a large-scale computation) of one of the eight adjoint functions needed for computing the 3rd-order response sensitivities to the benchmark's total microscopic cross sections is ca. 20 s, while the CPU-time for computing an integral (which represents a small-scale computation) over the adjoint function(s) appearing in the formula for a 3rd-order sensitivity is ca. 0.003 s. By applying the 3rd-CASAM, the computation of the 3rd-order sensitivities of the benchmark's leakage response to the 180 total microscopic cross sections required 33,301 large-scale adjoint computations, which took 175 CPU-hours. By comparison, computing these 3rd-order sensitivities using the FSAM would have required 1,004,730 large-scale forward computations (12,559 CPU-hours). Using finite-differences would have required 7,905,360 large-scale forward computations (98,817 CPU-hours) to obtain, at best, approximate values for these 3rd-order sensitivities. Evidently, the 3rd-CASAM is the only practical methodology capable of computing, without introducing methodological errors, 3rd-order sensitivities for large-scale systems involving many parameters.

Computation of the Fourth-Order Sensitivities of R[ϕ(x), ψ(x); α]
There are N_α(N_α + 1)(N_α + 2)(N_α + 3)/4! distinct 4th-order sensitivities of the response with respect to the model and response parameters. It is not practicable to compute these sensitivities using finite-difference formulas or the forward sensitivity analysis methodology. The finite-difference formulas for computing these sensitivities are presented in Section 3.4.1. The Forward Sensitivity Analysis Methodology (FSAM) for computing the 4th-order response sensitivities is presented in Section 3.4.2, while the 4th-Order Comprehensive Adjoint Sensitivity Analysis Methodology (4th-CASAM) is presented in Section 3.4.3.

Discussion and Conclusions
This work has presented the 4th-Order Comprehensive Adjoint Sensitivity Analysis Methodology (4th-CASAM), which enables the efficient computation of the exact expressions of the 4th-order functional derivatives ("sensitivities") of a general system response, which depends on both the forward and adjoint state functions, with respect to all of the parameters underlying the respective forward and adjoint systems. Since nonlinear operators do not admit adjoint operators (only linearized versions of nonlinear operators can admit a bona-fide adjoint operator), responses that simultaneously depend on forward and adjoint functions can arise only in conjunction with linear systems, thus providing the fundamental motivation for treating linear systems in their own right rather than just as particular cases of nonlinear systems. Very importantly, the computation of the 2nd-, 3rd-, and 4th-level adjoint functions uses the same forward and adjoint solvers (i.e., computer codes) as used for solving the original forward and adjoint systems, thus requiring relatively minor additional software development for computing the various-order sensitivities.
The 4th-CASAM presented in this work is the only practically implementable methodology for obtaining and subsequently computing the exact expressions (i.e., free of methodologically-introduced approximations) of the 1st-, 2nd-, 3rd-, and 4th-order sensitivities (i.e., functional derivatives) of responses to system parameters, for coupled forward/adjoint linear systems. By enabling the practical computation of any and all of the 1st-, 2nd-, 3rd-, and 4th-order response sensitivities to model parameters, the 4th-CASAM makes it possible to compare the relative values of the sensitivities of various order, in order to assess which sensitivities are important and which may actually be neglected, thus enabling future investigations of the convergence of the (multivariate) Taylor series expansion of the response in terms of parameter variations, as well as investigating the actual validity of expressions that are derived from Taylor-expansion of the response (e.g., response variances/covariance, skewness, kurtosis, etc.) as a function of the model's parameters.
The 4th-CASAM presented in this work provides a fundamentally important step in the quest to overcome the "curse of dimensionality" in sensitivity analysis, uncertainty quantification, and predictive modeling. Ongoing work aims at generalizing the 4th-CASAM, attempting to provide a general methodology for the practical, efficient, and exact computation of arbitrarily-high order sensitivities of responses to model parameters, both for linear and nonlinear systems.