Illustrating Important Effects of Second-Order Sensitivities on Response Uncertainties in Reactor Physics

Abstract: This paper illustrates the relative importance of the largest first- and second-order sensitivities of the leakage response of an OECD/NEA reactor physics benchmark (a polyethylene-reflected plutonium sphere) to the benchmark's underlying total cross sections. It will be shown that numerous 2nd-order sensitivities of the leakage response with respect to the total cross sections are significantly larger than the largest corresponding 1st-order sensitivities. In particular, the contributions of the 2nd-order sensitivities cause the mean (expected) value of the response to differ appreciably from its computed value and also cause the response distribution to be skewed towards positive values relative to the mean. Neglecting these large 2nd-order sensitivities would cause very large non-conservative errors by under-reporting the response's variance and expected value. The results presented in this paper also underscore the need for obtaining reliable cross section covariance data, which are currently unavailable. Finally, comparing the CPU-times needed for computations, this paper demonstrates that the Second-Order Adjoint Sensitivity Analysis Methodology is the only practical method for computing 2nd-order sensitivities exactly, without introducing methodological errors, for large-scale systems characterized by many uncertain parameters.


Introduction
The work reported here is based on the PHYSOR-2020 plenary invited conference paper entitled "On the importance of second-order response sensitivities to nuclear data in reactor physics uncertainty analysis". This paper was selected by the PHYSOR-2020 Technical Committee for publication in the Special Issue of the Journal of Nuclear Engineering. To avoid duplicate publication, the contents and the title of the conference paper have been rewritten to make the present work suitable for publication in this journal's Special Issue.
The OECD Nuclear Energy Agency (OECD/NEA) International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook [1] describes several fundamental subcritical reactor physics benchmarks that use a 4.5 kg alpha-phase plutonium sphere constructed at Los Alamos National Laboratory in 1980 [2]. This sphere was initially used as a neutron source for subcritical experiments aimed at estimating the reactivity worth of beryllium reflectors (and was therefore colloquially called the "BeRP ball"), but it was subsequently also used in other subcritical experiments with tungsten, nickel, and/or polyethylene reflectors. Miller et al. [3] computationally evaluated neutron multiplicity measurements for the "polyethylene-reflected BeRP ball", noting significant disagreement between their computational results and the corresponding measurements.
In addition to the nuclear data investigated in [3], the "polyethylene-reflected plutonium sphere" benchmark contains other uncertain data, the most prominent being the total neutron cross sections. The 1st- and 2nd-order sensitivities of this benchmark's leakage response to the benchmark's cross sections have been computed by applying Cacuci's Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) [4]; the most significant results thus obtained are summarized in this work. Many of the 2nd-order sensitivities were found to be much larger than the corresponding 1st-order sensitivities, shifting the leakage response's mean value significantly away from its computed value and causing asymmetries in the response distribution. Overall, the effects of the 2nd-order sensitivities with respect to the total cross sections on the leakage response's expected value, variance, and skewness are much larger than the effects stemming from the corresponding 1st-order sensitivities.

Largest 1st- and 2nd-Order Sensitivities of the PERP Benchmark's Leakage Response with Respect to the Benchmark's Nuclear Data
The "polyethylene-reflected plutonium sphere" benchmark, henceforth designated by the acronym "PERP", contains two materials: "Material 1" (core) comprises four isotopes (239Pu, 240Pu, 69Ga, 71Ga), while "Material 2" (reflector) comprises two isotopes (C and 1H). The neutron flux distribution within the PERP benchmark was computed by using the PARTISN [5] multigroup discrete ordinates particle transport code, which solves the following multigroup approximation of the neutron transport equation with a spontaneous fission source:

$B^g(\alpha)\,\varphi^g(r,\Omega) = Q^g(r), \quad g = 1,\ldots,G,$ (1)

$\varphi^g(r_d,\Omega) = 0, \quad \Omega \cdot n < 0, \quad g = 1,\ldots,G,$ (2)

where $r_d$ denotes the radius of the spherical benchmark, and where

$B^g(\alpha)\,\varphi^g(r,\Omega) \triangleq \Omega \cdot \nabla \varphi^g(r,\Omega) + \Sigma_t^g(r)\,\varphi^g(r,\Omega) - \sum_{g'=1}^{G} \int_{4\pi} d\Omega'\, \Sigma_s^{g' \to g}(r,\Omega' \to \Omega)\,\varphi^{g'}(r,\Omega') - \chi^g(r) \sum_{g'=1}^{G} \int_{4\pi} d\Omega'\, (\nu\Sigma_f)^{g'}(r)\,\varphi^{g'}(r,\Omega'),$ (3)

$Q^g(r) \triangleq \sum_{k=1}^{N_f} \lambda_k\, N_{k,1}\, F_k^{SF}\, \nu_k^{SF}\, \frac{\int_{E^{g+1}}^{E^g} dE\, e^{-E/a_k} \sinh\sqrt{b_k E}}{\int_0^\infty dE\, e^{-E/a_k} \sinh\sqrt{b_k E}},$ (4)

in which $\nu_k^{SF}$ denotes the average number of neutrons emitted per spontaneous fission of actinide nuclide $k$. The PARTISN [5] computations of the neutron flux used the MENDF71X [6] 618-group cross section data collapsed to G = 30 energy groups, as well as a P3 Legendre expansion of the scattering cross section, an angular quadrature of S256, and a fine-mesh spacing of 0.005 cm (comprising 759 meshes for the plutonium sphere of radius 3.794 cm and 762 meshes for the polyethylene shell of thickness 3.81 cm). The quantities appearing in Equations (1)-(4) are defined as follows:

1. The quantity $\varphi^g(r,\Omega)$ is the customary "group-flux" for group $g$; it is the unknown state-function obtained by solving Equations (1) and (2), where $r_d$ is the radius of the PERP sphere and the vector $n$ denotes the outward unit normal at each point on the sphere's outer boundary, denoted as $S_b$.

2. The total cross section $\Sigma_t^g(r)$ for energy group $g$, $g = 1,\ldots,G$, and material $m$ is computed using the following expression:

$\Sigma_t^g(r) = \sum_{m=1}^{M} \sum_{i=1}^{I} N_{i,m}\, \sigma_{t,i}^g, \quad \sigma_{t,i}^g \triangleq \sigma_{f,i}^g + \sigma_{c,i}^g + \sum_{g'=1}^{G} \sigma_{s,l=0,i}^{g \to g'},$ (5)

where $\sigma_{f,i}^g$ and $\sigma_{c,i}^g$ denote, respectively, the tabulated group microscopic fission and neutron capture cross sections for group $g$, $g = 1,\ldots,G$. Other nuclear reactions are negligible in the PERP benchmark.

3. The quantity $N_{i,m}$ denotes the atom density of isotope $i$ in material $m$; $i = 1,\ldots,I$, $m = 1,\ldots,M$, where $I$ denotes the total number of isotopes and $M$ denotes the total number of materials. The computation of $N_{i,m}$ uses the well-known expression:

$N_{i,m} = \frac{\rho_m\, w_{i,m}\, N_A}{A_i},$ (6)

where $\rho_m$ denotes the mass density of material $m$, $m = 1,\ldots,M$; $w_{i,m}$ denotes the weight fraction of isotope $i$ in material $m$; $A_i$ denotes the atomic weight of isotope $i$, $i = 1,\ldots,I$; and $N_A$ denotes Avogadro's number. For the PERP benchmark, $I = 6$ and $M = 2$.

4. The quantity $\Sigma_s^{g' \to g}(r, \Omega' \to \Omega)$ represents the scattering transfer cross section from energy group $g'$, $g' = 1,\ldots,G$, into energy group $g$, $g = 1,\ldots,G$. It is computed in terms of the $l$-th order Legendre coefficients $\sigma_{s,l,i}^{g' \to g}$ (of the Legendre-expanded microscopic scattering cross section from energy group $g'$ into energy group $g$ for isotope $i$), which are tabulated parameters, in the following finite-order expansion:

$\Sigma_s^{g' \to g}(r, \Omega' \to \Omega) = \sum_{m=1}^{M} \sum_{i=1}^{I} N_{i,m} \sum_{l=0}^{ISCT} (2l+1)\, \sigma_{s,l,i}^{g' \to g}\, P_l(\Omega' \cdot \Omega),$ (7)

where $ISCT = 3$ denotes the order of the finite Legendre-polynomial expansion. The expressions in Equations (5) and (7) indicate that the zeroth-order (i.e., $l = 0$) scattering cross sections must be considered separately from the higher-order (i.e., $l \geq 1$) scattering cross sections, since the former contribute to the total cross sections while the latter do not.

5. The quantity $N_f$ denotes the total number of spontaneous-fission isotopes. The spontaneous-fission isotopes in the PERP benchmark are "isotope 1" (239Pu) and "isotope 2" (240Pu), so $N_f = 2$. The spontaneous-fission neutron spectrum of each of these isotopes is approximated by a Watt fission spectrum using the evaluated parameters $a_k$ and $b_k$. The decay constant for actinide nuclide $k$ is denoted as $\lambda_k$, and $F_k^{SF}$ denotes the fraction of decays that are spontaneous fissions (the "spontaneous-fission branching fraction").

6. PARTISN [5] computes the quantity $(\nu\Sigma_f)^g$ by directly using the quantities $(\nu\sigma)_{f,i}^g$, which are provided in nuclear data files for each isotope $i$ and energy group $g$, as follows:

$(\nu\Sigma_f)^g(r) = \sum_{m=1}^{M} \sum_{i=1}^{I} N_{i,m}\, (\nu\sigma)_{f,i}^g.$ (8)

7. The quantity $\chi^g(r)$ quantifies the fission spectrum in energy group $g$.

8. The numerical model of the PERP benchmark contains 7477 nonzero parameters which are subject to uncertainties, as follows: (i) 180 group-averaged microscopic total cross sections; (ii) 7101 group-averaged microscopic scattering cross sections; (iii) 60 group-averaged microscopic fission cross sections; (iv) 60 "averaged" numbers of neutrons per fission; (v) 60 group-averaged fission spectrum parameters; (vi) 10 external neutron source parameters; and (vii) 6 isotopic number densities. The vector $\alpha$, which appears in the expression of the Boltzmann operator $B^g(\alpha)$, represents the "vector of uncertain model parameters".
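The atom-density expression in Equation (6) is straightforward to evaluate numerically. The short Python sketch below illustrates it for a hypothetical pure-polyethylene material; the density and weight fractions are illustrative placeholders derived from the CH2 stoichiometry, not the PERP benchmark's evaluated data.

```python
# Atom density N_{i,m} = rho_m * w_{i,m} * N_A / A_i  (atoms/cm^3),
# illustrating Equation (6). The material data below are placeholder
# values for demonstration, not the PERP benchmark's evaluated data.
N_AVOGADRO = 6.02214076e23  # atoms/mol

def atom_density(rho_m, w_im, A_i):
    """rho_m: material mass density [g/cm^3]; w_im: isotope weight
    fraction; A_i: isotope atomic weight [g/mol]."""
    return rho_m * w_im * N_AVOGADRO / A_i

# Hypothetical reflector-like material: pure polyethylene (CH2)_n,
# rho ~ 0.95 g/cm^3, weight fractions from the CH2 stoichiometry.
rho = 0.95
w_C = 12.011 / (12.011 + 2 * 1.008)   # carbon weight fraction
w_H = 1.0 - w_C                       # hydrogen weight fraction
N_C = atom_density(rho, w_C, 12.011)
N_H = atom_density(rho, w_H, 1.008)
print(f"N_C = {N_C:.4e}  N_H = {N_H:.4e}  atoms/cm^3")
assert abs(N_H / N_C - 2.0) < 1e-6    # CH2: two H atoms per C atom
```

The final assertion checks the stoichiometric consistency of the computed densities (two hydrogen atoms per carbon atom), a useful sanity check on any implementation of Equation (6).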
The fundamental quantity underlying the neutron measurements is the total leakage of neutrons leaving the PERP sphere, represented mathematically as follows:

$L(\alpha) \triangleq \int_{S_b} dS \sum_{g=1}^{G} \int_{\Omega \cdot n > 0} d\Omega\, (\Omega \cdot n)\, \varphi^g(r,\Omega).$ (9)
The expressions of the 1st-order sensitivities of the leakage response to the model parameters underlying the total cross sections are as follows:

$\frac{\partial L(\alpha)}{\partial \sigma_{t,i}^g} = -N_{i,m} \int_V dV \int_{4\pi} d\Omega\, \psi^{(1),g}(r,\Omega)\, \varphi^g(r,\Omega), \quad i = 1,\ldots,I; \; g = 1,\ldots,G,$ (10)

where the multigroup adjoint fluxes $\psi^{(1),g}(r,\Omega)$ are the solutions of the following 1st-Level Adjoint Sensitivity System (1st-LASS):

$A^{(1),g}(\alpha)\, \psi^{(1),g}(r,\Omega) = \Omega \cdot n\, \delta(r - r_d), \quad g = 1,\ldots,G,$ (11)

$\psi^{(1),g}(r_d,\Omega) = 0, \quad \Omega \cdot n > 0, \quad g = 1,\ldots,G,$ (12)

where $A^{(1),g}(\alpha)$ denotes the operator adjoint to $B^g(\alpha)$:

$A^{(1),g}(\alpha)\, \psi^{(1),g}(r,\Omega) \triangleq -\Omega \cdot \nabla \psi^{(1),g}(r,\Omega) + \Sigma_t^g(r)\, \psi^{(1),g}(r,\Omega) - \sum_{g'=1}^{G} \int_{4\pi} d\Omega'\, \Sigma_s^{g \to g'}(r, \Omega \to \Omega')\, \psi^{(1),g'}(r,\Omega') - (\nu\Sigma_f)^g(r) \sum_{g'=1}^{G} \int_{4\pi} d\Omega'\, \chi^{g'}(r)\, \psi^{(1),g'}(r,\Omega').$ (13)

The 2nd-order sensitivities of the leakage response to the parameters involved in the definitions of the total cross sections are given by the following expression:

$\frac{\partial^2 L(\alpha)}{\partial \sigma_{t,j}\, \partial \sigma_{t,j_2}} = -\sum_{g=1}^{G} \int_V dV \int_{4\pi} d\Omega\, \frac{\partial \Sigma_t^g(r)}{\partial \sigma_{t,j_2}} \left[ \psi_{1,j}^{(2),g}(r,\Omega)\, \psi^{(1),g}(r,\Omega) + \psi_{2,j}^{(2),g}(r,\Omega)\, \varphi^g(r,\Omega) \right], \quad j, j_2 = 1,\ldots,J_{\sigma t},$ (14)

where the 2nd-level adjoint functions $\psi_{1,j}^{(2),g}$, $j = 1,\ldots,J_{\sigma t}$; $g = 1,\ldots,G$, and $\psi_{2,j}^{(2),g}$, $j = 1,\ldots,J_{\sigma t}$; $g = 1,\ldots,G$, are the solutions of the following 2nd-Level Adjoint Sensitivity System (2nd-LASS):

$B^g(\alpha)\, \psi_{1,j}^{(2),g}(r,\Omega) = -\frac{\partial \Sigma_t^g(r)}{\partial \sigma_{t,j}}\, \varphi^g(r,\Omega), \quad j = 1,\ldots,J_{\sigma t}; \; g = 1,\ldots,G,$ (15)

$\psi_{1,j}^{(2),g}(r_d,\Omega) = 0, \quad \Omega \cdot n < 0,$ (16)

$A^{(1),g}(\alpha)\, \psi_{2,j}^{(2),g}(r,\Omega) = -\frac{\partial \Sigma_t^g(r)}{\partial \sigma_{t,j}}\, \psi^{(1),g}(r,\Omega), \quad \psi_{2,j}^{(2),g}(r_d,\Omega) = 0, \quad \Omega \cdot n > 0.$ (17)

All of the quantities appearing in Equations (10)-(17) are evaluated at the nominal values of the model parameters. The details of these computations are presented in [7]. All of the computations were performed on a DELL computer with an 8-core AMD FX-8350 processor.
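The computational economy of the adjoint approach in Equations (10)-(13) can be illustrated on a toy linear-algebra analogue rather than the transport equation itself: for a response $R = c^T x$ constrained by $A(p)\,x = b$, a single adjoint solve $A^T \psi = c$ yields the sensitivity $dR/dp = -\psi^T (\partial A/\partial p)\, x$ for every parameter, without additional forward solves. The matrix, vectors, and parameter dependence below are invented for the demonstration.

```python
# Toy analogue of the adjoint sensitivity principle behind
# Eqs. (10)-(13): for a response R = c^T x with A(p) x = b, one
# adjoint solve A^T psi = c yields dR/dp = -psi^T (dA/dp) x.
# This is a sketch on an invented 5x5 system, not the transport
# setting of the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A0 = np.eye(n) * 3.0 + rng.normal(size=(n, n)) * 0.1
b = rng.normal(size=n)
c = rng.normal(size=n)

def A(p):
    # Parameter p scales the diagonal, loosely mimicking a total cross
    # section entering the collision term of the transport operator.
    return A0 + p * np.eye(n)

p0 = 0.5
x = np.linalg.solve(A(p0), b)       # one "forward" solve
psi = np.linalg.solve(A(p0).T, c)   # one "adjoint" solve
dR_dp_adjoint = -psi @ np.eye(n) @ x

# Central-difference check (an Eq. (19)-style approximation):
h = 1e-6
R = lambda p: c @ np.linalg.solve(A(p), b)
dR_dp_fd = (R(p0 + h) - R(p0 - h)) / (2 * h)
print(dR_dp_adjoint, dR_dp_fd)
assert abs(dR_dp_adjoint - dR_dp_fd) < 1e-6
```

The same economy underlies the 2nd-LASS in Equations (15)-(17): one forward-like and one adjoint-like solve per parameter suffice to evaluate an entire row of the second-order sensitivity matrix via quadratures.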
By comparison, approximately 694 CPU-hours would be required to compute the first-order response sensitivities using the two-point central finite-difference scheme shown below:

$\frac{\partial R(\alpha)}{\partial \alpha_i} \approx \frac{R_{i+1} - R_{i-1}}{2\, \delta\alpha_i},$ (19)

where $R_{i+1} \triangleq R(\alpha_i^0 + \delta\alpha_i)$ and $R_{i-1} \triangleq R(\alpha_i^0 - \delta\alpha_i)$. Note that Equation (19) introduces its own intrinsic methodological error when approximating the respective derivative; this error would be present even if the numerical computation of the response $R(\alpha_i)$ at the respective "sampling point" $\alpha_i$ were perfect. Additionally, one must determine by trial and error a suitable value for $\delta\alpha_i$, because if $\delta\alpha_i$ is either too large or too small, Equation (19) produces erroneous results. The 694 CPU-hours required to compute the first-order response sensitivities approximately using Equation (19) do not include the time needed to find a value of $\delta\alpha_i$ that yields a satisfactory approximation of the respective derivative.
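The step-size dilemma noted above can be demonstrated in a few lines. The sketch below applies the central-difference formula of Equation (19) to $f(x) = e^x$, whose derivative at $x = 1$ is exactly $e$; it is a generic illustration, not the transport-equation response.

```python
# Step-size dilemma of the central-difference formula in Eq. (19):
# a step that is too large incurs truncation error, while one that is
# too small incurs round-off error. Demonstrated on f(x) = exp(x),
# whose derivative at x = 1 is exactly e.
import math

x0 = 1.0
exact = math.exp(x0)

def central_diff(h):
    return (math.exp(x0 + h) - math.exp(x0 - h)) / (2.0 * h)

for h in (1e-1, 1e-5, 1e-13):
    print(f"h = {h:.0e}  |error| = {abs(central_diff(h) - exact):.3e}")

# A moderate step is accurate; an overly large step is not:
assert abs(central_diff(1e-5) - exact) < 1e-8
assert abs(central_diff(1e-1) - exact) > 1e-3
```

For an expensive code such as PARTISN, each trial value of the step costs additional full response computations, which is exactly the hidden cost the text describes.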
As indicated by Equation (14), the computation of the 27,956,503 distinct nonzero second-order sensitivities of the response (with respect to pairs of nonzero parameters) requires, for each parameter $j$, one forward PARTISN computation to obtain the function $\psi_{1,j}^{(2),g}$ by solving Equations (15) and (16), as well as one adjoint PARTISN computation to obtain the function $\psi_{2,j}^{(2),g}$ by solving Equation (17); these 2nd-level adjoint functions are subsequently used in the quadratures indicated in Equation (14) to obtain the unmixed and mixed second-order sensitivities.
Consider, for comparison, the simplest finite-difference schemes for computing the second-order response sensitivities, namely:

$\frac{\partial^2 R(\alpha)}{\partial \alpha_i^2} \approx \frac{R_{i+1} - 2R_i + R_{i-1}}{(\delta\alpha_i)^2},$ (20)

$\frac{\partial^2 R(\alpha)}{\partial \alpha_i\, \partial \alpha_j} \approx \frac{R_{i+1,j+1} - R_{i+1,j-1} - R_{i-1,j+1} + R_{i-1,j-1}}{4\, \delta\alpha_i\, \delta\alpha_j},$ (21)

where $R_{i \pm 1, j \pm 1} \triangleq R(\alpha_i^0 \pm \delta\alpha_i,\, \alpha_j^0 \pm \delta\alpha_j)$, etc., are computed response values at the indicated "sampling points". Note again that the finite-difference schemes introduce their own intrinsic methodological errors when approximating the respective derivatives; these errors would be present even if the numerical computation of the response at the respective sampling points were perfect. Furthermore, one must find by trial and error suitable values for $\delta\alpha_i$, because if $\delta\alpha_i$ is either too large or too small, the finite-difference schemes produce erroneous results. Not counting the trial-and-error search for suitable values of $\delta\alpha_i$, the CPU time (using a DELL computer with an 8-core AMD FX-8350 processor) needed to compute the 27,956,503 distinct nonzero 2nd-order sensitivities by using Equations (20) and (21) at the respective 111,811,058 sampling points would be 592 CPU-years. Furthermore, these sensitivities would not be exact (as those produced by the 2nd-ASAM are), but would contain second-order errors. Evidently, it is not feasible to compute the 2nd-order sensitivities using "sampling approaches".
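The sampling-point count quoted above can be reproduced with elementary bookkeeping, and the four-point stencil of Equation (21) can be verified on a function with a known mixed derivative. In the sketch below, the per-sensitivity point counts (4 per mixed pair, 2 extra per unmixed sensitivity beyond the nominal-value computation) are inferred from the totals quoted in the text; the test function is invented for the demonstration.

```python
# Four-point stencil for mixed 2nd-order derivatives (Eq. (21)) and
# the sampling-point bookkeeping behind the CPU-time estimate.
# Verified on f(x, y) = x**2 * y + x * y**3, whose exact mixed
# derivative is d2f/dxdy = 2*x + 3*y**2.
def f(x, y):
    return x * x * y + x * y ** 3

def mixed_second(fun, x0, y0, hx, hy):
    """Eq. (21)-style central-difference approximation of d2f/dxdy."""
    return (fun(x0 + hx, y0 + hy) - fun(x0 + hx, y0 - hy)
            - fun(x0 - hx, y0 + hy) + fun(x0 - hx, y0 - hy)) / (4 * hx * hy)

x0, y0, h = 1.0, 2.0, 1e-4
exact = 2 * x0 + 3 * y0 ** 2          # = 14.0
approx = mixed_second(f, x0, y0, h, h)
assert abs(approx - exact) < 1e-5

# Sampling-point bookkeeping for the PERP benchmark's 7477 parameters:
# 4 points per mixed pair (Eq. (21)) and 2 extra points per unmixed
# sensitivity (Eq. (20) reuses the nominal-value computation).
n = 7477
n_mixed = n * (n - 1) // 2
assert n_mixed + n == 27_956_503            # distinct 2nd-order sensitivities
assert 4 * n_mixed + 2 * n == 111_811_058   # sampling points quoted in text
print(n_mixed + n, 4 * n_mixed + 2 * n)
```

The two assertions reproduce exactly the sensitivity and sampling-point counts quoted in the text, confirming the quadratic growth of the finite-difference cost with the number of uncertain parameters.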

Uncertainty Quantification
The numerical results reported in this paper comprise just the effects of the group-averaged total microscopic cross sections, which will be indicated by using the subscript "t". Since correlations among the group total cross sections are not available for the PERP benchmark, two extreme situations can be considered: (i) all cross sections are uncorrelated, which is considered in Section 3.1 below; (ii) all cross sections are fully correlated, which is considered in Section 3.2 below. The formulas used in this section are as originally used in [12]; for convenient referencing, they are reproduced in Sections 3.1 and 3.2.

Uncorrelated Total Microscopic Cross Sections
The expected value of the leakage response has the expression

$[E(L)]_t^{(U)} = L(\alpha^0) + [E(L)]_t^{(2,U)}, \quad [E(L)]_t^{(2,U)} \triangleq \frac{1}{2} \sum_{i=1}^{I} \sum_{g=1}^{G} \frac{\partial^2 L(\alpha)}{\partial (\sigma_{t,i}^g)^2} \left( s_{t,i}^g \right)^2,$

where the superscript "(2,U)" indicates the contribution of the "2nd-order sensitivities for uncorrelated" cross sections and where: (i) $L(\alpha^0)$ denotes the nominal value of the computed leakage response; (ii) $s_{t,i}^g$ denotes the standard deviation associated with the group-averaged microscopic total cross section $\sigma_{t,i}^g$.
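The second-order shift of the expected value for uncorrelated parameters is a weighted sum of the unmixed second-order sensitivities, which the following sketch evaluates numerically. All sensitivity and parameter values below are hypothetical placeholders, not the PERP benchmark's computed results.

```python
# Second-order shift of the expected response for uncorrelated
# parameters: E(L) = L(alpha0) + (1/2) * sum_i S2_ii * s_i**2,
# where S2_ii are the unmixed 2nd-order sensitivities and s_i the
# parameter standard deviations. All numbers below are hypothetical
# placeholders, not the PERP benchmark's computed sensitivities.
import numpy as np

rng = np.random.default_rng(1)
L0 = 1.0                    # computed (nominal) response, normalized
n = 180                     # number of total-cross-section parameters
S2_diag = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # positive, unmixed
sigma0 = rng.uniform(0.5, 2.0, size=n)                # nominal parameters
s = 0.05 * sigma0           # 5% relative standard deviations

shift = 0.5 * np.sum(S2_diag * s ** 2)
E_L = L0 + shift
print(f"nominal L = {L0:.4f}, 2nd-order shift = {shift:.4f}, E(L) = {E_L:.4f}")

# With positive unmixed 2nd-order sensitivities, the expected value is
# shifted above the computed value, as found for the PERP leakage response.
assert shift > 0.0
```

Note that the shift grows with the square of the relative standard deviation, so doubling the parameter uncertainty from 5% to 10% quadruples the departure of the mean from the computed value.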

Fully Correlated Total Microscopic Cross Sections
The effects of correlations among the group-averaged microscopic total cross sections are transmitted to the response moments (mean value, variance, skewness) through the 2nd-order mixed sensitivities of the leakage response to these cross sections. The effects of such correlations cannot be assessed exactly, since the correlations themselves are unavailable. When the cross sections are fully correlated (an extreme case denoted by using the superscript "FC"), the effects of the 2nd-order sensitivities can be quantified as follows: (i) the mean value of the leakage response becomes $[E(L)]_t^{(FC,N)} = [E(L)]_t^{(U,N)} + [E(L)]_t^{(MSC,N)}$, where $[E(L)]_t^{(U,N)}$ denotes the expected value obtained for uncorrelated normally distributed parameters and $[E(L)]_t^{(MSC,N)}$ provides the contributions from the "mixed second-order sensitivities and fully correlated normally distributed parameters", which is indicated using the superscript "(MSC,N)". Table 3 presents illustrative results when considering uniform relative standard deviations $s_{t,i}^g$ of 5% and 10%, respectively. These numerical results highlight the importance of both the unmixed and mixed 2nd-order sensitivities for: (a) causing the mean value of the leakage response to differ from its computed value; (b) contributing significantly to the total response variance $[\mathrm{var}(L)]_t^{(U)}$; (c) causing the response distribution to become non-Gaussian. These findings underscore the need to compute the 2nd-order sensitivities and to assess their effects, rather than considering them a priori to be negligible, as has been the practice thus far in the published literature.
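The amplification that full correlations can produce is already visible at first order, where the response variance is the quadratic form $S^T C S$ of the sensitivity vector with the parameter covariance matrix. The sketch below compares a diagonal (uncorrelated) covariance matrix against a rank-one fully correlated one; the sensitivities and standard deviations are hypothetical placeholders, not the PERP benchmark's values.

```python
# Effect of full correlation on the (first-order) response variance:
# var(L) ~ S^T C S, with C diagonal for uncorrelated parameters and
# C = s s^T (all correlation coefficients equal to +1) for fully
# correlated ones. Sensitivities and standard deviations below are
# hypothetical placeholders, not the PERP benchmark's values.
import numpy as np

rng = np.random.default_rng(2)
n = 180
S1 = rng.normal(loc=1.0, scale=0.3, size=n)   # 1st-order sensitivities
s = np.full(n, 0.05)                          # 5% standard deviations

C_uncorr = np.diag(s ** 2)                    # zero correlations
C_full = np.outer(s, s)                       # all correlations = +1

var_uncorr = S1 @ C_uncorr @ S1
var_full = S1 @ C_full @ S1                   # equals (S1 @ s)**2
print(f"var (uncorrelated)     = {var_uncorr:.4e}")
print(f"var (fully correlated) = {var_full:.4e}")

# With predominantly same-signed sensitivities, full correlation adds
# the cross terms constructively and inflates the variance by roughly
# a factor of n relative to the uncorrelated case:
assert var_full > var_uncorr
```

The same constructive accumulation of cross terms occurs, at second order, through the mixed sensitivities in the "(MSC,N)" contributions discussed above, which is why neglecting them is non-conservative.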