Reduced-Order Modelling with Domain Decomposition Applied to Multi-Group Neutron Transport

Solving the neutron transport equations is a demanding computational challenge. This paper combines reduced-order modelling with domain decomposition to develop an approach that can tackle such problems. The idea is to decompose the domain of a reactor, form basis functions locally in each sub-domain and construct a reduced-order model from this. Several different ways of constructing the basis functions for local sub-domains are proposed, and a comparison is given with a reduced-order model that is formed globally. A relatively simple one-dimensional slab reactor provides a test case with which to investigate the capabilities of the proposed methods. The results show that domain decomposition reduced-order model methods perform comparably with the global reduced-order model when the total number of reduced variables in the system is the same, with the potential for the offline computational cost to be significantly lower.


Introduction
Reduced-Order Modelling (ROM) or model reduction is a mathematical technique that can reduce the computational cost of solving a system of equations [1]. To do this, a low-dimensional model is found that can approximate the High-Fidelity Model (HFM) to a high degree of accuracy, but at a fraction of the computational cost. Key to the reduction in computational cost is the offline/online split common to most ROM methods. During the offline phase, an HFM is solved for different parameters, where each set of parameters produces a different solution or snapshot. Dimensionality reduction techniques are applied to these snapshots to produce basis functions that span the reduced space. The governing equations for the system can then be approximated in the reduced space. During the online phase, the ROM is solved for unseen sets of parameters at a fraction of the time it would take for an HFM solution.
Proper orthogonal decomposition is one of the most commonly used methods for dimensionality reduction [2][3][4]. Referred to also as Principal Component Analysis (PCA), Proper Orthogonal Decomposition (POD) was introduced by Lumley [5] to identify structures within turbulent flow. Sirovich [6] later introduced the method of snapshots, which allowed POD to be applied to data sets with a large number of degrees of freedom. The POD basis functions are determined by taking a singular-value decomposition of the snapshots, or they can be calculated more efficiently from an associated eigenvalue problem. These basis functions can be used to determine the POD coefficients that lie in the reduced space and, using a Galerkin projection, can be applied to the discretised governing equations to produce a reduced system of equations. ROMs using the governing equations in this manner are called projection-based ROMs [7].
Modelling neutron transport in reactor cores can be computationally demanding due to the large number of degrees of freedom required for accurate results. One example is the Westinghouse PWR-900 core, which requires over 1.7 trillion degrees of freedom to be modelled accurately [8,9] and would therefore benefit greatly from ROM, as existing research shows. Buchan et al. [10] transformed the eigenvalue problem into a time-dependent problem and applied this to reactor models in 1D and 2D. Heaney et al. [11] applied a POD-based ROM to a PWR fuel assembly in order to model control-rod movement and temperature. This method was further extended to use basis functions that were local in the parameter space [12]. In both [11] and [12], different sets of POD basis functions were produced for each energy group, a method demonstrated by German and Ragusa [13] to be more robust than a monolithic approach, which produces POD basis functions across all energy groups simultaneously. Sartori et al. [14] developed a reduced basis approach to model control-rod movement for the TRIGA Mark II reactor core; as the sampling technique for the parameter space, centroidal Voronoi tessellation was used instead of the classical greedy algorithm. Chunyu and Gong [15] and Zhang and Chen [16] developed reduced basis methods for the diffusion equation and the SP3 equations, respectively, using the cross-sections as parameters. This work was later built upon in [17] to model the movement of control rods. Lorenzi [18] showed that POD can be combined with an adjoint approach, in which basis functions are produced from both the flux and adjoint-flux snapshots, to produce solutions that are more accurate than those obtained using standard POD alone. All these ROMs show computational gains of at least three orders of magnitude over the HFMs.
Outside of criticality problems, ROM has been applied to a coupled neutronics and heat transfer problem [19], fuel burnup analysis [20] and to the angular discretisation of a radiation transport problem [21].
Domain decomposition is an iterative method developed by H. Schwarz to solve problems with complicated geometries [22]. A global domain is split into smaller, overlapping sub-domains whose geometries are simpler than that of the global domain. Each sub-domain is then solved in turn, and iteration is performed until the solution converges within every sub-domain. The development of modern computers gave this method traction once again, as it was realised that sub-domains could be solved in parallel and without overlapping [23].
The internal structures of nuclear reactors are amenable to decomposition. A reactor core normally contains multiple fuel assemblies, so the domain of a reactor core can be split up, naturally, into sub-domains, which represent a fuel assembly. These fuel assemblies contain multiple fuel rods and control rods, and the sub-domain containing each fuel assembly can be further decomposed into sub-domains containing fuel rods or control rods. This forms a natural hierarchy of sub-domains within a reactor. Domain decomposition has been used in this way to significantly improve the computational speed of generating solutions for neutron transport by enabling parallel computing.
A fast non-overlapping Schwarz domain decomposition was used to solve the neutron diffusion equation applied to a PWR test case by Jamelot and Ciarlet Jr. [24]. The domain was split up into many sub-domains, and different numbers of cores were experimented with, the results showing that utilising parallel processing allowed a significant improvement in the computational time required to solve the problem with no loss in accuracy.
Domain decomposition methods do not change the governing equations of a system, but instead turn a single domain problem with boundary conditions into multiple subdomain problems with different boundary conditions. This means that the internal solver used within a sub-domain can be replaced with a ROM. Baiges et al. [25] applied POD in conjunction with domain decomposition to solve the Navier-Stokes equation and reported a computational cost that was two orders of magnitude less than that of the HFM.
An application to neutron transport was provided by Cherezov et al. [26]. Here, a non-overlapping domain decomposition method was combined with POD and applied to a full reactor core, which was decomposed into sub-domains containing fuel assemblies. Clusters of 3 × 3 and 2 × 2 sub-domains were solved, and these solutions were then used as the snapshots from which to construct the basis functions for the different types of fuel assemblies. They showed that combining these methods can produce an accurate solution of the global system, with a significant speed-up, by solving four different layouts or arrangements of the reactor core, in each case using the same basis functions produced from the clusters of sub-domains.
In this paper, we develop a number of reduced-order models that make use of domain decomposition techniques. Although these methods are demonstrated on a relatively simple reactor problem (a 1D version of the KAIST benchmark [27]), the aim is to further develop them and apply them to a more challenging problem, such as 2D, 3D, problems with large numbers of energy groups or control problems, e.g., Wiberg [28,29].
Combining ROM with domain decomposition methods would enable parallel computing to be used and decrease the computational cost more than either method individually. This paper aims to investigate whether there is a loss in accuracy when combining ROM with domain decomposition. A number of POD-based ROMs combined with domain decomposition are developed, and results from these are compared with a global POD-based ROM that has no domain decomposition. As domain decomposition represents a compromise in accuracy compared to global methods, it could be expected that the global method will outperform the methods that include domain decomposition. However, results in this paper demonstrate that using domain decomposition in combination with POD-based ROM results in errors that are of the same order of magnitude as those of the global system. Importantly, using domain decomposition allows us to avoid the potentially costly step of solving the high-fidelity model across the whole reactor. Instead, we form local solutions and obtain POD basis functions based on these solutions, which are then used to form the ROM across the whole reactor. The results also demonstrate that constructing basis functions from clusters (i.e., groups of sub-domains), as done by Cherezov et al. [26], produces solutions that are almost as good as using solutions from the entire domain.

Section 2 describes the methodology, including the governing equations, discretisation, reduced-order modelling methods, domain decomposition methods, the test case and how the different ROMs are constructed. Section 3 contains the results for the above methods applied to a simple 1D test case. Section 4 contains a discussion of these results, and Section 5 contains the concluding remarks.

Methodology
This section introduces the equations that govern neutron behaviour and their controlvolume discretisation, referred to as the high-fidelity model. A description is then given of the power method, which is often used to solve eigenvalue problems. The proper orthogonal decomposition method is then explained, followed by a description of how to construct a projection-based ROM from the POD basis functions. Domain decomposition methods and how they can be used within the context of this work are explained. Finally, an overview of the test problem, the generation of the data sets and how basis functions can be constructed for this problem is given.

Diffusion Equation
The multi-group steady-state diffusion equation for criticality can be written as:

$$-\nabla\cdot\left(D_g\nabla\phi_g\right) + \Sigma_{a,g}\,\phi_g + \sum_{g'\neq g}\Sigma_{s,\,g\to g'}\,\phi_g = \sum_{g'\neq g}\Sigma_{s,\,g'\to g}\,\phi_{g'} + \lambda\,\chi_g\sum_{g'=1}^{N_g}\nu\Sigma_{f,g'}\,\phi_{g'}, \qquad (1)$$

where φ is the scalar flux of the neutron population, Σ_a represents the absorption cross-section, Σ_f represents the fission cross-section, ν is the average number of neutrons produced per fission event, Σ_s represents the scatter cross-section, χ_g is the proportion of neutrons produced for each energy group per fission event, N_g is the number of energy groups used and the subscript g denotes the energy group. The diffusion coefficient, D_g, is given by:

$$D_g = \frac{1}{3\Sigma_{tr,g}}, \qquad (2)$$

where Σ_{tr,g} is the transport cross-section for group g. The eigenvalue, λ, is defined as the reciprocal of k_eff, that is λ = 1/k_eff, where:

$$k_{\mathrm{eff}} = \frac{\text{number of neutrons in one generation}}{\text{number of neutrons in the preceding generation}}. \qquad (3)$$
The boundary condition for reflection is:

$$-D_g\,\mathbf{n}\cdot\nabla\phi_g = 0, \qquad (4)$$

and for a vacuum or bare surface is:

$$\frac{\phi_g}{4} + \frac{D_g}{2}\,\mathbf{n}\cdot\nabla\phi_g = 0, \qquad (5)$$

in which n is the outward-pointing normal to the boundary.

Discretisation
A control-volume discretisation of the diffusion equation in 1D with a regular mesh of N_x cells can be written as:

$$-\frac{D_{g,i+\frac12}\left(\phi_{g,i+1}-\phi_{g,i}\right) - D_{g,i-\frac12}\left(\phi_{g,i}-\phi_{g,i-1}\right)}{\Delta x^2} + \Sigma_{a,g,i}\,\phi_{g,i} + \sum_{g'\neq g}\Sigma_{s,\,g\to g',i}\,\phi_{g,i} = \sum_{g'\neq g}\Sigma_{s,\,g'\to g,i}\,\phi_{g',i} + \lambda\,\chi_g\sum_{g'=1}^{N_g}\nu\Sigma_{f,g',i}\,\phi_{g',i}, \qquad \forall i\in\{2,3,\dots,N_x-1\}, \qquad (6)$$

in which ∆x is the uniform cell width in the x-direction, N_x is the number of cells, the subscript i refers to the cell index, the half-integer subscripts denote values at cell faces and φ_{g,i} represents the scalar flux of group g in cell i. In this expression, the first and last cells are omitted in order to apply the boundary conditions efficiently.
To apply a reflective boundary condition (see Equation (4)), the diffusion coefficients in the outermost cells are set to a large negative number. To set a bare surface boundary condition (see Equation (5)), the diffusion coefficients in the outermost cells are again set to a large negative number, and the absorption term is modified as now described. To apply such a boundary condition to the left or right edges (where the normal to the boundary is aligned with the x-direction), the absorption cross-section in the boundary cell is modified to account for the leakage of neutrons through the bare surface:

$$\Sigma_{a,g,i} \leftarrow \Sigma_{a,g,i} + \frac{2D_{g,i}}{\Delta x\left(4D_{g,i}+\Delta x\right)}. \qquad (7)$$

The discretised form of Equation (1) can be written as:

$$A\boldsymbol{\phi} = \lambda B\boldsymbol{\phi}, \qquad (8)$$

where matrix A contains the absorption, diffusion and scatter out of energy groups from the left-hand side of Equation (6), matrix B represents the fission terms and scatter into energy groups given on the right-hand side of Equation (6) and the vector φ contains the values of the scalar flux for each cell in every energy group. The matrices are of size (N_x − 2)N_g by (N_x − 2)N_g. Although a 1D discretisation is given here, the methods could be applied in 2D or 3D.
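As an illustration, a minimal sketch of this control-volume assembly for a single energy group is given below. It fills only the interior cells, mirroring the stencil above, and uses harmonic-mean face diffusion coefficients, an assumption not stated in the text; the boundary treatment and inter-group scatter are omitted.

```python
import numpy as np

def assemble_one_group(D, sig_a, nu_sig_f, dx):
    """Assemble A (diffusion + absorption) and B (fission) for one energy
    group on a regular 1D mesh. Only interior cells are filled; boundary
    rows are left for the boundary treatment described in the text."""
    n = len(D)
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        # face diffusion coefficients D_{i-1/2}, D_{i+1/2} (harmonic mean)
        D_w = 2.0 * D[i] * D[i - 1] / (D[i] + D[i - 1])
        D_e = 2.0 * D[i] * D[i + 1] / (D[i] + D[i + 1])
        A[i, i - 1] = -D_w / dx**2
        A[i, i + 1] = -D_e / dx**2
        A[i, i] = (D_w + D_e) / dx**2 + sig_a[i]
    # fission production on the diagonal (chi = 1 for a single group)
    B = np.diag(nu_sig_f)
    return A, B
```

For a multi-group problem, blocks of this form would be placed on the group diagonal of A, with the scatter terms coupling the groups.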

The Power Method
The power method [30], outlined in Algorithm 1, is an iterative method for finding the dominant eigenvalue and eigenvector of an eigenvalue problem. An initial guess of φ and λ is used in Equation (8), and the flux is updated using the forward-backward Gauss-Seidel method. This flux value is passed to the outer iterations, detailed in Algorithm 2, where the flux is first normalised so that b^T φ = 1, in which b is a vector whose entries all equal one. Power iteration is then performed, with φ and λ both being updated. These values are then passed back into the inner iterations until either k_eff converges, with a tolerance of 10^−8, or the maximum number of iterations (200) is reached. These algorithms are given in [31] and are repeated here for completeness.
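A compact sketch of the combined inner/outer iteration is given below. For brevity, a direct solve stands in for the forward-backward Gauss-Seidel sweeps used in the paper; this is an assumption, not the authors' implementation.

```python
import numpy as np

def power_method(A, B, tol=1e-8, max_outer=200):
    """Power iteration for A phi = (1/k_eff) B phi.
    Inner solve: direct solve (stand-in for Gauss-Seidel sweeps).
    Outer iteration: normalise the flux and update the eigenvalue."""
    phi = np.ones(A.shape[0])
    k = 1.0
    for _ in range(max_outer):
        src = B @ phi                                   # fission source
        phi_new = np.linalg.solve(A, src / k)           # inner iterations
        k_new = k * np.sum(B @ phi_new) / np.sum(src)   # eigenvalue update
        phi_new = phi_new / np.sum(phi_new)             # b^T phi = 1, b = ones
        if abs(k_new - k) < tol:                        # k_eff convergence
            return phi_new, k_new
        phi, k = phi_new, k_new
    return phi, k
```

The convergence rate is governed by the dominance ratio of the problem, so tightly coupled cores may need many outer iterations.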

Proper Orthogonal Decomposition
Lumley [5] developed proper orthogonal decomposition in 1967 to find the most energetic structures within turbulent flow, and Sirovich [6] developed the method of snapshots, significantly reducing the computational cost of POD and resulting in it being widely used for analysing large data sets [32,33]. If the snapshot matrix S is a real N × M matrix that contains M snapshots, each with N degrees of freedom, then a singular-value decomposition can be performed as follows:

$$S = U\Sigma V^{T}, \qquad (9)$$

where U is an N × N matrix consisting of orthonormalised eigenvectors associated with the M largest eigenvalues of SS^T and V is an M × M matrix consisting of orthonormalised eigenvectors associated with S^T S. The POD basis functions are the columns of U. The matrix Σ contains the singular values on its diagonal, and these are ordered as follows:

$$\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_M \geq 0. \qquad (10)$$

The amount of information given by each basis function is proportional to these singular values. Therefore, if γ is the fraction of information to be captured, where 0 < γ ≤ 1, then the lowest integer value of P is sought that satisfies:

$$\frac{\sum_{i=1}^{P}\sigma_i}{\sum_{i=1}^{M}\sigma_i} \geq \gamma. \qquad (11)$$

Matrix R ∈ R^{N×P} contains the P basis functions required for this information, which are the first P columns of U.
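The truncation criterion above translates into a few lines of code; a minimal sketch follows (function name illustrative).

```python
import numpy as np

def pod_basis(S, gamma):
    """Return R, the first P columns of U, where P is the smallest integer
    such that the retained fraction of the singular-value sum is >= gamma."""
    U, sig, _ = np.linalg.svd(S, full_matrices=False)
    frac = np.cumsum(sig) / np.sum(sig)       # cumulative information fraction
    P = int(np.searchsorted(frac, gamma)) + 1  # smallest P meeting criterion
    return U[:, :P]
```

For N much larger than M, computing the reduced SVD (`full_matrices=False`) is far cheaper than forming the full N × N matrix U, which is the practical benefit of the method of snapshots.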
The POD coefficients α associated with a snapshot φ can be determined by:

$$\boldsymbol{\alpha} = R^{T}\boldsymbol{\phi}, \qquad (12)$$

and the flux can be recovered from the POD coefficients by:

$$\boldsymbol{\phi} = R\boldsymbol{\alpha}. \qquad (13)$$

Constructing the Reduced-Order Model
The discretised governing equations can be projected onto the reduced space by using the basis functions R. Equation (13) is inserted into Equation (8), and both sides are pre-multiplied by R^T, giving:

$$R^{T}AR\,\boldsymbol{\alpha} = \lambda\,R^{T}BR\,\boldsymbol{\alpha}, \qquad (14)$$

where the matrices A and B both depend on the material parameters. This reduced-order model replaces the inner iterations of Algorithm 1 and is outlined in Algorithm 3.
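The projection itself is just a pair of matrix products. A sketch, with illustrative function names:

```python
import numpy as np

def project(A, B, R):
    """Galerkin projection: substitute phi = R alpha and pre-multiply by
    R^T, giving the reduced operators R^T A R and R^T B R."""
    return R.T @ A @ R, R.T @ B @ R

def reconstruct(R, alpha):
    """Recover the full-order flux from the reduced variables."""
    return R @ alpha
```

When R spans the full space, the reduced eigenproblem reproduces the full one exactly; accuracy then degrades gradually as columns of R are truncated.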
Algorithm 3 POD-based reduced-order model: inner iterations.
1: solve the reduced-order model for α
2: φ ← Rα ! find the updated scalar flux from the reduced variables
3: return φ

Domain Decomposition
Domain decomposition is an iterative method developed by H. Schwarz in 1870 [22] to solve a problem by splitting it into smaller problems and iterating until the solution converges. Although Schwarz initially developed this as an overlapping method, it was later realised that, using modern computing, the sub-domains could be solved in parallel and without overlapping [34]. A non-overlapping approach is used here so that parallel computing could be exploited. Domain decomposition adds a middle iteration, performed inside the outer iteration and outside the inner iteration, as outlined in Algorithm 4.
A 1D problem with a global domain Ω is split into K sub-domains such that:

$$\Omega = \bigcup_{k=1}^{K}\Omega_k, \qquad (15)$$

where each domain Ω_k is neighboured by domains Ω_{k−1} and Ω_{k+1}. Each domain Ω_k has matrices A_k and B_k, basis functions R_k and flux φ_k associated with it. If solving just domain Ω_k, then the boundary conditions must be carefully considered. An alternative to this is to construct larger domains Ω_p using one domain Ω_k and its neighbouring domains Ω_{k−1} and Ω_{k+1}:

$$\Omega_p = \Omega_{k-1}\cup\Omega_k\cup\Omega_{k+1}. \qquad (16)$$

This larger domain can then be solved with only the information in the central domain Ω_k being updated upon finding a solution. The matrix A_p for domain Ω_p can be constructed from the matrices for the individual domains by:

$$A_p = \begin{bmatrix} A_{k-1} & A_{k-1,k} & 0 \\ A_{k,k-1} & A_{k} & A_{k,k+1} \\ 0 & A_{k+1,k} & A_{k+1} \end{bmatrix}, \qquad (17)$$

where the off-diagonal matrices represent the transport terms that cause interactions across domain interfaces. Matrix B_p and the basis functions R_p can be constructed by:

$$B_p = \begin{bmatrix} B_{k-1} & 0 & 0 \\ 0 & B_{k} & 0 \\ 0 & 0 & B_{k+1} \end{bmatrix}, \qquad R_p = \begin{bmatrix} R_{k-1} & 0 & 0 \\ 0 & R_{k} & 0 \\ 0 & 0 & R_{k+1} \end{bmatrix}. \qquad (18)$$

The flux φ_p and coefficients α_p for domain Ω_p can be determined by concatenating the flux and POD coefficients, respectively:

$$\boldsymbol{\phi}_p = \begin{bmatrix} \boldsymbol{\phi}_{k-1} \\ \boldsymbol{\phi}_{k} \\ \boldsymbol{\phi}_{k+1} \end{bmatrix}, \qquad \boldsymbol{\alpha}_p = \begin{bmatrix} \boldsymbol{\alpha}_{k-1} \\ \boldsymbol{\alpha}_{k} \\ \boldsymbol{\alpha}_{k+1} \end{bmatrix}. \qquad (19)$$

If a boundary of domain Ω_p aligns with a boundary of the global domain Ω, then the conditions for this boundary are known and used within the problem. If it does not, then thought has to be given to what the conditions should be. However, because only the information in the central domain is updated, the solution close to this boundary is never used to update the global solution, and the global solution is therefore less affected by improper boundary conditions. The flux in each domain φ_p is determined by using the inner iteration algorithms, either directly or using POD, with only the information required from those domains. After φ_p is determined, only the flux φ_k is updated, which keeps the artificial boundary conditions away from the region of interest. As solutions to these domains are independent, they can be solved in parallel, in which case all K domains could be solved simultaneously.
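Assembling the extended-domain matrix from per-domain blocks can be sketched as follows. The interface coupling blocks are stored in a hypothetical dictionary `C_blocks` keyed by ordered domain pairs; names and data layout are illustrative.

```python
import numpy as np

def extended_domain_matrix(A_blocks, C_blocks, k):
    """Assemble A_p for Omega_p = Omega_{k-1} U Omega_k U Omega_{k+1}.
    A_blocks[j] is the within-domain matrix A_j; C_blocks[(j, j')] holds
    the interface coupling (transport) block between neighbours j, j'."""
    ks = [k - 1, k, k + 1]
    n = [A_blocks[j].shape[0] for j in ks]
    off = np.cumsum([0] + n)                 # block row/column offsets
    A_p = np.zeros((off[-1], off[-1]))
    for a, ja in enumerate(ks):
        A_p[off[a]:off[a + 1], off[a]:off[a + 1]] = A_blocks[ja]
        for b, jb in enumerate(ks):
            # neighbouring blocks only, and only if a coupling exists
            if abs(a - b) == 1 and (ja, jb) in C_blocks:
                A_p[off[a]:off[a + 1], off[b]:off[b + 1]] = C_blocks[(ja, jb)]
    return A_p
```

B_p and R_p follow the same pattern but with zero off-diagonal blocks, so a plain block-diagonal assembly suffices for them.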
In this case, after all K domains have been solved, the process is repeated with updated fluxes until the absolute mean error in the flux is less than the tolerance, 10^−8, or the maximum number of iterations (100) has been reached. If done sequentially, then the flux within a domain k can be updated as solutions are generated.
After the global flux φ has converged, it can then be used within the outer iteration of the power method. In the middle iteration of Algorithm 4, each sub-domain is solved along with its neighbouring domains; the first and last sub-domains are treated as special cases, as each has only one neighbour.

Test Problem and Construction of Snapshot Matrices
To test the method, a simple 1D problem is solved. Here, the problem is split into five domains, as seen in Figure 1. Each domain can either be a fuel assembly or the reflector, with the same geometry and material properties as the KAIST benchmark [27]. The 1D geometry of a fuel assembly can be seen in Figure 2 and represents a slice through the centre of the 2D geometry given by the KAIST benchmark. The length of each domain is 21.42 cm, the same as the length of a fuel assembly in the KAIST benchmark. Each domain is discretised into 85 cells, with each cell being 0.252 cm wide. An example mesh of a domain containing a fuel assembly can be seen in Figure 2. Each of the 17 regions contains five cells. Regions A, B, C, D and E represent guide-tube/control-rod regions. All unlabelled regions are fuel rods whose material depends on the type of fuel assembly. Mixed Oxide (MOX) fuel assemblies contain three different kinds of fuel rods, containing either 4.3%, 7.0% or 8.7% plutonium. Uranium Oxide (UOX) fuel assemblies contain only one kind of fuel rod.
Each domain has 85 spatial degrees of freedom, and energy is discretised into 22 groups. Therefore, the full system has 9350 degrees of freedom, with each domain containing 1870 of these. The material cross-sections are homogenised across each pin-cell and are generated by WIMS11 [35] using a procedure that includes equivalence theory for resonance shielding, a multicell collision probability flux spectrum to condense from 172 to 22 energy groups, and the method of characteristics to solve for the heterogeneous flux.

Four-hundred HFM snapshots are generated for the above problem. Each snapshot has five parameters, D_1, D_2, D_3, D_4 and D_5, representing domains 1 to 5, respectively. Each of these parameters can take the value 0, representing UOX without control rods; 1, representing UOX with control rods; 2, representing MOX without control rods; 3, representing MOX with control rods; or 4, representing the reflector. D_1, D_2 and D_3 are randomly assigned a value between zero and three. D_4 and D_5 are randomly assigned a value between zero and four, with the condition that if D_4 = 4, then D_5 = 4 as well. This is because reflectors are positioned on the outside of reactors and would not be placed in-between fuel assemblies.
These 400 HFM snapshots can be considered to be the snapshot matrix U_Global, which is a 9350 × 400 matrix. All 400 snapshots are decomposed into their five domains, forming a matrix U_decomp that is 1870 × 2000. Each of these 2000 snapshots has both a location and a material type, as outlined in Table 1. Snapshot matrices from which to form the basis functions can be constructed in a few different ways. One method is to base them on the location within the global system: U_1, U_2, U_3, U_4 and U_5 are all 1870 × 400 matrices and contain the snapshots associated with the column in Table 1 that corresponds to their subscript. A different way is to base them on the material type of the domains within the system: U_UOX contains the data from any domain and snapshot where the parameter is zero or one, resulting in a 1870 × 913 matrix; U_MOX contains the data from any domain and snapshot where the parameter is two or three, resulting in a 1870 × 873 matrix; and U_Reflector contains the data from any domain and snapshot where the parameter is four, resulting in a 1870 × 214 matrix. U_UOX, U_MOX and U_Reflector contain the snapshots associated with the rows in Table 1 corresponding to their subscripts.
Another way is to base the snapshot matrices on both the material type of the domain and the location of the domain within the system. This results in 12 matrices, U_Typ,Loc, where Typ is either UOX, MOX or reflector for the material type and Loc is between one and five for the location within the system. U_Typ,Loc are 1870 × Y matrices, where Y depends on the number of snapshots available for that combination. These matrices contain any snapshots associated with both the row and the column in Table 1 that match their subscript.
An additional set of HFM snapshots is generated for smaller problems that contain between one and three domains. U_1dom represents the one-domain set and is a 1870 × 8 matrix; U_2dom represents the two-domain set and is a 3740 × 40 matrix; and U_3dom represents the three-domain set and is a 5610 × 168 matrix. Table 2 shows the parameter possibilities for these smaller domains. Three snapshot matrices are constructed from these based on the material type of the domains within each snapshot. U_UOX,dom contains the data from any domain and snapshot where the parameter is zero or one, resulting in a 1870 × 268 matrix. U_MOX,dom contains the data from any domain and snapshot where the parameter is two or three, resulting in a 1870 × 268 matrix. U_Reflector,dom contains the data from any domain and snapshot where the parameter is four, resulting in a 1870 × 56 matrix. U_UOX,dom, U_MOX,dom and U_Reflector,dom contain the snapshots associated with the rows in Table 2.

Construction of Basis Functions
A total of six methods were tested that used POD Galerkin as the form of ROM. The first of these did not use domain decomposition and is labelled G-POD. An SVD was applied to U_Global, and 250 basis functions were retained. An additional method, labelled G2-POD, used the same approach as G-POD, but only 50 basis functions were retained instead of 250.
The remaining four methods all used the same domain decomposition method but had their basis functions generated from the different sets of snapshot matrices explained in the previous section.

The first method is labelled DDT-POD, and three sets of basis functions are constructed. These basis functions are constructed from the snapshot matrices U_UOX, U_MOX and U_Reflector to form the basis functions R_UOX, R_MOX and R_Reflector, respectively, with 50 basis functions retained in each matrix. For example, for the parameter set D_1 = 2, D_2 = 2, D_3 = 1, D_4 = 1 and D_5 = 4, the set of global basis functions R would be constructed from the three sets of basis functions as follows:

$$R = \begin{bmatrix} R_{MOX} & 0 & 0 & 0 & 0 \\ 0 & R_{MOX} & 0 & 0 & 0 \\ 0 & 0 & R_{UOX} & 0 & 0 \\ 0 & 0 & 0 & R_{UOX} & 0 \\ 0 & 0 & 0 & 0 & R_{Reflector} \end{bmatrix}. \qquad (20)$$

The second method is labelled DDL-POD, and five sets of basis functions are constructed based on the domains' locations within the system. These basis functions are constructed from the snapshot matrices U_1, U_2, U_3, U_4 and U_5 to form the basis functions R_1, R_2, R_3, R_4 and R_5, respectively, with 50 basis functions retained in each matrix. Irrespective of the parameters, the set of global basis functions R would be constructed from the five sets of basis functions as follows:

$$R = \begin{bmatrix} R_{1} & 0 & 0 & 0 & 0 \\ 0 & R_{2} & 0 & 0 & 0 \\ 0 & 0 & R_{3} & 0 & 0 \\ 0 & 0 & 0 & R_{4} & 0 \\ 0 & 0 & 0 & 0 & R_{5} \end{bmatrix}. \qquad (21)$$

The third method is labelled DDTL-POD, and twelve sets of basis functions are constructed based on both the material type and the location of the domain within the system. These basis functions are constructed from the snapshot matrices U_Typ,Loc to form the basis functions R_Typ,Loc, respective to their subscripts, with 50 basis functions retained in each matrix. For example, for the parameter set D_1 = 2, D_2 = 2, D_3 = 1, D_4 = 1 and D_5 = 4, the set of global basis functions R would be constructed as follows:

$$R = \begin{bmatrix} R_{MOX,1} & 0 & 0 & 0 & 0 \\ 0 & R_{MOX,2} & 0 & 0 & 0 \\ 0 & 0 & R_{UOX,3} & 0 & 0 \\ 0 & 0 & 0 & R_{UOX,4} & 0 \\ 0 & 0 & 0 & 0 & R_{Reflector,5} \end{bmatrix}. \qquad (22)$$

The fourth method is labelled DDT2-POD and is the same as DDT-POD, with three sets of basis functions constructed, one for each material type of domain.
These basis functions are constructed from the snapshot matrices U_UOX,dom, U_MOX,dom and U_Reflector,dom to form the basis functions R_UOX,dom, R_MOX,dom and R_Reflector,dom, respectively, with 50 basis functions retained in each matrix. For the example parameter set D_1 = 2, D_2 = 2, D_3 = 1, D_4 = 1 and D_5 = 4, the set of global basis functions R would be constructed from these three sets of basis functions in the same block-diagonal manner as for DDT-POD. Table 3 shows the number of sets of basis functions for each method and where the data used to construct those basis functions comes from. G2-POD uses the same data set as G-POD, but with fewer basis functions retained. DDT-POD and DDT2-POD both have their basis functions determined by the material type of the domain, but use differing data sets.
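Assembling the global block-diagonal basis for the type-based methods can be sketched as below; names are illustrative, and each R_* stands for a per-domain basis matrix.

```python
import numpy as np

def global_basis(params, R_uox, R_mox, R_refl):
    """Place the per-domain basis chosen by material type (0/1 -> UOX,
    2/3 -> MOX, 4 -> reflector) on the block diagonal of the global R."""
    pick = {0: R_uox, 1: R_uox, 2: R_mox, 3: R_mox, 4: R_refl}
    blocks = [pick[p] for p in params]
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    R = np.zeros((rows, cols))
    i = j = 0
    for b in blocks:
        R[i:i + b.shape[0], j:j + b.shape[1]] = b
        i += b.shape[0]
        j += b.shape[1]
    return R
```

The location-based (DDL) and type-and-location (DDTL) variants differ only in how `pick` selects each block.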

Limiting the Number of Snapshots
Three out of the six methods described in Section 2.8, G-POD, DDT-POD and DDL-POD, were tested by limiting the number of snapshots available to each method. Methods with this limitation are referred to as G-POD-lim, DDT-POD-lim and DDL-POD-lim. This limitation was imposed by randomly selecting 50 snapshots from U_Global to form the snapshot matrix U_Lim. Each method can only use basis functions made from U_Lim. G-POD-lim is therefore limited to 50 basis functions, as the number of basis functions cannot exceed the number of snapshots. Dividing U_Lim into domains based on the material type results in 112, 114 and 24 snapshots available for UOX, MOX and the reflector, respectively. The sets of basis functions for DDT-POD-lim are constructed from these, and the numbers of basis functions retained are 50, 50 and 24 for R_UOX, R_MOX and R_Reflector, respectively. DDL-POD-lim has 50 snapshots available for each location within the system, allowing 50 basis functions to be retained for each location. G-POD-lim, DDT-POD-lim and DDL-POD-lim are then used to produce solutions for the 400 sets of parameters used in Section 2.7 to produce U_Global.

Results
In this section, a brief description of the error measures used is given, followed by a look at the results generated from all methods and the results from limiting the number of snapshots and smart selection of the number of basis functions.

Error Measures
Multiple error measures are used to assess the accuracy of the results. The first is the normalised maximum error in the flux profile, determined by:

$$e_{\phi} = \frac{\max_i \left|\phi_i^{POD} - \phi_i\right|}{\max_j \left|\phi_j\right|}, \qquad (23)$$

in which the superscript "POD" indicates that the solution was generated by POD and i and j represent cell indices. The second is the error in k_eff, which is simply the difference between the HFM and POD solutions:

$$e_{k} = \left|k_{\mathrm{eff}}^{HFM} - k_{\mathrm{eff}}^{POD}\right|. \qquad (24)$$

An average maximum error is also used, which for the error in the flux profile is given by:

$$\bar{e}_{\phi} = \frac{1}{N}\sum_{n=1}^{N} e_{\phi}^{(n)}, \qquad (25)$$

where N is the total number of snapshots.
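These error measures translate directly into code; a minimal sketch with illustrative function names:

```python
import numpy as np

def max_flux_error(phi_hfm, phi_pod):
    """Normalised maximum error in the flux profile: the largest pointwise
    difference divided by the largest HFM flux value."""
    return np.max(np.abs(phi_pod - phi_hfm)) / np.max(np.abs(phi_hfm))

def keff_error(k_hfm, k_pod):
    """Absolute difference between the HFM and POD eigenvalues."""
    return abs(k_hfm - k_pod)

def mean_max_error(flux_errors):
    """Average of the maximum flux errors over the N snapshots."""
    return float(np.mean(flux_errors))
```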

POD-Based Results
A look at how each method performs for a single set of parameters (D_1 = 2, D_2 = 2, D_3 = 1, D_4 = 1, D_5 = 4) is given here. First, the k_eff convergence is shown and, following this, the flux profiles and pointwise errors for a few energy groups.
G-POD uses one set of basis functions to reduce the dimensionality of the full system from 9350 variables to 250, and G2-POD uses one set of basis functions to reduce it from 9350 variables to 50. DDT-POD, DDL-POD, DDTL-POD and DDT2-POD all use five sets of basis functions that each reduce the dimensionality in a single domain from 1870 variables to 50, with the sets of basis functions used determined by the parameters of the solution being solved. Figure 3 shows the k_eff convergence for all methods along with the HFM. All methods converge to a k_eff value close to that of the HFM in under 15 iterations. The scalar fluxes and the absolute pointwise errors in the profiles are now compared for each ROM. Figures 4-9 show the flux profiles for Energy Groups 1, 5, 9, 13, 17 and 21, respectively, for all methods. All methods show a good approximation to the HFM. The absolute pointwise error is similar for all methods except G2-POD, which is an order of magnitude worse. Table 4 shows the mean absolute maximum error in the flux profile and the mean error in k_eff for the 400 sets of parameters. It can be seen that G-POD, DDT-POD, DDL-POD and DDT2-POD all have mean errors of the same order of magnitude. Of these four, DDL-POD has the worst mean flux error, and DDT2-POD has the worst mean k_eff error. G-POD has the best mean flux and k_eff errors. G2-POD has a mean flux error that is an order of magnitude worse than the other four and a mean k_eff error that is two orders of magnitude worse.

Table 4. Mean absolute maximum errors in the flux profile and k_eff for solutions using POD-based reduced-order models.

Results for Limiting the Number of Snapshots and Smart Selection of the Number of Basis Functions
This section contains the results from limiting the number of snapshots and from smart selection of the number of basis functions. The same set of parameters used in Section 3.2 (D_1 = 2, D_2 = 2, D_3 = 1, D_4 = 1, D_5 = 4) is used here to investigate how the methods perform under these two tests. First, the k_eff convergence is shown and, following this, the flux profiles and pointwise errors for differing energy groups.
G-POD-lim uses one set of basis functions to reduce the dimensionality of the full system from 9350 variables to 50. DDT-POD-lim reduces the dimensionality of the full system from 9350 variables to between 198 and 250, depending on the types of domains contained within the problem being solved, using five sets of basis functions; for the set of parameters examined here, the dimensionality is reduced to 224. DDL-POD-lim reduces the dimensionality of the system from 9350 variables to 250 using five sets of basis functions. G-POD-basis uses one set of basis functions to reduce the dimensionality of the system from 9350 variables to 98. DDT-POD-basis uses five sets of basis functions to reduce the system from 9350 variables to between 116 and 150, depending on the types of domains contained within the problem being solved; for the set of parameters examined here, the dimensionality is reduced to 133. DDL-POD-basis uses five sets of basis functions to reduce the system from 9350 variables to 177. Figure 11 shows the k_eff convergence for the three methods with a limited number of snapshots and the three with smart selection of the number of basis functions, along with the HFM. All tested methods converge to a k_eff value close to that of the HFM in under 15 power iterations. Figures 12-17 show the flux profiles for Energy Groups 1, 5, 9, 13, 17 and 21, respectively. All methods show a good approximation to the HFM, although G-POD-lim has higher pointwise errors than the other methods shown. G-POD-basis, DDT-POD-basis and DDL-POD-basis show pointwise errors of the same order of magnitude. Table 5 shows the mean absolute maximum error in the flux profile and the mean error in k_eff for the 400 sets of parameters. It can be observed that G-POD-lim has errors that are at least an order of magnitude worse than those of DDT-POD-lim and DDL-POD-lim.
Of the three methods tested by limiting the number of snapshots, DDL-POD-lim has the lowest mean errors. G-POD-basis, DDT-POD-basis and DDL-POD-basis all show mean errors of the same magnitude, with DDL-POD-basis being slightly worse than the other two.

Discussion
For the above problem, the accuracy of the solution was of the same order of magnitude when a total of 250 basis functions were used, irrespective of whether domain decomposition was used. This means that domain decomposition, and thus locally formed solutions, can be used with POD without a great loss in accuracy. This is important, as it avoids having to run high-fidelity models of large whole-reactor simulations in order to form the snapshots; such high-fidelity simulations may be computationally expensive or even intractable.
It was found that when the basis functions were constructed from the same snapshots as the global system, the accuracy was similar whether they were grouped by type (UOX, MOX or reflector sub-domains) or by location (the position of the sub-domain within the system). This has implications for large-scale problems in which sub-domains with the same material properties and geometry are repeated. When the basis functions are constructed by location, the number of sets of basis functions required grows with the number of sub-domains. When they are constructed by material type, however, the required number of sets does not increase, provided that new sub-domains share the material types of existing ones. This is beneficial when sub-domains have a large number of degrees of freedom, making the singular-value decomposition computationally expensive, and when similar sub-domains are repeated within the system. It was also found that basis functions could be generated from a much smaller data set, built from snapshots that were significantly less expensive to produce, with solutions only slightly less accurate than those obtained from snapshots of the full solution. The offline computational cost could therefore be reduced significantly by generating basis functions from smaller snapshots, and the problem could be scaled up, with changes to the geometry, without requiring any further offline computational work.
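The two grouping strategies described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the sizes (five sub-domains of 1870 unknowns, 40 snapshots) match the 1-D test case, but the snapshot data, the `types` labels and the dictionary layout are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes matching the 1-D test case: 5 sub-domains of
# 1870 unknowns each, and 40 snapshots (random data as a stand-in).
n_sub, n_dofs, n_snap = 5, 1870, 40
# Material type of each sub-domain position (assumed labels).
types = ["UOX", "MOX", "UOX", "MOX", "reflector"]

# Global snapshot matrix, split into one block per sub-domain location.
snapshots = rng.standard_normal((n_sub * n_dofs, n_snap))
blocks = np.split(snapshots, n_sub, axis=0)

# Location-based bases: one SVD per sub-domain position (5 sets here).
loc_bases = {i: np.linalg.svd(b, full_matrices=False)[0]
             for i, b in enumerate(blocks)}

# Type-based bases: pool the blocks of all sub-domains sharing a material
# type, so the number of basis sets stays fixed as the geometry grows.
type_bases = {}
for t in set(types):
    pooled = np.hstack([b for b, bt in zip(blocks, types) if bt == t])
    type_bases[t] = np.linalg.svd(pooled, full_matrices=False)[0]

print(len(loc_bases), len(type_bases))  # 5 location sets vs 3 type sets
```

Adding another UOX sub-domain would add a sixth entry to `loc_bases` but leave `type_bases` at three sets, which is the scaling advantage discussed above.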
It was also found that the global method performed significantly worse when it retained only 50 basis functions than when domain decomposition methods were used. Although the global method uses fewer POD coefficients than the domain decomposition methods, the information retained is the same: the global basis is a single matrix of size 9350 × 50, whereas the domain decomposition methods use five matrices of size 1870 × 50, giving the same total number of entries. Assembled into a single global matrix, the domain decomposition basis has size 9350 × 250, but a significant portion of this matrix consists of zeros and can therefore be ignored when assessing the computational cost.
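The storage argument above can be checked numerically: assembling five 1870 × 50 local bases into one block-diagonal matrix gives a nominal 9350 × 250 basis whose nonzero entries occupy exactly the same storage as a single 9350 × 50 global basis. The sketch below assumes random orthonormal local bases purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_dofs, r = 5, 1870, 50   # sizes from the paper's 1-D test case

# One orthonormal basis of 50 functions per sub-domain (1870 x 50 each),
# generated here from random data via a QR factorisation.
local = [np.linalg.qr(rng.standard_normal((n_dofs, r)))[0]
         for _ in range(n_sub)]

# Assembled global basis: 9350 x 250, block diagonal, mostly zeros.
Phi = np.zeros((n_sub * n_dofs, n_sub * r))
for i, U in enumerate(local):
    Phi[i * n_dofs:(i + 1) * n_dofs, i * r:(i + 1) * r] = U

# The nonzero entries amount to five 1870 x 50 blocks -- the same
# storage as a single global 9350 x 50 basis.
nonzeros = n_sub * n_dofs * r
print(Phi.shape, nonzeros == 9350 * 50)
```

In practice the zero blocks would never be stored; a reduced operator can be formed block by block from the local bases, which is also what makes the approach amenable to parallel computing.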
Three of the methods presented were selected for additional testing. These three methods were G-POD, DDL-POD and DDT-POD. The first of these tests was to limit the number of snapshots available to the methods. All three methods had lower accuracy compared to when the full range of snapshots was utilised. The global method was greatly affected by the number of basis functions it could retain and had significantly worse accuracy. The domain decomposition methods were less limited by this and could use more basis functions than the global method, but still had a slight loss in accuracy compared to when all snapshots were used. It was found that the methods utilising domain decomposition performed better than the global method for a limited number of snapshots. This is beneficial if it is either difficult or time consuming to produce a large number of snapshots.
The second test performed on the three methods, G-POD, DDL-POD and DDT-POD, was the smart selection of the number of basis functions. This was done by determining the number of basis functions based on the energy retained rather than a fixed number. It was found that all methods performed similarly, but had different numbers of basis functions to reach the same fraction of energy retained. The global method required the smallest total number of basis functions, followed by the method based on the type of domain, and finally, the method based on location required the most basis functions. POD methods combined with domain decomposition produced more accurate solutions with fewer basis functions when the sets of basis functions were constructed based on the material type of domain rather than the location. Although the global method required the smallest total number of basis functions, each of these basis functions spans the whole computational domain. The domain decomposition basis functions only span a single sub-domain, thereby being one fifth the size of the ones used in the global method.
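The smart-selection criterion described above can be sketched as follows: truncate the POD basis at the smallest rank whose cumulative squared singular values reach a chosen fraction of the total snapshot energy. The function name, the toy spectrum and the threshold values are assumptions for the example; the paper's exact energy fraction is not restated here.

```python
import numpy as np

def n_modes_for_energy(singular_values, frac=0.9999):
    """Smallest r whose leading modes capture `frac` of the snapshot
    energy, measured as the cumulative sum of squared singular values."""
    energy = np.cumsum(singular_values**2) / np.sum(singular_values**2)
    return int(np.searchsorted(energy, frac) + 1)

# Toy spectrum with geometric decay, as POD snapshots typically give.
s = 2.0 ** -np.arange(20)
print(n_modes_for_energy(s, 0.99))  # -> 4
```

Applied per sub-domain, this criterion naturally gives each basis set a different size, which is why the three methods reached the same energy fraction with different total numbers of basis functions.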
For modelling nuclear reactors, combining ROM with domain decomposition can have considerable benefits. The potential to enable parallel computing while retaining the same level of accuracy means that the time taken to produce solutions could be significantly reduced, which could enable real-time analysis. A drawback of ROM is the offline computational cost of producing snapshots; this could be offset by producing smaller snapshots (with fewer degrees of freedom within each snapshot) during the offline phase. A slight loss in accuracy from using smaller snapshots was seen here, but generating these snapshots is significantly cheaper than generating snapshots of the global system.
Optimising the design of reactors can also benefit from ROM combined with domain decomposition. By constructing basis functions for each material type of sub-domain, in systems where similar sub-domains repeat, an optimal arrangement of those sub-domains can be determined quickly and with no increase in the offline computational cost. Typical ROM methods require a fixed geometry that cannot be varied without incurring further offline costs, but using domain decomposition means this can be avoided.
Another form of dimensionality reduction that could benefit from domain decomposition is the autoencoder, as autoencoders can be trained on smaller data sets; training them on the full system can result in a huge number of trainable parameters. For this problem, it can be seen that combining domain decomposition with reduced-order modelling reduces the computational cost of modelling further, by enabling parallel computing, without a significant loss in accuracy.

Conclusions and Future Work
This paper assesses the value of combining domain decomposition with Reduced-Order Models (ROMs) to produce an approach suited for solving reactor physics problems. The method chosen to find the low-dimensional basis was Proper Orthogonal Decomposition (POD). Different subsets of the snapshots were used to produce different sets of basis functions, which were then used to produce several ROMs. Both global and local bases were constructed: global methods used all available snapshots; local methods used snapshots associated with one material type, one sub-domain, or both. Three of these methods were used for two further tests. The first test limited the number of snapshots available to the methods. The second test was a smart selection of the number of basis functions based on the fraction of energy retained. Future work will explore the use of autoencoders to form the low-dimensional space. This could improve the accuracy of the ROMs, as the nonlinearity of the autoencoder can be advantageous [31].
The accuracy of these methods was largely independent of whether sets of basis functions were constructed based on the material type of the sub-domains, their location, or both. This means that sets of basis functions could be repeated in the global system for sub-domains with similar material properties. For a reactor, where sub-domains with the same material parameters appear several times, this could significantly improve optimisation by enabling rapid solutions to changes in the geometry, such as the layout or number of fuel assemblies, without an increase in the offline computational cost. It was also found that basis functions could be generated from a set of snapshots that were not produced by the full HFM, without a great loss in accuracy. This can significantly reduce the offline computational cost of creating the basis functions, one of the current limitations of ROM.
Additional testing of the methods found that when the number of snapshots used to construct the ROMs was limited, all three methods suffered a loss in accuracy, but the global method had a greater reduction in accuracy than either domain decomposition method used. This means that if the number of initial snapshots is limited, it would be beneficial to use domain decomposition methods instead. An additional test was performed by smart selection of a number of basis functions where this was chosen based on the fraction of energy retained. It was found that all three methods performed with similar levels of accuracy, despite the different number of basis functions used with each method. For the domain decomposition methods, constructing the sets of basis functions from the material type of domain produced slightly more accurate results with fewer basis functions than when they were constructed from the location of the domain.
Future work will involve applying the ROMs based on the domain decomposition proposed here to progressively more realistic and challenging problems. This includes increasing the spatial dimension to 2D and 3D, the use of transport theory instead of diffusion theory and the use of many more energy groups than the 22 used here. The behaviour seen in these problems will be closer to that of real reactors, and their modelling will benefit more from the reduction of computational cost that the domain decomposition ROMs can provide.