On a Roadmap for Future Industrial Nuclear Reactor Core Simulation in the U.K. to Support the Nuclear Renaissance

Abstract: The U.K. has initiated the nuclear renaissance by contracting for the first two new plants and announcing further new build projects. The U.K. government has recently started to support this development with the announcement of a national programme of nuclear innovation. With respect to modelling and simulation, this programme aims to meet the demand for education and the build-up of a suitably qualified workforce, as well as the development and application of a new state-of-the-art software environment for improved economics and safety. This document supports the ambition to define a new approach to the structured development of nuclear reactor core simulation that is based on oversight rather than on isolated detailed problems and the development of single tools for each of these specific problems. It is based on studying the industrial demand in order to bridge the gap in technical innovation that can be derived from basic research, creating a tailored industry solution that sets the new standard for reactor core modelling and simulation in the U.K. Ultimately, a technical requirements specification has to be developed alongside the strategic approach to give code developers a functional specification that they can use to develop the tools of the future. Key points for a culture change towards the application of modern technologies are identified in the use of DevOps in a double-strata approach to academic and industrial code development. The document provides a novel, strategic approach to achieve the most promising final product for industry, and to identify the most important points for improvement.


Introduction
The United Kingdom has recently signed the contracts for the construction of two new reactor units at the Hinkley Point site [1], which have now progressed in earnest towards a commissioning date between 2025 and 2027, and further reactor projects are at an advanced planning stage [2]; thus, the nuclear renaissance is in progress, at least in the U.K. Computer simulation of processes running in nuclear reactors plays an important role not only from the scientific point of view but also from the point of view of the practical implementation of power generation from fission, and also the independent regulation and accreditation of fission technologies. Thus, the evaluation of the safety of day-to-day nuclear power plant operations, as well as the optimization of the reactor's performance, cannot be done without an application of simulation tools that allow for detailed calculations of the neutronics, thermal hydraulics, mechanics, and other processes that take place within the core of a nuclear power plant. Nor can one ignore that the reactor core itself is only one component of the nuclear power generation system, and feedback from the demand for electricity determines, to a certain extent, the economic drivers for the efficient consumption of nuclear fuel. The various components of a nuclear system, including the reactor core, are part of an end-to-end integrated power generation system that extends across its lifetime from concept design to decommissioning. Each of the components of the system is important, but the reactor core is especially so, given that it represents the 'engine' of power generation and economic value. The planning for, and approval of, the core loading/fuel management that takes place every year during a nuclear power plant's outage is currently the process where modelling and simulation plays the major role in a nuclear power plant's operation.
In this process, the rearrangement of the fuel assemblies that stay in the core, as well as the number of required new fuel assemblies, is determined and it has to be proven to the regulator that the new core configuration stays within the safety envelope of the power plant as it has been defined in the licensing procedure.
The current industrial standard software tools mostly go back to the scientific developments of the 1980s. During this time, the foundation for the simulation tools and procedures for reactor physics and fuel/core management was built by the development of nodal methods [3,4], modern two-dimensional (2D) lattice codes for industrial applications [5-9], the required homogenization procedures [10-12], and pin power reconstruction techniques [13].
The calculation scheme for the methods has been updated by generations of scientists on the level of lattice transport methods, homogenization techniques, and core simulators. However, the two-step scheme, which consists of a lattice calculation to produce a few homogenized cross-section sets for the nodal core simulation, is still standard within industry. In addition, there is the offline procedure for the pin power reconstruction to obtain access to the pin-wise power distribution and the temperatures for the fuel rod modelling that are required for the safety evaluation, even if these processes are, in part, highly automated today [14]. In contrast to the application scheme, a very comprehensive overview of the scientific challenges for reactor physics has been given in the special edition of the PHYSOR 2012 conference [15]. However, this does not imply that a "one-step" scheme is not a viable approach in the long-term, or at least a potential contender (as is later discussed), especially in cases where there are no strict time constraints, e.g., to create a detailed understanding of operational effects, such as local crud deposition. This issue alone would be worthy of an in-depth analysis and discussion, taking into account recent experience with one-step schemes. The main focus of this document is on answering the question: Which developments are promising for future industrial application whilst having the potential to deliver a reasonable gain versus effort outcome? This gain has to be balanced against the computational cost and the time that industry can afford for an acceptable solution, so that a decision can be supported on the timescale at which it has to be made. This leads to a number of questions, which can be summed up as: how good does an answer need to be?
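The condensation step at the heart of this two-step scheme can be made concrete with a minimal sketch. The numbers and group boundaries below are purely illustrative and not taken from any production lattice code; the only property the sketch is meant to preserve is the defining one, namely that reaction rates are conserved under flux weighting:

```python
import numpy as np

def condense(sigma, phi, groups):
    """Flux-weighted condensation of a multigroup cross-section set.

    sigma  : fine-group cross sections (n_fine,)
    phi    : fine-group scalar flux used as the weighting spectrum (n_fine,)
    groups : index ranges defining the coarse groups
    """
    return np.array([np.sum(sigma[g] * phi[g]) / np.sum(phi[g]) for g in groups])

# Illustrative 8-group set collapsed to 2 groups (fast: 0-5, thermal: 6-7).
sigma_f = np.array([0.010, 0.012, 0.015, 0.020, 0.030, 0.050, 0.30, 0.40])
phi_f   = np.array([1.0,   0.9,   0.8,   0.6,   0.4,   0.2,   0.5,  0.3])
two_group = condense(sigma_f, phi_f, [range(0, 6), range(6, 8)])
```

By construction, multiplying each condensed cross section by its coarse-group flux reproduces the fine-group reaction rate exactly, which is why the few-group set remains usable in the nodal simulation.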
This is not a simple question to answer, given that answers are not usually arrived at in a linear, sequential fashion, and may be the culmination of a number of parallel activities that progress at different rates, eventually converging on a solution. This is an argument that would be readily recognised by an experienced programme manager delivering projects to industry, but may not be so familiar to the researcher who wants to push the boundaries of their narrow field of expertise, and it is central to the argument set out in this paper. The discussion in this publication is framed as a robust argument that has to withstand scrutiny among experts. The recommendations are mainly based on expert judgement, code development expertise, and requirement-capturing meetings with industrial partners. The work will make an innovative contribution to the development steps for the U.K.'s national programme in Digital Nuclear Reactor Design [16].

An Example of Current Fuel Management
The fuel management in a light water reactor (LWR) is the most prominent application of modelling and simulation in a modern reactor's operation, since this process is repeated every year when new fuel assemblies have to be introduced into the core and the operational safety of the new core loading has to be demonstrated to the regulator.
The process (see Figure 1) starts with a lattice calculation to determine the burnup behaviour and status of each fuel assembly that will be used in the new core loading. However, this information is often available from the fuel management of the last cycle. The calculations are required to determine the condensed (2 energy groups instead of 100-500 groups) and homogenized (a homogenized material for the fuel assembly instead of a representation of up to 18 × 18 positions) cross-section set. The cross-section matrices typically have to be rearranged from the lattice code output to the input format of the nodal code (either via an internal script, such as in Studsvik's CMS, or via a user script). Based on the cross-section (XS) sets and the available fuel assemblies (new assemblies, and the burnt fuel assemblies of different batches), a first core arrangement is determined as the input for the nodal code. Based on this, an optimization procedure is started, which is either an internal code procedure or a user-manipulated process, that evaluates the achievable cycle length with the given fuel assemblies and the core loading pattern to create a convenient core loading. Out of this nodal simulation, the power histories are extracted for the fuel performance analysis, either by a user script or an automated script. The power map of the fuel assemblies is extracted from the lattice calculation of the respective fuel assembly under a reflective boundary condition and overlaid with the power distribution of the nodal core simulation, once more by either a user script or an automated script, to provide input data for a fuel performance and hot channel analysis. In contrast, the core transient analysis is typically run only on homogenized parameters without detailed feedback at the pin and subchannel level.
In the described example, the most prominent gaps in this process are, on the one hand, the error-prone user interactions that are required for the data transfer, which are often not sufficiently documented (this has already been partly solved by the CMS package of Studsvik). On the other hand, it is the power map that is determined at a stage where neither the correct fuel temperature distribution nor the coolant temperature distribution is available. However, both temperatures would be required to determine the feedback effects that will influence the power production in each pin.
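The overlay step just described can be illustrated with a toy sketch of the classic pin-power reconstruction idea; the 3 × 3 'assembly', the form function values, and the node power below are invented purely for illustration:

```python
import numpy as np

def reconstruct_pin_power(node_power, form_function):
    """Classic offline pin-power reconstruction (illustrative only).

    node_power    : node-averaged power from the nodal core simulation
    form_function : pin-wise power map from the single-assembly lattice
                    calculation under a reflective boundary condition
    Returns pin-wise powers whose mean reproduces the nodal value.
    """
    ff = form_function / form_function.mean()  # normalise to a mean of 1.0
    return node_power * ff

# 3x3 toy "assembly": lattice form function with a mild centre peak.
form = np.array([[0.95, 1.00, 0.95],
                 [1.00, 1.20, 1.00],
                 [0.95, 1.00, 0.95]])
pins = reconstruct_pin_power(node_power=25.0, form_function=form)
```

Because the form function carries no feedback information, the reconstructed peak (here 20% above the node average) is exactly the uncoupled lattice peak, which is precisely the gap discussed above.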
The current industrial application is typically provided either by the plant operator themselves or by some specialized engineering companies hired for the work. Plant operators often either have the software suite delivered by the plant's designer (e.g., BIPR of Rosatom) or they use software from the free market (e.g., HELIOS/DYN3D) that their own experts run. If they are not willing or capable of doing this work, they rely either on the fuel vendor and their software (e.g., AREVA using ARTEMIS ® ) or independent engineering offices, which most often rely on commercial software (e.g., CMS of Studsvik).

Requirement
Some time ago, colleagues from Biology [17] opened an article on scientific software development with the following statement: "Many scientists and engineers spend much of their lives writing, debugging, and maintaining software, but only a handful have ever been taught how to do this effectively: after a couple of introductory courses, they are left to rediscover (or reinvent) the rest of programming on their own. The result? Most spend far too much time wrestling with software, instead of doing research, but have no idea how reliable or efficient their programs are": Greg Wilson [18].
They describe a set of guidelines for successful code development: "We are of the opinion that community adherence to the guidelines described here will increase the impact and usability of computational biology work, without placing an undue burden on the creators of rapidly evolving, scientific code bases" [17].
We intend to go deeper into the problem, since, in our view, scientific software development is only a part of the bigger picture. When we look into scientific software and its advancement to industrial application, the overarching requirement is no longer only technical in nature. As we have alluded to earlier, the answer to the question 'How good is good enough?' can lead to significantly different answers depending on who is asked. How good is good enough depends on the perspective of the individual stakeholder:
• for an industrial user?
• for the regulator?
• for the reactor physicist?
• to convince the public that the power generation process is well-managed?
In general, we can distinguish between different states of development. The most widespread split is given here with an illustrative example as to what each state would mean for the status of scientific software development:
• the state of the science: new scientific developments that are published and may be proven to be useful, e.g., new methods that have been developed and are already being applied in scientific software products and reported in the form of a scientific consensus. However, these methods do not need to be applied in totality in order to advance standard scientific tools.
• the state-of-the-art of science and technology: new scientific approaches that are in a secured state of science, e.g., the methods are available in the most recent scientific software packages, typically produced within academic institutions or national laboratories, and are used in the day-to-day work of scientists. However, for an industrial application, improved software quality, documentation, and licensing is still required.
• the state-of-the-art: the most advanced, currently available technical capabilities, e.g., the most advanced industrially supplied software suite for modelling and simulation that is already licensed through an independent accreditation process.
• the acknowledged rules of technology: good industrial practice that has been proven in application, e.g., code systems that are currently used in the day-to-day work for the fuel management in a nuclear reactor core.
If we were to attempt to directly couple the industrial end-user requirement with the described states of development, there would appear to be significant differences.
In the German Atomgesetz [19], the phrase state-of-the-art of science and technology is used 13 times. This phrase reflects the highest status of legal requirement and requests the application of what is described as the Best Practical Means in terms of both equipment and methods. However, this is only requested for the demonstration of the safe operation of the system. Operational requirements can be lower, as long as there are no implications for system safety. Thus, the German legislature does not request this high status for operational decisions, e.g., the fuel reshuffling and the determination of the cycle length; however, the safety-related simulations need to demonstrate that the new core loading is within the safety envelope. This demonstrates that, even in a highly regulated system, there are still several degrees of freedom. In contrast to the German legislative regulation, the U.K. regulation provides much more freedom: "The U.K. generally operates a goal-setting regime rather than the more prescriptive, standards-based regimes applied in some other countries" [20]. One of the major concerns about a top-down approach is that it can stifle innovation, as it does not allow for the introduction of potentially disruptive technologies that can improve both safety and economic efficiency. This can only be proven by trial and error, and some technical innovations that look good on paper may not stand the test. However, the use of state-of-the-art of science and technology tools can become interesting, even for operational decisions, if there is the expectation of a significant gain in operational planning and economic reactor operation, e.g., when unnecessary safety factors can be eliminated that are only required due to insufficient capturing of physical effects.
Thus, we try to find here a good balance between the top-down approach, which is given by the nuclear regulatory body, and the view of the science community as the response to the regulation, which represents the bottom-up approach.
In summary, the requirements given below were identified in requirement-capturing sessions within the BEIS national programme on Digital Reactor Design, with stakeholders from government, a regulator, and different levels of industrial players taking part. The conclusions centred on the following points, which drive the demand for future development:

• There is a need for a highly qualified workforce for the next generation power plants, and the industry needs to effectively utilise and maintain the high skill levels of Suitably Qualified and Experienced experts in nuclear reactor design;
• There is a requirement for updated quality-assured processes to overcome issues of potentially overly 'conservative designs' to improve economic performance without having a detrimental effect on safety;
• There is a demand for the development of a system architecture for code coupling to facilitate the transfer of high-fidelity information/data between functions of the whole reactor system;
• There is a need for code coupling to extend to the high-fidelity information, potentially down to the atomic scale, for some mission-critical components of the system, particularly within the reactor core, but extending to other areas where such increased fidelity can demonstrate a clear value (either an economic or a safety value);
• The code coupling should form part of an overall integrated software framework that is user-friendly and reduces or avoids knowledge requests on specific details from the user;
• It is important to develop a user-friendly interface that allows us to flatten the learning curve for new specialists and to reduce human-induced mistakes.
However, even if the major request is not technical in nature, the code developers need a final and clear set of technical specifications to be able to develop the tools of the future. Finally, how good the answer needs to be is a function of how important the particular quantity of interest is for the intended application and for the decision to be made. In consequence, uncertainty quantification is intended to inform the quality of the result.
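As a minimal illustration of how uncertainty quantification can inform the quality of a result, the following sketch propagates an assumed 2% (1-sigma) input uncertainty through a one-group toy model by random sampling; the model and all numbers are illustrative and not representative of any real nuclear data library:

```python
import numpy as np

rng = np.random.default_rng(42)

def k_inf(nu_sigma_f, sigma_a):
    """Infinite-medium multiplication factor in a one-group toy model."""
    return nu_sigma_f / sigma_a

# Nominal one-group parameters with an assumed 2% relative (1-sigma) uncertainty.
n = 10_000
nu_sigma_f = rng.normal(0.0050, 0.0050 * 0.02, n)
sigma_a    = rng.normal(0.0040, 0.0040 * 0.02, n)
samples = k_inf(nu_sigma_f, sigma_a)

mean, std = samples.mean(), samples.std(ddof=1)
# The spread (std) indicates how much confidence the input data supports,
# i.e., how good the answer actually is for the decision at hand.
```

The same sampling pattern scales, at least in principle, to the full two-step scheme: the spread of the output quantity, not just its nominal value, is what a decision-maker needs.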

Innovations
At the absolute scientific forefront, there are very innovative and accurate methods, such as full-core Monte Carlo [21] and the full-core method of characteristics [22]. Both approaches can be coupled to (advanced) fluid dynamics [23]. However, the computational demand of these approaches is currently far too high for routine industrial application. Another approach is to couple existing core simulators to advanced computational fluid dynamics (CFD) solvers to deliver a coupled full-core CFD solution based on a reduced CFD model, such as the porous body approach [24]. In addition, CFD can be, and has been, used for local analyses, and can partly be used for developing "CFD-informed" parameters for a subchannel analysis. Even the reduced model approach seems to be computationally too demanding for modern routine application, while the use of CFD-informed parameters seems promising. However, all of these methods have their niche application, e.g., the determination of a reference solution for the validation of new modelling and simulation approaches, since a code-to-code evaluation provides many more opportunities, and much more insight, than would be possible using only experimental data. This does not mean that we no longer need to validate against experiments, but a code-to-code comparison can be used to eliminate a significant number of the sources of uncertainty and obtain a better understanding of the details relating to modelling and simulation. Thus, it is a diverse approach that provides additional information.
Current industrial demand is very often dictated by a very tight time schedule: "The scheduled outage time (18 days) in the newest LWR (Emsland) has been considerably reduced compared to older LWRs (Grafenrheinfeld averaged ~28 days). This has been achieved by improved component reliability and a strictly optimized organization of the work (outage management) to be carried out during the outage" [25]. These time schedules are the key to the outstanding economic performance that has been demonstrated over decades by the Emsland plant. The core loading planning (fuel management) is typically evaluated before the start of the outage. However, during the outage, the physical fuel shuffling has to be performed, sometimes with a slight revision, which has to be evaluated by modelling and simulation in the case of fuel rod failures. This has to be followed by an analysis of the safety parameters and a transient analysis based on a nodal full-core analysis. These time schedules should be kept in mind in the development of advanced modelling and simulation techniques. One new approach is the SERPENT Monte Carlo code [26] for the production of homogenized cross-section sets for the nodal core simulator. A recent performance comparison has highlighted a calculation time of 40 min for a reduced test matrix of 255 transport calculations for a fuel assembly using the current industrial standard tool CASMO-5, compared to 43,000 core minutes with SERPENT [27] on a 24-core machine, for the easy task of cross-section preparation, which does not involve any multiphysics. It is not clear from the publication whether the quality of the Monte Carlo solution would be sufficient to extract the power map that is required for the pin-power reconstruction. However, for a real cross-section preparation for a single fuel assembly, 5000-7500 transport calculations would be more reasonable. This example indicates the challenge for the development of 'new' methods for industrial applications.
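Putting the runtimes quoted from [27] on a common wall-clock footing makes the challenge explicit; the scaling to a 'real' cross-section preparation uses the 5000-7500 transport calculations suggested above (the lower bound is taken here):

```python
# Putting the quoted runtimes on a common wall-clock footing (figures from [27]).
casmo_wall_min = 40            # CASMO-5, 255 transport calculations
serpent_core_min = 43_000      # SERPENT, same test matrix
cores = 24

serpent_wall_min = serpent_core_min / cores       # ~1790 wall-clock minutes
slowdown = serpent_wall_min / casmo_wall_min      # ~45x slower in wall-clock time

# Scaling to a production cross-section set (assumed 5000 calculations):
production_factor = 5000 / 255
serpent_production_days = serpent_wall_min * production_factor / (60 * 24)
```

Roughly 45 times slower in wall-clock terms for the reduced test matrix, and on the order of weeks for a single production cross-section set on the same machine, which is clearly incompatible with the outage schedules quoted above.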
The right number of cores with which to evaluate the performance of future developments is sure to be debated. For example, about 4 years ago, CASL imposed a "1000-core" constraint on performance, whereby a full-cycle depletion with pin-resolved neutronics and coupled subchannel thermal hydraulics (T-H) should take less than 12 h. However, the example of the time limits given by the outage time, and the massive influence of the outage time on the reactor's performance [25], will define the real limit, which has to be weighed against the required accuracy and the available computational resources.
Besides these computationally very demanding new approaches, there is a historic approach from one of the major players in the development of new technologies for nuclear reactor core simulation in the late 1970s that could be suitably applied to the standard approaches that are employed today. The approach originates from Klaus Köbke, who proposed the 'Flusslupe' ('flux magnifying glass') in 1977 [28], an interesting multiscale approach, never applied industrially, that reverses the homogenization procedure (also developed by him) which forms a key component of nodal calculations. The currently applied approach only solves part of the problem, since it leads to the application of pin-power reconstruction. However, Köbke's multiscale proposal could be significantly improved by a multimethodological, thus hybrid, approach that could be linked into a multiphysics environment by coupling the neutronics code with a multiscale fluid dynamics analysis and a multiscale fuel performance analysis.

Positioning of the Development
Several projects on advanced code development have recently been completed or are in operation. The most prominent ones are NURESIM [29,30] and CASL [31,32]. Both projects apply different strategies to develop the next generation of computer codes for nuclear applications, focussed on the situation as well as the demand of the respective community. On the one hand, there is the European approach used in the NURESIM programme, which is mainly based on the coupling of existing codes delivered by different partners to obtain an improved representation of the coupled physical processes taking place in a nuclear reactor. On the other hand, there is the U.S. approach of the CASL program, which follows a twofold strategy: developing new high-fidelity methods and tools and coupling them to obtain extremely detailed high-fidelity coupled results ('CASL advanced'), and providing the coupling of existing tools, a designation made early in the CASL project (~2011-2012) ('CASL baseline'). Unfortunately, the scientifically attractive approach of CASL advanced (CASL now has "VERA-CS", the core simulator portion of VERA, which includes the advanced MOC code MPACT with direct coupling to the sub-channel analysis tool CTF and one-way coupling to the fuel performance code BISON) has led to the emergence of both perceived and real barriers to industrial adoption. Early versions of the VERA suite imposed a relatively high computational demand, and whilst this has been reduced due to optimisation of the code and advances in computational technology, there remains a perception that such an approach is impractical for current industrial users, not only based on the required computer power but also due to the often strict time schedules described above.
However, as already discussed for the cases that aim to create a deeper understanding without time constraints, first applications have been demonstrated; e.g., the multicycle simulations of Watts Bar 1 and the start-up of Watts Bar 2 have helped to convince industry that VERA is a realistically applicable tool. The point is that acceptance by industry of each kind of new method and piece of software is a long-term proposition, but progress is being made. In addition, and in the long-term more significantly, the introduction of advanced methods requires extensive validation and verification in a prescriptive regulatory environment, such as that imposed by the Nuclear Regulatory Commission (NRC), leading to a protracted time for industry acceptance. CASL is also working with the NRC to address the licensing issues that are associated with adopting VERA for safety-related analyses. This may be an even longer-term proposition than industrial acceptance, but the first steps have been taken, with the verification and validation of VERA as a key activity for CASL. These points have been readily demonstrated within the current and planned outputs of the CASL programme, with a focus on extensive benchmarking and validation to support future regulator acceptance. Once regulatory acceptance can be assured, the application of the software tools for safety-related cases with a relaxed time requirement and a high demand for detail should take off.
In the U.K., we currently face a very specific problem: the new generation of codes should be ready for industrial application (developed, tested, validated, and licensed) when the first reactors of the nuclear renaissance go into operation. On the one hand, this opens a time window of about 10 years to improve the modelling and simulation tools that will define the future industrial standard. On the other hand, this could help the U.K. to develop an answer to the future demand for a workforce related to the operation and support of the power plants of the nuclear renaissance, and to upskill the current workforce. The aim is a tailor-made system for the U.K., as desired and supported by the government, focussed on an industrial application that aims to be more advanced than the current state-of-the-art industrial tools by closing specific gaps in the current robust modelling and simulation approach. The aim is to leverage the optimization potential to improve the economics of fuel use and, thus, of reactor operation. In addition, the development and testing of new methods has the potential to attract new students into universities and eventually into industrial code development. Validation will create education and development opportunities in industry, and the licensing process for the advanced tools will involve the regulator as well as the industry as potential employers of both the technologies and the developers. This approach could address part of the strong demand to educate future experts in nuclear reactor simulation for operators, regulators, and fuel vendors in light water reactor technology to support the nuclear renaissance. There is another, complementary, possible approach: simply buying the code systems and attracting experienced users.
However, this is, in our eyes, not a sustainable approach, since it neither leads to the required upskilling of the workforce nor solves the long-term requirements for specialists that are needed for a 60-year lifetime of a nuclear power plant.
In general, we propose an intermediate development approach (see Figure 2). It relies, on the one hand, upon approved existing methods and existing code systems, such as lattice calculations and nodal methods, as a basis for the development (e.g., the existing U.K.-developed ANSWERS software suite [33], by Safety and Risk Consultants (UK) Limited, John Wood Group PLC, Dorchester, UK). This strategy seems to be the most probable approach due to budgetary limitations. On the other hand, we propose to apply more advanced, innovative methods and strategies where an improvement in the representation of the physical phenomena will lead to a clear reduction in historically introduced extra margins, and where an improved physical representation will reduce the challenges due to the feedback effects. The most promising example of this approach is the integration of the formerly externally applied pin-power reconstruction. The currently applied, historic approach suffers from the loss of the physical coupling of thermal hydraulics, thermodynamics, and neutron physics at the pin-power level due to the absence of a true coupling, which requires the introduction of an additional safety reserve (safety factor), as it is obvious to a reactor physics specialist that the hottest fuel rod will be damped in power by the feedback effects. The major part of this limitation, the offline use of a pin-power reconstruction, was introduced due to modelling challenges and limited computer power at the time the calculation scheme was developed. It could be overcome with the computer power currently available for industrial applications if a smart approach is used that avoids a coupled pin-by-pin full-core calculation.
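The damping of the hottest rod by feedback can be illustrated with a deliberately exaggerated toy model: a fixed-point iteration between pin power and fuel temperature with an assumed negative power coefficient. All coefficients below are invented for illustration and are far from realistic plant values:

```python
def pin_power_with_feedback(p_uncoupled, alpha=-0.01, dT_per_power=3.0,
                            iterations=50):
    """Toy fixed-point model of fuel-temperature (Doppler-like) feedback.

    p_uncoupled  : pin power predicted without feedback (relative units)
    alpha        : assumed (deliberately exaggerated) power defect per K
    dT_per_power : assumed fuel temperature rise per unit power (K)
    """
    p = p_uncoupled
    for _ in range(iterations):
        dT = dT_per_power * p                 # temperature follows the power
        p = p_uncoupled * (1.0 + alpha * dT)  # power is damped by the feedback
    return p

# The uncoupled hottest pin (peak 1.5) is damped more strongly than an
# average pin, so an offline reconstruction overestimates the peak.
hot = pin_power_with_feedback(1.5)   # converges to 1.5 / 1.045
avg = pin_power_with_feedback(1.0)   # converges to 1.0 / 1.03
```

In this toy model, the uncoupled peak of 1.5 relaxes to about 1.44 while an average pin moves far less; the relative damping grows with the uncoupled power, which is exactly the effect an offline pin-power reconstruction cannot capture.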
The offline application of a pin-power reconstruction is an important limitation, since all safety limits for the nuclear fuel are typically defined at the level of single fuel rods; thus, detailed, pin-wise results are necessary, at least for the most limiting fuel assembly, to determine the safety parameters. Using the described approach, we propose a development that is more innovative than the NURESIM and CASL baseline approaches, but less risky and ambitious than CASL advanced. In particular, our approach should provide a clear focus on the industrial application requests in the U.K., ideally stronger than that of CASL and NURESIM.
The current methods and procedures (lattice calculation, homogenization and condensation, nodal methods) have been proven to be highly efficient and of sufficient quality to answer wide parts of the industrial demand. Thus, the current calculation scheme can be seen as acknowledged rules of technology. Nevertheless, the above-described approximations that are applied (an offline pin-power reconstruction) create limitations in the applicability and reliability of the results for regulatory issues. In these cases, the historic approach requests that we add an additional safety margin to take into account the limitations of the models, e.g., insufficient capturing of coupled physical effects. This approach is also called the "conservative approach".
In addition, the coolant and the fuel temperature feedback effects, which will reduce the power in the hottest fuel rod, cannot be determined correctly; thus, the calculation results for the fuel as well as the cladding temperature will be too high.
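The damping of the hottest rod's power by the feedback effects, which the offline reconstruction cannot capture, can be illustrated with a toy fixed-point iteration; all coefficients and temperatures below are illustrative assumptions, not plant data:

```python
# Minimal sketch: fixed-point iteration showing how fuel temperature
# feedback damps the power of the hottest pin compared with an offline
# (uncoupled) pin-power reconstruction. Numbers are illustrative only.

def coupled_hot_pin_power(p_offline, t_coolant=580.0, r_thermal=0.3,
                          alpha_doppler=2.0e-4, t_ref=600.0, tol=1e-8):
    """Return the feedback-corrected hot-pin power (same units as input).

    p_offline     : peak pin power from the offline reconstruction (W/cm)
    r_thermal     : assumed fuel-to-coolant thermal resistance (K per W/cm)
    alpha_doppler : assumed linear Doppler power-defect coefficient (1/K)
    """
    p = p_offline
    for _ in range(200):
        t_fuel = t_coolant + r_thermal * p          # simple conduction model
        p_new = p_offline * (1.0 - alpha_doppler * (t_fuel - t_ref))
        if abs(p_new - p) < tol:
            break
        p = p_new
    return p

p0 = 450.0                       # offline (uncoupled) peak pin power, W/cm
p_fb = coupled_hot_pin_power(p0)
print(f"offline: {p0:.1f} W/cm, with feedback: {p_fb:.1f} W/cm")
```

In a real calculation scheme, the same fixed-point structure appears between the neutronics, the fuel performance model, and the subchannel thermal hydraulics; the conservative approach replaces it with a fixed extra margin.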
Our major driver will be to provide innovative software that appropriately fits our industrial purpose within the timescale of the nuclear renaissance. Based on this boundary condition, the major criteria for the development are to:
• add models that are significant for industrial applications by creating a clear gain using an acceptable amount of extra computational effort;
• develop models that are feasible within an acceptable calculation and development time envelope;
• apply promising new approaches that offer significant progress;
• add models for improving the accuracy/performance or satisfying the regulatory request (reduced conservatism) where it is efficient to do so;
• apply what is useful to fulfil the tasks we want/need to resolve;
• keep the code system user-friendly and relieve the user from knowledge requests on specific details;
• develop a user-friendly graphical user interface that allows us to reduce the learning curve for new specialists, and reduce the number of human-induced mistakes;
• develop a common system architecture for the required coupling of the codes; and
• create a flexible approach that incorporates a choice as to where and when a high-fidelity resolution and the resulting higher computational demand is required.

Short-Term Approach
To achieve a fully coupled calculation to determine the pin power of the hottest pins, either a very costly coupled full-core pin-by-pin calculation is required, or a smart approach is necessary to keep the computational demand at a level that is acceptable for industrial application. In the smart approach, only the hottest fuel assembly/assemblies will be resolved down to the pin level, while the fuel assemblies that are operated far from the design and safety limits will be handled with the traditional homogenized approach (a nodal approach using a representative pin and coolant channel). Thus, the costly, detailed solution will be concentrated on the area of high interest, where a high-quality result is useful to reduce over-pessimism, while lower-fidelity results with a higher error are accepted in the other areas. This hybrid, multiscale, and multiphysics approach consists of different building blocks that have to be successively put together in a stepwise approach. The major building blocks for this hybrid approach are given in Figure 3. In a useful and easily applicable stepwise approach, the following steps could be taken. If sufficient resources are available, some steps can be performed in parallel:

• The development of a user-friendly unstructured mesh neutron transport code for the simulation of the resolved region with sufficient accuracy
• Linking the neutron transport code into an existing nodal code (e.g., a nodal diffusion code) for full-core simulation, including the management of the boundary condition transfer and the cross-section handling
• Coupling the nodal code to a code for improved fuel rod simulation in the resolved region (each pin) as well as in the unresolved fuel assemblies (one representative pin)
• Coupling the nodal code to a code for improved fluid dynamic modelling (a subchannel analysis code) for the resolved region (each subchannel in the fuel assembly) as well as in the unresolved fuel assemblies (one representative channel)

Figure 3. The scheme for the smart high-fidelity coupled core simulator to replace the pin-power reconstruction for determining the safety limits of the fuel using a hybrid approach and improved physical coupling.
The multiscaling scheme is visualized in Figure 4. While the whole core calculation is still based on the classical approach of a nodal core simulator (homogenized neutron physics, and a representative fuel rod and coolant channel), the region of interest (hottest fuel assembly) is resolved down to the pin level in each of the used codes: a transport code with boundary conditions from the full-core calculation, a fuel performance code, and a subchannel analysis code. Depending on the quality of the modules (fuel thermodynamics and fluid dynamics) of the core simulator, the typical representative pin and fuel channel can either be modelled in the core simulator or by a multiscale link to the subchannel and fuel performance code. The result will be a computational model of the reactor core, where the hottest fuel assembly (the limiting case) is resolved in all details while the other, less limiting, fuel assemblies are treated as homogenized systems, following the approach to invest the computational power into the area of highest interest. Thus, it provides us with the opportunity to choose where to invest the higher computational effort into higher fidelity. The embedding of the hybrid multiscale approach and the additional higher-quality tools (the fuel performance code and the subchannel analysis code) into a regular nodal core simulator will offer the opportunity to apply the higher-fidelity tools when these are required and when it is worth investing the additional computational effort.

Figure 4. The multiscaling scheme for the smart, high-fidelity coupled core simulation for industrial applications using one resolved region that provides pin-level data, while the rest of the core is calculated based on the nodal resolution and approximations.
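The hybrid multiscale scheme can be sketched as a simple driver; the two solver functions below are stand-in toy models (the power shapes and numbers are assumptions for illustration, not the interfaces of any actual nodal or transport code):

```python
# Hedged sketch of the hybrid driver: a nodal full-core solve provides
# homogenized assembly powers; only the limiting assembly is re-solved
# pin-by-pin. Both "solvers" here are toy models for illustration.
import math

def nodal_full_core_solve(n_assemblies=9):
    """Toy nodal solver: homogenized assembly powers, normalized to mean 1.0."""
    raw = []
    for i in range(n_assemblies):
        x = -1.0 + 2.0 * i / (n_assemblies - 1)          # radial position
        raw.append(math.cos(0.5 * math.pi * x) + 0.05)   # cosine-like shape
    mean = sum(raw) / n_assemblies
    return [p / mean for p in raw]

def pin_transport_solve(assembly_power, n_pins=289):
    """Toy pin-resolved solve inside one assembly (17 x 17 = 289 pins).

    The nodal assembly power acts as the normalization/boundary condition;
    the intra-assembly peaking shape is an assumption for illustration.
    """
    shape = [1.0 + 0.08 * math.cos(math.pi * k / (n_pins - 1))
             for k in range(n_pins)]
    mean = sum(shape) / n_pins
    return [assembly_power * s / mean for s in shape]

# Hybrid driver: full-core nodal solve, then resolve only the hottest assembly
assembly_powers = nodal_full_core_solve()
hot = max(range(len(assembly_powers)), key=lambda i: assembly_powers[i])
pin_powers = pin_transport_solve(assembly_powers[hot])
print(f"hot assembly {hot}: P = {assembly_powers[hot]:.3f}, "
      f"peak pin = {max(pin_powers):.3f}")
```

The design choice this sketch illustrates is exactly the one argued above: the pin-level computational cost scales with the number of resolved assemblies, so restricting the detailed solve to the limiting assembly keeps the overall demand close to that of a classical nodal run.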
The validation of the developed multiphysics coupled code system will be an essential point in creating a licensing grade code system for the future. Two different levels of validation have to be considered: one for the scientific tool that will be used to demonstrate the gain of the new hybrid, coupled approach, and a much more sophisticated one for the licensing grade code system for industrial application. However, the industrial, licensing grade system will be assembled out of already validated and licensed code modules, which will reduce the effort for the validation of each of the primary modules. The major challenge will be the verification and validation of the coupling process and a final validation of the multiphysics system [34]. To fully leverage the proposed multiphysics approach, the system should, as a next step, be extended to the requirements of transient analysis as soon as possible.

Longer-Term Objectives
Further highly innovative steps could be performed to improve the approximations of the nodal method as a part of the central core simulator and to close the gaps in the nodal technology. Nodal codes were developed for transient simulations in the 1980s [3,4]. At that time, the computational power was rather limited; thus, limiting the size of the nodal coupling matrix was of utmost importance to obtain results in an acceptable time and to make the codes attractive for industrial application as soon as possible. This limitation led to the solution of coupling each node to its neighbouring node via their common surface by only one integral parameter (the neutron current). Another characteristic point is the typical use of three one-dimensional approximation functions within the nodes, applying a transverse leakage method [3,4] to link the three dimensions. It is still unclear how well these approximations work within the nodes in detail. The second generation of nodal codes has been improved with the so-called 2D/1D nodal approach, e.g., [35][36][37]. This approach is based on a 2D nodal mathematical expansion solution within each node in the radial direction, where the major heterogeneity appears. The one-dimensional (1D) axial solution is linked to the 2D radial solution once more via the transverse leakage term. However, even in the 2D solution, the quality of the mathematical expansion is only as good as the boundary conditions that can be provided by the streaming terms via the surfaces of the nodes.
In a first step, we propose to investigate the quality of the current approximations used in nodal codes, and thus the coupling between the nodes, by visualizing the mathematical approximation functions on a 2D core plane as given by the nodal approximation, considering the intra-nodal flux distributions (based on the mathematical expansion solutions) in addition to the normally analyzed nodal flux, using advanced mathematics software such as Maple or Mathematica. Based on the analysis of the graphical results in different core layers, a decision can be made on the next steps in the development of the nodal code. In general, two scenarios are possible: either the mathematical approximation functions within the node (e.g., the nodal expansion terms) should be improved, or the number of coupling points between the nodes can be increased, since the size of the coupling matrix is no longer limited by the computational power. However, it would be important to have a detailed analysis before this decision is made. Another point is the so-called history effect, which considers the fuel temperature and the moderator density effect on the burnup behaviour [38,39]. Both effects are only approximated in the lattice calculation, since detailed knowledge of the exact fuel temperature, as well as the moderator density, is not available at this stage. However, both effects have a first-order influence on the breeding of plutonium and, thus, on the burnup behaviour of the fuel assemblies. Thus, more effort would be required here to resolve the problems that are created by the two-step approach.
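A hedged sketch of the proposed diagnostic, in Python rather than Maple or Mathematica: tabulate the intra-nodal expansion functions of an NEM-type nodal code on a fine grid so that the approximation inside a node can be plotted and inspected. The quartic basis follows the standard NEM form; the expansion coefficients a1..a4 below are invented purely for illustration.

```python
def nem_basis(xi):
    """Standard NEM expansion functions on the unit node, xi in [-1/2, 1/2]."""
    return (
        1.0,                                       # f0: node average
        xi,                                        # f1: linear tilt
        3.0 * xi**2 - 0.25,                        # f2: quadratic
        xi * (xi - 0.5) * (xi + 0.5),              # f3: cubic
        (xi**2 - 0.05) * (xi - 0.5) * (xi + 0.5),  # f4: quartic
    )

def intra_nodal_flux(xi, coeffs):
    """phi(xi) = sum_i a_i * f_i(xi); coeffs[0] is the node-average flux."""
    return sum(a * f for a, f in zip(coeffs, nem_basis(xi)))

# f1..f4 integrate to zero over the node, so the expansion preserves the
# node-average flux a0 (checked here with a midpoint rule, 2000 cells).
n, h = 2000, 1.0 / 2000
for i in range(1, 5):
    integral = sum(nem_basis(-0.5 + (k + 0.5) * h)[i] for k in range(n)) * h
    print(f"f{i} node integral ~ {integral:.2e}")

# Tabulate an example intra-nodal shape for plotting/inspection
coeffs = (1.0, 0.12, -0.05, 0.02, -0.01)           # illustrative coefficients
profile = [intra_nodal_flux(-0.5 + (k + 0.5) * h, coeffs) for k in range(n)]
print(f"node-average of expansion ~ {sum(profile) * h:.6f} (a0 = {coeffs[0]})")
```

Plotting such profiles side by side across a 2D core plane, with the coefficients taken from an actual nodal solve, is exactly the kind of visual check of the intra-nodal approximation quality proposed above.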
Another important point is to extend the application range of the advanced core simulator to the different types of transients that are driven by processes outside of the model range of the core simulator, e.g., loss of coolant accidents or anticipated transients without scram [40]. These transients require a strongly coupled interaction between the primary or even secondary circuit and the core simulator. To be able to simulate such transients, the coupling of the developed core simulation code system to a system code would be required to model the processes outside of the core and to create realistic boundary conditions that could then be fed into the core simulator. The boundary conditions for the investigation of anticipated transients without scram are given by the response of the primary as well as the secondary system. In addition, the reactor's operation and management system plays an essential role in these transients.
Improved user friendliness and a state-of-the-art graphical user interface are judged to be important, which is also applicable to the underlying cross-section preparation process. On the level of the lattice code, possible objectives are:
- improved user friendliness;
- a convenient user interface;
- a flexible geometry representation for advanced fuel assembly configurations; and
- creating automatic procedures and data transfer to avoid error-prone data handovers in conjunction with hand work.
It is hard to believe but, from the point of view of usability and user friendliness, modern reactor simulation tools are still, in the best case (e.g., the commercial codes HELIOS or CASMO), at the level of the beginning of the 2000s or, in the worst case, at the beginning of the 1990s. This means that, for producing input files and checking the results, the user relies only on text files and text editors (in many cases even without syntax highlighting), without having access to modern visualization techniques or a graphical user interface for input generation. Some approaches have been developed to improve the user interfaces of HELIOS and CASMO. A new approach with a reasonable number of pre-produced fuel assembly geometries has been launched recently with SCALE/POLARIS [41]. The generation of the cross-section sets and the required data organization and transfer are often still manual and performed using user-written scripts. As a future standard, the functionality of Studsvik's CMS could be a first objective.
An upcoming trend that will influence the future application and trust in the results is uncertainty quantification [42]. In the case of the U.K., it would be a reasonable approach to link the code system to an uncertainty analysis for future applications, ideally via a connection to the COSSAN X package of the Risk Institute of the University of Liverpool [43]. However, it would be worthwhile to look into other approaches for sensitivity and uncertainty analyses [44] as well as advanced methodologies for model validation.
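The kind of uncertainty analysis such a link would provide can be illustrated with a minimal forward Monte Carlo propagation sketch; the response model, the uncertain inputs, and all numbers below are assumptions for illustration, not the COSSAN X interface:

```python
# Minimal sketch of forward uncertainty propagation: sample uncertain
# inputs, run the (here: toy) core model for each sample, and report
# statistics on the safety-relevant output. Numbers are illustrative.
import random
import statistics

def peak_clad_temperature(power_factor, inlet_temp_K):
    """Toy response model: peak cladding temperature for given inputs."""
    return inlet_temp_K + 120.0 * power_factor   # assumed linear response

random.seed(42)                                  # reproducible demo
samples = []
for _ in range(10_000):
    power_factor = random.gauss(1.0, 0.03)       # assumed 3% power uncertainty
    inlet_temp = random.gauss(563.0, 2.0)        # assumed 2 K inlet uncertainty
    samples.append(peak_clad_temperature(power_factor, inlet_temp))

samples.sort()
mean = statistics.fmean(samples)
p95 = samples[int(0.95 * len(samples))]
print(f"mean = {mean:.1f} K, 95th percentile = {p95:.1f} K")
```

Reporting a high percentile of the output distribution instead of a single conservative bounding value is precisely how uncertainty quantification can recover margin from the conservative approach described earlier.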

Code Development Strategy
A key challenge for the development of advanced industrial application software is to define the right strategy, with a good balance between the development of new models, their testing in a scientific environment, and their transfer into a licensing grade industrial product. The development of a licensing grade code with all of the requested quality assurance procedures and extensive validation procedures, as well as interaction with the regulator, is far beyond the scope of a university and even a national laboratory, such as the National Nuclear Laboratory (NNL). Thus, we propose a parallel or double-strata approach (Figure 5). The academic-level code development will be performed on an open source level to provide a prototype for verifying the quality and the gain of new methods and for testing their capabilities. After a successful test, a newly developed method will be forwarded to the industry partner, including the requested knowledge transfer. Once the fundamental physics approach has been prototyped within a test bed environment, an industrial partner would then reprogramme the code, taking account of good software development practice and incorporating any experience gained in the testing phase during academic development. At this stage, the necessary quality assurance and user documentation would be created, and knowledge preservation would be assured, which would otherwise be problematic in the current academic environment in the U.K. The developments for industrial application have to follow the modern IEEE standards for software development as defined in ISO/IEC/IEEE 12207:2017. The academic partner developing the module will then be consulted by the industrial partner regarding the implementation of the new approach. The industrial partner will be responsible for verification, validation, and licensing.
Besides keeping the coding quality standards, the key points for licensing will be: comprehensive methods as well as input documentation, a sufficient validation of the results against experiments as well as against various reference calculations (ideally an industrial standard tool as well as a higher-quality solver, such as Monte-Carlo for neutronics), and a robust testing procedure on a reasonable matrix for the licensed width of the application. On the one hand, this approach will provide IMPACT (a measure for the outreach of the research into an industrial application for university work and the National Lab). On the other hand, it provides the industrial partner with almost immediate access to the innovative methods developed by the academic partners and the required transfer of knowledge for faster implementation. A very interesting approach to the evaluation of the quality of codes has been published by the Sandia National Laboratory. "We propose the Predictive Capability Maturity Model (PCMM), which is a structured method for assessing the level of maturity of M&S (modelling & simulation) efforts. The purpose of the PCMM is to contribute to decision-making for some engineering system applications. The six M&S elements used to assess maturity in this model are (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. These six elements are important in judging the trustworthiness and credibility of an M&S effort that deals primarily with the numerical solution of PDEs (partial differential equations) describing the engineering system of interest" [45]. Following this approach will provide a framework for judging the quality of new code developments to convince the regulatory bodies as well as the stakeholders to trust the modelling and simulation results.
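As a sketch of how the six PCMM elements quoted above could be tracked in practice for a code module: the maturity levels 0-3 follow the PCMM convention, while the example scores below are invented purely for illustration.

```python
# Hedged sketch of a PCMM [45] maturity record for one code module.
# The overall maturity is bounded by the weakest element, so the summary
# reports the bottleneck that should be addressed next.

PCMM_ELEMENTS = (
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
)

def pcmm_summary(scores):
    """Return (weakest element, its level), i.e., the maturity bottleneck."""
    for name in PCMM_ELEMENTS:
        if name not in scores or not 0 <= scores[name] <= 3:
            raise ValueError(f"missing or out-of-range score: {name}")
    weakest = min(PCMM_ELEMENTS, key=lambda name: scores[name])
    return weakest, scores[weakest]

example = {                                  # invented example scores
    "representation and geometric fidelity": 2,
    "physics and material model fidelity": 2,
    "code verification": 3,
    "solution verification": 1,
    "model validation": 2,
    "uncertainty quantification and sensitivity analysis": 1,
}
name, level = pcmm_summary(example)
print(f"maturity bottleneck: {name} (level {level})")
```

Maintaining such a record per module, and reviewing it with the regulator, would make the maturity argument for the assembled multiphysics system explicit rather than implicit.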
Within the current phase of the digital reactor design programme, the current partners have developed a number of coupled-code technology demonstrators as a proof of concept [34] of both the technical approach and, more significantly, the development approach that we outline in this paper.
These activities demonstrated that strong interaction with Wood plc. (formerly AMEC Foster Wheeler) and EdF UK is required to provide a licensing grade code system that can be based on upgrading their industrially applied nodal code PANTHER [46] and the lattice code system WIMS [47]. The fuel performance code could be delivered by NNL with their ENIGMA code [14], which is already an industrial-grade application. The transport code is already available at the University of Liverpool [48,49]. A decision on the subchannel analysis code has to be taken: either upgrading the code VIPER that is used in the U.K., or accessing an alternative code. A decision on a possible system code should be taken in close cooperation with future customers.
A culture change is essential for the success of the roadmap for future industrial nuclear reactor core simulation as a part of a future Integrated Nuclear Digital Environment (INDE®) [52] in the U.K. to support the nuclear renaissance. We propose to take advantage of current trends of modern software development, such as DevOps, a software development methodology or process that combines software development (Dev) and information technology operations (Ops) (Figure 6). The goal of DevOps is to shorten the development life cycle, in our case of codes, while also delivering features, fixes, and updates as frequently as possible in close alignment with customer demands. It is about moving faster and being more agile without sacrificing the security or reliability of the codes. "This is the situation where DevOps seeks to address: how to get development, operations and other groups within the organisation to collaborate around a set of shared goals, to deliver software faster and more reliably to customers and end users?
Key technical practices that underpin a DevOps initiative include getting dev and ops teams to standardize on a common set of agile processes and tools for software delivery" [53]. On the one hand, many of the processes are already partly established in the development process of modern codes in reactor physics in the academic development strata, e.g., version control or automated/semi-automated testing. On the other hand, there are very strict regulatory rules for licensing grade code development, which require a very carefully established development process with extensive quality assurance. However, for both worlds, the same process could ultimately be used, but on different time scales and according to different priorities. The academic code development should quickly act to prove the advantages of new approaches and to integrate user feedback, while the focus of the licensing grade development has to be on applying only the promising new approaches and providing assurance that the required high-quality standards that are essential in nuclear safety have been met.
DevOps is more a cultural perspective on how all contributing parties should be engaged in working the right way, and on how to get the cross-fertilization between the academic world and industrial code development to work properly. It aims to include every player who has a stake in the game by involving them as early as possible in the collaborative working process. For this approach to work in the right way, DevOps should be seen as a commitment to continual learning and improvement rather than an end state. Thus, DevOps should be recognized as a culture, not a role! See [53].
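A minimal sketch of the kind of automated regression gate both strata could share in such a DevOps pipeline is given below; the test-case names, baseline values, and tolerances are invented placeholders, with the licensing grade stratum simply running the same gate with tighter tolerances:

```python
# Hedged sketch of a shared regression gate: compare new code results
# against stored reference results. The academic stratum runs it with
# loose tolerances for fast feedback; the licensing grade stratum runs
# the identical gate with tight tolerances. All data are placeholders.

def regression_gate(results, reference, rel_tol):
    """Return a list of failing case names; an empty list means the gate passes."""
    failures = []
    for case, ref_value in reference.items():
        value = results.get(case)
        if value is None or abs(value - ref_value) > rel_tol * abs(ref_value):
            failures.append(case)
    return failures

reference = {"keff_hzp": 1.00012, "peak_pin_power": 1.482}   # stored baselines
new_run   = {"keff_hzp": 1.00015, "peak_pin_power": 1.479}   # current results

academic_failures  = regression_gate(new_run, reference, rel_tol=1e-2)
licensing_failures = regression_gate(new_run, reference, rel_tol=1e-5)
print(f"academic gate failures: {academic_failures}")
print(f"licensing gate failures: {licensing_failures}")
```

Using one gate with two tolerance profiles is one way to realize the "same process, different time scales and priorities" principle argued above: the code and the test matrix are shared, only the acceptance criteria differ between the strata.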
Widening the view from pure code development questions for the future state-of-the-art core simulation to the aims of the Digital Reactor Development project leads to an analysis of the opportunities for this project. The following points have been made to close the gaps between academic research and industrial application of high-level digital simulation tools for the future:
• An advanced level of computational modelling will require the use of High-Performance Computing (HPC), which is widely available in academic research to support the nuclear industry but which has, until recently, seen limited adoption within the industry for specific challenges.
• There is a need to overcome the perceived technical risks associated with the use of high-fidelity simulation to support industry-relevant challenges.
	o These risks may be associated with a lack of credibility of the models, which could be addressed through further validation.
	o The schedule risks associated with high-fidelity simulation can be mitigated by focusing on ease of use and on improved methods for data transfer.
• There is a need to understand how to validate the emergent behaviour present in complex systems, which could arise as a result of coupling across multiphysics and multiscale domains.
• Notwithstanding this challenge, significant progress can be made by demonstrating code coupling against industry-led challenge problems.
• Improved data transfer between the real world and the digital environment is needed to ensure the availability of validation data throughout the entire life-cycle of the nuclear system.
• Consistent data management within the digital environment is needed to ensure sufficient knowledge management and long-term data preservation throughout the entire life-cycle of the nuclear system.
• The use of a digital prototype would facilitate virtual reactor design and assembly in order to reduce the cost of, and risks associated with, reactor design and construction [34].
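To make the code-coupling and emergent-behaviour point concrete, the following is a minimal sketch of a Picard (fixed-point) coupling loop between a single-physics "neutronics" model and a single-physics "thermal" model. The physics is deliberately trivialized and every coefficient is hypothetical; the point is only the structure of the loop whose converged, self-consistent state would then be validated against industry-led challenge problems.

```python
# Minimal Picard (fixed-point) coupling loop between a toy neutronics
# model and a toy thermal feedback model; all coefficients are illustrative.

ALPHA_DOPPLER = -2.0e-5   # reactivity per kelvin of fuel temperature (toy)
T_COOLANT = 560.0         # coolant temperature, K (toy)
R_THERMAL = 0.5           # fuel-to-coolant thermal resistance, K per MW (toy)
P_NOMINAL = 100.0         # power at zero feedback reactivity, MW (toy)

def power_from_temperature(t_fuel: float) -> float:
    """Toy 'neutronics': power responds linearly to Doppler reactivity."""
    rho = ALPHA_DOPPLER * (t_fuel - T_COOLANT)
    return P_NOMINAL * (1.0 + 100.0 * rho)

def temperature_from_power(power: float) -> float:
    """Toy 'thermal hydraulics': fuel temperature from a lumped resistance."""
    return T_COOLANT + R_THERMAL * power

def couple(tol: float = 1e-6, max_iter: int = 100):
    """Iterate the two single-physics models to a self-consistent state."""
    t_fuel = T_COOLANT
    for iteration in range(1, max_iter + 1):
        power = power_from_temperature(t_fuel)
        t_new = temperature_from_power(power)
        if abs(t_new - t_fuel) < tol:
            return power, t_new, iteration
        t_fuel = t_new
    raise RuntimeError("coupling did not converge")
```

The converged state of the coupled system differs from what either single-physics model predicts in isolation; validating exactly this kind of emergent, feedback-determined solution is what the challenge problems above are for.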
The given points form a vision for an integrated nuclear digital environment [52], the end state of the currently running U.K. national programme on Digital Reactor Design [34], into which we foresee the roadmap for future industrial nuclear reactor core simulation feeding.

Conclusions
The U.K. has initiated the nuclear renaissance and has recently begun to support this development with the announcement of a national programme. This document supports the ambition of developing a new state-of-the-art software environment for improved economics and safety. We aim to achieve this through a new, oversight-driven approach to structured development, rather than through the development of single tools for specific detail problems. The work is based on studying industrial demand in order to create a tailored industry solution that sets the new standard for reactor core modelling and simulation in the U.K.
Ultimately, however, a technical specification has to be developed jointly by academia and industry, alongside the strategic approach, to give code developers a proper functional specification from which they can develop the tools of the future. We have laid out the industrial requirements and the different approaches to the regulators' software requirements. This was followed by a short analysis of different approaches to new, innovative solutions and of the intended positioning, driven by industrial demand in the U.K., relative to other leading international projects. We propose a flexible multiscale scheme for smart, high-fidelity coupled core simulation for industrial applications, in which one or more resolved fuel assemblies provide pin-level data using a transport solver while the rest of the core is calculated at nodal resolution with the corresponding approximations. This should be accompanied by coupling to advanced codes to deliver improved fuel performance modelling and a resolved fluid dynamics solution in the resolved fuel assembly.
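The proposed flexible multiscale scheme can be caricatured as a thin dispatch layer that routes flagged assemblies to an expensive pin-resolved transport solve and all remaining assemblies to a fast nodal solve. The sketch below is purely structural; every name, interface, and number is hypothetical and stands in for real solver calls.

```python
# Hypothetical dispatch layer for the proposed flexible multiscale scheme:
# assemblies flagged for high fidelity receive a pin-resolved transport
# solve, the rest of the core a nodal solve. All names are illustrative.

from dataclasses import dataclass

@dataclass
class AssemblyResult:
    assembly_id: str
    fidelity: str        # "transport" (pin-level) or "nodal"
    mean_power: float    # relative assembly power (toy numbers)

def transport_solve(assembly_id: str) -> AssemblyResult:
    """Stand-in for an expensive pin-resolved transport calculation."""
    return AssemblyResult(assembly_id, "transport", 1.05)

def nodal_solve(assembly_id: str) -> AssemblyResult:
    """Stand-in for a fast nodal calculation with homogenized data."""
    return AssemblyResult(assembly_id, "nodal", 1.00)

def solve_core(assemblies, resolved):
    """Route each assembly to the fidelity level requested for it."""
    return [
        transport_solve(a) if a in resolved else nodal_solve(a)
        for a in assemblies
    ]

core = ["A1", "A2", "B1", "B2"]
results = solve_core(core, resolved={"B1"})  # resolve the limiting assembly
```

In a real implementation the set of resolved assemblies would be chosen adaptively, e.g., around the limiting assembly for the safety case of interest, and the two fidelity levels would exchange boundary and homogenization data rather than run independently.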
The proposed strategy is complemented by a description of a dual-strata code development strategy to ensure industry's close cooperation with academia and the rapid transfer of academic innovations into industrial application within a licensing-grade code module. This process can be significantly supported by the ideas of DevOps as a tool for overcoming out-of-date structures that often slow down nuclear code development. The document thus provides a novel, strategic approach for achieving the most promising final product for industry and for identifying the most important points for improvement, feeding into the wider formation of an integrated nuclear digital environment that fosters closer cooperation between nuclear research and development in academia and the nuclear industry.