An Overview of AI Methods for In-Core Fuel Management: Tools for the Automatic Design of Nuclear Reactor Core Configurations for Fuel Reload, (Re)arranging New and Partly Spent Fuel

An important aspect of managing a nuclear reactor is how to design refuellings, and from the 1980s to the present different artificial intelligence (AI) techniques have been applied to this problem. A section of the reactor core resembles a symmetrical grid; long fuel assemblies are inserted there, some of them new, some of them partly spent. Rods of "burnable poisons" dangle above, ready to be inserted into the core in order to stop the reactor. Traditionally, design was done manually, by shuffling positions in the grid heuristically, but AI has made it possible to generate families of candidate configurations automatically, under safety constraints, and to optimize combustion, yielding longer cycles of operation between shutdown periods and thus delaying the end-of-cycle point (except in France, where shutdown is on an annual basis, and Canada, where individual fuel assemblies are replaced, with no need for shutdown for rearranging the entire batch). Rule-based expert systems, the first being FUELCON, were succeeded by projects combining neural and rule-based processing (a symbolic-to-neural compilation of rules we did not implement), and later on by genetic algorithms in FUELGEN. In the literature, one also comes across the application of fuzzy techniques, tabu search, cellular automata and simulated annealing, as well as particle swarms. Safety regulations require simulating the results using a parameter prediction tool; this is done using either nodal algorithms or neural processing.


Aims
This article provides an overview of artificial intelligence applications to an economically important problem in nuclear engineering, namely in-core fuel management: how to design fuel reloads (refuellings) into the core of a nuclear reactor, after the fuel has been partly depleted.
I strive to be understood by, and to offer something interesting to, both nuclear engineers and computer scientists, and possibly also other people involved in computer applications within engineering. These aims strongly shape the presentation.
In particular, Sections 1.3 and 2 of this paper are intended to appeal especially to readers who are not familiar with nuclear engineering. Section 1 introduces concepts that will be important for understanding Section 3, "Computer Tools for Designing Fuel Reload Configurations", which surveys how various techniques historically associated with artificial intelligence (AI) have been applied to this problem. Before we turn to introducing concepts from nuclear engineering, catering to readers who are not nuclear engineers, let us briefly list the artificial intelligence or heuristic optimisation techniques that are relevant for understanding the survey in Section 3. Let it be clear that, as far as I am aware, only some of the potentially relevant techniques from artificial intelligence have been applied to the in-core fuel management problem within nuclear engineering. There are other techniques that I would have loved, over the years, to also see applied to this problem, alternatively or in combination, but I am not aware that they ever were. Were I to list these as well, they might well fill a separate article that would be a tutorial for nuclear engineers, but the present article is already long, and it is not advisable to burden it further.
We clarify right away that artificial intelligence, as relevant for this article, is of the kind that is closely associated with optimisation problems. Traditionally, a discipline called operations research has developed algorithms seeking to find optimal solutions for systems of equations, or, should we rather say, seeking to minimize or maximize the resulting value of one or more objective functions, under several constraints. Heuristic optimisation differs from traditional operations research in that what one looks for is good solutions, rather than the very best, when the search for the very best is forbiddingly costly or at any rate impractical.
It has often been the case that tools of heuristic optimisation have applied techniques that originated within artificial intelligence research, starting (in historical order) with rulesets: sets of rules elicited from an expert in the domain of application (for our present purposes: a nuclear engineer), being rules of thumb of how the human expert would go about trying to solve the class of problems that the computer tool being developed is required to solve.
From the 1970s, research and applications in the area that has since come under the name artificial intelligence have had a very broad scope. For example, computational linguistics has been affected by artificial intelligence when the ambition has been to go beyond the parsing of the syntax of natural human languages, and to also handle semantics, as well as commonsense reasoning, in order to make sense of references to the world in interactions, or in narrative text.
It is only sometimes that artificial intelligence research has aimed to provide models of mind that would be of interest to cognitive science or to the neurosciences. Significantly, this has not been a concern for AI tools whose application is in nuclear engineering (except, potentially, tools attuned to the ergonomics, i.e., the manner of working, of the human staff in the control room).
One of the two anonymous referee reports appeared to suggest that artificial intelligence is a misnomer, the way it is used in this paper: "AI" in the title, the report claimed, "raises the expectation that the author is going to speak about convolutional neural networks 3 or long short-term memory networks-but they are not the main focus of the text". This strongly suggests to me that the referee is either younger, or has become familiar with some AI applications in the 21st century, and therefore has come to expect that AI is what we more specifically call "subsymbolic computation", of which neural networks are just one class. Such AI techniques from the 1970s and 1980s as rulesets, 4 scripts, 5 frames, 6 and the like have been the bulk of artificial intelligence techniques, and are known as symbolic AI, being based upon symbolic computation. With such tools, you can trace how the software has been reasoning while running. This is not the case with neural networks, which run, so to speak, in a "black box" and do not let you peer into how they reached their outcome. In the 21st century, subsymbolic computation, and especially neural networks, have mushroomed in applications to such an extent that, as more experienced AI scholars sometimes lament among themselves, generational turnover is bringing about a loss of the memory of the discipline, and a kind of unawareness of the acquired results of past research, so that eventually it will almost be necessary to reinvent the wheel. That same report offered this criticism: "AI supposedly stands for 'artificial intelligence' and it is not clear what this fashionable term actually means". My response is that for practical purposes, and certainly so in this overview article, by "artificial intelligence" we mean the array of techniques that have historically originated within the disciplinary compartment, among computer scientists, of those researchers who have been known as "the AI people" as opposed to other computer scientists who typically were their colleagues. It sounds like a joke, but it is a sensible course of action, attentive to how history has unfolded.
As suggested by another anonymous referee, I proceed to summarize (to make things easier for those readers who are nuclear engineers) the AI or optimisation techniques that will later occur in the survey of applications to the nuclear reactor refuelling problem.
Artificial intelligence tools search a space of possible solutions. Search (heuristic search) as intended in AI is treated in textbooks, e.g., in [6] (Chapter 4), and is the subject of the seminal book by Donald Knuth [7], as well as of such seminal papers as [8][9][10][11]. Heuristics in computer problem solving is the subject of a classic book by Judea Pearl [12].
Rule-based expert systems comprise a set of rules (a ruleset) capturing knowledge of how a human expert would solve a given category of problems, and a control component (an inference engine) that applies those rules to the data and develops the results. Rules in the ruleset are formulated in an "if condition then consequence" format, where the consequence is (as in the kind of problem in nuclear engineering with which this paper is concerned) an instruction as to what to do or not to do, thus, an action. However, the consequence part of the rule can instead help to construct an interpretation, in such tools that try to understand a situation at hand and to provide a diagnosis.

3 An introduction to convolutional neural networks can be found in [1]. That technique is most commonly applied to analyzing visual imagery. Its inspiration is biological, in that the connectivity pattern between its artificial neurons resembles the organization of the animal visual cortex.
4 Rule-based expert systems are the subject of, e.g., [2] (Chapters 2, 3), and [3] (pp. …).
5 Scripts, as well as more complex structures such as MOPs (memory-organisation packages), useful for describing different roles in an environment and the pattern of their respective goals and plans, were discussed by Michael Dyer in [4]. Cf. [5].
6 Frame-based expert systems are the subject of, e.g., [2] (Chapter 5).
Rule-based expert systems were handy, when introduced, because [2] (p. 211) "(t)he separation of the problem-solving knowledge and the inference engine makes it possible to represent knowledge in a more natural fashion. If ... then ... rules [...] are closer to the way in which human beings describe their own problem-solving techniques than a program that embeds this knowledge in lower-level computer code. [...] Because the knowledge base is separated from the program's lower-level control structures, expert system builders can focus directly on capturing and organizing problem-solving knowledge rather than on the details of its computer implementation. [...] The separation of knowledge and control, along with the modularity provided by rules and other representational structures used in building knowledge bases, allows changes to be made in one part of the knowledge base without creating side effects in other parts of the program. [...] The separation of the knowledge and control elements of the program allows the same control and interface software to be used in a variety of systems. [...]" The emergence, within AI, of the early models of expert systems, mainly rule-based expert systems, led to the rise of knowledge acquisition and elicitation techniques [13,14], in order to interview human experts in the application domain and acquire from them the knowledge to be represented in the software's knowledge base. Knowledge elicitation "has been thought of as a stage of KA [knowledge acquisition]. The KA process is usually divided into three stages: deciding what knowledge is needed, variously referred to as the definition stage or initial analysis; getting knowledge predominantly characterized as coming from human expert(s) and interpreting it, usually called elicitation; and 'writing' the knowledge in the internal language of the system, encoding it, usually called representation" [15] (p. 91).
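To make the "if condition then consequence" format concrete, here is a minimal sketch of a ruleset and inference engine in Python; the two rules are invented, simplified stand-ins for the kind of reload heuristics a nuclear engineer might state, and are not drawn from any tool surveyed below:

```python
# Minimal rule-based engine sketch. Each rule inspects the data (a state
# dictionary) and, if its condition holds, fires a consequence.
def rule_no_adjacent_fresh(state):
    # "If two fresh assemblies are adjacent, then reject the configuration."
    if state.get("adjacent_fresh_pairs", 0) > 0:
        return "reject: fresh assemblies must not be adjacent"
    return None

def rule_no_fresh_at_centre(state):
    # "If a fresh assembly sits at the core centre, then reject it."
    if state.get("fresh_at_centre", False):
        return "reject: no fresh fuel at the core centre"
    return None

RULESET = [rule_no_adjacent_fresh, rule_no_fresh_at_centre]

def inference_engine(state, ruleset=RULESET):
    """Apply each rule to the data and collect the fired consequences."""
    return [fired for rule in ruleset if (fired := rule(state)) is not None]

verdicts = inference_engine({"adjacent_fresh_pairs": 1, "fresh_at_centre": False})
```

The point of the separation praised in the quotation above is visible even at this scale: rules can be added or removed from `RULESET` without touching the engine.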
The following informal definition of artificial neural networks, a fashionable and much applied class of techniques in the 21st century, is quoted from [2] (pp. 661-662): "Neurally inspired models, also known as parallel distributed processing (PDP) or connectionist systems, de-emphasize the explicit use of symbols in problem solving. [...] Processing in these systems is distributed across collections or layers of neurons. Problem solving is parallel in the sense that all the neurons within the collection or layer process their inputs simultaneously and independently. [...] In connectionist models there is, however, a strong representational character both in the creation of input parameters as well as in the interpretation of output values. To build a neural network, for example, the designer must create a scheme for encoding patterns in the world into numerical quantities in the net. The choice of an encoding scheme can play a crucial role in the eventual success or failure of the network to learn.
In connectionist systems, processing is parallel and distributed with no manipulation of symbols as symbols. Patterns in a domain are encoded as numerical vectors. The connections between components, or neurons, are also represented as numerical values. Finally, the transformation of patterns is the result of a numerical operation, usually matrix multiplications. These "designer choices" for a connectionist architecture constitute the inductive bias of the system.
The algorithms and architectures that implement these techniques are usually trained or conditioned rather than explicitly programmed. Indeed, this is a major strength of the approach: an appropriately designed network architecture and learning algorithm can often capture invariances in the world, even in the form of strange attractors, 7 without being explicitly programmed to recognize them." An introduction to artificial neural networks can be found in AI textbooks, e.g., [2] (Chapter 6). In-depth treatments can be found, e.g., in [16,17].
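As a minimal illustration of the point that patterns are encoded as numerical vectors and transformed by matrix operations, here is a two-layer network written as plain arithmetic; the weights are arbitrary values chosen for exposition, and no learning is shown:

```python
# A one-hidden-layer feedforward network as plain matrix arithmetic:
# a pattern (numeric vector) is multiplied by weight matrices and passed
# through a squashing function at each layer.
import math

def matvec(W, x):
    # Matrix-vector product: one weighted sum per row (per neuron).
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def sigmoid(v):
    return [1.0 / (1.0 + math.exp(-z)) for z in v]

W_hidden = [[0.5, -0.3], [0.8, 0.2]]   # 2 inputs -> 2 hidden units
W_out = [[1.0, -1.0]]                  # 2 hidden units -> 1 output

def forward(x):
    return sigmoid(matvec(W_out, sigmoid(matvec(W_hidden, x))))

y = forward([1.0, 0.0])   # a pattern encoded as a numeric vector
```

Training would amount to adjusting `W_hidden` and `W_out` from examples; the "black box" character discussed above lies precisely in the fact that the learned weight values do not individually carry readable meaning.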
Let us turn to genetic algorithms. "Like neural networks, genetic algorithms are based on a biological metaphor: they view learning as a competition among a population of evolving candidate problem solutions. A 'fitness' function evaluates each solution to decide whether it will contribute to the next generation of solutions. Then, through operations analogous to gene transfer in sexual reproduction, the algorithm creates a new population of candidate solutions" [2] (p. 715). An introduction to genetic algorithms can be found in AI textbooks, e.g., [2] (Chapter 7) or [3] (pp. 715-749), as well as in books such as [18][19][20][21][22][23][24][25][26][27][28].
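A minimal sketch of these ingredients (fitness, selection, crossover, mutation) follows, here over permutations, since a loading pattern can be thought of as an ordering of assembly identifiers; the fitness function is a placeholder invented purely for illustration, not a reactor-physics objective:

```python
# Genetic-algorithm sketch over permutations. Fitter candidates survive
# and recombine; order crossover and swap mutation keep children valid
# permutations.
import random
random.seed(0)

TARGET = list(range(8))   # stand-in "ideal" ordering for the toy fitness

def fitness(perm):
    # Placeholder fitness: count positions agreeing with TARGET.
    return sum(a == b for a, b in zip(perm, TARGET))

def crossover(p1, p2):
    # Order crossover: keep a slice of p1, fill the rest in p2's order.
    i, j = sorted(random.sample(range(len(p1)), 2))
    kept = p1[i:j]
    rest = [g for g in p2 if g not in kept]
    return rest[:i] + kept + rest[i:]

def mutate(perm, rate=0.2):
    perm = perm[:]
    if random.random() < rate:
        a, b = random.sample(range(len(perm)), 2)
        perm[a], perm[b] = perm[b], perm[a]
    return perm

population = [random.sample(TARGET, len(TARGET)) for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # selection (elitism)
    population = parents + [
        mutate(crossover(*random.sample(parents, 2))) for _ in range(20)
    ]

best = max(population, key=fitness)
```

In an actual reload application the fitness evaluation is the expensive part, since each candidate configuration must be assessed by a core-parameter prediction tool, as discussed later in this article.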
Reference [29] is an application to metallurgy. Reference [30] surveys applications of multi-objective genetic and evolutionary algorithms to materials engineering. Reference [31] discussed an application to structural damage detection. Reference [32] described an application of genetic algorithms to the challenge of balancing energy supply and demand using green energy resources: "Territory design deals with the problem of grouping geographic areas into larger geographic clusters called territories in such a way that the grouping is acceptable according to a planning criterion. The aim of this study is to group geographic areas so that energy requirement in a geographic cluster matches the available green energy potential in the same cluster. In this way, investments may be supported through region specific policies. The problem is formulated as a mixed-integer linear programming model. A location-allocation approach is employed to solve the model. The location and allocation problems are solved iteratively. In order to solve the initial location problem, a Genetic Algorithm is developed to find the results of the p-median problem. Then, the allocation problem is solved optimally using the ILOG Cplex solver." A book by Steeb [33] applies several techniques, including genetic ones, to scientific computing, the overarching theme being nonlinear dynamical systems; apart from neural networks and genetic algorithms, cellular automata, discrete wavelets, and fractals, e.g., are also explained.
Yet another technique is fuzzy sets, or fuzzy logic. 8 Suppose an expert says, in commonsense 9 fashion: "though the power transformer is slightly overloaded, I can keep this load for a while" [2] (p. 87). In classical set theory (as developed by Georg Cantor), the sets involved are, in AI jargon, known as crisp sets. Let us see the difference vis-à-vis fuzzy sets by comparing their respective operations on sets. The complement (i.e., the opposite) of a crisp set is defined by asking "who does not belong to the set?" In contrast, in fuzzy set theory (as developed by Lotfi Zadeh), the complement of a fuzzy set is defined by asking "how much do elements not belong to the set?" [2] (p. 98). Let us turn to the operation of set containment: "in crisp sets, all elements of a subset entirely belong to a larger set and their membership values are equal to 1. In fuzzy sets, however, each element can belong less to the subset than to the larger set. Elements of the fuzzy subset have smaller memberships in it than in the larger set" [2] (p. 99). As for the set-theoretic operation of intersection, with crisp sets it is defined by asking "which element belongs to both sets?", whereas with fuzzy sets, intersection is defined by asking "how much of the element is in both sets?": "in classical set theory, an intersection between two sets contains the elements shared by these sets. (...) In fuzzy sets, however, an element may partly belong to both sets with different memberships. Thus, a fuzzy intersection is the lower membership in both sets of each element" [2] (p. 99). The union of two sets is defined, with crisp sets, by asking "which element belongs to either set?", whereas with fuzzy sets, union is defined by asking "how much of the element is in either set?": "in fuzzy sets, the union is the reverse of the intersection. That is, the union is the largest membership value of the element in either set" [2] (p. 100). One also speaks of fuzzy logic, which "uses the continuum of logical values between 0 (completely false) and 1 (completely true)", "accepting that things can be partly true and partly false at the same time" [2] (p. 89).

7 Attractor networks are the subject of [2] (Section 14.6, pp. 701-711).
8 An introduction to fuzzy set or fuzzy logic theory and applications can be found in AI textbooks, e.g., [1] (Chapter 4), or in a short paper by Zadeh [34]. See in-depth treatment in Yager and Zadeh's book [35], as well as Ross's [36] Fuzzy Logic with Engineering Applications.
9 AI representations of commonsense knowledge are the subject of, e.g., [37,38].
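The crisp-versus-fuzzy contrast just described can be stated in a few lines of code, with membership degrees held in dictionaries; the sets and values below are invented for illustration. Complement is 1 − m, intersection takes the lower membership, and union the larger:

```python
# Fuzzy-set operations on membership functions represented as dictionaries
# mapping each element to its degree of membership in [0, 1].
tall = {"ann": 0.8, "bob": 0.3, "joe": 0.5}
heavy = {"ann": 0.4, "bob": 0.9, "joe": 0.5}

def complement(m):
    # "How much do elements NOT belong to the set?"
    return {k: 1.0 - v for k, v in m.items()}

def intersection(m1, m2):
    # The lower membership in both sets of each element.
    return {k: min(m1[k], m2[k]) for k in m1}

def union(m1, m2):
    # The largest membership value of the element in either set.
    return {k: max(m1[k], m2[k]) for k in m1}

both = intersection(tall, heavy)
```

With crisp (0-or-1) membership values, these same three definitions reduce exactly to the classical complement, intersection, and union.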
Simulated annealing is an optimization technique described in [39,40]. The following is quoted from [40] (p. v): "Annealing is the physical process of heating up a solid until it melts, followed by cooling it down until it crystallizes into a state with a perfect lattice. During this process, the free energy of the solid is minimized. Practice shows that the cooling must be done carefully in order not to get trapped in locally optimal lattice structures with crystal imperfections.
In combinatorial optimization, we can define a similar process. This process can be formulated as the problem of finding, among a potentially very large number of solutions, a solution with minimal cost. Now, by establishing a correspondence between the cost function and the free energy, and between the solutions and the physical states, we can introduce a solution method in the field of combinatorial optimization based on a simulation of the physical annealing process. The resulting method is called Simulated Annealing.
Salient features of this method are its general applicability and its ability to obtain solutions arbitrarily close to an optimum. A major drawback, however, is that finding high-quality solutions may require large computational efforts.
A substantial reduction of the computational effort required by the simulated annealing algorithm may be achieved by using computational models based on massively parallel execution. An example of such a model is the Boltzmann machine.
The Boltzmann machine is a neural network and belongs to the class of connectionist models. A Boltzmann machine consists of a large network of simple computing elements, called units, that are connected in some way. The units can have two states, either 'on' or 'off', and the connections have real-valued strengths that impose local constraints on the states of the individual units. A consensus function gives a quantitative measure for the 'goodness' of a global configuration of the Boltzmann machine, determined by the states of all individual units." Concerning the positioning of Boltzmann machines within the types of artificial neural networks, see [41]. A book by Aguiar e Oliveira et al. [42] discussed stochastic global optimization using fuzzy adaptive simulated annealing, "developed to statistically find the best global fit of a nonlinear constrained, non-convex cost function over a multi-dimensional space". Simulated annealing was applied by Jung et al. [43] to taxi dispatch with shared rides. Yu and Lin [44] applied a simulated annealing heuristic to "the location-routing problem with simultaneous pickup and delivery (LRPSPD) in which the pickup and delivery take place at the same time for each customer. The goal is to determine the facility locations and vehicle routes in order to minimise the total system cost as a sum of facility opening cost, vehicle fixed cost and vehicle travel cost". In order to determine the coefficients in proportional-integral-derivative (PID) controllers, [45] "uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers". Arbalaez Garces et al. [46] applied simulated annealing to improving users' product acceptability (the real case concerning the design of a medical-stocking threading device). Roshani and Ghazi Nezami [47] applied simulated annealing to assembly lines. The application of Sen et al.
[48] is to computer networks: to allocate data segments so that the total routing cost is minimized. Moschakis et al. [49] discussed a scheduling optimization technique using simulated annealing, as applied to cloud computing.
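The annealing process quoted above can be sketched in a few lines; this toy version minimizes an invented one-dimensional cost function, accepting uphill moves with probability exp(−Δ/T) while the temperature is lowered carefully, as the quotation describes:

```python
# Simulated-annealing sketch. Worse candidates are sometimes accepted
# (with probability exp(-delta/T)) so the search can escape local minima;
# the acceptance becomes stricter as the temperature T is cooled.
import math
import random
random.seed(1)

def cost(x):
    return (x - 3.0) ** 2          # toy cost function, minimum at x = 3

def simulated_annealing(x, T=10.0, cooling=0.95, steps=2000):
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1.0, 1.0)   # neighbouring solution
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = candidate
        if cost(x) < cost(best):
            best = x
        T = max(T * cooling, 1e-6)                  # careful cooling
    return best

solution = simulated_annealing(x=-10.0)
```

In a reload-design setting the "neighbouring solution" would be, for instance, a swap of two assembly positions, and the cost would come from a core-parameter prediction code rather than a closed-form function.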
Particle swarm optimization (PSO) tries to optimize "a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its best known local position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions" [50]. "PSO is a metaheuristic as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as PSO do not guarantee an optimal solution is ever found" [50]. "A basic variant of the PSO algorithm works by having a population (called a swarm) of candidate solutions (called particles). These particles are moved around in the search-space according to a few simple formulae. The movements of the particles are guided by their own best known position in the search-space as well as the entire swarm's best known position. When improved positions are being discovered, these will then come to guide the movements of the swarm. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered" [50]. Let us state this formally [50]: let S be the number of particles in the swarm, each having a position x_i ∈ R^n in the search-space and a velocity v_i ∈ R^n. Let p_i be the best known position of particle i and let g be the best known position of the entire swarm. A basic PSO algorithm is then:

for each particle i = 1, ..., S do
    Initialize the particle's position with a uniformly distributed random vector: x_i ~ U(b_lo, b_up)
    Initialize the particle's best known position to its initial position: p_i ← x_i
    if f(p_i) < f(g) then update the swarm's best known position: g ← p_i
    Initialize the particle's velocity: v_i ~ U(−|b_up − b_lo|, |b_up − b_lo|)
while a termination criterion is not met do:
    for each particle i = 1, ..., S do
        for each dimension d = 1, ..., n do
            Pick random numbers: r_p, r_g ~ U(0, 1)
            Update the particle's velocity: v_{i,d} ← ω v_{i,d} + φ_p r_p (p_{i,d} − x_{i,d}) + φ_g r_g (g_d − x_{i,d})
        Update the particle's position: x_i ← x_i + v_i
        if f(x_i) < f(p_i) then
            Update the particle's best known position: p_i ← x_i
            if f(p_i) < f(g) then
                Update the swarm's best known position: g ← p_i

"The values b_lo and b_up are respectively the lower and upper boundaries of the search-space. The termination criterion can be the number of iterations performed, or a solution where the adequate objective function value is found. The parameters ω, φ_p, and φ_g are selected by the practitioner and control the behaviour and efficacy of the PSO method" [50]. The literature about particle swarm optimization includes, e.g., [51][52][53][54][55]. Some scholars have used a simplified version of particle swarm optimization (e.g., [56]).
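A runnable transcription of the basic PSO algorithm described above follows, applied to a toy sphere objective; the parameter values ω = 0.7 and φ_p = φ_g = 1.5 are typical practitioner choices, not values taken from [50]:

```python
# Basic particle swarm optimization on a toy objective (minimum at origin).
import random
random.seed(2)

def f(x):
    return sum(xi * xi for xi in x)   # toy objective function

def pso(n=2, S=20, iters=200, b_lo=-5.0, b_up=5.0,
        omega=0.7, phi_p=1.5, phi_g=1.5):
    # Initialize positions, velocities, and best known positions.
    X = [[random.uniform(b_lo, b_up) for _ in range(n)] for _ in range(S)]
    V = [[random.uniform(-(b_up - b_lo), b_up - b_lo) for _ in range(n)]
         for _ in range(S)]
    P = [x[:] for x in X]            # each particle's best known position
    g = min(P, key=f)[:]             # swarm's best known position
    for _ in range(iters):
        for i in range(S):
            for d in range(n):
                r_p, r_g = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                V[i][d] = (omega * V[i][d]
                           + phi_p * r_p * (P[i][d] - X[i][d])
                           + phi_g * r_g * (g[d] - X[i][d]))
            X[i] = [x + v for x, v in zip(X[i], V[i])]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

best = pso()
```

The termination criterion here is simply a fixed iteration count; as noted above, an objective-value threshold would serve equally well.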
Case-based reasoning (CBR) is the subject of, e.g., [57][58][59][60]. Case-based reasoning is used in tools that have access to a database of precedents, and attempt an adaptation to the circumstances at hand. By the time this class of techniques emerged in AI, tools for nuclear refuelling existed that would search a vast space of possible solutions and come up with families of alternative good choices, unfettered by solutions used in the past. This was clearly a disincentive for the application of CBR to the refuelling problem, precisely because one could expect not-as-good solutions to be yielded if a CBR tool were simply to cautiously modify some solution that had worked satisfactorily in some past situation. Arguably, by that stage in the history of the application of AI techniques to the refuelling problem, it would have been unambitious, like wearing blinkers, to apply CBR to that problem.
Case-based reasoning is a methodology in artificial intelligence that, instead of matching rules from a ruleset to a situation at hand, tries to match that situation to some entry from a pool of past cases, by calculating how close they are according to various features (e.g., [61,62]). This is similar to, and at least in part overlapping with, analogical reasoning [63]. Stranieri and Zeleznikow [64] provided this definition: "case based reasoning is the process of using previous experience to analyse or solve a new problem, explain why previous experiences are or are not similar to the present problem and adapting past solutions to meet the requirements of the present problem". 10 I indicated earlier that I was not going to also enumerate such AI techniques that, as far as I am aware, have not been applied to the refuelling problem. Let me nevertheless mention multiagent systems [68]. This is an area that I would love to see applied in nuclear engineering, and that historically developed from an AI technique called blackboard systems [69], whereby different software "agents", each with its own domain of expertise (and its set of rules of thumb), try to solve a problem piecewise. Whenever an agent does not have the knowledge for solving a subproblem, it writes the subproblem on the blackboard; other agents check on the blackboard whether the subproblem listed there is of a kind that is within their respective remit. This blackboard metaphor was already described, long before it found application, by Alan Newell in 1962 ([70], cited in [69] (p. 16) and [68] (p. 308)).
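The retrieval step of case-based reasoning, matching the situation at hand against a pool of past cases by feature closeness, can be sketched as follows; the cases, feature names, and distance measure are invented for exposition:

```python
# Case-based-reasoning retrieval sketch: find the past case whose feature
# vector is closest (squared-distance) to the new problem's features.
def distance(features_a, features_b):
    return sum((features_a[k] - features_b[k]) ** 2 for k in features_a)

def retrieve(new_problem, case_base):
    """Return the (past_features, past_solution) pair most similar to
    new_problem; the solution would then be adapted, not reused as-is."""
    return min(case_base, key=lambda case: distance(new_problem, case[0]))

case_base = [
    ({"cycle_length": 12, "fresh_fraction": 0.33}, "pattern-A"),
    ({"cycle_length": 18, "fresh_fraction": 0.50}, "pattern-B"),
]
_, solution = retrieve({"cycle_length": 17, "fresh_fraction": 0.45}, case_base)
```

The adaptation step, modifying the retrieved solution to fit the new circumstances, is where CBR's conservatism arises: as argued above, it anchors the search to past practice rather than exploring the full space of configurations.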

Preliminary Notions in Nuclear Engineering, Concerning Fuel at Nuclear Reactor Plants
One uranium pellet (a cylinder of 10 mm × 8 mm, made of slightly enriched uranium dioxide) has as much energy available as three barrels of oil of 42 gallons each, or as one ton of coal, or as 17,000 cubic feet of natural gas. Such uranium pellets are inserted into a tube made of a zirconium alloy (the fuel rod cladding [71]) with a spring on top, and several such resulting fuel rods are arranged in parallel in a grid forming one fuel assembly (Figure 1). Dozens of fuel assemblies (either new, or spent to different degrees) are present inside a nuclear reactor core at a given time [72]. A nuclear power plant has one or more reactors. In most kinds of commercial nuclear power plants (Figure 2), every time the reactor is shut down for reload (removing those fuel assemblies that are entirely spent, and loading a batch of fresh fuel), between one third and one half of the assemblies are replaced.
The outage may take nearly one month, and for that period it may be necessary to hire contract workers (or to second employees from the corporation's other plants) amounting to double the regular workforce at the plant that the temporary workforce is intended to supplement [73]. 11 Clearly, there is an incentive to have longer periods of operation between periods of shutdown for refuelling (this is distinct from maintenance-related events [74,75], and related shutdowns). Fuel assemblies are about four metres long, consisting of 179 to 264 cylindrical rods, typically of uranium dioxide. While refuelling, it is important that once- or twice-spent fuel assemblies are kept (thrice-spent assemblies are removed), as it would be unsafe to have many assemblies of fresh fuel close to each other, or to have fresh fuel at or near the centre of the reactor core. Clever rearrangements of the assemblies are sought, and the symmetry of the grid that constitutes a cross-section of the core is exploited in order to simplify things. Shutdown points in time for the purpose of reloading the fuel are usually sought to be spaced as far apart as possible without the power peaks generated becoming damaging, even though at the end of the day the decision whether to optimize the length of the cycle is a political one: in France, for administrative simplicity, reactors are shut down for reload at regular yearly intervals. Nor is it the case that with all kinds of commercial nuclear power reactors fuel reload is done at once: that is not the case with CANDU reactors in use in Canada (see, e.g., [76]).
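How exploiting the grid's symmetry simplifies things can be illustrated with a small sketch: a pattern is decided on one quadrant only and mirrored to the full cross-section, so far fewer positions need to be searched. The grid size and labels below are invented for illustration; real cores are much larger and may use octant rather than quadrant symmetry:

```python
# Quarter-core symmetry sketch: decide a q x q quadrant, mirror it into a
# 2q x 2q grid, so only q*q positions (instead of 4*q*q) must be searched.
def expand_quarter(quarter):
    """Mirror a q x q quadrant into a 2q x 2q grid with fourfold symmetry."""
    q = len(quarter)
    full = [[None] * (2 * q) for _ in range(2 * q)]
    for r in range(q):
        for c in range(q):
            v = quarter[r][c]
            full[r][c] = v                          # top-left quadrant
            full[r][2 * q - 1 - c] = v              # mirrored horizontally
            full[2 * q - 1 - r][c] = v              # mirrored vertically
            full[2 * q - 1 - r][2 * q - 1 - c] = v  # mirrored both ways
    return full

core = expand_quarter([["fresh", "once"],
                       ["once", "twice"]])
```

For a candidate-generation tool, this means the search space shrinks by roughly a factor of four (or eight, with octant symmetry), while any safety rule checked on the quadrant automatically holds on the mirrored positions too.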
12 Within the operating costs of commercial nuclear power stations, the cost of fuel is substantial, 13 hence a major incentive for using nuclear fuel efficiently, for longer, so that the potential output of the fuel comes to fruition as fully as possible: whereas it may not be practical to seek a global optimum, very good results are achievable, as technological evolution using artificial intelligence techniques for such applications, as early as the 1980s, has amply demonstrated. Nuclear reactor plants are important for a power supply grid (failure to supply from a plant may result in a corporation being obliged to buy electricity from a third party so as not to fail the grid), the feasibility of forecasting is limited, and at any rate, careful planning is needed for reactor shutdown periods for maintenance purposes [78], and this includes rearranging the fuel batch in the core of the reactor [72,79,80].
One can see, then, the incentive, as early as the mid-1980s, for the development of artificial intelligence tools for generating good fuel reload configurations, enhancing performance at plants.

Safety, Materials, and Options for Nonproliferation Aims
This section caters to readers who are not nuclear engineers, in order to show how some problems with nuclear fuel stem from political considerations. Safety [81][82][83][84][85][86][87][88][89][90][91][92] is a major issue in the design of a nuclear power plant, in its management, and in the management of spent fuel [93][94][95], 14 as well as, e.g., in the fabrication of fuel in the nuclear sector, along with considerations of efficiency.
With nuclear fission at commercial nuclear power plants, the resulting fission products are not entirely something of no use and only harmful. One also comes across fission byproducts of use in medical equipment. Such is the case of the molybdenum produced. Hummel and Palmer [96] remark: "the most widely used and versatile medical radioisotope today is 99m Tc (i.e., an isotope of technetium, a radioactive element not found in ores). Roughly 30 million people depend on this radioisotope for diagnostic imaging procedures each year, and this demand is expected to grow. Although there are numerous ways of producing this isotope, the most common is from fission product 99 Mo, which is produced in all nuclear reactors fueled with 235 U (an isotope of uranium) as a fission fragment with a yield of around 6.1%. Molybdenum-99 has a half-life of just over 2.5 days, and it will decay to 99m Tc 87% of the time".
In contrast, plutonium produced by nuclear fission is politically undesirable (unless one intends to produce nuclear warheads). Plutonium, being a byproduct of the operation of nuclear power reactors for peaceful use, is amenable to use for military purposes. Burr et al. [97] described a nondestructive assay system for estimating the amount of plutonium in spent nuclear fuel assemblies. Technically, it would be misleading to just claim that "plutonium produced by nuclear fission is undesirable". As a referee aptly pointed out: "this is not entirely true because even in conventional UO 2 fuel the fissioning of generated Pu isotopes typically contribute to more than 40% of a fuel assembly's energy generation after about 3 years of operation, hence increasing fuel utilization".
Basically, plutonium in spent fuel can be reused to make fuel. Youinou and Vasile [98] discussed the rather tricky problem of how to "multirecycle" plutonium in standard pressurized water reactors (PWRs) loaded with "evolutionary fuels": when plutonium is reused, there are fewer and fewer fissile isotopes in it, so one has to increase its quantity, but there are reasons not to go over 12% plutonium in the fuel; therefore, plutonium oxide is mixed with the oxide of enriched uranium (rather than natural uranium) in the same fuel assembly. They [98] also discussed other solutions, including the plutonium "monorecycling" option and a number of plutonium "multirecycling" options; they found that it would be a misconception to claim that uranium would be saved if plutonium is recycled more than once. Because of the potential for diversion activities of some products of nuclear fission at commercial power plants, in the interest of nonproliferation objectives a dissolver model for spent fuel has been developed [100][101][102]. Results of that project of James Peltz and Dan Cacuci "establish confidence in the dissolver model's accuracy for simulating the acid concentrations required to dissolve used nuclear fuel" [101]. In 2006, Talamo and Gudowski [103] described a fuel management strategy for the incineration of military plutonium. Part III in the Routledge Handbook of Nuclear Proliferation and Policy [104] addresses the proliferation risks and political challenges arising from nuclear energy production, including potential proliferation by aspiring states and nonstate groups.
The concern to limit the risk of plutonium, as a fission product, ending up in military use is at the basis of Ronen et al. [105] (also see [106,107]), proposing suitably denatured plutonium in CANDU reactors "by introducing either 241 Am or 237 Np into the nuclear fuel. It was found that about the same results are obtained by doping the fuel with 241 Am or 237 Np. In order not to reduce the fuel burnup, there is a need to enrich the uranium to 0.8% for both 241 Am and 237 Np doping". (Am is americium, a transuranic element. Americium is free of criticality hazards and gamma radiation. It is appreciated as a long-life alpha-particle emitter. Its half-life is 433 years. Np is neptunium, an element produced artificially by nuclear reaction between uranium and neutrons. The isotope neptunium-237 has a half-life of 2.2 × 10^6 years.) Famously, the same nonproliferation concern is among the incentives for using thorium as a fuel, instead of uranium. Both are fertile nuclear fuels, but thorium is nearly three times as abundant as uranium. The idea of using thorium was launched by Alvin Radkowsky [108,109], who one generation earlier, as a senior scientist for the US Navy (he was the first chief scientist of the US Naval Reactors program under Admiral H. G. Rickover from 1950 to 1972, and designer of the first commercial nuclear power plant at Shippingport, Pennsylvania), invented "burnable poisons" (i.e., neutron-absorbing materials, inserted into some types of reactors in order to lower the high reactivity of their initial fresh fuel load), substances that, slowing combustion, enabled nuclear submarines to be developed and to cruise for many months without refueling. In 1983, Edward Teller asked Radkowsky, who had been a student of his, to develop nuclear fuels that would sever the link between nuclear weapons and nuclear power (see [111]).
Thorium reactors [112][113][114][115][116][117][118][119][120][121][122][123][124][125][126][127] were introduced in Russia (with the blessing of the USA) at a time when the dissolution of the Soviet Union carried the risk that plutonium would be clandestinely diverted to foreign buyers for military, even terror, purposes. "When thorium is used, the nuclear reactions are altered so that the reactor produces much less plutonium, and the radioactive waste it does create is more difficult to refine for use in bombs" [109].
Radkowsky's concept of a thorium reactor was that it would produce 80% less plutonium than a conventionally fuelled reactor, and the mix of plutonium isotopes generated would not be weapon-grade: the explosive yield would in all likelihood be too low, and a bomb would fizzle. What is more, in the plutonium generated the content of the 238 Pu isotope would be so high that, on decaying, it would damage the other materials in the intended weapon [128].

Pressurized Water Reactors
This article comes 30 years after the publication of my first co-authored article on how to apply artificial intelligence to the in-core fuel management problem at nuclear power plants [143], and I later wrote about a sequence of projects in which I was taking part, and also held editorial roles on the subject [144,145]. In the main, the succession of projects in in-core fuel management in which I participated included (a) FUELCON, an expert system based on a ruleset; and (b) FUELGEN, an intelligent system for the same task that used a genetic algorithm instead. In between, we considered an enhancement of the rule-based approach of FUELCON, such that rules would be translated into code for neural processing.
In all those projects, we were concerned with pressurized water reactors (PWRs), a widespread kind of reactor, traditionally comprising a majority of reactors at all western nuclear power plants. The other kind of light water reactor is the boiling water reactor (BWR); by "light water", ordinary water (H 2 O) is meant, as opposed to heavy water, i.e., either deuterium oxide (D 2 O) or HDO, with one atom of hydrogen, one atom of deuterium, and one atom of oxygen.
Among the fuel rods, liquid coolant, pumped from below the assembly, circulates. In PWRs, it is pressurized water. Pressure is needed to prevent the water from boiling. There are steam generators on the borders of the core: they act as heat exchangers. Just part of the thermal energy thus produced can be exploited as electric power. In a PWR, the nuclear fuel heats the water in the primary coolant loop by thermal conduction through the fuel cladding. Power transfer is therefore from the reactor core to the primary coolant, whose tubes are entirely inside the containment structure of the reactor (i.e., the building containing the reactor chassis and whatever it contains).
The tubes of the primary coolant enter and exit the reactor vessel through the inlet and outlet nozzles. To the tubes of the primary coolant, a pressurizer is applied. These tubes of the primary coolant loop pass through a heat exchanger: this is a steam generator containing secondary coolant. Heat is transferred across a set of tubes to the lower-pressure secondary coolant, and the latter evaporates to pressurized steam. The steam generator is inside the containment structure of the reactor, but a tube at the top connects it to the top of a container placed outside the reactor containment structure, and inside that container there is a turbine. That turbine is connected to an electrical generator, driven by the turbine, which in turn transfers power to the electric grid which distributes it. After passing through the turbine, the secondary coolant is cooled down. As it cools, it becomes a water-steam mixture, and this in turn is condensed in a condenser, which transforms the steam of the secondary coolant into liquid, which is then moved as feedwater to the bottom of the steam generator. The primary and secondary coolants are not mixed; having two separate coolant loops is a precaution, as the primary coolant may become radioactive [150].
The core geometry of a reactor is schematized by the grid of positions hosting fuel assemblies (or both a fuel assembly positioned inside, and a control rod that can be lowered there in order to stop the nuclear fission reaction, either for regular shutdown or because of an emergency), and this grid can be conceptualized as a core section (Figure 3). The core is symmetrical (Figure 4), so reasoning for the application with which we are concerned in this article, namely, fuel allocation to the grid positions, is done on just a one-eighth slice of the core schema. Fuel assemblies (for our purposes, fuel units) are inserted vertically into the core, which for the purposes of designing the refuelling can be viewed as a planar horizontal section.
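The one-eighth symmetry can be made concrete with a short sketch (an illustration only; position-indexing conventions vary by reactor and by core-physics code): each cell of the full grid is mapped to a canonical representative in the slice comprised between an axis and a diagonal.

```python
# Illustrative sketch of one-eighth core symmetry (hypothetical indexing).
# Cell coordinates are measured from the core centre; the canonical slice
# is taken to be the set of cells with 0 <= y <= x, i.e., between the
# horizontal axis and the main diagonal.

def canonical_cell(x, y):
    """Map a cell (x, y), relative to the core centre, to its
    representative in the one-eighth slice."""
    x, y = abs(x), abs(y)          # reflect into the first quadrant
    if y > x:                      # reflect across the diagonal
        x, y = y, x
    return (x, y)

# All eight symmetric images of a cell collapse to one representative:
images = {(3, 1), (1, 3), (-3, 1), (-1, 3), (3, -1), (1, -3), (-3, -1), (-1, -3)}
assert {canonical_cell(x, y) for (x, y) in images} == {(3, 1)}
```

Designing the reload on this slice and mirroring the result keeps the power distribution symmetric across the whole core.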
The nuclear fuel cycle includes not only the positioning and depletion of the fuel inside the reactor, but, as well, those stages that take place upstream and downstream of the fuel burning inside the reactor core, from the extraction of the natural uranium ore, through enrichment and fuel assembly manufacturing, down to the disposal of the spent fuel once it is discharged from the reactor (unless it is recycled, where this is permitted; in 1977, fuel recycling was prohibited in the USA, but much research has gone meanwhile into recycling materials from spent fuel).
Figure 3. A planar horizontal cross-section of a nuclear power reactor core. A square is a position in the core, and if no other symbol is shown inside, then this is a fuel assembly location. A hexagon is a control rod assembly location. A black bullet is an in-core instrument location. A hexagon that contains a circle (a black bullet fits in it) is a surveillance specimen holder tube.

Figure 4. Symmetry inside a cross-section of a nuclear power reactor core. DIAG is any cell in the grid that is on a diagonal. AXIS is any cell in the grid that is on an axis. In one-eighth symmetry, the slice of the grid on which we reason is comprised between an axis and a diagonal.

Nuclear Fuel Cycle
There are three parts in the nuclear fuel cycle. "The first part, called Front-End, covers all activities connected with fuel procurement and fabrication. The middle part of the cycle includes fuel reload design activities and the operation of the fuel in the reactor. Back-End comprises all activities ensuring safe separation of spent fuel and radioactive waste from the environment. The individual stages of the fuel cycle are strongly interrelated. Overall economic optimization is very difficult" [72] (p. 30). The nuclear fuel cycle is not coincident with periodical shutdown and refuelling (which falls in the middle part of the cycle): along with the three batches of fuel units (fresh, once-burnt, and twice-burnt) that are available for use in the design of the refuelling, there is just one batch (thrice-burnt fuel) that is to be discarded, apart from such fuel units as, upon being inspected, are found to be damaged. The fuel units from the two other batches are simply to be repositioned, according to the suitability of their respective degree of depletion. For example, fresh fuel is not to be positioned at the centre of the grid, to avoid too high a localized power density, which could damage the fuel units (or, worse, result in a safety hazard).
The back-end of the nuclear fuel cycle, when fuel is disposed of, comes in two alternative basic strategies. One of these is the open cycle: "the spent fuel is stored in long-term storage and then disposed of in a deep geological repository" [72] (p. 30). The other strategy is the closed cycle: "the spent fuel is reprocessed (the nuclear material is separated from the construction material and the products of fission reaction, and is re-used in the new fuel), and only vitrified radioactive waste is disposed of in a deep geological repository (smaller in size than for open cycle)" [72] (p. 30).
There are two areas in fuel management, within nuclear engineering: (i) the management of fuel as kept in store, and (ii) the allocation of fuel to be burned in the reactor core during a certain temporal interval between two refuelling points, each known as end-of-cycle (EOC). The problem of how to replace depleted fuel assemblies and how to position, inside the reactor core, fresh fuel assemblies and partly depleted fuel assemblies is the in-core fuel management problem (as opposed to the management of fuel kept in store).
Area (ii) is the task of the FUELCON expert system developed in Beer-Sheva, Israel, in 1986-1990, and of the FUELGEN tool developed in London in 1994-1996. Every cycle (intended as the interval between two EOC points, or more rigorously, from the beginning-of-cycle, or BOC, that follows the latest EOC, to the next EOC) is unique, and poses a new problem for design; it is typified by specific power demands, availability of fresh fuel, and cycle length before shutdown. Therefore, one cannot devise a refuelling once for all future refuellings. Factors limiting forecasting include specific power demands, availability of fresh fuel, past cycle length as yielding certain cumulative burnup values for each fuel assembly, and the expected length of the next cycle. One would think it to be desirable for the design of the next refuelling to be ready for the refuelling itself to be carried out immediately as the reactor is shut down; yet, this is not possible. There is an unavoidable bottleneck. Inspection is necessary, and then the design of a suitable fuel reload may take a few days to accomplish, because not just the design proper, but simulations as well, must be carried out. In fact, once the new downtime period arrives, i.e., at end-of-cycle (EOC), the fuel manager could not possibly have a ready-made design for reload, i.e., a tailored, reasonably good configuration that assigns one fuel assembly (of a given kind) to each single position in the core geometry.
For the purposes of fuel reload pattern design, the relevant criterion of classification is how many times the fuel unit has already been used. This is reflected in a cumulative burnup degree. Twice-burnt fuel has the highest burnup degree, and therefore the lowest potency, among the fuel retained for the next cycle; i.e., twice-burnt fuel is the kind of fuel assembly with the lowest power density. This constitutes an advantage in terms of manoeuvring when designing the new arrangement of the contiguous fuel assemblies in the reactor: the spatial power density of the fuel as arranged is, indeed, the main design parameter considered by the fuel manager who has to produce the design.

Manual Design Packages
Various approaches exist among computer tools assisting with in-core fuel management. Westinghouse's LPOP is based on back-calculation from a target power distribution. In contrast, the FUELCON expert system applies a heuristic ruleset and generates alternative candidate solutions by the hundreds, which are then simulated for parameter prediction, and visualized as "clouds" of dots in the plane of power peaking and cycle length. FUELCON's successor, FUELGEN, applies evolutionary computing.
In 1991, a team from Westinghouse Electric Corporation pointed out a need [151]: "Pressurized water reactor core designs have become more complex and must meet a plethora of design constraints. Trends have been toward longer cycles with increased discharge burnup, increased burnable absorber (BA) number, mixed BA types, reduced radial leakage, axially blanketed fuel, and multiple-batch feed fuel regions. Obtaining economical reload core loading patterns (LPs) that meet design criteria is a difficult task to do manually. Automated LP search tools are needed. An LP search tool cannot possibly perform an exhaustive search because of the sheer size of the combinatorial problem. On the other hand, evolving complexity of the design features and constraints often invalidates expert rules based on past design experiences." The consecutive waves of methods from computing (linear programming from operations research; then expert systems based on heuristic rulesets, and other knowledge-based systems from artificial intelligence; then again artificial neural networks; next, genetic algorithms; and finally, hybrid methods combining the above) had a major impact in nuclear engineering over the 1980s and 1990s, and ever since. In 1992, Parks and Lewins [152] outlined three categories of computerized tools for in-core fuel management: manual design packages (the engineer's own expertise is applied, and software only assists with calculations for analyzing the reactor core physics, and for visualizing the data); expert systems (which, like FUELCON, have a ruleset prune the search space, seeking good configurations); and optimization packages. One such package is Westinghouse's LPOP, described in 1991 by Alsop et al. [151,153]. LPOP adopts an integer programming (near-)optimization procedure that starts from a target power distribution and calculates backwards how to obtain it, by matching the available fuel inventory.
There used to be tools that just helped with either computations or graphics, performing single steps in the design procedure, whereas the overall procedure was manual. Tools from another class embody some optimization method from operations research, as customized for the task. The algorithms adopted there do not transparently reflect the intuitive rules of thumb of the human expert. Observability in the space of solutions is very limited, so such tools were just seeking one solution that is admissible and, if possible, locally optimal. Then, FUELCON came into the picture, generating families of configurations.
There used to be computer programs that did not incorporate artificial intelligence techniques. For fuel reload pattern design, there had been one or two attempts to apply AI, but these did not depart from the conception of manual design as seeking just one solution [154][155][156]. Typically, a given solution, known (e.g., from some published case study) to have suited a different situation, was adapted to the new situation. This was done by means of a series of shuffles, i.e., by swapping again and again the positions of two (or more) fuel assemblies in the one-eighth slice of the core, and evaluating by simulation whether the configuration obtained is safe, could keep the nuclear reaction going long enough, and is efficient. Conceptually, such a procedure is a depth-first search, with shuffles constituting backtracking. Rules check whether a candidate shuffle is admissible. Thereafter, simulation has to check whether the new solution is admissible for the problem at hand, i.e., for the given reactor after a given operation cycle.
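The shuffle-and-simulate style of adaptation can be sketched as follows (a hedged illustration of the generic scheme, not any specific published tool; `is_admissible_swap` and `simulate` are placeholders I introduce for the rule check and the reactor-physics simulator):

```python
import itertools

def adapt_by_shuffling(config, is_admissible_swap, simulate, max_rounds=100):
    """Greedy adaptation of a known configuration by pairwise swaps.

    `config` maps slice positions to fuel assemblies. Each candidate swap
    is first checked by rules, then evaluated by simulation, and is kept
    only if it improves the score; otherwise it is undone. This mimics
    the depth-first search with shuffles constituting backtracking.
    """
    best_score = simulate(config)
    for _ in range(max_rounds):
        improved = False
        for p, q in itertools.combinations(sorted(config), 2):
            if not is_admissible_swap(config, p, q):
                continue
            config[p], config[q] = config[q], config[p]      # shuffle
            score = simulate(config)
            if score > best_score:
                best_score, improved = score, True           # keep the swap
            else:
                config[p], config[q] = config[q], config[p]  # backtrack
        if not improved:
            break
    return config, best_score
```

In practice each call to `simulate` would be a costly core-physics run, which is why such adaptation explored only a handful of candidate configurations.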

Expert Systems
In 1991, Tahara [157] reported about an expert system for reshuffling. Bernard and Washio [158] surveyed ongoing projects applying expert system technology to nuclear engineering. For refuelling in particular, expert systems or prototypes have dealt with two kinds of tasks: (a) the repositioning of fuel units based on the physical constraints of a crane having to take out, transfer, and reinsert the unit in another position in the grid which represents the core; and (b) designing such a reload of the fuel while deliberately ignoring such logistics of the material operation, and while focusing on the benefits in terms of effective power peaking in the successive states of the reactor core, and of length of the period over which the nuclear fission can be usefully sustained (for example, one year and a half instead of just one year), as well as in terms of safety (the power peaking being sustained, yet not excessive).
Arguably, FUELCON, about which we published as early as 1988 [143], with a preliminary paper by Alex Galperin in 1985 [165], was the earliest project in category (b) of Bernard and Washio [158]. Another tool for reload design is AUTOLOAD, described in 1994 by Li and Levine [166]; in AUTOLOAD, an expert system with heuristic rules is integrated with search by means of the SCAM-W algorithm. Also see Parks et al.'s paper of 1992 [167], which originated at an event of 1990.
In 1998, K.-J. Lin and Chaung Lin, in Taiwan [168], described pressurized water reactor reload design by an expert system. Their program generated several reload cycles for the Maanshan nuclear power plant in Taiwan. In their system, "an initial loading pattern was generated according to the fuel assembly's infinite multiplication factor, by using heuristic rules. Then, the fuel assembly was swapped or rotated, using heuristic rules, to satisfy an assumed search target" [168].
Ambitions, with FUELCON, were of global optimization. FUELCON generates not one, but hundreds of alternative configurations per session. A heuristic ruleset finds a good position for a given fuel assembly, and this is done recursively, until the entire core is filled with all of the available fuel units. Basically, search is breadth-first. There is no backtracking. The main part of the FUELCON expert system is a solution-space generator based on the beam-search algorithm, thus a combination of the best-first and breadth-first methods. Solution-space expansion is performed in a tree, level by level, and at each level several best branches are developed, the rest of the branches being discarded, which makes backtracking impossible. In FUELCON, the algorithm is applied to the generation of a tree where each level corresponds to one fuel assembly to be positioned in the reactor core (with different core cells corresponding to the various branches on the same level in the tree). The method is adequate for the application. Chronological backtracking, i.e., revoking the last placed assembly, would be ineffective with regard to improving the current configuration, and no alternative backtracking method was envisioned.
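The beam-search scheme just described can be sketched minimally as follows (a hedged illustration, not FUELCON's actual rule engine; `expand_ok` and `score` are generic stand-ins I introduce for the elimination and preference rules):

```python
def beam_search_reload(loading_sequence, positions, expand_ok, score, beam_width=5):
    """Level-by-level beam search over partial fuel configurations.

    `loading_sequence` lists fuel units in the (given) order of placement;
    `expand_ok(partial, unit, pos)` plays the role of elimination rules;
    `score(partial)` plays the role of preference rules. The tree is
    expanded one fuel unit per level, and only the best few partial
    configurations survive to the next level, so there is no backtracking.
    """
    beam = [dict()]                        # root: the empty core slice
    for unit in loading_sequence:          # one tree level per fuel unit
        children = []
        for partial in beam:
            for pos in positions:
                if pos in partial or not expand_ok(partial, unit, pos):
                    continue               # pruned branch: never revisited
                child = dict(partial)
                child[pos] = unit
                children.append(child)
        # keep only the best branches at this level; discard the rest
        children.sort(key=score, reverse=True)
        beam = children[:beam_width]
    return beam                            # leaves: candidate configurations
```

With a generous beam width, the leaves returned are the hundreds of alternative candidate configurations to be passed on to simulation.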
Two kinds of knowledge are resorted to in FUELCON: (i) basic knowledge in reactor physics and thermal-hydraulic principles; and (ii) specific local heuristic knowledge, related to the positions in the core geometry, and to the degree of depletion of the fuel to be positioned there. This knowledge is represented, in FUELCON, in if-then rule format. There are two subsets within the ruleset: (A) "elimination rules" (which are mandatory, and dictated by safety considerations, namely, the concern to prevent too high local power densities in the reactor core), and (B) "preference rules", which express a policy of design, and make one version of the ruleset peculiar with respect to other potential and permissible versions of the ruleset. The preference rules vary according to the performance-seeking strategy to be adopted.
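The two rule subsets can be illustrated schematically (hypothetical rules of my own devising, in if-then spirit; actual FUELCON rules encode reactor-physics and thermal-hydraulics knowledge not reproduced here):

```python
from collections import namedtuple

# Hypothetical fuel unit record: `burnups` counts prior cycles of use.
Fuel = namedtuple("Fuel", "burnups")

CENTRE = (0, 0)

def elim_no_fresh_at_centre(partial, unit, pos):
    """Elimination rule (mandatory, safety-driven): veto placing fresh
    fuel at the centre cell, where local power density would peak."""
    return not (unit.burnups == 0 and pos == CENTRE)

def pref_fresh_next_to_burnt(partial, unit, pos):
    """Preference rule (a revisable design policy): score a fresh unit
    higher when its orthogonal neighbours are burnt assemblies."""
    if unit.burnups > 0:
        return 0
    neighbours = [u for p, u in partial.items()
                  if abs(p[0] - pos[0]) + abs(p[1] - pos[1]) == 1]
    return sum(1 for u in neighbours if u.burnups > 0)
```

The distinction matters for the refinement loop described below: elimination rules stay fixed, while preference rules are the ones an expert user revises between rounds.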
The loading sequence of the fuel assemblies is given (just as the heuristic rules are a given), and it is dictated by heuristic considerations, with the aim of increasing the temporal distance between shutdowns, apart from the generalized requirement that downtime periods be kept short (they usually take a few weeks). "Expert systems have also been developed to assist with scheduling the movements of fuel assemblies. One such system is CLEO, which was developed at Hanford for use in the Fast Flux Test Facility [160,161]. Moreover, EPRI [i.e., the Electric Power Research Institute in Palo Alto, California], in conjunction with Intellicorp and the Virginia Electric Power Company, developed a prototype expert system [162,163], which planned crane movements for the fuel insert shuffle of a PWR" [159] (p. 41). Samary Baranov, a former Soviet, naturalized Israeli researcher who was a colleague of mine in Beer-Sheva, at one time developed, for application at Soviet nuclear plants, a method by which a solution algorithm is converted into an optimized logic circuit design for incorporation into the hardware. Baranov's conversion procedure is independent of the application domain, and was exemplified in several ways in a book of 1994 (see especially its Chapter 8 [164]), which however does not include the old, but (as I noted in 1998) unpublished, application to nuclear power plants. The heuristic rules are applied to the database of the problem at hand (the geometric structure of the core's symmetric slice, the positions at which the control rods are located, the pool of available fuel assemblies).
A set of predefined heuristic rules is executed recursively by FUELCON's control component. The ruleset is executed, determining which nodes to generate (or, instead, avoid generating) as, level by level, a tree of candidate configurations is being built. Those configurations are the terminal nodes (the leaves) of the tree. The tree of partial configurations starts at the root, being the empty core. The first level of the tree corresponds to the first fuel unit in the given loading sequence. The next level in the tree corresponds to the next fuel unit in the loading sequence. The loading sequence of the fuel units is given: it is an input of FUELCON.
Typically, the user of FUELCON is an experienced fuel manager, who will want to improve further upon the solutions yielded by the first round of ruleset execution, once they are simulated. The user can do that by modifying the ruleset: to steer the strategy, he or she would refine the "preference rules", for example to achieve a zooming effect on a region in the space of solutions. This forms a loop of automated computation and manual intervention, and each round of executing the ruleset recursively generates hundreds of terminal nodes in the tree constructed in the working memory of the expert system. These hundreds of candidate solutions are projected in the plane of power peaking and cycle length, and are visualized as dots in the admissible or forbidden regions of the plane. See Figure 5. Relevant publications on FUELCON include [145,163,169-176]. In particular, [176], a 100-page paper, also covers FUELGEN.
Figure 5. Improved output sets ("clouds") of candidate refuellings. The "cloud" of configurations of fuel in the core (displayed as dots in the plane of cycle length and the power peaking factor) moves into a better region, after the ruleset has been revised.

Validation by Simulations of the Reactor Physics
Once a design for how to load the fuel into the reactor core has been produced and selected by engineers, it is a statutory obligation that simulations of the reactor physics for parameter prediction (under the assumption that that particular design is adopted) be carried out by special software. Nodal algorithms have traditionally been used, but neural computation has also been applied [177,178] (by the latter two teams; see [179][180][181] on neural processing being applied to optimal fuel loading pattern design, which in [179,181] is in combination with a fuzzy ruleset).
Even though nodal algorithms are the older class of methods for parameter prediction of the reactor physics (for example, a 1995 doctoral dissertation by Fitzpatrick [182] was concerned with nodal reactor analysis tools for hexagonal geometry, i.e., with the core being a grid of hexagonal cells [183] instead of square cells: originally, hexagonal geometry used to be favoured at Soviet reactors), new versions of nodal methods keep emerging [184][185][186][187][188][189][190]. In a paper published in 2018, Gamarino et al. explain [184]: "Nodal diffusion is currently the preferred neutronics model for industrial reactor core calculations, which use few-group cross-section libraries generated via standard assembly homogenization. The infinite-medium flux-weighted cross sections fail to capture the spectral effects triggered in the core environment by nonreflective boundary conditions at the fuel-assembly edges. This poses a serious limitation to the numerical simulation of current- and next-generation reactor cores, characterized by strong interassembly heterogeneity." In fact, in 2017 Zhang et al. [191] proposed "[a] two-dimensional/one-dimensional (2D/1D) variational nodal approach . . . for pressurized water reactor core calculations without fuel-moderator homogenization" (rather than assembly homogenization, which is what Gamarino et al. [184] were talking about). In [191], "[o]n the radial interfaces, orthogonal polynomials are employed; on the axial interfaces, piecewise constants corresponding to the finite elements eliminate the interface homogenization that has been a challenge for method of characteristics (MOC)-based 2D/1D approximations".
Within Alex Galperin's FUELCON project around 1990 (a project in which I took part since its inception in 1986), Meir Segev's NOXER tool was used in order to carry out the simulations on the families of fuel configurations generated by FUELCON. Ambitions, with FUELCON, were of global optimization. FUELCON generates not one, but hundreds of alternative configurations per session. A heuristic ruleset finds a good position for a given fuel assembly, and this is done recursively, until the entire core is filled with all of the available fuel units. Basically, search is breadth-first. There is no backtracking. The output of the generation module is fed to NOXER, a simulator using nodal algorithms. The numeric results of NOXER are visualized. Each configuration is displayed as a dot in the Cartesian plane of two parameters: x is boron concentration at EOC (i.e., at the end of the cycle concluded by the current downtime period) and has to be maximized, beyond a threshold x_min. As to y, it is power peaking, which must stay below a safety threshold y_max: the constraint y < y_max defines the admissibility window, whose "southwestern" corner is at the origin of the coordinates. Instead, configurations above y_max are unsafe. As FUELCON produces hundreds of alternative solutions, a "cloud" of solutions is displayed in the plane. Sometimes, none is safe. Or, then, some are, but they are not satisfactory, because x < x_min. The interesting region is y < y_max ∧ x > x_min.
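In other words, the visualization amounts to screening the cloud of dots against two thresholds; a minimal sketch (the names x_min and y_max are my notation for the boron and power-peaking thresholds, not FUELCON's):

```python
def classify(configs, x_min, y_max):
    """Screen candidate configurations plotted as (x, y) dots:
    x = EOC boron concentration (to be maximized),
    y = power peaking (must stay below the safety threshold)."""
    safe = [c for c in configs if c['y'] < y_max]       # inside the admissibility window
    interesting = [c for c in safe if c['x'] > x_min]   # safe AND satisfactory
    return safe, interesting
```

A session's "cloud" is then just `classify(dots, x_min, y_max)`: the first list answers "is anything safe at all?", the second "is anything worth keeping?".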

Ruleset Improvement, and the Involvement of Neural Revision
As mentioned, an expert fuel manager using FUELCON (typically Galperin himself) would try to enhance the preference rules in the ruleset of FUELCON, based on the outcome of the previous iteration of FUELCON and NOXER. Already in [192], in 1989, we indicated that a refinement loop for heuristic rules in the context of FUELCON would be a desideratum. Actually, this is something I longed for, for the sake of innovation within AI, whereas Galperin, confident in his expertise at using FUELCON in his capacity as an expert user, did not feel the same. The opportunity to try to automate ruleset improvement came while I was based at Bar-Ilan University in the early 1990s, and I collaborated with a colleague, Hava Siegelmann, an expert in neural computation. Neural revision of the ruleset was envisaged. That project was designed, yet not implemented. A representation framework for the rules was developed, in which a rule would be coded at the symbolic level in a particular programming language (developed by Hava Siegelmann), and then transformed into a neural network.
Consider this elimination rule: "Do not load a fresh assembly in such a position that is adjacent to another position where there is another assembly of the same kind, except when one of those two positions is in a corner position." The corresponding code, written by Siegelmann, was as shown in Algorithm 1. This program was intended for compilation by symbolic-to-neural transformation. Algorithm 1 scans all the positions in the one-eighth slice of the reactor core. An alternative version just checks adjacent positions. It was next shown how to translate this code into a network: into either a simple feedforward network that tests the 20 positions simultaneously, or into a recurrent network that tests them serially [193,194]. (Also see [195][196][197], concerning the symbolic-to-neural project.) In contrast, Algorithm 2 is how she encoded this preference rule (which was Rule 8 in the ruleset): "If it is a once-burned assembly that is currently being considered, then choose for it-from amongst those positions that were not forbidden by Rules 1 to 6 (the elimination rules)-that position whose distance from the centre of the core is minimal."

Algorithm 1. Code written by Hava Siegelmann for the elimination rule: "Do not load a fresh assembly in such a position that is adjacent to another position where there is another assembly of the same kind, except when one of those two positions is in a corner position." [193,194].
Algorithm 2. Code written by Hava Siegelmann for the preference rule: "If it is a once-burned assembly that is currently being considered, then choose for it-from amongst those positions that were not forbidden by Rules 1 to 6 (the elimination rules)-that position whose distance from the centre of the core is minimal." [193,194].
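Siegelmann's original code is what Algorithms 1 and 2 reproduce; purely to convey the gist, here is a hypothetical re-expression of the two rules in Python. The data structures (a dict mapping positions to assembly kinds, a neighbour table, a set of corner positions) are my own illustrative choices, not the original representation.

```python
def violates_elimination_rule(core, pos, kind, corners, neighbours):
    """Elimination rule: do not load a fresh assembly adjacent to another
    assembly of the same kind, unless one of the two positions is a corner."""
    for nb in neighbours[pos]:
        if core.get(nb) == kind and pos not in corners and nb not in corners:
            return True
    return False

def preferred_position(candidates, forbidden, distance_from_centre):
    """Preference rule 8: among the positions not forbidden by the elimination
    rules, choose the one whose distance from the core centre is minimal."""
    allowed = [p for p in candidates if p not in forbidden]
    return min(allowed, key=distance_from_centre) if allowed else None
```

As in Algorithm 1, the check can be run over every position of the one-eighth core slice, or only over the neighbours of the position currently under consideration.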
Historically, automating the revision of knowledge acquired from a human expert by resorting to neural networks was not unique. In 1997, Shouhong Wang published a paper [198] about the application of neural networks in generalizing expert knowledge, for an industrial engineering application. In [199], around that time, I surveyed intelligent technologies for nuclear applications, signalling heuristic and neural tools for an array of diverse applications.

An Expert System Based on a Genetic Algorithm
In contrast to the transparency of FUELCON's ruleset, its functional replacement, FUELGEN [200][201][202][203][204][205][206][207][208] (developed in London and leading to Jun Zhao's doctoral dissertation [200]), works like a black box, and is more suitable for fuel managers who are not necessarily as experienced as FUELCON's users. It would be an apt metaphor to say that FUELCON is like a sports car, which does not suit everybody who wants to drive a car. In FUELGEN, a candidate reload pattern is represented as a set of genes. These correspond to the cells in the reactor core. The particular values (alleles) that each gene can take are the fuel assemblies. 20 The choice of such a minimal representation is deliberate, even though it restricts the possible crossover and mutation operators that can be constructed, because of the limits on information available to them. Nuclear physics dictates that our genes, which represent cells, will interact strongly, so that fitness variance of gene sets will be large (the power peak, being a local constraint, makes it easy to produce configurations with low fitness by giving the remainder of the genes appropriate alleles). The fact that adjacent assemblies interact most strongly suggests a natural linkage for the core reload problem: Poon and Parks [209][210][211][212][213] used a bidimensional chromosome, with structure identical to the sector of the core represented, so that adjacent cells correspond to adjacent genes. However, there are no groups of linked genes that are unlinked or weakly linked to others. The exchange of a linked set of genes cannot proceed without disruption to others. The local nature of the power peak measure implies that disruption will normally produce undesirable loading patterns with a high power peak and hence poor fitness.

20 The following few paragraphs are based on the final section in [201]. The role, in the making of FUELGEN, of my colleague, Alan Soper (by whom, see [202]), and of our doctoral student, Jun Zhao, cannot be overstressed. They deserve credit as my senior co-authors of the text of the paragraphs about FUELGEN in my present paper.
The exchange of a linked set of genes will in most cases produce offspring containing more than one copy of an assembly. Adjustments need to be made to the offspring to restore the assembly set. This is further evidence against the existence of independent groups of linked genes suitable for recombination, since restoration will disrupt linked gene combinations. Because of these considerations, in FUELGEN it was considered preferable not to use any additional structure defining linkage for recombination. Instead, the minimal representation was used and appropriate mutation strategies were defined. We used an assorting crossover. This kind of crossover is highly explorative, as it takes those genes that are common to both parents into the offspring and fills in the remaining genes randomly using all available allele values, rather than only those residing in the parents. In FUELGEN, it consists of these steps: given two selected parent loading maps, create two empty child loading maps with no assemblies assigned to cells; copy the contents of all cells for which the parents have identical assemblies to the offspring; fill the rest of the cells in the children randomly from the available assemblies, subject to the symmetry restrictions. Since our search strategy is based on mutation, we use mutation rates that are higher than normally used with a recombining crossover (where the mutation operator traditionally plays a secondary role); this way, search is given a strong emphasis on hill climbing.
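The steps of the assorting crossover just listed can be sketched as follows (a single child is produced here for brevity, assembly identifiers are assumed distinct, and FUELGEN's symmetry restrictions are omitted from this sketch):

```python
import random

def assorting_crossover(parent1, parent2, inventory, rng):
    """Assorting crossover: keep the genes (cell -> assembly) on which both
    parents agree, and fill the remaining cells randomly from the unused
    assemblies, rather than only from alleles residing in the parents."""
    child = {cell: parent1[cell] for cell in parent1 if parent1[cell] == parent2[cell]}
    remaining_cells = [cell for cell in parent1 if cell not in child]
    used = set(child.values())                      # assumes distinct assembly IDs
    pool = [a for a in inventory if a not in used]
    rng.shuffle(pool)                               # random fill-in drives exploration
    child.update(zip(remaining_cells, pool))
    return child
```

Because the fill-in draws from the full remaining inventory, every child is again a valid assignment of the whole assembly set, so no repair step is needed.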
Population structure in FUELGEN (the project adopted the so-called Island Model) consists of many small subpopulations, each evolving under the action of a separate genetic algorithm, and exchanging individuals infrequently with other subpopulations. As to the selection operator, we used ranking selection. Our approach to the fitness function was to use a multiobjective strategy [212]. Here, population members are placed in ranked groups according to whether they are dominated by other individuals. Ours was a strategy of gradually increasing the penalization of the fitness of solutions for violating the constraint as the search progresses. The initial stages of the search were directed towards increasing cycle length while less hampered by the presence of local minima arising from the constraint on the power peak. As evolution progresses and the average fitness of the population increases, the search was more directed to reducing the power peak. FUELGEN adopted MacroGA, the "macro genetic algorithm" the London team developed, within the framework of what is known as macroevolution, in genetic algorithms research. In macroevolution, a number of species co-evolve. In FUELGEN, we chose not to have dominant species among these. Migration is controlled by a migration rate (or probability) determining when migration occurs, the number of migrants per migration (normally two), and a migration topology, such as the so-called ring or line topology. Migrants are the best individuals in the source species, and they will compete with all the individuals within the destination species for survival; the same number of the least fit individuals will then be removed. Each species has a different crossover and mutation rate, which are generated from a mutation rate range and a crossover rate range. The ranking selection method is applied to all species. It is only as a particular case that all species may happen to use the same crossover and mutation operators. Once in the macro genetic
algorithm the parameters and a set of species are initialized, the probability of migration is set, and each species is initialized in terms of selection operator, crossover operator, mutation operator, population size, and crossover and mutation rates. Afterwards, all species evolve in parallel, and migrations take place at each generation. As each generation is created, the fitness for each individual in the population is evaluated, and the best individual is recorded and checked; next, the genetic operators are applied, in order to create a next generation. Yet, if the current best individual is acceptable in terms of fitness (i.e., if a fuel-loading pattern has been found that is as good as we want it to be), then a termination message, "Stop", is distributed to all species, and the macroevolution process comes to an end. Several other teams reported adopting genetic computation for the purpose of generating reload patterns [214][215][216][217][218][219][220][221][222][223][224][225][226][227][228][229][230], or for other applications to nuclear reactors [159,160]. As mentioned earlier, a combination of a fuzzy ruleset and neural computation being applied to optimal fuel loading pattern design was reported in [179][180][181]. Also see [231][232][233] on fuzzy representations in the nuclear sector.
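The Island Model migration step described above (best individuals of each source species replace the least fit of the destination, here on a ring topology) can be sketched as follows; all names are my own:

```python
def migrate_ring(islands, fitness, n_migrants=2):
    """One migration event: copy each island's best individuals to the next
    island on the ring, displacing that island's least fit individuals."""
    # snapshot the migrants first, so that simultaneous migration
    # uses the pre-migration populations
    migrants = [sorted(pop, key=fitness, reverse=True)[:n_migrants] for pop in islands]
    for i, pop in enumerate(islands):
        incoming = migrants[(i - 1) % len(islands)]   # from the previous island
        pop.sort(key=fitness)                         # least fit first
        pop[:len(incoming)] = incoming
```

A line topology would differ only in not wrapping around from the last island back to the first.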
Particle swarm optimization was applied in [234,235]. Swarm optimization [236] (Chapters 7 to 9) is a loop which, for each individual (i.e., individual particle) in the swarm, evaluates the fitness of that individual, in order to establish whether, for each dimension in turn, the particular individual is the best so far. Having completed the inner loop on the dimensions, the fitness of the particular individual is compared to the fitness of each neighbour, in order to find out who is the best performer in the neighbourhood. Another inner loop is carried out on all dimensions in turn: change is in terms of velocity of the particle, and change is a function of the difference between the individual's previous best and current positions and the difference between the neighbourhood's best and the individual's current position. What is adjusted is the particle velocity and position. A maximal velocity is applied, in order to prevent the velocity of the given individual particle from exceeding it on each dimension. This is because the particle cycles unevenly around a point defined as a weighted average of the two bests, and there are oscillations: owing to random numbers being used, the exact location of this point changes on every iteration. As oscillations become wider and wider, the system may explode, unless velocity is dampened. This is achieved by imposing a maximal velocity, so particles oscillate within bounds. There exist a few versions of the particle swarm paradigm. These include the inertia weight approach and the constriction factor approach.
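The loop just described (personal best, neighbourhood best, velocity update, maximal-velocity clamp, inertia weight) can be sketched as follows; the global-best neighbourhood and the test function in the usage note are my simplifications:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Inertia-weight particle swarm: velocity is pulled toward the personal
    best and the neighbourhood (here: global) best, clamped to +/- vmax."""
    rng = random.Random(seed)
    lo, hi = bounds
    vmax = 0.5 * (hi - lo)            # dampens the oscillations around the bests
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_val = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):      # inner loop over the dimensions
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                vs[i][d] = max(-vmax, min(vmax, vs[i][d]))   # maximal velocity
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pbest_val[i]:    # update the personal best
                pbest[i], pbest_val[i] = xs[i][:], val
                if val < gbest_val:   # update the neighbourhood best
                    gbest, gbest_val = xs[i][:], val
    return gbest, gbest_val
```

For example, `pso_minimize(lambda p: sum(t * t for t in p), dim=2, bounds=(-5.0, 5.0))` drives a 30-particle swarm toward the origin of a simple quadratic bowl.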
Khoshahval et al. [237] offered a performance evaluation of particle swarm optimization and of genetic algorithms as applied to core loading pattern optimization (i.e., to optimize refuelling) in pressurized water reactors. Alamaniotis et al. [238] discussed applications of particle swarm optimization in industrial engineering and nuclear power plants. A paper by de Moura Meneses et al. [239] concerns particle swarm optimization as applied to the nuclear reload problem of a pressurized water reactor.
De Lima et al. [240] applied to the refuelling problem an optimization technique known as Artificial Ant Colony Connective Networks. This is a technique inspired by biology (this phenomenon in computing or in robotics is known as biomimetism); in particular, it is inspired by pheromones in biology [241,242].
In computational intelligence, ant colony algorithms for optimisation, also known as distributed stigmergetic control [243][244][245][246][247][248][249] (an introduction can be found in Engelbrecht's textbook Computational Intelligence [250] (Ch. 17, pp. 199-208)), are based on the metaphor of a colony of ants (or termites), when individual members of the colony follow a route once they detect the olfactive trace of chemicals (pheromones) released by their companions that already passed by that route [249]. 21

21 As for robotics (biorobotics), a pheromone trail is a concept underlying neurobiologist Frank Grasso's RoboLobster. The aim of that project is to produce a robot that would mimic a lobster's ability to navigate pounding surf, and the intended use is in discovering the scent of chemicals and identifying their sources underwater, such as leakages (e.g., a plume of chemicals was discovered in the Red Sea, in experiments conducted in 2002 on second-generation lobster-like robots in collaboration with the Interuniversity Institute of Eilat, Israel), as well as in clearing mines from shallow water. The underwater trials were conducted at a depth of five metres. The substance to be sensed by the robot was marked with dye, so the students who were scoring the robot's behaviour could do so by tracking its movements relative to the dye [251]. "A small underwater wheeled robot with conductivity sensors was used to test chemical orientation strategies employed by lobsters to locate odor sources" [252] (p. 779, referring to [253]; also see [254,255]). RoboLobster does not look like a lobster, except in that on its front side it has two antennae, which are parallel with the flanks of the rather flat box which constitutes its body. There is one wheel on each of the flanks of its lower part (a box with a trapeze section), which is surmounted by a parallelepiped of clear plastic. Natural lobsters sense chemicals which may or may not be produced by their own kind. For example, in his book on pheromones, Wyatt [256] (p. 62, caption of Figure 3.10) shows: "a female lobster [ . . . ], attracted by a male's chemical 'song', jets her own urine towards a male in his shelter. He responds by retreating to the opposite entrance and fanning his pleopods. [ . . . ]". Wyatt's corresponding text [256] (p. 61) explains: "males of the lobster Homarus americanus create odour currents from their dens, thus 'singing a chemical song' [ . . . ]" (citing [257]); "Females choose the locally dominant male by his urine signals". Wyatt [256] (pp. 219-220) explains: "turbulent odour plumes form as air or water currents disperse odour molecules from their source. Odour plumes are of course normally invisible but swirling smoke clouds from a chimney provide a good visual analogy of the important features", and: "the smoke forms a meandering cloud that snakes down. If you get closer, you can see the fine-scale structure within the clouds, with filaments of high concentration interspersed with cleaner air. As a cloud of odour molecules moves from the source, turbulence tears apart the cloud into elongated odour-containing filaments, each only a few millimetres wide, separated by 'clean' water or air". Importantly for communication: "this fine filament structure is central to the responses evolved by orienting animals. The turbulent effects are greater than diffusion (which is comparatively slow) and an important consequence is that a plume is far from a uniform cloud of pheromone drifting downwind; rather, it is composed of filaments that remain relatively concentrated. Thus the pheromone concentrations within the filaments will be above the response threshold much further downstream than a diffusion model would predict-but in a spreading plume, far down stream, the odour filaments may be widely spaced" [256] (ibid.).

Tabu search has been applied to fuel reload design [257][258][259]. Tabu search (to say it with [260]) "is a metaheuristic search method employing local search methods used for mathematical optimization. Local (neighborhood) searches take a potential solution to a problem and check its immediate neighbors (that is, solutions that are similar except for very few minor details) in the hope of finding an improved solution. Local search methods have a tendency to become stuck in suboptimal regions or on plateaus where many solutions are equally fit.
Tabu search enhances the performance of local search by relaxing its basic rule. First, at each step worsening moves can be accepted if no improving move is available (like when the search is stuck at a strict local minimum). In addition, prohibitions (henceforth the term tabu) are introduced to discourage the search from coming back to previously-visited solutions.
The implementation of tabu search uses memory structures that describe the visited solutions or user-provided sets of rules. If a potential solution has been previously visited within a certain short-term period or if it has violated a rule, it is marked as "tabu" (forbidden) so that the algorithm does not consider that possibility repeatedly." See, e.g., [261,262] about tabu search. Simulated annealing, too, was applied to load pattern optimization at nuclear power plants [263][264][265][266][267] (also see [268]). At any rate, consider that optimization occurs in the nuclear sector for sundry applications, e.g., fuel rod optimization [269].
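The mechanism quoted above (best non-tabu neighbour, short-term memory of recent moves, aspiration when a tabu move would beat the best known solution) can be sketched on a toy bit-string problem; the onemax objective here is my choice purely for illustration, and real reload applications [257][258][259] use far richer move sets:

```python
from collections import deque

def tabu_onemax(n_bits=10, tenure=3, max_steps=50):
    """Tabu search maximizing the number of ones in a bit string, with a
    single-bit-flip neighbourhood and a short-term memory of recent flips."""
    x = [0] * n_bits
    best_val = sum(x)
    tabu = deque(maxlen=tenure)               # recently flipped bit indices
    for _ in range(max_steps):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] = 1 - y[i]
            val = sum(y)
            # aspiration: a tabu move is allowed if it beats the best so far
            if i not in tabu or val > best_val:
                candidates.append((val, -i, y))
        if not candidates:
            continue
        val, neg_i, y = max(candidates)       # best (possibly worsening) move
        tabu.append(-neg_i)                   # flipping the same bit back is now tabu
        x = y
        best_val = max(best_val, val)
    return best_val
```

Even after the optimum is reached, the loop keeps accepting worsening moves, which is exactly the behaviour that lets tabu search climb out of strict local optima on harder landscapes.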
And finally, consider that because of how expensive it is to shut down a large nuclear power plant in order to refuel, a reload strategy was considered for the Perry plant in Perry, Ohio, based on fixing the locations of once-burnt fuel between cycles, instead of waiting for the end-of-cycle shutdown; this was discussed in 2001 by Maag and Knott [270], who claimed: "this strategy can reduce refueling shuffle critical path time by three days without penalty in fuel cycle economics. The scheme works well for Perry because of the extreme cycle energy requirements and the large feed batch size needed to meet those requirements".

Current Trends
In the 2010s, the problem of optimization of fuel reload design into the cores of the more traditional kinds of reactors at commercial nuclear power plants appears to have been considered satisfactorily solved, in the sense that even before the mid-decade, one no longer finds new developments reported in the issues of the journal Nuclear Science and Engineering of the American Nuclear Society, 22 the first and oldest peer-reviewed journal in its field, published by ANS since 1956, and appearing 12 times a year. In contrast, one does come across such reports in other forums, such as conferences in South America, because teams report about how they solved the problem at particular plants, considering their circumstances. (Of course, seeking a better understanding of reactor operations and in-core fuel management through the development and application of fundamental tools remains an important goal, for example among those of the Reactor Physics Division of the American Nuclear Society.) Rather - and this is something that I ascribe to two factors: (a) diversification into new types of reactors, and (b) the difficulties the commercial nuclear power sector has been and is still undergoing for sundry reasons (mainly competing in the wholesale energy markets), 23 whereas it can be expected that in the longer term the sector will continue to be an important form of power generation - one does come across, in the journal Nuclear Science and Engineering, articles about methods and tools intended to assist decision making within fuel management at nuclear power plants, when it comes to selecting a fuel management strategy for the future, in the face of uncertainties about future circumstances and the array of options becoming available.

22 Nuclear Science and Engineering is the flagship journal of the ANS, and the most prestigious journal in nuclear engineering. It is a journal in print, with several volumes per year (for a total of 12 issues), but abstracts and (for purchase) articles can be accessed through the website http://www.new.ans.org/pubs/journals/nse/.
In particular, recycling systems are an important factor to consider when deciding about fuel-cycle implementation pathways. In a 2017 paper, Lara Pierpoint explained [272], while proposing a method of decision making comprising simulations along with a decision tree model: "Nuclear fuel cycle studies have provided a wealth of information on the potential impacts of advanced recycling systems. Deciding on fuel cycle implementation pathways, however, requires synthesizing volumes of data and navigating trade-offs between fuel cycle options. This research presents a framework intended to aid fuel cycle decision makers by focusing on the cost reduction/waste mitigation trade-off as a lens for choosing a near-term strategy. The framework consists of a fuel cycle simulation coupled to a decision tree model that maps evolution scenarios. System scenarios are constructed by considering the technological options for fuel cycle evolution and key uncertainties expected to affect the desirability of those options. For this study, the once-through fuel cycle is compared to a self-sustaining fast reactor (FR) fuel cycle. Scenarios are compared using a value function that incorporates cost and waste metrics. . . ." In an article of 2014, Passerini et al.
[273] began with these considerations: "Experience with modeling fuel cycle options reveals that the large amount of generated data makes it difficult to understand trade-offs among fuel cycle policies. This paper shows that numerical optimization can be used to better identify impacts of fuel cycle policies and condense the generated data against a few significant criteria. The once-through cycle is considered the baseline case, while advanced technologies with fuel recycling characterize the alternative fuel cycle options available in the future. The options include, among others, recycling the fissile materials from spent light water reactor fuel in fast reactors (FRs) as well as deployment of innovative recycling reactor technologies, such as the 235U initiated FRs. Additionally, a first-of-a-kind optimization scheme for the nuclear fuel cycle analysis is described. Optimization metrics of interest to different stakeholders in the fuel cycle (economics, fuel resource utilization, high-level waste, transuranic materials/proliferation management, and environmental impact) are utilized for two different optimization techniques: a linear one and a stochastic one. . . ."
In 2016, Phathanapirom and Schneider [274] "introduce[d] a new methodology for explicitly incorporating uncertainties in key parameters into decision making regarding the transition between various nuclear fuel cycles. These key uncertainties-in demand growth rates, technology availability, and technology costs, among others-are unlikely to be resolved for several decades and invalidate the concept of planning for a unique optimal transition strategy". This was demonstrated in a case study: "the methodology is applied to a case study involving transition from the current once-through light water reactor fuel cycle to one relying on continuous recycle in fast reactors, and the effect of choice criterion is explored. Hedging strategies are found that preserve significant flexibility to allow alteration of the fuel cycle strategy once these uncertainties are resolved" [274].
Also consider plans for small reactors that could be carried by lorry (truck). Whereas one may think that this size and mode of transport may even make them vulnerable to theft (and use in terror), this is not a realistic risk, as the reactor, when transported, would contain no fuel: the risk is inferior to that of transporting nuclear fuel. In practice, such small reactors are intended to be housed, several of them, in a plant. To give an idea of the scale involved: whereas the Perry plant in Ohio is a 1270-MWe one, the US nuclear technology developer NuScale Power has developed (originally at Oregon State University) its small modular reactor (SMR), whose basic power module is fully factory-fabricated and is a 50-MWe (gross) nuclear unit consisting of an integral reactor surrounded by a high-pressure steel containment. The firm foresees power stations housing up to 12 modules [275]. The very fact that the reactor core would be small would make it more difficult to start the reaction and keep it steady, and for that purpose, special configurations of how to load the fuel assemblies could be part of the solution. In 2018, Kuwagaki et al. [276] proposed a rotational fuel-shuffling concept for breed-and-burn reactors, a next-generation kind of reactor: "In the breed and burn (B&B) strategy, 24 low-reactivity fuels are loaded in a core. It is difficult to keep criticality in operating a small core. To enhance the potential for achieving criticality, the neutron economy in a core should be improved. One improvement method is to increase the core size and reduce neutron leakage. If it is necessary to avoid the large-sized core, another method is to locate high-reactivity fuels in high-neutron-importance region continuously through an equilibrium burnup state. On the other hand, to stabilize the change of neutron flux and power distribution during the operation, the B&B regions need to be kept stationary in the same region.
In this study, a rotational fuel-shuffling concept was proposed. In this concept, fuel assemblies are moved to the next position step by step in a divided symmetry core region. Fresh fuel is loaded from the periphery and moved toward the center region, then moved outward and discharged. If the core could achieve an equilibrium state at which high-reactivity fuels are continuously placed in the core center region, it would be possible to keep the B&B regions stationary. In this kind of equilibrium state, high-reactivity fuels are placed in high-neutron-importance region stably. . . ." Moreover, bear in mind that we have been concerned with practical problems of in-core fuel management at commercial plants that generate power by nuclear fission, and not with how to refuel reactors based on nuclear fusion, which is still a sector of blue-sky research. For example, in late November 1990, Loren Steinhauer reported: "[t]he sixth in a series of workshops on D-3He-fueled field-reversed configuration (FRC) fusion reactors was held at Kyushu University. The workshop was co-chaired by H. Momota of the National Institute for Fusion Science (NIFS) and G. H. Miley of the University of Illinois"; "An updated reactor design was presented by H.
Momota (NIFS)" [278] (p.373); "refuelling in the new design employs a new idea whereby a pellet is fired at relatively low speed into the region just beyond one end of the FRC.Then a control coil is activated to cause a transient 'strike' of the FRC to envelop and absorb the pellet" [278] (pp.374-375).Saveli Feinberg in 1958 [279] proposed what he called a breed-and-burn reactor, i.e., a reactor that could breed it own fuel inside the reactor.As things stands, scholars also refer more specifically to a travelling wave reactor, or TWR for short (in French, the name is réacteur à onde progressive, or réacteur à onde de combustion).This concept is still in the development stage.Such a reactor would operate deep underground, and would only need to have fuel loaded at the beginning, and this would suffice for, say, 30 years.
Think of a square grid representing a cross-section of the reactor core. It would originally be filled with fuel assemblies made of the uranium-238 isotope. The fission would originally occur in a smaller square of, say, four by four fuel assemblies filling 16 cells, at the very centre of the core. Then the fission would spread from there, in a circle, where the fuel has been replaced with fission products, and around the circle there would be a ring comprising such cells in the grid where the uranium has become plutonium-239. The circle keeps growing, and with it, the ring surrounding it. In the end, almost the entire grid would contain fission products, with some plutonium at the periphery. But it would take several decades for the core to reach such a state. "TWRs differ from other kinds of fast-neutron and breeder reactors in their ability to use fuel efficiently without uranium enrichment or reprocessing, instead directly using depleted uranium, natural uranium, thorium, spent fuel removed from light water reactors, or some combination of these materials" [280]. "The name refers to the fact that fission remains confined to a boundary zone in the reactor core that slowly advances over time. TWRs could theoretically run, self-sustained, for decades without refueling or removing spent fuel" [280].
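The growing-circle picture just described can be caricatured as a toy cellular automaton. The rules below are purely illustrative, invented for this sketch, and are not reactor physics: a cell is fertile (U-238), breeding (becoming Pu-239), burning, or depleted, and a fertile cell next to a burning one begins to breed.

```python
# Toy cellular-automaton sketch of a travelling "burn wave" on a square grid.
# States: 0 = fertile (U-238), 1 = breeding (becoming Pu-239),
#         2 = burning (fissioning), 3 = depleted (fission products).
# These rules are invented for illustration only; they are not physics.

N = 16
grid = [[0] * N for _ in range(N)]
for r in range(6, 10):               # ignite a central 4x4 zone
    for c in range(6, 10):
        grid[r][c] = 2

def step(grid):
    new = [row[:] for row in grid]
    for r in range(N):
        for c in range(N):
            s = grid[r][c]
            if s == 2:               # burning cells burn out
                new[r][c] = 3
            elif s == 1:             # bred fuel starts to fission
                new[r][c] = 2
            elif s == 0:             # fertile cells next to the wave breed
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < N and 0 <= cc < N and grid[rr][cc] == 2:
                        new[r][c] = 1
                        break
    return new

for _ in range(12):                  # let the wave sweep outward
    grid = step(grid)

depleted = sum(row.count(3) for row in grid)
print(f"depleted cells after 12 steps: {depleted} of {N * N}")
```

Under these toy rules the front advances one cell every two steps, leaving fission products behind and a breeding ring ahead; a real TWR model would of course couple neutronics with burnup.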
See e.g., a 2015 article by Huang et al. [281]. Also see a seminal paper [282] by Teller et al., of 1996, entitled "Completely Automated Nuclear Reactors for Long-Term Operation". Daniel Garrun [283], writing in 2012, has stated: "The Generation IV International Forum (GIF) has identified six types of new reactor to solve this [i.e., the problem of generated radioactive waste] and today's other problems with nuclear power. . . . One of the most promising is TerraPower's Traveling Wave Reactor (TWR). It is a type of 'fast breeder': it works using fast rather than thermal neutrons, and breeds its own fuel. Started off with a relatively small amount of low-enriched material, the reactor runs on depleted uranium" [283]. In a fast breeder, "[n]uclear waste then becomes a valuable resource. The next generation of nuclear reactors could thus run by extracting the energy left over in the spoils of the present nuclear generation. The TWR is also known as a breed-and-burn or burning-wave reactor. A 'burn wave' is started with a small core of enriched fissile U-235 but continues with only fertile material like much more common depleted uranium U-238" [283]. Once a TWR is fuelled, there is no need to open it: "the breed-and-burn wave of the TWR does not itself move. Instead, the fuel in the core is moved in and out of the breed-burn region as a 'standing' wave. This fuel shuffling will be automatic, and won't need the reactor to be opened" [283]. The core of a TWR is conceptualized as having four zones: "the localised fission zone concentrates the U-235 that decays producing fast neutrons. These are captured in the surrounding breeding zone, converting a fertile isotope like U-238 into a fissile isotope like Pu-239, which itself decays producing between two or three fast neutrons. The fresh zone contains unreacted fertile material, and the depleted zone contains mostly fission products and leftover fuel. The reactor has a once-through (open) nuclear cycle. It operates at around 550 °C, with the heat being removed by liquid sodium to drive steam turbines" [283]. We can see then that next-generation reactors bring with them new kinds of situations that modify how we think about in-core fuel management. It is foreseeable that mature AI technologies will continue to play a part in operating such reactors.

Concluding Remarks
This overview article surveys research, spanning a period from the mid-1980s to the present, that has applied to the in-core fuel management problem in nuclear engineering (an economically important problem, also known as nuclear reactor fuel loading pattern design, or the design of refuellings: see Figures 6-8) techniques that have been historically associated with artificial intelligence research within computer science. A referee questioned the appearance of "AI" in the title of this paper, because of changing perceptions of what it stands for, or the expectations its mention would raise now. Therefore, I underline that the criterion of inclusion is historical: these are computing techniques that were typically developed by people who, within computer science departments, were known as the AI people; or else, a few of the techniques were developed at the interface between operations research (the discipline that researches mathematical optimization techniques) and AI, because of the latter's promotion of heuristic optimization, that is to say, techniques that are not guaranteed to find the optimum.
The introduction section of this paper comprises three parts. The first subsection lists the aims of the exposition; the second caters to nuclear engineers not yet familiar with AI, and introduces the reader to the techniques whose applications to nuclear reactor fuel loading pattern design are surveyed here. The third subsection of the introduction introduces such readers who are not familiar with nuclear reactors to the preliminary notions in the domain that are necessary for understanding the rest of the article. Also, the second section of this paper, "Safety, Materials, and Options for Nonproliferation Aims", caters to readers who are not nuclear engineers: it points out that there are political concerns, not merely economic or industrial concerns, that have been affecting nuclear fuels, and the types of plants that suit them.
Designs 2018, 2, x FOR PEER REVIEW 30 of 45
The loading sequence of fuel units is given as input, as is a slice in one-eighth symmetry of the grid of the reactor core. Run after run of the configuration-generator, a new level is added to the tree. Eventually, a multitude of alternative solutions is obtained. Then they are simulated.
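That generate-and-test scheme can be sketched minimally in code. Here the slice, the loading sequence, and the single placement rule are all made up for illustration; the real tools apply an expert-system ruleset and then a reactor-physics simulator.

```python
# Schematic sketch of the generate-and-test scheme: fuel units are placed
# one per tree level into a one-eighth slice of the core grid.
# CELLS, FUEL, and the placement rule are invented for this sketch.

CELLS = [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]     # tiny 1/8 slice
FUEL = ["fresh", "fresh", "once", "once", "twice", "twice"]  # loading sequence

def extend(partial):
    """One generator run: add one tree level by placing the next fuel unit."""
    level = len(partial)
    unit = FUEL[level]
    used = {cell for cell, _ in partial}
    for cell in CELLS:
        if cell in used:
            continue
        # invented rule, for illustration only: no fresh fuel on the diagonal
        if unit == "fresh" and cell[0] == cell[1]:
            continue
        yield partial + [(cell, unit)]

def generate():
    frontier = [[]]                  # root of the tree: nothing placed yet
    for _ in FUEL:                   # each pass adds one level to the tree
        frontier = [q for p in frontier for q in extend(p)]
    return frontier

candidates = generate()
print(f"{len(candidates)} candidate configurations to pass to the simulator")
```

Each pass of `extend` corresponds to one run of the configuration-generator adding a level to the tree; the leaves are the alternative complete configurations that would then be handed to the simulator.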
The actual survey is in Sections 3 and 4. In particular, Section 3, "Computer Tools for Designing Fuel Reload Configurations", comprises eight subsections: Section 3.1, "Pressurized Water Reactors"; Section 3.2, "Nuclear Fuel Cycle"; Section 3.3, "Manual Design Packages" (tools that became available in the late 1980s); Section 3.4, "Expert Systems" (tools for the problem at hand that were developed from the second half of that decade, and made their appearance in the 1990s, at first by using the rule-based paradigm of expert systems); Section 3.5, "Validation by Simulations of the Reactor Physics" (after a design for fuel reload has been produced, there is a statutory obligation to validate it by running simulations, for example by using nodal algorithms); Section 3.6, "Ruleset Improvement, and the Involvement of Neural Revision" (this subsection describes the design of a hybrid rule-based/neural, i.e., symbolic/subsymbolic component, intended to help refine the ruleset of a rule-based tool for fuel reload design; by that stage in the 1990s, neural tools for that nuclear engineering problem were also appearing as an alternative to rule-based expert systems); Section 3.7, "An Expert System Based on a Genetic Algorithm" (the application to fuel reload design of that other technique emerged in the mid-1990s, and remained popular for years); and Section 3.8, "A Survey of the Application of Other Techniques: Genetic, Fuzzy, Particle Swarm, Tabu Search, Simulated Annealing".
Section 4 offers a tour of current trends in how to design the nuclear reactor plants of the future; the emergence of such novel plant designs affects what, correspondingly, the problem of fuel management is going to become.
In this article, mathematics and physics have been kept to a minimum. Adopting a discursive style, this survey has sought to accessibly inform different categories of readers: nuclear engineers, computer scientists, and even just a curious educated public. We had to remain rigorous, without becoming arcane, even as a multitude of names for techniques made their appearance. Hopefully, the subsections of the "Introduction", catering to different categories of readers, are conducive to that goal being achieved.

Figure 4. Symmetry inside a cross-section of a nuclear power reactor core. DIAG is any cell in the grid that is on a diagonal. AXIS is any cell in the grid that is on an axis. In one-eighth symmetry, the slice of the grid on which we reason is comprised between an axis and a diagonal.
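The slice described in this caption can be enumerated programmatically. The following is a minimal sketch, assuming grid coordinates with the origin at the core centre and the slice taken between the horizontal axis and the main diagonal; the function name and labels are mine, not from any of the surveyed tools.

```python
# Minimal sketch: enumerate the cells of a one-eighth symmetry slice of the
# core grid and classify them, assuming the origin at the core centre and
# the slice taken between the horizontal axis and the main diagonal.

def one_eighth_slice(half):
    """Cells (row, col) with 0 <= col <= row <= half, classified."""
    cells = {}
    for row in range(half + 1):
        for col in range(row + 1):
            if row == 0 and col == 0:
                kind = "CENTRE"      # the central cell of the core
            elif row == col:
                kind = "DIAG"        # on the diagonal of the core
            elif col == 0:
                kind = "AXIS"        # on the horizontal axis
            else:
                kind = "interior"    # strictly inside the slice
            cells[(row, col)] = kind
    return cells

cells = one_eighth_slice(4)
print(len(cells), "cells in the slice")
print(sum(1 for k in cells.values() if k == "DIAG"), "diagonal cells")
```

Reasoning on such a slice, rather than on the full grid, shrinks the search space by roughly a factor of eight, which is why the surveyed tools work in this symmetry.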

Figure 5. Improved output sets ("clouds") of candidate refuellings. The "cloud" of configurations of fuel in the core (displayed as dots in the plane of cycle length and the power peaking factor) moves into a better region, after the ruleset has been revised.
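One generic way to compare such clouds quantitatively is to extract each cloud's non-dominated points, since a longer cycle and a lower power peaking factor are both preferred. The numbers below are illustrative, not reactor data.

```python
# Generic sketch: extract the non-dominated ("Pareto") front from a cloud
# of candidate refuellings, each a (cycle_length, peaking_factor) pair.
# Longer cycles and lower peaking factors are both preferred.
# The data points are made up for illustration.

cloud = [(310.0, 1.52), (325.0, 1.48), (298.0, 1.41),
         (325.0, 1.55), (340.0, 1.60), (315.0, 1.45)]

def dominates(a, b):
    """a dominates b: no worse on both objectives, better on at least one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

front = sorted(pareto_front(cloud))
print("non-dominated candidates:", front)
```

A revised ruleset that "moves the cloud into a better region", in the sense of the figure, would show up here as a front that dominates the front obtained from the earlier cloud.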


Figure 6. The phases of operation of the usual types of nuclear reactors, with a new reload design having to be made at the downtime period, which usually begins at end-of-cycle.


Figure 7. Looping through types of fuel F, through an alternation of the reactor being in operation and the reactor being shut down in preparation for the next phase of operation.

Figure 8.