Article

What Machine Learning Can and Cannot Do for Inertial Confinement Fusion

Baolian Cheng and Paul A. Bradley
Los Alamos National Laboratory (LANL), P.O. Box 1663, Los Alamos, NM 87545, USA
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Plasma 2023, 6(2), 334-344; https://doi.org/10.3390/plasma6020023
Submission received: 16 February 2023 / Revised: 7 May 2023 / Accepted: 25 May 2023 / Published: 1 June 2023
(This article belongs to the Special Issue Feature Papers in Plasma Sciences 2023)

Abstract

Machine learning methodologies have played remarkable roles in solving complex systems with large data, well-defined input–output pairs, and clearly definable goals and metrics. The methodologies are effective in image analysis, classification, and systems without long chains of logic. Recently, machine learning methodologies have been widely applied to inertial confinement fusion (ICF) capsules and to the design optimization of OMEGA (Omega Laser Facility) capsule implosions and NIF (National Ignition Facility) ignition capsules, leading to significant progress. As machine learning is increasingly applied, concerns arise regarding its capabilities and limitations in the context of ICF. ICF is a complicated physical system that relies on physics knowledge and human judgment to guide machine learning. Additionally, the experimental database for ICF ignition is not large enough to provide credible training data. Most researchers in the field of ICF use simulations, or a mix of simulations and experimental results, instead of real data alone to train machine learning models and related tools, and then use the trained models to predict future events. This methodology can be successful, subject to a careful choice of data and simulations. However, because of the extreme sensitivity of the neutron yield to the input implosion parameters, physics-guided machine learning for ICF is extremely important and necessary, especially when the database is small, the domain knowledge is uncertain, and the physical capabilities of the learning models are still being developed. In this work, we identify problems in ICF that are suitable for machine learning and circumstances where machine learning is less likely to be successful. This study investigates the applications of machine learning and highlights fundamental research challenges and directions associated with machine learning in ICF.

1. Introduction

Artificial intelligence (AI) is rapidly becoming one of the most important technologies of our era. In recent years, machine learning [1], and particularly deep learning [2], has enabled computers to acquire knowledge by being trained on large amounts of data rather than being programmed with deterministic algorithms. Machine learning methods are being applied to image sorting, classification, self-driving vehicles, speech recognition, and other tasks previously performed by humans, and they are having a profound impact. Machine learning has been applied to many classes of problems, but it is not always the optimal solution. As machine learning applications continue to expand, particularly in the context of complex scientific problems, we need to understand the capabilities and limitations of machine learning methodologies in order to identify problems that can be successfully addressed using machine learning and to understand what machine learning can and cannot do.
This paper is organized as follows. The capabilities and limitations of machine learning are presented in Section 2. Section 3 describes the physical system of inertial confinement fusion (ICF). The uniqueness, challenges, and opportunities for applying machine learning to ICF problems are discussed. Section 4 presents the tasks to which machine learning can be successfully applied. A framework of physics-guided deep learning, some successful examples, and recent progress are given in Section 5. The conclusions are presented in Section 6.

2. Machine Learning and Limitations

Numerous studies have shown that machine learning methods can be successfully applied to many problems, for example, pattern recognition, image classification, cancer diagnosis, learning a function that maps well-defined inputs to outputs, systems with large digital datasets that contain input–output pairs, and systems that provide clear feedback with definable goals and metrics. Machine learning methods are particularly effective in handling problems that (1) do not have long chains of logic or reasoning that depend on diverse background knowledge or “common sense”; (2) do not need a detailed explanation of how a decision was made; (3) have a high degree of tolerance for errors; and (4) have no need for provably correct or optimal solutions. Recently, rapid advances in machine learning have also been made on complex problems, such as robotics tasks, real-time correction in three-dimensional printing, drug discovery, aircraft design, self-driving vehicles, and even symbolic regression [3].
Unique challenges remain for systems whose mapping function changes rapidly over time and for tasks that require specialized dexterity, physical skills, or mobility. Not all problems are solvable using machine learning methodologies. As stated by Andrew Ng, “if a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future” [4]. Otherwise, machine learning methodologies may not be as successful as hoped.
Although the advantages of machine learning are enormous, there are some inherent limitations of the methodologies that cannot be addressed by using more data, more computing power, or more resources. Firstly, the inherent limitations come from the foundation of machine learning methodologies, i.e., probability and statistics. Reasoning is inherently limited and cannot be achieved within the framework of machine learning. Machine learning methodologies encode correlations but not causation or ontological relationships. For example, they can learn that rain clouds and rain tend to occur together, but not that rain clouds cause rain. Symbolic regression or planning is still a core challenge for both physics and AI, although there has recently been significant progress in physics-inspired machine learning. Secondly, machine learning methods are stochastic, rather than deterministic. No matter how many inputs are given and how much computing power is available, machine learning methods cannot understand Newton’s second law, Einstein’s theory of relativity, or the second law of thermodynamics. Simply speaking, physical constraints are not incorporated into the framework of machine learning methodologies or algorithms.
Therefore, machine learning methods alone are not always the best solution to a problem. Applying machine learning to a task that requires a “thinking” process can lead to a poor outcome, or to a solution that does not fully benefit from machine learning. For scientific problems such as ICF, physics-guided deep learning is typically utilized [5,6].
Deep learning is a subset of machine learning that classifies input data based on a multi-step process of learning from prior examples. It makes use of advanced “neural networks” [1,7] that proactively discover new patterns and become more accurate over time. Although traditional machine learning techniques are widely used in industry, true deep learning methods are only now being adopted in certain fields of research [8]. In order to maintain high fidelity, deep learning methods require, in addition to substantial computational power, not only large amounts of hand-crafted, structured, and high-quality training data but also a new mindset that embraces a flexible way of thinking about how to solve a problem.
Achieving AI capabilities requires developing cognitive computing algorithms that enable the extraction of information from unstructured data by sorting concepts and relationships into a knowledge base. This can be thought of as a kind of biological exaptation, where a physiological structure becomes relevant for a function it was not originally adapted or selected for. Figure 1 shows the domains and relationships between artificial intelligence, machine learning, neural networks, and deep learning.
Existing deep learning models may or may not differentiate between causation and correlation, and they may not accurately make open-ended inferences based on real-world knowledge. Thus, complementary tools, in addition to the machine/deep learning algorithms, are required. The first step in building a good machine learning model is therefore to combine physics knowledge [9], human analysis [10], and deep learning algorithms [11]. The scientific problems in inertial confinement fusion capsules and high-energy-density physics partially fall into this category.
Research shows that deep learning models do not perform well on problems where the data are limited and there is no mechanism for learning abstractions through explicit, verbal definitions. Deep learning models can fail if the test data differ significantly from the training data. Additionally, deep learning models do not perform well when dealing with data that have complex hierarchical structures. Consequently, using machine learning methods in areas with considerable noise may well lead to dangerous outcomes. In order to thoroughly understand the capabilities of machine learning and effectively apply it to scientific research and industrial advancement, we summarize the present status of machine learning in Table 1.

3. Inertial Confinement Fusion

Inertial confinement fusion (ICF) capsules represent a complex system that initiates nuclear fusion reactions by compressing and heating targets (capsules) filled with thermonuclear fuel. The targets are small spherical pellets, about the size of a pinhead, that typically contain a mixture of about 150–200 micrograms of deuterium and tritium [12]. Successful simulations of the dynamic system, from ablation to implosion, ignition, and explosion, require long chains of logic and planning and rely heavily on physics and human analysis. The dynamic process in ICF capsules is multi-dimensional, multi-scale, and deterministic with stochasticity [13]. Advances in fusion science and engineering depend on complex simulations, rigorous physics analysis, innovative experiment designs, and new device developments. Simulations of ICF capsules are particularly challenging due to the coupled physics phenomena and the vast range of length and time scales involved. The multi-scale physics modeling results in impractical requirements for computational power and capability, inspiring the development of reduced models to make applications more practical, although the reduced models still face the challenges of intensive computation and numerical optimization, as well as uncertainty quantification.
Many existing and widely used machine learning methods [14] can be directly applied to model reduction in ICF problems [5,6]. The specific method to be used depends on the type of model and the applicability of the reduced model. For models that are approximately static, flexible regression methods, such as artificial neural networks [1] and Gaussian process regression [15], can be readily applied. Gaussian process regression is well suited to lower-dimensional problems. For high-dimensional problems, it is advantageous to extract a reduced set of features from the input and output spaces, which can be accomplished using principal component analysis [16], autoencoders, or convolutional neural networks [17,18,19,20]. The flexibility of these methods enables the fitting of applicable data with varying levels of accuracy. In addition, hyperparameter-tuning approaches [21,22,23] can be used to optimize the balance between model accuracy and complexity. Approaches used to identify dynamical models, e.g., linear-state-space-system identification methods [24,25] and recurrent neural networks [26,27,28], can be used to develop reduced models of dynamic systems.
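As a concrete illustration of this surrogate-modeling workflow, the minimal Python sketch below compresses a high-dimensional simulation output with principal component analysis and fits a Gaussian process mapping design inputs to the reduced features. The data are synthetic stand-ins generated on the fly; the ensemble size, input dimension, and latent dimension are illustrative choices, not quantities taken from any ICF study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Synthetic stand-in for an ensemble of reduced (e.g., 1D) simulations:
# each row of X is a design vector (pulse/target parameters), and each
# row of Y is a high-dimensional simulated output profile.
rng = np.random.default_rng(0)
n_designs, n_inputs, n_outputs = 200, 4, 500
X = rng.uniform(-1.0, 1.0, size=(n_designs, n_inputs))
mixing = rng.normal(size=(n_inputs, n_outputs))
Y = np.tanh(X) @ mixing + 0.01 * rng.normal(size=(n_designs, n_outputs))

# Step 1: compress the output space to a handful of principal components.
pca = PCA(n_components=5)
Z = pca.fit_transform(Y)                       # shape (n_designs, 5)

# Step 2: Gaussian-process surrogate from design inputs to reduced features.
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(n_inputs))
gp = GaussianProcessRegressor(kernel=kernel)
gp.fit(X, Z)

# Step 3: predict a new design and map back to the full output space.
x_new = rng.uniform(-1.0, 1.0, size=(1, n_inputs))
y_new = pca.inverse_transform(gp.predict(x_new))
print(y_new.shape)                             # (1, 500)
```

The same pattern applies if the autoencoder or convolutional feature extractors mentioned above are substituted for the PCA step.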
There are complicating factors in ICF ignition capsules [29,30]. One unique challenge is that the experimental database available for training is very limited. Additionally, the nuclear performance (i.e., the neutron yield) of ICF capsules can be quite sensitive to multiple input parameters, as observed in NIF ignition experiments [31,32]. Existing machine learning models are trained on a mix of simulations and experimental data; however, the simulations are not yet predictive. Thus, the test data could be very different from the data used for training. These challenges limit the quality and predictive capability of the machine/deep learning models applied to ICF capsules, which underscores the need to add physics analysis and human judgment to deep learning models.
Including physics knowledge and human analysis in deep learning models can significantly improve the models’ predictive capability. With physics analysis, we can take into account the undesired factors and decompose the ICF problems into two components: those that are solvable by machine learning and those that are less solvable by machine learning. The former can be directly addressed using deep learning methods and the latter can be addressed through a combination of first principles physics, reduced physics models, human analysis, and deep learning algorithms.
For example, the neutron yield of an ICF capsule is given by an integral over the volume of the hot fuel and time t:
$$Y_n = \int n_D\, n_T\, \langle \sigma v \rangle_{DT}\, dV\, dt, \qquad (1)$$
where $n_D$ and $n_T$ are, respectively, the number densities of deuterium (D) and tritium (T) in the hot fuel, $\langle \sigma v \rangle_{DT}$ is the DT nuclear reaction rate, and $V$ is the volume of the hot DT fuel (or hot spot). In terms of the pressure ($P_{hs}$), ion temperature ($T$), and mass ($M_{hs}$) of the hot spot and the mean thermonuclear (TN) burn width ($\bar{\tau}_b$) of the hot fuel, the averaged yield of the capsule becomes [33,34]:
$$Y_n \approx \frac{N_A}{8 k A_{DT}}\, \frac{\langle \sigma v \rangle}{T}\, \left( P_{hs}\, \tau_h \right) M_{hs} \left( \frac{\bar{\tau}_b}{\tau_h} \right), \qquad (2)$$
where $\tau_h$ is the hydrodynamic disassembly time, defined as the ratio of the hot-spot radius ($R_{hs}$) to the sound speed ($C_s$) in the hot spot; and $N_A$, $k$, and $A_{DT}$ are, respectively, Avogadro's number, the Boltzmann constant, and the average atomic mass of the DT mixture. The product $P_{hs}\tau_h$ of the ICF capsule is given by the expression [34,35,36,37]:
$$P_{hs}\, \tau_h = P_0 \left[ \frac{\gamma_p}{(3\gamma_p - 1)\, \epsilon_0}\, \eta_L\, \eta\, V_{imp}^2 \right]^{\frac{\gamma_p}{\gamma_p - 1}} \frac{R_{hs}}{C_s}\, \tilde{g}\, f_T, \qquad (3)$$
where $P_0$ and $\epsilon_0$ are, respectively, the pressure and specific internal energy of the pusher at the time of peak implosion velocity ($V_{imp}$), and $\gamma_p$ is the effective adiabatic index [34,36] of the pusher, which is nonlinearly related to the pusher adiabat [38,39]. $\eta_L$ is the conversion efficiency of the laser energy to the pusher kinetic energy, and $\eta$ is the conversion efficiency of the pusher kinetic energy to the internal energy of the total stagnated fuel mass. These two coefficients account for the energy losses from the system during the implosion process. $\tilde{g}$ is a shape factor, equal to 1 for spherical hot spots and less than 1 for non-spherical hot spots [39]. $C_s \approx 2.778 \times 10^{7} \sqrt{\gamma T(\mathrm{keV})}$ cm/s is the sound speed in the hot DT, $f_T$ ($\geq 1$) is the tamping factor, and $\gamma$ is the adiabatic index of the hot DT.
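To make the arithmetic in Equation (2) concrete, the short Python sketch below evaluates the yield for one hypothetical set of hot-spot parameters in SI units. All numerical inputs (pressure, temperature, mass, reactivity, burn width) are placeholder values chosen only to illustrate the scaling; they are not taken from any NIF or OMEGA shot.

```python
import numpy as np

# Constants (SI units); A_DT is the mean molar mass of an equimolar DT mix.
N_A = 6.022e23        # Avogadro's number [1/mol]
A_DT = 2.5e-3         # [kg/mol]
keV = 1.602e-16       # 1 keV expressed in joules

def yield_eq2(P_hs, T_keV, M_hs, sigma_v, tau_b):
    """Evaluate Eq. (2): Y ~ N_A/(8 k A_DT) * <sigma v>/T * (P_hs tau_h) * M_hs * (tau_b/tau_h).

    The hydrodynamic time tau_h cancels, so only the burn width tau_b enters.
    Units: P_hs [Pa], T_keV [keV], M_hs [kg], sigma_v [m^3/s], tau_b [s].
    """
    n_tot = P_hs / (T_keV * keV)     # total particle density from P = n kT  [1/m^3]
    n_ions = N_A * M_hs / A_DT       # number of D + T ions in the hot spot
    return 0.125 * n_tot * n_ions * sigma_v * tau_b

# Placeholder inputs, for illustration only.
Y = yield_eq2(P_hs=2.0e16,      # ~200 Gbar hot-spot pressure
              T_keV=4.5,        # hot-spot ion temperature
              M_hs=3.0e-8,      # 30 micrograms of hot fuel
              sigma_v=1.0e-23,  # placeholder DT reactivity (roughly the few-keV range)
              tau_b=1.0e-10)    # ~100 ps burn width
print(f"neutron yield ~ {Y:.2e}")
```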
Equations (2) and (3) show that the neutron yield of the capsule is sensitive not only to the peak implosion velocity but also to other implosion parameters such as the pusher adiabat, absorbed laser energy, tamping factor, hot-spot geometry (i.e., implosion symmetry), pusher symmetry, and pusher pressure at the time of peak implosion velocity.
The analytic nonlinear relationship (3), which agrees well with the NIF experimental data, was derived from the minimum implosion energy principle [36]. Due to the small size of the NIF experimental dataset, it is impossible to obtain this analytical nonlinear representation from any machine/deep learning model. The representation has to come from physics principles and analysis because a machine learning model trained on large simulation datasets cannot compensate for missing physics. In fact, thousands of simulations conducted prior to the NIF experiments produced correlations between the hot-spot pressure $P_{hs}$ and the peak implosion velocity $V_{imp}$ [40] that differed significantly from the correlations shown in the experimental data [36].
Although machine learning methods are not able to produce an analytic, integrated-physics representation such as Equation (3), they can have a great impact on ICF capsule design and on the optimization of design parameters under the guidance of physics relationships and causation. In this sense, machine learning methods present some unique opportunities for research and development in the areas of high-energy-density physics [5,6].

4. Tasks Good for Machine Learning

Machine learning methods can be used to explore the sensitivity of ICF outputs to design parameters [41,42] and aid in the design and understanding of ICF implosions by integrating simulation and experimental data into a common framework. In particular, with enhanced physics understanding and an increased number of experiments on NIF, deep learning methodologies may be able to reveal general correlations among variables and bridge the gap between measured and simulated data for fusion ignition.
Machine learning methods can be very useful in optimizing the implosion symmetry of capsules [43], the pusher mass/thickness, and the pusher materials with respect to the implosion energy and hot-spot pressure in multi-variable and multi-dimensional environments. The pusher adiabat plays a crucial role in the energy partition between the pusher and the hot DT fuel during the implosion [35,37,44]; for example, the implosion energy delivered to the hot spot of a capsule with a low-adiabat pusher can be as much as twice that delivered to the hot spot of a capsule with a high-adiabat pusher. More importantly, the adiabat of the pusher at the time of peak implosion velocity depends on the level of preheating and the degree of mixing between the ablator material and the cold DT fuel. So, optimizing the laser-pulse shape and hohlraum energy coupling with respect to preheating, the pusher adiabat, and ablation-front instability are other important tasks that machine learning methods can perform well. In addition, machine learning methods are capable of performing well in the numerical optimization and uncertainty quantification of any new design.
Deep learning methods enable the extraction of powerful models from experimental data if a large dataset exists. By performing advanced data analytics, new and hidden structures within the data can be extracted and used to develop an accurate modeling framework. Together with physics principles and knowledge, this approach can lead to the discovery of new physics through the direct use of data to verify and validate analytic models that generate fundamental physics. In this way, parameterized representations are uncovered that not only minimize the mismatch between theory and data but also potentially reveal hidden physics at play within integrated multi-physics and engineering systems.
Deep learning can also provide data-enabled enhancement [45,46]. For example, the new deep learning cognitive simulation model for ICF, recently developed at the Lawrence Livermore National Laboratory (LLNL), combines simulation and experimental data for modeling ICF experiments, resulting in more accurate predictions of NIF shots [47]. In this approach, a neural network is first trained on a variety of simulations to teach it the basics of ICF and the different measurements involved. Then, a portion of the neural network is retrained on the NIF experimental data, allowing it to adjust its performance predictions. Cognitive deep learning can be used to enhance theoretical models using data, or experimental data acquisition can be enhanced using theories and models. Similarly, data from empirical models can be used to enrich theoretical computational models.

5. Physics-Guided Deep Learning

Although machine/deep learning methods have demonstrated great success in some predictive modeling, when applied to surrogate modeling, they are often not robust, as they require large amounts of data and inadequately capture parameter sensitivities. In recent years, physics-guided machine learning algorithms [9], together with human analysis and self-consistent cognitive learning models, have achieved significant success in ICF capsule design, leading to robust and self-consistent surrogate learning models for complex ICF applications. Figure 2 displays a framework for physics-guided deep learning algorithms. In the framework, physics knowledge and the laws of nature are incorporated into the mapping functions with variable weights and are used to guide the selection of the model architecture, activation functions, loss functions, etc. The model training is driven by physics.
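One way physics guidance enters the framework of Figure 2 is through the loss function. The sketch below adds a penalty whenever the network's predictions violate a known physical relation; here the "physics" is an entirely generic, assumed monotonicity constraint on synthetic data, and the network size and penalty weight are illustrative choices rather than the models used in the cited ICF work.

```python
import torch
import torch.nn as nn

# Toy data: noisy targets that follow a monotonically increasing trend,
# standing in for a quantity with a known physical relation to the input.
torch.manual_seed(0)
x_data = torch.linspace(0.0, 1.0, 200).unsqueeze(1)
y_data = x_data**2 + 0.05 * torch.randn_like(x_data)
x = x_data.clone().requires_grad_(True)

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
mse = nn.MSELoss()
lambda_phys = 1.0   # weight of the physics penalty (illustrative choice)

for epoch in range(500):
    optimizer.zero_grad()
    y_pred = model(x)
    loss_data = mse(y_pred, y_data)                  # data-fit term
    # Physics-guided term: penalize negative slopes dy/dx, i.e.,
    # violations of the assumed monotonic input-output relation.
    dydx = torch.autograd.grad(y_pred.sum(), x, create_graph=True)[0]
    loss_phys = torch.relu(-dydx).mean()
    loss = loss_data + lambda_phys * loss_phys
    loss.backward()
    optimizer.step()
```

The same structure accommodates conservation laws, known asymptotic limits, or analytic relations such as Equation (3) in place of the monotonicity penalty.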
One successful example is the yield-tripling campaign conducted on OMEGA [48]. Machine learning methods, experimental feedback, and human analysis tripled the fusion yield of the direct-drive capsules at OMEGA. Researchers at the University of Rochester ran simple one-dimensional models hundreds of thousands of times, randomly varying the pulse shape and target structure in each run and then picking the best designs for subsequent rounds of target shots. They compared the simulated results with the actual results of the shots and repeated this process. The physics-guided, machine-learning-designed target produced a threefold higher yield [48].
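The loop just described, running many cheap 1D models with randomly perturbed pulse and target parameters, keeping the best designs, and repeating with experimental feedback, can be sketched in a few lines. The `run_1d_model` function below is a hypothetical placeholder for a fast reduced model or surrogate; it is not the Rochester group's actual code, and all population sizes and step sizes are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_1d_model(design):
    """Hypothetical stand-in for a fast 1D implosion model.

    `design` is a vector of pulse-shape / target parameters; the return value
    plays the role of a predicted yield. A real reduced model or trained
    surrogate would be used in practice.
    """
    return -np.sum((design - 0.3) ** 2) + 0.01 * rng.normal()

n_params, population, n_keep, n_rounds = 6, 1000, 20, 5
designs = rng.uniform(0.0, 1.0, size=(population, n_params))

for round_idx in range(n_rounds):
    scores = np.array([run_1d_model(d) for d in designs])
    best = designs[np.argsort(scores)[-n_keep:]]      # keep the top designs
    # Next generation: perturb the best designs (random local search).
    designs = np.clip(
        np.repeat(best, population // n_keep, axis=0)
        + 0.05 * rng.normal(size=(population, n_params)),
        0.0, 1.0,
    )
    # In the experimental campaign, selected designs would be shot here and
    # the measured results fed back to recalibrate the model.

print("best simulated design:", best[-1])
```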
A second successful example is the recent series of high-yield NIF ignition capsules [31,32,49]. The NIF machine learning team at LLNL developed a cognitive simulation methodology for combining simulation and experimental data into a common, predictive model. This method leveraged a machine learning technique called “transfer learning”, which is the process of taking a model trained to solve basic tasks and partially retraining it on a sparse dataset to solve a different but related task. In the context of ICF ignition design, machine learning models are trained on large simulation datasets for general fusion burn and partially retrained on experimental data, producing models that are far more accurate than simulations alone. Cognitive machine learning models that combined simulations, experimental data, and human analysis reduced NIF shot prediction errors from as high as 110 percent to less than 7 percent [31,32,45,47]. NIF achieved a then-record yield of 1.37 MJ with shot N210808 [50].
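A minimal sketch of the transfer-learning step described here: a small network is pretrained on plentiful "simulation" data, its feature layers are then frozen, and only the final layer is retrained on a sparse "experimental" set. Both datasets, the systematic offset between them, and the network size are invented for illustration; the LLNL models are far larger and are trained on real ICF simulations and shots.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stage 1: pretrain on abundant "simulation" data (synthetic stand-in).
x_sim = torch.rand(5000, 3)
y_sim = (x_sim**2).sum(dim=1, keepdim=True)

model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(1000):
    opt.zero_grad()
    loss_fn(model(x_sim), y_sim).backward()
    opt.step()

# Stage 2: freeze the feature layers and retrain only the last layer on a
# sparse "experimental" dataset that is systematically offset from the simulations.
x_exp = torch.rand(20, 3)
y_exp = (x_exp**2).sum(dim=1, keepdim=True) * 1.3 + 0.1

for p in model[:-1].parameters():
    p.requires_grad = False
opt_exp = torch.optim.Adam(model[-1].parameters(), lr=1e-3)
for _ in range(500):
    opt_exp.zero_grad()
    loss_fn(model(x_exp), y_exp).backward()
    opt_exp.step()
```

Retraining only the final layer lets the twenty "experimental" points shift the model toward the observed data without destroying the structure learned from the abundant simulations.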
In a recent study [51], we applied machine learning methods to NIF ICF ignition capsules and performed a comparative assessment of the neutron yields and hot-spot temperatures of the ignition capsules using six popular supervised machine learning regression methods: K-nearest-neighbor regression [52], polynomial regression [53], support vector regression [54], sparse heteroscedastic Gaussian process regression [15], deep neural network regression [14], and deep jointly informed neural network regression [55]. Predictions were obtained and compared, along with the observed experimental yield data. All of the supervised methods took the hot-spot temperature $T_{ion}$ as input and made predictions based on the data. When the machine learning methods were first applied directly to the entire NIF dataset, only a very weak correlation between the neutron yield and the hot-spot temperature was observed, which was inconsistent with physical intuition. We then incorporated physics analysis into the model and divided the data into two groups according to the laser-pulse shape (high foot and low foot). A strong correlation between the yield and hot-spot temperature then emerged within each group, and all six methods generated reasonable and consistent predictions by leveraging the training data from these two groups [51].
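The outline of that comparison can be mimicked with standard scikit-learn regressors: fit several models of yield versus hot-spot temperature separately for each pulse-shape group. The synthetic "high-foot" and "low-foot" arrays below are placeholders, not NIF data, and only three of the six methods from [51] are shown.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def make_group(slope, n=30):
    """Synthetic (T_ion, log10-yield) pairs with a group-specific trend."""
    T = rng.uniform(3.5, 5.5, size=(n, 1))                    # hot-spot temperature [keV]
    logY = 14.0 + slope * (T[:, 0] - 3.5) + 0.1 * rng.normal(size=n)
    return T, logY

groups = {"low_foot": make_group(slope=0.8), "high_foot": make_group(slope=1.2)}
models = {"kNN": KNeighborsRegressor(n_neighbors=5),
          "SVR": SVR(kernel="rbf", C=10.0),
          "GP":  GaussianProcessRegressor()}

for gname, (T, logY) in groups.items():
    for mname, model in models.items():
        model.fit(T, logY)                                    # fit within one pulse-shape group
        pred = model.predict(np.array([[4.5]]))[0]            # predict at T_ion = 4.5 keV
        print(f"{gname:9s} {mname:4s} predicted log10(yield) at 4.5 keV: {pred:.2f}")
```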
We found that the machine learning predictions of all methods (except the Gaussian algorithm) for the high-yield capsules in the training data were consistently lower than the actual measured yields. This happens to be an inherent feature of machine learning, which often results in underestimation of the high values and overestimation of the low values, as the machine learning algorithm is drawn to the middle where most of the data lie. The highest-yield data point cannot be overestimated because the algorithm has never seen anything higher in its training set. So, human- and physics-guided analyses need to be incorporated into machine learning algorithms in order to address this limitation.
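This pull toward the middle is easy to reproduce with any local-averaging regressor. In the toy sketch below, a k-nearest-neighbor fit on synthetic data necessarily predicts a value below the largest training target at that point, because the prediction averages over lower-valued neighbors.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(40, 1))
y = 10.0 * x[:, 0] + rng.normal(scale=0.5, size=40)     # synthetic "yield"

knn = KNeighborsRegressor(n_neighbors=5).fit(x, y)
i_max = int(np.argmax(y))
pred_at_max = knn.predict(x[i_max:i_max + 1])[0]

print(f"highest training value: {y[i_max]:.2f}")
print(f"kNN prediction there:   {pred_at_max:.2f}")     # falls below the observed maximum
```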
The inability of most machine learning methods to accurately predict the high-yield NIF data also reflects the reality of NIF experiments, where capsule performance is extremely sensitive to various design perturbations, especially when operating with marginal laser drive energy, so high-yield performance is hard to replicate. The high-yield shots are characterized by a relatively high peak implosion velocity and a thin shell, which bring these capsules close to the “velocity cliff” and increase the risk of shell burn-through, leading to excessive mixing between the pusher and cold fuel. All of these factors make the capsules’ performance hard to reproduce.
As the NIF database continues to grow and the understanding of high-energy-density physics (HEDP) and fusion science advances, machine learning models, together with human and physics knowledge, are expected to play an increasingly important role in future capsule design, design optimization, and the development of new platforms (e.g., polar direct-drive and indirect-drive hot-spot designs, pushered single- and double-shell designs) for burning plasma and for conducting HEDP experiments. Combining well-simulated data and experimental data into one dynamic model can significantly improve the predictive capability of deep learning models. A summary of the present status of machine learning, as well as future directions, needs, and applications in ICF research, is presented in Table 2.

6. Conclusions and Future Work

Applying physics knowledge and human analysis to deep learning models for ICF problems can significantly improve the predictive capabilities of these models in designing experiments for ICF and HEDP. The predictions of learning models strongly depend on the quality and quantity of the training data; if the training data are insufficient, the deep learning predictions will be poor. A combination of transfer learning, physics, and human analysis may be able to compensate for the limitations of small experimental datasets in ICF. In this work, we summarized the present status of machine learning methods, as well as their advantages, inherent limitations, and productive applications in inertial confinement fusion.
For the success of machine learning in ICF, we propose the following areas for direct application: (1) Advanced neutron image analysis and reconstruction algorithms. The success of machine learning in image analysis has been demonstrated in many fields, and applying it to neutron image analysis in ICF could help to determine the correlations between inputs and outputs and lead to significant improvements over current techniques, for example, through autoencoded features, 3D reconstruction from 2D projections, and advanced characterization of the size and location of sources. (2) Optimization. Designing targets for burning plasma is a multi-scale and multi-dimensional task. Applying machine learning algorithms to study design sensitivities to high-dimensional parameters and to optimize design parameters, including the laser-pulse shape, ablator material, thickness, surface perturbations, and fuel mass and size, can speed up the design process and improve the designs. (3) Uncertainty quantification. Uncertainty quantification plays a pivotal role in reducing the impact of uncertainties during both optimization and decision making. In fusion science, most decisions are made based on collected observations and uncertain domain knowledge, and quantifying uncertainty is an effective way to evaluate the reliability and efficacy of a decision and to solve real design problems. Bayesian approximation and ensemble learning techniques used in deep learning have shown success in a variety of problems; applying these methods to ICF data could greatly enhance both the physics understanding of fusion science and the reliability of capsule designs for achieving burning plasma.
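As one concrete route to the uncertainty quantification in item (3), the sketch below trains a small ensemble of independently initialized networks and uses the spread of their predictions as an uncertainty estimate. This is a generic deep-ensemble recipe applied to synthetic data, offered as an illustration under stated assumptions rather than an ICF-validated workflow.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.rand(200, 2)
y = (x[:, :1] * x[:, 1:]).sqrt() + 0.05 * torch.randn(200, 1)   # synthetic target

def make_net():
    return nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

# Train an ensemble of independently initialized networks on the same data.
ensemble = []
for _ in range(5):
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        nn.functional.mse_loss(net(x), y).backward()
        opt.step()
    ensemble.append(net)

# Uncertainty estimate: mean and spread of the ensemble predictions at a query point.
x_query = torch.tensor([[0.5, 0.5]])
with torch.no_grad():
    preds = torch.stack([net(x_query) for net in ensemble])
print("mean prediction:", preds.mean().item())
print("ensemble std   :", preds.std().item())
```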
Finally, it is worthwhile pointing out that although AI is rapidly becoming one of the most important technologies and most powerful tools of our era, AI machine learning is not the solution to all problems because of its inherent limitations. Blindly applying machine learning methods to problems beyond their applicability can lead to poor, and sometimes dangerous, conclusions. Considering the limitations of machine learning methods, combining machine learning algorithms with physics knowledge and human analysis can provide a powerful tool, yielding viable results for the future of high-energy-density physics and inertial confinement fusion target designs. However, certain aspects of human intelligence and knowledge can never be replaced by AI machine learning.

Author Contributions

All authors contributed equally to the development of the theory, methodology, validation, and formal analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the LANL ICF program under the auspices of the U.S. Department of Energy by the Los Alamos National Laboratory under Contract No. 89233218CNA000001.

Institutional Review Board Statement

The document has been reviewed by the publication office of LANL and the document number is LA-UR-22-24244.

Informed Consent Statement

Not applicable.

Data Availability Statement

All of the data used are available in the public domain.

Acknowledgments

The authors would like to thank the anonymous referees for their valuable and constructive comments that led to notable improvements to our manuscript. This work was conducted with the support of the LANL ICF program under the auspices of the U.S. Department of Energy by the Los Alamos National Laboratory under Contract No. 89233218CNA000001.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mitchell, T. Machine Learning; McGraw Hill: New York, NY, USA, 1997; ISBN 0-07-042807-7.
2. Bengio, Y.; LeCun, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444.
3. Udrescu, S.-M.; Tegmark, M. AI Feynman: A physics-inspired method for symbolic regression. Sci. Adv. 2020, 6, eaay2531.
4. Ng, A. How Artificial Intelligence Is Transforming the Industry. 2021. Available online: https://www.bosch.com/stories/artificial-intelligence-in-industry/ (accessed on 29 July 2022).
5. Hatfield, P.W.; Gaffney, J.A.; Anderson, G.J.; Ali, S.; Antonelli, L.; Başeğmez du Pree, S.; Citrin, J.; Fajardo, M.; Knapp, P.; Kettle, B.; et al. The data-driven future of high-energy-density physics. Nature 2021, 593, 351–361.
6. Humphreys, D.; Kupresanin, A.; Boyer, M.D.; Canik, J.; Chang, C.S.; Cyr, E.C.; Granetz, R.; Hittinger, J.; Kolemen, E.; Lawrence, E.; et al. Advancing Fusion with Machine Learning Research Needs Workshop Report. J. Fusion Energy 2020, 39, 123–155.
7. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558.
8. Kim, E.J.; Brunner, R.J. Star–galaxy classification using deep convolutional neural networks. MNRAS 2017, 464, 4463–4475.
9. Iten, R.; Metger, T.; Wilming, H.; del Rio, L.; Renner, R. Discovering Physical Concepts with Neural Networks. Phys. Rev. Lett. 2020, 124, 010508.
10. Keshavan, A.; Yeatman, J.D.; Rokem, A. Combining Citizen Science and Deep Learning to Amplify Expertise in Neuroimaging. Front. Neuroinform. 2019, 13, 29.
11. Beck, M.R.; Scarlata, C.; Fortson, L.F.; Lintott, C.J.; Simmons, B.D.; Galloway, M.A.; Willett, K.W.; Dickinson, H.; Masters, K.L.; Marshall, P.J.; et al. Integrating human and machine intelligence in galaxy morphology classification tasks. MNRAS 2018, 476, 5516–5534.
12. Atzeni, S.; Meyer-ter-Vehn, J. The Physics of Inertial Fusion: Beam Plasma Interaction, Hydrodynamics, Hot Dense Matter; International Series of Monographs on Physics; Clarendon Press: Oxford, UK, 2004.
13. Lindl, J. Inertial Confinement Fusion: The Quest for Ignition and Energy Gain Using Indirect Drive; AIP Press: College Park, MD, USA, 1998.
14. Mehta, P.; Bukov, M.; Wang, C.-H.; Day, A.G.R.; Richardson, C.; Fisher, C.K.; Schwab, D.J. A high-bias, low-variance introduction to Machine Learning for physicists. Phys. Rep. 2019, 810, 1–124.
15. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006.
16. Pearson, K. On Lines and Planes of Closest Fit to Systems of Points in Space. Philos. Mag. 1901, 2, 559–572.
17. Atwell, J.A.; King, B.B. Proper orthogonal decomposition for reduced basis feedback controllers for parabolic equations. Math. Comput. Model. 2001, 33, 1–19.
18. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507.
19. Kates-Harbeck, J.; Svyatkovskiy, A.; Tang, W. Predicting disruptive instabilities in controlled fusion plasmas through deep learning. Nature 2019, 568, 526.
20. Lee, K.; Carlberg, K. Model Reduction of Dynamical Systems on Nonlinear Manifolds Using Deep Convolutional Autoencoders. arXiv 2018, arXiv:1812.08373.
21. Alibrahim, H.; Ludwig, S.A. Hyperparameter Optimization: Comparing Genetic Algorithm against Grid Search and Bayesian Optimization. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, 28 June–1 July 2021.
22. Andrieu, C.; De Freitas, N.; Doucet, A.; Jordan, M.I. An Introduction to MCMC for Machine Learning. Mach. Learn. 2003, 50, 5–43.
23. Feurer, M.; Hutter, F. Automated Machine Learning: Methods, Systems, Challenges; The Springer Series on Challenges in Machine Learning; Springer: Berlin/Heidelberg, Germany, 2019.
24. Moonen, M.; Moor, B.D.; Vandenberghe, L.; Vandewalle, J. On- and Off-Line Identification of Linear State Space Models. Int. J. Control 1989, 49, 219–232.
25. Viberg, M. Subspace-based Methods for the Identification of Linear Time-invariant Systems. Automatica 1995, 31, 1835–1851.
26. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Mohamed, N.A.; Arshad, H. State-of-the-art in artificial neural network applications: A survey. Heliyon 2018, 4, e00938.
27. Dupond, S. A thorough review on the current advance of neural network structures. Annu. Rev. Control 2019, 14, 200–230.
28. Tealab, A. Time series forecasting using artificial neural networks methodologies: A systematic review. Future Comput. Inform. J. 2018, 3, 334–340.
29. Gaffney, J.A.; Brandon, S.T.; Humbird, K.D.; Kruse, M.K.G.; Nora, R.C.; Peterson, J.L.; Spears, B.K. Making inertial confinement fusion models more predictive. Phys. Plasmas 2019, 26, 082704.
30. Spears, B.K.; Brase, J.; Bremer, P.-T.; Chen, B.; Field, J.; Gaffney, J.; Kruse, M.; Langer, S.; Lewis, K.; Nora, R.; et al. Deep learning: A guide for practitioners in the physical sciences. Phys. Plasmas 2018, 25, 080901.
31. Kritcher, A.L.; Young, C.V.; Robey, H.F.; Weber, C.R.; Zylstra, A.B.; Hurricane, O.A.; Callahan, D.A.; Ralph, J.E.; Ross, J.S.; Baker, K.L.; et al. Design of inertial fusion implosions reaching the burning plasma regime. Nat. Phys. 2022, 18, 251–258.
32. Zylstra, A.B.; Hurricane, O.A.; Callahan, D.A.; Kritcher, A.; Ralph, J.E.; Robey, H.F.; Ross, J.S.; Young, C.V.; Baker, K.L.; Casey, D.T.; et al. Burning plasma achieved in inertial fusion. Nature 2022, 601, 542–548.
33. Cheng, B.; Kwan, T.J.T.; Wang, Y.-M.; Merrill, F.E.; Cerjan, C.J.; Batha, S.H. Analysis of NIF experiments with the minimal energy implosion model. Phys. Plasmas 2015, 22, 082704.
34. Cheng, B.; Kwan, T.J.T.; Wang, Y.-M.; Batha, S.H. On thermonuclear ignition criterion at the National Ignition Facility. Phys. Plasmas 2014, 21, 102707.
35. Cheng, B.; Bradley, P.A.; Finnagan, S.A.; Thomas, C.A. Fundamental factors affecting thermonuclear ignition. Nucl. Fusion 2020, 61, 096010.
36. Cheng, B.; Kwan, T.J.T.; Wang, Y.-M.; Batha, S.H. Scaling laws for ignition at the National Ignition Facility from first principles. Phys. Rev. E 2013, 88, 041101.
37. Cheng, B.; Kwan, T.J.T.; Wang, Y.-M.; Yi, S.A.; Batha, S.H.; Wysocki, F.J. Ignition and pusher adiabat. Plasma Phys. Control. Fusion 2018, 60, 074011.
38. Cheng, B.; Kwan, T.J.T.; Wang, Y.-M.; Yi, S.A.; Batha, S.H.; Wysocki, F.J. Effects of preheat and mix on the fuel adiabat of an imploding capsule. Phys. Plasmas 2016, 23, 120702.
39. Cheng, B.; Kwan, T.J.T.; Yi, S.A.; Landen, O.L.; Wang, Y.-M.; Cerjan, C.J.; Batha, S.H.; Wysocki, F.J. Effects of asymmetry and hot-spot shape on ignition capsules. Phys. Rev. E 2018, 98, 023203.
40. Edwards, M.J.; Patel, P.K.; Lindl, J.D.; Atherton, L.J.; Glenzer, S.H.; Haan, S.W.; Kilkenny, J.D.; Landen, O.L.; Moses, E.I.; Nikroo, A.; et al. Progress towards ignition on the National Ignition Facility. Phys. Plasmas 2013, 20, 070501.
41. Nakhleh, J.B.; Fernández-Godino, M.G.; Grosskopf, M.J.; Wilson, B.M.; Kline, J.; Srinivasan, G. Exploring Sensitivity of ICF Outputs to Design Parameters in Experiments Using Machine Learning. IEEE Trans. Plasma Sci. 2021, 49, 2238–2246.
42. Vazirani, N.N.; Grosskopf, M.J.; Stark, D.J.; Bradley, P.A.; Haines, B.M.; Loomis, E.; England, S.L.; Scales, W.A. Coupling 1D xRAGE simulations with machine learning for graded inner shell design optimization in double shell capsules. Phys. Plasmas 2021, 28, 122709.
43. Peterson, J.L.; Humbird, K.D.; Field, J.E.; Brandon, S.T.; Langer, S.H.; Nora, R.C.; Spears, B.K.; Springer, P.T. Zonal flow generation in inertial confinement fusion implosions. Phys. Plasmas 2017, 24, 032702.
44. Melvin, J.; Lim, H.; Rana, V.; Cheng, B.; Glimm, J.; Sharp, D.H.; Wilson, D.C. Sensitivity of inertial confinement fusion hot spot properties to the deuterium-tritium fuel adiabat. Phys. Plasmas 2015, 22, 022708.
45. Vander Wal, M.D.; McClarren, R.G.; Humbird, K.D. Transfer learning of high-fidelity opacity spectra in autoencoders and surrogate models. arXiv 2022, arXiv:2203.00853.
46. Michoski, C.; Milosavljevic, M.; Oliver, T.; Hatch, D. Solving Irregular and Data-Enriched Differential Equations Using Deep Neural Networks. arXiv 2019, arXiv:1905.04351.
47. Humbird, K.D.; Peterson, J.L.; Salmonson, J.; Spears, B.K. Cognitive simulation models for inertial confinement fusion: Combining simulation and experimental data. Phys. Plasmas 2021, 28, 042709.
48. Gopalaswamy, V.; Betti, R.; Knauer, J.P.; Luciani, N.; Patel, D.; Woo, K.M.; Bose, A.; Igumenshchev, I.V.; Campbell, E.M.; Anderson, K.S.; et al. Tripled yield in direct-drive laser fusion through statistical modelling. Nature 2019, 565, 581–586.
49. Ross, J.S.; Ralph, J.E.; Zylstra, A.B.; Kritcher, A.L.; Robey, H.F.; Young, C.V.; Hurricane, O.A.; Callahan, D.A.; Baker, K.L.; Casey, D.T.; et al. Experiments conducted in the burning plasma regime with inertial fusion implosions. arXiv 2021, arXiv:2111.04640.
50. Abu-Shawareb, H.; Acree, R.; Adams, P.; Adams, J.; Addis, B.; Aden, R.; Adrian, P.; Afeyan, B.B.; Aggleton, M.; Indirect Drive ICF Collaboration; et al. Lawson criterion for ignition exceeded in an inertial fusion experiment. Phys. Rev. Lett. 2022, 129, 075001.
51. Hsu, A.; Cheng, B.; Bradley, P.A. Analysis of NIF scaling using physics informed machine learning. Phys. Plasmas 2020, 27, 012703.
52. Kramer, O. K-Nearest Neighbors. In Dimensionality Reduction with Unsupervised Nearest Neighbors; Springer: Berlin/Heidelberg, Germany, 2013; pp. 13–23.
53. Liu, W.; Principe, J.C.; Haykin, S.S. Kernel Adaptive Filtering: A Comprehensive Introduction, 1st ed.; Wiley: Hoboken, NJ, USA, 2010.
54. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 2000.
55. Humbird, K.D.; Peterson, J.L.; McClarren, R.G. Deep Neural Network Initialization with Decision Trees. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1286.
Figure 1. A scheme of artificial intelligence, machine learning, deep learning, and physics-guided deep learning, where ANN, CNN, RNN, and DL, respectively, represent artificial neural networks, convolutional neural networks, recurrent neural networks, and deep learning.
Figure 2. A framework for physics-guided deep learning models.
Table 1. Summary of the present status of machine learning.
Successful areas: Pattern recognition, image classification, cancer diagnosis, and systems with the following features: (a) large digital datasets (inputs, outputs), clear goals, and metrics; (b) not dominated by a long chain of logic and reasoning; (c) no requirement for diverse background knowledge or an explanation of the decision process; (d) high tolerance for errors and no requirement for provably correct or optimal solutions.
Inherent limitations: Unable to (a) achieve reasoning; (b) incorporate physics constraints within the framework of machine learning.
Deep learning features: (a) Input data processed through a multi-step learning process; (b) advanced neural networks; (c) able to discover new patterns, requires a new mindset, and can potentially distinguish between causation and correlation; (d) does not work well for problems with limited data or data with complex hierarchical structures, and has no mechanism for learning abstractions.
Specialized methods: (1) Flexible regression methods (artificial neural networks and Gaussian process regression) for static and low-dimensional systems; (2) principal component analysis, autoencoders, and convolutional neural networks for high-dimensional systems; (3) hyperparameter-tuning approaches for optimizing model accuracy and complexity; (4) linear-state-space system identification methods and recurrent neural networks for identifying dynamical models.
Desired tools: Combining physics knowledge with human analysis and deep learning algorithms.
Required for AI: Cognitive computing algorithms that enable the extraction of information from unstructured data by sorting concepts and relationships into a knowledge base.
Table 2. Applications of machine learning methodologies in inertial confinement fusion.
ICF systems: Limited data; requirement for long chains of logic and multi-scale, multi-dimensional physics; sensitivity to small perturbations; low error-tolerance level.
Required ML: Physics-informed and human analysis incorporated into deep learning and transfer learning algorithms.
Suitable problems: (1) Study of the sensitivity of outputs to design parameters; (2) integration of simulations and experimental data into a common framework; (3) exploration of general correlations among the variables buried in the experimental data and between the measured and simulated data; (4) optimization of implosion symmetry, pusher mass/thickness/materials, and laser-pulse shape; (5) advanced neutron image analysis and reconstruction.
Successful examples: (a) NIF high-yield Hybrid E series ignition target design and optimization guided by the LLNL transfer learning model; (b) OMEGA tripled-yield experiment driven by combining machine learning with human analysis and physics knowledge.
Future plans: (1) Optimizing energy-coupling coefficients and the implosion design parameter space (symmetry, pusher mass/thickness/materials, and laser-pulse shape); (2) minimizing hydrodynamic instabilities using an optimized spectrum of perturbations; (3) quantifying uncertainties for both the methods and the experimental data; (4) improving 3D neutron image reconstruction using 2D projections and autoencoded features; (5) combining physics knowledge, human analysis, data, and deep learning algorithms in each step of a design.