1. Introduction
The spectroscopic signatures of liquid-propellant rocket exhaust plumes serve as a high-fidelity diagnostic medium for engine condition monitoring [1,2]. For example, plume emission spectroscopy has emerged as the predominant technique for liquid-propellant rocket engine health monitoring, owing to its unique capability to detect metallic contamination signatures [3]. Metallic constituents from components undergoing degradation processes such as aging, ablation, or wear are transported into the exhaust plume, where the high-temperature environment (1800–3500 K) thermally excites these elements to produce element-specific atomic emission signatures detectable at parts-per-million (ppm) concentration levels [4,5]. This phenomenon enables precise identification of component-specific failure modes through spectral line intensity analysis of metals, establishing a non-invasive, real-time anomaly detection framework with millisecond-level temporal resolution for critical engine subsystems [6–9].
In related research, Arnold et al. at NASA's Ames Research Center (ARC) developed the Line-by-Line (LBL) program for atomic emission spectroscopy simulations, which the Stennis Space Center (SSC) later enhanced to better suit the high-temperature exhaust plume environment [10]. The program sums the contributions of individual atomic lines to generate a complete spectrum. In 1987, NASA's Marshall Space Flight Center (MSFC) analyzed failure reports from Space Shuttle Main Engine (SSME) test firings and found that anomalous plume spectral data had been consistently recorded during numerous ground test anomalies, providing early evidence of metallic emission signatures correlated with component degradation events [11,12]. In the same year, Cikanek et al. [12] at MSFC identified characteristic emission signatures of OH radicals, potassium (K), sodium (Na), and calcium hydroxide (CaOH) in SSME exhaust plumes, though the limited spectral resolution of first-generation Optical Multichannel Analyzers (OMAs) prevented definitive detection of metallic constituents such as iron and nickel [13].
Building upon these findings, NASA's MSFC and SSC jointly initiated the development of an Optical Plume Anomaly Detection (OPAD) system for the SSME, establishing the first dedicated optical diagnostics framework for liquid-propellant rocket engine health monitoring. The initiative implemented comprehensive spectral surveillance across 220–1500 nm during SSME hot-fire tests to capture both atomic metal emissions and the molecular band structures critical for early fault detection [14]. For example, Gardner et al. [15] conducted spectroscopic measurements on a subscale SSME test article, identifying iron and chromium emission lines indicative of baffle ablative erosion, while notably failing to detect copper signatures despite the component's copper matrix. Complementing this, Powers et al. [16] detected characteristic spectra of Fe, Co, Cr, and Ni in the exhaust plume, which were linked to wear on the SSME combustion chamber panel. To systematically characterize metallic atomic emission in rocket plumes, Tejwani et al. [13] at NASA developed a laboratory-scale bipropellant engine with integrated metal salt injection capabilities, enabling precise doping of aqueous metal nitrate solutions into the combustion chamber.
The advent of high-resolution OMAs with 0.05 nm spectral resolution across the UV–VIS–NIR range (200–1100 nm) enabled NASA to conduct hyperspectral characterization of rocket plumes, not only resolving metallic emission features but also achieving quantitative determination of nickel concentrations [17]. Subsequently, Benzing [18,19] applied neural network algorithms to rocket exhaust plume spectra, achieving diagnostic identification of engine component anomalies through spectral pattern recognition and machine learning. The SSC later integrated the OPAD system into the Engine Diagnostics Console (EDC) [20], correlating exhaust plume alloys with engine hardware components. The EDC system demonstrated its critical safety value during a January 1996 SSME hot-fire test at NASA SSC [20].
With the above understanding, a critical step in plume-spectra-based engine fault diagnosis is accurately quantifying metallic element concentrations from measured spectra. To address this challenge, this study introduces a quantification model based on a backpropagation (BP) neural network optimized by a genetic algorithm (GA), extending prior research [21]. The model linearizes the inherently nonlinear relationship between spectral intensity and metal concentration by incorporating a photon transmission factor, derived from radiative transfer theory and experimentally calibrated through atomic emission spectroscopy (AES) measurements. Using this calibrated factor, high-accuracy spectral simulations of atomic emission are performed to systematically construct a training dataset for the BP network. Finally, a series of experiments is conducted to validate the model's effectiveness.
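To make the GA–BP combination concrete, the sketch below evolves the weight vector of a small one-hidden-layer network with a genetic algorithm (selection, uniform crossover, Gaussian mutation) and then refines the best individual with plain backpropagation. Everything here is illustrative: the synthetic saturating intensity curve, the network size, and all hyperparameters are assumptions for the sketch, not the configuration used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic saturating intensity-vs-concentration data (illustrative only;
# mimics self-absorption, I ~ 1 - exp(-k*n)).
conc = np.linspace(0.1, 5.0, 64)[:, None]   # "concentration" (target)
inten = 1.0 - np.exp(-1.2 * conc)           # "measured intensity" (input)

H = 8          # hidden units
P = 3 * H + 1  # flat parameter count of a 1-H-1 tanh network

def unpack(w):
    return (w[:H].reshape(1, H), w[H:2*H],
            w[2*H:3*H].reshape(H, 1), w[3*H:])

def mse(w):
    w1, b1, w2, b2 = unpack(w)
    y = np.tanh(inten @ w1 + b1) @ w2 + b2
    return float(np.mean((y - conc) ** 2))

def bp_step(w, lr=0.05):
    """One backpropagation (gradient-descent) step on the MSE loss."""
    w1, b1, w2, b2 = unpack(w)
    h = np.tanh(inten @ w1 + b1)
    y = h @ w2 + b2
    dy = 2.0 * (y - conc) / len(conc)
    dw2, db2 = h.T @ dy, dy.sum(0)
    dz = (dy @ w2.T) * (1.0 - h ** 2)        # tanh derivative
    dw1, db1 = inten.T @ dz, dz.sum(0)
    return w - lr * np.concatenate([dw1.ravel(), db1, dw2.ravel(), db2])

# Genetic algorithm: evolve whole weight vectors, fitness = -MSE.
pop = rng.normal(0.0, 1.0, size=(40, P))
for _ in range(60):
    fits = np.array([mse(ind) for ind in pop])
    elite = pop[np.argsort(fits)[:10]]       # selection: keep best quarter
    kids = []
    while len(kids) < len(pop) - len(elite):
        a, b = elite[rng.integers(10, size=2)]
        mask = rng.random(P) < 0.5           # uniform crossover
        kids.append(np.where(mask, a, b) + rng.normal(0.0, 0.1, P))  # mutation
    pop = np.vstack([elite, kids])

best = pop[int(np.argmin([mse(ind) for ind in pop]))]

# Backpropagation fine-tuning from the GA-selected initialisation.
w = best.copy()
for _ in range(2000):
    w = bp_step(w)

print(f"GA best MSE: {mse(best):.4f} -> after BP refinement: {mse(w):.4f}")
```

The GA supplies a good initialisation in a multimodal weight space, while backpropagation performs the fine local descent; this division of labor is the usual motivation for GA-optimized BP networks.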
2. Atomic Emission Spectroscopy
During thermal excitation in high-temperature flames, metallic atoms undergo electronic transitions from ground states to excited states, followed by radiative decay that produces characteristic atomic emission spectra. In a system in local thermal equilibrium, the number densities of atoms in different states obey Boltzmann's law:

$$\frac{N_i}{N_0} = \frac{g_i}{g_0}\exp\!\left(-\frac{E_i}{kT}\right)$$

where T is the adiabatic temperature in the local volume element, N_i is the number density of the i-th excited state, and N_0 is the number density of the ground state. g_i and g_0 represent the degeneracies of the i-th excited state and the ground state, respectively, E_i is the excitation energy, and k is the Boltzmann constant.
The total number density N is:

$$N = \sum_i N_i = \frac{N_0}{g_0}\,Q$$

where Q = Σ_i g_i exp(−E_i/kT) is the partition function, which depends on the elemental properties (electron configuration, ionization potentials) and on temperature.
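As a numerical illustration of the Boltzmann relation, the snippet below evaluates the excited-to-ground-state population ratio over the plume temperature range quoted in the introduction, using approximate literature values for the K(I) 766.5 nm resonance line; the constants here are illustrative and are not taken from this study.

```python
import numpy as np

k_B = 8.617333e-5          # Boltzmann constant, eV/K

# Approximate values for the K(I) 766.5 nm line (4s 2S1/2 -> 4p 2P3/2);
# treat these numbers as illustrative.
g0, gi = 2, 4              # degeneracies of ground and excited state
E_i = 1.617                # excitation energy, eV

def excited_fraction(T):
    """N_i/N_0 from the Boltzmann distribution at temperature T (K)."""
    return (gi / g0) * np.exp(-E_i / (k_B * T))

for T in (1800.0, 2500.0, 3500.0):
    print(f"T = {T:6.0f} K  ->  N_i/N_0 = {excited_fraction(T):.3e}")
```

Even at plume temperatures the excited-state population is a small fraction of the ground-state population, which is why emission intensities are so sensitive to temperature and why ppm-level detection hinges on strong resonance lines.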
Considering line broadening, the spectral radiant intensity of an emission line can be expressed through the following radiative transfer relation:

$$I(\lambda) = \frac{hc}{4\pi\lambda}\,A_{ki}\,N_i\,V(\lambda)$$

where the right-hand side combines the atomic emission term with the Voigt line-broadening profile V(λ); c is the speed of light, h is the Planck constant, λ is the spectral wavelength, and A_ki is the Einstein coefficient for spontaneous emission.
Under elevated concentrations of metallic species, self-absorption effects become significant, necessitating modification of the atomic emission equation, as shown in Figure 1. Considering the coupled emission and self-absorption in a uniform slab, the revised form of the radiative transfer equation is:

$$I(\lambda) = \frac{\varepsilon(\lambda)}{\alpha(\lambda)}\left(1 - e^{-\alpha(\lambda) L}\right)$$

where ε(λ) is the atomic emission term, α is the absorption coefficient, L is the optical path length, and αL is the absorption term, dependent on the element properties and elemental concentration. α is proportional to the ground-state atom number density. When αL ≪ 1 (low density), self-absorption is negligible, corresponding to the optically thin model; when αL ≫ 1 (high density), self-absorption is significant, corresponding to the optically thick model.
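The optically thin and optically thick regimes can be illustrated numerically. The sketch below uses the standard uniform-slab solution of the radiative transfer equation, with both the emission and absorption coefficients taken proportional to the emitter concentration n; the path length and scale factors are assumed values for the sketch, not those of the experiment.

```python
import numpy as np

L = 0.05            # optical path length through the flame, m (assumed)
e0, a0 = 1.0, 2.0   # per-atom emission/absorption scale factors (illustrative)

def intensity(n):
    """Uniform-slab RTE solution with eps = e0*n and alpha = a0*n:
    I(n) = (e0/a0) * (1 - exp(-a0*n*L))."""
    return (e0 / a0) * (1.0 - np.exp(-a0 * n * L))

# alpha*L << 1: optically thin, I grows linearly with n.
# alpha*L >> 1: optically thick, I saturates toward e0/a0.
for n in (0.1, 1.0, 10.0, 100.0):
    print(f"n = {n:6.1f}: alpha*L = {a0*n*L:6.2f}, "
          f"I = {intensity(n):.4f} (thin limit {e0*n*L:.4f})")
```

At low concentration the computed intensity tracks the linear thin-limit prediction, while at high concentration it saturates: the sublinear growth that motivates the photon transmission factor introduced next.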
The spectral intensity–concentration relationship can finally be expressed as:

$$I(\lambda) \propto n\,\frac{1-e^{-\alpha L}}{\alpha L}$$

where n = N/L is the atomic concentration. The absorption coefficient α encapsulates inherent nonlinear dependencies arising from coupled radiative transfer phenomena. Specifically, the concentration-dependent absorption coefficient α = σn₀ (where σ denotes the absorption cross-section, available from NIST databases, and n₀ is the ground-state number density) is governed nonlinearly by the unknown atom concentration. This nonlinear coupling creates analytically intractable transcendental relationships, requiring numerical solution through radiation transport codes combined with experimentally calibrated scaling laws for practical implementation. To decouple the nonlinear interdependencies in the radiation transport model, we introduce a photon transmission factor τ:

$$\tau = \frac{1-e^{-\alpha L}}{\alpha L}$$

The photon transmission factor depends primarily on the absorption coefficient, which is intrinsically governed by the number density of metallic atoms (i.e., the concentration) rather than by the metallic species. This dimensionless factor linearizes the intensity–concentration mapping by inherently compensating for self-absorption artifacts:

$$\frac{I(\lambda)}{\tau} \propto n$$
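A minimal numerical sketch of this linearization, assuming the photon transmission factor takes the uniform-slab form τ = (1 − e^(−αL))/(αL) with α proportional to concentration; the path length and coefficients are illustrative assumptions. Dividing the self-absorbed intensity by τ then recovers a quantity strictly proportional to concentration.

```python
import numpy as np

L, a0, e0 = 0.05, 2.0, 1.0   # path length and per-atom coefficients (assumed)

def tau(n):
    """Photon transmission factor, uniform-slab form (assumed):
    tau = (1 - exp(-alpha*L)) / (alpha*L), with alpha = a0*n."""
    x = a0 * n * L
    return (1.0 - np.exp(-x)) / x

def intensity(n):
    """Self-absorbed line intensity for the same slab."""
    return (e0 / a0) * (1.0 - np.exp(-a0 * n * L))

# Dividing out tau turns the saturating curve back into a linear one.
for n in (1.0, 10.0, 100.0):
    print(f"n = {n:6.1f}: I = {intensity(n):.4f}, tau = {tau(n):.4f}, "
          f"I/tau = {intensity(n)/tau(n):.4f}")
```

Algebraically, I/τ = (ε/α)(1 − e^(−αL)) · αL/(1 − e^(−αL)) = εL, which grows linearly with the emitter concentration; this is the decoupling the factor provides.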
4. Experimental Validation of the Spectroscopic Quantification Model
The developed model was rigorously evaluated through a series of controlled combustion experiments. The experimental protocol employed four distinct atomization gas flow rates (ṁ_a2) while maintaining constant global combustion conditions: a total air mass flow of 10.3 g/min and a methane (CH4) flow of 0.65 g/min with 3 MPa atomization backpressure. Thermocouple measurements (Type K, ±5 K accuracy) at the spectrometer detection zone revealed temperature variations of 10–40 K (Table 5) across different solution injection rates, attributable to evaporative cooling effects, with the maximum observed temperature depression representing approximately 3% of the mean flame temperature (1320 ± 15 K). Given that such minor thermal fluctuations induce <1% variation in atomic emission intensities (confirmed via NIST spectral simulations), all concentration inversions were performed using the line-of-sight averaged temperature of 1320 K.
The experiments comprised 32 test cases (4 atomization gas flow rates × 8 metal salt solution concentrations), with Figure 6 displaying the normalized peak intensities of the corresponding atomic emission spectra. Two key observations emerge: (1) within each flow rate group, the spectral intensity exhibited a monotonic relationship with the solution concentration, and (2) at fixed concentrations, the relative spectral intensities across groups followed D > C > B > A, correlating with the atomization gas flow rates due to increased metal vapor delivery efficiency. Concentration inversion accuracy was quantified through the relative error between measured and predicted values (Tables 6–9), demonstrating that the method achieves a mean absolute percentage error of 3.21% across all test conditions, with 84.4% of predictions falling within 5% of reference values. Error sources were systematically categorized as (i) instrumental uncertainties (±2.1% from spectrometer calibration drift) and (ii) operational variabilities, including liquid feed line occlusions, nebulizer efficiency degradation (15% reduction after prolonged use), and ambient background interference (accounting for 0.5–1.8% intensity variation).
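The accuracy metrics quoted above (mean absolute percentage error and the fraction of predictions within 5%) can be computed as sketched below; the (measured, predicted) pairs here are synthetic stand-ins, since the actual values reside in Tables 6–9 and are not reproduced in this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the 32 (measured, predicted) concentration pairs.
measured = np.repeat(np.linspace(50.0, 400.0, 8), 4)               # ppm
predicted = measured * (1.0 + rng.normal(0.0, 0.03, measured.size))

rel_err = np.abs(predicted - measured) / measured
mape = 100.0 * rel_err.mean()               # mean absolute percentage error
within_5 = 100.0 * np.mean(rel_err < 0.05)  # share of cases within 5 %

print(f"MAPE = {mape:.2f} %, within 5 %: {within_5:.1f} % of cases")
```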
Figure 7 shows the intensity–concentration correlation, validating the method's ability to accurately determine elemental concentrations in doped flames despite significant self-absorption effects. The nonlinear relationship shows that (1) within the tested concentration range, the spectral intensities deviate from linear proportionality, indicating that traditional emission spectroscopy methods break down under severe self-absorption, yet the neural-network-based approach maintains 94.3% prediction accuracy; and (2) the characteristic saturation trend aligns with radiative transfer theory: while intensity monotonically increases with concentration, its growth rate decreases. The concentration of metallic components can thus be determined without presupposing initial values, with prediction accuracy primarily governed by the optical absorption effects analyzed in this study. The specific metal species do not significantly influence prediction accuracy, provided appropriate spectral lines are selected. The results demonstrate the technical feasibility of real-time, plume-spectroscopy-based engine health monitoring.
5. Conclusions
This study addresses a critical problem in liquid-propellant rocket engine plume-spectroscopy-based health monitoring: accurate metallic element concentration inversion from spectral intensity measurements. We developed an inversion method centered on a genetically optimized neural network architecture. Through a combination of simulations and experiments, the following conclusions can be drawn:
(1) This study develops a metallic element concentration assessment approach for flame environments, utilizing a methane-premixed jet flame as the excitation source and a potassium carbonate (K2CO3) solution as the doping medium. The approach establishes a quantitative conversion framework between solution concentration and the concentration in the thermofluidic environment.
(2) This study develops a genetic-algorithm-optimized BP neural network approach incorporating a photon transmission factor to bridge spectral simulations and experimental measurements, significantly improving the efficiency of training dataset generation.
(3) The proposed concentration inversion method was rigorously evaluated through 32 experimental test cases covering various atomization flow rates and metallic element concentrations. The results confirm that the method not only achieves high inversion accuracy but also remains effective under strong self-absorption conditions.
These results demonstrate the technical feasibility of real-time, plume-spectroscopy-based engine health monitoring. Future work includes the investigation of the quantitative relationship between metallic concentrations in the plume and specific rocket engine fault modes or health states (e.g., turbine erosion, combustion chamber degradation, or nozzle wear).