Article

Research on the Prediction of Liquid Injection Volume and Leaching Rate for In Situ Leaching Uranium Mining Using the CNN–LSTM–LightGBM Model

1 State Key Laboratory of Nuclear Resources and Environment, East China University of Technology, Nanchang 330013, China
2 School of Information Engineering, East China University of Technology, Nanchang 330013, China
3 Engineering Research Center of Nuclear Technology Application, East China University of Technology, Ministry of Education, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Processes 2025, 13(9), 3013; https://doi.org/10.3390/pr13093013
Submission received: 21 August 2025 / Revised: 16 September 2025 / Accepted: 18 September 2025 / Published: 21 September 2025
(This article belongs to the Section AI-Enabled Process Engineering)

Abstract

In traditional in situ leaching (ISL) uranium mining, the injection volume depends on technicians’ on-site experience. Therefore, applying artificial intelligence technologies such as machine learning to analyze the relationship between injection volume and leaching rate in ISL uranium mining, thereby reducing human factor interference, holds significant guiding importance for production process control. This study proposes a novel uranium leaching rate prediction method based on a CNN–LSTM–LightGBM fusion model integrated with an attention mechanism. Ablation experiments demonstrate that the proposed fusion model outperforms its component models across three key metrics: Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE). Furthermore, comparative experiments reveal that this fusion model achieves superior performance on MAE, MAPE, and RMSE metrics compared to six extensively utilized machine learning methods, including Multi-Layer Perceptron, Support Vector Regression, and K-Nearest Neighbors. Specifically, the model achieves an MAE of 0.085%, an MAPE of 0.833%, and an RMSE of 0.201%. This attention-enhanced fusion model provides technical support for production control in ISL uranium mining and offers valuable references for informatization and intelligentization research in uranium mining operations.

1. Introduction

Against the backdrop of the global energy transition and low-carbon economic development, nuclear energy—as a clean and efficient energy source—has been progressively increasing its share in the global energy supply, driving a rising demand for uranium resources [1]. Compared to conventional uranium mining methods, in situ leaching (ISL) uranium mining technology offers distinct advantages, including the ability to exploit low-grade deposits, simpler construction, cost-effectiveness, and faster production cycles. These benefits have garnered widespread attention in the international nuclear energy industry, establishing ISL as one of the pivotal technologies in modern uranium extraction practices [2].
ISL uranium mining is primarily conducted through a system of engineered boreholes, including injection wells and extraction wells spaced at specific intervals. The core process involves pumping a formulated lixiviant (leaching solution) into the injection wells, which selectively reacts with subsurface uranium deposits, dissolving the metal into solution. The uranium-bearing liquid is then extracted via the extraction wells, pumped to the surface, and subjected to engineered chemical processes in surface facilities to recover uranium [3,4]. Currently, ISL accounts for approximately 50% of global uranium production annually [5].
However, ISL uranium mining faces critical technical challenges such as imprecise control of injection volumes, suboptimal leaching efficiency, and groundwater contamination caused by chemical solvents. These issues constrain the technology’s operational efficiency and environmental sustainability, hindering its broader industrial adoption [6,7]. With the rapid advancement of artificial intelligence, particularly breakthroughs in machine learning for data-driven pattern recognition and process optimization, innovative approaches are being explored to mitigate these persistent challenges in ISL operations.
Currently, machine learning technologies have been widely applied to various aspects of ISL uranium mining, including ore deposit exploration, mining process optimization, environmental monitoring, and evaluation. For example, Mukhamediev [8] in Kazakhstan used machine learning methods to estimate the filtration characteristics of host rocks in sandstone-type uranium deposits, providing a scientific basis for mining; Merembayev [9] employed machine learning algorithms to classify strata of uranium deposits; Amirgaliev [10] applied several machine learning methods to identify rocks in uranium deposits; and Mukhamediev [11] utilized machine learning methods to determine the formation of oxidized zones in uranium well reservoirs.
With the rapid development of the nuclear energy industry, research on ISL uranium mining technology and its optimization methods has garnered increasing attention. Li et al. [12] developed a coupled model to simulate and optimize fluid flow in a complex well-formation system for ISL uranium mining. Jia et al. [13] devised an enhanced fast marching method (FMM) for rapidly analyzing solute transport coverage patterns, offering an alternative to groundwater numerical simulators. At the same time, numerous research institutions and universities have extensively explored the application of machine learning to ISL technology. For example, Dongyuan Yu [14] developed machine learning-based methods to predict variations in the uranium leaching rate during ISL processes, while Lei Lin [15] employed neural network models to forecast the uranium concentration of acid in situ leach liquor. These studies focus not only on optimizing extraction processes, such as predicting and adjusting injection strategies through machine learning algorithms, but also extend to environmental monitoring and assessment, utilizing machine learning models to predict and evaluate the potential environmental impacts of mining activities.
Despite progress in applying machine learning to ISL uranium mining, significant challenges remain. A critical technical priority is optimizing injection strategies to improve uranium leaching rates and recovery rates—an effort closely tied to the overarching goal of efficient resource utilization. To address this, we propose an attention-based CNN–LSTM–LightGBM fusion model for the precise prediction of uranium leaching rates in ISL operations. Building on these predictions, the model provides targeted support for optimizing uranium recovery processes. By clarifying the relationships between key parameters in the leaching process—such as daily lixiviant volume and sulfuric acid concentration—and leaching efficiency, it helps adjust core operations like injection strategies. This, in turn, indirectly enhances uranium recovery rates while meeting the current demand for efficiency in ISL resource recovery [16,17]. The proposed approach offers actionable guidance for improving leaching performance and ensuring consistency with engineering expectations.

2. CNN–LSTM–LightGBM Fusion Model

2.1. Construction of the Coupled CNN–LSTM Model

The proposed uranium leaching rate prediction model consists of three core layers—a Convolutional Neural Network (CNN), a Long Short-Term Memory (LSTM) network, and an attention mechanism layer—which analyze influencing factors in the uranium leaching process.

2.1.1. Convolutional Neural Network

The core strength of Convolutional Neural Networks (CNNs) in time-series prediction lies in their ability to efficiently capture local temporal dependencies and hierarchical patterns within sequential data—particularly through one-dimensional (1D) convolutional layers, which are tailored for processing time-series inputs. As a feedforward neural network with trainable convolutional kernels [18], CNNs excel in extracting meaningful features from ordered temporal sequences by sliding 1D filters (convolution kernels) along the time axis. This sliding operation enables element-wise multiplication and summation across consecutive time steps, generating feature maps that encode critical local patterns such as short-term fluctuations, periodic pulses, or transient trends in the time series.
By stacking multiple 1D convolutional layers, the network progressively aggregates these local temporal features into higher-level representations—for example, linking hourly leaching rate variations to daily or weekly trend patterns—thereby capturing both fine-grained dynamics and coarse-grained temporal structures [19,20,21]. Compared to fully connected networks or traditional time-series models, CNNs reduce parameter redundancy through weight sharing in convolutional kernels, enhancing computational efficiency even for long time-series data [22]. One-dimensional (1D) convolutional layers extract temporal features along the time axis, with their output given by Equation (1).
$y_i = \tanh\left( \sum_{j=1}^{k} \omega_j x_{i-j+k} + b \right)$    (1)
where $x_i$ is the input time series; $\omega_j$ is the weight of the convolution kernel; $b$ is the bias; and $k$ is the convolution kernel size.
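As a concrete illustration, the sliding multiply-and-sum of Equation (1) can be sketched in a few lines of NumPy; the series, kernel weights, and bias below are hypothetical values, not parameters from the paper.

```python
import numpy as np

def conv1d_tanh(x, w, b):
    """1D convolution along the time axis followed by tanh, mirroring
    Equation (1): y_i = tanh(sum_j w_j * x_{i-j+k} + b), kernel size k."""
    k, n = len(w), len(x)
    y = np.empty(n - k + 1)           # valid positions only
    for i in range(n - k + 1):
        window = x[i:i + k]           # x_i, ..., x_{i+k-1}
        # reverse so w_1 pairs with the most recent sample in the window,
        # matching the x_{i-j+k} indexing of Equation (1)
        y[i] = np.tanh(np.dot(w, window[::-1]) + b)
    return y

x = np.array([0.1, 0.4, 0.3, 0.8, 0.5, 0.7])   # toy time series
w = np.array([0.5, -0.2, 0.1])                 # one kernel, k = 3
y = conv1d_tanh(x, w, b=0.05)
```

Stacking several such layers (as the model does) simply feeds `y` into the next `conv1d_tanh` call with its own kernel.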

2.1.2. LSTM Neural Networks

Long Short-Term Memory (LSTM) neural networks [23] are a specialized variant of Recurrent Neural Networks (RNNs) designed specifically to address challenges in modeling long-sequence data. As a significant innovation in deep learning, they aim to handle and analyze long-term dependency issues in time-series data. The structure of the LSTM recurrent unit is illustrated in detail in Figure 1.
Compared to standard RNNs, the basic LSTM unit adds three "gates": the input gate $i_t$, the forget gate $f_t$, and the output gate $o_t$, whose coefficient values range over [0, 1]. The input gate primarily determines which attributes need to be updated and the content of the new attributes; the forget gate discards previous, now-useless state information; the output gate decides what to output. At any time step, all three gates are computed from the previous unit's output $h_{t-1}$ and the current input $x_t$, and together they determine the final output [24]. The calculation formulas for the input gate $i_t$, forget gate $f_t$, output gate $o_t$, and the candidate cell state $\tilde{C}_t$ are given in Equations (2)–(5).
$i_t = \sigma(W_{ix} x_t + W_{ih} h_{t-1} + b_i)$    (2)
$f_t = \sigma(W_{fx} x_t + W_{fh} h_{t-1} + b_f)$    (3)
$o_t = \sigma(W_{ox} x_t + W_{oh} h_{t-1} + b_o)$    (4)
$\tilde{C}_t = \tanh(W_{cx} x_t + W_{ch} h_{t-1} + b_c)$    (5)
Here, $W_{ix}$, $W_{ih}$, $W_{fx}$, $W_{fh}$, $W_{ox}$, $W_{oh}$, $W_{cx}$, and $W_{ch}$ denote the weight matrices applied to the current input $x_t$ and the previous unit output $h_{t-1}$ for the corresponding gates; $b_i$, $b_f$, $b_o$, and $b_c$ are bias terms; and $\sigma$ represents the sigmoid function.
The updated state value $C_t$ is determined by the previous state value $C_{t-1}$, the forget gate $f_t$, the input gate $i_t$, and the candidate cell state $\tilde{C}_t$. Following the state update, the output value $h_t$ is computed via element-wise multiplication, as shown in Equations (6) and (7).
$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$    (6)
$h_t = o_t \odot \tanh(C_t)$    (7)
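Equations (2)–(7) can be traced step by step in a minimal NumPy sketch of a single LSTM cell; the dimensions and randomly initialized weights below are illustrative assumptions, not the trained parameters of the proposed model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM time step implementing Equations (2)-(7)."""
    i_t = sigmoid(p["W_ix"] @ x_t + p["W_ih"] @ h_prev + p["b_i"])      # (2)
    f_t = sigmoid(p["W_fx"] @ x_t + p["W_fh"] @ h_prev + p["b_f"])      # (3)
    o_t = sigmoid(p["W_ox"] @ x_t + p["W_oh"] @ h_prev + p["b_o"])      # (4)
    c_tilde = np.tanh(p["W_cx"] @ x_t + p["W_ch"] @ h_prev + p["b_c"])  # (5)
    c_t = f_t * c_prev + i_t * c_tilde      # (6), element-wise products
    h_t = o_t * np.tanh(c_t)                # (7)
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hid = 5, 4                          # toy sizes, not the paper's
params = {}
for gate in "ifoc":
    params[f"W_{gate}x"] = rng.normal(scale=0.1, size=(n_hid, n_in))
    params[f"W_{gate}h"] = rng.normal(scale=0.1, size=(n_hid, n_hid))
    params[f"b_{gate}"] = np.zeros(n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.normal(size=(3, n_in)):      # a 3-step toy sequence
    h, c = lstm_step(x_t, h, c, params)
```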

2.1.3. Attention Mechanism

The fundamental idea of the attention mechanism is to filter useful information. Its essence in achieving this effect lies in computing the output sequence of the LSTM hidden layer—specifically, the input feature vectors post-training—to derive a feature weight vector. This process identifies more critical influencing factors, thereby enhancing the efficiency and accuracy of information processing [25]. The specific architectural diagram is illustrated in Figure 2.
The steps for computing the feature weight vector are as follows. First, we calculate the attention weight $a_{ti}$ assigned to the element at the current time step t within the output sequence of the LSTM hidden layer. In Equation (8), i denotes the index in the output sequence of the LSTM hidden layer, $T_h$ represents the sequence length of the LSTM hidden layer output, and $e_{ti}$ indicates the matching degree between the element to be encoded and the other elements in the LSTM hidden layer output sequence. Subsequently, the feature weight vector $h_k$ is computed using Equations (8)–(10).
$a_{ti} = \dfrac{\exp(e_{ti})}{\sum_{i=1}^{T_h} \exp(e_{ti})}$    (8)
$h_k = H(C, s_t, h_t)$    (9)
$C = \sum_{i=1}^{T_h} a_{ti} s_t$    (10)
Here, $H(\cdot)$ denotes the feature weight vector function; $h_t$ represents the output sequence of the LSTM hidden layer; and $s_t$ corresponds to the hidden state of the attention mechanism associated with the output sequence of the LSTM hidden layer.
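A minimal sketch of the weight computation in Equations (8)–(10), assuming the matching scores $e_{ti}$ have already been produced by the network; all sizes and values below are hypothetical.

```python
import numpy as np

def attention_weights(scores):
    """Softmax over matching scores e_{ti}, Equation (8)."""
    e = np.exp(scores - scores.max())   # shift for numerical stability
    return e / e.sum()

def context_vector(weights, s):
    """Attention-weighted sum of hidden states, Equation (10)."""
    return weights @ s

rng = np.random.default_rng(1)
T_h, d = 6, 4                        # toy sequence length and state size
scores = rng.normal(size=T_h)        # e_{t1}, ..., e_{t T_h}
s = rng.normal(size=(T_h, d))        # hidden states s_t
a = attention_weights(scores)        # a_{ti}; non-negative, sums to 1
C = context_vector(a, s)             # context C passed into H(.)
```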

2.1.4. Construction of the Fusion Model

The architecture of the CNN–LSTM fusion model integrated with the attention mechanism is illustrated in Figure 3.
The model first transforms historical data related to uranium leaching rates (including daily production volume, daily lixiviant volume, H2SO4 lixiviant concentration, and uranium concentration in the lixiviant) through preprocessing steps into 1D vectors with a length of 5 and a height of 1. This approach leverages the advantage of CNNs in image feature extraction.
In this architecture, the convolutional kernel size is set to 1, and the ReLU function is employed as the activation function. The LSTM layer, designed to process long-sequence data, incorporates two hidden layers with 64 units each, where the ReLU activation function is also applied. However, individual models have their respective limitations: CNNs excel at capturing local features but struggle to model long-term dependencies, while LSTMs have an advantage in long-time-series analysis but lack the ability to focus on key local features. An attention module is introduced after the LSTM network to compute the weights, ultimately deriving the prediction results.
Overall, this coupled architecture directly addresses the aforementioned limitations: it combines the spatial feature extraction capabilities of CNNs, the time-series analysis capabilities of LSTMs, and the selective focusing capability of the attention mechanism. It enables a holistic comprehension and analysis of multiple factors influencing the uranium leaching process. The approach not only enhances predictive accuracy but also deepens the model's understanding of interactions among complex geological and chemical variables.

2.2. LightGBM

LightGBM (Light Gradient Boosting Machine), first introduced in [26] and since widely adopted, is an efficient machine learning framework based on the Gradient Boosting Decision Tree (GBDT). As an extension of GBDT, LightGBM adopts a histogram-based decision tree algorithm that reduces memory usage and improves computational efficiency, significantly accelerating training. The core idea of LightGBM is to linearly combine M weak regression trees into a strong regression tree, as given in Equation (11).
$F(x) = \sum_{m=1}^{M} f_m(x)$    (11)
Here, $F(x)$ represents the final output, and $f_m(x)$ denotes the output of the m-th weak regression tree. The key improvements in the LightGBM model are the histogram-based algorithm and the leaf-wise splitting strategy. The histogram algorithm discretizes continuous data into K integer intervals and constructs a histogram of width K. During traversal, the discretized values are used as indices to accumulate statistics in the histogram, followed by a search for the optimal split points in the decision tree. The leaf-wise splitting strategy iteratively selects the leaf node with the maximum gain for splitting. Additionally, model complexity is reduced, and overfitting is mitigated, by constraining the depth of the trees and the number of leaf nodes [27].
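The additive form of Equation (11) can be illustrated without the LightGBM library itself: the library-free sketch below boosts depth-1 regression trees (stumps) on residuals, demonstrating the $F(x) = \sum_m f_m(x)$ structure while omitting LightGBM's histogram and leaf-wise refinements.

```python
import numpy as np

def fit_stump(x, y):
    """Fit a depth-1 regression tree: pick the split threshold that
    minimizes squared error, with a constant prediction per side."""
    best = None
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pl, pr = left.mean(), right.mean()
        sse = ((left - pl) ** 2).sum() + ((right - pr) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, pl, pr)
    _, t, pl, pr = best
    return lambda q: np.where(q <= t, pl, pr)

def boost(x, y, M=20, lr=0.3):
    """Gradient boosting for squared loss: each weak tree f_m fits the
    current residuals, and F(x) = sum_m f_m(x) as in Equation (11)."""
    trees, residual = [], y.astype(float).copy()
    for _ in range(M):
        tree = fit_stump(x, residual)
        trees.append(tree)
        residual = residual - lr * tree(x)
    return lambda q: lr * sum(t(q) for t in trees)

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 200)                       # synthetic 1D inputs
y = np.sin(x) + rng.normal(scale=0.1, size=200)   # noisy target
F = boost(x, y)
mse = np.mean((F(x) - y) ** 2)
```

Each added stump shrinks the residuals, so the ensemble's training error falls well below that of a constant predictor.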

2.3. Training Process of Fusion Models

The flowchart of the training process for the CNN–LSTM–LightGBM-based fusion model with attention mechanism, designed for predicting leaching rates in ISL uranium mining, is illustrated in Figure 4.
Based on the data features extracted by the CNN, the CNN–LSTM model and the LightGBM model are constructed separately, and a fusion prediction model is established through weighted fusion. The MAPE-RW algorithm is employed to calculate the initial weights of individual models. First, the CNN–LSTM model and the LightGBM model are trained, and their respective MAPE values on the validation set are computed. Subsequently, the model weights are optimized via the MAPE-RW algorithm, and the uranium leaching rate prediction from the fusion model is output as the final result.
The MAPE-RW algorithm calculates model weights through the following procedure: First, the MAPE values of the CNN–LSTM model and the LightGBM model are computed separately to determine the initial weights for each individual model. Then, the model weights are iteratively adjusted in both directions until the weights that minimize the MAPE value are identified. The optimal weights within the bidirectional search range that yield the smallest MAPE value are ultimately selected as the optimal weights for the fusion prediction model. Finally, the combined uranium leaching rate prediction is calculated using Equations (12)–(14).
$\omega_i = \dfrac{\sigma_{\mathrm{MAPE}_j}}{\sigma_{\mathrm{MAPE}_i} + \sigma_{\mathrm{MAPE}_j}}$    (12)
$\omega_j = \dfrac{\sigma_{\mathrm{MAPE}_i}}{\sigma_{\mathrm{MAPE}_i} + \sigma_{\mathrm{MAPE}_j}}$    (13)
$f_c = \omega_{\mathrm{CNN\text{-}LSTM}} \times f_{\mathrm{CNN\text{-}LSTM}} + \omega_{\mathrm{LightGBM}} \times f_{\mathrm{LightGBM}}$    (14)
Here, $\sigma_{\mathrm{MAPE}_i}$ and $\sigma_{\mathrm{MAPE}_j}$ represent the MAPE values of the CNN–LSTM model and the LightGBM model, respectively. $\omega_i$ and $\omega_j$ denote the initial weights assigned to the two models. The coefficients $\omega_{\mathrm{CNN\text{-}LSTM}}$ and $\omega_{\mathrm{LightGBM}}$ are the optimized weights for the CNN–LSTM and LightGBM models, respectively. $f_{\mathrm{CNN\text{-}LSTM}}$ and $f_{\mathrm{LightGBM}}$ correspond to the predicted uranium leaching rates from the CNN–LSTM and LightGBM models, while $f_c$ is the final prediction of the uranium leaching rate generated by the fusion model. The proposed model employs the Root Mean Square Error (RMSE) to evaluate its effectiveness against traditional strategies.
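A sketch of the MAPE-RW procedure, under the assumption that the bidirectional weight search amounts to a grid search over [0, 1] (the paper does not state the step size); the validation targets and model outputs below are synthetic stand-ins.

```python
import numpy as np

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def mape_rw_fuse(y_val, pred_a, pred_b, step=0.01):
    """Compute the initial weight from validation MAPEs
    (Equations (12)-(13)), then grid-search the unit interval for the
    weight minimizing the fused MAPE of Equation (14)."""
    m_a, m_b = mape(y_val, pred_a), mape(y_val, pred_b)
    w_init = m_b / (m_a + m_b)       # lower MAPE -> larger weight
    candidates = np.clip(np.arange(0.0, 1.0 + step, step), 0.0, 1.0)
    best_w = min(candidates,
                 key=lambda w: mape(y_val, w * pred_a + (1 - w) * pred_b))
    fused = best_w * pred_a + (1 - best_w) * pred_b
    return w_init, best_w, fused

rng = np.random.default_rng(3)
y = rng.uniform(70, 90, 50)                   # leaching rates, %
pred_cnn_lstm = y + rng.normal(0, 1.0, 50)    # stand-in model outputs
pred_lgbm = y + rng.normal(0, 1.5, 50)
w0, w, fused = mape_rw_fuse(y, pred_cnn_lstm, pred_lgbm)
```

Because the candidate grid includes the endpoints 0 and 1, the fused MAPE can never exceed the better individual model's MAPE.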

3. Data Preprocessing

3.1. Handling Missing Values and Outliers

In this study, we adopted the widely recognized multiple imputation (MI) technique to address the unavoidable issue of missing values in the dataset. The principle of MI is grounded in the assumption that the missing data mechanism is Missing at Random (MAR), meaning the probability of missingness depends solely on observed data and is independent of unobserved data. The MI process can be broadly divided into three stages: Imputation, Analysis, and Pooling. In scenarios involving complex missing data patterns or high missingness proportions, MI provides an effective solution [28].
Outliers in the following six metrics were processed via the clipping method: Uranium Concentration in the Production Solution, Uranium Concentration in the Lixiviant, H2SO4 Lixiviant Concentration, Metal Concentration in the Lixiviant, Daily Production Volume in the acid leaching process, and Daily Lixiviant Volume in the acid leaching process. Clipping is a statistical method for handling outliers, with the primary objective of trimming values that exceed predefined thresholds by capping them within upper and lower bounds.
After data cleaning, the coefficients of variation (CV) for all six metrics—H2SO4 Lixiviant Concentration (0.135), Uranium Concentration in the Lixiviant (0.119), Uranium Concentration in the Production Solution (0.142), Metal Concentration in the Lixiviant (0.012), Daily Production Volume (0.062), and Daily Lixiviant Volume (0.078)—remained below 0.15, confirming the effectiveness of outlier clipping.
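The clipping and coefficient-of-variation checks can be sketched as follows; the 3-sigma bounds and the synthetic H2SO4 concentration series are assumptions for illustration, since the paper does not state its clipping thresholds.

```python
import numpy as np

def clip_outliers(series, k=3.0):
    """Cap values outside [mean - k*std, mean + k*std]; the k-sigma
    bounds are an assumed choice, not the paper's stated thresholds."""
    lo = series.mean() - k * series.std()
    hi = series.mean() + k * series.std()
    return np.clip(series, lo, hi)

def coefficient_of_variation(series):
    """CV = standard deviation / mean (unitless dispersion measure)."""
    return series.std() / series.mean()

rng = np.random.default_rng(4)
h2so4 = rng.normal(8.0, 1.0, 500)        # synthetic concentration data
h2so4[:5] = [30.0, -10.0, 25.0, 40.0, -5.0]   # injected outliers
cleaned = clip_outliers(h2so4)
cv = coefficient_of_variation(cleaned)
```

Clipping shrinks the extreme values into the band, so the CV of the cleaned series drops relative to the raw series, mirroring the sub-0.15 values reported above.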

3.2. Indicator Correlation Analysis

This study employs the Pearson correlation coefficient [29] and Spearman’s rank correlation coefficient [30] among various correlation coefficients for analysis, integrating the analytical results from both methods to conduct a comprehensive analysis.
The formula for the Pearson correlation coefficient can be represented by Equations (15) and (16).
$\mathrm{cov}(x, y) = \dfrac{\sum_{i=1}^{n} (x_i - x_e)(y_i - y_e)}{n - 1}$    (15)
$\rho_p = \dfrac{\mathrm{cov}(x, y)}{\sigma_x \sigma_y}$    (16)
where x and y represent the time series of the two selected reference variables; $\mathrm{cov}(x, y)$ denotes the covariance between sequence x and reference sequence y; $x_e$ and $y_e$ represent the mean values of sequences x and y, respectively; and i indexes the time series, referring to the actual value at each time point. A correlation coefficient ρ < 0 indicates a negative correlation between variables, while ρ > 0 indicates a positive correlation. When the absolute value of the correlation coefficient between two variables exceeds 0.5, a strong correlation can be considered to exist.
In the analysis of the ISL uranium mining process, six indicators were preliminarily selected. A code-based computational approach was employed to calculate the correlation coefficients among these six indicators within the dataset. The resulting correlation coefficients are visualized in Figure 5a. The heatmap reveals that, for the uranium concentration in the leachate, the highest correlations are observed with daily production volume and daily lixiviant volume, yielding values of 0.48 and 0.54.
To validate the robustness of indicator correlations, a non-parametric method—Spearman’s rank correlation coefficient—was further employed for multi-method validation, aiming to assess the presence of significant monotonic associations among the indicators. This coefficient, proposed by Spearman [31], is defined by Equation (17) and serves to quantitatively analyze the direction and strength of monotonic associations between variables.
$\rho_s = 1 - \dfrac{6 \sum d^2}{n(n^2 - 1)}$    (17)
In Equation (17), $\rho_s$ represents Spearman's rank correlation coefficient. Its calculation involves several key elements: $d = r_x - r_y$ denotes the difference in ranks between variables x and y, and n refers to the total sample size. The Spearman correlation coefficients for the technical indicator data were further computed, and the results are visualized in Figure 5b.
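Both coefficients can be computed directly from Equations (15)–(17); the sketch below assumes no tied ranks (where Equation (17) is exact), and the two short series are hypothetical stand-ins for the mining indicators.

```python
import numpy as np

def pearson(x, y):
    """Equations (15)-(16): sample covariance over the product of
    sample standard deviations."""
    cov = np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)
    return cov / (x.std(ddof=1) * y.std(ddof=1))

def spearman(x, y):
    """Equation (17), assuming no tied ranks:
    rho_s = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    rx = np.argsort(np.argsort(x)) + 1   # 1-based ranks
    ry = np.argsort(np.argsort(y)) + 1
    d = rx - ry
    n = len(x)
    return 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

# hypothetical daily lixiviant volumes and uranium concentrations
volume = np.array([120.0, 135.0, 150.0, 160.0, 175.0])
uranium = np.array([40.1, 42.8, 44.0, 47.5, 49.2])
r_p = pearson(volume, uranium)
r_s = spearman(volume, uranium)
```

A strictly monotonic relationship yields a Spearman coefficient of exactly 1 even when the Pearson value is slightly below 1, which is why the paper uses both for cross-validation of the correlations.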
According to the Spearman correlation heatmap, the positively correlated indicators align with the results from the Pearson correlation analysis. Specifically, metal concentration in the lixiviant, daily lixiviant volume, daily production volume, and uranium concentration in the production solution exhibit positive correlations with uranium concentration in the lixiviant. Among these, the strongest correlation is observed between metal concentration in the lixiviant and uranium concentration in the lixiviant. In contrast, H2SO4 lixiviant concentration shows a weak negative correlation with uranium concentration in the lixiviant.
By integrating the results from both the Pearson and Spearman correlation heatmaps, a composite histogram of correlation indicators was generated (Figure 6). As shown in Figure 6, the daily lixiviant volume exhibits the highest correlation strength. The remaining indicators are ranked in descending order of correlation magnitude: metal concentration in the lixiviant, daily production volume, H2SO4 lixiviant concentration, and uranium concentration in the lixiviant.

3.3. XGBoost Feature Selection

To validate the correctness of Pearson and Spearman analyses on the five indicators, XGBoost feature selection was further used to rank the importance of all features in the dataset. XGBoost provides built-in feature importance evaluation methods for feature selection [32]. The general steps for feature importance evaluation are as follows: 1. Train an XGBoost model. 2. Obtain importance scores of each feature using built-in functions (e.g., feature importance). 3. Rank features based on the scores and select the highest-ranked ones. By integrating data from all mining areas and constructing an XGBoost model, the importance of all features was calculated. The final ranking of feature importance, sorted in descending order, is shown in Table 1.
As shown in Table 1, the daily lixiviant volume exhibits the highest feature importance, indicating its greatest contribution to predicting uranium concentration in the production solution. In contrast to the Pearson and Spearman correlation analyses, the remaining features are ranked in descending order of importance as follows: daily production volume, followed by H2SO4 lixiviant concentration, then uranium concentration in the lixiviant, and finally metal concentration in the lixiviant. Notably, metal concentration in the production solution and H2SO4 production concentration showed minimal relevance. This trend is consistent with the correlations derived from chemical reaction equation analysis, further validating the dominance of the top five features. The final constructed feature set comprises the following variables: daily lixiviant volume, daily production volume, H2SO4 lixiviant concentration, uranium concentration in the lixiviant, and metal concentration in the lixiviant.
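The paper relies on XGBoost's built-in importance scores; as a library-free stand-in that conveys the same rank-and-select idea, the sketch below estimates permutation importance for a simple least-squares model on synthetic features (all data and names here are hypothetical, and permutation importance is a substitute technique, not the paper's method).

```python
import numpy as np

def permutation_importance(model_predict, X, y, rng):
    """Score each feature by how much shuffling it degrades MSE --
    a generic proxy for a tree ensemble's built-in importance scores."""
    base = np.mean((model_predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])           # break feature j's relationship
        scores.append(np.mean((model_predict(Xp) - y) ** 2) - base)
    return np.array(scores)

rng = np.random.default_rng(5)
n = 300
X = rng.normal(size=(n, 3))
# feature 0 dominates, feature 1 is weak, feature 2 is pure noise
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)
w, *_ = np.linalg.lstsq(X, y, rcond=None)     # fit a linear model
predict = lambda M: M @ w
scores = permutation_importance(predict, X, y, rng)
ranking = np.argsort(scores)[::-1]            # descending importance
```

Sorting the scores in descending order and keeping the top features mirrors the three-step procedure described above.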

4. Experimental Analysis

4.1. Model Evaluation Metrics

The model was primarily evaluated using MAPE (Mean Absolute Percentage Error), RMSE (Root Mean Square Error), and MAE (Mean Absolute Error) as performance metrics.
MAPE is a commonly used metric for evaluating prediction accuracy. It calculates the average of the absolute percentage errors between predicted and actual values, which is then converted into a percentage to provide an intuitive understanding of the error magnitude. RMSE, another widely used measure, quantifies deviations between predicted and observed values. It is calculated as the square root of MSE. By preserving the original data units and reflecting error dispersion, RMSE enables direct comparisons across different scales. MAE measures the average absolute difference between the predicted and actual values, serving as a straightforward method for assessing the accuracy of a prediction model. The computational formulas for these metrics are provided in Equations (18)–(20).
$\mathrm{MAPE} = \dfrac{100\%}{n} \sum_{t=1}^{n} \left| \dfrac{y_t - \hat{y}_t}{y_t} \right|, \quad y_t \neq 0$    (18)
$\mathrm{RMSE} = \sqrt{\dfrac{1}{n} \sum_{t=1}^{n} (y_t - \hat{y}_t)^2}$    (19)
$\mathrm{MAE} = \dfrac{1}{n} \sum_{t=1}^{n} \left| y_t - \hat{y}_t \right|$    (20)
where $y_t$ represents the actual value at the t-th observation; $\hat{y}_t$ denotes the predicted value at the t-th observation; and n is the total number of observations.
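The three metrics in Equations (18)–(20) translate directly into NumPy; the example values below are hypothetical leaching rates, not results from the paper.

```python
import numpy as np

def mape(y_true, y_pred):
    """Equation (18); requires all y_t != 0."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def rmse(y_true, y_pred):
    """Equation (19): square root of the mean squared error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Equation (20): mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([80.0, 82.5, 85.0, 87.5])  # hypothetical leaching rates, %
y_pred = np.array([79.5, 83.0, 84.0, 88.0])
```

Note that RMSE penalizes large deviations more heavily than MAE, which is why the two can rank models differently on the same predictions.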

4.2. Ablation Experiment

To validate the effectiveness of the proposed CNN–LSTM–LightGBM model for uranium leaching rate prediction, a series of performance tests was conducted on individual components (CNN, LSTM, and CNN–LSTM). The comparative results are summarized in Table 2 and illustrated in Figure 7.
As shown in Table 2 and Figure 8, the fused model achieves the best performance across three key metrics: MAE (0.085%), MAPE (0.833%), and RMSE (0.201%), demonstrating the lowest errors. The CNN–LSTM coupled model exhibits superior performance compared to standalone CNN and LSTM architectures across all metrics. This improvement likely stems from its ability to synergize CNN’s spatial feature extraction with LSTM’s temporal sequence modeling, thereby enhancing the accuracy of uranium leaching rate predictions.
The standalone LSTM model outperforms the standalone CNN model in terms of MAE and RMSE, likely due to LSTM’s inherent strength in processing time-series data and capturing temporal dependencies in uranium leaching processes. However, its performance remains inferior to the CNN–LSTM fusion, suggesting that relying solely on temporal analysis may be insufficient for comprehensively predicting uranium leaching rates, thus highlighting the necessity of integrating spatial features.
In contrast, the standalone CNN model demonstrates the weakest performance across all metrics, indicating that spatial feature extraction alone may lack accuracy in predicting uranium leaching rates without addressing complex temporal dependencies. This further emphasizes the critical role of temporal information and the need to combine it with spatial features for uranium leaching rate prediction.
These limitations highlight the necessity of hybrid modeling. The proposed fusion model outperforms all benchmarks on every metric, empirically showing that combining LightGBM with the CNN–LSTM architecture markedly improves prediction accuracy.

4.3. Comparative Experiments

To further validate the effectiveness of the fusion model, a comparative performance analysis was conducted between the proposed fusion model and six machine learning prediction models: Support Vector Regression (SVR), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), attention-based ResNet [33], attention-based TCN-BiGRU [34], and CNN-BiGRU fused with attention mechanisms [35]. The experimental results (summarized in Table 3) demonstrate that the fusion model consistently outperformed all alternatives across every evaluation metric.
As shown in Table 3, the CNN–LSTM–LightGBM uranium leaching rate prediction model outperforms six heterogeneous and extensively utilized machine learning methods, such as SVR, MLP, and KNN, across all three key metrics on the validation set.

4.4. Model Application

To validate whether the model can be effectively applied in actual production, the proposed CNN–LSTM–LightGBM fusion model with attention mechanisms was used to predict uranium leaching rates across 10 mining areas. As shown in Figure 9, the model achieved robust prediction accuracy across all mining areas, reliably forecasting uranium concentrations within operational tolerances.
The metric performance is shown in Table 4: MAE had a maximum of 0.112% and a minimum of 0.039%; MAPE had a maximum of 0.985% and a minimum of 0.166%; RMSE had a maximum of 0.249% and a minimum of 0.105%.

5. Conclusions

The proposed CNN–LSTM–LightGBM fusion model with attention mechanisms provides reliable predictions of uranium leaching rates during in situ uranium leaching processes. Comparative experiments demonstrate that the proposed model achieves an MAE of 0.085%, a MAPE of 0.833%, and an RMSE of 0.201% on the validation set, outperforming six heterogeneous and extensively utilized methods, including SVR, MLP, and KNN, across all three metrics. Ablation experiments, which quantify component contributions through systematic exclusion, reveal the fusion model's advantages in MAE, MAPE, and RMSE on the validation set over standalone CNN, LSTM, and CNN–LSTM counterparts. By enabling accurate prediction of uranium leaching rates and thereby optimizing lixiviant injection strategies, the research outcomes not only enhance the efficient utilization of uranium resources but also provide actionable insights for reducing environmental pollution. In conclusion, the proposed fusion model presents a deep learning-integrated approach for in situ uranium leaching technology while offering a methodological reference for related industrial domains, and it holds significant potential for driving in situ leaching uranium mining toward data-driven, optimization-oriented intelligent operation.
There is still room for expansion in the dimensionality of data indicators for prediction and the coverage of mining area samples in this study’s model, and the depth of integration between the model and professional mechanistic knowledge, such as geological fluid dynamics, remains insufficient. Future work should focus on integrating the model with the full-process environmental impact assessment of in situ leaching uranium mining, quantifying the long-term effects of injection strategies on ecosystems such as groundwater, and exploring interdisciplinary integration with hydrological simulation and ecological risk assessment, so as to provide more systematic guarantees for in situ leaching uranium mining technology in moving toward a high-quality development direction featuring low environmental impact and high resource utilization.

Author Contributions

Conceptualization, Z.L.; methodology, Y.Z. and Z.W.; software, H.Z.; validation, Z.W.; formal analysis, Z.L., Z.J. and Y.Z.; investigation, Z.J.; data curation, Y.Z.; writing—original draft preparation, Z.L., Z.J. and H.Z.; writing—review and editing, Z.L. and Z.J.; visualization, H.Z.; supervision, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the China Uranium Industry Corporation—East China University of Technology State Key Laboratory of Nuclear Resources and Environment Joint Innovation Fund (No. 2022NRE-LH-14), the Jiangxi Provincial Natural Science Foundation (No. 20242BAB25084), the Ministry of Education Engineering Research Center of Nuclear Technology Application Fund (No. HJSJYB2021-12), and the Science and Technology Research Project of Jiangxi Provincial Department of Education (No. GJJ2200728).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors are grateful to the anonymous reviewers for their helpful comments on the manuscript.

Conflicts of Interest

The authors declare that this study received funding from China Uranium Industry Corporation. The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication.

References

  1. Song, Y.; Deng, B.; Wang, K.; Zhang, Y.; Gao, J.; Cheng, X. Highly-Efficient Adsorbent Materials for Uranium Extraction from Seawater. J. Environ. Chem. Eng. 2024, 12, 113967. [Google Scholar] [CrossRef]
  2. Li, C.; Zeng, S. The Progress of the Research on Comprehensive Treatment of Groundwater Pollution from In-situ Leaching of Uranium. J. Libr. Inf. Sci. 2011, 21, 176–178, 206. [Google Scholar] [CrossRef]
  3. Brown, S.H. Occupational Radiation Protection Aspects of Alkaline Leach Uranium in Situ Recovery (ISR) Facilities in the United States. Health Phys. 2019, 117, 106–113. [Google Scholar] [CrossRef]
  4. Li, H.; Muhammad, A.M.; Tang, Z. Variation of Groundwater and Mineral Composition of in Situ Leaching Uranium in Bayanwula Mining Area, China. PLoS ONE 2024, 19, e0303595. [Google Scholar] [CrossRef]
  5. Hao, X.; Zhongbao, R.; Xinyang, L. Global Uranium Production Cost and Prospect of Supply and Demand Situation. Min. Res. Dev. 2019, 39, 148–152. [Google Scholar] [CrossRef]
  6. Campbell, K.M.; Gallegos, T.J.; Landa, E.R. Biogeochemical Aspects of Uranium Mineralization, Mining, Milling, and Remediation. Appl. Geochem. 2015, 57, 206–235. [Google Scholar] [CrossRef]
  7. Seredkin, M.; Zabolotsky, A.; Jeffress, G. In Situ Recovery, an Alternative to Conventional Methods of Mining: Exploration, Resource Estimation, Environmental Issues, Project Evaluation and Economics. Ore Geol. Rev. 2016, 79, 500–514. [Google Scholar] [CrossRef]
  8. Mukhamediev, R.I.; Kuchin, Y.; Amirgaliyev, Y.; Yunicheva, N.; Muhamedijeva, E. Estimation of Filtration Properties of Host Rocks in Sandstone-Type Uranium Deposits Using Machine Learning Methods. IEEE Access 2022, 10, 18855–18872. [Google Scholar] [CrossRef]
  9. Merembayev, T.; Yunussov, R.; Yedilkhan, A. Machine Learning Algorithms for Stratigraphy Classification on Uranium Deposits. Procedia Comput. Sci. 2019, 150, 46–52. [Google Scholar] [CrossRef]
  10. Amirgaliev, E.; Isabaev, Z.; Iskakov, S.; Kuchin, Y.; Muhamediyev, R.; Muhamedyeva, E.; Yakunin, K. Recognition of Rocks at Uranium Deposits by Using a Few Methods of Machine Learning. In Soft Computing in Machine Learning; Rhee, S.-Y., Park, J., Inoue, A., Eds.; Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2014; Volume 273, pp. 33–40. ISBN 978-3-319-05532-9. [Google Scholar]
  11. Mukhamediev, R.I.; Kuchin, Y.; Popova, Y.; Yunicheva, N.; Muhamedijeva, E.; Symagulov, A.; Abramov, K.; Gopejenko, V.; Levashenko, V.; Zaitseva, E.; et al. Determination of Reservoir Oxidation Zone Formation in Uranium Wells Using Ensemble Machine Learning Methods. Mathematics 2023, 11, 4687. [Google Scholar] [CrossRef]
  12. Li, Z.; Su, X.; Jiao, Y.; Zhang, Y.; Qiu, Y.; Hu, X. A New Comprehensive Model to Simulate and Optimize Fluid Flow in Complex Well—Formation System for In Situ Leaching Uranium. Energy Sci. Eng. 2024, 13, 1089–1102. [Google Scholar] [CrossRef]
  13. Jia, M.; Luo, B.; Lu, F.; Yang, Y.; Chen, M.; Zhang, C.; Xu, Q. Improved FMM for Well Locations Optimization in In-Situ Leaching Areas of Sandstone Uranium Mines. Nucl. Eng. Technol. 2024, 56, 3750–3757. [Google Scholar] [CrossRef]
  14. Yu, D.; Luo, Y.; Liang, D.; Li, L. Predicting the Variation of Uranium Leaching Metal Content in Ground-Leaching Process Based on Machine Learning Methods. Nonferrous Met. Metall. 2024, 92–98. [Google Scholar] [CrossRef]
  15. Lei, L.; Lei, Z. Neural Network Predicting Model for Uranium Concentration of Acid In-situ Leach Liquor. Min. Metall. Eng. 2007, 27, 17–20. [Google Scholar] [CrossRef]
  16. Yıldız, T.D.; Tombal-Kara, T.D. Challenges and Recovery Opportunities in Waste Management during the Mining and Enrichment Processes of Ores Containing Uranium and Thorium—A Review. Gospod. Surowcami Miner.-Miner. Resour. Manag. 2024, 40, 25–62. [Google Scholar] [CrossRef]
  17. Hatzilyberis, K.; Tsakanika, L.-A.; Lymperopoulou, T.; Georgiou, P.; Kiskira, K.; Tsopelas, F.; Ochsenkühn, K.-M.; Ochsenkühn-Petropoulou, M. Design of an Advanced Hydrometallurgy Process for the Intensified and Optimized Industrial Recovery of Scandium from Bauxite Residue. Chem. Eng. Process.-Process Intensif. 2020, 155, 108015. [Google Scholar] [CrossRef]
  18. Li, C. Research on Stock Price Prediction and Quantitative Stock Selection Based on CNN-LSTM. Master's Thesis, Northwest University, Kirkland, WA, USA, 2022. [Google Scholar]
  19. Abdulnabi, A.H.; Wang, G.; Lu, J.; Jia, K. Multi-Task CNN Model for Attribute Prediction. IEEE Trans. Multimed. 2015, 17, 1949–1959. [Google Scholar] [CrossRef]
  20. Zhang, J.; Li, S. Air Quality Index Forecast in Beijing Based on CNN-LSTM Multi-Model. Chemosphere 2022, 308, 136180. [Google Scholar] [CrossRef]
  21. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  22. Di, J.; Tang, J.; Wu, J.; Wang, K.; Ren, Z.; Zhang, M.; Zhao, J. Research Progress in the Applications of Convolutional Neural Networks in Optical Information Processing. Laser Optoelectron. Prog. 2021, 58, 9–35. [Google Scholar] [CrossRef]
  23. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  24. Farzad, A.; Mashayekhi, H.; Hassanpour, H. A Comparative Performance Analysis of Different Activation Functions in LSTM Networks for Classification. Neural Comput. Appl. 2019, 31, 2507–2521. [Google Scholar] [CrossRef]
  25. Lin, Z.; Cheng, L.; Huang, G. Electricity Consumption Prediction Based on LSTM with Attention Mechanism. IEEJ Trans. Electr. Electron. Eng. 2020, 15, 556–562. [Google Scholar] [CrossRef]
  26. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
  27. Tian, L.; Feng, L.; Yang, L.; Guo, Y. Stock Price Prediction Based on LSTM and LightGBM Hybrid Model. J. Supercomput. 2022, 78, 11768–11793. [Google Scholar] [CrossRef]
  28. Schafer, J.L. Multiple Imputation: A Primer. Stat. Methods Med. Res. 1999, 8, 3–15. [Google Scholar] [CrossRef]
  29. Munandar, T.A.; Sumiati, S.; Rosalina, V. Pattern of Symptom Correlation on Type of Heart Disease Using Approach of Pearson Correlation Coefficient. IOP Conf. Ser. Mater. Sci. Eng. 2020, 830, 022086. [Google Scholar] [CrossRef]
  30. Ashok Kumar, J.; Abirami, S. Aspect-Based Opinion Ranking Framework for Product Reviews Using a Spearman’s Rank Correlation Coefficient Method. Inf. Sci. 2018, 460–461, 23–41. [Google Scholar] [CrossRef]
  31. Spearman, C. The Proof and Measurement of Association between Two Things. Am. J. Psychol. 1987, 100, 441–471. [Google Scholar] [CrossRef]
  32. Lin, Z.; Fan, Y.; Tan, J.; Li, Z.; Yang, P.; Wang, H.; Duan, W. Tool Wear Prediction Based on XGBoost Feature Selection Combined with PSO-BP Network. Sci. Rep. 2025, 15, 3096. [Google Scholar] [CrossRef] [PubMed]
  33. Liu, P.; Liu, D.; Li, Y.; Ye, S.; Su, D. Attention-Based ResNet for Radiation Pattern Prediction of Phased Array Antenna. IEEE Antennas Wirel. Propag. Lett. 2024, 23, 4453–4457. [Google Scholar] [CrossRef]
  34. Lin, J.; Lin, W.; Lin, W.; Liu, T.; Wang, J.; Jiang, H. Multi-Objective Cooling Control Optimization for Air-Liquid Cooled Data Centers Using TCN-BiGRU-Attention-Based Thermal Prediction Models. Build. Simul. 2024, 17, 2145–2161. [Google Scholar] [CrossRef]
  35. Chen, J.; Chen, D.; Jiang, H.; Miao, X.; Yin, C. Skeleton-Based 3D Human Pose Estimation with Low-Resolution Infrared Array Sensor Using Attention Based CNN-BiGRU. Int. J. Mach. Learn. Cybern. 2024, 15, 2049–2062. [Google Scholar] [CrossRef]
Figure 1. LSTM neural network architecture diagram.
Figure 2. Attention mechanism architecture diagram.
Figure 3. Architecture diagram of the CNN–LSTM model with integrated attention mechanism.
Figure 4. Flowchart of the fusion model for leaching rate prediction.
Figure 5. Two heatmaps of correlation coefficients: (a) Pearson correlation coefficient heatmap; (b) Spearman's rank correlation coefficient heatmap.
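The two heatmaps in Figure 5 rest on the Pearson and Spearman coefficients, which can be sketched in plain Python (hand-rolled here rather than using a statistics library; the simple ranking below assumes no tied values):

```python
def pearson(x, y):
    # Pearson: linear correlation of the raw values, in [-1, 1]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman: Pearson correlation applied to the ranks of the values,
    # so it captures any monotonic (not just linear) relationship.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)  # no tie handling: assumes distinct values
        return r
    return pearson(ranks(x), ranks(y))
```

For example, a perfectly monotonic but nonlinear pair such as `x = [1, 2, 3, 4]`, `y = [1, 4, 9, 16]` yields a Spearman coefficient of exactly 1 while its Pearson coefficient falls below 1, which is why the two heatmaps in Figure 5 can differ.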
Figure 6. Correlation analysis of uranium concentration in the production solution.
Figure 7. Ablation study: (a) comparison between CNN model predictions and ground truth, (b) comparison between LSTM model predictions and ground truth, (c) comparison between CNN–LSTM model predictions and ground truth, (d) comparison between CNN–LSTM–LightGBM model predictions and ground truth.
Figure 8. Comparison of ablation experiment metrics.
Figure 9. Model predictions vs. ground truth of uranium concentration in mining areas: (a) MA001 mining area, (b) MA003 mining area, (c) MA004 mining area, (d) MA005 mining area, (e) MA006 mining area, (f) MA007 mining area, (g) MA008 mining area, (h) MA009 mining area, (i) MA010 mining area, (j) MA011 mining area.
Table 1. Feature importance table under the XGBoost algorithm.

Feature | Importance
Daily lixiviant volume | 0.3267
Daily production volume | 0.2548
H2SO4 lixiviant concentration | 0.1479
U concentration in the lixiviant | 0.0954
Metal concentration in the lixiviant | 0.0723
Metal concentration in the production solution | 0.0529
H2SO4 production concentration | 0.0499
Table 2. Comparison of ablation experiment metrics.

Model | MAE/% | MAPE/% | RMSE/%
CNN | 0.686 | 5.722 | 1.002
LSTM | 0.605 | 5.114 | 0.876
CNN–LSTM | 0.372 | 3.009 | 0.502
Proposed method | 0.085 | 0.833 | 0.201
Table 3. Comparative experiments.

Model | MAE/% | MAPE/% | RMSE/%
SVR | 4.054 | 42.562 | 4.846
MLP | 2.877 | 23.069 | 3.496
KNN | 2.284 | 17.637 | 3.159
Attention-based ResNet | 1.517 | 11.406 | 2.247
Attention-based TCN-BiGRU | 1.415 | 10.754 | 1.914
Attention-based CNN-BiGRU | 0.222 | 1.757 | 0.355
Proposed method | 0.085 | 0.833 | 0.201
Table 4. Evaluation metrics for mining area prediction results.

Mining Area | MAE/% | MAPE/% | RMSE/%
MA001 | 0.052 | 0.497 | 0.142
MA003 | 0.112 | 0.985 | 0.249
MA004 | 0.094 | 0.865 | 0.211
MA005 | 0.082 | 0.697 | 0.198
MA006 | 0.080 | 0.655 | 0.174
MA007 | 0.061 | 0.427 | 0.142
MA008 | 0.047 | 0.290 | 0.124
MA009 | 0.039 | 0.191 | 0.108
MA010 | 0.097 | 0.448 | 0.216
MA011 | 0.044 | 0.166 | 0.105
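The per-area results in Table 4 can be summarized with simple column averages. A minimal sketch (metric values copied from Table 4; note that averaging per-area RMSE values is only indicative, since a properly pooled RMSE would require the underlying residuals):

```python
# Per-mining-area metrics from Table 4: (MAE, MAPE, RMSE), all in %
areas = {
    "MA001": (0.052, 0.497, 0.142),
    "MA003": (0.112, 0.985, 0.249),
    "MA004": (0.094, 0.865, 0.211),
    "MA005": (0.082, 0.697, 0.198),
    "MA006": (0.080, 0.655, 0.174),
    "MA007": (0.061, 0.427, 0.142),
    "MA008": (0.047, 0.290, 0.124),
    "MA009": (0.039, 0.191, 0.108),
    "MA010": (0.097, 0.448, 0.216),
    "MA011": (0.044, 0.166, 0.105),
}

def column_mean(idx):
    # Mean of one metric column (0 = MAE, 1 = MAPE, 2 = RMSE) across areas
    return sum(v[idx] for v in areas.values()) / len(areas)

mean_mae, mean_mape, mean_rmse = (column_mean(i) for i in range(3))
print(round(mean_mae, 4), round(mean_mape, 4), round(mean_rmse, 4))
```

Across the ten areas the means come out near the validation-set figures reported in the conclusion, suggesting the model's accuracy is fairly uniform rather than driven by one or two easy areas.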

Share and Cite

Liu, Z.; Jin, Z.; Zhou, Y.; Wei, Z.; Zhang, H. Research on the Prediction of Liquid Injection Volume and Leaching Rate for In Situ Leaching Uranium Mining Using the CNN–LSTM–LightGBM Model. Processes 2025, 13, 3013. https://doi.org/10.3390/pr13093013
