Article

Stacking Ensemble Learning-Assisted Simulation of Plasma-Catalyzed CO2 Reforming of Methane

by Jie Pan, Xin Qiao, Chunlei Zhang, Bin Li, Lun Li, Guomeng Li and Shaohua Qin *
School of Physics and Electronics, Shandong Normal University, Jinan 250014, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(7), 1329; https://doi.org/10.3390/electronics14071329
Submission received: 15 February 2025 / Revised: 22 March 2025 / Accepted: 25 March 2025 / Published: 27 March 2025

Abstract:
Plasma catalysis can significantly enhance the energy conversion efficiency of the CO2 reforming of methane. Simulation is an effective method for studying the internal principles and operational mechanisms of the plasma-catalyzed CO2 reforming of methane; however, it suffers from problems such as poor convergence and high computational complexity. To address these challenges, a stacking ensemble learning-assisted simulation of the plasma-catalyzed CO2 reforming of methane is proposed. The stacking ensemble model, trained on limited converged simulation data, interpolates non-convergent points by leveraging the combined predictive power of multiple base models (KNN, DT, XGBoost). This approach keeps predictions within the training data's parameter space, minimizing extrapolation risks. We utilize Bayesian optimization and stacking ensemble methods to improve the accuracy and generalization capability of the model. Experimental results show that the model provides accurate CO density values under different E/N and CO2 gas-feeding ratio conditions. Comparative analysis further demonstrates that Bayesian optimization and ensemble techniques effectively improve model accuracy. The model combines advanced machine learning techniques with traditional simulation techniques: the time needed to predict particle density under new experimental conditions is reduced from 24 min in numerical simulation to a few seconds, 99.8% less than traditional 0D simulations, while maintaining high prediction accuracy (R² = 0.9795).

1. Introduction

With the escalating global energy demand and intensifying environmental concerns, efficient resource utilization and pollution reduction have become important issues in scientific research [1,2,3]. In this context, the rapid development of plasma-catalysis technology has provided innovative solutions to address these pressing challenges [4,5]. Plasma-catalysis technology, specifically via nanosecond-pulsed dielectric barrier discharge (DBD), uniquely activates CH4 and CO2 through non-equilibrium electron collisions, enabling low-temperature methane dry reforming with enhanced selectivity toward CO and value-added chemicals. Pulsed DBD is increasingly recognized as a promising technique for generating atmospheric non-equilibrium plasmas [6,7,8]. Reza et al. have extensively studied the effects of various catalysts on plasma discharge in DBD reactors, optimizing hydrocarbon selectivity [9]. Asif et al. have reviewed the applications and challenges of DBD technology in the CO2 reforming of methane, analyzing how reactor configurations and operating parameters have influenced product distribution and laying the groundwork for the CO2 reforming of methane technology commercialization [10].
Numerical simulation has been an effective method for studying plasma catalysis, offering insights into physical information that is difficult to measure experimentally, while significantly reducing costs and improving efficiency. However, multi-timescale simulations often suffer from poor convergence and high computational resource demands. Mathews et al. have proposed a physics-informed deep learning framework based on partial differential equation constraints, using local observations of plasma electron density and temperature to elucidate the dynamics of turbulent plasmas, providing a novel perspective for plasma diagnostics [11]. Zhong and colleagues proposed a deep learning-based framework for solving partial differential equations in thermal plasma simulations. Their work specifically addressed one-dimensional arc decay under three distinct scenarios: steady-state arcs, transient arcs without radial velocity, and transient arcs with radial velocity. Unlike Mathews et al.'s physics-informed approach for turbulent edge plasmas, Zhong et al. directly model spatiotemporal plasma dynamics using feedforward neural networks, achieving high accuracy in predicting temperature and velocity profiles without explicit reliance on fluid theory constraints. This approach uniquely handles transient thermal plasmas with convection terms, demonstrating robustness in complex scenarios like radial velocity coupling [12]. Zhu et al. have developed a physics-informed deep learning framework to study non-equilibrium discharge plasma systems described by global models [13]. Li et al. proposed a hybrid model combining neural networks and fluid simulations to predict turbulent transport phenomena in plasmas. Experimental validation demonstrated its precise prediction of key turbulent transport features, including dominant turbulence categories and radially averaged fluxes, independent of local gradient parameter variations [14].
By integrating deep neural networks with dielectric barrier discharge techniques, researchers have highlighted the significant potential of machine learning for predicting and optimizing the plasma-catalyzed CO2 reforming of methane and ammonia synthesis simulations [15,16,17].
Machine learning techniques exhibit distinct advantages in dynamic nonlinear image feature modeling with automated architecture adaptation [18], automated high-dimensional feature engineering [19,20], and nonlinear pattern modeling across complex datasets with cross-domain generalization [21,22,23]. However, machine learning models contain numerous parameters that significantly affect results, and traditional manual tuning methods often yield unsatisfactory results. To address this issue, Bayesian optimization (BO), an efficient hyperparameter tuning strategy, has found extensive applications in diverse machine learning tasks over recent years and has been shown to significantly improve predictive models' performance [24,25,26,27]. As data science and artificial intelligence have evolved, numerous new algorithms have been proposed to address complex prediction problems [28,29,30,31]. Adaptive stacking ensemble frameworks, by optimizing base models and hyperparameters, have significantly enhanced prediction accuracy and generalization capabilities [32,33,34,35,36]. These algorithms have illustrated the significant potential of ensemble learning in improving prediction accuracy, offering valuable theoretical foundations and technical support for research and applications in related fields [37,38]. In particular, stacking methods have leveraged the strengths of multiple base models to better capture complex data patterns, thereby enhancing overall model performance. This study employs a stacking approach, combined with Bayesian optimization for hyperparameter tuning, to predict key product particle densities in the simulation of the plasma-catalyzed CO2 reforming of methane. Experimental results show that the model performs exceptionally well, exhibiting strong predictive capabilities.

2. Methodology

2.1. The Pulsed Discharge Plasma-Catalytic Kinetics Model

The zero-dimensional (0D) numerical simulation model assumes uniform discharge in the region, converting three-dimensional plasma discharge experiments into 0D kinetic simulations. By defining species, reaction equations, reaction rate coefficients, reduced field strength distribution, and initial and boundary conditions, this model analyzes the time evolution of particles, key radical reactions, and major product pathways during discharge. The primary governing equation for the numerical simulation is the particle density continuity equation (Equation (1)).
$$\frac{dN_i}{dt} = \sum_{j} k_j \left( \prod_{l} N_l \right) \left( \alpha_{ij}^{\mathrm{Right}} - \alpha_{ij}^{\mathrm{Left}} \right) \quad (1)$$
where N_i represents the particle density of the i-th species, k_j denotes the reaction rate coefficient of the j-th plasma reaction, and N_l is the density of the l-th reactant on the left side of the reaction equation. The coefficients α_ij^Right and α_ij^Left represent the stoichiometric coefficients of the i-th species on the right and left sides of the j-th reaction equation. We compute reaction rate coefficients for electron-impact phenomena using the BOLSIG+ Boltzmann solver, a tool that accounts for electron temperature and activation energy dependencies. For neutral chemical species and ionized reactions, the corresponding rate coefficients are either derived from the Arrhenius equation or sourced from established references [39,40,41,42,43]. This study focuses on the plasma-catalyzed CO2 reforming of methane, for which a 0D kinetic model is developed using the ZDPlasKin package based on nanosecond-pulsed DBD plasma-catalysis experiments. The model couples gas-phase and surface reactions in the plasma-catalysis process, providing a systematic description of the microscopic processes during discharge. In constructing the model, a uniform electric field is assumed in the discharge region, with reaction species, initial conditions, boundary conditions, pulse voltage, and reaction temperature defined according to the experimental setup. The model also incorporates reactor volume and surface area via surface reaction rates to represent the plasma discharge structure, while catalyst properties are reflected through parameters like specific surface area, active site density, and site hopping rate. The adsorption reaction rate constant k_ads, Eley–Rideal reaction rate constant k_ER, dissociative adsorption rate constant k_dad, and Langmuir–Hinshelwood reaction rate constant k_LH are calculated using Equations (2)–(5), ensuring that the zero-dimensional model accurately captures the complex three-dimensional plasma-catalysis reaction mechanisms [44,45,46].
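As an illustration of how Equation (1) is integrated in practice, the following is a minimal sketch (not the paper's actual 106-species mechanism) that assembles the right-hand side from left/right stoichiometric matrices for a hypothetical two-species, two-reaction scheme and integrates it with SciPy; the rate coefficients are arbitrary illustrative values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical toy scheme standing in for the full mechanism:
#   R1: 2A -> B   (rate coefficient k1)
#   R2: B  -> 2A  (rate coefficient k2)
# Rows are species (A, B); columns are reactions (R1, R2).
alpha_left = np.array([[2, 0],
                       [0, 1]])
alpha_right = np.array([[0, 2],
                        [1, 0]])
k = np.array([1e-3, 1e-1])  # arbitrary illustrative rate coefficients

def rhs(t, N):
    # Reaction rates: R_j = k_j * prod_l N_l^(alpha_left[l, j])
    rates = k * np.prod(N[:, None] ** alpha_left, axis=0)
    # Equation (1): dN_i/dt = sum_j (alpha_right - alpha_left)_{ij} * R_j
    return (alpha_right - alpha_left) @ rates

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], method="LSODA", rtol=1e-9, atol=1e-12)
```

Stiff, multi-timescale chemistry of this kind is why implicit solvers (here LSODA) and tight tolerances are needed, and is also the origin of the convergence problems discussed later in this section.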
$$k_{\mathrm{ads}} = \left[ \frac{\Lambda^{2}}{D} + \frac{V}{A}\,\frac{2\left(2-\sigma_{\mathrm{ads}}\right)}{\bar{v}\,\sigma_{\mathrm{ads}}} \right]^{-1} \frac{1}{S_T}\,\frac{V}{A} \quad (2)$$

$$k_{\mathrm{ER}} = \left[ \frac{\Lambda^{2}}{D} + \frac{V}{A}\,\frac{2\left(2-\sigma_{\mathrm{ER}}\right)}{\bar{v}\,\sigma_{\mathrm{ER}}} \right]^{-1} \frac{1}{S_T}\,\frac{V}{A} \quad (3)$$

$$k_{\mathrm{dad}} = \left[ \frac{\Lambda^{2}}{D} + \frac{V}{A}\,\frac{2\left(2-\sigma_{\mathrm{dad}}\right)}{\bar{v}\,\sigma_{\mathrm{dad}}} \right]^{-1} \frac{1}{S_T^{2}}\left(\frac{V}{A}\right)^{2} \quad (4)$$

$$k_{\mathrm{LH}} = \frac{\nu\,V}{4\,S_T\,A}\,\exp\!\left( -\frac{E_a + E_d}{k_B T_{\mathrm{wall}}} \right) \quad (5)$$
where Λ = 0.2d/2.405 represents the diffusion length, d is the reactor radius, D is the diffusion coefficient, and V/A is the volume-to-surface-area ratio of the discharge [39,44]. v̄ is the average thermal velocity of the gas molecules, σ_wall is the wall loss probability for excited species, σ_dad is the dissociative adsorption probability of particles, and σ_ads and σ_ER represent the adsorption and recombination probabilities, respectively [46]. ν is the surface diffusion hopping frequency, S_T is the surface active site density of the catalyst, E_a is the reaction activation energy, and E_d is the diffusion energy barrier of the reaction.
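Taking the forms of Equations (2) and (5) given above, the rate constants can be evaluated directly; the sketch below uses hypothetical reactor and catalyst values (none are taken from the paper) purely to show the dependence structure:

```python
import numpy as np

# Hypothetical illustrative values (not from the paper), SI units
d = 0.5e-2                  # reactor radius [m]
Lam = 0.2 * d / 2.405       # diffusion length, as defined in the text
D = 2e-5                    # diffusion coefficient [m^2/s]
V_over_A = 1e-3             # volume-to-surface-area ratio V/A [m]
v_bar = 400.0               # mean thermal speed of gas molecules [m/s]
sigma_ads = 0.1             # adsorption probability
S_T = 1e19                  # surface active site density [m^-2]

# Equation (2): diffusion and surface-flux terms acting in series,
# normalized by the volumetric density of surface sites
nu_loss = 1.0 / (Lam**2 / D + V_over_A * 2.0 * (2.0 - sigma_ads) / (v_bar * sigma_ads))
k_ads = nu_loss * V_over_A / S_T            # [m^3/s]

# Equation (5): Langmuir-Hinshelwood rate with combined activation and
# diffusion barriers (hypothetical barrier heights in eV, converted to J)
nu_hop = 1e13                               # surface hopping frequency [1/s]
eV = 1.602e-19
E_a, E_d = 0.5 * eV, 0.3 * eV
k_B, T_wall = 1.381e-23, 400.0
k_LH = nu_hop * V_over_A / (4.0 * S_T) * np.exp(-(E_a + E_d) / (k_B * T_wall))
```

The series form in Equation (2) makes the physical limits visible: when diffusion is slow, the Λ²/D term dominates; when the sticking probability σ_ads is small, the flux term dominates.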
The current state of related work is summarized in Table 1. In plasma-catalyzed methane dry reforming, the feed gas contains only CO2 and CH4; H2 is produced as one of the reaction products and can be dissociated by electron collision to form radicals. The core objective of this process is to convert CH4 and CO2 into syngas or high-value hydrocarbons, with reaction pathways synergistically regulated by plasma parameters and catalyst properties. Microwave plasma, arc discharge plasma, radio frequency plasma, and nanosecond-pulsed dielectric barrier discharge (ns-pulsed DBD) can all catalyze CO2/CH4 reforming reactions. Among them, ns-pulsed DBD generates uniform micro-discharges through nanosecond-scale high-voltage pulses, simultaneously featuring high electron density and low gas temperature. Its high energy efficiency, controllable reaction pathways, and resistance to carbon deposition give it significant advantages in the directional synthesis of syngas (CO + H2) and C2+ hydrocarbons.
The model includes 106 gas-phase species and 19 surface species, as presented in Table 2. Excited species are neutral states in which molecules absorb energy, causing electron transitions without ionization, while free radicals contain unpaired electrons with structural modifications. The model investigates the key species and reaction mechanisms in the plasma-catalyzed CO2 reforming of methane and analyzes the formation pathways of the main products. It uses the reduced electric field (E/N) and the CO2/CH4 feed ratio as critical control parameters. These parameters influence the particle density and reaction rates during the discharge process, ultimately affecting feedstock conversion, product distribution, and yield. Through model simulations, we can analyze the electron energy loss distribution, reaction pathways, and the contributions of different mechanisms to CO2 and CH4 conversion, particularly in the formation of key products such as CO. Our research group has conducted extensive tests on this model. Previous studies [15,47] have presented comprehensive numerical simulations of plasma-catalyzed reactions. These studies not only offer theoretical insights for optimizing experimental conditions but also provide essential data for plasma reactor design and process improvement, guiding further experimental refinement.
The comprehensive efficiency of CO generation, relative to reactant conversion, is defined as the ratio of CO yield to the conversion amounts of methane and CO2 (Equation (6)). Figure 1 illustrates the relationship between CO generation efficiency, the E/N, and the CO2 feed ratio. The calculated CO generation efficiencies aim to guide the determination of optimal plasma conditions under varying operating parameters. CO generation efficiency generally increases with higher E/N values and lower CO2 feeding ratios, consistent with the expected behavior under higher input power.
$$\eta_{\mathrm{CO}} = \frac{\epsilon\, n(\mathrm{CO})}{\Delta n(\mathrm{CH_4}) + \Delta n(\mathrm{CO_2})} \quad (6)$$
where ϵ represents a coefficient, with a value of 1 × 10⁶ here; n(CO) is the yield of CO, Δn(CH4) is the conversion amount of methane, and Δn(CO2) is the conversion amount of CO2.
This ratio reflects the comprehensive efficiency of CO generation relative to reactant conversion within the reaction system. Monitoring variations in this ratio under different experimental conditions allows for the evaluation of side reaction occurrence. For example, a significant decrease in this ratio under varying E/N or CO2 feed ratios prompts a further analysis of potential side-reaction pathways and their effects on the main reaction. This facilitates a deeper understanding of the reaction mechanism and lays the groundwork for optimizing reaction conditions. Adjusting experimental parameters to suppress side reactions can improve the selectivity of the main reaction and enhance CO generation efficiency. This ratio-oriented strategy for optimizing reaction conditions aids in efficiently identifying optimal conditions and offering feasible combinations of parameters for practical scenarios, including industrial applications.
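Equation (6) reduces to a one-line helper; the call below uses made-up yield and conversion numbers solely to illustrate the ratio:

```python
def eta_co(n_co, dn_ch4, dn_co2, eps=1e6):
    """Comprehensive CO generation efficiency, Equation (6)."""
    return eps * n_co / (dn_ch4 + dn_co2)

# Made-up example: 2.0 units of CO from 1e6 + 1e6 converted reactant molecules
efficiency = eta_co(2.0, 1.0e6, 1.0e6)  # -> 1.0
```

Tracking this value over a grid of E/N and CO2 feed ratios is how the side-reaction screening described above would be carried out in practice.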
While 0D numerical simulations have proven useful in simplifying the complex kinetics of the plasma-catalyzed CO2 reforming of methane, their significant limitations cannot be overlooked. First, the multi-timescale nature of kinetic processes significantly increases the complexity of numerical simulations. The timescales of electron collisions and species transformations range from nanoseconds to seconds, necessitating extremely small time steps to maintain accuracy. This results in both time-consuming calculations and non-convergence in simulations. Additionally, the large number of reactive species and reaction equations causes a sharp increase in computational resource demands when exploring different experimental conditions and parameter combinations, further exacerbating the scalability challenges of numerical simulations.

2.2. Stacking Ensemble Model

Variations in experimental conditions during CO2 reforming of methane experiments significantly affect product distribution. Simulation models provide an effective method to study these effects in greater detail. However, the complex interactions between multiple physical fields often lead to issues with data point convergence in simulations. Traditional grid-based simulation methods are constrained by their complexity, limiting grid refinement and making it challenging to ensure the continuity of experimental data points.
This study proposes a stacking ensemble model to assist in simulating the plasma-catalyzed CO2 reforming of methane to address this challenge. A 0D plasma-catalyzed simulation model is designed to investigate the CO2 reforming of methane process. The stacking ensemble model is trained on data obtained from the simulation model. The trained stacking ensemble model computes points where the simulation model fails to converge and points not covered by the simulation grid. The stacking ensemble model effectively resolves issues of poor convergence and limited grid refinement inherent in traditional simulation methods. Figure 2 illustrates the machine learning-assisted plasma-catalyzed simulation model.
Traditional simulation models can calculate the temporal variations of various species based on different experimental parameters. Machine learning can be trained using data obtained from traditional simulation models, allowing it to predict new results for particles under different experimental conditions. Compared to traditional simulation models, machine learning is not constrained by grid partitioning, allowing for calculations over a broader range and enabling faster computational speeds. The stacking ensemble learning-assisted simulation of the plasma-catalyzed CO2 reforming of methane integrates machine learning techniques with traditional simulation methods, leveraging the strong theoretical foundation of the latter and the data processing capabilities of the former. This study employs a stacking ensemble learning architecture combined with Bayesian optimization techniques to improve the performance of predictive models, as shown in Figure 2.
The field of machine learning encompasses numerous models, each demonstrating considerable performance variations across various datasets. To improve accuracy and generalization, the stacking ensemble learning method integrates multiple models, utilizing their respective strengths. This approach employs a two-level learner structure: the initial layer is composed of several foundational models, while the second level utilizes the outputs of these first-level learners, which a meta-estimator further optimizes. A notable strength of this approach is its capacity to fully exploit the diversity of various models, which are based on differing assumptions and learning capabilities, thereby handling complex and heterogeneous data more effectively. Some models excel in specific data subsets, while others perform better under varying conditions. By integrating the predictions of these base models using a meta-estimator, the stacking ensemble model maximizes the advantages of each model, thereby improving overall predictive performance.
Stacking ensemble learning models find widespread application across diverse fields. Uyeol et al. designed a two-stage stacked ensemble method combining random forests, support vector machines, and CatBoost, demonstrating superior performance compared to individual models [32]. Huang et al. formulated a self-adjusting stacked ensemble architecture aimed at wind and solar power prediction, effectively improving predictive accuracy and generalization by optimizing base models and hyperparameters [33]. Additionally, Shu et al. boosted the forecasting precision of shear capacity in RC beams using a multilayer stacking ensemble model [35]. Reza et al. optimized the costly SEAR program with a stacking ensemble model, achieving optimal remediation strategies in the field of environmental restoration [36].
In the numerical simulation of the plasma-catalyzed CO2 reforming of methane, the stacking ensemble demonstrates unique applicability. This reaction involves various chemical species and pathways, characterized by non-linearity, high dimensionality, and multiple timescales. Traditional single models struggle to comprehensively capture the complex interactions occurring during the reaction, while the stacking ensemble effectively captures the key features of this intricate system by integrating multiple types of base models. This research explores the plasma-catalyzed CO2 reforming of methane via a hierarchical stacking ensemble learning framework, as presented in Figure 2.
To ensure robust model evaluation, a five-fold cross-validation approach was implemented, where the dataset was partitioned into five equal subsets. During each iteration, four subsets were utilized to train the base models (KNN, DT, XGBoost) and the meta-model (linear regression). The remaining subset served as the validation set for assessing intermediate performance and preventing overfitting. Additionally, a separate held-out test set was reserved to evaluate the generalization performance of the final stacked model without any involvement in the cross-validation process. This method ensures that the model is validated on unseen data during training and provides an unbiased estimate of its predictive ability. For base model predictions, during each cross-validation fold, the base models were trained on the training folds and their predictions were generated exclusively on the validation fold. These predictions were then used as input variables for the meta-model. The meta-model was trained specifically on combined predictions from the base models rather than the raw input features, ensuring that it learns to combine base model outputs without direct access to the original data. Finally, the final stacked model was evaluated on a completely unseen test set that was never used during training or cross-validation, guaranteeing that the meta-model does not inadvertently learn patterns from the test data and preserving the integrity of the validation process.
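The training procedure described above maps closely onto scikit-learn's StackingRegressor, which generates out-of-fold base-model predictions for the meta-model via internal cross-validation. The sketch below is a simplified stand-in: the data are synthetic (two inputs playing the roles of E/N and the CO2 feed ratio), GradientBoostingRegressor substitutes for XGBoost to keep the example dependency-free, and hyperparameter values follow Table 3 where available:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the simulation dataset: two inputs (E/N, CO2 ratio)
rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 2))
y = np.sin(4.0 * X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.normal(size=400)

# Held-out test set, never seen during training or cross-validation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("knn", KNeighborsRegressor(n_neighbors=12)),           # Table 3 value
        ("dt", DecisionTreeRegressor(max_depth=75, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),      # XGBoost stand-in
    ],
    final_estimator=LinearRegression(),  # meta-model sees base predictions only
    cv=5,  # five-fold CV produces out-of-fold predictions for the meta-model
)
stack.fit(X_tr, y_tr)
r2_test = r2_score(y_te, stack.predict(X_te))
```

The `cv=5` argument is what prevents leakage: each base model predicts only on folds it was not trained on, so the linear meta-model cannot exploit memorized training targets.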
Decision trees (DT) are widely utilized in supervised learning, employing a simple yet effective hierarchical tree structure for predictions. When constructing a decision tree, three types of nodes must be considered: the root node serves as the starting point of the tree, from which feature selection partitions the data into internal nodes and, ultimately, terminal (leaf) nodes. Internal nodes represent decision points that evaluate the selected variable to facilitate further splits in the tree. For regression problems, the output prediction is the average of the samples in each leaf node, assessed using Root Mean Square Error (RMSE). Regression trees select features at each node to minimize Mean Squared Error (MSE) as much as possible, thereby indirectly achieving automatic feature selection. Since each node evaluates only one feature, decision trees train relatively quickly and remain largely unaffected by input dimensionality.
KNN is classified as a supervised machine learning algorithm widely recognized for its efficiency. Predictions from this algorithm are generated using the distances to the k nearest existing instances from a new instance. The efficiency of KNN arises from its lack of training time, as most calculations occur during the assessment of novel instances. In regression tasks, KNN estimates the value of a novel instance by computing the mean of the k closest training samples.
XGBoost was introduced by Tianqi Chen in 2016 as a scalable gradient boosting framework designed to enhance the computational efficiency and predictive performance of conventional gradient boosting models. Since its inception, XGBoost has gained immense popularity, especially within the competitive machine learning community, often ranking at the top of leaderboards. Owing to its scalability, its authors report training efficiency nearly ten times faster than that of other popular machine learning methods. This efficiency results from a novel learning algorithm that effectively manages sparse data. Additionally, the design of XGBoost facilitates multi-core parallel processing, significantly enhancing speed compared to traditional gradient boosting algorithms.
Stacking ensemble models consist of numerous parameters, and varying configurations can significantly affect performance. Parameter tuning is a vital approach for optimizing these models; however, manual tuning often fails to discover globally best parameter combinations. BO is a powerful global optimization technique, particularly effective for handling costly black-box functions. This method leverages surrogate models to approximate unknown objective functions, facilitating efficient exploration for optimal solutions in the hyperparameter space. Specifically, Bayesian optimization estimates the performance of potential parameter values via the surrogate model and pinpoints parameter points that maximize the acquisition function. The model continuously learns from new observational data, updating the surrogate model and acquisition function to incrementally approach the optimal solution. This process strikes a balance between exploring unknown parameter spaces and exploiting known parameter values near the optimum, gathering the most informative observational data at each iteration to ultimately achieve the best hyperparameter combination.
Bayesian optimization is essential for hyperparameter tuning, providing an efficient solution to the complexities and challenges of manual adjustments. Every machine learning algorithm has unique hyperparameter settings, and proper configuration is crucial for optimizing model performance. Introducing Bayesian optimization streamlines the process, reduces the manual tuning workload, and accelerates model convergence through intelligent search mechanisms.
In this study, Bayesian optimization is applied during the training of base learners in the stacking ensemble learning model. It fine-tunes the hyperparameters of each model, resulting in significant improvements in both accuracy and robustness. This approach provides a powerful tool for optimizing complex model parameters, enabling the model to achieve its optimal state more quickly and accurately when processing complex data. The successful implementation of Bayesian optimization lays a solid foundation for improving the overall performance of predictive models.
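The surrogate-model loop described above can be sketched end to end with a Gaussian-process surrogate and the expected-improvement acquisition function. This is written directly with scikit-learn rather than a dedicated BO library, and the quadratic objective is only a stand-in for a model's validation RMSE over two hyperparameters:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, y_best):
    """EI for minimization: expected amount by which a candidate beats y_best."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_minimize(objective, bounds, n_init=5, n_iter=15, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))   # initial design
    y = np.array([objective(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True, alpha=1e-6)
    for _ in range(n_iter):
        gp.fit(X, y)                                      # update the surrogate
        cand = rng.uniform(lo, hi, size=(256, len(bounds)))
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
        X = np.vstack([X, x_next])                        # most informative point
        y = np.append(y, objective(x_next))
    best = int(np.argmin(y))
    return X[best], y[best]

# Stand-in objective: pretend validation RMSE over two hyperparameters
f = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2
x_best, y_best = bayes_minimize(f, np.array([[-1.0, 1.0], [-1.0, 1.0]]))
```

The EI acquisition encodes the exploration/exploitation balance mentioned above: the (y_best − mu) term rewards candidates the surrogate predicts to be good, while the sigma term rewards candidates the surrogate is uncertain about.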

2.3. Data Preprocessing

The dataset uses Min–Max Scaling for feature normalization. Variables in the original dataset vary in magnitude, and unscaled inputs can lead to biased and inaccurate predictions. In many machine learning algorithms, features with larger magnitudes often receive greater weights, which can negatively affect model performance and efficiency. Therefore, feature scaling is a crucial step in data preprocessing. Feature scaling mitigates the impact of varying scales among features, ensuring that each feature contributes equally within the same range during model training. This study employs the Min–Max normalization method, represented by Equation (7) as follows:
$$x^{*} = \frac{x - \min}{\max - \min} \quad (7)$$
where x * represents the normalized value, x is the original value before normalization, and max and min indicate the extreme values (maximum and minimum) in the original dataset, respectively. Using this method, data from numerical simulations are standardized, ensuring that all features are compared and processed on the same scale.
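Equation (7), applied per feature column, is a few lines of NumPy; the guard for constant columns is our addition, not part of the paper's formulation:

```python
import numpy as np

def min_max_scale(X):
    """Scale each feature column to [0, 1] following Equation (7)."""
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    span = np.where(xmax > xmin, xmax - xmin, 1.0)  # avoid 0/0 on constant columns
    return (X - xmin) / span

# Features of very different magnitudes end up on a common [0, 1] scale
X = np.array([[1.0, 100.0],
              [2.0, 300.0],
              [3.0, 500.0]])
X_scaled = min_max_scale(X)  # -> [[0, 0], [0.5, 0.5], [1, 1]]
```

In a train/test setting, the minimum and maximum would be computed on the training set only and reused for the test set, so that no test-set information leaks into preprocessing.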

2.4. Evaluation Metrics

This study uses three key performance evaluation metrics for the efficiency of the predictive model: RMSE, MAE, and R2, with specific formulas given in Equations (8)–(10). These metrics comprehensively reflect the fit and predictive accuracy of the model. RMSE represents a widely adopted error metric in regression analysis that calculates the difference between forecast and actual results. A smaller RMSE indicates lower prediction error, reflecting better performance. MAE quantifies the average absolute deviation between predicted and observed values, offering a straightforward measure of error severity: lower MAE values indicate better predictive performance. R2 measures the explanatory power over the target variable, with values ranging from 0 to 1. R2 approaching 1 suggests a superior fit to the observed data, with a value of 1 indicating perfect model–data alignment and 0 implying no explanatory power over the dependent variable.
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y - \hat{y}\right)^{2}} \quad (8)$$

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left| y - \hat{y} \right| \quad (9)$$

$$R^{2} = 1 - \frac{\sum_{i=1}^{n}\left(y - \hat{y}\right)^{2}}{\sum_{i=1}^{n}\left(y - \bar{y}\right)^{2}} \quad (10)$$
where n denotes the total count of data instances predicted by the model, y denotes the truth values, y ^ denotes the forecasted values, and y ¯ is the mean of y. These metrics collectively evaluate the comprehensive performance evaluation of the model, aiding in determining its predictive accuracy and robustness under varying conditions.
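Equations (8)-(10) translate directly into NumPy; the toy vectors below are arbitrary and serve only to check the definitions:

```python
import numpy as np

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))            # Equation (8)

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))                    # Equation (9)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot                         # Equation (10)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.0, 2.0, 4.0])
# rmse -> sqrt(1/3) ~= 0.577, mae -> 1/3, r_squared -> 0.5
```

RMSE penalizes large errors more heavily than MAE because of the squaring, which is why the two metrics are reported together.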

3. Results

3.1. Hyperparameter Tuning with Bayesian Optimization

Selecting appropriate hyperparameters is crucial for effectively evaluating the performance of individual models. However, the complexity of the models and the abundance of parameters in this study pose significant challenges for traditional manual tuning methods. Bayesian optimization emerges as a preferred approach due to its efficient search capabilities in complex parameter spaces. It utilizes prior information from current iterations to intelligently select hyperparameters, significantly reducing computation time while improving accuracy.
BO is used to adjust the key parameters of the fundamental predictive models. Parameter ranges are defined for each model, and the Bayesian optimization algorithm is employed to find the best hyperparameter configurations. The optimization results and corresponding hyperparameter configurations are shown in Table 3. The DT model includes parameters such as maximum depth (max depth), minimum samples required to split a node (min samples split), and minimum samples at leaf nodes (min samples leaf). The KNN model focuses on parameters such as the number of neighbors (n neighbors), the distance metric parameter (p), and the specific metric method (metric). The XGBoost model involves parameters such as the learning rate (learning rate), the number of weak learners (n estimators), and the maximum tree depth (max depth).
Proper configuration of hyperparameters is essential for achieving optimal performance in predictive models. By employing Bayesian optimization, an intelligent search within defined ranges occurs to identify the best configurations, thereby enhancing model accuracy and robustness. Since the hyperparameters of each model have unique functions and effects, the optimal configurations may vary across different datasets.
BO is a global optimization algorithm based on prior knowledge, engineered to efficiently locate optima in complex optimization problems with restricted evaluation budgets. This study uses RMSE as the objective function for Bayesian optimization to improve model performance. In this process, the BO algorithm continuously updates prior knowledge to identify the best hyperparameter configurations, thereby minimizing RMSE. This process significantly improves the accuracy of the model. Figure 3 shows the optimal hyperparameters found for basic models, with the number of iterations set to 50. Figure 3a–c illustrates the parameter spaces and corresponding RMSE values for the DT, KNN, and XGBoost models, while Figure 3d–f depicts the convergence process of the minimum RMSE.
Specifically, Figure 3a shows the three hyperparameters of the DT model and their corresponding RMSE values; the RMSE is minimized at max depth = 75, min samples split = 2, and min samples leaf = 1. Figure 3d illustrates the convergence of the DT model: the configuration minimizing the objective function is identified at the 11th iteration, after which the result stabilizes, confirming that Bayesian optimization successfully determines the best parameters of the model. Figure 3b shows the three hyperparameters of the KNN model and their corresponding RMSE values; the RMSE is minimized at n neighbors = 12 and p = 2 with the Euclidean distance metric. Note that the z-axis of Figure 3b encodes the distance metric used by the KNN algorithm: 0 corresponds to Euclidean distance, 1 to Manhattan distance, 2 to Chebyshev distance, and 3 to Minkowski distance. The choice of metric strongly influences neighbor identification and prediction accuracy. Figure 3e shows the convergence of the KNN model: the best hyperparameter point is found at the ninth iteration and remains stable in subsequent iterations, further validating the effectiveness of Bayesian optimization. Figure 3c shows the three hyperparameters of the XGBoost model and their corresponding RMSE values, with the optimal configuration being learning rate = 0.1, n estimators = 179, and max depth = 45. Figure 3f illustrates the convergence over 50 iterations: the best hyperparameter point is determined at the 38th iteration and remains stable, again demonstrating that Bayesian optimization successfully optimizes the parameters of the model.
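The four distance metrics encoded on the z-axis of Figure 3b map onto standard KNN settings (Minkowski with p = 2 is the Euclidean distance, p = 1 the Manhattan distance). A hedged sketch on synthetic data, not the paper's dataset, compares them by cross-validated RMSE:

```python
# Compare the KNN distance metrics from Figure 3b on synthetic data (assumption).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=300, n_features=3, noise=5.0, random_state=0)

metrics = {
    "euclidean": dict(metric="minkowski", p=2),   # z = 0 in Figure 3b
    "manhattan": dict(metric="minkowski", p=1),   # z = 1
    "chebyshev": dict(metric="chebyshev"),        # z = 2
    "minkowski (p=3)": dict(metric="minkowski", p=3),  # z = 3, higher-order case
}
cv_rmse = {}
for name, kwargs in metrics.items():
    knn = KNeighborsRegressor(n_neighbors=12, **kwargs)  # n_neighbors from Table 3
    scores = cross_val_score(knn, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    cv_rmse[name] = -scores.mean()
    print(f"{name}: CV RMSE = {cv_rmse[name]:.2f}")
```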
This study balanced model complexity and generalization ability through a combination of strategies. Taking the XGBoost model as an example, when dealing with the highly nonlinear characteristics of plasma-catalytic reactions, shallow tree structures are prone to underfitting because they cannot fully capture complex patterns. To address this, the study introduces L2 regularization to constrain model complexity and a feature-weight decay mechanism to damp fluctuations in the parameter space, effectively suppressing the risk of overfitting. At the same time, Bayesian optimization combined with five-fold cross-validation systematically searches the core hyperparameters (max depth, learning rate, and n estimators). The experiments show that at max depth = 45 the validation-set RMSE reaches its minimum and the gap between training-set and validation-set performance is small, indicating that the model retains its nonlinear modeling ability while generalizing well, and verifying the synergy between regularization and hyperparameter tuning.
Overall, after approximately 40 iterations the RMSE of each base model reaches its minimum and the results stabilize, further confirming the effectiveness of Bayesian optimization in identifying the best hyperparameter configurations.

3.2. Performance of the Stacking Ensemble Model

Figure 4 illustrates the predicted CO densities under various E / N values and gas-feeding ratios. In each subfigure, the x-axis represents E / N , ranging from 72 to 92 Td; the y-axis indicates the reaction time, spanning 4.5 to 5.0 ms; and the z-axis shows the proportion of CO2 in the feed gas, ranging from 30% to 70%. The sphere colors represent the CO particle density, visualizing how the product concentration varies with experimental conditions. Figure 4a–c presents the predictions of the three base models (DT, KNN, and XGBoost). A comparative analysis reveals significant differences in their predictive capabilities, particularly in certain nonlinear regions where a single model's predictions deviate from the true values, highlighting its limitations. Figure 4d presents the predictions of the stacking ensemble model, and Figure 4e the true values obtained from numerical simulation, which serve as the reference baseline for this study. These simulation-derived 'true values' were generated using the validated 0D plasma-catalysis model described in Section 2.1, incorporating gas-phase and surface reactions under the specified experimental conditions. The stacking predictions closely match the simulations, indicating that the stacking method, by integrating the strengths of the base models, effectively captures the trends of CO density variation in the plasma-catalyzed CO2 reforming of methane.
Further analysis indicates that the stacking model not only addresses the limitations of individual models in managing complex reaction systems but also significantly enhances prediction accuracy and stability by leveraging the strengths of multiple models. The results suggest that the stacking ensemble method can represent reaction processes more accurately in practical applications, making it an effective tool for highly nonlinear problems. These visual results both illustrate the performance differences among the models and clearly show the superior performance of the stacking model in predicting complex reactions.
To analyze the performance of the stacking ensemble model in depth, the predictions of the individual models (DT, KNN, and XGBoost) are compared with those of the stacking ensemble model. Figure 5 presents a scatter plot of predicted versus experimental values, with the diagonal line representing perfect predictions (V_pred = V_exp). The closer the points lie to this line, the better the predictions match the true values. The individual models show large scatter in particle density, with points spread loosely around the Y = X line, reflecting their limited ability to capture the trends in the true data. In contrast, most points from the stacking ensemble model cluster tightly around the perfect-fit line, indicating close agreement between predicted and experimental values and demonstrating superior predictive capability. This advantage stems from the ensemble's ability to integrate the strengths of each base learner, which improves overall prediction accuracy and stability. By combining multiple models, the stacking method also reduces the risk of any single model becoming trapped in a local optimum, enhancing generalization ability and adaptability.
Figure 6 compares the performance of the predictive models in simulating the plasma-catalyzed CO2 reforming of methane using key error metrics, with the individual base models serving as benchmarks for the proposed stacking ensemble. Notably, the stacking ensemble model achieves significantly lower RMSE and MAE values than all of the base models, underscoring its superior accuracy and robustness in capturing the complex dynamics of the reforming process. Specifically, it achieves an RMSE of 7.01 × 10¹¹ and an MAE of 3.80 × 10¹¹, compared with DT (RMSE: 1.29 × 10¹², MAE: 9.80 × 10¹¹), KNN (RMSE: 6.71 × 10¹², MAE: 5.39 × 10¹²), and XGBoost (RMSE: 2.17 × 10¹², MAE: 1.73 × 10¹²).
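The error metrics quoted above (RMSE, MAE, R²) can be reproduced for any model's predictions in a few lines; the arrays below are placeholders, not the paper's data.

```python
# Computing the Figure 6 error metrics; placeholder arrays (assumption).
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([1.0e12, 2.0e12, 3.0e12, 4.0e12])   # placeholder CO densities
y_pred = np.array([1.1e12, 1.9e12, 3.2e12, 3.9e12])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))    # root-mean-square error
mae = mean_absolute_error(y_true, y_pred)             # mean absolute error
r2 = r2_score(y_true, y_pred)                         # coefficient of determination
print(f"RMSE={rmse:.3e}, MAE={mae:.3e}, R2={r2:.4f}")
```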
Previous numerical simulations [15,47] were validated against physical experiments, demonstrating the reliability of the 0D plasma-catalysis model in capturing key reaction pathways and product distributions. This study builds on those validated simulations by using their dataset to train a stacking ensemble model, which addresses the limitations of traditional numerical methods. The proposed model achieves a 99.8% reduction in simulation time while maintaining high prediction accuracy (R² = 0.9795), as shown in Figure 5 and Figure 6. This highlights its exceptional performance, including improved predictive accuracy and enhanced generalization capability [48].

4. Conclusions

This study tackles the challenges of poor convergence and high computational costs in simulations of the plasma-catalyzed CO2 reforming of methane. A stacking ensemble model is proposed that integrates Bayesian optimization and five-fold cross-validation to predict variations in key particle densities.
Experimental results indicate that Bayesian optimization can automatically adjust hyperparameters, leveraging prior knowledge and observed data to enhance optimization efficiency and model accuracy. Five-fold cross-validation effectively prevents both overfitting and underfitting. The stacking ensemble model integrates the DT, KNN, and XGBoost algorithms, significantly enhancing generalization ability, avoiding local optima, and making effective use of limited simulation data. The predictions of the model align closely with numerical simulations, achieving an R² value of 0.9795. Compared with traditional numerical methods, the stacking ensemble model achieves a 99.8% reduction in simulation time, with particle density predictions under new experimental conditions decreasing from 24 min to a few seconds while preserving high predictive accuracy. Future advancements in plasma-catalyzed simulations will focus on two key directions: (1) real-time optimization with online learning, integrating the stacking ensemble model with real-time experimental data streams to dynamically adjust plasma parameters and catalyst properties, enabling adaptive control of reaction pathways; and (2) multi-scale model coupling, bridging the current 0D plasma-catalysis model with 3D fluid dynamics simulations to resolve spatial heterogeneities in reactor design, while incorporating machine learning for reduced-order modeling of computationally intensive sub-processes.
This represents a promising area for future exploration, further emphasizing the broad applicability of machine learning in plasma-catalyzed processes.

Author Contributions

J.P.: Conceptualization, Methodology, Project administration, Funding acquisition, Supervision. X.Q.: Conceptualization, Methodology, Investigation, Software, Writing—original draft. C.Z.: Conceptualization, Methodology, Writing—review. B.L.: Data curation, Validation, Investigation. L.L.: Methodology, Validation, Investigation. G.L.: Methodology, Data curation, Validation. S.Q.: Conceptualization of this study, Methodology, Software. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 52077129).

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Liu, Y.; Dou, L.; Zhou, R.; Sun, H.; Fan, Z.; Zhang, C.; Ostrikov, K.K.; Shao, T. Liquid-phase methane bubble plasma discharge for heavy oil processing: Insights into free radicals-induced hydrogenation. Energy Convers. Manag. 2021, 250, 114896.
2. Osorio-Tejada, J.; van’t Veer, K.; Long, N.V.D.; Tran, N.N.; Fulcheri, L.; Patil, B.S.; Bogaerts, A.; Hessel, V. Sustainability analysis of methane-to-hydrogen-to-ammonia conversion by integration of high-temperature plasma and non-thermal plasma processes. Energy Convers. Manag. 2022, 269, 116095.
3. Liu, L.; Yang, K.; Li, L.; Liu, W.; Yuan, H.; Han, Y.; Zhang, E.; Zheng, Y.; Jia, Y. The aeration and dredging stimulate the reduction of pollution and carbon emissions in a sediment microcosm study. Sci. Rep. 2024, 14, 26172.
4. Gao, Y.; Dou, L.; Zhang, S.; Zong, L.; Pan, J.; Hu, X.; Sun, H.; Ostrikov, K.K.; Shao, T. Coupling bimetallic Ni-Fe catalysts and nanosecond pulsed plasma for synergistic low-temperature CO2 methanation. Chem. Eng. J. 2021, 420, 127693.
5. Du, J.; Zong, L.; Zhang, S.; Gao, Y.; Dou, L.; Pan, J.; Shao, T. Numerical investigation on the heterogeneous pulsed dielectric barrier discharge plasma catalysis for CO2 hydrogenation at atmospheric pressure: Effects of Ni and Cu catalysts on the selectivity conversions to CH4 and CH3OH. Plasma Process. Polym. 2022, 19, 2100111.
6. Wang, X.; Gao, Y.; Zhang, S.; Sun, H.; Li, J.; Shao, T. Nanosecond pulsed plasma assisted dry reforming of CH4: The effect of plasma operating parameters. Appl. Energy 2019, 243, 132–144.
7. Alhemeiri, N.; Kosca, L.; Gacesa, M.; Polychronopoulou, K. Advancing in-situ resource utilization for earth and space applications through plasma CO2 catalysis. J. CO2 Util. 2024, 85, 102887.
8. Pan, J.; Li, L. Particle densities of the pulsed dielectric barrier discharges in nitrogen at atmospheric pressure. J. Phys. D Appl. Phys. 2015, 48, 055204.
9. Vakili, R.; Gholami, R.; Stere, C.E.; Chansai, S.; Chen, H.; Holmes, S.M.; Jiao, Y.; Hardacre, C.; Fan, X. Plasma-assisted catalytic dry reforming of methane (DRM) over metal-organic frameworks (MOFs)-based catalysts. Appl. Catal. B Environ. 2020, 260, 118195.
10. Khoja, A.H.; Tahir, M.; Amin, N.A.S. Recent developments in non-thermal catalytic DBD plasma reactor for dry reforming of methane. Energy Convers. Manag. 2019, 183, 529–560.
11. Mathews, A.; Francisquez, M.; Hughes, J.W.; Hatch, D.R.; Zhu, B.; Rogers, B.N. Uncovering turbulent plasma dynamics via deep learning from partial observations. Phys. Rev. E 2021, 104, 025205.
12. Zhong, L.; Gu, Q.; Wu, B. Deep learning for thermal plasma simulation: Solving 1-D arc model as an example. Comput. Phys. Commun. 2020, 257, 107496.
13. Zhu, Y.; Bo, Y.; Chen, X.; Wu, Y. Tailoring electric field signals of nonequilibrium discharges by the deep learning method and physical corrections. Plasma Process. Polym. 2022, 19, e2100155.
14. Li, H.; Fu, Y.; Li, J.; Wang, Z. Machine learning of turbulent transport in fusion plasmas with neural network. Plasma Sci. Technol. 2021, 23, 115102.
15. Pan, J.; Liu, Y.; Zhang, S.; Hu, X.; Liu, Y.; Shao, T. Deep learning-assisted pulsed discharge plasma catalysis modeling. Energy Convers. Manag. 2023, 277, 116620.
16. Zeng, X.; Zhang, S.; Hu, X.; Shao, T. Dielectric Barrier Discharge Plasma-Enabled Energy Conversion Under Multiple Operating Parameters: Machine Learning Optimization. Plasma Chem. Plasma Process. 2024, 44, 667–685.
17. Cai, Y.; Mei, D.; Chen, Y.; Bogaerts, A.; Tu, X. Machine learning-driven optimization of plasma-catalytic dry reforming of methane. J. Energy Chem. 2024, 96, 153–163.
18. Basha, S.S.; Vinakota, S.K.; Pulabaigari, V.; Mukherjee, S.; Dubey, S.R. Autotune: Automatically tuning convolutional neural networks for improved transfer learning. Neural Netw. 2021, 133, 112–122.
19. Mehrjerd, A.; Dehghani, T.; Jajroudi, M.; Eslami, S.; Rezaei, H.; Ghaebi, N.K. Ensemble machine learning models for sperm quality evaluation concerning success rate of clinical pregnancy in assisted reproductive techniques. Sci. Rep. 2024, 14, 24283.
20. Du, J.; Yang, S.; Zeng, Y.; Ye, C.; Chang, X.; Wu, S. Visualization obesity risk prediction system based on machine learning. Sci. Rep. 2024, 14, 22424.
21. Han, H.J.; Ji, H.; Choi, J.E.; Chung, Y.G.; Kim, H.; Choi, C.W.; Kim, K.; Jung, Y.H. Development of a machine learning model to identify intraventricular hemorrhage using time-series analysis in preterm infants. Sci. Rep. 2024, 14, 23740.
22. Alkhammash, A. Intelligence analysis of membrane distillation via machine learning models for pharmaceutical separation. Sci. Rep. 2024, 14, 22876.
23. Chen, H.; Zheng, Z.; Yang, C.; Tan, T.; Jiang, Y.; Xue, W. Machine learning based intratumor heterogeneity signature for predicting prognosis and immunotherapy benefit in stomach adenocarcinoma. Sci. Rep. 2024, 14, 23328.
24. Martinez-de Pison, F.; Gonzalez-Sendino, R.; Aldama, A.; Ferreiro-Cabello, J.; Fraile-Garcia, E. Hybrid methodology based on Bayesian optimization and GA-PARSIMONY to search for parsimony models by combining hyperparameter optimization and feature selection. Neurocomputing 2019, 354, 20–26.
25. Fernández-Sánchez, D.; Garrido-Merchán, E.C.; Hernández-Lobato, D. Improved max-value entropy search for multi-objective bayesian optimization with constraints. Neurocomputing 2023, 546, 126290.
26. Garrido-Merchán, E.C.; Hernández-Lobato, D. Predictive entropy search for multi-objective bayesian optimization with constraints. Neurocomputing 2019, 361, 50–68.
27. Phan-Trong, D.; Tran-The, H.; Gupta, S. NeuralBO: A black-box optimization algorithm using deep neural networks. Neurocomputing 2023, 559, 126776.
28. Mihaljević, B.; Bielza, C.; Larrañaga, P. Bayesian networks for interpretable machine learning and optimization. Neurocomputing 2021, 456, 648–665.
29. Nobile, M.S.; Cazzaniga, P.; Ramazzotti, D. Investigating the performance of multi-objective optimization when learning Bayesian Networks. Neurocomputing 2021, 461, 281–291.
30. Garrido-Merchán, E.C.; Hernández-Lobato, D. Dealing with categorical and integer-valued variables in bayesian optimization with gaussian processes. Neurocomputing 2020, 380, 20–35.
31. Song, C.; Ma, Y.; Xu, Y.; Chen, H. Multi-population evolutionary neural architecture search with stacked generalization. Neurocomputing 2024, 587, 127664.
32. Park, U.; Kang, Y.; Lee, H.; Yun, S. A stacking heterogeneous ensemble learning method for the prediction of building construction project costs. Appl. Sci. 2022, 12, 9729.
33. Huang, H.; Zhu, Q.; Zhu, X.; Zhang, J. An Adaptive, Data-Driven Stacking Ensemble Learning Framework for the Short-Term Forecasting of Renewable Energy Generation. Energies 2023, 16, 1963.
34. Sun, J.; Wu, S.; Zhang, H.; Zhang, X.; Wang, T. Based on multi-algorithm hybrid method to predict the slope safety factor–stacking ensemble learning with bayesian optimization. J. Comput. Sci. 2022, 59, 101587.
35. Shu, J.; Yu, H.; Liu, G.; Yang, H.; Chen, Y.; Duan, Y. BO-Stacking: A novel shear strength prediction model of RC beams with stirrups based on Bayesian Optimization and model stacking. Structures 2023, 58, 105593.
36. Shams, R.; Alimohammadi, S.; Yazdi, J. Optimized stacking, a new method for constructing ensemble surrogate models applied to DNAPL-contaminated aquifer remediation. J. Contam. Hydrol. 2021, 243, 103914.
37. Djarum, D.H.; Ahmad, Z.; Zhang, J. Reduced Bayesian Optimized Stacked Regressor (RBOSR): A highly efficient stacked approach for improved air pollution prediction. Appl. Soft Comput. 2023, 144, 110466.
38. Liu, L.; Zhang, Z.; Qu, Z.; Bell, A. Remaining useful life prediction for a catenary, utilizing Bayesian optimization of stacking. Electronics 2023, 12, 1744.
39. Cheng, H.; Ma, M.; Zhang, Y.; Liu, D.; Lu, X. The plasma enhanced surface reactions in a packed bed dielectric barrier discharge reactor. J. Phys. D Appl. Phys. 2020, 53, 144001.
40. Bai, C.; Wang, L.; Li, L.; Dong, X.; Xiao, Q.; Liu, Z.; Sun, J.; Pan, J. Numerical investigation on the CH4/CO2 nanosecond pulsed dielectric barrier discharge plasma at atmospheric pressure. AIP Adv. 2019, 9, 035023.
41. Cheng, H.; Fan, J.; Zhang, Y.; Liu, D.; Ostrikov, K.K. Nanosecond pulse plasma dry reforming of natural gas. Catal. Today 2020, 351, 103–112.
42. Li, S.; Bai, C.; Chen, X.; Meng, W.; Li, L.; Pan, J. Numerical investigation on plasma assisted ignition of methane/air mixture excited by the synergistic nanosecond repetitive pulsed and DC discharge. J. Phys. D Appl. Phys. 2020, 54, 015203.
43. Liu, Y.; Zhang, S.; Huang, B.; Dai, D.; Murphy, A.B.; Shao, T. Temporal evolution of electron energy distribution function and its correlation with hydrogen radical generation in atmospheric-pressure methane needle–plane discharge plasmas. J. Phys. D Appl. Phys. 2020, 54, 095202.
44. Hong, J.; Pancheshnyi, S.; Tam, E.; Lowke, J.J.; Prawer, S.; Murphy, A.B. Kinetic modelling of NH3 production in N2–H2 non-equilibrium atmospheric-pressure plasma catalysis. J. Phys. D Appl. Phys. 2017, 50, 154005.
45. Carrasco, E.; Jiménez-Redondo, M.; Tanarro, I.; Herrero, V.J. Neutral and ion chemistry in low pressure dc plasmas of H2/N2 mixtures: Routes for the efficient production of NH3 and NH4+. Phys. Chem. Chem. Phys. 2011, 13, 19561–19572.
46. van ’t Veer, K.; Reniers, F.; Bogaerts, A. Zero-dimensional modeling of unpacked and packed bed dielectric barrier discharges: The role of vibrational kinetics in ammonia synthesis. Plasma Sources Sci. Technol. 2020, 29, 045020.
47. Pan, J.; Chen, T.; Gao, Y.; Liu, Y.; Zhang, S.; Liu, Y.; Shao, T. Numerical modeling and mechanism investigation of nanosecond-pulsed DBD plasma-catalytic CH4 dry reforming. J. Phys. D Appl. Phys. 2021, 55, 035202.
48. Kardani, N.; Zhou, A.; Nazem, M.; Shen, S.L. Improved prediction of slope stability using a hybrid stacking ensemble method based on finite element analysis and field data. J. Rock Mech. Geotech. Eng. 2021, 13, 188–201.
Figure 1. The variation of the comprehensive efficiency of CO generation in plasma-catalyzed CO2 reforming of methane with experimental conditions.
Figure 2. Schematic diagram of Bayesian optimization stacking ensemble learning model.
Figure 3. The process of hyperparameter tuning with Bayesian optimization: (ac) the search space of Bayesian optimization for DT, KNN, and XGBoost models, respectively, (df) the convergence process of the minimum RMSE with the number of iterations.
Figure 4. CO densities provided by different models: (ac) the predicted CO densities from the DT, KNN, and XGBoost models, (d) the predicted CO density from the stacking ensemble model, (e) the predicted CO density from the simulation model.
Figure 5. The prediction performance of different models.
Figure 6. The prediction errors of different models.
Table 1. The current status of plasma catalysis.
| Researcher | Reactant Gas | Plasma Generation Process | Conclusions |
| --- | --- | --- | --- |
| Xiaoling Wang [6] | CH4, CO2 | DBD | The short pulse rise/fall time can significantly improve the energy efficiency of CH4 and CO2. |
| Naama Alhemeiri [7] | CH4, CO2, H2O | DBD, microwave, sliding arc | Plasma catalysis has potential in CO2 conversion, where performance can be improved by optimizing catalysts and diagnostic methods. |
| Reza Vakilia [9] | CH4, CO2 | DBD | MOF-based catalysts significantly improve DRM efficiency by optimizing plasma catalyst synergy. |
| Asif Hussain Khoja [10] | CH4, CO2 | DBD | DBD has advantages in DRM; optimizing catalyst and reactor configurations can improve performance. |
Table 2. Species contained in the plasma-catalyzed CO2 reforming of methane.
| Type | Species |
| --- | --- |
| Molecules | CH4, C2H6, C2H4, C2H2, C3H8, C3H6, H2, O2, O3, CO2, CO, H2O, CH2O, CH3OH, CH3CHO, CH2CO |
| Excited species | CO2(e1), CO2(e2), CO2(va), CO2(vb), CO2(vc), CO2(vd), CO2(vn), CO(e1), CO(e2), CO(vj), O2(a), O2(b), O(1D), O(1S), O2(vi), CH4(v13), CH4(v24) |
| Radicals | CH3, CH2, CH, C, C2H5, C2H3, C2H, C3H7, H, O, OH, CHO, CH2OH, CH3O, C2HO, CH3CO, CH2CHO, C2O, C3H5, C2 |
| Electrons and ions | e, CH4+, CH3+, CH2+, CH+, C2H4+, C2H2+, CO2+, CO+, C+, O2+, O+, O4+, C2O4+, H3O+, O−, O2−, O3−, O4−, CO3−, CO4−, OH− |
| Surface species | Surf, CO2(s), CO(s), H(s), O(s), H2O(s), OH(s), C(s), CH3(s), CH2(s), CH(s), COOH(s), CH3OH(s), CH2OH(s), CHOH(s), COH(s), CH3O(s), CH2O(s), CHO(s) |
Table 3. Optimized hyperparameters for base models.
| Model | Hyperparameter | BO Range | Optimized Value |
| --- | --- | --- | --- |
| DT | max depth | 1–100 | 75 |
| DT | min samples split | 2–3 | 2 |
| DT | min samples leaf | 1–100 | 1 |
| KNN | n neighbors | 1–50 | 12 |
| KNN | p | 1–5 | 2 |
| KNN | metric | Euclidean, Manhattan, Chebyshev, Minkowski | Euclidean |
| XGBoost | learning rate | 0.01–1 | 0.1 |
| XGBoost | n estimators | 10–200 | 179 |
| XGBoost | max depth | 1–50 | 45 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
