Article

A Novel Forecasting Framework for Carbon Emission Trading Price Based on Nonlinear Integration

1 School of Statistics and Data Science, Lanzhou University of Finance and Economics, Lanzhou 730020, China
2 Center for Quantitative Analysis of Gansu Economic Development, Lanzhou 730020, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(10), 1624; https://doi.org/10.3390/math13101624
Submission received: 7 March 2025 / Revised: 29 April 2025 / Accepted: 13 May 2025 / Published: 15 May 2025
(This article belongs to the Special Issue Probability Statistics and Quantitative Finance)

Abstract

The complex features of the carbon price, such as volatility and nonlinearity, make accurate prediction a serious challenge. To this end, this paper proposes a novel forecasting framework for the carbon emission trading price based on nonlinear integration, comprising feature selection, deep learning and model combination. Firstly, the historical carbon price series are collected and collated, and the factors affecting the carbon price are analyzed. Secondly, the dimensionality of the data is reduced and the input variables are screened using the max-relevance and min-redundancy (mRMR) criterion. Then, three ensemble learning models are combined with a neural network model through nonlinear integration to construct hybrid prediction models, and the best-performing combined model is selected. Finally, interval prediction is realized on the basis of point prediction. The experimental results show that the proposed model outperforms the comparative models in terms of prediction accuracy, stability and statistical hypothesis testing. In summary, the hybrid prediction model proposed in this paper can not only provide high-precision carbon market price predictions for government and enterprise decision makers, but also help investors optimize their trading strategies and improve their returns.

1. Introduction

1.1. Background

Climate warming and environmental pollution have aroused the concern of the world; therefore, many countries are reducing carbon emissions by making relevant policies and technological innovation. For example, China has established specific environmental targets, with the goal of decreasing carbon dioxide emissions per unit of GDP by 20% relative to 2005 levels by approximately 2030. In parallel, the country intends to reduce coal’s proportion in primary energy consumption to below 58%, while simultaneously increasing the share of non-fossil energy sources to around 20% within the same period. Furthermore, the United States has rejoined the Paris Climate Agreement, invested in carbon capture and storage technologies, and formulated clean energy transition policies to reduce carbon emissions more effectively in the future. In the meantime, the European Union has introduced a carbon emissions trading scheme to limit and trade carbon emissions from businesses, while vigorously promoting renewable energy and energy-saving technologies to improve energy efficiency. In general, countries around the world are actively addressing the challenge of climate change by taking various measures to reduce carbon emissions. This requires not only policy support and technological innovation at the national level, but also global cooperation and joint efforts.
The establishment of a robust carbon market is an effective means of reducing carbon emissions [1], and it is currently an effective and credible policy tool for countries implementing the Paris Agreement to address climate change. From international experience, the carbon pricing mechanism mainly includes carbon emissions trading, a carbon tax and a carbon emission reduction credit system. Carbon emissions trading and the carbon tax are the main instruments of the carbon pricing mechanism, and the carbon credit system is an effective supplement. The importance and urgency of predicting the carbon emissions trading price make it essential to accurately analyze the pattern of price changes in carbon emissions trading [2], study the factors that may affect the price [3], gain an in-depth understanding of the pricing mechanism and lay the foundation for the long-term development of the carbon market [4].

1.2. Literature Review

Current studies on carbon market price forecasting can be divided into two main types: one is the analysis of the factors influencing the carbon emissions trading price, and the other is the carbon emissions trading price forecasting [5].
When analyzing the factors influencing the carbon emission trading price, scholars have mostly selected influencing factors subjectively when constructing panel regressions [6]. Li et al. (2023) classified the features into two categories when conducting carbon price prediction: structured data and unstructured data. Structured data include economic and financial indicators, energy indicators and environmental indicators, while unstructured data include search indexes based on internet big data [7]. Ye et al. (2024) proposed a new framework for predicting carbon emissions that combines linear and machine learning models [8], taking into account temporal dynamics and external influences. The analysis begins with the identification of twelve preliminary influencing factors, which encompass urban development, economic growth, industrial energy consumption and demographic considerations. Subsequently, the Lasso regression algorithm was employed to eliminate indicators demonstrating weak predictive capability, resulting in the selection of five key variables. Max-relevance and min-redundancy (mRMR) and the recursive feature elimination method are also approaches to feature screening. mRMR has already proven effective in fields such as short-term load forecasting (Liang et al., 2019) [9], while Sharma et al. (2024) confirmed the superior properties of the recursive feature elimination method [10]. However, scholars do not appear to focus on the ranking results obtained using recursive feature elimination, or on the economic implications behind them, especially in the field of carbon pricing.
In point forecasting of carbon emissions trading price, with the strong growth of enterprise demand, price forecasting has become an important hotspot. Currently, the prediction of carbon emissions trading prices primarily encompasses three methodological categories: statistical and econometric approaches, artificial intelligence techniques and hybrid forecasting models. For instance, Hao and Tian (2020) combined advanced data decomposition techniques, effective feature selection algorithms, kernel-based extreme learning machine models, multi-stage prediction strategies and the newly proposed multi-objective chaotic sinusoidal cosine algorithm to construct a carbon price prediction framework that considered multiple influencing factors and ultimately achieved the desired prediction results [11]. Subsequently, the carbon price prediction methodology proposed by Wang et al. (2021) incorporates four key components [12]: data preprocessing mechanisms, decomposition techniques, forecasting modules and matching strategies. Empirical results demonstrate that this approach enhances both the robustness and predictive accuracy of the implemented model. In a separate study, Nadirgil (2023) introduced an innovative framework employing 48 distinct hybrid machine learning configurations to examine how external variables influence carbon pricing [13]. Zhang et al. (2024) focused on selecting nine widely used artificial intelligence models from the recent energy price forecasting literature, utilizing the POA algorithm to integrate the top three performing models into a highly accurate hybrid system [14]. As for nonlinear integration technology, Lan et al. (2024) employed a nonlinear ensemble method to integrate all sub-sequences on carbon price and validated the effectiveness of the nonlinear integration and optimization method [15].
In terms of interval prediction, Hu et al. (2022) combined Conformal Quantile Regression with a deep learning algorithm based on Time-Series Convolutional Network architecture, and obtained the prediction intervals of wind power with good results [16]. In addition to conformal prediction, kernel density estimation is also one of the popular interval prediction methods. Li et al. (2023) used kernel density estimation for interval prediction of hourly PM2.5 and used speed-constrained multi-objective particle swarm optimization (SMPSO) to fine-tune the upper and lower bounds of the intervals, and obtained good interval prediction results [17].
Although interval prediction techniques are relatively mature nowadays, few studies have applied them to carbon price prediction; consequently, further research is needed. Moreover, current carbon price prediction methodologies predominantly rely on historical pricing data, often overlooking the significance of multivariate influencing factors and feature selection processes [18]. To address these limitations, integrating insights from related energy market research and incorporating diverse external variables into predictive models becomes essential [19]. Furthermore, the inherent constraints of individual forecasting approaches in meeting accuracy requirements necessitate the adoption of hybrid modeling frameworks for enhanced carbon price prediction. Motivated by these gaps, this paper proposes a new hybrid model comprising feature selection, deep learning and model combination.

1.3. Main Contributions

The main novelties, contributions and bridged research gaps of the current study are as follows:
(1)
Consider the influencing factors comprehensively. Few previous studies comprehensively involved data of different dimensional variables. This paper proposes an index system, which can be abbreviated as “4E”. Ten influencing factors related to carbon price were considered from four different dimensions: energy factor, economic factor, environmental factor and exchange rate factor, and variables highly related to carbon price were extracted through the max-relevance and min-redundancy.
(2)
Advanced forecasting framework and excellent forecasting performance. In this paper, a two-stage ensemble prediction framework is proposed. First, three independent ensemble learning models are used to predict the carbon price; the framework then uses a GRU neural network to nonlinearly integrate the three sets of predictions. The constructed integration model is named EGRU. The new prediction framework thus combines the advantages of four machine learning models. Compared with a single benchmark model, this framework can significantly improve prediction performance and obtain more accurate results.
(3)
Considerable practical application reference value. This paper uses the recursive feature elimination method of Wrapper to rank the importance of different factors affecting carbon price and explain the results in economic sense according to regional characteristics, which provides reference suggestions for researchers to forecast carbon emissions trading prices combined with the country’s future development scenario planning.
(4)
Interval prediction. Traditional forecasts are provided as point forecasts, and this single piece of forecast information is not enough to reflect forecast uncertainty. Interval prediction, by contrast, yields the range within which the predicted value is likely to fall, that is, an upper and a lower bound. At present, interval prediction is widely used in electrical disciplines, such as photovoltaic power prediction, wind power prediction and wind speed prediction, while it is rarely used in carbon price prediction. In this paper, interval prediction based on the adaptive bandwidth kernel density estimation method is added on top of the point prediction, and the model shows a good prediction effect.

2. Methods

2.1. Feature Selection

Feature selection denotes the process of identifying and extracting an optimal subset of predictors from the complete set of available features. This methodological approach offers dual advantages: firstly, it enables the elimination of extraneous or repetitive variables, thereby enhancing model accuracy while simultaneously reducing both feature dimensionality and computational processing time [20]. Secondly, by isolating the most pertinent variables, it contributes to model simplification and facilitates researchers’ comprehension of underlying data generation mechanisms [21].
In order to determine the main external factors affecting the carbon emission trading price in each region, this paper adopts the max-relevance and min-redundancy (mRMR) method to screen the key factors affecting the carbon price, preparing for the subsequent analysis. Due to the limited frequency of the indicators and the strong volatility of carbon prices, this paper adopts daily data and selects the daily average price of carbon emission rights trading. The basic principle of mRMR is to find a group of features in the original feature set that have the greatest correlation with the final output and the least correlation among themselves. The method first defines two key criteria: max-relevance and min-redundancy. Max-relevance requires maximizing the mean mutual information between the selected feature subset $\Omega$ and the target category $c$. If $I(x_i; c)$ represents the mutual information between feature $x_i$ and category $c$, max-relevance can be expressed as:
$$\max \; \frac{1}{|\Omega|} \sum_{x_i \in \Omega} I(x_i; c)$$
The min-redundancy requires that the average value of mutual information among features within the subset Ω be minimized, that is:
$$\min \; \frac{1}{|\Omega|^2} \sum_{x_i, x_j \in \Omega} I(x_i; x_j)$$
To achieve this goal, mRMR adopts a greedy algorithm to select features step by step. The algorithm first initializes the empty subset $\Omega = \emptyset$ and selects the single feature with the maximum mutual information with the target category as the initial subset. Subsequently, at each iteration, the algorithm selects the feature $x_i$ from the remaining feature set $S \setminus \Omega$ that maximizes the increment $I(x_i; c) - \frac{1}{|\Omega|} \sum_{x_j \in \Omega} I(x_i; x_j)$ and adds it to $\Omega$, until the preset number of features $k$ is reached. This criterion ensures that the selected features are highly correlated with the target variable and as independent of each other as possible, thereby improving the discriminative efficiency of the feature subset. The advantage of mRMR lies in its ability to capture nonlinear relationships between features and targets, and it is applicable to both discrete and continuous features (the latter need to be processed through discretization or kernel density estimation). In practice, the number of features $k$ is usually determined through cross-validation or other model selection techniques. This method has been widely applied in fields such as bioinformatics, image recognition and pattern recognition, and has become a classic feature selection strategy.
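The greedy selection described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation; it assumes scikit-learn's `mutual_info_regression` as the mutual-information estimator and a continuous target, choices the paper does not specify.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_select(X, y, k, random_state=0):
    """Greedy mRMR: repeatedly add the feature maximizing
    relevance to y minus mean redundancy with the chosen set."""
    relevance = mutual_info_regression(X, y, random_state=random_state)
    selected = [int(np.argmax(relevance))]        # start from the most relevant feature
    while len(selected) < k:
        best_score, best_j = -np.inf, None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # mean mutual information between candidate j and each selected feature
            redundancy = np.mean([
                mutual_info_regression(X[:, [s]], X[:, j],
                                       random_state=random_state)[0]
                for s in selected
            ])
            score = relevance[j] - redundancy     # the mRMR increment criterion
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
    return selected
```

With a preset `k`, `mrmr_select` returns the indices of the retained input variables; as noted above, `k` itself would be chosen by cross-validation.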

2.2. Forecasting Models

Carbon prices exhibit inherent nonlinearities due to structural shocks from policy adjustments, asymmetric responses to energy market volatility and speculative trading behavior during the compliance phase. These complexities make traditional linear models incapable of capturing regime-related patterns and tail risks. To address this problem, we predict carbon prices by combining the following nine machine learning and deep learning base models.

2.2.1. GBDT

Gradient Boosted Decision Tree (GBDT) employs an iterative approach that generates multiple weak learners (decision trees) and synthesizes their outputs to produce final predictions. This methodology effectively integrates the principles of decision tree construction with ensemble learning techniques.

2.2.2. RF

Random forest (RF) is an ensemble learning method based on bootstrap resampling. An RF is a forest consisting of multiple decision trees built in a random manner; each tree is a base learner, and the whole forest corresponds to the ensemble. Through this injected randomness, the random forest strives to make the trees as uncorrelated as possible, each one "looking different" from the others.

2.2.3. XGBoost

XGBoost algorithm utilizes a forward stagewise approach, progressively incorporating decision trees into its model architecture. Each tree is strategically positioned at specific leaf nodes based on sample attributes. The model’s final output is derived from the cumulative summation of scores across all corresponding leaf nodes.

2.2.4. RNN

Recurrent neural network (RNN) architecture incorporates recurrent connections within its neural network framework, making it particularly effective for handling sequential data and extracting temporal patterns. This distinctive capability enables its extensive application across various domains, including natural language processing, speech recognition and time series analysis, setting it apart from conventional neural network structures.

2.2.5. LSTM

As a prominent variant of recurrent neural networks, long short-term memory (LSTM) networks address critical challenges in sequence modeling through their unique architectural design. By incorporating specialized memory cells, LSTM enables effective learning of long-range temporal dependencies, thereby mitigating the issues of gradient vanishing and explosion commonly encountered in training conventional recurrent networks for extended sequences.

2.2.6. CNN

Convolutional neural network (CNN) is a very important model structure in deep learning, which is particularly good at processing image data. CNN automatically extracts image features through convolution operations and pooling operations, and uses the fully connected layer for tasks such as classification or regression.

2.2.7. GRU

Gated Recurrent Unit (GRU) is used to process sequence data, and it controls the flow of information through two gating mechanisms: update gate and reset gate. GRU has a simpler structure and fewer parameters than the long short-term memory network model, so it usually has advantages in terms of training speed and computational efficiency.

2.2.8. KDE

Kernel density estimation (KDE) is a sample-based estimation method. This widely utilized non-parametric estimation approach can be effectively integrated with the developed point prediction framework to enhance the model’s robustness and reliability. According to the rule of thumb, the formula for estimating the fixed bandwidth kernel density is shown in the following equation:
$$\hat{f}_{KDE}(x) = \frac{1}{N h_{MISE}} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h_{MISE}}\right)$$
where $\hat{f}_{KDE}(x)$ denotes the estimated probability density of the sample at the estimation point $x$, $N$ is the total number of samples, $n$ is the number of samples involved in the density estimation, $h_{MISE}$ is the rule-of-thumb bandwidth that minimizes the mean integrated squared error (MISE), and $K(\cdot)$ is the kernel function; the Gaussian kernel is used here.
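As a concrete illustration, the rule-of-thumb estimator can be written as follows. This is a sketch under the assumption of Silverman's rule, $h_{MISE} = 1.06\,\hat{\sigma}\,N^{-1/5}$, and a Gaussian kernel; the function names are our own.

```python
import numpy as np

def silverman_bandwidth(samples):
    """Rule-of-thumb bandwidth h_MISE = 1.06 * sigma * N^(-1/5)."""
    return 1.06 * np.std(samples, ddof=1) * len(samples) ** (-0.2)

def kde_fixed(x, samples, h=None):
    """Fixed-bandwidth Gaussian KDE evaluated at the point x."""
    if h is None:
        h = silverman_bandwidth(samples)
    u = (x - samples) / h
    return np.sum(np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)) / (len(samples) * h)
```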

2.2.9. ABKDE

However, the conventional fixed bandwidth approach exhibits limitations in accurately capturing local data characteristics, often resulting in prediction intervals that are either excessively broad or narrow, thereby compromising prediction reliability and precision. To address this issue, researchers have developed an adaptive bandwidth kernel density estimation (ABKDE) technique. This method dynamically adjusts bandwidth parameters based on estimation locations or sampling points, enabling better adaptation to local data patterns and consequently enhancing both the accuracy and robustness of interval predictions [22], and its basic forms are shown as follows:
$$h(x_j) = \left[\frac{\left(\prod_{i=1}^{N} \hat{f}_{KDE}(x_i)\right)^{1/N}}{\hat{f}_{KDE}(x_j)}\right]^{\alpha} h_{MISE}$$

$$\hat{f}_{ABKDE}(x) = \frac{1}{N} \sum_{j=1}^{n} \frac{1}{h(x_j)} K\!\left(\frac{x - x_j}{h(x_j)}\right)$$

where $h(x_j)$ is the adaptive bandwidth function, $\alpha$ is the sensitivity factor and $\hat{f}_{ABKDE}(x)$ is the ABKDE expression.
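A minimal sketch of the adaptive estimator is given below. It assumes the common two-pass construction: a fixed-bandwidth pilot estimate, then local bandwidths scaled by the pilot density relative to its geometric mean; the exact construction used in [22] may differ.

```python
import numpy as np

def kde_adaptive(x, samples, alpha=0.5):
    """Adaptive-bandwidth Gaussian KDE: each sample x_j receives its own
    bandwidth h(x_j), wider where the pilot density estimate is low."""
    n = len(samples)
    h_mise = 1.06 * np.std(samples, ddof=1) * n ** (-0.2)  # pilot bandwidth

    def pilot(x0):  # fixed-bandwidth pilot estimate at x0
        u = (x0 - samples) / h_mise
        return np.sum(np.exp(-0.5 * u ** 2)) / (n * h_mise * np.sqrt(2 * np.pi))

    f_pilot = np.array([pilot(xj) for xj in samples])
    g = np.exp(np.mean(np.log(f_pilot)))       # geometric mean of pilot densities
    h_j = h_mise * (g / f_pilot) ** alpha      # local bandwidths
    u = (x - samples) / h_j
    return np.mean(np.exp(-0.5 * u ** 2) / (h_j * np.sqrt(2 * np.pi)))
```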

2.3. Evaluation Criteria

2.3.1. Precision Evaluation

In order to test the point forecast validity of the model, this paper evaluates the prediction effect by means of mean absolute error (MAE), mean squared error (MSE) and mean absolute percentage error (MAPE). MAE is the average of the absolute values of the forecast errors, which prevents positive and negative errors from canceling each other out and so accurately reflects the size of the actual forecast error. MSE is the average of the squared errors, which likewise eliminates the sign of the error. MAPE is a relative measure of error size, which eliminates the effect of the level and units of the time series data. The formulas are expressed as follows:
$$MAE = \frac{1}{n} \sum_{i=1}^{n} \left| Y_i - \hat{Y}_i \right|$$

$$MSE = \frac{1}{n} \sum_{i=1}^{n} \left( Y_i - \hat{Y}_i \right)^2$$

$$MAPE = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{Y_i - \hat{Y}_i}{Y_i} \right| \times 100\%$$

where $Y_i$ is the actual value and $\hat{Y}_i$ is the predicted value.
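The three point metrics are straightforward to compute; a short sketch (with our own helper names) follows.

```python
import numpy as np

def mae(y, y_hat):
    """Mean absolute error."""
    return np.mean(np.abs(y - y_hat))

def mse(y, y_hat):
    """Mean squared error."""
    return np.mean((y - y_hat) ** 2)

def mape(y, y_hat):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((y - y_hat) / y)) * 100
```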
It is important to note that we use the MSE as the training objective because its quadratic form is consistent with the convex cost structure of carbon market participants. When forecasts deviate significantly from actual prices, firms face disproportionate losses. Policymakers prioritize tail risk mitigation, which the MSE inherently emphasizes by penalizing large errors more severely than the MAE, in contrast to linear metrics, which place less weight on extreme events critical to market stability.
As for testing the interval forecast validity of the model, this paper evaluates the prediction effect by means of prediction interval normalized average width (PINAW), prediction interval coverage probability (PICP) and composite width and coverage (CWC). PINAW is the average of the ratio of the predicted interval width to the actual data range and is used to measure the relative size of the predicted interval width. PICP is the proportion of the actual value falling within the forecast interval, reflecting the reliability of the forecast interval. CWC is an indicator that comprehensively evaluates the width of the forecast interval and the probability of coverage. The formulas are expressed as follows:
$$PINAW = \frac{1}{nW} \sum_{i=1}^{n} \left( \hat{Y}_{i,u} - \hat{Y}_{i,l} \right)$$

$$PICP = \frac{1}{n} \sum_{i=1}^{n} \alpha_i, \qquad \alpha_i = \begin{cases} 1, & Y_i \in [\hat{Y}_{i,l}, \hat{Y}_{i,u}] \\ 0, & Y_i \notin [\hat{Y}_{i,l}, \hat{Y}_{i,u}] \end{cases}$$

$$CWC = PINAW \left( 1 + \gamma(PICP) \, e^{-\eta (PICP - \mu)} \right), \qquad \gamma(PICP) = \begin{cases} 0, & PICP \geq \mu \\ 1, & PICP < \mu \end{cases}$$

where $[\hat{Y}_{i,l}, \hat{Y}_{i,u}]$ is the forecast interval, $W$ is the range of $Y_i$, and $\eta$ and $\mu$ are penalty parameters, with $\mu$ denoting the confidence level set in the interval prediction process.
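The three interval metrics can be sketched together; the default values of $\mu$ and $\eta$ below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def interval_scores(y, lower, upper, mu=0.95, eta=50.0):
    """PINAW, PICP and CWC for a batch of prediction intervals.
    mu is the nominal confidence level; eta scales the under-coverage penalty."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    W = y.max() - y.min()                            # range of the actual values
    pinaw = np.mean(upper - lower) / W
    picp = np.mean((y >= lower) & (y <= upper))
    gamma = 0.0 if picp >= mu else 1.0               # penalize only under-coverage
    cwc = pinaw * (1.0 + gamma * np.exp(-eta * (picp - mu)))
    return pinaw, picp, cwc
```

When coverage meets the nominal level, $\gamma = 0$ and CWC reduces to PINAW, so narrower intervals are preferred among adequately covering ones.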

2.3.2. Effectiveness Testing

Complementing the six primary indicators, the Diebold–Mariano (DM) statistical test was incorporated to strengthen the evaluation framework. This analytical method serves as a comparative tool for assessing the predictive capabilities of two time series models. The test's fundamental premise establishes the null hypothesis ($H_0$) as the equivalence of predictive performance between the models, with the alternative hypothesis ($H_1$) positing differential predictive effectiveness. The hypothesis test can be expressed as:
$$H_0: \; E\left[ g(e_i^{pro}) - g(e_i^{base}) \right] = 0$$

$$H_1: \; E\left[ g(e_i^{pro}) - g(e_i^{base}) \right] \neq 0$$

where $e_i^{pro}$ and $e_i^{base}$ stand for the prediction errors of the proposed and baseline models, respectively, $g$ denotes a loss function such as the MAE or MSE, and $E$ represents the expectation. The DM statistic can then be formulated as follows:

$$\bar{d} = \frac{1}{n} \sum_{i=1}^{n} \left[ g(e_i^{pro}) - g(e_i^{base}) \right]$$

$$DM = \frac{\bar{d}}{\sqrt{s^2 / n}}$$

where $s^2$ is a consistent estimate of the asymptotic variance of $g(e_i^{pro}) - g(e_i^{base})$. For a given significance level $\alpha$, $H_0$ is not rejected if the statistic falls within the interval $[-Z_{\alpha/2}, Z_{\alpha/2}]$; otherwise, $H_0$ is rejected, meaning that there is a significant performance difference between the proposed forecasting system and the baseline model at significance level $\alpha$.
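A bare-bones version of the test statistic can be sketched as follows; for brevity it uses the plain sample variance of the loss differential rather than an autocorrelation-corrected (HAC) estimate, which a production implementation should prefer.

```python
import math
import numpy as np

def dm_test(e_pro, e_base, loss=np.square):
    """Diebold-Mariano statistic for equal predictive accuracy.
    Uses the simple sample variance of the loss differential (no HAC correction)."""
    d = loss(np.asarray(e_pro)) - loss(np.asarray(e_base))  # g(e_pro) - g(e_base)
    dm = d.mean() / math.sqrt(d.var() / len(d))
    p_value = math.erfc(abs(dm) / math.sqrt(2))             # two-sided N(0,1) p-value
    return dm, p_value
```

A large negative statistic under squared loss indicates that the proposed model's errors are significantly smaller than the baseline's.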

2.4. Feature Ranking

Feature ranking plays a critical role in carbon price forecasting systems. Establishing an appropriate feature hierarchy facilitates deeper comprehension of dataset characteristics and underlying patterns, thereby contributing significantly to the enhancement of predictive models and algorithmic performance. In this paper, the recursive feature elimination (RFE) method of Wrapper is used to rank the importance of different factors that affect the price of carbon. RFE is a model-based feature selection method that evaluates the importance of features through an external machine learning algorithm and recursively removes less important features until a specified number of features are reached. In each iteration of RFE, the model is refitted and the evaluation of feature importance is updated. Unlike filter-based feature selection methods, RFE takes into account the interactions between features and typically uses model-based feature scoring methods internally.
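With scikit-learn, the Wrapper-style RFE described here is a thin layer around any estimator that exposes feature importances; the sketch below uses a gradient-boosting regressor on synthetic data (the paper's estimator and data differ).

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import RFE

# Toy data: 8 candidate drivers, of which only 3 are truly informative.
X, y = make_regression(n_samples=200, n_features=8, n_informative=3,
                       random_state=0)

# Wrapper-style RFE: refit the model, drop the weakest feature, repeat.
selector = RFE(GradientBoostingRegressor(random_state=0),
               n_features_to_select=3, step=1)
selector.fit(X, y)

# ranking_ records the elimination order: retained features are ranked 1,
# and the earlier a feature was eliminated, the larger its rank.
print(selector.ranking_)
```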

3. Framework

This part focuses on the basic framework and the theory involved in the developed hybrid model, including feature selection techniques and prediction methods. This paper proposes a new framework for carbon price prediction, which consists of five parts: (1) data collection, (2) feature selection, (3) point prediction, (4) interval prediction and (5) further discussion. Figure 1 demonstrates the proposed forecasting framework.
(1)
Data collection
In the data collection module, the historical daily carbon price series and the related influencing-factor data of Tianjin and Chongqing are collected, with a total of eight categories in four dimensions.
(2)
Feature selection
In the feature selection module, the mRMR feature selection method is used to reduce the dimensionality of the influencing variables in the two regions. Finally, two datasets are obtained.
(3)
Point prediction
The point prediction module is divided into three steps: individual forecasting, ensemble and performance measurement. The first step aims to validate the selected submodels and determine the best predictive model for each dataset, with three ensemble learning models (GBDT, RF and XGBoost) used as the submodels. In the second step, GRU is used as the nonlinear integration model to integrate the predictions of the three submodels into the final prediction, making full use of each submodel. The constructed integration model is named EGRU. In the third step, four evaluation criteria, namely MAE, MSE, MAPE and the DM test, are used to evaluate the prediction effect.
Here, we use a five-step prediction: the input variables are the carbon prices of the previous five days, and the output variable is the carbon price of the sixth day. This setting captures weekly effects in daily carbon price data.
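Assembled end to end, the sliding-window setup and the two-stage ensemble described above can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: `ExtraTreesRegressor` stands in for XGBoost and an `MLPRegressor` stands in for the GRU combiner, since neither `xgboost` nor a deep learning framework is assumed to be installed.

```python
import numpy as np
from sklearn.ensemble import (ExtraTreesRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.neural_network import MLPRegressor

def make_windows(prices, lookback=5):
    """Five past daily prices -> the sixth day's price."""
    X = np.array([prices[i:i + lookback]
                  for i in range(len(prices) - lookback)])
    return X, np.asarray(prices[lookback:])

# Synthetic stand-in for a daily carbon price series.
rng = np.random.default_rng(0)
t = np.arange(500)
prices = 30.0 + 2.0 * np.sin(t / 10.0) + 0.3 * rng.normal(size=500)

X, y = make_windows(prices, lookback=5)
cut = int(0.8 * len(X))                        # chronological 80/20 split
X_tr, X_te, y_tr, y_te = X[:cut], X[cut:], y[:cut], y[cut:]

# Stage 1: three ensemble submodels each produce a point forecast.
submodels = [GradientBoostingRegressor(random_state=0),
             RandomForestRegressor(random_state=0),
             ExtraTreesRegressor(random_state=0)]
meta_tr = np.column_stack([m.fit(X_tr, y_tr).predict(X_tr) for m in submodels])
meta_te = np.column_stack([m.predict(X_te) for m in submodels])

# Stage 2: a nonlinear combiner maps the three forecasts to the final one.
combiner = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                        max_iter=5000, random_state=0)
combiner.fit(meta_tr, y_tr)
final = combiner.predict(meta_te)
```

The combiner sees only the three submodel forecasts, so it learns how to weight and correct them nonlinearly, which is the role the GRU plays in the EGRU model.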
(4)
Interval prediction
The interval prediction module is divided into two steps. The first step is to use the ABKDE method for interval prediction on the basis of point prediction; the second step is to use PINAW, PICP and CWC to evaluate the prediction effect.
(5)
Further discussions
Beyond conducting a comprehensive error analysis, the findings are systematically examined to validate the enhanced performance of the developed framework. This section establishes comparative evaluations with existing methodologies in carbon price prediction research, quantitatively assesses the relative influence of key determinants on pricing dynamics and clarifies their underlying economic mechanisms. In this module, we utilize the integrated model-based RFE approach to rank the importance of features.

4. Data

The average daily carbon emission trading prices of Tianjin and Chongqing are selected as the research objects for the following economic reasons. Tianjin is an important economic center in northern China, with developed manufacturing, port logistics and modern service industries. The Tianjin Emission Exchange is the first comprehensive environmental rights trading institution in China, and an international trading platform that promotes energy conservation and emission reduction through market-oriented means and financial innovation. Therefore, the study of its carbon emission trading price has an important impact on the low-carbon transition in the northern region. As a typical city in the western region, Chongqing is also an important industrial base in China, with pillar industries such as automobile, electronics and equipment manufacturing, and its industrial carbon emissions are relatively large. Therefore, the study of its carbon emission trading price can provide a reference for other western cities, and has demonstration significance for national industrial emission reduction. The data were obtained from the Wind database.
Feature selection plays a critical role in enhancing the prediction accuracy of carbon emission trading prices, given the multitude of influencing factors involved. The novel index system constructed in this paper is based on the existing carbon price research and is divided into four levels: energy factor, economic factor, environmental factor and exchange rate factor. These factors are described in detail below.
(1)
Energy factor
In this paper, coal, crude oil and natural gas prices are selected to reflect the impact of energy prices on the carbon price. The coal price is the International Petroleum Exchange (IPE) Rotterdam coal futures settlement price, the crude oil price is the Brent crude futures settlement price and the gas price is the New York Mercantile Exchange (NYMEX) natural gas futures settlement price. The above data were obtained from the Wind database.
(2)
Economic factor
China's economy is developing rapidly, and economic factors play a role that cannot be ignored. In this paper, three indexes are chosen as representatives of the economic factor, namely the CSI 300 Index, the S&P 500 Index and the NASDAQ OMX Green Economy Index. The above data were obtained from the Wind database.
(3)
Environmental factor
In terms of environmental factor, this paper selects daily maximum and minimum temperature as the representatives of temperature, and uses daily air quality index (AQI) to reflect air quality. Moreover, the data were obtained from http://www.tianqihoubao.com.
(4)
Exchange rate factor
The exchange rate is also an important factor affecting the carbon emission trading price. This paper chooses the central parity rate of USD to CNY (USDCNYC) as the exchange rate factor. Considering the special status of the US dollar among world currencies, the USDCNYC is a representative choice. The above data were obtained from the Wind database.
In addition, this paper selects the common time interval of each variable on the basis of comprehensive consideration of domestic and foreign public holidays and trading time differences, as well as the impact of the missing value of the variable. If the data of the current day are missing, this paper uses the data of the previous day to supplement, thus obtaining the final dataset. Then, 80% of all the data are selected as the training set and the remaining 20% as the test set. Specifically, the period from 4 January 2021 to 19 August 2024 was chosen to validate the proposed framework for predicting emissions trading prices, with 704 training samples and 176 test samples.
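The preprocessing just described (previous-day fill for missing values, then a chronological 80/20 split, which matches the 704/176 sample counts) can be sketched with pandas; the column name is illustrative.

```python
import pandas as pd

def prepare(df, train_frac=0.8):
    """Fill a missing day with the previous day's value,
    then split chronologically into training and test sets."""
    df = df.ffill()
    cut = int(len(df) * train_frac)
    return df.iloc[:cut], df.iloc[cut:]
```

A chronological split (rather than a random one) is essential here, because a random split would leak future prices into the training set.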
More importantly, the details of the multiple influence factors that affected the China carbon price are shown in Table 1.

5. Carbon Price Forecasting Based on Tianjin Dataset

5.1. Analysis of Affecting Factors for Tianjin Carbon Price

To determine the main external influencing factors of Tianjin emission trading price, this paper uses mRMR to determine the final input variable. The final input variables are shown in Table 2.

5.2. Point Prediction for Tianjin Carbon Price

Advances in computing have enabled the widespread application of artificial intelligence methods in pattern recognition, signal analysis, system modeling and optimization. To verify the preceding experimental results and the practicality of traditional methods and prediction models, and to clearly demonstrate the superiority of the proposed model, three comparative experiments are designed.

5.2.1. Comparison I: Effectiveness of the Feature Screening Method

To further examine the effectiveness of feature screening, the eliminated features were reintroduced into the dataset and the prediction errors recalculated, as shown in Table 3. EGRU denotes the integrated model constructed in this paper, EGRU-all denotes the model before feature screening, and the difference value is the gap between the two. Compared with the results before screening, the MAE, MSE and MAPE values decreased by 0.377, 0.146 and 0.004, respectively, indicating that the mRMR feature screening method adopted in this paper is effective.
Table 4 shows the result of the DM test comparing EGRU with the model before feature screening. The test statistic of 5.439 exceeds the critical value of 2.58, so the null hypothesis is rejected at the 1% significance level. This confirms that feature screening yields a statistically significant improvement in predictive accuracy.
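The DM test used throughout these comparisons can be sketched as follows; the error series here are synthetic, and the simple variance estimate assumes one-step-ahead forecasts (a positive statistic favors the second forecast):

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2):
    """One-step-ahead DM test under squared-error loss.
    A large positive statistic means forecast 2 is significantly more accurate."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    p = 2 * (1 - stats.norm.cdf(abs(dm)))           # two-sided p-value
    return dm, p

rng = np.random.default_rng(2)
e_benchmark = rng.normal(scale=1.8, size=176)  # larger errors: benchmark model
e_proposed = rng.normal(scale=1.0, size=176)   # smaller errors: proposed model
dm_stat, p_value = diebold_mariano(e_benchmark, e_proposed)
print(round(dm_stat, 3), p_value < 0.01)
```

With 176 test observations, as in the Tianjin case, a statistic above 2.58 rejects equal accuracy at the 1% level.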

5.2.2. Comparison II: Superiority of the Proposed Prediction System

Firstly, GBDT, RF, XGBoost, GRU and EGRU are compared to assess the individual ensemble learning models. The MAPE values of the four base models are all below 0.1, so all of them qualify for the subsequent nonlinear integration stage. Moreover, XGBoost outperforms GBDT and RF: although its MAPE equals that of RF (0.089), its MAE and MSE are smaller, so XGBoost is the most effective of the three tree models. This also illustrates the value of using three complementary evaluation metrics.
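The three evaluation metrics are the standard ones; a minimal reference implementation with hand-checkable values (the prices are illustrative):

```python
import numpy as np

def mae(y, yhat):  return np.mean(np.abs(y - yhat))
def mse(y, yhat):  return np.mean((y - yhat) ** 2)
def mape(y, yhat): return np.mean(np.abs((y - yhat) / y))  # reported as a ratio

y_true = np.array([30.0, 31.0, 32.0, 33.0])     # illustrative carbon prices
y_pred = np.array([29.5, 31.5, 31.0, 33.5])
print(mae(y_true, y_pred), mse(y_true, y_pred), round(mape(y_true, y_pred), 4))
# 0.625 0.4375 0.0198
```

Because MSE squares the errors, a model can tie on MAPE yet differ on MSE, which is exactly how XGBoost and RF are separated above.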
In addition, EGRU performs better than all four sub-models. Comparing the unintegrated models with the proposed nonlinear integrated model highlights the advantage of the framework: taking GRU as an example, the MAE, MSE and MAPE values before and after integration differ by 0.341, 0.716 and 0.072, respectively. The combined model therefore beats the base models on every metric, and the proposed two-stage nonlinear integration algorithm clearly improves carbon price prediction.
After the three tree models produce their predictions, the GRU is used to integrate them nonlinearly. To verify the superiority of the GRU, it is compared with RNN, LSTM and CNN combiners; the resulting models are denoted ERNN, ELSTM and ECNN. Among the four nonlinear integrated models, the proposed model is superior on all evaluation metrics, with the best MAE, MSE and MAPE values of 0.522, 0.763 and 0.015, respectively. The error values of the comparison models and the forecasting results for the Tianjin dataset are presented in Table 5 and Figure 2.
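The two-stage integration can be sketched as a stacking procedure. In this dependency-light illustration on synthetic data, ExtraTrees stands in for XGBoost and a small MLP stands in for the GRU combiner, so this is a sketch of the structure rather than the paper's exact models:

```python
import numpy as np
from sklearn.ensemble import (ExtraTreesRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(880, 6))                      # synthetic input factors
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=880)
X_tr, X_te, y_tr, y_te = X[:704], X[704:], y[:704], y[704:]

# Stage one: three tree ensembles each produce a point forecast
# (ExtraTrees stands in for XGBoost to keep the sketch dependency-free).
base = [GradientBoostingRegressor(random_state=0),
        RandomForestRegressor(n_estimators=200, random_state=0),
        ExtraTreesRegressor(n_estimators=200, random_state=0)]
P_tr = np.column_stack([m.fit(X_tr, y_tr).predict(X_tr) for m in base])
P_te = np.column_stack([m.predict(X_te) for m in base])

# Stage two: a small neural network (stand-in for the GRU) learns a
# nonlinear combination of the three stage-one forecasts.
combiner = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(P_tr, y_tr)
mae_combined = mean_absolute_error(y_te, combiner.predict(P_te))
print(round(mae_combined, 3))
```

The design point is that the combiner sees only the three forecasts, not the raw factors, so it learns how to weight the base models nonlinearly rather than re-learning the whole mapping.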
Table 6 summarizes the DM tests comparing EGRU with the comparison models. The test statistics are 5.528, 6.451, 5.318, 5.297, 4.611, 3.887 and 3.264, all exceeding the critical value of 2.58 at the 1% significance level. The null hypothesis is therefore rejected in every case, establishing the predictive superiority of the proposed method at the 99% confidence level.

5.3. Interval Prediction for Tianjin Carbon Price

In this section, interval prediction for the Tianjin dataset is carried out on the basis of the preceding point predictions. The kernel density estimation curve and the comparison between the interval predictions and the true values are shown in Figure 3, and the interval prediction results at different confidence levels are shown in Table 7. Figure 3a plots the kernel density estimate, showing the possible error range and the most probable error value under the ABKDE method and indicating that ABKDE provides accurate probability density estimates. Figure 3b visualizes the prediction intervals against the true carbon prices; the intervals obtained by the proposed framework essentially cover the true values. Moreover, the PICP values of the proposed model are close to the nominal confidence level at every setting, which further demonstrates the effectiveness of the proposed framework.
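The interval construction can be approximated as follows on synthetic data, with plain Gaussian KDE on hold-out residuals standing in for ABKDE. PICP is the fraction of true values falling inside the interval and PINAW is the average width normalized by the target range (CWC then penalizes PINAW when PICP falls below the nominal level):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
y_true = 30 + np.cumsum(rng.normal(scale=0.3, size=276))   # synthetic prices
y_point = y_true + rng.normal(scale=0.5, size=276)         # point forecasts

# Fit a KDE to residuals from a hold-out segment (plain Gaussian KDE as a
# stand-in for ABKDE), then read off symmetric 95% quantiles.
resid = y_point[:100] - y_true[:100]
kde = gaussian_kde(resid)
lo, hi = np.quantile(kde.resample(20000, seed=0).ravel(), [0.025, 0.975])
lower, upper = y_point[100:] - hi, y_point[100:] - lo      # prediction interval

covered = (y_true[100:] >= lower) & (y_true[100:] <= upper)
picp = covered.mean()                                      # coverage probability
pinaw = (upper - lower).mean() / np.ptp(y_true[100:])      # normalized width
print(round(picp, 3), round(pinaw, 3))
```

A well-calibrated 95% interval should give a PICP near 0.95 while keeping PINAW small, which is the trade-off Table 7 reports.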

5.4. Ranking of Affecting Factors for Tianjin Carbon Price

In this section, the RFE method is used to calculate and rank the importance of the factors affecting the Tianjin carbon price. The results are shown in Table 8 and Figure 4. Gas ranks first, accounting for 39.64%, indicating that it has the greatest impact on the Tianjin carbon price. This may be because natural gas occupies an important position in Tianjin's energy consumption structure, especially in industrial production and residential life. When the price of natural gas rises, the cost of reducing carbon emissions by switching to natural gas increases, so more emission allowances are purchased from the market and the carbon price rises with demand. The daily maximum temperature ranks last, accounting for 3.92%, indicating that it has the least impact on the Tianjin carbon price, perhaps because temperature fluctuations in Tianjin are relatively moderate.
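The RFE ranking step can be sketched with scikit-learn; the feature names and signal strengths below are hypothetical, chosen so that "Gas" dominates as in Table 8:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

rng = np.random.default_rng(5)
names = ["Gas", "AQI", "NASDAQ", "S&P500", "USDCNYC", "T_max"]
X = rng.normal(size=(500, 6))
# Hypothetical signal: the first three factors drive the target with
# decreasing strength; the remaining three are noise.
y = 3 * X[:, 0] + 2 * X[:, 1] + X[:, 2] + 0.1 * rng.normal(size=500)

# RFE drops the least important feature each round; ranking_ records the
# elimination order (1 = last survivor = most important).
rfe = RFE(RandomForestRegressor(n_estimators=200, random_state=0),
          n_features_to_select=1).fit(X, y)
order = [name for _, name in sorted(zip(rfe.ranking_, names))]
print(order)  # most to least important
```

Importance proportions such as those in Table 8 can then be obtained by normalizing the fitted estimator's `feature_importances_` to sum to 100%.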

6. Carbon Price Forecasting Based on the Chongqing Dataset

To further demonstrate that the proposed framework achieves good results on different datasets, this section presents predictions based on the Chongqing dataset.

6.1. Analysis of Affecting Factors for Chongqing Carbon Price

The Chongqing dataset is more volatile than the Tianjin dataset, so nine major external factors are selected as its input variables. The final input variables are shown in Table 9.

6.2. Point Prediction for Chongqing Carbon Price

As in the previous section, three comparative experiments are designed to verify the earlier experimental results and the practicality of traditional methods, and to clearly demonstrate the predictive superiority of the proposed model.

6.2.1. Comparison I: Effectiveness of the Feature Screening Method

To further examine the effectiveness of feature screening, the eliminated features were reintroduced into the dataset and the prediction errors recalculated, as shown in Table 10. Compared with the results before screening, the MAE, MSE and MAPE values decreased by 0.028, 0.017 and 0.001, respectively, again indicating that the mRMR feature screening method is effective.
Table 11 shows the result of the DM test comparing EGRU with the model before feature screening. The test statistic of 3.035 exceeds the critical value of 2.58, so the null hypothesis is rejected at the 1% significance level, confirming that feature screening again improves predictive accuracy.

6.2.2. Comparison II: Superiority of the Proposed Prediction System

Firstly, GBDT, RF, XGBoost, GRU and EGRU are compared. The MAPE values of the four base models are all below 0.1, so all of them qualify for the subsequent nonlinear integration stage.
EGRU again outperforms the four sub-models. Taking GRU as an example, the MAE, MSE and MAPE values before and after integration differ by 0.260, 0.435 and 0.065, respectively. The combined model therefore beats the base models on every metric, confirming that the two-stage nonlinear integration algorithm improves carbon price prediction.
Secondly, ERNN, ELSTM, ECNN and EGRU are compared. Among the four nonlinear integrated models, the proposed model is superior on all evaluation metrics, with the best MAE, MSE and MAPE values of 0.725, 1.002 and 0.017, respectively. The error values of the comparison models and the forecasting results for the Chongqing dataset are presented in Table 12 and Figure 5.
Table 13 summarizes the DM tests comparing EGRU with the comparison models. The test statistics of 4.501, 5.088, 4.452, 4.278, 3.396, 3.623 and 3.157 all exceed the critical value of 2.58, so the null hypothesis is rejected at the 1% significance level, demonstrating the enhanced predictive performance of the proposed model relative to the alternatives.

6.3. Interval Prediction for Chongqing Carbon Price

In this section, interval prediction for the Chongqing dataset is carried out on the basis of the preceding point predictions. The kernel density estimation curve and the comparison between the interval predictions and the true values are shown in Figure 6, and the interval prediction results at different confidence levels are shown in Table 14. Figure 6a plots the kernel density estimation curve, and Figure 6b visualizes the prediction intervals against the true carbon prices. The intervals obtained by the proposed framework essentially cover the true values, and the PICP values are close to the nominal confidence levels, further confirming the effectiveness of the framework, consistent with the conclusion for the Tianjin dataset.

6.4. Ranking of Affecting Factors for Chongqing Carbon Price

In this section, the RFE method is likewise used to calculate and rank the importance of the factors affecting the Chongqing carbon price. The results are shown in Table 15 and Figure 7. Crude ranks first, accounting for 19.49%, indicating that it has the greatest impact on the Chongqing carbon price. This may be because Chongqing's industrial base depends on energy supplies, and movements in crude oil prices affect its overall energy market. In particular, energy-intensive and heavy chemical industries play an important role in Chongqing's economy; rising crude prices raise enterprises' energy costs and push up their demand for carbon quotas, driving the carbon price higher. We also find that the importance of crude and USDCNYC to the Chongqing carbon price differs little. The main reason may be that both act on the carbon market indirectly through the common channels of energy costs and economic activity, while being jointly constrained by policy regulation and other market mechanisms.

7. Conclusions

Environmental problems such as climate warming caused by rapid economic development have become a major hidden danger threatening global security and development. The carbon market price, however, is affected by a variety of factors and fluctuates sharply, which makes forecasting highly challenging. Therefore, this paper first uses an mRMR-based feature selection algorithm to identify and screen the factors influencing the carbon emission trading price, then builds a two-stage nonlinear integrated model to produce point predictions, and finally completes interval predictions on the basis of the point predictions. The findings are as follows:
(1)
The analysis of multiple determinants plays a critical role in carbon price prediction research. This paper enhances the carbon market risk evaluation framework through systematic feature selection implementation, while generating actionable insights for risk management strategies. The investigation reveals significant variations in key determinants across different carbon trading markets, with these variations demonstrating strong correlations with regional policy frameworks. Consequently, market participants should develop a thorough understanding of the interactions between carbon pricing mechanisms and various determinants to effectively manage and distribute market risks.
(2)
The neural network-based nonlinear integrated model demonstrates significant predictive capabilities. This paper employs four evaluation metrics, seven reference models, two case analyses, and four combined models to conduct a systematic assessment of the developed hybrid forecasting framework. Comparative analysis reveals that the proposed framework outperforms all reference models across evaluation metrics. These findings suggest that the proposed framework represents a viable methodological approach for carbon price prediction.
(3)
It is of practical significance to use the recursive feature elimination method to rank the importance of different factors affecting carbon emission price. Understanding the impact of each factor helps enterprises to rationally allocate resources according to the importance of each factor and prioritize support to areas that have the greatest impact on the price of carbon emission rights, such as the research and development and promotion of clean energy technologies.
(4)
The paper extends carbon price forecasting to range prediction, offering enhanced informational value for carbon market management. The proposed framework integrates point prediction with interval prediction utilizing ABKDE methodology, evaluated through three distinct performance metrics. Systematic assessment demonstrates the model’s predictive accuracy and effectiveness.
(5)
According to the research results of this paper, energy factors such as coal, oil and natural gas will have a significant impact on the trading price of carbon emissions. In the context where China’s energy consumption is highly dependent on coal, the government should formulate more targeted emission reduction policies, such as adjusting energy prices and optimizing the allocation of carbon quotas. Meanwhile, the energy structure should also be adjusted and optimized. The policy support for the development of clean energy should be reasonably increased, and the proportion of clean energy, such as wind energy, electricity and solar energy, in China’s energy consumption structure should be raised. In addition, strengthening publicity and enhancing the environmental awareness of the public and enterprises is also an important part of promoting green development. It is equally important to establish and improve environmental protection policies, perfect relevant laws and regulations, urge enterprises to improve their technological levels and promote the upgrading of digital infrastructure.
However, this paper has certain limitations:
(1)
The interval prediction methodology employed in this paper, while relatively straightforward in its implementation, presents certain limitations in terms of complexity and sophistication. This constrained approach may, nevertheless, offer potential applicability across various market management domains, including oil and equity markets. Future research could explore more advanced techniques, such as conformal prediction, to enhance the framework’s theoretical robustness and practical utility.
(2)
The current framework exhibits limitations in its data preprocessing capabilities, which could be enhanced through the implementation of noise reduction techniques to improve data quality and minimize irrelevant information interference. Furthermore, the predictive performance might be strengthened by incorporating additional exogenous variables, such as geopolitical and climate risk factors, into the carbon price modeling framework.
(3)
The current prediction framework can also be improved by adopting dynamic training (such as rolling window updates) to enhance market adaptability. In future work, as more post-COVID-19 data accumulate, the cyclic tree model can be utilized to implement rolling window prediction, and a follow-up study comparing the integration of static and dynamic factors can be conducted.
We should draw on the successful experience of the EU’s carbon pricing mechanism, promote the construction of the carbon emissions trading market, establish and improve the accounting methods and standards of the carbon market, perfect the institutional mechanisms of the carbon trading market, steadily promote the development of carbon finance, accelerate the advancement of the carbon benefit mechanism and gradually explore and improve a long-term mechanism for carbon pricing that suits our national conditions.
In summary, this paper contributes to the field by enhancing carbon price prediction accuracy while offering a methodological framework applicable to time series analysis in various market management contexts. Future investigations could focus on the integration of advanced deep learning architectures and sophisticated data processing techniques to optimize model performance. Additionally, exploring diverse practical applications may facilitate the realization of the framework’s full potential.

Author Contributions

R.G.: conceptualization, methodology, data curation, software, formal analysis, visualization, writing—original draft preparation. J.S.: conceptualization, supervision, project administration, writing—reviewing and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by grants from the National Natural Science Foundation of China (No. 72061020); the Natural Science Foundation of Gansu Province of China (25JRRA979); the Financial Statistics Research Integration Team of Lanzhou University of Finance and Economics (XKKYRHTD202304).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The proposed forecasting framework.
Figure 2. The forecasting results of Tianjin dataset.
Figure 3. The interval prediction result of Tianjin dataset at 95% confidence level.
Figure 4. The proportion of feature importance of the Tianjin dataset.
Figure 5. The forecasting results of the Chongqing dataset.
Figure 6. The interval prediction results of the Chongqing dataset at 95% confidence level.
Figure 7. The proportion of feature importance in the Chongqing dataset.
Table 1. The details of the multiple influence factors that affected the carbon price.
Factor Group | Factor | Acronym
Energy factor | IPE Rotterdam coal futures settlement price | Coal
Energy factor | Brent crude futures settlement price | Crude
Energy factor | NYMEX natural gas futures settlement price | Gas
Economic factor | CSI 300 Index | CSI300
Economic factor | S&P 500 Index | S&P500
Economic factor | NASDAQ OMX Green Economy Index | NASDAQ
Environmental factor | Maximum temperature | T_max
Environmental factor | Minimum temperature | T_min
Environmental factor | Regional AQI | AQI
Exchange rate factor | Central parity rate of USD to CNY | USDCNYC
Table 2. Influencing factors of Tianjin dataset.
Influencing Factor | Selection Result
Coal | ✗
Crude | ✗
Gas | ✓
CSI300 | ✗
S&P500 | ✓
NASDAQ | ✓
T_max | ✓
T_min | ✗
AQI | ✓
USDCNYC | ✓
Table 3. Error values of nonlinear integrated models of Tianjin dataset before data screening.
Model | MAE | MSE | MAPE
EGRU-all | 0.899 | 0.909 | 0.019
EGRU | 0.522 | 0.763 | 0.015
Difference value | 0.377 | 0.146 | 0.004
Table 4. The DM test result of Tianjin dataset.
Model | DM
EGRU-all | 5.439 **
Note: ** denotes significance at the 1% level, with critical value Z_{α/2} = 2.58.
Table 5. Error values of the contrast models of Tianjin dataset.
Model | MAE | MSE | MAPE
GBDT | 0.903 | 1.609 | 0.091
RF | 0.997 | 1.962 | 0.089
XGBoost | 0.879 | 1.570 | 0.089
GRU | 0.863 | 1.479 | 0.087
ERNN | 0.773 | 1.054 | 0.022
ELSTM | 0.623 | 0.862 | 0.018
ECNN | 0.566 | 0.814 | 0.016
EGRU | 0.522 | 0.763 | 0.015
Table 6. The DM test results of Tianjin dataset.
Model | DM
GBDT | 5.528 **
RF | 6.451 **
XGBoost | 5.318 **
GRU | 5.297 **
ERNN | 4.611 **
ELSTM | 3.887 **
ECNN | 3.264 **
Note: ** denotes significance at the 1% level, with critical value Z_{α/2} = 2.58.
Table 7. Model interval prediction results based on different confidence intervals of the Tianjin dataset.
Confidence Interval | PINAW | PICP | CWC
α = 0.05 | 0.03 | 0.98 | 0.03
α = 0.10 | 0.02 | 0.94 | 1.02
α = 0.25 | 0.01 | 0.87 | 1.01
Table 8. The ranking order of the external influence factors of the Tianjin dataset.
External Factors | Proportion (%) | Ranking Order
Gas | 39.64 | 1
S&P500 | 9.07 | 4
NASDAQ | 13.81 | 3
T_max | 3.92 | 6
AQI | 27.25 | 2
USDCNYC | 6.31 | 5
Table 9. Influencing factors in the Chongqing dataset.
Influencing Factor | Selection Result
Coal | ✓
Crude | ✓
Gas | ✓
CSI300 | ✓
S&P500 | ✓
NASDAQ | ✓
T_max | ✗
T_min | ✓
AQI | ✓
USDCNYC | ✓
Table 10. Error values of nonlinear integrated models of the Chongqing dataset before data screening.
Model | MAE | MSE | MAPE
EGRU-all | 0.753 | 1.019 | 0.018
EGRU | 0.725 | 1.002 | 0.017
Difference value | 0.028 | 0.017 | 0.001
Table 11. The DM test result of Chongqing dataset before feature selection.
Model | DM
EGRU-all | 3.035 **
Note: ** denotes significance at the 1% level, with critical value Z_{α/2} = 2.58.
Table 12. Error values of the contrast models of the Chongqing dataset.
Model | MAE | MSE | MAPE
GBDT | 1.077 | 1.539 | 0.096
RF | 1.105 | 1.628 | 0.097
XGBoost | 1.032 | 1.481 | 0.095
GRU | 0.985 | 1.437 | 0.082
ERNN | 0.798 | 1.079 | 0.019
ELSTM | 0.781 | 1.036 | 0.018
ECNN | 0.766 | 1.007 | 0.018
EGRU | 0.725 | 1.002 | 0.017
Table 13. The DM test results of the Chongqing dataset.
Model | DM
GBDT | 4.501 **
RF | 5.088 **
XGBoost | 4.452 **
GRU | 4.278 **
ERNN | 3.906 **
ELSTM | 3.623 **
ECNN | 3.157 **
Note: ** denotes significance at the 1% level, with critical value Z_{α/2} = 2.58.
Table 14. Model interval prediction results based on different confidence intervals of the Chongqing dataset.
Confidence Interval | PINAW | PICP | CWC
α = 0.05 | 0.05 | 0.98 | 0.05
α = 0.10 | 0.03 | 0.95 | 0.03
α = 0.25 | 0.02 | 0.91 | 0.02
Table 15. The ranking order of the external influence factors in the Chongqing dataset.
External Factors | Proportion (%) | Ranking Order
Coal | 15.87 | 3
Crude | 19.49 | 1
Gas | 12.63 | 4
CSI300 | 9.14 | 5
S&P500 | 7.18 | 6
NASDAQ | 6.35 | 7
T_min | 4.82 | 9
AQI | 6.27 | 8
USDCNYC | 18.25 | 2

Share and Cite

MDPI and ACS Style

Gao, R.; Sun, J. A Novel Forecasting Framework for Carbon Emission Trading Price Based on Nonlinear Integration. Mathematics 2025, 13, 1624. https://doi.org/10.3390/math13101624

