Article

Novel Conformable Fractional Order Unbiased Kernel Regularized Nonhomogeneous Grey Model and Its Applications in Energy Prediction

1 School of Statistics and Mathematics, Shandong University of Finance and Economics, Jinan 250014, China
2 School of Statistics and Mathematics, Shanghai Lixin University of Accounting and Finance, Shanghai 201209, China
* Author to whom correspondence should be addressed.
Systems 2025, 13(7), 527; https://doi.org/10.3390/systems13070527
Submission received: 5 April 2025 / Revised: 11 May 2025 / Accepted: 20 June 2025 / Published: 1 July 2025

Abstract

Grey models have attracted considerable attention as a time series forecasting tool in recent years. Nevertheless, the linear characteristics of the differential equations on which traditional grey models rely frequently result in inadequate predictive accuracy and applicability when addressing intricate nonlinear systems. This study introduces a conformable fractional order unbiased kernel-regularized nonhomogeneous grey model (CFUKRNGM) based on statistical learning theory to address these limitations. The proposed model first uses a conformable fractional-order accumulation operator to derive distribution information from historical data. A novel regularization problem is then formulated, thereby eliminating the bias term from the kernel-regularized nonhomogeneous grey model (KRNGM). Parameter estimation for the CFUKRNGM model requires solving only a linear system of lower order than that of the KRNGM model, and its hyperparameters are automatically calibrated through the Bayesian optimization algorithm. Experimental results show that the CFUKRNGM model achieves superior prediction accuracy and greater generalization performance compared to both the KRNGM and traditional grey models.

1. Introduction

Numerous nations worldwide are presently seeing a transformation in their energy consumption frameworks. A multitude of scholars have exerted considerable effort to devise energy forecasting methodologies intended to precisely estimate national energy use. Lu et al. conducted a literature survey on building energy prediction using artificial neural networks (ANNs), highlighting their potential to improve prediction accuracy and efficiency in energy management systems [1]. Similarly, Wang et al. studied the application of Random Forest algorithms for hourly building energy prediction, revealing their higher precision compared to standard techniques [2]. Fan et al. proposed deep learning-based feature engineering methods to enhance building energy prediction models, leveraging neural networks to automatically extract features from large-scale data, resulting in significant improvements in forecasting accuracy [3]. Additionally, Cammarano et al. introduced the Pro-Energy model, combining solar and wind energy harvesting with wireless sensor networks and offering a promising solution for energy prediction in renewable energy systems [4]. Nevertheless, the swift acceleration of economic expansion renders several statistical models inadequate for managing the significant uncertainty in energy consumption predictions. Indeed, the economic frameworks of numerous nations worldwide have experienced substantial transformations in recent years, rendering only the most current data dependable for forecasting energy consumption. Consequently, numerous researchers have adopted forecasting models adept at managing limited sample sizes [5,6,7,8].
Grey system theory, initially introduced by Deng [9] in 1982, has gained substantial popularity in forecasting and decision-making research due to its strong performance under uncertain conditions and limited data scenarios. Among various grey models, the GM(1,1) model (or GM for short) is the most basic and widely adopted, recognized for delivering reliable forecasts even with minimal data (as few as four points) [10]. Owing to their effectiveness in handling small datasets, grey prediction models have found widespread applications in diverse fields, including traffic safety analysis [11], energy production prediction [12], energy economics forecasting [13], as well as environmental studies [14,15].
Nevertheless, traditional grey models rely on integer order accumulation, which can be inadequate for systems exhibiting strong memory effects or complex dynamics. To address this limitation, fractional calculus—capable of representing long-memory processes and refined dynamical details—has been gradually introduced into grey modeling. In 2013, Wu et al. [16] pioneered the integration of fractional order accumulation (abbreviated as FOA) into fractional grey models (abbreviated as FGM), thereby facilitating effective modeling of nonlinear sequences without additional nonlinear equations. This approach subsequently demonstrated robust performance in emission forecasting [17], clean energy production [18], space-floating target trajectories [19], and building settlement monitoring [20]. Concurrently, researchers have proposed a variety of fractional grey models for different application contexts. For example, Gao et al. [21] proposed a discrete fractional grey model aimed at forecasting China’s CO2 emissions. Ma et al. [22] employed conformable fractional derivatives and a brute-force approach for optimizing the order. Duan et al. [23] applied particle swarm optimization (abbreviated as PSO) to enhance fractional grey models for forecasting China’s crude oil consumption. Lin et al. [24] introduced fractional operators into a time-delay polynomial grey model, improving its flexibility. Wu et al. [25] adjusted the order of fractional accumulations in GMC(1,N) to forecast electric power consumption in Shandong province. These findings collectively underscore the pronounced advantages of fractional grey models in dealing with nonlinear and nonstationary sequences.
On the other hand, conventional grey models still encounter challenges when subjected to strong nonlinearities or external disturbances. In response, scholars have put forward the nonhomogeneous grey model (abbreviated as NGM) and its various extensions [26,27,28], incorporating time-dependent or other prior information into the whitening equation to capture more complex external drivers. However, adding linear terms alone may not thoroughly capture higher-order nonlinear structures in the data. To address this shortcoming, Ma et al. [29] introduced a kernel-based regularization scheme for nonhomogeneous grey models (abbreviated as KRNGM), which maps the linear model into a high-dimensional feature space, thereby flexibly representing nonlinear dynamics. This concept is grounded in Vapnik’s work on support vector machines (abbreviated as SVMs) [30], which has already gained widespread traction in machine learning [31], clustering [32], and principal component analysis [33]. Furthermore, the least squares support vector machine (abbreviated as LS-SVM) proposed by Suykens et al. [34] has simplified kernel-based methods, enabling successful applications in image classification [35], hydropower consumption forecasting [36], and natural disaster prediction [37].
Despite these developments, existing kernel-based regularized nonhomogeneous grey models still treat the bias term as a separately estimated parameter. This approach can not only complicate the model structure but also introduce biases in parameter estimation, thereby reducing its generalization capability. In order to enhance its practical utility, this study draws on the unbiased perspective proposed by Wang et al. [38] for LS-SVM, which jointly regularizes the bias term and kernel mapping parameters to effectively suppress uncertainties arising from the bias. This unbiased strategy has been extended to multiple applications in computer vision and machine learning. For instance, Jeon et al. [39] employed unbiased learning methods to mitigate convolutional neural networks’ (abbreviated as CNN) reliance on biased training data, while de Mello et al. [40] presented an innovative active learning strategy to curtail the detrimental effects of biased sampling on model performance.
This research advances current understanding by integrating conformable fractional order accumulation into the kernel-regularized nonhomogeneous grey model. The outcome is the conformable fractional unbiased kernel-regularized nonhomogeneous grey model (CFUKRNGM). This methodology seeks to enhance predictive precision for intricate systems through a tunable-order accumulation operator, which enables the modulation of historical data weighting. Furthermore, unbiased regularization is employed to alleviate the overfitting induced by the bias term. Oil production projections serve as a case study for evaluating the model's efficacy, illustrating its practical applicability.

2. Kernel-Regularized Nonhomogeneous Grey Model

2.1. Mathematical Basis of GM(1,1)

Grey system theory asserts that real observational data frequently encompass noise, random variations, and other uncertainties, complicating the direct formulation of a dynamic equation from the raw dataset. To resolve this, grey models utilize the accumulation generation operation (AGO), which converts the differential properties of the discrete series into a more uniform sequence, thus reducing significant oscillations that may exist in the raw data. Let the original sequence be specified as
$X^{(0)} = \left( x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n) \right),$
and define its first-order accumulated operation (abbreviated as 1-AGO) sequence as
$X^{(1)} = \left( x^{(1)}(1), x^{(1)}(2), \ldots, x^{(1)}(n) \right),$
where
$x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i), \quad k = 1, 2, \ldots, n.$
Through the 1-AGO operation, the original series is smoothed to some extent, providing a more stable data foundation for subsequent differential equation fitting.
Once the relatively smooth sequence X ( 1 ) is obtained, the core assumption in grey models is that this accumulated sequence can be described by a first-order linear differential equation, namely,
$\frac{dx^{(1)}(t)}{dt} + a x^{(1)}(t) = b,$
where a and b are constants to be estimated. This equation is referred to as the whitening equation of GM(1,1), which effectively maps a discrete, noise-perturbed sequence onto a relatively simple continuous dynamic system.
To relate Equation (4) to discrete observations, grey modeling introduces the concept of background values by setting
$z^{(1)}(k) = \frac{1}{2}\left[ x^{(1)}(k) + x^{(1)}(k-1) \right], \quad k = 2, 3, \ldots, n.$
Here, $z^{(1)}(k)$ approximately represents the average level of $x^{(1)}(t)$ over the interval $[k-1, k]$. Meanwhile, at discrete time $t = k$, the differential equation is approximately satisfied by
$\int_{k-1}^{k} dx^{(1)}(t) = x^{(1)}(k) - x^{(1)}(k-1) = x^{(0)}(k),$
which allows Equation (4) to be discretized at k as
$x^{(0)}(k) + a z^{(1)}(k) = b, \quad k = 2, 3, \ldots, n.$
This constitutes the GM(1,1) model in discrete form. Rearranging Equation (7) yields the matrix equation
$Y = B H,$
where
$Y = \begin{bmatrix} x^{(0)}(2) \\ x^{(0)}(3) \\ \vdots \\ x^{(0)}(n) \end{bmatrix}, \quad B = \begin{bmatrix} -z^{(1)}(2) & 1 \\ -z^{(1)}(3) & 1 \\ \vdots & \vdots \\ -z^{(1)}(n) & 1 \end{bmatrix}, \quad H = \begin{bmatrix} a \\ b \end{bmatrix}.$
By applying the least squares method, one obtains
$H = \left( B^T B \right)^{-1} B^T Y,$
thus determining the estimates of a and b. Consequently, the whitening equation of GM(1,1) captures, to a certain extent, the continuous evolution pattern of the sequence $X^{(1)}$.
Once a and b are determined, one can analytically solve the continuous form of Equation (4):
$\hat{x}^{(1)}(t) = \left( x^{(0)}(1) - \frac{b}{a} \right) e^{-a(t-1)} + \frac{b}{a}.$
For discrete time k, substituting $t = k$ and using $x^{(1)}(1) = x^{(0)}(1)$ yields
$\hat{x}^{(1)}(k) = \left( x^{(0)}(1) - \frac{b}{a} \right) e^{-a(k-1)} + \frac{b}{a}.$
However, GM(1,1) makes forecasts for the accumulated sequence $X^{(1)}$. To revert to the original scale $X^{(0)}$, one must perform an inverse accumulation operation (abbreviated as IAGO):
$\hat{x}^{(0)}(k) = \hat{x}^{(1)}(k) - \hat{x}^{(1)}(k-1).$
This step yields the forecasted value $\hat{x}^{(0)}(k)$ at time k, thereby completing the overall prediction procedure.
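To make the procedure concrete, the following minimal Python sketch implements the GM(1,1) steps above (1-AGO, background values, least squares estimation of a and b, the time response, and the IAGO) with NumPy. It is an illustrative reading of the equations, not the authors' released code, and all function and variable names are ours.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Fit GM(1,1) to the raw series x0 and forecast `horizon` extra steps."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)

    # 1-AGO: x1(k) = sum_{i<=k} x0(i)
    x1 = np.cumsum(x0)

    # Background values z1(k) = 0.5 * (x1(k) + x1(k-1)), k = 2..n
    z1 = 0.5 * (x1[1:] + x1[:-1])

    # Least squares for H = [a, b]^T in Y = B H
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]

    # Time response of the whitening equation, then inverse accumulation (IAGO)
    k = np.arange(1, n + horizon + 1)
    x1_hat = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])

# Example: fit on a short series and forecast 3 steps ahead
print(gm11_forecast([0.71, 0.75, 0.60, 0.62, 0.55, 0.48], horizon=3))
```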
In GM(1,1), only the linear term $a x^{(1)}(t)$ and a constant term b are considered. To accommodate strong nonlinearities or external disturbances, an additional nonhomogeneous term $f(t)$ is typically introduced, thus giving the whitening equation
$\frac{dx^{(1)}(t)}{dt} + a x^{(1)}(t) = f(t) + c,$
which corresponds to the NGM(1,1,k,c) model. It allows the incorporation of more external drivers or nonlinear information under the grey model framework, thereby enhancing the depiction of complex systems. Similar to GM(1,1), discretizing $x^{(1)}(t)$ and using background values and least squares (or regularization) enables the estimation of a, c, and the parameters of $f(t)$. When $f(t)$ is unknown and strongly nonlinear, kernel methods (discussed in the following subsection) can be applied to approximate it.

2.2. Kernel-Regularized Nonhomogeneous Grey Model

As previously mentioned, the NGM(1,1,k,c) model maintains a fundamentally linear structure. To incorporate nonlinear modeling capability, this section introduces kernel regularization into the nonhomogeneous grey model framework, starting from the generalized whitening equation
$\frac{dx^{(1)}(t)}{dt} + a x^{(1)}(t) = f(t) + c,$
where $f(t)$ represents a nonlinear transformation with respect to t. If $f(t)$ is set as the identity function ($f(t) = t$), the model reduces to the NGM(1,1,k,c) form.
The whitening equation of the KRNGM is
$\frac{dx^{(1)}(t)}{dt} + a x^{(1)}(t) = w^T \varphi(t) + c.$
Here, $\varphi(t)$ represents a nonlinear transformation mapping inputs into a higher-dimensional feature space, while w denotes the corresponding coefficient vector. After discretizing across the interval $[k-1, k]$ using the trapezoidal rule, the equation becomes:
$x^{(0)}(k) + a z^{(1)}(k) = w^T \phi(k) + c,$
where
$z^{(1)}(k) = \frac{1}{2}\left[ x^{(1)}(k) + x^{(1)}(k-1) \right] \quad \text{and} \quad \phi(k) = \frac{1}{2}\left[ \varphi(k) + \varphi(k-1) \right].$
To estimate the parameters of the KRNGM, the following optimization objective is defined:
$\min_{a, w, c} \; \frac{a^2}{2} + \frac{w^T w}{2} + \frac{\gamma}{2} \sum_{j=2}^{n} e_j^2, \quad \text{s.t.} \quad e_j = x^{(0)}(j) + a z^{(1)}(j) - w^T \phi(j) - c, \quad j = 2, \ldots, n,$
where γ is a regularization coefficient that balances the model's smoothness and fitting error. To solve this quadratic program with linear constraints, the Lagrangian is formulated as
$L = \frac{a^2}{2} + \frac{w^T w}{2} + \frac{\gamma}{2} \sum_{j=2}^{n} e_j^2 + \sum_{j=2}^{n} \lambda_j \left[ x^{(0)}(j) + a z^{(1)}(j) - w^T \phi(j) - c - e_j \right].$
According to the KKT conditions, one obtains
$\frac{\partial L}{\partial a} = 0 \Rightarrow a = -\sum_{j=2}^{n} \lambda_j z^{(1)}(j), \quad \frac{\partial L}{\partial w} = 0 \Rightarrow w = \sum_{j=2}^{n} \lambda_j \phi(j), \quad \frac{\partial L}{\partial c} = 0 \Rightarrow \sum_{j=2}^{n} \lambda_j = 0, \quad \frac{\partial L}{\partial e_j} = 0 \Rightarrow e_j = \frac{\lambda_j}{\gamma}, \quad \frac{\partial L}{\partial \lambda_j} = 0 \Rightarrow x^{(0)}(j) + a z^{(1)}(j) - w^T \phi(j) - c = e_j.$
By eliminating a, w, and $e_j$ from the KKT conditions, one obtains the following linear system:
$\begin{bmatrix} 0 & \mathbf{1}_{n-1}^T \\ \mathbf{1}_{n-1} & \Omega + \gamma^{-1} I_{n-1} \end{bmatrix} \begin{bmatrix} c \\ \lambda \end{bmatrix} = \begin{bmatrix} 0 \\ Y \end{bmatrix},$
where
$\mathbf{1}_{n-1} = [1, 1, \ldots, 1]^T, \quad \Omega = \left[ \phi^T(i)\phi(j) + z^{(1)}(i) z^{(1)}(j) \right]_{(n-1) \times (n-1)}, \quad \lambda = [\lambda_2, \lambda_3, \ldots, \lambda_n]^T, \quad Y = [x^{(0)}(2), x^{(0)}(3), \ldots, x^{(0)}(n)]^T,$
and $I_{n-1}$ is the $(n-1)$-dimensional identity matrix.
By solving the above system, one obtains $\lambda_j$ and c. Using the first relation in the KKT conditions then provides the parameter a. If the kernel function $K(\cdot, \cdot)$ satisfies $\varphi(i) \cdot \varphi(j) = K(i, j)$, explicit construction of the $\varphi$ vectors can be avoided. Through this regularization process, one thereby determines the main parameters of the KRNGM model.
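As a concrete illustration, the sketch below assembles Ω with an RBF kernel, solves the block linear system for c and λ, and recovers a from the first KKT relation. It is a simplified, illustrative implementation of the KRNGM estimation step, not the reference code, and the helper names (rbf_kernel, krngm_fit) are assumptions of ours.

```python
import numpy as np

def rbf_kernel(i, j, sigma):
    """K(i, j) = exp(-(i - j)^2 / (2 * sigma^2)) on scalar time indices."""
    return np.exp(-((i - j) ** 2) / (2.0 * sigma ** 2))

def krngm_fit(x0, gamma, sigma):
    """Estimate (a, c, lambda) of the KRNGM from the raw series x0."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background values z^(1)(k), k = 2..n

    idx = np.arange(2, n + 1)                 # discrete times 2..n
    I, J = np.meshgrid(idx, idx, indexing="ij")
    # phi(i).phi(j) = 0.25 * (K(i,j) + K(i-1,j) + K(i,j-1) + K(i-1,j-1))
    phi_dot = 0.25 * (rbf_kernel(I, J, sigma) + rbf_kernel(I - 1, J, sigma)
                      + rbf_kernel(I, J - 1, sigma) + rbf_kernel(I - 1, J - 1, sigma))
    Omega = phi_dot + np.outer(z1, z1)

    # Block system [[0, 1^T], [1, Omega + I/gamma]] [c; lam] = [0; Y]
    ones = np.ones(n - 1)
    A = np.zeros((n, n))
    A[0, 1:] = ones
    A[1:, 0] = ones
    A[1:, 1:] = Omega + np.eye(n - 1) / gamma
    rhs = np.concatenate([[0.0], x0[1:]])
    sol = np.linalg.solve(A, rhs)
    c, lam = sol[0], sol[1:]
    a = -np.dot(lam, z1)                      # from the first KKT condition
    return a, c, lam
```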

2.3. Time-Response Series of the KRNGM

Once the parameters of the KRNGM are determined, the forecasted sequences $\hat{X}^{(1)}$ and $\hat{X}^{(0)}$ can be calculated. The initial condition is set as $\hat{x}^{(1)}(1) = x^{(1)}(1) = x^{(0)}(1)$; then
$\hat{x}^{(1)}(t) = x^{(0)}(1) e^{-a(t-1)} + \int_{1}^{t} e^{-a(t-\tau)} \Psi(\tau)\, d\tau,$
where $\Psi(t) = w^T \varphi(t) + c$. According to $w = \sum_{j=2}^{n} \lambda_j \phi(j)$, one obtains
$w^T \varphi(t) = \frac{1}{2} \sum_{j=2}^{n} \lambda_j \left[ \varphi(j) + \varphi(j-1) \right] \cdot \varphi(t) = \frac{1}{2} \sum_{j=2}^{n} \lambda_j \left[ K(j, t) + K(j-1, t) \right].$
Applying the trapezoidal rule to the integral term in Equation (23) yields
$\hat{x}^{(1)}(k) = x^{(0)}(1) e^{-a(k-1)} + \frac{1}{2} \sum_{\tau=2}^{k} \left[ e^{-a(k-\tau)} \Psi(\tau) + e^{-a(k-\tau+1)} \Psi(\tau-1) \right].$
Finally, to obtain the predicted original series $\hat{X}^{(0)}$, one applies the difference:
$\hat{x}^{(0)}(k) = \hat{x}^{(1)}(k) - \hat{x}^{(1)}(k-1).$
Through these steps, the KRNGM completes its overall prediction process.
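The forecasting steps can be sketched in the same spirit, reusing the rbf_kernel helper and the (a, c, λ) returned by the hypothetical krngm_fit above; again, this is an illustrative sketch rather than the released implementation.

```python
import numpy as np

def krngm_predict(x0, a, c, lam, sigma, horizon):
    """Reconstruct x^(1) via the discretized time response and return x^(0) forecasts."""
    n = len(x0)
    idx = np.arange(2, n + 1)

    def psi(t):
        # Psi(t) = w^T phi(t) + c = 0.5 * sum_j lam_j (K(j, t) + K(j-1, t)) + c
        return 0.5 * np.sum(lam * (rbf_kernel(idx, t, sigma)
                                   + rbf_kernel(idx - 1, t, sigma))) + c

    total = n + horizon
    x1_hat = np.empty(total)
    x1_hat[0] = x0[0]
    for k in range(2, total + 1):
        taus = np.arange(2, k + 1)
        x1_hat[k - 1] = (x0[0] * np.exp(-a * (k - 1))
                         + 0.5 * sum(np.exp(-a * (k - t)) * psi(t)
                                     + np.exp(-a * (k - t + 1)) * psi(t - 1)
                                     for t in taus))
    # Inverse accumulation back to the original scale
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
```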

3. Proposed Conformable Fractional Unbiased Kernel Regularized Nonhomogeneous Grey Model

This section introduces an unbiased kernel-regularized nonhomogeneous grey model with conformable fractional order (CFUKRNGM). The CFUKRNGM combines the advantages of the conformable fractional grey model (CFGM) and kernel-regularized models (KRNGM), utilizing an unbiased parameter estimation approach. The suggested CFUKRNGM improves conventional grey models in the analysis of intricate dynamic systems. It achieves this by integrating the flexible memory attributes of conformable fractional-order accumulation. Furthermore, it utilizes the nonlinear modeling potential afforded by kernel regularization.

3.1. The Definition of Conformable Fractional Accumulation and Difference

This subsection presents the definitions of conformable fractional accumulation and difference.
Definition 1.
Given a function f, the α-order conformable fractional accumulation (abbreviated as α-CFA) [28] of f is defined as
$\nabla^{\alpha} f(k) = \sum_{i=1}^{k} \binom{k-i+\lceil \alpha \rceil - 1}{k-i} \frac{f(i)}{i^{\lceil \alpha \rceil - \alpha}}, \quad \alpha > 0,$
where $\binom{k-i+\lceil \alpha \rceil - 1}{k-i} = \frac{(k-i+\lceil \alpha \rceil - 1)!}{(k-i)!\,(\lceil \alpha \rceil - 1)!}$, and $\lceil \cdot \rceil$ denotes the ceiling function, i.e., the smallest integer not less than α.
Definition 2.
Given a function f, the α-order conformable fractional difference (abbreviated as α-CFD) [28] of f is defined as
$\Delta^{\alpha} f(k) = k^{\lceil \alpha \rceil - \alpha} \sum_{i=1}^{k} (-1)^{k-i} \binom{\lceil \alpha \rceil}{k-i} f(i), \quad \alpha > 0,$
where $\binom{\lceil \alpha \rceil}{k-i} = \frac{\lceil \alpha \rceil !}{(\lceil \alpha \rceil - k + i)!\,(k-i)!}$.
The conformable fractional accumulation and the conformable fractional difference satisfy the following relationship:
$\Delta^{\alpha} \nabla^{\alpha} f(k) = f(k), \quad \alpha > 0.$
Definition 1 is utilized to calculate the cumulative value of the original sequence, whereas Definition 2 recovers values on the original scale from the model's fitted sequence. This is significant because, compared with conventional fractional order accumulation (FOA), both the conformable fractional accumulation and difference are more straightforward to implement.
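For 0 < α ≤ 1, which is the range of accumulation orders used in the experiments below, the α-CFA reduces to a weighted cumulative sum and the α-CFD to a weighted first difference. A small illustrative sketch (our helper names) is given below; the closing assertion checks the inverse relationship stated above.

```python
import numpy as np

def cfa(x, alpha):
    """Conformable fractional accumulation (alpha-CFA) for 0 < alpha <= 1:
    nabla^alpha x(k) = sum_{i<=k} x(i) / i^(1 - alpha)."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, len(x) + 1)
    return np.cumsum(x / i ** (1.0 - alpha))

def cfd(xa, alpha):
    """Conformable fractional difference (alpha-CFD) for 0 < alpha <= 1:
    Delta^alpha xa(k) = k^(1 - alpha) * (xa(k) - xa(k-1)), with xa(0) := 0."""
    xa = np.asarray(xa, dtype=float)
    k = np.arange(1, len(xa) + 1)
    return k ** (1.0 - alpha) * np.diff(np.concatenate([[0.0], xa]))

# Round trip: the CFD inverts the CFA, recovering the original series
x = np.array([0.7137, 0.7470, 0.5997, 0.6244])
assert np.allclose(cfd(cfa(x, 0.98), 0.98), x)
```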

3.2. The Conformable Fractional Unbiased Kernel Regularized Nonhomogeneous Grey Model

Based on the definitions of α -CFA and α -CFD, the CFUKRNGM model is formulated, and the corresponding implementation steps are described as follows.
For the given initial sequence $X^{(0)} = \left( x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n) \right)$, we first define its α-CFA sequence as $X^{(\alpha)} = \left( x^{(\alpha)}(1), x^{(\alpha)}(2), \ldots, x^{(\alpha)}(n) \right)$, with the following relationship:
$x^{(\alpha)}(k) = \nabla^{\alpha} x^{(0)}(k) = \begin{cases} \sum_{j=1}^{k} \dfrac{x^{(0)}(j)}{j^{\lceil \alpha \rceil - \alpha}}, & 0 < \alpha \le 1, \\ \sum_{j=1}^{k} x^{(\alpha-1)}(j), & \alpha > 1. \end{cases}$
Similar to the KRNGM model, the CFUKRNGM is expressed as
$\frac{dx^{(\alpha)}(t)}{dt} + a x^{(\alpha)}(t) = w^T \varphi(t) + c.$
The differential Equation (31) is referred to as the whitening equation of the CFUKRNGM model. When $\alpha = 1$, it reduces to the KRNGM model proposed by Ma et al. [29]. Specifically, if $\varphi$ is defined as an identity map and $\alpha = 1$, the CFUKRNGM model (31) simplifies to the NGM(1,1,k,c) model.

3.3. Parameter Estimation for the CFUKRNGM Model

To determine the parameters of the CFUKRNGM model, we begin by discretizing the differential Equation (31). By integrating (31) over the interval $[k-1, k]$, the resulting expression is obtained as follows:
$\int_{k-1}^{k} dx^{(\alpha)}(t) + a \int_{k-1}^{k} x^{(\alpha)}(t)\, dt = w^T \int_{k-1}^{k} \varphi(t)\, dt + \int_{k-1}^{k} c\, dt.$
Note that $\int_{k-1}^{k} dx^{(\alpha)}(t) = x^{(\alpha)}(k) - x^{(\alpha)}(k-1)$ and $\int_{k-1}^{k} c\, dt = c$. To discretize the remaining integral terms involving $x^{(\alpha)}(t)$ and $\varphi(t)$, we apply the two-point trapezoidal rule, which approximates each integral over $[k-1, k]$ by the average of the integrand values at the endpoints of the interval, yielding
$x^{(\alpha)}(k) - x^{(\alpha)}(k-1) + a z^{(\alpha)}(k) = w^T \phi(k) + c,$
where
$z^{(\alpha)}(k) = \frac{1}{2}\left[ x^{(\alpha)}(k) + x^{(\alpha)}(k-1) \right]$
and
$\phi(k) = \frac{1}{2}\left[ \varphi(k) + \varphi(k-1) \right].$
Since the exact form of the nonlinear mapping $\varphi$ is unknown, a regularization method is required. Notably, the structural risk minimization formulation in the KRNGM model omits the bias term c, which can lead to overfitting and diminish the model's generalization capability. This paper integrates the bias term c into the structural risk minimization framework, creating a novel regularized optimization problem for parameter estimation of the CFUKRNGM model. The mathematical formulation of this optimization problem is expressed as follows:
$\min_{a, w, c} \; \frac{a^2}{2} + \frac{1}{2} w^T w + \frac{c^2}{2\theta^2} + \frac{\gamma}{2} \sum_{j=2}^{n} e_j^2, \quad \text{s.t.} \quad e_j = x^{(\alpha)}(j) - x^{(\alpha)}(j-1) + a z^{(\alpha)}(j) - w^T \phi(j) - c, \quad j = 2, \ldots, n.$
Let $w_1 = \left[ w^T, \dfrac{c}{\theta} \right]^T$ and $\phi_1(j) = \left[ \phi^T(j), \theta \right]^T$; then the optimization problem (36) can be rewritten as
$\min_{a, w_1} \; \frac{a^2}{2} + \frac{1}{2} w_1^T w_1 + \frac{\gamma}{2} \sum_{j=2}^{n} e_j^2, \quad \text{s.t.} \quad e_j = x^{(\alpha)}(j) - x^{(\alpha)}(j-1) + a z^{(\alpha)}(j) - w_1^T \phi_1(j), \quad j = 2, \ldots, n,$
where θ is a hyperparameter of the CFUKRNGM model. Notice that at this stage, $w^T \varphi(t) + c = w_1^T \varphi_1(t)$, where $\varphi_1(t) = \left[ \varphi^T(t), \theta \right]^T$. Clearly,
$\phi_1(k) = \frac{1}{2}\left[ \varphi_1(k) + \varphi_1(k-1) \right] = \left[ \frac{1}{2}\left( \varphi^T(k) + \varphi^T(k-1) \right), \theta \right]^T.$
The problem (37) is essentially a quadratic programming problem with linear constraints, and therefore, we only need to find its extremum point. We first define the Lagrangian function as
$L = \frac{a^2}{2} + \frac{1}{2} w_1^T w_1 + \frac{\gamma}{2} \sum_{j=2}^{n} e_j^2 + \sum_{j=2}^{n} \lambda_j \left[ x^{(\alpha)}(j) - x^{(\alpha)}(j-1) + a z^{(\alpha)}(j) - w_1^T \phi_1(j) - e_j \right].$
The Karush–Kuhn–Tucker (abbreviated as KKT) conditions for problem (39) are given by the following:
$\frac{\partial L}{\partial a} = a + \sum_{j=2}^{n} \lambda_j z^{(\alpha)}(j) = 0 \;\Rightarrow\; a = -\sum_{j=2}^{n} \lambda_j z^{(\alpha)}(j), \qquad \frac{\partial L}{\partial w_1} = w_1 - \sum_{j=2}^{n} \lambda_j \phi_1(j) = 0 \;\Rightarrow\; w_1 = \sum_{j=2}^{n} \lambda_j \left[ \phi^T(j), \theta \right]^T, \qquad \frac{\partial L}{\partial e_j} = \gamma e_j - \lambda_j = 0 \;\Rightarrow\; e_j = \frac{\lambda_j}{\gamma}, \qquad \frac{\partial L}{\partial \lambda_j} = 0 \;\Rightarrow\; x^{(\alpha)}(j) - x^{(\alpha)}(j-1) + a z^{(\alpha)}(j) - w_1^T \phi_1(j) = e_j.$
Set $y^{(\alpha)}(j) = x^{(\alpha)}(j) - x^{(\alpha)}(j-1)$. By eliminating a, $w_1$, and $e_j$, we obtain
$y^{(\alpha)}(j) = \sum_{i=2}^{n} \lambda_i z^{(\alpha)}(j) z^{(\alpha)}(i) + \sum_{i=2}^{n} \lambda_i \phi_1^T(i) \phi_1(j) + e_j, \quad j = 2, 3, \ldots, n.$
The KKT conditions are equivalent to the following set of linear equations:
$\left( \gamma^{-1} I + \Omega \right) \lambda = y,$
where
$y_k = x^{(\alpha)}(k) - x^{(\alpha)}(k-1), \quad \lambda = \left( \lambda_2, \ldots, \lambda_n \right)^T, \quad \Omega_{ij} = z^{(\alpha)}(i) z^{(\alpha)}(j) + \phi_1^T(i) \phi_1(j),$
and I is the $(n-1)$-dimensional identity matrix. The parameter a is then obtained from the first equation of the KKT conditions (40).
Using the definition of $\phi_1$ in (38), we have
$\phi_1(i) \cdot \phi_1(j) = \frac{1}{4}\left[ \varphi(i) + \varphi(i-1) \right] \cdot \left[ \varphi(j) + \varphi(j-1) \right] + \theta^2 = \frac{1}{4}\left[ K(i, j) + K(i-1, j) + K(i, j-1) + K(i-1, j-1) \right] + \theta^2.$
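Putting the discretized equation, the unbiased kernel entries in (44), and the reduced linear system together, parameter estimation amounts to a single (n−1)-dimensional solve. The sketch below mirrors the KRNGM code above but works on the α-CFA sequence (restricted to 0 < α ≤ 1) and adds the θ² term; it reuses the cfa and rbf_kernel helpers defined earlier and is an illustrative reading, not the released implementation.

```python
import numpy as np

def cfukrngm_fit(x0, alpha, gamma, theta, sigma):
    """Estimate (a, lam) of the CFUKRNGM; the bias is absorbed via theta."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    xa = cfa(x0, alpha)                          # alpha-CFA sequence
    za = 0.5 * (xa[1:] + xa[:-1])                # background values z^(alpha)(k)
    y = np.diff(xa)                              # y^(alpha)(k) = x^(alpha)(k) - x^(alpha)(k-1)

    idx = np.arange(2, n + 1)
    I, J = np.meshgrid(idx, idx, indexing="ij")
    # phi_1(i).phi_1(j) = 0.25*(K(i,j)+K(i-1,j)+K(i,j-1)+K(i-1,j-1)) + theta^2, Eq. (44)
    phi1_dot = 0.25 * (rbf_kernel(I, J, sigma) + rbf_kernel(I - 1, J, sigma)
                       + rbf_kernel(I, J - 1, sigma)
                       + rbf_kernel(I - 1, J - 1, sigma)) + theta ** 2
    Omega = np.outer(za, za) + phi1_dot

    # (gamma^{-1} I + Omega) lam = y: no separate bias row is needed
    lam = np.linalg.solve(np.eye(n - 1) / gamma + Omega, y)
    a = -np.dot(lam, za)                         # first KKT condition
    return a, lam, xa
```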

3.4. The Time Response Series of the CFUKRNGM

The differential equation, under the initial condition $\hat{x}^{(\alpha)}(1) = x^{(\alpha)}(1) = x^{(0)}(1)$, can be solved by applying the method of variation of parameters, leading to the following solution:
$\hat{x}^{(\alpha)}(t) = x^{(0)}(1) e^{-a(t-1)} + \int_{1}^{t} e^{-a(t-\tau)} \Psi_1(\tau)\, d\tau,$
where
$\Psi_1(t) = w_1^T \varphi_1(t).$
From the second equation in the KKT conditions (40), expressed as $w_1 = \sum_{j=2}^{n} \lambda_j \phi_1(j)$, the nonlinear function $w_1^T \varphi_1(t)$ can be reformulated as
$w_1^T \varphi_1(t) = \sum_{j=2}^{n} \lambda_j \phi_1(j) \cdot \varphi_1(t) = \frac{1}{2} \sum_{j=2}^{n} \lambda_j \left[ \varphi^T(j) + \varphi^T(j-1), \, 2\theta \right] \begin{bmatrix} \varphi(t) \\ \theta \end{bmatrix}.$
By substituting (44) into (47), we obtain
$w_1^T \varphi_1(t) = \frac{1}{2} \sum_{j=2}^{n} \lambda_j \left\{ \left[ \varphi(j) + \varphi(j-1) \right] \cdot \varphi(t) + 2\theta^2 \right\} = \frac{1}{2} \sum_{j=2}^{n} \lambda_j \left[ K(j, t) + K(j-1, t) + 2\theta^2 \right].$
To discretize the integral in (45), we apply the two-point trapezoidal rule, resulting in the discrete-time equivalent:
$\hat{x}^{(\alpha)}(k) = x^{(0)}(1) e^{-a(k-1)} + \frac{1}{2} \sum_{\tau=2}^{k} \left[ e^{-a(k-\tau)} \Psi_1(\tau) + e^{-a(k-\tau+1)} \Psi_1(\tau-1) \right].$
Finally, the predicted values for x ( 0 ) are obtained as
$\hat{x}^{(0)}(k) = \Delta^{\alpha} \hat{x}^{(\alpha)}(k) = k^{\lceil \alpha \rceil - \alpha}\, \Delta^{\lceil \alpha \rceil} \hat{x}^{(\alpha)}(k).$
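The forecasting stage evaluates the discretized time response and then restores the original scale with the α-CFD. A sketch under the same assumptions (reusing the rbf_kernel and cfd helpers from the earlier sketches, with 0 < α ≤ 1) follows.

```python
import numpy as np

def cfukrngm_predict(x0, a, lam, alpha, theta, sigma, horizon):
    """Forecast x^(0) via the time response of x^(alpha) followed by the alpha-CFD."""
    n = len(x0)
    idx = np.arange(2, n + 1)

    def psi1(t):
        # Psi_1(t) = w_1^T phi_1(t) = 0.5 * sum_j lam_j (K(j, t) + K(j-1, t) + 2*theta^2)
        return 0.5 * np.sum(lam * (rbf_kernel(idx, t, sigma)
                                   + rbf_kernel(idx - 1, t, sigma) + 2.0 * theta ** 2))

    total = n + horizon
    xa_hat = np.empty(total)
    xa_hat[0] = x0[0]                            # x^(alpha)(1) = x^(0)(1)
    for k in range(2, total + 1):
        taus = np.arange(2, k + 1)
        xa_hat[k - 1] = (x0[0] * np.exp(-a * (k - 1))
                         + 0.5 * sum(np.exp(-a * (k - t)) * psi1(t)
                                     + np.exp(-a * (k - t + 1)) * psi1(t - 1)
                                     for t in taus))
    return cfd(xa_hat, alpha)                    # alpha-CFD back to the original scale
```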

4. Parameter Optimization of the CFUKRNGM Model

This section focuses on the optimization of the hyperparameters for the CFUKRNGM model. The performance of the model is highly dependent on the selection of these hyperparameters, which control the trade-off between model complexity and fitting accuracy. To identify the optimal set of parameters, we employ the Bayesian optimization algorithm, an efficient global optimization algorithm that can effectively adjust hyperparameters in high-dimensional spaces.

4.1. Optimization Strategies for Hyperparameters

It is worth noting that, given the known kernel function parameter σ , regularization coefficient γ , unbiased coefficient θ , and accumulation order α , the system parameters in the new model can be estimated. We select the parameters that minimize the mean absolute percentage error as the optimal parameters, with the mathematical expression given by
$\min_{\Theta} \; \text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\hat{x}^{(0)}(i) - x^{(0)}(i)}{x^{(0)}(i)} \right| \times 100\%,$
where $\Theta$ represents the hyperparameter vector of each model. If the parameter optimization objective is the CFUKRNGM model, then $\Theta = (\sigma, \gamma, \theta, \alpha)$; if the parameter optimization objective is the KRNGM model, then $\Theta = (\sigma, \gamma)$.
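In code, the tuning objective is simply a mapping from a hyperparameter vector to the in-sample MAPE, for example (an illustrative wrapper built on the cfukrngm_fit and cfukrngm_predict sketches above):

```python
import numpy as np

def cfukrngm_mape(params, x0_train):
    """MAPE of the CFUKRNGM fit for a hyperparameter vector (sigma, gamma, theta, alpha)."""
    sigma, gamma, theta, alpha = params
    a, lam, _ = cfukrngm_fit(x0_train, alpha, gamma, theta, sigma)
    fitted = cfukrngm_predict(x0_train, a, lam, alpha, theta, sigma, horizon=0)
    x0 = np.asarray(x0_train, dtype=float)
    return 100.0 * np.mean(np.abs((fitted - x0) / x0))
```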

4.2. Bayesian Optimization Algorithm

As indicated by Equation (51), this function exhibits nonlinear properties, making it challenging to derive an analytical solution using conventional methods. In the field of parameter optimization, numerous strategies have been proposed by researchers to enhance model performance and accuracy. Traditional optimization methods, such as grid search [41] and random search [42], are widely applied in hyperparameter tuning. In addition, another class of optimization methods involves heuristic algorithms, such as genetic algorithms [43], PSO [44], and simulated annealing [45]. These methods do not rely on gradient information but instead simulate random behaviors observed in nature or physical processes to find the optimal solution. While these approaches can effectively handle complex optimization problems, they may suffer from early convergence and require longer computational times. With the rapid development of machine learning, the Bayesian optimization algorithm [46] has emerged as a global optimization method based on probabilistic models.
Bayesian optimization is a global optimization technique based on probability theory. It is especially efficacious for optimizing black-box functions. The fundamental concept is to develop a surrogate model that estimates the actual objective function; Gaussian processes are frequently employed for surrogate modeling. The optimization process is directed by an acquisition function that facilitates the efficient identification of the global optimum. Consider the input dataset $X = \{ x_1, x_2, \ldots, x_n \}$ and corresponding observed outputs $\mathbf{y} = \{ y_1, y_2, \ldots, y_n \}$, where each observed value $y_i = f(x_i) + \epsilon_i$ includes a noise term $\epsilon_i$ that is independently and identically distributed.
Assume that the objective function $f(x)$ is drawn from a Gaussian process:
$f(x) \sim \mathcal{GP}\left( m(x), k(x, x') \right),$
where $m(x)$ is the mean function, typically taken as $m(x) = 0$, and $k(x, x')$ is the covariance function or kernel, which defines the correlation between different input points.
Based on the Gaussian process model, the predicted value of the objective function at a new input point x * can be obtained. Considering the existing observed data points X and their corresponding outputs y , the joint distribution between the observed outputs and the prediction at x * can be expressed as follows:
$\begin{bmatrix} \mathbf{y} \\ f(x^*) \end{bmatrix} \sim \mathcal{N}\left( \mathbf{0}, \begin{bmatrix} K(X, X) + \sigma_n^2 I & K(X, x^*) \\ K(x^*, X) & K(x^*, x^*) \end{bmatrix} \right),$
where $K(X, X)$ denotes the covariance matrix computed from the training samples, $\sigma_n^2$ represents the variance of the noise, and $K(X, x^*)$ and $K(x^*, x^*)$ correspond to covariance values between the training data points and the new data point $x^*$.
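Conditioning this joint Gaussian on the observed outputs yields the posterior predictive mean and variance on which the acquisition functions below operate (a standard Gaussian-process identity, stated here for completeness):
$\mu(x^*) = K(x^*, X)\left[ K(X, X) + \sigma_n^2 I \right]^{-1} \mathbf{y}, \qquad \sigma^2(x^*) = K(x^*, x^*) - K(x^*, X)\left[ K(X, X) + \sigma_n^2 I \right]^{-1} K(X, x^*).$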
The acquisition function serves as a crucial element within Bayesian optimization, guiding the choice of subsequent sampling points according to predictions from the surrogate model. Frequently used acquisition functions are listed as follows:
Expected Improvement (EI): The expected improvement measures the expected amount of improvement over the current best value. Its mathematical expression is
$\text{EI}(x^*) = \mathbb{E}\left[ \max\left( 0, \, f_{\text{best}} - f(x^*) \right) \right],$
where $f_{\text{best}}$ denotes the optimal value obtained so far, and $f(x^*)$ represents the estimated value at the new candidate point.
Probability of Improvement (PI): This acquisition function evaluates the likelihood that a new candidate point will yield better results compared to the current optimal value. Its mathematical definition is as follows:
$\text{PI}(x^*) = \Phi\left( \frac{f_{\text{best}} - \mu(x^*)}{\sigma(x^*)} \right),$
where $\mu(x^*)$ represents the predicted mean at the candidate point, and $\sigma(x^*)$ is its predicted standard deviation; additionally, $\Phi(\cdot)$ denotes the cumulative distribution function of the standard normal distribution.
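Both acquisition functions depend only on the posterior mean $\mu(x^*)$ and standard deviation $\sigma(x^*)$. For a minimization objective they have the standard closed forms sketched below (illustrative helper functions):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI(x*) = E[max(0, f_best - f(x*))] for a minimization objective."""
    sigma = np.maximum(sigma, 1e-12)            # guard against zero predictive variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def probability_of_improvement(mu, sigma, f_best):
    """PI(x*) = Phi((f_best - mu(x*)) / sigma(x*))."""
    return norm.cdf((f_best - mu) / np.maximum(sigma, 1e-12))
```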

4.3. Optimization Steps of Parameters

According to the preceding discussion, the algorithmic steps for optimizing the CFUKRNGM model using the Bayesian optimization algorithm are detailed in Algorithm 1. The entire optimization process consists of five steps. The input to Algorithm 1 is the training data, and the output is the optimal hyperparameters for the CFUKRNGM model. For a detailed procedure, refer to Algorithm 1.
Algorithm 1: Bayesian Optimization for Hyperparameter Tuning
Input: Dataset $\mathcal{T}$, initial parameters $\theta_0$.
while $k <$ IterMax do
    Step 1: Surrogate model update. Train the surrogate model on the current dataset $\mathcal{T}_k = \{(x_i, y_i)\}$.
    Step 2: Acquisition function maximization. Find $x_{\text{next}} = \arg\max_x A(x)$, where the acquisition function $A(x)$ is updated using the predictive values of the surrogate model.
    Step 3: Evaluate the objective function $f(x_{\text{next}})$.
    Step 4: Update the dataset: $\mathcal{T}_{k+1} = \mathcal{T}_k \cup \{(x_{\text{next}}, f(x_{\text{next}}))\}$.
    Step 5: Convergence check: err $= |f(x_{\text{next}}) - f(x_{\text{best}})|$; if err $\le \epsilon_{\text{tol}}$, exit the loop.
    $k \leftarrow k + 1$
end while
Output: The optimal hyperparameters $\theta^*$.
The overall computational steps of the CFUKRNGM with the Bayesian optimization algorithm are depicted in the flowchart shown in Figure 1. The implementation of the proposed algorithm is publicly available as open source on GitHub (Note 1).
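A compact, self-contained stand-in for Algorithm 1 is sketched below: a zero-mean Gaussian-process surrogate with an RBF kernel, expected improvement (the helper above) maximized over random candidates, and an arbitrary objective such as the MAPE wrapper sketched earlier. It is a simplified illustration under our own naming and default settings, not the released implementation.

```python
import numpy as np

def bayes_opt(objective, bounds, n_init=5, iter_max=30, tol=1e-4, seed=0):
    """Minimal Bayesian optimization loop following the steps of Algorithm 1."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T

    def sample(m):
        # Uniform candidate samples inside the search box
        return lo + (hi - lo) * rng.random((m, dim))

    def gp_posterior(X, y, Xs, length=1.0, noise=1e-6):
        # Zero-mean GP with an RBF kernel; posterior mean/std at the candidates Xs
        k = lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / length ** 2)
        Kxx = k(X, X) + noise * np.eye(len(X))
        Kxs = k(X, Xs)
        sol = np.linalg.solve(Kxx, np.column_stack([y, Kxs]))
        mu = Kxs.T @ sol[:, 0]
        var = 1.0 - np.einsum("ij,ij->j", Kxs, sol[:, 1:])
        return mu, np.sqrt(np.maximum(var, 1e-12))

    X = sample(n_init)                                   # initial design
    y = np.array([objective(x) for x in X])
    for _ in range(iter_max):
        cand = sample(256)
        mu, sd = gp_posterior(X, y, cand)                # Step 1: surrogate update
        x_next = cand[np.argmax(expected_improvement(mu, sd, y.min()))]  # Step 2
        f_best = y.min()
        y_next = objective(x_next)                       # Step 3: evaluate objective
        X, y = np.vstack([X, x_next]), np.append(y, y_next)  # Step 4: update dataset
        if abs(y_next - f_best) <= tol:                  # Step 5: convergence check
            break
    return X[np.argmin(y)], y.min()

# Illustrative usage: tune (sigma, gamma, theta, alpha) of the CFUKRNGM on a training series
# best, best_mape = bayes_opt(lambda p: cfukrngm_mape(p, x_train),
#                             bounds=[(0.01, 1.0), (1.0, 100.0), (0.1, 100.0), (0.5, 1.0)])
```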

5. Numerical Experiment

This section employs four publicly available real-world datasets to verify the superiority of the CFUKRNGM model. The models used for comparison are the conformable fractional grey model (abbreviated as CFGM) [22], fractional grey model (abbreviated as FGM) [47], kernel-regularized nonhomogeneous grey model (abbreviated as KRNGM) [29], grey model (abbreviated as GM) [9], discrete grey forecasting model (abbreviated as DGM) [48], self-adaptive intelligence grey model (abbreviated as SAIGM) [49], time-delayed grey model (abbreviated as TDGM) [50], nonlinear grey model (abbreviated as NGM) [51], and autoregressive grey model (abbreviated as ARGM) [52]. For the grey models with hyperparameters, namely CFUKRNGM, KRNGM, FGM, and CFGM, the Bayesian optimization algorithm is uniformly applied to optimize the hyperparameters. Table 1 briefly summarizes the basic information of each dataset. The four datasets are all sourced from the Energy Institute Statistical Review of World Energy (Note 2).

5.1. Evaluation Metrics

To evaluate and compare the effectiveness of various models, this study employs multiple performance indicators, including root mean squared error (RMSE), mean absolute error (MAE), normalized RMSE (NRMSE), mean absolute percentage error (MAPE), root mean squared percentage error (RMSPE), mean squared error (MSE), index of agreement (IA), and Theil’s U statistics (U1 and U2). These indicators comprehensively reflect both fitting and forecasting abilities of the CFUKRNGM model. The detailed mathematical definitions are provided in Table 2.
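A few of these indicators, written directly from the formulas listed in Table 2 (illustrative helper functions):

```python
import numpy as np

def rmse(x, xh):
    x, xh = np.asarray(x, float), np.asarray(xh, float)
    return np.sqrt(np.mean((x - xh) ** 2))

def mape(x, xh):
    x, xh = np.asarray(x, float), np.asarray(xh, float)
    return 100.0 * np.mean(np.abs((x - xh) / x))

def index_of_agreement(x, xh):
    x, xh = np.asarray(x, float), np.asarray(xh, float)
    xbar = x.mean()
    return 1.0 - np.sum((x - xh) ** 2) / np.sum((np.abs(x - xbar) + np.abs(xh - xbar)) ** 2)

def theil_u1(x, xh):
    x, xh = np.asarray(x, float), np.asarray(xh, float)
    return np.sqrt(np.sum((x - xh) ** 2)) / (np.sqrt(np.sum(x ** 2)) + np.sqrt(np.sum(xh ** 2)))
```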

5.2. Case 1: Forecasting Oil Production in Block L

In this scenario, the CFUKRNGM model, along with other grey models for comparison, is utilized to simulate and predict oil production in Block L of the North China Oilfield ($10^4$ m$^3$). Table 3 displays the oil production data spanning 20 months. Among these, the first 15 data points are used as the training set for model construction, while the remaining 5 are reserved for evaluating predictive performance. It is noteworthy that this dataset partitioning approach is consistent with the method adopted in [29].
Table 4 presents a comprehensive account of the predicted outcomes of the CFUKRNGM model alongside other grey models. The hyperparameters for the FGM, CFGM, KRNGM, and CFUKRNGM models were derived using the Bayesian optimization algorithm. In these models, hyperparameter adjustment is essential for improving model performance. The symbol $\gamma$ in the table represents the penalty parameter, utilized to control the model's complexity and degree of fit. In the CFUKRNGM model, $\theta$ is a constant intricately linked to its structural attributes. $\alpha$ denotes the accumulation order of the CFGM model, affecting the model's fitting precision to the data. $r$ denotes the accumulation order of the FGM model, which dictates the model's adaptability and fitting proficiency. $\sigma$ is the bandwidth of the radial basis function (RBF) kernel, which regulates the model's smoothness and generalization capacity. The optimized hyperparameters yield the most effective configurations for the models during fitting and prediction, hence guaranteeing their efficiency and accuracy in practical applications.
Figure 2 presents a comparative assessment of the CFUKRNGM model against nine alternative grey prediction models in the simulation and forecasting of oil production in Block L of the North China Oilfield. An exhaustive evaluation of the prediction results reveals that the CFUKRNGM model demonstrates much superior predictive accuracy relative to its counterparts.
For the oil production data of Block L in the North China Oilfield, Table 5 presents the evaluation metrics for both fitting and prediction performance across different models. Among these, the CFUKRNGM model consistently outperforms others, demonstrating superior accuracy across multiple metrics. In terms of fitting performance, CFUKRNGM achieves the lowest RMSE of 0.0026 and the smallest MAE of 0.0022, highlighting its strong capability in reducing prediction errors.
Figure 3 illustrates the detailed outcomes for all 10 models. The results clearly show that the majority of points predicted by CFUKRNGM closely align with the actual data, and the model achieves the highest overall $R^2$ value. Therefore, in this validation case, CFUKRNGM proves to be the most effective model.

5.3. Case 2: Forecasting Carbon Dioxide Emissions in Turkey

Table 6 lists the carbon dioxide emission data for Turkey from 2004 to 2023. In this case, the hyperparameters of the FGM, CFGM, KRNGM, and CFUKRNGM models employed were meticulously tuned using the Bayesian optimization algorithm.
Simulation forecasts were conducted for Turkey’s carbon dioxide emissions data spanning from 2004 to 2023, and the optimized hyperparameters are detailed in Table 7. Furthermore, metrics for assessing the fitting and predictive performance of each model are systematically presented in Table 8. A comprehensive analysis of the data in Table 7 and Table 8 reveals that the CFUKRNGM model not only demonstrates superior performance in fitting historical carbon dioxide emission data of Turkey but also outperforms other models in predicting data beyond the sample range, thereby confirming its significant advantages in terms of robustness and predictive accuracy.
Figure 4 and Figure 5 respectively illustrate the forecasting outcomes of various grey prediction models. The analysis clearly indicates that the CFUKRNGM model outperforms the others in predictive accuracy. Notably, the coefficient of determination $R^2$ for CFUKRNGM reaches 0.982, exhibiting only a slight deviation from the theoretical optimal value of 1, which strongly validates its superior predictive performance and generalization ability.

5.4. Case 3: Forecasting Coal Production of Canada

In this case, grey models are used to simulate and predict coal production. The data in Table 9 represents Canada’s coal production from 2004 to 2023.
The prediction results of the 10 grey models are summarized in Table 10. The hyperparameter optimization results for the FGM, CFGM, KRNGM, and CFUKRNGM models are also provided in Table 10. Clearly, the CFUKRNGM model exhibits the best predictive performance for out-of-sample data, demonstrating superior generalization ability and better capturing future trends.
Figure 5. The regression performance of ten grey models in Case 2. R 2 represents the coefficient of determination between the raw data and the predicted values of the models (Overall).
Figure 6 and Figure 7 present the prediction results of the 10 models. It is evident that most of the points generated by CFUKRNGM closely match the raw data, while the results from the other models are noticeably inferior, and CFUKRNGM achieves the highest overall $R^2$ value. Hence, in this validation case, CFUKRNGM outperforms all the other models.
Table 11 presents the evaluation of the prediction results of the 10 grey models for Canada’s coal production. Based on these metric values, the KRNGM and CFUKRNGM models exhibit smaller fitting errors, indicating a better ability to capture historical data trends. However, the CFUKRNGM model achieves lower prediction errors for future data, making it more accurate in forecasting future trends.
Figure 7. The regression performance of ten grey models in Case 3. R 2 represents the coefficient of determination between the raw data and the predicted values of the models (Overall).

5.5. Case 4: Forecasting Natural Gas Electricity Generation in U.S.

This case study employs grey models to simulate and predict natural gas electricity generation in the United States. The original statistical data on U.S. natural gas electricity generation from 2003 to 2023 is provided in Table 12. Table 13 summarizes the simulation and forecasting results across ten grey models; the optimized hyperparameters for the FGM, CFGM, KRNGM, and CFUKRNGM models are also detailed in Table 13.
Figure 8 offers an exhaustive comparison of predictive accuracy across all ten models, distinctly demonstrating their individual fitting and forecasting capabilities. The visual comparisons indicate that the CFUKRNGM model's predictions show an extraordinary level of alignment with the actual data points, as its associated linear regression line closely approximates the ideal reference line. This demonstrates that the CFUKRNGM model accurately reflects the fundamental dynamics and trends of the examined dataset. Thus, through both qualitative visual assessment and quantitative accuracy metrics, the CFUKRNGM model demonstrates the highest reliability and precision in this validation context. Moreover, the anticipated values produced by several grey models, in conjunction with the actual observed data, are depicted in Figure 9.
Table 14 delineates the calculated evaluation metrics for each grey model in this instance. Table 14 demonstrates that the CFUKRNGM model surpasses the others in both fitting historical data and attaining superior predictive accuracy for future data. The CFUKRNGM model exhibits superior performance in both fitting and prediction.

6. Conclusions

In this work, a CFUKRNGM model was proposed to address the limitations of traditional grey models in handling complex nonlinear time series. By integrating CFA into the grey modeling process, the method extracts richer long-memory information from the data. Furthermore, a kernel function satisfying Mercer’s condition was introduced into the nonhomogeneous grey model framework, effectively embedding nonlinearity into the model and eliminating the bias term present in the original KRNGM, thus resulting in an unbiased modeling formulation. The parameter estimation for CFUKRNGM is achieved by solving only a single linear equation of reduced order (lower than that of the standard KRNGM), which simplifies the computational procedure. In addition, the key hyperparameters are automatically tuned using a Bayesian optimization algorithm. These innovations yield a model that preserves the efficiency of grey system models for small-sample forecasting while substantially enhancing their capacity to capture complex, nonlinear patterns in the data.
From a practical application perspective, the CFUKRNGM model has demonstrated strong predictive performance across various energy-related datasets, such as oil production, carbon dioxide emissions, and electricity generation. This not only validates the methodological advancement of the model but also highlights its broad real-world applicability. The model is capable of accurately forecasting energy production trends even with limited sample data, thereby providing quantitative support for energy scheduling, resource planning, and policy formulation. For instance, in oilfield production forecasting, accurately predicting output trends over a future time horizon can help enterprises optimize production plans and reduce resource waste. In the context of carbon emission prediction, the model's outputs can inform the development of more scientifically grounded emission reduction strategies and energy transition policies. The results demonstrate that the removal of the bias factor and the incorporation of a nonlinear kernel function enhance the CFUKRNGM model's alignment with the dynamic characteristics of complex data, thereby minimizing prediction errors and effectively mitigating overfitting across diverse settings. Nonetheless, it is important to acknowledge that the model's effectiveness may still be limited by specific conditions. Excessive noise in the dataset, inadequate training samples, or the use of kernel functions ill-suited to the structural properties of a certain domain may result in diminished predictive accuracy. Furthermore, in situations characterized by significant non-stationarity or sudden alterations in data, the model may find it challenging to reliably identify underlying trends, potentially leading to discrepancies in predictions. Addressing these limitations will be an essential focus of our future research.
In summary, this work presents a unified and efficient grey modeling approach that bridges fractional order techniques with kernel regularization to overcome the linearity and bias limitations of previous models. The CFUKRNGM expands the methodological toolbox of grey system theory, offering a more flexible and accurate framework for time series forecasting under uncertainty. The results suggest that combining advanced mathematical tools (such as conformable fractional operators) with machine learning ideas (such as kernel methods and automatic hyperparameter tuning) can substantially improve the modeling capability of grey models. Such an approach is not only effective for the cases examined in this study but also holds promise for a wide range of applications involving complex, nonlinear, and small-sample data. Future research can build upon the CFUKRNGM framework to explore other kernel functions or fractional operators and apply this model to diverse domains (e.g., energy consumption, economic indicators, or engineering systems), further extending the reach of grey system modeling in capturing real-world complexities.

Author Contributions

W.G.: Conceptualization, formal analysis, methodology, visualization, and writing—review and editing. Q.A.: Supervision and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Social Science Fund of China [grant number 24BTJ035].

Data Availability Statement

The data adopted in this study can be obtained from the website provided in the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Notes

1
https://github.com/gongwenkang/CFUKRNGM (accessed on 25 June 2025).
2

References

  1. Lu, C.; Li, S.; Lu, Z. Building energy prediction using artificial neural networks: A literature survey. Energy Build. 2022, 262, 111718. [Google Scholar] [CrossRef]
  2. Wang, Z.; Wang, Y.; Zeng, R.; Srinivasan, R.S.; Ahrentzen, S. Random Forest based hourly building energy prediction. Energy Build. 2018, 171, 11–25. [Google Scholar] [CrossRef]
  3. Fan, C.; Sun, Y.; Zhao, Y.; Song, M.; Wang, J. Deep learning-based feature engineering methods for improved building energy prediction. Appl. Energy 2019, 240, 35–45. [Google Scholar] [CrossRef]
  4. Cammarano, A.; Petrioli, C.; Spenza, D. Pro-Energy: A novel energy prediction model for solar and wind energy-harvesting wireless sensor networks. In Proceedings of the 2012 IEEE 9th International Conference on Mobile Ad-Hoc and Sensor Systems (MASS 2012), Las Vegas, NV, USA, 8–11 October 2012; pp. 75–83. [Google Scholar]
  5. He, X.; Wang, Y.; Zhang, Y.; Ma, X.; Wu, W.; Zhang, L. A novel structure adaptive new information priority discrete grey prediction model and its application in renewable energy generation forecasting. Appl. Energy 2022, 325, 119854. [Google Scholar] [CrossRef]
  6. Feng, S.; Ma, Y.; Song, Z.; Ying, J. Forecasting the energy consumption of China by the grey prediction model. Energy Sources Part Econ. Planning Policy 2012, 7, 376–389. [Google Scholar] [CrossRef]
  7. Tsai, S.B. Using grey models for forecasting China’s growth trends in renewable energy consumption. Clean Technol. Environ. Policy 2016, 18, 563–571. [Google Scholar] [CrossRef]
  8. Duan, H.; Pang, X. A multivariate grey prediction model based on energy logistic equation and its application in energy prediction in China. Energy 2021, 229, 120716. [Google Scholar] [CrossRef]
  9. Ju-Long, D. Control problems of grey systems. Syst. Control Lett. 1982, 1, 288–294. [Google Scholar] [CrossRef]
  10. Julong, D. Essential Topics on Grey System: Theory and Application. In A Report on the Project of National Science Foundation of China; China Ocean Press: Beijing, China, 1988. [Google Scholar]
  11. Mao, M.; Chirwa, E.C. Application of grey model GM (1, 1) to vehicle fatality risk estimation. Technol. Forecast. Soc. Change 2006, 73, 588–605. [Google Scholar] [CrossRef]
  12. Wang, Y.; He, X.; Zhou, Y.; Luo, Y.; Tang, Y.; Narayanan, G. A novel structure adaptive grey seasonal model with data reorganization and its application in solar photovoltaic power generation prediction. Energy 2024, 302, 131833. [Google Scholar] [CrossRef]
  13. Li, X.; Li, N.; Ding, S.; Cao, Y.; Li, Y. A novel data-driven seasonal multivariable grey model for seasonal time series forecasting. Inf. Sci. 2023, 642, 119165. [Google Scholar] [CrossRef]
  14. Zhou, W.; Wu, X.; Ding, S.; Ji, X.; Pan, W. Predictions and mitigation strategies of PM2.5 concentration in the Yangtze River Delta of China based on a novel nonlinear seasonal grey model. Environ. Pollut. 2021, 276, 116614. [Google Scholar] [CrossRef] [PubMed]
  15. Zhou, W.; Wu, X.; Ding, S.; Cheng, Y. Predictive analysis of the air quality indicators in the Yangtze River Delta in China: An application of a novel seasonal grey model. Sci. Total Environ. 2020, 748, 141428. [Google Scholar] [CrossRef] [PubMed]
  16. Wu, L.; Liu, S.; Yao, L.; Yan, S.; Liu, D. Grey system model with the fractional order accumulation. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 1775–1785. [Google Scholar] [CrossRef]
  17. Wu, L.; Liu, S.; Chen, D.; Yao, L.; Cui, W. Using gray model with fractional order accumulation to predict gas emission. Nat. Hazards 2014, 71, 2231–2236. [Google Scholar] [CrossRef]
  18. Wang, Y.; Chi, P.; Nie, R.; Ma, X.; Wu, W.; Guo, B. Self-adaptive discrete grey model based on a novel fractional order reverse accumulation sequence and its application in forecasting clean energy power generation in China. Energy 2022, 253, 124093. [Google Scholar] [CrossRef]
  19. Wensong, J.; Zhongyu, W.; Mourelatos, Z.P. Application of Nonequidistant Fractional-Order Accumulation Model on Trajectory Prediction of Space Manipulator. IEEE/ASME Trans. Mechatronics 2016, 21, 1420–1427. [Google Scholar] [CrossRef]
  20. Zhang, J.; Qin, Y.; Zhang, X.; Che, G.; Sun, X.; Duo, H. Application of non-equidistant GM (1, 1) model based on the fractional-order accumulation in building settlement monitoring. J. Intell. Fuzzy Syst. 2022, 42, 1559–1573. [Google Scholar] [CrossRef]
  21. Gao, M.; Mao, S.; Yan, X.; Wen, J. Estimation of Chinese CO2 Emission Based on A Discrete Fractional Accumulation Grey Model. J. Grey Syst. 2015, 27, 114–130. [Google Scholar]
  22. Ma, X.; Wu, W.; Zeng, B.; Wang, Y.; Wu, X. The conformable fractional grey system model. ISA transactions 2020, 96, 255–271. [Google Scholar] [CrossRef]
  23. Duan, H.; Lei, G.R.; Shao, K. Forecasting Crude Oil Consumption in China Using a Grey Prediction Model with an Optimal Fractional-Order Accumulating Operator. Complexity 2018, 2018, 3869619. [Google Scholar] [CrossRef]
  24. Chen, L.; Liu, Z.; Ma, N. Time-Delayed Polynomial Grey System Model with the Fractional Order Accumulation. Math. Probl. Eng. 2018, 2018, 3640625. [Google Scholar] [CrossRef]
  25. Wu, L.; Gao, X.; Xiao, Y.; Yang, Y.; Chen, X. Using a novel multi-variable grey model to forecast the electricity consumption of Shandong Province in China. Energy 2018, 157, 327–335. [Google Scholar] [CrossRef]
  26. Cai, M. Non-homogeneous Grey Model NGM (1, 1) with initial value modification and its application. In Proceedings of the 2010 2nd International Conference on Industrial and Information Systems, Dalian, China, 10–11 July 2010; Volume 1, pp. 102–104. [Google Scholar]
  27. Jiang, J.; Zhang, Y.; Liu, C.; Xie, W. An improved nonhomogeneous discrete grey model and its application. Math. Probl. Eng. 2020, 2020, 4638296. [Google Scholar] [CrossRef]
  28. Wu, W.; Ma, X.; Zeng, B.; Zhang, P. A Conformable Fractional Non-homogeneous Grey Forecasting Model with Adjustable Parameters CFNGMA (1, 1, k, c) and its Application. J. Grey Syst. 2024, 36, 1–12. [Google Scholar]
  29. Ma, X.; Hu, Y.s.; Liu, Z.b. A novel kernel regularized nonhomogeneous grey model and its applications. Commun. Nonlinear Sci. Numer. Simul. 2017, 48, 51–62. [Google Scholar] [CrossRef]
  30. Vapnik, V. Statistical Learning Theory; John Wiley & Sons: Hoboken, NJ, USA, 1998; Volume 2, pp. 831–842. [Google Scholar]
  31. Hofmann, T.; Schölkopf, B.; Smola, A.J. Kernel methods in machine learning. Ann. Stat. 2008, 36, 1171–1220. [Google Scholar] [CrossRef]
  32. Camastra, F.; Verri, A. A novel kernel method for clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 801–805. [Google Scholar] [CrossRef]
  33. Blanchard, G.; Bousquet, O.; Zwald, L. Statistical properties of kernel principal component analysis. Mach. Learn. 2007, 66, 259–294. [Google Scholar] [CrossRef]
  34. Suykens, J.A.; Vandewalle, J. Chaos control using least-squares support vector machines. Int. J. Circuit Theory Appl. 1999, 27, 605–615. [Google Scholar] [CrossRef]
  35. Wang, H.; Fu, G.; Cai, Y.; Wang, S. Multiple feature fusion based image classification using a non-biased multi-scale kernel machine. In Proceedings of the 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Zhangjiajie, China, 15–17 August 2015; pp. 700–704. [Google Scholar]
  36. Wang, S.; Tang, L.; Yu, L. SD-LSSVR-based decomposition-and-ensemble methodology with application to hydropower consumption forecasting. In Proceedings of the 2011 Fourth International Joint Conference on Computational Sciences and Optimization, Kunming and Lijiang City, China, 15–19 April 2011; pp. 603–607. [Google Scholar]
  37. Juyal, A.; Sharma, S. A Study of landslide susceptibility mapping using machine learning approach. In Proceedings of the 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India, 4–6 February 2021; pp. 1523–1528. [Google Scholar]
  38. Wang, H.Q.; Sun, F.C.; Cai, Y.N.; Ding, L.G.; Chen, N. An unbiased LSSVM model for classification and regression. Soft Comput. 2010, 14, 171–180. [Google Scholar] [CrossRef]
  39. Jeon, M.; Kim, D.; Lee, W.; Kang, M.; Lee, J. A conservative approach for unbiased learning on unknown biases. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 16752–16760. [Google Scholar]
  40. de Mello, C.E.R. Active Learning: An Unbiased Approach. Ph.D. Thesis, Ecole Centrale Paris. Universidade federal do Rio de Janeiro, Paris, France, 2013. [Google Scholar]
  41. Syarif, I.; Prugel-Bennett, A.; Wills, G. SVM parameter optimization using grid search and genetic algorithm to improve classification performance. TELKOMNIKA Telecommun. Comput. Electron. Control 2016, 14, 1502–1509. [Google Scholar] [CrossRef]
  42. Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305. [Google Scholar]
  43. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  44. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  45. Bertsimas, D.; Tsitsiklis, J. Simulated annealing. Stat. Sci. 1993, 8, 10–15. [Google Scholar] [CrossRef]
  46. Brochu, E.; Cora, V.M.; De Freitas, N. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv 2010, arXiv:1012.2599. [Google Scholar]
  47. Mao, S.; Gao, M.; Xiao, X.; Zhu, M. A novel fractional grey system model and its application. Appl. Math. Model. 2016, 40, 5063–5076. [Google Scholar] [CrossRef]
  48. Xie, N.m.; Liu, S.f. Discrete grey forecasting model and its optimization. Appl. Math. Model. 2009, 33, 1173–1186. [Google Scholar] [CrossRef]
  49. Zeng, B.; Meng, W.; Tong, M. A self-adaptive intelligence grey predictive model with alterable structure and its application. Eng. Appl. Artif. Intell. 2016, 50, 236–244. [Google Scholar] [CrossRef]
  50. Ma, X.; Mei, X.; Wu, W.; Wu, X.; Zeng, B. A novel fractional time delayed grey model with Grey Wolf Optimizer and its applications in forecasting the natural gas and coal consumption in Chongqing China. Energy 2019, 178, 487–507. [Google Scholar] [CrossRef]
  51. Shaikh, F.; Ji, Q.; Shaikh, P.H.; Mirjat, N.H.; Uqaili, M.A. Forecasting China’s natural gas demand based on optimised nonlinear grey models. Energy 2017, 140, 941–951. [Google Scholar] [CrossRef]
  52. Wu, L.; Liu, S.; Chen, H.; Zhang, N. Using a novel grey system model to forecast natural gas consumption in China. Math. Probl. Eng. 2015, 2015, 686501. [Google Scholar] [CrossRef]
Figure 1. The flowchart of the conformable fractional order unbiased kernel regularized nonhomogeneous grey model.
Figure 2. The performance of 10 grey models’ predictions in Case 1.
Figure 3. The regression performance of ten grey models in Case 1. R 2 represents the coefficient of determination between the raw data and the predicted values of the models (Overall).
Figure 4. The performance of 10 grey models’ predictions in Case 2.
Figure 4. The performance of 10 grey models’ predictions in Case 2.
Systems 13 00527 g004
Figure 6. The performance of 10 grey models’ predictions in Case 3.
Figure 6. The performance of 10 grey models’ predictions in Case 3.
Systems 13 00527 g006
Figure 8. The regression performance of ten grey models in Case 4. R 2 represents the coefficient of determination between the raw data and the predicted values of the models (Overall).
Figure 8. The regression performance of ten grey models in Case 4. R 2 represents the coefficient of determination between the raw data and the predicted values of the models (Overall).
Systems 13 00527 g008
Figure 9. The performance of 10 grey models’ predictions in Case 4.
Figure 9. The performance of 10 grey models’ predictions in Case 4.
Systems 13 00527 g009
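The captions of Figures 3 and 8 refer to R², the overall coefficient of determination between the raw data and each model’s output. For reference, the quantity can be computed as in the following minimal sketch; the helper name and the use of NumPy are my own choices, and the example values are the first five Case 1 observations from Table 3 together with the corresponding CFUKRNGM fitted values from Table 4.

```python
import numpy as np

def r_squared(actual, predicted):
    """Overall coefficient of determination between raw data and model output."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((actual - predicted) ** 2)        # residual sum of squares
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# First five Case 1 observations (Table 3) and the CFUKRNGM fitted values (Table 4).
actual = [0.7137, 0.7470, 0.5997, 0.6244, 0.5548]
fitted = [0.7137, 0.7412, 0.6040, 0.6217, 0.5558]
print(round(r_squared(actual, fitted), 4))
```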
Table 1. The information of the four datasets and the partitioning.
NO. | Name | Total | Modeling | Prediction
Case 1 | China’s Oil Production | 20 Months | From 1 to 15 | From 16 to 20
Case 2 | Carbon Dioxide Emissions from Energy | From 2004 to 2023 | From 2004 to 2018 | From 2019 to 2023
Case 3 | Coal Production | From 2004 to 2023 | From 2004 to 2018 | From 2019 to 2023
Case 4 | Electricity Generation from Gas | From 2004 to 2023 | From 2004 to 2018 | From 2019 to 2023
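As Table 1 indicates, every case is partitioned chronologically, with the final five observations held out for out-of-sample evaluation. The following is a minimal sketch of this partitioning; the variable and function names are illustrative rather than taken from the paper, and the example series is the Case 1 data of Table 3.

```python
def split_series(series, n_predict=5):
    """Chronological split: earlier points for modeling, last n_predict points for prediction."""
    modeling = series[:-n_predict]
    prediction = series[-n_predict:]
    return modeling, prediction

# Case 1: 20 monthly oil-production values (Table 3); months 1-15 for modeling, 16-20 for prediction.
oil = [0.7137, 0.7470, 0.5997, 0.6244, 0.5548, 0.4834, 0.4924, 0.4588, 0.4988, 0.5091,
       0.4822, 0.5032, 0.4721, 0.5320, 0.5296, 0.4765, 0.3941, 0.3862, 0.4009, 0.3652]
modeling, prediction = split_series(oil, n_predict=5)
```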
Table 2. The nine evaluation metrics.
Evaluation Metrics | Mathematical Formula
RMSE | $\left[\frac{1}{v}\sum_{m=1}^{v}\left(x^{(0)}(m)-\hat{x}^{(0)}(m)\right)^{2}\right]^{0.5}$
MAE | $\frac{1}{v}\sum_{m=1}^{v}\left|x^{(0)}(m)-\hat{x}^{(0)}(m)\right|$
NRMSE | $\frac{1}{\bar{x}}\left[\frac{1}{v}\sum_{m=1}^{v}\left(x^{(0)}(m)-\hat{x}^{(0)}(m)\right)^{2}\right]^{0.5}\times 100$
MAPE | $\frac{1}{v}\sum_{m=1}^{v}\left|\frac{x^{(0)}(m)-\hat{x}^{(0)}(m)}{x^{(0)}(m)}\right|\times 100$
RMSPE | $\left[\frac{1}{v}\sum_{m=1}^{v}\left(\frac{x^{(0)}(m)-\hat{x}^{(0)}(m)}{x^{(0)}(m)}\right)^{2}\right]^{0.5}\times 100$
MSE | $\frac{1}{v}\sum_{m=1}^{v}\left(x^{(0)}(m)-\hat{x}^{(0)}(m)\right)^{2}$
IA | $1-\frac{\sum_{m=1}^{v}\left(x^{(0)}(m)-\hat{x}^{(0)}(m)\right)^{2}}{\sum_{m=1}^{v}\left(\left|x^{(0)}(m)-\bar{x}\right|+\left|\hat{x}^{(0)}(m)-\bar{x}\right|\right)^{2}}$
U1 | $\frac{\left[\sum_{m=1}^{v}\left(x^{(0)}(m)-\hat{x}^{(0)}(m)\right)^{2}\right]^{0.5}}{\left[\sum_{m=1}^{v}\left(x^{(0)}(m)\right)^{2}\right]^{0.5}+\left[\sum_{m=1}^{v}\left(\hat{x}^{(0)}(m)\right)^{2}\right]^{0.5}}$
U2 | $\frac{\left[\sum_{m=1}^{v}\left(x^{(0)}(m)-\hat{x}^{(0)}(m)\right)^{2}\right]^{0.5}}{\left[\sum_{m=1}^{v}\left(x^{(0)}(m)\right)^{2}\right]^{0.5}}$
Here $x^{(0)}(m)$ and $\hat{x}^{(0)}(m)$ denote the raw and predicted values, $v$ the number of evaluated points, and $\bar{x}$ the mean of the raw data.
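For concreteness, the nine metrics of Table 2 can be evaluated as in the following minimal Python sketch; the function name and the use of NumPy are illustrative choices rather than code from the paper, and IA follows the index-of-agreement form written above.

```python
import numpy as np

def evaluation_metrics(actual, predicted):
    """Compute the nine metrics of Table 2 for a sequence of v observations."""
    a = np.asarray(actual, dtype=float)     # raw values x^(0)(m)
    p = np.asarray(predicted, dtype=float)  # model values x_hat^(0)(m)
    err = a - p
    xbar = a.mean()
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    return {
        "RMSE": rmse,
        "MAE": np.mean(np.abs(err)),
        "NRMSE": rmse / xbar * 100,
        "MAPE": np.mean(np.abs(err / a)) * 100,
        "RMSPE": np.sqrt(np.mean((err / a) ** 2)) * 100,
        "MSE": mse,
        "IA": 1 - np.sum(err ** 2) / np.sum((np.abs(a - xbar) + np.abs(p - xbar)) ** 2),
        "U1": np.sqrt(np.sum(err ** 2)) / (np.sqrt(np.sum(a ** 2)) + np.sqrt(np.sum(p ** 2))),
        "U2": np.sqrt(np.sum(err ** 2)) / np.sqrt(np.sum(a ** 2)),
    }
```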
Table 3. Raw data of monthly oil production (10⁴ m³) of the block-L in North China oilfield.
Month | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Oil production | 0.7137 | 0.7470 | 0.5997 | 0.6244 | 0.5548 | 0.4834 | 0.4924 | 0.4588 | 0.4988 | 0.5091
Month | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20
Oil production | 0.4822 | 0.5032 | 0.4721 | 0.5320 | 0.5296 | 0.4765 | 0.3941 | 0.3862 | 0.4009 | 0.3652
Table 4. The oil production forecast results for the L block of the North China Oilfield.
Columns: Month | Oil Production | CFUKRNGM (γ = 97.8481, θ = 87.7075, α = 0.9809, σ = 0.0815) | ARGM | CFGM (α = 0.9817) | DGM | FGM (r = 1.3059) | GM | KRNGM (γ = 99.9977, σ = 0.0942) | NGM | SAIGM | TDGM
10.71370.71370.71370.71370.71370.71370.71370.7137 7.1370 × 10 1 0.71370.7137
20.74700.74120.64710.61730.62060.74710.61920.7415 3.5300 × 10 2 0.73950.7392
30.59970.60400.60190.60560.60610.61750.60500.6038 1.2017 × 10 0 0.63030.6534
40.62440.62170.57140.59280.59190.56460.59100.6217 5.7169 × 10 0 0.56930.5902
50.55480.55580.55070.57970.57800.53530.57740.5558 2.2198 × 10 1 0.53530.5453
60.48340.48440.53670.56640.56450.51730.56410.4843 8.2359 × 10 1 0.51620.5151
70.49240.49320.52720.55310.55130.50610.55110.4931 3.0196 × 10 2 0.50560.4966
80.45880.45960.52070.54000.53830.49920.53840.4595 1.1035 × 10 3 0.49970.4871
90.49880.50100.51640.52700.52570.49540.52600.5008 4.0295 × 10 3 0.49640.4846
100.50910.50680.51340.51420.51340.49380.51380.5069 1.4710 × 10 4 0.49450.4871
110.48220.48550.51140.50160.50140.49410.50200.4854 5.3695 × 10 4 0.49350.4931
120.50320.50020.51010.48920.48970.49570.49040.5002 1.9600 × 10 5 0.49290.5013
130.47210.47470.50910.47710.47820.49850.47910.4747 7.1544 × 10 5 0.49260.5106
140.53200.53070.50850.46530.46700.50240.46810.5306 2.6115 × 10 6 0.49240.5199
150.52960.52810.50810.45370.45610.50710.45730.5283 9.5325 × 10 6 0.49230.5285
160.47650.43830.50780.44230.44540.51250.44670.4413 3.4796 × 10 7 0.49230.5358
170.39410.41310.50760.43120.43500.51870.43640.4188 1.2701 × 10 8 0.49220.5410
180.38620.40010.50750.42040.42480.52540.42640.4067 4.6362 × 10 8 0.49220.5438
190.40090.38760.50740.40980.41480.53280.41660.3949 1.6923 × 10 9 0.49220.5437
200.36520.37540.50730.39940.40510.54070.40700.3834 6.1772 × 10 9 0.49220.5404
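The CFUKRNGM hyperparameters listed in the table header above (γ, θ, α, σ) are reported as being tuned automatically by Bayesian optimization. The sketch below shows how such a calibration loop could be wired up with scikit-optimize’s gp_minimize; the objective cfukrngm_fit_mape is a hypothetical placeholder (a simple exponential-smoothing fit standing in for the actual CFUKRNGM training routine), and the search bounds are illustrative assumptions, not values from the paper.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

def cfukrngm_fit_mape(params, series):
    """Placeholder objective: should fit the CFUKRNGM with the given hyperparameters
    and return the in-sample MAPE. Here a toy smoothing driven by alpha stands in."""
    gamma, theta, alpha, sigma = params  # gamma, theta, sigma unused in this placeholder
    fitted = np.empty_like(series)
    fitted[0] = series[0]
    for k in range(1, len(series)):
        fitted[k] = alpha * series[k - 1] + (1 - alpha) * fitted[k - 1]
    return float(np.mean(np.abs((series - fitted) / series)) * 100)

# Modeling portion of Case 1 (months 1-15, Table 3).
series = np.array([0.7137, 0.7470, 0.5997, 0.6244, 0.5548, 0.4834, 0.4924,
                   0.4588, 0.4988, 0.5091, 0.4822, 0.5032, 0.4721, 0.5320, 0.5296])

search_space = [
    Real(1e-2, 100.0, name="gamma"),  # regularization weight (illustrative bounds)
    Real(1e-2, 100.0, name="theta"),  # second regularization weight (illustrative bounds)
    Real(1e-3, 1.0, name="alpha"),    # conformable fractional order (illustrative bounds)
    Real(1e-3, 1.0, name="sigma"),    # kernel width (illustrative bounds)
]

result = gp_minimize(lambda p: cfukrngm_fit_mape(p, series),
                     search_space, n_calls=30, random_state=0)
print("best hyperparameters:", result.x, "best fitting MAPE:", result.fun)
```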
Table 5. The computed evaluation metrics for the prediction results of 10 grey models in Case 1.
Metrics | CFUKRNGM | ARGM | CFGM | DGM | FGM | GM | KRNGM | NGM | SAIGM | TDGM
Fitting RMSE (↓) | 0.0026 | 0.0403 | 0.0561 | 0.0547 | 0.0255 | 0.0547 | 0.0547 | 2.5592 × 10⁶ | 0.0273 | 0.0238
MAE (↓) | 0.0022 | 0.0299 | 0.0421 | 0.0411 | 0.0201 | 0.0410 | 0.0410 | 8.7529 × 10⁵ | 0.0224 | 0.0180
NRMSE (↓) | 0.4796 | 7.3630 | 10.2683 | 10.0119 | 4.6681 | 10.0114 | 10.0114 | 4.6808 × 10⁸ | 4.9895 | 4.3456
MAPE (↓) | 0.3922 | 5.5111 | 7.7636 | 7.5837 | 3.8495 | 7.5684 | 7.5684 | 1.6643 × 10⁸ | 4.2266 | 3.4135
RMSPE (↓) | 0.4536 | 7.0912 | 10.0954 | 9.8440 | 4.8004 | 9.8207 | 9.8207 | 4.8342 × 10⁸ | 5.1117 | 4.4707
MSE (↓) | 0.0000 | 0.0016 | 0.0032 | 0.0030 | 0.0007 | 0.0030 | 0.0030 | 6.5495 × 10¹² | 0.0007 | 0.0006
IA (↑) | 0.9990 | 0.7737 | 0.5599 | 0.5816 | 0.9090 | 0.5817 | 0.5817 | −9.1441 × 10¹⁴ | 0.8961 | 0.9212
U1 (↓) | 0.0024 | 0.0364 | 0.0509 | 0.0496 | 0.0231 | 0.0496 | 0.0496 | 1.0000 | 0.0247 | 0.0214
U2 (↓) | 0.0047 | 0.0728 | 0.1015 | 0.0989 | 0.0461 | 0.0989 | 0.0989 | 4.6257 × 10⁶ | 0.0493 | 0.0429
Prediction RMSE (↓) | 0.0214 | 0.1097 | 0.0315 | 0.0344 | 0.1299 | 0.0354 | 0.0230 | 2.8724 × 10⁹ | 0.0955 | 0.1422
MAE (↓) | 0.0189 | 0.1030 | 0.0297 | 0.0329 | 0.1214 | 0.0339 | 0.0209 | 1.6989 × 10⁹ | 0.0876 | 0.1364
NRMSE (↓) | 5.2983 | 27.1047 | 7.7872 | 8.4996 | 32.1106 | 8.7605 | 5.6785 | 7.0998 × 10¹¹ | 23.5976 | 35.1374
MAPE (↓) | 4.5128 | 26.4543 | 7.4040 | 8.2577 | 31.2333 | 8.5461 | 5.0912 | 4.5463 × 10¹¹ | 22.6392 | 34.8273
RMSPE (↓) | 4.8874 | 28.5656 | 7.8880 | 8.7339 | 33.9029 | 9.0402 | 5.4627 | 7.8163 × 10¹¹ | 24.9476 | 36.8284
MSE (↓) | 0.0005 | 0.0120 | 0.0010 | 0.0012 | 0.0169 | 0.0013 | 0.0005 | 8.2508 × 10¹⁸ | 0.0091 | 0.0202
IA (↑) | 0.6802 | −7.3690 | 0.3092 | 0.1770 | −10.7458 | 0.1257 | 0.6327 | −5.7421 × 10²¹ | −5.3434 | −13.0645
U1 (↓) | 0.0265 | 0.1200 | 0.0381 | 0.0414 | 0.1393 | 0.0425 | 0.0282 | 1.0000 | 0.1063 | 0.1501
U2 (↓) | 0.0528 | 0.2699 | 0.0775 | 0.0846 | 0.3197 | 0.0872 | 0.0565 | 7.0688 × 10⁹ | 0.2349 | 0.3498
Note: ↑ indicates that a larger value is preferred (higher-is-better), while ↓ indicates that a smaller value is preferred (lower-is-better). The same convention applies to all subsequent tables.
Table 6. Carbon dioxide emissions in Turkey from 2004 to 2023 (unit: million metric tons).
Year | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013
Emission | 216.4 | 224.8 | 248.0 | 272.8 | 276.3 | 275.3 | 276.3 | 298.8 | 314.4 | 303.3
Year | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 | 2022 | 2023
Emission | 335.1 | 341.1 | 359.2 | 404.2 | 401.8 | 394.0 | 384.6 | 420.7 | 420.4 | 411.1
Table 7. The carbon dioxide emissions forecast results for Turkey from 2004 to 2023.
Columns: Year | Carbon Dioxide Emissions | CFUKRNGM (γ = 91.5762, θ = 29.3618, α = 0.8293, σ = 0.0108) | ARGM | CFGM (α = 0.9953) | DGM | FGM (r = 1.0028) | GM | KRNGM (γ = 99.9993, σ = 0.0185) | NGM | SAIGM | TDGM
2004216.4216.4000216.4000216.4000216.4000216.4000216.4000216.4000216.4000216.4000216.4000
2005224.8224.9704230.4513233.9799234.1835234.5877234.0667224.9704101.2048242.3103231.4886
2006248247.6885244.3601243.7709243.9169244.0031243.8046247.6885179.9963249.1465245.0977
2007272.8272.4826258.1279253.9461254.0547253.9683253.9477272.4826233.7770256.6431256.8477
2008276.3276.2969271.7562264.5313264.6139264.4261264.5127276.2969270.4861264.8639267.1155
2009275.3275.4745285.2463275.5482275.6120275.3669275.5173275.4745295.5425273.8790276.3172
2010276.3276.3359298.5996287.0170287.0672286.7962286.9797276.3359312.6453283.7650284.9122
2011298.8298.8901311.8176298.9578298.9985298.7263298.9189298.8901324.3191294.6061293.4076
2012314.4313.8068324.9015311.3912311.4257311.1730311.3549313.8068332.2873306.4946302.3634
2013303.3304.6087337.8528324.3383324.3694324.1547324.3082304.6087337.7262319.5316312.3980
2014335.1333.8528350.6728337.8208337.8511337.6913337.8005333.8528341.4386333.8282324.1941
2015341.1342.0929363.3629351.8612351.8931351.8044351.8541342.0929343.9726349.5059338.5053
2016359.2358.7860375.9242366.4830366.5188366.5171366.4923358.7860345.7022366.6983356.1640
2017404.2403.4932388.3583381.7105381.7523381.8534381.7396403.4932346.8828385.5516378.0889
2018401.8402.3295400.6663397.5689397.6190397.8388397.6211402.3295347.6886406.2263405.2947
2019394406.9946412.8495414.0846414.1452414.5000414.1634406.9946348.2386428.8985438.9014
2020384.6429.8464424.9092431.2850431.3582431.8648431.3939429.8464348.6140453.7610480.1458
2021420.7447.6455436.8466449.9624449.2866449.9624449.3413447.6455348.8703481.0254530.3934
2022420.4466.1816448.6630467.8548467.9602468.8233468.0353466.1816349.0452510.9239591.1520
2023411.1485.4853460.3596487.2848487.4099488.4791487.5071485.4853349.1646543.7110664.0860
Table 8. The computed evaluation metrics for the prediction results of 10 grey models in Case 2.
Metrics | CFUKRNGM | ARGM | CFGM | DGM | FGM | GM | KRNGM | NGM | SAIGM | TDGM
Fitting RMSE (↓) | 0.5915 | 15.5371 | 11.1719 | 11.1371 | 11.1191 | 11.1374 | 0.6231 | 46.0347 | 10.3180 | 10.1745
MAE (↓) | 0.4266 | 12.6907 | 8.4843 | 8.4474 | 8.4115 | 8.4371 | 0.4597 | 33.6663 | 8.2478 | 7.8018
NRMSE (↓) | 0.1951 | 5.1246 | 3.6848 | 3.6733 | 3.6674 | 3.6734 | 0.2055 | 15.1836 | 3.4032 | 3.3559
MAPE (↓) | 0.1345 | 4.1206 | 2.7748 | 2.7672 | 2.7585 | 2.7637 | 0.1421 | 11.9358 | 2.7632 | 2.5084
RMSPE (↓) | 0.1851 | 5.0301 | 3.6197 | 3.6181 | 3.6194 | 3.6180 | 0.1909 | 17.9738 | 3.5653 | 3.1263
MSE (↓) | 0.3499 | 241.4020 | 124.8118 | 124.0343 | 123.6341 | 124.0410 | 0.3883 | 2119.1920 | 106.4605 | 103.5211
IA (↑) | 0.9999 | 0.9205 | 0.9589 | 0.9591 | 0.9593 | 0.9591 | 0.9999 | 0.3019 | 0.9649 | 0.9659
U1 (↓) | 0.0010 | 0.0249 | 0.0181 | 0.0181 | 0.0181 | 0.0181 | 0.0010 | 0.0760 | 0.0167 | 0.0166
U2 (↓) | 0.0019 | 0.0504 | 0.0363 | 0.0361 | 0.0361 | 0.0361 | 0.0020 | 0.1494 | 0.0335 | 0.0330
Prediction RMSE (↓) | 9.4834 | 19.0895 | 19.0895 | 27.6822 | 28.0097 | 27.7173 | 26.5472 | 34.1332 | 48.5939 | 88.0666
MAE (↓) | 4.2268 | 10.1885 | 10.1885 | 14.6240 | 14.8058 | 14.6427 | 13.6902 | 19.1245 | 25.8347 | 44.9252
NRMSE (↓) | 2.3349 | 4.7000 | 4.7000 | 6.8156 | 6.8962 | 6.8242 | 6.5361 | 8.4039 | 11.9642 | 21.6827
MAPE (↓) | 1.0500 | 2.5205 | 2.5205 | 3.5961 | 3.6405 | 3.6006 | 3.3635 | 4.6723 | 6.3313 | 10.9646
RMSPE (↓) | 2.3620 | 4.7346 | 4.7346 | 6.7954 | 6.8750 | 6.8039 | 6.5131 | 8.2826 | 11.8650 | 21.3910
MSE (↓) | 89.9352 | 364.4101 | 364.4101 | 766.3015 | 784.5446 | 768.2483 | 704.7523 | 1165.0730 | 2361.3699 | 7755.7298
IA (↑) | −0.2832 | −4.1995 | −4.1995 | −9.9337 | −10.1940 | −9.9615 | −9.0555 | −15.6234 | −32.6924 | −109.6600
U1 (↓) | 0.0199 | 0.0392 | 0.0392 | 0.0559 | 0.0566 | 0.0560 | 0.0538 | 0.0783 | 0.0944 | 0.1600
U2 (↓) | 0.0404 | 0.0814 | 0.0814 | 0.1180 | 0.1194 | 0.1181 | 0.1131 | 0.1455 | 0.2071 | 0.3753
Table 9. Coal production of Canada from 2004 to 2023 (unit: 10⁶ tonnes).
Year | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013
Production | 66.2 | 68.4 | 67.4 | 69.0 | 68.4 | 64.6 | 68.0 | 67.5 | 67.3 | 68.4
Year | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 | 2022 | 2023
Production | 68.3 | 62.4 | 62.4 | 60.6 | 55.0 | 53.2 | 46.1 | 47.6 | 46.7 | 48.6
Table 10. The coal production forecast results for Canada from 2004 to 2023.
Columns: Year | Coal Production | CFUKRNGM (γ = 90.9921, θ = 74.8937, α = 0.6321, σ = 0.0106) | ARGM | CFGM (α = 0.8691) | DGM | FGM (r = 0.8796) | GM | KRNGM (γ = 99.9935, σ = 0.1159) | NGM | SAIGM | TDGM
200466.266.200066.200066.200066.200066.200066.200066.200066.200066.200066.2000
200568.468.221665.382166.877670.277367.016870.255368.464739.860067.925569.2202
200667.467.375064.466568.467969.518668.620169.500367.343363.570367.900868.2143
20076969.139963.441769.029468.768169.147868.753469.141766.241167.861167.7561
200868.468.189462.294569.013368.025769.077868.014568.165466.542067.797167.6783
200964.664.939261.010468.631467.291468.639567.283664.897566.575967.694067.8204
20106867.884959.572968.001966.564967.957866.560567.732466.579767.527868.0290
201167.567.720657.963967.197465.846367.108665.845267.705666.580267.259968.1567
201267.367.178556.162866.266565.135566.141265.137667.075466.580266.828368.0625
201368.468.650654.146765.242964.432365.089364.437568.567466.580266.132767.6114
201468.368.028151.889864.151263.736763.977363.745068.025466.580265.011866.6737
201562.462.546249.363663.009663.048662.822963.060062.571066.580263.205465.1253
201662.462.390246.535861.832162.368061.639862.382362.312166.580260.294462.8475
201760.660.524543.370460.629961.694760.438261.711960.589566.580255.603359.7262
20185555.039739.827259.411461.028659.226361.048755.103266.580248.043555.6522
201953.253.458235.860958.183760.369858.010760.392657.431866.580235.860950.5211
202046.151.153631.421256.952359.718156.796359.743659.257266.580216.228644.2325
202147.649.031126.451455.721859.073455.587359.101558.594066.5802−15.409036.6903
202246.746.949420.888454.496058.435654.387058.466457.938366.5802−66.393327.8027
202348.644.915514.661353.277957.804853.198257.838157.290066.5802−148.554817.4813
Table 11. The computed evaluation metrics for the prediction results of 10 grey models in Case 3.
Metrics | CFUKRNGM | ARGM | CFGM | DGM | FGM | GM | KRNGM | NGM | SAIGM | TDGM
Fitting RMSE (↓) | 0.1887 | 10.9724 | 2.1378 | 2.5750 | 2.1580 | 2.5750 | 0.1801 | 8.4009 | 2.6659 | 1.3385
MAE (↓) | 0.1505 | 9.4848 | 1.4354 | 1.9254 | 1.4844 | 1.9256 | 0.1539 | 4.7655 | 1.8277 | 1.0255
NRMSE (↓) | 0.2876 | 16.7279 | 3.2591 | 3.9257 | 3.2900 | 3.9257 | 0.2746 | 12.8076 | 4.0643 | 2.0405
MAPE (↓) | 0.2256 | 14.7713 | 2.2416 | 3.0036 | 2.3109 | 3.0044 | 0.2324 | 7.4134 | 2.9465 | 1.5764
RMSPE (↓) | 0.2825 | 17.3293 | 3.4052 | 4.1534 | 3.4125 | 4.1558 | 0.2707 | 12.7990 | 4.5000 | 2.0723
MSE (↓) | 0.0356 | 120.3936 | 4.5701 | 6.6307 | 4.6571 | 6.6306 | 0.0324 | 70.5751 | 7.1072 | 1.7915
IA (↑) | 0.9975 | −7.3195 | 0.6842 | 0.5418 | 0.6782 | 0.5418 | 0.9978 | −3.8769 | 0.5089 | 0.8762
U1 (↓) | 0.0014 | 0.0897 | 0.0163 | 0.0196 | 0.0164 | 0.0196 | 0.0014 | 0.0643 | 0.0205 | 0.0102
U2 (↓) | 0.0029 | 0.1670 | 0.0325 | 0.0392 | 0.0328 | 0.0392 | 0.0027 | 0.1279 | 0.0406 | 0.0204
Prediction RMSE (↓) | 3.3936 | 23.5858 | 7.6118 | 10.8715 | 7.4209 | 10.8991 | 10.1361 | 18.3152 | 106.6051 | 17.0597
Prediction MAE (↓) | 2.7258 | 22.5834 | 7.2649 | 10.6403 | 7.0669 | 10.6684 | 9.6623 | 18.1402 | 84.0935 | 13.0944
Prediction NRMSE (↓) | 7.0057 | 48.6907 | 15.7139 | 22.4432 | 15.3197 | 22.5003 | 20.9251 | 37.8100 | 220.0765 | 35.2183
Prediction MAPE (↓) | 5.6870 | 46.7934 | 15.2135 | 22.2381 | 14.8042 | 22.2965 | 20.3074 | 37.8034 | 175.5199 | 27.3004
Prediction RMSPE (↓) | 7.0590 | 48.9523 | 16.1097 | 22.9149 | 15.7136 | 22.9725 | 21.4948 | 38.4112 | 221.8091 | 35.5089
Prediction MSE (↓) | 11.5163 | 556.2899 | 57.9397 | 118.1896 | 55.0691 | 118.7912 | 102.7406 | 335.4455 | 11364.6431 | 291.0341
Prediction IA (↑) | −0.8055 | −86.2146 | −8.0837 | −17.5297 | −7.6337 | −17.6240 | −15.1076 | −51.5909 | −1780.7389 | −44.6281
Prediction U1 (↓) | 0.0355 | 0.3127 | 0.0730 | 0.1010 | 0.0713 | 0.1013 | 0.0951 | 0.1591 | 0.8619 | 0.1990
Prediction U2 (↓) | 0.0700 | 0.4863 | 0.1569 | 0.2241 | 0.1530 | 0.2247 | 0.2090 | 0.3776 | 2.1978 | 0.3517
Table 12. Natural gas electricity generation in the U.S. from 2004 to 2023 (unit: terawatt-hours).
Year | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013
Generation | 763.5 | 818.2 | 877.9 | 964.1 | 949.4 | 990.3 | 1062.0 | 1090.0 | 1318.2 | 1209.5
Year | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 | 2022 | 2023
Generation | 1211.4 | 1435.1 | 1483.1 | 1395.4 | 1582.6 | 1708.1 | 1749.2 | 1698.1 | 1814.1 | 1937.7
Table 13. The electricity generation from gas forecast results for the United States from 2004 to 2023.
Columns: Year | Electricity Generation | CFUKRNGM (γ = 97.6935, θ = 1.7422, α = 0.7297, σ = 0.2376) | ARGM | CFGM (α = 0.9589) | DGM | FGM (r = 0.9729) | GM | KRNGM (γ = 99.9965, σ = 0.0308) | NGM | SAIGM | TDGM
2004763.5763.5000763.5000763.5000763.5000763.5000763.5000763.5000763.5000763.5000763.5000
2005818.2817.7287841.7479830.5556842.6334829.8193841.8743818.2038349.0039824.7984831.7717
2006877.9877.7716915.5636878.0895883.8792878.3379883.1496877.1191616.2725872.6037874.5524
2007964.1963.6613985.1980925.2792927.1440925.7748926.4486963.7392816.1849921.6595920.5546
2008949.4949.31911050.8881973.2401972.5265973.7472971.8704949.2829965.7162971.9986969.5025
2009990.3993.48051112.85731022.51451020.13041022.94081019.5191992.97571077.56311023.65451021.1078
201010621056.95661171.31631073.43441070.06451073.74301069.50391056.98271161.22281076.66161075.0693
201110901098.19111226.46391126.23791122.44281126.41791121.93941097.52351223.79881131.05541131.0718
20121318.21305.76011278.48781181.11661177.38491181.17111176.94571305.03611270.60471186.87201188.7860
20131209.51221.69991327.56481238.23751235.46891238.17811295.18111222.74201305.61471244.14881247.8675
20141211.41204.94761373.86201297.75461295.46891297.59901295.18111204.07681331.80161302.92381307.9563
20151435.11438.08681417.53671359.81571358.88041359.58621358.68111437.47961351.38901363.23641368.6760
20161483.11478.28941458.73741424.56591425.39581424.28921425.29431478.92931366.04001425.12671429.6330
20171395.41403.52421497.60441492.15071495.16711491.85711495.17341403.80161376.99871488.63591490.4160
20181582.61575.70051534.26991562.71741568.35371562.44131568.47861574.97041385.19571553.80651550.5948
20191708.11738.68491568.85841636.41681645.12261636.19611645.37781759.92341391.32691620.68191609.7199
20201749.21720.60031601.48781713.40361725.64921713.28051726.04711770.37821395.91291689.30671667.3209
20211698.11796.01671632.26881793.83781810.11751793.85871810.67151862.49341399.34321759.72661722.9065
20221814.11874.15101661.30641877.88491898.72051878.10071899.44491959.40171401.90901831.98861775.9628
20231937.71955.13441688.69911965.71681991.66041966.18321992.57062061.35231403.82811906.14101825.9526
Table 14. The computed evaluation metrics for the prediction results of 10 grey models in Case 4.
Metrics | CFUKRNGM | ARGM | CFGM | DGM | FGM | GM | KRNGM | NGM | SAIGM | TDGM
Fitting RMSE (↓) | 5.8836 | 87.1415 | 58.0112 | 58.5286 | 58.0764 | 58.5268 | 6.6559 | 169.7289 | 57.8895 | 57.8325
MAE (↓) | 4.7493 | 70.9890 | 44.2124 | 43.9447 | 43.9046 | 43.7341 | 4.9861 | 126.4018 | 45.0248 | 45.1177
NRMSE (↓) | 0.5146 | 7.6214 | 5.0737 | 5.1189 | 5.0794 | 5.1188 | 0.5821 | 14.8445 | 5.0630 | 5.0580
MAPE (↓) | 0.3895 | 6.3110 | 3.5592 | 3.5779 | 3.5350 | 3.5551 | 0.3957 | 12.2434 | 3.6338 | 3.6539
RMSPE (↓) | 0.4744 | 7.8017 | 4.5080 | 4.5499 | 4.5069 | 4.5448 | 0.5253 | 18.5457 | 4.5192 | 4.5315
MSE (↓) | 34.6167 | 7593.6389 | 3365.2959 | 3425.5957 | 3372.8670 | 3425.3854 | 44.3007 | 28807.9108 | 3351.1989 | 3344.5959
IA (↑) | 0.9994 | 0.8762 | 0.9451 | 0.9441 | 0.9450 | 0.9441 | 0.9993 | 0.5302 | 0.9453 | 0.9455
U1 (↓) | 0.0025 | 0.0365 | 0.0248 | 0.0250 | 0.0248 | 0.0250 | 0.0028 | 0.0737 | 0.0248 | 0.0247
U2 (↓) | 0.0050 | 0.0745 | 0.0496 | 0.0500 | 0.0496 | 0.0500 | 0.0057 | 0.1451 | 0.0495 | 0.0494
Prediction RMSE (↓) | 43.9885 | 161.7951 | 60.5482 | 60.5482 | 63.4389 | 74.0770 | 115.3784 | 392.2584 | 57.1716 | 78.6640
MAE (↓) | 39.0028 | 150.9159 | 54.9020 | 54.9020 | 58.5168 | 67.7324 | 101.2688 | 382.9760 | 51.6771 | 70.9901
NRMSE (↓) | 14.4429 | 45.9183 | 41.5274 | 42.1727 | 41.6295 | 42.2044 | 39.3578 | 59.1307 | 34.3863 | 19.1493
MAPE (↓) | 12.9037 | 46.3896 | 41.8662 | 42.5365 | 41.9723 | 42.5695 | 39.5093 | 59.9959 | 34.2980 | 16.2495
RMSPE (↓) | 15.5456 | 48.9127 | 44.2576 | 44.9317 | 44.3648 | 44.9644 | 42.2001 | 62.4097 | 36.6595 | 19.9406
MSE (↓) | 298.4790 | 3017.0218 | 2467.6132 | 2544.8987 | 2479.7622 | 2548.7300 | 2216.5021 | 5003.0382 | 1691.9107 | 524.7003
IA (↑) | −0.9758 | −18.9711 | −15.3343 | −15.8459 | −15.4147 | −15.8712 | −13.6721 | −32.1174 | −10.1995 | −2.4732
U1 (↓) | 0.0685 | 0.1872 | 0.1724 | 0.1746 | 0.1727 | 0.1747 | 0.1650 | 0.2285 | 0.1471 | 0.0910
U2 (↓) | 0.1437 | 0.4568 | 0.4131 | 0.4195 | 0.4141 | 0.4198 | 0.3915 | 0.5882 | 0.3421 | 0.1905
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
