Article

Channel Prediction Technology Based on Adaptive Reinforced Reservoir Learning Network for Orthogonal Frequency Division Multiplexing Wireless Communication Systems

1 College of Automation & College of Artificial Intelligence, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
2 Jiangsu Engineer Laboratory for Internet of Things and Intelligent Robotics, Nanjing 210023, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(3), 575; https://doi.org/10.3390/electronics14030575
Submission received: 14 January 2025 / Revised: 29 January 2025 / Accepted: 30 January 2025 / Published: 31 January 2025

Abstract: Channel prediction is an effective technology for supporting adaptive transmission in wireless communication. To overcome the difficulty of accurately predicting channel state information (CSI) caused by its fast time-varying characteristics, the next-generation reservoir calculation network (NGRCN) is combined with the echo state network (ESN), and a channel prediction method for OFDM wireless communication systems based on an adaptive reinforced reservoir learning network (adaptive RRLN) is proposed. An adaptive elastic network (adaptive EN) is used to estimate the output weight matrix and avoid ill-conditioned solutions; as a result, the adaptive RRLN possesses both the echo state property and the oracle property. In addition, an adaptive singular spectrum analysis (adaptive SSA) method is proposed to improve the local predictability of the CSI by decomposing and reconstructing it, which improves the fitting accuracy of the channel prediction model. In the simulation section, OFDM wireless communication systems are constructed based on IEEE 802.11ah, and one-step prediction, multi-step prediction, and robustness tests are implemented and analyzed. The simulation results show that the prediction accuracy of the adaptive RRLN can reach 3 × 10−5 and 8.36 × 10−6, offering satisfactory prediction performance and robustness.

1. Introduction

In orthogonal frequency division multiplexing (OFDM) wireless communication systems, adaptive delay equalization [1], automatic power control [2], adaptive modulation [3], adaptive coding [4], and other adaptive transmission techniques can effectively guarantee the communication quality. The prerequisite for these adaptive transmission techniques is that the transmitter side must be accurately informed of the state information of the current communication environment, i.e., channel state information (CSI). However, due to the fast time-varying characteristics of the CSI and the feedback delay, the CSI fed back from the receiver to the transmitter side is often outdated. To ensure the effectiveness of the adaptive transmission technique, the transmitter can predict the CSI in the future based on the outdated CSI, so that the transmitter side can evaluate the quality of the future transmission environment and adjust the transmission parameters in time [5]. Therefore, channel prediction is an important technique to support adaptive transmission in OFDM wireless communication systems, and it is also one of the research hotspots in the wireless communication field.
Over the past decade, researchers have conducted many studies on channel prediction techniques for OFDM wireless communication systems and have proposed many prediction methods. At present, these channel prediction methods can mainly be divided into the following categories: linear prediction methods [6,7,8], parametric prediction methods [9,10], and nonlinear prediction methods [11,12,13,14]. In the first category, the predicted CSI sample is estimated as a weighted sum of past CSI samples of the OFDM system, where the weights are estimated using tools such as autoregression (AR) [15] and least mean squares (LMS) [16]. The main advantage of this category is that it is easy to implement; however, its prediction performance is not entirely satisfactory [17,18]. The second category can offer high prediction performance [10,19,20]; however, it is not suitable for a fast time-varying fading channel. In the third category, the CSI samples are learned and fitted using nonlinear tools, e.g., the neural network (NN) [21] and the support vector machine (SVM) [22], and the future CSI sample is predicted in a nonlinear learning manner. Within the NN family, deep learning is also an effective channel prediction tool, and scholars have conducted much research in this area. For instance, W. Jiang et al. introduced a time-domain channel predictor based on a deep NN model [23], and P. E. G. S. Pereira et al. proposed a channel prediction technique based on a convolutional neural network (CNN) to predict all possible multipaths in OFDM communication systems, offering some promising results with a CNN operating in the time–frequency domain [24]. Since the related parameters of the fading channel are not needed in advance, the nonlinear channel prediction method generally outperforms the former two categories. Therefore, the nonlinear channel prediction method is the current hotspot in the field of channel prediction.
Since machine learning (ML) and artificial intelligence (AI) are continually being improved, nonlinear prediction methods are also being widely developed. One type of efficient nonlinear learning model is the reservoir learning model, whose typical instance is the echo state network (ESN) model [25]. Since the introduction of the ESN model in 2004, it has been widely used to solve time-domain sequence-prediction problems in various fields, such as meteorological predictions [26], distributed photovoltaic power predictions [27], chaotic time-domain sequence predictions [28], and so on. In 2017, Y. Zhao et al. used the ESN model to solve a channel prediction problem and concluded that the ESN can offer satisfactory channel predictions in Ricean-fading scenarios [29]. Y. He et al. further imported the $l_{1/2}$ norm into the ESN model to solve ill-conditioned solutions and built a time-domain channel prediction model based on the joint ESN [30]. Based on the works of Y. He, J. Zhang et al. developed a communication networking method based on the channel prediction of the time-domain CSI for charging piles [31]. The above works indicate that the reservoir learning model works well for solving channel prediction problems due to the echo state property. In 2021, D. Gauthier et al. introduced a simpler reservoir learning structure, i.e., the next-generation reservoir calculation network (NGRCN), where the hidden layer is a nonlinear crossover calculation instead of a reservoir with a huge number of neurons [32]. Compared to the traditional ESN model, the NGRCN has many advantages, such as high accuracy, low complexity, and easy implementation [33]. Since the NGRCN was proposed, scholars have applied it in various fields. For example, A. Slonopas et al. used the NGRCN to predict network traffic in order to detect anomalous network traffic [34]. A. Haluszczynski et al. attempted to solve the problem of controlling nonlinear dynamical systems using the NGRCN and compared the NGRCN to the traditional ESN model [35]. The above works indicate that the NGRCN is superior to the ESN. However, due to its simpler structure, the NGRCN is not entirely suitable for the fading channel in complex communication scenarios, especially with high noise power. In addition, the NGRCN does not have the echo state property. Therefore, there is still room to improve the learning performance of the NGRCN. To the best of our knowledge, there have been no reports on channel prediction based on the NGRCN, which motivated us to conduct the related research.
Like Ref. [24], we aim to address the channel prediction issue by adopting an ML approach that efficiently predicts the fading behavior of a channel from past training samples. In this study, the NGRCN is integrated with the ESN model: the former is enhanced with a reservoir, and an adaptive reinforced reservoir learning network (adaptive RRLN)-based channel prediction method for OFDM wireless communication systems is proposed, which is the main novelty of our paper. An adaptive singular spectrum analysis (adaptive SSA) is used to decompose and reconstruct the CSI in the frequency domain at each subcarrier to improve the local predictability of the CSI and the learning and prediction capability of the adaptive RRLN model. To further improve the generalization ability of the adaptive RRLN, an adaptive elastic network (adaptive EN) is utilized to estimate the output weight matrix and solve the problem of ill-conditioned solutions for the output weight matrix in the NGRCN. Therefore, the adaptive RRLN has the echo state property and the oracle property, which enables it to fit the CSI with high accuracy and offer good prediction performance for OFDM wireless communication systems. The main contributions of this paper are as follows:
(1) Based on the ESN model and the NGRCN model, a channel prediction model based on the adaptive RRLN is proposed for OFDM wireless communication systems and described in detail, including the output weight matrix estimation method, i.e., the adaptive EN, and the local predictability enhancement method for CSI, i.e., the adaptive SSA.
(2) Extensive evaluations (i.e., computational complexity analysis, one-step prediction, multi-step prediction, and a robust prediction test) are presented and discussed in this paper.

2. Related Theory

2.1. Channel-Estimation Technique for OFDM Wireless Communication Systems

Typical OFDM wireless communication systems usually consist of a transmitter, a receiver, and some intermediate data-processing devices. In OFDM wireless communication systems, the data source at the transmitter is successively sent to the receiving antenna through the transmitting antenna after forward error correction (FEC) coding, bit interleaving, constellation mapping, serial-to-parallel conversion, an inverse fast Fourier transformation (IFFT), the addition of a cyclic prefix, and digital-to-analog conversion. In this process, the wireless RF signal undergoes fading as it propagates through the wireless channel. Therefore, after receiving the wireless RF signal, the receiver needs to reduce the bit error rate (BER) through channel equalization and FEC decoding [36].
We assume that the CSI at the subcarriers of the OFDM system is invariant or slowly varying over a frame of time; in this case, the frequency-domain signal $R_i(k)$ at the k-th subcarrier of the i-th OFDM symbol in the receiver can be expressed as follows:
$R_i(k) = \frac{1}{\sqrt{K}} \sum_{n=0}^{K-1} r_i(n)\, e^{-j 2\pi n k / K}$ (1)
where $r_i(n)$ denotes the n-th time-domain data-sampling point of the i-th complex baseband OFDM symbol, and $K$ is the total number of subcarriers per OFDM symbol. Therefore, the channel state information at the k-th subcarrier of the i-th OFDM symbol can be estimated by the least squares method:
$\hat{H}_i(k) = \frac{R_i(k)}{S_i(k)} = H_i(k) + W_i(k)$ (2)
where $S_i(k)$, $H_i(k)$, and $W_i(k)$ denote the transmitted signal, the actual CSI, and the estimation noise on the k-th subcarrier of the i-th OFDM symbol, respectively. $W_i(k)$ is usually modeled as Gaussian white noise with a mean of 0 and a variance of $\sigma^2$.
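As a concrete illustration of the least squares estimate above, the per-subcarrier CSI estimate is simply the received frequency-domain symbol divided by the known transmitted symbol. The sketch below, in Python with NumPy, applies this to a synthetic flat channel with additive Gaussian noise; all names and values are illustrative, not the paper's settings.

```python
import numpy as np

def ls_channel_estimate(received, transmitted):
    """Least squares CSI estimate per subcarrier: H_hat(k) = R(k) / S(k)."""
    return received / transmitted

# Toy example: known unit-magnitude pilots, a synthetic flat channel, and noise.
rng = np.random.default_rng(0)
K = 8                                                  # number of subcarriers
S = np.exp(1j * np.pi / 4 * rng.integers(0, 8, K))     # 8-PSK pilot symbols
H = 0.8 * np.exp(1j * 0.3)                             # true (flat) channel gain
noise = 0.01 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
R = H * S + noise                                      # received symbols
H_hat = ls_channel_estimate(R, S)                      # H plus a noise term
```

With unit-magnitude pilots, the estimation noise is simply rotated, so the estimate stays within the noise level of the true channel.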

2.2. Next-Generation Reservoir Calculation Network

Based on Refs. [25,32], the structure of the ESN is shown in Figure 1 and that of the NGRCN is shown in Figure 2.
In Figure 1, the ESN model has the reservoir; therefore, the output matrix Q ( t ) of the reservoir for the t-th input data point u ( t ) is
$Q(t) = (1-\alpha)\, Q(t-1) + \alpha \tanh\!\big(\kappa W_{in} u(t) + W_R Q(t-1)\big)$ (3)
where $\alpha \in (0, 1]$ is the balance coefficient of the reservoir; $\tanh$ is the hyperbolic tangent function, i.e., the activation function of the reservoir; and $\kappa \in (0, 1]$ is the scaling factor of the reservoir. $W_{in} \in \mathbb{R}^{p \times N}$ is the input weight matrix and $W_R \in \mathbb{R}^{p \times p}$ is the internal connection matrix of the reservoir, with a sparsity of SD. $N$ and $p$ are the neuron numbers of the input layer and the reservoir, respectively.
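The leaky reservoir update above can be sketched in a few lines of NumPy. The sparsity, spectral-radius scaling, and parameter values below are illustrative choices, not the paper's settings.

```python
import numpy as np

def esn_update(Q_prev, u, W_in, W_R, alpha=0.5, kappa=0.5):
    """One leaky reservoir update:
    Q(t) = (1 - alpha) * Q(t-1) + alpha * tanh(kappa * W_in @ u(t) + W_R @ Q(t-1))."""
    return (1 - alpha) * Q_prev + alpha * np.tanh(kappa * W_in @ u + W_R @ Q_prev)

rng = np.random.default_rng(1)
p, N = 20, 3                                   # reservoir / input sizes (illustrative)
W_in = rng.uniform(-0.5, 0.5, (p, N))
W_R = rng.uniform(-0.5, 0.5, (p, p))
W_R[rng.random((p, p)) > 0.05] = 0.0           # sparse degree SD = 0.05
sr = np.max(np.abs(np.linalg.eigvals(W_R)))    # spectral radius of the reservoir
if sr > 0:
    W_R *= 0.9 / sr                            # scale below 1 for the echo state property
Q = np.zeros(p)
for u in rng.standard_normal((10, N)):         # drive the reservoir with 10 inputs
    Q = esn_update(Q, u, W_in, W_R)
```

Because tanh is bounded, the reservoir state stays in [-1, 1], which is what makes the fading memory of past inputs well behaved.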
As can be seen from Figure 2, the NGRCN interactively multiplies the input-layer data in the hidden layer instead of using the reservoir of the ESN. Therefore, when the input data are fed in, the output matrix $Q(t) \in \mathbb{R}^{(1+N+(N^2+N)/2) \times \tilde{T}}$ of the hidden layer in the NGRCN is

$Q(t) = \big[\, Q_C(t), Q_L(t), Q_N(t) \,\big]^{\mathrm{T}}$ (4)

where $Q_C(t)$, $Q_L(t)$, and $Q_N(t)$ are the constant part, the linear part, and the nonlinear part, respectively, with $Q_C(t) = 1$, $Q_L(t) = [u_1(t), \ldots, u_N(t)]$, and $Q_N(t) = [u_1^2(t), u_1(t)u_2(t), \ldots, u_1(t)u_N(t), \ldots, u_N^2(t)]$. Here, $u_n(t)$ is the n-th element of the t-th input vector of the NGRCN, $n = 1, 2, \ldots, N$; $t = 1, 2, 3, \ldots, \tilde{T}$, where $\tilde{T}$ denotes the number of inputs in the training phase; and the superscript T denotes the matrix transpose. Owing to the linear and nonlinear parts, the hidden layer is an equivalently powerful universal approximator and shows performance comparable to that of a standard reservoir [32].
Therefore, the output weight matrix $\hat{W}_{out}$ can be estimated as follows:
$\hat{W}_{out} = Y_{train} Q^{\mathrm{T}} \big( Q Q^{\mathrm{T}} \big)^{-1}$ (5)
where $Y_{train} \in \mathbb{R}^{h \times \tilde{T}}$ is the target matrix in the training process.
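A minimal sketch of this NGRCN pipeline follows: the hidden-layer vector concatenates a constant, the N linear terms, and the (N² + N)/2 unique quadratic cross-terms, and the output weights come from the least-squares solution above. A tiny ridge term is added here purely for numerical safety, and the toy target is chosen to lie in the feature span; all sizes are illustrative.

```python
import numpy as np
from itertools import combinations_with_replacement

def ngrcn_features(u):
    """NGRCN hidden-layer vector: constant, linear, and unique quadratic terms."""
    quad = [u[i] * u[j] for i, j in combinations_with_replacement(range(len(u)), 2)]
    return np.concatenate(([1.0], u, quad))        # length 1 + N + (N^2 + N) / 2

def fit_output_weights(Q, Y, ridge=1e-8):
    """W_out = Y Q^T (Q Q^T + ridge * I)^(-1); the ridge guards conditioning."""
    G = Q @ Q.T
    return Y @ Q.T @ np.linalg.inv(G + ridge * np.eye(G.shape[0]))

# Toy check: learn y = 2*x0*x1 - x2, which the quadratic features can represent.
rng = np.random.default_rng(2)
U = rng.standard_normal((100, 3))                  # 100 training inputs, N = 3
Y = (2 * U[:, 0] * U[:, 1] - U[:, 2]).reshape(1, -1)
Q = np.stack([ngrcn_features(u) for u in U], axis=1)   # (features, samples)
W = fit_output_weights(Q, Y)
pred = W @ Q
```

Since the target lies exactly in the span of the quadratic features, the least-squares readout recovers it essentially perfectly, which illustrates why the cross-product layer can replace a trained reservoir for such targets.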

3. Channel Prediction Method Based on the Adaptive RRLN

3.1. Overall Calculation Methodology

The next-generation reservoir calculation network has the advantages of high accuracy, low complexity, and easy implementation but lacks the echo state property. Therefore, this research combines the reservoir with the NGRCN and proposes an adaptive reinforced reservoir learning network architecture, as shown in Figure 3.
The adaptive RRLN contains an input layer, a hidden layer, and an output layer. The hidden layer includes a constant part, a linear part, a nonlinear part, and a super nonlinear part. The constant, linear, and nonlinear parts are inherited from the NGRCN through cross-calculation, and by importing the reservoir, a super nonlinear part with the echo state property is added. Therefore, the adaptive RRLN has a high learning performance.
For the CSI H ^ ( k ) of the k-th subcarrier of OFDM wireless communication systems, the training and prediction process can be expressed as follows:
$u(t) = \big[\hat{H}_{i\times t}(k), \hat{H}_{i\times t+1}(k), \ldots, \hat{H}_{i\times t+N-1}(k)\big]^{\mathrm{T}}$, $Y(t) = \big[\hat{H}_{i\times t+N}(k), \hat{H}_{i\times t+N+1}(k), \ldots, \hat{H}_{i\times t+N+h-1}(k)\big]^{\mathrm{T}}$ (6)
where $u(t) \in \mathbb{R}^{N \times 1}$ and $Y(t) \in \mathbb{R}^{h \times 1}$ are the t-th input matrix and the corresponding target matrix of the adaptive reinforced reservoir learning network, respectively; $t = 1, 2, 3, \ldots, \tilde{T}$, where $\tilde{T}$ is the total number of input data points in the training phase. Then, the t-th output matrix $Q(t) \in \mathbb{R}^{(1+N+N^2+P) \times 1}$ of the hidden layer can be expressed as follows:
$Q(t) = \big[\, 1 : u(t) : X(t) : U(t) \,\big]^{\mathrm{T}}$ (7)
where 1 is the constant value part and $u(t)$ denotes the linear part; $X(t) \in \mathbb{R}^{1 \times N^2}$ denotes the nonlinear part and $U(t) \in \mathbb{R}^{1 \times P}$ denotes the super nonlinear part. Their respective expressions are as follows:
$X(t) = \big[\hat{H}^2_{i\times t}(k), \ldots, \hat{H}_{i\times t}(k)\hat{H}_{i\times t+N-1}(k), \hat{H}_{i\times t+1}(k)\hat{H}_{i\times t}(k), \ldots, \hat{H}_{i\times t+1}(k)\hat{H}_{i\times t+N-1}(k), \ldots, \hat{H}_{i\times t+N-1}(k)\hat{H}_{i\times t}(k), \ldots, \hat{H}^2_{i\times t+N-1}(k)\big]^{\mathrm{T}}$ (8)
$U(t) = (1-\alpha)\, U(t-1) + \alpha \tanh\!\big(\kappa W_{in} u(t) + W_R U(t-1)\big)$ (9)
where $\alpha \in (0, 1]$ is the balance coefficient of the reservoir; $\tanh$ is the hyperbolic tangent function, i.e., the activation function of the reservoir; $\kappa \in (0, 1]$ is the scaling factor of the reservoir; $W_{in} \in \mathbb{R}^{P \times N}$ is the input weight matrix; and $W_R \in \mathbb{R}^{P \times P}$ is the internal connection matrix of the reservoir, with sparsity SD. When $t = 1$, Equation (9) reduces to
$U(1) = \alpha \tanh\!\big(\kappa W_{in} u(1)\big)$ (10)
In the output layer, the output weight matrix $\hat{W}_{out}$ relating $Q(t)$ and $Y(t)$ is estimated using the adaptive EN. When the estimated output weight matrix $\hat{W}_{out}$ is obtained, the adaptive RRLN model is forward-computed to predict the CSI of OFDM wireless communication systems. The detailed training process of the adaptive RRLN is shown in Algorithm 1.
Algorithm 1: The training process of the adaptive RRLN.
Input: Neuron number in the input layer $N$, neuron number in the reservoir $P$, spectral radius $\rho_W$, balance coefficient $\alpha$, scaling factor $\kappa$, sparse degree SD, input matrix $u(t)$, target matrix $Y(t)$, the total prediction step number $h$, and regularization coefficients $\lambda_1$ and $\lambda_2$.
Output: Well-trained adaptive RRLN.
Step 1: Optimize H ^ ( k ) using an adaptive SSA;
Step 2: Generate W R in a certain range;
Step 3: Calculate X(t) using Equation (8);
Step 4: Calculate U(t) using Equation (9);
Step 5: Obtain Q(t) using Equation (7);
Step 6: Estimate the output weight matrix W ^ o u t using adaptive EN;
Step 7: Output the well-trained adaptive RRLN model.
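Steps 3 to 5 of Algorithm 1 can be sketched as follows: the hidden vector concatenates the constant, linear, nonlinear, and super nonlinear parts, with the reservoir state carrying the echo state property. Sizes, weight ranges, and parameter values below are illustrative, not the paper's settings.

```python
import numpy as np

def rrln_hidden(u, U_prev, W_in, W_R, alpha=0.5, kappa=0.5):
    """Adaptive RRLN hidden vector Q(t) = [1 : u(t) : X(t) : U(t)].

    X(t) holds the pairwise products of the input entries (the nonlinear part)
    and U(t) is the leaky reservoir state (the super nonlinear part)."""
    X = np.outer(u, u).ravel()                   # N^2 cross-product terms
    U = (1 - alpha) * U_prev + alpha * np.tanh(kappa * W_in @ u + W_R @ U_prev)
    return np.concatenate(([1.0], u, X, U)), U   # Q(t) and updated state

rng = np.random.default_rng(3)
N, P = 4, 30                                     # input / reservoir sizes (illustrative)
W_in = rng.uniform(-0.5, 0.5, (P, N))
W_R = rng.uniform(-0.5, 0.5, (P, P))
W_R *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_R)))   # echo-state scaling
U = np.zeros(P)                                  # U(0) = 0, so U(1) follows Eq. (10)
Q, U = rrln_hidden(rng.standard_normal(N), U, W_in, W_R)
```

The resulting hidden vector has length 1 + N + N² + P, matching the row dimension of Q(t) stated above.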

3.2. Estimation of Output Weight Matrix Using Adaptive EN

Since the hidden layer contains a constant value part, a linear part, a nonlinear part, and a super nonlinear part, the output matrix $Q(t)$ has a large row dimension, and solving for the output weight matrix $\hat{W}_{out}$ using the least squares method would lead to ill-conditioned solutions. To estimate the output weight matrix $\hat{W}_{out}$ while avoiding ill-conditioned solutions, we used the $l_1$ norm and the $l_2$ norm to construct the adaptive elastic network in this research:
$J = \min_{W_{out}} \sum_{j=1}^{h} \Big( \big\| Y - W_{out,j} Q \big\|_2^2 + \lambda_2 \big\| W_{out,j} \big\|_2^2 + \lambda_1 \varphi_j \big\| W_{out,j} \big\|_1 \Big)$ (11)
where $W_{out,j}$ denotes the output weight matrix corresponding to the j-th-step prediction, and $\lambda_1$ and $\lambda_2$ are the regularization parameters for the $l_1$ norm and the $l_2$ norm, respectively. $\varphi_j$ is the adaptive factor of the output weight matrix for the j-th-step prediction. $\lambda_1$ and $\lambda_2$ are determined by the tenfold cross-validation method [37] or empirically.
As shown in Equation (11), the adaptive elastic network contains three parts built from the output matrix $Q$ of the hidden layer, the output weight matrix $W_{out}$, and the target output matrix $Y$ of the training phase. The first part, a least-squares term, preserves fidelity to the matrices $Q$ and $Y$; the second part constrains the amplitude of the output weight matrix through the $l_2$ norm; and the third part, the adaptive $l_1$ norm with the adaptive factor $\varphi_j$, endows the estimated output weight matrix $W_{out,j}$ with the oracle property and avoids ill-conditioned solutions [30,38,39,40].
Based on the above elaboration, Equation (11) can be further rewritten as follows:
$J_j = \min_{W_{out,j}} \big\| Y - W_{out,j} Q \big\|_2^2 + \lambda_2 \big\| W_{out,j} \big\|_2^2 + \lambda_1 \varphi_j \big\| W_{out,j} \big\|_1$ (12)
Therefore, Equation (11) can be converted into solving the loss function of the j-th-step prediction, with $j = 1, 2, 3, \ldots, h$, where $h$ is the total number of prediction steps. For the j-th-step prediction, Equation (12) can be derived as follows:
$J_j = \min_{W_{out,j}} \big\| Y - W_{out,j} Q \big\|_2^2 + \lambda_2 \big\| W_{out,j} \big\|_2^2 + \lambda_1 \varphi_j \big\| W_{out,j} \big\|_1$
$\quad = \min_{W_{out,j}} \Big\| [\, Y, 0 \,] - W_{out,j} (1+\lambda_2)^{-1/2} \big[\, Q, \sqrt{\lambda_2}\, I \,\big] \Big\|_2^2 + \frac{\lambda_1}{\sqrt{1+\lambda_2}} \varphi_j \big\| W_{out,j} \big\|_1$
$\quad = \min_{W_{out,j}} \big\| Y^{\#} - W_{out,j} Q^{\#} \big\|_2^2 + \lambda^{\#} \varphi_j \big\| W_{out,j} \big\|_1$
$\quad = \min_{W^{\#}_{out,j}} \big\| Y^{\#} - W^{\#}_{out,j} Q^{\#} \big\|_2^2 + \lambda^{\#} \big\| W^{\#}_{out,j} \big\|_1$ (13)
where
$Y^{\#} = [\, Y, \; 0 \,]$ (14)
$Q^{\#} = (1+\lambda_2)^{-1/2} \big[\, Q, \; \sqrt{\lambda_2}\, I \,\big]$ (15)
$\lambda^{\#} = \frac{\lambda_1}{\sqrt{1+\lambda_2}}$ (16)
where φ j is calculated using the following equation:
$\varphi_j = \big| \hat{W}_{LS,j} \big|^{-1}$ (17)
where $\hat{W}_{LS,j}$ is the least squares estimate of the output weight matrix for the j-th-step prediction, and the inverse is taken elementwise. Therefore, the process of solving $\hat{W}_{out,j}$ is transformed into solving an $l_1$-norm problem in $\hat{W}^{\#}_{out,j}$, where $I$ is the identity matrix. When $\hat{W}^{\#}_{out,j}$ is obtained, $\hat{W}_{out,j}$ can be calculated using the following formula:
$\hat{W}_{out,j} = \hat{W}^{\#}_{out,j}\, \varphi_j^{-1}$ (18)
Therefore, the output weight matrix W ^ o u t of Equation (11) can be expressed as follows:
$\hat{W}_{out} = \big[\, \hat{W}_{out,1}, \hat{W}_{out,2}, \hat{W}_{out,3}, \ldots, \hat{W}_{out,h} \,\big]$ (19)
Equation (13) can be solved using many methods, such as the Newton method (NM) [41], the quasi-Newton method (QNM) [42], or least angle regression (LARS) [43]. LARS avoids the non-differentiability problem of the $l_1$ norm that hampers the QNM; therefore, we used the LARS method to solve the $l_1$-norm problem of $\hat{W}^{\#}_{out,j}$ in Equation (13) in this work.
The pseudo-code implementation of solving the output weight matrix W ^ o u t , j # is shown in Algorithm 2.
Algorithm 2: Output weight matrix process using adaptive EN.
Input: The output matrix of the hidden layer Q , the output matrix of the output layer Y , the total prediction step h, and the regularization coefficients λ 1 and λ 2 .
Output: Output weight matrix W ^ o u t , j R ( 1 + N + N 2 + P ) × 1 .
For  j = 1 , 2 , 3 , , h :
      Step 1: Calculate Y # using Equation (14);
      Step 2: Calculate Q # using Equation (15);
      Step 3: Calculate λ # using Equation (16);
      Step 4: Solve Equation (13) using LARS to obtain W ^ o u t , j # ;
End
      Step 5: Output the weight matrix W ^ o u t , j using Equation (18).
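Algorithm 2 can be sketched as below for a single prediction step. For self-containedness, this sketch swaps the LARS solver for a plain coordinate-descent lasso and estimates the adaptive factors from a ridge-stabilized least-squares fit; the augmentation and rescaling follow Equations (14), (15), (16), and (18), and all sizes and values in the toy check are illustrative.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate-descent lasso for min_w ||y - X w||_2^2 + lam * ||w||_1
    (a simple stand-in for the LARS solver used in the paper)."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = np.maximum((X ** 2).sum(axis=0), 1e-12)
    for _ in range(n_iter):
        for j in range(d):
            r_j = y - X @ w + X[:, j] * w[j]      # residual excluding feature j
            rho = X[:, j] @ r_j
            w[j] = np.sign(rho) * max(abs(rho) - lam / 2.0, 0.0) / col_sq[j]
    return w

def adaptive_en(Q, y, lam1, lam2):
    """Adaptive elastic net for one prediction step via the augmented-lasso
    reduction. Q: (features, samples); y: (samples,)."""
    d, n = Q.shape
    X = Q.T                                              # samples x features
    w_ls = np.linalg.solve(X.T @ X + 1e-8 * np.eye(d), X.T @ y)
    phi = 1.0 / np.maximum(np.abs(w_ls), 1e-8)           # adaptive factors, Eq. (17)
    c = np.sqrt(1.0 + lam2)
    X_aug = np.vstack([X, np.sqrt(lam2) * np.eye(d)]) / c    # Q#, Eq. (15)
    y_aug = np.concatenate([y, np.zeros(d)])                 # Y#, Eq. (14)
    v = lasso_cd(X_aug / phi, y_aug, lam1 / c)               # lasso with lambda#
    return v / (c * phi)                                     # rescale back, cf. Eq. (18)

# Toy check: recover a sparse weight vector from noisy linear measurements.
rng = np.random.default_rng(4)
n, d = 80, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[1], w_true[4] = 2.0, -1.5
y = X @ w_true + 0.01 * rng.standard_normal(n)
w_hat = adaptive_en(X.T, y, lam1=0.5, lam2=0.01)
```

The adaptive factors penalize coefficients with small least-squares magnitude heavily, so spurious weights are driven exactly to zero while the large, genuine weights are barely shrunk, which is the oracle behavior the text describes.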

3.3. Local Predictability Enhancement Method Using Adaptive SSA

In the adaptive RRLN, the hidden layer has a linear part and a nonlinear part, i.e., the direct input and the cross-multiplication input. To improve the generalization ability of the channel prediction model, in this work we decompose and reconstruct the CSI via the adaptive SSA to improve its local predictability.
For OFDM wireless communication systems, the trajectory matrix $K_k \in \mathbb{R}^{(T-q+1) \times q}$ for the CSI $\hat{H}_t(k)$ of the k-th subcarrier is as follows:
$K_k = \begin{bmatrix} \hat{H}_1(k) & \cdots & \hat{H}_q(k) \\ \hat{H}_2(k) & \cdots & \hat{H}_{q+1}(k) \\ \vdots & & \vdots \\ \hat{H}_{T-q+1}(k) & \cdots & \hat{H}_T(k) \end{bmatrix}$ (20)
where $t = 1, 2, 3, \ldots, T$; $T$ is the number of sampling points in the singular-spectrum analysis phase, with $T = \tilde{T} - q + 1$; and $q$ is the window length of the adaptive singular-spectrum analysis. Generally, $T \gg q$; therefore, the singular-value decomposition (SVD) in a standard singular-spectrum analysis is time-consuming. In this study, the standard SVD is replaced with a randomized SVD to speed up the decomposition:
$\big[\, \Phi, X, \Psi^{\mathrm{T}} \,\big] = \mathrm{RSVD}\big( Y_K, K_k \big)$ (21)
where $\Phi \in \mathbb{R}^{(T-q+1) \times q}$, $X \in \mathbb{R}^{q \times q}$, and $\Psi^{\mathrm{T}} \in \mathbb{R}^{q \times q}$ denote the left singular matrix, the singular-value matrix, and the right singular matrix, respectively. $Y_K$ is the internal matrix, expressed as
$Y_K = \mathrm{Orth}\big( K_k \Theta_K \big)$ (22)
where $\mathrm{Orth}(\cdot)$ denotes matrix orthogonalization; $\Theta_K \in \mathbb{R}^{q \times s}$ is the random mapping matrix of the randomized singular-value decomposition; and $s$ is the rank of the trajectory matrix $K_k$. Using Equation (21), the synthesis matrix $A_n \in \mathbb{R}^{(T-q+1) \times q}$ is obtained as the product of the n-th singular value, the n-th column of the left singular matrix, and the n-th row of the right singular matrix.
Therefore, the Z-th element $\hat{H}_{n,Z}(k)$ of the n-th component of the channel state information $\hat{H}(k)$ can be obtained through anti-diagonal averaging:
$\hat{H}_{n,Z}(k) = \begin{cases} \dfrac{1}{Z} \sum_{m=1}^{Z} A_n[m, Z-m+1], & 1 \le Z < q \\[4pt] \dfrac{1}{q} \sum_{m=1}^{q} A_n[m, Z-m+1], & q \le Z \le q^{*} \\[4pt] \dfrac{1}{T-Z+1} \sum_{m=Z-q+1}^{T-q+1} A_n[m, Z-m+1], & q^{*} < Z \le T \end{cases}$ (23)
where $q^{*} = T - q + 1$.
where $A_n[m, Z-m+1]$ denotes the element of the synthesis matrix $A_n$ in the m-th row and the (Z−m+1)-th column. Therefore, the channel state information of the k-th subcarrier is decomposed into the components $\hat{H}_1(k), \hat{H}_2(k), \hat{H}_3(k), \ldots, \hat{H}_q(k)$. Through the following equation, the channel state information $\hat{H}(k)$ of the k-th subcarrier can be reconstructed as the sum of the leading decomposition components:
$\hat{H}(k) = \sum_{i=1}^{q^{**}} \hat{H}_i(k)$ (24)
where $q^{**}$ is the reorganization number, which determines the efficacy of the adaptive SSA. In this work, this value is determined using the following equation:
$q^{**} = \mathrm{Round}\big( q \cdot F(\theta_{SNR}) \big)$ (25)
where $\mathrm{Round}(\cdot)$ denotes rounding up (the ceiling), $F$ is the sigmoid function, and $\theta_{SNR}$ is the signal-to-noise ratio (SNR) of the CSI. In summary, this research achieves the adaptive enhancement of the local predictability of the CSI by introducing the SNR of the current CSI into the singular-spectrum analysis calculation. The pseudo-code of the adaptive SSA is shown in Algorithm 3.
Algorithm 3: The calculation process of the adaptive SSA.
Input: The channel state information H ^ t ( k ) ; t = 1 , 2 , 3 , , T ; the window length q ; the SNR θ S N R .
Output:  H ^ ( k ) R 1 × T .
Step 1: Randomly generate the mapping matrix Θ K R q × s ;
Step 2: Obtain K k R T q + 1 × q using Equation (20);
Step 3: Obtain Y K using Equation (22);
Step 4: Calculate Φ , Χ , and Ψ T using Equation (21);
Step 5: Calculate H ^ n , ρ ( k ) using Equation (23);
Step 6: Determine q * * using Equation (25);
Step 7: Calculate H ^ ( k ) using Equation (24).
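Algorithm 3 can be sketched as follows. For brevity, the randomized SVD of Equations (21) and (22) is replaced here with a plain SVD, and the SNR-to-q** mapping F is an assumed sigmoid form; the window length, series, and SNR value are illustrative.

```python
import numpy as np

def diag_average(A):
    """Anti-diagonal (Hankel) averaging: map a trajectory-shaped matrix back to
    a series by averaging entries sharing the same time index (cf. Eq. (23))."""
    rows, cols = A.shape
    out = np.zeros(rows + cols - 1)
    cnt = np.zeros(rows + cols - 1)
    for i in range(rows):
        for j in range(cols):
            out[i + j] += A[i, j]
            cnt[i + j] += 1
    return out / cnt

def ssa_components(x, q):
    """SSA decomposition of series x with window q: build the trajectory matrix
    (cf. Eq. (20)), take its SVD, and recover one series per rank-1 term."""
    T = len(x)
    K = np.column_stack([x[j:T - q + 1 + j] for j in range(q)])  # (T-q+1, q)
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    return np.array([diag_average(s[n] * np.outer(U[:, n], Vt[n]))
                     for n in range(q)])

def adaptive_ssa(x, q, snr_db):
    """Sum the leading q** components, with q** driven by the SNR through a
    sigmoid as in Eq. (25); the exact mapping F is an assumption here."""
    comps = ssa_components(x, q)
    q2 = max(1, int(np.ceil(q / (1.0 + np.exp(-snr_db / 10.0)))))
    return comps[:q2].sum(axis=0)

t = np.arange(200)
noisy = np.sin(2 * np.pi * t / 25) + 0.05 * np.random.default_rng(5).standard_normal(200)
rec = adaptive_ssa(noisy, q=20, snr_db=26)
```

Summing all q components reproduces the original series exactly; truncating to the leading q** components discards the low-energy, noise-dominated directions, which is what improves the local predictability of the CSI.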

3.4. Calculation Complexity Analysis

The calculation complexity is an important indicator for the channel prediction model. In this section, the calculation complexity of the adaptive RRLN is discussed and analyzed.
In the adaptive SSA, the computational complexities of calculating the orthogonal basis matrix, the randomized singular-value decomposition, and the anti-diagonal averaging (Equations (22), (21), and (23), respectively) are $O((T-q+1)^3)$, $O(q^3)$, and $O(q^2)$. In the training stage of the adaptive RRLN, the computational complexity of the nonlinear part $X$ calculated using Equation (8) is $O(\tilde{T} N^2)$, and that of the super nonlinear part $U$ calculated using Equation (9) is $O(\tilde{T}(NP + P^2))$. Therefore, the computational complexity of producing the hidden-layer output matrix $Q$ is $O(\tilde{T}(NP + P^2 + N^2))$. In the output layer, the computational complexity of estimating $\hat{W}_{out}$ using the adaptive EN is $O\big(\sum_{j=1}^{h} n_j (1+N+N^2+P)^2 (T+N+N^2+P+1)\big)$, where $n_j$ is the number of iterations needed to estimate the output weight matrix $\hat{W}_{out,j}$ of the j-th-step prediction using the LARS method. In the forward prediction stage, the computational complexity of the nonlinear part $X$ is $O(N^2)$, that of the super nonlinear part $U$ calculated using Equation (10) is $O(NP + P^2)$, and that of the hidden-layer output matrix $Q$ is therefore $O(NP + P^2 + N^2)$. In the output prediction process, the computational complexity of the output layer is $O(h(1+N+N^2+P)^2)$. The computational complexity of the training process (CC-TrPr) and the prediction process (CC-PePr) for some comparable channel prediction models, i.e., AR [15], the support vector machine (SVM) [22], the least squares support vector machine (LS-SVM) [44], the basic ESN (B-ESN) [25], the NGRCN with ridge regularization (R-NGRCN) [39], and the NGRCN with lasso regularization (L-NGRCN) [40], is shown in Table 1.
In Table 1, the AR model is solved using the Yule–Walker method, the SVM is implemented using LIBSVM [45], and, like the adaptive EN, the L-NGRCN is solved using LARS in our work. As we can see, the AR model has the lowest computational complexity, while the adaptive RRLN has a non-negligible computational complexity. Therefore, it is necessary to appropriately reduce the neuron number of the input layer, the neuron number of the reservoir, the convergence accuracy, and the iteration number of the LARS to trade off the computational complexity against the prediction performance of the model.

4. Simulation and Discussion

4.1. Parameter Settings

As a sub-1 GHz network, IEEE 802.11ah is widely used in the power Internet of Things (IOTIPS). Compared to traditional wireless local area network (WLAN) technologies, e.g., Bluetooth and Wi-Fi, IEEE 802.11ah can reach a wider range. Therefore, to evaluate the channel prediction performance, the OFDM wireless communication systems in this paper are constructed based on IEEE 802.11ah; the number of delay multipaths is set to five, and their powers and delays are [0, −2.7, −3.4, −10.3, −1.5] dB and [0, 1.5, 3.5, 5.5, 7] µs, respectively. Other relevant parameters are shown in Table 2. It should be noted that the proposed adaptive RRLN is also suitable for other WLANs, not only the IEEE 802.11ah considered in this work.
In this section, the CSI at the subcarriers of the OFDM systems is estimated using the least squares method. Specifically, the CSI of the first subcarrier over 8000 OFDM symbols is used to train the channel prediction model, and the CSI of the next 1000 OFDM symbols is used to test the channel prediction performance. To evaluate the effectiveness of the channel prediction method, the following indicators are considered: the mean absolute error (MAE), the root mean square error (RMSE), the normalized root mean square error (NRMSE), the symmetric mean absolute percentage error (SMAPE), the mean absolute percentage error (MAPE), the weight range of the output weight matrix (WR-OWM), and the sparse degree of the output weight matrix (SD-OWM). Among them, the MAE, RMSE, NRMSE, SMAPE, and MAPE indicate the prediction performance of the channel prediction model, while the weight range and sparsity of the output weight matrix indicate its generalization and sparsity capabilities [46].
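The prediction-error indicators listed above can be computed as in the sketch below, which uses their common textbook definitions; the paper's exact normalizations (e.g., the denominator of the NRMSE) may differ.

```python
import numpy as np

def prediction_metrics(y_true, y_pred):
    """MAE, RMSE, NRMSE, SMAPE, and MAPE under their common definitions.
    NRMSE is normalized here by the range of the true values; MAPE assumes
    no true value is zero."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    nrmse = rmse / (np.max(y_true) - np.min(y_true))
    smape = np.mean(2 * np.abs(err) / (np.abs(y_true) + np.abs(y_pred)))
    mape = np.mean(np.abs(err / y_true))
    return {"MAE": mae, "RMSE": rmse, "NRMSE": nrmse, "SMAPE": smape, "MAPE": mape}

# Tiny illustrative check with hand-verifiable numbers.
y_true = np.array([1.0, 2.0, 4.0, 8.0])
y_pred = np.array([1.1, 1.9, 4.0, 8.4])
m = prediction_metrics(y_true, y_pred)
```

In practice these would be evaluated separately on the real and imaginary components of the predicted CSI, as in the tables that follow.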

4.2. One-Step Prediction Analysis

In the one-step prediction test, the parameters of the adaptive RRLN are shown in Table 3. The related prediction curves are shown in Figure 4, and the related prediction results are shown in Table 4 and Table 5.
As shown in Figure 4a, the adaptive RRLN achieves a high degree of fit between the prediction curve and the actual curve for the real component of the CSI at the first subcarrier, with a maximum prediction error of only 5 × 10−4. Likewise, as shown in Figure 4b, the prediction curve of the adaptive RRLN for the imaginary component of the CSI at the first subcarrier fits the actual curve closely, with a maximum prediction error of only 3 × 10−4.
The one-step prediction results of comparable models are given in Table 4 and Table 5. In particular, their relevant parameters are as follows. The real component: (1) AR: the order is 30; (2) SVM: c and g are, respectively, 22 and 0.001, and the convergence accuracy p is set to 1 × 10−10; (3) LS-SVM: c and g are 500 and 151, respectively; (4) B-ESN: the input neuron number is 30, the neuron number of the reservoir is 200, its sparse degree is 0.05, its spectral radius is 0.09, the balance coefficient is 1, and the scaling factor is 0.01; (5) R-NGRCN: the input neuron number is 30 and the ridge regularization factor λ 2 is 1 × 10−5; and (6) L-NGRCN: the input neuron number is 30 and the lasso regularization factor λ 1 is 1 × 10−5.
The imaginary component: (1) AR: the order is 30; (2) SVM: c and g are 25 and 0.005, respectively, and the convergence accuracy p is set to 1 × 10−10; (3) LS-SVM: c and g are, respectively, 100 and 152; (4) B-ESN: the input neuron number is 30, the neuron number of the reservoir is 200, its sparsity degree is 0.05, the spectral radius is 0.09, the balance coefficient is 1, and the scaling factor is 0.01; (5) R-NGRCN: the input neuron number is 30 and the ridge regularization factor λ 2 is 1 × 10−5; and (6) L-NGRCN: the input neuron number is 30 and the lasso regularization factor λ 1 is 1 × 10−5.
As shown in Table 4, for the real component of the CSI at the first subcarrier, the AR method performed the worst on the relevant indicators: its MAE, RMSE, NRMSE, SMAPE, and MAPE were 1.51 × 10−3, 1.89 × 10−3, 3.94 × 10−3, 1.65 × 10−2, and 4.08 × 10−2, respectively. The LS-SVM method achieved a better prediction performance than the SVM method, but both were worse than the R-NGRCN. In addition, the output weight matrices of the AR, LS-SVM, SVM, and R-NGRCN methods are not sparse. The ranges of the output weight matrices of the B-ESN, R-NGRCN, L-NGRCN, and this study's method are relatively close. The sparsity of the output weight matrix of the L-NGRCN method is 1.2889%, but its prediction performance is not as good as that of the adaptive RRLN. Table 5 shows that, for the imaginary component of the CSI at the first subcarrier, the prediction performance of the AR method is still poor, and that of this study's method is the best. The ranges of the output weight matrices of the B-ESN, R-NGRCN, L-NGRCN, and adaptive RRLN are relatively close. The sparsity of the output weight matrix of the adaptive RRLN is 4.33%, which is close to that of the L-NGRCN. As seen in Tables 4 and 5, the adaptive RRLN offers good channel prediction performance, a good output weight range, and average sparsity.

4.3. Multi-Step Prediction

Based on the one-step prediction in Section 4.1, this research conducts a multi-step prediction performance evaluation of the channel prediction models. The parameters are consistent with those used for one-step prediction. The multi-step prediction curves are shown in Figure 5.
As we can see from Figure 5a, the RMSE of every channel prediction model increases gradually with the number of prediction steps. When the number of prediction steps is small, the AR method performs the worst; the prediction performance of this study's method and the B-ESN model is similar, and both outperform the SVM, LS-SVM, R-NGRCN, and L-NGRCN models. As the number of prediction steps increases, the SVM model becomes the worst performer, whereas the method in this study still maintains very good prediction results. For example, at the ninth prediction step, the results of AR, SVM, LS-SVM, B-ESN, R-NGRCN, L-NGRCN, and the adaptive RRLN are 0.02051, 0.03387, 0.0088, 0.002068, 0.00315, 0.00985, and 4.454 × 10−4, respectively. As shown in Figure 5b, the prediction performance of the channel prediction models likewise degrades as the number of prediction steps increases. When the prediction step number is small, the LS-SVM model performs the worst, followed by B-ESN, AR, SVM, R-NGRCN, and the adaptive RRLN. When the prediction step number exceeds 7, the LS-SVM model still performs the worst, followed by SVM, AR, L-NGRCN, R-NGRCN, B-ESN, and the adaptive RRLN. Therefore, for the imaginary component of the CSI at the first subcarrier, the adaptive RRLN has good multi-step prediction performance.
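Multi-step curves such as those in Figure 5 are commonly produced by iterated (recursive) one-step prediction, in which each prediction is fed back into the input window; this is a generic sketch, not necessarily the paper's exact procedure. The `predict` interface assumed here (a model taking a `(1, n_delays)` window) is hypothetical.

```python
import numpy as np

def recursive_forecast(model, history, n_steps, n_delays=30):
    """Iterated multi-step prediction: slide the input window forward by
    appending each one-step prediction to it."""
    window = list(history[-n_delays:])
    preds = []
    for _ in range(n_steps):
        x = np.asarray(window)[None, :]                    # shape (1, n_delays)
        y_hat = float(np.ravel(model.predict(x))[0])       # one-step prediction
        preds.append(y_hat)
        window = window[1:] + [y_hat]                      # feed it back
    return np.array(preds)
```

Because prediction errors compound through the feedback loop, the RMSE grows with the step number, which matches the trends reported for all models in Figure 5.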
In addition, for the real component and imaginary component of the CSI at the first subcarrier, the related prediction curves of the 10th-step prediction of the adaptive RRLN are shown in Figure 6. As can be seen in Figure 6, the channel prediction curves of the adaptive RRLN still fit the ideal curves very well, with a maximum absolute error of only 2 × 10−3 for the real component and 1 × 10−3 for the imaginary component. The relevant prediction results for the 10th-step prediction are shown in Table 6 and Table 7.
As we can see from Table 6, the 10th-step prediction results of all the channel prediction models for the real component of the CSI rank similarly to the 1-step predictions, and the adaptive RRLN in this work is superior to the other evaluated models in terms of the MAE, RMSE, NRMSE, SMAPE, and MAPE. The output weight range of the adaptive RRLN is larger than those of the other models, but the output weight matrices of the B-ESN and R-NGRCN models are not sparse, while the sparsity of the output weight matrix of the adaptive RRLN is only 7.41%, which is lower than that of the L-NGRCN. Therefore, the adaptive RRLN still predicts the real component of the CSI well when the prediction step is 10. For the imaginary component of the CSI, the results in Table 7 are similar, so we do not analyze them further.
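The WR-OWM and SD-OWM columns in Tables 4 through 7 appear to report the range of the output weight matrix and the percentage of nonzero entries; under that reading (an assumption here), they can be computed as:

```python
import numpy as np

def weight_stats(W, tol=0.0):
    """Range of the output weight matrix (WR-OWM) and the percentage of
    entries with magnitude above `tol` (SD-OWM, assumed = nonzero fraction)."""
    nonzero = np.abs(W) > tol
    return (float(W.min()), float(W.max())), float(100.0 * nonzero.mean())
```

A dense readout such as the B-ESN's yields 100%, while an l1-penalized readout drives most entries to exactly zero, giving the small percentages reported for the L-NGRCN and the adaptive RRLN.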

4.4. Robust Prediction Test

To further evaluate the generalization ability and prediction performance of the channel model, a robust test of the channel prediction model is given in this section. The relevant parameters of the adaptive RRLN are shown in Table 8. Other relevant prediction model parameters are as follows: the real component: (1) AR: the order is 30; (2) SVM: c and g are, respectively, 25 and 0.004, and the convergence accuracy p is set to 1 × 10−3; (3) LS-SVM: c and g are 5 and 12, respectively; (4) B-ESN: the input neuron number is 30, the neuron number of the reservoir is 200, its sparsity degree is 0.05, its spectral radius is 0.09, the balance coefficient is 1, and the scaling factor is 0.01; (5) R-NGRCN: the input neuron number is 30 and the ridge regularization factor λ 2 is 1 × 10−4; and (6) L-NGRCN: the input neuron number is 30 and the lasso regularization factor λ 1 is 1 × 10−4. The imaginary component: (1) AR: the order is 30; (2) SVM: c and g are, respectively, 20 and 0.1, and the convergence accuracy p is set to 1 × 10−3; (3) LS-SVM: c and g are 124 and 25, respectively; (4) B-ESN: the input neuron number is 30, the neuron number of the reservoir is 200, its sparsity degree is 0.05, its spectral radius is 0.09, the balance coefficient is 1, and the scaling factor is 0.01; (5) R-NGRCN: the input neuron number is 30 and the ridge regularization factor λ 2 is 1 × 10−4; and (6) L-NGRCN: the input neuron number is 30 and the lasso regularization factor λ 1 is 4 × 10−4.
The prediction results of the given channel prediction models for the real and imaginary components of the CSI at the first subcarrier at different SNRs are shown in Figure 7. As we can see, the prediction accuracy of every channel prediction model improves as the SNR increases. The AR method performs the worst, while the adaptive RRLN performs the best at every SNR. In Figure 7a, when the SNR is 25 dB, the RMSE of the adaptive RRLN is only 0.01722, while that of the AR method is 0.03087. In Figure 7b, when the SNR is 25 dB, the RMSE of the adaptive RRLN is only 0.01753, while that of the AR method is 0.0302. Figure 8 shows the prediction curves of the adaptive RRLN in the robustness test when the SNR is 20 dB. The prediction curves of the adaptive RRLN fit the ideal curves well, and the maximum absolute error for both the real and imaginary components is only 0.1. In summary, the adaptive RRLN is robust.
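A robustness test like the one in Figure 7 can be set up by corrupting the CSI with additive white Gaussian noise at a prescribed SNR. The noise model below is a common choice and an assumption here, not necessarily the paper's exact setup; it handles both real-valued components and complex CSI.

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise so the signal-to-noise ratio equals
    `snr_db` decibels; splits noise power across I/Q for complex input."""
    rng = np.random.default_rng() if rng is None else rng
    p_sig = np.mean(np.abs(signal) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    if np.iscomplexobj(signal):
        noise = np.sqrt(p_noise / 2) * (rng.standard_normal(signal.shape)
                                        + 1j * rng.standard_normal(signal.shape))
    else:
        noise = np.sqrt(p_noise) * rng.standard_normal(signal.shape)
    return signal + noise
```

Sweeping `snr_db` over, e.g., 5 to 25 dB and recomputing the RMSE of each predictor reproduces the kind of curves shown in Figure 7.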

5. Conclusions

In this work, we focus on channel prediction of the CSI of OFDM wireless communication systems and introduce a channel prediction method, the adaptive RRLN, built on the next-generation reservoir calculation learning network. From the one-step prediction, multi-step prediction, and robust prediction tests, the following conclusions are drawn: (1) For the real and imaginary components of the CSI at the first subcarrier, the one-step prediction performances of the SVM and LS-SVM methods differ: for the real component, the LS-SVM is superior to the SVM, while for the imaginary component, the SVM is superior to the LS-SVM. The adaptive RRLN in this work has good one-step prediction performance, with RMSEs reaching 3 × 10−5 and 8.36 × 10−6 for the real and imaginary components, respectively. (2) In multi-step prediction, the SVM and LS-SVM methods again show different trends, while the adaptive RRLN maintains good multi-step prediction performance. (3) In the robust prediction test, the AR method exhibits the worst prediction performance, and the RMSE of the adaptive RRLN is only 0.01753 when the SNR is 25 dB.
Although the adaptive RRLN achieves better one-step prediction, multi-step prediction, and robustness performance, estimating the output weight matrix with the adaptive EN incurs non-negligible computational complexity; this is the price of improving the model's generalization and learning ability, endowing the channel prediction model with the oracle property, and introducing sparsity into the output weight matrix. Therefore, the adaptive RRLN for OFDM wireless communication systems still has room for improvement. In addition, its prediction performance should be tested in other communication systems, e.g., Wi-Fi 7, which we will study in future work.

Author Contributions

Data curation, L.W.; funding acquisition, Y.S. and H.G.; investigation, Y.S.; methodology, Y.S.; resources, H.G.; software, Y.S.; supervision, L.W.; writing—original draft, Y.S.; writing—review and editing, Y.S., L.W. and H.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Research Start-Up Foundation of Recruiting Talents of Nanjing University of Posts and Telecommunications under grant NY221126 and the National Natural Science Foundation of China under grant 52077107.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors have no relevant financial or nonfinancial interests to disclose.

References

  1. Li, S.; Yuan, J.; Fitzpatrick, P.; Sakurai, T.; Caire, G. Delay-Doppler Domain Tomlinson-Harashima Precoding for OTFS-Based Downlink MU-MIMO Transmissions: Linear Complexity Implementation and Scaling Law Analysis. IEEE Trans. Commun. 2023, 71, 2153–2169. [Google Scholar] [CrossRef]
  2. Dong, G.; Guo, J.; Xun, Q.; Wang, F.; Peng, P. Engineering Implementation Methods of Anti-interference Performance Improvement for Data Link System. Guid. Fuze 2024, 45, 34–39. [Google Scholar]
  3. Lu, W.; Zhu, B. Automatic modulation recognition of communication signals based on feature fusion. Sci. Technol. Eng. 2024, 24, 9914–9920. [Google Scholar]
  4. Gonzalez-Atienza, M.; Vanoost, D.; Verbeke, M.; Pissoort, D. An Optimized Adaptive Bayesian Algorithm for Mitigating EMI-Induced Errors in Dynamic Electromagnetic Environments. IEEE Trans. Electromagn. Compat. 2024, 66, 2085–2094. [Google Scholar] [CrossRef]
  5. Ye, A.; Chen, H.; Natsuaki, R.; Hirose, A. Polarization-Aware Channel State Prediction Using Phasor Quaternion Neural Networks. IEEE Trans. Mach. Learn. Commun. Netw. 2024, 2, 1628–1641. [Google Scholar] [CrossRef]
  6. Sun, Y. Research on Environmental Information Representation and Channel Prediction for 6G Wireless Communication; Beijing University of Posts and Telecommunications: Beijing, China, 2024. [Google Scholar]
  7. Fan, B.; Zhou, J. A Simple Exponential Smoothing Channel Prediction Algorithm in Massive MIMO System. Commun. Technol. 2024, 57, 354–358. [Google Scholar]
  8. Gao, C.; Zhu, Z.; Li, H.; Wang, G.; Zhou, T.; Li, X.; Meng, Q.; Zhou, Y.; Zhao, S. A Fiber-Transmission-Assisted Fast Digital Self-Interference Cancellation for Overcoming Multipath Effect and Nonlinear Distortion. J. Light. Technol. 2023, 41, 6898–6907. [Google Scholar] [CrossRef]
  9. Huang, C.T.; Huang, Y.C.; Shieh, S.L.; Chen, P.N. Novel Prony-Based Channel Prediction Methods for Time-Varying Massive MIMO Channels. In Proceedings of the IEEE Conference on Vehicular Technology (VTC2024-Spring), Singapore, 24–27 June 2024; pp. 1–6. [Google Scholar]
  10. Liu, Z.; Zhang, D.; Guo, J.; Tsiftsis, T.A.; Su, Y.; Davaasambuu, B.; Garg, S.; Sato, T. A Spatial Delay Domain-Based Prony Channel Prediction Method for Massive MIMO LEO Communications. IEEE Syst. J. 2023, 17, 4137–4148. [Google Scholar] [CrossRef]
  11. Chen, Y. Research on Wireless Channel Prediction and Localization Based on Deep Learning; Beijing University of Posts and Telecommunications: Beijing, China, 2024. [Google Scholar]
  12. Ji, S.; Sun, Y.; Peng, M. Research on satellite-ground adaptive modulation and coding techniques based on intelligent prediction of channel state. Telecommun. Sci. 2024, 40, 1–13. [Google Scholar]
  13. Gonzalez, J.; Dipu, S.; Sourdeval, O.; Siméon, A.; Camps-Valls, G.; Quaas, J. Emulation of Forward Modeled Top-of-Atmosphere MODIS-Based Spectral Channels Using Machine Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 1896–1911. [Google Scholar] [CrossRef]
  14. Fan, D.; Zhan, H.; Xu, F.; Zou, Y.; Zhang, Y. Research on Multi-Channel Spectral Prediction Model for Printed Matter Based on HMSSA-BP Neural Network. IEEE Access 2025, 13, 2340–2359. [Google Scholar] [CrossRef]
  15. Lv, C.W.; Lin, J.C.; Yang, Z.C. CSI Calibration for Precoding in Mmwave Massive MIMO Downlink Transmission Using Sparse Channel Prediction. IEEE Access 2020, 8, 154382–154389. [Google Scholar] [CrossRef]
  16. Xiao, Y.; Liu, J.; Long, Z.; Qiu, C. A data-driven approach to wireless channel available throughput estimation and prediction. Chin. J. Internet Things 2023, 7, 32–41. [Google Scholar]
  17. Wang, Z. Research on Intelligent Channel Prediction for Underwater Acoustic OFDM Communication; Huazhong University of Science and Technology: Wuhan, China, 2021. [Google Scholar]
  18. Wu, L. Research on Low Processing Delay Receiver Technology in Burst Communication System; University of Electronic Science and Technology of China: Chengdu, China, 2022. [Google Scholar]
  19. Chen, Z. Massive MIMO Channel Prediction Based on Autoregressive Model; University of Electronic Science and Technology of China: Chengdu, China, 2022. [Google Scholar]
  20. Li, Y. Research on 3D MIMO Channel Prediction Technology; Xidian University: Xi’an, China, 2020. [Google Scholar]
  21. Zheng, Y.; Tan, Y. An OFDM channel prediction method based on adaptive jump learning network. J. Nanjing Univ. Posts Telecommun. (Nat. Sci. Ed.) 2023, 43, 51–63. [Google Scholar]
  22. Luo, Y.; Tian, Q.; Wang, C.; Zhang, J. Biomarkers for Prediction of Schizophrenia: Insights from Resting-State EEG Microstates. IEEE Access 2020, 8, 213078–213093. [Google Scholar] [CrossRef]
  23. Jiang, W.; Schotten, H.D. Deep Learning for Fading Channel Prediction. IEEE Open J. Commun. Soc. 2020, 1, 320–332. [Google Scholar] [CrossRef]
  24. Pereira, P.E.; Moualeu, J.M.; Nardelli, P.H.; Li, Y.; de Souza, R.A. An Efficient Machine Learning-Based Channel Prediction Technique for OFDM Sub-Bands. In Proceedings of the 2024 IEEE 99th Vehicular Technology Conference (VTC2024-Spring), Singapore, 24–27 June 2024; pp. 1–5. [Google Scholar]
  25. Jaeger, H.; Haas, H. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 2004, 304, 78–80. [Google Scholar] [CrossRef]
  26. Xu, M.; Yang, Y.; Han, M.; Qiu, T.; Lin, H. Spatial-temporal interpolated echo state network for meteorological series prediction. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1621–1634. [Google Scholar] [CrossRef]
  27. Shi, P.; Guo, X.; Du, Q.; Xu, X.; He, C.; Li, R. Photovoltaic power prediction based on Tcn-BILSTM-attention-ESN. Acta Energiae Solaris Sin. 2024, 45, 304–316. [Google Scholar]
  28. Bai, Y.; Lun, S. Optimization of time series prediction of Echo state network based on war strategy algorithm. J. Bohai Univ. (Nat. Sci. Ed.) 2024, 45, 154–160. [Google Scholar]
  29. Zhao, Y.; Gao, H.; Beaulieu, N.C.; Chen, Z.; Ji, H. Echo state network for fast channel prediction in Ricean fading scenarios. IEEE Commun. Lett. 2017, 21, 672–675. [Google Scholar] [CrossRef]
  30. He, Y.; Sui, Y.; Farhan, A. Research of the time-domain channel prediction for adaptive OFDM systems. J. Electron. Meas. Instrum. 2021, 35, 100–110. [Google Scholar]
  31. Zhang, J.; Guo, Y.; Zhang, L.; Zong, Q. Adaptive communication networking method of charging pile based on channel prediction. Guangdong Electr. Power 2023, 36, 1–8. [Google Scholar]
  32. Gauthier, D.J.; Bollt, E.; Griffith, A.; Barbosa, W.A. Next generation reservoir computing. Nat. Commun. 2021, 12, 55–64. [Google Scholar] [CrossRef] [PubMed]
  33. An, H.; Al-Mamun, M.S.; Orlowski, M.K.; Liu, L.; Yi, Y. Robust Deep Reservoir Computing Through Reliable Memristor With Improved Heat Dissipation Capability. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2021, 40, 574–583. [Google Scholar] [CrossRef]
  34. Slonopas, A.; Cooper, H.; Lynn, E. Next-Generation Reservoir Computing (NG-RC) Machine Learning Model for Advanced Cybersecurity. In Proceedings of the IEEE Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 8–10 January 2024; pp. 0014–0021. [Google Scholar]
  35. Haluszczynski, A.; Köglmayr, D.; Räth, C. Controlling dynamical systems to complex target states using machine learning: Next-generation vs. classical reservoir computing. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia, 18–23 June 2023; pp. 1–7. [Google Scholar]
  36. Liu, Y.; Chen, M.; Pan, C.; Gong, T.; Yuan, J.; Wang, J. OTFS Versus OFDM: Which is Superior in Multiuser LEO Satellite Communications. IEEE J. Sel. Areas Commun. 2025, 43, 139–155. [Google Scholar] [CrossRef]
  37. Gao, H.; Zang, B.B. New power system operational state estimation with cluster of electric vehicles. J. Frankl. Inst. 2023, 360, 8918–8935. [Google Scholar] [CrossRef]
  38. Sui, Y.; Gao, H. Adaptive echo state network based-channel prediction algorithm for the internet of things based on the IEEE 802.11ah standard. Telecommun. Syst. 2022, 81, 503–526. [Google Scholar] [CrossRef]
  39. Zhu, P.; Wang, H.; Ji, Y.; Gao, G. A Novel Performance Enhancement Optical Reservoir Computing System Based on Three-Loop Mutual Coupling Structure. J. Light. Technol. 2024, 42, 3151–3162. [Google Scholar] [CrossRef]
  40. Kent, R.; Barbosa, W.S.; Gauthier, D.J. Controlling chaotic maps using next-generation reservoir computing. Chaos Interdiscip. J. Nonlinear Sci. 2024, 34, 1–11. [Google Scholar]
  41. Hailin, L. A modified newton method for unconstrained convex optimization. In Proceedings of the 2008 International Symposium on Information Science and Engineering, Shanghai, China, 20–22 December 2008; pp. 754–757. [Google Scholar]
  42. Hui, Y.; Zhibin, H.; Feng, Z. Application of BP neural network based on quasi-newton method in aerodynamic modeling. In Proceedings of the 2017 16th International Symposium on Distributed Computing and Applications to Business, Engineering and Science (DCABES), Anyang, China, 13–16 October 2017; pp. 93–96. [Google Scholar]
  43. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. Ann. Stat. 2004, 32, 407–451. [Google Scholar] [CrossRef]
  44. Ma, Y.; Su, J.; Fan, X.; Yang, Q.; Gao, Y.; Huang, Z.; Jiang, R. A Computational Model of MI-EEG Association Prediction Based on SMR-DCT and LS-SVM. In Proceedings of the International Conference on Intelligent Autonomous Systems, Dalian, China, 23–25 September 2022; pp. 351–357. [Google Scholar]
  45. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
  46. Sui, Y. Research on Nonlinear Channel Prediction Method for Adaptive OFDM Systems Based on Echo State Network; Hefei University of Technology: Hefei, China, 2021. [Google Scholar]
Figure 1. The typical structure of the ESN.
Figure 2. The typical structure of the NGRCN.
Figure 3. The structure of the adaptive RRLN in this research.
Figure 4. The related curves of the one-step prediction of the CSI at the 1st subcarrier using the proposed prediction method: (a) the real component; (b) the imaginary component.
Figure 5. The curves of the multi-step prediction of the CSI at the first subcarrier: (a) the real component; (b) the imaginary component.
Figure 6. The curves of the 10th step prediction of the CSI at the 1st subcarrier using the proposed prediction method: (a) the real component; (b) the imaginary component.
Figure 7. The prediction performances of the CSI at the subcarrier by comparable prediction methods under different SNRs: (a) the real component; (b) the imaginary component.
Figure 8. The prediction performances of the CSI at the subcarrier by comparable prediction methods when SNR is 20 dB: (a) the real component; (b) the imaginary component.
Table 1. Computational complexities of some comparable channel prediction models.

| Model | CC-TrPr | CC-PePr |
| --- | --- | --- |
| AR [15] | O(hN) | O(hN²) |
| SVM [22] | O(hT̃³) | O(d_sN) |
| LS-SVM [44] | O(hT̃³) | O(NT̃) |
| B-ESN [25] | O(T̃P² + hT̃_c³(P + N)³) | O(P³ + h(P + N)²) |
| R-NGRCN [39] | O(N(1 + N)/2 + hT̃_c³(1 + N + N(1 + N)/2)³) | O(N(1 + N)/2 + h(1 + N + N(1 + N)/2)²) |
| L-NGRCN [40] | O(N(1 + N)/2 + Σ_{j=1}^{h} ñ_j(1 + N + N(1 + N)/2)²(1 + N + N(1 + N)/2 + T̃_s)) | O(N(1 + N)/2 + h(1 + N + N(1 + N)/2)²) |
| Adaptive RRLN | O((T − q + 1)³ + q² + q³ + T̃Np + p² + T̃N² + Σ_{j=1}^{h} n_j(1 + N + N² + P)²(T + N + N² + P + 1)) | O(Np + p² + N² + h(1 + N + N² + P)) |
Table 2. OFDM wireless communication systems based on IEEE802.11ah.

| Symbol | Meaning | Value |
| --- | --- | --- |
| f_c | The carrier frequency | 780 MHz |
| B | The bandwidth | 2 MHz |
| f_OFDM | The OFDM symbol rate | 25 kHz |
| MD | The modulation method | QPSK |
| K | The subcarrier number per OFDM symbol | 52 |
| N_sp | The pilot subcarrier number per OFDM symbol | 4 |
| f_d | The maximum Doppler shift | 70 Hz |
| f_s | The sampling rate | 2 MHz |
Table 3. Parameters of the adaptive RRLN in one-step prediction.

| Data Set | Parameter | Value |
| --- | --- | --- |
| Real component | Input neuron number N | 30 |
| | SP neuron number P | 200 |
| | Sparsity of SP S_D | 0.05 |
| | Spectral radius ρ_w | 0.09 |
| | Balance factor α | 1 |
| | Scaling factor κ | 0.01 |
| | Regularization factors λ1, λ2 | 1 × 10−6, 1 × 10−7 |
| | Convergence accuracy ε | 1 × 10−8 |
| | Window length q | 10 |
| Imaginary component | Input neuron number N | 30 |
| | SP neuron number P | 200 |
| | Sparsity of SP S_D | 0.05 |
| | Spectral radius ρ_w | 0.09 |
| | Balance factor α | 1 |
| | Scaling factor κ | 0.01 |
| | Regularization factors λ1, λ2 | 4 × 10−5, 1 × 10−8 |
| | Convergence accuracy ε | 1 × 10−6 |
| | Window length q | 10 |
Table 4. The one-step prediction results for the real component of the CSI at the 1st subcarrier.

| Model | MAE | RMSE | NRMSE | SMAPE | MAPE | WR-OWM | SD-OWM (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AR [15] | 1.51 × 10−3 | 1.89 × 10−3 | 3.94 × 10−3 | 1.65 × 10−2 | 4.08 × 10−2 | - | - |
| SVM [22] | 6.43 × 10−4 | 8.64 × 10−4 | 1.80 × 10−3 | 8.29 × 10−3 | 1.98 × 10−2 | - | - |
| LS-SVM [44] | 4.86 × 10−4 | 6.47 × 10−4 | 1.35 × 10−3 | 7.87 × 10−3 | 1.72 × 10−2 | - | - |
| B-ESN [25] | 4.27 × 10−5 | 5.33 × 10−5 | 1.10 × 10−4 | 6.37 × 10−4 | 6.50 × 10−4 | [−0.2869, 1.1672] | 100 |
| R-NGRCN [39] | 5.28 × 10−5 | 6.48 × 10−5 | 1.34 × 10−4 | 6.78 × 10−4 | 6.65 × 10−4 | [−0.2825, 1.1624] | 100 |
| L-NGRCN [40] | 9.07 × 10−4 | 1.20 × 10−3 | 2.49 × 10−3 | 1.14 × 10−2 | 1.60 × 10−2 | [−0.2764, 1.2257] | 1.2889 |
| Adaptive RRLN | 2.43 × 10−5 | 3.00 × 10−5 | 6.22 × 10−5 | 3.26 × 10−4 | 3.24 × 10−4 | [−0.9743, 1.8297] | 3.1830 |
Table 5. The one-step prediction results for the imaginary component of the CSI at the 1st subcarrier.

| Model | MAE | RMSE | NRMSE | SMAPE | MAPE | WR-OWM | SD-OWM (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AR [15] | 7.71 × 10−4 | 9.54 × 10−4 | 2.05 × 10−3 | 8.25 × 10−3 | 1.11 × 10−2 | - | - |
| SVM [22] | 2.77 × 10−4 | 3.35 × 10−4 | 7.18 × 10−4 | 1.94 × 10−3 | 4.73 × 10−0 | - | - |
| LS-SVM [44] | 4.19 × 10−3 | 5.84 × 10−3 | 8.60 × 10−3 | 3.55 × 10−2 | 4.41 × 10−2 | - | - |
| B-ESN [25] | 4.71 × 10−5 | 6.22 × 10−5 | 1.33 × 10−4 | 4.77 × 10−4 | 4.76 × 10−4 | [−0.2747, 1.1467] | 100 |
| R-NGRCN [39] | 3.25 × 10−5 | 4.38 × 10−5 | 9.35 × 10−5 | 6.96 × 10−4 | 6.31 × 10−4 | [−0.2965, 1.1944] | 100 |
| L-NGRCN [40] | 9.73 × 10−4 | 1.21 × 10−3 | 2.59 × 10−3 | 1.01 × 10−2 | 1.52 × 10−2 | [−0.2783, 1.2254] | 1.1815 |
| Adaptive RRLN | 6.13 × 10−6 | 8.36 × 10−6 | 1.79 × 10−5 | 1.45 × 10−4 | 1.42 × 10−4 | [−1.2721, 1.9655] | 4.3324 |
Table 6. The 10th-step prediction results for the real component of the CSI at the 1st subcarrier.

| Model | MAE | RMSE | NRMSE | SMAPE | MAPE | WR-OWM | SD-OWM (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AR [15] | 1.56 × 10−2 | 2.41 × 10−2 | 5.04 × 10−2 | 1.16 × 10−1 | 3.96 × 10−1 | - | - |
| SVM [22] | 3.14 × 10−2 | 4.17 × 10−2 | 8.68 × 10−2 | 1.87 × 10−1 | 5.28 × 10−1 | - | - |
| LS-SVM [44] | 8.77 × 10−3 | 1.10 × 10−2 | 2.29 × 10−2 | 6.78 × 10−2 | 1.38 × 10−1 | - | - |
| B-ESN [25] | 2.06 × 10−3 | 2.65 × 10−3 | 5.51 × 10−3 | 2.15 × 10−2 | 3.45 × 10−2 | [−9.5977, 17.400] | 100 |
| R-NGRCN [39] | 3.36 × 10−3 | 4.14 × 10−3 | 8.62 × 10−3 | 3.38 × 10−2 | 6.27 × 10−2 | [−7.9500, 15.026] | 100 |
| L-NGRCN [40] | 8.79 × 10−3 | 1.20 × 10−3 | 2.26 × 10−2 | 6.78 × 10−2 | 1.59 × 10−1 | [−9.0454, 12.729] | 13.96 |
| Adaptive RRLN | 5.01 × 10−4 | 6.29 × 10−4 | 1.31 × 10−3 | 8.17 × 10−3 | 1.08 × 10−2 | [−44.767, 54.933] | 7.41 |
Table 7. The 10th-step prediction results for the imaginary component of the CSI at the 1st subcarrier.

| Model | MAE | RMSE | NRMSE | SMAPE | MAPE | WR-OWM | SD-OWM (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AR [15] | 1.03 × 10−2 | 1.59 × 10−2 | 3.42 × 10−2 | 8.22 × 10−2 | 1.11 × 10−1 | - | - |
| SVM [22] | 1.95 × 10−2 | 2.35 × 10−2 | 5.05 × 10−2 | 1.46 × 10−1 | 2.34 × 10−1 | - | - |
| LS-SVM [44] | 2.80 × 10−2 | 3.41 × 10−2 | 7.32 × 10−2 | 1.89 × 10−1 | 3.70 × 10−1 | - | - |
| B-ESN [25] | 2.12 × 10−3 | 2.77 × 10−3 | 5.93 × 10−3 | 2.23 × 10−2 | 2.84 × 10−2 | [−9.1182, 16.706] | 100 |
| R-NGRCN [39] | 4.91 × 10−3 | 6.11 × 10−3 | 1.31 × 10−2 | 4.32 × 10−2 | 7.34 × 10−2 | [−5.9125, 11.507] | 100 |
| L-NGRCN [40] | 1.33 × 10−2 | 1.13 × 10−2 | 3.49 × 10−2 | 1.07 × 10−1 | 1.73 × 10−1 | [−3.5911, 11.163] | 20.84 |
| Adaptive RRLN | 2.50 × 10−4 | 3.15 × 10−4 | 6.75 × 10−4 | 4.81 × 10−3 | 4.39 × 10−3 | [−23.267, 14.144] | 26.21 |
Table 8. Parameters of the adaptive RRLN in the robust prediction test.

| Data Set | Parameter | Value |
| --- | --- | --- |
| Real component | Input neuron number N | 30 |
| | SP neuron number P | 200 |
| | Sparsity of SP S_D | 0.05 |
| | Spectral radius ρ_w | 0.09 |
| | Balance factor α | 1 |
| | Scaling factor κ | 0.01 |
| | Regularization factors λ1, λ2 | 1 × 10−3, 1 × 10−3 |
| | Convergence accuracy ε | 1 × 10−6 |
| | Window length q | 10 |
| Imaginary component | Input neuron number N | 30 |
| | SP neuron number P | 200 |
| | Sparsity of SP S_D | 0.05 |
| | Spectral radius ρ_w | 0.09 |
| | Balance factor α | 1 |
| | Scaling factor κ | 0.01 |
| | Regularization factors λ1, λ2 | 5 × 10−4, 1 × 10−4 |
| | Convergence accuracy ε | 1 × 10−8 |
| | Window length q | 10 |