Article

Deep Learning-Enhanced Autoencoder for Multi-Carrier Wireless Systems

by Md Abdul Aziz 1,2, Md Habibur Rahman 1,2, Rana Tabassum 1,2, Mohammad Abrar Shakil Sejan 3, Myung-Sun Baek 3 and Hyoung-Kyu Song 1,2,*

1 Department of Information and Communication Engineering, Sejong University, Seoul 05006, Republic of Korea
2 Department of Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea
3 Department of Electrical Engineering, Sejong University, Seoul 05006, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(23), 3685; https://doi.org/10.3390/math12233685
Submission received: 19 October 2024 / Revised: 18 November 2024 / Accepted: 21 November 2024 / Published: 24 November 2024

Abstract
In a multi-carrier (MC) system, the transmitted data are split across several sub-carriers, a crucial approach for achieving high data rates, reliability, and spectral efficiency. Deep learning (DL) enhances MC systems by improving signal representation, leading to more efficient data transmission and reduced bit error rates. In this paper, we propose a DL-aided MC system for operation on fading channels. Deep neural networks are utilized to model the modulation block, while a gated recurrent unit (GRU) network is used to model the demodulation block, acting as the encoder and decoder within an autoencoder (AE) architecture. The proposed scheme, known as MC-AE, differs from existing AE-based systems by directly processing channel state information and the received signal in a fully data-driven way, unlike traditional methods that rely on channel equalizers. This approach enables MC-AE to improve diversity and coding gains in fading channels by jointly optimizing the encoder and decoder. We evaluated the performance of the proposed model under both perfect and imperfect channel conditions and compared it with other models. Additionally, we assessed the performance of the MC-AE system against index modulation-based MC systems. The results demonstrate that the GRU-based MC-AE system outperforms the others.

1. Introduction

Multicarrier (MC) transmission [1] has been recognized and adopted by numerous standards bodies for both wireline and wireless systems [2,3] due to its effectiveness and reliability. This approach offers several benefits, including resistance to single-frequency interference and enhanced capability to reduce inter-symbol interference (ISI). MC’s strength in addressing various channel impairments, such as frequency selectivity and impulse noise, makes it a popular choice in emerging applications like wireless local area networks (WLANs) [4] and power line communications (PLCs) [5].
The core concept of the MC protocol is the decomposition of the spectrum into a series of orthogonal narrowband subchannels, using complex exponentials as carriers of information [6]. Two primary MC techniques are commonly employed: discrete multitone (DMT) [7] for wireline systems and orthogonal frequency-division multiplexing (OFDM) [8], which is predominantly used in wireless applications. Both OFDM and DMT rely on fast Fourier transform (FFT) for spectrum decomposition, and data are transmitted in blocks as a result. Many wireless standards, such as IEEE 802.11, IEEE 802.16, 3GPP LTE, and LTE-Advanced, utilize an OFDM-based multicarrier system [9].
A variety of sophisticated MC methods built on OFDM have been investigated recently with the goal of improving spectral or energy efficiency. For example, spread OFDM (S-OFDM), using a Walsh–Hadamard spreading matrix, was proposed in [10], where each subcarrier in the OFDM system is modulated using a spread spectrum technique. In [11], the authors introduced OFDM with index modulation (OFDM-IM), employing a maximum likelihood (ML) detector to identify the received signal. Compared to traditional OFDM, OFDM-IM offers enhanced reliability and energy efficiency by activating only a subset of subcarriers. This method transmits data bits through active indices in addition to M-ary symbols. The study in [12] revealed that, like conventional OFDM, OFDM-IM has a diversity order of one and examined its error performance. In particular, spread OFDM-IM (S-OFDM-IM) was developed in [13] by applying a spreading code to OFDM-IM to enhance its diversity advantage. That study demonstrated that when low-complexity detection techniques, such as minimum mean squared error (MMSE)-based detectors, are used, S-OFDM-IM performs better than S-OFDM. Similarly, dual-mode OFDM (DM-OFDM), which maps data using two distinct signal constellations, was introduced in [14].
Recently, deep learning (DL) has been applied to various areas within the communications field, particularly to address physical layer challenges [15,16]. DL has also been widely utilized to tackle issues in MC systems [17]. For instance, deep neural networks (DNNs) have been effectively employed to detect OFDM and OFDM-IM signals [18,19], particularly in the presence of channel impairments. In [20], a convolutional neural network (CNN)-based receiver was designed for joint detection and modulation classification of the received signal. The work in [21] introduced a dual CNN-based decoder for channel estimation in a multiple-input multiple-output OFDM (MIMO-OFDM) system, aiming to reduce complexity by incorporating an additional neural network, referred to as HyperNet. A long short-term memory (LSTM)-based detector for the OFDM-IM system was proposed in [22], and a Y-shaped net-based bi-directional LSTM (Bi-LSTM) model was proposed for the OFDM-IM system in [23].
The concept of an end-to-end communication system modeled as an autoencoder (AE) in DNNs was first introduced in [24]. In this approach, the transmitter encodes the input data so that the receiver’s output is optimized to match the original input. Additionally, ref. [25] demonstrated the ease of implementing these techniques using open-source DNN frameworks and commercially available software-defined radios. Building on these ideas, ref. [26] proposed an AE-based end-to-end communication system capable of handling unknown channel models. In [27], the authors proposed a novel deep energy autoencoder (EA) for noncoherent multicarrier multiuser single-input multiple-output (MU-SIMO) systems in fading channels. The EA-based approach, using neural networks for both transmitter and receiver, leverages energy-only input at the decoder and optimizes subcarrier power levels. In [28], the authors proposed a novel peak-to-average power ratio (PAPR) reduction scheme, PRNet, for OFDM systems based on a deep AE architecture. The scheme adaptively determines constellation mapping and demapping using DL, minimizing both the bit error rate (BER) and PAPR. For fading channels, ref. [29] introduced a radio transformer network (RTN) to counteract fading, but it did not leverage multipath diversity. Similarly, the AE method in [29], applied to individual OFDM sub-carriers, did not provide diversity advantages. The paper in [30] compared an AE-based OFDM system with IEEE 802.11a Wi-Fi, demonstrating that the AE system adapts better to channel conditions and effectively manages multi-path effects. A CNN-based universal filtered MC system was proposed in [31], where the DL models leverage the typically discarded odd-indexed samples from the received signal; consequently, the model relies heavily on extensive and diverse training data to perform effectively. A DNN-based MC system with an AE (MC-AE) for fading channels was proposed in [32], where both modulation and demodulation are handled by neural networks without relying on channel equalizers. The approach improves diversity and coding gains, achieving superior block error rate (BLER) performance over traditional methods, and is extended to MU-MC-AE. Another DNN-based turbo-style MC-AE was proposed in [33] for the OFDM system, incorporating the exchange of extrinsic information between the MC-AE decoder and the soft-decision channel decoder, which adds computational complexity. The gated recurrent unit (GRU) model captures long-term dependencies in sequential data through gating mechanisms, effectively addressing the vanishing gradient problem. GRUs have a simpler structure with fewer parameters than LSTMs, enabling faster training. Additionally, GRUs require less memory than LSTMs and generalize well with smaller datasets, making them suitable for real-time and resource-constrained applications [34,35]. Meanwhile, AEs are advantageous for feature extraction and dimensionality reduction, allowing data compression and efficient representation learning, which often enhances performance in downstream tasks [36]. In this paper, we propose a GRU-based MC-AE (GRU-MC-AE) system that leverages the advantages of the GRU model to tackle the aforementioned challenges. We evaluate the BER and BLER performance of the proposed model across various signal-to-noise ratios (SNRs) under both perfect and imperfect channel conditions. Additionally, we calculate the spectral efficiency (SE) and energy efficiency (EE) across various SNR levels.
We compare its performance with traditional decoders and other DL-based decoders. The simulation results demonstrate that our proposed GRU-MC-AE model outperforms both conventional and DL-based models. The key contributions of this paper are summarized below:
  • This study proposes a single-user MC-AE system, wherein a DNN model is used for the modulation block and the demodulation block is implemented using a GRU, acting as the encoder and decoder in an AE framework. The encoder employs a linear activation function, which passes the weighted sum of inputs directly to produce continuous outputs, while the GRU-based decoder effectively manages the temporal dependencies between subcarriers, enhancing the system’s ability to adapt to changing channel conditions.
  • The suggested method requires neither equalization nor domain knowledge because it feeds both the received signal and the channel state information (CSI) directly to the decoder. This fully data-driven method learns the encoder and decoder jointly to maximize diversity and coding gains in fading channels.
  • We evaluate the performance of the proposed model in terms of BER and BLER under both perfect and imperfect channel conditions. We compare it with traditional and DL-based decoders, and the results show that the GRU-MC-AE model exceeds the performance of both conventional and DL-based models. To further assess the efficiency of the proposed model, we evaluate its SE and EE.
The remainder of this paper is organized as follows: The system modeling is introduced in Section 2. A thorough discussion of the suggested model, including details on offline training and online testing protocols, is given in Section 3. The simulation results are presented in Section 4, while the complexity analysis and detailed discussion are provided in Section 5 and Section 6, respectively. Finally, the conclusions are drawn in Section 7.

2. System Model

In an MC-AE system, the incoming message is passed through the modulation section, which consists of a DL-based encoder, generating a transmitted data vector. This data vector is then processed through a time-domain OFDM (t-OFDM) operation. Specifically, an inverse fast Fourier transform (IFFT) is applied to convert the data into the time domain, and a cyclic prefix is added to combat inter-symbol interference before the signal is transmitted through a fading channel with noise. At the receiver, the cyclic prefix is removed, and an inverse t-OFDM operation is performed using a fast Fourier transform (FFT) to convert the signal back into the frequency domain. Examples of applicable modulation schemes include M-ary QAM/PSK and several recently developed index modulation (IM) schemes [37]. The resulting received signal is then passed through a demodulation block, which consists of a DL-based decoder that generates the estimated output.
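For concreteness, the t-OFDM operation described above can be sketched in a few lines of Python. This is an illustrative sketch only; the cyclic prefix length cp_len is our assumption, as the paper does not specify it.

```python
# Illustrative sketch of the t-OFDM step (IFFT + cyclic prefix) and its
# inverse (CP removal + FFT). cp_len is an assumed value, not from the paper.
import numpy as np

def t_ofdm_tx(x_freq, cp_len=16):
    x_time = np.fft.ifft(x_freq)                       # frequency -> time domain
    return np.concatenate([x_time[-cp_len:], x_time])  # prepend cyclic prefix

def t_ofdm_rx(y_time, cp_len=16):
    return np.fft.fft(y_time[cp_len:])                 # drop CP, time -> frequency
```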
The general structure of the proposed MC-AE system is shown in Figure 1. In our proposed system, the incoming message s is fed as input to the encoder. In the MC-AE encoder, each incoming message s ∈ S = {s_1, …, s_M}, where M = 2^m and each message consists of a bit-stream of m bits, is mapped into a one-hot vector s of size M × 1. This vector has all zero entries except for a single one. Afterward, the DL-based encoder produces the transmitted data vector x = [x_1, x_2, …, x_{N_c}]^T, where N_c denotes the number of sub-carriers. In a manner similar to the IM-based scheme [38], the proposed MC-AE system splits the total N_c subcarriers into G groups, each containing N subcarriers, such that N_c = NG. The AE structure is then applied independently to each group. Following processing by the t-OFDM block, x is transmitted to the receiver, where it passes through the fading channel H and is corrupted by additive noise w. After performing the inverse t-OFDM operation at the receiver, the received frequency-domain signal y is obtained, expressed as follows:
y = H ⊙ x + w,
In this equation, w represents the AWGN, where w_i ∼ CN(0, σ²). H = [H_1, …, H_N]^T denotes the Rayleigh fading channel, where H_i ∼ CN(0, 1), and the operator ⊙ signifies element-wise multiplication. It is assumed that the average energy of the transmitted M-ary symbol is E_a, resulting in an average SNR at the receiver of γ̄ = E_a/σ².
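A minimal NumPy sketch of this per-subcarrier model, assuming unit-energy symbols (E_a = 1) and the CN(0, 1) Rayleigh fading defined above, is as follows:

```python
# Sketch of y = H (element-wise) x + w over one block of N sub-carriers.
import numpy as np

def fading_channel(x, snr_db, Ea=1.0):
    """Apply per-subcarrier Rayleigh fading and AWGN: y = H * x + w."""
    n = x.shape[0]
    sigma2 = Ea / (10.0 ** (snr_db / 10.0))            # noise variance from avg SNR
    H = (np.random.randn(n) + 1j * np.random.randn(n)) / np.sqrt(2)  # H_i ~ CN(0,1)
    w = np.sqrt(sigma2 / 2) * (np.random.randn(n) + 1j * np.random.randn(n))
    return H * x + w, H                                # element-wise multiplication
```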
In order to decode the signal, we assume that the received signal y and the CSI are both fully known at the receiver and are fed into the decoder as inputs. Specifically, the real vector x̂ = [y_R, y_I, H_R, H_I]^T is created from the complex vectors y and H, where the real and imaginary components of y and H are denoted, respectively, by y_R, y_I and H_R, H_I. The proposed MC-AE differs significantly from traditional RTN-based AE schemes [29]. RTN is designed for block fading channels where sub-carrier channels remain constant over multiple uses. This allows RTN to operate without CSI, relying instead on the domain knowledge of a channel equalizer. These properties make it a model-driven, noncoherent approach. On the other hand, the proposed MC-AE is designed for time-varying channel conditions, in which the channel coefficients fluctuate arbitrarily with each use. This flexibility enables the proposed scheme to utilize perfect CSI at the receiver in a fully data-driven manner to exploit frequency diversity across different sub-carriers, moving away from the reliance on domain knowledge characteristic of RTN.
For the imperfect CSI setting, we consider a practical system in which the receiver's CSI estimate is erroneous. Imprecise CSI estimation at the receiver degrades system performance, especially in scenarios where accurate channel estimation is crucial for effective signal detection and decoding. The channel h(α) is modeled as follows:
h(α) = ĥ(α) + E(α),
where E(α) ∼ CN(0, ε²) represents the channel estimation error, and ĥ(α) ∼ CN(0, 1 − ε²) is the imperfect estimate of the channel. Here, ε² denotes the error variance in the CSI estimation. In this model, ε² depends on the average SNR γ̄ and is defined as ε² = 1/(1 + γ̄). Thus, as the SNR increases, the channel estimation error variance decreases, reflecting improved CSI accuracy at higher SNR values [12].
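A short sketch of this error model, following the ε² = 1/(1 + γ̄) relation above (the sketch is our own illustration):

```python
# Imperfect-CSI sketch: eps2 = 1/(1 + snr_linear). The receiver observes
# h_hat = h - E with E ~ CN(0, eps2), consistent with h = h_hat + E above.
import numpy as np

def imperfect_csi(h, snr_db):
    snr = 10.0 ** (snr_db / 10.0)
    eps2 = 1.0 / (1.0 + snr)                            # error variance
    E = np.sqrt(eps2 / 2) * (np.random.randn(*h.shape)
                             + 1j * np.random.randn(*h.shape))
    return h - E                                        # imperfect estimate h_hat
```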

3. Proposed GRU-MC-AE Network Architecture

The structure of the proposed GRU-MC-AE network is shown in Figure 2. Our proposed model consists of three main parts: a DL-based encoder, a channel, and a DL-based decoder. The encoder section includes an input layer, a dense layer, a normalization layer, and a reshape layer. In the encoder, one of the M = 2^m possible incoming messages s is mapped to a one-hot vector s of size M × 1, where all entries are zero except for one, which is set to one. The next layer is a dense layer with a linear activation function. This layer produces a 2N-dimensional vector u. The operation of this layer is expressed as follows:
u = Ws + b,
where W is the weight matrix and b is the bias vector. The following layer is a normalization layer implemented as a lambda function, which regulates the average transmission power for each subcarrier. The output of this layer is a 2N-dimensional vector represented as follows:
v = √(N E_a) · u/‖u‖,
where E_a is the average energy of the transmitted symbol. The next layer is the reshape layer, which generates the real and imaginary parts by reshaping v into an N × 1 complex-valued vector. The subsequent section is the channel, implemented with a lambda layer, where noise is added to the signal. After passing through the channel, the output consists of both the received signal and the estimated channel coefficients.
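Under the (N, M) = (4, 16) setup used later in the paper, the encoder described above can be sketched roughly as follows in Keras. This is a hedged sketch; the paper does not give the exact layer code, and layer sizes are taken from the stated configuration.

```python
import tensorflow as tf

N, M, Ea = 4, 16, 1.0                                   # assumed (N, M) = (4, 16), Ea = 1
s_in = tf.keras.Input(shape=(M,))                       # one-hot message s
u = tf.keras.layers.Dense(2 * N, activation="linear")(s_in)   # u = Ws + b
# Power normalization: v = sqrt(N * Ea) * u / ||u||
v = tf.keras.layers.Lambda(
    lambda t: tf.sqrt(N * Ea) * t / tf.norm(t, axis=1, keepdims=True))(u)
v = tf.keras.layers.Reshape((N, 2))(v)                  # real/imag parts per sub-carrier
encoder = tf.keras.Model(s_in, v)
```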
The final section is the decoder, which consists of an input layer, two GRU layers, and an output layer. In the input layer, a 4N-dimensional real vector, denoted as x̂, is constructed from the complex vectors y and H. This vector is then fed into the GRU layers, which have Q_1 and Q_2 hidden nodes, respectively. The first GRU layer processes the channel output and generates a hidden state with Q_1 units for each subcarrier. The second GRU layer further reduces the sequence to produce a single output vector of Q_2 units. The internal structure of the GRU model is illustrated in Figure 3. The GRU streamlines the traditional RNN by incorporating two key gates: the update gate and the reset gate. The update gate manages the portion of the previous hidden state that is passed to the next time step, enabling the model to retain information across longer sequences. Conversely, the reset gate controls how much of the previous hidden state is discarded, allowing the model to clear its memory when necessary. The candidate hidden state is calculated using the current input and the reset hidden state, and the final hidden state is determined by combining the previous hidden state with the candidate state, based on the update gate’s influence [39,40].
The update gate, represented as d_t, controls the extent to which the previous hidden state h_{t−1} is retained and passed on to the next time step. The reset gate, denoted as R_t, regulates how much of the previous hidden state h_{t−1} should be discarded or reset before incorporating the new input. The candidate hidden state, ĥ_t, represents the newly computed memory content, which is generated based on the current input and the modified hidden state. The final hidden state h_t at time step t is a combination of the previous hidden state h_{t−1} and the candidate hidden state ĥ_t, controlled by the update gate d_t. The operation of the gates and hidden states in the GRU can be expressed as follows:
d_t = σ(W_d x̂_t + U_d h_{t−1} + b_d),
R_t = σ(W_R x̂_t + U_R h_{t−1} + b_R),
ĥ_t = tanh(W_h x̂_t + U_h (R_t ⊙ h_{t−1}) + b_h),
h_t = (1 − d_t) ⊙ h_{t−1} + d_t ⊙ ĥ_t,
where σ represents the sigmoid activation function, tanh denotes the hyperbolic tangent function, ⊙ represents element-wise (Hadamard) multiplication, b_d, b_R, and b_h are the bias vectors, W_d, W_R, and W_h are the input weight matrices, and U_d, U_R, and U_h are the recurrent weight matrices for the update gate, reset gate, and candidate hidden state, respectively.
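The gate equations above translate directly into code; the following NumPy sketch of a single GRU step (with illustrative weight shapes) mirrors them one-to-one:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, Wd, Ud, bd, WR, UR, bR, Wh, Uh, bh):
    d_t = sigmoid(Wd @ x_t + Ud @ h_prev + bd)              # update gate
    R_t = sigmoid(WR @ x_t + UR @ h_prev + bR)              # reset gate
    h_cand = np.tanh(Wh @ x_t + Uh @ (R_t * h_prev) + bh)   # candidate hidden state
    return (1.0 - d_t) * h_prev + d_t * h_cand              # final hidden state
```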
The final layer of the decoding section is a dense layer with M nodes, utilizing a softmax activation function. Using softmax, the decoder produces a probability vector ŝ = [ŝ_1, …, ŝ_M]^T, where the i-th entry represents the likelihood that the corresponding message was transmitted. The index of the largest entry of ŝ determines the estimated message. Applying softmax enables the model to produce a clear and interpretable output with values that sum to 1. This allows the output to be evaluated based on the correct predicted symbol, similar to a classification setup in communication systems [41]. If θ_dec = {W_i, b_i}_{i=1,2} denotes the parameters and z denotes the input of the output layer, the output is expressed as follows:
ŝ = σ_softmax(Wz + b),
where σ_softmax represents the softmax activation function. The detailed signal detection process for the proposed GRU-based decoder in the MC-AE system is systematically presented in Algorithm 1. This algorithm provides a structured overview of each stage, including data generation, AE model design, training, and testing. The step-by-step organization offers a clear perspective on the entire workflow.
Algorithm 1 MC-AE Model Setup, Training, and Evaluation
1: Initialize parameters: N, M, m, R (bit rate), SNR, batch size, norm type, epochs, learning rate, hidden layers, loss function, train/test sizes, activation functions
2: Define noise_std based on the training SNR
3: Define channel layers:
4: function channel(Z):
5:     Initialize channel estimate H_est with noise
6:     Calculate real and imaginary parts with noise added
7:     return received signal y
8: function channel_test(Z, noise_std, test_size, imperfect_channel=False):
9:     if imperfect_channel then
10:         Calculate imperfect CSI error
11:     end if
12:     Add noise to channel estimates
13:     Calculate received signal y
14:     return y
15: Build the MC-AE model:
16: Create input layer for the encoder
17: Add dense encoding layer
18: Normalize encoder output based on norm type
19: Reshape encoder output
20: Define decoder GRU layers and dense output layer
21: Construct the complete autoencoder model
22: Train or load the model:
23: if pre-trained then
24:     Load encoder and decoder weights
25: else
26:     Generate training data
27:     Compile and train the AE model
28:     Save encoder and decoder weights
29: end if
30: Generate test data
31: Encode test data
32: BER and BLER calculation:
33: function calculate_BER_BLER(EbNodB_range, test_data, test_label, imperfect_channel):
34:     Initialize BER and BLER arrays
35:     for each SNR value in EbNodB_range do
36:         Calculate noise_std for the given SNR
37:         Encode test data
38:         Pass encoded data through channel_test
39:         Decode received data and calculate BER and BLER
40:     end for
41:     return BER and BLER
42: Perfect-CSI BLER calculation:
43: Print “Perfect CSI BLER”
44: Set EbNodB_range = range(0, 15, 4)
45: BER, BLER ← calculate_BER_BLER(EbNodB_range, test_data, test_label, False)
46: Imperfect-CSI BLER calculation:
47: Print “Imperfect CSI BLER”
48: Set EbNodB_range = range(0, 15, 4)
49: BER, BLER ← calculate_BER_BLER(EbNodB_range, test_data, test_label, True)
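For reference, the decoder stages of Algorithm 1 (step 20) could look roughly as follows in Keras. This is a hedged sketch: it assumes the 4N-dimensional input is arranged so that each of the N time steps carries the four real features [y_R, y_I, H_R, H_I] of one sub-carrier, and it takes Q_1 = 256 and Q_2 = 128 from the hidden-layer sizes in Table 1.

```python
import tensorflow as tf

N, M, Q1, Q2 = 4, 16, 256, 128                             # sizes assumed from Table 1
x_in = tf.keras.Input(shape=(N, 4))                        # [y_R, y_I, H_R, H_I] per sub-carrier
h1 = tf.keras.layers.GRU(Q1, return_sequences=True)(x_in)  # Q1-unit state per sub-carrier
h2 = tf.keras.layers.GRU(Q2)(h1)                           # collapse to one Q2-unit vector
s_hat = tf.keras.layers.Dense(M, activation="softmax")(h2) # probabilities over M messages
decoder = tf.keras.Model(x_in, s_hat)
```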

Training and Testing Procedure

A collection of randomly generated incoming messages s, or their corresponding one-hot vectors, is used to train the MC-AE model offline. During training, randomly generated noise w and channel H are added to the encoder’s output. The model is trained with 200,000 data samples. The training and testing parameters used in this experiment are shown in Table 1. In DL, the selection of an appropriate loss function is crucial, as it directly influences model performance and guides the optimization process. By quantifying errors, the loss function ensures that the model learns relevant patterns for specific tasks, such as regression or classification. For GRU-MC-AE training, we use the mean squared error (MSE) loss function [42], which can be expressed as follows:
L(s_i, ŝ_i; θ) = (1/n) Σ_{i=1}^{n} (s_i − ŝ_i)²,
where θ denotes the weights and biases of the model, n is the training batch size, and ŝ_i is the prediction of s_i. For randomly selected batches from the data samples, the stochastic gradient descent (SGD) algorithm updates the model parameters θ in the following manner:
θ⁺ := θ − η ∇_θ L(s_i, ŝ_i; θ),
where η is the learning rate (step size) of SGD. Optimizers play a crucial role in DL by enabling efficient weight updates, leading to quicker convergence and improved model performance [43]. The adaptive moment estimation (Adam) optimizer is widely used in DL due to its adaptive learning rates, which combine the strengths of momentum and RMSProp for faster and more stable convergence. It is efficient, robust to noisy gradients, and performs effectively across various tasks, especially with large models and datasets. Leveraging these advantages, we train our proposed model using the Adam optimizer [44], which is widely available in DL libraries such as TensorFlow [45]. During training, data labeling is conducted using one-hot encoding. Each data sample represents one of the M = 16 possible modulation symbols. The label for each sample is a one-hot encoded vector of length M, where the position corresponding to the true modulation symbol is set to 1 and all other positions are set to 0. This allows the neural network to learn the mapping between the transmitted symbol and its corresponding one-hot label during training.
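A minimal training sketch consistent with the setup above (MSE loss, Adam, one-hot labels over M = 16 messages, batch size 512); here `autoencoder` is a hypothetical handle for the Keras model chaining the encoder, channel layer, and decoder sketched earlier:

```python
import numpy as np
import tensorflow as tf

M = 16
labels = np.random.randint(0, M, 200_000)                      # random messages
one_hot = tf.keras.utils.to_categorical(labels, num_classes=M)

# `autoencoder` is assumed to chain encoder -> channel layer -> decoder.
autoencoder.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                    loss="mse")                                # MSE loss as above
autoencoder.fit(one_hot, one_hot,                              # AE targets = inputs
                batch_size=512, epochs=1000, verbose=0)
```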
Figure 4 illustrates the classification performance metrics of the proposed GRU-based MC-AE model for the (4, 16) data combination trained at 7 dB SNR. In Figure 4a, the precision-recall curve for each modulation class shows a gradual precision decrease as recall increases, reflecting the model’s accuracy across different modulation classes. The inset zooms in on variations in the middle precision-recall range, revealing slight class differences in performance. Figure 4b depicts the training loss over epochs, with an initial sharp decline indicating rapid adaptation to the data. The loss stabilizes around the 200th epoch, suggesting the model has reached a stable, low-error state and effectively minimized prediction errors.
Training the GRU-MC-AE model requires careful selection of the training SNR level, referred to as γ̄_t, due to its critical impact on the model’s performance. Specifically, it is essential to choose the optimal γ̄_t to ensure that a model trained at this SNR level maintains effective performance across the range of relevant SNRs. If γ̄_t is set too low, the effects of noise may not be adequately represented during training, leading to a poorly generalized model [46]. The subsequent simulation results detail the specific training SNR levels used for each experimental setup. All other training parameters, such as the learning rate, batch size, and number of epochs, which have a significant impact on model training, are chosen carefully.
The testing procedure for the MC-AE system evaluates its performance by transmitting encoded test data through a simulated wireless channel and measuring the error rates. The testing procedure uses 30,000 samples. BER and BLER are calculated by comparing the decoded output with the original messages. The performance is tested over a range of SNR values, and the results are plotted to compare BLER and BER under perfect and imperfect CSI conditions. At high SNRs, the diversity gain g_d and coding gain g_c can be used to approximate the BLER as follows [47]:
BLER = (g_c γ̄)^(−g_d) + o(γ̄^(−g_d)).
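The BER/BLER measurement itself reduces to a few lines; a sketch (assuming integer message indices and m = log2(M) bits per block):

```python
import numpy as np

def ber_bler(tx_idx, rx_idx, m):
    """tx_idx, rx_idx: integer arrays of transmitted/decoded message indices."""
    bler = np.mean(tx_idx != rx_idx)                     # fraction of block errors
    diff = np.bitwise_xor(tx_idx, rx_idx)                # differing bit positions
    bit_errors = sum(bin(int(v)).count("1") for v in diff)
    ber = bit_errors / (m * len(tx_idx))                 # errors per transmitted bit
    return ber, bler
```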

4. Simulation Results

We utilize a variety of advanced MC schemes, listed in Table 1, as baseline schemes for the GRU-MC-AE model. In this study, simulations are conducted in a Python environment on an Intel Core i7-8700 CPU at 3.20 GHz with an NVIDIA GeForce GTX 1660 Ti GPU. The configuration of IM-based schemes, such as OFDM-IM [11], is denoted by (N, K, M), where N represents the number of sub-carriers per block, K is the number of active sub-carriers, and M is the size of the conventional M-ary modulation. In particular, the configuration for OFDM [9] and S-OFDM [10] is represented by (N, M). All of these systems, including OFDM, S-OFDM, and OFDM-IM, utilize ML-based detectors, with the exception of the MMSE-based S-OFDM system in [13]. The configurations for both the DNN-based MC-AE [32] and the proposed MC-AE are represented as (N, M). It is essential to note that M in our approach is distinct from that in the baseline schemes, as it signifies the size of the transmitted message for each block of N sub-carriers, where M is defined as 2^m, with m representing the number of data bits in each message.
The speed at which a model updates its weights during training is determined by the learning rate, a crucial parameter in deep learning. Finding a suitable learning rate is essential to striking a balance between convergence speed and stability: a low learning rate may result in sluggish or sub-optimal convergence, whereas an excessively high learning rate may cause instability [48]. The BLER of the proposed model under perfect CSI conditions, with the (4, 16) setup, is shown in Figure 5. From the results, we observe that the model performs best at a learning rate of 0.001. While performance at lower SNR levels is nearly identical for all learning rates, the gap widens at higher SNR levels. Specifically, at 15 dB SNR, the 0.001 learning rate outperforms the 0.01, 0.005, and 0.0005 learning rates by approximately 3 dB, 1.2 dB, and 0.9 dB, respectively. Hence, a learning rate of 0.001 is used for all setups in this experiment.
Figure 6 shows the BLER performance of the proposed model across different batch sizes and epochs for the (4, 16) data setup. While the model demonstrates strong performance across all batch sizes and epoch configurations, it achieves the best results with a batch size of 512 and 1000 epochs. When the batch size decreases while keeping the epoch count at 1000, performance declines. Similarly, reducing the number of epochs while maintaining a batch size of 512 also results in decreased performance. Moreover, reducing both batch size and epochs further degrades performance. These results confirm that our proposed model performs optimally with a batch size of 512 and 1000 epochs.
Training SNR significantly affects a DL model’s performance. A low training SNR can hinder the model’s ability to generalize in high-SNR conditions, while a high training SNR may reduce its robustness in noisy environments. A balanced training SNR ensures better overall performance across varying SNRs [46]. Figure 7 presents a comparison of the proposed model’s BLER performance across different training SNRs. The BLER is measured with a batch size of 512 and a configuration of (4, 16). From the results, we observe that the proposed model performs well for all positive training SNR values. Although training at 15 dB SNR yields slightly poorer performance, training at 5 dB and 10 dB produces good, nearly identical results. The model performs best when trained at 7 dB SNR, and performance decreases as the training SNR moves above or below this value. Specifically, at a test SNR of 15 dB, the model trained at 7 dB exhibits around a 1.25 dB improvement over the models trained at 5 dB and 10 dB. Thus, all experiments for this model are conducted using a 7 dB training SNR.
The BLER for different values of M is shown in Figure 8. We compare the performance for the configurations (N, M) = (4, 16), (4, 32), and (4, 64). From the results, it can be observed that while our proposed model demonstrates good performance across all setups, the performance decreases as M increases. Specifically, for M = 32 and 64, the GRU-MC-AE model shows a degradation of approximately 1.125 dB and 1.7 dB, respectively, compared to M = 16.
In Figure 9, the performance of the proposed model, configured as (4, 16), is evaluated against ML-based OFDM and OFDM-IM, MMSE-based S-OFDM, and the DNN-based MC-AE [32] system under ideal channel conditions at a spectral efficiency (SE) of 1 bps/Hz. The results indicate that the proposed GRU-MC-AE model consistently outperforms the alternative models. While its performance closely aligns with that of the DNN-based MC-AE system at lower SNR levels, the proposed model demonstrates a notable gain of approximately 1.15 dB at higher SNR levels.
A comparative BER performance of the proposed model with the (4, 16) configuration is illustrated in Figure 10. This figure considers the results under perfect channel conditions at an SE of 1 bps/Hz. The performance of our proposed system is compared against several benchmarks, including a turbo-style DNN-based MC-AE system [33], an LSTM-based OFDM-IM system [22], a Y-BLSTM-based OFDM-IM system [23], and a greedy detector (GD)-based OFDM-IM system [49]. It is important to note that all OFDM-IM systems are configured with a (4, 1, 4) setup, while the turbo-style MC-AE uses a (4, 16) configuration. From the results, it is evident that the proposed model outperforms all other system models. Although the GRU-MC-AE model demonstrates slightly inferior performance at lower SNRs, particularly around 5 dB SNR, it exhibits approximately a 1 dB performance gain over the turbo MC-AE model at higher SNR levels.
The BLER performance of the GRU-MC-AE model under uncertain channel conditions is presented in Figure 11, using a (4, 16) configuration and a training SNR of 7 dB. The figure compares the performance of the GRU-MC-AE system with a DNN-based MC-AE system, as well as ML-based OFDM, OFDM-IM, and S-OFDM systems, at an SE of 1 bps/Hz. It is important to highlight that the proposed approach is evaluated using MMSE-based imperfect CSI, despite being trained under perfect CSI conditions. As shown, the GRU-MC-AE model surpasses all other systems. While it effectively outperforms S-OFDM and the DNN-based MC-AE, it exhibits a more significant advantage over the OFDM and OFDM-IM systems. Notably, at 15 dB SNR, the proposed model offers approximately a 0.7 dB improvement over the DNN-based MC-AE system and a 0.9 dB improvement over the S-OFDM system.
Figure 12 shows two plots that analyze the system performance in terms of SE and EE as a function of SNR under perfect channel conditions for different data settings. This performance is calculated with a training SNR of 7 dB, assuming a fixed circuit power of 0.25 W [50]. In Figure 12a, SE is plotted against SNR for three configurations of (N, M): (4, 16), (4, 32), and (4, 64). As the SNR increases, SE improves for all configurations, indicating that higher SNR values support higher data rates. Among the configurations, (N, M) = (4, 64) achieves the highest SE, followed by (4, 32) and (4, 16), suggesting that increasing M enhances SE. In Figure 12b, EE is plotted against SNR for the same configurations. EE also improves with SNR, showing that the system becomes more energy-efficient at higher SNR levels. Here, the (N, M) = (4, 64) configuration shows superior EE performance across most SNR values, particularly at higher SNRs. This suggests that larger values of M contribute to higher EE as well. The (4, 32) configuration follows in EE performance, and (4, 16) has the lowest EE across the SNR range. These plots provide insight into how adjusting the (N, M) parameters and the SNR affects SE and EE, helping to optimize the system’s performance based on specific requirements.
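The paper does not spell out the exact SE/EE formulas behind Figure 12; one plausible throughput-style computation, using the fixed 0.25 W circuit power from [50] and an assumed transmit power, is sketched below purely for illustration:

```python
def se_ee(m, N, bler, p_tx=1.0, p_circuit=0.25):
    """Illustrative only: effective SE in bps/Hz and EE relative to total power.
    p_tx (transmit power) is an assumption; p_circuit follows [50]."""
    se = (m / N) * (1.0 - bler)          # bits per sub-carrier use that get through
    ee = se / (p_tx + p_circuit)         # spectral efficiency per unit power
    return se, ee
```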

5. Computational Complexity

The high computational cost of data decoding is a major challenge in today’s advanced MC systems, often viewed as a trade-off for performance enhancement. To address this, we analyze the decoding complexity of the proposed DL-based MC schemes by measuring the decoding runtime per sample and comparing it with the baseline models included in Figure 9. The transmitter complexity of our schemes is minimal compared to the receiver, since the proposed encoders require only a single linear fully connected layer. The decoding complexity of the proposed MC-AE and the baselines, as shown in Figure 9, is compared in Table 2, where the runtime is expressed in milliseconds (ms). Our proposed model achieves a runtime of 0.049 ms per sample. In comparison, the runtimes for S-OFDM and OFDM-IM are 0.121 ms and 0.107 ms per sample, respectively, which are significantly higher than that of our proposed model. Additionally, we calculated the per-sample runtime of models trained at different SNRs to further evaluate the system’s efficiency, as presented in Table 3. From the table, it is evident that the runtime varies only slightly with the training SNR value. While our model requires slightly more runtime than the OFDM and DNN-based MC-AE systems, the results indicate that our scheme offers higher reliability while maintaining relatively low computational complexity.
During the inference (decoding) phase of the MC-AE model, computational costs are minimized through several optimizations. The channel layer, which simulates noise and fading, leverages batch operations in TensorFlow to efficiently compute the effects of the noisy channel with parallel processing. The decoder, which uses GRU layers, is less resource-intensive, helping reduce latency. Additionally, GPU acceleration further improves performance. For class prediction, the use of argmax efficiently maps probabilities to the predicted modulation classes, minimizing post-processing time.
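The per-sample runtimes reported in Tables 2 and 3 can be reproduced with a simple wall-clock measurement; the following sketch reflects our assumption of the procedure (wall-clock over a batched decoding pass divided by the batch size):

```python
import time

def runtime_per_sample_ms(model, test_batch):
    start = time.perf_counter()
    model.predict(test_batch, verbose=0)                 # batched decoding pass
    return 1000.0 * (time.perf_counter() - start) / len(test_batch)
```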

6. Discussion

To optimize diversity and coding gains over fading channels in a fully data-driven manner, we proposed a robust DL-based MC-AE system capable of learning both the encoder and decoder. The encoder uses a linear activation function for continuous outputs, while the decoder employs a GRU neural network to effectively manage temporal dependencies across subcarriers, enhancing the system’s adaptability to varying channel conditions. In our investigation, we focused on evaluating the performance of the model, calculating key metrics such as BER, BLER, SE, and EE for different data configurations and DL model parameters. The results were compared with several conventional and DL-based models. While our system demonstrated superior performance in terms of these metrics, it exhibits higher complexity than some of the other models. Techniques such as model quantization, static graph execution, and simplifying the channel model through approximations for noise and fading can be employed to optimize performance while reducing complexity; combined, these approaches enable faster inference with minimal performance loss. In a prior study [32], the authors designed an AE model with a DNN network and analyzed the BER and BLER for different scenarios, such as uplink-downlink and multiuser systems. Although that approach had lower complexity than ours, our model outperformed it in error performance. We intentionally avoid training with negative SNR levels, as excessive noise would overpower the signal, leading to overfitting and reduced information for learning. Our current work primarily focuses on static channel conditions, although we have explored varying training SNR values to account for dynamic channel conditions. Further work could extend this model to fully dynamic channel scenarios.

7. Conclusions

We proposed a robust GRU-based MC-AE system that optimizes diversity and coding gains over fading channels in a data-driven manner. We evaluated the BLER under both perfect and imperfect CSI conditions, comparing the performance of our proposed model with traditional and DL-based MC systems. Additionally, we assessed the BER performance under perfect CSI conditions. For further analysis, we calculated the SE and EE across varying SNR levels. Our findings show that the model performs optimally with a learning rate of 0.001 but is capable of learning efficiently at other learning rates. Training at moderate SNR values helps capture more relevant features, improving generalization. Our proposed model exhibited superior performance with 1000 epochs and a batch size of 512, achieving optimal results when configured with (4, 16) and trained at 7 dB SNR. The results demonstrate that MC-AE outperforms conventional baselines, including recent IM-based and S-OFDM-based schemes. It also achieves superior performance compared to other DL-based OFDM-IM and MC-AE systems under both perfect and imperfect CSI conditions. Moreover, the SE and EE results highlight the system’s robustness. Our study is primarily simulation-based, and we have not pursued practical implementation at this stage; in the future, however, this design may serve as a basis for practical implementation.

Author Contributions

Conceptualization, M.A.A. and M.H.R.; methodology, M.A.A., R.T. and M.A.S.S.; software, M.A.A., M.H.R. and M.-S.B.; validation, M.A.A. and M.A.S.S.; formal analysis, M.A.A., M.H.R., R.T. and M.-S.B.; investigation, M.A.A., M.H.R. and M.A.S.S.; resources, H.-K.S.; data curation, M.A.A. and R.T.; writing—original draft preparation, M.A.A.; writing—review and editing, M.A.A. and H.-K.S.; visualization, M.A.A. and M.-S.B.; supervision, H.-K.S.; project administration, H.-K.S.; funding acquisition, H.-K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korea Government [Ministry of Science and ICT (MSIT)], Republic of Korea, under the Metaverse Support Program to Nurture the Best Talents under Grant IITP-2024-RS-2023-00254529; in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant 2020R1A6A1A03038540; and in part by the MSIT (Ministry of Science and ICT), Republic of Korea, under the ITRC (Information Technology Research Center) support program (IITP-2024-RS-2024-00437191) supervised by the IITP (Institute for Information and Communications Technology Planning and Evaluation).

Data Availability Statement

The data will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bingham, J.A.C. Multicarrier Modulation for Data Transmission: An Idea Whose Time Has Come. IEEE Commun. Mag. 1990, 28, 5–14. [Google Scholar]
  2. Taura, K.; Tsujishita, M.; Takeda, M.; Kato, H.; Ishida, M.; Ishida, Y. A digital audio broadcasting (DAB) receiver. IEEE Trans. Consum. Electron. 1996, 42, 322–327. [Google Scholar] [CrossRef]
  3. G.992.1; Asymmetric Digital Subscriber Line (ADSL) Transceivers. Telecommunication Standardization Sector of ITU: Geneva, Switzerland, 1999.
  4. Nee, R.V.; Prasad, R. OFDM for Wireless Multimedia Communications; Artech House, Inc.: Norwood, MA, USA, 2000. [Google Scholar]
  5. Biglieri, E. Coding and modulation for a horrible channel. IEEE Commun. Mag. 2003, 41, 92–98. [Google Scholar] [CrossRef]
  6. Garg, V.K. Fourth generation systems and new wireless technologies. In Wireless Communications & Networking; Elsevier: Amsterdam, The Netherlands, 2007; pp. 1–22. [Google Scholar]
  7. Cioffi, J.M. A multicarrier primer. ANSI T1E1 1991, 4, 91–157. [Google Scholar]
  8. Terry, J.; Heiskala, J. OFDM Wireless LANs: A Theoretical and Practical Guide; Sams Publishing: Carmel, IN, USA, 2002. [Google Scholar]
  9. Hwang, T.; Yang, C.; Wu, G.; Li, S.; Li, G.Y. OFDM and its wireless applications: A survey. IEEE Trans. Veh. Technol. 2008, 58, 1673–1694. [Google Scholar] [CrossRef]
  10. Bury, A.; Egle, J.; Lindner, J. Diversity comparison of spreading transforms for multicarrier spread spectrum transmission. IEEE Trans. Commun. 2003, 51, 774–781. [Google Scholar] [CrossRef]
  11. Başar, E.; Aygölü, Ü.; Panayırcı, E.; Poor, H.V. Orthogonal frequency division multiplexing with index modulation. IEEE Trans. Signal Process. 2013, 61, 5536–5549. [Google Scholar] [CrossRef]
  12. Van Luong, T.; Ko, Y. Impact of CSI uncertainty on MCIK-OFDM: Tight closed-form symbol error probability analysis. IEEE Trans. Veh. Technol. 2017, 67, 1272–1279. [Google Scholar] [CrossRef]
  13. Van Luong, T.; Ko, Y. Spread OFDM-IM with precoding matrix and low-complexity detection designs. IEEE Trans. Veh. Technol. 2018, 67, 11619–11626. [Google Scholar] [CrossRef]
  14. Mao, T.; Wang, Z.; Wang, Q.; Chen, S.; Hanzo, L. Dual-mode index modulation aided OFDM. IEEE Access 2016, 5, 50–60. [Google Scholar] [CrossRef]
  15. Rahman, M.H.; Sejan, M.A.S.; Aziz, M.A.; Baik, J.I.; Kim, D.S.; Song, H.K. Deep learning-based improved cascaded channel estimation and signal detection for reconfigurable intelligent surfaces-assisted MU-MISO systems. IEEE Trans. Green Commun. Netw. 2023, 7, 1515–1527. [Google Scholar] [CrossRef]
  16. Aziz, M.A.; Rahman, M.H.; Sejan, M.A.S.; Baik, J.I.; Kim, D.S.; Song, H.K. Spectral Efficiency Improvement Using Bi-Deep Learning Model for IRS-Assisted MU-MISO Communication System. Sensors 2023, 23, 7793. [Google Scholar] [CrossRef] [PubMed]
  17. Li, A. Deep Learning for Multi-Carrier Signal Reception. Ph.D. Thesis, University of Surrey, Guildford, UK, 2022. [Google Scholar]
  18. Ye, H.; Li, G.Y.; Juang, B.H. Power of deep learning for channel estimation and signal detection in OFDM systems. IEEE Wirel. Commun. Lett. 2017, 7, 114–117. [Google Scholar] [CrossRef]
  19. Van Luong, T.; Ko, Y.; Vien, N.A.; Nguyen, D.H.; Matthaiou, M. Deep learning-based detector for OFDM-IM. IEEE Wirel. Commun. Lett. 2019, 8, 1159–1162. [Google Scholar] [CrossRef]
  20. Yıldırım, Y.; Özer, S.; Cırpan, H.A. Deep receiver design for multi-carrier waveforms using cnns. In Proceedings of the 2020 43rd International Conference on Telecommunications and Signal Processing (TSP), Milan, Italy, 7–9 July 2020; pp. 31–36. [Google Scholar]
  21. Jiang, P.; Wen, C.K.; Jin, S.; Li, G.Y. Dual CNN-based channel estimation for MIMO-OFDM systems. IEEE Trans. Commun. 2021, 69, 5859–5872. [Google Scholar] [CrossRef]
  22. Aziz, M.A.; Rahman, M.H.; Sejan, M.A.S.; Tabassum, R.; Hwang, D.D.; Song, H.K. Deep Recurrent Neural Network Based Detector for OFDM With Index Modulation. IEEE Access 2024, 12, 89538–89547. [Google Scholar] [CrossRef]
  23. Zhu, Y.; Wang, B.; Li, J.; Zhang, Y.; Xie, F. Y-shaped net-based signal detection for OFDM-IM systems. IEEE Commun. Lett. 2022, 26, 2661–2664. [Google Scholar] [CrossRef]
  24. O’shea, T.; Hoydis, J. An introduction to deep learning for the physical layer. IEEE Trans. Cogn. Commun. Netw. 2017, 3, 563–575. [Google Scholar] [CrossRef]
  25. Dörner, S.; Cammerer, S.; Hoydis, J.; Ten Brink, S. Deep learning based communication over the air. IEEE J. Sel. Top. Signal Process. 2017, 12, 132–143. [Google Scholar] [CrossRef]
  26. Aoudia, F.A.; Hoydis, J. End-to-end learning of communications systems without a channel model. In Proceedings of the 2018 52nd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 28–31 October 2018; pp. 298–303. [Google Scholar]
  27. Van Luong, T.; Ko, Y.; Vien, N.A.; Matthaiou, M.; Ngo, H.Q. Deep energy autoencoder for noncoherent multicarrier MU-SIMO systems. IEEE Trans. Wirel. Commun. 2020, 19, 3952–3962. [Google Scholar] [CrossRef]
  28. Kim, M.; Lee, W.; Cho, D.H. A novel PAPR reduction scheme for OFDM system based on deep learning. IEEE Commun. Lett. 2017, 22, 510–513. [Google Scholar] [CrossRef]
  29. Felix, A.; Cammerer, S.; Dörner, S.; Hoydis, J.; Ten Brink, S. OFDM-autoencoder for end-to-end learning of communications systems. In Proceedings of the 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Kalamata, Greece, 25–28 June 2018; pp. 1–5. [Google Scholar]
  30. Oshiro, S.; Toma, T.; Wada, T. Performance Comparison of Autoencoder based OFDM Communication System with Wi-Fi. Int. J. Comput. Sci. Netw. Secur. IJCSNS 2023, 23, 172–178. [Google Scholar]
  31. Youssef, M.M.; Ibrahim, M.; Abdelhamid, B. Deep Learning-aided Channel Estimation For Universal Filtered Multi-carrier Systems. In Proceedings of the 2023 40th National Radio Science Conference (NRSC), Giza, Egypt, 30 May–1 June 2023; Volume 1, pp. 159–166. [Google Scholar]
  32. Van Luong, T.; Ko, Y.; Matthaiou, M.; Vien, N.A.; Le, M.T.; Ngo, V.D. Deep learning-aided multicarrier systems. IEEE Trans. Wirel. Commun. 2020, 20, 2109–2119. [Google Scholar] [CrossRef]
  33. Xu, C.; Van Luong, T.; Xiang, L.; Sugiura, S.; Maunder, R.G.; Yang, L.L.; Hanzo, L. Turbo detection aided autoencoder for multicarrier wireless systems: Integrating deep learning into channel coded systems. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 600–614. [Google Scholar] [CrossRef]
  34. Ali, M.H.E.; Rabeh, M.L.; Hekal, S.; Abbas, A.N. Deep learning gated recurrent neural network-based channel state estimator for OFDM wireless communication systems. IEEE Access 2022, 10, 69312–69322. [Google Scholar]
  35. Rahman, M.H.; Sejan, M.A.S.; Aziz, M.A.; Kim, D.S.; You, Y.H.; Song, H.K. Deep Convolutional and Recurrent Neural-Network-Based Optimal Decoding for RIS-Assisted MIMO Communication. Mathematics 2023, 11, 3397. [Google Scholar] [CrossRef]
  36. Chen, S.; Guo, W. Auto-encoders in deep learning—A review with new perspectives. Mathematics 2023, 11, 1777. [Google Scholar] [CrossRef]
  37. Basar, E.; Wen, M.; Mesleh, R.; Di Renzo, M.; Xiao, Y.; Haas, H. Index modulation techniques for next-generation wireless networks. IEEE Access 2017, 5, 16693–16746. [Google Scholar] [CrossRef]
  38. Mao, T.; Wang, Q.; Wang, Z.; Chen, S. Novel index modulation techniques: A survey. IEEE Commun. Surv. Tutorials 2018, 21, 315–348. [Google Scholar] [CrossRef]
  39. Nosouhian, S.; Nosouhian, F.; Khoshouei, A.K. A Review of Recurrent Neural Network Architecture for Sequence Learning: Comparison Between LSTM and GRU. 2021. Available online: https://www.preprints.org/manuscript/202107.0252/v1 (accessed on 18 November 2024).
  40. Dey, R.; Salem, F.M. Gate-variants of gated recurrent unit (GRU) neural networks. In Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA, 6–9 August 2017; pp. 1597–1600. [Google Scholar]
  41. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef]
  42. Zhou, B.; Chen, Q. A tutorial on minimum mean square error estimation. Southwest Jiaotong Univ. Sichuan China Tech. Rep. 2015. [Google Scholar] [CrossRef]
  43. Hassan, E.; Shams, M.Y.; Hikal, N.A.; Elmougy, S. The effect of choosing optimizer algorithms to improve computer vision tasks: A comparative study. Multimed. Tools Appl. 2023, 82, 16591–16633. [Google Scholar] [CrossRef] [PubMed]
  44. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  45. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: https://arxiv.org/abs/1603.04467 (accessed on 18 November 2024).
  46. Zhang, X.; Seyfi, T.; Ju, S.; Ramjee, S.; El Gamal, A.; Eldar, Y.C. Deep learning for interference identification: Band, training SNR, and sample selection. In Proceedings of the 2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Cannes, France, 2–5 July 2019; pp. 1–5. [Google Scholar]
  47. Simon, M.K.; Alouini, M.S. Digital Communication over Fading Channels, 2nd ed.; Wiley: Hoboken, NJ, USA, 2005. [Google Scholar] [CrossRef]
  48. Wu, Y.; Liu, L.; Bae, J.; Chow, K.H.; Iyengar, A.; Pu, C.; Wei, W.; Yu, L.; Zhang, Q. Demystifying learning rate policies for high accuracy training of deep neural networks. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 1971–1980. [Google Scholar]
  49. Crawford, J.; Ko, Y. Low complexity greedy detection method with generalized multicarrier index keying OFDM. In Proceedings of the 2015 IEEE 26th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Hong Kong, China, 30 August–2 September 2015; pp. 688–693. [Google Scholar]
  50. Sboui, L.; Rezki, Z.; Sultan, A.; Alouini, M.S. A new relation between energy efficiency and spectral efficiency in wireless communications systems. IEEE Wirel. Commun. 2019, 26, 168–174. [Google Scholar] [CrossRef]
Figure 1. A general configuration of the MC-AE system.
Figure 2. The proposed GRU-MC-AE network model.
Figure 3. The internal structure of a GRU model.
Figure 4. Classification performance metrics of the proposed GRU-based MC-AE model for the (4, 16) data combination trained at 7 dB SNR. (a) Precision-recall curve and (b) loss performance over epochs.
Figure 5. Comparative BLER performance of the proposed model for different learning rates under perfect CSI condition with (4, 16) setup.
Figure 6. The BLER performance of the proposed GRU-based MC-AE system for varying batch sizes and epochs.
Figure 7. BLER performance of the proposed model for different training SNRs under perfect channel condition with (4, 16) setup.
Figure 8. BLER performance of the proposed model for different values of M under perfect channel condition.
Figure 9. The BLER performance comparison between the proposed model and other models under perfect channel conditions at an SE of 1 bps/Hz.
Figure 10. BER performance comparison of the proposed model with other models under perfect channel conditions at an SE of 1 bps/Hz.
Figure 11. Comparison of BLER performance between the proposed model and other models under imperfect channel conditions, with an SE of 1 bps/Hz.
Figure 12. The SE and EE performance of the proposed GRU-based MC-AE system for configurations (N, M) = (4, 16), (4, 32), and (4, 64) across SNR values. (a) SE performance. (b) EE performance.
Table 1. The simulation parameters for the proposed system.

Parameters | Value
Training samples | 200,000
Testing samples | 30,000
Fading channel | Rayleigh fading channel
Noise type | AWGN
Number of hidden layers | 256, 128
Learning rate | 0.01, 0.005, 0.001, 0.0005
Batch size | 64, 128, 256, 512
Number of epochs | 250, 500, 1000
Training SNR (dB) | 5, 7, 10, 15
(N, M) | (4, 16), (4, 32), (4, 64)
Optimizer | Adam
Table 2. Computational complexity comparison of the proposed GRU-based MC-AE system with other systems.

Model Name | Complexity per Sample (ms)
OFDM [9] | 0.032
S-OFDM [10] | 0.121
OFDM-IM [11] | 0.107
DNN-based MC-AE [32] | 0.028
Proposed MC-AE | 0.049
Table 3. Computational complexity of the proposed GRU-based MC-AE system evaluated for various training SNRs.

Training SNR (dB) | Complexity per Sample (ms)
5 | 0.04995
7 | 0.049
10 | 0.0517
15 | 0.05281
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
