Article

Machine Learning-Assisted Secure Random Communication System

by
Areeb Ahmed
* and
Zoran Bosnić
University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, 1000 Ljubljana, Slovenia
*
Author to whom correspondence should be addressed.
Entropy 2025, 27(8), 815; https://doi.org/10.3390/e27080815
Submission received: 20 June 2025 / Revised: 17 July 2025 / Accepted: 28 July 2025 / Published: 29 July 2025
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)

Abstract

Machine learning techniques have revolutionized physical layer security (PLS) and provided opportunities for optimizing the performance and security of modern communication systems. In this study, we propose the first machine learning-assisted random communication system (ML-RCS). It comprises a pretrained decision tree (DT)-based receiver that extracts binary information from the transmitted random noise carrier signals. The ML-RCS employs skewed alpha-stable ( α -stable) noise as a random carrier to encode the incoming binary bits securely. The DT model is pretrained on an extensively developed dataset encompassing all the selected parameter combinations to generate and detect the α -stable noise signals. The legitimate receiver leverages the pretrained DT and a predetermined key, specifically the pulse length of a single binary information bit, to securely decode the hidden binary bits. The performance evaluations included the single-bit transmission, confusion matrices, and a bit error rate (BER) analysis via Monte Carlo simulations. The fact that the BER reached 10−3 confirms the ability of the proposed system to establish successful secure communication between a transmitter and legitimate receiver. Additionally, the ML-RCS provides an increased data rate compared to previous random communication systems. From the perspective of security, the confusion matrices and computed false negative rate of 50.2% demonstrate the failure of an eavesdropper to decode the binary bits without access to the predetermined key and the private dataset. These findings highlight the potential ability of unconventional ML-RCSs to promote the development of secure next-generation communication devices with built-in PLSs.

1. Introduction

Machine learning (ML) has transformed all aspects of contemporary communication systems, such as signal processing, channel estimation, resource allocation, error correction, and security protocols. This has enhanced the efficiency, flexibility, and reliability of the current 5G and next-generation 6G systems [1,2]. Specifically, regarding signal detection and processing, model-driven deep learning networks have enhanced multiple-input multiple-output (MIMO) detection performance across various channel environments by optimizing trainable parameters [3]. Similarly, within channel estimation, prediction, and compression, new ML approaches that are capable of adapting to changing channel conditions through online training and occasional retraining have been presented for multicarrier systems [4]. ML has also aided in the active resource management of the previous 5G spectrum and more advanced networks for radio access network (RAN) slicing [5]. In addition, the application of machine learning with a rateless scheme for error correction and decoding has established future trends in wireless communications [6]. Moreover, the integration of ML with conventional protocols in order to optimize every component of communication systems has been extensively explored. Additionally, its ability to enhance security in the lower open system interconnection (OSI) layers is also attracting attention with regard to future-generation security schemes.
In the above context of OSI layers, the physical layer security (PLS) is considered the front wall of any communication system. This enhances confidentiality without relying solely on cryptographic methods. It prevents unauthorized access to the first layer by leveraging the unique characteristics of the communication channels [7]. Boosting the PLS of all communication systems is the most critical requirement of the current global digital era. Recent technological advancements, new challenges, innovative solutions, and the prospects for PLS techniques with regard to the present 5G systems and upcoming 6G networks have gained considerable attention [8,9]. The security of 5G access networks has evolved over the decades, and researchers are now dedicated to the development of physical-level security architectures from scratch. This includes the use of various techniques, such as millimeter-wave communication, PLS coding, nonorthogonal multiple access, massive MIMO, and other advanced and conventional methods. It is also necessary to meet the needs of the main 6G use cases, including autonomous cyber–physical systems, machine communications, and ultralow-latency communications [10,11]. PLS has been identified as a potential solution to the underlying problems associated with traditional encryption methods, but it is considered to have several limitations [12]. As a result, its usage has diversified into other areas, such as military communications, vehicular networks, the internet of things (IoT), industrial control systems, and satellite networks [13,14,15,16,17]. These developments call for the creation of new PLS mechanisms and the assessment of previous mechanisms from different angles.
This paper contributes to the existing literature in the following ways:
  • Incorporation of ML into RCSs: This paper contributes to PLS by proposing the first possible configuration of a machine learning-assisted random communication system (ML-RCS).
  • Enhanced data rate and security: Our proposed user and device authentication model shows increased performance and security between a transmitter and legitimate receiver. It strengthens the PLS by covertly conveying the binary information by utilizing noise as the carrier signals. Moreover, the incorporation of an ML algorithm provides increased data rates in comparison to the previously proposed models.
  • Baseline model and benchmark for comparison: The paper can be considered as presenting a baseline model for encrypting and decrypting α -stable noise signals by utilizing ML algorithms. The proposed methodology for presenting simulation results can be used as a benchmark to compare any ML-RCS proposed in the future.
The remainder of this paper is organized as follows. Section 2 reviews the role of machine learning in physical layer security. The theoretical background, practical utility of α-stable noise, and associated research are presented in the first part of Section 3, followed by a general description of decision trees and their ability to address noise signals in the second part. The design of the proposed system and some of the primary results are presented in Section 4 and Section 5. In Section 6, we review the necessary performance criteria used to evaluate RCSs, followed by the complete performance analysis of the proposed ML-RCS through Monte Carlo simulations. Finally, we provide concluding remarks and directions for future research in Section 7.

2. Machine Learning for Physical Layer Security

ML has recently evolved into a potential candidate for fortifying PLS and achieving low-latency communication through an open radio access network architecture. It offers adaptive techniques that are based on a data-driven approach to resisting incoming threats and dynamically adjusting the defenses of 5G and 6G systems [18]. The contributions of machine learning to strengthening PLS can be categorized into three domains:
  • User and device authentication via ML.
  • Intrusion detection systems via ML.
  • Confidentiality using ML.
1. In the case of authentication, ML techniques, such as decision trees (DTs), support vector machines (SVMs), and k-nearest neighbors (KNNs), exploit physical channel features, such as channel state information and time of arrival, to achieve reliable results for MIMO systems and mobile scenarios [19]. Channel analysis [20] and signal features [21] can also help to determine the class in order to identify or detect potential attacks. This shows the versatility of ML for elevating PLS.
2. For intrusion detection, ML classifiers, such as SVMs, KNNs, DTs, and other boosting algorithms, have already been employed to perform attack identification in wireless systems, automation systems, and IoT networks [22,23,24]. Advanced techniques, including autoencoders [25] and graph neural networks [26], have also utilized temporal and multivariate features for intrusion detection. Moreover, versatile approaches have been adopted in order to identify unknown attacks on communication systems; these include simultaneous classification in address jamming [27] and incorporating perimeter intrusion detection with the generation of a curated dataset [28].
3. Specifically, to ensure confidentiality between a transmitter and legitimate receiver, the channel state information (CSI) has been exploited as a secure key in various studies. For example, eavesdropping attempts have been rebuffed by the use of secure beamforming vectors adapted to the CSI, in such a way that the original data can be recovered only by legitimate users [29]. Moreover, machine learning-based transmitter antenna selection with partial CSI (the CSI of an eavesdropper is unknown) and full CSI (the CSI of an eavesdropper is known) has been explored by deploying an SVM and naive Bayes to maximize channel secrecy [30]. Overall, ML has contributed significantly to the enhancement of PLS in all the categories discussed above.
Specifically, DT algorithms play a key role in the PLS enhancement of communication systems. They provide a versatile method for intrusion detection, channel authentication, and confidentiality. As ML algorithms, they have been used extensively in the development of underlying models that focus on data security and privacy [31]. Owing to the low computational complexity of DTs, it is widely recognized that they ensure secure communication across diverse fields, including biomedicine, mobile healthcare, power systems, automated systems, cloud computing, wireless sensor networks, and cybersecurity [32,33,34,35,36,37,38]. Less complex DTs are now being specifically designed in order to optimize the privacy and security of communication systems [39,40]. Therefore, compared with traditional static approaches, they are now considered more effective for addressing the unique challenges of PLS in communication systems. More broadly, well-known ML algorithms, such as neural networks, naive Bayes, logistic regression, random forests, support vector machines, decision trees, and k-nearest neighbors, have significantly improved accuracy, recall, and precision across different training datasets. However, the performance of ML algorithms incorporated within conventional communication systems begins to diminish after a certain security benchmark is achieved [41]. This highlights the vulnerabilities of ML approaches when they are incorporated into conventional communication systems. Moreover, it is necessary to explore the possibility of incorporating ML into unconventional communication systems to enhance the PLS.
Among the various unconventional communication mechanisms, the utilization of noise as a random carrier can be integrated into ML algorithms. Specifically, if secure communication must be established in the additive white Gaussian noise (AWGN) channel, alpha-stable ( α -stable) noise can provide coverage by exploiting its non-Gaussian distribution characteristics. In general, it has been used to model channels that exhibit heavy tails, skewed data, and responses that are vulnerable to unpredictable fluctuations. However, its ability to covertly convey binary information to legitimate receivers has also been exploited. The key advantage of utilizing α-stable noise as a carrier is twofold: first, it safeguards encrypted binary information, and second, it masks the presence of communication [42]. Random communication systems (RCSs) have initiated the utilization of symmetric α-stable (S α S) and skewed α-stable (Sk α S) noise as random carrier signals to establish covert communication between a transmitter and legitimate receiver [43,44]. Both schemes utilize static estimation methods, such as the sinc estimator and logarithmic estimator, to decrypt the transmitted random carrier signals at the legitimate receiver by utilizing a predetermined pulse length. The most refined design was later proposed on the basis of the maximum extreme value method-based estimator [45]. This provided an optimized version of an RCS with respect to the bit error rate. This was followed by the development of the first synchronization method for RCSs, paving the way for their practical implementation in real-world applications, such as military communications, vehicular networks, the internet of things, industrial control systems, and satellite networks [46]. Studies have also been conducted to expand the symbol capacity by introducing multiple-level M-ary modulation schemes [47]. This resulted in an increase in the data rate achievable by RCSs. The PLS achievable by RCSs is theoretically guaranteed due to the nonexistence of higher-order statistical properties in α-stable noise. This results in an inherent resistance to eavesdropping as a random carrier. Therefore, the initially proposed architectures focused on improving the bit error rate (BER). Nevertheless, several RCS architectures concatenated with other entities, such as inverse systems and signal multiplexers, were later proposed [48,49]. The primary objective of these studies was to extend the parameters or private keys from a single parameter, that is, the pulse length, to multiple parameters. This was because these parameters or private keys govern the coverage of RCSs. The approach of deploying various static estimators along with the integration of several entities in an RCS has improved the performance of the BER and security [50]. On the other hand, the practicality of RCSs for the internet of things has also been tested, which has strengthened its potential for real world applications [51]. However, there is still a significant gap between conventional communication systems and α -stable noise-based unconventional RCSs in terms of the data rate scale. ML algorithms could play a key role in reducing this gap. This could further prove their applicability for strengthening the PLS of future 6G nano-communication systems if they can compete with conventional communication in terms of the data rate [52].
Considering these factors, we propose a novel machine learning-assisted secure random communication system (ML-RCS). This is the first attempt to deploy any ML algorithm to encrypt and decrypt the α -stable noise at a transmitter and legitimate receiver, respectively. The proposed method belongs to the category of ML-based user and device authentications. The proposed ML-RCS establishes covert communication in the AWGN channel and strengthens the PLS of the system. The transmitter ‘T’ generates Sk α S noise as a random carrier on the basis of the incoming binary bits. The inherent symmetry of the skewness parameter of the Sk α S noise is exploited to transmit binary bits ‘0’ and ‘1’. The legitimate receiver ‘LR’ uses the least complex ML algorithm, that is, a DT, which is pretrained on a developed α-stable noise signal model (α-SNSM). The α-SNSM comprises all the combinations of parameters required to generate the Sk α S noise as a random carrier. Moreover, we include a unique private key, that is, the pulse length, in the α-SNSM while training the DT-based receiver, and we assume that it is known only to the T and LR. Only the LR can decrypt the transmitted Sk α S noise and retrieve the binary information via the pretrained DT and a unique private key. However, no eavesdropper can decrypt the transmitted Sk α S noise and retrieve the binary information without knowledge of the pretrained DT and the unique private key. Integrating ML into an RCS in this unique way produces an extremely efficient BER performance and enhanced covertness in comparison with previous RCSs, which were based on static estimators. The proposed ML-RCS can retrieve a unit binary bit from transmitted random carrier signals by utilizing an extremely small number of Sk α S noise samples. Hence, it provides a significantly increased data rate in comparison with previously proposed RCSs.
We chose DTs because of their probable suitability as receivers operating on alpha-stable noise: they are non-parametric and robust to outliers. Unlike prominent ML algorithms such as KNNs, SVMs, and neural networks, DTs do not rely on distance metrics or distributional assumptions, which become unreliable under heavy-tailed alpha-stable noise. They have also been shown to handle nonlinear decision boundaries well. Most importantly, they require minimal computational complexity for real-time applications, making them a feasible choice for signal classification in non-Gaussian noise conditions. Therefore, we expect DTs to be a practical choice with which to establish a baseline ML-RCS model that can be used as a benchmark in the future. Their use makes the ML-RCS a potential candidate for securing future communication systems in which security through unconventional mechanisms is explicitly needed.

3. Allied Concepts

In this study, we investigate the integration of an ML algorithm with an RCS; it is therefore necessary to delve into the principles of the methods used. The proposed ML-RCS begins by harnessing the antipodal characteristics of the SkαS noise carrier signals for transmitting binary bits. Therefore, in this section, we first discuss the general structure of the α-stable noise and its special cases. A DT is then trained to classify the data retrieved by the LR. To better understand the training and decision-making phases of DTs, we present them in detail and explain the variants of DTs utilized. We then explore the DT decisions obtained when dealing with the α-stable noise signals.

3.1. Alpha Stable ( α -Stable) Distribution

α -stable noise, a well-known random noise model, has been generally identified as a severe channel impairment in communication systems because of its unique properties. In contrast to AWGN, the heavy-tailed distribution of α-stable noise can cause significant disruption to communication channels. α -stable noise simulates impulsive channel conditions. It is highly relevant in contexts such as vehicular networks, space communications, wireless systems, and IoT scenarios, which are susceptible to excessive interference [53,54,55].
The α-stable distribution includes several subcategories of α-stable noise: symmetric α-stable (SαS) and skewed α-stable (SkαS). The overall statistical parameters of the distribution are denoted by S_α(β, γ, μ), where a random variable R follows R ~ S_α(β, γ, μ). The parameters responsible for defining the distribution are as follows.
  • The characteristic exponent (α) controls the impulsiveness of the distribution and varies within α ∈ (0, 2].
  • The skewness (β) governs the asymmetry of the distribution, with β = −1 creating a leftward skew and β = 1 creating a rightward skew, falling within β ∈ [−1, 1].
  • The scale parameter (γ) determines the dispersion or scaling of the distribution, with γ ∈ (0, ∞).
  • The location parameter (μ) shifts the distribution along the horizontal axis, with μ ∈ (−∞, ∞).
The basic structure of the α-stable noise R ~ S_α(β, γ, μ) depends on its characteristic function, derived in [56]:

$$
\phi(\theta) =
\begin{cases}
\exp\left\{ j\mu\theta - \gamma^{\alpha}|\theta|^{\alpha}\left[1 - j\beta\,\mathrm{sign}(\theta)\tan\left(\frac{\alpha\pi}{2}\right)\right] \right\}, & \text{if } \alpha \neq 1,\\
\exp\left\{ j\mu\theta - \gamma|\theta|\left[1 + j\beta\frac{2}{\pi}\,\mathrm{sign}(\theta)\ln|\theta|\right] \right\}, & \text{if } \alpha = 1.
\end{cases}
\tag{1}
$$
In this study, α-stable noise was produced via the approach described in [57]. A range of probability density functions (PDFs) for R~ S α (β, γ , μ) were produced, as illustrated in Figure 1, by systematically varying the parameters α and β within their respective ranges while keeping μ = 0 and γ = 1 constant. This was performed to understand the basic roles of α and β in dealing with α-stable noise.
Note: Specific cases within the α-stable family, such as R~ S α = 2 (β = 0, γ , μ), R~ S α = 1 (β = 0, γ , μ), and R~ S α = 0.5 (β = 1, γ , μ), represent Gaussian, Cauchy, and Levy noises, respectively, as shown in Figure 2. In contrast, Sk α S noise is represented by R~ S α (+β/−β, γ , μ), and its characteristics are illustrated in Figure 1.
When α < 2, the corresponding distribution does not possess a finite second-order moment, and when α ≤ 1, even the first-order moment does not exist. As shown in [56,57], α-stable noise accurately models heavy-tailed impulsive events that are orders of magnitude larger than the channel impairments induced by AWGN. This characteristic makes it virtually undetectable by illegitimate receivers or eavesdroppers, while also conferring natural immunity to channel distortions. Consequently, α-stable noise is an appealing candidate for physical layer covert communications.
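To illustrate this heavy-tailed behavior, the following short Python sketch (our own illustration, not from the paper) draws α-stable samples with scipy.stats.levy_stable and shows that the empirical variance fails to stabilize when α < 2, in contrast to the Gaussian case α = 2; scipy's parameterization may differ slightly from the S_α(β, γ, μ) convention used above.

```python
# Illustrative check that alpha-stable samples with alpha < 2 lack a finite second moment.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)

def running_variance(alpha, beta, n=100_000):
    """Sample variance computed over growing prefixes of one alpha-stable sample path."""
    x = levy_stable.rvs(alpha, beta, loc=0.0, scale=1.0, size=n, random_state=rng)
    return {m: float(np.var(x[:m])) for m in (100, 1_000, 10_000, 100_000)}

print("alpha = 2.0 (Gaussian):   ", running_variance(2.0, 0.0))   # settles to a finite value
print("alpha = 1.1 (heavy tails):", running_variance(1.1, 0.8))   # keeps drifting and jumping
```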

3.2. Decision Trees

DTs enable one to conveniently perform simple and intelligent decision-making within complex processes by dividing them into smaller and manageable components. This enables straightforward interpretation, which represents a powerful yet easy way to analyze several variables. Several approaches are available for DT construction. However, C4.5 [58], the enhanced version of iterative dichotomizer 3 (ID3), and classification and regression trees (CARTs) [59], are considered the most reliable algorithms when decision-making on complex data is required through a single tree. If the two algorithms are compared on the basis of accuracy, the C4.5 algorithm is more suitable for the construction of DTs with datasets that possess negligible or no noise. In contrast, the performance of CART algorithms has been proven to be better for noisy datasets [60]. It has been found to be comparatively successful, particularly in the presence of random noise in a dataset. Moreover, its accuracy has been extensively tested on standardized datasets that have already added random noise as a class variable with varying noise power. It has also proven its applicability to datasets containing label noise during the training phase of supervised classification [61]. Specifically, when conventional communication systems are incorporated, a CART significantly enhances the classification accuracy compared with static methods. It has also shown robustness in detecting and estimating mixed noise signals in challenging communication environments, such as fading channels [62]. This makes the CART approach a potential candidate for the efficient estimation of Sk α S noise carrier signals for RCSs. Because this is an underlying principle that enables the retrieval of binary information via the LR in the proposed ML-RCS, we proceed by reviewing the basic methodology behind it.

CART Classifier Algorithm

The CART algorithm was introduced by Breiman et al. in 1984 [59]. It uses a binary splitting mechanism to construct DTs. The splits are evaluated on the basis of the Gini index (GI), which measures the impurity of a particular node and is the deciding factor in selecting the splitting attribute variable. For a dataset D, the GI is computed as follows:
$$\mathrm{GI}(D) = 1 - \sum_{i=1}^{m} p_i^{2} \tag{2}$$
where p_i denotes the proportion of instances belonging to the i-th class among the m classes. A value of GI = 0 reflects a perfectly pure node, whereas equal class proportions reflect the maximum impurity. The GI of a split, GIS, is given by
$$\mathrm{GIS}(D) = \frac{n_1}{n}\,\mathrm{GI}(D_1) + \frac{n_2}{n}\,\mathrm{GI}(D_2) \tag{3}$$
when dataset D is split into subsets D_1 and D_2 of sizes n_1 and n_2, respectively, and n is the size of D. To classify a specific class, the CART algorithm proceeds with the split that attains the minimum GIS over the resulting subsets. The basic structure of the DT is shown in Figure 3.
Note: The topmost node in the tree is the root node representing the entire dataset. All the nodes are split on the basis of the attribute variable, resulting in branches and internal nodes. A node that does not split further is known as a leaf node. The leaves represent the final decisions of the tree and contain the target value as the predicted outcome, which is referred to as the ‘class’.
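To make the splitting criterion concrete, the following is a minimal Python sketch (not taken from the paper; function names are illustrative) of the GI and GIS computations in Equations (2) and (3). In practice, a CART implementation such as scikit-learn's DecisionTreeClassifier evaluates these quantities internally.

```python
# Minimal sketch of the CART splitting criterion of Eqs. (2) and (3).
import numpy as np

def gini(labels: np.ndarray) -> float:
    """GI(D) = 1 - sum_i p_i^2 over the class proportions p_i."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

def gini_of_split(labels_left: np.ndarray, labels_right: np.ndarray) -> float:
    """GIS(D) = (n1/n) * GI(D1) + (n2/n) * GI(D2)."""
    n1, n2 = len(labels_left), len(labels_right)
    n = n1 + n2
    return (n1 / n) * gini(labels_left) + (n2 / n) * gini(labels_right)

# A perfectly separating split drives GIS to 0 (pure child nodes).
print(gini(np.array([0, 0, 1, 1])))                                 # 0.5 (maximum binary impurity)
print(gini_of_split(np.array([0, 0, 0]), np.array([1, 1, 1])))      # 0.0 (pure split)
```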
CART Classifier Variations: Fine tree (FT), coarse tree (CT), and medium tree (MT) are variations of DTs; these differ in their stopping criteria when training a tree by recursively splitting the data. Table 1 summarizes the three variations used in this study.

4. Transmission

The proposed ML-RCS model is shown in Figure 4. It consists of a transmitter ‘T’, legitimate receiver ‘LR’, and illegitimate receiver ‘ILR’. The ML-RCS operates in the AWGN channel. Unlike the ILR, the T and LR are assumed to be perfectly synchronized by the method given in [46] and have predetermined pulse lengths and parameters, which are utilized to generate the Sk α S noise as random carriers. In contrast, the ILR is used to analyze the covertness of the ML-RCS from the perspective of an eavesdropper. To depict the worst-case scenario in this context, the ILR can presumably operate via the same CART-based classifier. However, it does not have knowledge of the predetermined pulse length or the parameters utilized by the T and LR.

Transmitter ‘T’

The transmitter ‘T,’ shown in Figure 4, operates on the principle of stochastic process shift keying or noise shift keying (NSK). It transmits the incoming binary information, namely ‘0’ and ‘1’, by generating R0 ~ S_α0(β0, γ, μ) and R1 ~ S_α1(β1, γ, μ) noises as the random carrier signals. Because β1 = −β0, the symmetry of the skewness parameters β0 and β1 is exploited to shift the SkαS noise distributions to the left or right side, respectively. This is intentionally performed to generate noise samples from the distributions S_α0(β0, γ, μ) and S_α1(β1, γ, μ) to represent the corresponding ‘0’ and ‘1’ values. The term Δβ ≜ β1 − β0 is used to monitor the difference between β0 and β1. The samples are generated in a predetermined amount, known as the pulse length T_s·N or simply N, since T_s = 1 is considered throughout this study. A unit bit ‘1’ or ‘0’ is therefore encoded in the following form:

$$R = \{ r_1, r_2, \ldots, r_N \} \tag{4}$$

where N is the total number of noise samples generated over the pulse duration T_s and corresponds to a single binary bit. The binary information ‘0’ and ‘1’, hidden in the random carriers R0 ~ S_α0(β0, γ, μ) and R1 ~ S_α1(β1, γ, μ), respectively, cannot be retrieved without exact knowledge of T_s·N [46,50]. This method is known as skewed α-stable noise shift keying (SkαSNSK). The complexity of R0 ~ S_α0(β0, γ, μ) and R1 ~ S_α1(β1, γ, μ) can be further increased by selecting different characteristic exponents α0 and α1. However, they are kept the same, α0 = α1 = α, throughout the study in order to analyze the effect of hiding the binary information only in the skewness parameters. Similarly, the scale and location parameters are predetermined between the T and LR as γ = 1 and μ = 0, respectively. Consequently, the factors responsible for generating unique random carriers in different settings are α, β, and N.
Any specific values of α , β , and N , within their defined ranges, must be selected in advance by the T and LR to securely transmit the hidden binary information via Sk α SNSK. These values act as secure keys in the data retrieval process via the LR. In this study, they are set to the values listed in Table 2.
The values of α and β , as listed in Table 2, are typically used while experimenting with α -stable noise signals. However, the values of N were derived after rigorous experimentation with the values used in [43,44,45,46,47,48,49,50]. During the experiments, we examined various combinations of the above parameter values to analyze the effects on the performance of ML-RCSs for the T and LR; we also aimed to determine a trend in the performance while exhaustively searching the parameters responsible for Sk α SNSK.
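As an illustration of the SkαSNSK encoding described above, the following is a minimal Python sketch (our own, not the authors' implementation) that maps each bit to N samples of skewed α-stable noise with skewness +β for ‘1’ and −β for ‘0’, using scipy.stats.levy_stable; scipy's parameterization may differ slightly from the S_α(β, γ, μ) convention used here, so the sketch is illustrative only.

```python
# Illustrative SkαSNSK transmitter sketch; default parameter values follow Table 2.
import numpy as np
from scipy.stats import levy_stable

def skasnsk_transmit(bits, alpha=1.1, beta=1.0, gamma=1.0, mu=0.0, N=35, rng=None):
    """Encode a bit sequence into a concatenated skewed alpha-stable carrier R."""
    rng = rng or np.random.default_rng()
    pulses = []
    for b in bits:
        skew = beta if b == 1 else -beta          # beta_1 = -beta_0 (antipodal skewness)
        pulse = levy_stable.rvs(alpha, skew, loc=mu, scale=gamma,
                                size=N, random_state=rng)
        pulses.append(pulse)
    return np.concatenate(pulses)

R = skasnsk_transmit([0, 1, 0, 1])                # demo bit stream used in Section 5.3
print(R.shape)                                    # (4 * N,) noise samples
```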

5. Reception

The proposed ML-RCS receiver works in the AWGN channel, and the transmitted Sk α SNSK noise sequences R0~ S α ( β 0 , γ , μ) and R1~ S α ( β 1 , γ , μ), that is, R or the output of the T, travel through it. Therefore, the functionality of AWGN is first explained before proceeding to the LR.

5.1. AWGN Channel

The AWGN channel A, as shown in Figure 4, distorts the transmitted noise sequences via the distribution given below
$$A \sim S_{\alpha_A = 2}(\beta_A = 0,\ \gamma_A,\ \mu_A = 0) \tag{5}$$
The impairments introduced by A are similar to the random characteristics inherently built into R. Hence, the conventional criterion ‘signal-to-noise ratio’ is not applicable for measuring the performance of unconventional RCSs. Therefore, the effects of channel impairments on R are usually analyzed via the criteria of the dispersion ratio, scale ratio, or mixed signal-to-noise ratio (MSNR). This was introduced in [63] as follows:
$$\mathrm{MSNR_{dB}} = 10 \log_{10}\left(\frac{\gamma}{\gamma_A}\right) \tag{6}$$
Using (6), the ML-RCS can be analyzed under various channel settings by changing γ A to imitate the noise variants encountered by R while propagating through the AWGN channel. The ranges of γ A and the corresponding MSNRs utilized in this study are summarized in Table 3.
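A minimal sketch of how the channel in Equations (5) and (6) can be simulated is given below; the function name and defaults are ours, and we assume that, for α_A = 2, the stable scale γ_A corresponds to a Gaussian with variance 2γ_A² under the standard S_α parameterization.

```python
# Illustrative AWGN channel sketch: gamma_A is derived from the target MSNR_dB.
import numpy as np

def awgn_channel(R, gamma=1.0, msnr_db=0.0, rng=None):
    """Return Y = R + A with MSNR_dB = 10*log10(gamma / gamma_A)."""
    rng = rng or np.random.default_rng()
    gamma_A = gamma / (10.0 ** (msnr_db / 10.0))
    sigma = np.sqrt(2.0) * gamma_A      # std of S_2(beta=0, gamma_A, 0), assuming variance 2*gamma_A^2
    return R + rng.normal(0.0, sigma, size=np.shape(R))

# Example with a stand-in carrier (in practice, R comes from the transmitter sketch).
R_demo = np.random.default_rng(1).standard_normal(4 * 35)
Y_demo = awgn_channel(R_demo, gamma=1.0, msnr_db=-5.0)
```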

5.2. Legitimate Receiver ‘LR’

The legitimate receiver LR receives noise sequence Y as
$$Y = R + A \tag{7}$$
which is accessible to both the LR and the ILR. As defined in Figure 4, the LR first performs a CART-based classification, followed by a pulse length-based classification, to retrieve a unit binary bit that is hidden in Y. The received noise sequence Y carrying the unit binary bit is defined as
$$Y = \{ y_1, y_2, \ldots, y_N \} \tag{8}$$

5.2.1. DT-Based Classifier

A CART-based classification is performed by utilizing a pretrained DT that has been trained on an α-stable noise signal dataset (α-SNSD) known only to the T and LR. The α-SNSD used for training the DT consists of random carrier signal samples together with the corresponding channel settings. The target variable to be predicted is the skewness parameter of the α-stable noise carrier. The utilized α-SNSD is expressed as
$$D = \{ (\alpha,\ \beta,\ N,\ \mathrm{MSNR_{dB}},\ y) \} \tag{9}$$
where α ∈ {0.6, 0.8, 1.1}, β ∈ {−1, −0.9, −0.8, 0.8, 0.9, 1}, and N ∈ {15, 25, 35}, as defined in Table 2, and MSNR_dB ∈ [−10, 5], as defined in Table 3. Here, y is the α-stable noise sample generated via the specified combination of α, β, N, and MSNR_dB. Each unique combination in (9) produces a different y each time, so 100 unique y values are taken per combination to train the DT. The DT-based classifier is trained on the α-SNSD to predict the target parameter β̂. The classifier learns to predict the output as follows:
$$\hat{\beta}_t = f(\alpha,\ N,\ \mathrm{MSNR_{dB}},\ y_t) \tag{10}$$
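A minimal sketch of how the α-SNSD in Equation (9) and the classifier in Equation (10) might be constructed with scikit-learn is shown below. Variable names, the Gaussian-channel assumption for α_A = 2, and the default (FT-like) tree settings are our own assumptions, since the paper does not specify the exact implementation details.

```python
# Illustrative construction of the alpha-SNSD and DT training (not the authors' code).
import numpy as np
from scipy.stats import levy_stable
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
alphas, betas = [0.6, 0.8, 1.1], [-1, -0.9, -0.8, 0.8, 0.9, 1]
pulse_lengths, msnrs_db = [15, 25, 35], np.arange(-10, 6)    # MSNR_dB in [-10, 5]
samples_per_combo = 100

X, target = [], []
for a in alphas:
    for b in betas:
        for N in pulse_lengths:
            for msnr in msnrs_db:
                gamma_A = 1.0 / (10 ** (msnr / 10.0))        # gamma = 1 throughout
                y = levy_stable.rvs(a, b, loc=0.0, scale=1.0,
                                    size=samples_per_combo, random_state=rng)
                y += rng.normal(0.0, np.sqrt(2.0) * gamma_A, samples_per_combo)
                for yi in y:
                    X.append([a, N, msnr, yi])               # features (alpha, N, MSNR_dB, y_t)
                    target.append(b)                         # label: skewness beta
dt = DecisionTreeClassifier().fit(np.array(X), np.array(target))
```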

5.2.2. Pulse Length-Based Classifier

For legitimate communication between the T and LR, the DT is trained on an α-SNSD, which consists of three different values of N. Therefore, any one value of N can be prechosen while carrying out secure communication. Specifically, from the perspective of the LR, the predicted parameter values {β̂_1, β̂_2, …, β̂_N} carrying a single binary bit are then provided to the pulse length-based classifier. It classifies the incoming {β̂_1, β̂_2, …, β̂_N} carrying a unit bit to a single classified beta β_C using the following relation:
$$
\beta_C =
\begin{cases}
\beta_0, & \text{if } \sum_{t=1}^{N} \hat{\beta}_t < 0,\\
\beta_1, & \text{if } \sum_{t=1}^{N} \hat{\beta}_t \geq 0,
\end{cases}
\tag{11}
$$
where β_0 represents the retrieval of the bit as ‘0’ and β_1 represents the retrieval of the bit as ‘1’. The procedure is repeated to retrieve all subsequent binary bits.
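The decision rule in Equation (11) can be sketched as follows; this illustrative snippet assumes the pretrained classifier dt and the feature layout from the earlier dataset sketch, which are our own assumptions rather than the paper's exact implementation.

```python
# Illustrative pulse length-based decoder implementing the Eq. (11) decision rule.
import numpy as np

def decode_bits(Y, dt, alpha, N, msnr_db):
    """Split Y into pulses of N samples and decide one bit per pulse."""
    bits = []
    for start in range(0, len(Y), N):
        pulse = Y[start:start + N]
        feats = np.column_stack([np.full(len(pulse), alpha),
                                 np.full(len(pulse), N),
                                 np.full(len(pulse), msnr_db),
                                 pulse])
        beta_hat = dt.predict(feats)                    # per-sample skewness estimates
        bits.append(0 if beta_hat.sum() < 0 else 1)     # sign of the sum selects beta_0 / beta_1
    return bits
```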

5.3. Demo Transmission

Before analyzing the performance of the ML-RCS on a large number of binary bits, we tested it on a few bits to better represent and understand the signals processed by the T and LR. The demo transmissions in this section demonstrate the proof of concept. In Figure 5, Figure 6 and Figure 7, the communication consists of the incoming binary bits and the corresponding noise sequence ‘R’ transmitted from the perspective of the T. This is followed by the received noise sequence ‘Y’, from which β̂ is predicted and β_C is computed to retrieve the binary bits from the perspective of the LR. We present the initial test of the binary bit stream ‘0,1,0,1’ in Figure 5, Figure 6 and Figure 7. In each figure, we vary one of the factors responsible for SkαSNSK (α, Δβ, and N) at a time while the other two parameters are fixed. We conducted this study to examine the trends for all possible combinations.
Figure 5, Figure 6 and Figure 7 show that the retrieval process at the T and LR improved as the SkαSNSK responsible factors (α, Δβ, and N) increased. Specifically, the predicted parameter values {β̂_1, β̂_2, …, β̂_N} that carried a single binary bit had fewer errors when greater values of α, Δβ, and N were utilized by the T and LR.
Figure 5 shows the transmitted noise sequences obtained by varying Δβ while keeping the other two parameters fixed. Figure 5 (left diagram) represents the transmission with Δβ = 1.6, where the T and LR utilize S_α=0.6(β = 0.8, γ = 1, μ = 0) for binary ‘1’ and S_α=0.6(β = −0.8, γ = 1, μ = 0) for binary ‘0’. However, in Figure 5 (right diagram), with Δβ = 2, we use the random carriers S_α=0.6(β = 1, γ = 1, μ = 0) and S_α=0.6(β = −1, γ = 1, μ = 0) for binaries ‘1’ and ‘0’, respectively. Figure 5 (left and right diagrams) indicates that increasing Δβ from 1.6 to 2 results in fewer errors for the predicted β̂. This indicates a gradual improvement in the BER performance of the ML-RCS as Δβ increases.
Similarly, we vary α in Figure 6 while keeping Δβ and N fixed to observe the trend. Specifically, in Figure 6 (left diagram), the T and LR utilize the following parameters: S_α=0.8(β = 1, γ = 1, μ = 0) for binary ‘1’ and S_α=0.8(β = −1, γ = 1, μ = 0) for binary ‘0’. However, in Figure 6 (right diagram), we use S_α=1.1(β = 1, γ = 1, μ = 0) and S_α=1.1(β = −1, γ = 1, μ = 0) for binaries ‘1’ and ‘0’, respectively. As explained previously, Figure 6 shows that increasing α from 0.8 to 1.1 has the same improving effect on the performance. The overall BER performance was expected to complement these results.
Finally, in Figure 7, we vary N while α and Δβ remain constant to analyze the potential difference in the retrieval process at the T and LR. Figure 7 (left diagram) shows the random carriers S_α=1.1(β = 0.8, γ = 1, μ = 0) for binary ‘1’ and S_α=1.1(β = −0.8, γ = 1, μ = 0) for binary ‘0’, with N = 15. In Figure 7 (right diagram), N = 25 is used instead. In conclusion, increasing N yields a better prediction of β̂. A better prediction helps achieve a better BER, which is confirmed in the next section of our paper.
The successful demo transmission reflects the ability of ML-RCSs to securely convey binary information. Moreover, the trend observed in Figure 5, Figure 6 and Figure 7 was also observed for many bits in the proposed ML-RCS. The next section focuses on this issue.

6. Results and Discussion

Following the results and trends observed in the successful demo transmissions, in this section we analyze the proposed ML-RCS for a larger number of bits. The generalized method of checking 1000 binary bits for errors [43,44,45,46,47,48,49,50,51,52] was adopted to compute the confusion matrices and BER performance.
First, the proposed ML-RCS was analyzed from the perspective of the T and LR, termed ‘intended communication’. This indicates that the T and LR are synchronized and in harmony with the user and device authentication. This is followed by the perspective of the T and ILR, which is referred to as ‘unintended communication’. This indicates that the T and ILR are synchronized but not in harmony with the user and device authentication.

6.1. Intended Communication

Because the proposed ML-RCS is the first approach to incorporate ML into RCSs, we tested it using methods commonly employed to perform individual analyses in both fields. First, we analyzed the ML-RCS via confusion matrices for clarity with respect to ML. After showing successful results with the confusion matrices, we conducted Monte Carlo simulations that targeted an ideal BER of 10−3 against the lowest value of the MSNR to reflect its effectiveness with respect to RCSs.

6.1.1. Confusion Matrices

We compute the confusion matrices for the ‘transmitted bits’ against the ‘retrieved bits’ in Figure 8. The established trend of exhaustively varying one of the SkαSNSK responsible factors (α, β, and N) while keeping the others fixed was also adopted when analyzing the ML-RCS with confusion matrices. The MSNR was kept constant throughout each confusion matrix.
Firstly, as shown in Figure 8 (left column), Δβ varies from 1.6 to 2. As in the demo transmission, the expected increase in performance is observed in the confusion matrix as Δβ increases. Second, as shown in Figure 8 (middle column), α is changed from 0.6 to 1.1 while keeping Δβ and N fixed. In line with the results of the demo transmission, we observed a similar increase in the performance of the confusion matrices for α. Finally, similar effects were also observed when varying N while keeping α and Δβ constant, as shown in Figure 8 (right column). The performance appeared to increase with every increment in N from 15 to 35.
Compared with the successful results obtained during the demo transmissions for a few transmitted bits, the previous results show that the ML-RCS achieves a favorable performance for many bits. The confusion matrices derived for all the Sk α SNSK responsible factors ( α , β , and N ) also reflect similar trends in the performance of the ML-RCS with increasing values of α , β , and N , resulting in fewer errors between the transmitted and retrieved bits.
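For reference, the bit-level confusion matrices and the derived TPR/FNR values reported here and in Section 6.2 can be computed as in the following illustrative snippet; the bit vectors are placeholders, whereas the actual figures use the full transmitted and retrieved bit streams.

```python
# Illustrative computation of a transmitted-vs-retrieved confusion matrix and TPR/FNR.
import numpy as np
from sklearn.metrics import confusion_matrix

tx_bits = np.array([0, 1, 0, 1, 1, 0])        # placeholder transmitted bits
rx_bits = np.array([0, 1, 1, 1, 1, 0])        # placeholder retrieved bits
tn, fp, fn, tp = confusion_matrix(tx_bits, rx_bits, labels=[0, 1]).ravel()
tpr = tp / (tp + fn)                          # true positive rate
fnr = fn / (tp + fn)                          # false negative rate
print(tpr, fnr)
```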

6.1.2. Bit Error Rate Analysis

We checked the trends that developed for the Sk α SNSK responsible factors ( α , β , and N ) by exhaustively examining the performance of the ML-RCS with respect to the criteria pertinent to RCSs. We generated BER vs. MSNRdB graphs by utilizing the same ranges of α , β , and N as those used for generating the confusion matrices. In Figure 9, we present a systematic BER analysis of the ML-RCS with respect to RCSs. The simulations comprised 10,000 iterations for a single BER data point at a given MSNRdB. Stabilization was achieved at a tolerance of ±0.01 for 8000 iterations to verify convergence.
In Figure 9, the effects of increasing α move horizontally along a given row, whereas the effects of increasing Δβ move vertically along a given column. Moreover, every individual BER vs. MSNRdB graph shows the effects of increasing N for fixed values of α, and reaching the target BER at the lowest value of the MSNR is considered the ideal performance. The figures reveal that, similar to the results in the previous sections, the performance of the ML-RCS increases as α increases for a fixed Δβ and N in any given row. Moreover, increasing Δβ, for a fixed α and N, also improves the BER performance. For any individual BER vs. MSNRdB graph, the performance increases as N increases. The ideal BER performance is achieved at MSNRdB = −14 for α = 1.1, Δβ = 2, and N = 35, which is the combination of the maximum values utilized for these parameters. This further confirms the trends found for α, Δβ, and N while exhaustively examining the performance of the ML-RCS, as observed in the previous section. Given the BER vs. MSNRdB results for the previously proposed RCSs [43,44,45,46,47,48,49,50,51,52], the ideal performance of the ML-RCS is achieved at a comparatively very low value of the MSNRdB. This further shows that the incorporation of a pretrained DT and an α-SNSD not only inherits the built-in security of an SkαSNSK-based RCS but also outperforms the previously proposed models in terms of BER performance.
In this study, all the results for the ML-RCS were generated via an FT-based pretrained DT. However, we present a comparative BER vs. MSNRdB analysis with CT-based and MT-based pretrained DTs in Figure 10. We present this to justify the selection of the FT-based pretrained DT for the ML-RCS over the other potential configurations. For any common combination of α, Δβ, and N, the ML-RCS achieved the ideal BER of 10−3 at the lowest value of the MSNR when utilizing the FT configuration compared with its counterparts. Therefore, the FT-based pretrained DT was chosen to record all the results in the previous sections.
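For completeness, a simplified version of the Monte Carlo BER procedure used for these curves is sketched below; it relies on the illustrative transmitter, channel, and decoder functions from the earlier sketches, and the iteration counts are reduced relative to the 10,000 iterations reported above.

```python
# Illustrative Monte Carlo BER estimation for one (alpha, beta, N, MSNR_dB) point.
import numpy as np

def ber_point(dt, alpha, beta, N, msnr_db, n_bits=1000, n_iters=100, rng=None):
    """Average bit error rate over repeated random transmissions."""
    rng = rng or np.random.default_rng()
    errors, total = 0, 0
    for _ in range(n_iters):
        bits = rng.integers(0, 2, size=n_bits)
        R = skasnsk_transmit(bits, alpha=alpha, beta=beta, N=N, rng=rng)
        Y = awgn_channel(R, gamma=1.0, msnr_db=msnr_db, rng=rng)
        rx = decode_bits(Y, dt, alpha, N, msnr_db)
        errors += int(np.sum(np.asarray(rx) != bits))
        total += n_bits
    return errors / total

# Example call (requires the pretrained `dt` from the dataset sketch):
# ber = ber_point(dt, alpha=1.1, beta=1.0, N=35, msnr_db=-10)
```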

6.2. Unintended Communication

We also analyzed the ML-RCS from the perspective of an ILR to determine its strength against potential eavesdropping. In this section, we model the ILR as a highly capable and sophisticated eavesdropper by depicting the worst-case scenario for the T and LR. As shown in Figure 4, we assume that the ILR utilizes the same DT-based classifier. However, the ILR does not have access to the α-SNS D to produce a pretrained DT similar to that developed by the T and LR for the intended communication. Therefore, the parameter settings of the Sk α SNSK responsible factors ( α , β , and N ) for the ILR are not the same as those in Table 2.

Confusion Matrices

The parameter settings for the ILR are given below in Table 4.
It is important to note, with regard to Table 2, that the ILR must possess the actual pulse length (N) utilized by the T and LR to successfully retrieve the binary information hidden in the transmitted SkαSNSK signals. Guessing this value correctly is practically impossible. However, to analyze the ML-RCS under the worst-case scenario in this study, the ILR is assumed to use a pulse length of ‘N + 1’, which is nearly identical to the actual pulse length N. Under this assumption, a confusion matrix for the ILR is generated, as shown in Figure 11. Even in this most favorable scenario for unintended communication, the ILR cannot correctly predict β̂. Even when Δβ = 2, the true positive rate (TPR) for β̂ does not exceed 50%, resulting in a false negative rate (FNR) above 50%. The TPR and FNR achieved by the ILR are impractical for retrieving even a single binary information bit from the communication established between the T and LR in the ML-RCS. This finding indicates that the communication is sufficiently secure for the intended purposes.

6.3. Comparative Analysis

The proposed ML-RCS can be compared with the previously proposed RCSs in terms of its performance and data rate. Regarding its performance, we have compared the BER of our system with that of other prominent RCSs mentioned in the literature. Similarly, the data rate has also been analyzed by comparing the number of noise samples (utilized N) required to successfully transmit and receive a unit binary digit. Table 5 summarizes the comparison of the proposed ML-RCS with other RCSs mentioned in the literature.
It can be observed that the proposed ML-RCS achieved a BER of 10−3 at MSNRdB = −12, which is the lowest value among all the RCSs. This reflects the proposed ML-RCS’s capability to successfully transmit and receive information under severe channel impairments, which become more pronounced as the MSNRdB decreases. Most importantly, the proposed ML-RCS achieved this performance with only N = 35. This means that the ML-RCS can transmit approximately 2000/35 ≈ 57 times as many bits as the closest competitor, the system of Xu et al. [51], which utilized N = 2000 for a unit binary digit. This comparison shows the proposed ML-RCS’s capability to provide better performance with an increased data rate.

7. Conclusions

This study presents the first model that incorporates machine learning into RCSs. The proposed ML-RCS uses SkαSNSK to securely transmit binary information from a transmitter within an AWGN channel. The legitimate receiver uses a pretrained DT with an FT configuration to retrieve the binary bits hidden in the received random carrier signals. Our Monte Carlo simulations analyzed the communication from the perspectives of both ML and RCSs. Compared with the previously proposed RCSs, the proposed ML-RCS provides a significant improvement in BER performance. This enhanced BER performance stems from the incorporation of ML, via the pretrained DT, into the establishment of secure communication.
In future work, we plan to evaluate different ML algorithms to determine their appropriateness for this task. We are currently investigating the applicability of deploying ML algorithms to further improve the RCS performance. We are in the process of producing RCS architectures that incorporate each of the ML algorithms (SVMs, KNNs, and neural networks) as separate studies. We have adopted this approach to explore the unique characteristics and contributions of each algorithm when integrated into RCSs. In line with the current approaches to designing RCS architectures, this study initially focused on testing the proposed system in an AWGN channel to establish foundational insights and baseline performance metrics. As is standard practice in previous studies, we tested the proposed ML-RCS only in a controlled environment, namely an AWGN channel, so that a fair comparison could be made. We are currently inspecting the behavior of the proposed ML-RCS in various channel environments. Subsequent efforts will explore the effectiveness of the ML-RCS in scenarios involving Rayleigh and Rician fading and dynamically varying channels to further evaluate its practical viability. Moreover, incorporating existing or emerging secure key distribution methods (independent of pilots) into the proposed system will further ensure the establishment of secure communication. It will prevent interception or tampering during synchronization, leading to a stronger foundation for secure communication systems. With the successful integration of ML into RCSs, we expect that ML algorithms will also be utilized in other related fields, increasing the level of scientific contribution toward a better and more prosperous society.

Author Contributions

Conceptualization, A.A.; Methodology, A.A.; Validation, Z.B.; Formal analysis, A.A.; Investigation, A.A.; Resources, Z.B.; Data curation, A.A.; Writing—original draft, A.A.; Writing—review & editing, Z.B.; Supervision, Z.B.; Project administration, Z.B.; Funding acquisition, A.A. and Z.B. All authors have read and agreed to the published version of the manuscript.

Funding

This publication was supported by the European Union’s Horizon Europe Research and Innovation Programme under the Marie Skłodowska-Curie Postdoctoral Fellowship Programme, SMASH cofunded under grant agreement No. 101081355.

Institutional Review Board Statement

Not Applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Qiao, L.; Li, Y.; Chen, D.; Serikawa, S.; Guizani, M.; Lv, Z. A survey on 5G/6G, AI, and robotics. Comput. Electr. Eng. 2021, 95, 107372. [Google Scholar] [CrossRef]
  2. Celik, A.; Eltawil, A. At the dawn of generative AI era: A tutorial-cum-survey on new frontiers in 6G wireless intelligence. IEEE Open J. Commun. Soc. 2024, 5, 2433–2489. [Google Scholar] [CrossRef]
  3. He, H.; Wen, C.; Jin, S.; Li, G.Y. Model-driven deep learning for MIMO detection. IEEE Trans. Signal Process. 2020, 68, 1702–1715. [Google Scholar] [CrossRef]
  4. Weththasinghe, K.; Jayawickrama, B.; He, Y. Machine learning-based channel estimation for 5G new radio. IEEE Wirel. Commun. Lett. 2024, 13, 1133–1137. [Google Scholar] [CrossRef]
  5. Azimi, Y.; Yousefi, S.; Kalbkhani, H.; Kunz, T. Applications of machine learning in resource management for RAN-slicing in 5G and beyond networks: A survey. IEEE Access 2022, 10, 106581–106612. [Google Scholar] [CrossRef]
  6. Kee, H.L.M.; Ahmad, N.; Izhar, M.A.M.; Anwar, K.; Ng, S.X. A review on machine learning for channel coding. IEEE Access 2024, 12, 89002–89025. [Google Scholar] [CrossRef]
  7. Zhou, X.; Song, L.; Zhang, Y. (Eds.) Physical Layer Security in Wireless Communications; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  8. Sullivan, S.; Brighente, A.; Kumar, S.A.P.; Conti, M. 5G security challenges and solutions: A review by OSI layers. IEEE Access 2021, 9, 116294–116314. [Google Scholar] [CrossRef]
  9. Pradhan, A.; Das, S.; Piran, M.J.; Han, Z. A survey on physical layer security of ultra/hyper reliable low latency communication in 5G and 6G networks: Recent advancements, challenges, and future directions. IEEE Access 2024, 12, 112320–112353. [Google Scholar] [CrossRef]
  10. Mitev, M.; Chorti, A.; Poor, H.V.; Fettweis, G.P. What physical layer security can do for 6G security. IEEE Open J. Veh. Technol. 2023, 4, 375–388. [Google Scholar] [CrossRef]
  11. Wu, Y.; Khisti, A.; Xiao, C.; Caire, G.; Wong, K.; Gao, X. A survey of physical layer security techniques for 5G wireless networks and challenges ahead. IEEE J. Sel. Areas Commun. 2018, 36, 679–695. [Google Scholar] [CrossRef]
  12. Hamamreh, J.M.; Furqan, H.M.; Arslan, H. Classifications and applications of physical layer security techniques for confidentiality: A comprehensive survey. IEEE Commun. Surv. Tutor. 2019, 21, 1773–1828. [Google Scholar] [CrossRef]
  13. Kelley, B.; Ara, I. An intelligent and private 6G air interface using physical layer security. In Proceedings of the IEEE Military Communications Conference (MILCOM), Rockville, MD, USA, 28 November–2 December 2022; pp. 968–973. [Google Scholar] [CrossRef]
  14. Ayaz, F.; Sheng, Z.; Ho, I.W.; Tiany, D.; Ding, Z. Blockchain-enabled FD-NOMA based vehicular network with physical layer security. In Proceedings of the IEEE 95th Vehicular Technology Conference (VTC2022-Spring), Helsinki, Finland, 19–22 June 2022; pp. 1–6. [Google Scholar] [CrossRef]
  15. Li, M.; Xue, P.; Yuan, H.; Han, Y. Physical layer security for CR-NOMA network with cooperative jamming. Tsinghua Sci. Technol. 2024, 30, 708–720. [Google Scholar] [CrossRef]
  16. Xia, S.; Li, D.; Zhao, X.; Zhou, J.; Du, J.; Wang, Q.; Hou, W.; Lv, R. Research on the physical layer security for industrial 5G private networks. In Proceedings of the IEEE 11th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 8–10 December 2023; pp. 816–819. [Google Scholar]
  17. Singh, R.; Ahmad, I.; Huusko, J. The role of physical layer security in satellite-based networks. In Proceedings of the European Conference on Networks and Communications (EuCNC), Gothenburg, Sweden, 6–9 June 2023; pp. 36–41. [Google Scholar] [CrossRef]
  18. Ara, I.; Kelley, B. Physical layer security for 6G: Toward achieving intelligent native security at layer-1. IEEE Access 2024, 12, 82800–82824. [Google Scholar] [CrossRef]
  19. Meng, R.; Xu, B.; Xu, X.; Sun, M.; Wang, B.; Han, S.; Lv, S.; Zhang, P. A survey of machine learning-based physical-layer authentication in wireless communications. J. Netw. Comput. Appl. 2024, 235, 104085. [Google Scholar] [CrossRef]
  20. Cao, Y.; Wu, Y.; Lian, L.; Tao, M. Importance-Aware Resource Allocations for MIMO Semantic Communication. Entropy 2025, 27, 605. [Google Scholar] [CrossRef]
  21. Li, D.; Xu, Y.; Zhao, M.; Zhu, J.; Zhang, S. Knowledge-driven machine learning and applications in wireless communications. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 454–467. [Google Scholar] [CrossRef]
  22. Alsalman, D. A comparative study of anomaly detection techniques for IoT security using adaptive machine learning for IoT threats. IEEE Access 2024, 12, 14719–14730. [Google Scholar] [CrossRef]
  23. Shree, S.R. Autonomous development of theoretical framework for intelligence automation system using decision tree algorithm. Comput. Electr. Eng. 2022, 102, 108131. [Google Scholar] [CrossRef]
  24. Eljialy, A.E.M.; Uddin, M.Y.; Ahmad, S. Novel framework for an intrusion detection system using multiple feature selection methods based on deep learning. Tsinghua Sci. Technol. 2024, 29, 948–958. [Google Scholar] [CrossRef]
  25. Alrayes, F.S.; Zakariah, M.; Amin, S.U.; Khan, Z.I.; Helal, M. Intrusion detection in IoT systems using denoising autoencoder. IEEE Access 2024, 12, 122401–122425. [Google Scholar] [CrossRef]
  26. Wang, Z.; Wang, Y.; Xiong, X.; Ren, Q.; Huang, J. A Novel Framework for Enhancing Decision-Making in Autonomous Cyber Defense Through Graph Embedding. Entropy 2025, 27, 622. [Google Scholar] [CrossRef]
  27. Hong, S.; Kim, K.; Lee, S.-H. A hybrid jamming detection algorithm for wireless communications: Simultaneous classification of known attacks and detection of unknown attacks. IEEE Commun. Lett. 2023, 27, 1769–1773. [Google Scholar] [CrossRef]
  28. Pitafi, S.; Anwar, T.; Widia, I.D.M.; Yimwadsana, B. Revolutionizing perimeter intrusion detection: A machine learning-driven approach with curated dataset generation for enhanced security. IEEE Access 2023, 11, 106954–106966. [Google Scholar] [CrossRef]
  29. Xing, J.; Lv, T.; Zhang, X. Cooperative relay based on machine learning for enhancing physical layer security. In Proceedings of the IEEE Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Istanbul, Turkey, 8–11 September 2019; pp. 1–6. [Google Scholar] [CrossRef]
  30. He, D.; Liu, C.; Quek, T.Q.S.; Wang, H. Transmit antenna selection in MIMO wiretap channels: A machine learning approach. IEEE Wirel. Commun. Lett. 2018, 7, 634–637. [Google Scholar] [CrossRef]
  31. Wang, Z.; Guo, H.; Gai, K. Decision tree-based privacy protection in federated learning: A survey. In Proceedings of the IEEE 10th Conference on Big Data Security on Cloud (BigDataSecurity), New York, NY, USA, 6–8 May 2024; pp. 119–124. [Google Scholar] [CrossRef]
  32. El Zein, Y.; Lemay, M.; Huguenin, K. PrivaTree: Collaborative privacy-preserving training of decision trees on biomedical data. IEEE/ACM Trans. Comput. Biol. Bioinform. 2023, 21, 1–13. [Google Scholar] [CrossRef]
  33. Alex, S.; Dhanaraj, K.J.; Deepthi, P.P. Private and energy-efficient decision tree-based disease detection for resource-constrained medical users in mobile healthcare networks. IEEE Access 2022, 10, 17098–17112. [Google Scholar] [CrossRef]
  34. Hou, Q.; Zhang, N.; Kirschen, D.S.; Du, E.; Cheng, Y.; Kang, C. Sparse oblique decision tree for power system security rules extraction and embedding. IEEE Trans. Power Syst. 2021, 36, 1605–1615. [Google Scholar] [CrossRef]
  35. Boztas, G.; Tuncer, T.; Aydogmus, O.; Yildirim, M. A DCSLBP based intelligent machine malfunction detection model using sound signals for industrial automation systems. Comput. Electr. Eng. 2024, 119, 109541. [Google Scholar] [CrossRef]
  36. Chen, Y.-C.; Chang, C.-C.; Hung, C.-C.; Lin, J.-F.; Hsu, S.-Y. SecDT: Privacy-preserving outsourced decision tree classification without polynomial forms in edge-cloud computing. IEEE Trans. Signal Inf. Process. Netw. 2022, 8, 1037–1048. [Google Scholar] [CrossRef]
  37. Elsadig, M.A. Detection of denial-of-service attack in wireless sensor networks: A lightweight machine learning approach. IEEE Access 2023, 11, 83537–83552. [Google Scholar] [CrossRef]
  38. Al-Quayed, F.; Ahmad, Z.; Humayun, M. A situation-based predictive approach for cybersecurity intrusion detection and prevention using machine learning and deep learning algorithms in wireless sensor networks of Industry 4.0. IEEE Access 2024, 12, 34800–34819. [Google Scholar] [CrossRef]
  39. Xue, L.; Liu, D.; Huang, C.; Lin, X.; Shen, X.S. Secure and privacy-preserving decision tree classification with lower complexity. J. Commun. Inf. Netw. 2020, 5, 16–25. [Google Scholar] [CrossRef]
  40. Zheng, Y.; Wang, C.; Wang, R.; Duan, H.; Nepal, S. Optimizing secure decision tree inference outsourcing. IEEE Trans. Dependable Secure Comput. 2022, 20, 3079–3092. [Google Scholar] [CrossRef]
  41. Elsadig, M.A.; Gafar, A. Covert channel detection: Machine learning approaches. IEEE Access 2022, 10, 38391–38405. [Google Scholar] [CrossRef]
  42. Gu, C.; Cao, X. Research on information hiding technology. In Proceedings of the International Conference on Consumer Electronics, Communications and Networks (CECNet), Yichang, China, 21–23 April 2012; pp. 2035–2037. [Google Scholar]
  43. Cek, M.E.; Savaci, F.A. Stable non-Gaussian noise parameter modulation in digital communication. Electron. Lett. 2009, 45, 1256–1257. [Google Scholar] [CrossRef]
  44. Cek, M.E. Covert communication using skewed α-stable distributions. Electron. Lett. 2015, 51, 116–118. [Google Scholar] [CrossRef]
  45. Ahmed, A.; Savaci, F.A. Random communication system based on skewed alpha-stable Lévy noise shift keying. Fluct. Noise Lett. 2017, 16, 1750024. [Google Scholar] [CrossRef]
  46. Ahmed, A.; Savaci, F.A. Synchronization of alpha-stable Lévy noise-based random communication system. IET Commun. 2018, 12, 276–282. [Google Scholar] [CrossRef]
  47. Cek, M.E. M-ary alpha-stable noise modulation in spread-spectrum communication. Fluct. Noise Lett. 2015, 14, 1550022. [Google Scholar] [CrossRef]
  48. Savaci, F.A.; Ahmed, A. Inverse system approach to design alpha-stable noise-driven random communication systems. IET Commun. 2020, 14, 910–913. [Google Scholar] [CrossRef]
  49. Ahmed, A.; Savaci, F.A. Measure of covertness based on the imperfect synchronization of an eavesdropper in random communication systems. In Proceedings of the 10th International Conference on Electrical and Electronics Engineering (ELECO), Bursa, Turkey, 30 November–2 December 2017; pp. 638–641. [Google Scholar]
  50. Ahmed, A.; Bosnić, Z. A covert α-stable noise-based extended random communication by incorporating multiple inverse systems. IEEE Access 2025, 13, 13675–13685. [Google Scholar] [CrossRef]
  51. Xu, Z.; Jin, W.; Zhou, K.; Hua, J. A covert digital communication system using skewed α-stable distributions for Internet of Things. IEEE Access 2020, 8, 113131–113141. [Google Scholar] [CrossRef]
  52. Ahmed, A.; Savaci, F.A. Covert electromagnetic nanoscale communication system in the terahertz channel. J. Circuits Syst. Comput. 2020, 29, 2050126. [Google Scholar] [CrossRef]
  53. Zhang, Y.; Zhang, Y. An adaptive parameter estimation algorithm of radar linear frequency modulation signal based on nonlinear transform under different α-stable distribution noise environments. IEEE J. Miniatur. Air Space Syst. 2023, 4, 389–399. [Google Scholar] [CrossRef]
  54. Peppas, K.P.; Mathiopoulos, P.T. Space shift keying (SSK) transmission over Rayleigh fading channels and symmetric α-stable noise. IEEE Access 2024, 12, 40569–40581. [Google Scholar] [CrossRef]
  55. Zhang, J.; Liu, M.; Chen, Y.; Zhao, N.; Han, Y.; Zhang, N. Multiantenna spectrum sensing with α-stable noise for cognitive radio-enabled IoT. IEEE Internet Things J. 2024, 11, 21546–21558. [Google Scholar] [CrossRef]
  56. Samorodnitsky, G.; Taqqu, M.S. Stable Non-Gaussian Random Processes; Chapman & Hall/CRC: New York, NY, USA, 1994; pp. 23–45. [Google Scholar]
  57. Janicki, A.; Weron, A. Simulation and Chaotic Behavior of α-Stable Stochastic Processes; Marcel Dekker: New York, NY, USA, 1994; pp. 30–75. [Google Scholar]
  58. Quinlan, J.R. C4.5: Programs for Machine Learning; Morgan Kaufmann: Burlington, MA, USA, 1993. [Google Scholar]
  59. Breiman, L.; Friedman, J.; Stone, C.J.; Olshen, R.A. Classification and Regression Trees; Wadsworth International Group: Belmont, CA, USA, 1984. [Google Scholar]
  60. Alharbi, A.A. Classification performance analysis of decision tree-based algorithms with noisy class variable. Discrete Dyn. Nat. Soc. 2024, 2024, 6671395. [Google Scholar] [CrossRef]
  61. Abrishami, M.; Dadkhah, S.; Neto, E.C.P.; Xiong, P.; Iqbal, S.; Ray, S.; Ghorbani, A.A. Label noise detection in IoT security based on decision tree and active learning. In Proceedings of the IEEE 19th International Conference on Smart Communities (HONET), Marietta, GA, USA, 19–21 December 2022; pp. 46–53. [Google Scholar] [CrossRef]
  62. Luan, S.; Zhang, Y.; Chen, H.; Wang, C. Automatic modulation classification: Decision tree based on error entropy and global-local feature-coupling network under mixed noise and fading channels. IEEE Wirel. Commun. Lett. 2022, 11, 1703–1707. [Google Scholar] [CrossRef]
  63. Kuruoglu, E.E. Density parameter estimation of skewed α-stable distributions. IEEE Trans. Signal Process. 2001, 49, 2192–2201. [Google Scholar] [CrossRef]
Figure 1. PDF of α-stable noise for parameters α and β, where γ = 1.0 and µ = 0.
Figure 2. PDF of α-stable noise for various special noises.
Figure 3. Basic structure of the decision tree.
Figure 4. Model of the proposed ML-RCS for two scenarios: (1) transmitter and legitimate receiver in AWGN channel, and (2) transmitter and illegitimate receiver in AWGN channel.
Figure 5. Left diagram: binary bits and transmitted noise sequence (R) at T (top two), received noise sequence (Y), classified beta βC, and corresponding retrieved binary bits at the LR (bottom two); α = 0.6, β = 1.6, N = 15. Right diagram: the same quantities for α = 0.6, β = 2, N = 15.
Figure 6. Left diagram: binary bits and transmitted noise sequence (R) at T (top two), received noise sequence (Y), classified beta βC, and corresponding retrieved binary bits at the LR (bottom two); α = 0.8, β = 2, N = 15. Right diagram: the same quantities for α = 1.1, β = 2, N = 15.
Figure 7. Left diagram: binary bits and transmitted noise sequence (R) at T (top two), received noise sequence (Y), classified beta βC, and corresponding retrieved binary bits at the LR (bottom two); α = 1.1, β = 1.6, N = 15. Right diagram: the same quantities for α = 1.1, β = 1.6, N = 25.
Figure 8. Left column: confusion matrices for β = 1.6, 1.8, and 2, showing that the retrieval error decreases as β increases. Middle column: confusion matrices for α = 0.6, 0.8, and 1.1, showing that the retrieval error decreases as α increases. Right column: confusion matrices for N = 15, 25, and 35, showing that the retrieval error decreases as N increases.
Figure 9. BER vs. MSNRdB performance of the ML-RCS; total transmitted bits = 1000. The figure shows that the ML-RCS performance improves as the SkαSNSK parameters (α, β, and N) increase.
Figure 10. BER vs. MSNRdB performance for the FT, MT, and CT configurations of the ML-RCS; total transmitted bits = 1000. The figure shows that FT is the best CART classifier configuration for retrieving binary information from the SkαSNSK.
Figure 11. Confusion matrix for the ILR; α = 1.1, N = 26, and MSNRdB = −11. This figure shows the inability of the ILR to retrieve the binary information.
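Figure 11 (and the reported eavesdropper false negative rate of 50.2%) corresponds to near-chance decoding. For reference, such a rate is read off a binary confusion matrix as FN / (FN + TP); the sketch below uses hypothetical placeholder counts, not the paper's actual values.

```python
# How a false negative rate near 50% follows from a binary confusion matrix.
# The counts below are hypothetical placeholders illustrating near-chance decoding.
from sklearn.metrics import confusion_matrix

# tn, fp, fn, tp = confusion_matrix(true_bits, ilr_decoded_bits).ravel()
tn, fp, fn, tp = 250, 250, 251, 249
fnr = fn / (fn + tp)            # = 0.502 for these placeholder counts
print(f"false negative rate = {fnr:.3f}")
```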
Table 1. CART classifier configurations.
Configuration | Model Flexibility | Description
FT (maximum splits = 100) | High | Many leaves to make many fine distinctions between classes.
MT (maximum splits = 20) | Medium | Medium number of leaves for finer distinctions between classes.
CT (maximum splits = 4) | Low | Few leaves to make coarse distinctions between classes.
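The FT, MT, and CT presets above correspond to fine, medium, and coarse tree settings found in common decision tree toolboxes. As a rough, non-authoritative analogue (assuming scikit-learn rather than the toolbox used by the authors, and approximating the "maximum splits" limit with max_leaf_nodes = maximum splits + 1), the three configurations could be set up as follows:

```python
# Rough sketch of the FT/MT/CT presets in Table 1 (assumption: scikit-learn,
# with max_leaf_nodes standing in for the "maximum splits" limit).
from sklearn.tree import DecisionTreeClassifier

cart_configs = {
    "FT": DecisionTreeClassifier(max_leaf_nodes=101),  # fine tree: many leaves
    "MT": DecisionTreeClassifier(max_leaf_nodes=21),   # medium tree
    "CT": DecisionTreeClassifier(max_leaf_nodes=5),     # coarse tree: few leaves
}

# Hypothetical usage: X holds one feature row per received pulse, y the bit labels.
# clf = cart_configs["FT"].fit(X, y)
```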
Table 2. Parameter settings responsible for the LR.
Parameter | Selected Values
α | 0.6, 0.8, 1.1
β0 | −0.8, −0.9, −1
β1 | 0.8, 0.9, 1
β | 1.6, 1.8, 2
N | 15, 25, 35
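For illustration, the LR settings above can be fed to an off-the-shelf α-stable sampler. The sketch below is a minimal, hedged example assuming SciPy's levy_stable generator (whose scale and parameterization conventions may differ from the generator used in the paper); each bit is mapped to N samples with skewness β0 (bit '0') or β1 (bit '1').

```python
# Minimal sketch: encode bits as N-sample skewed alpha-stable noise pulses using
# the Table 2 settings. Assumes scipy.stats.levy_stable; its scale/parameterization
# conventions may differ from the paper's generator.
import numpy as np
from scipy.stats import levy_stable

def encode_bits(bits, alpha=1.1, beta0=-1.0, beta1=1.0, N=35, scale=1.0, rng=None):
    """Bit 0 -> negatively skewed pulse (beta0); bit 1 -> positively skewed pulse (beta1)."""
    rng = np.random.default_rng(rng)
    pulses = [levy_stable.rvs(alpha, beta1 if b else beta0, loc=0.0,
                              scale=scale, size=N, random_state=rng)
              for b in bits]
    return np.concatenate(pulses)

carrier = encode_bits([1, 0, 1, 1], alpha=0.6, beta0=-0.8, beta1=0.8, N=15)
```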
Table 3. Channel parameters.
Symbol | Definition | Utilized Values
γ | dispersion of the noise generated to represent '0' or '1' | 1
γA | dispersion of the channel noise | [0.316, 10]
MSNRdB | mixed signal-to-noise ratio (dB) | [−10, 5]
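The utilized values in Table 3 are consistent with defining the mixed signal-to-noise ratio as the dispersion ratio MSNRdB = 10 log10(γ/γA): with γ = 1, γA ∈ [0.316, 10] maps to roughly [+5, −10] dB. Under that assumption (and treating the channel noise as Gaussian with dispersion γA, i.e., variance 2γA), a minimal channel sketch is:

```python
# Minimal sketch of the AWGN channel, assuming MSNRdB = 10*log10(gamma/gamma_A)
# and Gaussian channel noise of dispersion gamma_A (variance 2*gamma_A).
import numpy as np

def awgn_channel(carrier, msnr_db, gamma=1.0, rng=None):
    rng = np.random.default_rng(rng)
    gamma_a = gamma / (10.0 ** (msnr_db / 10.0))        # channel-noise dispersion
    noise = rng.normal(0.0, np.sqrt(2.0 * gamma_a), size=carrier.shape)
    return carrier + noise

# received = awgn_channel(carrier, msnr_db=-5)   # 'carrier' as in the Table 2 sketch
```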
Table 4. Parameter settings responsible for the ILR.
Parameter | Selected Values
α | (0, 2]
β0 | [−1, 0)
β1 | [0, 1]
β | [0, 2]
N | [1, ∞)
Table 5. BER and data rate comparison.
RCS | MSNRdB at Which BER = 10−3 Is Achieved | Utilized N
Cek, M.E. [44] | −6 | 2000
Ahmed, A.; Savaci, F.A. [45] | −6 | 1000
Ahmed, A.; Savaci, F.A. [46] | −8 | 500
Cek, M.E. [47] | −6 | 1600
Savaci, F.A.; Ahmed, A. [48] | −5 | 1000
Ahmed, A.; Bosnić, Z. [50] | −4 | 1000
Xu et al. [51] | −12 | 2000
Ahmed, A.; Savaci, F.A. [52] | −5 | 500
Proposed ML-RCS | −12 | 35
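The BER points compared above are obtained by Monte Carlo simulation over many transmitted bits (1000 in Figures 9 and 10). A generic, non-authoritative sketch of such a loop is shown below; it reuses the hypothetical encode_bits and awgn_channel helpers from the Table 2 and Table 3 sketches and assumes a pretrained decision tree clf that decodes each N-sample pulse from its raw samples (the actual feature set used in the paper may differ).

```python
# Generic Monte Carlo BER loop (not the authors' exact simulation). Reuses the
# hypothetical encode_bits / awgn_channel sketches; clf is a pretrained decision
# tree mapping an N-sample pulse to the decoded bit.
import numpy as np

def ber_curve(clf, msnr_dbs, n_bits=1000, alpha=1.1, beta0=-1.0, beta1=1.0, N=35, seed=0):
    rng = np.random.default_rng(seed)
    bers = []
    for msnr_db in msnr_dbs:
        bits = rng.integers(0, 2, size=n_bits)
        y = awgn_channel(encode_bits(bits, alpha, beta0, beta1, N, rng=rng),
                         msnr_db, rng=rng)
        pulses = y.reshape(n_bits, N)          # one row per transmitted bit (key N)
        bits_hat = clf.predict(pulses)         # assumed features: raw pulse samples
        bers.append(np.mean(bits_hat != bits))
    return bers
```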