Article

Intelligent Transmit Antenna Selection Schemes for High-Rate Fully Generalized Spatial Modulation

by Hindavi Kishor Jadhav 1, Vinoth Babu Kumaravelu 1,*, Arthi Murugadass 2,*, Agbotiname Lucky Imoize 3,4, Poongundran Selvaprabhu 1 and Arunkumar Chandrasekhar 5

1 Department of Communication Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore 632014, India
2 Department of Computer Science and Engineering (AI & ML), Sreenivasa Institute of Technology and Management Studies, Chittoor 517127, India
3 Department of Electrical and Electronics Engineering, Faculty of Engineering, University of Lagos, Lagos 100213, Nigeria
4 Department of Electrical Engineering and Information Technology, Institute of Digital Communication, Ruhr University, 44801 Bochum, Germany
5 Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore 632014, India
* Authors to whom correspondence should be addressed.
Future Internet 2023, 15(8), 281; https://doi.org/10.3390/fi15080281
Submission received: 2 July 2023 / Revised: 7 August 2023 / Accepted: 17 August 2023 / Published: 21 August 2023

Abstract
The sixth-generation (6G) network is expected to transmit significantly more data at much faster rates than existing networks while meeting stringent energy efficiency (EE) targets. High-rate spatial modulation (SM) methods can be used to meet these design metrics. SM uses transmit antenna selection (TAS) practices to improve the EE of the network. Although it is computationally intensive, free distance optimized TAS (FD-TAS) achieves the best average bit error rate (ABER) performance. The present investigation examines the effectiveness of various machine learning (ML)-assisted TAS practices, such as support vector machine (SVM), naïve Bayes (NB), K-nearest neighbor (KNN), and decision tree (DT), applied to the small-scale multiple-input multiple-output (MIMO)-based fully generalized spatial modulation (FGSM) system. To the best of our knowledge, there are no ML-based antenna selection schemes for high-rate FGSM. The SVM-based TAS scheme achieves ∼71.1% classification accuracy, outperforming all other approaches. The ABER performance of each scheme is evaluated for a higher constellation order and various numbers of transmit antennas to achieve the target ABER of $10^{-5}$. By employing SVM for TAS, FGSM achieves a minimum gain of ∼2.2 dB over FGSM without TAS (FGSM-NTAS). All ML-based TAS strategies perform better than FGSM-NTAS.

Graphical Abstract

1. Introduction

In comparison to fifth-generation (5G) wireless communication networks, 6G networks are expected to have much higher spectral, energy, and cost efficiency, with higher data rates (in Tbps), latency reduced by a factor of ten, connection density increased by a factor of a hundred, and increased intelligence for full automation. Innovations such as index modulation (IM), reconfigurable intelligent surfaces (RISs), non-orthogonal multiple access (NOMA), and artificial intelligence (AI) will all be introduced into 6G networks to fulfill these overall objectives [1,2,3,4,5,6]. Massive MIMO designs can increase the spectral efficiency (SE) of 6G systems because of the higher frequencies and dense networks. In 6G, technologies like SM, RIS, NOMA, orbital angular momentum, and rate splitting multiple access have the potential to increase SE [7].
The energy efficiency (EE) of wireless networks may be enhanced through four major approaches: (1) Resource allocation: rather than maximizing throughput, the EE of the network is optimized by efficiently distributing resources [8]. (2) Infrastructure nodes: deploy infrastructure nodes to raise the network's EE rather than simply enlarging the coverage area. (3) Renewable energy: power communication systems from renewable sources. (4) Hardware: design energy-efficient hardware solutions. In a network, IM and its derivatives can also be utilized to boost EE [9,10].
Massive MIMO is an essential technique for achieving the SE requirements of 5G. When adopting a multi-antenna system, two issues arise: inter-antenna synchronization (IAS) and inter-channel interference (ICI) [11]. With more radio frequency (RF) links, the hardware complexity of the system grows, lowering EE [12]. To overcome this problem, the concept of SM has been presented [9,10]. Because one antenna is functional at a time, the RF chain is simpler to handle in SM [9,10,11].
SM’s SE is given by
$$\alpha_{SM} = \log_2 M + \log_2 N_{tx},$$
where the number of transmit antennas and the modulation order are denoted by $N_{tx}$ and $M$, respectively. The SE of SM grows only with $\log_2(N_{tx})$. As a result, a large number of transmit antennas is required to enhance SE. SM cannot satisfy SE demands since only one antenna is active at a time [11]. High-rate SM variants are needed for future-generation networks to meet SE requirements [13]. The high-rate SM variants may transmit the same or distinct symbols simultaneously across multiple antennas, depending on their operating principle. FGSM is a cutting-edge SM variant in which the transmitter uses one, several, or all of its antennas for transmission [13]. As a result, the achievable SE grows linearly with $N_{tx}$. Gudla V.V. et al. briefly discussed the system architecture and operating principle of FGSM [13].
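As a quick numeric illustration of this scaling gap, the sketch below (Python; the function names are ours, and the FGSM expression is the one given later in Section 2) compares the SE growth of SM and FGSM:

```python
from math import log2

def se_sm(M, Ntx):
    """SE of conventional SM: log2(M) + log2(Ntx) bpcu."""
    return log2(M) + log2(Ntx)

def se_fgsm(M, Ntx):
    """SE of FGSM: log2(M) + (Ntx - 1) bpcu, linear in Ntx."""
    return log2(M) + (Ntx - 1)

# SM gains only 1 bpcu per doubling of Ntx; FGSM gains 1 bpcu per extra antenna.
for Ntx in (2, 4, 8, 16):
    print(Ntx, se_sm(4, Ntx), se_fgsm(4, Ntx))
```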
Transmit diversity gain could not be achieved in SM since only one antenna is fully functional at once [14,15,16,17]. The integration of TAS practices into SM improves its transmit diversity. The receiver decides on transmitter antenna subsets driven by the channel quality information (CQI). In ML, machines are taught new skills so that they may execute tasks on their own using data. In terms of planning and optimizing future-generation networks, ML plays a critical role. ML may be used to solve a wide range of technical difficulties in future-generation systems, including massive MIMO, NOMA, device-to-device (D2D) networks, etc. [2,3]. The present research performs TAS on FGSM using ML algorithms, and the effectiveness of these algorithms is evaluated using classification accuracy and ABER.

Related Work

The future-generation network demands improvements in EE and SE [1]. The SE of SM grows only with $\log_2(N_{tx})$, which limits the performance of SM in terms of SE [9,10]. To overcome this, high-rate versions of SM have been developed [13]. FGSM is a high-rate variant of SM in which the number of transmit antennas and the SE are linearly related; it could therefore meet the expanding demands of 6G [13].
Lower-powered integrated circuits, antenna diversity, or a mix of the two may be adopted to boost EE [18]. The traditional SM technique does not provide transmit diversity gain. TAS practices have been incorporated to boost the SM system's transmit diversity gain [14]. TAS approaches are applied to overcome the cost and complexity challenges in massive MIMO systems [12]. Different massive MIMO configurations have shown that antenna selection can improve performance and lower RF costs [19]. For optimal channel utilization, a novel SM technique has been developed by fusing transmit mode switching and adaptive modulation techniques [20]. The free distance (FD) strategy is computationally expensive because it must simultaneously search for the antenna pairs and constellation orders that maximize the minimum FD. Although FD-TAS provides the best ABER performance, its implementation is more challenging. The FD-TAS system has gained popularity and is used as a benchmark in many TAS-related articles [3,13,14,15,16,17,21].
A capacity-optimized antenna selection (COAS) technique is addressed in conjunction with FD-TAS, where the antenna subset with the most prominent channel capacity is chosen [13,14,15,17]. The correlation angle of two antennas is used to evaluate the TAS [13,16]. In this study, the antenna subset with the least correlation is prioritized. The capacity and correlation angle-based technique is proposed to enhance ABER performance [13] while increasing computing complexity. A partitioning strategy that relies on the capacity and correlation angle further reduces the system’s complexity [13]. Sub-optimal techniques are less efficient than FD-TAS when it comes to TAS in SM and its variants [13,14,15,16,17].
To avoid the heavy computations required by FD-TAS, data-driven approaches are now utilized. TAS is a classification problem that can be solved using supervised ML algorithms. A pattern recognition-based approach has been proposed for traditional MIMO [22]. This work is tested on 5000 samples with the KNN and SVM algorithms; it can be extended to other supervised learning algorithms and larger datasets to boost overall system performance. Two distinctive features of the channel space, the element norm of $\mathbf{H}$ and the element norm of $\mathbf{H}^H\mathbf{H}$, are used to analyze the performance of these algorithms. The absolute value of the elements of $\mathbf{H}$ is used to analyze NB- and SVM-based TAS algorithms in a conventional MIMO architecture [23]. The purpose of that research is to demonstrate how ML can be utilized to enhance the security of the MIMO architecture; other ML algorithms could be employed in future work. In that example, the algorithms are trained on only 10,000 samples; this number can be increased to improve overall efficiency.
To solve power assignment problems in adaptive SM-MIMO, supervised ML algorithms and deep neural networks (DNNs) have been proposed and implemented [24]. The ABER performance in that work can be enhanced by extending the training data beyond 2000 feature-label pairs. Other features, such as the angle of the elements of $\mathbf{H}$ or the real and imaginary parts of the elements of $\mathbf{H}$, can be used as attributes to obtain better results. TAS as a classification problem is addressed using DNN and SVM for SM [25]. To construct the models, the channel gain and correlation properties of the column space of $\mathbf{H}$ are retrieved; however, DNN and SVM are not compared with other ML-based algorithms in that work. Altın, G. and Arslan, İ.A. selected both the transmit and receive antennas for SM using deep learning (DL) architectures [26]. We used DL-based algorithms to execute TAS for FGSM in [3] to meet the SE requirements and improve classification accuracy. The following research gaps are identified:
  • Most of these data-driven approaches are proposed and implemented for conventional MIMO and basic SM architectures.
  • These data-driven strategies are neither proposed nor implemented for high-rate variants of SM.
  • The TAS dataset is not available for FGSM in any of the repositories.
Listed below are the key contributions of this work:
  • Through the repeated application of the FD-TAS algorithm, 4 different datasets (with a total of 10,000 entries) are produced. The dataset contains channel information for a variety of MIMO setups and antenna counts, as well as essential performance measures, like FD.
  • For FGSM, four distinct supervised ML classification methods are used and tested for TAS: SVM, NB, KNN, and DT. The dataset size, the value of k, which is used in k-fold cross-validation, and parameters, such as the kernel type and hyperparameter K, are optimized to obtain the best results.
  • The ABER and computational complexity of ML-based TAS approaches are contrasted to those of FGSM-NTAS and FD-TAS.
  • An analysis of the proposed work is carried out in terms of SNR gain and classification accuracy.
The following is how the manuscript is structured: The principle of operation of FGSM is discussed elaborately in Section 2. The various ML algorithms used for TAS in the FGSM system are covered in Section 3. The proposed ML-based TAS practices and the conventional FD-TAS practice are subjected to a computational complexity study in Section 4. The ABER performance of ML-based algorithms is contrasted to the FD-TAS scheme in Section 5. Section 6 concludes the work by laying out future research directions.

2. System Model of FGSM Transceiver

FGSM is a high-rate version of SM that allows one, many, or all antennas to transmit the same symbol at the same instant. In contrast to SM, the SE is directly proportional to $N_{tx}$, which improves the SE. This variant does not require the number of transmit antennas to be a power of two. The SE of FGSM is calculated as follows:
$$\alpha_{FGSM} = \log_2 M + N_{tx} - 1.$$
Consider the input bits [0 0 1 1] to comprehend FGSM mapping. An example with an SE of 4 bpcu ($M = 4$ and $N_{tx} = 3$) is used to demonstrate FGSM mapping. As indicated in Table 1, the symbol $s_1$ is identified using the first $\log_2 M$ bits, i.e., [0 0]. It will be transmitted by the antenna pair (1, 2), which is selected using the next $N_{tx} - 1$ bits, i.e., [1 1], as illustrated in Table 2. For the given block of bits, the generated transmit vector is $\boldsymbol{\chi} = \left[\tfrac{1}{\sqrt{2}} + \tfrac{1}{\sqrt{2}}j, \tfrac{1}{\sqrt{2}} + \tfrac{1}{\sqrt{2}}j, 0\right]^T$. Consider an alternative set of bits [0 1 0 1], where the symbol $s_2$ is chosen using the first $\log_2 M$ bits [0 1], as shown in Table 1. It will be transmitted by antenna 2, which is selected using the next $N_{tx} - 1$ bits, i.e., [0 1], as illustrated in Table 2. For this set of bits, the transmit vector is $\boldsymbol{\chi} = \left[0, \tfrac{1}{\sqrt{2}} - \tfrac{1}{\sqrt{2}}j, 0\right]^T$.
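The mapping in this example can be sketched in Python as follows. The symbol and antenna-activation tables are reconstructed from the worked example above (the full Tables 1 and 2 of the paper may order their entries differently), so the unlisted entries are assumptions for illustration only:

```python
import math

# Illustrative FGSM mapper for M = 4 (QPSK) and Ntx = 3 (SE = 4 bpcu).
SYMBOLS = {                       # first log2(M) = 2 bits -> QPSK symbol
    (0, 0): complex(1, 1) / math.sqrt(2),    # s1 (from the worked example)
    (0, 1): complex(1, -1) / math.sqrt(2),   # s2 (from the worked example)
    (1, 0): complex(-1, 1) / math.sqrt(2),   # assumed
    (1, 1): complex(-1, -1) / math.sqrt(2),  # assumed
}
ANTENNAS = {                      # next Ntx - 1 = 2 bits -> active antenna set
    (0, 0): (1,),                 # assumed
    (0, 1): (2,),                 # from the worked example
    (1, 0): (3,),                 # assumed
    (1, 1): (1, 2),               # from the worked example
}

def fgsm_map(bits):
    """Map a 4-bit block to the length-3 transmit vector chi."""
    s = SYMBOLS[tuple(bits[:2])]
    active = ANTENNAS[tuple(bits[2:])]
    return [s if (i + 1) in active else 0 for i in range(3)]

x = fgsm_map([0, 0, 1, 1])        # -> [s1, s1, 0]: antennas 1 and 2 active
```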
The multipath channel $\mathbf{H}$ and AWGN $\boldsymbol{\eta}$ influence the transmit vector $\boldsymbol{\chi}$. At the receiver end, the signal vector $\mathbf{y}$ is stated as follows:
$$\mathbf{y}_{N_{rx} \times 1} = \mathbf{H}_{N_{rx} \times N_{tx}} \boldsymbol{\chi}_{N_{tx} \times 1} + \boldsymbol{\eta}_{N_{rx} \times 1}.$$
Consider the MIMO configuration ($N_{tx} = 3$, $N_{rx} = 4$) with $M = 4$. For the single-antenna operational configuration explained earlier, the received signal vector can be expressed as follows:
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \\ h_{41} & h_{42} & h_{43} \end{bmatrix} \begin{bmatrix} 0 \\ s_2 \\ 0 \end{bmatrix} + \begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \end{bmatrix}.$$
This reduces to
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} = \begin{bmatrix} h_{12} \\ h_{22} \\ h_{32} \\ h_{42} \end{bmatrix} s_2 + \begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \end{bmatrix}.$$
For the single-antenna operational case, the generalized form of the received signal is
$$\mathbf{y} = \mathbf{h}_j s_m + \boldsymbol{\eta}, \quad j \in \{1, 2, \ldots, N_{tx}\}, \quad m \in \{1, 2, \ldots, M\}.$$
For the double-antenna operational configuration explained earlier, the received signal vector can be written as follows:
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \\ h_{41} & h_{42} & h_{43} \end{bmatrix} \begin{bmatrix} s_1 \\ s_1 \\ 0 \end{bmatrix} + \begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \end{bmatrix}.$$
This reduces to
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} = \left( \begin{bmatrix} h_{11} \\ h_{21} \\ h_{31} \\ h_{41} \end{bmatrix} + \begin{bmatrix} h_{12} \\ h_{22} \\ h_{32} \\ h_{42} \end{bmatrix} \right) s_1 + \begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \end{bmatrix}.$$
For the double-antenna operational case, the generalized form of the received signal is given by
$$\mathbf{y} = \left( \mathbf{h}_{j_1} + \mathbf{h}_{j_2} \right) s_m + \boldsymbol{\eta}, \quad j_1 \neq j_2, \quad j_1, j_2 \in \{1, 2, \ldots, N_{tx}\}, \quad m \in \{1, 2, \ldots, M\}.$$
For $N_A$ functional antennas, the generalized form of the received signal is represented as follows:
$$\mathbf{y} = \left( \sum_{a=1}^{N_A} \mathbf{h}_{j_a} \right) s_m + \boldsymbol{\eta}, \quad j_a \in \{1, 2, \ldots, N_{tx}\}, \quad m \in \{1, 2, \ldots, M\}.$$
When the receiver possesses perfect CQI, the maximum likelihood detection for the FGSM system with $N_A$ functional antennas is given by
$$\left( \hat{j}_a \big|_{a=1}^{N_A}, \hat{s}_m \right) = \arg \min_{j_a, s_m} \left\| \mathbf{y} - \left( \sum_{a=1}^{N_A} \mathbf{h}_{j_a} \right) s_m \right\|^2.$$
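A minimal, noise-free sketch of this exhaustive ML detector follows (Python with NumPy; the paper's experiments use MATLAB, and for simplicity this sketch searches all non-empty antenna subsets rather than the exact $2^{N_{tx}-1}$-entry FGSM codebook):

```python
import itertools, math
import numpy as np

rng = np.random.default_rng(0)
Ntx, Nrx = 3, 4
symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / math.sqrt(2)  # QPSK

# Candidate active-antenna sets (every non-empty subset, for illustration).
cand_sets = [c for r in range(1, Ntx + 1)
             for c in itertools.combinations(range(Ntx), r)]

def ml_detect(y, H):
    """Exhaustive ML detection: minimize ||y - (sum of active columns) * s||^2."""
    best, best_metric = None, math.inf
    for active in cand_sets:
        h_eff = H[:, list(active)].sum(axis=1)     # sum_a h_{j_a}
        for s in symbols:
            metric = np.linalg.norm(y - h_eff * s) ** 2
            if metric < best_metric:
                best_metric, best = metric, (active, s)
    return best

# Noise-free toy run: antennas 1 and 2 active, symbol s1 transmitted.
H = (rng.standard_normal((Nrx, Ntx)) + 1j * rng.standard_normal((Nrx, Ntx))) / math.sqrt(2)
y = H[:, [0, 1]].sum(axis=1) * symbols[0]
active_hat, s_hat = ml_detect(y, H)
```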

3. Antenna Selection Schemes for FGSM

TAS for FGSM is schematically depicted in Figure 1. TAS drives up the transmit diversity in traditional MIMO and SM, as described in the literature review section. Let $\mathbf{H}_T = [\mathbf{h}_1, \mathbf{h}_2, \ldots, \mathbf{h}_{N_T}] \sim \mathcal{CN}(0, \mathbf{I}_{N_{rx} \times N_T})$ be the channel matrix for a MIMO system, where $\mathbf{h}_1, \mathbf{h}_2, \ldots, \mathbf{h}_{N_T}$ are the column vectors of $\mathbf{H}_T$. The TAS scheme selects a subset from the candidate set $S = \{S_1, S_2, \ldots, S_l, \ldots, S_{N_s}\}$. $N_{tx}$ antennas are picked from $N_T$ antennas when the CQI is perfectly known to the user equipment (UE). There are $N_s$ possible antenna subsets, each with $N_{tx}$ antennas, as given below:
$$S_1 = \{1, 2, \ldots, N_{tx}\}, \quad S_2 = \{1, 2, \ldots, N_{tx} - 1, N_{tx} + 1\}, \quad \ldots, \quad S_{N_s} = \{N_T - N_{tx} + 1, \ldots, N_T\},$$
where $S_l$ represents the $l$th possible transmit antenna subset. A low-rate feedback channel communicates the chosen subset antenna index to the base station (BS). The CQI feedback latency is assumed to be insignificant here. If $S_l$ is selected, then the received signal is expressed as follows:
$$\mathbf{y} = \mathbf{H}_T^l \boldsymbol{\chi} + \boldsymbol{\eta},$$
where $\mathbf{H}_T^l$ is the channel matrix corresponding to the selected set $S_l$. An example of the TAS candidate set mapping is illustrated in Table 3 for $N_T = 4$, $N_{tx} = 3$, and $N_{rx} = 4$.
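The candidate set enumeration can be reproduced directly with Python's `itertools.combinations`; a small sketch for the Table 3 configuration ($N_T = 4$, $N_{tx} = 3$):

```python
from itertools import combinations
from math import comb

N_T, N_tx = 4, 3
# All N_s = C(N_T, N_tx) possible transmit antenna subsets.
subsets = list(combinations(range(1, N_T + 1), N_tx))
assert len(subsets) == comb(N_T, N_tx)
print(subsets)   # [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
```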

3.1. FD-TAS Based FGSM System

The purpose of FD-TAS is to find the antenna subset that maximizes the minimum FD between the received constellations [3,13,14,15,16,17,21]. FD-TAS produces $\mathbf{H}$ of size $N_{rx} \times N_{tx}$ from $\mathbf{H}_T$ of size $N_{rx} \times N_T$. FD-TAS involves the following major steps:
Step 1: Compute the number of possible subsets $N_s = \binom{N_T}{N_{tx}}$.
Step 2: Determine all feasible transmit vectors ($2^{\alpha_{FGSM}}$) for every antenna subset.
Step 3: Determine the minimum FD for every antenna subset using
$$d_{min}(\mathbf{H}_T) = \min_{\boldsymbol{\chi}_i \neq \boldsymbol{\chi}_j} \left\| \mathbf{H}_T \left( \boldsymbol{\chi}_i - \boldsymbol{\chi}_j \right) \right\|^2,$$
where $\boldsymbol{\chi}_i, \boldsymbol{\chi}_j \in \mathcal{X}$, the set of all feasible transmit vectors.
Step 4: Find the antenna subset that has the maximum minimum FD:
$$\hat{l} = \arg \max_{l \in \{1, 2, \ldots, N_s\}} d_{min}(\mathbf{H}_T^l),$$
where $\mathbf{H}_T^l \in \{\mathbf{H}_T^1, \mathbf{H}_T^2, \ldots, \mathbf{H}_T^{N_s}\}$.
Based on the antenna subset index obtained in Step 4, the antenna indices and the associated channel matrix of size $N_{rx} \times N_{tx}$ are computed.
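Steps 1 to 4 can be sketched as follows (Python/NumPy; the transmit-vector set used here is a simplified single-antenna SM set for illustration, not the full FGSM vector set):

```python
import itertools, math
import numpy as np

def min_free_distance(H, tx_vectors):
    """Step 3: minimum FD over all pairs of distinct transmit vectors."""
    d = math.inf
    for xi, xj in itertools.combinations(tx_vectors, 2):
        d = min(d, np.linalg.norm(H @ (xi - xj)) ** 2)
    return d

def fd_tas(H_T, N_tx, tx_vectors):
    """Steps 1 and 4: enumerate subsets, pick the one maximizing the minimum FD."""
    subsets = list(itertools.combinations(range(H_T.shape[1]), N_tx))
    dmins = [min_free_distance(H_T[:, list(s)], tx_vectors) for s in subsets]
    return subsets[int(np.argmax(dmins))]

# Toy run: N_T = 4, N_rx = 4, N_tx = 3, QPSK single-antenna transmit vectors.
rng = np.random.default_rng(1)
H_T = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / math.sqrt(2)
qpsk = [complex(a, b) / math.sqrt(2) for a in (1, -1) for b in (1, -1)]
tx_vectors = [np.eye(3)[:, j] * s for j in range(3) for s in qpsk]
best = fd_tas(H_T, 3, tx_vectors)   # indices of the selected antenna subset
```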

3.2. TAS Based on ML for FGSM

In this work, TAS in FGSM is carried out with the use of ML-based methods. An ML-based classification architecture is depicted in Figure 2. At first, source data are normalized using the mean normalization technique. Pre-processed data are then used to extract features. Several ML methods are then used to train the model using the chosen features and class labels. A trained model can be used to predict the class labels for new data using data that have not previously been fed into the system. The process for the execution of ML-based TAS for FGSM is shown in Figure 3. If CQI is present at the UE, the class labels obtained by the ML algorithms are utilized to choose N t x antennas from N T . Algorithm 1 shows the processes that must be followed for ML-based TAS. This section also includes details on the dataset development process and the supervised algorithms that are employed.
Algorithm 1 Primary steps involved in ML-based TAS
1. Establish a mapping between the antenna subsets and class labels.
2. Pre-process the input data using the mean normalization technique.
3. Identify the key performance indicator (KPI) for each input sample and determine the class label $c$ on the basis of the KPI.
4. Employ the normalized feature matrix $\hat{\mathbf{H}}_T$ and the class label $c$ to create a learning system utilizing the SVM, NB, KNN, and DT algorithms.
5. The learning system converts the test channel $\mathbf{H}_T^{new}$ to a normalized feature vector $f_{new}$ and determines the class label $c$.
6. Compare the class label acquired in Step 5 with the map created in Step 1 to identify the antenna subset.

3.3. Generation of a Dataset

Elements of channel matrices are used to generate the training dataset. A multiclass classification technique is then used to divide the channel matrix into numerous corresponding classes, each of which represents the optimal antenna subset. ML methods are used to build the model. The developed models can also be used to predict the class labels of test data.
Four major steps are involved in the collection of the dataset: (1) Generation of training data. (2) Extraction of feature vectors. (3) Evaluation of KPI. (4) Use of the KPI information to obtain the class labels.
1. Generation of training data: To form the training data, $N$ channel matrices of size $N_{rx} \times N_T$ are randomly generated:
$$\mathcal{H} = \left\{ \mathbf{H}_T^1, \mathbf{H}_T^2, \ldots, \mathbf{H}_T^N \right\}.$$
2. Extraction of feature vectors: The training performance is generally influenced by the feature vectors. In this work, four different features, i.e., $|h_{i,j}^n|$, $\angle h_{i,j}^n$, the channel gain $|h_{i,j}^n|^2$, and the squared element norm of the correlation matrix $\left\| (\mathbf{H}_T^n)^H (\mathbf{H}_T^n) \right\|^2$, are considered to train the models. From each sample $\mathbf{H}_T^n$, a feature vector of size $N_Q = N_{rx} \times N_T$ is extracted. The four features are extracted as follows:
(a) $|h_{i,j}^n|$: The absolute value of each element of the channel matrix is considered a dominant feature and is used throughout this work. The feature vector in this case is given as follows:
$$f_n = \left[ |h_{1,1}^n|, \ldots, |h_{1,N_T}^n|, \ldots, |h_{i,j}^n|, \ldots, |h_{N_{rx},N_T}^n| \right], \quad n \in \{1, 2, \ldots, N\},$$
where $h_{i,j}^n$ is the $(i,j)$th complex element of $\mathbf{H}_T^n$, whose absolute value is calculated as follows:
$$|h_{i,j}^n| = \sqrt{ \Re\left( h_{i,j}^n \right)^2 + \Im\left( h_{i,j}^n \right)^2 }.$$
(b) $\angle h_{i,j}^n$: This is the second feature extracted from the elements of $\mathbf{H}_T^n$ to build the model. The feature vector is constructed as follows:
$$f_n = \left[ \angle h_{1,1}^n, \ldots, \angle h_{1,N_T}^n, \ldots, \angle h_{i,j}^n, \ldots, \angle h_{N_{rx},N_T}^n \right], \quad n \in \{1, 2, \ldots, N\},$$
where
$$\angle h_{i,j}^n = \tan^{-1} \left( \frac{\Im\left( h_{i,j}^n \right)}{\Re\left( h_{i,j}^n \right)} \right).$$
(c) Channel gain $|h_{i,j}^n|^2$: The channel gain of each individual element of the matrix is considered another feature for training the models; its feature vector is given by:
$$f_n = \left[ |h_{1,1}^n|^2, \ldots, |h_{1,N_T}^n|^2, \ldots, |h_{i,j}^n|^2, \ldots, |h_{N_{rx},N_T}^n|^2 \right], \quad n \in \{1, 2, \ldots, N\}.$$
(d) Squared element norm of $(\mathbf{H}_T^n)^H (\mathbf{H}_T^n)$: In addition, the similarity between two distinct column vectors is a major feature, since similar column vectors introduce detection ambiguity when estimating the transmit antenna index. The feature is extracted from
$$\left\| (\mathbf{H}_T^n)^H (\mathbf{H}_T^n) \right\|^2 = \left\| \left[ (\mathbf{h}_1^n)^H, (\mathbf{h}_2^n)^H, \ldots, (\mathbf{h}_{N_T}^n)^H \right]^T \left[ \mathbf{h}_1^n, \mathbf{h}_2^n, \ldots, \mathbf{h}_{N_T}^n \right] \right\|^2.$$
To avoid bias, the features are normalized using
$$\bar{f}_n(q) = \frac{ f_n(q) - E\left[ f(q) \right] }{ \max\left( f(q) \right) - \min\left( f(q) \right) }, \quad n = 1, \ldots, N, \quad q = 1, \ldots, N_Q.$$
3. Evaluation of KPI: The input data samples are labeled using a KPI. The FD is considered the KPI in this work and is calculated using (14).
4. Class labeling: Each feature vector is associated with a label. A transmit antenna subset from $S = \{S_1, S_2, \ldots, S_l, \ldots, S_{N_s}\}$ is mapped to each feature vector. The feature vector corresponding to each channel matrix $\hat{\mathbf{H}}_T^n$ is mapped to a label $c_n \in S$. Hence, the class label vector is $\mathbf{c} = [c_1, c_2, \ldots, c_N]$.
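The data generation, absolute-value feature extraction, and mean normalization steps above can be sketched as follows (Python/NumPy; the toy array sizes and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
N, Nrx, N_T = 5, 4, 4   # toy sizes: 5 channel samples, 4x4 channels

# Step 1: N random complex channel matrices H_T^n of size Nrx x N_T.
channels = (rng.standard_normal((N, Nrx, N_T))
            + 1j * rng.standard_normal((N, Nrx, N_T))) / np.sqrt(2)

# Step 2, feature (a): |h_ij| flattened row-wise into an N x N_Q matrix.
F = np.abs(channels).reshape(N, Nrx * N_T)

# Mean normalization: subtract the per-feature mean, divide by the range.
F_bar = (F - F.mean(axis=0)) / (F.max(axis=0) - F.min(axis=0))
```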

3.4. ML Methods

This section examines four different supervised ML methods utilized for TAS.

3.4.1. SVM

An SVM model consists of hyperplanes separating representations of distinct classes in a multidimensional space [22]. The SVM iteratively constructs the hyperplane to reduce errors; it divides the dataset into classes so as to determine the maximum marginal hyperplane. TAS is a multiclass classification problem. The one-versus-one and one-versus-all (OVA) approaches are two common ways of creating multiclass SVM classifiers. This work uses OVA to overcome the complexity drawback of FD-TAS. OVA creates $R$ binary SVMs for an $R$-class problem. Specifically, for TAS-assisted FGSM, $R$ SVM models are generated as follows:
To address the two-class classification problem for the attribute-label combinations $(f_n, c_n)$, where $n = 1, \ldots, N$ and $r \in \{1, 2, \ldots, R\}$, the $r$th SVM is constructed. Here, $f_n$ represents the $n$th row of the normalized attribute matrix $\hat{\mathbf{H}}_T$. Samples belonging to the $r$th class are assigned positive labels, while those belonging to the other classes are assigned negative labels.
The following optimization problem is solved for the two-class SVM:
$$\min_{\psi_r, z_r, \lambda_r} \frac{1}{2} \psi_r^T \psi_r + D_r \sum_{n=1}^{N} \lambda_n^r, \quad \text{s.t.} \quad \psi_r^T \phi(f_n) + z_r \geq 1 - \lambda_n^r \ \text{if} \ c_n = r, \quad \psi_r^T \phi(f_n) + z_r \leq -1 + \lambda_n^r \ \text{if} \ c_n \neq r, \quad \lambda_n^r \geq 0, \quad r \in \{1, 2, \ldots, R\},$$
where $z_r$ and $\psi_r$ are the linear parameters, and $D_r$ and $\lambda_n^r$ are the regularization and penalty parameters, respectively, for the $r$th SVM. By applying the kernel function $\phi(\cdot)$ to an attribute vector in (23), the SVM maps it into a higher-dimensional space.
In this work, the radial basis function (RBF) kernel is utilized to fit the model, which is specified by [22]
$$\phi(f_i, f_j) = e^{- \frac{\left\| f_i - f_j \right\|^2}{2 \sigma^2}}.$$
By calculating the parameters $\psi_r$ and $z_r$ for all $r \in \{1, 2, \ldots, R\}$, the prediction functions obtained are as follows:
$$\psi_1^T \phi(f_{new}) + z_1, \quad \ldots, \quad \psi_R^T \phi(f_{new}) + z_R.$$
For a new observation $\mathbf{H}_T^{new}$, we utilize (17) to retrieve its feature vector $f_{new}$, and (26) to obtain its label:
$$l_{SVM} = \arg \max_{r \in \{1, 2, \ldots, R\}} \psi_r^T \phi(f_{new}) + z_r.$$
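A toy sketch of the RBF kernel and the OVA decision rule follows (pure Python; the "scorers" are stand-ins for the trained binary SVM decision functions $\psi_r^T \phi(\cdot) + z_r$, built here from assumed class prototypes rather than actual training):

```python
import math

def rbf_kernel(fi, fj, sigma=1.0):
    """RBF kernel: exp(-||fi - fj||^2 / (2 sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(fi, fj))
    return math.exp(-sq / (2 * sigma ** 2))

def ova_predict(f_new, scorers):
    """One-versus-all rule: the class with the largest decision value wins."""
    scores = [g(f_new) for g in scorers]
    return max(range(len(scores)), key=scores.__getitem__)

# Assumed class "prototypes" standing in for trained models (illustration only).
prototypes = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
scorers = [lambda f, p=p: rbf_kernel(f, p) for p in prototypes]
label = ova_predict((0.9, 1.1), scorers)   # closest to prototype 1
```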

3.4.2. NB

An NB classifier is a supervised ML technique that uses Bayes' theorem to solve classification problems [21,27]. Consider a classification task with the features $f_1, \ldots, f_N$ and $N_s$ feasible classes. One of the $N_s$ classes must be assigned to the new instance $\mathbf{H}_T^{new}$. The $N_s$ conditional probabilities $p(l \mid f_{new})$, $l = 1, \ldots, N_s$, are computed by the NB approach. The calculation of the joint probability becomes simpler if all features are independent. According to Bayes' theorem, the conditional probability is calculated as follows:
$$p(l \mid f_{new}) = \frac{ p(l) \, p(f_{new} \mid l) }{ p(f_{new}) }.$$
In (28), $p(l)$ is the prior probability of the $l$th class, $p(f_{new} \mid l)$ is the likelihood, and $p(f_{new})$ is the evidence probability. The evidence probability is the same for all classes, so this term can be discarded. The class with the greatest posterior probability is selected:
$$l_{NB} = \arg \max_{l \in \{1, 2, \ldots, N_s\}} p(l) \, p(f_{new} \mid l).$$
Equation (29) can also be written as follows:
$$l_{NB} = \arg \max_{l \in \{1, 2, \ldots, N_s\}} p(l) \prod_{q=1}^{N_Q} p\left( f_{new}(q) \mid l \right),$$
where $f_{new}(q)$ is the $q$th feature of the new observation $f_{new}$.
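The product rule above can be sketched with per-feature Gaussian likelihoods (an assumption for illustration; in practice the class densities are fitted to the training data):

```python
import math

def nb_predict(f_new, priors, likelihoods):
    """Pick argmax_l p(l) * prod_q p(f_new[q] | l), in the log domain
    for numerical stability. likelihoods[l][q] returns p(f_new[q] | l)."""
    best, best_lp = None, -math.inf
    for l, prior in enumerate(priors):
        lp = math.log(prior) + sum(math.log(likelihoods[l][q](f_new[q]))
                                   for q in range(len(f_new)))
        if lp > best_lp:
            best_lp, best = lp, l
    return best

def gauss(mu, var):
    """Per-feature Gaussian density (the assumed likelihood model)."""
    return lambda x: math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Two classes, two features: class 0 centered at 0, class 1 centered at 3.
likelihoods = [[gauss(0.0, 1.0), gauss(0.0, 1.0)],
               [gauss(3.0, 1.0), gauss(3.0, 1.0)]]
label = nb_predict([2.8, 3.2], [0.5, 0.5], likelihoods)   # -> 1
```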

3.4.3. DT

Tree-structured classifiers classify a dataset using internal nodes as attributes, branches as decision rules, and leaves as the inference [21,28]. Two node types exist in a decision tree: the leaf node and the decision node. Decision nodes make decisions and have multiple branches, whereas leaf nodes output the result of the decision process and contain no additional branches. In a decision tree, an initial question is asked, and based on the answer (yes/no), subtrees are constructed. The attribute value with the highest information gain is chosen for further branching. The probability $p_l$ of a sample belonging to a specific class $l$ is computed using
$$p_l = \frac{ freq(l, \mathcal{T}) }{ |\mathcal{T}| },$$
where $|\mathcal{T}|$ is the number of occurrences in $\mathcal{T}$ and $freq(l, \mathcal{T})$ is the number of samples belonging to class $l$. The entropy of $\mathcal{T}$ is given as
$$EN(\mathcal{T}) = - \sum_{l=1}^{N_s} p_l \log_2 p_l.$$
The dataset $\mathcal{T}$ is split into $X$ partitions depending on the domain values of a non-class attribute $G_l$. The entropy associated with this split is determined using
$$EN(G_l, \mathcal{T}) = \sum_{k=1}^{X} \frac{ |\mathcal{T}_k| }{ |\mathcal{T}| } EN(\mathcal{T}_k).$$
For attribute $G_l$, the information gain is computed using
$$IFG(G_l) = EN(\mathcal{T}) - EN(G_l, \mathcal{T}).$$
The information gain is computed for every feature-value combination. When splitting the root node, the feature-value combination that yields the highest information gain is chosen. The process is repeated at all decision nodes until the maximum depth is reached. Since the gain does not increase beyond a certain depth, the depth of the DT is treated as a hyperparameter in the learning process.
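The entropy and information-gain computations can be sketched as follows (pure Python; the toy labels are ours):

```python
import math
from collections import Counter

def entropy(labels):
    """EN(T) = -sum_l p_l * log2(p_l)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, partitions):
    """IFG = EN(T) - sum_k (|T_k| / |T|) * EN(T_k) for a candidate split."""
    n = len(labels)
    weighted = sum(len(p) / n * entropy(p) for p in partitions)
    return entropy(labels) - weighted

labels = ['a', 'a', 'b', 'b']
gain_perfect = information_gain(labels, [['a', 'a'], ['b', 'b']])   # 1.0 bit
gain_useless = information_gain(labels, [['a', 'b'], ['a', 'b']])   # 0.0 bit
```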

3.4.4. KNN

KNN is a lazy algorithm; it does not immediately begin learning from the data [22]. Instead, it stores the data and acts only when it is time to classify. For a new instance $\mathbf{H}_T^{new}$, its feature vector $f_{new}$ is acquired and then normalized to obtain $\bar{f}_{new}$. The KNN classifier finds the $K$ closest data points among the $N$ data points. The KNN classifier performs the following main steps:
Step 1: Determine $K$ through k-fold cross-validation such that the model has low variance and reasonably good accuracy.
Step 2: Calculate the distance between the new observation $\bar{f}_{new}$ and each data point in the dataset:
$$d\left( f_n, \bar{f}_{new} \right) = \left\| f_n - \bar{f}_{new} \right\|^2.$$
Step 3: Find the $K$ nearest neighbors.
Step 4: Count the data samples belonging to each class among the $K$ nearest neighbors.
Step 5: Allocate the test data sample $\mathbf{H}_T^{new}$ to the class label that obtains the maximum number of votes ($l_{KNN}$).
Despite its apparent simplicity, the cost of computing the distance between data samples and the time complexity increase with the dataset size. One of the disadvantages of KNN is that it works well for small datasets but becomes more difficult and time-expensive with larger datasets. The value of $K$ must always be identified, which can be difficult at times.
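Steps 2 to 5 can be sketched as a brute-force classifier (pure Python; the toy dataset is ours):

```python
from collections import Counter

def knn_classify(f_new, data, labels, K=3):
    """Steps 2-5: squared Euclidean distances, K nearest, majority vote."""
    dists = [sum((a - b) ** 2 for a, b in zip(f, f_new)) for f in data]
    nearest = sorted(range(len(data)), key=dists.__getitem__)[:K]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D feature vectors with two class labels.
data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (3.0, 3.0), (3.1, 2.9)]
labels = ['s1', 's1', 's1', 's2', 's2']
label = knn_classify((0.05, 0.05), data, labels, K=3)   # -> 's1'
```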

4. Complexity Analysis

Table 4 shows the complexity analysis of traditional FD-TAS and the distinct supervised TAS schemes. $\mathbf{H}_T(\boldsymbol{\chi}_i - \boldsymbol{\chi}_j)$ requires $N_{rx} \times N_T$ computations, and $\left( \mathbf{H}_T(\boldsymbol{\chi}_i - \boldsymbol{\chi}_j) \right)^H \left( \mathbf{H}_T(\boldsymbol{\chi}_i - \boldsymbol{\chi}_j) \right)$ needs $(N_{rx} \times N_T)^2$ computations. These computations are carried out for only one value of the FD; the process is repeated $N_s \cdot 2^{\alpha_{FGSM}} \left( 2^{\alpha_{FGSM}} - 1 \right)$ times over all possible antenna subsets. The number of computations required for TAS using the traditional method is therefore prohibitive. For the ML techniques, the complexity depends on the attribute vector length $N_Q$ and is much lower than that of traditional FD-TAS for higher-order configurations, i.e., for larger $N_T$, $N_{rx}$, and $\alpha_{FGSM}$. KNN is the least computationally efficient of the ML approaches because it stores the training data. This analysis is conducted for $B = 2 \times 10^5$ bits. By decreasing the length of the attribute vector, the complexity of the ML-based TAS practices can be reduced further. Traditional FD-TAS becomes increasingly complex as these parameters grow, which motivates the use of ML techniques for TAS to simplify the system.

5. Discussion on the Simulation Results

This section presents an ABER analysis of the classic FD-TAS method, the four distinct ML-based TAS methods, and the FGSM-NTAS scheme. Table 5 lists the parameters used in this investigation. All ML models are trained in MATLAB 2022b using the Statistics and ML toolbox. All the simulation results are generated for distinct values of $N_T$, $N_{tx}$, $N_{rx}$, and $M$. FD-TAS and ML-based TAS are labeled as $(N_T, N_{tx}, N_{rx}, M)$ in the figures, while FGSM-NTAS is labeled as $(N_{tx}, N_{rx}, M)$.
From Figure 4, it is observed that with an increased dataset size, ML models show an improvement in classification accuracy. Since the accuracy of classification does not improve any further after 10,000 samples, all models are trained for the same number of samples. Table 6 compares the classification accuracies of several ML algorithms with different sample counts. SVM has the highest accuracy in classification (71.1%), NB has the second greatest accuracy (70.2%), followed by KNN (67.2%), and DT has the lowest classification accuracy (43.8%).
Cross-validation is an approach that allows the model to learn from many train–test splits. This provides a better idea of how well the trained model will perform on data that have not been seen before. In contrast, hold-out is based on a single train–test split. As a result, the hold-out method’s score is influenced by the way the data are divided into train and test sets. Figure 5 and Table 7 show how altering the value of k-folds during cross-validation improves classification accuracy. The highest classification accuracy is observed for k = 10. As a result, the value of k is set to 10.
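A minimal sketch of how the k-fold splits behind this comparison are formed (pure Python; contiguous folds without shuffling, an assumption for simplicity):

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k folds; each fold serves once as the
    validation set while the remaining folds form the training set."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return [(sum(folds[:i] + folds[i + 1:], []), folds[i]) for i in range(k)]

splits = kfold_indices(10, 5)
train0, val0 = splits[0]   # val0 == [0, 1]; train0 covers the other 8 indices
```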
After normalization, these models are trained on the four distinct features discussed in Section 3.3. The complexities of all features, based on their attribute vector lengths, are listed in Table 8. The worst-performing attribute is $\angle h_{i,j}^n$. Attribute 4 performs well only for DT, but its complexity increases with higher-order configurations, so it is discarded. Combining features lengthens the feature vector and increases model complexity; hence, feature combination is avoided in this study. The absolute-value feature outperforms the other features for SVM, NB, and KNN, and its complexity is low; thus, only the absolute-value feature is utilized in this work.
After conventional FD-TAS, the SVM-based TAS scheme outperforms the other ML-based schemes, as shown in Figure 6. Compared to the FGSM-NTAS method, the SVM-based TAS scheme cuts the SNR requirement by ∼2 dB. This analysis is conducted for N_T = 4, N_tx = 3, N_rx = 4, and M = 4. The ABER performance of the system is re-examined in Figure 7 with the constellation order increased to 16. For larger values of M, the SNR requirement for all ML-based methods increases; for FGSM-NTAS and conventional FD-TAS, the SNR requirements increase by ∼1.3 dB and ∼1.61 dB, respectively. SVM outperforms the FGSM-NTAS system in this case as well, with a gain of ∼2 dB.
Figure 8 analyzes the ABERs of the different TAS schemes for the N_T = 5, N_tx = 4, N_rx = 4, and M = 4 configuration. SVM-based TAS is found to outperform FGSM-NTAS by ∼1.42 dB in SNR gain. The analysis is repeated in Figure 9 with N_T increased to 6. For traditional FD-TAS, the required SNR drops by ∼1.28 dB; for the SVM-, NB-, KNN-, and DT-based TAS schemes, the SNR requirements drop by ∼0.78 dB, ∼0.66 dB, ∼0.68 dB, and ∼0.59 dB, respectively. Compared to the FGSM-NTAS system, the SVM-based TAS technique gains ∼2.2 dB. Table 9 lists the SNR gains achieved by the proposed ML-based TAS practices over traditional FGSM-NTAS.

6. Conclusions

Four distinct ML-based TAS techniques are suggested and implemented for a small-scale MIMO-based FGSM system in this work. For different values of N_T, N_tx, and M, the recommended algorithms' ABER performance is compared to those of FD-TAS and FGSM-NTAS. The SVM-based TAS scheme surpasses the other proposed ML-based TAS approaches in the simulations. Although its ABER performance is poorer than that of FD-TAS, the SVM-based TAS scheme is far more computationally efficient. SVM-based TAS attains the highest classification accuracy (∼71.1%) and a minimal SNR gain of ∼2.2 dB over FGSM-NTAS. As a result, it may be a viable option for future-generation networks. These schemes, as well as DL-based architectures, can be applied to advanced SM variants in the future, and the proposed TAS strategies could be extended to massive MIMO systems.

Author Contributions

All authors contributed to the manuscript. Conceptualization: H.K.J. and V.B.K.; article gathering and sorting: H.K.J., V.B.K., A.M., A.L.I., P.S. and A.C.; resources: H.K.J., V.B.K., A.M., A.L.I., P.S. and A.C.; supervision: V.B.K., A.M., A.L.I., P.S. and A.C.; validation: V.B.K., A.M., A.L.I., P.S. and A.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work of Agbotiname Lucky Imoize is supported in part by the Nigerian Petroleum Technology Development Fund (PTDF) and in part by the German Academic Exchange Service (DAAD) through the Nigerian-German Postgraduate Program under Grant 57473408.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
5G    Fifth-Generation
6G    Sixth-Generation
ABER    Average Bit Error Rate
AWGN    Additive White Gaussian Noise
DL    Deep Learning
DT    Decision Tree
EE    Energy Efficiency
FD    Free Distance
FGSM    Fully Generalized Spatial Modulation
IM    Index Modulation
KNN    K-Nearest Neighbor
MIMO    Multiple-Input Multiple-Output
ML    Machine Learning
NB    Naïve Bayes
QAM    Quadrature Amplitude Modulation
SE    Spectral Efficiency
SM    Spatial Modulation
SNR    Signal-to-Noise Ratio
SVM    Support Vector Machine
TAS    Transmit Antenna Selection
UE    User Equipment

References

1. Turker, I.; Tan, S.O. Machine Learning vs. Deep Learning in 5G Networks–A Comparison of Scientific Impact. arXiv 2022, arXiv:2210.07327.
2. De Alwis, C.; Kalla, A.; Pham, Q.V.; Kumar, P.; Dev, K.; Hwang, W.J.; Liyanage, M. Survey on 6G frontiers: Trends, applications, requirements, technologies and future research. IEEE Open J. Commun. Soc. 2021, 2, 836–886.
3. Jadhav, H.K.; Kumaravelu, V.B. Deep Learning-Assisted Transmit Antenna Classifiers for Fully Generalized Spatial Modulation: Online Efficiency Replaces Offline Complexity. Appl. Sci. 2023, 13, 5134.
4. Lee, H.; Lee, B.; Yang, H.; Kim, J.; Kim, S.; Shin, W.; Shim, B.; Poor, H.V. Towards 6G hyper-connectivity: Vision, challenges, and key enabling technologies. J. Commun. Networks 2023, 25, 344–354.
5. Shahjalal, M.; Kim, W.; Khalid, W.; Moon, S.; Khan, M.; Liu, S.; Lim, S.; Kim, E.; Yun, D.W.; Lee, J.; et al. Enabling technologies for AI empowered 6G massive radio access networks. ICT Express 2023, 9, 341–355.
6. Wang, Z.; Du, Y.; Wei, K.; Han, K.; Xu, X.; Wei, G.; Tong, W.; Zhu, P.; Ma, J.; Wang, J.; et al. Vision, application scenarios, and key technology trends for 6G mobile communications. Sci. China Inf. Sci. 2022, 65, 151301.
7. Chowdhury, M.Z.; Shahjalal, M.; Ahmed, S.; Jang, Y.M. 6G wireless communication systems: Applications, requirements, technologies, challenges, and research directions. IEEE Open J. Commun. Soc. 2020, 1, 957–975.
8. Chochliouros, I.P.; Kourtis, M.A.; Spiliopoulou, A.S.; Lazaridis, P.; Zaharis, Z.; Zarakovitis, C.; Kourtis, A. Energy efficiency concerns and trends in future 5G network infrastructures. Energies 2021, 14, 5392.
9. Mesleh, R.Y.; Haas, H.; Sinanovic, S.; Ahn, C.W.; Yun, S. Spatial modulation. IEEE Trans. Veh. Technol. 2008, 57, 2228–2241.
10. Basar, E.; Wen, M.; Mesleh, R.; Di Renzo, M.; Xiao, Y.; Haas, H. Index modulation techniques for next-generation wireless networks. IEEE Access 2017, 5, 16693–16746.
11. Castillo-Soria, F.R.; Cortez-González, J.; Ramirez-Gutierrez, R.; Maciel-Barboza, F.M.; Soriano-Equigua, L. Generalized quadrature spatial modulation scheme using antenna grouping. ETRI J. 2017, 39, 707–717.
12. Asaad, S.; Rabiei, A.M.; Müller, R.R. Massive MIMO with antenna selection: Fundamental limits and applications. IEEE Trans. Wirel. Commun. 2018, 17, 8502–8516.
13. Gudla, V.V.; Kumaravelu, V.B.; Murugadass, A. Transmit antenna selection strategies for spectrally efficient spatial modulation techniques. Int. J. Commun. Syst. 2022, 35, e5099.
14. Rajashekar, R.; Hari, K.; Hanzo, L. Antenna selection in spatial modulation systems. IEEE Commun. Lett. 2013, 17, 521–524.
15. Pillay, N.; Xu, H. Comments on "Antenna Selection in Spatial Modulation Systems". IEEE Commun. Lett. 2013, 17, 1681–1683.
16. Zhou, Z.; Ge, N.; Lin, X. Reduced-complexity antenna selection schemes in spatial modulation. IEEE Commun. Lett. 2013, 18, 14–17.
17. Pillay, N.; Xu, H. Low-complexity transmit antenna selection schemes for spatial modulation. IET Commun. 2014, 9, 239–248.
18. Junior, E.N.; Theis, G.; Santos, E.L.d.; Mariano, A.A.; Brante, G.; Souza, R.D.; Taris, T. Energy Efficiency Analysis of MIMO Wideband RF Front-End Receivers. Sensors 2020, 20, 7070.
19. Bereyhi, A.; Asaad, S.; Mueller, R.R. Stepwise transmit antenna selection in downlink massive multiuser MIMO. In Proceedings of the WSA 2018, 22nd International ITG Workshop on Smart Antennas, VDE, Bochum, Germany, 14–16 March 2018; pp. 1–8.
20. Yang, P.; Xiao, Y.; Li, L.; Tang, Q.; Yu, Y.; Li, S. Link adaptation for spatial modulation with limited feedback. IEEE Trans. Veh. Technol. 2012, 61, 3808–3813.
21. Jadhav, H.K.; Kumaravelu, V.B. Transmit antenna selection for spatial modulation based on machine learning. Phys. Commun. 2022, 55, 101904.
22. Yang, P.; Zhu, J.; Xiao, Y.; Chen, Z. Antenna selection for MIMO system based on pattern recognition. Digit. Commun. Networks 2019, 5, 34–39.
23. He, D.; Liu, C.; Quek, T.Q.; Wang, H. Transmit antenna selection in MIMO wiretap channels: A machine learning approach. IEEE Wirel. Commun. Lett. 2018, 7, 634–637.
24. Yang, P.; Xiao, Y.; Xiao, M.; Guan, Y.L.; Li, S.; Xiang, W. Adaptive spatial modulation MIMO based on machine learning. IEEE J. Sel. Areas Commun. 2019, 37, 2117–2131.
25. Liu, H.; Xiao, Y.; Yang, P.; Fu, J.; Li, S.; Xiang, W. Transmit Antenna Selection for Full-Duplex Spatial Modulation Based on Machine Learning. IEEE Trans. Veh. Technol. 2021, 70, 10695–10708.
26. Altın, G.; Arslan, İ.A. Joint transmit and receive antenna selection for spatial modulation systems using deep learning. IEEE Commun. Lett. 2022, 26, 2077–2080.
27. Shi, Y.; Lu, X.; Niu, Y.; Li, Y. Efficient jamming identification in wireless communication: Using small sample data driven naive bayes classifier. IEEE Wirel. Commun. Lett. 2021, 10, 1375–1379.
28. León, J.P.A.; de la Cruz Llopis, L.J.; Rico-Novella, F.J. A Machine Learning Based Distributed Congestion Control Protocol for Multi-Hop Wireless Networks. Comput. Netw. 2023, 231, 109813.
Figure 1. Pictorial representation of TAS for FGSM.
Figure 2. Workflow of supervised learning algorithms during the training and testing process.
Figure 3. ML-supported TAS for FGSM.
Figure 4. Comparing the classification accuracy vs. dataset size for several ML algorithms.
Figure 5. Comparing the classification accuracies of several ML algorithms for distinct values of k.
Figure 6. Comparison between ABER and SNR of FGSM for distinct TAS practices ( N T = 4, N t x = 3, N r x = 4, and M = 4).
Figure 7. Comparison between ABER and SNR of FGSM for distinct TAS practices ( N T = 4, N t x = 3, N r x = 4, and M = 16).
Figure 8. Comparison between ABER and SNR of FGSM for distinct TAS practices ( N T = 5, N t x = 4, N r x = 4, and M = 4).
Figure 9. Comparison between ABER and SNR of FGSM for distinct TAS practices ( N T = 6, N t x = 4, N r x = 4, and M = 4).
Table 1. A sample mapping of the constellation bits for M = 4.
Data Bits | Index of Symbols | Transmitted Symbols
00 | s1 | 1/√2 + (1/√2)j
01 | s2 | 1/√2 − (1/√2)j
10 | s3 | −1/√2 + (1/√2)j
11 | s4 | −1/√2 − (1/√2)j
Table 2. A sample mapping of the spatial bits for N t x = 3.
Antenna Bits | Antenna Index
00 | 1
01 | 2
10 | 3
11 | 1, 2
Table 3. The TAS candidate set mapping for N T = 4 , N t x = 3 , N r x = 4 .
TAS Candidate Set | Possible Antenna Subset
S1 | {1, 2, 3}
S2 | {1, 2, 4}
S3 | {1, 3, 4}
S4 | {2, 3, 4}
Table 4. Comparison of distinct TAS techniques based on their complexity order.
TAS Scheme | Order of Complexity | Configuration 1 (N_tx = N_rx = 4, N_T = 5, M = 4) | Configuration 2 (N_tx = N_rx = 6, N_T = 7, M = 16)
FD-TAS | O(N_s N_T² N_rx² 2^α_FGSM (2^α_FGSM − 1)) [20] | 1.98 × 10⁶ | 3.23 × 10⁹
SVM-based TAS | O(N_Q² + N_Q) [22] | 420 | 1806
NB-based TAS | O(N_Q) [21,27] | 20 | 42
DT-based TAS | O(N_Q log₂ N_Q) [21,28] | 86.44 | 226.48
KNN-based TAS | O(B N_Q) [22] | 4 × 10⁶ | 8.4 × 10⁶
Table 5. Simulation parameters.
Parameters | Values
N_T | 4, 5, 6
N_tx | 3, 4
N_rx | 4
M | 4, 16
Constellation mapping | Quadrature amplitude modulation (QAM)
α_FGSM | 4, 5, 6
B | 2 × 10⁵
Fading environment | Uncorrelated Rayleigh flat fading
Table 6. Comparing the classification accuracy vs. dataset size for several ML algorithms.
Number of Samples | DT | NB | KNN | SVM
1000 | 38.2% | 62.8% | 57.9% | 67.6%
3000 | 40.9% | 67.2% | 65.3% | 70%
5000 | 41.7% | 69.2% | 65.6% | 70.3%
10,000 | 43.8% | 70.2% | 67.2% | 71.1%
Table 7. Comparing the distinct TAS techniques based on their accuracy performances.
ML-Based TAS Scheme | k = 3 | k = 5 | k = 10
SVM-based TAS [22] | 70.1% | 70.7% | 71.1%
NB-based TAS [21,27] | 69.5% | 69.9% | 70.2%
DT-based TAS [21,28] | 42.9% | 43.3% | 43.8%
KNN-based TAS [22] | 66.2% | 66.8% | 67.2%
Table 8. Complexity associated with different features based on the feature vector length.
Selected Feature | Length of the Feature Vector (N_Q) | Configuration 1 (N_rx = 2, N_T = 6) | Configuration 2 (N_rx = 4, N_T = 10)
h_{i,j}^n | N_rx × N_T | 12 | 40
|h_{i,j}^n| | N_rx × N_T | 12 | 40
|h_{i,j}^n|² | N_rx × N_T | 12 | 40
(h_{i,j}^n)^H h_{i,j}^n | N_T × N_T | 36 | 100
Table 9. SNR gain achieved by distinct TAS practices over FGSM-NTAS.
ML-Based TAS Scheme | Improvement in SNR gain (dB)
 | Config. 1 (N_T = 4, N_tx = 3, N_rx = 4, M = 4) | Config. 2 (N_T = 4, N_tx = 3, N_rx = 4, M = 16) | Config. 3 (N_T = 5, N_tx = 4, N_rx = 4, M = 4) | Config. 4 (N_T = 6, N_tx = 4, N_rx = 4, M = 4)
SVM-based TAS [22] | 1.76 | 1.89 | 1.44 | 2.2
NB-based TAS [21,27] | 1.49 | 1.65 | 1.21 | 1.86
DT-based TAS [21,28] | 0.59 | 1.08 | 0.46 | 1.03
KNN-based TAS [22] | 1.27 | 1.39 | 0.96 | 1.61
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
