Review

Emerging Technologies for 6G Communication Networks: Machine Learning Approaches

by Annisa Anggun Puspitasari 1, To Truong An 1, Mohammed H. Alsharif 2 and Byung Moo Lee 1,*
1 Department of Intelligent Mechatronics Engineering and Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea
2 Department of Electrical Engineering, College of Electronics and Information Engineering, Sejong University, Seoul 05006, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2023, 23(18), 7709; https://doi.org/10.3390/s23187709
Submission received: 2 August 2023 / Revised: 29 August 2023 / Accepted: 4 September 2023 / Published: 6 September 2023
(This article belongs to the Special Issue Communication, Sensing and Localization in 6G Systems)

Abstract:
The fifth generation (5G) of mobile networks achieved tremendous success, which raises high expectations for the next generation, as evidenced by the sixth generation (6G) key performance indicators, which include ultra-reliable low latency communication (URLLC), extremely high data rates, high energy and spectral efficiency, ultra-dense connectivity, integrated sensing and communication, and secure communication. Emerging technologies such as intelligent reflecting surfaces (IRSs), unmanned aerial vehicles (UAVs), non-orthogonal multiple access (NOMA), and others can provide communications for massive numbers of users and thus help address the demanding 6G requirements, but they introduce high overhead and computational complexity. Optimizing system functionality with these new technologies has proven difficult for conventional mathematical solutions; therefore, machine learning (ML) algorithms and their derivatives could be the right solution. The present study aims to offer a thorough and organized overview of the various ML, deep learning (DL), and reinforcement learning (RL) algorithms applied to the emerging 6G technologies, motivated by the lack of research on the significance of these algorithms in this specific context. This study examines the potential of ML algorithms and their derivatives in optimizing emerging technologies to align with the visions and requirements of the 6G network, which is crucial for ushering in a new era of communication marked by substantial advancements. It also highlights potential challenges for wireless communications in 6G networks and offers insights into ML algorithms and their derivatives as possible solutions. Finally, the survey concludes that integrating ML algorithms and emerging technologies will play a vital role in developing 6G networks.

1. Introduction

Owing to the success of fifth generation (5G) networks in delivering fast, high-quality signal transmission compared with the previous generation, the world now expects much faster and better communication from the sixth generation (6G), the next generation of networks. However, the attractive and high-demand applications envisioned for 6G impose challenging key performance indicators (KPIs) and constraints on communication networks, which were also addressed at the International Telecommunication Union Radiocommunication Sector (ITU-R) workshop on International Mobile Telecommunications (IMT) in terms of several usage scenarios and key capability indicators for the 2030 communication network and beyond. These indicators include ultra-reliable low latency communication (URLLC), extremely high data rate, substantially high energy and spectral efficiency, ultra-dense connectivity, secure communication, and massive machine-type communication [1,2,3,4,5]. One of the potential solutions to meet all these requirements is to develop emerging technology-assisted wireless communications, which researchers have proposed over the past few years [6,7,8].
Emerging technologies have significantly improved the quality of wireless communications. As shown in Figure 1, emerging technologies can help wireless communication systems forward and improve the transmitted signal. Implementing emerging technologies can provide communication in non-line-of-sight (NLOS) areas [9], dead zones [10], disaster environments [11], and even underground and underwater scenarios [12]. Several researchers have developed studies, such as implementing emerging technology to serve NLOS areas and increase throughput [13], unmanned vehicles for information/power transfer [14], secure communication [15], non-orthogonal multiple access (NOMA) for interference cancellation [16], and other applications. However, there are some challenges in implementing emerging technology-assisted wireless communication, such as the increasing diversity of signals received from different hardware devices and growing coexistence requirements, which yield inaccurate results for model-based approaches. In addition, due to the implementation of massive multiple-input multiple-output (m-MIMO) systems, high overhead and computational complexity become a drawback for mathematical models in optimizing the functionality of the physical layer [17]. Therefore, the performance enhancement of future wireless networks is likely difficult to achieve with conventional mathematical solutions.
The application of machine learning (ML) has been gaining traction across a range of industries, including robotics, image processing, healthcare, finance, and transportation [18,19,20,21,22]. In [18], a hybrid of deterministic and swarm-based algorithms was applied for multi-robot exploration in a cluttered environment. In [19], a self-organized and self-healing peer-to-peer information system was designed for a dynamic environment. ML can also be applied for practical applications like finance and healthcare. The use of ML and deep learning (DL) techniques in monitoring and making informed decisions regarding the COVID-19 pandemic was discussed in [20]. Meanwhile, [21] conducted a thorough analysis of COVID-19-related news to predict the stock market using ML technology. Additionally, ML has been employed to predict traffic accident severity, aiming to reduce road accidents and make transportation safer [22].
Beyond these domains, ML and DL have also contributed to optimizing wireless communication. In addition to its ability to work without human intervention, the DL approach is capable of tackling intricate system problems through precise mathematical models [23]. This advanced problem-solving method relies on cutting-edge algorithms that can analyze vast amounts of data, identify patterns, and make accurate predictions. By leveraging DL, a system can operate more efficiently and effectively, learning over time to achieve better outcomes. Various ML algorithms and their derivatives have been adapted to improve wireless communication performance [24,25,26,27]. Previous studies demonstrated significant advancements when ML was implemented in several emerging technology-aided communications, such as intelligent reflecting surfaces (IRSs), unmanned aerial vehicles (UAVs), autonomous underwater vehicles (AUVs), NOMA, and others.

1.1. Related Work

Due to the promising benefits of using ML for future communications, several recent studies have dealt with such implementations [28,29,30,31,32]. In [28], the authors discussed applying different ML types at each communication layer between devices. They highlighted that the ML algorithms used in the application and infrastructure layers are able to meet the 6G requirements. Tang et al., in [29], specifically discussed one of the 6G network requirements, URLLC, and presented the abilities of ML to optimize channel allocation, network routing, congestion control, and adaptive streaming control. In addition, several studies discussed the role of ML algorithms for parameter optimization in m-MIMO communication. In [30], the authors analyzed ML-aided m-MIMO communications for the 5G network. They addressed several issues, including channel estimation, beamforming and precoding, signal detection, distributed and cell-free configurations, and m-MIMO with NOMA. Raising the same communication problem for the 6G network instead of the 5G network, another study focused on the impact of implementing a DL architecture called the transformer, a sequence-to-sequence DL model consisting of encoder–decoder modules and layers, for semantic communication [31]. ML algorithms can also be applied to optimize integrated sensing and communication. Demirhan and Alkhateeb, in [32], described ten key roles of ML for integrated sensing and communication, which were divided into three categories: joint sensing and communication, sensing-aided communication, and communication-aided sensing. The authors of [33] specifically described the improvement in reconfigurable intelligent surface (RIS)-aided wireless communication quality achieved by implementing an ML algorithm, reinforcement learning (RL) to be precise, to optimize its communication parameters. They highlighted that implementing RIS as an emerging technology assisted by algorithms based on data statistics could improve communication performance.
The aforementioned studies have explained some of the capabilities of ML algorithms for 6G networks. Yet, studies that specifically discuss the implementation of ML in various emerging technologies from the perspective of the 6G requirements remain very limited. A summary of the existing studies on ML implementation in 6G communication networks is shown in Table 1.

1.2. Scope and Contributions

Due to the rapid changes and developments in the current environment, ML algorithm implementations allow systems to work adaptively and efficiently. In addition, even though the development of 6G is still in its early stages, it has the potential to revolutionize the way of communication. Emerging technologies such as terahertz (THz) communication, m-MIMO, autonomous vehicles, and optical communication play a critical role in bringing about that communications revolution.
In contrast to recent surveys of ML algorithm implementation for 6G networks, our study delves into the role of ML algorithms in enhancing the efficiency of emerging technologies to meet the stringent demands of the 6G network. Given the lack of surveys that focus on the application of ML algorithms in emerging technologies, this research bridges the gap in the current literature by explaining the technical intricacies of the optimization process and highlighting the benefits that can be achieved through the proper implementation of ML algorithms in emerging technologies, in order to overcome various issues in wireless communication and meet the 6G network requirements. The list below outlines the main contributions of our research:
  • In the beginning, we provide comprehensive details of the visions and requirements for 6G networks. We point out several critical requirements for 6G networks, including zero-energy Internet of Things (IoT), high-speed connectivity and throughput, URLLC, high reliability and availability, security, seamless integration, scalability, and personalizing quality of experience (QoE).
  • We provide insight into ML algorithms, including a brief explanation of each algorithm with mathematical approaches. We categorize the different ML algorithms as supervised and unsupervised learning, DL, and RL.
  • We extensively analyze the role of ML-aided emerging technologies in empowering this integration by optimizing several parameters in several scenarios of emerging technologies applications, such as IRS, UAV, AUV, NOMA, millimeter-wave (mmWave) and THz communications, free space optics (FSO), visible light communication (VLC), and mobile edge computing (MEC).
  • We offer a comprehensive review of the implementation of ML-aided emerging technologies to meet the requirements of 6G communication networks. This study includes several challenges found in 6G KPIs, such as throughput improvement, coverage extension, high reliability, low latency communication, energy efficiency, interconnection of terrestrial and non-terrestrial technologies, sensing and communication, and secure communication.
  • In the end, we provide conclusions regarding the impact of ML algorithm implementation on emerging technologies in meeting 6G network requirements.

1.3. Organization of the Paper

In this paper, we cover various aspects of 6G networks, especially those related to implementing emerging technologies. As shown in Figure 2, the rest of this paper is structured as follows. A comprehensive discussion of the visions and requirements for 6G networks is outlined in Section 2. Then, we provide an overview of the ML algorithms in Section 3. Additionally, we furnish comprehensive details on the ML algorithms deployed for emerging technologies in Section 4. Furthermore, we examine the potential challenges that future wireless communication may encounter concerning the requirements for 6G networks, as well as several insights for future research opportunities in Section 5. Finally, Section 6 presents the conclusions of the paper.

2. 6G Visions and Requirements

In this section, an overview of the key elements and features expected in 6G networks will be provided. The success of 5G in enhancing communication has raised high expectations for 6G. Additionally, the anticipated involvement of a massive number of users and connections in the 6G network further contributes to these expectations. These expectations are reflected in the already-announced KPIs of 6G networks, which were also discussed at the ITU-R workshop on IMT. Several requirements for 6G networks need to be considered, including zero-energy IoT, high-speed connectivity and throughput, URLLC, high reliability and availability, security, seamless integration, scalability, and personalizing QoE.

2.1. Zero-Energy IoT

Zero-energy IoT is a new technology that allows IoT devices to operate without batteries. Instead, the energy necessary for communication is harvested from the surroundings. This can be achieved through various means, such as converting energy from vibrations, sunlight, temperature gradients, and radio waves into electricity. Implementing 6G networks with zero-energy IoT could provide several benefits, such as reduced environmental impact and lower costs. Zero-energy IoT devices do not require batteries, making them significantly cheaper, and there will be no battery waste when they are disposed of [34]. Furthermore, the use of zero-energy IoT will reduce the possibility of failure, since these devices do not rely on batteries. In addition, 6G networks are expected to be more energy efficient than 5G networks, making them more suitable for zero-energy IoT devices.
Zero-energy IoT has the potential to revolutionize the IoT industry by making it possible to deploy large numbers of low-cost, low-power devices that can be used to collect data in a variety of environments [35,36]. This could lead to new applications in areas, such as smart cities, Industry 4.0, and agriculture.

2.2. High-Speed Connectivity and Throughput

High-speed connectivity and throughput are two key features of 6G networks. 6G is expected to offer peak data rates of up to 1 Tbps, 1000 times faster than 5G. A number of technologies are being considered for use in 6G networks to achieve high-speed connectivity, including THz frequencies [37]. THz frequencies offer a much wider bandwidth than those used by 5G networks, enabling peak data rates of up to 1 Tbps. m-MIMO and beamforming could also be implemented to improve wireless channel efficiency, signal-to-noise ratio (SNR), and data rates. In addition, full-duplex operation is another potential technology that can double the data rate of the wireless channel by allowing a device to transmit and receive data simultaneously.
Therefore, implementing high-speed connectivity could give some benefits to the 6G network, such as enabling a more immersive and interactive user experience [38]. High-speed connectivity technologies could also help to improve efficiency by allowing them to transfer data more quickly and easily [39]. Furthermore, it can help network security improvement by making it more difficult for attackers to exploit vulnerabilities [40,41].

2.3. URLLC

URLLC is a type of communication characterized by its high reliability and low latency. This means that URLLC is well suited for applications that require real-time communication and where even a small amount of data loss or delay can be critical. As the development of 6G continues, more innovative technologies are expected to be used to achieve the high reliability and low latency requirements of URLLC. This expectation is supported by the advantages URLLC brings to 6G networks. URLLC could improve the safety of critical infrastructure and systems by ensuring that they are able to communicate reliably and with low latency [42]. It will also increase efficiency by enabling users to automate processes and make better decisions in real time.
Other than that, because it prioritizes reliability and latency, URLLC could use less power and bandwidth than other communication technologies [43]. This means that URLLC networks are optimized to deliver small amounts of data quickly and reliably, even in challenging conditions [44]. Those benefits of URLLC make it suitable for several advanced technologies, such as AI-driven optimization, which will optimize the URLLC networks in case of predictive analytics, resource allocation, security, and network troubleshooting.

2.4. High Reliability and Availability

High reliability and availability are also critical requirements for 6G networks. High availability refers to the ability of a network to remain operational even in the event of failure. This is essential for 6G networks, as they will be used to support a wide range of critical applications. There are several factors that can contribute to high reliability and availability in 6G networks, including redundancy, load balancing, failover, and monitoring. Redundancy is the use of multiple components to perform the same function, while failover is the ability of a network to switch to a backup component automatically. That approach guarantees that if one component malfunctions, another component can seamlessly assume control and sustain the intended functionality [45]. Moreover, load balancing and monitoring help prevent any component from becoming overloaded or causing a failure by distributing traffic across multiple components and monitoring the network health tracking process [46].
By implementing those and other measures, 6G networks can be made highly reliable and available [47]. This will ensure that they can continue to provide critical services even in the event of a failure [48]. As a result, high reliability and availability could reduce the likelihood of service outages, increase the uptime of 6G networks over longer periods of time, enhance security, and improve the user experience by ensuring that users are able to access services even if a failure occurs [49,50].

2.5. Security

Secure communication in 6G networks is a critical issue, as the network will be used to transmit sensitive data. Secure communication could help improve the 6G network’s security, privacy, trust, and user experience by preventing users’ data from being accessed by unauthorized parties [51,52]. It will make it harder for attackers to eavesdrop on or intercept communications, ensuring that communications between users and service providers are secure and confidential, and reducing the risk of security incidents.
However, some security challenges need to be addressed in 6G networks. 6G networks will have a larger attack surface than the previous generation of networks because they will use a wider range of frequencies and technologies [53,54]. This will also make 6G networks more complex than previous network generations, which makes them more difficult to secure, as there will be more potential vulnerabilities to exploit [55]. As 6G networks become more sophisticated, new attack vectors will emerge. These could include attacks on the network infrastructure, the devices that connect to the network, or the data transmitted over the network.

2.6. Seamless Integration

Because 6G networks will need to integrate seamlessly with existing 5G networks, as well as with other networks such as Wi-Fi and Ethernet, seamless integration is an essential requirement for 6G network success. This will allow users to switch seamlessly between networks as they move around, and will also allow for the efficient use of spectrum. Furthermore, the interconnection of terrestrial and non-terrestrial technologies expected in 6G networks makes seamless integration even more necessary.
Terrestrial and non-terrestrial technologies are both being considered for use in 6G networks. Terrestrial technologies, such as mmWave, m-MIMO, and beamforming, use the Earth’s surface to transmit and receive signals. In contrast, non-terrestrial technologies, such as satellites, use the atmosphere or space to do so [56]. Together, these technologies will allow 6G networks to provide global coverage, high data rates, and low latency [57]. Terrestrial technologies can provide coverage in urban areas, while non-terrestrial technologies can provide coverage in rural areas and remote locations [58,59,60]. Non-terrestrial technologies could also provide high data rates and low latency, which is essential for applications such as virtual reality (VR) and augmented reality (AR) [61]. Moreover, by interconnecting terrestrial and non-terrestrial networks, 6G networks can be made more secure [62], because non-terrestrial networks are less susceptible to physical attacks than terrestrial networks. However, the interconnection of terrestrial and non-terrestrial technologies in 6G networks is a complex challenge due to issues such as heterogeneity, mobility, and security.

2.7. Scalability

6G networks are expected to offer better scalability for machine-to-machine (M2M) connections than previous networks. Scalability is the ability of a network to handle an increasing number of users and devices without sacrificing performance. 6G networks will be characterized by higher data rates, lower latency, and massive connectivity, features that make them well suited for M2M applications. M2M refers to communication between devices without human intervention. In order to support the growing number of M2M connections in 6G networks, scalability must therefore be a key consideration.
As the number of M2M connections increases, the need for scalable networks will also increase. Thus, 6G networks are well-positioned to meet this need due to massive devices and connectivity adoption that will allow for a much larger number of M2M devices to be connected to the network, which will be essential for applications such as IoT. Scalability will also help to improve the user experience by ensuring that users have a reliable and consistent connection to the network. This could enable new and innovative M2M applications in wireless networks, such as smart cities, industrial automation, and transportation.
There are a number of factors that contribute to scalability in 6G networks, including heterogeneous networks, network slicing, software-defined networking (SDN), hybrid cloud, and AI [63,64,65]. Heterogeneous networks and network slicing could help to scale the network as needed to support massive applications and services [66]. SDN will allow for the network to be controlled and managed by the software, which makes the network easier to adapt to changes in traffic demand [67,68]. A hybrid cloud is a deployment model that combines the benefits of the public cloud and private cloud. The public cloud can scale the network horizontally by adding more resources, while the private cloud can scale the network vertically by adding more powerful resources [69,70]. Thus, combining them could be useful for applications that experience sudden spikes in traffic and require a lot of processing power [71]. A hybrid cloud allows organizations to have the flexibility to choose the right cloud environment and scale the network as needed, in order to provide high performance for demanding applications [72]. Additionally, AI can be used to optimize the performance of the network and to identify and mitigate potential problems. Therefore, scalability is essential to support the growing demand for connectivity and the increasing number of devices that will be connected to the 6G networks.

2.8. Personalizing QoE

The QoE in 6G wireless networks is expected to be significantly improved over previous generations. This is due to a number of factors mentioned in the subsections above, such as higher data rates, lower latency, wider coverage, greater reliability, and better security [73]. However, in order to enhance the overall efficiency of the network towards a specific objective, QoE must be personalized for each user or application on the network, so that network operators can ensure that resources are used efficiently. In addition, 6G networks are expected to support a wide range of users and applications with different QoE requirements. Thus, personalizing QoE can ensure that users achieve the best possible experience in a timely and effective manner [74]. Therefore, prioritizing the personalization of QoE is a critical component of achieving success in the upcoming 6G network.
There are several promising technologies that can be implemented in order to improve QoE, including network slicing, edge computing, ML, and AI technologies. AI and ML can personalize network experiences by providing and analyzing specific data for user behaviors and preferences [75,76]. Network slicing could help to personalize QoE by dividing the network into multiple virtual networks, where each virtual network can be customized to meet the specific needs of the users or applications [77]. In addition, edge computing can be used to bring computing resources closer to the end users, which leads to a better QoE [78,79]. Implementing radio access technologies, such as THz communication, could also support more demanding applications and provide a better QoE for users, such as higher data rates or lower latency [80,81]. Personalizing QoE would be very useful to be implemented in specific cases, such as VR, AR, self-driving cars, Industrial IoT, and smart cities.
Figure 3 shows the 6G visions and requirements, including their potential enabling technologies, as discussed in this section.

3. ML Algorithms

ML is a branch of artificial intelligence that uses mathematical algorithms to discern trends and patterns within complex, multi-dimensional datasets. A key component of ML’s effectiveness is its ability to learn from the data itself, which allows it to automatically adapt over time and improve its performance. Due to its versatility, effectiveness, and ability to address complex problems without requiring explicit programming instructions, ML has been incorporated into a variety of applications, including image and speech recognition, medical diagnosis, recommendation systems, financial forecasting, and many others. Therefore, ML has emerged as a transformative technology, revolutionizing industries and paving the way for numerous advancements and innovations in modern society.
There are several techniques within the ML domain, including supervised learning, unsupervised learning, DL, and RL. A supervised learning approach employs labeled data to make accurate predictions. Conversely, unsupervised learning algorithms can uncover patterns in unlabeled data. By utilizing neural networks (NN), DL methods extract hierarchical representations from the data. In contrast, RL trains models to make sequential decisions through interactions with the environment. These diverse approaches collectively provide a comprehensive toolkit for addressing a wide range of challenges in both the research and practical applications of ML.

3.1. Supervised Learning

In ML, supervised learning involves mapping input data to output data with high accuracy. This approach requires labeled datasets to train the model, and is commonly used for classification and regression problems. Some of the techniques used in supervised learning include linear and logistic regressions, decision trees, random forests, gradient boosting, and support vector machines (SVM).

3.1.1. Linear Regression

Linear regression is one of the most popular ML algorithms, due to its ability to predict continuous variables easily. Linear regression works based on the relationship between the target variable (dependent variable) and the predictor variable (independent variable). A sloped straight line of regression shows the relationship between these variables. It can be a negative linear relationship (the dependent variable decreases while the independent increases) or a positive one (both variables increase). Thus, the mathematical representation is written in Equation (1) [82].
$$y = bx + c$$
where $y$ represents the dependent variable, $x$ represents the independent variable, $b$ represents the slope of the line, and $c$ represents the intercept of the line. Meanwhile, to determine the accuracy of the predicted values, linear regression uses the mean squared error (MSE) cost function, written in Equation (2).
$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\bigl(y_i - (b x_i + c)\bigr)^2$$
where $N$ represents the total number of observations.
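For illustration, the following minimal sketch (not from the original article; it assumes NumPy and synthetic data) fits the slope $b$ and intercept $c$ of Equation (1) by least squares and evaluates the fit with the MSE cost of Equation (2):

```python
import numpy as np

# Hypothetical example: fit y = b*x + c by ordinary least squares and
# evaluate it with the MSE cost from Equation (2).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)                  # independent variable
y = 2.5 * x + 1.0 + rng.normal(0, 1.0, size=100)  # noisy dependent variable

# Closed-form least-squares estimates of the slope b and intercept c.
b, c = np.polyfit(x, y, deg=1)

y_pred = b * x + c
mse = np.mean((y - y_pred) ** 2)                  # Equation (2)
print(f"b = {b:.3f}, c = {c:.3f}, MSE = {mse:.3f}")
```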

3.1.2. Logistic Regression

The logistic regression model is widely used to predict binary outcomes based on probabilities. In contrast to linear regression, which assumes a linear relationship between predictors and the target variables, logistic regression uses a sigmoid or S-shaped logistic function to reflect the non-linear relationship between predictors and the likelihood of a specific result. The logistic regression can be given as follows:
$$f(x) = \frac{1}{1 + e^{-x}}$$
where $f(x)$ represents the predicted probability of the binary outcome, $x$ denotes the linear combination of predictor variables and their corresponding coefficients, and $e$ is the base of the natural logarithm.
A threshold value is applied to the predicted probabilities in order to classify the binary outcome. Traditionally, the threshold is set at 0.5, with predictions above 0.5 classified as 1 (positive outcome), and predictions below 0.5 classified as 0 (negative outcome). The threshold can, however, be adjusted according to the specific requirements of the problem to achieve the appropriate balance between sensitivity and specificity.
In logistic regression, the predicted outcome is obtained by comparing the predicted probability to the threshold value. For example, if the predicted probability is greater than a threshold, it is classified as 1, and if it is less than a threshold, it is classified as 0 [82].
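As an illustrative sketch (assuming NumPy and hypothetical coefficient values, not taken from any cited study), the logistic function of Equation (3) and the thresholding step described above can be written as follows:

```python
import numpy as np

def sigmoid(z):
    """Logistic function f(z) = 1 / (1 + e^(-z)) from Equation (3)."""
    return 1.0 / (1.0 + np.exp(-z))

# z is the linear combination of predictors and (hypothetical) coefficients.
coeffs = np.array([0.8, -1.2])
intercept = 0.1
X = np.array([[1.5, 0.3], [-0.4, 2.0], [0.9, 0.9]])

z = X @ coeffs + intercept
probs = sigmoid(z)

threshold = 0.5                          # adjustable per application
labels = (probs >= threshold).astype(int)
print(probs, labels)
```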

3.1.3. Decision Tree

Decision trees are a popular supervised ML technique used for both classification and regression problems. They provide an intuitive and interpretable approach by representing data in a tree-like structure. In this structure, the root node represents the entire dataset, the branches correspond to decision rules based on attribute values, and the leaves represent the output or prediction [83].
Attribute selection is a critical step in constructing decision trees. The goal is to determine the most informative attributes that effectively split the data to maximize predictive accuracy. Two commonly used attribute selection measures are the Gini index and information gain.
The Gini index measures the impurity or disorder of a node in a decision tree. It calculates the probability of a specific attribute being incorrectly classified. The Gini index and information gain are mathematically represented in Equations (4) and (5), respectively.
$$\mathrm{Gini} = 1 - \sum_{i=1}^{n} p_i^2$$
$$\mathrm{Information\ Gain} = E(S) - \sum\bigl[W \cdot E(s)\bigr]$$
$$E(S) = -\sum_{i=1}^{n} p_i \log_2 p_i$$
where $p_i$ represents the probability that a feature is classified as class $i$, $W$ represents the weighted average, and $E(S)$ and $E(s)$ indicate the entropy of the main node and of each feature, respectively.
The mathematical formulations of the Gini index and information gain provide a solid foundation for attribute selection in decision trees. These measures allow decision trees to effectively partition the data and make informed decisions at each node, leading to accurate predictions. Decision trees’ interpretability and ability to handle both categorical and numerical data make them valuable tools for various applications in various domains.
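As a minimal sketch (assuming NumPy and a hypothetical binary split), the attribute selection measures in Equations (4)–(6) can be computed as follows:

```python
import numpy as np

def gini(labels):
    """Gini impurity: 1 - sum_i p_i^2 (Equation (4))."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    """Entropy: -sum_i p_i log2 p_i (Equation (6))."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, splits):
    """Parent entropy minus the weighted entropy of its splits (Equation (5))."""
    n = len(parent)
    weighted = sum(len(s) / n * entropy(s) for s in splits)
    return entropy(parent) - weighted

parent = np.array([0, 0, 0, 1, 1, 1, 1, 1])
left, right = parent[:4], parent[4:]     # hypothetical split on some attribute
print(gini(parent), information_gain(parent, [left, right]))
```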

3.1.4. Random Forest

Similar to the decision tree technique, the random forest technique is widely used for classification and regression problems in ML. A random forest leverages the concept of ensemble learning by combining multiple decision trees to make predictions. This approach harnesses the collective wisdom of multiple models to enhance the accuracy and robustness of predictions [83].
In a random forest, each decision tree is constructed independently, utilizing a subset of the training data and a random selection of features. This sampling process, known as bootstrap aggregating or “bagging”, introduces diversity among the trees. By aggregating the predictions from all the individual trees, the random forest predicts the final output based on the majority vote (for classification) or the average (for regression) of the predictions generated by the constituent trees.
The random forest algorithm offers several advantages over a single decision tree. Firstly, it reduces the risk of overfitting, as the averaging of multiple models helps to mitigate the effects of noise and biases in the training data. Additionally, by randomly selecting a subset of features for each tree, a random forest introduces feature diversity and reduces the influence of dominant features, leading to a more balanced and robust model.
The number of trees in a random forest is a crucial parameter that impacts the model’s performance. As the number of trees increases, the random forest becomes more capable of capturing complex patterns and relationships in the data. However, there is a trade-off between predictive accuracy and computational efficiency, as the inclusion of more trees typically results in longer computation times.
The random forest technique has been extensively applied in various domains, including finance, healthcare, and image analysis. It has demonstrated its effectiveness in tackling complex problems, such as fraud detection, disease diagnosis, and object recognition.
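For illustration, a minimal sketch (assuming scikit-learn and a synthetic dataset, not tied to any study cited here) of training a random forest in which each tree sees a bootstrap sample and a random feature subset, and the final prediction is a majority vote:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real classification task.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Each of the 100 trees is trained on a bootstrap sample with a random
# feature subset; the forest prediction is the majority vote of the trees.
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
forest.fit(X_tr, y_tr)
print("test accuracy:", forest.score(X_te, y_te))
```

Increasing n_estimators typically improves stability at the cost of longer computation times, which reflects the trade-off noted above.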
Ensemble learning is an ML approach that seeks better predictions by combining multiple models. In general, there are four methods of ensemble learning: bagging, boosting, stacking, and a mixture of experts.
  • Bagging: a technique that generates multiple training data subsets and trains the model on each subset, then combines the output;
  • Boosting: a method that creates multiple models where each model is trained on a modified version of the training dataset;
  • Stacking: a method that generates bootstrapped data subsets and adds a meta-classifier at the end of the process to rectify any incorrect behavior from the initial classifiers;
  • Mixture of experts: a technique that utilizes a whole dataset for each classifier input. A gating network is applied to produce weights for each initial classifier before going through a linear combination.

3.1.5. Gradient Boosting

Gradient boosting is a supervised ML algorithm that builds on the boosting method of ensemble learning. It is designed to solve both classification and regression problems by combining multiple weak learners into a strong learner.
In gradient boosting, the algorithm iteratively builds a sequence of weak learners, where each learner is trained to correct the errors of the previous model’s predictions. At each iteration i, the algorithm fits a decision tree to the negative gradient of the loss function, aiming to minimize the residuals or errors of the previous model’s predictions. Gradient boosting in mathematical representation is shown in Equation (7).
$$f(x) = \sum_{i} \alpha_i h_i(x)$$
where $f(x)$, $\alpha_i$, and $h_i$ represent the strong learner, the weight of the $i$-th weak learner, and the weak learners, respectively.
Each weak learner is designed to capture a specific aspect or pattern in the data that the previous models may have missed. By iterative training and combining these weak learners, gradient boosting gradually improves its predictive performance, reducing the overall error or loss. The choice of loss function depends on the problem at hand. For example, in classification problems, the cross-entropy loss or exponential loss may be used, while in regression problems, mean squared error or mean absolute error could be employed.
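The following minimal sketch (assuming scikit-learn decision trees as the weak learners and synthetic regression data) illustrates the iterative residual fitting described above, with predictions accumulated as in Equation (7):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Minimal gradient boosting for regression with squared-error loss:
# each weak learner h_i is fit to the residuals (negative gradient) of the
# current ensemble, and predictions follow f(x) = sum_i alpha_i * h_i(x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)

alpha = 0.1                 # learning rate (weight of each weak learner)
learners = []
pred = np.zeros_like(y)
for _ in range(100):
    residual = y - pred                           # negative gradient for squared error
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    learners.append(tree)
    pred += alpha * tree.predict(X)

print("training MSE:", np.mean((y - pred) ** 2))
```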
Gradient boosting has gained significant attention and popularity in various domains due to its ability to handle complex problems and deliver high predictive accuracy. It has proven successful in diverse applications such as click-through rate prediction, anomaly detection, and recommendation systems.

3.1.6. Support Vector Machines (SVMs)

SVMs are powerful and versatile ML algorithms that have gained considerable attention in the field of supervised learning. They belong to the class of discriminative classifiers, and are widely used for both classification and regression tasks [83]. SVMs have proven to be effective in various domains, including image recognition, text categorization, and bioinformatics.
The fundamental concept behind SVMs is to find an optimal decision boundary or hyperplane that maximally separates the data points belonging to different classes. The key idea is to identify a decision boundary with the maximum margin, which represents the distance between the boundary and the closest data points of each class. This property makes SVMs robust and less susceptible to overfitting.
SVMs excel in scenarios where the data is not linearly separable in the original feature space. To address this, SVMs employ a technique called the “kernel trick”, which implicitly maps the input data into a higher-dimensional feature space where linear separation becomes feasible [83,84]. This allows SVMs to capture complex, nonlinear relationships between the input features and the target variable.
It is necessary to find the optimal hyperplane when training an SVM in order to maximize the margin and minimize the classification error. It is common for convex optimization techniques to be used in order to solve this optimization problem. An SVM’s generalization performance is influenced by the support vectors, which are the data points closest to the decision boundary.
Moreover, SVMs can handle both binary and multi-class classification problems. Binary classification involves separating data into two classes, while multi-class classification extends the SVM framework to handle multiple classes by using strategies such as one-vs-one or one-vs-rest.
There are several advantages to using SVMs, including their ability to handle high-dimensional data and their robustness against overfitting. SVMs can also be used to gain insights into the classification process, because they provide a clear sense of the decision boundary.
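As an illustrative sketch (assuming scikit-learn and a synthetic, non-linearly separable dataset), an SVM with an RBF kernel applies the kernel trick described above:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Non-linearly separable data; the RBF kernel implicitly maps it to a
# higher-dimensional space where a separating hyperplane exists.
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_tr, y_tr)
print("number of support vectors:", clf.support_vectors_.shape[0])
print("test accuracy:", clf.score(X_te, y_te))
```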

3.2. Unsupervised Learning

Unsupervised learning is a type of ML that is trained without pre-existing labels, using unclassified input data, in order to discover patterns within the data. Thus, it does not need external supervision to learn from the data and has no predefined output.

3.2.1. K-Means Clustering

The K-means algorithm is a method of clustering data instances based on their distances to cluster centroids. This algorithm aims to minimize the variance within clusters.
Initially, the algorithm partitions the input points into K initial sets. The sets can be randomly assigned or determined by heuristic methods based on the data. Each centroid is the mean or center of its cluster, and its value is updated at each iteration i, with the initial centroids of the k clusters chosen randomly. The objective function of K-means clustering, P, is shown as follows:
$$(P)\quad \min \sum_{j=1}^{k}\sum_{i=1}^{n} \left\lVert x_i^{(j)} - c_j \right\rVert^2$$
where $\lVert x_i^{(j)} - c_j \rVert$ represents the distance between data point $x_i^{(j)}$ assigned to cluster $j$ and the cluster centroid $c_j$.
The number of clusters is a critical parameter in K-means clustering. A large number of clusters may improve data separation, but it can also lead to overfitting. The Elbow method is a popular technique for determining the optimal number of clusters in K-means clustering. Plotting the within-cluster sum of squares (WCSS) against the number of clusters can be used to determine the optimal number, which corresponds to the "elbow" point beyond which the WCSS stops decreasing sharply.
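For illustration, a minimal sketch (assuming scikit-learn and synthetic blob data) of tracing the WCSS curve used by the Elbow method:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with 4 true clusters; the WCSS (inertia) for each k
# traces the curve inspected by the Elbow method.
X, _ = make_blobs(n_samples=600, centers=4, cluster_std=0.8, random_state=0)

wcss = []
for k in range(1, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wcss.append(km.inertia_)          # within-cluster sum of squares

for k, w in zip(range(1, 9), wcss):
    print(f"k={k}: WCSS={w:.1f}")     # the 'elbow' appears near k=4
```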

3.2.2. Hierarchical Clustering

Hierarchical clustering differs from K-means in that it allows the number of clusters to change at each iteration. It can be divided into two categories: divisive clustering and agglomerative clustering. The divisive clustering algorithm starts with all data instances grouped into a single cluster and then splits them in each iteration, resulting in a hierarchical cluster structure. Agglomerative clustering, on the other hand, takes a bottom-up approach, where each instance is initially considered a separate cluster and clusters are merged iteratively. Regardless of the method used, the resulting hierarchy will have N levels, where N represents the total number of instances.
Hierarchical clustering, in contrast to other clustering methods, does not provide a single definitive clustering solution for the data. Instead, it generates N − 1 clusterings, leaving it up to the user to determine the most suitable one for their specific objectives. To aid in this decision-making process, statistical heuristics are sometimes employed.
After the training phase, the resulting arrangement of clusters forms a hierarchical structure, often visualized using a dendrogram. In the dendrogram, nodes represent clusters, and the length of an edge connecting a cluster to its split reflects the dissimilarity between the resulting split clusters. Dendrograms have contributed to the popularity of hierarchical clustering, as they offer an easily interpretable visualization of the clustering structure.
It should be noted that selecting the appropriate clustering solution from the hierarchical structure requires careful consideration, and may involve domain knowledge and expertise. The dendrograms serve as a valuable tool in understanding and interpreting the clustering outcomes.
The use of hierarchical clustering and the interpretation of dendrograms have found wide applications across various domains due to their ability to provide an intuitive and accessible view of the clustering structure.
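As a minimal sketch (assuming SciPy and scikit-learn's synthetic data generator), agglomerative clustering builds the hierarchy with linkage(), and one of the N − 1 possible clusterings is read off by cutting the resulting dendrogram:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.datasets import make_blobs

# Agglomerative (bottom-up) clustering: linkage() encodes the full hierarchy
# as N-1 merge steps; scipy.cluster.hierarchy.dendrogram(Z) would plot it.
X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.7, random_state=0)

Z = linkage(X, method="ward")                      # hierarchy of merges
labels = fcluster(Z, t=3, criterion="maxclust")    # cut to obtain 3 clusters
print(np.bincount(labels))                         # cluster sizes
```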

3.2.3. Principal Component Analysis (PCA)

PCA is a popular unsupervised ML technique widely used for dimensionality reduction and data analysis. It aims to transform high-dimensional data into a lower-dimensional space while retaining maximum information.
The key objective of PCA is to identify the underlying structure or patterns within the data. It achieves this by projecting the data onto a k-dimensional space spanned by the principal components, which are the eigenvectors of the covariance matrix. Each principal component captures a different aspect of the data’s variability.
The eigenvalues associated with the principal components represent the variances explained by each component. Higher eigenvalues indicate a greater proportion of the total variance explained by the corresponding principal component.
By leveraging PCA, analysts and researchers can gain insights into the essential features and relationships within complex datasets while reducing the dimensionality. This technique has found widespread application across various fields, including image processing, genetics, and finance, among others.
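For illustration, a minimal sketch (assuming NumPy and synthetic correlated data) of PCA via eigen-decomposition of the covariance matrix, as described above:

```python
import numpy as np

# PCA via the covariance matrix: the principal components are its
# eigenvectors, and the eigenvalues give the variance each one explains.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5)) @ rng.normal(size=(5, 5))   # correlated features

X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Sort by descending eigenvalue and keep the top k components.
order = np.argsort(eigvals)[::-1]
k = 2
components = eigvecs[:, order[:k]]
X_reduced = X_centered @ components                       # projected data
explained = eigvals[order[:k]] / eigvals.sum()
print("explained variance ratio:", explained)
```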

3.3. Deep Learning (DL)

DL is a sub-branch of ML consisting of multiple NN layers that can be used for data prediction, classification, or other data-driven decision-making by learning representations of the data. The structure of DL consists of input, output, and hidden layers. In the forward-propagation cycle, the neurons in every hidden layer calculate the weighted sum of the outputs of the previous layer and forward the result to the following layers through a nonlinear activation function. DL converts the raw data into nonlinear input–output mappings used to execute actions that achieve the objective. As the network learns the characteristics of highly complex raw data, each NN layer transforms its input into a progressively higher-level representation.

3.3.1. Artificial Neural Networks (ANNs)

ANNs are fundamental neural network models, often referred to as feed-forward neural networks. They comprise a group of interconnected neurons organized in layers, where information propagates in a unidirectional manner, from the input layer through intermediate hidden layers (if present) to the output layer [85].
In an ANN, the input data is processed only in the forward direction, with each neuron receiving input from the previous layer and generating an output that becomes the input for the subsequent layer. The hidden layers, which may or may not be included in an ANN model, provide a means for the network to learn and capture complex representations of the input data [83].
The absence of hidden layers in an ANN simplifies its operation and interpretation. Without hidden layers, ANNs primarily function as linear models, with input data being mapped directly to the output layer. This characteristic makes ANNs particularly suitable for problems that involve linear relationships and straightforward decision-making processes.
The simplicity of ANNs, both in terms of their structure and interpretability, has contributed to their widespread usage and understanding in various domains. Researchers and practitioners often employ ANNs as a starting point to explore more complex neural network architectures and advanced ML techniques.
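As an illustrative sketch (assuming NumPy, with randomly initialized weights standing in for trained ones), a tiny feed-forward network propagates an input strictly forward through one hidden layer to the output:

```python
import numpy as np

# A tiny feed-forward network: input -> one hidden layer -> output.
# Information flows strictly forward, as described above.
rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

x = rng.normal(size=4)                            # input features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)     # hidden layer (8 neurons)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)     # output layer

h = relu(W1 @ x + b1)      # each hidden neuron: weighted sum + activation
y = W2 @ h + b2            # the hidden output is fed forward, never back
print(y)
```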

3.3.2. Deep Neural Networks (DNNs)

DNNs represent the most widely implemented algorithm in DL. DNNs are characterized by their fully connected structure, where multiple layers are stacked, and each neuron is connected to all neurons in the preceding and following layers. The architecture of a DNN allows for the extraction of increasingly complex representations as information flows through each layer. This hierarchical representation learning enables DNNs to capture intricate patterns and relationships in the data [86].
Optimizing the learning performance of DNNs is crucial, and a key consideration is the weight of the model. The weights determine the strength of connections between neurons, and play a vital role in the network’s ability to accurately learn from the data. Careful adjustment of the weights is necessary to prevent issues such as vanishing or exploding gradients, which can hinder the training process [85].
Efficient weight initialization, regularization techniques, and appropriate optimization algorithms are employed to ensure effective weight management in DNNs. These practices contribute to enhancing the learning capacity and overall performance of DNN models.
Due to their ability to handle complex data and learn intricate representations, DNNs have achieved remarkable success in various domains, including computer vision, natural language processing (NLP), and speech recognition. Their flexibility and versatility have made DNNs a powerful tool for solving challenging problems and advancing the field of DL.

3.3.3. Convolutional Neural Networks (CNNs)

CNN architecture focuses on identifying similarities within 2D feature vectors. Typically, CNN models start with convolutional layers, followed by nonlinear activation functions, pooling layers, and additional convolutional layers. The fundamental concept underlying CNN architecture is local connection and weight sharing.
Unlike DNNs that employ a fully connected structure, a CNN’s convolutional layers have each unit connected to a local patch in the preceding layer, and all connections within the patch share the same weight matrix. This weight-sharing property significantly reduces the number of learnable parameters in CNNs [83].
By leveraging local connections and weight sharing, CNNs excel in capturing local patterns and spatial relationships in data. This makes them highly effective in tasks such as image recognition and computer vision, where identifying local features is crucial.
The architecture of CNNs enables them to automatically learn hierarchical representations from raw data, starting from low-level features and progressively extracting more complex and abstract features. This hierarchical feature learning contributes to CNNs’ ability to achieve impressive performance in various domains.
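The following minimal sketch (assuming NumPy and a hypothetical 3 × 3 kernel) illustrates the local connectivity and weight sharing of a convolutional layer: the same nine weights are applied to every local patch of the input:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: one shared kernel slides over every local patch."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + kh, c:c + kw]      # local connection
            out[r, c] = np.sum(patch * kernel)     # weight sharing
    return out

image = np.random.default_rng(0).normal(size=(8, 8))
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])         # 3x3 kernel: 9 shared weights
feature_map = np.maximum(conv2d(image, edge_kernel), 0.0)   # ReLU activation
print(feature_map.shape)   # (6, 6): far fewer parameters than a dense layer
```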

3.3.4. Recurrent Neural Networks (RNNs)

RNNs differ from DNNs and CNNs in that they process input sequences iteratively and have the ability to retain information from the past, thus avoiding memory loss. The architecture of an RNN can vary depending on the specific application it is designed for.
In an RNN, the hidden layers share parameters across different time steps, similar to the weight-sharing technique used in CNNs. This shared structure reduces the model’s complexity and helps prevent overfitting. However, training RNNs using backpropagation through time can present challenges. The backpropagation algorithm, which leverages a stochastic gradient descent, unfolds in time and can impede the smooth flow of information, leading to difficulties in training.
To address the issues of vanishing or exploding gradients, and to enhance the memory capabilities of traditional RNNs, Long Short-Term Memory (LSTM) networks were introduced as a robust alternative. LSTM networks incorporate specialized memory cells that enable them to selectively retain and forget information, making them more effective in capturing long-range dependencies in sequential data. LSTM networks have gained significant popularity due to their ability to overcome the limitations of traditional RNNs and provide improved memory and learning capabilities.
The utilization of RNNs and LSTM networks has led to significant advancements in various domains, including NLP, speech recognition, and time series analysis. Their ability to model sequential data and capture temporal dependencies makes them well-suited for tasks involving dynamic patterns and contextual information.
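For illustration, a minimal sketch (assuming NumPy and random inputs) of a vanilla RNN cell, where the same parameters are shared across all time steps and the hidden state carries information from the past:

```python
import numpy as np

# A vanilla RNN cell: the same weights (W_h, W_x, b) are reused at every
# time step, and the hidden state h retains information from earlier steps.
rng = np.random.default_rng(0)
hidden, features, steps = 16, 4, 10

W_h = rng.normal(scale=0.1, size=(hidden, hidden))
W_x = rng.normal(scale=0.1, size=(hidden, features))
b = np.zeros(hidden)

x_seq = rng.normal(size=(steps, features))    # an input sequence
h = np.zeros(hidden)
for x_t in x_seq:
    h = np.tanh(W_h @ h + W_x @ x_t + b)      # state update with shared parameters

print(h[:4])   # the final hidden state summarizes the whole sequence
```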

3.4. Reinforcement Learning (RL)

In recent years, ML has played an important role by allowing machines to make decisions automatically based on their datasets. RL is an advancement in ML, specifically DL. As a derivative of ML, RL algorithms allow machines to interact with a dynamic environment while considering their experience dataset to make the most accurate decision [87,88,89]. Based on the Markov decision process (MDP) formulation, the RL algorithm is built around the notions of state, action, and reward, together with a state–action (Q-)value function.
  • State: a set of environment characteristics (S) provided to the agent by the environment, where $s_1$ represents the initial state and $s_t \in S$ denotes the state of the environment at each time step $t$;
  • Action: a set of actions (A) taken by the agent in response to the characteristics of the environment; the next state $s_{t+1}$ conveys the latest environmental characteristics to the agent each time the agent executes an action $a_t \in A$ at time instant $t$;
  • Reward: feedback provided by the environment based on the action taken by the agent. When the result obtained is better than those previously achieved, the environment gives a reward $r_t$ to the agent at time instant $t$; in contrast, a punishment is given when the results obtained are worse than the previous ones;
  • Q-value function: a state–action value function $Q(s, a)$ that indicates the value of taking action $a_t$ in a given state $s_t$.
RL methodology can be classified as policy-based or value-based, based on the approach taken to decide what action to take [90]. The value-based method considers the optimal Q-value $Q^{*}(s, a)$, while the policy-based method considers the optimal policy value or transition probability $\pi^{*}(s, a)$.
The combination of DNNs and RL is beneficial in solving intricate problems. The DNN can act as a Q-value estimator $Q(s, a)$ in value-based deep RL (DRL), as in Equation (9). In addition, it can also act as a gradient estimator $\nabla_{\theta} J(\theta)$ to estimate the gradient of the objective $J(\theta)$ in the policy-based method, as shown in Equation (10).
$$Q(s, a; \theta) \approx Q^{*}(s, a)$$
$$\nabla_{\theta} J(\theta) \approx \sum_{t \geq 0} r(\tau)\, \nabla_{\theta} \log \pi_{\theta}(a_t, s_t)$$
where $\theta$ represents the weights of the DNN, $r(\tau)$ indicates the reward for each trajectory (path), and $\log \pi_{\theta}(a_t, s_t)$ indicates the log-probability of the performed action in each state. In line with these developments, studies have recently been carried out on the application of DRL in various branches of technology, one of which is emerging technology.
Because it learns from interaction with its environment, RL is useful in constantly changing environments and is suitable for handling very large and complex data, at the cost of computation. In contrast, it is of little benefit for simple problems, where it can be hard to achieve the maximum reward. Furthermore, RL is highly dependent on the quality of its reward function, and RL implementations are difficult to debug and interpret.
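As an illustrative sketch (assuming NumPy and a toy, hypothetical corridor environment), tabular Q-learning applies the state–action value update implied by the definitions above:

```python
import numpy as np

# Tabular Q-learning on a toy 1-D corridor: states 0..5, and the agent earns
# a reward of +1 only upon reaching the rightmost (terminal) state.
n_states, n_actions = 6, 2          # actions: 0 = move left, 1 = move right
alpha, gamma = 0.1, 0.9             # learning rate and discount factor
Q = np.zeros((n_states, n_actions)) # state-action value table Q(s, a)
rng = np.random.default_rng(0)

for _ in range(500):                # episodes
    s = 0
    for _ in range(200):            # step cap per episode
        # Q-learning is off-policy, so a uniformly random behavior policy
        # is enough to explore the environment and still learn Q*.
        a = int(rng.integers(n_actions))
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward from the environment
        # Update toward the target r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
        if s == n_states - 1:       # terminal state reached
            break

# Greedy policy for the non-terminal states: always move right (action 1).
print(np.argmax(Q[:-1], axis=1))
```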
Based on the discussion in this section, Table 2 shows the comparison of each ML algorithm in terms of their concept, advantages, and limitations for their implementation.

4. ML Algorithm Implementation for Emerging Technologies in a 6G Network

4.1. Intelligent Reflecting Surfaces (IRSs)

IRS or RIS is a technology that has been intensively discussed by researchers to support 6G communication networks because of its ability to improve signal quality while operating passively and having low installation and maintenance costs. The IRS is an artificial two-dimensional planar metasurface with reconfigurable features implemented through electronic circuits. IRS helps transmit data and avoid NLOS propagation in wireless communications by reflecting electromagnetic waves (EMs) to the desired receiver, significantly enhancing the transmission quality of service (QoS). Several aspects of IRS-aided communication need to be considered to support the QoS obtained, such as channel state information (CSI), phase shift configuration, beamforming, power and spectral efficiency, and physical layer security (PLS). These issues can be overcome through optimization using ML. Table 3 provides a concise summary of the studies that are discussed in this subsection.
The study in [91] applied two DNNs to find the relationship between the pilot signals, the optimal phase shift, and the downlink transmit beamforming vector. The proposed system was shown to reduce pilot overhead while still providing performance comparable to communication with perfect CSI. In [92], the optimum IRS phase shift and overhead reduction are obtained by implementing a CNN architecture; the system can converge to near-optimal data rates using less than 2% of the receiver locations. ML has also been applied to maximize spectral efficiency in [93,94,95]. The system proposed by the authors of [93] achieved almost the same performance as the alternative optimization method with less computational complexity by using a learning phase-shift NN (LSPNet) trained with an unsupervised learning method. In addition, in [94,95], the proposed systems improved spectral and power efficiencies by applying DL-based frameworks in RIS-assisted MIMO and in MIMO–NOMA communication systems with STAR-RIS, respectively. The approach suggested in [94] can configure real-time phase shifts, improve rate performance at low signal-to-noise ratios (SNRs), and provide higher energy efficiency (EE) than the optimal beamforming solution. In comparison, the DL-based framework in [95] provided a low-complexity iterative algorithm with guaranteed convergence at a relatively optimal level, and predicted the optimal user power allocation and phase shift configuration at the STAR-RIS.
Another study focused on minimizing transmit power in an RIS-assisted MISO-OFDM system by implementing a DRL-based framework, namely the twin delayed deep deterministic policy gradient (TD3) algorithm [96]. The system was shown to be effective in reducing the transmit power to nearly the lower bound obtained by a manifold optimization algorithm, but with a much shorter computation time. Another crucial issue for 6G communication networks is privacy and security, and several works have explored the application of ML in IRS-aided PLS communications. In [97], the authors aimed to enhance the efficiency and the learning convergence rate by implementing post-decision state (PDS) and prioritized experience replay (PER) schemes. The result outperformed the deep Q-learning (DQN) method by increasing the system's secrecy rate as well as the probability of satisfying the QoS, while the authors of [98] optimized the average secrecy rate and throughput in IRS-assisted secure buffer-aided cooperative networks. The proposed multi-agent DRL (MA-DRL) method significantly improved those two parameters over the max-ratio algorithm.

4.2. Unmanned Aerial Vehicles (UAVs)

UAVs are among the most widely applied unmanned vehicles and have strong potential for future communications. UAV-aided communications have become increasingly popular in recent years due to several advantages, such as mobility, high maneuverability, low-cost maintenance, and easy deployment [99]. Their ability to hover and move around an area enables communication where infrastructure is lacking or where links are blocked by NLOS conditions. By optimizing various parameters, such as the UAV trajectory, UAV placement, bandwidth, and power allocation, the performance of a UAV-assisted communication network can be significantly improved. This can lead to a number of benefits, such as increased coverage, improved QoS, and reduced costs.
In [100], the authors developed an RL approach to allow a UAV to traverse a given trajectory autonomously. The proposed system gave lower localization errors than the other methods mentioned in the study while accounting for a fixed UAV energy consumption, path length, flying time, and velocity. In [101], the authors implemented an MA Q-learning-based ESN algorithm for placement optimization, trajectory acquisition, and power control; the proposed ESN algorithm predicted user movement with high accuracy and maintained the trajectory and power control with high quality. Another study focused on UAV path planning and obstacle avoidance by implementing a DQN-based algorithm [102]. The proposed modified Q-learning reduced computation time by 50% and path length by 30% compared with the state–action–reward–state–action (SARSA) algorithm. Another algorithm, called DL-based energy optimization (DEO), was proposed in [103] to optimize energy for edge devices. It dynamically adjusts the emission energy of the edge device so that the received power at the UAV equals the receiver's sensitivity, using DL to predict the UAV location information. The results showed that the DEO algorithm achieved a weighted mean absolute percentage error (WMAPE) of less than 2% under a communication delay of less than 1 s.
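Tabular Q-learning of the kind compared against SARSA in [102] can be sketched on a toy grid-world flight area in which the UAV learns a short path to a goal cell while avoiding an obstacle. The grid size, reward values, and hyperparameters below are illustrative assumptions only, not the settings of the cited work.

```python
import numpy as np

rng = np.random.default_rng(2)
SIZE, GOAL, OBSTACLE = 5, (4, 4), (2, 2)      # toy 5x5 flight area (assumed)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma, eps = 0.3, 0.95, 0.2

def step(s, a):
    nxt = (min(max(s[0] + a[0], 0), SIZE - 1),
           min(max(s[1] + a[1], 0), SIZE - 1))
    if nxt == OBSTACLE:
        return s, -10.0, False        # penalize flying into the obstacle
    if nxt == GOAL:
        return nxt, 10.0, True
    return nxt, -1.0, False           # small per-move cost favors short paths

for episode in range(2000):
    s, done = (0, 0), False
    while not done:
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt, r, done = step(s, ACTIONS[a])
        Q[s][a] += alpha * (r + gamma * np.max(Q[nxt]) * (not done) - Q[s][a])
        s = nxt

# Greedy roll-out of the learned trajectory (step cap keeps the demo bounded)
s, path = (0, 0), [(0, 0)]
for _ in range(20):
    s, _, done = step(s, ACTIONS[int(np.argmax(Q[s]))])
    path.append(s)
    if done:
        break
print(path)
```

DQN-based variants replace the Q-table with a neural network so that the same update rule scales to continuous or high-dimensional state spaces.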
In [104], the authors addressed the energy required by a moving UAV. They used the mean-field game (MMFG) method to obtain the optimal trajectory and proposed the mean-field trust region policy optimization (MFTRPO) algorithm, which proved robust and superior in energy efficiency. Furthermore, ML can be used in UAV-aided communication for resource allocation and handover management. The authors of [105] presented an algorithm for handovers and radio resource management (H-RRM) in UAV communications, using DQN to decide how to allocate resources and when to perform handovers. The proposed system was shown to result in fewer handovers, less interference, and less delay experienced by terrestrial users, which was achieved by setting appropriate coefficients for delay, interference, and handover in the reward function. In addition, concerning the energy efficiency of moving UAVs, a study proposed a system for mobile charging scheduling in distributed multi-drone networks [106]. The authors proposed a DL-based method to troubleshoot possible problems in such networks effectively; the proposed system reduced the number of false bids made by drones by increasing the payment for those bids, resulting in a revenue-optimal auction even without bid distribution among the drones. Table 4 provides a concise summary of the studies discussed in this subsection.

4.3. Autonomous Underwater Vehicles (AUVs)

Underwater communication has been receiving considerable attention from researchers lately. The increasing need for sensor applications and cellular communication in this environment underscores the importance of optimizing underwater communications. However, a crucial issue to consider is that underwater communication relies on optical or EM waves, which support only short-distance links. In addition, water flow, the movement of living things, uneven surfaces, and oceanic turbulence can cause a high level of multipath fading, reducing the quality of the transmitted signal [107]. An AUV is expected to serve the deployed nodes of the Internet of Underwater Things (IoUwT) by moving from one node to another to provide better QoS, resembling traditional mobile relaying [108]. AUVs can also be integrated with UAVs or IRS, since an AUV is well suited to carrying an RIS to optimize the transmitted signals. Therefore, the AUV trajectory and its limited energy are essential considerations, and further research is required to fully realize this strategy in support of underwater wireless communication networks.
In [109], DRL was proposed to find an AUV's optimal trajectory tracking control, and the proposed system proved robust and effective across different kinds of trajectory tracking. In [110], the authors proposed the asynchronous multithreading proximal policy optimization-based path planning (AMPPO-PP) and trajectory tracking (AMPPO-TT) algorithms for autonomous planning, tracking, and emergency obstacle avoidance in underwater vehicles. AMPPO-PP proved effective in planning paths for underwater communication, outperforming the classical path-planning algorithm and performing at the same level as an advanced sampling-based path-planning method, while AMPPO-TT provided good tracking performance in three-dimensional coastline detection scenarios. Another study applied RL-based methods to control an underwater vehicle by redesigning the cost function, which allowed the vehicle to avoid obstacles smoothly [111]; the proposed system proved effective in completing the tracking task while avoiding obstacles. In comparison, the authors of [112] proposed an RNN with convolution (CRNN) algorithm to address obstacle avoidance. The CRNN solved the obstacle avoidance planning problem with fewer parameters and shorter computation times, leading to shorter paths and improved energy efficiency. Table 5 provides a concise summary of the studies discussed in this subsection.

4.4. Non-Orthogonal Multiple Access (NOMA)

The rapidly growing need for massive connectivity and the predicted growth in the use of emerging technologies in the 6G network make spectrum efficiency a crucial issue to be solved. NOMA is a promising and suitable technique to address this issue, due to its ability to provide highly spectrum-efficient multiple access in a 6G wireless network [113]. In NOMA, wireless terminals are grouped into clusters that transmit data over the same frequency channel, and successive interference cancellation (SIC) is applied in each cluster to separate the superposed signals and suppress mutual interference [114].
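The power-domain principle behind SIC can be illustrated with a two-user toy example: the far user's signal is allocated more power, decoded first, and subtracted before the near user's weaker signal is detected. The power split, modulation, and SNR in the Python sketch below are illustrative assumptions rather than values from the cited works.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sym = 10_000
p_far, p_near = 0.8, 0.2                 # power split (assumed); far user gets more
snr_db = 20
noise_std = 10 ** (-snr_db / 20)

# BPSK symbols for the two users, superposed on the same resource
x_far = rng.choice([-1.0, 1.0], n_sym)
x_near = rng.choice([-1.0, 1.0], n_sym)
tx = np.sqrt(p_far) * x_far + np.sqrt(p_near) * x_near

# Near user's received signal (unit channel gain assumed for simplicity)
y_near = tx + noise_std * rng.standard_normal(n_sym)

# SIC at the near user: decode the strong (far-user) signal, subtract it,
# then decode its own weaker signal from the residual.
far_hat = np.sign(y_near)
residual = y_near - np.sqrt(p_far) * far_hat
near_hat = np.sign(residual)

print("far-user BER at near receiver :", np.mean(far_hat != x_far))
print("near-user BER after SIC       :", np.mean(near_hat != x_near))
```

Imperfect subtraction at this step causes residual interference, which is among the impairments that the learning-based detectors discussed below aim to mitigate.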
In [115], unsupervised and supervised learning were implemented for spectrum sensing in NOMA communication. The proposed system achieved optimal power allocation between two primary users as well as accurate and effective spectrum sensing, while in [116], the authors focused on implementing LSTM-based DL models for signal detection. The results showed that the DL approach performed better than the SIC receiver and was more robust to limited radio resources. ML can also be applied to NOMA communication to improve energy efficiency: an energy-efficient ML power optimization algorithm was developed to meet QoS constraints in [117]. The proposed system significantly minimized network energy consumption while maintaining low complexity using an energy-efficient co-training-based semi-supervised learning (EE-CSL) algorithm, and thanks to its high spectral efficiency, the proposed system applied in the MIMO network achieved a higher sum rate than conventional MIMO orthogonal multiple access. In [118], the authors implemented a double DQN (DDQL)-based RL scheme to optimize the transmission power. The proposed DDQL algorithm reached the desired target value in 91% of the test cases and provided significantly better results than the sequential least squares programming (SLSQP) and trust-region constrained (TCONS) algorithms. Another study implemented a NOMA-based federated learning (DREAM-FL) system for client selection [119]; DREAM-FL proved to select more qualified clients with higher model accuracy than frequency division multiple access (FDMA)- and time division multiple access (TDMA)-based solutions. Finally, in [120], a DL-based algorithm was implemented in the NOMA system for channel estimation, where an LSTM-based DL algorithm is utilized to predict the channel coefficients. The bit-error-rate (BER), outage probability, sum rate, and individual capacity verified that the proposed system provides reliable performance, even when the cell capacity is increased. Table 6 provides a concise summary of the studies discussed in this subsection.

4.5. Millimeter-Wave and Terahertz Communications

To enhance system performance, especially throughput, 6G networks are expected to take advantage of a widely spread multi-band spectrum, enabling links from hundreds of gigabits per second to terabits per second [121]. Moreover, for the sake of seamless connectivity in emerging technology-based communication, higher spectrum frequencies can be considered to achieve fast and reliable communication, such as the combined use of the mmWave band (30–300 GHz) and the THz band (0.1–10 THz) [122]. However, these high-frequency communications suffer from distance limitations, energy efficiency constraints, physical layer challenges, and intense phase noise. Increasing the frequency results in higher spreading loss and stronger multipath fading losses. In addition, transceivers that can transmit at high power in the THz band are not yet available, which means that THz communication has lower transmit power than mmWave communication systems. Traditional transmission techniques are difficult to apply directly because they cannot overcome the intense phase noise caused by radio-frequency impairments at higher frequencies [123,124].
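The spreading-loss penalty of moving to higher carrier frequencies follows directly from the Friis free-space model, as the short sketch below illustrates for a few representative bands. The chosen frequencies and the 100 m distance are illustrative assumptions; molecular absorption, blockage, and other THz-specific losses are not modeled here.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def fspl_db(distance_m, freq_hz):
    # Free-space (spreading) loss from the Friis equation, in dB.
    return 20 * np.log10(4 * np.pi * distance_m * freq_hz / C)

for f in (3.5e9, 28e9, 140e9, 300e9):   # sub-6 GHz, mmWave, sub-THz, upper mmWave edge
    print(f"{f/1e9:6.1f} GHz @ 100 m : {fspl_db(100, f):6.1f} dB")
```

Each tenfold increase in carrier frequency adds roughly 20 dB of free-space loss at the same distance, which is why beamforming gain and dense deployments become indispensable in these bands.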
Several studies have proposed schemes and algorithms to overcome some of the problems in mmWave wireless communication. In [125], ML is used for low-complexity beam selection in mmWave MIMO communication by implementing a random forest classification (RFC) algorithm. The proposed system achieved a maximum uplink sum rate similar to that of the sub-optimal method and significantly better than the SVM-based method; furthermore, it converged faster than SVM-based methods and nearly reached the optimal performance. Notably, the RFC-based method could reduce the complexity of the system by 99.8% with massive users. In [126], the authors proposed a supervised ML algorithm to improve the blind handover success rate in the sub-6 GHz LTE and mmWave bands. The proposed system predicts the success or failure of a handover using previous calculations. The results showed that the proposed system improved the inter-radio access technology (inter-RAT) handover success rate and no longer kept the session in the optimal band for an extended time. Therefore, it has a high chance of supporting self-organizing networks in terms of high availability, bandwidth, low latency, and reduced service degradation during handover. In [127], the authors applied three unsupervised learning algorithms to cluster secondary users without knowing the number of clusters and without degrading the primary user's performance. The three unsupervised ML algorithms, namely K-means, agglomerative hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN), were used in the THz-NOMA network. Based on the sum data rate results, agglomerative hierarchical clustering outperformed the other two algorithms as the number of secondary users increased. Table 7 provides a concise summary of the studies discussed in this subsection.

4.6. Free Space Optics (FSO)

Optical wireless communication (OWC) techniques can be an alternative to the RF spectrum, especially in 6G networks and beyond, due to their abundant available bandwidth [128]. FSO communication uses light to transmit data through free space rather than through wired cables, making it more versatile and flexible than traditional wired communication. However, the transmitted signal experiences considerable impairments, such as multipath fading and atmospheric turbulence, which can reduce its quality. A concise summary of the studies discussed in this subsection is provided in Table 8.
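A common way to reason about these impairments is through a scintillation (fading) model for the received irradiance; for weak turbulence, a log-normal model is frequently used. The Monte Carlo sketch below, with an illustrative log-amplitude standard deviation and outage threshold chosen purely for demonstration, shows how turbulence translates into an outage probability of the kind the ML-based predictors in this subsection try to learn and counteract.

```python
import numpy as np

rng = np.random.default_rng(4)

# Weak-turbulence FSO fading modeled with log-normal irradiance:
# I = exp(2X), X ~ Normal(mu_x, sigma_x^2), normalized so that E[I] = 1.
sigma_x = 0.25                     # illustrative log-amplitude std (assumption)
mu_x = -sigma_x ** 2               # ensures E[exp(2X)] = 1
I = np.exp(2 * (mu_x + sigma_x * rng.standard_normal(200_000)))

threshold = 0.5                    # receiver needs at least half the mean irradiance
print("mean irradiance   :", I.mean())
print("outage probability:", np.mean(I < threshold))
```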
The authors of [129] focused on mitigating the effects of amplified spontaneous emission (ASE) noise, turbulence, and pointing errors by predicting the FSO channel at different transmission speeds using CNN and SVM. The results showed that the CNN outperformed the SVM in most cases and produced similar results in the remaining ones. The CNN regressor could accurately predict channels with ASE noise regardless of the transmission speed, although the turbulence and pointing error predictions were more accurate for low-speed than for high-speed transmission. In [130], the authors focused on atmospheric turbulence problems in the FSO-MIMO communication system. The dense CNN (DCNN) algorithm, a DNN with convolutional layers, was implemented at the transmitter, receiver, and transceiver sides of the proposed system. The results showed that the proposed DL-based methods outperformed the ML-based methods in terms of optimum performance and lower complexity. Specifically, the DL-based detector was two times faster than the ML-based detector for a modulation order of 16, three times faster for 64, and 7.5 times faster for 256.
In addition, the authors of [131] worked on a cognitive FSO communication network that offers some tantalizing advantages; for example, it can overcome the system complexity caused by the heterogeneity of supported services, applications, devices, and transmission technologies, while guaranteeing a high data rate and bandwidth. They developed an unsupervised-learning-based method to identify the number of concurrently transmitting users sharing time, which could also be used to allocate bandwidth, time, and space resources more efficiently. Based on the empirical model, the estimated number of communicating users was considered accurate when validated with four users, taking into account the number of samples and the receiver sampling rate. The result achieved over 92% accuracy in differentiating simultaneously transmitting users, even under moderate atmospheric turbulence. Another study applied a supervised-learning-based ML method to estimate the transmission quality of multi-user FSO communication links [132]. The authors compared the performance of SVM, RF, K-NN, and ANN to evaluate the proposed system; the results confirmed that SVM achieved the highest accuracy at 92%, followed by RF and K-NN with comparable results, and ANN the lowest at 84.2%. In [133], a combination of generative neural networks (GNN) and CNN considered the effects of turbulent light propagation, attenuation, and receiver detector noise, factors that can degrade the quality of the received state, increase cross-talk, and decrease the accuracy of symbol classification. The results showed that the proposed system efficiently recovered signals that had deteriorated from those problems and improved the CNN classification accuracy when the GNN was applied.

4.7. Visible Light Communication (VLC)

VLC is another type of OWC, which uses light-emitting diodes (LEDs) to transmit signals to receivers [134]. VLC has many advantages, including rich spectrum resources between 400 and 800 THz, robustness against interference, high confidentiality, and affordable implementation costs, and it has become a leading method for achieving high-speed, long-distance links in underwater wireless communications [135,136]. A concise summary of the studies discussed in this subsection is provided in Table 9.
A study aimed to prevent eavesdropping in a MISO-VLC system by developing a secure and efficient beamforming approach using deep RL [137]. The authors proposed two beamforming control schemes, an RL-based and a DRL-based MISO-VLC scheme, to derive the optimal beamforming policy and to deal efficiently and effectively with the high-dimensional and continuous action and state spaces. The results showed that the proposed system greatly increases the secrecy rate, decreases the BER, and outperforms zero-forcing beamforming and other existing algorithms. In [138], gated recurrent units (GRUs) with a CNN prediction algorithm were proposed to jointly optimize UAV deployment, user allocation, and energy efficiency of VLC-enabled UAV-based networks. The combined algorithms can model the long-term historical illumination distribution and predict the future illumination distribution, solving the non-convex optimization problem with low complexity. The proposed system achieved a notable result, reducing the total transmit power by up to 68.9% by enabling UAVs to determine their own deployment and user allocation. Another study proposed a model-driven DL nonlinear post-equalizer scheme to cope with severe channel impairments in OFDM communication [139]. The authors showed how to estimate the channel and detect symbols in a VLC system. The results showed that the overall channel impairment of intensity modulation and direct detection was effectively compensated, and the distorted symbols were efficiently demodulated into the bit stream. Furthermore, the VLC experiments demonstrated that the proposed scheme is robust and generalizable, working effectively under various conditions.
The authors of [140] focused on the effect of low-frequency noise on the signal quality of LED-based VLC communication systems. The problem was addressed by mapping the LED-VLC system as an ANN-based AE structure and introducing an in-band channel model (IBCM) channel modeling strategy, for which high-SNR training data were obtained. Furthermore, the embedded in-band autoencoder (IBAE) and IBCM were trained to combat the precisely estimated channel impairment while avoiding performance degradation caused by strong low-frequency noise. The scheme achieved data rates up to 0.325 Gbps higher than the compared scheme, indicating robustness to bias, amplitude, and bitrate changes. Another source of signal distortion related to the nonlinearity of LEDs is the high peak-to-average power ratio (PAPR) of OFDM signals. In [141], an LSTM autoencoder (LSTM-AE) was used to deal with variable-length input sequences and predict variable-length output sequences in OFDM systems; the proposed model reduces the PAPR of the transmitted signal without increasing the BER.

4.8. Mobile Edge Computing (MEC)

The traditional cloud computing model has been widely adopted over the last decade, and computation offloading can extend the usability of mobile terminals. However, sending data to a central cloud is expensive and adds overhead delays, which can reduce the QoS of each user and cause heavy losses for service providers [142]. Moreover, the rapid growth in the number of mobile terminals and the considerable transmission distance between the remote cloud and users have recently exacerbated this problem [143]. MEC is a technology that can reduce latency, improve energy efficiency, and provide more resources for mobile devices by performing computing tasks at the edge of the wireless network.
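The basic offloading trade-off can be captured with a back-of-the-envelope comparison between executing a task locally and uploading its input for execution at an edge server. The task size, CPU speeds, uplink rate, and energy figures in the Python sketch below are illustrative assumptions intended only to show the structure of the decision that the RL and FL schemes discussed next learn to make under dynamic conditions.

```python
# Minimal sketch of the local-vs-edge trade-off behind computation offloading.
# All task and hardware parameters are illustrative assumptions.

task_bits = 2e6            # input data to upload (bits)
task_cycles = 1e9          # CPU cycles required by the task

local_cpu_hz = 1e9         # device CPU speed
edge_cpu_hz = 10e9         # edge-server CPU speed
uplink_bps = 20e6          # radio uplink rate
tx_power_w = 0.5           # device transmit power
local_j_per_cycle = 1e-9   # device energy per CPU cycle

# Executing locally: compute on the device.
local_latency = task_cycles / local_cpu_hz
local_energy = task_cycles * local_j_per_cycle

# Offloading: upload the input, then compute at the edge (download ignored here).
offload_latency = task_bits / uplink_bps + task_cycles / edge_cpu_hz
offload_energy = tx_power_w * (task_bits / uplink_bps)

print(f"local   : {local_latency:.3f} s, {local_energy:.3f} J")
print(f"offload : {offload_latency:.3f} s, {offload_energy:.3f} J")
print("offload?", offload_latency < local_latency and offload_energy < local_energy)
```

In a real network the uplink rate, edge load, and task priorities vary over time, which is exactly why learning-based controllers are attractive for this decision.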
In [144], the base station (BS) was equipped with MEC to optimize spectrum and transmit power allocation. A multi-stack RL algorithm was proposed to help the BS optimize its resource allocation across different tasks, including adjusting subcarriers, transmit power, and task allocation schemes. The proposed system enhanced learning efficiency and convergence speed by tracking past resource allocation schemes and user data at each BS; it was therefore more efficient than Q-learning, requiring 18% fewer iterations and resulting in 11% less maximal delay for users. In [145], ML was used for multiuser MEC systems in a cognitive eavesdropping environment. The authors proposed an FL framework to improve the efficiency of offloading tasks, allocating bandwidth, and distributing computational resources, taking latency and power consumption into account. The task offloading and resource allocation were formulated as a Markov decision process, while the state and action spaces were designed with DQN-based RL. The FL framework distributed the DQN scheme to be run by each user to reduce the communication overhead and protect data privacy. The proposed method improved performance by reducing latency and energy consumption while ensuring more bandwidth and computational resources for higher-priority tasks. MEC is well suited to processing computation-intensive IoT tasks, which can be offloaded to the MEC server, making it a promising technology for serving massive IoT devices. However, acquiring comprehensive and accurate system information has become a challenge when offloading across multiple edge servers. In [146], the DRL-based energy-efficient task offloading (DEETO) algorithm was proposed to enhance energy efficiency and workload balance among the edge servers. The DEETO algorithm was found to be more energy-efficient and to reduce the edge-server workload compared with the other RL algorithms mentioned in the paper.
Apart from supporting IoT communications, MEC can also be combined with other emerging technologies. In [147], a DDPG-based RL algorithm was applied to optimize the physical-layer security of an IRS-assisted MEC network. The proposed system allocated the offloading ratio, bandwidth, and computational capabilities to users. The results showed that the DDPG scheme found a more efficient way to offload tasks, resulting in a lower total cost than the all-local scheme, and the system demonstrated the ability to work well in the MEC network under various conditions. Besides the IRS, UAV communication has also received widespread attention in MEC systems. In [148], the authors proposed a single-agent scheme based on Q-learning and an MA scheme based on the Nash Q-learning (NQL) algorithm to maximize secure offloading in multi-UAV-assisted MEC networks. The system solved the optimization problem while considering the limitations of the secure offloading transmission rate, computing latency, power consumption, and task types, and demonstrated that the MA scheme was better at optimizing the offloading and achieved greater system utility than the single-agent and random-offloading schemes. In contrast, the MA-TD3 (MATD3) approach was proposed in [149] to design a joint strategy for trajectory, task allocation, and power management. The result showed that the total system cost was significantly higher while applying the proposed approach than with the other optimization method mentioned. In addition, the proposed UAV-assisted edge cloud is flexible and adaptable to changing conditions, making it a promising technology for future wireless networks that are expected to become increasingly complex and dynamic. Table 10 provides a concise summary of the studies discussed in this subsection.
Based on the discussion in this section, Table 11 shows the application of ML algorithms for emerging technologies that could be implemented in 5G/6G communications to overcome several issues.

5. Potential Challenges for 6G Network Requirements

The ITU-R workshop on IMT addressed several usage scenarios and key capability indicators for 2030 and beyond. These indicators are included among the KPIs for 6G communication networks that need to be considered when designing the future communication network, namely high throughput (Gbps/Tbps), extended coverage, low latency, and high reliability. Furthermore, the network should have the capacity to support the interconnection of terrestrial and non-terrestrial technologies for both sensing and communication needs, while also operating highly efficiently.

5.1. Throughput Improvement

Throughput is a critical performance metric for wireless networks, and the demand for higher throughput grows with each generation of technology. 6G networks are expected to deliver extremely high throughput, exceeding 1 Gbps [150]. One of the technologies that can enhance throughput in 6G networks is m-MIMO. However, m-MIMO requires a large number of antennas, which may not be feasible for handheld devices. A more practical solution is optimizing the beamformer, i.e., the set of weights applied across the antenna array used to transmit and receive signals.
ML can potentially enhance beamforming optimization by learning from wireless CSI, which is constantly changing and inherently imperfect. In particular, RL, a type of ML well suited to unstable environments, can make accurate optimization decisions under such CSI. By utilizing an RL algorithm to select the optimal communication channel, a throughput approaching the theoretical maximum can be achieved, even in the presence of imperfect CSI.
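As a concrete, stripped-down illustration of this idea, channel (or beam) selection under noisy throughput feedback can be framed as a multi-armed bandit and solved with an epsilon-greedy rule. The candidate rates, noise level, and exploration rate in the Python sketch below are illustrative assumptions and far simpler than the full RL formulations used in practice.

```python
import numpy as np

rng = np.random.default_rng(5)

# Unknown mean throughput of each candidate channel/beam (Mbps) -- illustrative.
true_rates = np.array([120.0, 300.0, 210.0, 80.0])
n_channels = len(true_rates)

q = np.zeros(n_channels)       # running estimate of each channel's throughput
counts = np.zeros(n_channels)
eps = 0.1
total = 0.0

for t in range(5000):
    # Epsilon-greedy: mostly exploit the best estimate, occasionally explore.
    a = rng.integers(n_channels) if rng.random() < eps else int(np.argmax(q))
    # Observed throughput is noisy because the CSI is imperfect and time-varying.
    r = true_rates[a] + 30.0 * rng.standard_normal()
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]
    total += r

print("estimated rates:", np.round(q, 1))
print("avg throughput :", total / 5000, "vs. a", true_rates.max(), "Mbps optimum")
```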
Another potential solution to the challenge of realizing a Tbps wireless network is to implement an optical spectrum. Optical spectrum, which works through free space, has the potential to achieve Tbps speeds due to its ability to overcome the limitations of coaxial cables. Despite the existence of certain limitations, such as the effects of atmospheric turbulence, ASE noise, and channel error, these can be minimized by applying an ML algorithm. This algorithm can learn the statistical properties of the optical spectrum and create a model to predict and mitigate these effects. Therefore, this would enable reliable and efficient data transmission at Tbps speeds over the optical spectrum.

5.2. Coverage Extension

The ever-increasing demand for wireless communication stands in stark contrast to the decreasing availability of spectrum and the difficulty of constructing new BSs. Implementing emerging technologies such as IRS and UAVs can help address these issues by extending terrestrial BS coverage without overburdening the infrastructure. The IRS, a software-defined metamaterial that can be programmed to reflect wireless signals in desired ways, can extend the coverage of existing BSs without requiring any new infrastructure, while UAVs are becoming increasingly sophisticated and can carry a variety of payloads, including BS equipment. This makes them a flexible and versatile tool for improving wireless coverage and a cost-effective way to provide coverage in remote areas. Therefore, these technologies can be strategically positioned to transmit signals, for example, providing a communication network in disaster-prone areas and on the high seas via aerial BSs, or expanding the network in underwater and underground environments.

5.3. Ultra-High Reliability

The 6G cellular networks are expected to provide reliable and ultra-low-latency communication. Especially with the presence of emerging technologies, the reliability of the 6G network is increasingly anticipated: it is expected to increase from 99.9 percent to 99.999 percent, ten times higher than 5G [151]. Despite the importance of ultra-low latency, it is difficult to achieve both low latency and high reliability, which poses a significant challenge in developing architectures that support ultra-low latency as well as spectral and energy efficiency [152].
AI-empowered technologies can be a solution for achieving high reliability due to their predictive ability, their capacity to solve complex problems, and their improved generative learning abilities [153]. AI-empowered technologies can improve the reliability and availability of 6G networks in several ways. One of them is a single computing core that combines high-performance computing (HPC) and AI. HPC is the use of computers to solve complex problems that would be too time-consuming to solve using traditional methods. A combination of HPC and AI could help the system develop new algorithms to better manage traffic, optimize the use of network resources, and detect and prevent problems without human intervention [154,155]. It can predict and prevent network failures by analyzing historical data and identifying patterns that could lead to them; this information can then be used to take preventive measures, such as adjusting network settings or deploying additional resources. It can also be used to develop self-healing capabilities for 6G networks. These features could make the network more reliable by reducing the number of outages and the load on the network.
However, there are some challenges that need to be addressed in order to develop a single computing core. The development of new hardware that can combine the capabilities of HPC and AI is a major challenge, due to their need to be able to handle the high processing power and memory requirements of HPC. Furthermore, the software needs to be able to scale to large networks, handle a wide variety of data traffic, and protect the network from cyberattacks which can exploit the vulnerabilities of a single computing core.
In line with AI-empowered technologies, which must carefully select which information to use in order to avoid privacy concerns, FL is one of the keys to overcoming this drawback, due to its ability to allow each user to run the learning algorithms locally without exchanging personal data. Moreover, since the decision-making ability of AI technologies must be beyond question, the role of DL will be indispensable for the smooth realization of this technology.

5.4. Low Latency Communication

Due to the massive connectivity of the 6G network, which is predicted to grow beyond that of 5G and could lead to unbearable latency, one of the key requirements for the 6G network is URLLC, which demands a latency of less than 0.1 ms [156]. There are two main challenges to achieving URLLC in 6G networks. First, the limited frequency bands available for cellular networks make it difficult to increase spectrum utilization rates [157]. Second, future communication networks may not be able to meet the latency, reliability, and other QoS requirements of URLLC, such as spectrum efficiency, energy efficiency, capacity, and network coverage [158].
ML algorithms offer a promising solution to these challenges. ML algorithms can be used to optimize network resources, such as spectrum and power, and to predict and prevent network failures, which would improve the network’s overall efficiency and reduce latency. In addition, ML algorithms can be used to develop self-healing capabilities, as mentioned in the last subsection. This is crucial for URLLC, as even a small amount of latency can significantly impact the applications’ performance. Even though the potential benefits of ML for URLLC are significant, and it is likely that ML will play a significant role in developing 6G networks, continuous research is needed on implementing ML to support the realization of optimal URLLC communication in various scenarios.
Moreover, as mentioned before, AI-empowered technologies can be implemented to improve reliability; they can also be used to achieve low-latency communication, for example, through predictive analytics that forecast future traffic patterns. This predictive information can be used to adjust network settings and optimize routing, which would help reduce latency. Additionally, latency can be reduced by caching data in 6G networks, meaning frequently accessed data would be stored close to the user. Thus, applying ML algorithms is one of the promising approaches to overcoming these complex problems.

5.5. Energy Efficient Communication

The widespread adoption of emerging technologies in 6G communication networks is expected, which raises concerns about their impact on energy consumption. Even though realizing zero-energy IoT is very challenging, energy efficiency should be a critical issue in 6G networks. m-MIMO is expected to be a key technology in 6G networks, and emerging technologies such as IRS, UAVs, and AUVs are expected to play a significant role in m-MIMO communication. While MIMO can improve both spectral and energy efficiency [159], power consumption is a critical issue that must be considered when implementing emerging technologies-aided wireless communication. This is because emerging technologies are often more complex and require more power.
Likewise, the IRS still requires an energy supply even though it lacks power amplifiers [160]. UAVs and AUVs, on the other hand, consume energy when they move around and forward signals [161,162]. As a result, energy-efficient mechanisms must be considered to address this issue, for instance by developing suitable optimization methods to minimize power consumption or by implementing wireless power transfer to recharge the required energy.

5.6. Interconnection of Terrestrial and Non-Terrestrial Technologies

Terrestrial and non-terrestrial technologies are being considered for use in 6G networks because their interconnection is a key enabler for a wide range of applications and benefits. However, several complex challenges, stemming from issues such as heterogeneity, mobility, and security, need to be addressed in order to achieve seamless integration.
The upcoming 6G networks are predicted to be heterogeneous and support high mobility, comprising a range of disparate technologies, such as mmWave, THz, and m-MIMO. Network slicing, full duplex, and beamforming could also be implemented to address the challenges. Network slicing allows for the creation of virtual networks within a physical network, which will enhance spectrum efficiency and make it easier to interconnect terrestrial and non-terrestrial networks. By implementing these technologies, the interconnection of terrestrial and non-terrestrial technologies in 6G networks can be achieved, and it will allow for a wide range of new applications, such as VR, AR, self-driving cars, and critical infrastructure.
However, seamless integration will require new security protocols, as it will be necessary to ensure the data is secure as it moves between different networks. Other than that, standardization and cost should be the other issues that need to be considered. There is currently no agreed-upon standard for 6G networks, which makes it hard to confirm that different networks will be able to work together seamlessly. Furthermore, the cost of implementing seamless integration for 6G networks could be high, making it difficult for some businesses and organizations to adopt this technology.

5.7. Sensing and Communication

The 6G cellular networks are anticipated to utilize the broad spectrum of multiple frequency bands to enhance data transfer rates. This will be achieved through the new radio access technologies and the exploitation of the unique characteristics of the sub-THz spectrum. The sub-THz spectrum refers to the portion of the electromagnetic spectrum that lies between the microwave and infrared bands. This spectrum has a large amount of unused bandwidth, which can be used to achieve higher data rates [163,164].
In addition, the use of the sub-THz spectrum can lead to the development of integrated sensing and communication technology. This technology considers the entire communication system as a sensor and enables a wide range of new services, such as environmental reconstruction and high-accuracy imaging [165]. Localization obtained from sensing could enhance communication performance, such as improving beamforming, traffic routing, directing radio waves in a specific direction, reducing interference, and improving the SNR value [166,167].
Other than that, embedded space building (ESB) is another key technology that will be explored for 6G networks. ESB refers to the use of wireless networks to create virtual spaces that can be accessed by users. It involves embedding small, low-power sensors and actuators in the environment to create a distributed network of devices [168,169]. ESB can be used to create virtual space to support gaming, education, environment and healthcare monitoring, and training. It can provide users with a more immersive and realistic gaming experience, a more interactive and engaging learning experience, and provides trainees with a safe and realistic environment. It can also be used to monitor environmental changes and patient health, in order to provide remote tasks.
However, the sub-THz spectrum and ESB also pose challenges: sub-THz signals are easily absorbed by water vapor and oxygen, which limits their range, and integrated sensing and communication technology will require the development of new hardware and software. Moreover, there is no common standard for ESB yet, and the anticipated new services and applications, which rely on high-definition virtual reality, augmented reality, and autonomous driving, are still being developed. Therefore, even though using the sub-THz spectrum and developing ESB offer clear benefits for communication networks, further research is still required to fully implement them in real-world scenarios.

5.8. Secure Communication

In line with the high data traffic expected in 6G communications, data security risks are also likely to increase owing to the characteristics of wireless transmission [170]. Data privacy and security are therefore crucial requirements for 6G communication. Besides protecting the communication privacy of both parties, data security also affects the quality of the signal received by the receiver. Communication over the Internet is susceptible to cyber-attacks from malicious users, such as eavesdropping on important information shared by others. In another scenario, jamming is a major security threat to wireless networks: it occurs when malicious parties intentionally transmit signals that interfere with legitimate communication or capture network bandwidth, thereby leaking vital information shared between the communicating parties. Additionally, attackers may share or broadcast incorrect information to conduct data integrity attacks [171].
Therefore, security measures are needed to address these challenges while maintaining the security of wireless communication and protecting the data privacy of users communicating over the network. Strong authentication or encryption will be essential to prevent unauthorized access to 6G networks, which could be achieved using techniques such as biometric or multi-factor authentication. Physical layer security (PLS)-based secret key generation is a promising new technique for securing communications and reducing the information available to eavesdroppers, as it is potentially more secure and efficient than traditional cryptographic schemes. Nevertheless, the use of PLS may be hindered by NLoS propagation when channel estimation is constrained by low SNR, high bit agreement rates, and low secret key rates (SKR) [172].
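The core of PLS-based secret key generation is that two legitimate parties quantize correlated observations of their reciprocal channel into bit strings, which must subsequently be reconciled and privacy-amplified. The Python sketch below illustrates only the first step, channel probing and single-threshold quantization, with an illustrative noise level; the reconciliation and amplification stages, and the effect of an eavesdropper's decorrelated channel, are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
n_probes = 1024

# Reciprocal channel gain observed at both ends, each with independent
# estimation noise (illustrative SNR; lower SNR raises the disagreement rate).
common = rng.standard_normal(n_probes)
noise_std = 0.2
alice = common + noise_std * rng.standard_normal(n_probes)
bob = common + noise_std * rng.standard_normal(n_probes)

# Single-threshold quantization around each side's own median measurement.
key_a = (alice > np.median(alice)).astype(int)
key_b = (bob > np.median(bob)).astype(int)

disagreement = np.mean(key_a != key_b)
print("raw key bits    :", n_probes)
print("bit disagreement:", disagreement)   # must be reconciled before use
```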
Network security monitoring is also needed to detect and respond to security threats, which can be achieved using techniques such as intrusion detection systems (IDS) and intrusion prevention systems (IPS). Specifically for jamming attacks, several countermeasures can be implemented. Frequency hopping, in which the network periodically changes its operating frequency [173], makes it difficult for attackers to keep up with the network and maintain the jamming signal. Implementing AI technology that can identify suspicious signals and dynamically adjust the modulation scheme used by the network could also help sustain communication even in the presence of jamming. Spread spectrum and MIMO are other techniques that can improve the robustness of the network to interference [174,175]. In addition, it is important to provide security education and raise awareness among all users, helping them understand the potential security risks and acquire the fundamental skills to protect themselves.
In addition, quantum communication technology is an advanced approach being developed to create secure communication. Compared with classical binary communication systems, which use bits that are either 0 or 1 to represent information, quantum communication uses qubits, which can be in a superposition of 0 and 1. Because a qubit can represent both 0 and 1 simultaneously, eavesdropping becomes much more difficult [176,177], and quantum communication can carry a significantly larger amount of information than classical communication systems [178]. Furthermore, it could significantly improve transmission quality, as it is less susceptible to noise and interference because qubits are not affected by the same physical processes as bits. Quantum communication also offers another potential benefit: absolute randomness and security. It is inherently secure because the act of eavesdropping collapses the quantum state of the qubits, which is instantly detectable by the legitimate parties.
By implementing those measures, it could ensure a safer and more secure environment for all parties involved. Therefore, further research focusing on optimizing the parameters of each measure is required due to the limited research exploring this issue. Meanwhile, the development of quantum communication is still in its early stages, which also requires further research to implement it in real-world situations.

6. Conclusions

In the realm of communication networks, integrating emerging technologies is a critical factor to consider, especially in developing the forthcoming 6G network. These technologies can significantly boost the speed, security, and overall quality of communication services, even in locations or circumstances deemed challenging or hazardous to access. By leveraging cutting-edge technologies, 6G networks can offer unparalleled performance and reliability, making them a key enabler of a wide variety of applications and use cases in the future. However, implementing emerging technology-assisted wireless communications proved challenging for the model-driven approach, due to its inaccuracy. Furthermore, the high overhead and computational complexity of major emerging technologies hindered the system’s optimization, which conventional mathematical solutions cannot resolve. Therefore, ML algorithms, which are well-known and proven for their reliability in solving complex problems, are the leading solution for these concerns.
It is of utmost importance to create efficient algorithms and techniques that can cater to the needs of the upcoming 6G network. The 6G network is expected to have high requirements in terms of throughput, connectivity reliability, and energy efficiency. It is imperative to tackle these needs without the burden of high workloads and time complexities, as doing so is critical to the success of the network. Therefore, this article provides a comprehensive review of implementing ML, DL, RL, and DRL algorithms to optimize some of the difficulties that every emerging technology may face to meet the 6G network requirements. This study has revealed that the application of ML algorithms can effectively address a broad range of challenges related to spectral and energy efficiency, throughput, computational reduction, and the establishment of reliable and secure communication channels. However, despite their success in previous studies, it is essential to note that further research is necessary to fully harness the potential of these tools in advancing innovation. At the end of this study, we provide possible ML approaches to effectively address other challenges that may be presented in 6G network technology, such as extending network coverage, minimizing latency issues, connecting terrestrial and non-terrestrial technologies, and integrating the latest advances in sensing and communication technologies. Our recommendation aims to stimulate further discourse and exploration within this critical development area.

Author Contributions

Conceptualization, A.A.P., M.H.A. and B.M.L.; methodology, A.A.P., M.H.A. and B.M.L.; resources, B.M.L.; writing—original draft preparation, A.A.P., T.T.A., M.H.A. and B.M.L.; writing—review and editing, A.A.P., T.T.A., M.H.A. and B.M.L.; supervision, M.H.A. and B.M.L.; project administration, B.M.L.; funding acquisition, B.M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by Korea government (MSIT) under Grant NRF-2023R1A2C1002656, was supported by the MSIT (Ministry of Science and ICT), Korea under Grant IITP-2023-RS-2022-00156345 (ICT Challenge and Advanced Network of HRD Program), and was supported by the faculty research fund of Sejong University in 2023.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Murakami, T.; Kishi, Y.; Ishibashi, K.; Kasai, K.; Shinbo, H.; Tamai, M.; Tsuda, K.; Nakazawa, M.; Tsukamoto, Y.; Yokoyama, H.; et al. Research Project to Realize Various High-reliability Communications in Advanced 5G Network. In Proceedings of the 2020 IEEE Wireless Communications and Networking Conference (WCNC), Online, 25–28 May 2020; pp. 1–8. [Google Scholar] [CrossRef]
  2. Poirot, V.; Ericson, M.; Nordberg, M.; Andersson, K. Energy efficient multi-connectivity algorithms for ultra-dense 5G networks. Wirel. Net. 2020, 26, 2207–2222. [Google Scholar] [CrossRef]
  3. Suyama, S.; Okayama, T.; Kishiyama, Y.; Nagata, S.; Takahiro, A. A Study on Extreme Wideband 6G Radio Access Technologies for Achieving 100Gbps Data Rate in Higher Frequency Bands. IEICE Trans. Commun. 2021, E104.B, 992–999. [Google Scholar] [CrossRef]
  4. Zhang, H.; Huang, W. Tractable Mobility Model for Multi-Connectivity in 5G User-Centric Ultra-Dense Networks. IEEE Access 2018, 6, 43100–43112. [Google Scholar] [CrossRef]
  5. Chen, K.C.; Zhang, T.; Gitlin, R.D.; Fettweis, G. Ultra-Low Latency Mobile Networking. IEEE Netw. 2019, 33, 181–187. [Google Scholar] [CrossRef]
  6. Safi, H.; Dargahi, A.; Cheng, J.; Safari, M. Analytical Channel Model and Link Design Optimization for Ground-to-HAP Free-Space Optical Communications. J. Light. Technol. 2020, 38, 5036–5047. [Google Scholar] [CrossRef]
  7. Noh, S.; Lee, J.; Lee, G.; Seo, K.; Sung, Y.; Yu, H. Channel Estimation Techniques for RIS-Assisted Communication: Millimeter-Wave and Sub-THz Systems. IEEE Veh. Technol. Mag. 2022, 17, 64–73. [Google Scholar] [CrossRef]
  8. Nguyen, T.V.; Nguyen, D.N.; Renzo, M.D.; Zhang, R. Leveraging Secondary Reflections and Mitigating Interference in Multi-IRS/RIS Aided Wireless Networks. IEEE Trans. Wirel. Commun. 2023, 22, 502–517. [Google Scholar] [CrossRef]
  9. Huang, A.; Tian, L.; Jiang, T.; Zhang, J. NLOS Identification for Wideband mmWave Systems at 28 GHz. In Proceedings of the 2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring), Kuala Lumpur, Malaysia, 28 April–1 May 2019; pp. 1–6. [Google Scholar] [CrossRef]
  10. Mukherjee, M.; Kumar, V.; Kumar, S.; Mavromoustakis, C.X.; Zhang, Q.; Guo, M. RIS-assisted Task Offloading for Wireless Dead Zone to Minimize Delay in Edge Computing. In Proceedings of the GLOBECOM 2022–2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 2554–2559. [Google Scholar] [CrossRef]
  11. Nguyen, L.D.; Kortun, A.; Duong, T.Q. An Introduction of Real-time Embedded Optimisation Programming for UAV Systems under Disaster Communication. EAI Endorsed Trans. Ind. Netw. Intell. Syst. 2018, 5, e5. [Google Scholar] [CrossRef]
  12. Jamali, M.V.; Chizari, A.; Salehi, J.A. Performance Analysis of Multi-Hop Underwater Wireless Optical Communication Systems. IEEE Photonics Technol. Lett. 2017, 29, 462–465. [Google Scholar] [CrossRef]
  13. Yin, S.; Zhao, Y.; Li, L.; Yu, F.R. UAV-Assisted Cooperative Communications With Time-Sharing Information and Power Transfer. IEEE Trans. Veh. Technol. 2020, 69, 1554–1567. [Google Scholar] [CrossRef]
  14. Agrawal, N.; Bansal, A.; Singh, K.; Li, C.P.; Mumtaz, S. Finite Block Length Analysis of RIS-Assisted UAV-Based Multiuser IoT Communication System With Non-Linear EH. IEEE Trans. Commun. 2022, 70, 3542–3557. [Google Scholar] [CrossRef]
  15. Khan, M.A.; Ullah, I.; Alkhalifah, A.; Rehman, S.U.; Shah, J.A.; Uddin, M.I.; Alsharif, M.H.; Algarni, F. A Provable and Privacy-Preserving Authentication Scheme for UAV-Enabled Intelligent Transportation Systems. IEEE Trans. Ind. Inform. 2022, 18, 3416–3425. [Google Scholar] [CrossRef]
  16. Do, D.T.; Le, A.T.; Lee, B.M. NOMA in Cooperative Underlay Cognitive Radio Networks Under Imperfect SIC. IEEE Access 2020, 8, 86180–86195. [Google Scholar] [CrossRef]
  17. Yang, F.; Wang, J.B.; Zhang, H.; Lin, M.; Cheng, J. Multi-IRS-Assisted mmWave MIMO Communication Using Twin-Timescale Channel State Information. IEEE Trans. Commun. 2022, 70, 6370–6384. [Google Scholar] [CrossRef]
  18. Gul, F.; Mir, A.; Mir, I.; Mir, S.; Islaam, T.U.; Abualigah, L.; Forestiero, A. A Centralized Strategy for Multi-Agent Exploration. IEEE Access 2022, 10, 126871–126884. [Google Scholar] [CrossRef]
  19. Forestiero, A.; Mastroianni, C.; Spezzano, G. Antares: An ant-inspired P2P information system for a self-structured grid. In Proceedings of the 2007 2nd Bio-Inspired Models of Network, Information and Computing Systems, Budapest, Hungary, 10–12 December 2007; pp. 151–158. [Google Scholar] [CrossRef]
  20. Radanliev, P.; De Roure, D. New and emerging forms of data and technologies: Literature and Bibliometric Review. Multimed. Tools Appl. 2022, 82, 2887–2911. [Google Scholar] [CrossRef]
  21. Costola, M.; Hinz, O.; Nofer, M.; Pelizzon, L. Machine learning sentiment analysis, COVID-19 news and stock market reactions. Res. Int. Bus. Financ. 2023, 64, 101881. [Google Scholar] [CrossRef]
  22. Megnidio-Tchoukouegno, M.; Adedeji, J.A. Machine Learning for Road Traffic Accident Improvement and Environmental Resource Management in the Transportation Sector. Sustainability 2023, 15, 2014. [Google Scholar] [CrossRef]
  23. Mismar, F.B.; Alammouri, A.; Alkhateeb, A.; Andrews, J.G.; Evans, B.L. Deep Learning Predictive Band Switching in Wireless Networks. IEEE Trans. Wirel. Commun. 2021, 20, 96–109. [Google Scholar] [CrossRef]
  24. Chen, T.; Zhang, X.; You, M.; Zheng, G.; Lambotharan, S. A GNN-Based Supervised Learning Framework for Resource Allocation in Wireless IoT Networks. IEEE Internet Things J. 2022, 9, 1712–1724. [Google Scholar] [CrossRef]
  25. Liu, Y.; Qin, Z.; Cai, Y.; Gao, Y.; Li, G.Y.; Nallanathan, A. UAV Communications Based on Non-Orthogonal Multiple Access. IEEE Wirel. Commun. 2019, 26, 52–57. [Google Scholar] [CrossRef]
  26. An, T.T.; Lee, B.M. Robust Automatic Modulation Classification in Low Signal to Noise Ratio. IEEE Access 2023, 11, 7860–7872. [Google Scholar] [CrossRef]
  27. Liu, X.; Liu, Y.; Chen, Y. Machine Learning Empowered Trajectory and Passive Beamforming Design in UAV-RIS Wireless Networks. IEEE J. Sel. Areas Commun. 2021, 39, 2042–2055. [Google Scholar] [CrossRef]
  28. Kaur, J.; Khan, M.A.; Iftikhar, M.; Imran, M.; Emad Ul Haq, Q. Machine Learning Techniques for 5G and Beyond. IEEE Access 2021, 9, 23472–23488. [Google Scholar] [CrossRef]
  29. Tang, F.; Mao, B.; Kawamoto, Y.; Kato, N. Survey on Machine Learning for Intelligent End-to-End Communication Toward 6G: From Network Access, Routing to Traffic Control and Streaming Adaption. IEEE Commun. Surv. Tutorials 2021, 23, 1578–1598. [Google Scholar] [CrossRef]
  30. Gkonis, P.K. A Survey on Machine Learning Techniques for Massive MIMO Configurations: Application Areas, Performance Limitations and Future Challenges. IEEE Access 2023, 11, 67–88. [Google Scholar] [CrossRef]
  31. Wang, Y.; Gao, Z.; Zheng, D.; Chen, S.; Gunduz, D.; Poor, H.V. Transformer-Empowered 6G Intelligent Networks: From Massive MIMO Processing to Semantic Communication. IEEE Wirel. Commun. 2022, 1–9. [Google Scholar] [CrossRef]
  32. Demirhan, U.; Alkhateeb, A. Integrated Sensing and Communication for 6G: Ten Key Machine Learning Roles. IEEE Commun. Mag. 2023, 61, 113–119. [Google Scholar] [CrossRef]
  33. Puspitasari, A.A.; Lee, B.M. A Survey on Reinforcement Learning for Reconfigurable Intelligent Surfaces in Wireless Communications. Sensors 2023, 23, 2554. [Google Scholar] [CrossRef]
  34. Alsharif, M.H.; Jahid, A.; Kelechi, A.H.; Kannadasan, R. Green IoT: A Review and Future Research Directions. Symmetry 2023, 15, 757. [Google Scholar] [CrossRef]
  35. Abate, F.; Carratù, M.; Liguori, C.; Paciello, V. A low cost smart power meter for IoT. Measurement 2019, 136, 59–66. [Google Scholar] [CrossRef]
  36. Nayanatara, C.; Divya, S.; Mahalakshmi, E. Micro-Grid Management Strategy with the Integration of Renewable Energy Using IoT. In Proceedings of the 2018 International Conference on Computation of Power, Energy, Information and Communication (ICCPEIC), Chennai, India, 28–29 March 2018; pp. 160–165. [Google Scholar] [CrossRef]
  37. Huang, Y.; Jin, J.; Lou, M.; Dong, J.; Wu, D.; Xia, L.; Wang, S.; Zhang, X. 6G mobile network requirements and technical feasibility study. China Commun. 2022, 19, 123–136. [Google Scholar] [CrossRef]
  38. Guo, F.; Yu, F.R.; Zhang, H.; Ji, H.; Leung, V.C.M.; Li, X. An Adaptive Wireless Virtual Reality Framework in Future Wireless Networks: A Distributed Learning Approach. IEEE Trans. Veh. Technol. 2020, 69, 8514–8528. [Google Scholar] [CrossRef]
  39. Rappaport, T.S.; Xing, Y.; Kanhere, O.; Ju, S.; Madanayake, A.; Mandal, S.; Alkhateeb, A.; Trichopoulos, G.C. Wireless Communications and Applications Above 100 GHz: Opportunities and Challenges for 6G and Beyond. IEEE Access 2019, 7, 78729–78757. [Google Scholar] [CrossRef]
  40. Roh, J.h.; Lee, S.k.; Son, C.W.; Hwang, C.; Kang, J.; Park, J. Cyber Security System with FPGA-based Network Intrusion Detector for Nuclear Power Plant. In Proceedings of the IECON 2020 The 46th Annual Conference of the IEEE Industrial Electronics Society, Singapore, 18–21 October 2020; pp. 2121–2125. [Google Scholar] [CrossRef]
  41. Hao, W.; Yang, Q.; Li, Z.; Hu, S.; Liu, B.; Ruan, W. Multi-Scale Traffic Aware Cybersecurity Situational Awareness Online Model for Intelligent Power Substation Communication Network. IEEE Internet Things J. 2023, 10, 1666–1681. [Google Scholar] [CrossRef]
  42. Slalmi, A.; Saadane, R.; Chehri, A.; Kharraz, H. How Will 5G Transform Industrial IoT: Latency and Reliability Analysis. In Human Centred Intelligent Systems; Zimmermann, A., Howlett, R.J., Jain, L.C., Eds.; Springer: Singapore, 2021; pp. 335–345. [Google Scholar] [CrossRef]
  43. Zhao, Y.; Chi, X.; Qian, L.; Zhu, Y.; Hou, F. Resource Allocation and Slicing Puncture in Cellular Networks With eMBB and URLLC Terminals Coexistence. IEEE Internet Things J. 2022, 9, 18431–18444. [Google Scholar] [CrossRef]
  44. Sefati, S.S.; Halunga, S. Ultra-reliability and low-latency communications on the internet of things based on 5G network: Literature review, classification, and future research view. Trans. Emerg. Telecommun. Technol. 2023, 34, e4770. [Google Scholar] [CrossRef]
  45. Xu, Y.; Pi, D.; Yang, S.; Chen, Y.; Qin, S.; Zio, E. An Angle-Based Bi-Objective Optimization Algorithm for Redundancy Allocation in Presence of Interval Uncertainty. IEEE Trans. Autom. Sci. Eng. 2023, 20, 271–284. [Google Scholar] [CrossRef]
  46. Holzinger, K.; Biersack, F.; Stubbe, H.; Mariño, A.G.; Kane, A.; Fons, F.; Haigang, Z.; Wild, T.; Herkersdorf, A.; Carle, G. SmartNIC-based load management and network health monitoring for time sensitive applications. In Proceedings of the NOMS 2022–2022 IEEE/IFIP Network Operations and Management Symposium, Budapest, Hungary, 25–29 April 2022; pp. 1–6. [Google Scholar]
  47. Talaat, F.M.; Ali, H.A.; Saraya, M.S.; Saleh, A.I. Effective scheduling algorithm for load balancing in fog environment using CNN and MPSO. Knowl. Inf. Syst. 2022, 64, 773–797. [Google Scholar] [CrossRef]
  48. Zaretalab, A.; Sharifi, M.; Guilani, P.P.; Taghipour, S.; Niaki, S.T.A. A multi-objective model for optimizing the redundancy allocation, component supplier selection, and reliable activities for multi-state systems. Reliab. Eng. Syst. Saf. 2022, 222, 108394. [Google Scholar] [CrossRef]
  49. Sefati, S.; Mousavinasab, M.; Zareh Farkhady, R. Load balancing in cloud computing environment using the Grey wolf optimization algorithm based on the reliability: Performance evaluation. J. Supercomput. 2022, 78, 18–42. [Google Scholar] [CrossRef]
  50. Kashani, M.H.; Mahdipour, E. Load Balancing Algorithms in Fog Computing. IEEE Trans. Serv. Comput. 2022, 16, 1505–1521. [Google Scholar] [CrossRef]
  51. Viswanathan, V.B.; Nagarajan, K.A. Building Privacy First 5G Networks. In Proceedings of the 2022 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bengaluru, India, 8–10 July 2022; pp. 1–5. [Google Scholar] [CrossRef]
  52. Ali, A.; Al-rimy, B.A.S.; Alsubaei, F.S.; Almazroi, A.A.; Almazroi, A.A. HealthLock: Blockchain-Based Privacy Preservation Using Homomorphic Encryption in Internet of Things Healthcare Applications. Sensors 2023, 23, 6762. [Google Scholar] [CrossRef] [PubMed]
  53. Tataria, H.; Shafi, M.; Dohler, M.; Sun, S. Six critical challenges for 6G wireless systems: A summary and some solutions. IEEE Veh. Technol. Mag. 2022, 17, 16–26. [Google Scholar] [CrossRef]
  54. Sambhwani, S.; Boos, Z.; Dalmia, S.; Fazeli, A.; Gunzelmann, B.; Ioffe, A.; Narasimha, M.; Negro, F.; Pillutla, L.; Zhou, J. Transitioning to 6G part 1: Radio technologies. IEEE Wirel. Commun. 2022, 29, 6–8. [Google Scholar] [CrossRef]
  55. Batista, E.; Lopez-Aguilar, P.; Solanas, A. Smart Health in the 6G Era: Bringing Security to Future Smart Health Services. IEEE Commun. Mag. 2023, 1–7. [Google Scholar] [CrossRef]
  56. Saafi, S.; Vikhrova, O.; Fodor, G.; Hosek, J.; Andreev, S. AI-Aided Integrated Terrestrial and Non-Terrestrial 6G Solutions for Sustainable Maritime Networking. IEEE Netw. 2022, 36, 183–190. [Google Scholar] [CrossRef]
  57. Geraci, G.; López-Pérez, D.; Benzaghta, M.; Chatzinotas, S. Integrating Terrestrial and Non-Terrestrial Networks: 3D Opportunities and Challenges. IEEE Commun. Mag. 2023, 61, 42–48. [Google Scholar] [CrossRef]
  58. Msadaa, I.C.; Zairi, S.; Dhraief, A. Non-Terrestrial Networks in a Nutshell. IEEE Internet Things Mag. 2022, 5, 168–174. [Google Scholar] [CrossRef]
  59. Tirmizi, S.B.R.; Chen, Y.; Lakshminarayana, S.; Feng, W.; Khuwaja, A.A. Hybrid Satellite-Terrestrial Networks toward 6G: Key Technologies and Open Issues. Sensors 2022, 22, 8544. [Google Scholar] [CrossRef]
  60. López, M.; Damsgaard, S.B.; Rodríguez, I.; Mogensen, P. An Empirical Analysis of Multi-Connectivity between 5G Terrestrial and LEO Satellite Networks. In Proceedings of the 2022 IEEE Globecom Workshops (GC Wkshps), Rio de Janeiro, Brazil, 4–8 December 2022; pp. 1115–1120. [Google Scholar] [CrossRef]
  61. Yu, H.; Taleb, T.; Samdanis, K.; Song, J. Towards Supporting Holographic Services over Deterministic 6G Integrated Terrestrial & Non-Terrestrial Networks. IEEE Netw. 2023, 1–10. [Google Scholar] [CrossRef]
  62. Ahmad, I.; Suomalainen, J.; Porambage, P.; Gurtov, A.; Huusko, J.; Höyhtyä, M. Security of Satellite-Terrestrial Communications: Challenges and Potential Solutions. IEEE Access 2022, 10, 96038–96052. [Google Scholar] [CrossRef]
  63. Zhang, X.; Zhu, Q.; Poor, H.V. Heterogeneous Statistical QoS Provisioning for Scalable Software-Defined 6G Mobile Networks. In Proceedings of the 2023 57th Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 22–23 March 2023; pp. 1–6. [Google Scholar] [CrossRef]
  64. Abdulqadder, I.H.; Zhou, S. SliceBlock: Context-Aware Authentication Handover and Secure Network Slicing Using DAG-Blockchain in Edge-Assisted SDN/NFV-6G Environment. IEEE Internet Things J. 2022, 9, 18079–18097. [Google Scholar] [CrossRef]
  65. Lin, C.; Han, G.; Jiang, J.; Li, C.; Shah, S.B.H.; Liu, Q. Underwater Pollution Tracking Based on Software-Defined Multi-Tier Edge Computing in 6G-Based Underwater Wireless Networks. IEEE J. Sel. Areas Commun. 2023, 41, 491–503. [Google Scholar] [CrossRef]
  66. Wu, Y.J.; Hwang, W.S.; Shen, C.Y.; Chen, Y.Y. Network Slicing for mMTC and URLLC Using Software-Defined Networking with P4 Switches. Electronics 2022, 11, 2111. [Google Scholar] [CrossRef]
  67. Masoudi, R.; Ghaffari, A. Software defined networks: A survey. J. Netw. Comput. Appl. 2016, 67, 1–25. [Google Scholar] [CrossRef]
  68. Kim, H.; Feamster, N. Improving network management with software defined networking. IEEE Commun. Mag. 2013, 51, 114–119. [Google Scholar] [CrossRef]
  69. Vos, S.; Lago, P.; Verdecchia, R.; Heitlager, I. Architectural Tactics to Optimize Software for Energy Efficiency in the Public Cloud. In Proceedings of the 2022 International Conference on ICT for Sustainability (ICT4S), Plovdiv, Bulgaria, 14–16 June 2022; pp. 77–87. [Google Scholar] [CrossRef]
  70. Gong, S.; Zhu, X.; Zhang, R.; Zhao, H.; Guo, C. An Intelligent Resource Management Solution for Hospital Information System Based on Cloud Computing Platform. IEEE Trans. Reliab. 2023, 72, 329–342. [Google Scholar] [CrossRef]
  71. Mnyakin, M. Applications of AI, IoT, and Cloud Computing in Smart Transportation: A Review. Artif. Intell. Soc. 2023, 3, 9–27. [Google Scholar]
  72. Tabrizchi, H.; Kuchaki Rafsanjani, M. A survey on security challenges in cloud computing: Issues, threats, and solutions. J. Supercomput. 2020, 76, 9493–9532. [Google Scholar] [CrossRef]
  73. Barakabitze, A.A.; Walshe, R. SDN and NFV for QoE-driven multimedia services delivery: The road towards 6G and beyond networks. Comput. Netw. 2022, 214, 109133. [Google Scholar] [CrossRef]
  74. Sultan, M.T.; El Sayed, H. QoE-Aware Analysis and Management of Multimedia Services in 5G and Beyond Heterogeneous Networks. IEEE Access 2023, 11, 77679–77688. [Google Scholar] [CrossRef]
  75. Bai, Y.; Chen, L.; Ren, S.; Xu, J. Automated Customization of On-Device Inference for Quality-of-Experience Enhancement. IEEE Trans. Comput. 2023, 72, 1329–1342. [Google Scholar] [CrossRef]
  76. Stamatelatos, G.; Sgora, A.; Alonistioti, N. Intelligent SON Coordination in the 5G-and-beyond era. In Proceedings of the 2022 Global Information Infrastructure and Networking Symposium (GIIS), Argostoli, Greece, 26–28 September 2022; pp. 99–103. [Google Scholar] [CrossRef]
  77. Gür, G.; Kalla, A.; de Alwis, C.; Pham, Q.V.; Ngo, K.H.; Liyanage, M.; Porambage, P. Integration of ICN and MEC in 5G and Beyond Networks: Mutual Benefits, Use Cases, Challenges, Standardization, and Future Research. IEEE Open J. Commun. Soc. 2022, 3, 1382–1412. [Google Scholar] [CrossRef]
  78. Deng, Y.; Chen, X.; Zhu, G.; Fang, Y.; Chen, Z.; Deng, X. Actions at the Edge: Jointly Optimizing the Resources in Multi-Access Edge Computing. IEEE Wirel. Commun. 2022, 29, 192–198. [Google Scholar] [CrossRef]
  79. Yang, J.; Bashir, A.K.; Guo, Z.; Yu, K.; Guizani, M. Intelligent cache and buffer optimization for mobile VR adaptive transmission in 5G edge computing networks. Digit. Commun. Netw. 2023. [Google Scholar] [CrossRef]
  80. Du, H.; Liu, J.; Niyato, D.; Kang, J.; Xiong, Z.; Zhang, J.; Kim, D.I. Attention-Aware Resource Allocation and QoE Analysis for Metaverse xURLLC Services. IEEE J. Sel. Areas Commun. 2023, 41, 2158–2175. [Google Scholar] [CrossRef]
  81. Chaccour, C.; Soorki, M.N.; Saad, W.; Bennis, M.; Popovski, P. Can Terahertz Provide High-Rate Reliable Low-Latency Communications for Wireless VR? IEEE Internet Things J. 2022, 9, 9712–9729. [Google Scholar] [CrossRef]
  82. Gupta, V.; Mishra, V.K.; Singhal, P.; Kumar, A. An Overview of Supervised Machine Learning Algorithm. In Proceedings of the 2022 11th International Conference on System Modeling & Advancement in Research Trends (SMART), Moradabad, India, 16–17 December 2022; pp. 87–92. [Google Scholar] [CrossRef]
  83. Bkassiny, M.; Li, Y.; Jayaweera, S.K. A Survey on Machine-Learning Techniques in Cognitive Radios. IEEE Commun. Surv. Tutorials 2013, 15, 1136–1159. [Google Scholar] [CrossRef]
  84. Somvanshi, M.; Chavan, P.; Tambade, S.; Shinde, S.V. A review of machine learning techniques using decision tree and support vector machine. In Proceedings of the 2016 International Conference on Computing Communication Control and automation (ICCUBEA), Pune, India, 12–13 August 2016; pp. 1–7. [Google Scholar] [CrossRef]
  85. Massa, A.; Marcantonio, D.; Chen, X.; Li, M.; Salucci, M. DNNs as Applied to Electromagnetics, Antennas, and Propagation—A Review. IEEE Antennas Wirel. Propag. Lett. 2019, 18, 2225–2229. [Google Scholar] [CrossRef]
  86. Latif, S.; Rana, R.; Khalifa, S.; Jurdak, R.; Qadir, J.; Schuller, B. Survey of Deep Representation Learning for Speech Emotion Recognition. IEEE Trans. Affective Comput. 2023, 14, 1634–1654. [Google Scholar] [CrossRef]
  87. Ernst, D.; Glavic, M.; Wehenkel, L. Power systems stability control: Reinforcement learning framework. IEEE Trans. Power Syst. 2004, 19, 427–435. [Google Scholar] [CrossRef]
  88. Liu, Y.; Zhang, D.; Gooi, H.B. Optimization strategy based on deep reinforcement learning for home energy management. CSEE J. Power Energy Syst. 2020, 6, 572–582. [Google Scholar] [CrossRef]
  89. DiGiovanna, J.; Mahmoudi, B.; Fortes, J.; Principe, J.C.; Sanchez, J.C. Coadaptive Brain–Machine Interface via Reinforcement Learning. IEEE Trans. Biomed. Eng. 2009, 56, 54–64. [Google Scholar] [CrossRef]
  90. Du, Y.; Zandi, H.; Kotevska, O.; Kurte, K.; Munk, J.; Amasyali, K.; Mckee, E.; Li, F. Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning. Appl. Energy 2021, 281, 116117. [Google Scholar] [CrossRef]
  91. Özdoğan, O.; Björnson, E. Deep Learning-based Phase Reconfiguration for Intelligent Reflecting Surfaces. In Proceedings of the 2020 54th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 1–5 November 2020; pp. 707–711. [Google Scholar] [CrossRef]
  92. Sheen, B.; Yang, J.; Feng, X.; Chowdhury, M.M.U. A Deep Learning Based Modeling of Reconfigurable Intelligent Surface Assisted Wireless Communications for Phase Shift Configuration. IEEE Open J. Commun. Soc. 2021, 2, 262–272. [Google Scholar] [CrossRef]
  93. Nguyen, N.T.; Nguyen, L.V.; Huynh-The, T.; Nguyen, D.H.N.; Lee Swindlehurst, A.; Juntti, M. Machine Learning-based Reconfigurable Intelligent Surface-aided MIMO Systems. In Proceedings of the 2021 IEEE 22nd International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Online, 27–30 September 2021; pp. 101–105. [Google Scholar] [CrossRef]
  94. Zahedi, Z.; Ardebilipur, M.; Dehrouye, F. Improved Spectral Efficiency of RIS-aided 6G Communication using Deep Learning. In Proceedings of the 2022 30th International Conference on Electrical Engineering (ICEE), Seoul, Republic of Korea, 17–19 May 2022; pp. 175–179. [Google Scholar] [CrossRef]
  95. Yoga Perdana, R.H.; Nguyen, T.V.; Pramitarini, Y.; Shim, K.; An, B. Deep Learning-based Spectral Efficiency Maximization in Massive MIMO-NOMA Systems with STAR-RIS. In Proceedings of the 2023 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Bali, Indonesia, 20–23 February 2023; pp. 644–649. [Google Scholar] [CrossRef]
  96. Chen, P.; Huang, W.; Li, X.; Jin, S. Deep reinforcement learning based power minimization for RIS-assisted MISO-OFDM systems. China Commun. 2023, 20, 259–269. [Google Scholar] [CrossRef]
  97. Yang, H.; Xiong, Z.; Zhao, J.; Niyato, D.; Xiao, L.; Wu, Q. Deep Reinforcement Learning-Based Intelligent Reflecting Surface for Secure Wireless Communications. IEEE Trans. Wirel. Commun. 2021, 20, 375–388. [Google Scholar] [CrossRef]
  98. Huang, C.; Chen, G.; Wong, K.K. Multi-Agent Reinforcement Learning-Based Buffer-Aided Relay Selection in IRS-Assisted Secure Cooperative Networks. IEEE Trans. Inf. Forensics Secur. 2021, 16, 4101–4112. [Google Scholar] [CrossRef]
  99. Hayat, S.; Yanmaz, E.; Muzaffar, R. Survey on Unmanned Aerial Vehicle Networks for Civil Applications: A Communications Viewpoint. IEEE Commun. Surv. Tutorials 2016, 18, 2624–2661. [Google Scholar] [CrossRef]
  100. Ebrahimi, D.; Sharafeddine, S.; Ho, P.H.; Assi, C. Autonomous UAV Trajectory for Localizing Ground Objects: A Reinforcement Learning Approach. IEEE Trans. Mob. Comput. 2021, 20, 1312–1324. [Google Scholar] [CrossRef]
  101. Liu, X.; Liu, Y.; Chen, Y.; Hanzo, L. Trajectory Design and Power Control for Multi-UAV Assisted Wireless Networks: A Machine Learning Approach. IEEE Trans. Veh. Technol. 2019, 68, 7957–7969. [Google Scholar] [CrossRef]
  102. Tu, G.T.; Juang, J.G. UAV Path Planning and Obstacle Avoidance Based on Reinforcement Learning in 3D Environments. Actuators 2023, 12, 57. [Google Scholar] [CrossRef]
  103. Chen, C.; Xiang, J.; Ye, Z.; Yan, W.; Wang, S.; Wang, Z.; Chen, P.; Xiao, M. Deep Learning-Based Energy Optimization for Edge Device in UAV-Aided Communications. Drones 2022, 6, 139. [Google Scholar] [CrossRef]
  104. Chen, D.; Qi, Q.; Zhuang, Z.; Wang, J.; Liao, J.; Han, Z. Mean Field Deep Reinforcement Learning for Fair and Efficient UAV Control. IEEE Internet Things J. 2021, 8, 813–828. [Google Scholar] [CrossRef]
  105. Azari, A.; Ghavimi, F.; Ozger, M.; Jantti, R.; Cavdar, C. Machine Learning assisted Handover and Resource Management for Cellular Connected Drones. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Online, 25 May–31 July 2020; pp. 1–7. [Google Scholar] [CrossRef]
  106. Shin, M.; Kim, J.; Levorato, M. Auction-Based Charging Scheduling With Deep Learning Framework for Multi-Drone Networks. IEEE Trans. Veh. Technol. 2019, 68, 4235–4248. [Google Scholar] [CrossRef]
  107. Zedini, E.; Oubei, H.M.; Kammoun, A.; Hamdi, M.; Ooi, B.S.; Alouini, M.S. Unified statistical channel model for turbulence-induced fading in underwater wireless optical communication systems. IEEE Trans. Commun. 2019, 67, 2893–2907. [Google Scholar] [CrossRef]
  108. Kisseleff, S.; Chatzinotas, S.; Ottersten, B. Reconfigurable intelligent surfaces in challenging environments: Underwater, underground, industrial and disaster. IEEE Access 2021, 9, 150214–150233. [Google Scholar] [CrossRef]
  109. Yu, R.; Shi, Z.; Huang, C.; Li, T.; Ma, Q. Deep reinforcement learning based optimal trajectory tracking control of autonomous underwater vehicle. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 June 2017; pp. 4958–4965. [Google Scholar] [CrossRef]
  110. He, Z.; Dong, L.; Sun, C.; Wang, J. Asynchronous Multithreading Reinforcement-Learning-Based Path Planning and Tracking for Unmanned Underwater Vehicle. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 2757–2769. [Google Scholar] [CrossRef]
  111. Li, W.; Yang, X.; Yan, J.; Luo, X. An obstacle avoiding method of autonomous underwater vehicle based on the reinforcement learning. In Proceedings of the 2020 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 4538–4543. [Google Scholar] [CrossRef]
  112. Lin, C.; Wang, H.; Yuan, J.; Yu, D.; Li, C. An improved recurrent neural network for unmanned underwater vehicle online obstacle avoidance. Ocean Eng. 2019, 189, 106327. [Google Scholar] [CrossRef]
  113. Liu, Y.; Zhang, S.; Mu, X.; Ding, Z.; Schober, R.; Al-Dhahir, N.; Hossain, E.; Shen, X. Evolution of NOMA Toward Next Generation Multiple Access (NGMA) for 6G. IEEE J. Sel. Areas Commun. 2022, 40, 1037–1071. [Google Scholar] [CrossRef]
  114. Wu, Y.; Ji, G.; Wang, T.; Qian, L.; Lin, B.; Shen, X. Non-Orthogonal Multiple Access Assisted Secure Computation Offloading via Cooperative Jamming. IEEE Trans. Veh. Technol. 2022, 71, 7751–7768. [Google Scholar] [CrossRef]
  115. Shi, Z.; Gao, W.; Zhang, S.; Liu, J.; Kato, N. Machine Learning-Enabled Cooperative Spectrum Sensing for Non-Orthogonal Multiple Access. IEEE Trans. Wirel. Commun. 2020, 19, 5692–5702. [Google Scholar] [CrossRef]
  116. Narengerile; Thompson, J. Deep Learning for Signal Detection in Non-Orthogonal Multiple Access Wireless Systems. In Proceedings of the 2019 UK/China Emerging Technologies (UCET), Glasgow, UK, 21–22 August 2019; pp. 1–4. [Google Scholar] [CrossRef]
  117. Devipriya, S.; Martin Leo Manickam, J.; Victoria Jancee, B. Energy-efficient semi-supervised learning framework for subchannel allocation in non-orthogonal multiple access systems. ETRI J. 2023. [Google Scholar] [CrossRef]
  118. Siddiqi, U.F.; Sait, S.M.; Uysal, M. Deep Reinforcement Based Power Allocation for the Max-Min Optimization in Non-Orthogonal Multiple Access. IEEE Access 2020, 8, 211235–211247. [Google Scholar] [CrossRef]
  119. Albelaihi, R.; Alasandagutti, A.; Yu, L.; Yao, J.; Sun, X. Deep Reinforcement Learning Assisted Client Selection in Non-orthogonal Multiple Access based Federated Learning. IEEE Internet Things J. 2023, 10, 15515–15525. [Google Scholar] [CrossRef]
  120. Gaballa, M.; Abbod, M.; Aldallal, A. Investigating the Combination of Deep Learning for Channel Estimation and Power Optimization in a Non-Orthogonal Multiple Access System. Sensors 2022, 22, 3666. [Google Scholar] [CrossRef]
  121. Yang, P.; Xiao, Y.; Xiao, M.; Li, S. 6G Wireless Communications: Vision and Potential Techniques. IEEE Netw. 2019, 33, 70–75. [Google Scholar] [CrossRef]
  122. Cacciapuoti, A.S.; Sankhe, K.; Caleffi, M.; Chowdhury, K.R. Beyond 5G: THz-Based Medium Access Protocol for Mobile Heterogeneous Networks. IEEE Commun. Mag. 2018, 56, 110–115. [Google Scholar] [CrossRef]
  123. Bicaïs, S.; Doré, J.B. Design of Digital Communications for Strong Phase Noise Channels. IEEE Open J. Veh. Technol. 2020, 1, 227–243. [Google Scholar] [CrossRef]
  124. Wu, Y.; Koch, J.D.; Vossiek, M.; Schober, R.; Gerstacker, W. ML Detection without CSI for Constant-Weight Codes in THz Communications with Strong Phase Noise. In Proceedings of the GLOBECOM 2022—2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 831–836. [Google Scholar] [CrossRef]
  125. Ma, X.; Chen, Z.; Li, Z.; Chen, W.; Liu, K. Low Complexity Beam Selection Scheme for Terahertz Systems: A Machine Learning Approach. In Proceedings of the 2019 IEEE International Conference on Communications Workshops (ICC Workshops), Shanghai, China, 20–24 May 2019; pp. 1–6. [Google Scholar] [CrossRef]
  126. Mismar, F.B.; Evans, B.L. Partially Blind Handovers for mmWave New Radio Aided by Sub-6 GHz LTE Signaling. In Proceedings of the 2018 IEEE International Conference on Communications Workshops (ICC Workshops), Kansas City, MO, USA, 20–24 May 2018; pp. 1–5. [Google Scholar] [CrossRef]
  127. Lin, Y.; Wang, K.; Ding, Z. Unsupervised Machine Learning-Based User Clustering in THz-NOMA Systems. IEEE Wirel. Commun. Lett. 2023, 12, 1130–1134. [Google Scholar] [CrossRef]
  128. Obeed, M.; Salhab, A.M.; Alouini, M.S.; Zummo, S.A. Survey on Physical Layer Security in Optical Wireless Communication Systems. In Proceedings of the 2018 Seventh International Conference on Communications and Networking (ComNet), Marrakech, Morocco, 2–4 April 2018; pp. 1–5. [Google Scholar] [CrossRef]
  129. Esmail, M.A.; Saif, W.S.; Ragheb, A.M.; Alshebeili, S.A. Free space optic channel monitoring using machine learning. Opt. Express 2021, 29, 10967–10981. [Google Scholar] [CrossRef] [PubMed]
  130. Amirabadi, M.A.; Kahaei, M.H.; Nezamalhosseni, S.A. Low complexity deep learning algorithms for compensating atmospheric turbulence in the free space optical communication system. IET Optoelectron. 2022, 16, 93–105. Available online: https://ietresearch.onlinelibrary.wiley.com/doi/pdf/10.1049/ote2.12060 (accessed on 22 August 2023). [CrossRef]
  131. Aveta, F.; Refai, H.H.; Lopresti, P.G. Cognitive Multi-Point Free Space Optical Communication: Real-Time Users Discovery Using Unsupervised Machine Learning. IEEE Access 2020, 8, 207575–207588. [Google Scholar] [CrossRef]
  132. Aveta, F.; Algedir, A.; Refai, H. Quality of Transmission Estimation for Multi-User Free Space Optical Communication Using Supervised Machine Learning. In Proceedings of the 2021 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW), Cleveland, OH, USA, 21–23 June 2021; pp. 1–5. [Google Scholar] [CrossRef]
  133. Lohani, S.; Knutson, E.M.; Glasser, R.T. Generative machine learning for robust free-space communication. Commun. Phys. 2020, 3, 177. [Google Scholar] [CrossRef]
  134. Liu, Y.; He, W. Signal Detection and Identification in an Optical Camera Communication System in Moving State. J. Phys. Conf. Ser. 2021, 1873, 012015. [Google Scholar] [CrossRef]
  135. Cen, N.; Jagannath, J.; Moretti, S.; Guan, Z.; Melodia, T. LANET: Visible-light ad hoc networks. Ad Hoc Netw. 2019, 84, 107–123. [Google Scholar] [CrossRef]
  136. Chi, N.; Jia, J.; Hu, F.; Zhao, Y.; Zou, P. Challenges and prospects of machine learning in visible light communication. J. Commun. Inf. Netw. 2020, 5, 302–309. [Google Scholar] [CrossRef]
  137. Xiao, L.; Sheng, G.; Liu, S.; Dai, H.; Peng, M.; Song, J. Deep Reinforcement Learning-Enabled Secure Visible Light Communication Against Eavesdropping. IEEE Trans. Commun. 2019, 67, 6994–7005. [Google Scholar] [CrossRef]
  138. Wang, Y.; Chen, M.; Yang, Z.; Luo, T.; Saad, W. Deep Learning for Optimal Deployment of UAVs With Visible Light Communications. IEEE Trans. Wirel. Commun. 2020, 19, 7049–7063. [Google Scholar] [CrossRef]
  139. Miao, P.; Yin, W.; Peng, H.; Yao, Y. Study of the Performance of Deep Learning-Based Channel Equalization for Indoor Visible Light Communication Systems. Photonics 2021, 8, 453. [Google Scholar] [CrossRef]
  140. Li, Z.; Shi, J.; Zhao, Y.; Li, G.; Chen, J.; Zhang, J.; Chi, N. Deep learning based end-to-end visible light communication with an in-band channel modeling strategy. Opt. Express 2022, 30, 28905–28921. [Google Scholar] [CrossRef] [PubMed]
  141. Mohamed, A.; Tag Eldien, A.S.; Fouda, M.M.; Saad, R.S. LSTM-Autoencoder Deep Learning Technique for PAPR Reduction in Visible Light Communication. IEEE Access 2022, 10, 113028–113034. [Google Scholar] [CrossRef]
  142. Shan, X.; Zhi, H.; Li, P.; Han, Z. A Survey on Computation Offloading for Mobile Edge Computing Information. In Proceedings of the 2018 IEEE 4th International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing, (HPSC) and IEEE International Conference on Intelligent Data and Security (IDS), New York City, NY, USA, 2–5 May 2018; pp. 248–251. [Google Scholar] [CrossRef]
  143. Zamzam, M.; Elshabrawy, T.; Ashour, M. Resource Management using Machine Learning in Mobile Edge Computing: A Survey. In Proceedings of the 2019 Ninth International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 8–10 December 2019; pp. 112–117. [Google Scholar] [CrossRef]
  144. Wang, S.; Chen, M.; Liu, X.; Yin, C.; Cui, S.; Vincent Poor, H. A Machine Learning Approach for Task and Resource Allocation in Mobile-Edge Computing-Based Networks. IEEE Internet Things J. 2021, 8, 1358–1372. [Google Scholar] [CrossRef]
  145. Guo, Y.; Zhao, R.; Lai, S.; Fan, L.; Lei, X.; Karagiannidis, G.K. Distributed Machine Learning for Multiuser Mobile Edge Computing Systems. IEEE J. Sel. Top. Signal Process. 2022, 16, 460–473. [Google Scholar] [CrossRef]
  146. Chen, Y.; Gu, W.; Xu, J.; Zhang, Y.; Min, G. Dynamic task offloading for digital twin-empowered mobile edge computing via deep reinforcement learning. China Commun. 2023, 1–12. [Google Scholar] [CrossRef]
  147. Zhang, L.; Lai, S.; Xia, J.; Gao, C.; Fan, D.; Ou, J. Deep reinforcement learning based IRS-assisted mobile edge computing under physical-layer security. Phys. Commun. 2022, 55, 101896. [Google Scholar] [CrossRef]
  148. Lu, W.; Mo, Y.; Feng, Y.; Gao, Y.; Zhao, N.; Wu, Y.; Nallanathan, A. Secure Transmission for Multi-UAV-Assisted Mobile Edge Computing Based on Reinforcement Learning. IEEE Trans. Netw. Sci. Eng. 2023, 10, 1270–1282. [Google Scholar] [CrossRef]
  149. Zhao, N.; Ye, Z.; Pei, Y.; Liang, Y.C.; Niyato, D. Multi-Agent Deep Reinforcement Learning for Task Offloading in UAV-Assisted Mobile Edge Computing. IEEE Trans. Wirel. Commun. 2022, 21, 6949–6960. [Google Scholar] [CrossRef]
  150. Wang, C.; Rahman, A. Quantum-Enabled 6G Wireless Networks: Opportunities and Challenges. IEEE Wirel. Commun. 2022, 29, 58–69. [Google Scholar] [CrossRef]
  151. Ji, B.; Han, Y.; Liu, S.; Tao, F.; Zhang, G.; Fu, Z.; Li, C. Several Key Technologies for 6G: Challenges and Opportunities. IEEE Commun. Stand. Mag. 2021, 5, 44–51. [Google Scholar] [CrossRef]
  152. Mahmoud, H.H.H.; Amer, A.A.; Ismail, T. 6G: A comprehensive survey on technologies, applications, challenges, and research problems. Trans. Emerg. Telecommun. Technol. 2021, 32, e4233. Available online: https://onlinelibrary.wiley.com/doi/pdf/10.1002/ett.4233 (accessed on 22 August 2023). [CrossRef]
  153. Nguyen, V.L.; Hwang, R.H.; Lin, P.C.; Vyas, A.; Nguyen, V.T. Towards the Age of Intelligent Vehicular Networks for Connected and Autonomous Vehicles in 6G. IEEE Netw. 2022, 1–8. [Google Scholar] [CrossRef]
  154. Lee, K.; Lee, S. Knowledge Structure of the Application of High-Performance Computing: A Co-Word Analysis. Sustainability 2021, 13, 1249. [Google Scholar] [CrossRef]
  155. Letaief, K.B.; Chen, W.; Shi, Y.; Zhang, J.; Zhang, Y.J.A. The Roadmap to 6G: AI Empowered Wireless Networks. IEEE Commun. Mag. 2019, 57, 84–90. [Google Scholar] [CrossRef]
  156. Adeogun, R.; Berardinelli, G.; Mogensen, P.E.; Rodriguez, I.; Razzaghpour, M. Towards 6G in-X Subnetworks With Sub-Millisecond Communication Cycles and Extreme Reliability. IEEE Access 2020, 8, 110172–110188. [Google Scholar] [CrossRef]
  157. Yang, P.; Kong, L.; Chen, G. Spectrum Sharing for 5G/6G URLLC: Research Frontiers and Standards. IEEE Commun. Stand. Mag. 2021, 5, 120–125. [Google Scholar] [CrossRef]
  158. Liu, Y.; Deng, Y.; Nallanathan, A.; Yuan, J. Machine Learning for 6G Enhanced Ultra-Reliable and Low-Latency Services. IEEE Wirel. Commun. 2023, 30, 48–54. [Google Scholar] [CrossRef]
  159. Lee, B.M. Systematic operations of Massive MIMO for Internet of Things networks. Expert Syst. Appl. 2022, 210, 118444. [Google Scholar] [CrossRef]
  160. Pang, X.; Sheng, M.; Zhao, N.; Tang, J.; Niyato, D.; Wong, K.K. When UAV meets IRS: Expanding air-ground networks via passive reflection. IEEE Wirel. Commun. 2021, 28, 164–170. [Google Scholar] [CrossRef]
  161. Tung, T.V.; An, T.T.; Lee, B.M. Joint Resource and Trajectory Optimization for Energy Efficiency Maximization in UAV-Based Networks. Mathematics 2022, 10, 3840. [Google Scholar] [CrossRef]
  162. Zhuo, X.; Liu, M.; Wei, Y.; Yu, G.; Qu, F.; Sun, R. AUV-Aided Energy-Efficient Data Collection in Underwater Acoustic Sensor Networks. IEEE Internet Things J. 2020, 7, 10010–10022. [Google Scholar] [CrossRef]
  163. Rasilainen, K.; Phan, T.D.; Berg, M.; Pärssinen, A.; Soh, P.J. Hardware Aspects of Sub-THz Antennas and Reconfigurable Intelligent Surfaces for 6G Communications. IEEE J. Sel. Areas Commun. 2023, 41, 2530–2546. [Google Scholar] [CrossRef]
  164. Falempin, A.; Schmitt, J.; Nguyen, T.D.; Doré, J.B. Towards Implementation of Neural Networks for Non-Coherent Detection MIMO systems. In Proceedings of the 2022 IEEE 96th Vehicular Technology Conference (VTC2022-Fall), Beijing, China/London, UK, 26–29 September 2022; pp. 1–5. [Google Scholar] [CrossRef]
  165. Nemati, M.; Kim, Y.H.; Choi, J. Toward Joint Radar, Communication, Computation, Localization, and Sensing in IoT. IEEE Access 2022, 10, 11772–11788. [Google Scholar] [CrossRef]
  166. Chen, H.; Sarieddeen, H.; Ballal, T.; Wymeersch, H.; Alouini, M.S.; Al-Naffouri, T.Y. A Tutorial on Terahertz-Band Localization for 6G Communication Systems. IEEE Commun. Surv. Tutorials 2022, 24, 1780–1815. [Google Scholar] [CrossRef]
  167. Yu, Z.; Hu, X.; Liu, C.; Peng, M.; Zhong, C. Location Sensing and Beamforming Design for IRS-Enabled Multi-User ISAC Systems. IEEE Trans. Signal Process. 2022, 70, 5178–5193. [Google Scholar] [CrossRef]
  168. Hussain, M.Z.; Hanapi, Z.M. Efficient Secure Routing Mechanisms for the Low-Powered IoT Network: A Literature Review. Electronics 2023, 12, 482. [Google Scholar] [CrossRef]
  169. Vachtsevanou, D.; William, J.; dos Santos, M.M.; De Brito, M.; Hübner, J.F.; Mayer, S.; Gomez, A. Embedding Autonomous Agents into Low-Power Wireless Sensor Networks. In Proceedings of the International Conference on Practical Applications of Agents and Multi-Agent Systems, Guimaraes, Portugal, 12–14 July 2023; pp. 375–387. [Google Scholar]
  170. Chorti, A.; Barreto, A.N.; Köpsell, S.; Zoli, M.; Chafii, M.; Sehier, P.; Fettweis, G.; Poor, H.V. Context-Aware Security for 6G Wireless: The Role of Physical Layer Security. IEEE Commun. Stand. Mag. 2022, 6, 102–108. [Google Scholar] [CrossRef]
  171. Ali, B.; Mirza, J.; Alvi, S.H.; Khan, M.Z.; Javed, M.A.; Noorwali, A. IRS-Assisted Physical Layer Security for 5G Enabled Industrial Internet of Things. IEEE Access 2023, 11, 21354–21363. [Google Scholar] [CrossRef]
  172. Zhang, J.; Li, G.; Marshall, A.; Hu, A.; Hanzo, L. A New Frontier for IoT Security Emerging From Three Decades of Key Generation Relying on Wireless Channels. IEEE Access 2020, 8, 138406–138446. [Google Scholar] [CrossRef]
  173. Cena, G.; Scanzio, S.; Vakili, M.G.; Demartini, C.G.; Valenzano, A. Assessing the Effectiveness of Channel Hopping in IEEE 802.15.4 TSCH Networks. IEEE Open J. Ind. Electron. Soc. 2023, 4, 214–229. [Google Scholar] [CrossRef]
  174. Ustun Ercan, S.; Pena-Quintal, A.; Thomas, D. The Effect of Spread Spectrum Modulation on Power Line Communications. Energies 2023, 16, 5197. [Google Scholar] [CrossRef]
  175. Zhang, Y.; Zhou, X.; Zhang, H.; Yuan, D. Stream level rank constrained transceiver design in MIMO interference channel networks. IET Commun. 2022, 16, 1403–1414. [Google Scholar] [CrossRef]
  176. Qu, Z.; Zhang, Z.; Liu, B.; Tiwari, P.; Ning, X.; Muhammad, K. Quantum detectable Byzantine agreement for distributed data trust management in blockchain. Inf. Sci. 2023, 637, 118909. [Google Scholar] [CrossRef]
  177. Nouioua, T.; Belbachir, A.H. The quantum computer for accelerating image processing and strengthening the security of information systems. Chin. J. Phys. 2023, 81, 104–124. [Google Scholar] [CrossRef]
  178. Hasan, S.R.; Chowdhury, M.Z.; Saiam, M. A New Quantum Visible Light Communication for Future Wireless Network Systems. In Proceedings of the 2022 International Conference on Advancement in Electrical and Electronic Engineering (ICAEEE), Gazipur, Bangladesh, 24–26 February 2022; pp. 1–4. [Google Scholar] [CrossRef]
Figure 1. Application of emerging technologies for wireless communication.
Figure 2. Organization of the paper.
Figure 3. 6G visions and requirements.
Table 1. List of works surveyed on the implementation of ML for 6G communication networks.
References | Year | Limitations and Contributions
[28] | 2021 | ML algorithm for application and infrastructure layers in 6G network
[29] | 2021 | ML algorithm to meet ultra-low latency communication requirements
[30] | 2022 | ML algorithm-aided m-MIMO communication for 5G network
[31] | 2022 | DL algorithm for semantic communication in 6G network
[32] | 2023 | ML algorithm for integrated sensing and communication
[33] | 2023 | RL algorithm for RIS communication
Our work | 2023 | ML algorithms for emerging technologies to meet the 6G network requirements
Table 2. Summary of the ML techniques.
Category | Algorithms | Concept | Advantages | Limitations
Supervised learning | Linear Regression | Predicts continuous output based on input features | Easy to implement | Assumes a linear relationship between features and target
Supervised learning | Logistic Regression | Predicts binary or multi-class outcomes using a logistic function | Easy to implement, with interpretable results | Assumes linear decision boundaries
Supervised learning | Decision Trees | Creates a tree-like structure to make predictions | Intuitive and easy to interpret, fast computation, and captures non-linear relationships | Prone to overfitting
Supervised learning | Random Forest | Ensemble of decision trees to improve prediction accuracy | Reduces overfitting compared to individual trees and handles noisy and missing data effectively | Computationally expensive during training and slower computation
Supervised learning | Gradient Boosting | Boosts weak learners (usually decision trees) sequentially | High prediction accuracy | Sensitive to noisy data and outliers
Supervised learning | Support Vector Machine | Finds the optimal hyperplane for binary/multi-class classification | Effective in high-dimensional spaces | Requires proper selection of kernel functions
Unsupervised learning | K-Means Clustering | Groups data into clusters based on similarities | Simple and easy to understand | Requires a pre-determined number of clusters (K)
Unsupervised learning | Hierarchical Clustering | Creates a tree-like hierarchy of clusters based on data similarities | No need to specify the number of clusters beforehand | Sensitive to noise and outliers
Unsupervised learning | Principal Component Analysis | Reduces dimensionality while preserving variance | Efficient for large feature spaces | Information loss due to dimensionality reduction
DL | ANN | A set of interconnected artificial neurons that process input data | Suitable for complex tasks like image recognition | Prone to overfitting, especially with small datasets
DL | DNN | Fully connected NN with more than one hidden layer | Can learn complex features and patterns | Longer training time, especially for deep architectures
DL | CNN | Multi-layer NN with a convolution layer connected to the previous layer | Highly effective in image and video analysis | Requires significant computational resources for training
DL | RNN | Multi-layer NN trained using the back-propagation method | Can handle sequential data; suitable for time series and NLP | Can suffer from vanishing gradients, computationally expensive to train, and difficult to parallelize
RL | - | Trains agents to make decisions in an environment to maximize rewards | Useful in sequential decision-making tasks, suitable for highly complex data, maximizes the desired behavior, and provides a reasonable way to meet performance standards | Not preferable for simple problems, high sample complexity and training time, highly dependent on the reward-function quality, and difficult to debug and interpret
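To make the supervised/unsupervised distinction in Table 2 concrete, the following minimal NumPy sketch fits a least-squares linear regressor to labeled pairs and then clusters unlabeled points with a hand-rolled k-means (Lloyd's algorithm). The data are synthetic and the snippet is only an illustrative sketch of the two learning paradigms; it is not code from any of the surveyed works.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Supervised learning: linear regression on labeled (x, y) pairs ---
x = rng.uniform(0, 10, size=(200, 1))                # input feature
y = 3.0 * x[:, 0] + 2.0 + rng.normal(0, 0.5, 200)    # labeled target
X = np.hstack([x, np.ones((200, 1))])                # add a bias column
w, *_ = np.linalg.lstsq(X, y, rcond=None)            # closed-form least squares
print("learned [slope, intercept]:", w)              # close to [3.0, 2.0]

# --- Unsupervised learning: k-means clustering on unlabeled data ---
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
centroids = data[[0, -1]].copy()                     # one seed point from each blob
for _ in range(20):                                  # Lloyd's iterations
    labels = np.argmin(np.linalg.norm(data[:, None] - centroids, axis=2), axis=1)
    centroids = np.array([data[labels == k].mean(axis=0) for k in range(2)])
print("cluster centroids:\n", centroids)             # near (0, 0) and (5, 5)
```

The supervised fit needs the labels y, whereas k-means only needs the raw feature vectors, which mirrors the advantage/limitation columns above: labeled data buys predictive accuracy, while unlabeled structure discovery requires choosing the number of clusters by hand.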
Table 3. Summary of the applications of ML for IRS-aided communications.
References | ML Model Architecture | Contributions | Remarks
[91] | Two full-layer DNNs | Optimization of phase matrix and beamforming vector | Reduced the pilot overhead and provided performance very close to communication with perfect CSI
[92] | CNN architecture | IRS phase shift optimization and overhead reduction | Converged to near-optimal data rates using less than 2% of the receiver's location information
[93] | LPSNet | Spectral efficiency | Achieved almost the same performance as the alternating optimization method with less computational complexity
[94] | Three full-layer DNNs | Spectral and power efficiencies | Configured phase shifts in real time while improving rate performance at low SNRs and providing higher EE
[95] | DL-based framework | Spectral and power efficiencies | Provided a low-complexity iterative algorithm with guaranteed convergence at a relatively optimal level
[96] | TD3 algorithm | Transmit power efficiency | Reduced the transmit power with lower computation delay
[97] | PDS and PER schemes | Learning convergence rate and efficiency | Enhanced the secrecy rate and the satisfied QoS probability
[98] | MA-DRL | Optimization of secrecy rate and throughput | Significantly improved the secrecy rate and throughput
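As a worked illustration of what the DNN- and DRL-based designs in Table 3 learn to approximate, the sketch below co-phases a single-user IRS link in closed form and compares the resulting channel gain against random phase shifts. The single-antenna setup, Rayleigh channels, and element count N = 64 are illustrative assumptions with perfect CSI; the surveyed methods target this kind of mapping when only pilots or partial location information are available [91,92].

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                            # number of IRS elements (assumed)

# Rayleigh-fading BS->IRS (h) and IRS->user (g) channels (illustrative model)
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

def effective_gain(theta):
    """|sum_n h_n * exp(j*theta_n) * g_n|^2 for IRS phase shifts theta."""
    return np.abs(np.sum(h * np.exp(1j * theta) * g)) ** 2

random_phases = rng.uniform(0, 2 * np.pi, N)
aligned_phases = -np.angle(h * g)                 # co-phase every cascaded path

print("gain, random phases :", effective_gain(random_phases))
print("gain, aligned phases:", effective_gain(aligned_phases))
```

With perfect CSI, the co-phasing rule is optimal for this single-user link; the appeal of the learned approaches in the table is that they approximate such mappings with far less channel estimation overhead and extend to multi-user, multi-antenna settings where no closed form exists.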
Table 4. Summary of the applications of ML for UAV-aided communication.
References | ML Model Architecture | Contributions | Remarks
[100] | RL approach | UAV trajectory | Superior in terms of average localization error when considering a fixed amount of UAV energy consumption, path length, flying time, and velocity
[101] | ESN algorithm | Placement optimization, trajectory acquisition, and power control | Predicted users' movement with high accuracy and maintained the trajectory and power control with high quality
[102] | DQN-based algorithm | UAV path planning and obstacle avoidance | Reduced computation time by 50% and path length by 30%
[103] | DEO algorithm | Energy optimization | Achieved a WMAPE below 2% under a communication delay of less than 1 s
[104] | MFTRPO algorithm | Optimal UAV trajectory | Robust and superior in energy efficiency
[105] | ML-powered H-RRM scheme | Resource allocation and handover management | Reduced the number of handovers, the interference incurred, and the delay experienced by tuning coefficients for delay, interference, and handover
[106] | Two full-layer DNNs | Energy efficiency of the moving UAV | Produced fewer false bids by drones and a revenue-optimal auction without requiring the bid distribution among the drones
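The RL entries in Table 4 all follow the same agent-environment loop. The toy below is a tabular Q-learning stand-in for the DQN-based path planning and obstacle avoidance of [102]: a 5 x 5 grid, one obstacle, a goal cell, and a step penalty that encourages short paths. The grid size, rewards, and hyper-parameters are arbitrary assumptions chosen only to make the learning loop visible.

```python
import numpy as np

rng = np.random.default_rng(2)
SIZE, GOAL, OBSTACLE = 5, (4, 4), (2, 2)          # toy 5 x 5 grid world
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.2                 # learning rate, discount, exploration

def step(state, a):
    """Move the UAV one cell; penalize the obstacle, reward the goal."""
    dr, dc = ACTIONS[a]
    nxt = (min(max(state[0] + dr, 0), SIZE - 1), min(max(state[1] + dc, 0), SIZE - 1))
    if nxt == OBSTACLE:
        return state, -10.0, False                # blocked: stay put, pay a penalty
    if nxt == GOAL:
        return nxt, 10.0, True
    return nxt, -1.0, False                       # step cost encourages short paths

for _ in range(2000):                             # training episodes
    s = (0, 0)
    for _ in range(100):                          # cap episode length
        a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt, reward, done = step(s, a)
        Q[s][a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[s][a])
        s = nxt
        if done:
            break

s, path = (0, 0), [(0, 0)]                        # greedy rollout of the learned policy
while s != GOAL and len(path) < 20:
    s, _, _ = step(s, int(np.argmax(Q[s])))
    path.append(s)
print("greedy path:", path)
```

The surveyed planners replace the Q-table with a neural network and the grid with continuous UAV kinematics, channel models, and energy constraints, but the temporal-difference update is the same.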
Table 5. Summary of the applications of ML for AUV-aided communication.
References | ML Model Architecture | Contributions | Remarks
[109] | DRL algorithm | AUV optimal trajectory | Robust and effective for different kinds of trajectory tracking
[110] | AMPPO-PP and AMPPO-TT algorithms | Autonomous planning, tracking, and emergency obstacle avoidance | Outperformed the classical path-planning algorithm and an advanced sampling-based path-planning algorithm
[111] | RL-based methods | Tracking and obstacle avoidance | Effective in completing the tracking task while avoiding obstacles
[112] | CRNN algorithm | Obstacle avoidance | Avoided obstacles with fewer parameters and shorter computation times, provided shorter paths, and improved energy efficiency
Table 6. Summary of the applications of ML for NOMA communications.
References | ML Model Architecture | Contributions | Remarks
[115] | Combined unsupervised and supervised learning | Spectrum sensing | Provided accurate and effective spectrum sensing while maintaining optimal power allocation
[116] | LSTM-based DL models | Signal detection | Outperformed the successive interference cancellation (SIC) receiver under limited radio resources
[117] | EE-CSL algorithm | Power optimization | Significantly reduced energy consumption at low computational complexity and achieved a higher sum rate than conventional MIMO orthogonal multiple access
[118] | DDQL-based RL algorithm | Transmission power optimization | Converged successfully in 91% of the test cases with a value better than the target, and performed better than the SLSQP and TCONS algorithms
[119] | DREAM-FL scheme | Client selection | Selected more qualified clients with higher model accuracy than FDMA- and TDMA-based solutions
[120] | LSTM-based DL algorithm | Channel coefficient prediction | Provided reliable performance even when the cell capacity increased
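For readers unfamiliar with NOMA, the short example below computes the achievable rates (in bit/s/Hz) of a two-user downlink with superposition coding and SIC and compares them with an orthogonal (OMA) baseline. All channel gains, powers, and the power-split factor are illustrative assumptions; the learning schemes in [117,118,120] essentially optimize these quantities over many users and time-varying fading states.

```python
import numpy as np

P, N0 = 1.0, 1e-2            # total transmit power and noise power (assumed values)
h_near, h_far = 1.0, 0.3     # illustrative channel gains for the near and far users
a_far = 0.8                  # power fraction given to the far (weak) user
a_near = 1.0 - a_far

# NOMA with SIC: the far user treats the near user's signal as interference,
# while the near user cancels the far user's signal before decoding its own.
r_far = np.log2(1 + a_far * P * h_far**2 / (a_near * P * h_far**2 + N0))
r_near = np.log2(1 + a_near * P * h_near**2 / N0)

# OMA baseline: orthogonal slots, each user gets half of the resources.
r_far_oma = 0.5 * np.log2(1 + P * h_far**2 / N0)
r_near_oma = 0.5 * np.log2(1 + P * h_near**2 / N0)

print(f"NOMA rates (far, near): {r_far:.2f}, {r_near:.2f}  sum = {r_far + r_near:.2f}")
print(f"OMA  rates (far, near): {r_far_oma:.2f}, {r_near_oma:.2f}  sum = {r_far_oma + r_near_oma:.2f}")
```

Under these assumed numbers NOMA serves both users simultaneously with a higher sum rate than the orthogonal split, which is the gain that the power-allocation and signal-detection learners above try to preserve at scale.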
Table 7. Summary of the applications of ML for mmWave and THz communications.
References | ML Model Architecture | Contributions | Remarks
[125] | RFC algorithm | Low-complexity beam selection | Achieved a higher maximum uplink sum rate, converged faster than existing methods, and saved 99.8% of the complexity for massive numbers of users
[126] | Supervised ML algorithm | Blind handover success rate prediction | Improved the inter-RAT handover success rate, kept the session in the optimal band, and had a high chance of supporting self-organizing networks
[127] | Unsupervised ML-based user clustering algorithms | Secondary user clusterization and data rate improvement | Agglomerative hierarchical clustering outperformed the K-means and DBSCAN algorithms as the number of secondary users increased
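Beam selection at mmWave/THz frequencies is conceptually an argmax over a beam codebook. The sketch below builds a DFT-style codebook for a half-wavelength ULA, generates a single-path channel, and finds the best beam by exhaustive sweeping; the classifiers surveyed in Table 7 (e.g., the RFC of [125]) aim to predict this index from cheap side information instead of measuring every beam. The array size, codebook granularity, and one-path channel model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N_ANT, N_BEAMS = 32, 32                                  # ULA size and codebook size

def steering(angle_rad, n=N_ANT):
    """Half-wavelength ULA array response for a given angle of departure."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(angle_rad)) / np.sqrt(n)

# DFT-style beam codebook sampling the angular range
codebook = np.array([steering(a) for a in np.linspace(-np.pi / 2, np.pi / 2, N_BEAMS)])

# Single dominant-path channel at a random angle (illustrative mmWave/THz model)
true_angle = rng.uniform(-np.pi / 2, np.pi / 2)
h = steering(true_angle) * (rng.normal() + 1j * rng.normal())

# Exhaustive sweep: measure the beamforming gain of every codeword
gains = np.abs(codebook.conj() @ h) ** 2
best = int(np.argmax(gains))
print(f"true angle {np.degrees(true_angle):6.1f} deg -> best beam index {best}, "
      f"gain {gains[best]:.3f}")
```

The exhaustive sweep scales linearly with the codebook size and has to be repeated as users move, which is why replacing it with a learned predictor yields the large complexity savings reported in the table.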
Table 8. Summary of the applications of ML for FSO communications.
References | ML Model Architecture | Contributions | Remarks
[129] | CNN and SVM algorithms | Channel prediction | The CNN outperformed the SVM, predicted channels with ASE noise well, and provided accurate predictions for turbulence and pointing errors in low-speed transmission
[130] | DCNN algorithm | Atmospheric turbulence detection | Achieved optimum performance with low complexity, running 2×, 3×, and 7.5× faster for modulation orders of 16, 64, and 256, respectively
[131] | Unsupervised-based technique | Estimated the number of concurrently transmitting users sharing time, bandwidth, and space resources | Achieved over 92% accuracy in differentiating simultaneously transmitting users, even under moderate atmospheric turbulence
[132] | Supervised learning-based ML method | Transmission quality estimation | The SVM achieved the highest accuracy of 92%
[133] | Combined GNN and CNN schemes | Transmission quality estimation | Efficiently recovered signals that had deteriorated and showed better classification accuracy
Table 9. Summary of the applications of ML for VLC communications.
References | ML Model Architecture | Contributions | Remarks
[137] | Deep RL algorithm | Beamforming control | Significantly increased the secrecy rate, decreased the BER, and outperformed zero-forcing and other existing algorithms
[138] | GRU–CNN prediction algorithm | UAV deployment optimization, user allocation, and energy efficiency | Solved the non-convex optimization problem with low complexity and reduced the total transmit power by up to 68.9%
[139] | Model-driven DL nonlinear post-equalizer scheme | Channel estimation and symbol detection | Demonstrated robustness and generalization ability, compensated for the overall channel impairment, and demodulated distorted symbols into bit streams
[140] | ANN-based AE structure | Low-frequency noise effect prediction | Achieved speeds up to 0.325 Gbps faster than a competing scheme, with robustness to bias, amplitude, and bitrate changes
[141] | LSTM-AE scheme | Sequential data input handling and sequential data output prediction | Significantly reduced the PAPR while maintaining the BER
Table 10. Summary of the applications of ML for MEC communications.
References | ML Model Architecture | Contributions | Remarks
[144] | Multi-stack RL algorithm | Subcarrier, transmit power, and task allocation | Reduced the number of iterations by 18% and the maximal delay among users by 11%, compared to the Q-learning algorithm
[145] | FL framework with DQN-based RL algorithm | Offloading ratio, bandwidth, and computational ability optimization | Reduced latency and energy consumption and ensured more bandwidth and computational capability for higher-priority task users
[146] | DEETO algorithm | Energy efficiency and workload balance maximization | Improved energy efficiency and minimized the edge servers' workload
[147] | DDPG-based RL algorithm | Physical-layer security optimization | Made lower total-cost decisions and worked well under various conditions
[148] | MA scheme based on NQL algorithm | Multi-UAV secure offloading maximization | Outperformed the single-agent and random-offloading schemes and achieved larger system utility
[149] | MATD3 scheme | Trajectory design, task allocation, and power management | Proved adaptable to EU mobility, changes in communication and computing resources, and the dynamics of computing tasks
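The offloading decisions that the RL agents in Table 10 learn can be illustrated with the textbook local-versus-edge cost comparison below: local execution costs CPU time and dynamic CPU energy, while offloading costs uplink transmission time and transmit energy plus edge execution time. Every constant (task size, CPU frequencies, uplink rate, transmit power, cost weight) is an assumed illustrative value, not a figure from the surveyed papers.

```python
import numpy as np

# Illustrative task and system parameters (assumptions, not from the surveyed works)
task_bits = 2e6            # input data size [bits]
task_cycles = 1e9          # required CPU cycles
f_local = 1e9              # device CPU frequency [cycles/s]
f_edge = 10e9              # edge-server CPU frequency [cycles/s]
rate_up = 20e6             # uplink rate to the edge server [bit/s]
kappa = 1e-27              # effective switched capacitance of the device CPU
p_tx = 0.5                 # device transmit power [W]

# Local execution: delay and dynamic CPU energy on the device
t_local = task_cycles / f_local
e_local = kappa * task_cycles * f_local**2

# Offloading: uplink transmission + edge execution (result download neglected)
t_off = task_bits / rate_up + task_cycles / f_edge
e_off = p_tx * task_bits / rate_up          # the device only spends energy transmitting

# Weighted delay/energy cost used to pick an action
w = 0.5
cost_local = w * t_local + (1 - w) * e_local
cost_off = w * t_off + (1 - w) * e_off

print(f"local   : delay {t_local:.2f} s, device energy {e_local:.3f} J")
print(f"offload : delay {t_off:.2f} s, device energy {e_off:.3f} J")
print("decision:", "offload" if cost_off < cost_local else "local")
```

In practice the uplink rate, server load, and task arrivals vary over time and across many users, which is why the surveyed works learn this decision (and the accompanying bandwidth, power, and trajectory variables) with RL rather than evaluating a single static formula.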
Table 11. ML-based algorithms for 5G/6G applications.
ML-Based Algorithm | Definition | 5G/6G Applications | References
Supervised learning | ML algorithm that requires input and output pairs in advance to train the model | Spectral efficiency | [115]
 | | Energy-efficient communication | [117]
 | | Computational reduction | [125]
 | | Throughput improvement | [125]
 | | Reliable communication | [126,131,132]
Unsupervised learning | ML algorithm that finds patterns in data without labeled input and a predefined output | Computational reduction | [93]
 | | Spectral efficiency | [93,115]
 | | Throughput improvement | [127]
ANN | Collection of neurons at each layer with inputs, working in a feed-forward structure | Reliable communication | [140]
 | | Computational reduction | [140]
DNN | Fully connected structure in which the neurons of adjacent layers are all connected | Reliable communication | [91,95,139]
 | | Spectral efficiency | [94,95]
 | | Energy-efficient communication | [94,95,106]
 | | Throughput improvement | [94]
CNN | Structure with the same weights for all links, with the convolution layer connected to the local path in the previous layer | Throughput improvement | [92]
 | | Spectral efficiency | [129]
 | | Computational reduction | [130]
 | | Reliable communication | [132,138]
 | | Energy-efficient communication | [138]
RNN | Multi-layer feed-forward NNs trained using the back-propagation method, which considers input, weights, and memory for each output layer | Energy-efficient communication | [103]
 | | Reliable communication | [112,116,120]
 | | Throughput improvement | [120,141]
RL | ML algorithms that allow machines to continuously learn from their experience data sets to automatically make the most accurate decisions | Energy-efficient communication | [96,101,104,118,144,145,146,149]
 | | Reliable communication | [96,100,101,102,105,109,110,111,119,144,148,149]
 | | Secure communication | [97,98,137,147]
 | | Throughput improvement | [98]
 | | Computational reduction | [144,145,146]
 | | Spectral efficiency | [145]