Review

Artificial Intelligence Applications and Self-Learning 6G Networks for Smart Cities Digital Ecosystems: Taxonomy, Challenges, and Future Directions

1 Intelligent Distributed Computing and Systems (INDUCE) Research Laboratory, Department of Computer Science and Software Engineering, College of Information Technology, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
2 National Water and Energy Center, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
3 Cloud Computing and Distributed Systems (CLOUDS) Laboratory, School of Computing and Information Systems, The University of Melbourne, Parkville, VIC 3010, Australia
* Author to whom correspondence should be addressed.
Sensors 2022, 22(15), 5750; https://doi.org/10.3390/s22155750
Submission received: 24 June 2022 / Revised: 21 July 2022 / Accepted: 21 July 2022 / Published: 1 August 2022
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities)

Abstract: The recent upsurge of smart city applications and their building blocks, namely the Internet of Things (IoT), Artificial Intelligence (AI), federated and distributed learning, big data analytics, blockchain, and edge-cloud computing, has urged the design of the upcoming 6G network generation, owing to these applications' stringent requirements in terms of quality of service (QoS), availability, and dependability to satisfy Service-Level Agreements (SLAs) for end users. Industry and academia have started to design 6G networks and propose the use of AI in their protocols and operations. Published papers on the topic discuss either the requirements of applications via a top-down approach or the network requirements in terms of agility, performance, and energy saving using a bottom-up perspective. In contrast, this paper adopts a holistic outlook, considering the applications, the middleware, the underlying technologies, and the 6G network systems towards an intelligent and integrated computing, communication, coordination, and decision-making ecosystem. In particular, we discuss the temporal evolution of wireless network generations to capture the application, middleware, and technological requirements that led to the development of each network generation, from 1G to AI-enabled 6G, and the self-learning models 6G employs. We provide a taxonomy of technology-enabled smart city application systems and present insights into those systems for the realization of a trustworthy and efficient smart city ecosystem. Finally, we propose future research directions in 6G networks for smart city applications.

1. Introduction

With the prominence of connected smart cities and the recent emergence of smart city mobile applications, whose building-block architecture comprises the Internet of Things (IoT) [1], Artificial Intelligence (AI) [2], federated and distributed learning [3], big data analytics [4], blockchain [5], and edge-cloud computing [6], the implementation of a new generation of networks has been prompted. While optimization strategies at the application level, along with a fast network such as the currently in-deployment 5G networks, play an important role, they are not enough for AI-based distributed, dynamic, contextual, and secure smart city applications enabled by emergent technologies [7]. As shown in Figure 1, these applications include, but are not limited to, autonomous driving, accident prevention, and traffic management enabled by the Internet of Vehicles (IoV); remote patient monitoring, medical drug supply chain management, and the prognosis/diagnosis of diseases empowered by the Internet of Medical Things (IoMT); industry automation and surveillance using the Internet of Robotic Things (IoRT); building maintenance and package delivery enabled by the Internet of Drones (IoD); system maintenance and pollution control enabled by the Industrial Internet of Things (IIoT); interactive gaming and aerospace navigation using Holographic Communication (HC); immersive training and guided repair enabled by Extended Reality (XR); intelligent transportation systems and smart connected healthcare using blockchain; and data analytics empowered by edge-cloud computing. These applications and their rigorous requirements in terms of quality of service (QoS) and dependability, needed to satisfy Service-Level Agreements (SLAs) for end users [8,9,10,11], have been a driving force for the evolution of networks.
The energy consumption of the smart city digital ecosystem serving these applications is a major issue, posing environmental threats and increasing electricity bills, and thus requires immediate sustainable remedies [12]. Estimates show that cloud data centers, considered the backbone of smart cities, will be responsible for 4.5% of the total global energy consumption by 2025 [13]. The average electricity cost for powering a data center could be as high as $3 million per year [14]. Furthermore, it is predicted that by 2040, Information and Communications Technology (ICT) will be responsible for 14% of global carbon emissions [15]. Consequently, the underlying communication networks should support the deployment of efficient, dependable, and secure smart city applications while meeting those applications' critical requirements of privacy, energy efficiency, high data rates, and ultra-low latency.
Therefore, industry and academia have started to look beyond 5G networks and design the upcoming 6G. In particular, 6G network designers propose the use of AI in the underlying protocols and operations for optimal performance and energy efficiency. The excitement around AI and its tremendous potential for intelligent applications and network systems, in combination with IoT smart city applications, has been a great motivation for developing AI-IoT-based solutions. These solutions require huge communication and computation resources, giving rise to latency, energy consumption, network congestion, and privacy leakage.
The current research on this topic focuses either on the requirements of AI applications in a smart city ecosystem that can benefit from the underlying 6G networks or on self-learning 6G networks for agility, flexibility, and energy efficiency. To our knowledge, no work adopts a holistic perspective considering the underlying 6G networks, the middleware, and the technology-enabled applications for an intelligent and integrated smart city digital ecosystem.
The main contributions of our paper are as follows.
  • We provide a temporal evolution of the wireless communication network generations from 1G to AI-enabled 6G and capture the inherent challenges and technological requirements that led to the development of a given network generation over a certain period.
  • We present self-learning models that would be infused in 6G to accommodate the strict requirements of smart city applications in terms of low latency, high reliability, security, energy efficiency, execution time, and context awareness.
  • We propose a taxonomy of distributed, dynamic, and contextual AI applications in 6G networks based on the underlying technology used by those applications. In addition, we provide insights on the requirements of these applications that should be considered by the underlying 6G networks.
  • We propose future directions toward the realization of a trustworthy and efficient digital ecosystem consisting of intelligent and connected applications, the middleware, the underlying technologies, and the 6G network systems.
The rest of the paper is organized as follows. Section 2 provides a categorization and overview of related surveys. The temporal evolution of wireless communication network generations is presented in Section 3. Section 4 and Section 5 synthesize the taxonomies of AI-enabled 6G networks with their self-learning models and technology-enabled smart city applications in 6G. Future research directions are discussed in Section 6. Section 7 summarizes and concludes the paper.

2. Related Surveys

There have been few surveys on AI-enabled applications and AI-6G in smart cities. We classify these surveys into two categories based on their approach: (1) a top-down approach highlighting the requirements of AI applications in terms of network capabilities [16,17] and (2) a bottom-up perspective focusing on AI-enabled 6G networks for agile, flexible, and efficient systems [18,19,20,21].
Concerning the top-down approach, Akhtar et al. [16] presented the projected 6G architecture and its characteristics, along with potential technologies enabling the envisioned network generation systems. The authors focused on quantum communication and Machine Learning, blockchain, tactile internet, and free duplexing and spectrum sharing technologies. Furthermore, the authors discussed e-health and bio-sensing, HC, and IoT applications that will be underpinned by 6G networks. However, the authors did not analyze the requirements of the technology-enabled applications in 6G. Using a similar approach, Tataria et al. [17] explained the 6G network architecture, characteristics, and deployment scenarios. In addition, the authors analyzed the requirements of applications in 6G systems enabled by HC, tactile and haptic internet, edge-cloud computing, and IoT technologies. However, these works [16,17] do not focus on employing self-learning models in the underlying 6G network layer for security, agility, flexibility, and energy efficiency.
Regarding the bottom-up approach, Yang et al. [18] proposed an AI-enabled 6G architecture for radio network resource management and service provisioning. Similarly, Letaief et al. [19,20] analyzed the potential of AI for 6G network design and optimization. Zhang and Zu [21] presented a survey on AI-enabled 6G networks for the radio interface, intelligent traffic control, resource management, performance and energy optimization, and security. However, these works do not analyze the requirements of AI applications in 6G networks.
Table 1 summarizes the related surveys and compares them with our work. In contrast to these top-down or bottom-up surveys, this paper views 6G technology from a holistic perspective, in which self-learning models are inserted into each of the main technical layers of 6G to meet the requirements of an integrated smart city digital ecosystem as a whole, rather than addressing application needs or network requirements alone. In this approach, in addition to AI-enabled 6G network systems and their self-learning models, we analyze the middleware, the technology-enabled applications, and their requirements for an intelligent and connected contextual computing and communication smart city ecosystem.

3. Evolution of Wireless Communication Technology (1G–6G)

Wireless communication technology has evolved over the years with the aim of providing high-speed, reliable, and secure communication. Figure 2 shows the evolution of wireless network development from 1G to 6G, including the years in which each network generation was proposed and deployed. In the following, we explain each generation along with its applications and shortcomings.

3.1. First Generation (1G) Technology

The first generation (1G) communication system was introduced in 1978 in the United States based on the Advanced Mobile Phone System (AMPS) [22]. The AMPS is an analog cellular system allocated 50 MHz of bandwidth in the 824–894 MHz frequency range [23]. This bandwidth is divided into 30 kHz sub-channels using Frequency Division Multiple Access (FDMA), allowing multiple users to send data. In 1979, 1G was commercially launched in Japan by the Nippon Telegraph and Telephone (NTT) DoCoMo Company. In 1981, the Nordic Mobile Telephone (NMT) standard for 1G was developed by the Nordic countries, namely Norway, Denmark, Finland, and Sweden. In 1983, the AMPS was commercially launched in the United States and was later used in Australia. The Total Access Communication System (TACS) standard was introduced in the United Kingdom for 1G [24]. First generation technology supported voice calls within one country at data rates of up to 2.4 Kbps. However, the underlying technology could not handle international voice and conference calls, nor other applications such as messaging services, emails, and accessing information over a mobile wireless network. In addition, being an analog system, 1G suffered from poor voice quality and poor handoff reliability. Furthermore, 1G offered little security. To overcome these shortcomings, the 2G network generation was introduced.

3.2. Second Generation (2G) Technology

To enable applications such as international voice calls, messaging, and access to information over a wireless network, which require a high data transfer rate, and to make communication more secure, second-generation (2G) wireless technology was designed in the 1980s and introduced in 1991 under the Global System for Mobile (GSM) communication standard in Finland [25]. The analog system of 1G was replaced by a digital system, enabling the encryption of voice calls and thus providing security. GSM uses Time Division Multiple Access (TDMA), in which each network user is allocated the channel bandwidth in time slots [26]. GSM operates in the 900–1800 MHz frequency bands, except in America, where it operates in the 1900 MHz band. TDMA was later used by other digital standards such as Digital AMPS (D-AMPS) in the United States and Personal Digital Cellular (PDC) in Japan. As an alternative to TDMA, Code Division Multiple Access (CDMA) was introduced in the United States with the IS-95 standard [22], which allowed multiple network users to transmit data simultaneously using assigned unique code sequences. In addition to international roaming voice calls, 2G supported conference calls, a call hold facility, the Short Message Service (SMS), and the Multimedia Message Service (MMS) with a data rate of up to 9.6 Kbps.
The continuous evolution of GSM technology led to the development of the General Packet Radio Service (GPRS), referred to as 2.5G, which implemented packet switching in addition to circuit switching. GPRS provided additional services such as Wireless Application Protocol (WAP) access and internet communication services such as e-mail and World Wide Web (WWW) access [27], with data rates of up to 115 Kbps [28]. GPRS further evolved into Enhanced Data Rates for GSM Evolution (EDGE), providing higher data rates: a 40 KB text file can be transferred in 2 s using EDGE compared to 6 s using GPRS. EDGE was deployed on GSM networks in 2003 by Cingular (now AT&T) in the United States. The peak data speed of 2G is 50 Kbps using GPRS and 1 Mbps using EDGE. However, 2G networks were not capable of handling video conferencing, navigation services, and other applications requiring high data rates, leading to the 3G network generation.

3.3. Third Generation (3G) Technology

The Third Generation Partnership Project (3GPP) was formed in 1998 to provide globally standardized specifications for mobile networking, enabling high data rate services such as video calls, navigation, and interactive gaming. Third generation (3G) technology is based on the International Mobile Telecommunications (IMT)-2000 standard and was first made available in Japan by NTT DoCoMo in 2001. The IMT-2000 focused on providing a wider coverage area, improving the QoS, and making services available to users irrespective of their location [29]. One of the requirements of the IMT-2000 was a minimum speed of 200 Kbps for a network to qualify as 3G. The third generation introduced services such as video conferencing and video downloading with an increased data transmission rate at a lower cost. It increased the efficiency of the frequency spectrum by improving audio compression during calls, allowing more simultaneous calls in the same frequency range. Third generation technology evolved between 2000 and 2010 to provide Universal Mobile Telecommunications System (UMTS)-based networks with higher data rates and capacities. In particular, High-Speed Downlink Packet Access (HSDPA), a packet-based data service providing downlink data rates of 8–10 Mbps, was deployed (also referred to as 3.5G). To serve applications that require high uplink data rates, such as interactive gaming, High-Speed Uplink Packet Access (HSUPA) was introduced (referred to as 3.75G), enabling uplink data transmission speeds of 1.4–5.8 Mbps. However, IP telephony, 3D videos, and High Definition (HD) mobile TV were not supported by 3G technology, leading to the foundation of the 4G network generation.

3.4. Fourth Generation (4G) Technology

To transmit data, voice, multimedia, and internet services at a higher rate, quality, and security and at a low cost, the Fourth Generation (4G) was initiated in the late 2000s as an all-IP-based network system. Fourth Generation technology was first used commercially in Norway in 2009 after its successful field trial in Japan in 2005. It aimed to provide peak data rates of up to 1 Gbps at low mobility and 100 Mbps at high mobility, and is based on Long-Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX) technologies. The LTE standard was further enhanced to LTE-Advanced Pro (referred to as 4.5G) to increase mobile broadband and connectivity performance [22]. However, 4G was not capable of operating applications that require image processing, such as machine vision, smart connected cars, and augmented reality, giving rise to the 5G network generation.

3.5. Fifth Generation (5G) Technology

To obtain a consistent QoS, low end-to-end latency, reduced cost, and massive device connectivity, Fifth Generation (5G) communication technology was established to support applications such as AR, home and industrial automation, and machine vision. Fifth Generation technology was first offered in South Korea in 2019. It provides a data rate of 20 Gbps in the downlink and 10 Gbps in the uplink, and is aimed at supporting three generic services: Enhanced Mobile Broadband (eMBB), Massive Machine-Type Communications (mMTC), and Ultra-Reliable Low-Latency Communications (URLLC) [30]. eMBB aims to deliver peak download speeds of over 10 Gbps to support applications such as Ultra-High Definition (UHD) video and AR. mMTC defines the requirement to support one million low-powered, economical devices per km2 with a battery life of up to 10 years; it can support applications such as smart homes and industrial automation that involve numerous sensors, controllers, and actuators. URLLC sets the requirements of high reliability (99.99%), extremely low latency (<1 ms), and support for low data rates (bps/Kbps) for applications such as social messaging services, traffic lights, self-driving cars, and smart healthcare. In 2020, Huawei, as part of its 5.5G vision, proposed three additional sets of services: Uplink-Centric Broadband Communication (UCBC), Real-Time Broadband Communication (RTBC), and Harmonized Communication and Sensing (HCS) [31]. UCBC aims to increase the uplink bandwidth 10-fold to support applications involving machine vision. For Augmented Reality (AR), Virtual Reality (VR), and Extended Reality (XR) applications, RTBC would provision large-bandwidth and low-latency services with a certain level of reliability. HCS focuses on offering communication and sensing functionalities for connected car and drone scenarios. However, with the emergence of the interactive and connected IoT, communicating IoV, and holographic applications, 5G networks cannot meet the stringent computing and communication requirements of those applications. Consequently, the 6G network generation was instigated.

3.6. Sixth Generation (6G) Technology

Sixth Generation (6G) technology was envisioned in 2019–2020 to transform the “Internet of Everything” into an “Intelligent Internet of Everything” with more stringent requirements in terms of a high data rate, high energy efficiency, massive low-latency control, high reliability, connected intelligence with Machine Learning (ML) and Deep Learning (DL), and very broad frequency bands [19]. Three new application services are proposed: Computation Oriented Communications (COC), Contextually Agile eMBB Communications (CAeC), and Event-Defined uRLLC (EDuRLLC) [19]. COC will enable the flexible selection of resources from the rate-latency-reliability space, depending on the available communication resources, to achieve a certain level of computational accuracy for learning approaches. CAeC will provision eMBB services that are adaptive to the network congestion, traffic, topology, users’ mobility, and social networking context. EDuRLLC targets the 5G-URLLC applications that will operate in emergency or extreme situations with varying spatial-temporal device densities, traffic patterns, and infrastructure availability.
Table 2 summarizes the characteristics of the wireless communication networks from 1G to 6G [16,32].

4. Artificial Intelligence (AI)-Enabled 6G Networks

The disruptive emergence of highly distributed smart city mobile applications [33,34,35], such as the IoV, IoMT, IoD, IoRT, IIoT, and 3D virtual reality, with their stringent QoS requirements and the need for service providers to satisfy SLAs, has been a driving force for 6G. In addition, many of those applications are AI- and big-data-driven, making it challenging, if not impossible, for 5G to satisfy their requirements. Therefore, 6G must provide battery-free device capabilities, very high data rates, very high energy efficiency, massive low-latency control, very broad frequency bands, and ubiquitous broadband global network coverage beyond what 5G LTE can offer. To achieve that level of efficiency, in contrast to 5G, 6G needs to be equipped with context-aware algorithms to optimize its architecture, protocols, and operations. For this purpose, 6G will infuse connected intelligence into its design within an integrated communication, computing, and storage infrastructure spanning the edges, the core, and the cloud. Supporting a wide range of applications that are demanding in terms of low latency, high reliability, security, and execution time requires AI-enabled optimization for 6G [19]. Traditional approaches using statistical analysis based on prior knowledge and experience via the deployment of Software Defined Networks (SDNs) [19] will no longer be effective because of the time that elapses from analysis to decision making. Consequently, ML and DL algorithms are used to solve several issues in networking, such as caching and data offloading [36]. In this section, we present AI-enabled 6G network protocols and mechanisms (Figure 3) and their employed self-learning ML/DL models.

4.1. Channel Estimation

To fulfill the demanding requirements of smart city applications in terms of high data rates (Tbps), low latency (on the order of 0.1–1 ms), and high reliability (error rates on the order of 10−9) [37], 6G radio access will be enabled by emerging technologies such as Terahertz communication [38], visible-light communication, ultra-massive multiple-input multiple-output (MIMO) [39], and large intelligent surfaces [40]. These technologies will increase the complexity of the radio communication channels, making efficient channel estimation challenging with traditional mathematical approaches. The wireless communication channel attenuates, phase-shifts, and adds noise to the transmitted information. In this context, channel estimation can be defined as the process of estimating the characteristics of the communication channel in order to recover the transmitted information from the channel’s effects. To increase the performance and capacity of 6G communication, precise and real-time channel estimation becomes crucial. Recently, DL has gained wide attention for precise channel estimation. Figure 4 shows a DL-based channel estimation process in which the signal is transmitted along with some pilot (reference) signals. The effects of the channel on the pilot signals are then extracted, and the channel characteristics are estimated by a DL method using the interpolated channel.
Ye et al. [41] proposed a Deep Neural Network (DNN)-based approach for channel estimation and symbol detection in an Orthogonal Frequency Division Multiplexing (OFDM) system. The DNN model is trained offline using OFDM samples generated from different information sequences under distinct channel conditions. The model is then used to recover the transmitted information directly, without explicitly estimating the channel characteristics. Gao et al. [42] proposed a Convolutional Neural Network (CNN)-based channel estimation framework for massive MIMO systems. The authors used one-dimensional convolutions to shape the input data. Each convolutional block consists of a batch normalization layer, to avoid gradient explosion [43], and a Rectified Linear Unit (ReLU) activation function.
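To make the approach concrete, the following is a minimal sketch of a 1D-CNN channel estimator in the spirit of [42], with each convolutional block combining a 1D convolution, batch normalization, and a ReLU activation. The layer sizes, the 64 subcarriers, and the synthetic training data are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal 1D-CNN channel estimator sketch; sizes and data are illustrative.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Conv1d -> BatchNorm -> ReLU, as described for each convolutional block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch),   # batch normalization to stabilize gradients
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class ChannelEstimator(nn.Module):
    """Maps a coarse (pilot-interpolated) channel estimate to a refined one.
    Real and imaginary parts are carried as two input channels."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(ConvBlock(2, 16), ConvBlock(16, 16))
        self.head = nn.Conv1d(16, 2, kernel_size=3, padding=1)
    def forward(self, h_coarse):
        return self.head(self.body(h_coarse))

model = ChannelEstimator()
h_coarse = torch.randn(8, 2, 64)   # batch of coarse estimates over 64 subcarriers
h_true = torch.randn(8, 2, 64)     # ground-truth channel (simulated stand-in)
loss = nn.functional.mse_loss(model(h_coarse), h_true)
loss.backward()                    # one supervised training step
```

In practice, the coarse input would come from pilot-based interpolation rather than random tensors.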

4.2. Modulation Recognition

With the increasing data traffic in smart cities, different modulation methods are employed in communication systems for efficient and effective data transmission. In this context, modulation recognition aims to identify the modulation information of signals in a noisy interference environment [44]. Modulation recognition aids signal demodulation and decoding for applications such as interference identification, spectrum monitoring, cognitive radio, threat assessment, and signal recognition. The conventional decision-theory-based and statistical-pattern-recognition-based methods for modulation recognition are computationally expensive and time consuming for smart city applications [44]. DL can be used as an alternative to improve the accuracy and efficiency of modulation recognition, as shown in Figure 5. Zhang et al. [44] investigated the applicability of a CNN and Long Short-Term Memory (LSTM) for modulation recognition, as the former is good at automatic feature extraction from spatial data and the latter performs well on sequential data. Yang et al. [45] proposed the use of CNNs and Recurrent Neural Networks (RNNs) for modulation recognition over additive white Gaussian noise and Rayleigh fading channels. The authors found that DL algorithms perform modulation recognition more accurately than ML algorithms such as the Support Vector Machine (SVM). To ensure the privacy and security of the transmitted data, Shi et al. [46] proposed a CNN-based federated learning approach with differential privacy for modulation recognition.
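As an illustration of the CNN-LSTM idea in [44], the sketch below feeds raw I/Q samples through convolutional layers for spatial feature extraction and an LSTM for temporal modeling. The layer sizes, the sequence length, and the choice of 11 modulation classes are illustrative assumptions.

```python
# A minimal CNN+LSTM modulation classifier sketch; sizes are illustrative.
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self, num_classes=11):
        super().__init__()
        # CNN extracts spatial features from raw I/Q samples
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        # LSTM models the temporal structure of the feature sequence
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)
    def forward(self, iq):                    # iq: (batch, 2, time)
        feats = self.cnn(iq).transpose(1, 2)  # -> (batch, time, 32)
        _, (h_n, _) = self.lstm(feats)
        return self.fc(h_n[-1])               # logits over modulation classes

logits = CNNLSTMClassifier()(torch.randn(4, 2, 128))
print(logits.shape)  # torch.Size([4, 11])
```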

4.3. Traffic Classification

The categorization of network traffic into different classes, referred to as traffic classification, is important for QoS assurance, pricing control, resource management, and the security of smart city applications. The simplest method for traffic classification maps applications’ traffic to port numbers [47]. However, this technique provides an inaccurate classification, since several applications use dynamic port numbers. Payload-based methods are alternatives to the port-based techniques, classifying traffic by examining the packet payload [47]. However, the traffic payload cannot be accessed in scenarios where the packets are encrypted due to privacy and security concerns. Consequently, ML/DL-based methods can be used to address the issues of the conventional methods (Figure 6). Ren, Gu, and Wei [48] proposed a Tree-RNN to classify network traffic into 12 different classes. The proposed DL model consists of a tree structure that divides the large classification problem into smaller ones, with each class represented by a tree node. Lopez-Martin et al. [49] proposed a hybrid RNN- and CNN-based network to classify traffic from IoT devices and services. The CNN layers extract complex network traffic features automatically from the input data, eliminating the feature selection process used in classical ML approaches.
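A minimal sketch of a hybrid CNN+RNN flow classifier in the spirit of [49] follows: CNN layers extract per-packet features from header fields and a GRU aggregates them across the flow. The field count, flow length, and the 12 output classes (borrowed from [48]) are illustrative assumptions.

```python
# Hybrid CNN+RNN traffic classifier sketch; dimensions are illustrative.
import torch
import torch.nn as nn

class TrafficClassifier(nn.Module):
    def __init__(self, fields_per_packet=6, num_classes=12):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(fields_per_packet, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.rnn = nn.GRU(32, 64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)
    def forward(self, flow):                    # flow: (batch, fields, packets)
        feats = self.cnn(flow).transpose(1, 2)  # -> (batch, packets, 32)
        _, h_n = self.rnn(feats)
        return self.fc(h_n[-1])                 # logits over traffic classes

# One flow = 20 packets, each described by 6 header fields
print(TrafficClassifier()(torch.randn(4, 6, 20)).shape)  # torch.Size([4, 12])
```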

4.4. Traffic Prediction

Network traffic prediction focuses on forecasting future traffic from previous traffic data. This aids in proactively managing the network and computing resources, improving the QoS, making network operations cost-effective, and detecting anomalies in the data traffic. DL has shown potential for predicting network traffic accurately in real time. Figure 7 shows an overview of DL-based prediction of network traffic data. Vinayakumar et al. [50] evaluated the performance of different RNNs, namely the simple RNN, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Identity Recurrent Unit (IRNN), and Feed-forward Neural Network (FNN), in predicting network traffic using a GEANT backbone network dataset. The experimental results showed that LSTM predicts the network traffic with the lowest Mean Squared Error (MSE). Aloraifan, Ahmad, and Alrashed [51] used Bi-directional LSTM (Bi-LSTM) and Bi-directional GRU (Bi-GRU) models to predict the network traffic matrix. To increase the prediction accuracy, the authors combined a CNN with Bi-LSTM or Bi-GRU. The authors found that the prediction performance of DL algorithms depends on the configuration of the neural network parameters.
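The following is a minimal sketch of an LSTM next-step traffic predictor of the kind evaluated in [50]. The window length, hidden size, and random training data are illustrative assumptions; MSE is used as the loss, matching the metric reported there.

```python
# LSTM next-step traffic predictor sketch; window and hidden sizes are illustrative.
import torch
import torch.nn as nn

class TrafficPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)
    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])         # next-step traffic volume

model = TrafficPredictor()
window = torch.randn(16, 12, 1)            # 12 past traffic samples per link
loss = nn.functional.mse_loss(model(window), torch.randn(16, 1))
loss.backward()                            # MSE matches the metric used in [50]
```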

4.5. Data Caching

Internet data traffic is exploding at a rapid pace with the increasing popularity of and demand for different smart city applications such as infotainment, AR, VR, interactive gaming, and XR. Consequently, it becomes challenging to accommodate this data, in terms of storage and transmission, for applications that require ultra-low latency such as autonomous vehicles and smart healthcare. To address this challenge, edge caching [52] is seen as a potential solution that provides storage for IoT data at the edge of the network, i.e., in proximity to the mobile devices. This enables IoT applications to retrieve data in real time from the edge resources, eliminating backhaul link communication. Consequently, edge caching reduces data transmission time and energy. However, determining the optimal cache content and cache placement strategies in a dynamic network is challenging [53]. DL can be effective in designing optimal caching strategies, as shown in Figure 8. Jiang et al. [54] proposed a distributed deep Q-learning-based caching mechanism to improve edge caching efficiency in terms of the cache hit rate. The mechanism involves the offline prediction of users’ preferences followed by the online prediction of content popularity. However, due to the limited caching resources of the edge nodes and the spatial-temporal content demands of mobile users, cooperative edge caching schemes are required. Zhang et al. [55] proposed a DRL-based cooperative edge caching approach that enables communication between distributed edge servers to enlarge the effective cache size. However, cooperative schemes often collect and analyze the data at a centralized server, and sharing sensitive data, such as users’ preferences and content popularity, among different edge and cloud servers raises privacy concerns. To tackle this challenge, federated learning (FL) is a promising solution in which the learning models that predict content popularity are trained locally on the IoT devices for cooperative caching [55].
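To illustrate the deep Q-learning caching idea of [54], the sketch below scores cache-replacement actions with a small Q-network under an epsilon-greedy policy. The per-slot state features (recency, frequency, size), the cache size, and the network shape are illustrative assumptions; in a full system the reward would be the resulting cache hit rate.

```python
# Deep Q-network sketch for edge cache replacement; state encoding is illustrative.
import torch
import torch.nn as nn

CACHE_SLOTS, FEATS = 10, 3   # per-slot features, e.g., recency, frequency, size

q_net = nn.Sequential(
    nn.Linear(CACHE_SLOTS * FEATS, 64), nn.ReLU(),
    nn.Linear(64, CACHE_SLOTS + 1),  # Q-value per eviction slot, +1 = "don't cache"
)

def choose_action(state, eps=0.1):
    """Epsilon-greedy choice of which slot to evict for a new content item."""
    if torch.rand(1).item() < eps:
        return torch.randint(0, CACHE_SLOTS + 1, (1,)).item()
    with torch.no_grad():
        return q_net(state.flatten()).argmax().item()

state = torch.randn(CACHE_SLOTS, FEATS)
print(choose_action(state))  # reward signal would be the observed cache hit rate
```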

4.6. Intelligent Routing

To manage network traffic efficiently and to fulfill the QoS requirements of 6G applications, several routing strategies have been developed. However, traditional routing protocols developed using meta-heuristic approaches become computationally expensive with increasing traffic variability. Consequently, ML- and DL-based approaches have been proposed to address the shortcomings of the traditional methods. Figure 9 shows an example of DL-based intelligent routing in an IoT environment. Tang et al. [56] proposed a real-time deep CNN-based routing algorithm for a wireless mesh network backbone. A CNN model with two convolutional layers and two fully connected layers is trained periodically using the continuous stream of network data. Liu et al. [57] proposed DRL-based routing in software-defined data center networks by recombining network resources (such as cache and bandwidth) based on their effectiveness in reducing delay and then using DRL for routing with the recombined state. The DNN model employed within the DRL consists of two fully connected hidden layers with 30 neurons each. The CNN model in the actor and critic networks of the DRL consists of two max pooling layers, three convolutional layers with eight filters each, and one fully connected layer with 30 neurons. A ReLU activation function is employed in all the layers.
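As a concrete illustration, the sketch below instantiates a value network with the shape reported for the DNN in [57]: two fully connected hidden layers of 30 neurons with ReLU activations, mapping a recombined network state to a score per candidate next hop. The state dimension and the number of next hops are illustrative assumptions.

```python
# Routing value-network sketch following the DNN shape reported in [57].
import torch
import torch.nn as nn

STATE_DIM, NUM_NEXT_HOPS = 24, 4   # illustrative sizes

routing_net = nn.Sequential(
    nn.Linear(STATE_DIM, 30), nn.ReLU(),   # hidden layer 1: 30 neurons
    nn.Linear(30, 30), nn.ReLU(),          # hidden layer 2: 30 neurons
    nn.Linear(30, NUM_NEXT_HOPS),          # score per candidate next hop
)

# State = recombined per-link features (e.g., cache and bandwidth)
state = torch.randn(1, STATE_DIM)
next_hop = routing_net(state).argmax(dim=1).item()
print(f"forward packet via next hop {next_hop}")
```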

4.7. Radio Resource Management

With the future 6G networks, the density of small-cell networks will increase drastically. Consequently, radio resource management has emerged for the system-level management of co-channel interference, radio resources, and other radio transmission characteristics in a wireless communication system to utilize the radio spectrum efficiently. With the increasing dynamicity and complexity of network generations towards 6G, traditional heuristic-based approaches for radio resource management become inaccurate, and ML/DL-based approaches are being explored as an alternative. Shen et al. [58] proposed graph neural networks for radio resource management in a large-scale network, modeling the wireless network as a wireless channel graph and then formulating resource management as a graph optimization problem. The neural network consisted of three layers, an Adam optimizer, and a learning rate of 0.001. Zhang et al. [59] proposed a DNN framework for radio resource management to minimize the energy consumption of the network subject to power, interference, and QoS constraints. A DNN model with three layers, 800 neurons per layer, a 0.01 learning rate, and an Adam optimizer is employed for the power optimization scheme, whereas a DNN model with four layers, 80 neurons per layer, a 0.05 learning rate, and an RMSProp optimizer is used for sub-channel allocation.
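The following sketch instantiates the power-optimization DNN configuration reported in [59]: three layers of 800 neurons each, trained with Adam at a 0.01 learning rate. The input (per-link channel gains), the sigmoid output encoding powers as fractions of the maximum, and the stand-in loss are illustrative assumptions.

```python
# Power-optimization DNN sketch with the configuration reported in [59].
import torch
import torch.nn as nn

NUM_LINKS = 20   # illustrative network size

power_net = nn.Sequential(
    nn.Linear(NUM_LINKS, 800), nn.ReLU(),
    nn.Linear(800, 800), nn.ReLU(),
    nn.Linear(800, NUM_LINKS), nn.Sigmoid(),  # power as a fraction of the maximum
)
optimizer = torch.optim.Adam(power_net.parameters(), lr=0.01)

channel_gains = torch.rand(32, NUM_LINKS)
powers = power_net(channel_gains)
loss = powers.sum()   # stand-in for the energy objective under QoS constraints
optimizer.zero_grad()
loss.backward()
optimizer.step()
```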

4.8. Network Fault Management

In network management, fault management aims to detect, predict, and eliminate malfunctions in the communication network. The integration of newly emerging technologies and paradigms in 6G networks makes the network more complex, heterogeneous, and dynamic. Consequently, fault management becomes more challenging in 6G networks, and ML/DL approaches have been studied recently for efficient fault management. Regin, Rajest, and Singh [60] proposed a Naïve Bayes and CNN-based algorithm for distributed fault detection over a wireless sensor network. The results show that the proposed approach accurately detects faults and is energy efficient. Regarding fault diagnosis, the authors of [61] implemented an FNN for fault detection and classification in wireless sensor networks; the DL model was tuned using a hybrid gravitational search and particle swarm optimization algorithm. Kumar et al. [62] studied the feasibility of ML and DL approaches for fault prediction in a cellular network. The results showed that a DNN with an autoencoder (AE) predicts network faults with the highest accuracy compared to autoregressive neural networks and the SVM.
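As an illustration of the FNN fault classifier in [61], the sketch below maps sensor-node features to fault classes. The feature count, hidden size, and fault classes are illustrative assumptions; note that [61] tunes the network with a hybrid gravitational search and particle swarm optimization algorithm, whereas plain backpropagation is used here only for brevity.

```python
# Feed-forward fault classifier sketch; features and classes are illustrative.
import torch
import torch.nn as nn

fault_net = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),   # 8 sensor-node features (assumed)
    nn.Linear(32, 4),              # e.g., normal / offset / gain / stuck-at faults
)

readings = torch.randn(16, 8)
labels = torch.randint(0, 4, (16,))
loss = nn.functional.cross_entropy(fault_net(readings), labels)
loss.backward()   # [61] instead tunes weights with a hybrid metaheuristic
```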

4.9. Mobility Management

Sixth Generation networks will serve a spectrum of mobile applications, such as the IoV, IoRT, and IoMT, that require low-latency and highly reliable services. To guarantee the QoS for these applications while improving resource utilization and avoiding network bottlenecks, it becomes crucial to learn and predict users’ movements. DL-based approaches can serve this purpose, as shown in Figure 10. Zhao et al. [63] proposed a mobile user trajectory prediction algorithm combining LSTM with RL. The LSTM is used to predict the trajectories of mobile users, whereas RL improves the LSTM’s training time by finding the most accurate neural network architecture for the given problem without human intervention. An initial learning rate of 0.002 is selected for the LSTM, and a Q-learning rate of 0.001 and a discount factor of 1 are used for RL. Klus et al. [64] proposed ANN models for cell-level and beam-level mobility management optimization in wireless networks. The results showed that DL-based approaches outperform the conventional 3GPP approach for mobility management.
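A minimal sketch of the LSTM trajectory predictor of [63] follows: past (x, y) positions in, next position out. The 0.002 learning rate matches the paper; the sequence length and hidden size are illustrative assumptions, and the paper's RL component, which searches for the network architecture, is omitted.

```python
# LSTM trajectory predictor sketch; hidden size and window are illustrative.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)
    def forward(self, xy):                 # xy: (batch, steps, 2)
        out, _ = self.lstm(xy)
        return self.fc(out[:, -1])         # predicted next (x, y) position

model = TrajectoryPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)  # rate from [63]
past = torch.randn(8, 10, 2)                # 10 past positions per user
loss = nn.functional.mse_loss(model(past), torch.randn(8, 2))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```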

4.10. Energy Optimization

With 6G networks providing efficient connectivity to a wide range of IoT applications, the number of IoT devices is expected to increase dramatically. The data transfer, storage, and analysis from these devices will increase the energy consumption of the network. Recently, ML/DL approaches have shown potential for saving energy in wireless networks. Wei et al. [65] proposed an actor-critic RL scheme for scheduling users’ requests and allocating resources in heterogeneous cellular networks to minimize the energy consumption of the overall network. Continuous stochastic actions are generated by the actor using a Gaussian distribution; the critic estimates the performance of the policy and aids the actor in learning the policy gradient using compatible function approximation. Kong and Panaitopol [66] proposed an online RL algorithm to dynamically activate and deactivate resources at the base station depending on the network traffic. The online RL algorithm eliminates the need for a separate model training process.
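To make the actor-critic scheme of [65] concrete, the sketch below shows the actor half: a network that outputs the mean of a Gaussian from which a continuous scheduling/allocation action is sampled, together with its log-probability for the policy gradient. The state and action dimensions are illustrative assumptions, and the critic is omitted.

```python
# Gaussian actor sketch for continuous scheduling actions; sizes are illustrative.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 12, 3

class GaussianActor(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU())
        self.mean = nn.Linear(64, ACTION_DIM)
        self.log_std = nn.Parameter(torch.zeros(ACTION_DIM))
    def forward(self, state):
        mu = self.mean(self.body(state))
        dist = torch.distributions.Normal(mu, self.log_std.exp())
        action = dist.sample()  # continuous stochastic action, as in [65]
        return action, dist.log_prob(action).sum(-1)  # log-prob for policy gradient

actor = GaussianActor()
action, logp = actor(torch.randn(1, STATE_DIM))
# A critic would estimate the value of (state, action) to weight logp in training.
```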

4.11. Intrusion Detection

The evolving smart city applications running on the underlying 6G networks require high reliability and high security. In this context, intrusion detection can be used to identify unauthorized access and malicious activities in smart city applications. Figure 11 shows a DL-based approach for intrusion detection in the IoT environment. Sharifi et al. [67] proposed an intrusion detection system using combined K-Nearest Neighbor (KNN) and K-means algorithms. The proposed system employs principal component analysis for feature extraction, uses the K-means algorithm to cluster the data, and then classifies the clustered data using KNN. Yin et al. [68] proposed an RNN-based approach for binary and multi-class intrusion detection. For binary intrusion detection, an RNN model with 80 hidden nodes and a learning rate of 0.1 provides the highest accuracy; for multi-class intrusion detection, an RNN model with 80 hidden nodes and a 0.5 learning rate yields the highest accuracy. The results show that DL approaches outperform ML approaches for intrusion detection.
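The sketch below instantiates the binary intrusion detector configuration reported in [68]: an RNN with 80 hidden nodes trained at a 0.1 learning rate. The 41 input features (typical of NSL-KDD-style records) and the short record sequences are illustrative assumptions.

```python
# RNN intrusion detector sketch with the configuration reported in [68].
import torch
import torch.nn as nn

class IntrusionDetector(nn.Module):
    def __init__(self, num_features=41, hidden=80, num_classes=2):
        super().__init__()
        self.rnn = nn.RNN(input_size=num_features, hidden_size=hidden,
                          batch_first=True)   # 80 hidden nodes, as in [68]
        self.fc = nn.Linear(hidden, num_classes)
    def forward(self, x):                  # x: (batch, seq, features)
        _, h_n = self.rnn(x)
        return self.fc(h_n[-1])            # normal vs. attack logits

model = IntrusionDetector()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # rate from [68]
records = torch.randn(16, 5, 41)            # short sequences of traffic records
labels = torch.randint(0, 2, (16,))
loss = nn.functional.cross_entropy(model(records), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```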

4.12. Traffic Anomaly Detection

Network traffic anomalies refer to unusual changes in the traffic, such as transient changes in users’ requests, port scans, and flash crowds. The detection of such anomalies is important for the security of the network and for reliable services. DL approaches have recently gained popularity for traffic anomaly detection in complex, dynamic, and heterogeneous wireless networks (Figure 12). Kim and Cho [69] proposed a C-LSTM neural network to model the spatial-temporal information in traffic data and detect anomalies. The CNN layer in the model reduces the variation in the information, the LSTM layer models the temporal information, and the DNN layer maps the data onto a separable space. The tanh activation function is employed in all the layers except the LSTM output layer, which uses softmax activation. Naseer et al. [70] evaluated the performance of ML and DL models for anomaly detection, implementing extreme learning machine, nearest neighbor, decision tree, random forest, SVM, Naïve Bayes, quadratic discriminant analysis, Multilayer Perceptron (MLP), LSTM, RNN, AE, and CNN models. The results showed that the deep CNN and LSTM models detect anomalies with the highest accuracy.
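As an illustration of the C-LSTM layer roles described in [69], the sketch below chains a CNN layer (with tanh, as in the paper), an LSTM, and a dense classifier with softmax at the output. The window length, filter counts, and hidden sizes are illustrative assumptions.

```python
# C-LSTM anomaly detector sketch following the layer roles in [69].
import torch
import torch.nn as nn

class CLSTM(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # CNN layer reduces variation in the traffic window
        self.cnn = nn.Sequential(nn.Conv1d(1, 16, kernel_size=3, padding=1),
                                 nn.Tanh())
        # LSTM layer models the temporal information
        self.lstm = nn.LSTM(16, 32, batch_first=True)
        # DNN layer maps the representation onto a separable space
        self.dnn = nn.Sequential(nn.Linear(32, 32), nn.Tanh(),
                                 nn.Linear(32, num_classes))
    def forward(self, x):                  # x: (batch, 1, window)
        feats = self.cnn(x).transpose(1, 2)
        _, (h_n, _) = self.lstm(feats)
        return self.dnn(h_n[-1])

window = torch.randn(8, 1, 60)              # 60-step traffic windows
probs = torch.softmax(CLSTM()(window), dim=1)  # softmax at the output, as in [69]
print(probs.shape)  # torch.Size([8, 2])
```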

4.13. Botnet Detection

The ever-growing IoT network in smart cities suffers from botnet attacks, in which a large number of IoT devices are infected by malware to execute repetitive malicious activities and launch cyber-attacks, such as Denial of Service (DoS), Distributed DoS (DDoS), or data theft, against critical smart city infrastructure [71,72]. For efficient and reliable botnet detection, ML and DL approaches have been explored as potential solutions in the literature (Figure 13). Injadat et al. [73] proposed a combined Bayesian Optimization Gaussian Process (BO-GP) algorithm and Decision Tree (DT) classifier for detecting botnet attacks on IoT devices. Popoola et al. [74] proposed a DL-based botnet detection system for resource-constrained IoT devices. The dimensionality of the large volume of network traffic data is reduced using an LSTM autoencoder (LAE) with a ReLU activation function and a learning rate of 0.001. A deep Bi-LSTM model, with six input neurons, four dense hidden layers, and an output layer, is then used for botnet detection on the low-dimensional feature set. A ReLU activation function is employed in the input and hidden layers, whereas sigmoid and softmax activation functions are used at the output layer for binary and multiclass classification, respectively.
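The following sketch mirrors the two-stage design of [74]: an LSTM autoencoder (LAE) compresses traffic-feature sequences, and a Bi-LSTM classifies on the reduced representation. The feature and latent dimensions, hidden sizes, and sequence length are illustrative assumptions; the paper's 0.001 learning rate would apply to training the LAE.

```python
# Two-stage botnet detection sketch: LSTM autoencoder + Bi-LSTM classifier.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, feats=32, latent=8):
        super().__init__()
        self.enc = nn.LSTM(feats, latent, batch_first=True)
        self.dec = nn.LSTM(latent, feats, batch_first=True)
    def forward(self, x):
        z, _ = self.enc(x)                 # low-dimensional feature sequence
        recon, _ = self.dec(z)             # reconstruction for autoencoder training
        return z, recon

class BotnetClassifier(nn.Module):
    def __init__(self, latent=8, num_classes=2):
        super().__init__()
        self.bilstm = nn.LSTM(latent, 16, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(32, num_classes)   # 2 x 16 for both directions
    def forward(self, z):
        _, (h_n, _) = self.bilstm(z)
        return self.fc(torch.cat([h_n[0], h_n[1]], dim=1))

x = torch.randn(4, 10, 32)                  # sequences of traffic features
z, recon = LSTMAutoencoder()(x)
print(BotnetClassifier()(z.detach()).shape)  # torch.Size([4, 2])
```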

5. Taxonomy of Technology-Enabled Smart City Applications in 6G Networks

In this section, we present a taxonomy of smart city applications for next-generation 6G networks, as shown in Figure 14. We base the taxonomy on the underlying technologies, i.e., IoT, HC, blockchain, XR, and edge-cloud computing, used by those applications empowered by AI, ML/DL, federated and distributed learning, and big data analytics paradigms. In the following, we describe the technologies along with the requirements, in terms of the network characteristics, of the applications using them.

5.1. Internet of Things (IoT)

The IoT is a network of connected devices, sensors, and users using internet technologies that can self-organize, sense and collect data, analyze the stored information, and react to a dynamic environment [75]. The number of connected IoT devices is expected to exceed 30 billion by 2025, accounting for over 70% of all connected devices. Figure 15 shows the growth of IoT and non-IoT devices over the years. The IoT can be further classified by application domain into the IoV, IoMT, IoRT, IoD, and IIoT.

5.1.1. Internet of Vehicles (IoV)

The IoV is a distributed network of mobile vehicles that have sensing, computing, and Internet Protocol (IP)-based communication capabilities [76]. The global IoV market is projected to reach $208,107 million by 2024, up from $66,075 million in 2017, with a CAGR of 18% between 2018 and 2024 [77]. The IoV network interconnects vehicles with pedestrians and urban infrastructure facilities such as the cloud and Roadside Units (RSUs). The IoV includes six types of communications for vehicles to receive and transmit data, as shown in Figure 16: (1) Vehicle-to-Vehicle (V2V), (2) Vehicle-to-Infrastructure (V2I), (3) Vehicle-to-Roadside (V2R), (4) Vehicle-to-Sensors (V2S), (5) Vehicle-to-Cloud (V2C), and (6) Vehicle-to-Pedestrian (V2P). Several vehicular applications have been developed for the IoV, such as intelligent parking systems, real-time navigation, traffic and accident alerts, facial recognition for autonomous driving, cooperative adaptive cruise control, and traffic signal violation detection. These applications have strict data rate and latency requirements that should be supported by the underlying 6G networks. For instance, autonomous driving involving multiple sensors may require a total data rate of 1 Gbps for V2V and V2X communications [78]. Furthermore, it requires a reliability of 99.999% [79], which cannot be obtained with the existing wireless communication systems. In addition, vehicular applications such as infotainment, e-toll collection, collision warning, autocruise, AR map navigation, and cooperative stability control have stringent latency requirements of 500 ms, 200 ms, 100 ms, 20 ms, 5 ms, and 1 ms, respectively [80]. The 6G networks should address the issues of limited spectrum, high latency, and low reliability prevailing in the current vehicular standard, i.e., IEEE 802.11p [81].

5.1.2. Internet of Medical Things (IoMT)

The IoMT is a distributed network of bio-medical sensors and devices that acquire, process, and transmit the bio-medical signals of patients. It integrates the communication protocol of the IoT with medical devices to enable remote patient monitoring and treatment. Its global market is expected to reach $172.4 billion by 2030 from $39.3 billion in 2020, at a CAGR of 15.9% from 2021 to 2030 [82]. The IoMT has several applications, such as the monitoring of patients with chronic diseases, monitoring of elderly people, disease prognosis and diagnosis, medical equipment and drug monitoring, drug anti-counterfeiting, and medical waste management. In the context of a pandemic such as COVID-19, the IoMT can be used for the detection, tracking, and monitoring of infected individuals and the prediction of infections [83]. The IoMT applications require ultra-low latency and high reliability for scenarios such as remote surgery. The tactile and haptic internet is the backbone for such scenarios, whose requirements are not completely fulfilled by the current wireless systems [84]. The tactile internet requires an end-to-end latency of the order of 1 ms and haptic feedback requires a latency of sub-milliseconds [85,86].

5.1.3. Internet of Robotic Things (IoRT)

The IoRT is a distributed network of intelligent robotic devices that can monitor events, fuse sensor data from multiple heterogeneous sources, and use local/distributed intelligence to take actions [87]. The IoRT market is expected to reach $1.44 billion by 2022, growing at a CAGR of 29.7% from 2016 to 2022 [88]. The IoRT has applications in several domains, such as agriculture, construction, logistics, transportation, banking, healthcare, home automation, and industrial automation [89]. Robotics and automation require real-time control to avoid oscillatory movements, with a maximum tolerable latency of 100 µs and round-trip times of 1 ms [17]. Moreover, industrial robotic automation requires a reliability of 99.999% [79].

5.1.4. Internet of Drones (IoD)

The IoD is a network of coordinated drones with communication capabilities among themselves, pedestrians, and ground infrastructure [90]. The global drone market is expected to reach $43.4 billion by 2027, with a CAGR of 12.56% between 2022 and 2027 [91]. The IoD applications include smart city surveillance, infrastructure monitoring and maintenance, search-and-rescue missions in the wake of natural/man-made hazards, logistics, traffic control, weather forecasting, disaster management, and live event streaming [92]. These applications require the tactile and haptic internet, with ultra-low latency, high data rate, and high reliability requirements.

5.1.5. Industrial Internet of Things (IIoT)

The IIoT refers to a network of connected machines and devices in industry for machine-to-machine (M2M) and machine-to-human (M2H) communications [93]. The IIoT market is expected to reach $197 billion by 2023, up from $115 billion in 2016, with a CAGR of 7.5% from 2017 to 2023 [94]. Applications involve predictive maintenance, quality control, safety management, and supply chain optimization. The IIoT sensors and devices are often placed in noisy environments to support mission-critical safety applications. These applications have stringent latency and reliability requirements for proper control decisions [95]. In some cases, the IIoT may require a reliability of 99.99999% [96], as information loss could be catastrophic in scenarios such as nuclear energy plants.

5.2. Holographic Communication (HC)

HC is the next evolution of 3D videos and images, capturing data from multiple sources to provide end users with an immersive 3D experience. The global holographic display market size is projected to reach $11.65 billion by 2030, up from $1.13 billion in 2020, i.e., a CAGR of 29.1% from 2021 to 2030 [97]. HC requires very high data rates and ultra-low latency: the bandwidth requirements for a human-sized hologram after data compression vary from tens of Mbps to 4.3 Tbps [98,99]. However, a high level of compression to reduce the bandwidth requirements leads to a high latency. For a seamless 3D experience, holograms require sub-millisecond latency [17,100]. Consequently, there is a tradeoff among the level of compression, computation bandwidth, and latency, which needs to be optimized by the network [101]. Furthermore, the network should be highly resilient in the case of HC to maintain a high QoS by assuring reliability and reducing jitter, packet loss, and latency. Considering the security requirements of HC in applications such as smart healthcare (e.g., remote surgery), the network must also be secured.

5.3. Extended Reality (XR)

Extended reality (XR) technologies encompass AR, Mixed Reality (MR), and VR applications. The current wireless communication technologies are unable to provide an immersive XR experience for users of applications such as 3D medical imaging, surgical training, immersive gaming, guided remote repair and maintenance, virtual property tours, e-commerce purchases, hands-on virtual learning, and virtual field trips for students. This is due to the inability of the currently deployed 5G technology to deliver ultra-low latency and very high data rates [102]. These XR applications are highly demanding in terms of communication and computation due to the incorporation of perceptual needs (human senses, physiology, and cognition). The envisioned 6G networks should ensure the Quality-of-Physical-Experience (QoPE) [103] for these XR applications by providing URLLC and eMBB services.

5.4. Blockchain

Blockchain is a decentralized peer-to-peer technology that eliminates the need for a centralized third party [104]. Each event in the network is recorded in a ledger that is replicated and synchronized among all network participants. A participant in the network owns a public–private key pair [105], which enables authentication [106,107] and allows transaction validation. The consensus, cryptographic, provenance, and finality characteristics of blockchain provide security, privacy, immutability, transparency, and traceability. Blockchain has shown potential in several applications, including healthcare [108,109,110,111,112], transportation [113,114], energy [115,116], education [117,118], and governance [119,120]. Figure 17 shows a blockchain-based integrated IoV-edge-cloud computing system, where the ledger is replicated at the edge and cloud servers. The events of different vehicular applications, such as autonomous driving, infotainment, and real-time navigation, are recorded as transactions in the ledger. The consensus protocol and ledger replication involved in blockchain require high-bandwidth, reliable, and low-latency communications between multiple nodes to reduce communication overhead [121,122,123]; blockchain thus requires a synergistic aggregation of URLLC and mMTC to provide ultra-low latency, reliability, and scalability.
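To make the ledger idea concrete, the following toy sketch chains blocks by hashing, so that tampering with any recorded vehicular event invalidates all subsequent blocks. It is illustrative only: it omits consensus, replication, and signatures, and does not represent any specific blockchain platform.

```python
# Toy hash-chained ledger; illustrative only (no consensus or signatures).
import hashlib
import json
import time

def block_hash(block):
    """Deterministic SHA-256 hash of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the previous block via its hash."""
    block = {"index": len(chain), "time": time.time(),
             "transactions": transactions,
             "prev_hash": block_hash(chain[-1]) if chain else "0" * 64}
    chain.append(block)
    return chain

chain = add_block([], [{"event": "autonomous-driving telemetry", "vehicle": "V1"}])
chain = add_block(chain, [{"event": "toll payment", "vehicle": "V1"}])

# Verify integrity: each block's prev_hash must match its predecessor's hash
assert all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
           for i in range(1, len(chain)))
```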

5.5. Edge-Cloud Computing

Cloud computing is a technological paradigm that enables on-demand access to a shared pool of configurable computing, storage, and network resources over the internet [124]. It is based on a pay-per-use model and can be provisioned with minimal management effort. With the emergence of IoT and big data analytics applications in various domains such as healthcare [125,126,127], education [128,129,130], transportation [131], banking [132,133], energy utilities [134,135], and entertainment [136,137], cloud computing provides a sandbox for data processing and storage, enabling the deployment of compute-intensive smart city applications [138]. However, considering the distance between the IoT devices and the remote cloud servers, the latency requirements of time-critical applications may be violated. Consequently, mobile edge computing has been introduced, which provides computing and storage resources close to the IoT devices. For applications with low latency requirements, the service request can be directed to the nearest edge computing site. However, the computing capabilities of edge servers are low compared to those of the remote cloud servers, leading to high processing times for compute-intensive applications. Thus, an integrated edge-cloud computing system is often used to handle compute-intensive and/or time-critical applications [139,140]. The underlying 6G networks should consider the energy efficiency [141,142,143,144,145,146,147,148], optimal resource provisioning and scheduling [149,150,151,152], and context-aware application partitioning [153,154,155,156,157,158,159,160,161,162,163,164] requirements of this integrated system.
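A toy sketch of the placement logic described above follows: requests with tight latency budgets go to the nearest edge site (or are partitioned between edge and cloud when they exceed edge capacity), while latency-tolerant requests use the cloud. The latency threshold and the inputs are illustrative assumptions.

```python
# Toy edge-vs-cloud placement logic; thresholds and inputs are illustrative.
def place_request(latency_budget_ms, compute_demand, edge_capacity):
    if latency_budget_ms < 10 and compute_demand <= edge_capacity:
        return "edge"          # time-critical and fits on the edge server
    if latency_budget_ms < 10:
        return "edge+cloud"    # partition: latency-critical part stays on edge
    return "cloud"             # latency-tolerant, use elastic cloud resources

print(place_request(5, 2.0, 4.0))    # "edge"
print(place_request(5, 8.0, 4.0))    # "edge+cloud"
print(place_request(200, 8.0, 4.0))  # "cloud"
```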
In summary, smart city applications have stringent requirements, in terms of ultra-low latency, high data rates, reliability, energy efficiency, and security, that should be fulfilled by the next-generation 6G networks. These applications, underpinned by emerging technologies such as the IoT, HC, blockchain, XR, and edge-cloud computing, offer great potential for unprecedented services to citizens. However, smart urbanism [165] is seen as critical to the success of a smart city. Governments should put plans in place to address some issues inherent to the deployment of AI-sensing and data-driven smart applications. In particular, privacy concerns should be addressed, as personal data is collected continuously. In addition, people’s fear that this paradigm shift to smart cities may generate unemployment in some professions should be dealt with, as should worries about losing face-to-face interactions with government entities, which would rely on sensing and digital devices to collect personal data for improving services. Consequently, smart urbanism advocates incremental changes to cities rather than massive ones.

6. Future Directions

While the 5G mobile network generation is just starting to be deployed, there are already plans to design the new 6G networks. This is because of the proliferation of diversified smart city applications that are extensively distributed and more intelligent than ever before, thanks to the emergence of AI, big data analytics, federated and distributed learning, the IoT, edge-cloud computing, and blockchain. The currently in-deployment 5G networks will not be capable of meeting the heterogeneous and stringent requirements of these applications, in terms of efficiency, real-time operation, and reliability, under ever-increasing traffic demands. For instance, 5G is incapable of delivering the ultra-low latency and high data rates needed by holographic applications, which demand data rates of up to 4.3 Tbps. In contrast to the previous network generations, 6G is expected to support numerous connected and intelligent applications with stringent requirements in terms of high data rates, high energy efficiency, ultra-low latency, and very broad frequency bands. Considering the requirements of 6G networks and smart city applications, AI will be the dominant enabler in the network, middleware, and application layers, as shown in Figure 18. Current research practices focus on self-learning 6G networks, AI-enabled middleware, or AI applications in smart city digital ecosystems. AI with self-learning capabilities empowers 6G networks to be intelligent, agile, flexible, and adaptive by providing functionalities for channel estimation, modulation recognition, network traffic classification and prediction, intelligent routing, radio resource management, fault management, network energy optimization, and intrusion, botnet, and traffic anomaly detection. Furthermore, at the middleware layer, AI can aid in the scheduling of smart city applications’ requests, computing resource management, computation and communication energy optimization, application performance optimization, context-aware data caching, and fault tolerance and data availability. With regard to smart city applications, AI can benefit the evolving applications within emerging technological paradigms such as the IoV, IoMT, IoD, IoRT, IIoT, HC, XR, cloud computing, edge-cloud computing, and blockchain.
Research in the following directions is required for the realization of AI-enabled smart city applications in self-learning 6G networks.
  • Automated AI frameworks: In 6G networks, a massive amount of data will be generated at the network, middleware, and application layers. This dynamic environment requires continuous updates of the AI learning models' parameters. Given that ultra-low latency is a key 6G requirement, tuning these parameters using traditional grid search or meta-heuristic approaches may introduce a computational overhead that degrades the performance of smart city applications and the underlying 6G networks. Consequently, there is a need for automated AI frameworks that select optimal model parameters based on the application context and network dynamics (see the first sketch after this list).
  • AI frameworks integration: The self-learning 6G networks in the smart city digital ecosystem will comprise numerous AI models at the network, middleware, and application layers. In this dynamic environment, the outputs of the learning models at the application and middleware layers should be fed as inputs to the learning models at the network layer (see the second sketch after this list). High flexibility and scalability of the AI learning frameworks are crucial for supporting a large number of interactions between the learning models at different layers and for providing dependable services in real time. Consequently, further research is required on integrating dependable, flexible, and scalable learning frameworks for smart city applications in 6G networks.
  • Performance of AI models: In 6G networks, maintaining the accuracy of AI models that process high-dimensional dynamic data at the network, middleware, and application layers is crucial. However, these AI models, deep learning and meta-heuristics in particular, have high computational complexity and long convergence times. This hinders the deployment of applications with ultra-low latency requirements such as robotics and automation, collision warning in the IoV, and AR map navigation. Furthermore, computationally expensive AI models have high energy consumption. Consequently, further research is required on designing efficient AI approaches that improve computation efficiency and reduce energy consumption (see the third sketch after this list).
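For the first direction, the sketch below contrasts an exhaustive grid with a budget-bounded random search over two hyperparameters; the surrogate validation-loss function and the value ranges are illustrative stand-ins for a real training/validation loop.

```python
# Hypothetical automated hyperparameter selection via random search.
import itertools
import random

def validation_loss(lr: float, batch_size: int) -> float:
    # Illustrative surrogate with a minimum near lr=0.01, batch_size=64.
    return (lr - 0.01) ** 2 * 1e4 + abs(batch_size - 64) / 64.0

lrs = [10.0 ** e for e in (-4, -3, -2, -1)]
batches = [16, 32, 64, 128, 256]

# Exhaustive grid: cost multiplies with every added hyperparameter.
grid_best = min(itertools.product(lrs, batches),
                key=lambda p: validation_loss(*p))

# Random search: a fixed trial budget, independent of the grid size --
# better suited to models that must be re-tuned continuously online.
random.seed(0)
trials = [(random.choice(lrs), random.choice(batches)) for _ in range(8)]
rand_best = min(trials, key=lambda p: validation_loss(*p))

print("grid best:", grid_best, "| random best:", rand_best)
```

Automated frameworks would extend this idea with adaptive strategies, such as Bayesian optimization, that prune unpromising configurations early.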
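For the second direction, the following sketch shows, under simplifying assumptions, how the outputs of application- and middleware-layer models could feed a network-layer allocator; the three placeholder functions stand in for trained learners and do not represent a standardized interface.

```python
# Hypothetical cross-layer data flow between learning models.
from typing import Dict, List

def app_layer_demand_model(history: List[float]) -> float:
    """Application layer: predict next-interval traffic demand (Mbps)."""
    return sum(history[-3:]) / 3.0  # stand-in for an LSTM/CNN predictor

def middleware_priority_model(app_type: str) -> float:
    """Middleware layer: map the application class to a scheduling weight."""
    weights = {"holographic": 1.0, "iov": 0.8, "best_effort": 0.2}
    return weights.get(app_type, 0.5)

def network_layer_allocator(demand_mbps: float, priority: float,
                            capacity_mbps: float) -> float:
    """Network layer: allocate bandwidth from the higher-layer outputs."""
    return min(demand_mbps * priority, capacity_mbps)

# Outputs flow downward through the layers at each control interval.
features: Dict[str, float] = {
    "demand": app_layer_demand_model([80.0, 95.0, 110.0]),
    "priority": middleware_priority_model("holographic"),
}
print(network_layer_allocator(features["demand"], features["priority"],
                              capacity_mbps=100.0))  # 95.0 Mbps
```

The open research question is how to make such pipelines dependable and scalable when thousands of models interact in real time.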
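For the third direction, one widely studied efficiency technique is magnitude-based weight pruning, sketched below on a random weight matrix; the matrix and the 90% sparsity target are illustrative.

```python
# Hypothetical magnitude-based pruning to cut inference computation.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(42)
w = rng.normal(size=(256, 256))  # stand-in for a trained layer
w_sparse = prune_by_magnitude(w, 0.9)

# ~90% of multiply-accumulates can now be skipped by a sparse kernel.
print("nonzero fraction:", np.count_nonzero(w_sparse) / w_sparse.size)
```

Comparable savings are pursued through quantization, knowledge distillation, and early-exit architectures, all trading a small accuracy loss for lower latency and energy.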

7. Summary and Conclusions

The design of agile, flexible, and self-learning 6G networks is envisioned to support emerging distributed, dynamic, and intelligent smart city applications. AI is expected to play an important role in smart city applications as well as in 6G networks. In this paper, we provide a temporal evolution of wireless network generations, capturing the technological and application requirements that led to the development of each network generation over a certain period. In addition, we adopt a holistic approach, providing taxonomies for AI-enabled 6G networks and technology-enabled smart city applications. For the 6G networks, we highlight the employed self-learning models. Furthermore, for the application use cases, we provide the QoS and SLA requirements that should be considered for the deployment of these applications. Finally, we discuss research directions toward intelligent and integrated computing, communication, coordination, and decision-making smart city digital ecosystems in 6G networks.

Author Contributions

Conceptualization, L.I.; methodology, L.I.; investigation, L.I.; writing—original draft preparation, L.I.; writing—review and editing, L.I. and R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Water and Energy Center of the United Arab Emirates University (Grant 31R215).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Buyya, R.; Dastjerdi, A.V. Internet of Things: Principles and Paradigms; Elsevier Science: Amsterdam, The Netherlands, 2016. [Google Scholar]
  2. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach; Prentice Hall: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  3. AbdulRahman, S.; Tout, H.; Ould-Slimane, H.; Mourad, A.; Talhi, C.; Guizani, M. A survey on federated learning: The journey from centralized to distributed on-site learning and beyond. IEEE Internet Things J. 2021, 8, 5476–5497. [Google Scholar] [CrossRef]
  4. Chen, M.; Mao, S.; Liu, Y. Big data: A survey. Mob. Netw. Appl. 2014, 19, 171–209. [Google Scholar] [CrossRef]
  5. Ismail, L.; Materwala, H. A Review of Blockchain Architecture and Consensus Protocols: Use Cases, Challenges, and Solutions. Symmetry 2019, 11, 1198. [Google Scholar] [CrossRef] [Green Version]
  6. Cao, K.; Hu, S.; Shi, Y.; Colombo, A.W.; Karnouskos, S.; Li, X. A Survey on Edge and Edge-Cloud Computing Assisted Cyber-Physical Systems. IEEE Trans. Ind. Inform. 2021, 17, 7806–7819. [Google Scholar] [CrossRef]
  7. Burns, R.; Fast, V.; Levenda, A.; Miller, B. Smart cities: Between worlding and provincialising. Urban Stud. 2021, 58, 461–470. [Google Scholar] [CrossRef]
  8. Materwala, H.; Ismail, L.; Shubair, R.M.; Buyya, R. Energy-SLA-aware genetic algorithm for edge–cloud integrated computation offloading in vehicular networks. Future Gener. Comput. Syst. 2022, 135, 205–222. [Google Scholar] [CrossRef]
  9. Zeng, X.; Garg, S.; Barika, M.; Zomaya, A.Y.; Wang, L.; Villari, M.; Chen, D.; Ranjan, R. SLA Management for Big Data Analytical Applications in Clouds: A Taxonomy Study. ACM Comput. Surv. 2021, 53, 1–40. [Google Scholar] [CrossRef]
  10. Ismail, L.; Materwala, H.; Hassanein, H.S. QoS-SLA-Aware Adaptive Genetic Algorithm for Multi-Request Offloading in Integrated Edge-Cloud Computing in Internet of Vehicles. arXiv 2022, arXiv:2202.01696. [Google Scholar]
  11. Ismail, L.; Materwala, H. IoT-Edge-Cloud Computing Framework for QoS-Aware Computation Offloading in Autonomous Mobile Agents: Modeling and Simulation. In Proceedings of the International Conference on Mobile, Secure, and Programmable Networking, Paris, France, 28–29 October 2020; pp. 161–176. [Google Scholar]
  12. Ismail, L.; Materwala, H. Computing Server Power Modeling in a Data Center: Survey, Taxonomy, and Performance Evaluation. ACM Comput. Surv. 2020, 53, 1–34. [Google Scholar] [CrossRef]
  13. Liu, Y.; Wei, X.; Xiao, J.; Liu, Z.; Xu, Y.; Tian, Y. Energy consumption and emission mitigation prediction based on data center traffic and PUE for global data centers. Glob. Energy Interconnect. 2020, 3, 272–282. [Google Scholar] [CrossRef]
  14. Sunbird. How Much Does It Cost to Power One Rack in a Data Center? 2020. Available online: https://www.sunbirddcim.com/blog/how-much-does-it-cost-power-one-rack-data-center (accessed on 12 June 2022).
  15. Belkhir, L.; Elmeligi, A. Assessing ICT global emissions footprint: Trends to 2040 & recommendations. J. Clean. Prod. 2018, 177, 448–463. [Google Scholar]
  16. Akhtar, M.W.; Hassan, S.A.; Ghaffar, R.; Jung, H.; Garg, S.; Hossain, M.S. The shift to 6G communications: Vision and requirements. Hum.-Cent. Comput. Inf. Sci. 2020, 10, 1–27. [Google Scholar] [CrossRef]
  17. Tataria, H.; Shafi, M.; Molisch, A.F.; Dohler, M.; Sjöland, H.; Tufvesson, F. 6G wireless systems: Vision, requirements, challenges, insights, and opportunities. Proc. IEEE 2021, 109, 1166–1199. [Google Scholar] [CrossRef]
  18. Yang, H.; Alphones, A.; Xiong, Z.; Niyato, D.; Zhao, J.; Wu, K. Artificial-intelligence-enabled intelligent 6G networks. IEEE Netw. 2020, 34, 272–280. [Google Scholar] [CrossRef]
  19. Letaief, K.B.; Chen, W.; Shi, Y.; Zhang, J.; Zhang, Y.-J.A. The roadmap to 6G: AI empowered wireless networks. IEEE Commun. Mag. 2019, 57, 84–90. [Google Scholar] [CrossRef] [Green Version]
  20. Letaief, K.B.; Shi, Y.; Lu, J.; Lu, J. Edge Artificial Intelligence for 6G: Vision, Enabling Technologies, and Applications. IEEE J. Sel. Areas Commun. 2021, 40, 5–36. [Google Scholar] [CrossRef]
  21. Zhang, S.; Zhu, D. Towards artificial intelligence enabled 6G: State of the art, challenges, and opportunities. Comput. Netw. 2020, 183, 107556. [Google Scholar] [CrossRef]
  22. Del Peral-Rosado, J.A.; Raulefs, R.; López-Salcedo, J.A.; Seco-Granados, G. Survey of cellular mobile radio localization methods: From 1G to 5G. IEEE Commun. Surv. Tutor. 2017, 20, 1124–1148. [Google Scholar] [CrossRef]
  23. Gawas, A.U. An overview on evolution of mobile wireless communication networks: 1G–6G. Int. J. Recent Innov. Trends Comput. Commun. 2015, 3, 3130–3133. [Google Scholar]
  24. Niehenke, E.C. Wireless communications: Present and future: Introduction to focused issue articles. IEEE Microw. Mag. 2014, 15, 26–35. [Google Scholar] [CrossRef]
  25. Dekleva, S.; Shim, J.P.; Varshney, U.; Knoerzer, G. Evolution and emerging issues in mobile wireless networks. Commun. ACM 2007, 50, 38–43. [Google Scholar] [CrossRef]
  26. Hui, S.Y.; Yeung, K.H. Challenges in the migration to 4G mobile systems. IEEE Commun. Mag. 2003, 41, 54–59. [Google Scholar] [CrossRef]
  27. Bos, L.; Leroy, S. Toward an all-IP-based UMTS system architecture. IEEE Netw. 2001, 15, 36–45. [Google Scholar] [CrossRef]
  28. Clerckx, B.; Lozano, A.; Sesia, S.; van Rensburg, C.; Papadias, C.B. 3GPP LTE and LTE-Advanced. EURASIP Wirel. Commun. Netw. J. 2009, 2009, 1–167. [Google Scholar] [CrossRef] [Green Version]
  29. Gupta, A.; Jha, R.K. A survey of 5G network: Architecture and emerging technologies. IEEE Access 2015, 3, 1206–1232. [Google Scholar] [CrossRef]
  30. Shafi, M.; Molisch, A.F.; Smith, P.J.; Haustein, T.; Zhu, P.; De Silv, P.; Tufvesson, F.; Benjebbour, A.; Wunder, G. 5G: A tutorial overview of standards, trials, challenges, deployment, and practice. IEEE J. Sel. Areas Commun. 2017, 35, 1201–1221. [Google Scholar] [CrossRef]
  31. Huawei. Huawei 5.5G. 2020. Available online: https://www.huawei.com/en/news/2020/11/mbbf-shanghai-huawei-david-wang-5dot5g (accessed on 24 March 2022).
  32. Zhang, Z.; Xiao, Y.; Ma, Z.; Xiao, M.; Ding, Z.; Lei, X.; Karagiannidis, G.K.; Fan, P. 6G wireless networks: Vision, requirements, architecture, and key technologies. IEEE Veh. Technol. Mag. 2019, 14, 28–41. [Google Scholar] [CrossRef]
  33. Ismail, L.; Hagimont, D.; Mossiere, J. Evaluation of the mobile agents technology: Comparison with the Client/Server Paradigm. Inf. Sci. Technol. 2000, 19, 1–17. [Google Scholar]
  34. Ismail, L.; Belkkhouche, B. Full and autonomic mobility management for Mobile agents. In Proceedings of the First International Conference on Advances in Future Internet, Athens, Greece, 18–23 June 2009; pp. 31–38. [Google Scholar]
  35. Hagimont, D.; Ismail, L. Agents mobiles et client/serveur: Évaluation de performance et comparaison. Tech. Sci. Inform. 2000, 19, 1223–1244. [Google Scholar]
  36. Luong, N.C.; Hoang, D.T.; Gong, S.; Niyato, D.; Wang, P.; Liang, Y.-C.; Kim, D.I. Applications of deep reinforcement learning in communications and networking: A survey. IEEE Commun. Surv. Tutor. 2019, 21, 3133–3174. [Google Scholar] [CrossRef] [Green Version]
  37. Strinati, E.C.; Barbarossa, S.; Gonzalez-Jimenez, J.L.; Ktenas, D.; Cassiau, N.; Maret, L.; Dehos, C. 6G: The next frontier: From holographic messaging to artificial intelligence using subterahertz and visible light communication. IEEE Veh. Technol. Mag. 2019, 14, 42–50. [Google Scholar] [CrossRef]
  38. Akyildiz, I.F.; Jornet, J.M.; Han, C. Terahertz band: Next frontier for wireless communications. Phys. Commun. 2014, 12, 16–32. [Google Scholar] [CrossRef]
  39. Sarieddeen, H.; Alouini, M.-S.; Al-Naffouri, T.Y. Terahertz-band ultra-massive spatial modulation MIMO. IEEE J. Sel. Areas Commun. 2019, 37, 2040–2052. [Google Scholar] [CrossRef] [Green Version]
  40. Basar, E. Transmission through large intelligent surfaces: A new frontier in wireless communications. In Proceedings of the 2019 European Conference on Networks and Communications (EuCNC), Valencia, Spain, 18–21 June 2019; pp. 112–117. [Google Scholar]
  41. Ye, H.; Li, G.Y.; Juang, B.-H. Power of deep learning for channel estimation and signal detection in OFDM systems. IEEE Wirel. Commun. Lett. 2017, 7, 114–117. [Google Scholar] [CrossRef]
  42. Gao, H.M.; Zhong, C.; Li, G.Y.; Zhang, Z. An attention-aided deep learning framework for massive MIMO channel estimation. IEEE Trans. Wirel. Commun. 2021, 21, 1823–1835. [Google Scholar] [CrossRef]
  43. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  44. Zhang, M.; Zeng, Y.; Han, Z.; Gong, Y. Automatic modulation recognition using deep learning architectures. In Proceedings of the 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Kalamata, Greece, 25–28 June 2018. [Google Scholar]
  45. Yang, C.; He, Z.; Peng, Y.; Wang, Y.; Yang, J. Deep learning aided method for automatic modulation recognition. IEEE Access 2019, 7, 109063–109068. [Google Scholar] [CrossRef]
  46. Shi, Q.L.; Li, K.; Lin, Y. Signal Modulation Recognition Method Based on Differential Privacy Federated Learning. Wirel. Commun. Mob. Comput. 2021, 2021, 2537546. [Google Scholar] [CrossRef]
  47. Finsterbusch, M.; Richter, C.; Rocha, E.; Muller, J.-A.; Hanssgen, K. A survey of payload-based traffic classification approaches. IEEE Commun. Surv. Tutor. 2013, 16, 1135–1156. [Google Scholar] [CrossRef]
  48. Ren, X.; Gu, H.; Wei, W. Tree-RNN: Tree structural recurrent neural network for network traffic classification. Expert Syst. Appl. 2021, 167, 114363. [Google Scholar] [CrossRef]
  49. Lopez-Martin, M.; Carro, B.; Sanchez-Esguevillas, A.; Lloret, J. Network traffic classifier with convolutional and recurrent neural networks for Internet of Things. IEEE Access 2017, 5, 18042–18050. [Google Scholar] [CrossRef]
  50. Vinayakumar, R.; Soman, K.; Poornachandran, P. Applying deep learning approaches for network traffic prediction. In Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Manipal, India, 13–16 September 2017; pp. 2353–2358. [Google Scholar]
  51. Aloraifan, D.; Ahmad, I.; Alrashed, E. Deep learning based network traffic matrix prediction. Int. J. Intell. Netw. 2021, 2, 46–56. [Google Scholar] [CrossRef]
  52. Liu, D.; Chen, B.; Yang, C.; Molisch, A.F. Caching at the wireless edge: Design aspects, challenges, and future directions. IEEE Commun. Mag. 2016, 54, 22–28. [Google Scholar] [CrossRef] [Green Version]
  53. Ismail, L. Implementation and performance of a dynamic-content based cache for a backend database server. In Proceedings of the IASTED International Conference on Software Engineering, Innsbruck, Austria, 12–14 February 2008; pp. 26–30. [Google Scholar]
  54. Jiang, F.; Yuan, Z.; Sun, C.; Wang, J. Deep Q-learning-based content caching with update strategy for fog radio access networks. IEEE Access 2019, 7, 97505–97514. [Google Scholar] [CrossRef]
  55. Yu, Z.; Hu, J.; Min, G.; Wang, Z.; Miao, W.; Li, S. Privacy-preserving federated deep learning for cooperative hierarchical caching in fog computing. IEEE Internet Things J. 2021. [Google Scholar] [CrossRef]
  56. Tang, F.; Mao, B.; Fadlullah, Z.; Kato, N.; Akashi, O.; Inoue, T.; Mizutani, K. On removing routing protocol from future wireless networks: A real-time deep learning approach for intelligent traffic control. IEEE Wirel. Commun. 2017, 25, 154–160. [Google Scholar] [CrossRef]
  57. Liu, W.; Cai, J.; Chen, Q.C.; Wang, Y. DRL-R: Deep reinforcement learning approach for intelligent routing in software-defined data-center networks. J. Netw. Comput. Appl. 2021, 177, 102865. [Google Scholar] [CrossRef]
  58. Shen, Y.; Shi, Y.; Zhang, J.; Letaief, K.B. Graph neural networks for scalable radio resource management: Architecture design and theoretical analysis. IEEE J. Sel. Areas Commun. 2020, 39, 101–115. [Google Scholar] [CrossRef]
  59. Zhang, H.; Zhang, H.; Long, K.; Karagiannidis, G.K. Deep learning based radio resource management in NOMA networks: User association, subchannel and power allocation. IEEE Trans. Netw. Sci. Eng. 2020, 7, 2406–2415. [Google Scholar] [CrossRef]
  60. Regin, R.; Rajest, S.S.; Singh, B. Fault Detection in Wireless Sensor Network Based on Deep Learning Algorithms. EAI Trans. Scalable Inf. Syst. 2021. [Google Scholar] [CrossRef]
  61. Swain, R.R.; Khilar, P.M.; Dash, T. Multifault diagnosis in WSN using a hybrid metaheuristic trained neural network. Digit. Commun. Netw. 2020, 6, 86–100. [Google Scholar] [CrossRef]
  62. Kumar, Y.; Farooq, H.; Imran, A. Fault prediction and reliability analysis in a real cellular network. In Proceedings of the 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC), Valencia, Spain, 26–30 June 2017; pp. 1090–1095. [Google Scholar]
  63. Zhao, Z.; Karimzadeh, M.; Pacheco, L.; Santos, H.; Rosário, D.; Braun, T.; Cerqueira, E. Mobility management with transferable reinforcement learning trajectory prediction. IEEE Trans. Netw. Serv. Manag. 2020, 17, 2102–2116. [Google Scholar] [CrossRef]
  64. Klus, R.; Klus, L.; Solomitckii, D.; Talvitie, J.; Valkama, M. Deep learning-based cell-level and beam-level mobility management system. Sensors 2020, 20, 7124. [Google Scholar] [CrossRef] [PubMed]
  65. Wei, Y.; Yu, F.R.; Song, M.; Han, Z. User scheduling and resource allocation in HetNets with hybrid energy supply: An actor-critic reinforcement learning approach. IEEE Trans. Wirel. Commun. 2017, 17, 680–692. [Google Scholar] [CrossRef]
  66. Kong, P.-Y.; Panaitopol, D. Reinforcement learning approach to dynamic activation of base station resources in wireless networks. In Proceedings of the 2013 IEEE 24th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), London, UK, 8–11 September 2013; pp. 3264–3268. [Google Scholar]
  67. Sharifi, A.M.; Amirgholipour, S.K.; Pourebrahimi, A. Intrusion Detection Based on Joint of K-Means and KNN. J. Converg. Inf. Technol. 2015, 10, 42–51. [Google Scholar]
  68. Yin, B.; Zhu, Y.; Fei, J.; He, X. A deep learning approach for intrusion detection using recurrent neural networks. IEEE Access 2017, 5, 21954–21961. [Google Scholar] [CrossRef]
  69. Kim, T.Y.; Cho, S.B. Web traffic anomaly detection using C-LSTM neural networks. Expert Syst. Appl. 2018, 106, 66–76. [Google Scholar] [CrossRef]
  70. Naseer, S.; Saleem, Y.; Khalid, S.; Bashir, M.K.; Han, J.; Iqbal, M.M.; Han, K. Enhanced network anomaly detection based on deep neural networks. IEEE Access 2018, 6, 48231–48246. [Google Scholar] [CrossRef]
  71. Vormayr, G.; Zseby, T.; Fabini, J. Botnet communication patterns. IEEE Commun. Surv. Tutor. 2017, 19, 2768–2796. [Google Scholar] [CrossRef]
  72. Koroniotis, N.; Moustafa, N.; Sitnikova, E.; Turnbull, B. Towards the development of realistic botnet dataset in the internet of things for network forensic analytics: Bot-iot dataset. Future Gener. Comput. Syst. 2019, 100, 779–796. [Google Scholar] [CrossRef] [Green Version]
  73. Injadat, M.; Moubayed, A.; Shami, A. Detecting botnet attacks in IoT environments: An optimized machine learning approach. In Proceedings of the 2020 32nd International Conference on Microelectronics (ICM), Aqaba, Jordan, 14–17 December 2020; pp. 1–4. [Google Scholar]
  74. Popoola, S.I.; Adebisi, B.; Hammoudeh, M.; Gui, G.; Gacanin, H. Hybrid deep learning for botnet attack detection in the internet-of-things networks. IEEE Internet Things J. 2020, 8, 4944–4956. [Google Scholar] [CrossRef]
  75. Nord, J.H.; Koohang, A.; Paliszkiewicz, J. The Internet of Things: Review and theoretical framework. Expert Syst. Appl. 2019, 133, 97–108. [Google Scholar] [CrossRef]
  76. Sharma, S.; Kaushik, B. A survey on internet of vehicles: Applications, security issues & solutions. Veh. Commun. 2019, 20, 100182. [Google Scholar]
  77. Allied Market Research. Internet of Vehicle Market Growth. 2018. Available online: https://www.alliedmarketresearch.com/internet-of-vehicles-market#:~:text=The%20global%20internet%20of%20vehicles,18.00%25%20from%202018%20to%202024 (accessed on 25 March 2022).
  78. Heath, R.; Gonzalez-Prelcic, N. Vehicle-to-everything (V2X) communication in 5G and beyond. In Proceedings of the IEEE International Conference on Communications (ICC), online, 7–11 June 2020. [Google Scholar]
  79. Schotten, H.D.; Sattiraju, R.; Serrano, D.G.; Ren, Z.; Fertl, P. Availability indication as key enabler for ultra-reliable communication in 5G. In Proceedings of the 2014 European Conference on Networks and Communications (EuCNC), Bologna, Italy, 23–26 June 2014; pp. 1–5. [Google Scholar]
  80. Katsaros, K.; Dianati, M. A conceptual 5G vehicular networking architecture. In 5G Mobile Communications; Springer: Berlin/Heidelberg, Germany, 2017; pp. 595–623. [Google Scholar]
  81. Shah, S.A.A.; Ahmed, E.; Imran, M.; Zeadally, S. 5G for vehicular communications. IEEE Commun. Mag. 2018, 56, 111–117. [Google Scholar] [CrossRef]
  82. Precedence Research. Internet of Medical Things Market Growth. 2021. Available online: https://www.precedenceresearch.com/internet-of-medical-things-market (accessed on 25 March 2022).
  83. Aman, A.H.M.; Hassan, W.H.; Sameen, S.; Attarbashi, Z.S.; Alizadeh, M.; Latiff, L.A. IoMT amid COVID-19 pandemic: Application, architecture, technology, and security. J. Netw. Comput. Appl. 2021, 174, 102886. [Google Scholar]
  84. Antonakoglou, K.; Xu, X.; Steinbach, E.; Mahmoodi, T.; Dohler, M. Toward haptic communications over the 5G tactile Internet. IEEE Commun. Surv. Tutor. 2018, 20, 3034–3059. [Google Scholar] [CrossRef] [Green Version]
  85. Fettweis, G.P. The tactile internet: Applications and challenges. IEEE Veh. Technol. Mag. 2014, 9, 64–70. [Google Scholar] [CrossRef]
  86. Aijaz, A.; Dohler, M.; Aghvami, A.H.; Friderikos, V.; Frodigh, M. Realizing the tactile Internet: Haptic communications over next generation 5G cellular networks. IEEE Wirel. Commun. 2016, 24, 82–89. [Google Scholar] [CrossRef] [Green Version]
  87. Ray, P. Internet of robotic things: Concept, technologies, and challenges. IEEE Access 2016, 4, 9489–9500. [Google Scholar] [CrossRef]
  88. Research and Markets. Internet of Robotic Things Market Growth. 2016. Available online: https://www.researchandmarkets.com/reports/3873998/internet-of-robotic-things-market-by-component (accessed on 25 March 2022).
  89. Vermesan, O.; Bahr, R.; Ottella, M.; Serrano, M.; Karlsen, T.; Wahlstrøm, T.; Sand, H.E.; Ashwathnarayan, M.; Gamba, M.T. Internet of robotic things intelligent connectivity and platforms. Front. Robot. AI 2020, 7, 104. [Google Scholar] [CrossRef]
  90. Abualigah, L.; Diabat, A.; Sumari, P.; Gandomi, A.H. Applications, deployments, and integration of internet of drones (iod): A review. IEEE Sens. J. 2021, 21, 25532–25546. [Google Scholar] [CrossRef]
  91. IMARC Group. Drones Market Growth. 2021. Available online: https://www.imarcgroup.com/drones-market (accessed on 25 March 2022).
  92. Boccadoro, P.; Striccoli, D.; Grieco, L.A. An extensive survey on the Internet of Drones. Ad Hoc Netw. 2021, 122, 102600. [Google Scholar] [CrossRef]
  93. Khan, W.Z.; Rehman, M.; Zangoti, H.M.; Afzal, M.K.; Armi, N.; Salah, K. Industrial internet of things: Recent advances, enabling technologies and open challenges. Comput. Electr. Eng. 2020, 81, 106522. [Google Scholar] [CrossRef]
  94. Allied Market Research. Industrial Internet of Things Market Growth. 2018. Available online: https://www.alliedmarketresearch.com/industrial-internet-of-things-IIOT-market (accessed on 25 March 2022).
  95. Sisinni, E.; Saifullah, A.; Han, S.; Jennehag, U.; Gidlund, M. Industrial internet of things: Challenges, opportunities, and directions. IEEE Trans. Ind. Inform. 2018, 14, 4724–4734. [Google Scholar] [CrossRef]
  96. Vega, M.T.; Mehmli, T.; van der Hooft, J.; Wauters, T.; De Turck, F. Enabling virtual reality for the tactile Internet: Hurdles and opportunities. In Proceedings of the 2018 14th International Conference on Network and Service Management (CNSM), Rome, Italy, 5–9 November 2018; pp. 378–383. [Google Scholar]
  97. Allied Market Research. Holographic Display Market Growth. 2021. Available online: https://www.alliedmarketresearch.com/holographic-display-market-A12501 (accessed on 25 March 2022).
  98. Clemm, A.; Vega, M.T.; Ravuri, H.K.; Wauters, T.; De Turck, F. Toward truly immersive holographic-type communication: Challenges and solutions. IEEE Commun. Mag. 2020, 58, 93–99. [Google Scholar] [CrossRef]
  99. Li, R. Network 2030: Market Drivers and Prospects. In Proceedings of the 1st ITU Workshop on Network 2030, Brooklyn, NY, USA, 2 October 2018. [Google Scholar]
  100. Matsubayashi, A.; Makino, Y.; Shinoda, H. Direct finger manipulation of 3D object image with ultrasound haptic feedback. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–11. [Google Scholar]
  101. Shahbazi, M.; Atashzar, S.F.; Patel, R.V. A systematic review of multilateral teleoperation systems. IEEE Trans. Haptics 2018, 11, 338–356. [Google Scholar] [CrossRef]
  102. Mahmoud, H.H.H.; Amer, A.A.; Ismail, T. 6G: A comprehensive survey on technologies, applications, challenges, and research problems. Trans. Emerg. Telecommun. Technol. 2021, 32, 1–11. [Google Scholar] [CrossRef]
  103. Saad, W.; Bennis, M.; Chen, M. A vision of 6G wireless systems: Applications, trends, technologies, and open research problems. IEEE Netw. 2019, 34, 134–142. [Google Scholar] [CrossRef] [Green Version]
  104. Alasbali, N.; Azzuhri, S.R.B.; Salleh, R.B.; Kiah, M.L.M.; Shariffuddin, A.A.A.S.A.; Kamel, N.M.I.b.N.M.; Ismail, L. Rules of Smart IoT Networks within Smart Cities towards Blockchain Standardization. Mob. Inf. Syst. 2022, 2022, 9109300. [Google Scholar] [CrossRef]
  105. Ismail, L.; Barka, E. Key distribution framework for a mobile agent platform. In Proceedings of the 2008 The Second International Conference on Next Generation Mobile Applications, Services, and Technologies, Cardiff, UK, 16–19 September 2008. [Google Scholar]
  106. Ismail, L. Authentication Mechanisms for Mobile Agents. In Proceedings of the Second International Conference on Availability, Reliability and Security (ARES’07), Vienna, Austria, 10–13 April 2007; pp. 246–253. [Google Scholar]
  107. Ismail, L. Evaluation of Authentication Mechanisms for Mobile Agents on top of Java. In Proceedings of the 6th IEEE/ACIS International Conference on Computer and Information Science, Melbourne, Australia, 11–13 July 2007. [Google Scholar]
  108. Ismail, L.; Materwala, H.; Hennebelle, A. A Scoping Review of Integrated Blockchain-Cloud (BcC) Architecture for Healthcare: Applications, Challenges and Solutions. Sensors 2021, 21, 3753. [Google Scholar] [CrossRef]
  109. Ismail, L.; Zeadally, S. Healthcare Insurance Frauds: Taxonomy and Blockchain-based Detection Framework (Block-HI). IT Prof. 2021, 23, 36–43. [Google Scholar] [CrossRef]
  110. McGhin, T.; Choo, K.-K.R.; Liu, C.Z.; He, D. Blockchain in healthcare applications: Research challenges and opportunities. J. Netw. Comput. Appl. 2019, 135, 62–75. [Google Scholar] [CrossRef]
  111. Ismail, L.; Materwala, H. BlockHR: A Blockchain-based Framework for Health Records Management. In Proceedings of the 12th International Conference on Computer Modeling and Simulation, Brisbane, Australia, 22–24 June 2020; pp. 164–168. [Google Scholar]
  112. Ismail, L.; Materwala, H.; Khan, M.A. Performance Evaluation of a Patient-Centric Blockchain-based Healthcare Records Management Framework. In Proceedings of the 2020 2nd International Electronics Communication Conference, Singapore, 8–10 July 2020; pp. 39–50. [Google Scholar]
  113. Peng, W.C.; Gao, L.; Zhang, J.; Yau, K.-L.A.; Ji, Y. Blockchain for Vehicular Internet of Things: Recent Advances and Open Issues. Sensors 2020, 20, 5079. [Google Scholar] [CrossRef] [PubMed]
  114. Mershad, K.; Cheikhrouhou, O.; Ismail, L. Proof of accumulated trust: A new consensus protocol for the security of the IoV. Veh. Commun. 2021, 32, 100392. [Google Scholar] [CrossRef]
  115. Li, Z.; Kang, J.; Yu, R.; Ye, D.; Deng, Q.; Zhang, Y. Consortium blockchain for secure energy trading in industrial internet of things. IEEE Trans. Ind. Inform. 2017, 14, 3690–3700. [Google Scholar] [CrossRef] [Green Version]
  116. Materwala, H.; Ismail, L. Secure and Privacy-Preserving Lightweight Blockchain for Energy Trading. In Proceedings of the 2021 8th International Conference on Future Internet of Things and Cloud (FiCloud), Rome, Italy, 23–25 August 2021; pp. 394–399. [Google Scholar]
  117. Turkanović, M.; Hölbl, M.; Košič, K.; Heričko, M.; Kamišalić, A. EduCTX: A Blockchain-Based Higher Education Credit Platform. IEEE Access 2018, 6, 5112–5127. [Google Scholar] [CrossRef]
  118. Ismail, L.; Materwala, H.; AlShamsi, M.; AlHammadi, M.; AlDhanhani, N. Towards a Blockchain Deployment at UAE University: Performance Evaluation and Blockchain Taxonomy. In Proceedings of the 2019 International Conference on Blockchain Technology, Atlanta, GA, USA, 14–17 July 2019. [Google Scholar]
  119. Baudier, P.; Kondrateva, G.; Ammi, C.; Seulliet, E. Peace engineering: The contribution of blockchain systems to the e-voting process. Technol. Forecast. Soc. Chang. 2021, 162, 120397. [Google Scholar] [CrossRef]
  120. Gao, Y.; Pan, Q.; Liu, Y.; Lin, H.; Chen, Y.; Wen, Q. The Notarial Office in E-government: A Blockchain-Based Solution. IEEE Access 2021, 9, 44411–44425. [Google Scholar] [CrossRef]
  121. Ismail, L. Communication issues in parallel conjugate gradient method using a star-based network. In Proceedings of the International Conference on Computer Applications and Industrial Electronics, Kuala Lumpur, Malaysia, 5–8 December 2010; pp. 350–355. [Google Scholar]
  122. Ismail, L.; Shuaib, K. Empirical Study for Communication Cost of Parallel Conjugate Gradient on a Star-Based Network. In Proceedings of the Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation, Kota Kinabalu, Malaysia, 26–28 May 2010; pp. 498–503. [Google Scholar]
  123. Ismail, L.; Materwala, H.; Sharaf, Y. BlockHR–A Blockchain-based Healthcare Records Management Framework: Performance Evaluation and Comparison with Client/Server Architecture. In Proceedings of the 2020 International Symposium on Networks, Computers and Communications (ISNCC), Montreal, QC, Canada, 16–18 June 2020; pp. 1–8. [Google Scholar]
  124. Ismail, L.; Abed, E.H. Linear Power Modeling for Cloud Data Centers: Taxonomy, Locally Corrected Linear Regression, Simulation Framework and Evaluation. IEEE Access 2019, 7, 175003–175019. [Google Scholar] [CrossRef]
  125. Ismail, L.; Materwala, H. IDMPF: Intelligent diabetes mellitus prediction framework using machine learning. Appl. Comput. Inform. 2021. [Google Scholar] [CrossRef]
  126. Ismail, L.; Materwala, H.; Tayefi, M.; Ngo, P.; Karduck, A.P. Type 2 Diabetes with Artificial Intelligence Machine Learning: Methods and Evaluation. Arch. Comput. Methods Eng. 2022, 29, 313–333. [Google Scholar] [CrossRef]
  127. Ismail, L.; Materwala, H. Comparative Analysis of Machine Learning Models for Diabetes Mellitus Type 2 Prediction. In Proceedings of the 2020 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 16–18 December 2020. [Google Scholar] [CrossRef]
  128. Alamri, R.; Alharbi, B. Explainable Student Performance Prediction Models: A Systematic Review. IEEE Access 2021, 9, 33132–33143. [Google Scholar] [CrossRef]
  129. Ismail, L.; Materwala, H.; Hennebelle, A. Comparative Analysis of Machine Learning Models for Students’ Performance Prediction. In Proceedings of the International Conference on Advances in Digital Science Salvador, Salvador, Brazil, 19–21 February 2021; pp. 149–160. [Google Scholar]
  130. Xu, Z.; Yuan, H.; Liu, Q. Student performance prediction based on blended learning. IEEE Trans. Educ. 2021, 64, 66–73. [Google Scholar] [CrossRef]
  131. Ismail, L.; Zhang, L. Information Innovation Technology in Smart Cities; Springer: Singapore, 2018. [Google Scholar]
  132. Hung, J.-L.; He, W.; Shen, J. Big data analytics for supply chain relationship in banking. Ind. Mark. Manag. 2020, 86, 144–153. [Google Scholar] [CrossRef]
  133. Gubran Al-Hashedi, K.; Magalingam, P. Financial fraud detection applying data mining techniques: A comprehensive review from 2009 to 2019. Comput. Sci. Rev. 2021, 40, 100402. [Google Scholar]
  134. Akhavan-Hejazi, H.; Mohsenian-Rad, H. Power systems big data analytics: An assessment of paradigm shift barriers and prospects. Energy Rep. 2018, 4, 91–100. [Google Scholar] [CrossRef]
  135. Pérez-Chacón, R.; Luna-Romera, J.M.; Troncoso, A.; Martínez-Álvarez, F.; Riquelme, J.C. Big Data Analytics for Discovering Electricity Consumption Patterns in Smart Cities. Energies 2018, 11, 683. [Google Scholar] [CrossRef] [Green Version]
  136. Faroukhi, A.Z.; El Alaoui, I.; Gahi, Y.; Amine, A. Big data monetization throughout Big Data Value Chain: A comprehensive review. J. Big Data 2020, 7, 1–22. [Google Scholar]
  137. Jabbar, A.; Akhtar, P.; Dani, S. Real-time big data processing for instantaneous marketing decisions: A problematization approach. Ind. Mark. Manag. 2020, 90, 558–569. [Google Scholar] [CrossRef]
  138. Ismail, L.; Masud, M.M.; Khan, L. FSBD: A framework for scheduling of big data mining in cloud computing. In Proceedings of the 2014 IEEE International Congress on Big Data (BigData Congress), Anchorage, AK, USA, 27 June–2 July 2014; pp. 514–521. [Google Scholar]
  139. Materwala, H.; Ismail, L. Energy-Aware Edge-Cloud Computation Offloading for Smart Connected Health. In Proceedings of the 2021 8th International Conference on Future Internet of Things and Cloud (FiCloud), Rome, Italy, 23–25 August 2021; pp. 144–150. [Google Scholar]
  140. Ismail, L.; Materwala, H. Machine Learning-based Energy-Aware Offloading in Edge-Cloud Vehicular Networks. Procedia Comput. Sci. 2021, 191, 328–336. [Google Scholar] [CrossRef]
  141. Ismail, L.; Materwala, H. ESCOVE: Energy-SLA-Aware Edge-Cloud Computation Offloading in Vehicular Networks. Sensors 2021, 21, 5233. [Google Scholar] [CrossRef]
  142. Materwala, H.; Ismail, L. Performance and Energy-Aware Bi-objective Tasks Scheduling for Cloud Data Centers. Procedia Comput. Sci. 2021, 197, 238–246. [Google Scholar] [CrossRef]
  143. Liu, F.; Huang, Z.; Wang, L. Energy-efficient collaborative task computation offloading in cloud-assisted edge computing for IoT sensors. Sensors 2019, 19, 1105. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  144. Ismail, L.; Materwala, H. Artificial Intelligent Agent for Energy Savings in Cloud Computing Environment: Implementation and Performance Evaluation. In Proceedings of the Agents and Multi-Agent Systems: Technologies and Applications, Split, Croatia, 17–19 June 2020; pp. 127–140. [Google Scholar]
  145. Ismail, L.; Materwala, H. EATSVM: Energy-Aware Task Scheduling on Cloud Virtual Machines. Procedia Comput. Sci. 2018, 135, 248–258. [Google Scholar] [CrossRef]
  146. Ismail, L.; Fardoun, A.A. Energy-Aware Task Scheduling (EATS) Framework for Efficient Energy in Smart Cities Cloud Computing Infrastructures. Int. J. Therm. Environ. Eng. 2016, 13, 37–48. [Google Scholar]
  147. Wu, H.; Wolter, K.; Jiao, P.; Deng, Y.; Zhao, Y.; Xu, M. EEDTO: An energy-efficient dynamic task offloading algorithm for blockchain-enabled IoT-edge-cloud orchestrated computing. IEEE Internet Things J. 2020, 8, 2163–2176. [Google Scholar] [CrossRef]
  148. Li, X.; Garraghan, P.; Jiang, X.; Wu, Z.; Xu, J. Holistic Virtual Machine Scheduling in Cloud Datacenters towards Minimizing Total Energy. IEEE Trans. Parallel Distrib. Syst. 2018, 29, 1317–1331. [Google Scholar] [CrossRef] [Green Version]
  149. Ismail, L.; Mills, B.; Hennebelle, A. A formal model of dynamic resource allocation in Grid computing environment. In Proceedings of the 2008 Ninth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, Phuket, Thailand, 6–8 August 2008. [Google Scholar]
  150. Ismail, L. Dynamic Resource Allocation Mechanisms for Grid Computing Environment. In Proceedings of the 2007 3rd International Conference on Testbeds and Research Infrastructure for the Development of Networks and Communities, Orlando, FL, USA, 21–23 May 2007. [Google Scholar]
  151. Qureshi, B. Profile-based power-aware workflow scheduling framework for energy-efficient data centers. Future Gener. Comput. Syst. 2019, 94, 453–467. [Google Scholar] [CrossRef]
  152. Han, G.; Que, W.; Jia, G.; Shu, L. An efficient virtual machine consolidation scheme for multimedia cloud computing. Sensors 2016, 16, 246. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  153. Drozdowski, M.; Lawenda, M. Multi-installment divisible load processing in heterogeneous distributed systems. Concurr. Comput. Pract. Exp. 2007, 19, 2237–2253. [Google Scholar] [CrossRef]
  154. Ismail, L.; Abou-Kassem, J.; Qamar, B. Implementation and performance analysis of a parallel oil reservoir simulator tool using a CG method on a GPU-based system. In Proceedings of the 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, Orlando, FL, USA, 22–24 June 2014; pp. 375–380. [Google Scholar]
  155. Ismail, L.; Abou-Kassem, J.H. Toward an automatic Load balanced distribution model in Conjugate gradient method for one-dimensional one-phase oil Reservoir simulation. In Proceedings of the 10th IEEE International Conference on Computer and Information Technology, Bradford, UK, 29 June–1 July 2010; pp. 2958–2963. [Google Scholar]
  156. Aali, N.; Shahhosseini, H.S.; Bagherzadeh, N. Divisible Load Scheduling of Image Processing Applications on the Heterogeneous Star Network Using a new Genetic Algorithm. In Proceedings of the 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP), Cambridge, UK, 21–23 March 2018. [Google Scholar]
  157. Ismail, L. Performance versus Cost of a Parallel Conjugate Gradient Method in Cloud and Commodity Clusters. Int. J. Comput. Sci. Netw. Secur. 2012, 12, 25–34. [Google Scholar]
  158. Ismail, L.; Fardoun, A.A. Towards energy-aware task scheduling (EATS) framework for divisible-load applications in cloud computing infrastructure. In Proceedings of the Annual IEEE International Systems Conference, Montreal, QC, Canada, 24–27 April 2017. [Google Scholar]
  159. Aali, S.N.; Bagherzadeh, N. Divisible load scheduling of image processing applications on the heterogeneous star and tree networks using a new genetic algorithm. Concurr. Comput. Pract. Exp. 2019, 32, e5498. [Google Scholar]
  160. Ismail, L.; Zhang, L. Modeling and Performance Analysis to Predict the Behaviour of a Divisible Load Application in a Star Network Cloud. In Proceedings of the 2010 Fourth UKSim European Symposium on Computer Modeling and Simulation, Pisa, Italy, 17–19 November 2010; pp. 369–374. [Google Scholar]
  161. Ismail, L.; Zhang, L. Modeling and performance analysis to predict the behavior of a divisible load application in a cloud computing environment. Algorithms 2012, 5, 289–303. [Google Scholar] [CrossRef] [Green Version]
  162. Ismail, L.; Barua, R. Implementation and Performance Evaluation of a Distributed Conjugate Gradient Method in a Cloud Computing Environment. Softw. Pract. Exp. 2012, 43, 281–304. [Google Scholar] [CrossRef] [Green Version]
  163. Ismail, L.; Guerchi, D. Performance evaluation of convolution on the Cell Broadband Engine processor. IEEE Trans. Parallel Distrib. Syst. 2011, 22, 337–351. [Google Scholar] [CrossRef]
  164. Ismail, L. A ring-based parallel oil reservoir simulator. Scalable Comput. Pract. Exp. 2012, 13, 85–98. [Google Scholar]
  165. Cugurullo, F. Urban artificial intelligence: From automation to autonomy in the smart city. Front. Sustain. Cities 2020, 2, 38. [Google Scholar] [CrossRef]
Figure 1. A view of smart city digital ecosystem.
Figure 2. Evolution of wireless communication technology from 1G to 6G.
Figure 3. Artificial Intelligence (AI)-enabled 6G networks.
Figure 4. Deep Learning (DL)-based channel estimation.
Figure 5. Convolutional Neural Network (CNN)-based modulation recognition in networking.
Figure 6. Deep Learning (DL)-based network traffic classification for Internet of Things (IoT) applications.
Figure 7. Deep Learning (DL)-based time series prediction of network traffic data.
Figure 8. Deep Reinforcement Learning (DRL)-based data caching in the Internet of Things (IoT) environment.
Figure 9. Deep Reinforcement Learning (DRL)-based intelligent routing in the Internet of Things (IoT) environment.
Figure 10. Deep Learning (DL)-based mobility management for Internet of Things (IoT) devices and users.
Figure 11. Deep Learning (DL)-based intrusion detection in Internet of Things (IoT) environments.
Figure 12. Deep Learning (DL)-based network traffic anomaly detection.
Figure 13. Machine Learning (ML)- and Deep Learning (DL)-based botnet detection for Internet of Things (IoT).
Figure 14. Taxonomy of smart city applications in 6G based on underlying technologies.
Figure 15. Growth of Internet of Things (IoT) and non-IoT devices from 2010 to 2025.
Figure 16. Types of communications on the Internet of Vehicles (IoV).
Figure 17. Blockchain-enabled Integrated Internet of Vehicles (IoV)-Edge-Cloud environment.
Figure 18. AI-enabled smart city applications in self-learning 6G networks.
Table 1. Summary of related surveys.
Work | Approach | Evolution of Wireless Communication Technology | AI-Enabled 6G Networks | Technology-Enabled Applications in 6G
[16] | Top-down
[17] | Top-down
[18] | Down-top
[20] | Down-top
[19] | Down-top
[21] | Down-top
This paper | Holistic | ✓ | ✓ | ✓
✓ → considered; ✗ → not considered.
Table 2. Characteristics of the wireless communication technology from 1G to 6G.
Feature | 1G | 2G | 3G | 4G | 5G | 6G
Start | 1970 | 1980 | 1998 | 2000 | 2010 | 2020
Deployment | 1984 | 1999 | 2001 | 2010 | 2019 | 2030 *
Technology | AMPS, NMT, TACS | GSM, GPRS, EDGE | WCDMA, UMTS | LTE, WiMAX | MIMO, mm Waves | THz communications, VLC
Frequency | 30 KHz | 1.8 GHz | 1.6–2 GHz | 2–8 GHz | 3–30 GHz | 95 GHz–3 THz
Multiplexing | FDMA | TDMA/CDMA | CDMA | OFDMA | OFDM | OFDM
Switching | Circuit | Circuit, packet | Packet | All packet | All packet | All packet
Core network | PSTN | PSTN | Packet Network | Internet | Internet | Internet
Primary services (in addition to previous generations) | Voice calls | International roaming voice calls, conference calls, SMS, MMS, WAP, WWW, and emails | Video conferencing, GPS | Mobile web access, IP telephony, 3D videos, HD mobile TV | Machine vision, connected cars, smart homes, AR | Tactile and haptic internet, connected autonomous systems, holographic society
Peak data rate | NA | 50 Kbps (GPRS), 1 Mbps (EDGE) | 21 Mbps | 100 Mb/s | 20 Gb/s | ≥1 Tb/s
Mobility support | NA | NA | NA | 350 km/h | 500 km/h | ≥1000 km/h
Latency | NA | 300 ms | 100 ms | 10 ms | 1 ms | 10–100 µs
Network energy efficiency (compared to 4G) | NA | 0.01x | 0.1x | 1x | ≥10x | ≥100x
Spectral efficiency (compared to 4G) | NA | NA | 0.6x | 1x | 3x | ≥15x
Area traffic capacity | NA | NA | 1 Kbps/m² | 0.1 Mbps/m² | 10 Mbps/m² | 1 Gbps/m²
Connection density (devices/km²) | NA | NA | 10⁴ | 10⁵ | 10⁶ | 10⁷
* → Expected; AMPS → Advanced Mobile Phone System; NMT → Nordic Mobile Telephone; TACS → Total Access Communication System; GSM → Global System for Mobile; GPRS → General Packet Radio Service; EDGE → Enhanced Data rates for GSM Evolution; WCDMA → Wideband Code Division Multiple Access; UMTS → Universal Mobile Telecommunications Service; LTE → Long-Term Evolution; WiMAX → Worldwide Interoperability for Microwave Access; MIMO → Multiple Input Multiple Output; THz → Terahertz; VLC → Visible Light Communication; FDMA → Frequency Division Multiple Access; TDMA → Time Division Multiple Access; CDMA → Code Division Multiple Access; OFDMA → Orthogonal Frequency Division Multiple Access; OFDM → Orthogonal Frequency Division Multiplexing; PSTN → Public Switched Telephone Network; SMS → Short Message Service; MMS → Multimedia Message Service; WAP → Wireless Application Protocol; WWW → World Wide Web; GPS → Global Positioning System; HD → High Definition; AR → Augmented Reality; NA → Not Applicable.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
