Internet of Things: A Comprehensive Overview on Protocols, Architectures, Technologies, Simulation Tools, and Future Directions

Abstract: The Internet of Things (IoT) is a global network of interconnected computing, sensing, and networking devices that can exchange data and information via various network protocols. It can connect numerous smart devices thanks to recent advances in wired, wireless, and hybrid technologies. Lightweight IoT protocols can compensate for IoT devices with restricted hardware characteristics in terms of storage, Central Processing Unit (CPU), energy, etc. Hence, it is critical for system architects to identify the optimal communication protocol. This necessitates an evaluation of next-generation networks with improved characteristics for connectivity. This paper highlights significant wireless and wired IoT technologies and their applications, offering a new categorization for conventional IoT network protocols. It provides an in-depth analysis of IoT communication protocols with detailed technical information about their stacks, limitations, and applications. The study further compares industrial IoT-compliant devices and software simulation tools. Finally, the study provides a summary of the current challenges, along with a broad overview of the future directions to tackle the challenges, in the next IoT generation. This study aims to provide a comprehensive primer on IoT concepts, protocols, and future insights that academics and professionals can use in various contexts.


Introduction
The Internet of Things has become vital to sustained economic development [1]. It turns buildings, such as homes, workplaces, factories, and even whole cities, into autonomous, self-regulating systems that do not need human help. They communicate with the physical world via actuation, sensing, and management, using current internet protocols to facilitate data transmission, analytics, and decision-making [2]. At present, it is almost impossible to think of a domain of life where IoT technology is not applicable [3]. IoT is powered by the growth of smart systems, which are enabled by a broad range of wireless technologies, such as wireless fidelity (Wi-Fi), Zigbee, and Bluetooth, as well as integrated actuators and sensors. This results in the development of enormous volumes of data, which must be processed, stored, and displayed in a way that is effective, simple, and seamless [4]. The Internet of Things has matured beyond its infancy and is transitioning from the present conventional internet to the fully comprehensive internet of the upcoming decades. The IoT revolution has increased connections among things at a size and speed that have never been seen before.

Regarding the IoT protocols, the unique qualities of wireless technologies and issues with their IoT integration were presented in [15]. The study focused on Bluetooth Low Energy, ZigBee, Long Range (LoRa), and several other Wi-Fi variants. The problem of selecting the best technology for a particular application was investigated in [16], which compared the standard IoT communication protocols utilizing various parameters. The following are some of the criteria to be considered in selecting the best technology: topology, cryptography, power consumption, standards, frequency ranges, data rate, features, security, and coverage. The constraints and deficiencies of present security techniques were studied in [17]. The link, transport, networking, and session layers of IoT communications protocols were the main focus in [18].
Insights into the various administrative and security mechanisms for machine-to-machine (M2M) and IoT devices are provided. The study focused on specific aspects of Internet of Things networks, including communication and security protocols. An extensive summary was provided in [19] of recent developments in the application layer of the IoT and the lightweight protocols required for them to function. Nevertheless, the scope of the survey was limited to communication protocols in the IoT application layer. A description of a number of standardized protocols at a variety of networking abstraction levels, especially those designed for embedded devices with constrained resources, was offered in [20]. However, the protocols are developing and the most recent versions need to be reviewed.

Research Gaps and Contributions
Motivated by the points mentioned in Section 1.1, this work was conducted to cover the research gaps, developing a unified framework in which to compare different IoT protocols and updating the discussion of the current challenges and future directions of the IoT. This paper investigates the IoT paradigm's significant components, including its architecture, protocols, tools, and applications. A presentation is provided for wireless protocols, including the following: Zigbee, Bluetooth Low Energy (BLE), Z-Wave, Wi-Fi, IPv6 over Low-power Wireless Personal Area Networks (6LoWPAN), Wi-SUN, LoRa, Long Range Wide Area Networks (LoRaWAN), and NarrowBand-Internet of Things (NB-IoT). Wired protocols, such as Power Line Communication (PLC), in addition to hybrid technology, are also presented. Furthermore, a general method for comparing the IoT protocols' stacks, based on the basic Open Systems Interconnection (OSI) stack, is offered. Moreover, we provide additional guidance on protocol choice and application to round out the comparison. We then compare professional IoT hardware modules and simulation tools for various IoT protocols. Finally, a detailed discussion is provided on future directions for reshaping the IoT paradigm in the Sixth Generation (6G) era to address the present challenges. Overall, this work aims to provide a valuable overview for researchers and professionals interested in learning more about IoT methods and protocols to use in several applications.

Outline
The study is organized as shown in Figure 1. An overview of the IoT, including its definition and functional building elements, is presented in Section 2. An investigation of the IoT system architecture and stack is introduced in Section 3. The different application layer protocols are investigated in Section 4. The infrastructural protocols, including wireless, wired, and hybrid communication technologies, are provided in Section 5. Industrial IoT-compliant devices for the different protocols are compared in Section 6. Simulation tools used in IoT and Wireless Sensor Networks (WSNs) are discussed in Section 7. The current challenges and open issues facing the IoT are summarized in Section 8. The state-of-the-art technologies that can be integrated into the IoT paradigm to tackle the challenges in the 6G era are introduced in Section 9. A discussion of the study's findings, including forward-looking insights, is provided in Section 10. Finally, in Section 11, the conclusions of this work are presented.

IoT Overview
The Internet of Things has facilitated discoveries, inventions, and interactions between things and people. These advancements enhance human quality of life and the exploitation of finite resources. Various IoT definitions and functional building blocks are discussed in this section.

IoT Definition
The business and academic worlds have recently become very interested in IoT, primarily due to the capabilities that IoT provides. It creates a world where all smart devices and technologies are linked to the Internet and capable of communicating with one another with the least amount of human interference [21]. Unfortunately, there is no agreed-upon definition for the term "IoT", since definitions are introduced from many interpretations and viewpoints. The following definitions come from several researchers:
• Definition I: Things that are interconnected and actively involved in what can be referred to as the future internet [22].
• Definition II: There are two terms in this expression: Things refers to all devices interconnected to a network relying on identical protocols, whereas the Internet is described as the global network of many networks [5].
• Definition III: The IoT concept is any device that is always available to be accessed by anyone, at any moment, from any location, via any application, and over any network [23].

The IoT Functional Building Elements
There are several fundamental building blocks in the IoT that make it easier for smart devices to perform tasks, including sensing, actuation, identification, organization, and networking.
Sensing Devices: The core components of the IoT system are smart devices capable of carrying out a wide range of tasks, including sensing, monitoring, controlling, and actuation activities. Any IoT unit requires a variety of interfaces to connect to other smart devices, such as interfaces for Internet access, Audio/Video (A/V), sensing Input/Output (I/O), and memory and storage ports. In addition, IoT devices vary depending on the purpose for which each device is used; examples include smartwatches, wearable sensors, vehicles, industrial equipment, Light Emitting Diode (LED) lights, etc. [23].
Management: Remote management, either with or without human intervention, is the primary characteristic of an IoT device that sets it apart from conventional devices that are handled and controlled through mechanical switches or buttons. Additionally, IoT devices can send and receive data so that a suitable decision can be made [24].
Services: IoT applications range from workplace automation and household appliances to production lines and product tracking, among many other uses. These services can be identity-related, information-aggregating, device modeling, device discovery, device control, collaborative awareness, ubiquitous, data analytics, and data publishing services.
Security: Network data, particularly that of wireless networks, is vulnerable to a wide range of attacks, including denial of service, spoofing, eavesdropping, and so on [25]. In an attempt to counteract these assaults, IoT systems include security features, such as content integrity, message integrity, privacy, authorization, and authentication [26].
Application: The application layer offers interfaces to IoT users so they may monitor and manage various IoT applications. Furthermore, these interfaces allow users to assess and view the status of IoT systems at any time and from any location to take appropriate actions.

The IoT Architecture
The IoT paradigm was initially developed in largely heterogeneous situations, where information could be collected from several resources and handled by various technologies [27]. Therefore, similar approaches, functionalities, and services can be grouped into the same layer in each proposed IoT model. This makes it easier to develop and improve the architecture of each layer in the future. Although the three-layer design adequately captures the overall concept of the IoT, it is insufficient for research on the IoT, which often concentrates on the deeper points of the Internet of Things [28].

IoT Stack Architecture
The IoT stack is divided into five layers: physical, data link, network, transport, and application layers. These layers are depicted in Figure 2.
The physical layer is also known as the "perception layer" or "recognition layer" in the context of the IoT. The primary function of the physical layer is to sense the physical characteristics of the surrounding objects. It relies on various sensing technologies, such as Radio Frequency Identification (RFID), WSN, and the Global Positioning System (GPS) [12]. Additionally, it is in charge of turning the information into digital signals, which are easier to transmit via a network. Nanotechnologies and embedded intelligence are crucial to the physical layer [29]. The first produces smaller chips inserted into everyday objects, such as nano-integrated wearable devices [30]. The second provides them with the computing power that any upcoming applications need. The data link layer's primary features are packet boundary distinction, frame synchronization, sender and destination address management, error detection in the physical media channel, and collision prevention [31]. In addition, each protocol has unique features ensuing from its design and implementation, including media access control mechanisms, transfer speeds, communication topology among units, coverage distance, power utilization, and many more. The network layer provides data routing channels so that data can be sent as packets throughout the network [32]. It includes all network equipment, such as switches and routers, along with communication technologies such as Third Generation (3G), Fourth Generation (4G), Fifth Generation (5G), Wi-Fi, infrared, and ZigBee, together with the associated communication and routing protocols. The transport layer collaborates with the application layer to transmit and receive data without errors. It offers capabilities including packet delivery order, congestion avoidance, multiplexing, byte orientation, data integrity, and reliability for the sent data [33]. The application layer serves as the IoT architecture's front end, where most of the technology's potential is realized.
It provides IoT developers with access to the platforms, interfaces, and tools they need to build IoT applications, such as those for smart homes, intelligent transportation, smart health, etc.

Cloud Computing
The IoT paradigm integrates data and connectivity infrastructure into our surroundings. This results in the creation of enormous amounts of data, which must be displayed, interpreted, and maintained in a format that is effective, convenient, and simple to recognize. In the cloud model, data processing is delegated to a network of remote servers in the cloud [34]. In this cloud-centric design, the cloud is considered the heart, with applications built on top and a network of smart devices below. Cloud computing is preferred for its ability to provide flexibility and scalability, offering infrastructure, platforms, software, and storage services. Moreover, the cloud allows programmers to exchange resources for high-performance applications, deep learning models, and analytical tools. It offers high performance in dynamic resource allocation, universal accessibility, and composability, all of which are critical for the success of upcoming IoT extensions. This platform performs a variety of functions, including receiving information from smart objects, acting as a computer that analyzes and interprets various sorts of data, and providing web-based visualizations.
Several studies have attempted to build a descriptive architecture to define the cloud computing model [35]. Three levels make up this paradigm, with the first being the base layer, which has a database that contains information on every smart device. Following this comes the component layer, which contains the software necessary to communicate with all IoT elements, to utilize some of them to carry out a task or to monitor their condition. Finally, there is the application layer, which is responsible for delivering the desired services to end users.
IoT architecture built on the cloud computing model primarily consists of the physical layer, gateway layer, and cloud services. The physical layer is used to collect data from networked devices [36]. The gateway layer connects networks, such as a Local Area Network (LAN), Wide Area Network (WAN), etc. It transforms data and prepares the supplied raw data for cloud services. Cloud services are the central and crucial component of cloud-based architecture [37], responsible for applying data analytical algorithms to process the data. The essential elements of cloud services are brokers and message queues, databases, servers, and event administrators.
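As a sketch of the gateway layer's transformation role described above, the following toy function normalizes a raw device reading into a structured record for cloud ingestion. The field names and the tenths-of-a-degree scaling are illustrative assumptions, not any particular cloud service's schema:

```python
import json
import time

def to_cloud_record(device_id, raw_celsius_tenths):
    """Gateway-layer sketch: turn a raw integer sensor reading into the
    structured JSON record a (hypothetical) cloud ingestion service expects."""
    return json.dumps({
        "device": device_id,
        "temperature_c": raw_celsius_tenths / 10.0,  # raw integer -> degrees
        "timestamp": int(time.time()),               # when the gateway saw it
    })

record = to_cloud_record("sensor-42", 215)
print(record)
```

A real gateway would additionally buffer records, batch them, and speak the cloud provider's transport protocol, but the normalization step is the essence of "preparing raw data for cloud services".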
The key benefits of the cloud computing paradigm are still the "enormous" storage and processing capabilities, lower capital costs, and smaller ecological footprint. Nevertheless, there are significant challenges associated with this technology, including security concerns, delayed service, and limited bandwidth, in addition to increased latency and jitter when portable devices load the resources computing services [38].

Edge and Fog Computing
"Edge computing" and "fog computing" are frequently considered synonyms. However, "edge computing" is a more general term and precedes "fog computing" [36]. Recently, there has been a trend toward adopting edge computing as an alternative type of system architecture [39]. Edge computing can be defined as a computing approach that makes use of resources at the periphery of a network, while fog computing is a hybrid computing approach that makes use of both on-site resources and cloud services [40], wherein sensors and gateways play a role in data computation and analysis [41]. The major advantage of a distributed fog model over a centralized cloud architecture is its ability to support real-time and latency-sensitive IoT systems that make instant decisions, including autonomous cars, augmented and virtual reality (VR) equipment, and security tools. Owing to the delays involved, these systems cannot tolerate having their data shipped to a cloud platform for processing. Edge computing brings the computation closer to the nodes at the network's edge to provide a minimal delay [42]. Fog architecture involves several layers, including monitoring, pre-processing, storage, and security, that are placed between the physical and transport layers [43]. The monitoring layer tracks power, availability, performance, and status. The pre-processing layer filters, processes, and analyzes cloudlet data. Data backup, redistribution, and caching are all services provided by the short-term storage layer. Lastly, the security layer handles encryption and decryption, protecting sensitive information and preventing unauthorized access. Both monitoring and pre-processing occur at the cloudlet before the data is transmitted to the cloud. Edge resources differ from cloud resources in that they include inherent heterogeneity, a non-deterministic load, continuously scaling data, unpredictable links, and multi-tenancy among end users.
These challenges call for unique management approaches [44]. Managing the process becomes even more difficult when real-time scenarios compete for busy resources amid unbalanced workloads. Resource management includes activities such as allocating and scheduling resources, offloading tasks, deploying services where they are most needed, and balancing workloads [45].
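The load-balancing activity mentioned above can be illustrated with a deliberately simple greedy allocator that places each task on the currently least-loaded edge node. This is an illustrative sketch, not a scheme from the literature; real schedulers also weigh latency, energy, and link quality:

```python
def assign_tasks(task_loads, n_edge_nodes):
    """Greedy load balancing: sort tasks largest-first, then place each one
    on whichever edge node currently carries the least total load."""
    loads = [0.0] * n_edge_nodes
    placement = []
    for task in sorted(task_loads, reverse=True):  # largest tasks first
        node = loads.index(min(loads))             # least-loaded node
        loads[node] += task
        placement.append((task, node))
    return loads, placement

loads, placement = assign_tasks([5, 3, 8, 2, 7], 2)
print(loads)  # [13.0, 12.0]
```

Even this toy version shows why unbalanced workloads complicate management: the quality of the final balance depends on the order and size distribution of arriving tasks.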
With edge computing, machine learning (ML) can be deployed closer to the edge of the network, where the raw data is being produced. Improvements in fog computing productivity and efficiency can be achieved by employing edge intelligence and analytical tools [46]. Edge computing that is driven by artificial intelligence may cause significant shifts in several industries. By 2025, the International Data Corporation (IDC) estimates there will be 150 billion smart edge devices available worldwide [47]. Although some kinds of edge computing are currently in use, analysts predict that this volume will increase [48]. There has been remarkable advancement in the use of artificial intelligence (AI), instead of heuristic and meta-heuristic methods, to enhance task scheduling [49]. A more detailed discussion of AI and edge computing integration is provided in Section 9.

Message Queue Telemetry Transport (MQTT)
The MQTT protocol enables messages to be transmitted and received without the sender or recipient being aware of who is sending or receiving the data. Three primary components make up MQTT: a publisher, a subscriber, and a broker [50]. In MQTT, publishers and subscribers are both clients and do not need to be aware of each other's identities. Since MQTT delegates processing to the broker side, it is suitable for IoT devices with constrained processing and storage capabilities. Due to its ability to control both large and small devices, MQTT is flexible and straightforward to utilize [51]. Two kinds of agents are present in every MQTT connection: clients and the broker, which acts as the server. Devices used for communication are known as "clients". A subscribing client registers interest in a topic with the broker, which then delivers to it every message published on that topic. A client can be anything, including a sensor, a mobile device, etc. The broker controls the flow of information and is primarily responsible for collecting all publisher messages, sorting them by topic, determining the matching subscribers, and forwarding the messages to all clients that have subscribed. Healthcare applications frequently utilize MQTT.
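The decoupling the broker provides can be sketched with a minimal in-memory stand-in. This is illustrative only; a real deployment uses an MQTT broker such as Mosquitto over TCP, and the topic name below is a made-up example:

```python
from collections import defaultdict

class Broker:
    """Toy topic-based broker: publishers and subscribers only know topic
    names, never each other -- the essence of MQTT's decoupling."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Forward the payload to every client subscribed to this topic.
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = Broker()
received = []
broker.subscribe("home/temperature", lambda t, p: received.append((t, p)))
broker.publish("home/temperature", "21.5")  # publisher knows only the topic
print(received)  # [('home/temperature', '21.5')]
```

Note that the publisher never holds a reference to the subscriber; adding a second subscriber requires no change on the publishing side.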

Constrained Application Protocol (CoAP)
The Internet Engineering Task Force (IETF) Constrained RESTful Environments Working Group created the CoAP as a web transfer protocol for constrained devices with limited capabilities [52]. The CoAP takes advantage of a portion of Hypertext Transfer Protocol (HTTP) features. The resource limitations of many IoT devices have led to reconsidering certain HTTP functionalities. IoT application-specific protocols can be developed by modifying HTTP's underlying technologies. Many IoT devices, including cellphones and RFID sensors, act as CoAP clients. The CoAP server receives the data produced by these clients ubiquitously and transmits it to the REST CoAP proxy. A firewall connection is created to enable communication between the CoAP environment and the rest of the Internet. The CoAP is commonly used in smart home applications.
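For concreteness, the fixed 4-byte CoAP header defined by RFC 7252 can be packed as follows. This is a minimal sketch of the framing only; options and payload encoding are omitted:

```python
import struct

def coap_header(msg_type, code_class, code_detail, message_id, token=b""):
    """Pack the fixed 4-byte CoAP header (RFC 7252), then append the token.
    Byte 0: Version (2 bits, always 1) | Type (2 bits) | Token Length (4 bits)
    Byte 1: Code, as class (3 bits) . detail (5 bits)
    Bytes 2-3: Message ID, network byte order."""
    first = (1 << 6) | (msg_type << 4) | len(token)
    code = (code_class << 5) | code_detail
    return struct.pack("!BBH", first, code, message_id) + token

# Confirmable (type 0) GET request (code 0.01), message ID 0x1234, no token
hdr = coap_header(0, 0, 1, 0x1234)
print(hdr.hex())  # 40011234
```

The entire fixed overhead is 4 bytes, which is what makes CoAP viable on links where HTTP's text headers would dominate the payload.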

Advanced Message Queuing Protocol (AMQP)
The AMQP is a global standard, International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 19464, created by OASIS, that offers queuing, message orientation, point-to-point routing, publish/subscribe, security, and dependability. A stable and effective message queue is the foundation of the publish/subscribe mechanism known as AMQP [53]. Publishers and consumers locate one another by the name of an exchange. The consumer then creates a "queue" and binds it to the exchange.
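The exchange/queue binding described above can be sketched with a toy direct exchange. This is an in-memory illustration only; real systems use an AMQP broker such as RabbitMQ, and the routing key below is a made-up example:

```python
from collections import defaultdict, deque

class DirectExchange:
    """Toy sketch of AMQP's direct-exchange model: producers publish to a
    named exchange with a routing key; the exchange copies the message into
    every queue bound with that key."""
    def __init__(self):
        self.bindings = defaultdict(list)  # routing key -> bound queues

    def bind(self, queue, routing_key):
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key, message):
        for queue in self.bindings[routing_key]:
            queue.append(message)

exchange = DirectExchange()
alerts = deque()                       # the consumer's queue
exchange.bind(alerts, "sensor.alarm")  # consumer binds its queue
exchange.publish("sensor.alarm", "overheat")
print(alerts.popleft())  # overheat
```

As in real AMQP, the producer addresses only the exchange and routing key; the consumer's queue is an independent entity created and bound by the consumer.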

HTTP
The text-based and web-based HTTP is a communication protocol that provides request/response RESTful features, where the client communicates with the server by sending an HTTP request message [54]. Since HTTP depends on the Transmission Control Protocol (TCP) as a transport protocol and Transport Layer Security/Secure Sockets Layer (TLS/SSL) as a security protocol, communication between the server and the client is connection-based. On the other hand, an IoT connection using the HTTP protocol consumes many network resources and incurs overheads because it requires the sending of several tiny packets.
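The overhead point can be made concrete by counting the bytes of even a minimal HTTP request (the host and path below are hypothetical) against CoAP's fixed 4-byte binary header:

```python
# A minimal HTTP/1.1 GET request. Even with no body, the text headers alone
# dwarf CoAP's fixed 4-byte binary header, illustrating HTTP's per-message
# overhead on constrained links.
request = (
    "GET /sensors/temp HTTP/1.1\r\n"
    "Host: example.local\r\n"
    "Accept: application/json\r\n"
    "\r\n"
)
http_bytes = len(request.encode("ascii"))
print(http_bytes, "bytes of HTTP framing vs. 4 bytes of CoAP fixed header")
```

And this omits the TCP handshake and TLS negotiation that precede every fresh HTTP connection, which cost further round trips on battery-powered devices.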

Extensible Messaging and Presence Protocol (XMPP)
The XMPP is a communication protocol built on the eXtensible Markup Language (XML). Because it is built on XML, XMPP enables the insertion of custom tags and online features, inheriting extensibility and data-structuring mechanisms from XML. For real-time communication, such as instant messaging, presence, multi-party chats, phone and video calls, collaboration, content syndication, etc., XMPP has traditionally been utilized. IoT real-time and scalable networking between devices or objects is made possible by the utilization of XMPP [55]. The objects (devices) have one or more nodes, and each node has several fields (of information). Each field has a value that may be read and written. The nodes must send and accept friendship requests from one another. One node can start receiving updates from another node after the other node accepts the friendship request from the first node. If the second node also wants to receive updates from the first node, it must issue its own friendship request and acquire approval. A dual subscription is used when both nodes become friends with one another over the network; otherwise, a single-sided subscription is used. One node can read or write field values in the other node, and data is exchanged between them on a one-to-one basis [56].
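A hypothetical XMPP-style message stanza carrying one field value might be built as follows. The JIDs and the custom `<field/>` element are illustrative assumptions for this sketch, not a standardized IoT payload:

```python
import xml.etree.ElementTree as ET

# Build an XMPP-like <message/> stanza as one node might send to a friend
# node. XML's custom tags are what let a device embed its own field data.
msg = ET.Element("message", attrib={
    "from": "sensor1@iot.example",      # sending node's JID (illustrative)
    "to": "dashboard@iot.example",      # receiving node's JID (illustrative)
    "type": "chat",
})
field = ET.SubElement(msg, "field", attrib={"name": "temperature"})
field.text = "21.5"

stanza = ET.tostring(msg, encoding="unicode")
print(stanza)
```

The receiving node parses the stanza and reads or updates the corresponding field value, matching the one-to-one field exchange described above.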

IoT Communication Technologies
In some IoT applications, the available technological options are constrained by the hardware capabilities, the need for low power consumption, and the total cost of the device. Achieving low power consumption is a crucial prerequisite for developing the IoT. In addition to reduced power consumption, further requirements must be taken into account, including the cost of technology, security, ease of use and management, and wireless data rates and ranges, among other factors. Many developing wireless technologies, like ZigBee and Bluetooth, compete to offer the IoT a low-power wireless communication option. The Institute of Electrical and Electronics Engineers (IEEE) 802.11ah, LoRa, and 6LoWPAN protocols, among others, are emerging as other wireless technologies. Figure 3 compares the distance coverage, rates, ranges, and power consumption of various wireless communication systems. In this section, IoT protocol stacks are introduced and compared based on their performance criteria. The reduced OSI stack is used for categorization, wherein the presentation and session layers are omitted, as they play no role in the IoT akin to the one they play in Information Technology (IT) networks.

ZigBee
ZigBee was created when the IEEE 802.15.4 standard was approved in 2004, to be utilized in Personal Area Networks (PANs). Therefore, ZigBee is well suited to sensors and control devices. It uses 868 MHz in Europe, 915 MHz in the United States, and 2.4 GHz everywhere else. The protocol stack's physical layer (PHY) and Media Access Control (MAC) layer are defined by the IEEE 802.15.4 standard. In contrast, the ZigBee Alliance determines the network and application layers, as shown in Figure 4. It provides a data rate of 20 kbps at 868 MHz, 40 kbps at 915 MHz, and 250 kbps at 2.4 GHz [57]. ZigBee uses Direct Sequence Spread Spectrum (DSSS) as one of its key modulation techniques to transmit data wirelessly. DSSS works by spreading the signal across a wide range of frequencies, using a unique code to encode each bit of the data. ZigBee's range extends to about 100 m, with low power consumption. The Zigbee PRO standard expands the capabilities of Zigbee networks to include child device management, enhanced security, and alternative network topologies [58]. In addition, Base Device Behavior (BDB) has made the process of adding new devices to the network more streamlined and consistent. Moreover, Zigbee 3.0 bundles all profile clusters into a single standard, the Zigbee Cluster Library (ZCL) v7. To prevent collisions in the shared communication medium, ZigBee employs the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) technique. This ensures that a device detects an idle channel before transmitting data, reducing the probability of collisions and enhancing the overall reliability of the network [59]. ZigBee requires the usage of a block cipher that uses the Advanced Encryption Standard (AES) and a 128-bit key. ZigBee has many applications, such as home automation, industrial control systems, and medical data collection [60]. Figure 5 shows the three primary Zigbee system structure devices: coordinator, router, and end device.
The coordinator is responsible for information management during data transmission and reception. The router serves as an intermediary device, allowing data to pass through. The end device has only limited functionality, communicating with its parent node while running on battery power. A ZigBee network can have a star, tree, or mesh network topology [61]. The features of ZigBee are low power consumption, low cost, fast response, less interference, self-organization, multiple topologies, and high security. Its main limitations are its low data rate and small memory size.
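The DSSS idea mentioned above can be sketched as follows. For brevity this uses an illustrative 8-chip code and per-bit XOR spreading; 2.4 GHz ZigBee actually maps 4-bit symbols onto 32-chip pseudo-noise sequences:

```python
def dsss_spread(bits, chips):
    """Spread each data bit over the chip sequence: a 0-bit is sent as the
    chip code itself, a 1-bit as its complement (XOR per chip)."""
    return [b ^ c for b in bits for c in chips]

def dsss_despread(signal, chips):
    """Correlate each chip-length block against the code to recover bits:
    a block matching most chips decodes as 0, mostly mismatching as 1."""
    n = len(chips)
    bits = []
    for i in range(0, len(signal), n):
        block = signal[i:i + n]
        matches = sum(s == c for s, c in zip(block, chips))
        bits.append(1 if matches < n / 2 else 0)
    return bits

chips = [1, 0, 1, 1, 0, 1, 0, 0]  # illustrative 8-chip spreading code
data = [1, 0, 1]
tx = dsss_spread(data, chips)      # 3 bits become 24 chips on the air
print(dsss_despread(tx, chips))    # [1, 0, 1]
```

Because each bit is redundantly encoded across many chips, a receiver correlating against the code can still recover the bit even when some individual chips are corrupted by narrowband interference.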

BLE
In 1994, Bluetooth was originally invented by Ericsson, and the first specification was published in 2001. The IEEE gave it the 802.15.1 standard in 2002 [62]. Bluetooth has evolved through multiple generations from version 2.0, through the introduction of Low Energy (LE) in Bluetooth version 4.0, to the current Bluetooth version 5.3 [63]. The BLE version 4.0 has a coverage distance of up to 100 m, whereas it can reach up to 400 m in version 5.0 [64]. Furthermore, BLE provides encryption and authentication techniques based on 128-bit Advanced Encryption Standard-Counter with CBC-MAC (AES-CCM) and the Connection Signature Resolving Key (CSRK), respectively. It enforces two main security modes, along with a mixed security mode [65]. The first generation of Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR) was exclusively intended for sharing files via an asynchronous connectionless method. Its connection is a single point-to-multipoint connection that can accommodate both symmetrical and asymmetrical connections and is used for data broadcasting [66]. Figure 6 shows the protocol stack of Bluetooth BR/EDR. There are 79 channels, each with a 1 MHz bandwidth, making up the Bluetooth Industrial, Scientific, and Medical (ISM) band at 2.4 GHz. In the Bluetooth Classic protocol's Radio Frequency (RF) layer, Frequency Shift Keying (FSK) is the modulation method, representing a digital 0 or 1 by switching between two distinct frequencies inside the assigned band. BR uses a variation of this method called Gaussian Frequency Shift Keying (GFSK). EDR was introduced in Bluetooth 2.0, and High Speed (HS), also known as "Alternative MAC/PHY" (AMP), was introduced in Bluetooth 3.0 [67]. The modulation mechanism used by EDR is Differential Phase Shift Keying (DPSK). Piconet and scatternet are two different forms of Bluetooth communication topologies. A piconet is a collection of ad hoc connections among Bluetooth-enabled devices.
A piconet begins with two connected devices and can expand to eight active devices: one master and seven active slaves, with up to 255 additional parked slaves. A piconet is an architecture based on a star topology, in which a slave communicates only with the master, as shown in Figure 7. Figure 8 represents coexisting piconets, with each piconet utilizing the frequency sequence determined by its master [68].
The Bluetooth Special Interest Group (SIG) introduced BLE in version 4.0. It was designed for low-power wireless networks that do not need high throughput and for use in scenarios where Bluetooth was not traditionally appropriate. The link layer, PHY layer, and packet formats were remodeled to obtain lower energy consumption. Furthermore, BR/EDR is only intended for two-way communication [69]. Due to its availability in smartphone devices, its low cost, and its low power consumption, BLE technology has evolved into an effective alternative. It is completely IoT-ready [70,71]. Table 1 represents the two major categories of Bluetooth technology: Bluetooth Classic, which refers to older Bluetooth versions primarily intended for file transmission and audio streaming, and BLE, which refers to newer Bluetooth versions for IoT applications with low power usage. Both BLE and BR/EDR are becoming more popular as consumers desire low power and high throughput. BLE makes use of 37 general-purpose physical channels, as well as three advertising channels. In various applications, one of the shortcomings of the initial version of Bluetooth was that the data rate was insufficient relative to many other wireless protocols, such as 802.11. However, the Bluetooth wireless communication standard has been updated to version 5.0, doubling the previous version's transmission rate [72]. BLE 4.2 and BLE 5 have raw data rates of 1 Mbps and 2 Mbps, respectively. On the other hand, a real BLE system achieves substantially lower throughput, as it must take into consideration a variety of protocol overheads, as well as adaptive RF connection changes to preserve reliable links in the presence of noise. Protocol restrictions arising from BLE data transfer processes and strategies, such as connection duration, frame size, and acknowledgement technique, must also be accounted for.
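The gap between the raw PHY rate and the achievable application throughput can be estimated from the connection parameters. The parameter values below are illustrative assumptions for a back-of-the-envelope sketch, not spec limits:

```python
def ble_throughput_kbps(conn_interval_ms, packets_per_interval, payload_bytes):
    """Rough application-level BLE throughput: payload bytes delivered per
    connection event, divided by the connection interval."""
    bytes_per_second = payload_bytes * packets_per_interval / (conn_interval_ms / 1000)
    return bytes_per_second * 8 / 1000  # bytes/s -> kbps

# e.g. a 7.5 ms connection interval, 4 packets per event, 20-byte payloads
print(round(ble_throughput_kbps(7.5, 4, 20), 1))  # 85.3
```

Even under these fairly generous assumptions, the application sees well under a tenth of the 1 Mbps PHY rate, which is why connection interval, frame size, and acknowledgement strategy matter so much in practice.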
In its early stages, BLE was designed around a star topology, where the master module was at the network's center and the slaves were at its periphery. Therefore, the master was the only point of contact for all messages. In 2017, Bluetooth SIG announced that mesh topology is supported in Bluetooth standard 5.0. This transformation has led to the development of several mesh network mechanisms and, consequently, to the classification of various types of BLE mesh networks [73]. Bluetooth mesh networking provides many-to-many (m:m) device connections. Most BLE mesh protocols are built as layers on top of a standard Bluetooth star network. Mesh networking can be used in a variety of industries, such as smart buildings, notably in commercial lighting systems and sensor network solutions. In addition, it is suitable for large-scale device networks and IoT deployments with multiple devices communicating with each other. The BLE stack was modified to encrypt and authenticate all mesh messages using provisioning data and the application key, and to relay them; it is also responsible for segmenting and reassembling mesh messages as needed. The mesh topology of Bluetooth is therefore highly beneficial for smart homes/offices and industrial controls. Although ZigBee is well suited to home automation, Bluetooth may eventually take over because it is available on all computers and mobile phones, making it simple for users to manage their home offices from their smart devices.

Beacon Technology (iBeacon)
iBeacon technology emerged as an advancement over Near Field Communication (NFC) and Quick Response (QR) code technologies. An iBeacon is a tiny device that uses BLE to periodically transmit specific data across a predetermined area. Thanks to BLE's low energy consumption, a coin cell can power it for up to two years, although the transmission output power (TX power) and advertisement period impact battery lifetime. Beacons have a maximum range of 70 m; nevertheless, this may be significantly diminished by surrounding obstacles. Estimote, Kontakt, Gimbal, and other vendors produce BLE beacons. A beacon comprises a Bluetooth chipset (including firmware), a battery, and an antenna. The main producers of BLE chips are Texas Instruments, Nordic Semiconductor, Bluegiga, and Qualcomm.
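Receivers commonly estimate their distance from a beacon by comparing the received signal strength (RSSI) with the beacon's advertised TX power (the calibrated RSSI at 1 m) under a log-distance path-loss model. A minimal sketch, assuming free-space propagation (exponent 2.0) and an illustrative -59 dBm calibration value:

```python
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Log-distance model: d = 10^((TxPower - RSSI) / (10 * n)).
    tx_power_dbm is the calibrated RSSI measured at 1 m (assumed -59 dBm here)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(estimate_distance_m(-59))           # 1.0 m: RSSI equals calibration value
print(round(estimate_distance_m(-79), 1)) # 10.0 m in idealized free space
```

In real deployments the path-loss exponent must be tuned per environment, and obstacles make the estimate coarse at best.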

Z-Wave
Z-Wave is a low-cost, low-energy, reliable RF technology for short-distance wireless communication systems [74]. It was originally developed in 1999 by Zensys, a Danish company based in Copenhagen [75]. It is a proprietary technology that merges sensors and actuators over RF to provide smart home and office automation services. Although the protocol is publicly disclosed, details about the network layer remain unavailable for analysis; the Z-Wave routing protocol's frame-forwarding and topology-management aspects have therefore been reverse-engineered using a real-world Z-Wave network [76]. It mainly uses strong AES 128-bit encryption to secure connected devices. A Z-Wave network can handle up to 232 devices, or up to 4000 nodes on a single network with Z-Wave Long Range [77].
As shown in Figure 9, Z-Wave is a reduced-MAC protocol commonly used in home automation systems, as well as in various other IoT applications. It has a range of up to 100 m, allows point-to-point communication, and is ideal for sending short messages. It employs CSMA/CA for reliable communication, including a small acknowledgment message. The Z-Wave infrastructure comprises a master-slave system: the master manages the network's scheduling and commands all slaves linked to it [78].

Wi-Fi
Wi-Fi is a subset of the IEEE 802.11 protocol family that enables users to establish wireless connections to the Internet. It is a popular option for many IoT applications, since it is well known and widely available, including in people's homes, workplaces, and other locations [61]. It utilizes various encryption protocols for security, namely Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA), and Wi-Fi Protected Access Version 2 (WPA2) [79]. IEEE 802.11 is a set of protocols for wireless LANs (WLANs). It has evolved through multiple generations: 802.11b, 802.11a, 802.11g, 802.11n, 802.11ac, 802.11ah, 802.11ax, and, finally, 802.11be. The Wi-Fi Alliance adopted a new naming strategy for the 802.11 protocols in 2018 to make them easier to remember. Table 2 compares the old and updated naming conventions, along with the features of each and the technologies utilized. The data rates offered by these specifications range from 11 Mbps (802.11b) to tens of Gbps (802.11be). The communication range of Wi-Fi is roughly between 20 m (indoors) and 240 m (outdoors) [80].
Among these amendments, IEEE 802.11ah (Wi-Fi HaLow) is a solution that enables lower power consumption, better coverage, quality of service, scalability, and affordability for a variety of IoT applications. Wi-Fi HaLow is one of the Low-Power Wide Area Network (LPWAN) technologies that operate at frequencies below 1 GHz [81]. It was developed to connect large numbers of IoT and M2M devices [82]. However, the security features of 802.11ah have a significant implementation problem in meeting the requirements of resource-limited IoT nodes [83]. Although Wi-Fi HaLow was designed to facilitate long-distance connectivity while using less energy, it still suffers from high power consumption compared to other IoT protocols such as BLE and ZigBee. Wi-Fi has advanced to its sixth generation with the release of IEEE 802.11ax, which outperforms its predecessor, 802.11ac [84]. Unlike the previous amendments, Wi-Fi 6 includes numerous significant wireless advances that are especially useful for IoT applications. The adoption of 2.4 GHz is the first major modification, since Wi-Fi 5 can only operate at 5 GHz; the 5 GHz range has less RF interference, but worse wall penetration and higher battery drain than 2.4 GHz [85]. Like OFDMA, multi-user (MU) transmission is a feature accessible only via Wi-Fi 6 [86]. MU transmission was developed to accommodate a high number of interconnected devices while maintaining a minimal collision rate and access time, and it permits numerous synchronous transmissions from various stations. Meanwhile, the upcoming Wi-Fi 7 update will provide new capabilities for time-sensitive networking (TSN) [87]. High quality of service (QoS) applications, such as remote healthcare monitors and robotic management, will be supported by this version thanks to the newly proposed features [88].
Wi-Fi 6 adopted OFDMA to increase spectrum effectiveness and enable large-scale operations, but Wi-Fi 7 is anticipated to further optimize OFDMA efficiency by adding characteristics to the OFDMA process [89]. Allocating spectrum resources through an access point (AP) scheduler may have an impact on latency: when it obtains a transmission opportunity (TXOP), the AP acts as a scheduler and requests uplink transmissions from the stations. As a result, an optimal scheduling strategy is necessary to achieve the best delay performance [90]. The definition of a low-latency access category (LL-AC) has the potential to provide reliable operation at a predetermined latency [91]. This will be extremely beneficial for a wide range of latency-sensitive applications, including gesture recognition, object control and tracking, healthcare monitoring, and other industrial IoT applications [92].
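The impact of AP scheduling on access delay can be sketched with a simple round-robin resource-unit (RU) model: with more stations than RUs per TXOP, a station may wait several scheduling rounds before transmitting. The RU count and TXOP duration below are assumed illustrative values, not figures from the discussion above.

```python
import math

def worst_case_access_delay_ms(stations, rus_per_txop=9, txop_ms=4.0):
    """Worst-case uplink wait under a naive round-robin RU schedule.
    Assumed illustrative parameters: 9 RUs served per TXOP, 4 ms per TXOP."""
    rounds = math.ceil(stations / rus_per_txop)  # scheduling rounds to reach every STA
    return rounds * txop_ms

print(worst_case_access_delay_ms(9))   # 9 STAs fit in one round: 4.0 ms
print(worst_case_access_delay_ms(30))  # 30 STAs need 4 rounds: 16.0 ms
```

This is exactly the kind of delay growth that smarter schedulers and the proposed LL-AC aim to bound.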

6LoWPAN
In 2007, the IETF working group developed the 6LoWPAN protocol [93]. Using IEEE 802.15.4 standard radios, 6LoWPAN operates in the 2.4 GHz ISM band across the globe, at 915 MHz in North America, and at 868 MHz in Europe. The signal range is up to 100 m, and the highest data rate is 250 kbps [94]. The group has specified encapsulation and header compression algorithms to send and receive IPv6 packets over IEEE 802.15.4-based networks, as shown in Figure 10. This reduces the IPv6 header size from 40 bytes to 7 bytes. 6LoWPAN uses a self-healing, time-synchronized, and self-organizing mesh architecture [57]. Furthermore, 6LoWPAN integrates the most recent generation of the Internet Protocol (IPv6) with LoWPAN networks, allowing low-powered devices to communicate over short distances [95]. Therefore, it is necessary to implement an adaptation layer employing header compression to transmit IPv6 packets across IEEE 802.15.4 networks efficiently. Furthermore, because most of the routing methods used on standard IPv6 networks are incompatible with the constrained networks on which 6LoWPAN operates, a new routing protocol, called the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL), was explicitly created for 6LoWPAN [93]. RPL is a popular routing protocol due to its flexibility, energy-efficient routing capacity, and QoS support [96]. It is based on source and distance-vector routing. Point-to-point, point-to-multipoint, and multipoint-to-point topologies are all supported by RPL. RPL assembles nodes into a Destination Oriented Directed Acyclic Graph (DODAG) topological structure [97]. The IPv6 address space is vast (roughly 5 × 10^28 addresses per person), so addresses can be assigned independently to each device using the 6LoWPAN protocol. Regarding security, 6LoWPAN utilizes the strong AES-128 link-layer security mechanisms introduced by IEEE 802.15.4.
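The motivation for header compression is easy to quantify: an IEEE 802.15.4 frame carries at most 127 bytes, so uncompressed IPv6 (40 bytes) and UDP (8 bytes) headers consume much of it. A rough budget, assuming an illustrative 25-byte MAC/security overhead and ignoring UDP header compression:

```python
def payload_budget(ipv6_header=40, udp_header=8, mac_overhead=25, frame=127):
    """Application bytes left in one 802.15.4 frame after headers.
    The 25-byte MAC overhead is an assumed illustrative figure."""
    return frame - mac_overhead - ipv6_header - udp_header

print(payload_budget())               # uncompressed IPv6: 54 bytes for data
print(payload_budget(ipv6_header=7))  # with the 7-byte compressed header: 87 bytes
```

The compressed header thus recovers more than half again as much application payload per frame, which is why the adaptation layer is essential.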
A typical 6LoWPAN device may have an average current consumption of a few tens of microamperes (µA) during normal operation and a sleep current consumption of a few nanoamperes (nA) when the device is in a low-power sleep mode [98]. Applications of 6LoWPAN include automation, industrial monitoring, smart grid, and smart home.

Wi-SUN
The Wireless Smart Utility Network (Wi-SUN) protocol was developed in 2012 and became an open standard based on IEEE 802.15.4g [95]. Wi-SUN is a mesh networking protocol that supports bidirectional communication and operates in the 800 MHz, 900 MHz, and 2.4 GHz frequency bands, depending on the region. A Wi-SUN network is made up of a border router, which connects the mesh network and the backhaul network, and Wi-SUN end-nodes [99]. Owing to its effectiveness in header compression, 6LoWPAN is integrated into the Wi-SUN profile, so Wi-SUN provides complete IP packets with header compression to minimize bandwidth. Therefore, long-range connectivity is assured, and the battery lifespan is extended. Wi-SUN has a power consumption of less than 2 µA (resting) and 8 mA (listening). It has data rates of up to 300 kbps and latency in the tens of milliseconds range [100]. It supports multiple modulation techniques for wireless communication, including OFDM and DSSS. Wi-SUN connectivity schemes are classified into three types [101]. Category "1" is a wide-area open-space information sensing and monitoring system; this category is based on fixed point-to-multipoint communication and has a 1-5 km range. Category "2" is a wide-area urban information sensing and monitoring system; in this category, multi-hop operation between radio devices or via Wi-SUN routers may be used. Category "3" is a wide-area mobile communication information sensing and monitoring system. Wi-SUN technology offers a set of stacks, also known as profiles, that are optimized for certain uses [102]. The profiles created by the working groups (WGs) in the Wi-SUN alliance are summarized in Figure 11.

LoRa
LoRa wireless technology sends low-power data packets over a long distance to a receiver node [104]. It originated in 2009, when two friends from France, Nicolas Sornin and Olivier Seller, decided to create a low-power, long-range modulation method. The LoRa architecture employs a network server, an application server, a gateway that manages numerous devices simultaneously [105], and other components, most of which have different wireless communication designs, as shown in Figure 12. According to the LoRa Alliance, LoRa wireless technology uses the LoRaWAN protocol, designed for battery-operated wireless devices. The LoRaWAN network architecture can be implemented as a star topology with bidirectional communication between end nodes and gateways. Its modulation technique is derived from Chirp Spread Spectrum (CSS) technology, as it uses chirp pulses to encode information over radio waves. LoRa provides up to three miles (five kilometers) of coverage in urban regions, and up to ten miles (15 km) or more (line of sight) in rural areas [106]. It supports mutual authentication, integrity protection, and confidentiality, relying mainly on the standardized AES cryptographic algorithm. The applications of LoRa include smart agriculture, the industrial internet of things, smart supply chains, smart environments, and smart buildings [107].
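CSS modulation trades data rate for range through the spreading factor (SF). The nominal LoRa bitrate follows the standard relation Rb = SF · (BW / 2^SF) · CR; the sketch below assumes the common 125 kHz bandwidth and 4/5 coding rate as defaults.

```python
def lora_bitrate_bps(sf, bw_hz=125_000, coding_rate=4/5):
    """Nominal LoRa bitrate: Rb = SF * (BW / 2^SF) * CR.
    Defaults (125 kHz, CR 4/5) are common assumed settings, not mandated values."""
    return sf * (bw_hz / 2**sf) * coding_rate

print(round(lora_bitrate_bps(7)))   # SF7:  ≈ 5469 bps (short range, fast)
print(round(lora_bitrate_bps(12)))  # SF12: ≈ 293 bps (long range, slow)
```

Each SF step roughly halves the bitrate while extending reach, which is the core range-versus-rate trade-off of LoRa.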

LoRaWAN
The LoRa Alliance, which was founded in February 2015, created a layer on top of LoRa technology called LoRaWAN. It establishes functionalities for communication [108] (shown in Figure 13). LoRaWAN is one of the most widely-used low-power wireless technologies for communication over long distances featuring low data rates. LoRaWAN is ideal for sending small payloads, such as sensor data, over long distances. This offers a substantially longer communication range while maintaining low bandwidths, compared with other wireless data transmission technologies [109]. It also provides a coverage distance of up to 25 miles in line-of-sight or 800 m through buildings. It mainly utilizes the Advanced Encryption Standard (AES) 128-bit symmetric encryption as a security technique. Many applications utilize the LoRaWAN protocol, such as smart agriculture, smart city, and smart industrial control [110].
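Long range at low data rates also means long airtime per frame, and regional regulations cap that airtime through duty-cycle limits. A sketch of the resulting uplink budget, assuming the 1% duty cycle common in the EU 868 MHz band and illustrative time-on-air values (not figures from the text):

```python
def max_uplinks_per_hour(time_on_air_s, duty_cycle=0.01):
    """Regulatory duty cycle caps total airtime per hour.
    duty_cycle=0.01 assumes the 1% limit used in parts of the EU 868 MHz band."""
    return int(3600 * duty_cycle / time_on_air_s)

print(max_uplinks_per_hour(1.5))   # slow high-SF frame (~1.5 s on air): 24 uplinks/hour
print(max_uplinks_per_hour(0.06))  # fast low-SF frame (~60 ms on air): 600 uplinks/hour
```

This is why LoRaWAN suits small, infrequent sensor payloads rather than continuous streams.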

NB-IoT
Narrow-Band IoT (NB-IoT) is a mobile communications protocol with a bandwidth of 180 kHz that the 3GPP standardization group standardized in 2016 [111]. With each 3GPP release, NB-IoT improved its support for higher data rates, higher network capacity, better power utilization, better compatibility with 5G New Radio (NR), and more. In 2022, Release 17 was standardized, introducing 16-QAM in UL/DL as well as support for up to 14 HARQ processes. NB-IoT is increasingly being used to support machine-type communication over 5G and Long-Term Evolution (LTE) networks [112], serving ultra-low-power IoT devices that benefit from enhanced coverage, deep penetration, deployment flexibility, and lower energy consumption. In addition, the variety of LTE networks' frequency bands provides flexibility in deployment options [113]. NB-IoT has three distinct operational modes. The narrowband is used inside an LTE carrier when operating in the in-band option. The guard-band mode allows NB-IoT to take advantage of LTE's spare bandwidth. In the standalone configuration, the narrowband uses its own frequency band [114]. In all deployment modes, NB-IoT offers strong penetrating power and the best coupling-loss performance. NB-IoT's enhanced indoor coverage, reduced latency, high sensitivity, ultra-low device cost, minimal power consumption, and inherited LTE security are other remarkable features [115]. NB-IoT supports Standalone (SA), Guard-band (GB), In-band (IB), and Hybrid topologies [116]. In the SA topology, NB-IoT operates as a standalone network without any interconnection to existing cellular networks; it is suitable for deployment in remote or rural areas where no cellular infrastructure is available. In the GB topology, NB-IoT operates in the guard band of existing cellular networks, using the unused spectrum between the DL and UL frequency bands. This allows NB-IoT to coexist with existing cellular networks and reuse the existing infrastructure, making it cost-effective.
In the IB topology, NB-IoT operates in the same frequency band as the existing cellular networks, sharing the same infrastructure. It is suitable for deployments in urban areas where the existing cellular networks have high capacity and coverage. In the Hybrid topology, NB-IoT can operate in both the guard-band and in-band modes simultaneously, allowing for more flexibility in network deployment. NB-IoT has a coverage distance of approximately 1 km in urban areas and 10 km in rural areas. Common applications of NB-IoT include smart metering, smart buildings, and smart parking solutions. The network structure of NB-IoT is based on five components: terminal, base station, core network, cloud platform, and vertical business center [117]. These components are shown in Figure 14.
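The urban versus rural coverage figures can be sanity-checked by inverting a log-distance path-loss model against NB-IoT's 164 dB maximum coupling loss (MCL) design target. The 1 m reference loss and the path-loss exponents below are assumed illustrative values, not measured parameters:

```python
def range_m(mcl_db=164.0, pl_1m_db=31.5, n=3.5):
    """Invert a log-distance model, PL(d) = PL0 + 10*n*log10(d), for range in meters.
    Assumptions: 164 dB MCL (NB-IoT target), ~31.5 dB free-space loss at 1 m
    around 900 MHz, and illustrative path-loss exponents n."""
    return 10 ** ((mcl_db - pl_1m_db) / (10 * n))

print(round(range_m() / 1000, 1))        # dense urban (n=3.5): on the order of 6 km
print(round(range_m(n=3.0) / 1000, 1))   # suburban (n=3.0): on the order of 26 km
```

Real cells fall well short of these idealized numbers once fading, interference, and building penetration margins are subtracted, consistent with the ~1 km urban figure above.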

PLC
The PLC protocol uses power transmission lines to send data [118]. It originally emerged in 1838, when a remote measurement system was introduced for monitoring the battery levels of sites far from a telegraph system. PLC operates at frequencies of 300-500 kHz, with data rates of 10-500 kbps over distances of up to 3 km. It is ideal for smart grid communication in highly populated locations, since it has high throughput and low latency. It can be employed in practically every aspect of a smart grid environment, from low-voltage residential appliances to high-voltage grid control [119]. It utilizes the AES and DES cryptographic mechanisms for security purposes. According to the current carried by the electric wires, PLCs can be categorized into PLC over AC (Alternating Current) or PLC over DC (Direct Current) [120]. In PLC networks, the modem node modulates the supplied data before injecting it into the transmission medium and sending it to its target [121]. The modulation methods that may be used with this system are spread-FSK, binary phase-shift keying (BPSK), OFDM, and FSK [122]. PLC has been used in a variety of domains, such as narrowband PLC radio broadcasting, networking, and transportation [123]. PLC has three main categories: Narrowband PLC, Mid-band PLC, and Broadband PLC. Narrowband PLC (NB-PLC) uses frequencies below 148.5 kHz in the European (EU) CENELEC bands and below 490 kHz under the Federal Communications Commission (FCC) bands. It has a low data rate, supports large-scale connectivity of up to 1000 nodes, and has a transmission distance of more than 1 km. It is used in low-voltage power distribution network automation and meter reading. Mid-band PLC has a frequency of 0.7 to 12 MHz. It has low latency and high reliability of up to 99.99%. It is used in smart traffic lights and smart meters. Broadband PLC operates at 1.8 to 30 MHz (or up to 100 MHz in some specifications). It has a large bandwidth, with a latency of less than 50 ms and a transmission distance of less than 200 m.
It is used in home broadband access and interconnection [124]. Table 3 illustrates that some alliances have produced prominent standards for NB-PLC, such as Powerline Intelligent Metering Evolution (PRIME) (ITU-T G.9904), G3-PLC (ITU-T G.9903), IEEE 1901.2-2013, and ITU-T G.hnem [125]. G3-PLC and PRIME are open and have been adopted as starting points for the official ITU standard G.9955, the narrowband OFDM-based PLC transceivers PHY specification [126]. PRIME: a PLC technology based on the ITU-T G.9904 specification. It takes advantage of already-existing medium- and low-voltage power distribution networks to efficiently connect the components of a smart grid through the use of OFDM technology. The PRIME Alliance created PRIME technology, which the ITU has designated as an international standard. Version 1.4 of PRIME is an improvement over version 1.3 (v1.3): its PHY and MAC updates enable several enhancements, including increased resilience, faster data transfer rates, expanded capacity, more band-planning flexibility, and IPv6 capability in the convergence layer. Additionally, these innovations are compatible with PRIME v1.3 products already on the market. G3-PLC: the G3-PLC Alliance created a smart-grid-oriented communication protocol that includes the PHY, MAC, and 6LoWPAN network layers. With a high data rate and high-speed, long-range communication capacity over the current power-line grid, the G3-PLC standard can also cross distribution transformers [127].

Ethernet
Ethernet was created to provide link-layer communication via packet switching. It was originally developed by Bob Metcalfe at the Xerox Palo Alto Research Center in 1973 and was initially carried over thick copper coaxial cables. It is commonly used for connecting devices within local area networks and provides communication over distances of up to 100 m per segment. Its modulation technique is based on Pulse Amplitude Modulation (PAM), and it supports both bus and star topologies [128]. It translates datagrams from the network layer above into frames for transmission across wireline networks. The IEEE 802.3 standard governs Ethernet, which allows for nominal transfer rates of up to 400 Gbps. Ethernet is divided into numerous wiring and signaling variants of the OSI physical and data link layers [129]. Its power consumption varies depending on many factors, such as the Ethernet type, the standard used, cable length, and the power efficiency of the devices involved. The 10BASE-T and 100BASE-T standards typically consume less than 5 W per port, while newer standards, such as 1000BASE-T and 10GBASE-T, may consume 15 W or more per port [130].
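The framing step described above can be sketched by packing a minimal Ethernet II header. The MAC addresses and EtherType below are illustrative placeholders; real frames also carry a payload and a trailing CRC-32 frame check sequence, both omitted here.

```python
import struct

def ethernet_header(dst_mac, src_mac, ethertype=0x0800):
    """Pack a 14-byte Ethernet II header: destination MAC, source MAC, EtherType.
    0x0800 marks an IPv4 payload; the MACs used below are illustrative."""
    to_bytes = lambda mac: bytes(int(b, 16) for b in mac.split(":"))
    return to_bytes(dst_mac) + to_bytes(src_mac) + struct.pack("!H", ethertype)

hdr = ethernet_header("ff:ff:ff:ff:ff:ff", "02:00:00:00:00:01")
print(len(hdr))        # 14 bytes of header
print(hdr[-2:].hex())  # "0800" -> IPv4 payload follows
```

The broadcast destination (`ff:ff:ff:ff:ff:ff`) is what, for example, ARP requests use to reach every station on the segment.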

G3-PLC Hybrid PLC and RF Profile
G3-PLC systems based on the G3-PLC Alliance specification have been utilized in several Advanced Metering Infrastructure (AMI) and smart grid deployments across numerous nations worldwide. In 2020, the G3-PLC Alliance decided to start work on defining the G3-PLC Hybrid PLC and RF profile to expand G3-PLC's versatility and relevance in global markets [131]. The new hybrid protocol stack combines the primary G3-PLC media with a supplementary channel made up of PHY and MAC RF lower layers based on IEEE 802.15.4 and IEEE 802.15.4v Smart Utility Network (SUN) FSK RF technologies, as shown in Figure 15. The first version of the G3-PLC Alliance hybrid companion standard supports the 863-870 MHz spectrum range. Additionally, the adaptation layer uses the Lightweight On-demand Ad hoc Distance-vector Routing Protocol - Next Generation (LOADng) routing scheme to determine the transition between the primary (PLC) and secondary (RF) media [132]. By providing suitable interfaces, the hybrid abstraction layer links the 6LoWPAN adaptation layer to the dual lower-layer stacks. This increases the overall reliability of the hybrid protocol stack by enabling communication via the alternative medium if the specified channel is identified in the routing table but fails during transmission [132]. The LOADng routing protocol provides a mechanism for assessing connection costs over RF networks, generating optimal routes based on the routing metric and keeping them up to date.

IoT Hardware Platforms
The computing power of IoT devices is determined by their processing elements, such as microcontrollers (MCUs), systems-on-chip (SoC), systems-in-package (SiP), and field-programmable gate arrays (FPGAs). There are many different educational and evaluation boards available for running IoT applications, including Arduino, Raspberry Pi, UDOO, Intel Galileo, FriendlyARM, BeagleBone, Gadgeteer, and T-Mote Sky. The IoT is also implemented using a wide range of devices and computerized systems. SoC and SiP technologies are often used to create semiconductor options for IoT devices. While the SoC technique enables semiconductor processes to combine analog, digital, mixed-signal, and RF circuitry on a chip, SiP technology uses packaging processes to combine independently fabricated functional elements, including MCUs, oscillators, and even antennas, into a single package. SoC technology can increase system stability and usability while greatly decreasing overall system cost, although there are trade-offs in device performance and energy usage. SiP devices, on the other hand, boost unit speed and improve power utilization at the expense of reduced reliability and greater system cost, due to the variety of materials and procedures used in fabricating the functional units inside the packages [133]. Table 4 compares various industrial IoT-compliant SoCs and SiPs from different vendors. Each hardware module supports one or more wireless communication technologies. Firmware, in turn, is crucial to computational platforms, since it controls the execution of a device throughout its life cycle [134]. Some IoT applications may be improved with the help of Real-Time Operating Systems (RTOSs) [135]. For instance, the Contiki RTOS has seen extensive use in IoT applications; other embedded IoT options include TinyOS, RiotOS, and LiteOS [136]. One other essential computational area for the IoT is cloud platforms.
These solutions advocate for the use of connected devices for data transmission to the Internet, for the periodic analysis of big data sets, and for end-users to reap the most benefit possible from the insights gained from the analysis of this data. The cloud platforms and accessible sites to deploy IoT services are many, and a lot of them are provided at no cost and geared toward commercial use.

IoT Simulation Tools
Due to increasing attention to IoT and WSNs, modern simulators are becoming more and more widespread [137]. Selecting a reliable simulator is a challenging and time-consuming endeavor, particularly in the WSN arena, where several complicated situations and various protocols require network simulators with specific functionality. Many WSN simulators, including OpenDSS, Network Simulator-2 (NS-2), NS-3, OMNET++, GridLab-D, and GloMoSim, have been created to support the simulation of IoT setups.

OpenDSS
OpenDSS is a distribution system simulator that is free to use. It was created as an open-source power system simulator for the electric distribution system by the Electric Power Research Institute (EPRI) [138]. It supports general AC circuit analysis, annual load and generation simulation, wind power simulation, and annual power flow simulation. In addition, it aids in handling fault analyses [138].

NS-2/NS-3
NS-2 is a free network simulator that models communication protocols and network topologies. Both wireless and wired networks are supported. Users can play, pause, forward, and stop the simulation using the network animator; however, it is a discrete-event simulator rather than a real-time virtual world [139]. NS-3 is a new simulator written from scratch rather than a backward-compatible successor to NS-2. NS-3 supports both parallel and emulation-based simulation [140].

OMNET++
OMNET++ is a free and open-source simulator that supports macOS, Windows, and UNIX. OMNET++ is built from simple (unit) modules that emphasize the model's atomicity; multiple simple modules can be combined to create a compound module [141].
These unit modules are written in C++, and the NEtwork Description (NED) language is responsible for integrating them into a network simulation setup. This tool covers a wide range of areas, including ad hoc networks, peer-to-peer networks, IPv6 networks, sensor networks, storage area networks, and wireless networks [142].

GridLab-D
GridLab-D employs end-use models, such as consumer, equipment, and appliance models, operations and business simulation tools, retail market models, agent-based modeling methodologies, and SynerGEE's power distribution model [143]. In addition, it can incorporate third-party analysis software and data management [144].

MATLAB/Simulink
MATrix LABoratory (MATLAB), developed by MathWorks, comes with a graphical modeling interface called Simulink [145]. It provides many capabilities: algorithm development, graphics, application building, parallel computing, and data analysis [146].

GloMoSiM
The Global Mobile Information System Simulator (GloMoSim) is written in Parsec and C and is designed primarily for parallel discrete-event simulation. It supports wireless and satellite communication systems with thousands of nodes and heterogeneous connectivity. In addition, it includes a simulation library and a Parsec compiler [147]. Figure 16 compares various IoT simulators based on IoT criteria and features, with justifications provided for each chosen criterion. These criteria include availability (whether the simulation tool is open-source or licensed), the simulator's programming language, how easily future hardware models can utilize the simulated primitives, the largest network scale that can be simulated, and whether the simulator supports emulation.

IoT Challenges
Despite the advantages of IoT when it comes to many applications, such as Agriculture 4.0, Industry 4.0, and wearable devices, IoT faces various challenges [148]. In this section, we discuss the IoT challenges in multiple aspects, such as standardization, scalability, heterogeneity, interoperability, availability of services, power consumption, and environmental concerns.
Standardization of the IoT is considered one of the main challenges to developing IoT applications, due to the diversity of technologies and standards. IoT architecture and communication technology standardization are seen as the foundation for future IoT growth [149]. The use of open standards is one of the essential elements for the successful implementation of IoT. Due to their accessibility to the general public, these standards play a significant role in fostering innovation. A collaborative, consensus-based decision-making process is utilized to improve interoperability for IoT systems using various technologies [12]. Additionally, IoT architecture must provide interoperability and support full mobility to ensure uninterruptible service. Hence, adopting an open and standardized architecture is considered one of the main challenges in IoT [150].
Interoperability and integration of IoT face critical challenges since the development of IoT systems utilizes a range of protocols and technologies from different vendors. This leads to significant heterogeneity and interoperability problems. Therefore, it is necessary to employ a layered framework with a defined architecture to resolve this problem [151]. Considering the availability of service, coverage is a major obstacle that must be addressed in order to successfully manage the dynamics of IoT systems. Availability refers to the idea that every authorized item should have access to IoT applications at all times and from any location [152]. Ensuring smooth connectivity and desired availability requires the linked nodes to be adaptive and intelligent. Regardless of mobility, dynamic network topology change, or currently employed technologies, the network's availability and coverage area must allow for the continued use of services. All of this necessitates the use of handover, interoperability, intelligent offloading systems, and recovery procedures in the event of some unattended operations [153].
Scalability of the IoT is the ability to expand the capacity of the IoT system while maintaining the stable performance of its current services. Supporting a massive number of different devices with memory, computation, bandwidth, and other resource limitations is a major challenge in scalability [154]. Scalable procedures must be implemented for effective device detection, as well as to make those devices interoperable. A layered framework and architecture must be used to facilitate scalability and interoperability. Future IoT system development will face significant challenges in designing IoT architectures that support scalability, since they must manage a large number of system-connected devices [155].
The power and energy consumption in IoT systems is considered another challenging area, especially when it is necessary to design low-power chipsets and supply reliable power to sensors and devices [156]. This becomes more complex when the device's battery is located in distant places where it is restricted and costly to be replaced. Although the development of wireless power technologies is still in its infancy, they have the potential to send power over a considerable distance. More research should be conducted to lower device costs and power usage while increasing device capabilities (such as processing and networking), especially for edge computing nodes [157].
For edge computing nodes employed in industrial IoT (IIoT) systems, personalization and responsiveness of service are high-priority issues [158]. Devices must therefore adapt to the specific needs of each connected node in order to provide reliable outcomes. Moreover, given the dynamic and non-deterministic nature of industrial processes, services must be responsive, adaptable, and robust to future scenarios [159].
When it comes to privacy preservation, secure data transfer to the distant cloud is considered one of the most challenging issues [160]. The majority of device security and privacy concerns stem from inadequate authentication and authorization, a lack of transport encryption, and insecure web interfaces, software, or firmware [161]. Security and privacy must be taken into account from a variety of perspectives, including legal, social, and cultural ones, to increase confidence in IoT systems. Every level of an IoT architecture needs to include security functionalities, and effective trust management must be implemented [162].
Regarding environmental issues, the Internet of Things affects the environment in both beneficial and harmful ways. Since more and more devices are deployed every day, future research should pay greater attention to the concept of "environmental friendliness" [12]. Environmental sustainability is one of the biggest issues today because of rising energy needs and electronic waste. To reduce the carbon footprint and, in turn, various negative effects on human health, more work should be conducted to reduce energy consumption, use renewable energy sources, and shrink the size of equipment so as to use less non-biodegradable material. With increasing demand, the Internet consumes up to 5% of global energy [163]. Therefore, this is another issue that will need to be taken into account when developing IoT-based systems in the future. New green ICT (Information and Communications Technology) enabling technologies that adhere to general green ICT principles must be considered when developing IoT systems [164].

IoT Future Directions
By 2030, the 6G standard for wireless communications could make it possible for IoT networks to have coverage everywhere [165]. In the 6G era, satellite-based communications are seen as an auspicious way to meet the needs of IoT services. For IoT applications such as live geo-location tracking, eco-monitoring, and the prediction of global disasters, seamless universal coverage and interconnection are vital. However, terrestrial IoT networks such as Sigfox, LoRa, and NB-IoT cannot satisfy the 6G IoT prerequisites of broad coverage and increased reliability [166]. On the other hand, satellite-based systems are available everywhere, can operate in any weather, and are very reliable. Therefore, satellite systems must be integrated into 6G networks to fulfill new IoT needs. Hybrid satellite-terrestrial relay network (HSTRN) technology has also been proposed to provide fully reliable communication in both high-altitude and remote areas by using terrestrial stations as relays to forward and enhance satellite messages to the recipients [167,168].
The advance of 6G communications envisions a global, interconnected network of satellites and aerial platforms powered by AI and big data to address challenges of scalability, ultra-low-power consumption, minimal latency, privacy preservation, personalization, responsiveness, and universal coverage, even in the most inaccessible corners of the globe [169][170][171]. In this section, we present the most advanced research directions to address the mentioned challenges.
The need for high spectral efficiency and massive connectivity requires the creation of low-cost, dependable, and scalable networks [172]. The multiple access techniques used in these networks have a significant impact on their effectiveness, making it necessary to implement next-generation multiple access (NGMA) systems. Orthogonal multiple access (OMA) schemes, utilized in 1G to 4G cellular networks, will no longer be able to handle the anticipated explosion in data traffic and device volume. To overcome this challenge, efficient resource allocation should be used: effective resource distribution techniques allow the network to be enhanced in terms of coverage and performance. Many studies have focused on developing multiple access mechanisms to handle the exploding data traffic from IoT devices [173]. As a multiple access approach, non-orthogonal multiple access (NOMA) has attracted much interest. NOMA is superior to OMA in many respects, including greater spectral efficiency, better cell-edge performance, relaxed channel feedback requirements, and lower network delay [174].
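The spectral-efficiency idea behind power-domain NOMA can be illustrated with the standard two-user downlink rate expressions, where the near (strong) user applies successive interference cancellation (SIC) and the far (weak) user decodes while treating the other signal as interference. The channel gains and power split below are illustrative normalized values, not taken from the cited studies:

```python
import math

def noma_rates(g_near, g_far, p_near, p_far, noise=1.0):
    """Achievable rates (bits/s/Hz) for two-user power-domain downlink NOMA.

    The far user decodes its own signal treating the near user's signal
    as interference; the near user first removes the far user's signal
    via SIC and then decodes interference-free.
    """
    # Far (weak) user: near-user signal acts as interference
    r_far = math.log2(1 + p_far * g_far / (p_near * g_far + noise))
    # Near (strong) user: interference-free after SIC
    r_near = math.log2(1 + p_near * g_near / noise)
    return r_near, r_far

# Illustrative values: stronger channel for the near user, and most of
# the transmit power allocated to the weaker (far) user, as is standard.
r_near, r_far = noma_rates(g_near=10.0, g_far=1.0, p_near=0.2, p_far=0.8)
```

Allocating more power to the weaker user is what makes SIC work: the strong user can reliably strip off the high-power far-user signal before decoding its own, so both users share the same time-frequency resource instead of splitting it orthogonally.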
The fact that the next wave of IoT necessitates network connectivity for a vast number of wireless devices creates numerous research and deployment constraints [175]. Rate-splitting multiple access (RSMA) is regarded as a viable strategy, as it allows sequential decoding to realize the full capacity region of the multiple access channel. It is a more flexible and efficient transmission mechanism than NOMA. In particular, RSMA is a helpful technique for decreasing collisions in IoT sensor networks that use random access (RA) techniques [176]. Beyond its use in obtaining high throughput, RSMA has the potential to be applied in massive IoT scenarios with a large number of connected devices [175].
Another promising technology is the reconfigurable intelligent surface (RIS). For many IoT networks, RIS has become a crucial transmission mechanism [177]. A RIS comprises a large number of inexpensive passive antenna elements. The reflecting properties of these elements are manipulated by PIN diodes or varactors, which can intelligently configure phase shifts and steer incoming signals toward target directions [178]. Each element's phase is adjusted to influence the radio propagation conditions by reflecting the incident electromagnetic wave in a different direction. RIS can thereby increase the maximum user data rate. Due to these benefits, RIS has inspired much research on RIS-enhanced networks.
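The phase-tuning idea can be sketched numerically: if each element sets its phase shift to cancel the phase of its cascaded channel, all reflected paths add coherently at the receiver, and for equal-gain channels the received power scales with the square of the number of elements. The unit-gain random-phase channels below are an illustrative assumption:

```python
import cmath
import math
import random

def ris_coherent_power(h, g):
    """Received power with an N-element passive RIS when element n sets its
    phase shift to -(arg(h_n) + arg(g_n)), cancelling the cascaded channel
    phase. The coherent sum then reduces to this closed form.

    h: base-station-to-RIS coefficients, g: RIS-to-user coefficients
    (noise power normalized to 1).
    """
    combined = sum(abs(hn) * abs(gn) for hn, gn in zip(h, g))
    return combined ** 2   # grows as N**2 for unit-gain channels

random.seed(0)
N = 64
# Unit-magnitude channels with random phases (illustrative assumption)
h = [cmath.exp(1j * random.uniform(0, 2 * math.pi)) for _ in range(N)]
g = [cmath.exp(1j * random.uniform(0, 2 * math.pi)) for _ in range(N)]
print(ris_coherent_power(h, g))  # ~ N**2 = 4096 for unit-gain elements
```

The quadratic scaling in N is the key attraction: doubling the number of passive elements quadruples the received power without any additional transmit energy.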
Facing the challenges of long-distance and universal coverage, one of the technologies that has received much interest over the last few decades is unmanned aerial vehicles (UAVs) [179]. Since UAVs are more flexible, portable, and adaptable in three-dimensional space than terrestrial cellular infrastructure, they can better establish line-of-sight communication and circumvent signal blocking and shadowing. From a technological standpoint, UAVs are a potentially fruitful means of achieving genuinely pervasive connectivity for an IoT system. On the other hand, the high cost and poor economic return of infrastructure development in isolated and inaccessible locations mean that terrestrial and UAV networks alone cannot adequately cover the wide range of IoT devices (IoTDs) found in both highly populated urban areas and uninhabited distant regions, across applications such as smart cities, smart industries, emergency tracking, and environment management [180]. To obtain universal coverage, a massive number of connections, and high-speed communications supporting IoTDs with a broad range of services, the paradigm of a satellite and aerial-integrated network (SAIN) has the potential to employ satellite and UAV networks as an interconnected solution [173]. Furthermore, situations requiring vast coverage cannot generally be handled by UAVs or Low Altitude Platform Stations (LAPSs). Meanwhile, the High Altitude Platform Station (HAPS) has become increasingly crucial because of the broader coverage obtained from its elevated vertical position. It is distinguished from other communication structures serving a broad region by its low cost, low latency, rapid development and deployment, and enormous capacity [181].
As a result, there is an impetus to adopt the XAPS paradigm, which uses a single HAPS as a macro aerial base station to provide extensive coverage, together with a number of LAPSs as small aerial base stations to improve connectivity in densely populated places, outlying regions, and other challenging environments. Moreover, to improve network capacity and performance, cluster-NOMA (C-NOMA) can be integrated with the XAPS model: terminals served by the HAPS are divided into multiple groups, allowing C-NOMA to be applied inside each cluster and OMA between clusters. Using C-NOMA with fewer terminals per cluster can drastically reduce decoding complexity in comparison to plain NOMA [182]. This allows for more effective use of the available spectrum while also simplifying the end-user experience.
To address the low-latency challenge, fog computing's distributed and real-time solutions can be integrated into recently deployed networks [183]. Research in this area may focus on developing methods for real-time and distributed computing in fog settings, as well as exploring novel applications of fog computing [184]. Moreover, machine learning and optimization algorithms can be deployed at the edge nodes/gateways and servers to provide the required awareness of the QoS. The load on the computing servers is time-varying and non-deterministic because of the dynamic number of nodes accessing the system, especially in the case of mobile nodes. Deciding how and when a node should offload a task by transferring its processing to the edge server is therefore a complicated issue [185]. When it comes to AI, edge computing is often thought of as the "last mile" because smart services are installed autonomously on edge nodes. Large numbers of edge devices (miniaturized, distributed, and low-power) can run accurate AI models, alone or in coordination with other devices, for a variety of uses, such as networks of IoT nodes [48]. The intelligence for such services can be distributed to the edge to handle the task offloading challenge and satisfy the reliability and minimal-latency needs of data transfer over networks. Lyapunov optimization [186], deep reinforcement learning [187], and graph convolution networks [188] are among the promising techniques to be integrated with edge computing in order to achieve communication-aware computation.
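A minimal latency-only version of the offloading decision such systems must make can be sketched as follows. The task size, CPU frequencies, and uplink rate are hypothetical, and real schemes from the cited works also weigh energy consumption and current server load:

```python
def should_offload(task_bits, cycles_per_bit, f_local_hz, f_edge_hz,
                   uplink_bps, rtt_s=0.0):
    """Latency-only offloading rule: offload when uplink transfer time plus
    edge execution time beats local execution time.

    Ignores energy, queueing at the server, and the (usually small)
    downlink time for results.
    """
    t_local = task_bits * cycles_per_bit / f_local_hz
    t_edge = (task_bits / uplink_bps) + rtt_s \
             + (task_bits * cycles_per_bit / f_edge_hz)
    return t_edge < t_local, t_local, t_edge

# Hypothetical workload: 1 Mbit task, 1000 CPU cycles per bit,
# 100 MHz IoT node, 10 GHz edge server, 20 Mbps uplink.
offload, t_local, t_edge = should_offload(1e6, 1000, 1e8, 1e10, 2e7)
```

Even this toy rule shows why the decision is dynamic: the break-even point moves with the uplink rate and the edge server's effective speed, both of which vary with the number of nodes currently attached, which is exactly what the learning-based approaches above try to track.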
The situation becomes more complex when it comes to the Space-Air-Ground IoT (SAG-IoT), in which the benefits of satellite infrastructure and aerial vehicles are integrated to enhance network coverage [189]. In this heterogeneous setting, task offloading and resource scheduling confront further difficulties, because a real SAG-IoT operates in non-deterministic, spatiotemporally dynamic scenarios [190]. Considering long-term performance, the space-time-varying behaviors of node task reception, transmission, and handling are random processes within a time slot rather than a snapshot of the system, due to network variation and heterogeneity. Moreover, offloaded jobs consume SAG-edge and cloud server resources; it is therefore important to distribute them fairly, because different parts of the network have varying computational needs and are subject to multiple limitations [191]. To reduce the total operating expenses of the network over time, real-time optimization algorithms and reinforcement learning models can be deployed to coordinate the allocation of computing resources locally, at aerial vehicles, and at the edge nodes themselves [192,193]. The challenge can be addressed by employing Lyapunov optimization, where the problem is decomposed into subproblems of allocating computing resources locally and reusing communication channels [191,194].
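As a rough sketch of how Lyapunov-style drift-plus-penalty control trades operating cost against queue stability, the toy simulation below greedily picks, in each slot, the service action minimizing V*cost - Q*service and then updates the task backlog. The arrival process, action set, and tradeoff weight V are illustrative, not drawn from [186] or [191]:

```python
import random

def drift_plus_penalty(q0, arrivals, actions, V):
    """Greedy drift-plus-penalty control of a single task queue.

    Each slot: choose the (service_rate, cost) action minimizing
    V * cost - Q(t) * service_rate, then update the backlog as
    Q(t+1) = max(Q(t) - service, 0) + arrivals.
    Larger V favors low cost; smaller V favors a short queue.
    """
    q, trace = q0, []
    for a in arrivals:
        # Cost-vs-backlog trade-off, weighted by V
        served, _cost = min(actions, key=lambda ac: V * ac[1] - q * ac[0])
        q = max(q - served, 0) + a
        trace.append(q)
    return trace

random.seed(1)
arrivals = [random.randint(0, 4) for _ in range(200)]       # tasks per slot
actions = [(0, 0.0), (2, 1.0), (5, 3.0)]                    # (served, cost)
backlog = drift_plus_penalty(0, arrivals, actions, V=2.0)
```

The characteristic behavior is visible even in this toy: the controller idles (zero cost) while the backlog is small and switches to the expensive high-rate action as soon as the queue term dominates, keeping the backlog bounded over time.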

Discussion
To realize the IoT promise of ubiquitous connectivity, the Internet has to accommodate a wide range of portable and wireless protocols for linking multiple devices. In this study, we took a closer look at the different integrated layers of state-of-the-art protocols, such as ZigBee, 6LoWPAN, BLE, LoRa, and Wi-Fi, supporting modern networks for the Internet of Things. We emphasized how challenging it is to establish a set of universal standards and an abstract framework for contrasting various IoT protocol stacks [195]. It is demanding to create a standard method to evaluate them, since the documents and rules for each protocol are not always easily accessible [196]. We addressed this issue by mapping the layers of the different protocols to the basic OSI stack, making it easier to identify the fundamental structure of each one. Furthermore, different parameters, including coverage, data transfer rate, RF bands, capacity, power efficiency, and the IoT environment, were used to assess and compare the performance of the different technologies, as shown in Table 5. The vast volume of traffic generated by the enormous quantity of devices linked to the Internet must be managed via interoperable protocols, especially those exhibiting minimal consumption characteristics, such as the systems investigated in this study. The 802.15.4-based protocols, Wi-SUN, LoRa, and ZigBee all have low power consumption characteristics. BLE can transmit data at a power of 1 to 10 mW [15], while the transmission power of Wi-Fi is about 100 mW. In terms of power utilization, IEEE 802.11ah consumed more energy for the complete transaction of a frame than IEEE 802.15.4, especially with few nodes in a low-traffic scenario [197]. Similarly, the power consumption of IEEE 802.11ah was shown to be comparatively greater than that of IEEE 802.15.4 in dense environments [198].
When it comes to latency-sensitive and real-time applications, current communication technologies simply cannot keep up with the ever-changing requirements imposed on infrastructure by this developing field. The infrastructure and software for the 6G Internet of Things are still in their infancy. As a result, 6G is expected to radically change current IoT architectures and usher in a whole new era of possibilities for low-latency and high-speed applications, including upgraded services, life comfort, and user experience. Soon, the Internet of Things will be more heterogeneous, combining many complementary networks deployed at various times, and it will have to cope simultaneously with obsolete protocols and a variety of alternative protocols.

Conclusions
The concept of the IoT has quickly spread throughout modern society with the purpose of improving the quality of life by integrating intelligent devices, applications, and technologies that automate everything around us. This paper discussed the most important elements of the IoT paradigm, as well as its protocols, technologies, and applications. The discussion provided examples of the various operational and efficiency properties of every protocol. This should serve as a solid groundwork for scholars and practitioners keen on learning more about IoT techniques and protocols, helping them comprehend the IoT's general structure and the function of its various parts and protocols, which makes it easier to select the suitable protocol and simulation tool for any application. In conclusion, we believe that the next IoT generation will be universal in its coverage, intelligent in its offloading and resource allocation decisions, aware of the QoS, and more secure against cyber-attacks, facilitating efficient communication between the physical and cloud levels.

Acknowledgments:
The authors would like to thank the El Sewedy Electrometer Group (EMG) company for supporting the research conducted in this work.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript: