Search Results (100)

Search Parameters:
Keywords = massive machine type communication

15 pages, 1529 KiB  
Article
Peak Age of Information Optimization in Cell-Free Massive Random Access Networks
by Zhiru Zhao, Yuankang Huang and Wen Zhan
Electronics 2025, 14(13), 2714; https://doi.org/10.3390/electronics14132714 - 4 Jul 2025
Viewed by 265
Abstract
With the rapid development of Internet of Things technologies, the Cell-Free Radio Access Network (CF-RAN), leveraging its distributed coverage and coordination among single- and multi-antenna Access Points (APs), has become a key technology for supporting massive Machine-Type Communication (mMTC). Under the grant-free random access mechanism, however, this architecture suffers degraded information freshness due to channel congestion. To address this issue, a joint decoding model based on a logical grouping architecture is introduced to analyze the relationship between the successful packet transmission probability and the Peak Age of Information (PAoI) in both single-AP and multi-AP scenarios. On this basis, a global Particle Swarm Optimization (PSO) algorithm is designed to dynamically adjust the channel access probability so as to minimize the average PAoI across the network. To reduce signaling overhead, a PSO algorithm based on local topology information is further proposed to achieve collaborative optimization among neighboring APs. Simulation results demonstrate that the global PSO algorithm closely approximates the optimum, while the local PSO algorithm maintains similar performance without requiring global information, making it especially suitable for large-scale access scenarios with wide-area coverage and providing an efficient solution for optimizing information freshness in CF-RAN.
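
As an illustration of the optimization step described above, here is a minimal global-PSO sketch over per-group channel access probabilities. The objective `avg_paoi` is a hypothetical stand-in built from a slotted-ALOHA-style success probability, not the paper's joint decoding model, and the device count, swarm size, and inertia/learning weights are all assumptions.

```python
# Minimal global PSO over channel access probabilities (illustrative only).
import numpy as np

N_GROUPS = 4     # logical AP groups, each with its own access probability
N_DEVICES = 50   # contending devices per group (assumed)

def avg_paoi(p):
    """Toy network-average PAoI: per-slot success prob q = p(1-p)^(n-1)."""
    q = p * (1.0 - p) ** (N_DEVICES - 1)
    return np.mean(2.0 / np.maximum(q, 1e-12) - 1.0)

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(1e-3, 1.0, (n_particles, N_GROUPS))  # access probabilities
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([avg_paoi(xi) for xi in x])
    g = pbest[np.argmin(pbest_f)].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 1e-3, 1.0)
        f = np.array([avg_paoi(xi) for xi in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, avg_paoi(g)

best_p, best_f = pso()
print("access probabilities:", best_p.round(4), "avg PAoI:", round(best_f, 1))
```

For this toy objective the optimum is p = 1/N_DEVICES in each group, which the swarm should approach.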

22 pages, 631 KiB  
Article
Time Travel with the BiTemporal RDF Model
by Abdullah Uz Tansel, Di Wu and Hsien-Tseng Wang
Mathematics 2025, 13(13), 2109; https://doi.org/10.3390/math13132109 - 27 Jun 2025
Viewed by 210
Abstract
The Internet is not just used for communication, transactions, and cloud storage; it also serves as a massive knowledge store where both people and machines can create, analyze, and use data and information. The Semantic Web was designed to enable machines to interpret the meaning of data, facilitating more informed and autonomous decision-making. The foundation of the Semantic Web is the Resource Description Framework (RDF). Standard RDF is limited to representing simple binary relationships in the form of the <subject, predicate, object> triple. In this paper, we present a new data model called BiTemporal RDF (BiTRDF), which adds valid time and transaction time to standard RDF. Our approach treats temporal information as references instead of attributes, simplifying the semantics while enhancing the model’s expressiveness and consistency. BiTRDF treats all resources and relationships as inherently bitemporal, enabling the representation of, and reasoning over, complex temporal relationships in RDF. Illustrative examples demonstrate the model’s support for type propagation, domain-range inference, and transitive relationships in a temporal setting. While this work lays a theoretical foundation, future research will address implementation, query language support, and compatibility with RDF streams and legacy systems.
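
To make the bitemporal idea concrete, here is a small sketch of a triple carrying both a valid-time and a transaction-time interval, with a point-in-time ("time travel") query filtering on both. The class and field names are illustrative, not BiTRDF's actual notation.

```python
# Bitemporal triples: valid time (when the fact holds in reality) plus
# transaction time (when the fact is current in the store). Illustrative only.
from dataclasses import dataclass

END = float("inf")  # open-ended interval ("until changed")

@dataclass(frozen=True)
class BiTemporalTriple:
    subject: str
    predicate: str
    object: str
    valid: tuple        # (start, end) in valid time
    transaction: tuple  # (start, end) in transaction time

def as_of(store, valid_t, tx_t):
    """Triples true in reality at valid_t, as recorded in the store at tx_t."""
    return [t for t in store
            if t.valid[0] <= valid_t < t.valid[1]
            and t.transaction[0] <= tx_t < t.transaction[1]]

store = [
    BiTemporalTriple("alice", "worksFor", "acme", (2019, 2023), (2019, END)),
    BiTemporalTriple("alice", "worksFor", "globex", (2023, END), (2023, END)),
]
print(as_of(store, valid_t=2020, tx_t=2024))  # -> only the acme triple
```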

35 pages, 17275 KiB  
Article
Performance Analysis of Downlink 5G Networks in Realistic Environments
by Aymen I. Zreikat and Hunseok Kang
Appl. Sci. 2025, 15(8), 4526; https://doi.org/10.3390/app15084526 - 19 Apr 2025
Viewed by 599
Abstract
Fifth-generation (5G) networks are regarded as a global standard, following the 1G, 2G, 3G, and 4G generations. With the large bandwidth made available by mmWave, 5G not only provides the end user with higher spectrum efficiency, massive capacity, low latency, and high speed, but is also designed to connect virtually everyone and everything, including machines, objects, and devices. Therefore, studies of such systems’ performance evaluation and capacity bounds are critical for the research community. Furthermore, the performance of these systems should be investigated in realistic contexts, considering signal strength and restricted uplink power to maintain system coverage and capacity, both of which are also affected by the environment and the value of the service factor parameter. Moreover, any proposed application should include a multiservice case to reflect the true state of 5G systems. As an extension of previous work, the capacity bounds for 5G networks are derived and analyzed in this research, considering both single- and multiservice cases with mobility. In addition, the influence of different parameters on network performance, such as interference, the service factor, the non-orthogonality factor, and the cell radius, is also discussed. The numerical findings and analysis reveal that the type of environment and the service factor parameter have the greatest influence on system capacity and coverage. The investigated parameters are shown to have a major impact on cell performance and can therefore be considered key indicators for mobile designers and operators in planning and designing future networks. To validate these findings, some results are evaluated against ITU-T standards, while others are compared with related studies from the literature.
(This article belongs to the Special Issue Trends and Prospects for Wireless Sensor Networks and IoT)

18 pages, 1372 KiB  
Article
Resource Allocation in 5G Cellular IoT Systems with Early Transmissions at the Random Access Phase
by Anastasia Daraseliya, Eduard Sopin, Vyacheslav Begishev, Yevgeni Koucheryavy and Konstantin Samouylov
Sensors 2025, 25(7), 2264; https://doi.org/10.3390/s25072264 - 3 Apr 2025
Viewed by 555
Abstract
While the market for massive machine-type communications (mMTC) is evolving at an unprecedented pace, the standardization bodies, including 3GPP, are lagging behind with the standardization of truly 5G-grade cellular Internet-of-Things (CIoT) systems. As an intermediate solution, an early data transmission mechanism, which encapsulates data into the preambles, has recently been proposed for 4G/5G Narrowband IoT (NB-IoT) technology. This mechanism is also expected to become part of future CIoT systems. The aim of this paper is to propose a model for CIoT systems with and without early transmission functionality and to assess the optimal distribution of resources between the random access and data transmission phases. To this end, the developed model explicitly captures both phases as well as the different traffic compositions in the downlink and uplink directions. Our numerical results demonstrate that early transmission functionality can decrease the delay of uplink packets drastically, by up to 20–40%, even in the presence of downlink traffic sharing the same set of resources. However, it also affects the optimal share of resources allocated to the random access and data transmission phases. As a result, the optimal performance of 5G mMTC technologies, with or without early transmission mode, can only be attained if dynamic resource allocation is implemented.
(This article belongs to the Section Internet of Things)

42 pages, 1685 KiB  
Article
Heterogeneity Challenges of Federated Learning for Future Wireless Communication Networks
by Lorena Isabel Barona López and Thomás Borja Saltos
J. Sens. Actuator Netw. 2025, 14(2), 37; https://doi.org/10.3390/jsan14020037 - 1 Apr 2025
Viewed by 2721
Abstract
Two technologies of great interest in recent years, Artificial Intelligence (AI) and massive wireless communication networks, have found a significant point of convergence in Federated Learning (FL). Federated Learning is a Machine Learning (ML) technique that enables multiple participants to collaboratively train a model while keeping their data local. Several studies indicate that while improving performance metrics such as accuracy, loss reduction, or computation time is a primary goal, achieving this in real-world scenarios remains challenging. The difficulty arises from the various heterogeneity characteristics inherent to the wireless devices participating in the federation. Heterogeneity in Federated Learning arises when participants contribute differently, complicating the model training process, and may appear at the system, statistical, and behavioral levels. System heterogeneity stems from differences in device capabilities, including processing power, transmission speeds, availability, energy constraints, and network limitations, among others. Statistical heterogeneity occurs when participants contribute non-independent and non-identically distributed (non-IID) data; this can harm the global model instead of improving it, especially when the data are of poor quality or too scarce. Behavioral heterogeneity refers to cases where participants are unwilling to engage, or expect rewards despite minimal effort. Given the growing research in this area, we present a summary of heterogeneity characteristics in Federated Learning to provide a broader perspective on this emerging technology. We also outline key challenges, opportunities, and future directions for Federated Learning. Finally, we conduct a simulation using the LEAF framework to illustrate the impact of heterogeneity in Federated Learning.
(This article belongs to the Special Issue Federated Learning: Applications and Future Directions)
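
As a concrete illustration of the statistical heterogeneity discussed in the abstract above, the sketch below runs FedAvg on a toy regression task where each client's data follow a different underlying model (non-IID). It is a generic FedAvg sketch, not the paper's LEAF experiment; the client count, learning rate, and data sizes are assumptions.

```python
# FedAvg on non-IID toy data: local SGD per client, then a sample-count-
# weighted average of the local models at the server. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of least-squares gradient descent on one client's data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Non-IID partition: each client's targets follow a different linear model.
clients = []
for true_w in ([1.0, 0.0], [0.0, 1.0], [2.0, -1.0]):
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ np.array(true_w) + 0.1 * rng.normal(size=40)))

w_global = np.zeros(2)
for rnd in range(20):                        # communication rounds
    updates, sizes = [], []
    for X, y in clients:                     # all clients participate
        updates.append(local_sgd(w_global.copy(), X, y))
        sizes.append(len(y))
    # FedAvg: average local models, weighted by local sample counts.
    w_global = np.average(updates, axis=0, weights=sizes)
print("global model:", w_global.round(3))    # a compromise between client optima
```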

16 pages, 3904 KiB  
Article
Co-Simulation of Interconnection Between Smart Power Grid and Smart Cities Platform via Massive Machine-Type Communication
by Luiz H. N. Rodrigues, Carlos F. M. Almeida, Nelson Kagan, Luiz H. L. Rosa and Milana L. dos Santos
Sensors 2025, 25(5), 1517; https://doi.org/10.3390/s25051517 - 1 Mar 2025
Viewed by 1204
Abstract
With the advent of Industry 5.0, the electrical sector has been endowed with intelligent devices that are propelling high penetration of distributed energy microgeneration, virtual power plants (VPPs), smart buildings, and smart plants, imposing new challenges on the sector. This new environment requires a smarter network, including transforming the simple electricity customer into a “smart customer” who values the quality of energy and its rational use. The smart power grid (SPG) is well suited to meeting these needs. It is crucial to understand energy use in order to guarantee quality of service and meet data security requirements. Simulation is the best strategy for mapping the behavior of complex infrastructures because it overcomes the limitations of traditional analytical solutions. This article presents the ICT laboratory structure developed within the Department of Electrical Engineering of the Polytechnic School of the Universidade de São Paulo (USP). It is based on an architecture that utilizes LTE/EPC wireless technology (4G, 5G, and B5G) to enable massive machine-type communication (mMTC) between SPG elements, using multi-access edge computing (MEC) resources and those of smart city platforms. We evaluate this proposal through simulations using data from real and emulated equipment, together with co-simulations shared by SPG laboratories at POLI-USP. Finally, we present preliminary results of integrating the power laboratory, a network simulator (ns-3), and a smart city platform (InterSCity) to validate and test the architecture.
(This article belongs to the Section Communications)

24 pages, 8199 KiB  
Article
Redefining 6G Network Slicing: AI-Driven Solutions for Future Use Cases
by Robert Botez, Daniel Zinca and Virgil Dobrota
Electronics 2025, 14(2), 368; https://doi.org/10.3390/electronics14020368 - 18 Jan 2025
Cited by 6 | Viewed by 3219
Abstract
The evolution from 5G to 6G networks is driven by the need to meet stringent requirements for ultra-reliable, low-latency, and high-throughput communication. The new services are called Further-Enhanced Mobile Broadband (feMBB), Extremely Reliable and Low-Latency Communications (ERLLCs), Ultra-Massive Machine-Type Communications (umMTCs), Massive Ultra-Reliable Low-Latency Communications (mURLLCs), and Mobile Broadband Reliable Low-Latency Communications (MBRLLCs). Network slicing emerges as a critical enabler in 6G, providing virtualized, end-to-end network segments tailored to diverse application needs. Despite its significance, existing datasets for slice selection are limited to 5G or LTE-A contexts and lack relevance to these enhanced requirements. In this work, we present a novel synthetic dataset tailored to 6G network slicing. By analyzing the emerging service requirements, we generated traffic parameters including latency, packet loss, jitter, and transfer rates. Machine Learning (ML) models such as Random Forest (RF), Decision Tree (DT), XGBoost, Support Vector Machine (SVM), and Feedforward Neural Network (FNN) were trained on this dataset, achieving over 99% accuracy in both slice classification and handover prediction. Our results highlight the potential of this dataset as a valuable tool for developing AI-assisted 6G network slicing mechanisms. While still in its early stages, the dataset lays a foundation for future research. As 6G standardization advances, we aim to refine the dataset and models, ultimately enabling real-time, intelligent slicing solutions for next-generation networks.
(This article belongs to the Special Issue Advances in IoT Security)
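
The slice-classification step described above can be sketched with off-the-shelf tooling: synthetic per-flow features (latency, jitter, packet loss, transfer rate) labeled by slice class and fed to a Random Forest, one of the models the paper lists. The feature distributions below are invented for illustration and are not the paper's dataset.

```python
# Train a Random Forest to map traffic features to a slice class.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synth(n, lat_ms, jit_ms, loss, rate_mbps, label):
    """Sample n flows around nominal (assumed) per-slice service levels."""
    return (np.column_stack([
        rng.normal(lat_ms, 0.2 * lat_ms, n),
        rng.normal(jit_ms, 0.2 * jit_ms, n),
        rng.normal(loss, 0.2 * loss, n),
        rng.normal(rate_mbps, 0.2 * rate_mbps, n),
    ]), np.full(n, label))

parts = [synth(500, 0.5, 0.1, 1e-5, 50, 0),     # ERLLC-like: ultra-low latency
         synth(500, 20.0, 5.0, 1e-3, 1000, 1),  # feMBB-like: high throughput
         synth(500, 50.0, 10.0, 1e-2, 0.1, 2)]  # umMTC-like: sparse, tolerant
X = np.vstack([p[0] for p in parts])
y = np.concatenate([p[1] for p in parts])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("slice accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```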

27 pages, 1409 KiB  
Article
Adaptive Handover Management in High-Mobility Networks for Smart Cities
by Yahya S. Junejo, Faisal K. Shaikh, Bhawani S. Chowdhry and Waleed Ejaz
Computers 2025, 14(1), 23; https://doi.org/10.3390/computers14010023 - 14 Jan 2025
Cited by 3 | Viewed by 2642
Abstract
The seamless handover of mobile devices is critical for maximizing the potential of smart city applications, which demand uninterrupted connectivity, ultra-low latency, and consistent performance in diverse environments. Fifth-generation (5G) and beyond-5G networks offer advancements in massive connectivity and ultra-low latency by leveraging technologies such as millimeter wave, massive machine-type communication, non-orthogonal multiple access, and beamforming. However, challenges persist in ensuring smooth handovers in dense deployments, especially in higher frequency bands and with increased user mobility. This paper presents an adaptive handover management scheme that utilizes reinforcement learning to optimize handover decisions in dynamic environments. The system selects the best target cell from the available neighbor cell list by predicting key performance indicators, such as the reference signal received power and the signal-to-interference-plus-noise ratio, while considering fixed time-to-trigger and hysteresis margin values. It dynamically adjusts handover thresholds by incorporating an offset based on real-time network conditions and user mobility patterns. This adaptive approach minimizes handover failures and the ping-pong effect. Compared to the baseline LIM2 model, the proposed system demonstrates a 15% improvement in handover success rate, a 3% improvement in user throughput, and an approximately 6 s reduction in latency at 200 km/h in high-mobility scenarios.
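
A stripped-down way to picture the reinforcement-learning element is a per-state bandit that learns which hysteresis offset to apply for a given mobility level, rewarded for handovers that succeed without ping-pong. The environment below, including its success probabilities, is entirely invented; it is not the paper's simulator or the LIM2 baseline.

```python
# Toy Q-learning over handover offsets per mobility state (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
SPEEDS = [0, 1, 2]           # states: low / medium / high mobility
OFFSETS = [0.0, 2.0, 4.0]    # actions: extra dB added to the handover margin

def step(speed, offset_idx):
    """Hypothetical outcome: high speed favors larger offsets (fewer ping-pongs)."""
    p_ok = 0.6 + 0.1 * offset_idx if speed == 2 else 0.8 - 0.1 * offset_idx
    return 1.0 if rng.random() < p_ok else -1.0

Q = np.zeros((len(SPEEDS), len(OFFSETS)))
alpha, eps = 0.1, 0.1
for episode in range(5000):
    s = rng.integers(len(SPEEDS))
    a = rng.integers(len(OFFSETS)) if rng.random() < eps else int(np.argmax(Q[s]))
    r = step(s, a)
    Q[s, a] += alpha * (r - Q[s, a])     # stateless (bandit-style) update
for s in SPEEDS:
    print(f"speed state {s}: learned offset {OFFSETS[int(np.argmax(Q[s]))]} dB")
```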

22 pages, 1116 KiB  
Article
Optimizing Open Radio Access Network Systems with LLAMA V2 for Enhanced Mobile Broadband, Ultra-Reliable Low-Latency Communications, and Massive Machine-Type Communications: A Framework for Efficient Network Slicing and Real-Time Resource Allocation
by H. Ahmed Tahir, Walaa Alayed, Waqar ul Hassan and Thuan Dinh Do
Sensors 2024, 24(21), 7009; https://doi.org/10.3390/s24217009 - 31 Oct 2024
Cited by 1 | Viewed by 1712
Abstract
This study presents an advanced framework integrating LLAMA_V2, a large language model, into Open Radio Access Network (O-RAN) systems, focusing on efficient network slicing for various services. Sensors in IoT devices generate continuous data streams, enabling resource allocation through O-RAN’s dynamic slicing and LLAMA_V2’s optimization. LLAMA_V2 was selected for its superior ability to capture complex network dynamics, surpassing traditional AI/ML models. The proposed method combines sophisticated mathematical models with optimization and interfacing techniques to address challenges in resource allocation and slicing. LLAMA_V2 enhances decision making by offering explanations for policy decisions within the O-RAN framework and forecasts future network conditions using a lightweight LSTM model. It outperforms baseline models in key metrics such as latency reduction, throughput improvement, and packet loss mitigation, making it a compelling solution for 5G network applications in advanced industries.
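
The forecasting role assigned to the lightweight LSTM can be sketched briefly: predict the next value of a network KPI from a short window of past values. The architecture sizes, the synthetic sine-plus-noise series, and the training loop below are assumptions for illustration, not the paper's configuration.

```python
# A small LSTM that forecasts the next KPI sample from a 16-step window.
import torch
import torch.nn as nn

torch.manual_seed(0)
series = torch.sin(torch.linspace(0, 20, 400)) + 0.05 * torch.randn(400)
WIN = 16
X = torch.stack([series[i:i + WIN] for i in range(len(series) - WIN)]).unsqueeze(-1)
y = series[WIN:].unsqueeze(-1)

class KpiLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)           # out: (batch, time, hidden)
        return self.head(out[:, -1])    # predict from the last time step

model = KpiLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```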

12 pages, 1157 KiB  
Article
Multi-Layered Unsupervised Learning Driven by Signal-to-Noise Ratio-Based Relaying for Vehicular Ad Hoc Network-Supported Intelligent Transport System in eHealth Monitoring
by Ali Nauman, Adeel Iqbal, Tahir Khurshaid and Sung Won Kim
Sensors 2024, 24(20), 6548; https://doi.org/10.3390/s24206548 - 11 Oct 2024
Cited by 1 | Viewed by 1673
Abstract
Every year, about 1.19 million people are killed in traffic accidents; hence, the United Nations has set a goal of halving the number of road traffic deaths and injuries by 2030. In line with this objective, technological innovations in telecommunication, particularly those brought about by the rise of 5G networks, have contributed to the development of modern Vehicle-to-Everything (V2X) communication systems. New Radio V2X (NR-V2X), introduced in the latest Third Generation Partnership Project (3GPP) releases, allows user devices to exchange information without relying on roadside infrastructure. This, together with Massive Machine Type Communication (mMTC) and Ultra-Reliable Low Latency Communication (URLLC), has significantly increased the reliability, coverage, and efficiency of vehicular communication networks. The use of artificial intelligence (AI), especially K-means clustering, has shown great promise in supporting efficient data exchange in vehicular ad hoc networks (VANETs). K-means is an unsupervised machine learning (ML) technique that groups vehicles located near each other geographically so that they can communicate directly within these clusters while also allowing inter-cluster communication via cluster heads. This paper proposes a multi-layered VANET-enabled Intelligent Transportation System (ITS) framework powered by unsupervised learning to optimize communication efficiency, scalability, and reliability. By leveraging AI in VANET solutions, the proposed framework aims to address road safety challenges and contribute to global efforts to meet the United Nations’ 2030 target. Additionally, the framework’s robust communication and data processing capabilities can be extended to eHealth monitoring systems, enabling real-time health data transmission and processing for continuous patient monitoring and timely medical interventions. This paper’s contributions include exploring AI-driven approaches for enhanced data interaction, improved safety in VANET-based ITS environments, and potential applications in eHealth monitoring.
(This article belongs to the Special Issue Intelligent Sensors and Control for Vehicle Automation)
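
The K-means step described above reduces to a few lines with scikit-learn: cluster vehicles by position and elect the member nearest each centroid as cluster head, a common heuristic. The coordinates and cluster count below are invented for illustration.

```python
# Cluster vehicles by position and pick a head per cluster (illustrative).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Vehicles scattered around three road segments (toy 2D positions, meters).
positions = np.vstack([rng.normal(c, 30, (20, 2))
                       for c in ([0, 0], [300, 50], [150, 400])])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(positions)
for c in range(3):
    members = np.where(km.labels_ == c)[0]
    # Cluster head: the member closest to the centroid (a common heuristic).
    dists = np.linalg.norm(positions[members] - km.cluster_centers_[c], axis=1)
    head = members[np.argmin(dists)]
    print(f"cluster {c}: {len(members)} vehicles, head = vehicle {head}")
```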

18 pages, 3241 KiB  
Article
Combining 5G New Radio, Wi-Fi, and LiFi for Industry 4.0: Performance Evaluation
by Jorge Navarro-Ortiz, Juan J. Ramos-Munoz, Felix Delgado-Ferro, Ferran Canellas, Daniel Camps-Mur, Amin Emami and Hamid Falaki
Sensors 2024, 24(18), 6022; https://doi.org/10.3390/s24186022 - 18 Sep 2024
Cited by 3 | Viewed by 2093
Abstract
Fifth-generation mobile networks (5G) are designed to support enhanced Mobile Broadband, Ultra-Reliable Low-Latency Communications, and massive Machine-Type Communications. To meet these diverse needs, 5G uses technologies like network softwarization, network slicing, and artificial intelligence. Multi-connectivity is crucial for boosting mobile device performance by using different Wireless Access Technologies (WATs) simultaneously, enhancing throughput, reducing latency, and improving reliability. This paper presents a multi-connectivity testbed from the 5G-CLARITY project for performance evaluation. MultiPath TCP (MPTCP) was employed to enable mobile devices to send data through various WATs simultaneously. A new MPTCP scheduler was developed, allowing operators to better control traffic distribution across different technologies and maximize aggregated throughput. Our proposal prevents limitations on one path from affecting the others, avoiding the Head-of-Line blocking problem. Performance was tested with real equipment using 5G NR, Wi-Fi, and LiFi (complementary WATs in the 5G-CLARITY project) in both static and dynamic scenarios. The results demonstrate that the proposed scheduler can manage the traffic distribution across different WATs and achieve the combined capacities of these technologies, approximately 1.4 Gbps in our tests, outperforming the other MPTCP schedulers. Recovery times after interruptions, such as coverage loss in one technology, were also measured, with values ranging from 400 to 500 ms.
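
The scheduling idea can be pictured in user space, even though the real scheduler runs inside the MPTCP stack: distribute packets over paths in proportion to operator-set weights, and skip any path whose send queue is full so one slow path cannot stall the rest (the Head-of-Line problem mentioned above). Path names, weights, and queue sizes here are assumptions, and the smooth weighted round-robin used is a stand-in, not the paper's actual algorithm.

```python
# Weighted path selection with blocking avoidance (illustrative only).
from collections import deque

# Three paths with operator-set weights and finite send queues (assumed sizes).
paths = {
    "5GNR": {"weight": 10, "credit": 0.0, "queue": deque(maxlen=20)},
    "WiFi": {"weight": 4,  "credit": 0.0, "queue": deque(maxlen=20)},
    "LiFi": {"weight": 2,  "credit": 0.0, "queue": deque(maxlen=20)},
}
TOTAL = sum(p["weight"] for p in paths.values())

def schedule(packet_id):
    """Smooth weighted round-robin over paths, skipping full queues so a
    backlogged path cannot stall the others."""
    for p in paths.values():
        p["credit"] += p["weight"]
    ready = [n for n, p in paths.items() if len(p["queue"]) < p["queue"].maxlen]
    if not ready:
        return None                      # every path backlogged: back-pressure
    best = max(ready, key=lambda n: paths[n]["credit"])
    paths[best]["credit"] -= TOTAL
    paths[best]["queue"].append(packet_id)
    return best

sent = [schedule(i) for i in range(32)]
print({n: sent.count(n) for n in paths})  # roughly a 10:4:2 split
```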

13 pages, 392 KiB  
Article
Grant-Free Random Access Enhanced by Massive MIMO and Non-Orthogonal Preambles
by Hao Jiang, Hongming Chen, Hongming Hu and Jie Ding
Electronics 2024, 13(11), 2179; https://doi.org/10.3390/electronics13112179 - 3 Jun 2024
Viewed by 1172
Abstract
Massive multiple-input multiple-output (MIMO)-enabled grant-free random access (mGFRA) stands out as a promising random access (RA) solution, effectively addressing the need for massive access in massive machine-type communications (mMTC) while ensuring high spectral efficiency and minimizing signaling overhead. However, the bottleneck of mGFRA is dominated by orthogonal preamble collisions, since the orthogonal preamble pool is small and of fixed size. In this paper, we explore the potential of non-orthogonal preambles to overcome this limitation and enhance the success probability of mGFRA without extending the preamble length. Given the RA procedure of mGFRA, we analyze the factors influencing its success rate with non-orthogonal preambles and propose two types of sequences, namely Gold sequences and Gaussian distribution sequences, as preambles for mGFRA. Simulation results demonstrate the effectiveness of both types of non-orthogonal preambles in improving the success probability of mGFRA. Moreover, the impact of system parameters on the performance of mGFRA with non-orthogonal preambles is examined and discussed.
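
The premise is easy to check numerically: a length-L orthogonal pool holds at most L preambles, whereas i.i.d. Gaussian sequences give an arbitrarily large pool whose pairwise correlations are merely small rather than zero. The sizes below are illustrative assumptions.

```python
# Compare a Gaussian preamble pool's size and cross-correlations to the
# hard size limit of an orthogonal pool of the same length.
import numpy as np

rng = np.random.default_rng(0)
L, K = 64, 256                  # preamble length; non-orthogonal pool size > L

pool = rng.normal(size=(K, L))
pool /= np.linalg.norm(pool, axis=1, keepdims=True)  # unit-energy preambles

corr = np.abs(pool @ pool.T)                 # pairwise correlation magnitudes
off_diag = corr[~np.eye(K, dtype=bool)]
print(f"pool size {K} vs. orthogonal limit {L}")
print(f"max |cross-corr| = {off_diag.max():.3f}, mean = {off_diag.mean():.3f}")
```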

24 pages, 1104 KiB  
Article
A Learning-Based Energy-Efficient Device Grouping Mechanism for Massive Machine-Type Communication in the Context of Beyond 5G Networks
by Rubbens Boisguene, Ibrahim Althamary and Chih-Wei Huang
J. Sens. Actuator Netw. 2024, 13(3), 33; https://doi.org/10.3390/jsan13030033 - 28 May 2024
Cited by 3 | Viewed by 1842
Abstract
With the increasing demand for high data rates, low delay, and extended battery life, managing massive machine-type communication (mMTC) in the beyond-5G (B5G) context is challenging. mMTC devices, which play a key role in the development of the Internet of Things (IoT) and smart cities, need to transmit small amounts of data periodically within a specific time frame. Although blockchain technology is utilized for secure data storage and transfer, and digital twin technology provides real-time monitoring and management of the devices, issues such as constrained time delays and network congestion persist. Without a proper data transmission strategy, most devices would fail to transmit in time, undermining their relevance and purpose. This work investigates the problem of massive random access channel (RACH) attempts, emphasizing energy efficiency and access latency for mission-critical mMTC devices in B5G networks. Using machine learning techniques, we propose an attention-based reinforcement learning model that orchestrates the device grouping strategy to optimize device placement. The model thus guarantees a higher probability of success for the devices during data transmission access, ultimately leading to more efficient energy consumption. Through thorough quantitative simulations, we demonstrate that the proposed learning-based approach significantly outperforms other baseline grouping methods.

18 pages, 834 KiB  
Article
A Multi-Agent Reinforcement Learning-Based Grant-Free Random Access Protocol for mMTC Massive MIMO Networks
by Felipe Augusto Dutra Bueno, Alessandro Goedtel, Taufik Abrão and José Carlos Marinello
J. Sens. Actuator Netw. 2024, 13(3), 30; https://doi.org/10.3390/jsan13030030 - 30 Apr 2024
Cited by 1 | Viewed by 2302
Abstract
The expected huge number of connected devices in Internet of Things (IoT) applications characterizes the massive machine-type communication (mMTC) scenario, a prominent use case of beyond-fifth-generation (B5G) systems. To meet mMTC connectivity requirements, grant-free (GF) random access (RA) protocols are seen as a promising solution due to the small amount of data that MTC devices usually transmit. In this paper, we propose a GF RA protocol based on a multi-agent reinforcement learning approach, applied to aid IoT devices in selecting the least congested RA pilots. The rewards obtained by the devices in collision cases reflect the congestion level of the chosen pilot. To enable the operation of the proposed method in a realistic B5G network scenario, and aiming to reduce signaling overhead and centralized processing, the rewards in our proposed method are computed by the devices themselves, taking advantage of the large number of base station antennas. Numerical results demonstrate the superior performance of the proposed method in terms of latency, network throughput, and per-device throughput compared with other protocols.
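
A single-cell caricature of the learning idea: each device keeps Q-values over RA pilots and updates them from collision feedback, so the population spreads toward the least congested pilots. The reward shape and all sizes are assumptions; the paper's rewards are computed at the devices from massive-MIMO observations, which is not modeled here.

```python
# Multi-agent (independent-learner) pilot selection with collision penalties.
import numpy as np

rng = np.random.default_rng(0)
N_DEVICES, N_PILOTS, EPS, ALPHA = 12, 8, 0.1, 0.2
Q = np.zeros((N_DEVICES, N_PILOTS))

for frame in range(3000):
    # Each device picks a pilot (epsilon-greedy over its own Q-row).
    choices = np.where(rng.random(N_DEVICES) < EPS,
                       rng.integers(N_PILOTS, size=N_DEVICES),
                       Q.argmax(axis=1))
    counts = np.bincount(choices, minlength=N_PILOTS)
    for d in range(N_DEVICES):
        k = counts[choices[d]]                  # devices on the same pilot
        reward = 1.0 if k == 1 else -(k - 1)    # penalty grows with congestion
        Q[d, choices[d]] += ALPHA * (reward - Q[d, choices[d]])

final = np.bincount(Q.argmax(axis=1), minlength=N_PILOTS)
print("devices per preferred pilot:", final)    # ideally spread out
```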

27 pages, 3597 KiB  
Article
A Blockchain-Assisted Security Protocol for Group Handover of MTC Devices in 5G Wireless Networks
by Ronghao Ma, Jianhong Zhou and Maode Ma
Sensors 2024, 24(7), 2331; https://doi.org/10.3390/s24072331 - 6 Apr 2024
Cited by 5 | Viewed by 2505
Abstract
Fifth-generation (5G) wireless cellular networks, renowned for their dense connectivity, facilitate a myriad of Internet of Things (IoT) applications supported by the massive machine-type communication (MTC) technique, a fundamental communication framework. In some scenarios, a large number of machine-type communication devices (MTCDs) may simultaneously enter the communication coverage of a target base station. However, the current handover mechanism specified by the 3rd Generation Partnership Project (3GPP) Release 16 incurs high signaling overhead within the access and core networks, which may negatively impact network efficiency. Additionally, other existing solutions are vulnerable to malicious attacks such as Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks, as well as the failure of Key Forward Secrecy (KFS). To address these challenges, this paper proposes an efficient and secure handover authentication protocol for a group of MTCDs, supported by blockchain technology. The protocol leverages the decentralized nature of blockchain and combines it with certificateless aggregate signatures to mutually authenticate a base station and a group of MTCDs. This approach reduces signaling overhead and avoids key escrow while significantly lowering the risk associated with single points of failure. Additionally, the protocol protects device anonymity by replacing serial numbers with temporary anonymous identity markers derived via Elliptic Curve Diffie–Hellman (ECDH), preventing linkage attacks. The resilience of the proposed protocol against predominant malicious attacks has been rigorously validated through the application of BAN logic and the Scyther tool, underscoring its robust security attributes. Furthermore, compared to existing solutions, the proposed protocol significantly reduces the authentication cost for a group of MTCDs during handover while ensuring security, demonstrating commendable efficiency.
(This article belongs to the Special Issue Feature Papers in Communications Section 2023)
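
One ingredient of the protocol described above, deriving temporary anonymous identity markers so a device never transmits a static serial number, can be sketched with a standard cryptography library. This is not the paper's full scheme (no certificateless aggregate signatures or blockchain), and the epoch labeling and key choices are illustrative assumptions.

```python
# Derive a per-epoch anonymous ID from an ECDH shared secret (illustrative).
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Ephemeral key pairs for one MTC device and the target base station.
device_priv = ec.generate_private_key(ec.SECP256R1())
station_priv = ec.generate_private_key(ec.SECP256R1())

def temp_identity(own_priv, peer_pub, epoch: int) -> bytes:
    """Derive a per-epoch anonymous marker from the ECDH shared secret."""
    shared = own_priv.exchange(ec.ECDH(), peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=f"tmp-id-epoch-{epoch}".encode()).derive(shared)

# Both sides compute the same marker; the device's serial is never sent.
id_at_device = temp_identity(device_priv, station_priv.public_key(), epoch=42)
id_at_station = temp_identity(station_priv, device_priv.public_key(), epoch=42)
assert id_at_device == id_at_station
print("temporary anonymous ID:", id_at_device.hex())
```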