Search Results (108)

Search Parameters:
Keywords = massive machine type communications

6 pages, 1451 KB  
Proceeding Paper
Time-Sensitive Networking and Time Scheduling Mechanisms for 5G Networks
by Po-Kai Chuang, Ming-Hung Lee, Yu-Chuan Luo, Jian-Kai Huang, Chin-Cheng Hu and Yu-Ping Yu
Eng. Proc. 2026, 134(1), 8; https://doi.org/10.3390/engproc2026134008 - 30 Mar 2026
Viewed by 256
Abstract
With the rapid development of 5G communication technology, 5G networks are designed to achieve three major objectives: higher bandwidth, support for a greater number of connected devices, and lower latency. These objectives correspond to the three primary 5G application scenarios: Enhanced Mobile Broadband, Massive Machine-Type Communications, and Ultra-Reliable and Low-Latency Communications (uRLLC). To meet the stringent requirements for time synchronization and low latency, 5G is being integrated with Ethernet-based Time-Sensitive Networking (TSN) technologies. TSN plays an important role in achieving time determinism in uRLLC scenarios and ensures low-latency, high-reliability Ethernet communication by distributing timing information via the Precision Time Protocol (PTP). We applied the TSN mechanisms defined in the IEEE 802.1Qbv standard and evaluated their transmission-delay performance. Modifying the gate control list (GCL) to accommodate varying network traffic ensures low-latency transmission for high-priority traffic. We propose two GCL configurations for TSN that incorporate a time-aware shaper to achieve efficient traffic scheduling.
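To make the 802.1Qbv mechanism concrete, here is a minimal Python sketch of a gate control list: a repeating cycle of time windows, each opening a subset of egress queues, so that a protected window can be reserved for time-critical traffic. The queue numbers and window durations are illustrative assumptions, not the GCL configurations evaluated in the paper.

```python
# Minimal sketch of an IEEE 802.1Qbv-style gate control list (GCL), assuming a
# hypothetical 8-queue egress port; window durations and queue assignments are
# illustrative, not the configurations evaluated in the paper.
from dataclasses import dataclass

@dataclass
class GateWindow:
    duration_us: int        # how long this window stays active
    open_queues: frozenset  # traffic-class queues allowed to transmit

# One GCL cycle: a protected window for the high-priority queue (7),
# followed by a shared window for best-effort queues.
GCL = [
    GateWindow(duration_us=250, open_queues=frozenset({7})),        # time-critical slot
    GateWindow(duration_us=750, open_queues=frozenset({0, 1, 2})),  # best-effort slot
]
CYCLE_US = sum(w.duration_us for w in GCL)

def open_queues_at(t_us: int) -> frozenset:
    """Return the set of queues whose gates are open at time t_us."""
    t = t_us % CYCLE_US
    for w in GCL:
        if t < w.duration_us:
            return w.open_queues
        t -= w.duration_us
    return frozenset()

if __name__ == "__main__":
    for t in (0, 100, 300, 900, 1100):
        print(f"t={t:4d} us -> open queues {sorted(open_queues_at(t))}")
```

In a real 802.1Qbv switch the GCL is enforced in hardware per egress port; the sketch only shows the scheduling semantics of the repeating cycle.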

35 pages, 633 KB  
Article
Bi-Objective Optimization for Scalable Resource Scheduling in Dense IoT Deployments via 5G Network Slicing Using NSGA-II
by Francesco Nucci and Gabriele Papadia
Telecom 2026, 7(2), 24; https://doi.org/10.3390/telecom7020024 - 2 Mar 2026
Viewed by 435
Abstract
The proliferation of Internet of Things (IoT) devices demands efficient resource management in fifth-generation (5G) networks, particularly through network slicing mechanisms supporting massive machine-type communications (mMTC). This paper addresses IoT connectivity in 5G network slicing through a bi-objective optimization framework that balances operational costs with quality-of-service (QoS) requirements across heterogeneous 5G network slices. The proposed approach employs a tailored Non-dominated Sorting Genetic Algorithm II (NSGA-II) incorporating domain-specific constraints, including device priorities, slice isolation requirements, radio resource limitations, and battery capacity. Through extensive simulations on scenarios with up to 5000 devices, our method generates diverse Pareto-optimal solutions, achieving hypervolume improvements of 8–13% over multi-objective DRL, 15–28% over single-objective DRL baselines, and 22–41% over heuristic approaches, while maintaining computational scalability suitable for real-time network management (sub-2 min execution). Validation with real-world traffic traces from operational deployments confirms algorithm robustness under realistic burstiness and temporal patterns, with a 7% performance degradation versus synthetic traffic, within expected simulation-to-reality gaps. This work provides a practical framework for IoT resource scheduling in current 5G and future Beyond-5G (B5G) telecommunications infrastructures.
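As a hint at how NSGA-II separates trade-off solutions, the following self-contained sketch applies its non-dominated sorting criterion to a toy two-slice model (operational cost vs. QoS violation). The objective functions and population are invented for illustration and are not the paper's formulation.

```python
# Non-dominated sorting, the core filter inside NSGA-II, on a toy model:
# allocate resource shares to two slices; more resources cost more but
# violate QoS less. All functions and constants are illustrative assumptions.
import random

random.seed(0)

def evaluate(alloc):
    """alloc = (a1, a2): resource shares for two slices, each in 0..1."""
    cost = alloc[0] + alloc[1]                            # more resources -> higher cost
    qos_violation = (1 - alloc[0]) ** 2 + (1 - alloc[1]) ** 2
    return (cost, qos_violation)

def dominates(a, b):
    """True if objective vector a is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    objs = [evaluate(p) for p in population]
    return [p for i, p in enumerate(population)
            if not any(dominates(objs[j], objs[i])
                       for j in range(len(population)) if j != i)]

population = [(random.random(), random.random()) for _ in range(100)]
front = pareto_front(population)
print(f"{len(front)} non-dominated of {len(population)} candidate allocations")
for a1, a2 in sorted(front)[:3]:
    c, v = evaluate((a1, a2))
    print(f"alloc=({a1:.2f}, {a2:.2f}) cost={c:.2f} violation={v:.2f}")
```

The full algorithm additionally evolves the population with crossover, mutation, and crowding-distance selection; the sorting step above is what yields the Pareto set the abstract reports hypervolume over.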

41 pages, 3181 KB  
Article
Transmission-Path Selection with Joint Computation and Communication Resource Allocation in 6G MEC Networks with RIS and D2D Support
by Yao-Liang Chung
Future Internet 2025, 17(12), 565; https://doi.org/10.3390/fi17120565 - 6 Dec 2025
Viewed by 827
Abstract
This paper proposes a transmission-path selection algorithm with joint computation and communication resource allocation for sixth-generation (6G) mobile edge computing (MEC) networks enhanced by helper-assisted device-to-device (D2D) communication and reconfigurable intelligent surfaces (RIS). The novelty of this work lies in the joint design of three key components: a helper-assisted D2D uplink scheme, a packet-partitioning cooperative MEC offloading mechanism, and RIS-assisted downlink transmission and deployment design. These components collectively enable diverse transmission paths under strict latency constraints, helping mitigate overload and reduce delay. To demonstrate its performance advantages, the proposed algorithm is compared with a baseline algorithm without helper-assisted D2D or RIS support, under two representative scheduling policies: modified maximum rate and modified proportional fair. Simulation results in single-base-station (BS) and dual-BS environments show that the proposed algorithm consistently achieves a higher effective packet-delivery success percentage, defined as the fraction of packets whose total delay (uplink, MEC computation, and downlink) satisfies service-specific latency thresholds. It also achieves a lower average total delay, defined as the mean total delay of all successfully delivered packets, regardless of whether individual delays exceed their thresholds. Both metrics are evaluated separately for ultra-reliable low-latency communications, enhanced mobile broadband, and massive machine-type communications services. These results indicate that the proposed algorithm provides solid performance and robustness in supporting diverse 6G services under stringent latency requirements across different scheduling policies and deployment scenarios.
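The core path-selection idea, choosing the lowest-delay feasible route among direct, helper-assisted D2D, and RIS-assisted paths, can be sketched in a few lines. The delay estimates below are hypothetical placeholders, not the paper's channel or queueing model.

```python
# Sketch of latency-constrained path selection among the three path types the
# abstract discusses. Delay values are invented placeholders for illustration.

def select_path(paths: dict, deadline_ms: float):
    """Pick the lowest-delay path whose estimated total delay meets the deadline."""
    feasible = {name: d for name, d in paths.items() if d <= deadline_ms}
    if not feasible:
        return None  # no path satisfies the service's latency threshold
    return min(feasible, key=feasible.get)

if __name__ == "__main__":
    # Estimated end-to-end delays (uplink + MEC computation + downlink), in ms.
    estimates = {"direct": 12.4, "d2d_helper": 8.9, "ris_assisted": 10.1}
    print(select_path(estimates, deadline_ms=10.0))  # -> "d2d_helper"
```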

12 pages, 2199 KB  
Proceeding Paper
Prototyping LoRaWAN-Based Mobile Air Quality Monitoring System for Public Health and Safety
by Tanzila, Sundus Ali, Muhammad Imran Aslam, Irfan Ahmed and Ayesha Ahmed
Eng. Proc. 2025, 118(1), 20; https://doi.org/10.3390/ECSA-12-26510 - 7 Nov 2025
Viewed by 950
Abstract
In this paper, we present the design, prototyping, and operation of a cost-effective, energy-efficient, and scalable air quality monitoring system (AQMS) enabled by a low-power Long-Range Wide-Area Network (LoRaWAN), an Internet of Things (IoT) technology designed to provide connectivity for massive machine-type communication applications. The growing threat of air pollution necessitates outdoor and mobile environmental monitoring systems that provide real-time, location-specific data, which fixed monitoring devices cannot deliver. For our AQMS, we developed two custom-built sensor nodes. The first node is equipped with a Nucleo-WL55JC1 microcontroller and sensors to measure temperature, humidity, and carbon dioxide (CO2), while the other node is equipped with an Arduino MKR WAN 1310 controller and sensors to measure carbon monoxide (CO), ammonia (NH3), and particulate matter (PM2.5 and PM10). These sensor nodes connect to a WisGate Edge LoRaWAN gateway, which aggregates and forwards the sensor data to The Things Network (TTN) for processing and cloud storage. The final visualization is handled via the Ubidots IoT platform, allowing real-time visualization of environmental data. Besides environmental data, we were able to acquire the received signal strength indicator, the signal-to-noise ratio, and a frame counter showing the number of packets received by the gateway. Laboratory testing confirmed reliable communication, with a packet delivery rate of 98% and a minimal average latency of 2.5 s. Both nodes operated efficiently on battery power, with the Nucleo-WL55JC1 consuming an average of 20 mA in active mode and the Arduino MKR WAN 1310 operating at 15 mA. These values ensure extended operation for remote deployment. The system's low power consumption and modular architecture make it viable for smart city applications and large-scale deployments in resource-constrained areas.
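For readers prototyping a similar pipeline, a compact binary uplink payload keeps LoRaWAN airtime low. The fixed layout below (int16 temperature in 0.1 °C, uint8 humidity, uint16 CO2 in ppm) is an assumed format for illustration; the paper does not specify its encoding.

```python
# Hypothetical 5-byte LoRaWAN uplink payload for the first node's readings
# (temperature, humidity, CO2). The layout is an assumption for illustration,
# not the format used by the authors' nodes.
import struct

def encode_payload(temp_c: float, humidity_pct: float, co2_ppm: int) -> bytes:
    # big-endian: int16 temp in 0.1 degC, uint8 humidity %, uint16 CO2 ppm
    return struct.pack(">hBH", int(temp_c * 10), int(humidity_pct), co2_ppm)

def decode_payload(payload: bytes) -> dict:
    t, h, co2 = struct.unpack(">hBH", payload)
    return {"temp_c": t / 10, "humidity_pct": h, "co2_ppm": co2}

if __name__ == "__main__":
    frame = encode_payload(23.4, 56.0, 412)
    print(len(frame), "bytes:", frame.hex(), decode_payload(frame))
```

A decoder of this shape would typically run as a TTN payload formatter before the data is forwarded to a dashboard such as Ubidots.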

36 pages, 738 KB  
Article
Activity Detection and Channel Estimation Based on Correlated Hybrid Message Passing for Grant-Free Massive Random Access
by Xiaofeng Liu, Xinrui Gong and Xiao Fu
Entropy 2025, 27(11), 1111; https://doi.org/10.3390/e27111111 - 28 Oct 2025
Viewed by 786
Abstract
Massive machine-type communications (mMTC) in future 6G networks will involve a vast number of devices with sporadic traffic. Grant-free access has emerged as an effective strategy to reduce access latency and processing overhead by allowing devices to transmit without prior permission, making accurate active user detection and channel estimation (AUDCE) crucial. In this paper, we investigate the joint AUDCE problem in wideband massive access systems. We develop an innovative channel prior model that captures the dual correlation structure of the channel using three state variables: active indication, channel supports, and channel values. By integrating Markov chains with coupled Gaussian distributions, the model effectively describes both the structural and numerical dependencies within the channel. We propose the correlated hybrid message passing (CHMP) algorithm based on Bethe free energy (BFE) minimization, which adaptively updates model parameters without requiring prior knowledge of user sparsity or channel priors. Simulation results show that the CHMP algorithm accurately detects active users and achieves precise channel estimation.
(This article belongs to the Topic Advances in Sixth Generation and Beyond (6G&B))
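The flavor of the paper's Markov-chain support prior can be illustrated with a short simulation: channel supports switch on and off in correlated runs, with Gaussian values on the active taps. The transition probabilities and dimensions below are illustrative assumptions, not the paper's model.

```python
# Toy illustration of a Markov-chain support prior with Gaussian values on
# active taps, in the spirit of the abstract's channel model. All parameters
# are invented assumptions, not the CHMP formulation.
import numpy as np

rng = np.random.default_rng(0)

def sample_correlated_support(n_taps=64, p_on=0.1, p_stay_on=0.9, p_stay_off=0.99):
    """Supports form correlated on/off runs via a two-state Markov chain."""
    s = np.zeros(n_taps, dtype=bool)
    s[0] = rng.random() < p_on
    for i in range(1, n_taps):
        p_active = p_stay_on if s[i - 1] else 1 - p_stay_off
        s[i] = rng.random() < p_active
    return s

support = sample_correlated_support()
channel = np.where(support, rng.normal(0, 1, support.size), 0.0)
print(f"active taps: {support.sum()} of {support.size}")
```

The actual CHMP algorithm goes much further, coupling such priors across users and subcarriers and updating them via Bethe free energy minimization.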

17 pages, 1613 KB  
Article
Superimposed CSI Feedback Assisted by Inactive Sensing Information
by Mintao Zhang, Haowen Jiang, Zilong Wang, Linsi He, Yuqiao Yang, Mian Ye and Chaojin Qing
Sensors 2025, 25(19), 6156; https://doi.org/10.3390/s25196156 - 4 Oct 2025
Cited by 2 | Viewed by 720
Abstract
In massive multiple-input multiple-output (mMIMO) systems, superimposed channel state information (CSI) feedback has been developed to improve the utilization of uplink bandwidth resources. Nevertheless, the interference introduced by this superimposed mode degrades the recovery performance of both the downlink CSI and the uplink data sequences. Although machine learning (ML)-based methods effectively mitigate superimposed interference by leveraging the multi-domain features of downlink CSI, the complex interactions among network model parameters place a significant burden on system resources. To address these issues, inspired by sensing-assisted communication, we propose a novel superimposed CSI feedback method assisted by inactive sensing information that previously existed but was not utilized at the base station (BS). To the best of our knowledge, this is the first time that inactive sensing information has been utilized to enhance superimposed CSI feedback. In this method, a new type of modal data, distinct from communication data, is developed to aid interference suppression without requiring additional hardware at the BS. Specifically, the proposed method utilizes location, speed, and path information extracted from sensing devices to derive prior information. Then, based on this prior information, denoising is applied to both the delay and Doppler dimensions of the downlink CSI in the delay-Doppler (DD) domain, significantly enhancing recovery accuracy. Simulation results demonstrate improved recovery of both downlink CSI and uplink data sequences compared to both classic and novel superimposed CSI feedback methods. Moreover, simulation results also validate the robustness of the proposed method against parameter variations.
(This article belongs to the Section Communications)

13 pages, 499 KB  
Article
Optimization of Dynamic Frame Length for Random Access in Machine-Type Communication Systems
by Jiancheng Sun, Guoliang Jing and Jie Ding
Electronics 2025, 14(17), 3414; https://doi.org/10.3390/electronics14173414 - 27 Aug 2025
Cited by 1 | Viewed by 732
Abstract
With the rapid development of the Internet of Things and 5G communication technologies, the demand for random access from massive numbers of user equipment (UE) in burst scenarios has significantly increased. Traditional fixed-frame-length mechanisms, due to their inability to dynamically adapt to fluctuations in access traffic, are prone to exacerbating channel resource competition, increasing the probability of preamble collisions, and significantly elevating access delays, thereby constraining the system performance of large-scale machine-type communications. To address these issues, this paper proposes a dynamic frame-length optimization algorithm based on Q-learning. By leveraging reinforcement learning to autonomously perceive access-traffic characteristics, the algorithm can dynamically adjust the frame-length parameter without relying on estimates of the number of UEs. It optimizes the frame length to improve random access performance, reduces collisions among UEs competing for preambles, and enhances the utilization of preamble resources.
(This article belongs to the Special Issue Antennas and Propagation for Wireless Communication)
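A stateless, bandit-style simplification of the Q-learning idea can be sketched as follows: the agent scores each candidate frame length by a reward that trades access success against wasted slots. The collision model, reward shaping, and constants are illustrative assumptions, not the authors' algorithm.

```python
# Bandit-style Q-learning sketch over frame-length actions, in the spirit of
# the abstract's dynamic frame-length algorithm. Collision model, reward, and
# parameters are illustrative assumptions only.
import random

random.seed(1)
FRAME_LENGTHS = [16, 32, 64, 128]      # candidate preamble slots per frame
q = {n: 0.0 for n in FRAME_LENGTHS}    # stateless Q-table over actions
alpha, eps = 0.1, 0.2                  # learning rate, exploration rate

def success_ratio(n_slots, n_devices=40):
    """Fraction of devices whose randomly chosen slot suffers no collision."""
    picks = [random.randrange(n_slots) for _ in range(n_devices)]
    return sum(picks.count(s) == 1 for s in picks) / n_devices

for episode in range(2000):
    # epsilon-greedy action selection over frame lengths
    n = random.choice(FRAME_LENGTHS) if random.random() < eps else max(q, key=q.get)
    # reward trades access success against wasted slots (longer frames)
    reward = success_ratio(n) - 0.001 * n
    q[n] += alpha * (reward - q[n])

print({n: round(v, 3) for n, v in q.items()}, "best:", max(q, key=q.get))
```

The paper's algorithm additionally conditions on observed traffic state; the sketch keeps only the action-value learning loop to show how a frame length can be tuned without estimating the device count.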

16 pages, 1350 KB  
Article
The Synergistic Impact of 5G on Cloud-to-Edge Computing and the Evolution of Digital Applications
by Saleh M. Altowaijri and Mohamed Ayari
Mathematics 2025, 13(16), 2634; https://doi.org/10.3390/math13162634 - 16 Aug 2025
Cited by 2 | Viewed by 6638
Abstract
The integration of 5G technology with cloud and edge computing is redefining the digital landscape by enabling ultra-fast connectivity, low-latency communication, and scalable solutions across diverse application domains. This paper investigates the synergistic impact of 5G on cloud-to-edge architectures, emphasizing its transformative role in revolutionizing sectors such as healthcare, smart cities, industrial automation, and autonomous systems. Key advancements in 5G—including Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communication (URLLC), and Massive Machine-Type Communications (mMTC)—are examined for their role in enabling real-time data processing, edge intelligence, and IoT scalability. In addition to conceptual analysis, the paper presents simulation-based evaluations comparing 5G cloud-to-edge systems with traditional 4G cloud models. Quantitative results demonstrate significant improvements in latency, energy efficiency, reliability, and AI prediction accuracy. The study also explores challenges in infrastructure deployment, cybersecurity, and latency management while highlighting the growing opportunities for innovation in AI-driven automation and immersive consumer technologies. Future research directions are outlined, focusing on energy-efficient designs, advanced security mechanisms, and equitable access to 5G infrastructure. Overall, this study offers comprehensive insights and performance benchmarks that will serve as a valuable resource for researchers and practitioners working to advance next-generation digital ecosystems.
(This article belongs to the Special Issue Innovations in Cloud Computing and Machine Learning Applications)

15 pages, 1529 KB  
Article
Peak Age of Information Optimization in Cell-Free Massive Random Access Networks
by Zhiru Zhao, Yuankang Huang and Wen Zhan
Electronics 2025, 14(13), 2714; https://doi.org/10.3390/electronics14132714 - 4 Jul 2025
Viewed by 867
Abstract
With the vigorous development of Internet of Things technologies, Cell-Free Radio Access Network (CF-RAN), leveraging its distributed coverage and single/multi-antenna Access Point (AP) coordination advantages, has become a key technology for supporting massive Machine-Type Communication (mMTC). However, under the grant-free random access mechanism, this network architecture faces the problem of information freshness degradation due to channel congestion. To address this issue, a joint decoding model based on a logical grouping architecture is introduced to analyze the correlation between the successful packet transmission probability and the Peak Age of Information (PAoI) in both single-AP and multi-AP scenarios. On this basis, a global Particle Swarm Optimization (PSO) algorithm is designed to dynamically adjust the channel access probability to minimize the average PAoI across the network. To reduce signaling overhead, a PSO algorithm based on local topology information is further proposed to achieve collaborative optimization among neighboring APs. Simulation results demonstrate that the global PSO algorithm can achieve performance closely approximating the optimum, while the local PSO algorithm maintains similar performance without the need for global information. It is especially suitable for large-scale access scenarios with wide area coverage, providing an efficient solution for optimizing information freshness in CF-RAN.
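To illustrate the optimization step, here is a self-contained PSO sketch over a single channel access probability, using a toy slotted-ALOHA-style proxy for average PAoI. The objective and constants are assumptions for illustration, not the paper's cell-free model; the swarm should settle near p = 1/N, the classical throughput-optimal access probability.

```python
# Global-best PSO over the channel access probability, with a toy PAoI proxy:
# average PAoI ~ 1 / per-device success probability in a slotted-ALOHA model.
# All constants are illustrative assumptions, not the paper's CF-RAN model.
import random

random.seed(0)
N_DEVICES = 50

def avg_paoi(p):
    p = min(max(p, 1e-6), 1 - 1e-6)
    success = p * (1 - p) ** (N_DEVICES - 1)   # per-device success per slot
    return 1.0 / success                        # slots between fresh updates

particles = [random.random() for _ in range(20)]
velocity = [0.0] * 20
pbest = particles[:]
gbest = min(particles, key=avg_paoi)

for _ in range(100):
    for i, x in enumerate(particles):
        r1, r2 = random.random(), random.random()
        # inertia + cognitive pull (own best) + social pull (swarm best)
        velocity[i] = 0.7 * velocity[i] + 1.5 * r1 * (pbest[i] - x) + 1.5 * r2 * (gbest - x)
        particles[i] = min(max(x + velocity[i], 0.0), 1.0)
        if avg_paoi(particles[i]) < avg_paoi(pbest[i]):
            pbest[i] = particles[i]
    gbest = min(pbest, key=avg_paoi)

print(f"best access probability ~ {gbest:.4f} (1/N = {1 / N_DEVICES:.4f})")
```

The paper's local variant would restrict the "social" term to information from neighboring APs rather than a global best.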

22 pages, 631 KB  
Article
Time Travel with the BiTemporal RDF Model
by Abdullah Uz Tansel, Di Wu and Hsien-Tseng Wang
Mathematics 2025, 13(13), 2109; https://doi.org/10.3390/math13132109 - 27 Jun 2025
Viewed by 2168
Abstract
The Internet is not just used for communication, transactions, and cloud storage; it also serves as a massive knowledge store where both people and machines can create, analyze, and use data and information. The Semantic Web was designed to enable machines to interpret the meaning of data, facilitating more informed and autonomous decision-making. The foundation of the Semantic Web is the Resource Description Framework (RDF). The standard RDF is limited to representing simple binary relationships in the form of the <subject, predicate, object> triple. In this paper, we present a new data model called BiTemporal RDF (BiTRDF), which adds valid time and transaction time to the standard RDF. Our approach treats temporal information as references instead of attributes, simplifying the semantics while enhancing the model's expressiveness and consistency. BiTRDF treats all resources and relationships as inherently bitemporal, enabling the representation and reasoning of complex temporal relationships in RDF. Illustrative examples demonstrate the model's support for type propagation, domain-range inference, and transitive relationships in a temporal setting. While this work lays a theoretical foundation, future research will address implementation, query language support, and compatibility with RDF streams and legacy systems.
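The central data-model idea, a triple carrying both a valid-time and a transaction-time interval, can be sketched directly. The field names and interval representation below are assumptions for illustration, not the paper's formal syntax.

```python
# Sketch of a bitemporal RDF triple in the spirit of BiTRDF: a triple
# annotated with a valid-time and a transaction-time interval. Field names
# and the interval encoding are illustrative assumptions.
from dataclasses import dataclass

NOW = float("inf")  # open-ended interval endpoint ("until changed")

@dataclass(frozen=True)
class BiTemporalTriple:
    subject: str
    predicate: str
    obj: str
    valid: tuple        # (start, end): when the fact holds in the world
    transaction: tuple  # (start, end): when the fact is current in the store

    def holds_at(self, valid_t: float, tx_t: float) -> bool:
        return (self.valid[0] <= valid_t < self.valid[1]
                and self.transaction[0] <= tx_t < self.transaction[1])

t = BiTemporalTriple("ex:Alice", "ex:worksFor", "ex:AcmeCorp",
                     valid=(2015, 2020), transaction=(2016, NOW))
print(t.holds_at(2018, 2024))  # True: valid in 2018, recorded since 2016
print(t.holds_at(2021, 2024))  # False: no longer valid in 2021
```

Separating the two time dimensions is what enables "time travel" queries: valid time answers "when was this true?", while transaction time answers "when did the database believe it?".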

35 pages, 17275 KB  
Article
Performance Analysis of Downlink 5G Networks in Realistic Environments
by Aymen I. Zreikat and Hunseok Kang
Appl. Sci. 2025, 15(8), 4526; https://doi.org/10.3390/app15084526 - 19 Apr 2025
Cited by 1 | Viewed by 2336
Abstract
Fifth-generation (5G) networks are regarded as a global standard, following the 1G, 2G, 3G, and 4G generations of mobile networks. With the large available bandwidth provided by mmWave, 5G not only offers the end user higher spectrum efficiency, massive capacity, low latency, and high speed, but is also designed to connect virtually everyone and everything, including machines, objects, and devices. Therefore, performance evaluation and capacity bounds of such systems are critical topics for the research community. Furthermore, the performance of these systems should be investigated in realistic contexts, considering signal strength and restricted uplink power to maintain system coverage and capacity, both of which are also affected by the environment and the value of the service factor parameter. Moreover, any proposed application should include a multiservice case to reflect the true operation of 5G systems. As an extension of previous work, the capacity bounds for 5G networks are derived and analyzed in this research, considering both single-service and multiservice cases with mobility. In addition, the influence of different parameters on network performance, such as interference, the service factor, the non-orthogonality factor, and the cell radius, is also discussed. The numerical findings and analysis reveal that the type of environment and the service factor parameter have the greatest influence on system capacity and coverage. It is further shown that the investigated parameters have a major impact on cell performance and can therefore be considered key indicators for mobile designers and operators in planning and designing future networks. To validate these findings, some results are evaluated against ITU-T standards, while others are compared with related studies from the literature.
(This article belongs to the Special Issue Trends and Prospects for Wireless Sensor Networks and IoT)

18 pages, 1372 KB  
Article
Resource Allocation in 5G Cellular IoT Systems with Early Transmissions at the Random Access Phase
by Anastasia Daraseliya, Eduard Sopin, Vyacheslav Begishev, Yevgeni Koucheryavy and Konstantin Samouylov
Sensors 2025, 25(7), 2264; https://doi.org/10.3390/s25072264 - 3 Apr 2025
Cited by 1 | Viewed by 1594
Abstract
While the market for massive machine-type communications (mMTC) is evolving at an unprecedented pace, the standardization bodies, including 3GPP, are lagging behind in standardizing truly 5G-grade cellular Internet-of-Things (CIoT) systems. As an intermediate solution, an early data transmission mechanism that encapsulates data into the preambles has recently been proposed for 4G/5G Narrowband IoT (NB-IoT) technology. This mechanism is also expected to become part of future CIoT systems. The aim of this paper is to propose a model for CIoT systems with and without early transmission functionality and to assess the optimal distribution of resources between the random access and data transmission phases. To this end, the developed model captures both phases explicitly, as well as different traffic compositions in the downlink and uplink directions. Our numerical results demonstrate that the use of early transmission functionality can drastically decrease the delay of uplink packets, by 20–40%, even in the presence of downlink traffic sharing the same set of resources. However, it also affects the optimal share of resources allocated to the random access and data transmission phases. As a result, the optimal performance of 5G mMTC technologies, with or without early transmission mode, can only be attained if dynamic resource allocation is implemented.
(This article belongs to the Section Internet of Things)
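The resource-split trade-off the paper studies can be caricatured with a toy queueing model: a fixed budget divided between the random access and data transmission phases, with the split chosen to minimize total uplink delay. The M/M/1-style delay terms and constants are illustrative assumptions only, not the authors' model.

```python
# Toy sketch of splitting a fixed resource budget between the random access
# (RA) phase and the data transmission phase. Delay terms are M/M/1-style
# placeholders; all constants are illustrative assumptions.
ARRIVAL = 8.0   # uplink request rate
BUDGET = 20.0   # total resource units per frame

def total_delay(ra_share):
    ra_cap = BUDGET * ra_share          # capacity of the RA phase
    data_cap = BUDGET * (1 - ra_share)  # capacity of the data phase
    if ra_cap <= ARRIVAL or data_cap <= ARRIVAL:
        return float("inf")             # unstable phase: unbounded queueing delay
    return 1 / (ra_cap - ARRIVAL) + 1 / (data_cap - ARRIVAL)

best = min((s / 100 for s in range(1, 100)), key=total_delay)
print(f"optimal RA share ~ {best:.2f}, delay {total_delay(best):.3f}")
```

Early data transmission effectively shifts load between the two terms, which is why the abstract reports that it moves the optimal split and motivates dynamic allocation.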

42 pages, 1685 KB  
Article
Heterogeneity Challenges of Federated Learning for Future Wireless Communication Networks
by Lorena Isabel Barona López and Thomás Borja Saltos
J. Sens. Actuator Netw. 2025, 14(2), 37; https://doi.org/10.3390/jsan14020037 - 1 Apr 2025
Cited by 17 | Viewed by 9277
Abstract
Two technologies of great interest in recent years, Artificial Intelligence (AI) and massive wireless communication networks, have found a significant point of convergence through Federated Learning (FL). Federated Learning is a Machine Learning (ML) technique that enables multiple participants to collaboratively train a model while keeping their data local. Several studies indicate that while improving performance metrics, such as accuracy, loss reduction, or computation time, is a primary goal, achieving this in real-world scenarios remains challenging. This difficulty arises from various heterogeneity characteristics inherent to the wireless devices participating in the federation. Heterogeneity in Federated Learning arises when participants contribute differently, complicating the model training process, and may appear in systems, statistics, and behavior. System heterogeneity arises from differences in device capabilities, including processing power, transmission speeds, availability, energy constraints, and network limitations, among others. Statistical heterogeneity occurs when participants contribute non-independent and non-identically distributed (non-IID) data; this can harm the global model instead of improving it, especially when the data are of poor quality or too scarce. The third type, behavioral heterogeneity, refers to cases where participants are unwilling to engage or expect rewards despite contributing minimal effort. Given the growing research in this area, we present a summary of heterogeneity characteristics in Federated Learning to provide a broader perspective on this emerging technology. We also outline key challenges, opportunities, and future directions for Federated Learning. Finally, we conduct a simulation using the LEAF framework to illustrate the impact of heterogeneity in Federated Learning.
(This article belongs to the Special Issue Federated Learning: Applications and Future Directions)
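A minimal FedAvg-style sketch makes statistical heterogeneity tangible: clients with different data volumes pull the global model toward their own optima, and the server aggregates updates weighted by dataset size. The toy quadratic losses and client sizes are invented; this is not the article's LEAF experiment.

```python
# FedAvg-style sketch with heterogeneous clients: each client runs local SGD
# on its own toy quadratic loss, and the server averages updates weighted by
# local dataset size. Shapes, losses, and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 5, 3
client_sizes = np.array([10, 200, 50, 5, 120])       # non-IID data volumes
client_targets = rng.normal(0, 1, (n_clients, dim))  # each client's local optimum

global_w = np.zeros(dim)
for rnd in range(20):
    updates = []
    for k in range(n_clients):
        w = global_w.copy()
        for _ in range(5):                 # local SGD on loss ||w - target_k||^2
            w -= 0.1 * 2 * (w - client_targets[k])
        updates.append(w)
    # FedAvg aggregation: weighted average by local dataset size.
    global_w = np.average(updates, axis=0, weights=client_sizes)

print("global model:", np.round(global_w, 3))
print("size-weighted target mean:", np.round(
    np.average(client_targets, axis=0, weights=client_sizes), 3))
```

The run converges to the size-weighted mean of the clients' optima, illustrating how large clients dominate the global model when data is non-IID.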

16 pages, 3904 KB  
Article
Co-Simulation of Interconnection Between Smart Power Grid and Smart Cities Platform via Massive Machine-Type Communication
by Luiz H. N. Rodrigues, Carlos F. M. Almeida, Nelson Kagan, Luiz H. L. Rosa and Milana L. dos Santos
Sensors 2025, 25(5), 1517; https://doi.org/10.3390/s25051517 - 1 Mar 2025
Cited by 5 | Viewed by 2733
Abstract
With the advent of Industry 5.0, the electrical sector has been endowed with intelligent devices that are propelling high penetration of distributed energy microgeneration, virtual power plants (VPPs), smart buildings, and smart plants, imposing new challenges on the sector. This new environment requires a smarter network, including transforming the simple electricity customer into a "smart customer" who values the quality of energy and its rational use. The smart power grid (SPG) is the ideal solution for meeting these needs. It is crucial to understand energy use to guarantee quality of service and meet data security requirements. Simulation is the best strategy for mapping the behavior of complex infrastructures because it overcomes the limitations of traditional analytical solutions. This article presents the ICT laboratory structure developed within the Department of Electrical Engineering of the Polytechnic School of the Universidade de São Paulo (USP). It is based on an architecture that utilizes LTE/EPC wireless technology (4G, 5G, and B5G) to enable massive machine-type communication (mMTC) between SPG elements using multi-access edge computing (MEC) resources and those of smart city platforms. We evaluate this proposal through simulations using data from real and emulated equipment, together with co-simulations shared by SPG laboratories at POLI-USP. Finally, we present preliminary results from integrating the power laboratory, a network simulator (ns-3), and a smart city platform (InterSCity) to validate and test the architecture.
(This article belongs to the Section Communications)

24 pages, 8199 KB  
Article
Redefining 6G Network Slicing: AI-Driven Solutions for Future Use Cases
by Robert Botez, Daniel Zinca and Virgil Dobrota
Electronics 2025, 14(2), 368; https://doi.org/10.3390/electronics14020368 - 18 Jan 2025
Cited by 20 | Viewed by 7774
Abstract
The evolution from 5G to 6G networks is driven by the need to meet increasingly stringent requirements for ultra-reliable, low-latency, and high-throughput communication. The new services are called Further-Enhanced Mobile Broadband (feMBB), Extremely Reliable and Low-Latency Communications (ERLLC), Ultra-Massive Machine-Type Communications (umMTC), Massive Ultra-Reliable Low-Latency Communications (mURLLC), and Mobile Broadband Reliable Low-Latency Communications (MBRLLC). Network slicing emerges as a critical enabler in 6G, providing virtualized, end-to-end network segments tailored to diverse application needs. Despite its significance, existing datasets for slice selection are limited to 5G or LTE-A contexts and lack relevance to these enhanced requirements. In this work, we present a novel synthetic dataset tailored to 6G network slicing. By analyzing the emerging service requirements, we generated traffic parameters including latency, packet loss, jitter, and transfer rates. Machine Learning (ML) models such as Random Forest (RF), Decision Tree (DT), XGBoost, Support Vector Machine (SVM), and Feedforward Neural Network (FNN) were trained on this dataset, achieving over 99% accuracy in both slice classification and handover prediction. Our results highlight the potential of this dataset as a valuable tool for developing AI-assisted 6G network slicing mechanisms. While still in its early stages, the dataset lays a foundation for future research. As 6G standardization advances, we aim to refine the dataset and models, ultimately enabling real-time, intelligent slicing solutions for next-generation networks.
(This article belongs to the Special Issue Advances in IoT Security)
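As a sanity check on the feasibility of ML-based slice selection, the sketch below trains a Random Forest on synthetic traffic features (latency, packet loss, jitter, transfer rate) with a toy labeling rule. The data generator and thresholds are invented for illustration and are not the authors' dataset.

```python
# Toy slice-selection classifier on synthetic traffic features, in the spirit
# of the article's approach. The generator and labeling thresholds are
# illustrative assumptions, not the authors' 6G dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000
latency_ms = rng.uniform(0.1, 50, n)
loss_pct = rng.uniform(0, 2, n)
jitter_ms = rng.uniform(0, 10, n)
rate_mbps = rng.uniform(0.1, 2000, n)
X = np.column_stack([latency_ms, loss_pct, jitter_ms, rate_mbps])

# Toy labeling rule: URLLC-like if latency-critical (0), eMBB-like if
# rate-hungry (1), otherwise mMTC-like (2).
y = np.where(latency_ms < 5, 0, np.where(rate_mbps > 500, 1, 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"slice classification accuracy: {clf.score(X_te, y_te):.3f}")
```

With cleanly separable synthetic labels, near-perfect accuracy is expected, which is consistent with the abstract's >99% figures and explains why realistic, noisier 6G traces remain the harder open problem.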
