Search Results (2,576)

Search Parameters:
Keywords = Internet of Things (IoT) communication

27 pages, 14684 KiB  
Article
SDT4Solar: A Spatial Digital Twin Framework for Scalable Rooftop PV Planning in Urban Environments
by Athenee Teofilo, Qian (Chayn) Sun and Marco Amati
Smart Cities 2025, 8(4), 128; https://doi.org/10.3390/smartcities8040128 - 4 Aug 2025
Abstract
To sustainably power future urban communities, cities require advanced solar energy planning tools that overcome the limitations of traditional approaches, such as data fragmentation and siloed decision-making. Spatial digital twins (SDTs) present a transformative opportunity by enabling precision urban modelling, integrated simulations, and iterative decision support. However, their application in solar energy planning remains underexplored. This study introduces SDT4Solar, a novel SDT-based framework designed to integrate city-scale rooftop solar planning through 3D building semantisation, solar modelling, and a unified geospatial database. By leveraging advanced spatial modelling and Internet of Things (IoT) technologies, SDT4Solar facilitates high-resolution 3D solar potential simulations, improving the accuracy and equity of solar infrastructure deployment. We demonstrate the framework through a proof-of-concept implementation in Ballarat East, Victoria, Australia, structured in four key stages: (a) spatial representation of the urban built environment, (b) integration of multi-source datasets into a unified geospatial database, (c) rooftop solar potential modelling using 3D simulation tools, and (d) dynamic visualization and analysis in a testbed environment. Results highlight SDT4Solar’s effectiveness in enabling data-driven, spatially explicit decision-making for rooftop PV deployment. This work advances the role of SDTs in urban energy transitions, demonstrating their potential to optimise efficiency in solar infrastructure planning.
(This article belongs to the Topic Sustainable Building Development and Promotion)
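As a flavour of what stage (c) of such a pipeline computes, the sketch below estimates plane-of-array irradiance for a single rooftop plane with the standard isotropic-sky transposition model. It is an illustration only, not SDT4Solar's method; all input values and the albedo default are assumptions.

```python
import numpy as np

def rooftop_poa_irradiance(ghi, dni, dhi, sun_zenith_deg, sun_azimuth_deg,
                           tilt_deg, azimuth_deg, albedo=0.2):
    """Plane-of-array irradiance (W/m^2) for one rooftop plane using the
    isotropic-sky model. All angles are in degrees."""
    zen, saz = np.radians(sun_zenith_deg), np.radians(sun_azimuth_deg)
    tilt, paz = np.radians(tilt_deg), np.radians(azimuth_deg)
    # Angle of incidence between the sun vector and the panel normal
    cos_aoi = (np.cos(zen) * np.cos(tilt) +
               np.sin(zen) * np.sin(tilt) * np.cos(saz - paz))
    beam = dni * max(cos_aoi, 0.0)                 # direct beam on the plane
    sky = dhi * (1 + np.cos(tilt)) / 2             # isotropic diffuse sky
    ground = ghi * albedo * (1 - np.cos(tilt)) / 2 # ground-reflected share
    return beam + sky + ground

# Near-noon example for a hypothetical 20-degree roof facing the sun
print(rooftop_poa_irradiance(ghi=820, dni=750, dhi=120,
                             sun_zenith_deg=30, sun_azimuth_deg=0,
                             tilt_deg=20, azimuth_deg=0))
```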

23 pages, 2029 KiB  
Systematic Review
Exploring the Role of Industry 4.0 Technologies in Smart City Evolution: A Literature-Based Study
by Nataliia Boichuk, Iwona Pisz, Anna Bruska, Sabina Kauf and Sabina Wyrwich-Płotka
Sustainability 2025, 17(15), 7024; https://doi.org/10.3390/su17157024 - 2 Aug 2025
Viewed by 206
Abstract
Smart cities are technologically advanced urban environments where interconnected systems and data-driven technologies enhance public service delivery and quality of life. These cities rely on information and communication technologies, the Internet of Things, big data, cloud computing, and other Industry 4.0 tools to support efficient city management and foster citizen engagement. Often referred to as digital cities, they integrate intelligent infrastructures and real-time data analytics to improve mobility, security, and sustainability. Ubiquitous sensors, paired with Artificial Intelligence, enable cities to monitor infrastructure, respond to residents’ needs, and optimize urban conditions dynamically. Given the increasing significance of Industry 4.0 in urban development, this study adopts a bibliometric approach to systematically review the application of these technologies within smart cities. Utilizing major academic databases such as Scopus and Web of Science, the research aims to identify the primary Industry 4.0 technologies implemented in smart cities, assess their impact on infrastructure, economic systems, and urban communities, and explore the challenges and benefits associated with their integration. The bibliometric analysis covered publications from 2016 to 2023, the period since urban researchers’ interest in the technologies of the new industrial revolution emerged. The study aims to contribute to a deeper understanding of how smart cities evolve through the adoption of advanced technological frameworks. Research indicates that IoT and AI are the most commonly used tools in urban spaces, particularly in smart mobility and smart environments.

18 pages, 651 KiB  
Article
Enhancing IoT Connectivity in Suburban and Rural Terrains Through Optimized Propagation Models Using Convolutional Neural Networks
by George Papastergiou, Apostolos Xenakis, Costas Chaikalis, Dimitrios Kosmanos and Menelaos Panagiotis Papastergiou
IoT 2025, 6(3), 41; https://doi.org/10.3390/iot6030041 - 31 Jul 2025
Viewed by 180
Abstract
The widespread adoption of the Internet of Things (IoT) has driven major advancements in wireless communication, especially in rural and suburban areas where low population density and limited infrastructure pose significant challenges. Accurate Path Loss (PL) prediction is critical for the effective deployment and operation of Wireless Sensor Networks (WSNs) in such environments. This study explores the use of Convolutional Neural Networks (CNNs) for PL modeling, utilizing a comprehensive dataset collected in a smart campus setting that captures the influence of terrain and environmental variations. Several CNN architectures were evaluated based on different combinations of input features—such as distance, elevation, clutter height, and altitude—to assess their predictive accuracy. The findings reveal that CNN-based models outperform traditional propagation models (Free Space Path Loss (FSPL), Okumura–Hata, COST 231, Log-Distance), achieving lower error rates and more precise PL estimations. The best-performing CNN configuration, using only distance and elevation, highlights the value of terrain-aware modeling. These results underscore the potential of deep learning techniques to enhance IoT connectivity in sparsely connected regions and support the development of more resilient communication infrastructures.
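A minimal sketch of this kind of CNN-based path loss regressor is shown below in PyTorch, using only the two features the best configuration needed (distance and elevation), with the textbook FSPL formula as a baseline. The layer sizes, synthetic data, and training settings are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def fspl_db(d_km, f_mhz):
    # Free Space Path Loss baseline: 20 log10(d_km) + 20 log10(f_MHz) + 32.44
    return 20 * torch.log10(d_km) + 20 * torch.log10(f_mhz) + 32.44

# Tiny 1D-CNN regressor over a (distance, elevation) feature vector.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=2),   # slides over the 2 input features
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 1),                  # predicted path loss in dB
)

x = torch.rand(64, 1, 2)               # 64 samples: scaled [distance, elevation]
y = torch.rand(64, 1) * 40 + 90        # synthetic PL targets in dB
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(200):                   # short training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"train MSE: {loss.item():.2f} dB^2")
print(f"FSPL at 1 km, 868 MHz: {fspl_db(torch.tensor(1.0), torch.tensor(868.0)):.1f} dB")
```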

22 pages, 1386 KiB  
Article
A Scalable Approach to IoT Interoperability: The Share Pattern
by Riccardo Petracci and Rosario Culmone
Sensors 2025, 25(15), 4701; https://doi.org/10.3390/s25154701 - 30 Jul 2025
Viewed by 172
Abstract
The Internet of Things (IoT) is transforming how devices communicate, with more than 30 billion connected units today and projections exceeding 40 billion by 2025. Despite this growth, the integration of heterogeneous systems remains a significant challenge, particularly in sensitive domains like healthcare, where proprietary standards and isolated ecosystems hinder interoperability. This paper presents an extended version of the Share design pattern, a lightweight and contract-based mechanism for dynamic service composition, tailored for resource-constrained IoT devices. Share enables decentralized, peer-to-peer integration by exchanging executable code, written in our examples in the Lua programming language. This approach avoids reliance on centralized infrastructures and allows services to discover and interact with each other dynamically through pattern-matching and contract validation. To assess its suitability, we developed an emulator that directly implements the system under test in Lua, allowing us to verify both the structural and behavioral constraints of service interactions. Our results demonstrate that Share is scalable and effective, even in constrained environments, and supports formal correctness via design-by-contract principles. This makes it a promising solution for lightweight, interoperable IoT systems that require flexibility, dynamic configuration, and resilience without centralized control.
(This article belongs to the Special Issue Secure and Decentralised IoT Systems)
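To make the contract-based idea concrete, here is a toy Share-style exchange written in Python rather than Lua: a service travels as source code plus pre- and postconditions, and the consumer validates the contract around every call. The helper names and the example service are hypothetical, not the authors' implementation.

```python
# Toy sketch of a Share-style exchange: a peer publishes a service as source
# code plus a contract (pre/postconditions); a consumer validates the contract
# before and after running it.

def make_service(source, precondition, postcondition):
    return {"source": source, "pre": precondition, "post": postcondition}

def run_with_contract(service, value):
    env = {}
    exec(service["source"], env)          # load the shared code
    fn = env["service"]
    if not service["pre"](value):
        raise ValueError("precondition rejected input")
    result = fn(value)
    if not service["post"](result):
        raise ValueError("postcondition rejected output")
    return result

celsius_to_fahrenheit = make_service(
    source="def service(c):\n    return c * 9 / 5 + 32",
    precondition=lambda c: -273.15 <= c <= 1000,   # physically plausible input
    postcondition=lambda f: isinstance(f, float),
)

print(run_with_contract(celsius_to_fahrenheit, 37.0))  # 98.6
```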

25 pages, 19197 KiB  
Article
Empirical Evaluation of TLS-Enhanced MQTT on IoT Devices for V2X Use Cases
by Nikolaos Orestis Gavriilidis, Spyros T. Halkidis and Sophia Petridou
Appl. Sci. 2025, 15(15), 8398; https://doi.org/10.3390/app15158398 - 29 Jul 2025
Viewed by 140
Abstract
The rapid growth of Internet of Things (IoT) deployment has led to an unprecedented volume of interconnected, resource-constrained devices. Securing their communication is essential, especially in vehicular environments, where sensitive data exchange requires robust authentication, integrity, and confidentiality guarantees. In this paper, we present an empirical evaluation of TLS (Transport Layer Security)-enhanced MQTT (Message Queuing Telemetry Transport) on low-cost, quad-core Cortex-A72 ARMv8 boards, specifically the Raspberry Pi 4B, commonly used as prototyping platforms for On-Board Units (OBUs) and Road-Side Units (RSUs). Three MQTT entities, namely, the broker, the publisher, and the subscriber, are deployed, utilizing Elliptic Curve Cryptography (ECC) for key exchange and authentication and employing the AES_256_GCM and ChaCha20_Poly1305 ciphers for confidentiality via appropriately selected libraries. We quantify resource consumption in terms of CPU utilization, execution time, energy usage, memory footprint, and goodput across TLS phases, cipher suites, message packaging strategies, and both Ethernet and WiFi interfaces. Our results show that (i) TLS 1.3-enhanced MQTT is feasible on Raspberry Pi 4B devices, though it introduces non-negligible resource overheads; (ii) batching messages into fewer, larger packets reduces transmission cost and latency; and (iii) ChaCha20_Poly1305 outperforms AES_256_GCM, particularly in wireless scenarios, making it the preferred choice for resource- and latency-sensitive V2X applications. These findings provide actionable recommendations for deploying secure MQTT communication on an IoT platform.
(This article belongs to the Special Issue Cryptography in Data Protection and Privacy-Enhancing Technologies)
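A minimal sketch of a TLS-secured MQTT publisher that batches readings into one larger payload (the paper's finding (ii)) might look as follows, using the common paho-mqtt Python client. The broker address, topic, certificate paths, and client id are placeholders, and cipher-suite selection is left to the TLS library defaults.

```python
import json
import ssl
import paho.mqtt.client as mqtt

# Assumes the paho-mqtt 1.x client API; v2.x also requires a CallbackAPIVersion
# argument. Certificates enable mutual authentication with the broker.
client = mqtt.Client(client_id="obu-01")
client.tls_set(
    ca_certs="ca.crt",                         # CA that signed the broker cert
    certfile="obu-01.crt",                     # client cert for mutual auth
    keyfile="obu-01.key",
    tls_version=ssl.PROTOCOL_TLS_CLIENT,       # negotiates TLS 1.2/1.3
)
client.connect("broker.example.local", 8883)   # standard TLS MQTT port
client.loop_start()

# Batch ten readings into a single, larger publish instead of ten small ones.
batch = [{"seq": i, "speed_kmh": 48.0 + i} for i in range(10)]
client.publish("v2x/obu-01/telemetry", json.dumps(batch), qos=1)
client.loop_stop()
client.disconnect()
```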

12 pages, 2500 KiB  
Article
Deep Learning-Based Optical Camera Communication with a 2D MIMO-OOK Scheme for IoT Networks
by Huy Nguyen and Yeng Min Jang
Electronics 2025, 14(15), 3011; https://doi.org/10.3390/electronics14153011 - 29 Jul 2025
Viewed by 312
Abstract
Radio frequency (RF)-based wireless systems are broadly used in communication systems such as mobile networks, satellite links, and monitoring applications. These systems offer outstanding advantages over wired systems, particularly in terms of ease of installation. However, researchers are seeking safer alternatives because of concerns about possible health effects associated with high-frequency RF transmission. Using the visible light spectrum is one promising approach; three cutting-edge technologies are emerging in this regard: Optical Camera Communication (OCC), Light Fidelity (Li-Fi), and Visible Light Communication (VLC). In this paper, we propose a Multiple-Input Multiple-Output (MIMO) modulation technology for Internet of Things (IoT) applications, utilizing an LED array and time-domain on-off keying (OOK). The proposed system is compatible with both rolling shutter and global shutter cameras, including commercially available models such as CCTV, webcams, and smart cameras, commonly deployed in buildings and industrial environments. Despite the compact size of the LED array, we demonstrate that, by optimizing parameters such as exposure time, camera focal length, and channel coding, our system can achieve up to 20 communication links over a 20 m distance with a low bit error rate.
(This article belongs to the Special Issue Advances in Optical Communications and Optical Networks)
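The round trip below illustrates the basic 2D MIMO-OOK idea on a synthetic image: each LED in an 8 × 8 array carries one on-off bit, the "camera" sees a noisy grayscale patch, and bits are recovered by averaging and thresholding each LED's cell. Grid size, pixel scale, and noise level are assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
bits = rng.integers(0, 2, size=64)                  # payload: one bit per LED

frame = bits.reshape(8, 8).astype(float)            # OOK symbol on the LED grid
pixels = np.kron(frame, np.ones((16, 16))) * 200    # each LED covers 16x16 px
pixels += rng.normal(0, 20, pixels.shape)           # camera/sensor noise

# Decode: average the pixels of each LED cell, then apply the OOK threshold.
cells = pixels.reshape(8, 16, 8, 16).mean(axis=(1, 3))
decoded = (cells > 100).astype(int).ravel()

print("bit errors:", int(np.sum(decoded != bits)))
```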

37 pages, 1037 KiB  
Review
Machine Learning for Flood Resiliency—Current Status and Unexplored Directions
by Venkatesh Uddameri and E. Annette Hernandez
Environments 2025, 12(8), 259; https://doi.org/10.3390/environments12080259 - 28 Jul 2025
Viewed by 675
Abstract
A systems-oriented review of machine learning (ML) over the entire flood management spectrum, encompassing fluvial flood control, pluvial flood management, and resiliency-risk characterization, was undertaken. Deep learners like long short-term memory (LSTM) networks perform well in predicting reservoir inflows and outflows. Convolutional neural networks (CNNs) and other object identification algorithms are being explored in assessing levee and flood wall failures. The use of ML methods in pump station operations is limited due to a lack of public-domain datasets. Reinforcement learning (RL) has shown promise in controlling low-impact development (LID) systems for pluvial flood management. Resiliency is defined in terms of the vulnerability of a community to floods. Multi-criteria decision making (MCDM) and unsupervised ML methods are used to capture vulnerability. Supervised learning is used to model flooding hazards. Conventional approaches perform better than deep learners and ensemble methods for modeling flood hazards due to paucity of data and large inter-model predictive variability. Advances in satellite-based, drone-facilitated data collection and Internet of Things (IoT)-based low-cost sensors offer new research avenues to explore. Transfer learning at ungauged basins holds promise but is largely unexplored. Explainable artificial intelligence (XAI) is seeing increased use and helps the transition of ML models from black-box forecasters to knowledge-enhancing predictors.
(This article belongs to the Special Issue Hydrological Modeling and Sustainable Water Resources Management)
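As one concrete instance of the reservoir-inflow predictors the review surveys, the sketch below trains a small LSTM on synthetic rainfall/inflow sequences in PyTorch; the architecture and data are illustrative, not drawn from any study reviewed.

```python
import torch
import torch.nn as nn

class InflowLSTM(nn.Module):
    """Toy next-day reservoir inflow predictor from a 30-day feature window."""
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict from the last time step

model = InflowLSTM()
x = torch.rand(16, 30, 2)                 # 16 windows, 30 days, [rain, inflow]
y = torch.rand(16, 1)                     # synthetic next-day inflow targets
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```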

30 pages, 7092 KiB  
Article
Slotted Circular-Patch MIMO Antenna for 5G Applications at Sub-6 GHz
by Heba Ahmed, Allam M. Ameen, Ahmed Magdy, Ahmed Nasser and Mohammed Abo-Zahhad
Telecom 2025, 6(3), 53; https://doi.org/10.3390/telecom6030053 - 28 Jul 2025
Viewed by 244
Abstract
The swift advancement of fifth-generation (5G) wireless technology brings forth a range of enhancements to address the increasing demand for data, the proliferation of smart devices, and the growth of the Internet of Things (IoT). This highly interconnected communication environment necessitates using multiple-input multiple-output (MIMO) systems to achieve adequate channel capacity. In this article, two antennas designed and fabricated for 5G wireless communication in the sub-6 GHz band are presented: a 2-port MIMO system using two flipped parallel 1 × 2 arrays and a 2-port MIMO system using two opposite 1 × 4 arrays, overcoming the limitations of previous designs in gain, radiation efficiency, and MIMO performance. The designed and fabricated single-element antenna features a circular microstrip patch design based on a Rogers 5880 (RT5880) substrate, which has a thickness of 1.57 mm, a permittivity of 2.2, and a loss tangent of 0.0009. The 2-port MIMO of two 1 × 2 arrays and the 2-port MIMO of two 1 × 4 arrays have overall dimensions of 132 × 66 × 1.57 mm3 and 140 × 132 × 1.57 mm3, respectively. The MIMO of two 1 × 2 arrays and the MIMO of two 1 × 4 arrays achieve maximum gains of 8.3 dBi and 10.9 dBi, respectively, with maximum radiation efficiency reaching 95% and 97.46%. High MIMO performance is observed for both designs, with a channel capacity loss (CCL) < 0.4 bit/s/Hz and < 0.3 bit/s/Hz, respectively, an envelope correlation coefficient (ECC) < 0.006 and < 0.003, respectively, a directivity gain (DG) of about 10 dB, and a total active reflection coefficient (TARC) under −10 dB, ensuring impedance matching and low mutual coupling among neighboring elements, which confirms their effectiveness for 5G applications. The three fabricated antennas were experimentally tested and implemented using the MIMO Application Framework version 19.5 for 5G systems, demonstrating operational effectiveness in 5G applications.
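The MIMO metrics quoted above follow from S-parameters; a short sketch of how ECC and the associated diversity gain are commonly computed for a 2-port antenna is given below, with illustrative S-parameter values (not the paper's measurements) and one common DG approximation.

```python
import numpy as np

def ecc_from_s(s11, s12, s21, s22):
    """Envelope correlation coefficient of a 2-port antenna from S-parameters
    (standard lossless-antenna approximation)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2) *
           (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den

def diversity_gain(ecc):
    # One common approximation: DG = 10 * sqrt(1 - ECC^2), ~10 dB for small ECC
    return 10 * np.sqrt(1 - ecc ** 2)

# Well-matched, well-isolated ports (values are illustrative)
s11, s22 = 0.05 + 0.02j, 0.06 - 0.01j
s12 = s21 = 0.03 + 0.01j
ecc = ecc_from_s(s11, s12, s21, s22)
print(f"ECC = {ecc:.6f}, DG = {diversity_gain(ecc):.3f} dB")
```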

20 pages, 5343 KiB  
Article
System-Level Assessment of Ka-Band Digital Beamforming Receivers and Transmitters Implementing Large Thinned Antenna Array for Low Earth Orbit Satellite Communications
by Giovanni Lasagni, Alessandro Calcaterra, Monica Righini, Giovanni Gasparro, Stefano Maddio, Vincenzo Pascale, Alessandro Cidronali and Stefano Selleri
Sensors 2025, 25(15), 4645; https://doi.org/10.3390/s25154645 - 26 Jul 2025
Viewed by 330
Abstract
In this paper, we present a system-level model of a digital multibeam antenna designed for Low Earth Orbit satellite communications operating in the Ka-band. We initially develop a suitable array topology, which is based on a thinned lattice, then adopt it as the foundation for evaluating its performance within a digital beamforming architecture. This architecture is implemented in a system-level simulator to evaluate the performance of the transmitter and receiver chains. This study advances the analysis of digital antennas by incorporating the non-idealities of both the RF front-end and the digital sections into a digital-twin framework. This approach enhances the designer’s ability to optimize the system holistically and provides insights into how various impairments affect transmitter and receiver performance, identifying the subsystems’ parameter limits. To achieve this, we analyze several subsystem parameters and impairments, assessing their effects on both the antenna radiation and the quality of the transmitted and received signals in a realistic application context. The results of this study reveal the sensitivity of the system to the impairments and suggest strategies to trade them off, emphasizing the importance of selecting appropriate subsystem features to optimize overall system performance.
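Before any impairment modelling, a thinned lattice can be sanity-checked through its array factor; the sketch below does this for a hypothetical 64-element, randomly thinned, half-wavelength linear array, as an illustration of the topology step rather than the paper's planar design or simulator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
active = rng.random(n) > 0.25            # thinned lattice: ~75% of elements on
d = 0.5                                   # element spacing in wavelengths
theta = np.radians(np.linspace(-90, 90, 1801))

# Array factor: coherent sum of the active elements' phase contributions
phase = 2 * np.pi * d * np.arange(n)[:, None] * np.sin(theta)[None, :]
af = np.abs(np.sum(active[:, None] * np.exp(1j * phase), axis=0))
af_db = 20 * np.log10(af / af.max())      # normalize to the beam peak

peak = np.degrees(theta[np.argmax(af_db)])
mask = np.abs(np.degrees(theta)) > 5      # rough sidelobe region
print(f"beam peak at {peak:.1f} deg, "
      f"max sidelobe outside +/-5 deg: {af_db[mask].max():.1f} dB")
```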

14 pages, 1295 KiB  
Article
Edge-FLGuard+: A Federated and Lightweight Anomaly Detection Framework for Securing 5G-Enabled IoT in Smart Homes
by Manuel J. C. S. Reis
Future Internet 2025, 17(8), 329; https://doi.org/10.3390/fi17080329 - 24 Jul 2025
Viewed by 187
Abstract
The rapid expansion of 5G-enabled Internet of Things (IoT) devices in smart homes has heightened the need for robust, privacy-preserving, and real-time cybersecurity mechanisms. Traditional cloud-based security systems often face latency and privacy bottlenecks, making them unsuitable for edge-constrained environments. In this work, we propose Edge-FLGuard+, a federated and lightweight anomaly detection framework specifically designed for 5G-enabled smart home ecosystems. The framework integrates edge AI with federated learning to detect network and device anomalies while preserving user privacy and reducing cloud dependency. A lightweight autoencoder-based model is trained across distributed edge nodes using privacy-preserving federated averaging. We evaluate our framework using the TON_IoT and CIC-IDS2018 datasets under realistic smart home attack scenarios. Experimental results show that Edge-FLGuard+ achieves high detection accuracy (≥95%) with minimal communication and computational overhead, outperforming traditional centralized and local-only baselines. Our results demonstrate the viability of federated AI models for real-time security in next-generation smart home networks.
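The server-side step of the federated averaging the framework relies on reduces to a weighted mean of locally trained weights; the sketch below shows that step with random stand-ins for each edge node's autoencoder weights (the node count and sizes are assumptions).

```python
import numpy as np

def fedavg(node_weights, node_samples):
    """One federated-averaging round: weight each node's parameter vector by
    its share of the total training samples, then sum."""
    total = sum(node_samples)
    return sum(w * (n / total) for w, n in zip(node_weights, node_samples))

rng = np.random.default_rng(1)
nodes = [rng.normal(size=128) for _ in range(5)]   # 5 smart-home edge nodes
samples = [1200, 800, 450, 2000, 300]              # local dataset sizes
global_weights = fedavg(nodes, samples)            # broadcast back to nodes
print(global_weights[:4])
```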

13 pages, 560 KiB  
Article
Balancing Complexity and Performance in Convolutional Neural Network Models for QUIC Traffic Classification
by Giovanni Pettorru, Matteo Flumini and Marco Martalò
Sensors 2025, 25(15), 4576; https://doi.org/10.3390/s25154576 - 24 Jul 2025
Viewed by 279
Abstract
The upcoming deployment of sixth-generation (6G) wireless networks promises to significantly outperform 5G in terms of data rates, spectral efficiency, device densities, and, most importantly, latency and security. To cope with increasingly complex network traffic, Network Traffic Classification (NTC) will be essential to ensure the high performance and security of a network, which is necessary for advanced applications. This is particularly relevant in the Internet of Things (IoT), where resource-constrained platforms at the edge must manage tasks like traffic analysis and threat detection. In this context, balancing classification accuracy with computational efficiency is key to enabling practical, real-world deployments. Traditional payload-based and packet inspection methods are based on the identification of relevant patterns and fields in the packet content. However, such methods are nowadays limited by the rise of encrypted communications. To this end, the research community has turned its attention to statistical analysis and Machine Learning (ML). In particular, Convolutional Neural Networks (CNNs) are gaining momentum for ML-based NTC leveraging statistical analysis of flow characteristics. Therefore, this paper addresses CNN-based NTC in the presence of encrypted communications generated by the rising Quick UDP Internet Connections (QUIC) protocol. Different models are presented, and their performance is assessed to show the trade-off between classification accuracy and CNN complexity. In particular, our results show that even a simple, very low-complexity CNN architecture can achieve almost 92% accuracy when compared to baseline architectures documented in the existing literature.
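A low-complexity 1D CNN classifier of the general kind the paper evaluates might be sketched as below, taking a short per-packet feature sequence from a flow and emitting one of four hypothetical traffic classes; the layer sizes and classes are assumptions, not the paper's models.

```python
import torch
import torch.nn as nn

# Two small conv blocks, global pooling, and a linear head keep the parameter
# count low enough for edge deployment.
model = nn.Sequential(
    nn.Conv1d(2, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 4),                  # e.g., video / web / chat / upload
)

flows = torch.rand(8, 2, 20)           # 8 flows, 2 features, first 20 packets
logits = model(flows)
print(logits.argmax(dim=1))            # predicted class per flow
```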

19 pages, 313 KiB  
Article
Survey on the Role of Mechanistic Interpretability in Generative AI
by Leonardo Ranaldi
Big Data Cogn. Comput. 2025, 9(8), 193; https://doi.org/10.3390/bdcc9080193 - 23 Jul 2025
Viewed by 755
Abstract
The rapid advancement of artificial intelligence (AI) and machine learning has revolutionised how systems process information, make decisions, and adapt to dynamic environments. AI-driven approaches have significantly enhanced efficiency and problem-solving capabilities across various domains, from automated decision-making to knowledge representation and predictive modelling. These developments have led to the emergence of increasingly sophisticated models capable of learning patterns, reasoning over complex data structures, and generalising across tasks. As AI systems become more deeply integrated into networked infrastructures and the Internet of Things (IoT), their ability to process and interpret data in real time is essential for optimising intelligent communication networks, distributed decision making, and autonomous IoT systems. However, despite these achievements, the internal mechanisms that drive the reasoning and generalisation capabilities of large language models (LLMs) remain largely unexplored. This lack of transparency, compounded by challenges such as hallucinations, adversarial perturbations, and misaligned human expectations, raises concerns about their safe and beneficial deployment. Understanding the underlying principles governing AI models is crucial for their integration into intelligent network systems, automated decision-making processes, and secure digital infrastructures. This paper provides a comprehensive analysis of explainability approaches aimed at uncovering the fundamental mechanisms of LLMs. We investigate the strategic components contributing to their generalisation abilities, focusing on methods to quantify acquired knowledge and assess its representation within model parameters. Specifically, we examine mechanistic interpretability, probing techniques, and representation engineering as tools to decipher how knowledge is structured, encoded, and retrieved in AI systems. Furthermore, by adopting a mechanistic perspective, we analyse emergent phenomena within training dynamics, particularly memorisation and generalisation, which also play a crucial role in broader AI-driven systems, including adaptive network intelligence, edge computing, and real-time decision-making architectures. Understanding these principles is crucial for bridging the gap between black-box AI models and practical, explainable AI applications, thereby ensuring trust, robustness, and efficiency in language-based and general AI systems.
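Of the tools the survey examines, probing is the simplest to illustrate: train a linear classifier on frozen hidden activations and test whether a property is linearly decodable from them. The sketch below does this with synthetic activations and a planted signal; nothing here comes from a real model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 256                     # 2000 tokens, 256-dim hidden states
labels = rng.integers(0, 2, size=n)  # hypothetical binary property per token
acts = rng.normal(size=(n, d))       # stand-in for frozen LLM activations
acts[labels == 1, :8] += 1.5         # plant a weak linear signal for the demo

# Fit the linear probe on one split, score on the other: above-chance accuracy
# suggests the property is linearly encoded in the representation.
probe = LogisticRegression(max_iter=1000).fit(acts[:1500], labels[:1500])
print(f"probe accuracy: {probe.score(acts[1500:], labels[1500:]):.2f}")
```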

18 pages, 1138 KiB  
Article
Intelligent Priority-Aware Spectrum Access in 5G Vehicular IoT: A Reinforcement Learning Approach
by Adeel Iqbal, Tahir Khurshaid and Yazdan Ahmad Qadri
Sensors 2025, 25(15), 4554; https://doi.org/10.3390/s25154554 - 23 Jul 2025
Viewed by 266
Abstract
Efficient and intelligent spectrum access is crucial for meeting the diverse Quality of Service (QoS) demands of Vehicular Internet of Things (V-IoT) systems in next-generation cellular networks. This work proposes RL-PASM, a novel reinforcement learning (RL)-based priority-aware spectrum management framework: a centralized, self-learning framework operating through Roadside Units (RSUs). RL-PASM dynamically allocates spectrum resources across three traffic classes: high-priority (HP), low-priority (LP), and best-effort (BE). This work compares four RL algorithms: Q-Learning (QL), Double Q-Learning (DQL), Deep Q-Network (DQN), and Actor-Critic (AC) methods. The environment is modeled as a discrete-time Markov Decision Process (MDP), and a context-sensitive reward function guides fairness-preserving decisions for access, preemption, coexistence, and hand-off. Extensive simulations conducted under realistic vehicular load conditions evaluate the performance across key metrics, including throughput, delay, energy efficiency, fairness, blocking, and interruption probability. Unlike prior approaches, RL-PASM introduces a unified multi-objective reward formulation and centralized RSU-based control to support adaptive priority-aware access in dynamic vehicular environments. Simulation results confirm that RL-PASM balances throughput, latency, fairness, and energy efficiency, demonstrating its suitability for scalable and resource-constrained deployments. The results also demonstrate that DQN achieves the highest average throughput, followed by vanilla QL, while DQL and AC maintain high fairness and low average interruption probability. QL demonstrates the lowest average delay and the highest energy efficiency, making it a suitable candidate for edge-constrained vehicular deployments. By selecting the appropriate RL method, RL-PASM offers a robust and adaptable solution for scalable, intelligent, and priority-aware spectrum access in vehicular communication infrastructures.
(This article belongs to the Special Issue Emerging Trends in Next-Generation mmWave Cognitive Radio Networks)
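The tabular Q-Learning variant at the heart of such a framework reduces to the standard update Q(s,a) ← Q(s,a) + α[r + γ max_a′ Q(s′,a′) − Q(s,a)]; the sketch below applies it to a toy three-action (HP/LP/BE) channel-grant decision with stand-in dynamics and rewards, not the paper's MDP.

```python
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 8, 3            # queue-occupancy buckets x {HP, LP, BE}
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
priority_reward = np.array([3.0, 2.0, 1.0])   # favor high-priority traffic

state = 0
for step in range(20_000):
    # Epsilon-greedy action selection
    action = rng.integers(n_actions) if rng.random() < eps else Q[state].argmax()
    reward = priority_reward[action] - 0.5 * rng.random()  # noisy service reward
    next_state = rng.integers(n_states)                    # toy channel dynamics
    # Standard Q-learning temporal-difference update
    Q[state, action] += alpha * (
        reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("greedy action per state:", Q.argmax(axis=1))  # 0=HP, 1=LP, 2=BE
```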

26 pages, 2875 KiB  
Article
Sustainable THz SWIPT via RIS-Enabled Sensing and Adaptive Power Focusing: Toward Green 6G IoT
by Sunday Enahoro, Sunday Cookey Ekpo, Mfonobong Uko, Fanuel Elias, Rahul Unnikrishnan, Stephen Alabi and Nurudeen Kolawole Olasunkanmi
Sensors 2025, 25(15), 4549; https://doi.org/10.3390/s25154549 - 23 Jul 2025
Viewed by 338
Abstract
Terahertz (THz) communications and simultaneous wireless information and power transfer (SWIPT) hold the potential to energize battery-less Internet-of-Things (IoT) devices while enabling multi-gigabit data transmission. However, severe path loss, blockages, and rectifier nonlinearity significantly hinder both throughput and harvested energy. Additionally, high-power THz beams pose safety concerns by potentially exceeding specific absorption rate (SAR) limits. We propose a sensing-adaptive power-focusing (APF) framework in which a reconfigurable intelligent surface (RIS) embeds low-rate THz sensors. Real-time backscatter measurements construct a spatial map used for the joint optimisation of (i) RIS phase configurations, (ii) multi-tone SWIPT waveforms, and (iii) nonlinear power-splitting ratios. A weighted MMSE inner loop maximizes the data rate, while an outer alternating optimisation applies semidefinite relaxation to enforce passive-element constraints and SAR compliance. Full-stack simulations at 0.3 THz with 20 GHz bandwidth and up to 256 RIS elements show that APF (i) improves the rate–energy Pareto frontier by 30–75% over recent adaptive baselines; (ii) achieves a 150% gain in harvested energy and a 440 Mbps peak per-user rate; (iii) reduces energy-efficiency variance by half while maintaining a Jain fairness index of 0.999; and (iv) caps SAR at 1.6 W/kg, which is 20% below the IEEE C95.1 safety threshold. The algorithm converges in seven iterations and executes within <3 ms on a Cortex-A78 processor, ensuring compliance with real-time 6G control budgets. The proposed architecture supports sustainable THz-powered networks for smart factories, digital-twin logistics, wire-free extended reality (XR), and low-maintenance structural health monitors, combining high-capacity communication, safe wireless power transfer, and carbon-aware operation for future 6G cyber–physical systems.
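The rate-energy trade-off that the power-splitting ratios in step (iii) navigate can be seen in miniature below: a fraction ρ of received power goes to the information decoder and 1 − ρ to a linear energy harvester. All link parameters are assumptions chosen only to make the trade-off visible, not the paper's nonlinear rectifier model.

```python
import numpy as np

def rate_and_energy(rho, p_rx_w=1e-6, bandwidth_hz=20e9,
                    noise_w=1e-9, eta=0.3):
    """Shannon rate and linearly harvested power for one power-splitting user.
    rho: share of received power routed to the information decoder."""
    rate_bps = bandwidth_hz * np.log2(1 + rho * p_rx_w / noise_w)
    harvested_w = eta * (1 - rho) * p_rx_w
    return rate_bps, harvested_w

# Sweep the splitting ratio: more rho means more rate, less harvested energy.
for rho in (0.1, 0.5, 0.9):
    r, e = rate_and_energy(rho)
    print(f"rho={rho:.1f}: rate={r/1e9:.1f} Gbps, harvested={e*1e9:.1f} nW")
```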

21 pages, 2065 KiB  
Article
Enhancing Security in 5G and Future 6G Networks: Machine Learning Approaches for Adaptive Intrusion Detection and Prevention
by Konstantinos Kalodanis, Charalampos Papapavlou and Georgios Feretzakis
Future Internet 2025, 17(7), 312; https://doi.org/10.3390/fi17070312 - 18 Jul 2025
Viewed by 355
Abstract
The evolution from 4G to 5G—and eventually to the forthcoming 6G networks—has revolutionized wireless communications by enabling high-speed, low-latency services that support a wide range of applications, including the Internet of Things (IoT), smart cities, and critical infrastructures. However, the unique characteristics of these networks—extensive connectivity, device heterogeneity, and architectural flexibility—impose significant security challenges. This paper introduces a comprehensive framework for enhancing the security of current and emerging wireless networks by integrating state-of-the-art machine learning (ML) techniques into intrusion detection and prevention systems. It also thoroughly explores the key aspects of wireless network security, including architectural vulnerabilities in both 5G and future 6G networks, novel ML algorithms tailored to address evolving threats, privacy-preserving mechanisms, and regulatory compliance with the EU AI Act. Finally, a Wireless Intrusion Detection Algorithm (WIDA) is proposed, demonstrating promising results in improving wireless network security.
(This article belongs to the Special Issue Advanced 5G and Beyond Networks)
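As a generic illustration of the ML-based intrusion detection such a framework integrates (explicitly not the proposed WIDA algorithm, whose details are in the paper), the sketch below trains a random forest on synthetic per-flow features to separate benign from attack traffic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
X = rng.normal(size=(n, 4))            # e.g., [pkts/s, bytes/pkt, duration, flags]
y = (X[:, 0] + 2 * X[:, 1] > 2).astype(int)   # synthetic "attack" labeling rule

# Hold out a quarter of the flows to estimate detection accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out detection accuracy: {clf.score(X_te, y_te):.3f}")
```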
