Search Results (906)

Search Parameters:
Keywords = 5G real-time networking

48 pages, 2506 KiB  
Article
Enhancing Ship Propulsion Efficiency Predictions with Integrated Physics and Machine Learning
by Hamid Reza Soltani Motlagh, Seyed Behbood Issa-Zadeh, Md Redzuan Zoolfakar and Claudia Lizette Garay-Rondero
J. Mar. Sci. Eng. 2025, 13(8), 1487; https://doi.org/10.3390/jmse13081487 - 31 Jul 2025
Abstract
This research develops a dual physics-based machine learning system to forecast fuel consumption and CO2 emissions for a 100 m oil tanker across six operational scenarios: Original, Paint, Advanced Propeller, Fin, Bulbous Bow, and Combined. The combination of hydrodynamic calculations with Monte Carlo simulations provides a solid foundation for training machine learning models, particularly in cases where dataset restrictions are present. The XGBoost model demonstrated superior performance compared to Support Vector Regression, Gaussian Process Regression, Random Forest, and Shallow Neural Network models, achieving near-zero prediction errors that closely matched physics-based calculations. The physics-based analysis demonstrated that the Combined scenario, which combines hull coatings with bulbous bow modifications, produced the largest fuel consumption reduction (5.37% at 15 knots), followed by the Advanced Propeller scenario. The results demonstrate that user inputs (e.g., engine power: 870 kW, speed: 12.7 knots) match the Advanced Propeller scenario, followed by Paint, which indicates that advanced propellers or hull coatings would optimize efficiency. The obtained insights help ship operators modify their operational parameters and designers select essential modifications for sustainable operations. The model maintains its strength at low speeds, where fuel consumption is minimal, making it applicable to other oil tankers. The hybrid approach provides a new tool for maritime efficiency analysis, yielding interpretable results that support International Maritime Organization objectives, despite starting with a limited dataset. The model requires additional research to enhance its predictive accuracy using larger datasets and real-time data collection, which will aid in achieving global environmental stewardship. Full article
(This article belongs to the Special Issue Machine Learning for Prediction of Ship Motion)
26 pages, 4289 KiB  
Article
A Voronoi–A* Fusion Algorithm with Adaptive Layering for Efficient UAV Path Planning in Complex Terrain
by Boyu Dong, Gong Zhang, Yan Yang, Peiyuan Yuan and Shuntong Lu
Drones 2025, 9(8), 542; https://doi.org/10.3390/drones9080542 - 31 Jul 2025
Abstract
Unmanned Aerial Vehicles (UAVs) face significant challenges in global path planning within complex terrains, as traditional algorithms (e.g., A*, PSO, APF) struggle to balance computational efficiency, path optimality, and safety. This study proposes a Voronoi–A* fusion algorithm, combining Voronoi-vertex-based rapid trajectory generation with A* supplementary expansion for enhanced performance. First, an adaptive DEM layering strategy divides the terrain into horizontal planes based on obstacle density, reducing computational complexity while preserving 3D flexibility. The Voronoi vertices within each layer serve as a sparse waypoint network, with a greedy heuristic prioritizing vertices that ensure safety margins, directional coherence, and goal proximity. For unresolved segments, A* performs localized searches to ensure complete connectivity. Finally, a line-segment interpolation search further optimizes the path to minimize both length and turning maneuvers. Simulations in mountainous environments demonstrate superior performance over traditional methods in terms of path planning success rates, path optimality, and computation time. Our framework excels in real-time scenarios, such as disaster rescue and logistics, although it assumes static environments and trades slight path elongation for robustness. Future research should integrate dynamic obstacle avoidance and weather impact analysis to enhance adaptability in real-world conditions. Full article
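The A* supplementary-expansion step described above can be illustrated with a minimal grid search. This is an illustrative sketch only, not the paper's implementation: the 4-connected occupancy grid, the Manhattan heuristic, and the `a_star` helper are assumptions for demonstration.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle). Returns a path or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:               # reconstruct path by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None

# Toy terrain layer: rows 1 and 3 are walls with single gaps.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 1],
        [0, 0, 0, 0]]
path = a_star(grid, (0, 0), (4, 3))
```

In the fused algorithm this kind of localized search would only run between Voronoi waypoints that the greedy pass failed to connect.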

26 pages, 62045 KiB  
Article
CML-RTDETR: A Lightweight Wheat Head Detection and Counting Algorithm Based on the Improved RT-DETR
by Yue Fang, Chenbo Yang, Chengyong Zhu, Hao Jiang, Jingmin Tu and Jie Li
Electronics 2025, 14(15), 3051; https://doi.org/10.3390/electronics14153051 - 30 Jul 2025
Abstract
Wheat is one of the most important grain crops, and spike counting is crucial for predicting yield. However, in complex farmland environments, wheat heads vary greatly in scale, their color is highly similar to the background, and ears often overlap, all of which make wheat ear detection challenging. At the same time, the growing demand for high accuracy and fast response in wheat spike detection requires models that are lightweight and reduce hardware costs. Therefore, this study proposes a lightweight wheat ear detection model, CML-RTDETR, for efficient and accurate detection of wheat ears in real, complex farmland environments. In the model construction, the lightweight network CSPDarknet is first introduced as the backbone of CML-RTDETR to enhance feature extraction efficiency. In addition, the FM module is introduced to modify the bottleneck layer in the C2f component, and hybrid feature extraction is realized by splicing spatial- and frequency-domain features to enhance feature extraction for wheat ears in complex scenes. Secondly, to improve the model's detection capability for targets of different scales, a multi-scale feature enhancement pyramid (MFEP) is designed, consisting of GHSDConv, for efficiently obtaining low-level detail information, and CSPDWOK, for constructing a multi-scale semantic fusion structure. Finally, channel pruning based on Layer-Adaptive Magnitude Pruning (LAMP) scoring is performed to reduce model parameters and runtime memory. The experimental results on the GWHD2021 dataset show that the AP50 of CML-RTDETR reaches 90.5%, an improvement of 1.2% over the baseline RTDETR-R18 model. Meanwhile, the parameters and GFLOPs have been decreased to 11.03 M and 37.8 G, reductions of 42% and 34%, respectively. Finally, the real-time frame rate reaches 73 fps, achieving both parameter reduction and speed improvement. Full article
(This article belongs to the Section Artificial Intelligence)
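The LAMP scoring used for the channel pruning step has a compact definition: each weight's squared magnitude is normalized by the sum of squared magnitudes of all weights at least as large, so the largest weight in a layer always scores 1. A minimal sketch (the `lamp_scores` helper and the toy weight values are illustrative, not from the paper):

```python
def lamp_scores(weights):
    """LAMP score per weight: w_i^2 divided by the tail sum of squared
    magnitudes over all weights whose magnitude is >= |w_i|."""
    order = sorted(range(len(weights)), key=lambda i: weights[i] ** 2)  # ascending
    scores = [0.0] * len(weights)
    tail = sum(w * w for w in weights)
    for i in order:
        scores[i] = (weights[i] ** 2) / tail  # denominator: this weight + all larger
        tail -= weights[i] ** 2
    return scores

# Pruning keeps the channels with the highest scores; low scores are cut first.
scores = lamp_scores([0.1, -0.5, 2.0, 0.3])
```

Because the score is scale-invariant per layer, a single global threshold can prune across layers of very different weight magnitudes.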

27 pages, 6715 KiB  
Article
Structural Component Identification and Damage Localization of Civil Infrastructure Using Semantic Segmentation
by Piotr Tauzowski, Mariusz Ostrowski, Dominik Bogucki, Piotr Jarosik and Bartłomiej Błachowski
Sensors 2025, 25(15), 4698; https://doi.org/10.3390/s25154698 - 30 Jul 2025
Abstract
Visual inspection of civil infrastructure for structural health assessment, as performed by structural engineers, is expensive and time-consuming. Automating this process is therefore highly attractive and has received significant attention in recent years. With the increasing capabilities of computers, deep neural networks have become a standard tool and can be used for structural health inspections. A key challenge, however, is the availability of reliable datasets. In this work, the U-net and DeepLab v3+ convolutional neural networks are trained on the synthetic Tokaido dataset. This dataset comprises images representative of data acquired by unmanned aerial vehicle (UAV) imagery and corresponding ground truth data. The data includes semantic segmentation masks both for categorizing structural elements (slabs, beams, and columns) and for assessing structural damage (concrete spalling or exposed rebars). Data augmentation, including both image quality degradation (e.g., brightness modification, added noise) and image transformations (e.g., image flipping), is applied to the synthetic dataset. The selected neural network architectures achieve excellent performance, reaching 97% accuracy and 87% Mean Intersection over Union (mIoU) on the validation data. They also demonstrate promising results in the semantic segmentation of real-world structures captured in photographs, despite being trained solely on synthetic data. Additionally, based on the semantic segmentation results, DeepLabV3+ outperforms U-net in structural component identification, but not in the damage identification task. Full article
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)

14 pages, 1771 KiB  
Article
An Adaptive Overcurrent Protection Method for Distribution Networks Based on Dynamic Multi-Objective Optimization Algorithm
by Biao Xu, Fan Ouyang, Yangyang Li, Kun Yu, Fei Ao, Hui Li and Liming Tan
Algorithms 2025, 18(8), 472; https://doi.org/10.3390/a18080472 - 28 Jul 2025
Abstract
With the large-scale integration of renewable energy into distribution networks, traditional fixed-setting overcurrent protection strategies struggle to adapt to rapid fluctuations in renewable energy (e.g., wind and photovoltaic) output. Optimizing current settings is crucial for enhancing the stability of modern distribution networks. This paper proposes an adaptive overcurrent protection method based on an improved NSGA-II algorithm. By dynamically detecting renewable power fluctuations and generating adaptive solutions, the method enables the online optimization of protection parameters, effectively reducing misoperation rates, shortening operation times, and significantly improving the reliability and resilience of distribution networks. Using the rate of renewable power variation as the core criterion, renewable power changes are categorized into abrupt and gradual scenarios. Depending on the scenario, either a random solution injection strategy (DNSGA-II-A) or a Gaussian mutation strategy (DNSGA-II-B) is dynamically applied to adjust overcurrent protection settings and time delays, ensuring real-time alignment with grid conditions. Hard constraints such as sensitivity, selectivity, and misoperation rate are embedded to guarantee compliance with relay protection standards. Additionally, the convergence of the Pareto front change rate serves as the termination condition, reducing computational redundancy and avoiding local optima. Simulation tests on a 10 kV distribution network integrated with a wind farm validate the effectiveness of the proposed method. Full article
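The two dynamic strategies can be sketched as a single dispatch on the measured rate of renewable power change: abrupt changes trigger random solution injection (DNSGA-II-A style), gradual ones trigger Gaussian mutation (DNSGA-II-B style). This is a hedged illustration, not the authors' code; the threshold, mutation fraction, sigma, and the `adapt_population` signature are all assumed values.

```python
import random

def adapt_population(pop, rate, threshold=0.2, frac=0.3, sigma=0.05, bounds=(0.0, 1.0)):
    """Refresh part of an NSGA-II population when renewable output changes.

    abrupt (|rate| > threshold): re-seed a fraction with random solutions;
    gradual: perturb that fraction with clipped Gaussian mutation.
    """
    lo, hi = bounds
    n_touch = max(1, int(frac * len(pop)))
    for i in random.sample(range(len(pop)), n_touch):
        if abs(rate) > threshold:   # abrupt scenario: random injection
            pop[i] = [random.uniform(lo, hi) for _ in pop[i]]
        else:                       # gradual scenario: Gaussian mutation
            pop[i] = [min(hi, max(lo, x + random.gauss(0.0, sigma))) for x in pop[i]]
    return pop
```

In the full algorithm this step would run before each re-optimization round, with sensitivity, selectivity, and misoperation-rate constraints enforced during selection.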

21 pages, 3448 KiB  
Article
A Welding Defect Detection Model Based on Hybrid-Enhanced Multi-Granularity Spatiotemporal Representation Learning
by Chenbo Shi, Shaojia Yan, Lei Wang, Changsheng Zhu, Yue Yu, Xiangteng Zang, Aiping Liu, Chun Zhang and Xiaobing Feng
Sensors 2025, 25(15), 4656; https://doi.org/10.3390/s25154656 - 27 Jul 2025
Abstract
Real-time quality monitoring using molten pool images is a critical focus in researching high-quality, intelligent automated welding. To address interference problems in molten pool images under complex welding scenarios (e.g., reflected laser spots from spatter misclassified as porosity defects) and the limited interpretability of deep learning models, this paper proposes a multi-granularity spatiotemporal representation learning algorithm based on the hybrid enhancement of handcrafted and deep learning features. A MobileNetV2 backbone network integrated with a Temporal Shift Module (TSM) is designed to progressively capture the short-term dynamic features of the molten pool and integrate temporal information across both low-level and high-level features. A multi-granularity attention-based feature aggregation module is developed to select key interference-free frames using cross-frame attention, generate multi-granularity features via grouped pooling, and apply the Convolutional Block Attention Module (CBAM) at each granularity level. Finally, these multi-granularity spatiotemporal features are adaptively fused. Meanwhile, an independent branch utilizes the Histogram of Oriented Gradient (HOG) and Scale-Invariant Feature Transform (SIFT) features to extract long-term spatial structural information from historical edge images, enhancing the model’s interpretability. The proposed method achieves an accuracy of 99.187% on a self-constructed dataset. Additionally, it attains a real-time inference speed of 20.983 ms per sample on a hardware platform equipped with an Intel i9-12900H CPU and an RTX 3060 GPU, thus effectively balancing accuracy, speed, and interpretability. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
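The Temporal Shift Module the paper builds on has a simple core: a fraction of channels is shifted one frame forward, another fraction one frame backward, adding temporal mixing at zero parameter cost. A minimal sketch with plain lists instead of tensors; `fold_div=8` follows the common TSM default, not necessarily this paper's setting.

```python
def temporal_shift(x, fold_div=8):
    """TSM sketch: x is a [T][C] feature sequence. The first C//fold_div
    channels are shifted forward in time, the next C//fold_div backward,
    with zero-padding at the sequence ends; the rest stay in place."""
    T, C = len(x), len(x[0])
    fold = C // fold_div
    out = [[0.0] * C for _ in range(T)]
    for t in range(T):
        for c in range(C):
            if c < fold:            # channel carries info from the previous frame
                src = t - 1
            elif c < 2 * fold:      # channel carries info from the next frame
                src = t + 1
            else:                   # untouched channels
                src = t
            if 0 <= src < T:
                out[t][c] = x[src][c]
    return out
```

Inserted into a 2D backbone such as MobileNetV2, this lets each frame's features "see" adjacent molten-pool frames without 3D convolutions.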

14 pages, 1295 KiB  
Article
Edge-FLGuard+: A Federated and Lightweight Anomaly Detection Framework for Securing 5G-Enabled IoT in Smart Homes
by Manuel J. C. S. Reis
Future Internet 2025, 17(8), 329; https://doi.org/10.3390/fi17080329 - 24 Jul 2025
Abstract
The rapid expansion of 5G-enabled Internet of Things (IoT) devices in smart homes has heightened the need for robust, privacy-preserving, and real-time cybersecurity mechanisms. Traditional cloud-based security systems often face latency and privacy bottlenecks, making them unsuitable for edge-constrained environments. In this work, we propose Edge-FLGuard+, a federated and lightweight anomaly detection framework specifically designed for 5G-enabled smart home ecosystems. The framework integrates edge AI with federated learning to detect network and device anomalies while preserving user privacy and reducing cloud dependency. A lightweight autoencoder-based model is trained across distributed edge nodes using privacy-preserving federated averaging. We evaluate our framework using the TON_IoT and CIC-IDS2018 datasets under realistic smart home attack scenarios. Experimental results show that Edge-FLGuard+ achieves high detection accuracy (≥95%) with minimal communication and computational overhead, outperforming traditional centralized and local-only baselines. Our results demonstrate the viability of federated AI models for real-time security in next-generation smart home networks. Full article
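The privacy-preserving federated averaging step can be sketched in a few lines: the server combines client parameter vectors weighted by local dataset size, as in vanilla FedAvg. The flat-vector representation of model weights is a simplification of the real autoencoder parameters.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation sketch: weighted average of per-client parameter
    vectors, with weights proportional to each client's local dataset size.
    Raw data never leaves the edge node; only parameters are exchanged."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w
```

Each smart-home edge node would train its local autoencoder for a few epochs, send updated weights, and receive this aggregate back.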

25 pages, 539 KiB  
Article
Leadership Uniformity in Timeout-Based Quorum Byzantine Fault Tolerance (QBFT) Consensus
by Andreas Polyvios Delladetsimas, Stamatis Papangelou, Elias Iosif and George Giaglis
Big Data Cogn. Comput. 2025, 9(8), 196; https://doi.org/10.3390/bdcc9080196 - 24 Jul 2025
Abstract
This study evaluates leadership uniformity—the degree to which the proposer role is evenly distributed among validator nodes over time—in Quorum-based Byzantine Fault Tolerance (QBFT), a Byzantine Fault-Tolerant (BFT) consensus algorithm used in permissioned blockchain networks. By introducing simulated follower timeouts derived from uniform, normal, lognormal, and Weibull distributions, it models a range of network conditions and latency patterns across nodes. This approach integrates Raft-inspired timeout mechanisms into the QBFT framework, enabling a more detailed analysis of leader selection under different network conditions. Three leader selection strategies are tested: Direct selection of the node with the shortest timeout, and two quorum-based approaches selecting from the top 20% and 30% of nodes with the shortest timeouts. Simulations were conducted over 200 rounds in a 10-node network. Results show that leader selection was most equitable under the Weibull distribution with shape k=0.5, which captures delay behavior observed in real-world networks. In contrast, the uniform distribution did not consistently yield the most balanced outcomes. The findings also highlight the effectiveness of quorum-based selection: While choosing the node with the lowest timeout ensures responsiveness in each round, it does not guarantee uniform leadership over time. In low-variability distributions, certain nodes may be repeatedly selected by chance, as similar timeout values increase the likelihood of the same nodes appearing among the fastest. Incorporating controlled randomness through quorum-based voting improves rotation consistency and promotes fairer leader distribution, especially under heavy-tailed latency conditions. However, expanding the candidate pool beyond 30% (e.g., to 40% or 50%) introduced vote fragmentation, which complicated quorum formation in small networks and led to consensus failure. 
Overall, the study demonstrates the potential of timeout-aware, quorum-based leader selection as a more adaptive and equitable alternative to round-robin approaches, and provides a foundation for developing more sophisticated QBFT variants tailored to latency-sensitive networks. Full article
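The timeout-based selection the study simulates can be sketched as follows, assuming the Weibull shape k=0.5 from the paper's best-performing case; the scale parameter, the 10-node network, and uniform random choice within the quorum are illustrative assumptions.

```python
import random
from collections import Counter

def select_leader(num_nodes=10, quorum_frac=0.3, shape=0.5, rng=random):
    """One round of timeout-based leader selection (sketch): draw per-node
    follower timeouts from a Weibull distribution (shape k=0.5 mimics
    heavy-tailed network delay), then pick the leader at random from the
    quorum of nodes holding the shortest timeouts."""
    timeouts = [rng.weibullvariate(1.0, shape) for _ in range(num_nodes)]
    k = max(1, int(quorum_frac * num_nodes))
    quorum = sorted(range(num_nodes), key=timeouts.__getitem__)[:k]
    return rng.choice(quorum)  # controlled randomness improves rotation fairness

random.seed(7)
wins = Counter(select_leader() for _ in range(200))  # 200 rounds, as in the study
```

Comparing `wins` for `quorum_frac=0.1` (pure shortest-timeout) against 0.3 reproduces the qualitative finding: the quorum variant spreads leadership more evenly under heavy-tailed delays.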

25 pages, 8652 KiB  
Article
Performance Improvement of Seismic Response Prediction Using the LSTM-PINN Hybrid Method
by Seunggoo Kim, Donwoo Lee and Seungjae Lee
Biomimetics 2025, 10(8), 490; https://doi.org/10.3390/biomimetics10080490 - 24 Jul 2025
Abstract
Accurate and rapid prediction of structural responses to seismic loading is critical for ensuring structural safety. Recently, there has been active research focusing on the application of deep learning techniques, including Physics-Informed Neural Networks (PINNs) and Long Short-Term Memory (LSTM) networks, to predict the dynamic behavior of structures. While these methods have shown promise, each comes with distinct limitations. PINNs offer physical consistency but struggle with capturing long-term temporal dependencies in nonlinear systems, while LSTMs excel in learning sequential data but lack physical interpretability. To address these complementary limitations, this study proposes a hybrid LSTM-PINN model, combining the temporal learning ability of LSTMs with the physics-based constraints of PINNs. This hybrid approach allows the model to capture both nonlinear, time-dependent behaviors and maintain physical consistency. The proposed model is evaluated on both single-degree-of-freedom (SDOF) and multi-degree-of-freedom (MDOF) structural systems subjected to the El-Centro ground motion. For validation, the 1940 El-Centro NS earthquake record was used, and the ground acceleration data were normalized and discretized for numerical simulation. The proposed LSTM-PINN is trained under the same conditions as the conventional PINN models (e.g., same optimizer, learning rate, and loss structure), but with fewer training epochs, to evaluate learning efficiency. Prediction accuracy is quantitatively assessed using mean error and mean squared error (MSE) for displacement, velocity, and acceleration, and results are compared with PINN-only models (PINN-1, PINN-2). The results show that LSTM-PINN consistently achieves the most stable and precise predictions across the entire time domain. Notably, it outperforms the baseline PINNs even with fewer training epochs. 
Specifically, it achieved up to 50% lower MSE with only 10,000 epochs, compared to the PINN’s 50,000 epochs, demonstrating improved generalization through temporal sequence learning. This study empirically validates the potential of physics-guided time-series AI models for dynamic structural response prediction. The proposed approach is expected to contribute to future applications such as real-time response estimation, structural health monitoring, and seismic performance evaluation. Full article
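The physics half of such a hybrid loss can be illustrated for an SDOF system: the network's predicted displacement, velocity, and acceleration are penalized by the residual of the equation of motion m·u'' + c·u' + k·u = -m·a_g. This is a sketch under assumed mass, damping, and stiffness values, not the paper's configuration.

```python
def sdof_physics_residual(disp, vel, acc, ground_acc, m=1.0, c=0.1, k=10.0):
    """Physics loss term (sketch): mean squared residual of the SDOF equation
    of motion. A PINN (or the PINN branch of an LSTM-PINN) drives this toward
    zero alongside the ordinary data-fitting loss."""
    res = [m * a + c * v + k * u + m * ag
           for u, v, a, ag in zip(disp, vel, acc, ground_acc)]
    return sum(r * r for r in res) / len(res)
```

In training, the total loss would be a weighted sum of this residual and the MSE against measured El-Centro response data; the LSTM supplies the temporal features the plain PINN lacks.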

20 pages, 1816 KiB  
Article
A Self-Attention-Enhanced 3D Object Detection Algorithm Based on a Voxel Backbone Network
by Zhiyong Wang and Xiaoci Huang
World Electr. Veh. J. 2025, 16(8), 416; https://doi.org/10.3390/wevj16080416 - 23 Jul 2025
Abstract
3D object detection is a fundamental task in autonomous driving. In recent years, voxel-based methods have demonstrated significant advantages in reducing computational complexity and memory consumption when processing large-scale point cloud data. A representative method, Voxel-RCNN, introduces Region of Interest (RoI) pooling on voxel features, successfully bridging the gap between voxel and point cloud representations for enhanced 3D object detection. However, its robustness deteriorates when detecting distant objects or in the presence of noisy points (e.g., traffic signs and trees). To address this limitation, we propose an enhanced approach named Self-Attention Voxel-RCNN (SA-VoxelRCNN). Our method integrates two complementary attention mechanisms into the feature extraction phase. First, a full self-attention (FSA) module improves global context modeling across all voxel features. Second, a deformable self-attention (DSA) module enables adaptive sampling of representative feature subsets at strategically selected positions. After extracting contextual features through attention mechanisms, these features are fused with spatial features from the base algorithm to form enhanced feature representations, which are subsequently input into the region proposal network (RPN) to generate high-quality 3D bounding boxes. Experimental results on the KITTI test set demonstrate that SA-VoxelRCNN achieves consistent improvements in challenging scenarios, with gains of 2.49 and 1.87 percentage points at Moderate and Hard difficulty levels, respectively, while maintaining real-time performance at 22.3 FPS. This approach effectively balances local geometric details with global contextual information, providing a robust detection solution for autonomous driving applications. Full article
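The full self-attention (FSA) idea, stripped to its core, is scaled dot-product attention over all voxel features. A minimal sketch with identity query/key/value projections (real modules learn these projections; the `self_attention` helper is purely illustrative):

```python
import math

def self_attention(feats):
    """Scaled dot-product self-attention sketch over a set of voxel feature
    vectors: every voxel attends to all others, which is the source of the
    global context modeling in an FSA-style module."""
    d = len(feats[0])
    out = []
    for q in feats:
        # similarity of this voxel's query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in feats]
        m = max(scores)                      # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        attn = [e / z for e in exps]         # softmax weights
        out.append([sum(a * v[j] for a, v in zip(attn, feats)) for j in range(d)])
    return out
```

The deformable variant (DSA) would evaluate attention only at a learned subset of sampling positions instead of all voxels, trading a little context for much lower cost.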

27 pages, 705 KiB  
Article
A Novel Wavelet Transform and Deep Learning-Based Algorithm for Low-Latency Internet Traffic Classification
by Ramazan Enisoglu and Veselin Rakocevic
Algorithms 2025, 18(8), 457; https://doi.org/10.3390/a18080457 - 23 Jul 2025
Abstract
Accurate and real-time classification of low-latency Internet traffic is critical for applications such as video conferencing, online gaming, financial trading, and autonomous systems, where millisecond-level delays can degrade user experience. Existing methods for low-latency traffic classification, reliant on raw temporal features or static statistical analyses, fail to capture the dynamic frequency patterns inherent to real-time applications. These limitations hinder accurate resource allocation in heterogeneous networks. This paper proposes a novel framework integrating the wavelet transform (WT) and artificial neural networks (ANNs) to address this gap. Unlike prior works, we systematically apply the WT to commonly used temporal features, such as throughput, slope, ratio, and moving averages, transforming them into frequency-domain representations. This approach reveals hidden multi-scale patterns in low-latency traffic, akin to structured noise in signal processing, which traditional time-domain analyses often overlook. These wavelet-enhanced features train a multilayer perceptron (MLP) ANN, enabling dual-domain (time–frequency) analysis. We evaluate our approach on a dataset comprising FTP, video streaming, and low-latency traffic, including mixed scenarios with up to four concurrent traffic types. Experiments demonstrate 99.56% accuracy in distinguishing low-latency traffic (e.g., video conferencing) from FTP and streaming, outperforming k-NN, CNNs, and LSTMs, and 74.2–92.8% accuracy in mixed-traffic scenarios. Notably, the method eliminates reliance on deep packet inspection (DPI), offering ISPs a privacy-preserving and scalable way to prioritize time-sensitive traffic. By bridging signal processing and deep learning, this work advances efficient bandwidth allocation and improves quality of service in heterogeneous network environments. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
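The wavelet step can be illustrated with a one-level Haar transform, the simplest member of the wavelet family (the abstract does not name the mother wavelet; Haar is an assumed choice for demonstration). It splits a temporal feature series, such as per-interval throughput, into a low-frequency trend and a high-frequency detail component.

```python
import math

def haar_dwt(signal):
    """One-level Haar discrete wavelet transform (sketch): pairwise sums give
    approximation (trend) coefficients, pairwise differences give detail
    (burstiness) coefficients; a trailing odd sample is dropped."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail
```

Feeding both coefficient sets (alongside the raw features) into the MLP is what gives the classifier its dual time–frequency view of each flow.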

23 pages, 2363 KiB  
Review
Handover Decisions for Ultra-Dense Networks in Smart Cities: A Survey
by Akzhibek Amirova, Ibraheem Shayea, Didar Yedilkhan, Laura Aldasheva and Alma Zakirova
Technologies 2025, 13(8), 313; https://doi.org/10.3390/technologies13080313 - 23 Jul 2025
Abstract
Handover (HO) management plays a key role in ensuring uninterrupted connectivity across evolving wireless networks. While previous generations such as 4G and 5G have introduced several HO strategies, these techniques are insufficient to meet the rigorous demands of sixth-generation (6G) networks in ultra-dense, heterogeneous smart city environments. Existing studies often fail to provide integrated HO solutions that consider key concerns such as energy efficiency, security vulnerabilities, and interoperability across diverse network domains, including terrestrial, aerial, and satellite systems. Moreover, the dynamic and high-mobility nature of smart city ecosystems further complicates real-time HO decision-making. This survey aims to highlight these critical gaps by systematically categorizing state-of-the-art HO approaches into AI-based, fuzzy logic-based, and hybrid frameworks, while evaluating their performance against emerging 6G requirements. Future research directions are also outlined, emphasizing the development of lightweight AI–fuzzy hybrid models for real-time decision-making, the implementation of decentralized security mechanisms using blockchain, and the need for global standardization to enable seamless handovers across multi-domain networks. The key outcome of this review is a structured and in-depth synthesis of current advancements, which serves as a foundational reference for researchers and engineers aiming to design intelligent, scalable, and secure HO mechanisms that can support the operational complexity of next-generation smart cities. Full article
(This article belongs to the Section Information and Communication Technologies)

26 pages, 2875 KiB  
Article
Sustainable THz SWIPT via RIS-Enabled Sensing and Adaptive Power Focusing: Toward Green 6G IoT
by Sunday Enahoro, Sunday Cookey Ekpo, Mfonobong Uko, Fanuel Elias, Rahul Unnikrishnan, Stephen Alabi and Nurudeen Kolawole Olasunkanmi
Sensors 2025, 25(15), 4549; https://doi.org/10.3390/s25154549 - 23 Jul 2025
Abstract
Terahertz (THz) communications and simultaneous wireless information and power transfer (SWIPT) hold the potential to energize battery-less Internet-of-Things (IoT) devices while enabling multi-gigabit data transmission. However, severe path loss, blockages, and rectifier nonlinearity significantly hinder both throughput and harvested energy. Additionally, high-power THz beams pose safety concerns by potentially exceeding specific absorption rate (SAR) limits. We propose a sensing-adaptive power-focusing (APF) framework in which a reconfigurable intelligent surface (RIS) embeds low-rate THz sensors. Real-time backscatter measurements construct a spatial map used for the joint optimisation of (i) RIS phase configurations, (ii) multi-tone SWIPT waveforms, and (iii) nonlinear power-splitting ratios. A weighted MMSE inner loop maximizes the data rate, while an outer alternating optimisation applies semidefinite relaxation to enforce passive-element constraints and SAR compliance. Full-stack simulations at 0.3 THz with 20 GHz bandwidth and up to 256 RIS elements show that APF (i) improves the rate–energy Pareto frontier by 30–75% over recent adaptive baselines; (ii) achieves a 150% gain in harvested energy and a 440 Mbps peak per-user rate; (iii) reduces energy-efficiency variance by half while maintaining a Jain fairness index of 0.999; and (iv) caps SAR at 1.6 W/kg, which is 20% below the IEEE C95.1 safety threshold. The algorithm converges in seven iterations and executes within <3 ms on a Cortex-A78 processor, ensuring compliance with real-time 6G control budgets. The proposed architecture supports sustainable THz-powered networks for smart factories, digital-twin logistics, wire-free extended reality (XR), and low-maintenance structural health monitors, combining high-capacity communication, safe wireless power transfer, and carbon-aware operation for future 6G cyber–physical systems. Full article
28 pages, 1858 KiB  
Article
Agriculture 5.0 in Colombia: Opportunities Through the Emerging 6G Network
by Alexis Barrios-Ulloa, Andrés Solano-Barliza, Wilson Arrubla-Hoyos, Adelaida Ojeda-Beltrán, Dora Cama-Pinto, Francisco Manuel Arrabal-Campos and Alejandro Cama-Pinto
Sustainability 2025, 17(15), 6664; https://doi.org/10.3390/su17156664 - 22 Jul 2025
Abstract
Agriculture 5.0 represents a shift towards a more sustainable agricultural model, integrating Artificial Intelligence (AI), the Internet of Things (IoT), robotics, and blockchain technologies to enhance productivity and resource management, with an emphasis on social and environmental resilience. This article explores how the evolution of wireless technologies to sixth-generation networks (6G) can support innovation in Colombia’s agricultural sector and foster rural advancement. The study follows three main phases: search, analysis, and selection of information. In the search phase, key government policies, spectrum management strategies, and the relevant literature from 2020 to 2025 were reviewed. The analysis phase addresses challenges such as spectrum regulation and infrastructure deployment within the context of a developing country. Finally, the selection phase evaluates technological readiness and policy frameworks. Findings suggest that 6G could revolutionize Colombian agriculture by improving connectivity, enabling real-time monitoring, and facilitating precision farming, especially in rural areas with limited infrastructure. Successful 6G deployment could boost agricultural productivity, reduce socioeconomic disparities, and foster sustainable rural development, contingent on aligned public policies, infrastructure investments, and human capital development. Full article
(This article belongs to the Special Issue Sustainable Precision Agriculture: Latest Advances and Prospects)
18 pages, 1261 KiB  
Article
Firmware Attestation in IoT Swarms Using Relational Graph Neural Networks and Static Random Access Memory
by Abdelkabir Rouagubi, Chaymae El Youssofi and Khalid Chougdali
AI 2025, 6(7), 161; https://doi.org/10.3390/ai6070161 - 21 Jul 2025
Abstract
The proliferation of Internet of Things (IoT) swarms—comprising billions of low-end interconnected embedded devices—has transformed industrial automation, smart homes, and agriculture. However, these swarms are highly susceptible to firmware anomalies that can propagate across nodes, posing serious security threats. To address this, we propose a novel Remote Attestation (RA) framework for real-time firmware verification, leveraging Relational Graph Neural Networks (RGNNs) to model the graph-like structure of IoT swarms and capture complex inter-node dependencies. Unlike conventional Graph Neural Networks (GNNs), RGNNs incorporate edge types (e.g., Prompt, Sensor Data, Processed Signal), enabling finer-grained detection of propagation dynamics. The proposed method uses runtime Static Random Access Memory (SRAM) data to detect malicious firmware and its effects without requiring access to firmware binaries. Experimental results demonstrate that the framework achieves 99.94% accuracy and a 99.85% anomaly detection rate in a 4-node swarm (Swarm-1), and 100.00% accuracy with complete anomaly detection in a 6-node swarm (Swarm-2). Moreover, the method proves resilient against noise, dropped responses, and trace replay attacks, offering a robust and scalable solution for securing IoT swarms. Full article
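The relation-typed message passing the abstract describes can be sketched as a single R-GCN-style layer. This is a minimal NumPy stand-in under stated assumptions: the four-node topology, the 8-dimensional SRAM feature digest, and the weight shapes are all hypothetical, and only the edge-type names ("Prompt", "Sensor Data", "Processed Signal") come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy swarm: 4 nodes, feature = hypothetical 8-dim digest of an SRAM snapshot.
F, H = 8, 16
X = rng.normal(size=(4, F))

# Typed directed edges (src, dst) per relation, mirroring the abstract's
# Prompt / Sensor Data / Processed Signal edge types.
edges = {
    "prompt":    [(0, 1), (0, 2)],
    "sensor":    [(1, 3), (2, 3)],
    "processed": [(3, 0)],
}

# One relational layer: a self transform plus one weight matrix per relation,
# mean-aggregated over each node's incoming typed neighbours.
W_self = rng.normal(size=(F, H)) * 0.1
W_rel = {r: rng.normal(size=(F, H)) * 0.1 for r in edges}

out = X @ W_self
for r, elist in edges.items():
    agg = np.zeros((4, H))
    deg = np.zeros(4)
    for s, d in elist:
        agg[d] += X[s] @ W_rel[r]
        deg[d] += 1
    out += agg / np.maximum(deg, 1)[:, None]
out = np.maximum(out, 0)  # ReLU node embeddings, fed to an anomaly classifier
print(out.shape)
```

Because each relation gets its own weight matrix, a "Prompt" edge and a "Sensor Data" edge from the same neighbour contribute differently to a node's embedding, which is the finer-grained propagation modelling the abstract contrasts with plain GNNs.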
