Search Results (280)

Search Parameters:
Keywords = clustering overhead

34 pages, 3733 KB  
Article
SSDBFAN: Scalable and Secure Cluster-Based Data Aggregation with Blockchain for Flying Ad Hoc Networks
by Sufian Al Majmaie, Ghazal Ghajari, Niraj Prasad Bhatta, Mohamed I. Ibrahem and Fathi Amsaad
Sensors 2026, 26(9), 2585; https://doi.org/10.3390/s26092585 - 22 Apr 2026
Viewed by 189
Abstract
Mobile Unmanned Aerial Vehicles (UAVs) forming Flying Ad Hoc Networks (FANETs) offer promising applications, but dynamic network structures, limited resources, and potential single points of failure create security challenges. While cluster-based data aggregation, where data is collected and combined at Cluster Heads (CHs) before transmission, improves efficiency, traditional techniques can compromise data privacy. This paper introduces SSDBFAN, a scalable and secure cluster-based data aggregation framework for Flying Ad Hoc Networks (FANETs). The proposed approach integrates the Frilled Lizard Optimization Algorithm (FLOA) for efficient cluster head selection with blockchain technology and post-quantum cryptographic techniques, including lattice-based homomorphic encryption and the Chinese Remainder Theorem, to ensure privacy-preserving data aggregation. Additionally, a hybrid online/offline signature mechanism is employed to achieve secure and efficient authentication with reduced computational overhead. The performance of the proposed framework is evaluated using NS-3 simulations under varying network sizes. Experimental results demonstrate that SSDBFAN significantly improves communication efficiency, reduces computational cost, and enhances network stability compared to existing schemes. Furthermore, scalability analysis with up to 500 UAV nodes confirms that the proposed framework effectively controls blockchain overhead, including bandwidth consumption, consensus latency, and storage requirements. Comparative evaluation with existing optimization algorithms shows that FLOA achieves superior performance in terms of cluster stability, delay, and throughput. These results validate the effectiveness of SSDBFAN as a scalable and security-aware solution for large-scale FANET environments. Full article
(This article belongs to the Special Issue Security, Privacy and Threat Detection in Sensor Networks)
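The CRT-based packing idea named in this abstract can be illustrated with a minimal sketch: pairwise-coprime moduli let a cluster head combine one reading per member into a single integer that each reading can be recovered from. The moduli and readings below are hypothetical, and the lattice-based homomorphic encryption the paper layers on top is omitted.

```python
from math import prod

def crt_pack(values, moduli):
    """Pack one value per pairwise-coprime modulus into a single residue.
    Exact recovery requires each value to be smaller than its modulus."""
    M = prod(moduli)
    x = 0
    for v, m in zip(values, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi mod m (Python 3.8+)
        x += v * Mi * pow(Mi, -1, m)
    return x % M

def crt_unpack(aggregate, moduli):
    """Each member's value is recovered as aggregate mod m_i."""
    return [aggregate % m for m in moduli]

moduli = [101, 103, 107]   # hypothetical per-UAV coprime moduli
readings = [42, 7, 99]     # one reading per UAV in the cluster
packed = crt_pack(readings, moduli)
assert crt_unpack(packed, moduli) == readings
```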

24 pages, 1534 KB  
Article
Hybrid Energy-Aware Ranking and Optimization
by Zhiling Zeng, Yuxuan Jiang and Na Niu
Future Internet 2026, 18(5), 226; https://doi.org/10.3390/fi18050226 - 22 Apr 2026
Viewed by 87
Abstract
The increase in delay-sensitive application tasks requires heterogeneous edge clusters to maintain low online latency and energy efficiency without relying on rigid scheduling policies. To address this, we propose HERO (Hybrid Energy-aware Ranking and Optimization), a lightweight collaborative scheduling framework. HERO utilizes a perturbation-based communication-aware multi-layer perceptron (MLP) predictor to quantify global time sensitivity and discover latent time slack in non-critical paths. A hybrid budget mechanism then converts this slack into customized DVFS decisions. These decisions are based on the inherent computational load and topological criticality to optimize energy consumption. A communication-aware hole-filling strategy dynamically recovers sporadic idle times fragmented by heterogeneous communication overhead. Extensive simulations were conducted across varying DAG depths, parallelism levels, and system utilizations. Compared to state-of-the-art algorithms (NSGA-II, SSA, TOM, and DPMC), HERO reduced the completion time by an average of 10.89% under high-density topologies, and achieved up to 4.04% energy savings across varying task depths. Full article
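The "latent time slack in non-critical paths" that HERO converts into DVFS decisions is, in its simplest form, the classic DAG slack: latest start minus earliest start under as-soon-as-possible and as-late-as-possible schedules. The sketch below computes that generic quantity and is not HERO's MLP-based predictor; the example graph and durations are hypothetical.

```python
def slack(dag, durations):
    """Per-task slack in a DAG (latest start - earliest start).
    Tasks with slack > 0 are off the critical path and can be
    slowed down without extending the makespan."""
    # Kahn topological order
    indeg = {n: 0 for n in dag}
    for n in dag:
        for m in dag[n]:
            indeg[m] += 1
    order, ready = [], [n for n in dag if indeg[n] == 0]
    while ready:
        n = ready.pop()
        order.append(n)
        for m in dag[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    # Forward pass: earliest start times
    earliest = {n: 0.0 for n in dag}
    for n in order:
        for m in dag[n]:
            earliest[m] = max(earliest[m], earliest[n] + durations[n])
    makespan = max(earliest[n] + durations[n] for n in dag)
    # Backward pass: latest start times
    latest = {n: makespan - durations[n] for n in dag}
    for n in reversed(order):
        for m in dag[n]:
            latest[n] = min(latest[n], latest[m] - durations[n])
    return {n: latest[n] - earliest[n] for n in dag}

# "a" and "c" form the critical path; "b" has 3 time units of slack:
s = slack({"a": ["c"], "b": ["c"], "c": []}, {"a": 4.0, "b": 1.0, "c": 2.0})
assert s == {"a": 0.0, "b": 3.0, "c": 0.0}
```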
23 pages, 2704 KB  
Article
VANET-GPSR+: A Lightweight Direction-Aware Routing Protocol for Vehicular Ad Hoc Networks
by Zhuhua Zhang and Ning Ye
Sensors 2026, 26(8), 2525; https://doi.org/10.3390/s26082525 - 19 Apr 2026
Viewed by 281
Abstract
Vehicular Ad hoc Networks (VANETs) feature high node mobility and volatile topologies, rendering the conventional Greedy Perimeter Stateless Routing (GPSR) protocol prone to weak link stability and inefficient route discovery due to its lack of direction awareness. Existing direction-aware improvements typically rely on multi-criteria weighting or clustering, introducing heavy parameter fusion and computational overhead that conflict with the resource-constrained nature of onboard units. To overcome these limitations, this paper presents VANET-GPSR+, a lightweight enhanced routing protocol. Its key novelty is that it discards multi-parameter fusion and relies solely on movement direction, supported by a synergistic framework of three lightweight mechanisms: direction-aware neighbor classification to prioritize nodes with consistent trajectories, adaptive greedy forwarding region expansion in sparse and dynamic networks, and path deviation angle-based next-hop selection. This work builds a probabilistic link lifetime model that theoretically quantifies the stability gains of direction awareness—a novel theoretical foundation. Comprehensive urban and highway simulations show that VANET-GPSR+ improves the packet delivery ratio by 16.3% and reduces end-to-end delay by 27.5% compared with standard GPSR, and it outperforms both OP-GPSR and AK-GPSR. It introduces negligible CPU and memory overhead, with CPU usage over 50% lower than the two benchmark protocols at 80 vehicles/km, and demonstrates strong robustness against varying beacon intervals and communication radii. Retaining GPSR’s stateless and distributed traits, VANET-GPSR+ delivers substantial performance gains with minimal overhead, serving as an efficient routing solution for highly dynamic VANETs. Full article
(This article belongs to the Section Sensor Networks)
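Direction-aware neighbor classification of the kind this abstract describes can be sketched by thresholding the angle between two nodes' movement vectors. The 45° threshold and the vectors are illustrative, not values from the paper.

```python
import math

def direction_consistent(v_self, v_neighbor, max_angle_deg=45.0):
    """Classify a neighbor as trajectory-consistent when the angle
    between the two 2-D movement vectors is below a threshold."""
    dot = v_self[0] * v_neighbor[0] + v_self[1] * v_neighbor[1]
    n1, n2 = math.hypot(*v_self), math.hypot(*v_neighbor)
    if n1 == 0 or n2 == 0:
        return False  # stationary node: no direction information
    cos_angle = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_angle)) <= max_angle_deg

# Prefer next hops moving roughly the same way as the forwarder:
assert direction_consistent((1.0, 0.0), (0.9, 0.3))       # ~18 degrees apart
assert not direction_consistent((1.0, 0.0), (-1.0, 0.1))  # oncoming traffic
```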

29 pages, 2018 KB  
Article
Energy-Efficient Optimization in Wireless Sensor Networks Using a Hybrid Bat-Artificial Bee Colony Algorithm
by Hussein S. Mohammed, Poria Pirozmand, Sheeraz Memon, Sajad Ghatrehsamani and Indra Seher
Sensors 2026, 26(8), 2401; https://doi.org/10.3390/s26082401 - 14 Apr 2026
Viewed by 404
Abstract
This study presents a novel hybrid Bat-Artificial Bee Colony (BA-ABC) algorithm for energy-efficient optimization in Wireless Sensor Networks (WSNs), addressing the critical challenge of limited node energy and network lifetime degradation. The proposed framework integrates the rapid local convergence of the Bat Algorithm with the robust global exploration of the Artificial Bee Colony to achieve unified optimization of clustering and routing processes. An adaptive multi-objective fitness function is developed to balance energy consumption, network lifetime, and communication efficiency, enabling dynamic, efficient resource utilization across varying network conditions. Comprehensive simulations conducted in MATLAB R2024a demonstrate that the proposed BA-ABC algorithm significantly outperforms conventional and recent optimization approaches. The results show a reduction in total energy consumption of approximately 22–30%, an improvement in network lifetime of 18–25%, and a latency reduction of nearly 24% compared to baseline methods such as Ant Colony Optimization (ACO). Statistical validation, including confidence intervals and hypothesis testing, confirms the robustness, stability, and consistency of the proposed framework across multiple simulation runs. Unlike existing hybrid and machine-learning-based approaches, the BA-ABC algorithm achieves high optimization performance without introducing excessive computational overhead or complex training requirements, making it suitable for resource-constrained WSN environments. Furthermore, the proposed method demonstrates strong scalability and adaptability, positioning it as a practical solution for real-world applications, including smart cities, environmental monitoring, and healthcare systems. This work contributes to the advancement of intelligent WSN optimization by providing a scalable, adaptive, and computationally efficient hybrid framework aligned with emerging trends in next-generation IoT-enabled networks. 
Full article
(This article belongs to the Section Sensor Networks)
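A scalarized multi-objective fitness of the kind this abstract describes might look like the sketch below. The weights and normalizations are hypothetical (the paper adapts its fitness function dynamically); the point is only that lower energy, longer lifetime, and lower delay all raise the score.

```python
def fitness(energy_used, lifetime_rounds, avg_delay,
            w_energy=0.5, w_lifetime=0.3, w_delay=0.2):
    """Illustrative weighted fitness for a clustering/routing candidate.
    Each term is squashed into (0, 1) so the weights are comparable."""
    return (w_energy * (1.0 / (1.0 + energy_used))
            + w_lifetime * lifetime_rounds / (1.0 + lifetime_rounds)
            + w_delay * (1.0 / (1.0 + avg_delay)))

# A clustering that spends less energy, lives longer, and is faster
# scores higher:
good = fitness(energy_used=10.0, lifetime_rounds=500, avg_delay=0.05)
bad = fitness(energy_used=40.0, lifetime_rounds=200, avg_delay=0.20)
assert good > bad
```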

24 pages, 1522 KB  
Article
M-DGNN: Accelerating Large-Scale Dynamic Graph Neural Network Training via PCIe-Interconnected Multiple Computational Storage Devices
by Junhao Zhu, Xiaotong Han, Wenqing Wang, Liang Fang, Xinjie Shi and Junwei Zeng
Electronics 2026, 15(8), 1620; https://doi.org/10.3390/electronics15081620 - 13 Apr 2026
Viewed by 223
Abstract
The explosive growth of temporal graph data has led to significant training overheads for Dynamic Graph Neural Networks (DGNNs), a bottleneck primarily driven by massive data movement between host processors and storage arrays across conventional PCIe I/O buses. While near-data processing with Computational Storage Devices (CSDs) can alleviate this bottleneck, a single CSD is inherently incapable of meeting the terabyte-scale capacity requirements and complex sequence modeling demands of modern large-scale DGNNs. Horizontal scaling with multi-CSD clusters over standard PCIe topologies presents a viable, cost-effective solution, yet our in-depth profiling identifies two critical architectural bottlenecks in naive multi-CSD architectures: host-bounced memory copies significantly compromise inter-device communication efficiency, and sparse graph sampling frequently exceeds the capacity of the tightly constrained local DRAM of CSDs, resulting in excessive flash I/O and performance degradation. To address these interconnected bottlenecks, we propose M-DGNN, a hardware–software co-designed architecture optimized for standard PCIe interconnects. First, M-DGNN orchestrates direct peer-to-peer (P2P) DMA dataflows for inter-CSD hidden state exchange, completely bypassing host operating system intervention and reducing the context-switching overhead. Second, we design a host-assisted caching strategy with a Host-Pinned Memory Extension (HPME) mechanism, which leverages host-pinned memory as an asynchronous DMA extension pool to shield resource-constrained CSDs from high-latency flash I/O during structural subgraph sampling. Extensive experimental evaluations across seven large-scale dynamic graph datasets demonstrate that M-DGNN delivers up to a 6.2× end-to-end speedup over the state-of-the-art DGNN systems. This work establishes an efficient, scalable near-data computing paradigm for large-scale DGNN training. Full article
(This article belongs to the Special Issue High-Performance Computer Architectures: Designs and Applications)

31 pages, 1361 KB  
Article
Ground User Clustering for Adaptive Multibeam GEO Satellite Networks
by Heba Shehata, Hazer Inaltekin and Iain B. Collings
Sensors 2026, 26(8), 2384; https://doi.org/10.3390/s26082384 - 13 Apr 2026
Viewed by 277
Abstract
This paper considers geometry-aware ground user clustering and Cluster Center Optimization for beam pointing and scheduling in adaptive multibeam Geostationary Earth Orbit (GEO) satellite networks. By grouping ground users, beams can be directed toward user clusters to maximize satellite throughput. We propose GeoClust, a polynomial-time geometric user clustering algorithm for adaptive multibeam GEO satellite networks, using a geometric set-cover approach that explicitly balances link signal-to-interference-plus-noise ratio (SINR) and hopping overhead. The algorithm employs a Boyle–Dykstra projection-based cluster center update within an alternating optimization framework, combined with nearest-center membership updates, to enforce the cluster-radius constraint while ensuring feasibility and provable convergence. It also achieves near-linear throughput scaling with increasing number of RF chains. Numerical evaluations on real-world population data show that, under heavy traffic conditions, our approach more than doubles the zero outage and median user rates compared to benchmark schemes. Full article
(This article belongs to the Special Issue Feature Papers in Communications Section 2025–2026)
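The nearest-center membership update inside the alternating optimization can be sketched as below. This is a simplification: the coordinates are hypothetical, the Boyle–Dykstra projection step for the center update is omitted, and users outside every cluster-radius bound are simply left unassigned.

```python
import math

def assign_members(users, centers, max_radius):
    """One nearest-center membership update: each user joins its
    closest center; users farther than the cluster-radius bound
    from every center are left unassigned."""
    clusters = {i: [] for i in range(len(centers))}
    unassigned = []
    for u in users:
        dists = [math.dist(u, c) for c in centers]
        i = min(range(len(centers)), key=dists.__getitem__)
        if dists[i] <= max_radius:
            clusters[i].append(u)
        else:
            unassigned.append(u)
    return clusters, unassigned

users = [(0, 0), (1, 1), (10, 10), (50, 50)]
clusters, left_over = assign_members(users, centers=[(0, 0), (10, 10)],
                                     max_radius=5.0)
assert clusters[0] == [(0, 0), (1, 1)] and clusters[1] == [(10, 10)]
assert left_over == [(50, 50)]  # no beam can cover this user
```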

23 pages, 1056 KB  
Article
Deep Learning-Driven Atomic Norm Optimization for Accurate Downlink Channel Estimation in FDD Systems
by Ke Xu, Sining Li, Changwei Huang, Dan Wu, Changning Wei, Dongjun Zhang, Richu Jin, Huilin Ren, Zhuoqiao Ji, Xinbo Chen and Weiqiang Wu
Electronics 2026, 15(7), 1461; https://doi.org/10.3390/electronics15071461 - 1 Apr 2026
Viewed by 250
Abstract
In this paper, we propose a downlink (DL) channel estimation scheme for frequency-division duplex (FDD) multi-antenna orthogonal frequency-division multiplexing (OFDM) systems, leveraging atomic norm minimization (ANM) and deep neural networks (DNN). Unlike time-division duplex (TDD) systems, where uplink (UL) and DL channels are reciprocal, FDD systems do not share this reciprocity, leading to increased channel training overhead. However, both theoretical analyses and empirical evidence reveal that key channel characteristics—such as angles of arrival and departure, path delays, and the number of propagation paths—exhibit partial reciprocity between UL and DL. Building on this insight, we design a DL channel estimation scheme that exploits frequency-independent UL parameters along with estimated DL channel gains. Our method integrates ANM with DNN to enhance estimation accuracy and efficiency. Specifically, ANM formulates the estimation problem while avoiding the off-grid errors inherent in traditional grid-based methods. To further mitigate performance degradation in clustered-path channels and reduce computational complexity, we introduce a DNN-based architecture that predicts channel parameters. The DNN captures hidden relationships between received pilot signals and frequency-independent channel parameters, enabling accurate estimation with linear time complexity. During training, ANM assists in serving users, ensuring reliable performance. Once the DNN is fully trained, it takes over to balance quality of service (QoS) and latency, providing an efficient and accurate solution for DL channel estimation in FDD-OFDM systems. Full article
(This article belongs to the Section Circuit and Signal Processing)

21 pages, 1332 KB  
Article
Impact of Fabrication Defects on FPGA Logic Using Memristor-Based Memory Cells
by Jonas Schoenen, Jonas Gehrunger, Leon Mayrhofer, Timo Oster, Eszter Piros, Taewook Kim, Alexey Arzumanov, Enrique Miranda, Klaus Hofmann, Lambert Alff and Christian Hochberger
Micromachines 2026, 17(4), 429; https://doi.org/10.3390/mi17040429 - 31 Mar 2026
Viewed by 360
Abstract
Memristor-based configuration memory offers an alternative solution to the volatility and large area overhead of conventional Static Random Access Memory (SRAM)-based FPGA configuration memory. Their non-volatile nature and the possibility of stacking them on top of the logic layer in a process called Back-End-Of-Line (BEOL) manufacturing help not only dramatically reduce area consumption but also significantly reduce startup time. However, due to the comparatively high defect probability caused by manufacturing defects, traditional approaches for defect tolerance are not fit to address these defects. This work introduces an approach to defect-aware and tolerant synthesis. Based on this, an investigation into the defect tolerance of different architecture choices regarding the size of LUTs and the fracturability of LUTs is presented. We can show that smaller, non-fracturable LUTs exhibit a higher defect tolerance. Moreover, multiple strategies to improve the mapping result based on the properties of the logic functions are introduced. Notably, reducing the mapping complexity of logic clusters during the packing stage significantly improves the mapping success rate. Full article
(This article belongs to the Special Issue Advances in Field-Programmable Gate Arrays (FPGAs))

25 pages, 4508 KB  
Article
Lightweight Multimode Day-Ahead PV Power Forecasting for Intelligent Control Terminals Using CURE Clustering and Self-Updating Batch-Lasso
by Ting Yang, Butian Chen, Yuying Wang, Qi Cheng and Danhong Lu
Sustainability 2026, 18(7), 3319; https://doi.org/10.3390/su18073319 - 29 Mar 2026
Viewed by 302
Abstract
Lightweight day-ahead photovoltaic (PV) forecasting models encounter a significant technical challenge: under resource-constrained deployment conditions, it is difficult to simultaneously address weather-regime heterogeneity, maintain model interpretability, and preserve adaptability as operating conditions evolve. To address this issue, we propose a multimodal short-term photovoltaic (PV) forecasting method that integrates weather-mode partitioning using the Clustering Using Representatives (CURE) algorithm with a self-updating Batch-Lasso model. First, the meteorological-PV dataset is partitioned along two dimensions by combining seasonal grouping with CURE clustering within each season, producing representative weather modes and enhancing the fidelity of weather pattern classification. Second, to extract informative predictors from high-dimensional meteorological inputs while maintaining interpretability, we formulate per-mode Lasso regression and adopt the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) to efficiently solve for the sparse regression coefficients. Third, we introduce a batch-based self-update and correction mechanism with rollback verification, enabling the mode-specific models to be refreshed as new historical data become available while preventing performance degradation. Compared with representative machine learning baselines, the proposed method maintains competitive accuracy with substantially lower computational and storage overhead, enabling high-frequency and energy-efficient inference on resource-constrained terminals, thereby reducing operational burdens and computational energy costs and better meeting the deployment needs of sustainable energy systems under heterogeneous weather conditions. Full article
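The per-mode Lasso solved with FISTA is a textbook formulation: minimize 0.5·||Ax − b||² + λ·||x||₁ by alternating a gradient step with the ℓ1 proximal operator (soft-thresholding) and a momentum extrapolation. A minimal pure-Python sketch (small dense problems only; the per-mode application to meteorological predictors is the paper's, not shown here):

```python
def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1, applied elementwise."""
    return [max(abs(vi) - t, 0.0) * (1 if vi >= 0 else -1) for vi in v]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def fista_lasso(A, b, lam, step, n_iter=500):
    """FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1.
    `step` must be at most 1 / L, where L is the largest
    eigenvalue of A^T A, for guaranteed convergence."""
    n = len(A[0])
    x, y, t = [0.0] * n, [0.0] * n, 1.0
    for _ in range(n_iter):
        r = [ri - bi for ri, bi in zip(matvec(A, y), b)]   # Ay - b
        grad = matvec(list(zip(*A)), r)                    # A^T (Ay - b)
        x_new = soft_threshold(
            [yi - step * gi for yi, gi in zip(y, grad)], step * lam)
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0   # momentum update
        y = [xn + ((t - 1.0) / t_new) * (xn - xo)
             for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

# With A = I the Lasso solution is soft_threshold(b, lam): [2.0, 0.0]
x = fista_lasso([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.1], lam=1.0, step=1.0)
assert abs(x[0] - 2.0) < 1e-6 and abs(x[1]) < 1e-6
```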

23 pages, 1004 KB  
Article
A Lightweight IDS Based on Blockchain and Machine Learning for Detecting Physical Attacks in Wireless Sensor Networks
by Maytham S. Jabor, Aqeel S. Azez, José Carlos Campelo and Alberto Bonastre
Sensors 2026, 26(6), 1961; https://doi.org/10.3390/s26061961 - 20 Mar 2026
Viewed by 563
Abstract
Wireless sensor networks (WSNs) are vulnerable to physical attacks in which adversaries gain partial or full control of sensor nodes, compromising the integrity of the network. Conventional security mechanisms impose excessive computational overhead and are not well suited to resource-constrained WSN devices. This paper proposes a lightweight, two-layer intrusion detection system (IDS) that integrates blockchain (BC) technology with machine learning for physical attack detection in WSNs. The first layer employs a lightweight BC protocol among cluster heads (CHs) and the base station (BS) to detect data integrity violations through hash-based consensus. The second layer applies an artificial neural network (ANN) at the base station to detect attacks that bypass blockchain verification, without imposing any processing load on sensor nodes. Simulation experiments on a 100-node WSN demonstrate that the combined system achieves 97.42% accuracy and 98.35% recall, outperforming five established classifiers and both standalone components. The system sustains detection rates above 99.98% under 30 simultaneous attackers and maintains reliable operation under packet loss conditions up to 10%. Full article
(This article belongs to the Special Issue Privacy and Cybersecurity in IoT-Based Applications)
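Hash-based integrity checking among cluster heads can be illustrated with a simple hash chain: each aggregated reading is linked to the previous block's hash, so tampering with any payload breaks every subsequent link. This is a sketch of the general idea, not the paper's consensus protocol; the payload strings are hypothetical.

```python
import hashlib

def block_hash(prev_hash, payload):
    """Hash-link a cluster head's reading to the previous block."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(readings):
    chain, h = [], "genesis"
    for r in readings:
        h = block_hash(h, r)
        chain.append((r, h))
    return chain

def verify_chain(chain):
    """Recompute every link; any mismatch signals tampering."""
    h = "genesis"
    for payload, stored in chain:
        h = block_hash(h, payload)
        if h != stored:
            return False
    return True

chain = build_chain(["ch1:23.5", "ch2:24.1", "ch3:22.9"])
assert verify_chain(chain)
chain[1] = ("ch2:99.9", chain[1][1])  # tamper with one reading
assert not verify_chain(chain)
```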

23 pages, 630 KB  
Article
Depth-First Search-Based Malicious Node Detection with Honeypot Technology in Wireless Sensor Networks
by Sercan Demirci, Doğan Yıldız, Durmuş Özkan Şahin and Asmaa Alaadin
Mathematics 2026, 14(6), 1050; https://doi.org/10.3390/math14061050 - 20 Mar 2026
Viewed by 345
Abstract
Wireless sensor networks (WSNs) are highly susceptible to Denial-of-Service (DoS) attacks due to their resource-constrained and distributed nature. In this study, we propose a novel trust-based malicious node detection mechanism that leverages a Depth-First Search (DFS) strategy to trace and identify attack sources within clustered WSN architectures efficiently. The proposed approach dynamically evaluates trust scores between nodes to detect anomalous behaviors and employs a honeypot-based redirection system to isolate compromised nodes from the main communication flow. This combination enhances detection accuracy while minimizing false positives and energy overhead. The method is implemented and evaluated using a custom simulation environment. Comparative experimental results against state-of-the-art techniques such as the Evolved Trust Updating Mechanism (EVO) and Multi-agent Trust-based Intrusion Detection System (MULTI) demonstrate that our Trust-Based Honeypot (TBHP) achieves superior performance in terms of detection rate, false-alarm rate, and network lifetime extension. Full article
(This article belongs to the Topic Recent Advances in Security, Privacy, and Trust)
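A DFS trace over per-node trust scores, as this abstract describes, can be sketched as follows. The 0.4 threshold, topology, and node names are hypothetical; the honeypot redirection step is omitted.

```python
def dfs_suspects(topology, trust, start, threshold=0.4):
    """Depth-first trace from a cluster head, collecting nodes whose
    trust score falls below the threshold, in DFS visit order."""
    suspects, visited, stack = [], set(), [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        if trust.get(node, 1.0) < threshold:
            suspects.append(node)
        # push neighbors in reverse so the first listed neighbor pops first
        stack.extend(reversed(topology.get(node, [])))
    return suspects

topology = {"CH": ["n1", "n2"], "n1": ["n3"], "n2": [], "n3": []}
trust = {"CH": 0.9, "n1": 0.8, "n2": 0.2, "n3": 0.35}
# DFS descends through n1 to n3 before backtracking to n2:
assert dfs_suspects(topology, trust, "CH") == ["n3", "n2"]
```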

25 pages, 3362 KB  
Article
Adaptive Clustering and Machine-Learning-Based DoS Intrusion Detection in MANETs
by Hwanseok Yang
Appl. Sci. 2026, 16(6), 2723; https://doi.org/10.3390/app16062723 - 12 Mar 2026
Viewed by 335
Abstract
Mobile ad hoc networks (MANETs) are highly vulnerable to denial-of-service (DoS) attacks because their decentralized operation, rapidly changing topology, and constrained node resources limit the use of heavyweight security mechanisms. This paper presents an Adaptive Clustering and Random-Forest-based Intrusion Detection System (ACRF-IDS), a lightweight intrusion detection framework that combines mobility-aware adaptive clustering with supervised learning to detect network-layer DoS behaviors. Cluster heads are elected using a multi-metric utility (residual energy, link stability, and mobility) to stabilize observations under node movement. Within fixed monitoring windows, cluster heads aggregate routing-, forwarding-, and energy-related features and classify nodes using a Random Forest model; a temporal voting scheme further suppresses transient mobility-induced alarms. Using ns-2.35 simulations with Ad hoc On-Demand Distance Vector (AODV) and both flooding and blackhole DoS scenarios, ACRF-IDS is compared with (i) a static clustering-based threshold IDS, (ii) a non-clustered Support Vector Machine (SVM)-based IDS, and (iii) AIFAODV, which specializes in flooding. Across the evaluated network sizes (4–50 nodes), the proposed method achieves a higher detection rate and F1-score while maintaining a lower false positive rate than the baseline techniques. We additionally quantify network-level impact using PDR, throughput, and routing overhead, showing that ACRF-IDS improves availability under DoS while adding bounded overhead. Future work will extend the evaluation to more diverse attack behaviors and validate the approach in real-world MANET testbeds. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
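The temporal voting scheme that suppresses transient, mobility-induced alarms can be sketched as a quorum over a sliding window of per-window classifier verdicts. The window and quorum sizes are illustrative, not the paper's parameters.

```python
from collections import deque

def temporal_vote(window_verdicts, window=5, quorum=3):
    """Flag a node only if the per-window classifier marked it
    malicious (verdict 1) in at least `quorum` of the last
    `window` monitoring windows."""
    recent = deque(maxlen=window)
    flags = []
    for verdict in window_verdicts:
        recent.append(verdict)
        flags.append(sum(recent) >= quorum)
    return flags

# Isolated detections (mobility glitches) are ignored;
# sustained detections raise the alarm:
flags = temporal_vote([0, 1, 0, 0, 1, 1, 1, 1])
assert flags == [False] * 5 + [True] * 3
```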

27 pages, 2849 KB  
Systematic Review
Intrusion Detection in Fog Computing: A Systematic Review of Security Advances and Challenges
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Computers 2026, 15(3), 169; https://doi.org/10.3390/computers15030169 - 5 Mar 2026
Viewed by 734
Abstract
Fog computing extends cloud services to the network edge to support low-latency IoT applications. However, since fog environments are distributed and resource-constrained, intrusion detection systems must be adapted to defend against cyberattacks while keeping computation and communication overhead minimal. This systematic review presents research on intrusion detection systems (IDSs) for fog computing and synthesizes advances and research gaps. The study was guided by the “Preferred-Reporting-Items for-Systematic-Reviews-and-Meta-Analyses” (PRISMA) framework. Scopus and Web of Science were searched in the title field using TITLE/TI = (“intrusion detection” AND “fog computing”) for 2021–2025. The inclusion criteria were (i) 2021–2025 publications, (ii) journal or conference papers, (iii) English language, and (iv) open access availability; duplicates were removed programmatically using a DOI-first key with a title, year, and author alternative. The search identified 8560 records, of which 4905 were unique and included for qualitative grouping and bibliometric synthesis. Metadata (year, venue, authors, affiliations, keywords, and citations) were extracted and analyzed in Python to compute trends and collaboration. Intrusion detection systems in fog networks were categorized into traditional/signature-based, machine learning, deep learning, and hybrid/ensemble. Hybrid and DL approaches reported accuracy ranging from 95 to 99% on benchmark datasets (such as NSL-KDD, UNSW-NB15, CIC-IDS2017, KDD99, BoT-IoT). Notable bottlenecks included computational load relative to real-time latency on resource-constrained nodes, elevated false-positive rates for anomaly detection under concept drift, limited generalization to unseen attacks, privacy risks from centralizing data, and limited real-world validation. 
Bibliometric analyses highlighted the field’s concentration in fast-turnaround, open-access journals such as IEEE Access and Sensors, as well as a small number of highly collaborative author clusters, alongside dominant terms such as “learning,” “federated,” “ensemble,” “lightweight,” and “explainability.” Emerging directions include federated and distributed training to preserve privacy, as well as online/continual learning adaptation. Future work should consist of real-world evaluation of fog networks, ultra-lightweight yet adaptive hybrid IDS, self-learning, and secure cooperative frameworks. These insights help researchers select appropriate IDS models for fog networks. Full article
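The "DOI-first key with a title, year, and author alternative" deduplication the review describes can be sketched as below. The record field names (`doi`, `title`, `year`, `authors`) are assumptions about the export format, and the normalization choices are illustrative.

```python
import re

def dedup_key(record):
    """DOI-first deduplication key: fall back to a normalized
    (title, year, first author) tuple when the DOI is missing."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    # Collapse punctuation/whitespace so minor formatting differences match
    title = re.sub(r"\W+", " ", record.get("title", "")).lower().strip()
    first_author = (record.get("authors") or [""])[0].lower().strip()
    return ("tya", title, record.get("year"), first_author)

a = {"doi": "10.3390/s26061961", "title": "A Lightweight IDS", "year": 2026}
b = {"doi": "10.3390/S26061961 ", "title": "A lightweight IDS...", "year": 2026}
c = {"doi": "", "title": "Fog IDS Survey", "year": 2025, "authors": ["Smith"]}
d = {"doi": None, "title": "Fog  IDS   Survey!", "year": 2025, "authors": ["smith"]}
assert dedup_key(a) == dedup_key(b)  # same DOI after normalization
assert dedup_key(c) == dedup_key(d)  # title/year/author fallback
```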

31 pages, 9962 KB  
Article
Adaptive Spatio-Temporal Federated Learning for Traffic Flow Prediction: Framework and Aggregation Approaches Evaluation
by Basma Alsehaimi, Ohoud Alzamzami, Nahed Alowidi and Manar Ali
Appl. Sci. 2026, 16(5), 2402; https://doi.org/10.3390/app16052402 - 28 Feb 2026
Viewed by 340
Abstract
Traffic flow prediction (TFP) is a fundamental component of intelligent transportation systems (ITS) that supports traffic management, congestion mitigation, and route planning. Although recent advances in deep learning have demonstrated strong capability in modeling non-linear spatio-temporal correlations, most existing approaches rely on centralized training paradigms, which incur substantial communication costs, high computational overhead, and significant data privacy risks. Federated Learning (FL) has emerged as a promising alternative by enabling decentralized model training across distributed clients while reducing privacy risks and communication overhead. However, existing FL-based TFP frameworks often employ local models with limited capacity to capture complex spatio-temporal dependencies, and their reliance on the conventional FedAvg aggregation approach restricts robustness under heterogeneous traffic data distributions. To address these challenges, this study proposes the FedASTAM framework, which integrates FL with the Adaptive Spatio-Temporal Attention-based Multi-Model (ASTAM) to effectively model complex and non-linear spatio-temporal traffic correlations in a data-local FL setting. Within FedASTAM, the road network is divided into sub-regions using spectral clustering, allowing each sub-region to train a local ASTAM model tailored to localized and heterogeneous traffic patterns. At the central server, locally trained models are aggregated using seven aggregation schemes, including the classical FedAvg, to optimize global model updates while preserving data locality. Extensive experiments conducted on two real-world benchmark datasets, PeMS04 and PeMS08, demonstrate that FedASTAM achieved strong and stable predictive performance while keeping raw data localized throughout the federated training process. 
The results further indicate that the aggregation approaches used in the proposed FedASTAM framework generally outperform classical FedAvg under heterogeneous traffic conditions, highlighting FedASTAM as an effective approach for traffic flow prediction in complex, distributed ITS environments. Full article
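Classical FedAvg, the baseline aggregation scheme the framework above is compared against, computes the global parameters as the sample-count-weighted mean of the clients' local parameters. A minimal NumPy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: weight each client's parameters by its
    local sample count and sum.

    client_weights: one list of np.ndarray per client, with matching
                    shapes across clients (one array per model layer).
    client_sizes:   number of local training samples per client.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        acc = np.zeros_like(client_weights[0][layer], dtype=float)
        for w, n in zip(client_weights, client_sizes):
            acc += (n / total) * w[layer]
        global_weights.append(acc)
    return global_weights
```

Under heterogeneous (non-IID) client data, this plain weighted mean can drift toward large clients, which is exactly the robustness limitation that motivates the alternative aggregation schemes evaluated in the paper.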

20 pages, 1627 KB  
Article
BigchainDB for Precision Agriculture Data Sharing: A Feasibility Study
by Željko Džafić, Branko Milosavljević, Mladen Čučak and Slobodanka Pavlović
Future Internet 2026, 18(3), 121; https://doi.org/10.3390/fi18030121 - 27 Feb 2026
Abstract
Centralized agricultural data platforms raise concerns about ownership, provenance, and vendor lock-in, motivating decentralized alternatives. This study evaluates BigchainDB as a blockchain-database hybrid for owner-controlled precision agriculture data sharing. We address three research questions: (1) functional feasibility for data integrity, access control, and heterogeneous sensor integration; (2) integration patterns bridging IoT ingestion with blockchain consensus; and (3) operational trade-offs versus centralized alternatives. A proof-of-concept implementation comprising a sensor simulator, FastAPI middleware, and three-node BigchainDB cluster demonstrates end-to-end data flow with cryptographic provenance. Key contributions include the following: identification of three integration patterns (message queue buffering for high-throughput ingestion, hierarchical asset modeling, and dual-key access control); comparative analysis against five blockchain-database alternatives; and characterization of deployment complexity. Results show BigchainDB satisfies the functional requirements for data integrity and access control, while requiring increased operational overhead compared to single-node databases. The architecture is viable when multi-party governance outweighs operational simplicity, though production deployments require further scalability validation, including detailed performance benchmarking. Full article
(This article belongs to the Topic Applications of IoT in Multidisciplinary Areas)
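The first integration pattern identified above, message-queue buffering between high-throughput IoT ingestion and (slower) consensus-bound blockchain commits, can be sketched with a plain in-memory queue. The `commit_batch` callback is hypothetical and stands in for whatever submits a batch as a blockchain transaction; this is a generic sketch, not the study's middleware:

```python
import queue

def drain_batches(q, batch_size, commit_batch):
    """Drain sensor readings from a queue.Queue and hand them to the
    blockchain layer in fixed-size batches, decoupling bursty ingestion
    from slow, consensus-bound commits."""
    batch = []
    while True:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break
        if len(batch) == batch_size:
            commit_batch(batch)
            batch = []
    if batch:  # flush any partial final batch
        commit_batch(batch)
```

In a production deployment the in-memory queue would typically be replaced by a durable broker so that readings survive a middleware restart between ingestion and commit.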
