Future Internet, Volume 18, Issue 2 (February 2026) – 48 articles

Cover Story: Accurate time synchronization is a key enabler for large-scale IoT deployments, but GNSS-based solutions are costly and energy demanding. This paper investigates the high-precision timestamping of LoRaWAN beacon frames using matched-filter processing of raw IQ samples. Through extensive MATLAB simulations, the impact of sampling rate, spreading factor, SNR, and clock non-idealities (CFO, SFO, SPO) on Time-of-Arrival estimation is quantified. Results show that oversampling enables sub-microsecond precision, while Sampling Frequency Offset becomes the dominant source of bias at high spreading factors, requiring explicit compensation for reliable GNSS-free synchronization.
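As a rough illustration of the matched-filter timestamping idea the cover story describes (not the paper's MATLAB implementation), the sketch below cross-correlates received IQ samples with a known preamble template and converts the correlation peak into a Time-of-Arrival estimate. The chirp template, sampling rate, and noise level are toy assumptions.

```python
import numpy as np

def matched_filter_toa(rx_iq, template, fs):
    """Estimate Time-of-Arrival by cross-correlating received IQ
    samples with a known template; the peak lag divided by fs is ToA."""
    # np.correlate conjugates its second argument: a matched filter.
    corr = np.abs(np.correlate(rx_iq, template, mode="full"))
    peak = np.argmax(corr)
    # In 'full' mode, zero lag sits at index len(template) - 1.
    return (peak - (len(template) - 1)) / fs

# Toy example: a chirp-like template buried in noise at a known delay.
fs = 1e6                                      # 1 MHz sampling rate (assumed)
t = np.arange(512) / fs
template = np.exp(1j * np.pi * 4e8 * t**2)    # toy up-chirp
delay = 1000                                  # samples
rx = np.concatenate([np.zeros(delay, complex), template,
                     np.zeros(200, complex)])
rx += 0.1 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx)))
print(f"ToA ≈ {matched_filter_toa(rx, template, fs) * 1e6:.2f} µs")  # ~1000 µs
```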
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
26 pages, 2412 KB  
Article
Optimizing Collaborative Filtering for Accurate Rating Predictions in Very Sparse Datasets
by Sofia-Anna Lapadaki, John Nanos, Dionisis Margaris, Costas Vassilakis and Dimitris Spiliotopoulos
Future Internet 2026, 18(2), 114; https://doi.org/10.3390/fi18020114 - 23 Feb 2026
Viewed by 272
Abstract
Collaborative filtering is one of the most widely used methods for user rating prediction in recommender systems. To evaluate a collaborative filtering system, rating datasets are typically used, which comprise thousands to millions of records consisting of user–item–rating tuples. Initially, a similarity metric is used to quantify the closeness between each user and every other user in the dataset, typically based on the ratings that each pair of users has given to the same items. Subsequently, the K users having the largest similarity to the target user are used to produce rating predictions, which lead to recommendations. A particularly challenging case arises when the rating dataset is very sparse. In this scenario, it is difficult not only to find users with commonly rated items but also to determine the optimal similarity metric and suitable values for variable K. Setting a small value for K results in extremely low prediction coverage, leading to unsuccessful recommendations, while setting a very large K value increases memory requirements and prediction/recommendation generation time. Through a multiparameter experiment, this work aims to determine the optimal settings for rating predictions when very sparse datasets are used in collaborative filtering recommender systems. Full article
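A minimal sketch of the user-based collaborative filtering pipeline the abstract outlines: Pearson-style similarity on co-rated items, then a prediction from the K most similar users who rated the target item, returning None when sparsity leaves no usable neighbours. The toy rating matrix and helper names are illustrative, not the paper's code.

```python
import numpy as np

def predict_rating(R, target, item, K=10):
    """Predict R[target, item] from the K most similar users who rated
    the item. R holds ratings with np.nan for missing entries."""
    sims = []
    for u in range(R.shape[0]):
        if u == target or np.isnan(R[u, item]):
            continue
        co = ~np.isnan(R[target]) & ~np.isnan(R[u])  # co-rated items
        if co.sum() < 2:
            continue                                  # too sparse to compare
        a, b = R[target, co], R[u, co]
        denom = a.std() * b.std()
        if denom > 0:
            sims.append((np.mean((a - a.mean()) * (b - b.mean())) / denom, u))
    top = sorted(sims, reverse=True)[:K]
    if not top:
        return None                                   # no prediction coverage
    den = sum(abs(s) for s, _ in top)
    return sum(s * R[u, item] for s, u in top) / den if den else None

R = np.array([[5, 4, np.nan, 2],
              [5, 4, 4, 2],
              [1, 2, 1, 5]], dtype=float)
print(predict_rating(R, target=0, item=2, K=1))       # 4.0
```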
25 pages, 1245 KB  
Article
Machine Learning-Driven Intrusion Detection for Securing IoT-Based Wireless Sensor Networks
by Yirga Yayeh Munaye, Abebaw Demelash Gebeyehu, Li-Chia Tai, Zemenu Alem Abebe, Aeneas Bekele Workneh, Robel Berie Tarekegn, Yenework Belayneh Chekol and Getaneh Berie Tarekegn
Future Internet 2026, 18(2), 113; https://doi.org/10.3390/fi18020113 - 21 Feb 2026
Viewed by 278
Abstract
Wireless sensor networks (WSNs) have become a critical component of modern Internet of Things (IoT) infrastructures; however, their constrained resources and distributed deployment expose them to various cyber threats. In this work, we present a machine learning-driven intrusion detection framework optimized for WSN-based IoT environments. The proposed approach employs the WSN-DS benchmark dataset and integrates adaptive synthetic sampling (ADASYN) to address class imbalance, followed by a hybrid feature selection strategy combining Feature Importance Selection (FIS) and Recursive Feature Elimination (RFE) to reduce dimensionality and improve learning efficiency. An XGBoost classifier is then trained using five-fold cross-validation to ensure robust generalization. The experimental results demonstrate that the proposed framework significantly outperforms baseline methods, achieving an overall accuracy of 99.87%, with substantial gains in terms of F1-score, precision, and recall. Comparative analysis against recent WSN-DS studies confirms the effectiveness of combining imbalance correction, optimized feature selection, and ensemble learning. These findings highlight the potential of the proposed model as a lightweight and highly accurate intrusion detection solution for emerging WSN-IoT deployments. Full article
(This article belongs to the Special Issue Machine Learning and Internet of Things in Industry 4.0)
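The pipeline below is a minimal sketch of the stages named in the abstract (ADASYN balancing, importance-driven recursive feature elimination, five-fold cross-validated XGBoost), assuming the scikit-learn, imbalanced-learn, and xgboost packages and a synthetic stand-in for the WSN-DS features; it approximates the paper's hybrid FIS+RFE step with importance-ranked RFE alone.

```python
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Synthetic stand-in for the WSN-DS traffic features.
X, y = make_classification(n_samples=3000, n_features=18,
                           weights=[0.9, 0.1], random_state=0)

# 1) Correct class imbalance with adaptive synthetic sampling.
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X, y)

# 2) Reduce dimensionality via importance-ranked recursive elimination.
clf = XGBClassifier(n_estimators=200, eval_metric="logloss")
rfe = RFE(clf, n_features_to_select=10).fit(X_bal, y_bal)
X_sel = rfe.transform(X_bal)

# 3) Five-fold cross-validated training of the final classifier.
scores = cross_val_score(clf, X_sel, y_bal, cv=5, scoring="f1")
print(f"5-fold F1: {scores.mean():.3f} ± {scores.std():.3f}")
```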
43 pages, 1927 KB  
Article
A Large-Scale Empirical Study of LLM Orchestration and Ensemble Strategies for Sentiment Analysis in Recommender Systems
by Konstantinos I. Roumeliotis, Dionisis Margaris, Dimitris Spiliotopoulos and Costas Vassilakis
Future Internet 2026, 18(2), 112; https://doi.org/10.3390/fi18020112 - 20 Feb 2026
Viewed by 536
Abstract
This paper presents a comprehensive empirical evaluation comparing meta-model aggregation strategies with traditional ensemble methods and standalone models for sentiment analysis in recommender systems beyond standalone large language model (LLM) performance. We investigate whether aggregating multiple LLMs through a reasoning-based meta-model provides measurable performance advantages over individual models and standard statistical aggregation approaches in zero-shot sentiment classification. Using a balanced dataset of 5000 verified Amazon purchase reviews (1000 reviews per rating category from 1 to 5 stars, sampled via two-stage stratified sampling across five product categories), we evaluate 12 different leading pre-trained LLMs from four major providers (OpenAI, Anthropic, Google, and DeepSeek) in both standalone and meta-model configurations. Our experimental design systematically compares individual model performance against GPT-based meta-model aggregation and traditional ensemble baselines (majority voting, mean aggregation). Results show statistically significant improvements (McNemar’s test, p < 0.001): the GPT-5 meta-model achieves 71.40% accuracy (10.15 percentage point improvement over the 61.25% individual model average), while the GPT-5 mini meta-model reaches 70.32% (9.07 percentage point improvement). These observed improvements surpass traditional ensemble methods (majority voting: 62.64%; mean aggregation: 62.96%), suggesting potential value in meta-model aggregation for sentiment analysis tasks. Our analysis reveals empirical patterns including neutral sentiment classification challenges (3-star ratings show 64.83% failure rates across models), model influence hierarchies, and cost-accuracy trade-offs ($130.45 aggregation cost vs. $0.24–$43.97 for individual models per 5000 predictions). This work provides evidence-based insights into the comparative effectiveness of LLM aggregation strategies in recommender systems, demonstrating that meta-model aggregation with natural language reasoning capabilities achieves measurable performance gains beyond statistical aggregation alone. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
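For context, the two traditional ensemble baselines the paper compares against are simple to state in code; the sketch below implements majority voting and mean aggregation over per-model star predictions (the meta-model, by contrast, would prompt a reasoning LLM with all twelve predictions). The tie-breaking rule and toy predictions are assumptions.

```python
from collections import Counter

def majority_vote(stars):
    """Most common label across models; ties resolved to the middle-most."""
    counts = Counter(stars).most_common()
    best = counts[0][1]
    tied = sorted(s for s, c in counts if c == best)
    return tied[len(tied) // 2]

def mean_aggregate(stars):
    """Round the mean star rating to the nearest class in 1-5."""
    return min(5, max(1, round(sum(stars) / len(stars))))

# Twelve hypothetical per-model star predictions for one review:
preds = [3, 4, 4, 5, 3, 4, 2, 4, 3, 4, 5, 4]
print(majority_vote(preds), mean_aggregate(preds))  # 4 4
```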
21 pages, 1714 KB  
Article
Lightweight Authentication and Dynamic Key Generation for IMU-Based Canine Motion Recognition IoT Systems
by Guanyu Chen, Hiroki Watanabe, Kohei Matsumura and Yoshinari Takegawa
Future Internet 2026, 18(2), 111; https://doi.org/10.3390/fi18020111 - 20 Feb 2026
Viewed by 218
Abstract
The integration of wearable inertial measurement units (IMU) in animal welfare Internet of Things (IoT) systems has become crucial for monitoring animal behaviors and enhancing welfare management. However, the vulnerability of IoT devices to network and hardware attacks poses significant risks, potentially compromising data integrity and misleading caregivers, negatively impacting animal welfare. Additionally, current animal monitoring solutions often rely on intrusive tagging methods, such as Radio Frequency Identification (RFID) or ear tagging, which may cause unnecessary stress and discomfort to animals. In this study, we propose a lightweight integrity and provenance-oriented security stack that complements standard transport security, specifically tailored to IMU-based animal motion IoT systems. Our system utilizes a 1D-convolutional neural network (CNN) model, achieving 88% accuracy for precise motion recognition, alongside a lightweight behavioral fingerprinting CNN model attaining 83% accuracy, serving as an auxiliary consistency signal to support collar–animal association and reduce mis-attribution risks. We introduce a dynamically generated pre-shared key (PSK) mechanism based on SHA-256 hashes derived from motion features and timestamps, further securing communication channels via application-layer Hash-based Message Authentication Code (HMAC) combined with Message Queuing Telemetry Transport (MQTT)/Transport Layer Security (TLS) protocols. In our design, MQTT/TLS provides primary device authentication and channel protection, while behavioral fingerprinting and per-window dynamic–HMAC provide auxiliary provenance cues and tamper-evident integrity at the application layer. Experimental validation is conducted primarily via offline, dataset-driven experiments on a public canine IMU dataset; system-level overhead and sensor-to-edge latency are measured on a Raspberry Pi-based testbed by replaying windows through the MQTT/TLS pipeline. Overall, this work integrates motion recognition, behavioral fingerprinting, and dynamic key management into a cohesive, lightweight telemetry integrity/provenance stack and provides a foundation for future extensions to multi-species adaptive scenarios and federated learning applications. Full article
(This article belongs to the Special Issue Secure Integration of IoT and Cloud Computing)
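A minimal, standard-library sketch of the key-management idea the abstract describes: derive a per-window pre-shared key from a SHA-256 hash of motion features plus a timestamp, then attach an application-layer HMAC to each payload before it enters the MQTT/TLS pipeline. The feature quantization and message format are assumptions, not the authors' exact scheme.

```python
import hashlib, hmac, json, time

def derive_psk(motion_features, window_ts):
    """Per-window pre-shared key: SHA-256 over quantized motion
    features and the window timestamp (both sides can recompute it)."""
    material = json.dumps({"f": [round(v, 2) for v in motion_features],
                           "t": int(window_ts)}, sort_keys=True)
    return hashlib.sha256(material.encode()).digest()

def tag_payload(psk, payload: bytes) -> str:
    """Application-layer HMAC attached before the MQTT/TLS publish."""
    return hmac.new(psk, payload, hashlib.sha256).hexdigest()

features = [0.12, -0.98, 0.05]      # e.g. mean IMU axes for the window
psk = derive_psk(features, time.time())
payload = b'{"motion": "walking", "conf": 0.91}'
tag = tag_payload(psk, payload)
# The receiver recomputes the PSK from the same features/timestamp:
assert hmac.compare_digest(tag, tag_payload(psk, payload))
```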
33 pages, 5295 KB  
Article
Payment Rails in Smart Contract as a Service (SCaaS) Solutions from BPMN Models
by Christian Gang Liu, Peter Bodorik and Dawn Jutla
Future Internet 2026, 18(2), 110; https://doi.org/10.3390/fi18020110 - 19 Feb 2026
Viewed by 326
Abstract
The adoption of blockchain-based smart contracts for the trading of goods and services promises greater transparency, automation, and trustlessness, but also raises challenges related to payment integration and modularity. While business analysts (BAs) can express business logic and control flow using BPMN and decision rules using DMN, payment tasks that involve concrete transfers (on-chain, off-chain, cross-chain, or hybrid) require careful implementation by developers due to platform-specific constraints and semantic richness. To address this separation of concerns, we introduce a methodology within the context of the smart contract-as-a-service (SCaaS) approach that supports (1) identifying and mapping generic payment tasks in BPMN to pre-deployed payment smart contracts, (2) augmenting BPMN models with matching payment fragments from a pattern repository, and (3) automatically transforming the augmented models into smart contracts that invoke the appropriate payment services. Our approach builds on prior work in automated BPMN-to-smart contract transformation using Discrete Event–Hierarchical State Machine (DE-HSM) multi-modal modeling to capture process semantics and nested transactions, while enabling payment service reuse, extensibility, and the separation of concerns. We illustrate this methodology via representative use cases spanning conventional, DeFi, and cross-chain payments, and discuss the implications for modular contract deployment and maintainability. Full article
31 pages, 2986 KB  
Systematic Review
A Systematic Review of Machine-Learning-Based Detection of DDoS Attacks in Software-Defined Networks
by Surendren Ganeshan and R Kanesaraj Ramasamy
Future Internet 2026, 18(2), 109; https://doi.org/10.3390/fi18020109 - 19 Feb 2026
Viewed by 383
Abstract
Software-Defined Networking (SDN) has emerged as a fundamental architecture for future Internet systems by enabling centralized control, programmability, and fine-grained traffic management. However, the logical centralization of the SDN control plane also introduces critical vulnerabilities, particularly to Distributed Denial-of-Service (DDoS) attacks that can severely disrupt network availability and performance. To address these challenges, machine-learning (ML) techniques have been increasingly adopted to enable intelligent, adaptive, and data-driven DDoS detection mechanisms within SDN environments. This study presents a PRISMA-guided systematic literature review of recent ML-based approaches for DDoS detection in SDN-based networks. A comprehensive search of IEEE Xplore, ACM Digital Library, ScienceDirect, and Google Scholar identified 38 primary studies published between 2021 and 2025. The selected studies were systematically analyzed to examine learning paradigms, experimental environments, evaluation metrics, datasets, and emerging architectural trends. The synthesis reveals that while single machine-learning classifiers remain dominant in the literature, hybrid and ensemble-based approaches are increasingly adopted to improve detection robustness under dynamic and high-volume traffic conditions. Experimental evaluations are predominantly conducted using SDN emulation platforms such as Mininet integrated with controllers, including Ryu and OpenDaylight, with performance commonly measured using accuracy, precision, recall, and F1 score, alongside emerging system-level metrics such as detection latency and controller resource utilization. Public datasets, including CICIDS2017, CICDDoS2019, and InSDN, are widely used, although a significant portion of studies rely on custom SDN-generated datasets to capture control-plane-specific behaviors. Despite notable advances in detection accuracy, several challenges persist, including limited generalization to low-rate and unknown attacks, dependency on synthetic traffic, and insufficient validation under real-time operational conditions. Based on the synthesized findings, this review highlights key research directions toward intelligent, scalable, and resilient DDoS defense mechanisms for future Internet architectures, emphasizing adaptive learning, lightweight deployment, and integration with programmable networking infrastructures. Full article
20 pages, 7816 KB  
Article
Novel Video Understanding Approach for Embodied Learning of Robotics Technology
by Hongbo Zhang, Benjamin Li and Gregory Rushton
Future Internet 2026, 18(2), 108; https://doi.org/10.3390/fi18020108 - 19 Feb 2026
Viewed by 321
Abstract
Embodied learning uses the physical embodiment of hands-on experiences, including gestures, body language, and gaze, during instruction to facilitate learning outcomes in robotics technology. Understanding the embodiment process is, however, challenging. In this research, large language model-based video understanding was used to study the effectiveness of embodied learning of robotics technology. Embodied and conventional videos were randomly selected, and user comments were correlated with the transcripts and summaries of the videos. Results showed that more user comments correlated with video content, in terms of both user sentiment and logical reasoning, for the embodied learning-centered robotics instructional videos than for the conventional learning-centered videos. Sentiment analysis of the comments showed that embodied learning was effective in achieving engagement in learning robotics, yielding fewer negative comments than the conventional learning videos. The embodied learning-centered videos also helped enhance students’ logical reasoning. Overall, this user study shows that embodied learning engages students, elicits more positive sentiment toward the videos, and enhances students’ logical reasoning when learning robotics technology. Full article
21 pages, 2079 KB  
Article
Assuring Brokerage Quality in the Cloud–Edge Continuum
by Evangelos Barmpas, Simeon Veloudis, Yiannis Verginadis and Iraklis Paraskakis
Future Internet 2026, 18(2), 107; https://doi.org/10.3390/fi18020107 - 19 Feb 2026
Viewed by 254
Abstract
The Cloud–Edge Continuum (CEC) has emerged as a paradigm for distributing computational resources across cloud, fog, and edge layers, enabling latency-sensitive applications to operate efficiently. However, ensuring the quality of service (QoS) brokerage in such environments remains a challenge. Existing frameworks primarily focus on resource management techniques such as allocation, scheduling, and offloading but fail to address the quality assurance of the brokerage process itself. This paper introduces SLA governance as a means of ensuring the quality of service brokerage: Service Level Agreements (SLAs) are validated, through automated reasoning, against meta-quality constraints, i.e., high-level policies that define permissible QoS conditions. We propose an ontology-driven approach that leverages the ODRL ontology for representing SLAs and capturing meta-quality constraints. Our method also enables introspective reasoning that ensures internal SLA consistency. Additionally, we integrate SLA governance with a real-time monitoring framework, the Event Management System (EMS), to continuously track workload performance and trigger SLA adaptation when necessary. This integration ensures that SLA-based brokerage decisions remain dynamic and context-aware. Full article
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)
16 pages, 1578 KB  
Article
FedAWR: Aggregation Optimization in Federated Learning with Adaptive Weights and Learning Rates
by Tong Yao, Jianqi Li and Jianhua Liu
Future Internet 2026, 18(2), 106; https://doi.org/10.3390/fi18020106 - 18 Feb 2026
Viewed by 175
Abstract
Federated Learning (FL) enables collaborative model training without sharing raw data, offering a promising solution for privacy-sensitive applications. However, in real-world deployments, significant disparities in client computational capabilities lead to imbalanced model updates, resulting in slow convergence and degraded model generalization. To address this challenge, this paper proposes a novel federated aggregation optimization method, FedAWR, which features adaptive adjustment of learning rates and weights. Specifically, during the global aggregation phase, our method dynamically adjusts each client’s aggregation weight based on its computational capability and configures an appropriate learning rate to balance training progress. Experiments on multi-classification tasks using the Steel Rail Defect and CIFAR-10 datasets demonstrate that the proposed method exhibits significant advantages over mainstream federated algorithms in both convergence efficiency and model generalization performance, thereby validating its effectiveness and superiority. Full article
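The abstract does not spell out FedAWR's exact weighting and learning-rate rules, so the sketch below shows one plausible instantiation of the stated idea: aggregation weights proportional to client computational capability, and per-client learning rates scaled so slower clients keep pace. Treat both formulas as illustrative assumptions.

```python
import numpy as np

def aggregate(updates, capabilities):
    """FedAvg-style aggregation where each client's weight is
    proportional to its normalized computational capability."""
    w = np.asarray(capabilities, float)
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

def client_lr(base_lr, capability, max_capability):
    """Slower clients get a larger step to balance training progress."""
    return base_lr * max_capability / capability

# Three clients with different capabilities and toy 1-D "models":
updates = [np.array([1.0, 2.0]), np.array([1.5, 1.8]), np.array([0.8, 2.4])]
caps = [4.0, 2.0, 1.0]
print(aggregate(updates, caps))           # capability-weighted global model
print([client_lr(0.01, c, max(caps)) for c in caps])
```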
35 pages, 43326 KB  
Article
A Hybrid LoRa/ZigBee IoT Mesh Architecture for Real-Time Performance Monitoring in Orienteering Sport Competitions: A Measurement Campaign on Different Environments
by Romeo Giuliano, Stefano Alessandro Ignazio Mocci De Martis, Antonello Tomeo, Francesco Terlizzi, Marco Gerardi, Francesca Fallucchi, Lorenzo Felli and Nicola Dall’Ora
Future Internet 2026, 18(2), 105; https://doi.org/10.3390/fi18020105 - 16 Feb 2026
Viewed by 515
Abstract
The sport of orienteering requires athletes to reach specific points marked on a map (called “punching stations”) in the shortest possible time. Currently, the recording of athletes’ passages through the stations is performed offline. In addition to delays in generating intermediate and final rankings, this approach often leads to detection errors and potential cheating related to the lack of authentication of an athlete’s actual passage at a given station. This paper aims to define and design a system enabling three main functionalities: 1. real-time monitoring of athletes’ trajectories through a sensor network connected to control stations; 2. multi-modal authentication of athletes at each station; and 3. immutable certification of each athlete’s passage through blockchain-based recording. System performance is evaluated in terms of wireless network coverage and data collection efficiency across three representative environments: urban, rural, and forested areas. Results are obtained through a measurement campaign for two dedicated wireless technologies: ZigBee for the local mesh network and LoRa for long-range links that connect local mesh networks to the cloud over the Internet, which is then accessed by the race organizers. Furthermore, two supporting subsystems are described, addressing athlete authentication and data integrity assurance, as well as blockchain recording within the overall event management framework. Results are reported in terms of coverage distances for both technologies, which proved highly effective across varied terrains. Field tests demonstrated significant communication capabilities, achieving distances of up to 1800 m in open spaces. Even in challenging, dense wooded environments, the system maintained reliable coverage, reaching transmission distances of up to 600 m. Local ZigBee links between punching stations achieved ranges between 70 and 150 m in forested areas. These findings validate the use of a wireless multi-hop network designed to minimize packet loss and ensure reliable data delivery in competitive scenarios. The feasibility is also investigated in terms of WSN performance, delay analysis, and power consumption. Full article
28 pages, 2555 KB  
Article
Deep Learning-Based Video Watermarking: A Robust Framework for Spatial–Temporal Embedding and Retrieval
by Antonio Cedillo-Hernandez, Lydia Velazquez-Garcia, Francisco Javier Garcia-Ugalde and Manuel Cedillo-Hernandez
Future Internet 2026, 18(2), 104; https://doi.org/10.3390/fi18020104 - 16 Feb 2026
Viewed by 316
Abstract
This paper introduces a deep learning-based framework for video watermarking that achieves robust, imperceptible, and fast embedding under a wide range of visual and temporal conditions. The proposed method is organized into seven modules that collaboratively perform frame encoding, semantic region analysis, block selection, watermark transformation, and spatiotemporal injection, followed by decoding and multi-objective optimization. A key component of the framework is its ability to learn a visual importance map, which guides a saliency-based block selection strategy. This allows the model to embed the watermark in perceptually redundant regions while minimizing distortion. To enhance resilience, the watermark is distributed across multiple frames, leveraging temporal redundancy to improve recovery under frame loss, insertion, and reordering. Experimental evaluations conducted on a large-scale video dataset demonstrate that the proposed method achieves high fidelity, while preserving low decoding error rates under compression, noise, and temporal distortions. The proposed method processes 38 video frames per second on a standard GPU. Additional ablation studies confirm the contribution of each module to the system’s robustness. This framework offers a promising solution for watermarking in streaming, surveillance, and content verification applications. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
21 pages, 722 KB  
Article
Differential Privacy Data Publication Based on Scoring Function
by Ke Yuan, Quan Zhang, Yinghao Lin, Yuye Wang and Chunfu Jia
Future Internet 2026, 18(2), 103; https://doi.org/10.3390/fi18020103 - 15 Feb 2026
Viewed by 291
Abstract
Existing Bayesian network-based differential privacy algorithms predominantly employ uniform privacy budget allocation. However, since attribute nodes carry heterogeneous information loads, the traditional privacy budget allocation strategy may result in insufficient noise being added to important attributes, while excessive noise is added to less important attributes. To optimize privacy budget utilization, we propose SA-PrivBayes, a scoring-function-driven allocation method. To enhance Bayesian network precision, we introduce a threshold mechanism during network construction that pre-filters low-scoring attribute pairs before applying the exponential mechanism for selection. Subsequently, during parameter learning, privacy budgets are dynamically allocated to low-dimensional attribute sets based on node-specific scoring functions. Under identical privacy budgets, our algorithm demonstrates stronger data protection capabilities compared to the PrivBayes algorithm. Experimental results indicate that, compared to traditional differential privacy methods based on Bayesian networks under identical privacy budgets, our algorithm better meets the privacy protection requirements of high-dimensional data while maintaining higher data utility. Full article
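A small numpy sketch of the two mechanisms the abstract combines: a threshold that pre-filters low-scoring attribute pairs before an exponential-mechanism selection, and a score-proportional (rather than uniform) split of the privacy budget across nodes. The scores, threshold, and attribute names are hypothetical.

```python
import numpy as np

def exponential_mechanism(candidates, scores, eps, sensitivity=1.0):
    """Pick a candidate with probability ∝ exp(eps * score / (2 * Δ))."""
    s = np.asarray(scores, float)
    p = np.exp(eps * (s - s.max()) / (2 * sensitivity))  # stable softmax
    p /= p.sum()
    return candidates[np.random.choice(len(candidates), p=p)]

def allocate_budget(total_eps, node_scores):
    """Score-proportional privacy-budget split across attribute nodes,
    instead of the uniform allocation used by PrivBayes."""
    s = np.asarray(node_scores, float)
    return total_eps * s / s.sum()

pairs = [("age", "income"), ("age", "zip"), ("sex", "income")]
scores = [0.9, 0.2, 0.6]
threshold = 0.3                        # pre-filter low-scoring pairs
keep = [i for i, sc in enumerate(scores) if sc >= threshold]
chosen = exponential_mechanism([pairs[i] for i in keep],
                               [scores[i] for i in keep], eps=0.5)
print(chosen, allocate_budget(1.0, [0.5, 0.3, 0.2]))
```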
23 pages, 1202 KB  
Article
Image-Based Malware Classification Using DCGAN-Augmented Data and a CNN–Transformer Hybrid Model
by Manya Dhingra, Achin Jain, Niharika Thakur, Anurag Choubey, Massimo Donelli, Arun Kumar Dubey and Arvind Panwar
Future Internet 2026, 18(2), 102; https://doi.org/10.3390/fi18020102 - 14 Feb 2026
Viewed by 388
Abstract
With the rapid growth and diversification of malware, accurate multi-class detection remains challenging due to severe class imbalance and limited labeled data. This work presents an image-based malware classification framework that converts executable binaries into 64×64 grayscale images, employs class-wise DCGAN augmentation to mitigate severe imbalance (initial imbalance ratio > 12 across 31 families, N ≈ 9300), and trains a hybrid CNN–Transformer model that captures both local texture features and long-range contextual dependencies. The DCGAN generator produces high-fidelity synthetic samples, evaluated using an Inception Score (IS) of 3.43, a Fréchet Inception Distance (FID) of 10.99, and a Kernel Inception Distance (KID) of 0.0022, and is used to equalize class counts before classifier training. On the blended dataset the proposed GAN-balanced CNN–Transformer achieves an overall accuracy of 95% and a macro-averaged F1-score of 0.95; the hybrid model also attains a validation accuracy of ≈ 94% while substantially improving minority-class recognition. Compared to CNN-only and Transformer-only baselines, the hybrid approach yields more stable convergence, reduced overfitting, and stronger per-class performance, while remaining feasible for practical deployment. These results demonstrate that DCGAN-driven balancing combined with CNN–Transformer feature fusion is an effective, scalable solution for robust malware family classification. Full article
(This article belongs to the Section Cybersecurity)
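The byte-to-image step the abstract relies on is easy to make concrete. A minimal sketch, assuming a fixed 64×64 fold with zero-padding (conventions vary across papers):

```python
import numpy as np

def binary_to_image(path, side=64):
    """Read raw bytes and fold them into a side×side grayscale image,
    zero-padding (or truncating) to side*side bytes."""
    data = np.fromfile(path, dtype=np.uint8)
    n = side * side
    if data.size < n:
        data = np.pad(data, (0, n - data.size))
    return data[:n].reshape(side, side)   # values 0-255, one byte per pixel

# img = binary_to_image("sample.exe")
# import matplotlib.pyplot as plt; plt.imshow(img, cmap="gray"); plt.show()
```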
26 pages, 1912 KB  
Article
A Temporally Dynamic Feature-Extraction Framework for Phishing Detection with LIME and SHAP Explanations
by Chris Mayo, Michael Tchuindjang, Sarfraz Brohi and Nikolaos Ersotelos
Future Internet 2026, 18(2), 101; https://doi.org/10.3390/fi18020101 - 14 Feb 2026
Viewed by 382
Abstract
Phishing remains one of the most pervasive social engineering threats, exploiting human vulnerabilities and continuously evolving to bypass static detection mechanisms. Existing machine learning models achieve high accuracy but often act as opaque systems that lack robustness to evolving tactics and explainability, limiting trust and real-world deployment. In this research, we propose a dynamic Explainable AI (XAI) approach for phishing detection that integrates temporally aware feature extraction with dual interpretability through LIME and SHAP applied to the resulting window-level features. The novelty of this research lies in a temporally dynamic feature framework that simulates a plausible email reading progression using a heuristic temporal model and employs a sliding window aggregation method to capture behavioural and temporal patterns within email content. Using an aggregated dataset of 82,500 phishing and legitimate emails, dynamic features were extracted and used to train four classifiers: Random Forest, XGBoost, Multi-Layer Perceptron, and Logistic Regression. Ensemble models demonstrated strong performance, with XGBoost achieving 94% accuracy and Random Forest 93%. This research addresses an important gap by combining dynamically constructed temporal features with transparent explanations, achieving high detection performance while preserving interpretability. These findings demonstrate that temporally structured features combined with explainable learning can enhance the trustworthiness and practical deployability of phishing detection systems without incurring excessive computational overhead. Full article
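A toy sketch of the sliding-window aggregation idea: simulate a reading progression over the email body and emit one feature vector per window. The cue lexicons, window length, and stride here are hypothetical stand-ins for the paper's feature set.

```python
def sliding_window_features(tokens, window=50, stride=25):
    """Aggregate simple behavioural cues over a simulated reading
    progression: one feature vector per window of the email body."""
    cues = {"urgency": {"urgent", "immediately", "now", "expires"},
            "action":  {"click", "verify", "login", "confirm"}}
    feats = []
    for start in range(0, max(1, len(tokens) - window + 1), stride):
        w = tokens[start:start + window]
        feats.append({
            "pos": start / max(1, len(tokens)),   # position in the reading
            "urgency": sum(t in cues["urgency"] for t in w) / len(w),
            "action":  sum(t in cues["action"] for t in w) / len(w),
        })
    return feats

email = ("your account expires immediately click the link below to "
         "verify your login now").split()
for f in sliding_window_features(email, window=8, stride=4):
    print(f)
```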
25 pages, 2045 KB  
Article
A Comparative Analysis of Self-Aware Reinforcement Learning Models for Real-Time Intrusion Detection in Fog Networks
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Future Internet 2026, 18(2), 100; https://doi.org/10.3390/fi18020100 - 14 Feb 2026
Viewed by 258
Abstract
Fog computing extends cloud services to the network edge, enabling low-latency processing for Internet of Things (IoT) applications. However, this distributed approach is vulnerable to a wide range of attacks, necessitating advanced intrusion detection systems (IDSs) that operate under resource constraints. This study proposes integrating self-awareness (online learning and concept drift adaptation) into a lightweight reinforcement learning (RL)-based IDS for fog networks and quantitatively comparing it with non-RL static thresholds and bandit-based approaches in real time. Two novel self-aware RL models, the Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (HATS-RL) model and its federated variant, Federated Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (F-HATS-RL), are proposed for real-time intrusion detection in a fog network. These self-aware RL policies integrate online uncertainty estimation and concept-drift detection to adapt to evolving attacks. The RL models were benchmarked against a static threshold (ST) model and a widely adopted linear bandit (Linear Upper Confidence Bound, LinUCB). A realistic fog network simulator with heterogeneous nodes and streaming traffic, including multi-type attack bursts and gradual concept drift, was established. The models’ detection performance was compared using metrics including latency, energy consumption, detection accuracy, the area under the precision–recall curve (AUPR), and the area under the receiver operating characteristic curve (AUROC). Notably, the federated self-aware agent (F-HATS-RL) achieved the best AUROC (0.933) and AUPR (0.857), with a latency of 0.27 ms and the lowest energy consumption of 0.0137 mJ, indicating its ability to detect intrusions in fog networks with minimal energy. The findings suggest that self-aware RL agents can detect dynamically shifting attack patterns in traffic and adapt accordingly, resulting in more stable long-term performance. By contrast, a static model’s accuracy degrades under drift. Full article
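The HATS-RL internals are not given in the abstract, so the sketch below shows the generic ingredients it names: Beta-posterior Thompson sampling over candidate detection thresholds, plus a concept-drift reset that shrinks the posteriors toward the prior. The arm probabilities and decay rule are assumptions.

```python
import numpy as np

class ThompsonDetector:
    """Per-arm Beta posteriors over 'alert was correct'; each arm is a
    candidate detection threshold. Drift resets keep the agent adaptive."""
    def __init__(self, thresholds):
        self.thresholds = thresholds
        self.a = np.ones(len(thresholds))   # successes + 1
        self.b = np.ones(len(thresholds))   # failures + 1

    def choose(self):
        return int(np.argmax(np.random.beta(self.a, self.b)))

    def update(self, arm, correct):
        self.a[arm] += correct
        self.b[arm] += 1 - correct

    def drift_reset(self, decay=0.5):
        """On detected concept drift, shrink counts toward the prior."""
        self.a = 1 + decay * (self.a - 1)
        self.b = 1 + decay * (self.b - 1)

agent = ThompsonDetector(thresholds=[0.5, 0.7, 0.9])
rng = np.random.default_rng(0)
for _ in range(500):
    arm = agent.choose()
    # Simulated feedback: the 0.7 threshold is best before drift.
    agent.update(arm, rng.random() < [0.6, 0.8, 0.5][arm])
agent.drift_reset()                        # attacker behaviour changed
print(agent.thresholds[agent.choose()])
```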
29 pages, 2986 KB  
Article
Analysis of GitHub Advanced Security: Security Integration in GitHub and Azure DevOps
by Vladislav Manolov, Daniela Gotseva and Nikolay Hinov
Future Internet 2026, 18(2), 99; https://doi.org/10.3390/fi18020099 - 13 Feb 2026
Viewed by 426
Abstract
This paper examines the integration and operationalization of GitHub Advanced Security (GHAS) across GitHub and Azure DevOps to enhance DevSecOps practices in cloud-native software delivery. As organizations increasingly adopt continuous integration and continuous deployment (CI/CD) pipelines, embedding security into every stage of development has become essential to protect the software supply chain. This study explores how GHAS can unify security governance across these two major Microsoft platforms by enabling consistent code scanning, secret detection, and dependency analysis within developer workflows. The methodology involves evaluating GHAS features and configurations in GitHub and Azure DevOps, analyzing feature parity, deployment models, and integration patterns, and identifying architectural approaches that support enterprise scalability. Implementation details include CodeQL customization, secret-scanning configurations, and dependency-management workflows, supported by real-world pipeline examples. Findings indicate that both GitHub and Azure DevOps benefit from GHAS through strong native integration, actionable security feedback, and automated protection embedded directly into development workflows. GHAS enhances vulnerability visibility, supports compliance enforcement, and strengthens collaboration between security and engineering teams. The study concludes that effective DevSecOps maturity requires standardized security automation and governance across platforms. GHAS provides a unified, scalable solution that aligns developer productivity with enterprise-grade security objectives in multi-environmental ecosystems. Full article
18 pages, 549 KB  
Review
Beyond Centralized AI: Blockchain-Enabled Decentralized Learning
by Daren Wang, Tengfei Ma, Juntao Zhu and Haihan Duan
Future Internet 2026, 18(2), 98; https://doi.org/10.3390/fi18020098 - 13 Feb 2026
Viewed by 473
Abstract
The dominance of centralized artificial intelligence architectures raises significant concerns regarding privacy, data ownership, and control. These limitations have motivated the development of decentralized learning paradigms that aim to remove reliance on a central authority during model training. While federated learning represents an intermediate step by allowing distributed training without raw data exchange, it still depends on a centralized server, which can lead to single-point vulnerabilities. Beyond this, fully decentralized learning faces challenges including security vulnerabilities, an absence of governance, and a lack of incentive alignment. Recent advances in blockchain technology offer a promising foundation for addressing these issues. This paper provides a systematic analysis of blockchain’s mechanism-level roles in security, consensus, smart contracts, and incentives to support decentralized learning. By reviewing state-of-the-art approaches, this paper suggests that appropriately designed blockchain architectures have the potential to enable practical, secure, and incentive-compatible decentralized learning as technological capabilities continue to evolve. Full article
20 pages, 7833 KB  
Review
Interference-Resilient Concurrent Sensing in Dense Environments: A Review of OFDM and OTFS Waveforms for JRC
by Mehmet Yazgan, Buldan Karahan, Hüseyin Arslan and Stavros Vakalis
Future Internet 2026, 18(2), 97; https://doi.org/10.3390/fi18020097 - 13 Feb 2026
Viewed by 309
Abstract
This paper presents a unified perspective on Orthogonal Frequency-Division Multiplexing (OFDM)-based joint radar–communication (JRC) sensing, focusing on the efficient reuse of time and frequency resources in range–Doppler estimation and imaging scenarios. By leveraging OFDM’s inherent subcarrier orthogonality, noise-like temporal properties, and minor carrier frequency offsets, these systems can support concurrent transmissions over the same spectral and temporal resources while maintaining interference resilience. Experimental and simulation-based insights demonstrate the feasibility of simultaneous sensing across users and antennas, even in dense Radio Frequency (RF) environments. We analyze trade-offs, implementation considerations, and system-level implications to provide a consolidated foundation for designing future OFDM-based JRC systems. The feasibility of an Orthogonal Time Frequency Space (OTFS) waveform for the proposed method is also investigated. The review highlights the potential of such architectures in spectrum and time-congested applications such as Vehicle-to-Everything (V2X), indoor localization, Internet of Things (IoT), and beyond fifth-generation (5G) networks. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2024–2025)
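For readers new to OFDM-based sensing, the textbook range–Doppler processing chain that underlies such JRC systems fits in a few lines: divide received symbols by the transmitted ones, then take an IFFT across subcarriers (range) and an FFT across OFDM symbols (Doppler). This is the classical symbol-division estimator, not a specific method from the review; the grid sizes and target bins below are toy values.

```python
import numpy as np

def range_doppler(tx_syms, rx_syms):
    """Classic OFDM radar processing: divide out the transmitted
    symbols, then IFFT across subcarriers (range axis) and FFT
    across OFDM symbols (Doppler axis)."""
    F = rx_syms / tx_syms                       # channel per (subcarrier, symbol)
    rng_prof = np.fft.ifft(F, axis=0)           # range bins
    return np.fft.fftshift(np.fft.fft(rng_prof, axis=1), axes=1)

N, M = 64, 32                                   # subcarriers × OFDM symbols
tx = np.exp(1j * np.pi / 2 * np.random.randint(0, 4, (N, M)))  # QPSK grid
delay_bin, doppler_bin = 5, 3                   # simulated point target
n = np.arange(N)[:, None]; m = np.arange(M)[None, :]
rx = tx * np.exp(-2j * np.pi * n * delay_bin / N) \
        * np.exp(2j * np.pi * m * doppler_bin / M)
rd = np.abs(range_doppler(tx, rx))
print(np.unravel_index(rd.argmax(), rd.shape))  # (5, M//2 + 3)
```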
27 pages, 3227 KB  
Review
A Review of Research on the Applications of Large Models in Each Functional Module of the Entire Rehabilitation Process
by Tingting Bai, Kaiwen Jiang, Yixuan Yu, Shuyan Qie, Congxiao Wang, Boyuan Wang and Wenli Zhang
Future Internet 2026, 18(2), 95; https://doi.org/10.3390/fi18020095 - 12 Feb 2026
Viewed by 367
Abstract
Population ageing and chronic disease are increasing demand for rehabilitation, while resources remain limited. This review does not report an implemented end-to-end system; instead, it proposes a modular workflow framework for applying large AI foundation models across rehabilitation. Organised into four stages—assessment, prescription, execution, and monitoring—we summarise recent evidence and highlight techniques most suitable at each stage. In assessment, multimodal models can enable more continuous and objective functional measurement from heterogeneous sensor and imaging data. In prescription, large language models can support evidence-informed, personalised plan formulation by synthesising guidelines and patient context. In execution, vision–language–sensor models can provide real-time feedback for telerehabilitation and adherence support. In monitoring, longitudinal and cross-setting data integration can facilitate risk prediction and early warning for safety and long-term management. We also discuss practical adaptation options (e.g., parameter-efficient fine-tuning) and propose a clinimetric-oriented evaluation framework to assess validity, reliability, and generalisability. By mapping AI capabilities to concrete workflow tasks, the framework provides a theoretical foundation and roadmap for reproducible research and future translation toward a universal rehabilitation model. Full article
(This article belongs to the Special Issue Artificial Intelligence-Enabled Smart Healthcare)
22 pages, 1636 KB  
Article
Evaluating Reconstruction-Based and Proximity-Based Methods: A Four-Way Comparison (AE, LSTM-AE, OCSVM, IF) in SCADA Anomaly Detection Under Inverted Imbalance
by Lukasz Pawlik
Future Internet 2026, 18(2), 96; https://doi.org/10.3390/fi18020096 - 11 Feb 2026
Viewed by 264
Abstract
This article investigates and compares four unsupervised anomaly detection algorithms: the Autoencoder (AE), LSTM-Autoencoder (LSTM-AE), One-Class SVM (OCSVM), and the Isolation Forest (IF). The analysis focuses on SCADA telemetry data from an urban wind turbine, characterized by a unique case of extreme inverted class imbalance, where operational anomalies constitute 75.7% of the records. The AE model, trained exclusively on the rare normal state, achieved the best overall performance (AUC 0.9667), maintaining balanced and high classification effectiveness for both classes (Recall Normal ≈ 95%, Recall Anomaly ≈ 88.5%; Macro F1-Score 0.8962). In contrast, the IF model, despite a strong discriminative ability (AUC 0.8616), exhibited a complete inability to correctly recognize the normal class (Recall Normal 0.00) when using the optimal F1-score threshold. This performance degradation was a direct consequence of the necessity to apply a classification threshold imposed by the statistical fraction of the anomaly-dominated dataset. These results empirically demonstrate the methodological superiority of the reconstruction-based approach (AE) in constructing a stable decision boundary independent of the statistically dominant class. The study provides quantitative guidelines for the selection and calibration of algorithms in PHM diagnostic systems where states deviating from the operational norm constitute the majority. Full article
(This article belongs to the Special Issue Machine Learning and Internet of Things in Industry 4.0)
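The paper's central contrast, a decision boundary calibrated on the rare normal class versus one tuned on the anomaly-dominated mixture, can be illustrated on synthetic reconstruction errors; the distributions below are invented solely to mimic the inverted imbalance and do not reproduce the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic reconstruction errors: anomalies dominate (inverted imbalance).
err_normal  = rng.normal(0.05, 0.02, 240)   # ~24% normal records
err_anomaly = rng.normal(0.20, 0.08, 760)   # ~76% anomalies

# AE-style threshold: percentile of errors on the *normal class only*,
# independent of the statistically dominant anomaly class.
thr = np.percentile(err_normal, 95)

recall_normal  = (err_normal  <= thr).mean()   # normals kept below threshold
recall_anomaly = (err_anomaly >  thr).mean()   # anomalies flagged above it
print(f"threshold={thr:.3f}  recall_normal={recall_normal:.2f}  "
      f"recall_anomaly={recall_anomaly:.2f}")
```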
20 pages, 2816 KB  
Article
Benchmarking Large Language Models for Embedded Systems Programming in Microcontroller-Driven IoT Applications
by Marek Babiuch and Pavel Smutný
Future Internet 2026, 18(2), 94; https://doi.org/10.3390/fi18020094 - 11 Feb 2026
Viewed by 461
Abstract
Large language models (LLMs) have shown strong potential for automated code generation in software development, yet their effectiveness in embedded systems programming—requiring understanding of software logic and hardware constraints—has not been well studied. Existing evaluation frameworks do not comprehensively cover practical microcontroller development scenarios in real-world Internet of Things (IoT) projects. This study systematically evaluates 27 state-of-the-art LLMs across eight embedded systems scenarios of increasing complexity, from basic sensor reading to complete cloud database integration with visualization dashboards. Using ESP32 microcontrollers with environmental and motion sensors, we employed the Analytic Hierarchy Process with four weighted criteria: functional, instructions, output and creativity, evaluated independently by two expert reviewers. Top-performing models were Claude Sonnet 4.5, Claude Opus 4.1, and Gemini 2.5 Pro, with scores from 0.984 to 0.910. Performance degraded with complexity: 19–23 models generated compilable code for simple applications, but only 3–5 produced functional solutions for complex scenarios involving Grafana and cloud databases. The most frequent failure was hallucinated non-existent libraries or incorrect API usage, with functional capability as the primary barrier and instruction-following quality the key differentiator among competent models. These findings provide empirical guidance for embedded developers on LLM selection and identify limitations of zero-shot prompting for hardware-dependent IoT development. Full article
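A minimal sketch of the Analytic Hierarchy Process scoring used in the evaluation: derive criterion weights from a pairwise-comparison matrix via its principal eigenvector, then rank models by the weighted sum of per-criterion scores. The pairwise judgments and model scores below are hypothetical, not the study's data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights from an AHP pairwise matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, float))
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

# Hypothetical pairwise comparisons for the four criteria
# (functional, instructions, output, creativity):
P = [[1,   2,   3,   5],
     [1/2, 1,   2,   3],
     [1/3, 1/2, 1,   2],
     [1/5, 1/3, 1/2, 1]]
w = ahp_weights(P)

scores = {"model_A": [0.95, 0.90, 0.92, 0.80],   # per-criterion scores
          "model_B": [0.88, 0.93, 0.85, 0.90]}
for name, s in scores.items():
    print(name, round(float(w @ s), 3))
```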
22 pages, 468 KB  
Article
LeapNP: A Modular Python Framework for Benchmarking Learned Heuristics in Numeric Planning
by Valerio Borelli, Alfonso Emilio Gerevini, Enrico Scala and Ivan Serina
Future Internet 2026, 18(2), 93; https://doi.org/10.3390/fi18020093 - 11 Feb 2026
Viewed by 321
Abstract
This paper introduces LeapNP (Learning and Planning Framework for Numeric Problems), a lightweight, Python-native framework engineered to support both classical and numeric planning tasks. Designed with a fully modular interface, it specifically aims to facilitate the seamless integration of deep learning methodologies. The design philosophy of LeapNP stems from the observation that traditional planners, while highly efficient, lack the necessary flexibility for experimental research, particularly at the intersection of learning and planning. Most state-of-the-art engines are built as highly optimized, rigid executables that are resistant to internal modification. LeapNP disrupts this paradigm by offering a framework where the entire planning stack is accessible and mutable. Users can seamlessly plug in custom implementations for grounding, define novel state representations, or design bespoke search strategies, thereby enabling a level of integration with learning models that is currently impractical with standard tools. By significantly lowering the engineering barrier, our planner fosters rapid experimentation and accelerates research in neuro-symbolic planning. We also present a comprehensive suite of search algorithms, designed to evaluate different properties of learned heuristics. These include two algorithms designed to exploit batching to maximize inference throughput, and a greedy algorithm meant to test the intrinsic robustness of the learned models, running them as general policies. Full article
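LeapNP's actual API is not reproduced here; the sketch below merely illustrates, with a hypothetical interface, the kind of plug-in point such a modular planner exposes: a greedy best-first search that accepts any heuristic callable, including a learned model run as a general policy guide.

```python
import heapq
from typing import Callable, Hashable, Iterable, Tuple

State = Hashable
Successors = Callable[[State], Iterable[Tuple[str, State, float]]]

def gbfs(start: State, is_goal: Callable[[State], bool],
         successors: Successors, h: Callable[[State], float]):
    """Greedy best-first search with a swappable heuristic h; any
    callable (including a neural model's forward pass) can plug in."""
    frontier = [(h(start), 0, start, [])]
    seen, tie = {start}, 0
    while frontier:
        _, _, s, plan = heapq.heappop(frontier)
        if is_goal(s):
            return plan
        for act, nxt, _cost in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                tie += 1                    # unique tiebreaker for the heap
                heapq.heappush(frontier, (h(nxt), tie, nxt, plan + [act]))
    return None

# Toy numeric task: reach x == 7 from x == 0 via +1 / +3 actions.
succ = lambda x: [("inc1", x + 1, 1.0), ("inc3", x + 3, 1.0)]
print(gbfs(0, lambda x: x == 7, succ, h=lambda x: abs(7 - x)))
```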
30 pages, 5139 KB  
Article
Research on an On-Chain and Off-Chain Collaborative Storage Method Based on Blockchain and IPFS
by Tianqi Zhu, Yuxiang Huang, Zhihong Liang, Mingming Qin, Ruicheng Niu, Yuanyuan Ma and Qi Feng
Future Internet 2026, 18(2), 92; https://doi.org/10.3390/fi18020092 - 10 Feb 2026
Viewed by 419
Abstract
Blockchain technology, with its characteristics of decentralization, immutability, auditability, and traceability, has gradually become a core infrastructure in the digital economy era, demonstrating great potential in fields such as finance, government services, and the Internet of Things (IoT). However, as the scale of blockchain networks expands and data volumes surge, issues such as full-node storage redundancy, limited transaction throughput, and inefficient synchronization of historical data have become increasingly prominent, severely restricting the large-scale application of blockchain systems. The storage scalability problem faced by blockchain is therefore becoming more critical. To address the challenge in which on-chain storage expansion still cannot meet the demand for large-scale data storage, a storage method combining the InterPlanetary File System (IPFS) with blockchain, referred to as IPFS-BC, is proposed. In IPFS-BC, large-scale raw data are stored in the decentralized and content-addressable IPFS network, while the blockchain only retains the unique content identifier (CID) hash and related metadata. Through smart contracts enabling dynamic permission management and fine-grained access control, efficient interaction and collaborative storage between on-chain and off-chain systems are achieved. In this work, file upload simulation experiments were conducted, and two evaluation indicators—storage space consumption and storage performance (file read/write time and speed)—were used to compare three storage approaches: Distributed Hash Table (DHT)-based off-chain storage, Financial Blockchain Shenzhen Open Source (FISCO BCOS) on-chain storage, and the IPFS-BC on-chain/off-chain collaborative storage model. Experimental results show that the IPFS-BC model reduces storage space consumption by approximately 75% compared with FISCO BCOS blockchain storage when storing file data, significantly decreasing data redundancy. Moreover, IPFS-BC ensures system security during the on-chain process, and through the automated management and auditing provided by smart contracts, it effectively enhances system security and realizes scalable on-chain/off-chain collaborative storage. Full article
(This article belongs to the Special Issue Advances in Multimedia Information System Security)
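A self-contained sketch of the on-chain/off-chain split the abstract describes: raw data goes to a content-addressed store and only the CID plus metadata is anchored on chain. Both ToyChain and fake_ipfs_add below are stand-ins so the example runs without an IPFS node or a FISCO BCOS network; a real deployment would use an IPFS client's add call and a smart-contract transaction instead.

```python
import hashlib, json, time

class ToyChain:
    """Stand-in for the on-chain side: stores only CID + metadata."""
    def __init__(self):
        self.records = []
    def put(self, cid, meta):
        self.records.append({"cid": cid, **meta})

def store_off_chain(ipfs_add, data: bytes, owner: str, chain: ToyChain):
    """Upload raw data off chain; anchor only the returned CID on chain."""
    cid = ipfs_add(data)
    chain.put(cid, {"owner": owner,
                    "sha256": hashlib.sha256(data).hexdigest(),
                    "ts": int(time.time())})
    return cid

# Fake content-addressed store standing in for the IPFS network:
store = {}
def fake_ipfs_add(data: bytes) -> str:
    cid = "Qm" + hashlib.sha256(data).hexdigest()[:32]
    store[cid] = data
    return cid

chain = ToyChain()
cid = store_off_chain(fake_ipfs_add, b"large sensor log ...", "alice", chain)
assert store[cid] == b"large sensor log ..."
print(json.dumps(chain.records[0], indent=2))
```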
20 pages, 2488 KB  
Article
Network Instability as a Signal of Systemic Financial Stress: An Explainable Machine-Learning Framework
by Livia Valentina Moretti, Enrico Barbierato and Alice Gatti
Future Internet 2026, 18(2), 91; https://doi.org/10.3390/fi18020091 - 9 Feb 2026
Viewed by 274
Abstract
This paper develops a framework for monitoring and forecasting episodes of systemic financial stress using a combination of market information, macro-financial indicators, and measures derived from time-varying correlation networks, embedded in a sequential machine-learning setting. The contribution is not tied to a single modelling innovation, but rather to the way these ingredients are brought together under an evaluation protocol designed to mimic real-time supervisory use, and to an interpretability layer that makes the resulting predictions easier to inspect. Monthly data covering the period from 2006 to 2025 are used to construct evolving correlation structures and summary indicators of market co-movement. These features are combined with standard predictors and fed into logistic regression, random forest, and gradient boosting models, all estimated in expanding windows and assessed strictly on future observations. Predictive accuracy remains limited, which is consistent with the difficulty of anticipating stress regimes several months ahead at monthly frequency, although gradient boosting attains the highest average AUC across evaluation folds and displays noticeable variation over time. Inspection of SHAP values points to instability in correlation networks, volatility conditions, and short-horizon return behaviour as recurring drivers of the predicted stress probabilities, suggesting that the models draw on information that goes beyond individual market series. Taken together, the results indicate that recurrent statistical regularities and changes in market structure can be exploited for monitoring purposes when models are trained and tested in a sequential fashion. The overall design is intended to be usable in practice and to support supervisory analysis, while remaining transparent enough to allow scrutiny of the signals driving the forecasts. Full article
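One way to make the "instability of correlation networks" feature concrete is to track how much the rolling correlation matrix moves month over month; the sketch below uses the Frobenius norm of successive differences, with a 12-month window and synthetic returns as assumptions.

```python
import numpy as np
import pandas as pd

def network_instability(returns: pd.DataFrame, window: int = 12):
    """Month-over-month change in the rolling correlation structure:
    Frobenius norm of successive correlation-matrix differences."""
    out, prev = {}, None
    for end in range(window, len(returns) + 1):
        corr = returns.iloc[end - window:end].corr().to_numpy()
        ts = returns.index[end - 1]
        if prev is not None:
            out[ts] = np.linalg.norm(corr - prev, "fro")
        prev = corr
    return pd.Series(out, name="instability")

# Toy monthly returns for four markets:
rng = np.random.default_rng(0)
idx = pd.period_range("2006-01", periods=60, freq="M")
rets = pd.DataFrame(rng.normal(0, 0.04, (60, 4)),
                    index=idx, columns=["EQ", "FX", "RATES", "CREDIT"])
print(network_instability(rets).tail())
```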
8 pages, 6060 KB  
Editorial
Special Issue on Security and Privacy in Blockchains and the IoT—3rd Edition
by Christoph Stach, Clémentine Gritti and Iouliana Litou
Future Internet 2026, 18(2), 90; https://doi.org/10.3390/fi18020090 - 9 Feb 2026
Viewed by 252
Abstract
The rapid digital transformation of modern society is increasingly driven by the convergence of blockchain technologies and the Internet of Things (IoT), both of which have become foundational pillars of contemporary distributed systems [...] Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT—3rd Edition)
13 pages, 1612 KB  
Article
Rethinking the Security Assurances of MTD: A Gap Analysis for Network Defense
by Łukasz Jalowski, Marek Zmuda, Mariusz Rawski and Paulina Rekosz
Future Internet 2026, 18(2), 89; https://doi.org/10.3390/fi18020089 - 7 Feb 2026
Viewed by 390
Abstract
Moving Target Defense (MTD) is a paradigm with the potential to revolutionize the approach to network security. Although a significant number of papers have been published on the topic, there are still no standards or dominant implementations of the concept. This article identifies, and attempts to bridge, the gap in understanding various aspects of MTD security, while also defining the research directions necessary to implement MTD techniques in real-world scenarios. It discusses the security of key MTD design principles, considers problems in applying MTD to real networks, and addresses threat modeling in the context of MTD. By aggregating security aspects of MTD, some of which are not typically discussed in the available literature, this work aims to assist in designing future MTD schemes, help navigate various security caveats, and highlight research directions that have not yet been sufficiently explored. Full article
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems, 3rd Edition)
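For concreteness, the toy sketch below illustrates one of the key MTD design principles the article examines: periodically re-randomizing a service's reachable address so that reconnaissance results go stale between probes. The /24 address pool and the rotation policy are hypothetical, and real schemes must additionally keep legitimate clients synchronized, one of the deployment problems the article discusses.

```python
# Toy illustration of one MTD principle: moving a service through a pool
# of addresses using a cryptographic RNG, so an attacker probing k
# addresses per epoch succeeds with probability ~ k / len(POOL),
# regardless of what earlier scans revealed. Pool size is hypothetical.
import secrets
import ipaddress

POOL = list(ipaddress.ip_network("10.0.0.0/24").hosts())

def rotate_address(current):
    """Pick a fresh address uniformly at random, never repeating the current one."""
    return secrets.choice([a for a in POOL if a != current])

def run_epochs(n_epochs: int):
    """Simulate n rotation epochs; the target moves once per epoch."""
    addr, history = None, []
    for _ in range(n_epochs):
        addr = rotate_address(addr)
        history.append(addr)
    return history
```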
17 pages, 1497 KB  
Article
SPARTA: Sparse Parallel Architecture for Real-Time Threat Analysis for Lightweight Edge Network Defense
by Shi Li, Xiyun Mi, Lin Zhang and Ye Lu
Future Internet 2026, 18(2), 88; https://doi.org/10.3390/fi18020088 - 6 Feb 2026
Viewed by 243
Abstract
AI-driven network security relies increasingly on Large Language Models (LLMs) to detect sophisticated threats; however, their deployment on resource-constrained edge devices is severely hindered by immense parameter scales. While unstructured pruning offers a theoretical reduction in model size, commodity Graphics Processing Unit (GPU) architectures fail to efficiently leverage element-wise sparsity due to the mismatch between fine-grained pruning patterns and the coarse-grained parallelism of Tensor Cores, leading to latency bottlenecks that compromise real-time analysis of high-volume security telemetry. To bridge this gap, we propose SPARTA (Sparse Parallel Architecture for Real-Time Threat Analysis), an algorithm–architecture co-design framework. Specifically, we integrate a hardware-based address remapping interface to enable flexible row-offset access. This mechanism facilitates a novel graph-based column vector merging strategy that aligns sparse data with Tensor Core parallelism, complemented by a pipelined execution scheme to mask decoding latencies. Evaluations on Llama2-7B and Llama2-13B benchmarks demonstrate that SPARTA achieves an average speedup of 2.35× compared to Flash-LLM, with peak speedups reaching 5.05×. These findings indicate that hardware-aware microarchitectural adaptations can effectively mitigate the penalties of unstructured sparsity, providing a viable pathway for efficient deployment in resource-constrained edge security. Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
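The NumPy sketch below conveys the column-merging idea in spirit; it is not the paper's GPU kernel. Sparse columns whose nonzero row sets do not conflict are greedily packed into shared dense columns, which is what lets element-wise sparsity fill coarse-grained Tensor-Core-style tiles; a real kernel would also emit the per-element row offsets that SPARTA's address-remapping interface consumes.

```python
# Illustrative (CPU-side) sketch of graph-based column merging: columns
# of an unstructured-sparse weight matrix with disjoint nonzero row sets
# share one dense output column, raising tile density for dense GEMMs.
import numpy as np

def merge_columns(W: np.ndarray):
    """Greedily pack conflict-free sparse columns into dense slots.

    Returns (packed, mapping): packed[:, s] holds merged values and
    mapping[j] = s records which slot column j landed in. A real kernel
    would also keep per-element row offsets for address remapping.
    """
    rows, cols = W.shape
    slots, occupied, mapping = [], [], {}
    for j in range(cols):
        nz = set(np.flatnonzero(W[:, j]))
        for s, occ in enumerate(occupied):
            if not (occ & nz):              # no row conflict: share this slot
                occ |= nz
                slots[s][list(nz)] = W[list(nz), j]
                mapping[j] = s
                break
        else:                               # every slot conflicts: open a new one
            col = np.zeros(rows)
            col[list(nz)] = W[list(nz), j]
            slots.append(col)
            occupied.append(nz)
            mapping[j] = len(slots) - 1
    return np.stack(slots, axis=1), mapping
```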
21 pages, 43172 KB  
Article
Location-Aware SDN-IDPS Framework for Real-Time DoS Mitigation in Vehicular Networks
by Aung Aung, Kuljaree Tantayakul and Adisak Intana
Future Internet 2026, 18(2), 87; https://doi.org/10.3390/fi18020087 - 6 Feb 2026
Viewed by 675
Abstract
Integrating Software-Defined Networking (SDN) to enhance mobility management in Vehicular Ad Hoc Networks (VANETs) introduces an additional critical risk: because centralized controllers are single points of failure, the network becomes vulnerable to denial-of-service (DoS) attacks during handovers. Most Intrusion Detection and Prevention Systems (IDPSs) do not adequately address these risks because they are topology-blind and impose excessive processing layers. This article presents a novel Location-Aware SDN-IDPS Framework that employs a hierarchical defense approach to protect vehicular networks against volumetric attacks. In this two-plane system, the first tier uses dynamic host-location mappings to drop spoofed traffic at the switch level (data plane), while the second tier analyzes confirmed traffic through a Suricata-based engine to identify and respond to complex flood-attack patterns. Experimental results from the Mininet-WiFi testbed show that the system provides a significant improvement over the unprotected state, with controller CPU utilization reduced by up to 18 times (from 9.0% to below 0.5%). In addition, the system provides a guaranteed 2.3 s recovery time, service continuity, microsecond-level mitigation times, and a packet delivery ratio (PDR) of 99.73% for legitimate safety messages. In control-plane stress testing, the proposed location-aware logic improved throughput stability by approximately 76.26% compared to the baseline. These findings confirm that offloading anti-spoofing logic to the network edge significantly enhances resilience without compromising performance in safety-critical vehicular environments. Full article
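A stripped-down sketch of the first tier's logic is given below: the controller learns each host's attachment point and treats a known source address arriving from a different location as spoofed, pushing a drop rule toward the data plane. The install_drop_rule callback is a hypothetical stand-in for the southbound switch API, and the handover-driven rebinding that the framework handles explicitly is omitted here.

```python
# First-tier sketch: dynamic host-location mapping with a spoof check.
# install_drop_rule is a hypothetical stand-in for the southbound API;
# handover-aware rebinding (which the paper supports) is omitted.
from typing import Callable, Dict, Tuple

Location = Tuple[str, int]  # (switch id, ingress port)

class LocationGuard:
    def __init__(self, install_drop_rule: Callable[[str, Location], None]):
        self.host_location: Dict[str, Location] = {}
        self.install_drop_rule = install_drop_rule

    def on_packet_in(self, src_ip: str, loc: Location) -> bool:
        """Return True if the packet may pass on to the tier-2 Suricata engine."""
        known = self.host_location.get(src_ip)
        if known is None:
            self.host_location[src_ip] = loc      # first sighting: learn the binding
            return True
        if known != loc:                          # source claims an IP bound elsewhere
            self.install_drop_rule(src_ip, loc)   # drop at the switch, not the controller
            return False
        return True
```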
20 pages, 682 KB  
Article
Semantic Search for System Dynamics Models Using Vector Embeddings in a Cloud Microservices Environment
by Pavel Kyurkchiev, Anton Iliev and Nikolay Kyurkchiev
Future Internet 2026, 18(2), 86; https://doi.org/10.3390/fi18020086 - 5 Feb 2026
Viewed by 426
Abstract
Efficient retrieval of mathematical and structural similarities in System Dynamics models remains a significant challenge for traditional lexical systems, which often fail to capture the contextual dependencies of simulation processes. This paper presents an architectural approach and implementation of a semantic search module integrated into an existing cloud-based modeling and simulation system. The proposed method employs a strategy for serializing graph structures into textual descriptions, followed by the generation of vector embeddings via local ONNX inference and indexing within a vector database (Qdrant). Experimental validation, performed on a diverse corpus of complex dynamic models, compares the proposed approach against traditional information retrieval methods (Full-Text Search, Keyword Search in PostgreSQL, and Apache Lucene with Standard and BM25 scoring). The results demonstrate the distinct advantage of semantic search, which achieves high precision (over 90%) within the scope of the evaluated corpus and effectively eliminates information noise. In comparison, keyword search exhibited only 24.8% precision with a significant rate of false positives, while standard full-text analysis failed to identify relevant models for complex conceptual queries (0 results). Despite a recorded increase in latency (~2 s), the study shows that the vector-based approach is a significantly more robust solution for detecting hidden semantic connections in mathematical model databases, providing a foundation for future developments toward multi-vector indexing strategies. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
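The sketch below traces the indexing and query path the abstract describes, using the qdrant-client library in its local in-memory mode. The embed() function is a toy token-hashing stand-in for the paper's local ONNX encoder, and the serialization format, collection name, and vector size are illustrative assumptions.

```python
# Serialize a model graph to text, embed it, index it in Qdrant, and
# query by vector similarity. embed() is a toy stand-in for the local
# ONNX encoder; names and sizes are illustrative.
import hashlib
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

DIM = 384

def embed(text: str) -> list[float]:
    """Toy bag-of-words hashing vectorizer standing in for the ONNX model."""
    vec = [0.0] * DIM
    for tok in text.lower().split():
        vec[int(hashlib.md5(tok.encode()).hexdigest(), 16) % DIM] += 1.0
    return vec

def serialize_model(stocks, flows) -> str:
    """Flatten a stock-and-flow graph into a textual description."""
    parts = [f"stock {s}" for s in stocks] + [f"flow {a} -> {b}" for a, b in flows]
    return "; ".join(parts)

client = QdrantClient(":memory:")  # local mode, no server needed
client.create_collection("sd_models",
                         vectors_config=VectorParams(size=DIM, distance=Distance.COSINE))

def index_model(model_id: int, stocks, flows) -> None:
    text = serialize_model(stocks, flows)
    client.upsert("sd_models", points=[
        PointStruct(id=model_id, vector=embed(text), payload={"text": text})])

def search_models(query: str, k: int = 5):
    return client.search("sd_models", query_vector=embed(query), limit=k)
```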
24 pages, 2506 KB  
Article
CEVD: Cluster-Based Ensemble Learning for Cross-Project Vulnerability Detection
by Yang Cao, Yunwei Dong and Jie Liu
Future Internet 2026, 18(2), 85; https://doi.org/10.3390/fi18020085 - 5 Feb 2026
Viewed by 218
Abstract
Deep learning has become an important approach for automated software vulnerability detection. However, due to domain shift, existing models often suffer from significant performance degradation when applied to unseen projects. To address this issue, prior studies have widely adopted Domain Adaptation (DA) techniques to improve cross-project generalization. Nevertheless, these methods typically rely on the implicit "project-as-domain" assumption and require access to target project data during training, which limits their applicability in practice. To overcome these limitations, this paper proposes a vulnerability detection framework that combines semantic clustering with ensemble-based Domain Generalization (DG), termed Cluster-based Ensemble Learning for Vulnerability Detection (CEVD). CEVD first performs unsupervised clustering on code semantic embeddings to automatically identify latent semantic structures that transcend project boundaries, constructing pseudo-domains with intra-domain homogeneity. A soft domain labeling strategy is further introduced to model the membership of samples in multiple pseudo-domains, preserving semantic overlap across boundaries. Building upon this, CEVD employs an ensemble learning framework that jointly trains multiple expert models and a domain classifier. The predictions of these experts are dynamically fused based on the weights generated by the domain classifier, enabling effective vulnerability detection on unseen projects without requiring access to target data. Extensive experiments on real-world datasets demonstrate that CEVD consistently outperforms state-of-the-art baselines across different pre-trained backbone models. This work demonstrates the effectiveness of semantic structure mining in capturing latent domains and offers a practical solution for improving generalization in cross-project vulnerability detection. Full article
(This article belongs to the Special Issue Security of Computer System and Network)
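A compact sketch of the pipeline's shape follows: KMeans over code embeddings yields pseudo-domains, soft memberships come from a softmax over cluster distances, one expert is trained per pseudo-domain on membership-weighted samples, and a domain classifier supplies the fusion weights at prediction time. Logistic-regression experts replace the paper's pre-trained backbones purely for brevity, and the snippet assumes both classes appear among the training labels.

```python
# Pseudo-domain construction and weighted expert fusion, in miniature.
# Logistic regressions stand in for the paper's pre-trained backbones.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def soft_domain_labels(dist: np.ndarray, temp: float = 1.0) -> np.ndarray:
    """Softmax over negative cluster distances: soft pseudo-domain membership."""
    z = -dist / temp
    z -= z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_cevd(X: np.ndarray, y: np.ndarray, k: int = 3):
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    soft = soft_domain_labels(km.transform(X))             # n x k memberships
    experts = [LogisticRegression(max_iter=1000)
               .fit(X, y, sample_weight=soft[:, j]) for j in range(k)]
    domain_clf = LogisticRegression(max_iter=1000).fit(X, km.labels_)
    return experts, domain_clf

def predict_cevd(experts, domain_clf, X: np.ndarray) -> np.ndarray:
    w = domain_clf.predict_proba(X)                        # n x k fusion weights
    p = np.stack([e.predict_proba(X)[:, 1] for e in experts], axis=1)
    return (w * p).sum(axis=1)                             # fused vulnerability score
```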