Future Internet, Volume 18, Issue 1 (January 2026) – 66 articles

Cover Story: Modern data management introduces significant challenges for current systems, specifically regarding limited scalability, data quality, and the integration of distributed sources. Structured P2P overlays are positioned to address these limits by (i) ensuring massive scalability while maintaining node autonomy, (ii) facilitating data querying, and (iii) enabling integration through efficient lookup mechanisms. Consequently, this work combines P2P overlays with graph databases to leverage efficient scaling and flexible data models, thereby enhancing data quality. Data integration is further facilitated by a novel querying strategy that ensures query propagation and tracking across the overlay. This mechanism allows for precise control over the total time spent querying, the volume of data retrieved, and the number of peers reached.
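The bounded propagation the cover story describes can be sketched as a breadth-first walk with a hop budget and a cap on the number of peers reached. The overlay topology, `ttl`, and `max_peers` names below are illustrative, not the paper's actual API.

```python
from collections import deque

def propagate_query(topology, start, ttl, max_peers):
    """Breadth-first query propagation with a hop budget (ttl) and a
    cap on the total number of peers reached (max_peers)."""
    visited = {start}
    frontier = deque([(start, 0)])      # (peer, hops used so far)
    reached = []
    while frontier and len(reached) < max_peers:
        peer, hops = frontier.popleft()
        reached.append(peer)            # this peer evaluates the query
        if hops < ttl:
            for neighbour in topology.get(peer, []):
                if neighbour not in visited:
                    visited.add(neighbour)
                    frontier.append((neighbour, hops + 1))
    return reached

# Toy overlay: six peers, each knowing its next two successors.
overlay = {n: [(n + 1) % 6, (n + 2) % 6] for n in range(6)}
```

Either limit can stop the walk first, which is how a querier trades total time spent against data volume and peer coverage.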
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
36 pages, 3068 KB  
Article
IRDS4C–CTIB: A Blockchain-Driven Deception Architecture for Ransomware Detection and Intelligence Sharing
by Ahmed El-Kosairy, Heba Aslan and Nashwa AbdelBaki
Future Internet 2026, 18(1), 66; https://doi.org/10.3390/fi18010066 - 21 Jan 2026
Viewed by 279
Abstract
This paper introduces a cybersecurity framework that combines a deception-based ransomware detection system, called the Intrusion and Ransomware Detection System for Cloud (IRDS4C), with a blockchain-enabled Cyber Threat Intelligence platform (CTIB). The framework aims to improve the detection, reporting, and sharing of ransomware threats in cloud environments. IRDS4C uses deception techniques such as honeypots, honeytokens, pretender network paths, and decoy applications to identify ransomware behavior within cloud systems. Tests on 53 Windows-based ransomware samples from seven families showed an average detection time of about 12 s, often quicker than traditional methods like file hashing or entropy analysis. These detection results are currently limited to Windows-based ransomware environments, and do not yet cover Linux, containerized, or hypervisor-level ransomware. Detected threats are formatted using STIX/TAXII standards and securely shared through CTIB. CTIB applies a hybrid blockchain consensus of Proof of Stake (PoS) and Proof of Work (PoW) to ensure data integrity and protection from tampering. Security analysis shows that an attacker would need to control over 71% of the network to compromise the system. CTIB also improves trust, accuracy, and participation in intelligence sharing, while smart contracts control access to sensitive data. In a local prototype deployment (Hardhat devnet + FastAPI/Uvicorn), CTIB achieved 74.93–125.92 CTI submissions/min over 100 requests per test, with a median end-to-end latency of 455.55–724.99 ms (p95: 577.68–1364.17 ms) across PoW difficulty profiles (difficulty_bits = 8–16). Full article
(This article belongs to the Special Issue Anomaly and Intrusion Detection in Networks)
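The deception mechanism at the core of IRDS4C can be illustrated with a minimal honeytoken monitor: any access to a decoy resource is treated as an indicator, since legitimate workloads never touch decoys. The paths, event format, and `detect` function are hypothetical, not taken from the paper.

```python
# Decoy resources planted in the environment; the paths are made up.
HONEYTOKENS = {"/srv/decoy/passwords.xlsx", "/srv/decoy/wallet.dat"}

def detect(events):
    """events: iterable of (process, path, action) tuples.
    Returns one alert per honeytoken touch."""
    alerts = []
    for process, path, action in events:
        if path in HONEYTOKENS:
            alerts.append({"process": process, "path": path,
                           "action": action,
                           "indicator": "honeytoken-access"})
    return alerts
```

In the framework such alerts would then be formatted as STIX objects and submitted to CTIB.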

22 pages, 1469 KB  
Article
RBCrowd: A Reliable Blockchain-Based Reputation Management Framework for Privacy Preservation in Mobile Crowdsensing
by Zaina Maqour, Hanan El Bakkali, Driss Benhaddou and Houda Benbrahim
Future Internet 2026, 18(1), 65; https://doi.org/10.3390/fi18010065 - 21 Jan 2026
Viewed by 160
Abstract
Mobile crowdsensing (MCS) is an emerging paradigm that enables cost-effective, large-scale, and participatory data collection through mobile devices. However, the open nature of MCS raises significant privacy and trust challenges. Existing reputation models have made progress in assessing the quality of contributions, but they still struggle to manage prolonged inactivity, which can lead to outdated scores that no longer reflect current engagement. To address these issues, this paper presents RBCrowd, a dynamic reputation management system based on a dual blockchain architecture. It consists of the Sensing Chain (SC), a public blockchain recording sensing tasks and results, and the Reputation Chain (RC), a consortium blockchain managing user reputation scores. To guarantee privacy, the framework limits identity verification to the RC, ensuring that data on the SC is stored without direct links to the worker. We paired this privacy mechanism with a reputation model that rewards consistent, high-quality contributions. The system updates reputation scores by first validating the specific task and then adjusting for historical engagement, specifically penalizing prolonged inactivity. We evaluate RBCrowd through simulations in realistic MCS scenarios, and the results show that our framework provides more effective dynamic trust management than existing models. It also achieves increased reliability and fairness while managing prolonged inactivity through adaptive penalties. Full article
(This article belongs to the Section Cybersecurity)
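The reputation update described above — a reward for a validated contribution, then an adaptive penalty for prolonged inactivity — can be sketched as follows. The gain, decay rate, and one-week grace period are illustrative constants, not RBCrowd's actual parameters.

```python
def update_reputation(score, task_quality, days_inactive,
                      gain=0.1, decay=0.02, floor=0.0, cap=1.0):
    """Reward a validated task (quality in [0, 1]), then penalise
    inactivity beyond a one-week grace period; clamp to [floor, cap]."""
    score += gain * task_quality                 # reward validated task
    score -= decay * max(0, days_inactive - 7)   # adaptive idle penalty
    return max(floor, min(cap, score))
```

A worker who stops contributing therefore sees the score drift down over time instead of freezing at a stale value.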

30 pages, 965 KB  
Article
Guarded Swarms: Building Trusted Autonomy Through Digital Intelligence and Physical Safeguards
by Uwe M. Borghoff, Paolo Bottoni and Remo Pareschi
Future Internet 2026, 18(1), 64; https://doi.org/10.3390/fi18010064 - 21 Jan 2026
Viewed by 274
Abstract
Autonomous UAV/UGV swarms increasingly operate in contested environments where purely digital control architectures are vulnerable to cyber compromise, communication denial, and timing faults. This paper presents Guarded Swarms, a hybrid framework that combines digital coordination with hardware-level analog safety enforcement. The architecture builds on Topic-Based Communication Space Petri Nets (TB-CSPN) for structured multi-agent coordination, extending this digital foundation with independent analog guard channels—thrust clamps, attitude limiters, proximity sensors, and emergency stops—that operate in parallel at the actuator interface. Each channel can unilaterally veto unsafe commands within microseconds, independently of software state. The digital–analog interface is formalized via timing contracts that specify sensor-consistency windows and actuation latency bounds. A two-robot case study demonstrates token-based arbitration at the digital level and OR-style inhibition at the analog level. The framework ensures local safety deterministically while maintaining global coordination as a best-effort property. This paper presents an architectural contribution establishing design principles and interface contracts. Empirical validation remains future work. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
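The OR-style inhibition at the actuator interface can be sketched as a guard function in which any channel can unilaterally veto a command. The thresholds and command format below are illustrative, not the paper's timing-contract interface.

```python
def analog_guard(command, thrust_limit=0.8, min_proximity=0.5, estop=False):
    """OR-style inhibition: any guard channel can veto independently of
    software state. Returns the (possibly clamped) command, or None
    when vetoed. Thresholds are illustrative values."""
    if estop or command["proximity"] < min_proximity:
        return None                                  # hard veto channel
    thrust = min(command["thrust"], thrust_limit)    # thrust-clamp channel
    return {**command, "thrust": thrust}
```

The point of the architecture is that this logic lives in analog hardware in parallel with the digital controller, so the veto holds even if the TB-CSPN layer is compromised.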

37 pages, 2717 KB  
Review
Synthetizing 6G KPIs for Diverse Future Use Cases: A Comprehensive Review of Emerging Standards, Technologies, and Societal Needs
by Shujat Ali, Asma Abu-Samah, Mohammed H. Alsharif, Rosdiadee Nordin, Nauman Saqib, Mohammed Sani Adam, Umawathy Techanamurthy, Manzareen Mustafa and Nor Fadzilah Abdullah
Future Internet 2026, 18(1), 63; https://doi.org/10.3390/fi18010063 - 21 Jan 2026
Viewed by 488
Abstract
The anticipated transition from 5G to 6G is driven not by incremental performance demands but by a widening mismatch between emerging application requirements and the capabilities of existing cellular systems. Despite rapid progress across 3GPP Releases 15–20, the current literature lacks a unified analysis that connects these standardization milestones to the concrete technical gaps that 6G must resolve. This study addresses this omission through a cross-release, application-driven review that traces how the evolution from enhanced mobile broadband to intelligent, sensing-integrated networks lays the foundation for three core 6G service pillars: immersive communication (IC), everything connected (EC), and high-precision positioning. By examining use cases such as holographic telepresence, cooperative drone swarms, and large-scale Extended Reality (XR) ecosystems, this study exposes the limitations of today’s spectrum strategies, network architectures, and device capabilities and identifies the performance thresholds of Tbps-level throughput, sub-10 cm localization, sub-ms latency, and 10 M devices/km² that next-generation systems must achieve. The novelty of this review lies in its synthesis of 3GPP advancements in XR, the non-terrestrial network (NTN), RedCap, ambient Internet of Things (IoT), and consideration of sustainability into a cohesive key performance indicator (KPI) framework that links future services to the required architectural and protocol innovations, including AI-native design and sub-THz operation. Positioned against global initiatives such as Hexa-X and the Next G Alliance, this paper argues that 6G represents a fundamental redesign of wireless communication rather than an incremental advancement of 5G, driven by intelligence, adaptability, and long-term energy efficiency to satisfy diverse use cases and requirements. Full article
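The performance thresholds named in the review can be collected into a simple compliance check. The dictionary keys, units, and the strictness of each comparison are assumptions made for illustration, not a normative KPI definition.

```python
# 6G targets from the review: Tbps-level throughput, sub-10 cm
# localization, sub-ms latency, 10 M devices per km^2.
KPI_6G = {"throughput_tbps": 1.0, "localization_cm": 10.0,
          "latency_ms": 1.0, "density_per_km2": 10_000_000}

def meets_6g(measured):
    """True when a measured profile reaches every 6G threshold."""
    return (measured["throughput_tbps"] >= KPI_6G["throughput_tbps"]
            and measured["localization_cm"] <= KPI_6G["localization_cm"]
            and measured["latency_ms"] < KPI_6G["latency_ms"]
            and measured["density_per_km2"] >= KPI_6G["density_per_km2"])
```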

24 pages, 1137 KB  
Article
Detecting TLS Protocol Anomalies Through Network Monitoring and Compliance Tools
by Diana Gratiela Berbecaru and Marco De Santo
Future Internet 2026, 18(1), 62; https://doi.org/10.3390/fi18010062 - 21 Jan 2026
Viewed by 238
Abstract
The Transport Layer Security (TLS) protocol is widely used nowadays to create secure communications over TCP/IP networks. Its purpose is to ensure confidentiality, authentication, and data integrity for messages exchanged between two endpoints. In order to facilitate its integration into widely used applications, the protocol is typically implemented through libraries, such as OpenSSL, BoringSSL, LibreSSL, WolfSSL, NSS, or mbedTLS. These libraries encompass functions that execute the specialized TLS handshake required for channel establishment, as well as the construction and processing of TLS records, and the procedures for closing the secure channel. However, these software libraries may contain vulnerabilities or errors that could potentially jeopardize the security of the TLS channel. To identify flaws or deviations from established standards within the implemented TLS code, a specialized tool known as TLS-Anvil can be utilized. This tool also verifies the compliance of TLS libraries with the specifications outlined in the Request for Comments documents published by the IETF. TLS-Anvil conducts numerous tests with a client/server configuration utilizing a specified TLS library and subsequently generates a report that details the number of successful tests. In this work, we exploit the results obtained from a selected subset of TLS-Anvil tests to generate rules used for anomaly detection in Suricata, a well-known signature-based Intrusion Detection System. During the tests, TLS-Anvil generates .pcap capture files that report all the messages exchanged. Such files can be subsequently analyzed with Wireshark, allowing for a detailed examination of the messages exchanged during the tests and a thorough understanding of their structure on a byte-by-byte basis. 
Through the analysis of the TLS handshake messages produced during testing, we develop customized Suricata rules aimed at detecting TLS anomalies that result from flawed implementations within the intercepted traffic. Furthermore, we describe the specific test environment established for the purpose of deriving and validating certain Suricata rules intended to identify anomalies in nodes utilizing a version of the OpenSSL library that does not conform to the TLS specification. The rules that delineate TLS deviations or potential attacks may subsequently be integrated into a threat detection platform supporting Suricata. This integration will enhance the capability to identify TLS anomalies arising from code that fails to adhere to the established specifications. Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
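A rough sense of what a Suricata-style TLS anomaly rule matches on can be given by inspecting the fixed-offset fields of a 5-byte TLS record header. The anomaly labels here are made up, and the record-size ceiling is the RFC 8446 limit for protected records; none of this reproduces the authors' actual rules.

```python
import struct

LEGAL_VERSIONS = {0x0301, 0x0302, 0x0303, 0x0304}  # TLS 1.0 - 1.3
MAX_RECORD = 2**14 + 256   # ceiling for protected records (RFC 8446)

def check_record(header):
    """Inspect a 5-byte TLS record header (type, version, length) the
    way a signature rule would match on fixed offsets; returns a list
    of anomaly strings, empty when the header looks well-formed."""
    ctype, version, length = struct.unpack("!BHH", header)
    anomalies = []
    if ctype not in (20, 21, 22, 23):   # CCS / alert / handshake / appdata
        anomalies.append("unknown-content-type")
    if version not in LEGAL_VERSIONS:
        anomalies.append("illegal-record-version")
    if length > MAX_RECORD:
        anomalies.append("oversized-record")
    return anomalies
```

A Suricata rule derived from a failed TLS-Anvil test would encode a comparable byte-level condition against the intercepted handshake traffic.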

32 pages, 4599 KB  
Article
Adaptive Assistive Technologies for Learning Mexican Sign Language: Design of a Mobile Application with Computer Vision and Personalized Educational Interaction
by Carlos Hurtado-Sánchez, Ricardo Rosales Cisneros, José Ricardo Cárdenas-Valdez, Andrés Calvillo-Téllez and Everardo Inzunza-Gonzalez
Future Internet 2026, 18(1), 61; https://doi.org/10.3390/fi18010061 - 21 Jan 2026
Viewed by 168
Abstract
Integrating people with hearing disabilities into schools is one of the biggest problems that Latin American societies face. Mexican Sign Language (MSL) is the main language and culture of the deaf community in Mexico. However, its use in formal education is still limited by structural inequalities, a lack of qualified interpreters, and a lack of technology that can support personalized instruction. This study outlines the conceptualization and development of a mobile application designed as an adaptive assistive technology for learning MSL, utilizing a combination of computer vision techniques, deep learning algorithms, and personalized pedagogical interaction. The suggested system uses convolutional neural networks (CNNs) and pose-estimation models to recognize hand gestures in real time with 95.7% accuracy and gives the learner instant feedback. A dynamic learning engine automatically adjusts the level of difficulty based on how well the learner is doing, which helps them learn signs and phrases over time. The Scrum agile methodology was used during the development process, with educators, linguists, and members of the deaf community all working together on the design. Early tests show favorable sign recognition accuracy, and indicators of user engagement and motivation are at appropriate levels. Beyond its technical contributions, this proposal aims to enhance inclusive digital ecosystems and foster linguistic equity in Mexican education through scalable, mobile, and culturally relevant technologies. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Computer Vision—2nd Edition)
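The dynamic difficulty adjustment described above can be sketched as a simple rule on recent recognition accuracy. The thresholds and level bounds are illustrative, not the app's actual engine.

```python
def adjust_level(level, recent_accuracy, up=0.9, down=0.6,
                 min_level=1, max_level=10):
    """Raise difficulty after sustained success, lower it after
    sustained failure, and hold it steady in between."""
    if recent_accuracy >= up:
        level += 1
    elif recent_accuracy < down:
        level -= 1
    return max(min_level, min(max_level, level))
```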

17 pages, 759 KB  
Article
Unsupervised Detection of SOC Spoofing in OCPP 2.0.1 EV Charging Communication Protocol Using One-Class SVM
by Aisha B. Rahman, Md Sadman Siraj, Eirini Eleni Tsiropoulou, Georgios Fragkos, Ryan Sullivant, Yung Ryn Choe, Jhaell Jimenez, Junghwan Rhee and Kyu Hyung Lee
Future Internet 2026, 18(1), 60; https://doi.org/10.3390/fi18010060 - 21 Jan 2026
Viewed by 267
Abstract
The electric vehicles (EVs) market keeps growing globally; thus, it is critical to secure the EV charging communication protocols in order to guarantee reliable and fair charging operations among the customers. The Open Charge Point Protocol (OCPP) 2.0.1 supports the communication between the Electric Vehicle Supply Equipment (EVSE) and Charging Station Management Systems (CSMSs); therefore, it becomes vulnerable to several types of attacks, which aim to jeopardize smart charging, billing, and energy management. Specifically, OCPP 2.0.1 allows the self-reporting of the State of Charge (SOC) values, which makes it vulnerable to spoofing-based cyberattacks that manipulate scheduling priorities, distort load forecasts, and extend charging sessions in an unfair manner. In this paper, we address this type of attack by providing a comprehensive analysis of SOC spoofing attacks and introducing a novel unsupervised detection framework based on the One-Class Support Vector Machine (OCSVM) algorithm. Specifically, two types of attack scenarios are analyzed (i.e., priority manipulation and session extension) by deriving engineered features that capture the nonlinear relationships under normal charging behavior. Detailed simulation-based results are derived by utilizing the DESL-EPFL Level 3 EV charging dataset. Our results show high F1-scores and recall in identifying spoofed SOC values, and the proposed OCSVM model outperforms alternative clustering- and deep-learning-based detectors. Full article
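As a crude, dependency-free stand-in for the paper's One-Class SVM decision boundary (not the OCSVM itself), normal charging behavior can be enclosed in a spherical region learned from benign feature vectors, with points falling outside flagged as spoofed. The feature choice (SOC delta, session minutes) and the slack factor are assumptions for illustration.

```python
def fit(normal_points):
    """Learn a spherical boundary around normal behaviour: the
    centroid plus the maximum training distance as radius."""
    n, dim = len(normal_points), len(normal_points[0])
    centroid = [sum(p[i] for p in normal_points) / n for i in range(dim)]
    radius = max(sum((p[i] - centroid[i]) ** 2 for i in range(dim)) ** 0.5
                 for p in normal_points)
    return centroid, radius

def is_spoofed(point, centroid, radius, slack=1.1):
    """Flag a session whose features fall outside the learned region."""
    dist = sum((point[i] - centroid[i]) ** 2
               for i in range(len(point))) ** 0.5
    return dist > slack * radius
```

The OCSVM replaces this rigid sphere with a kernelized boundary that can follow the nonlinear relationships the engineered features capture.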

25 pages, 7167 KB  
Article
Edge-Enhanced YOLOV8 for Spacecraft Instance Segmentation in Cloud-Edge IoT Environments
by Ming Chen, Wenjie Chen, Yanfei Niu, Ping Qi and Fucheng Wang
Future Internet 2026, 18(1), 59; https://doi.org/10.3390/fi18010059 - 20 Jan 2026
Viewed by 166
Abstract
The proliferation of smart devices and the Internet of Things (IoT) has led to massive data generation, particularly in complex domains such as aerospace. Cloud computing provides essential scalability and advanced analytics for processing these vast datasets. However, relying solely on the cloud introduces significant challenges, including high latency, network congestion, and substantial bandwidth costs, which are critical for real-time on-orbit spacecraft services. Cloud-edge Internet of Things (cloud-edge IoT) computing emerges as a promising architecture to mitigate these issues by pushing computation closer to the data source. This paper proposes an improved YOLOV8-based model specifically designed for edge computing scenarios within a cloud-edge IoT framework. By integrating the Cross Stage Partial Spatial Pyramid Pooling Fast (CSPPF) module and the WDIOU loss function, the model achieves enhanced feature extraction and localization accuracy without significantly increasing computational cost, making it suitable for deployment on resource-constrained edge devices. Meanwhile, by processing image data locally at the edge and transmitting only the compact segmentation results to the cloud, the system effectively reduces bandwidth usage and supports efficient cloud-edge collaboration in IoT-based spacecraft monitoring systems. Experimental results show that, compared to the original YOLOV8 and other mainstream models, the proposed model demonstrates superior accuracy and instance segmentation performance at the edge, validating its practicality in cloud-edge IoT environments. Full article
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
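The WDIOU loss presumably builds on box overlap; plain intersection-over-union, its core ingredient, looks like this (the weighting and distance terms specific to WDIOU are not reproduced here):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

An IoU-family loss is then typically `1 - iou(pred, target)` with extra penalty terms for center distance or aspect ratio.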

22 pages, 1217 KB  
Article
A Multi-Objective Optimization-Based Container Cloud Resource Scheduling Method
by Danping Zhang, Xiaolan Xie and Yuhui Song
Future Internet 2026, 18(1), 58; https://doi.org/10.3390/fi18010058 - 20 Jan 2026
Viewed by 171
Abstract
Container-based cloud platforms enable flexible and lightweight application deployment, yet container scheduling remains challenged by resource fragmentation, load imbalance, excessive energy consumption, and service-level agreement (SLA) violations. To address these issues, this paper proposes a hybrid multi-objective optimization approach, termed HHO-GWO, which combines Harris Hawks Optimization (HHO) with the Grey Wolf Optimizer (GWO) for container initial placement in cloud environments. A unified fitness function is designed to jointly consider resource utilization, load balancing, resource fragmentation, energy consumption, and SLA violation rate. In addition, a dynamic weight adjustment mechanism and Lévy flight perturbation are incorporated to improve search adaptability and prevent premature convergence. The proposed method is evaluated through extensive simulations under different workload scales and compared with several representative metaheuristic algorithms. The results show that HHO-GWO achieves improved convergence behavior, solution quality, and stability, particularly in large-scale container deployment scenarios. These findings suggest that the proposed approach provides a practical and energy-aware solution for multi-objective container scheduling in cloud data centers. Full article
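The unified fitness function described above can be sketched as a weighted scalarization of the five objectives. The normalization convention (all metrics in [0, 1], lower fitness is better) and the weights are illustrative; the paper's dynamic weight adjustment and Lévy flight perturbation are not reproduced.

```python
def fitness(metrics, weights):
    """Scalarize the five objectives from the abstract into one cost:
    high utilization is rewarded, while imbalance, fragmentation,
    energy use, and SLA violations are penalized."""
    return (weights["util"] * (1 - metrics["utilization"])
            + weights["balance"] * metrics["imbalance"]
            + weights["frag"] * metrics["fragmentation"]
            + weights["energy"] * metrics["energy"]
            + weights["sla"] * metrics["sla_violations"])
```

A placement candidate with lower cost under this function would be preferred by the HHO-GWO search.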

26 pages, 1629 KB  
Article
Performance Evaluation of MongoDB and RavenDB in IIoT-Inspired Data-Intensive Mobile and Web Applications
by Mădălina Ciumac, Cornelia Aurora Győrödi, Robert Ștefan Győrödi and Felicia Mirabela Costea
Future Internet 2026, 18(1), 57; https://doi.org/10.3390/fi18010057 - 20 Jan 2026
Viewed by 206
Abstract
The exponential growth of data generated by modern digital applications, including systems inspired by Industrial Internet of Things (IIoT) requirements, has accelerated the adoption of NoSQL databases due to their scalability, flexibility, and performance advantages over traditional relational systems. Among document-oriented solutions, MongoDB and RavenDB stand out due to their architectural features and their ability to manage dynamic, large-scale datasets. This paper presents a comparative analysis of MongoDB and RavenDB, focusing on the performance of fundamental CRUD (Create, Read, Update, Delete) operations. To ensure a controlled performance evaluation, a mobile and web application for managing product orders was implemented as a case study inspired by IIoT data characteristics, such as high data volume and frequent transactional operations, with experiments conducted on datasets ranging from 1000 to 1,000,000 records. Beyond the core CRUD evaluation, the study also investigates advanced operational scenarios, including joint processing strategies (lookup versus document inclusion), bulk data ingestion techniques, aggregation performance, and full-text search capabilities. These complementary tests provide deeper insight into the systems’ architectural strengths and their behavior under more complex and data-intensive workloads. The experimental results highlight MongoDB’s consistent performance advantage in terms of response time, particularly with large data volumes, while RavenDB demonstrates competitive behavior and offers additional benefits such as built-in ACID compliance, automatic indexing, and optimized mechanisms for relational retrieval and bulk ingestion. 
The analysis does not propose a new benchmarking methodology but provides practical insights for selecting an appropriate document-oriented database for data-intensive mobile and web application contexts, including IIoT-inspired data characteristics, based on a controlled single-node experimental setting whose limitations are acknowledged. Full article
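The shape of the CRUD timing experiments can be illustrated with a small harness that times each operation class against any dict-like store. Swapping the in-memory dict for a real MongoDB or RavenDB client is left out, so the absolute numbers mean nothing here; only the harness structure is the point.

```python
import time

def time_crud(store, n):
    """Time n create/read/update/delete operations of each kind
    against a dict-like store and return seconds per phase."""
    timings = {}
    t0 = time.perf_counter()
    for i in range(n):
        store[i] = {"order": i, "qty": 1}           # create
    timings["create"] = time.perf_counter() - t0
    t0 = time.perf_counter()
    for i in range(n):
        _ = store[i]                                # read
    timings["read"] = time.perf_counter() - t0
    t0 = time.perf_counter()
    for i in range(n):
        store[i]["qty"] = 2                         # update
    timings["update"] = time.perf_counter() - t0
    t0 = time.perf_counter()
    for i in range(n):
        del store[i]                                # delete
    timings["delete"] = time.perf_counter() - t0
    return timings
```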

30 pages, 6341 KB  
Article
MCS-VD: Alliance Chain-Driven Multi-Cloud Storage and Verifiable Deletion Scheme for Smart Grid Data
by Lihua Zhang, Jiali Luo, Yi Yang and Wenbiao Wang
Future Internet 2026, 18(1), 56; https://doi.org/10.3390/fi18010056 - 20 Jan 2026
Cited by 1 | Viewed by 162
Abstract
Huge volumes of power consumption data in the smart grid expose centralized storage to inadequate capacity, poor scalability, low storage efficiency, and single points of failure that can collapse the entire system; thus, an alliance chain-driven multi-cloud storage and verifiable deletion method for smart grid data is proposed. By leveraging the synergy between alliance blockchain and multi-cloud architecture, the encrypted power data originating from edge nodes is dispersed across a decentralized multi-cloud infrastructure, which effectively mitigates the danger of data loss resulting from single-point failures or malicious intrusions. The removal of expired and user-defined data is guaranteed through a transaction deletion algorithm integrated into the indexed storage deletion chain, which strengthens the flexibility and security of the storage architecture. Based on the Practical Byzantine Fault-Tolerant Consensus Protocol with Ultra-Low Storage Overhead (ULS-PBFT), hierarchical grouping of nodes reduces the system's communication and storage overhead. Security analysis proves that the scheme can resist tampering attacks, impersonation attacks, collusion attacks, double-spend attacks, and replay attacks. Performance evaluation shows that the scheme improves performance compared to similar methods. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT—3rd Edition)
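The dispersal-plus-deletion-index idea can be sketched as follows; round-robin chunk placement and the index fields are deliberate simplifications of the scheme's indexed storage deletion chain, used only to show the shape of the bookkeeping.

```python
def disperse(chunks, clouds):
    """Assign encrypted data chunks to clouds round-robin and build
    the index a deletion chain would later consult."""
    index = {}
    for i, chunk in enumerate(chunks):
        index[i] = {"cloud": clouds[i % len(clouds)],
                    "size": len(chunk), "deleted": False}
    return index

def mark_deleted(index, chunk_id):
    """Record a verifiable deletion for one chunk and return the
    updated index entry (the record a deletion transaction would carry)."""
    index[chunk_id]["deleted"] = True
    return index[chunk_id]
```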

31 pages, 3039 KB  
Article
Digital Learning Hubs: Evaluating Their Role in Fostering Complex and Computational Thinking
by Inés Alvarez-Icaza, Luis Magdiel Oliva-Córdova, Rasikh Tariq and José Luis Martín-Núñez
Future Internet 2026, 18(1), 55; https://doi.org/10.3390/fi18010055 - 19 Jan 2026
Viewed by 259
Abstract
Digital Learning Hubs and educational repositories are key tools to offer innovative educational experiences in the context of the digital transformation of education. However, their evaluation has often been approached from fragmented perspectives, limiting a comprehensive understanding of their role as integrated digital learning ecosystems. This study aimed to evaluate the functionalities, usability, and accessibility of 25 digital platforms through 1519 observations, conceiving these dimensions as interconnected components for supporting the development of complex and computational thinking, and testing five hypotheses related to their performance. A quantitative descriptive–correlational approach was employed for the study. Internal and external functionalities were assessed using a specifically designed instrument, while usability was analyzed according to ISO 9241: Ergonomics of human–system interaction (efficiency, effectiveness, and user satisfaction), and accessibility was evaluated based on WCAG 2.1 standards. The results showed that platforms with higher scores in internal functionalities, particularly personalization and evaluation, exhibited a positive correlation with higher usability metrics, including efficiency and user satisfaction. Accessibility limitations and weaknesses in external functionalities were identified as relevant factors affecting platform performance, with recurring shortcomings in Operable and Understandable. In addition, the availability of courses explicitly focused on complex and computational thinking revealed a gap in specialized training within the analyzed platforms. From a research perspective, this study contributes by providing a multi-criteria evaluation framework and comparative empirical evidence that clarifies the relationships between platform functionality, usability, accessibility, and the development of complex and computational thinking. 
These findings support developing Digital Learning Hubs as balanced, robust, and evidence-informed digital learning ecosystems. Full article
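The correlational analysis described above rests on Pearson's r between platform scores, e.g. internal functionality versus a usability metric; a minimal sketch (variable names are illustrative):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length
    samples, as used in descriptive-correlational designs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```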

26 pages, 3132 KB  
Article
An Unsupervised Cloud-Centric Intrusion Diagnosis Framework Using Autoencoder and Density-Based Learning
by Suresh K. S, Thenmozhi Elumalai, Radhakrishnan Rajamani, Anubhav Kumar, Balamurugan Balusamy, Sumendra Yogarayan and Kaliyaperumal Prabu
Future Internet 2026, 18(1), 54; https://doi.org/10.3390/fi18010054 - 19 Jan 2026
Viewed by 209
Abstract
Cloud computing environments generate high-dimensional, large-scale, and highly dynamic network traffic, making intrusion diagnosis challenging due to evolving attack patterns, severe traffic imbalance, and limited availability of labeled data. To address these challenges, this study presents an unsupervised, cloud-centric intrusion diagnosis framework that integrates autoencoder-based representation learning with density-based attack categorization. A dual-stage autoencoder is trained exclusively on benign traffic to learn compact latent representations and to identify anomalous flows using reconstruction-error analysis, enabling effective anomaly detection without prior attack labels. The detected anomalies are subsequently grouped using density-based learning to uncover latent attack structures and support fine-grained multiclass intrusion diagnosis under varying attack densities. Experiments conducted on the large-scale CSE-CIC-IDS2018 dataset demonstrate that the proposed framework achieves an anomaly detection accuracy of 99.46%, with high recall and low false-negative rates in the optimal latent-space configuration. The density-based classification stage achieves an overall multiclass attack classification accuracy of 98.79%, effectively handling both majority and minority attack categories. Clustering quality evaluation reports a Silhouette Score of 0.9857 and a Davies–Bouldin Index of 0.0091, indicating strong cluster compactness and separability. Comparative analysis against representative supervised and unsupervised baselines confirms the framework’s scalability and robustness under highly imbalanced cloud traffic, highlighting its suitability for future Internet cloud security ecosystems. Full article
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)
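The detection step this abstract describes, training on benign traffic only and flagging flows by reconstruction error, can be sketched in a few lines. Everything below is an illustrative toy: the mean-plus-k-sigma threshold rule is an assumption for demonstration, and a trained dual-stage autoencoder (not shown) would produce the reconstructions.

```python
# Illustrative sketch only: reconstruction-error anomaly detection.
# The mean + k*sigma threshold heuristic is an assumption, not the
# paper's method; the reconstructions would come from a trained
# autoencoder in practice.

def reconstruction_error(x, x_hat):
    """Mean squared error between a flow vector and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def fit_threshold(benign_errors, k=3.0):
    """Set the anomaly threshold from benign-traffic errors only."""
    n = len(benign_errors)
    mean = sum(benign_errors) / n
    std = (sum((e - mean) ** 2 for e in benign_errors) / n) ** 0.5
    return mean + k * std

def is_anomalous(error, threshold):
    """A flow is flagged when its reconstruction error exceeds the threshold."""
    return error > threshold
```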
29 pages, 2803 KB  
Article
Benchmarking SQL and NoSQL Persistence in Microservices Under Variable Workloads
by Nenad Pantelic, Ljiljana Matic, Lazar Jakovljevic, Stefan Eric, Milan Eric, Miladin Stefanović and Aleksandar Djordjevic
Future Internet 2026, 18(1), 53; https://doi.org/10.3390/fi18010053 - 15 Jan 2026
Abstract
This paper presents a controlled comparative evaluation of SQL and NoSQL persistence mechanisms in containerized microservice architectures under variable workload conditions. Three persistence configurations—SQL with indexing, SQL without indexing, and a document-oriented NoSQL database, including supplementary hybrid SQL variants used for robustness analysis—are assessed across read-dominant, write-dominant, and mixed workloads, with concurrency levels ranging from low to high contention. The experimental setup is fully containerized and executed in a single-node environment to isolate persistence-layer behavior and ensure reproducibility. System performance is evaluated using multiple metrics, including percentile-based latency (p95), throughput, CPU utilization, and memory consumption. The results reveal distinct performance trade-offs among the evaluated configurations, highlighting the sensitivity of persistence mechanisms to workload composition and concurrency intensity. In particular, indexing strategies significantly affect read-heavy scenarios, while document-oriented persistence demonstrates advantages under write-intensive workloads. The findings emphasize the importance of workload-aware persistence selection in microservice-based systems and support the adoption of polyglot persistence strategies. Rather than providing absolute performance benchmarks, the study focuses on comparative behavioral trends that can inform architectural decision-making in practical microservice deployments. Full article
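As a side note on the percentile-based latency metric used in the evaluation above, p95 can be computed from raw latency samples as follows. The linear-interpolation convention here is one common choice, not necessarily the one used by the study's benchmark tooling:

```python
import math

def percentile(samples, p):
    """Percentile with linear interpolation between closest ranks."""
    xs = sorted(samples)
    if len(xs) == 1:
        return xs[0]
    rank = (p / 100) * (len(xs) - 1)   # fractional index into sorted data
    lo, hi = math.floor(rank), math.ceil(rank)
    frac = rank - lo
    return xs[lo] * (1 - frac) + xs[hi] * frac

def p95(latencies_ms):
    """The tail-latency figure reported per workload: 95th percentile."""
    return percentile(latencies_ms, 95)
```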
21 pages, 1555 KB  
Article
Cyber Approach for DDoS Attack Detection Using Hybrid CNN-LSTM Model in IoT-Based Healthcare
by Mbarka Belhaj Mohamed, Dalenda Bouzidi, Manar Khalid Ibraheem, Abdullah Ali Jawad Al-Abadi and Ahmed Fakhfakh
Future Internet 2026, 18(1), 52; https://doi.org/10.3390/fi18010052 - 15 Jan 2026
Abstract
Healthcare has been fundamentally changed by the expansion of IoT, which enables advanced diagnostics and continuous monitoring of patients outside clinical settings. Frequently interconnected medical devices often encounter resource limitations and lack comprehensive security safeguards. Therefore, such devices are prone to intrusions, with DDoS attacks in particular threatening the integrity of vital infrastructure. To safeguard sensitive patient information and ensure the integrity and confidentiality of medical devices, this article explores the critical importance of robust security measures in healthcare IoT systems. To detect DDoS attacks in healthcare networks supported by WBSN-enabled IoT devices, we propose a hybrid detection model. The model utilizes the advantages of Long Short-Term Memory (LSTM) networks for modeling temporal dependencies in network traffic and Convolutional Neural Networks (CNNs) for extracting spatial features. The effectiveness of the model is demonstrated by simulation results on the CICDDoS2019 dataset, which indicate a detection accuracy of 99% and a loss of 0.05%. The evaluation results highlight the capability of the hybrid model to reliably detect potential anomalies, showing superior performance over leading contemporary methods in healthcare environments. Full article
25 pages, 1759 KB  
Article
Resilience-by-Design: Extracting Resilience Requirements Using the Resilience Graph in the Automotive Concept Phase
by Isaac Mpidi Bita, Elif Ugur, Aschot Hovemann and Roman Dumitrescu
Future Internet 2026, 18(1), 51; https://doi.org/10.3390/fi18010051 - 15 Jan 2026
Abstract
The complexity and interconnectivity of modern automotive systems are rapidly increasing, particularly with the rise of distributed and cooperative driving functions. These developments increase exposure to a range of disruptions, from technical failures to cyberattacks, and demand a shift towards resilience-by-design. This study addresses the early integration of resilience into the automotive design process by proposing a structured method for identifying gaps and eliciting resilience requirements. Building upon the concept of resilience scenarios, the approach extends traditional hazard and threat analyses as defined in ISO 26262, ISO 21448 and ISO/SAE 21434. Using a structured, graph-based modeling method, these scenarios enable the anticipation of functional degradation and its impact on driving scenarios. The methodology helps developers to specify resilience requirements at an early stage, enabling the integration of resilience properties throughout the system lifecycle. Its practical applicability is demonstrated through an example in the field of automotive cybersecurity. This study advances the field of resilience engineering by providing a concrete approach for operationalizing resilience within automotive systems engineering. Full article
(This article belongs to the Special Issue Security of Computer System and Network)
37 pages, 653 KB  
Article
Highly Efficient Software Development Using DevOps and Microservices: A Comprehensive Framework
by David Barbosa, Vítor Santos, Maria Clara Silveira, Arnaldo Santos and Henrique S. Mamede
Future Internet 2026, 18(1), 50; https://doi.org/10.3390/fi18010050 - 14 Jan 2026
Abstract
With the growing popularity of DevOps culture among companies and the corresponding increase in Microservices architecture development—both known to boost productivity and efficiency in software development—an increasing number of organizations are aiming to integrate them. Implementing DevOps culture and best practices can be challenging, but it is increasingly important as software applications become more robust and complex, and performance is considered essential by end users. By following the Design Science Research methodology, this paper proposes an iterative framework that closely follows the recommended DevOps practices, validated with the assistance of expert interviews, for implementing DevOps practices into Microservices architecture software development, while also offering a series of tools that serve as a base guideline for anyone following this framework, in the form of a theoretical use case. Therefore, this paper provides organizations with a guideline for adapting DevOps and offers organizations already using this methodology a framework to potentially enhance their established practices. Full article
2 pages, 129 KB  
Correction
Correction: Sokhansanj, B.A. Uncensored AI in the Wild: Tracking Publicly Available and Locally Deployable LLMs. Future Internet 2025, 17, 477
by Bahrad A. Sokhansanj
Future Internet 2026, 18(1), 49; https://doi.org/10.3390/fi18010049 - 14 Jan 2026
Abstract
The author requested to add some new content and cite a new reference in Section 2 after the sentence “The empirical work presented in this paper is particularly needed now because, despite the rapid spread of open-weight models, empirical research on their safety remains limited [...] Full article
35 pages, 8323 KB  
Article
Evaluating Digital Marketing, Innovation, and Entrepreneurial Impact in AI-Built vs. Professionally Developed DeFi Websites
by Nikolaos T. Giannakopoulos, Damianos P. Sakas and Nikos Kanellos
Future Internet 2026, 18(1), 48; https://doi.org/10.3390/fi18010048 - 13 Jan 2026
Abstract
This study evaluates whether an AI-built DeFi website case can match professionally developed DeFi platforms in digital marketing performance, innovation-related strategic behavior, and entrepreneurial impact. Using a multi-method design, we compare five established DeFi websites (Aave, Lido, Curve, MakerDAO, Uniswap) against one AI-built interface (Nexus Protocol). The analysis is designed as a five-platform benchmarking study of established professional DeFi websites, complemented by one AI-built case (Nexus Protocol) used as an illustrative comparison rather than as representative of AI-built interfaces as a class. The objectives are to (i) test differences in traffic composition and acquisition strategies, (ii) quantify how engagement signals predict authority and branded traffic, (iii) examine cognitive processing and trust-cue attention via eye tracking, and (iv) model emergent engagement and authority dynamics using agent-based simulation (ABM). Web analytics (March–October 2025) show significant variation in traffic composition across professional platforms (ANOVA F = 3.41, p = 0.0205), while regression models indicate that time on site and pages per visit positively predict Authority Score (R2 = 0.61) and Branded Traffic (R2 = 0.55), with bounce rate exerting an adverse effect. PCA and k-means clustering identify three strategic archetypes (innovation-driven, balanced-growth, efficiency-focused). Eye-tracking results show that professional interfaces generate tighter fixation clusters and shorter scan paths, indicating higher cognitive efficiency. In contrast, fixation on key UI elements and trust cues is comparable across interface types. ABM outputs further suggest that reduced engagement depth in the AI-built interface yields weaker long-run branded-traffic and authority trajectories. Overall, the study provides an integrated evaluation framework and evidence-based implications for AI-driven interface design in high-trust fintech environments. Full article
32 pages, 8110 KB  
Article
A Secure and Efficient Sharing Framework for Student Electronic Academic Records: Integrating Zero-Knowledge Proof and Proxy Re-Encryption
by Xin Li, Minsheng Tan and Wenlong Tian
Future Internet 2026, 18(1), 47; https://doi.org/10.3390/fi18010047 - 12 Jan 2026
Abstract
A sharing framework based on Zero-Knowledge Proof (ZKP) and Proxy Re-encryption (PRE) technologies offers a promising solution for sharing Student Electronic Academic Records (SEARs). As core credentials in the education sector, student records are characterized by strong identity binding, the need for long-term retention, frequent cross-institutional verification, and sensitive information. Compared with electronic health records and government archives, they face more complex security, privacy protection, and storage scalability challenges during sharing. These records not only contain sensitive data such as personal identity and academic performance but also serve as crucial evidence in key scenarios such as further education, employment, and professional title evaluation. Leakage or tampering could have irreversible impacts on a student’s career development. Furthermore, traditional blockchain technology faces storage capacity limitations when storing massive academic records, and existing general electronic record sharing solutions struggle to meet the high-frequency verification demands of educational authorities, universities, and employers for academic data. This study proposes a dedicated sharing framework for students’ electronic academic records, leveraging PRE technology and the distributed ledger characteristics of blockchain to ensure transparency and immutability during sharing. By integrating the InterPlanetary File System (IPFS) with Ethereum Smart Contract (SC), it addresses blockchain storage bottlenecks, enabling secure storage and efficient sharing of academic records. Relying on optimized ZKP technology, it supports verifying the authenticity and integrity of records without revealing sensitive content. 
Furthermore, the introduction of gate circuit merging, constant folding techniques, Field-Programmable Gate Array (FPGA) hardware acceleration, and the efficient Bulletproofs algorithm alleviates the high computational complexity of ZKP, significantly reducing proof generation time. The experimental results demonstrate that the framework, while ensuring strong privacy protection, can meet the cross-scenario sharing needs of student records and significantly improve sharing efficiency and security. Therefore, this method exhibits superior security and performance in privacy-preserving scenarios. This framework can be applied to scenarios such as cross-institutional academic certification, employer background checks, and long-term management of academic records by educational authorities, providing secure and efficient technical support for the sharing of electronic academic credentials in the digital education ecosystem. Full article
64 pages, 13395 KB  
Review
Low-Cost Malware Detection with Artificial Intelligence on Single Board Computers
by Phil Steadman, Paul Jenkins, Rajkumar Singh Rathore and Chaminda Hewage
Future Internet 2026, 18(1), 46; https://doi.org/10.3390/fi18010046 - 12 Jan 2026
Abstract
The proliferation of Internet of Things (IoT) devices has significantly expanded the threat landscape for malicious software (malware), rendering traditional signature-based detection methods increasingly ineffective in coping with the volume and evolving nature of modern threats. In response, researchers are utilising artificial intelligence (AI) for a more dynamic and robust malware detection solution. An innovative approach utilising AI focuses on image classification techniques to detect malware on resource-constrained Single-Board Computers (SBCs) such as the Raspberry Pi. In this method, the conversion of malware binaries into 2D images is examined; these images can be analysed by deep learning models such as convolutional neural networks (CNNs) to classify them as benign or malicious. The results show that the image-based approach demonstrates high efficacy, with many studies reporting detection accuracy rates exceeding 98%. That said, there is a significant challenge in deploying these demanding models on devices with limited processing power and memory, particularly given their computational and time complexity. Overcoming this issue requires critical model optimisation strategies. Successful approaches include the use of lightweight CNN architectures and federated learning, which preserves privacy by training models on decentralised data. This hybrid workflow, in which models are trained on powerful servers before the learnt models are deployed on SBCs, is an emerging approach attracting significant interest in the field of cybersecurity. This paper synthesises the current state of the art, performance compromises, and optimisation techniques, contributing to the understanding of how AI and image representation can enable effective low-cost malware detection on resource-constrained systems. Full article
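The binary-to-image conversion this review surveys can be illustrated minimally: the bytes of a binary are reshaped into a fixed-width grayscale grid. The width of 16 and the zero-padding policy below are common conventions assumed for illustration, not prescribed by the reviewed studies:

```python
def bytes_to_image(data, width=16):
    """Pad the byte string to a multiple of `width`, then reshape it
    into rows of 0-255 pixel values (a grayscale image a CNN can consume).
    Width and zero-padding are illustrative conventions."""
    padded = data + bytes(-len(data) % width)
    return [list(padded[i:i + width]) for i in range(0, len(padded), width)]
```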
44 pages, 1040 KB  
Article
Linearization Strategies for Energy-Aware Optimization of Single-Truck, Multiple-Drone Last-Mile Delivery Systems
by Ornela Gordani, Eglantina Kalluci and Fatos Xhafa
Future Internet 2026, 18(1), 45; https://doi.org/10.3390/fi18010045 - 9 Jan 2026
Abstract
The increasing demand for rapid and sustainable parcel delivery has motivated the exploration of innovative logistics systems that integrate drones with traditional ground vehicles. Among these, the single-truck, multiple-drone last-mile delivery configuration has attracted significant attention due to its potential to reduce both delivery time and environmental impact. However, optimizing such systems remains computationally challenging because of the nonlinear energy consumption behavior of drones, which depends on factors such as payload weight and travel time, among others. This study investigates the energy-aware optimization of truck–drone collaborative delivery systems, with a particular focus on the mixed-integer nonlinear programming (MINLP) formulation and the linearization of drone energy consumption constraints. Building upon prior models proposed in the literature, we analyze the computational complexity of the MINLP and introduce alternative linearization strategies that preserve model accuracy while improving solvability. The resulting mixed-integer linear programming (MILP) formulations are solved using PuLP, a Python optimization library, to evaluate the effect of linearization on computation time and solution quality across diverse problem instance sizes from a literature benchmark. Extensive computational results, obtained by running the solver on a cluster infrastructure, demonstrate that the designed linearization methods can reduce the optimization time of nonlinear solvers by several orders of magnitude without compromising energy estimation accuracy, enabling the model to handle larger problem instances effectively. This performance improvement opens the door to real-time or near-real-time solution of the problem, allowing the delivery system to dynamically react to operational changes and uncertainties during delivery. Full article
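The core idea of linearizing a nonlinear energy term so a MILP solver can handle it can be sketched with a simple piecewise-linear interpolant. The quadratic stand-in for drone energy and the breakpoint grid below are invented for illustration and are not the paper's model:

```python
def piecewise_linear(xs, ys, x):
    """Evaluate the piecewise-linear interpolant through (xs, ys) at x."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 * (1 - t) + y1 * t
    raise ValueError("x outside breakpoint range")

def max_abs_error(f, xs, n=200):
    """Worst sampled gap between f and its linearization over [xs[0], xs[-1]],
    i.e. the approximation error a modeler would trade for solvability."""
    ys = [f(x) for x in xs]
    lo, hi = xs[0], xs[-1]
    pts = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return max(abs(f(p) - piecewise_linear(xs, ys, p)) for p in pts)
```

Tightening the breakpoint grid shrinks the error at the cost of more binary variables in the MILP, which is exactly the accuracy-versus-solvability trade-off the study navigates.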
25 pages, 5868 KB  
Article
Using Traffic Management Approaches to Assess Digital Infrastructure Disruptions: Insights from a Signal Tampering Case Study
by Chrysostomos Mylonas, Apostolos Vouitsis, Evangelos Mitsakis and Konstantinos Kepaptsoglou
Future Internet 2026, 18(1), 44; https://doi.org/10.3390/fi18010044 - 9 Jan 2026
Abstract
This paper introduces a methodological framework for assessing the impacts of digital infrastructure disruptions in urban traffic networks, using traffic signal tampering as a case study. A readily quantifiable indicator, the Average Flow Reduction Metric (AFRM), is proposed to capture network-wide flow reduction based on the principles of the Macroscopic Fundamental Diagram (MFD). The framework is applied to a simulated network under various tampering scenarios and routing behaviors, including fixed, flexible, semi-adaptive, and fully adaptive routing. The results show that AFRM correlates meaningfully with conventional disutility-based network performance metrics and serves as a reliable proxy for network degradation, especially under established or rationally adaptive routing behaviors. Due to its reliance on commonly available traffic measurements, AFRM provides a practical tool for diagnosing and managing digital disruptions in traffic networks. As such, it may support traffic managers in assessing, preparing for, monitoring, and responding to disruptions in real time. Full article
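The abstract defines AFRM only as a network-wide flow-reduction indicator, so the sketch below assumes one plausible reading, the mean relative drop in observed flows against an undisrupted baseline; the paper's exact MFD-based formula may differ:

```python
def afrm(baseline_flows, disrupted_flows):
    """Average relative flow reduction across measured links.
    An assumed reading of the metric, for illustration only."""
    reductions = [(b - d) / b for b, d in zip(baseline_flows, disrupted_flows)]
    return sum(reductions) / len(reductions)
```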
26 pages, 547 KB  
Article
A Two-Stage Multi-Objective Cooperative Optimization Strategy for Computation Offloading in Space–Air–Ground Integrated Networks
by He Ren and Yinghua Tong
Future Internet 2026, 18(1), 43; https://doi.org/10.3390/fi18010043 - 9 Jan 2026
Abstract
With the advancement of 6G networks, terrestrial centralized network architectures are evolving toward integrated space–air–ground network frameworks, imposing higher requirements on the efficiency of computation offloading and multi-objective collaborative optimization. However, existing single-decision strategies in integrated space–air–ground networks find it difficult to achieve coordinated optimization of delay and load balancing under energy tolerance constraints during task offloading. To address this challenge, this paper integrates communication transmission and computation models to design a two-stage computation offloading model and formulates a multi-objective optimization problem under energy tolerance constraints, with the primary objectives of minimizing overall system delay and improving network load balance. To efficiently solve this constrained optimization problem, a two-stage computation offloading solution based on a Hierarchical Cooperative African Vulture Optimization Algorithm (HC-AVOA) is proposed. In the first stage, the task offloading ratio from ground devices to unmanned aerial vehicles (UAVs) is optimized; in the second stage, the task offloading ratio from UAVs to satellites is optimized. Through a hierarchical cooperative decision-making mechanism, dynamic and efficient task allocation is achieved. Simulation results show that the proposed method consistently maintains energy consumption within tolerance and outperforms PSO, WaOA, ABC, and ESOA, reducing average delay and improving load balance, demonstrating its superiority in multi-objective optimization. Full article
14 pages, 498 KB  
Article
Intrusion Detection for Internet of Vehicles CAN Bus Communications Using Machine Learning: An Empirical Study on the CICIoV2024 Dataset
by Hop Le and Izzat Alsmadi
Future Internet 2026, 18(1), 42; https://doi.org/10.3390/fi18010042 - 9 Jan 2026
Abstract
The rapid integration of connectivity and automation in modern vehicles has significantly expanded the attack surface of in-vehicle networks, particularly the Controller Area Network (CAN) bus, which lacks native security mechanisms. This study investigates machine learning-based intrusion detection for Internet of Vehicles (IoV) environments using the CICIoV2024 dataset. Unlike prior studies that rely on highly redundant traffic traces, this work applies strict de-duplication to eliminate repetitive CAN frames, resulting in a dataset of unique attack signatures. To ensure statistical robustness despite the reduced data size, Stratified K-Fold Cross-Validation was employed. Experimental results reveal that while traditional models like Random Forest (optimized with ANOVA feature selection) maintain stability (F1-Macro ≈ 0.64), Deep Learning models fail to generalize (F1-Macro < 0.55) when denied the massive redundancy they typically require. These findings challenge the “near-perfect” detection rates reported in the literature, suggesting that previous benchmarks may reflect data leakage rather than true anomaly detection capabilities. The study concludes that lightweight models offer superior resilience for resource-constrained vehicular environments when evaluated on realistic, non-redundant data. Full article
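The strict de-duplication step the study applies can be illustrated as collapsing repeated frame records to unique signatures while preserving order. The (ID, payload, label) record layout below is an assumption for illustration, not the CICIoV2024 schema:

```python
def deduplicate(frames):
    """Keep only the first occurrence of each unique frame record,
    preserving the original order of first appearance."""
    seen = set()
    unique = []
    for frame in frames:
        key = tuple(frame)          # hashable signature of the record
        if key not in seen:
            seen.add(key)
            unique.append(frame)
    return unique
```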
18 pages, 418 KB  
Article
AnonymAI: An Approach with Differential Privacy and Intelligent Agents for the Automated Anonymization of Sensitive Data
by Marcelo Nascimento Oliveira Soares, Leonardo Barbosa Oliveira, Antonio João Gonçalves Azambuja, Jean Phelipe de Oliveira Lima and Anderson Silva Soares
Future Internet 2026, 18(1), 41; https://doi.org/10.3390/fi18010041 - 9 Jan 2026
Abstract
Data governance for responsible AI systems remains challenged by the lack of automated tools that can apply robust privacy-preserving techniques without destroying analytical value. We propose AnonymAI, a novel methodological framework that integrates LLM-based intelligent agents, the mathematical guarantees of differential privacy, and an automated workflow to generate anonymized datasets for analytical applications. This framework produces data tables with formally verifiable privacy protection, dramatically reducing the need for manual classification and the risk of human error. Focusing on the protection of tabular data containing sensitive personal information, AnonymAI is designed as a generalized, replicable pipeline adaptable to different regulations (e.g., General Data Protection Regulation) and use-case scenarios. The novelty lies in combining the contextual classification capabilities of LLMs with the mathematical rigor of differential privacy, enabling an end-to-end pipeline from raw data to a protected, analysis-ready dataset. The efficiency and formal guarantees of this approach offer significant advantages over conventional anonymization methods, which are often manual, inconsistent, and lack the verifiable protections of differential privacy. Validation studies, covering both controlled experiments on four types of synthetic datasets and broader tests on 19 real-world public tables from various domains, confirmed the applicability of the framework, with the agent-based classifier achieving high overall accuracy in identifying confidential columns. The results demonstrate that the protected data maintains high value for statistical analysis and machine learning models, highlighting AnonymAI’s potential to advance responsible data sharing. This work paves the way for trustworthy and scalable data governance in AI through a rigorously engineered automated anonymization pipeline. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
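The differential-privacy building block the framework relies on can be shown with the textbook Laplace mechanism, which adds noise scaled to sensitivity/epsilon to a numeric query. This is the standard mechanism only, not AnonymAI's full agent-based pipeline:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value + Laplace(0, sensitivity / epsilon) noise,
    the standard epsilon-differentially-private numeric mechanism."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                            # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1 - 2 * abs(u))  # inverse-CDF sample
    return true_value + noise
```

Smaller epsilon means stronger privacy and larger noise, which is the privacy/utility trade-off the validation studies measure.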
28 pages, 2702 KB  
Article
Adaptive and Sustainable Smart Environments Using Predictive Reasoning and Context-Aware Reinforcement Learning
by Abderrahim Lakehal, Boubakeur Annane, Adel Alti, Philippe Roose and Soliman Aljarboa
Future Internet 2026, 18(1), 40; https://doi.org/10.3390/fi18010040 - 8 Jan 2026
Abstract
Smart environments play a key role in improving user comfort, energy efficiency, and sustainability through intelligent automation. Nevertheless, real-world deployments still face major challenges, including network instability, delayed responsiveness, inconsistent AI decisions, and limited adaptability under dynamic conditions. Many existing approaches lack advanced context-awareness, effective multi-agent coordination, and scalable learning, leading to high computational cost and reduced reliability. To address these limitations, this paper proposes MACxRL, a lightweight Multi-Agent Context-Aware Reinforcement Learning framework for autonomous smart-environment control. The system adopts a three-tier architecture consisting of real-time context acquisition, lightweight prediction, and centralized RL-based decision learning. Local agents act quickly at the edge using rule-based reasoning, while a shared CxRL engine refines actions for global coordination, combining fast responsiveness with continuous adaptive learning. Experiments show that MACxRL reduces energy consumption by 45–60%, converges faster, and achieves more stable performance than standard and deep RL baselines. Future work will explore self-adaptive reward tuning and extend deployment to multi-room environments toward practical real-world realization. Full article
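As background for the RL decision layer described above, the generic tabular Q-learning update is sketched below. The two-action space and the state names are placeholders, since the abstract does not specify MACxRL's actual learner, state space, or rewards:

```python
ACTIONS = ("on", "off")   # placeholder action space, not from the paper

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) toward the bootstrapped
    target reward + gamma * max_a' Q(s', a')."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]
```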
25 pages, 2150 KB  
Article
Architecting Multi-Cluster Layer-2 Connectivity for Cloud-Native Network Slicing
by Alex T. de Cock Buning, Ivan Vidal and Francisco Valera
Future Internet 2026, 18(1), 39; https://doi.org/10.3390/fi18010039 - 8 Jan 2026
Abstract
Connecting distributed applications across multiple cloud-native domains is growing in complexity. Applications have become containerized and fragmented across heterogeneous infrastructures, such as public clouds, edge nodes, and private data centers, including emerging IoT-driven environments. Existing networking solutions like CNI plugins and service meshes have proven insufficient for providing isolated, low-latency, and secure multi-cluster communication. By combining SDN control with Kubernetes abstractions, we present L2S-CES, a Kubernetes-native solution for multi-cluster layer-2 network slicing that offers flexible, isolated connectivity for microservices while maintaining performance and automation. In this work, we detail the design and implementation of L2S-CES, outlining its architecture and operational workflow. We experimentally validate L2S-CES against state-of-the-art alternatives and show superior isolation, reduced setup time, native support for broadcast and multicast, and minimal performance overhead. By addressing the current lack of native link-layer networking capabilities across multiple Kubernetes domains, L2S-CES provides a unified and practical foundation for deploying scalable, multi-tenant, and latency-sensitive cloud-native applications. Full article
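The isolation property the abstract describes, in which each slice forms its own layer-2 broadcast domain across clusters, can be illustrated with a toy registry that assigns one segment identifier (e.g. a VXLAN VNI) per slice. This is a sketch under stated assumptions, not the L2S-CES implementation; the class and method names are invented for the example.

```python
# Toy sketch: per-slice layer-2 segment allocation across clusters.
# One segment ID (VNI-like) per slice means tenants in different slices
# never share a broadcast domain, even when co-located on the same cluster.

class SliceRegistry:
    def __init__(self, first_vni=1000):
        self.next_vni = first_vni
        self.slices = {}                 # slice name -> (vni, clusters)

    def create_slice(self, name, clusters):
        vni = self.next_vni              # unique segment per slice
        self.next_vni += 1
        self.slices[name] = (vni, set(clusters))
        return vni

    def same_broadcast_domain(self, slice_a, slice_b):
        # Workloads share link-layer connectivity only within one slice.
        return self.slices[slice_a][0] == self.slices[slice_b][0]

reg = SliceRegistry()
reg.create_slice("tenant-a", ["edge-1", "cloud-1"])
reg.create_slice("tenant-b", ["edge-1"])
print(reg.same_broadcast_domain("tenant-a", "tenant-b"))  # False
```

The design choice mirrored here is that isolation comes from the segment identifier itself rather than from per-flow firewall rules, which is what allows native broadcast and multicast inside a slice.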

22 pages, 2918 KB  
Article
Multi-Attribute Physical-Layer Authentication Against Jamming and Battery-Depletion Attacks in LoRaWAN
by Azita Pourghasem, Raimund Kirner, Athanasios Tsokanos, Iosif Mporas and Alexios Mylonas
Future Internet 2026, 18(1), 38; https://doi.org/10.3390/fi18010038 - 8 Jan 2026
Abstract
LoRaWAN is widely used for IoT environmental monitoring, but its lightweight security mechanisms leave the physical layer vulnerable to availability attacks such as jamming and battery-depletion. These risks are particularly critical in mission-critical environmental monitoring systems. This paper proposes a multi-attribute physical-layer authentication (PLA) framework that supports uplink legitimacy assessment by jointly exploiting radio, energy, and temporal attributes, specifically RSSI, altitude, battery_level, battery_drop_speed, event_step, and time_rank. Using publicly available Brno LoRaWAN traces, we construct a device-aware semi-synthetic dataset comprising 230,296 records from 1921 devices over 13.68 days, augmented with energy, spatial, and temporal attributes and injected with controlled jamming and battery-depletion anomalies. Five classifiers (Random Forest, Multi-Layer Perceptron, XGBoost, Logistic Regression, and K-Nearest Neighbours) are evaluated using accuracy, precision, recall, F1-score, and AUC-ROC. The Multi-Layer Perceptron achieves the strongest detection performance (F1-score = 0.8260, AUC-ROC = 0.8953), with Random Forest performing comparably. Deployment-oriented computational profiling shows that lightweight models such as Logistic Regression and the MLP achieve near-instantaneous prediction latency (below 2 µs per sample) with minimal CPU overhead, while tree-based models incur higher training and storage costs but remain feasible for Network Server-side deployment. Full article
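The multi-attribute idea, in which no single feature decides legitimacy but radio, energy, and temporal attributes are combined, can be sketched as a simple scoring rule over the attributes the abstract names (RSSI, battery_drop_speed, time_rank). This is a toy illustration, not the paper's trained classifiers; the thresholds and field names are illustrative assumptions.

```python
# Toy sketch: multi-attribute uplink legitimacy scoring for LoRaWAN.
# Each attribute contributes independent evidence; the decision combines them.

def legitimacy_score(frame):
    """frame: dict with rssi/expected_rssi (dBm), battery_drop_speed (%/h),
    time_rank/prev_time_rank (per-device event ordering)."""
    score = 0.0
    # Large RSSI deviation from the device's profile hints at jamming/spoofing.
    if abs(frame["rssi"] - frame["expected_rssi"]) > 10:
        score += 1.0
    # Abnormally fast drain is the signature of a battery-depletion attack.
    if frame["battery_drop_speed"] > 2.0:
        score += 1.0
    # Out-of-order event timing suggests replayed or injected uplinks.
    if frame["time_rank"] < frame["prev_time_rank"]:
        score += 1.0
    return score

def is_legitimate(frame, threshold=1.5):
    return legitimacy_score(frame) < threshold

benign = {"rssi": -95, "expected_rssi": -97, "battery_drop_speed": 0.3,
          "time_rank": 12, "prev_time_rank": 11}
attack = {"rssi": -60, "expected_rssi": -97, "battery_drop_speed": 5.0,
          "time_rank": 4, "prev_time_rank": 11}
print(is_legitimate(benign), is_legitimate(attack))  # True False
```

In the paper this combination is learned by classifiers such as an MLP or Random Forest rather than hand-set thresholds, but the feature-fusion structure is the same.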

42 pages, 824 KB  
Article
Leveraging the DAO for Edge-to-Cloud Data Sharing and Availability
by Adnan Imeri, Uwe Roth, Michail Alexandros Kourtis, Andreas Oikonomakis, Achilleas Economopoulos, Lorenzo Fogli, Antonella Cadeddu, Alessandro Bianchini, Daniel Iglesias and Wouter Tavernier
Future Internet 2026, 18(1), 37; https://doi.org/10.3390/fi18010037 - 8 Jan 2026
Abstract
Reliable data availability and transparent governance are fundamental requirements for distributed edge-to-cloud systems that must operate across multiple administrative domains. Conventional cloud-centric architectures centralize control and storage, creating bottlenecks and limiting autonomous collaboration at the network edge. This paper introduces a decentralized governance and service-management framework that leverages Decentralized Autonomous Organizations (DAOs) and Decentralized Applications (DApps) to govern and orchestrate verifiable, tamper-resistant, and continuously accessible data exchange between heterogeneous edge and cloud components. By embedding blockchain-based smart contracts within swarm-enabled edge infrastructures, the approach enables automated decision-making, auditable coordination, and fault-tolerant data sharing without relying on trusted intermediaries. The proposed OASEES framework demonstrates how DAO-driven orchestration can enhance data availability and accountability in real-world scenarios, including energy grid balancing, structural safety monitoring, and predictive maintenance of wind turbines. Results highlight that decentralized governance mechanisms enhance transparency, resilience, and trust, offering a scalable foundation for next-generation edge-to-cloud data ecosystems. Full article
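The core DAO mechanism the abstract relies on, a governed action that executes only once a quorum of member votes approves it, can be shown with a small off-chain toy. This is an illustrative sketch of DAO-style quorum voting, not the OASEES smart contracts; the class, member names, and quorum value are assumptions.

```python
# Toy sketch: quorum-gated governance of a data-sharing action,
# mirroring how an on-chain smart contract would gate the decision.

class Proposal:
    def __init__(self, action, members, quorum=0.5):
        self.action = action
        self.members = set(members)
        self.quorum = quorum
        self.votes = {}              # member -> bool (one vote per member)

    def vote(self, member, approve):
        if member not in self.members:
            raise ValueError("not a DAO member")
        self.votes[member] = approve

    def executable(self):
        # The action fires only when approvals exceed the quorum fraction
        # of ALL members, so non-voters count against the proposal.
        approvals = sum(1 for v in self.votes.values() if v)
        return approvals / len(self.members) > self.quorum

p = Proposal("share_turbine_telemetry",
             members=["edge-a", "edge-b", "cloud-1"])
p.vote("edge-a", True)
p.vote("edge-b", True)
print(p.executable())  # True  (2 of 3 members > 0.5)
```

Counting abstentions against the proposal is one common DAO convention; on-chain variants differ mainly in where the vote tally and execution live (a smart contract instead of a Python object).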
