Topical Advisory Panel applications are now closed. Please contact the Editorial Office with any queries.
Journal Description
Future Internet is an international, peer-reviewed, open access journal on internet technologies and the information society, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, Inspec, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q1 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17 days after submission; accepted papers are published within 3.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Clusters of Network and Communications Technology: Future Internet, IoT, Telecom, Journal of Sensor and Actuator Networks, Network, Signals.
Impact Factor: 3.6 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Efficient Lightweight Image Classification via Coordinate Attention and Channel Pruning for Resource-Constrained Systems
Future Internet 2025, 17(11), 489; https://doi.org/10.3390/fi17110489 (registering DOI) - 25 Oct 2025
Abstract
Image classification is central to computer vision, supporting applications from autonomous driving to medical imaging, yet state-of-the-art convolutional neural networks remain constrained by heavy floating-point operations (FLOPs) and parameter counts on edge devices. To address this accuracy–efficiency trade-off, we propose a unified lightweight framework built on a pruning-aware coordinate attention block (PACB). PACB integrates coordinate attention (CA) with L1-regularized channel pruning, enriching feature representation while enabling structured compression. Applied to MobileNetV3 and RepVGG, the framework achieves substantial efficiency gains. On GTSRB, MobileNetV3 parameters drop from 16.239 M to 9.871 M (–6.37 M) and FLOPs from 11.297 M to 8.552 M (–24.3%), with accuracy improving from 97.09% to 97.37%. For RepVGG, parameters fall from 7.683 M to 7.093 M (–0.59 M) and FLOPs from 31.264 M to 27.918 M (–3.35 M), with only ~0.51% average accuracy loss across CIFAR-10, Fashion-MNIST, and GTSRB. Complexity analysis further confirms PACB does not increase asymptotic order, since the additional CA operations contribute only lightweight lower-order terms. These results demonstrate that coupling CA with structured pruning yields a scalable accuracy–efficiency trade-off under hardware-agnostic metrics, making PACB a promising, deployment-ready solution for mobile and edge applications.
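As a concrete illustration of the two ingredients named in this abstract, the sketch below pairs a coordinate attention block with an L1 penalty on BatchNorm scales, the usual signal for structured channel pruning. This is an illustrative PyTorch reconstruction under common conventions, not the authors' PACB code; the layer shapes and reduction factor are assumptions.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Illustrative coordinate attention: direction-aware pooling along
    H and W, a shared bottleneck, and per-axis sigmoid gates."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        xh = x.mean(dim=3, keepdim=True)                  # (n, c, h, 1)
        xw = x.mean(dim=2, keepdim=True).transpose(2, 3)  # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = y.split([h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                  # row gates
        aw = torch.sigmoid(self.conv_w(yw.transpose(2, 3)))  # column gates
        return x * ah * aw

def bn_l1_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """L1 penalty on BatchNorm scales; channels whose scale is driven
    towards zero become candidates for structured pruning."""
    return lam * sum(m.weight.abs().sum() for m in model.modules()
                     if isinstance(m, nn.BatchNorm2d))
```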
Full article
(This article belongs to the Special Issue Clustered Federated Learning for Networks)
Open Access Article
TRIDENT-DE: Triple-Operator Differential Evolution with Adaptive Restarts and Greedy Refinement
by
Vasileios Charilogis, Ioannis G. Tsoulos and Anna Maria Gianni
Future Internet 2025, 17(11), 488; https://doi.org/10.3390/fi17110488 (registering DOI) - 24 Oct 2025
Abstract
This paper introduces TRIDENT-DE, a novel ensemble-based variant of Differential Evolution (DE) designed to tackle complex continuous global optimization problems. The algorithm leverages three complementary trial vector generation strategies (best/1/bin, current-to-best/1/bin, and pbest/1/bin) executed within a self-adaptive framework that employs jDE parameter control. To prevent stagnation and premature convergence, TRIDENT-DE incorporates adaptive micro-restart mechanisms, which periodically reinitialize a fraction of the population around the elite solution using Gaussian perturbations, thereby sustaining exploration even in rugged landscapes. Additionally, the algorithm integrates a greedy line-refinement operator that accelerates convergence by projecting candidate solutions along promising base-to-trial directions. These mechanisms are coordinated within a mini-batch update scheme, enabling aggressive iteration cycles while preserving diversity in the population. Experimental results across a diverse set of benchmark problems, including molecular potential energy surfaces and engineering design tasks, show that TRIDENT-DE consistently outperforms or matches state-of-the-art optimizers in terms of both best-found and mean performance. The findings highlight the potential of multi-operator, restart-aware DE frameworks as a powerful approach to advancing the state of the art in global optimization.
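For readers who want the operator ensemble in code, the following is a minimal NumPy sketch of one generation combining the three named mutation strategies with binomial crossover and jDE-style self-adaptation of F and CR. It is schematic: the restart, line-refinement, and mini-batch mechanisms are omitted, and the operator-selection rule (uniform random here) is an assumption.

```python
import numpy as np

def trident_de_step(pop, fit, f_obj, F, CR, rng, p=0.1):
    """One generation of a three-operator DE: best/1, current-to-best/1
    and pbest/1 mutations, binomial crossover, greedy selection."""
    n, d = pop.shape
    best = pop[np.argmin(fit)]
    top = pop[np.argsort(fit)[:max(1, int(p * n))]]  # the p-best pool
    for i in range(n):
        # jDE-style self-adaptation: occasionally resample F and CR.
        if rng.random() < 0.1:
            F[i] = 0.1 + 0.9 * rng.random()
        if rng.random() < 0.1:
            CR[i] = rng.random()
        r1, r2 = pop[rng.choice(n, 2, replace=False)]
        op = rng.integers(3)  # uniform operator choice (an assumption)
        if op == 0:    # best/1
            v = best + F[i] * (r1 - r2)
        elif op == 1:  # current-to-best/1
            v = pop[i] + F[i] * (best - pop[i]) + F[i] * (r1 - r2)
        else:          # pbest/1
            v = top[rng.integers(len(top))] + F[i] * (r1 - r2)
        mask = rng.random(d) < CR[i]  # binomial crossover
        mask[rng.integers(d)] = True  # keep at least one trial gene
        u = np.where(mask, v, pop[i])
        fu = f_obj(u)
        if fu <= fit[i]:  # greedy replacement
            pop[i], fit[i] = u, fu
    return pop, fit
```

A driver would initialize pop uniformly in the search box, set per-individual F = 0.5 and CR = 0.9, and call this step until the evaluation budget is spent.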
Full article
Open Access Article
Beyond the Polls: Quantifying Early Signals in Decentralized Prediction Markets with Cross-Correlation and Dynamic Time Warping
by
Francisco Cordoba Otalora and Marinos Themistocleous
Future Internet 2025, 17(11), 487; https://doi.org/10.3390/fi17110487 (registering DOI) - 24 Oct 2025
Abstract
In response to the persistent failures of traditional election polling, this study introduces the Decentralized Prediction Market Voter Framework (DPMVF), a novel tool to empirically test and quantify the predictive capabilities of Decentralized Prediction Markets (DPMs). We apply the DPMVF to Polymarket, analysing over 11 million on-chain transactions from 1 September to 5 November 2024 against aggregated polling in the 2024 U.S. Presidential Election across seven key swing states. By employing the Cross-Correlation Function (CCF) for linear analysis and Dynamic Time Warping (DTW) for non-linear pattern similarity, the framework provides a robust, multi-faceted measure of the lead-lag relationship between market sentiment and public opinion. Results reveal a striking divergence in predictive clarity across different electoral contexts. In highly contested states like Arizona, Nevada, and Pennsylvania, the DPMVF identified statistically significant early signals. Using a non-parametric Permutation Test to validate the observed alignments, we found that Polymarket’s price trends preceded polling shifts by up to 14 days, a finding confirmed as non-spurious with high confidence (p < 0.01) and with exceptionally high correlation (up to 0.988) and shape similarity. At the same time, in states with low polling volatility like North Carolina, the framework correctly diagnosed a weak signal, identifying a “low-signal environment” where the market had no significant polling trend to predict. This study’s primary contribution is a validated, descriptive tool for contextualizing DPM signals. The DPMVF moves beyond a simple “pass/fail” verdict on prediction markets, offering a systematic approach to differentiate between genuine early signals and market noise. It provides a foundational tool for researchers, journalists, and campaigns to understand not only if DPMs are predictive but when and why, thereby offering a more nuanced and reliable path forward in the future of election analysis.
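The two similarity measures named in the abstract are standard and compact enough to sketch. Below is a minimal NumPy version of a lagged cross-correlation scan (positive lag meaning the market series leads the polls) and the classic dynamic-programming DTW distance; windowing, normalization choices, and the permutation test would need to match the paper's protocol.

```python
import numpy as np

def lead_lag_ccf(market: np.ndarray, polls: np.ndarray, max_lag: int = 14):
    """Pearson correlation at each lag; positive lag = market leads."""
    m = (market - market.mean()) / market.std()
    p = (polls - polls.mean()) / polls.std()
    out = {}
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            a, b = m[:len(m) - k] if k else m, p[k:]
        else:
            a, b = m[-k:], p[:k]
        out[k] = float(np.mean(a * b))
    return out

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(n*m) dynamic time warping distance between two series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```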
Full article
Open Access Article
Class-Level Feature Disentanglement for Multi-Label Image Classification
by
Yingduo Tong, Zhenyu Lu, Yize Dong and Yonggang Lu
Future Internet 2025, 17(11), 486; https://doi.org/10.3390/fi17110486 - 23 Oct 2025
Abstract
Generally, the interpretability of deep neural networks is categorized into a priori and a posteriori interpretability. A priori interpretability involves improving model transparency through deliberate design prior to training. Feature disentanglement is a method for achieving a priori interpretability. Existing disentanglement methods mostly focus on semantic features, such as intrinsic and shared features. These methods distinguish between the background and the main subject, but overlook class-level features in images. To address this, we take a further step by advancing feature disentanglement to the class level. For multi-label image classification tasks, we propose a class-level feature disentanglement method. Specifically, we introduce a multi-head classifier within the feature extraction layer of the backbone network to disentangle features. Each head in this classifier corresponds to a specific class and generates independent predictions, thereby guiding the model to better leverage the intrinsic features of each class and improving multi-label classification precision. Experiments demonstrate that our method significantly enhances performance metrics across various benchmarks while simultaneously achieving a priori interpretability.
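A minimal PyTorch sketch of the idea of one classifier head per class is given below; the backbone, head width, and the use of independent sigmoid outputs for multi-label training are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ClassLevelHeads(nn.Module):
    """Shared backbone with one small head per class, so each class
    prediction is guided by its own (disentangled) feature pathway."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, feat_dim // 4),
                          nn.ReLU(inplace=True),
                          nn.Linear(feat_dim // 4, 1))
            for _ in range(num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.backbone(x)  # (batch, feat_dim) pooled features
        return torch.cat([head(z) for head in self.heads], dim=1)

# Multi-label training: one independent sigmoid per class.
# loss = nn.BCEWithLogitsLoss()(model(images), targets.float())
```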
Full article
(This article belongs to the Special Issue Machine Learning Techniques for Computer Vision)
Open Access Review
Understanding Security Vulnerabilities in Private 5G Networks: Insights from a Literature Review
by
Jacinta Fue, Jairo A. Gutierrez and Yezid Donoso
Future Internet 2025, 17(11), 485; https://doi.org/10.3390/fi17110485 - 23 Oct 2025
Abstract
Private fifth generation (5G) networks have emerged as a cornerstone for ultra-reliable, low-latency connectivity across mission-critical domains such as industrial automation, healthcare, and smart cities. Compared to conventional technologies like 4G or Wi-Fi, they provide clear advantages, including enhanced service continuity, higher reliability, and customizable security controls. However, these benefits come with new security challenges, particularly regarding the confidentiality, integrity, and availability of data and services. This article presents a review of security vulnerabilities in private 5G networks. The review pursues four objectives: (i) to identify and categorize key vulnerabilities, (ii) to analyze threats that undermine core security principles, (iii) to evaluate mitigation strategies proposed in the literature, and (iv) to outline gaps that demand further investigation. The findings offer a structured perspective on the evolving threat landscape of private 5G networks, highlighting both well-documented risks and emerging concerns. By mapping vulnerabilities to mitigation approaches and identifying areas where current solutions fall short, this study provides critical insights for researchers, practitioners, and policymakers. Ultimately, the review underscores the urgent need for robust and adaptive security frameworks to ensure the resilience of private 5G deployments in increasingly complex and high-stakes environments.
Full article
(This article belongs to the Special Issue 5G Security: Challenges, Opportunities, and the Road Ahead—2nd Edition)
Open Access Article
SFC-GS: A Multi-Objective Optimization Service Function Chain Scheduling Algorithm Based on Matching Game
by
Shi Kuang, Moshu Niu, Sunan Wang, Haoran Li, Siyuan Liang and Rui Chen
Future Internet 2025, 17(11), 484; https://doi.org/10.3390/fi17110484 - 22 Oct 2025
Abstract
Service Function Chain (SFC) is a framework that dynamically orchestrates Virtual Network Functions (VNFs) and is essential to enhancing resource scheduling efficiency. However, traditional scheduling methods face several limitations, such as low matching efficiency, suboptimal resource utilization, and limited global coordination capabilities. To this end, we propose a multi-objective scheduling algorithm for SFCs based on matching games (SFC-GS). First, a multi-objective cooperative optimization model is established that aims to reduce scheduling time, increase request acceptance rate, lower latency, and minimize resource consumption. Second, a matching model is developed through the construction of preference lists for service nodes and VNFs, followed by multi-round iterative matching. In each round, only the resource status of the current and neighboring nodes is evaluated, thereby reducing computational complexity and improving response speed. Finally, a hierarchical batch processing strategy is introduced, in which service requests are scheduled in priority-based batches, and subsequent allocations are dynamically adjusted based on feedback from previous batches. This establishes a low-overhead iterative optimization mechanism to achieve global resource optimization. Experimental results demonstrate that, compared to baseline methods, SFC-GS improves request acceptance rate and resource utilization by approximately 8%, reduces latency and resource consumption by around 10%, and offers clear advantages in scheduling time.
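The preference-list matching at the core of such an approach resembles deferred acceptance. The following schematic Python sketch has VNFs proposing to capacity-limited service nodes; it illustrates the matching-game mechanics only, assumes complete preference lists, and omits the paper's cost model and batching strategy.

```python
def match_vnfs_to_nodes(vnf_prefs, node_prefs, capacity):
    """Deferred-acceptance style matching between VNFs and service nodes.
    vnf_prefs[v]  : list of nodes ordered by v's preference
    node_prefs[n] : dict vnf -> rank (lower is better)
    capacity[n]   : how many VNFs node n can host
    """
    free = list(vnf_prefs)                   # unmatched VNFs
    next_choice = {v: 0 for v in vnf_prefs}  # next node each VNF proposes to
    hosted = {n: [] for n in node_prefs}
    while free:
        v = free.pop()
        if next_choice[v] >= len(vnf_prefs[v]):
            continue                          # v exhausted its list
        n = vnf_prefs[v][next_choice[v]]
        next_choice[v] += 1
        hosted[n].append(v)
        hosted[n].sort(key=lambda x: node_prefs[n][x])
        if len(hosted[n]) > capacity[n]:
            free.append(hosted[n].pop())      # evict least-preferred VNF
    return hosted
```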
Full article
Open Access Article
Joint Optimization of Container Resource Defragmentation and Task Scheduling in Queueing Cloud Computing: A DRL-Based Approach
by
Yan Guo, Lan Wei, Cunqun Fan, You Ma, Xiangang Zhao and Henghong He
Future Internet 2025, 17(11), 483; https://doi.org/10.3390/fi17110483 - 22 Oct 2025
Abstract
Container-based virtualization has become pivotal in cloud computing, and resource fragmentation is inevitable due to the frequency of container deployment/termination and the heterogeneous nature of IoT tasks. In queuing cloud systems, resource defragmentation and task scheduling are interdependent yet rarely co-optimized in existing research. This paper addresses this gap by investigating the joint optimization of resource defragmentation and task scheduling in a queuing cloud computing system. We first formulate the problem to minimize task completion time and maximize resource utilization, then transform it into an online decision problem. We propose a Deep Reinforcement Learning (DRL)-based two-layer iterative approach called DRL-RDG, which uses a Resource Defragmentation approach based on a Greedy strategy (RDG) to find the optimal container migration solution and a DRL algorithm to learn the optimal task-scheduling solution. Simulation results show that DRL-RDG achieves a low average task completion time and high resource utilization, demonstrating its effectiveness in queuing cloud environments.
Full article
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
Open Access Article
BiTAD: An Interpretable Temporal Anomaly Detector for 5G Networks with TwinLens Explainability
by
Justin Li Ting Lau, Ying Han Pang, Charilaos Zarakovitis, Heng Siong Lim, Dionysis Skordoulis, Shih Yin Ooi, Kah Yoong Chan and Wai Leong Pang
Future Internet 2025, 17(11), 482; https://doi.org/10.3390/fi17110482 - 22 Oct 2025
Abstract
The transition to 5G networks brings unprecedented speed, ultra-low latency, and massive connectivity. Nevertheless, it introduces complex traffic patterns and broader attack surfaces that render traditional intrusion detection systems (IDSs) ineffective. Existing rule-based methods and classical machine learning approaches struggle to capture the temporal and dynamic characteristics of 5G traffic, while many deep learning models lack interpretability, making them unsuitable for high-stakes security environments. To address these challenges, we propose the Bidirectional Temporal Anomaly Detector (BiTAD), a deep temporal learning architecture for anomaly detection in 5G networks. BiTAD leverages dual-direction temporal sequence modelling with attention to encode both past and future dependencies while focusing on critical segments within network sequences. Like many deep models, BiTAD faces interpretability challenges. To resolve its “black-box” nature, a dual-perspective explainability module, coined TwinLens, is proposed. This module integrates SHAP and TimeSHAP to provide global feature attribution and temporal relevance, delivering dual-perspective interpretability. Evaluated on the public 5G-NIDD dataset, BiTAD demonstrates superior detection performance compared to existing models. TwinLens enables transparent insights by identifying which features were most influential to anomaly predictions and when they mattered. By jointly addressing the limitations in temporal modelling and interpretability, our work contributes a practical IDS framework tailored to the demands of next-generation mobile networks.
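As a rough architectural sketch of a bidirectional temporal detector with attention, the PyTorch module below encodes a sequence in both directions, attends over time steps, and emits one anomaly logit per sequence. It is illustrative of the family of models described, not the authors' exact BiTAD network, and the hidden size is an assumption.

```python
import torch
import torch.nn as nn

class BiTemporalDetector(nn.Module):
    """Schematic bidirectional LSTM with temporal attention."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True,
                           bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(x)                      # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        ctx = (w * h).sum(dim=1)                # weighted temporal context
        return self.head(ctx).squeeze(-1)       # anomaly logit per sequence
```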
Full article
(This article belongs to the Special Issue Intrusion Detection and Resiliency in Cyber-Physical Systems and Networks—2nd Edition)
Open Access Review
A Review on Blockchain Sharding for Improving Scalability
by
Mahran Morsidi, Sharul Tajuddin, S. H. Shah Newaz, Ravi Kumar Patchmuthu and Gyu Myoung Lee
Future Internet 2025, 17(10), 481; https://doi.org/10.3390/fi17100481 - 21 Oct 2025
Abstract
Blockchain technology, originally designed as a secure and immutable ledger, has expanded its applications across various domains. However, its scalability remains a fundamental bottleneck, limiting throughput, specifically Transactions Per Second (TPS), and increasing confirmation latency. Among the many proposed solutions, sharding has emerged as a promising Layer 1 approach by partitioning blockchain networks into smaller, parallelized components, significantly enhancing processing efficiency while maintaining decentralization and security. In this paper, we have conducted a systematic literature review, resulting in a comprehensive overview of sharding. We provide a detailed comparative analysis of various sharding approaches and emerging AI-assisted sharding approaches, assessing their effectiveness in improving TPS and reducing latency. Notably, our review is the first to incorporate and examine the standardization efforts of the ITU-T and ETSI, with a particular focus on activities related to blockchain sharding. Integrating these standardization activities allows us to bridge the gap between academic research and practical standardization in blockchain sharding, thereby enhancing the relevance and applicability of our review. Additionally, we highlight the existing research gaps, discuss critical challenges such as security risks and inter-shard communication inefficiencies, and provide insightful future research directions. Our work serves as a foundational reference for researchers and practitioners aiming to optimize blockchain scalability through sharding, contributing to the development of more efficient, secure, and high-performance decentralized networks. Our comparative synthesis further highlights that while Bitcoin and Ethereum remain limited to 7–15 TPS with long confirmation delays, sharding-based systems such as Elastico and OmniLedger have reported significant throughput improvements, demonstrating sharding’s clear advantage over traditional Layer 1 enhancements. In contrast to other state-of-the-art scalability techniques such as block size modification, consensus optimization, and DAG-based architectures, sharding consistently achieves higher transaction throughput and lower latency, indicating its position as one of the most effective Layer 1 solutions for improving blockchain scalability.
Full article
(This article belongs to the Special Issue AI and Blockchain: Synergies, Challenges, and Innovations)
Open Access Article
MambaNet0: Mamba-Based Sustainable Cloud Resource Prediction Framework Towards Net Zero Goals
by
Thananont Chevaphatrakul, Han Wang and Sukhpal Singh Gill
Future Internet 2025, 17(10), 480; https://doi.org/10.3390/fi17100480 - 21 Oct 2025
Abstract
With the ever-growing reliance on cloud computing, efficient resource allocation is crucial for maximising the effective use of provisioned resources from cloud service providers. Proactive resource management is therefore critical for minimising costs and striving for net zero emission goals. One of the most promising methods involves the use of Artificial Intelligence (AI) techniques to analyse and predict resource demand, such as cloud CPU utilisation. This paper presents MambaNet0, a Mamba-based cloud resource prediction framework. The model is implemented on Google’s Vertex AI workbench and uses the real-world Bitbrains Grid Workload Archive-T-12 dataset, which contains the resource usage metrics of 1750 virtual machines. The Mamba model’s performance is then evaluated against established baseline models, including Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM), and Amazon Chronos, to demonstrate its potential for accurate prediction of CPU utilisation. The MambaNet0 model achieved a 29% improvement in Symmetric Mean Absolute Percentage Error (SMAPE) compared to the best-performing baseline, Amazon Chronos. These findings reinforce the Mamba model’s ability to forecast CPU utilisation accurately, highlighting its potential for optimising cloud resource allocation in support of net zero goals.
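SMAPE, the headline metric here, is simple to state precisely. A minimal NumPy implementation (using the common convention that divides by the mean of the two absolute values) is:

```python
import numpy as np

def smape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Symmetric Mean Absolute Percentage Error, in percent."""
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return float(100.0 * np.mean(np.abs(y_pred - y_true) / denom))

# Example: smape(np.array([10.0, 12.0]), np.array([11.0, 12.6])) ≈ 7.2
```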
Full article
Open Access Article
The Paradox of AI Knowledge: A Blockchain-Based Approach to Decentralized Governance in Chinese New Media Industry
by
Jing Wu and Yaoyi Cai
Future Internet 2025, 17(10), 479; https://doi.org/10.3390/fi17100479 - 20 Oct 2025
Abstract
AI text-to-video systems, such as OpenAI’s Sora, promise substantial efficiency gains in media production but also pose risks of biased outputs, opaque optimization, and deceptive content. Using the Orientation–Stimulus–Orientation–Response (O-S-O-R) model, we conduct an empirical study with 209 Chinese new media professionals and employ structural equation modeling to examine how information elaboration relates to AI knowledge, perceptions, and adoption intentions. Our findings reveal a knowledge paradox: higher objective AI knowledge negatively moderates elaboration, suggesting that centralized information ecosystems can misguide even well-informed practitioners. Building on these behavioral insights, we propose a blockchain-based governance framework that operationalizes five mechanisms to enhance oversight and trust while maintaining efficiency: Expert Assessment DAOs, Community Validation DAOs, real-time algorithm monitoring, professional integrity protection, and cross-border coordination. While our study focuses on China’s substantial new media market, the observed patterns and design principles generalize to global contexts. This work contributes empirical grounding for Web3-enabled AI governance, specifies implementable smart-contract patterns for multi-stakeholder validation and incentives, and outlines a research agenda spanning longitudinal, cross-cultural, and implementation studies.
Full article
(This article belongs to the Special Issue Blockchain and Web3: Applications, Challenges and Future Trends—2nd Edition)
Open Access Article
Towards Proactive Domain Name Security: An Adaptive System for .ro Domains Reputation Analysis
by
Carmen Ionela Rotună, Ioan Ștefan Sacală and Adriana Alexandru
Future Internet 2025, 17(10), 478; https://doi.org/10.3390/fi17100478 - 18 Oct 2025
Abstract
In a digital landscape marked by the exponential growth of cyber threats, the development of automated domain reputation systems is extremely important. Emerging technologies such as artificial intelligence and machine learning now enable proactive and scalable approaches to early identification of malicious or suspicious domains. This paper presents an adaptive domain name reputation system that integrates advanced machine learning to enhance cybersecurity resilience. The proposed framework uses domain data from the .ro domain Registry and several other sources (blacklists, whitelists, DNS, SSL certificates), detects anomalies using machine learning techniques, and scores domain security risk levels. A supervised XGBoost model is trained and assessed through five-fold stratified cross-validation and a held-out 80/20 split. On an example dataset of 25,000 domains, the system attains an accuracy of 0.993 and an F1 score of 0.993, and it is exposed through a lightweight Flask service that performs asynchronous feature collection for near real-time scoring. The contribution is a blueprint that links list supervision with registry/DNS/TLS features and deployable inference to support proactive domain abuse mitigation in ccTLD environments.
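The evaluation protocol named in the abstract (XGBoost with five-fold stratified cross-validation) has a direct expression with scikit-learn; the sketch below shows the CV half of it. The hyperparameters and the feature matrix (blacklist flags, DNS and TLS attributes, and so on) are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

def evaluate_domain_model(X: np.ndarray, y: np.ndarray) -> float:
    """Five-fold stratified CV of an XGBoost domain-reputation classifier;
    returns the mean F1 across folds."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        model = XGBClassifier(n_estimators=300, max_depth=6,
                              learning_rate=0.1, eval_metric="logloss")
        model.fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], model.predict(X[test_idx])))
    return float(np.mean(scores))
```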
Full article
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security)
Open Access Article
Uncensored AI in the Wild: Tracking Publicly Available and Locally Deployable LLMs
by
Bahrad A. Sokhansanj
Future Internet 2025, 17(10), 477; https://doi.org/10.3390/fi17100477 - 18 Oct 2025
Abstract
Open-weight generative large language models (LLMs) can be freely downloaded and modified. Yet, little empirical evidence exists on how these models are systematically altered and redistributed. This study provides a large-scale empirical analysis of safety-modified open-weight LLMs, drawing on 8608 model repositories and evaluating 20 representative modified models on unsafe prompts designed to elicit, for example, election disinformation, criminal instruction, and regulatory evasion. This study demonstrates that modified models exhibit substantially higher compliance: while unmodified models complied with an average of only 19.2% of unsafe requests, modified variants complied at an average rate of 80.0%. Modification effectiveness was independent of model size, with smaller, 14-billion-parameter variants sometimes matching or exceeding the compliance levels of 70B-parameter versions. The ecosystem is highly concentrated yet structurally decentralized; for example, the top 5% of providers account for over 60% of downloads and the top 20 for nearly 86%. Moreover, more than half of the identified models use GGUF packaging, optimized for consumer hardware, and 4-bit quantization methods proliferate widely, though full-precision and lossless 16-bit models remain the most downloaded. These findings demonstrate how locally deployable, modified LLMs represent a paradigm shift for Internet safety governance, calling for new regulatory approaches suited to decentralized AI.
Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
Open Access Article
Integrating Large Language Models into Automated Software Testing
by
Yanet Sáez Iznaga, Luís Rato, Pedro Salgueiro and Javier Lamar León
Future Internet 2025, 17(10), 476; https://doi.org/10.3390/fi17100476 - 18 Oct 2025
Abstract
This work investigates the use of LLMs to enhance automation in software testing, with a particular focus on generating high-quality, context-aware test scripts from natural language descriptions, while addressing both text-to-code and text+code-to-code generation tasks. The Codestral Mamba model was fine-tuned through a proposed method for integrating LoRA matrices into its architecture, enabling efficient domain-specific adaptation and positioning Mamba as a viable alternative to Transformer-based models. The model was trained and evaluated on two benchmark datasets: CONCODE/CodeXGLUE and the proprietary TestCase2Code dataset. Through structured prompt engineering, the system was optimized to generate syntactically valid and semantically meaningful code for test cases. Experimental results demonstrate that the proposed methodology successfully enables the automatic generation of code-based test cases using large language models. In addition, this work reports secondary benefits, including improvements in test coverage, automation efficiency, and defect detection when compared to traditional manual approaches. The integration of LLMs into the software testing pipeline also showed potential for reducing time and cost while enhancing developer productivity and software quality. The findings suggest that LLM-driven approaches can be effectively aligned with continuous integration and deployment workflows. This work contributes to the growing body of research on AI-assisted software engineering and offers practical insights into the capabilities and limitations of current LLM technologies for testing automation.
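Attaching LoRA matrices to a pretrained checkpoint is commonly done with the PEFT library; the sketch below shows the general pattern only. The checkpoint path and the target_modules names are placeholders: which projection layers to adapt depends on the specific Mamba block layout, and the paper proposes its own integration method rather than this off-the-shelf recipe.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Placeholder checkpoint name; substitute the actual Codestral Mamba weights.
model = AutoModelForCausalLM.from_pretrained("path/to/codestral-mamba")

# Which linear layers to adapt depends on the Mamba block layout;
# "in_proj"/"out_proj" are typical names but are assumptions here.
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["in_proj", "out_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the low-rank adapter weights train
```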
Full article
Open Access Article
Decentralized Federated Learning for IoT Malware Detection at the Multi-Access Edge: A Two-Tier, Privacy-Preserving Design
by
Mohammed Asiri, Maher A. Khemakhem, Reemah M. Alhebshi, Bassma S. Alsulami and Fathy E. Eassa
Future Internet 2025, 17(10), 475; https://doi.org/10.3390/fi17100475 - 17 Oct 2025
Abstract
Botnet attacks on Internet of Things (IoT) devices are escalating at the 5G/6G multi-access edge, yet most federated learning frameworks for IoT malware detection (FL-IMD) still hinge on a central aggregator, enlarging the attack surface, weakening privacy, and creating a single point of failure. We propose a two-tier, fully decentralized FL architecture aligned with MEC’s Proximal Edge Server (PES)/Supplementary Edge Server (SES) hierarchy. PES nodes train locally and encrypt updates with the Cheon–Kim–Kim–Song (CKKS) scheme; SES nodes verify ECDSA-signed provenance, homomorphically aggregate ciphertexts, and finalize each round via an Algorand-style committee that writes a compact, tamper-evident record (update digests/URIs and a global-model hash) to an append-only ledger. Using the N-BaIoT benchmark with an unsupervised autoencoder, we evaluate known-device and leave-one-device-out regimes against a classical centralized baseline and a cryptographically hardened but server-centric variant. With the heavier CKKS profile, attack sensitivity (TPR) is preserved, and specificity (TNR) declines by only 0.20 percentage points relative to plaintext in both regimes; a lighter profile maintains TPR while trading 3.5–4.8 percentage points of TNR for about 71% smaller payloads. Decentralization adds only a negligible per-round overhead for committee finality, while homomorphic aggregation dominates latency. Overall, our FL-IMD design removes the trusted aggregator and provides verifiable, ledger-backed provenance suitable for trustless MEC deployments.
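Ciphertext aggregation under CKKS, the core privacy mechanism here, can be demonstrated in a few lines with the TenSEAL library. The parameters below are illustrative and are not the paper's "heavier" or "lighter" profiles, and the signature checks, committee logic, and ledger writes are omitted.

```python
import tenseal as ts

# Shared CKKS context; illustrative parameters only.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40

# Each PES encrypts its flattened model update...
updates = [[0.10, -0.02, 0.33], [0.08, 0.01, 0.29], [0.12, -0.05, 0.31]]
ciphertexts = [ts.ckks_vector(ctx, u) for u in updates]

# ...and an SES aggregates ciphertexts without seeing any plaintext.
agg = ciphertexts[0]
for c in ciphertexts[1:]:
    agg = agg + c

# Only the holder of the secret key recovers the averaged global update.
mean_update = [v / len(updates) for v in agg.decrypt()]
```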
Full article
(This article belongs to the Special Issue Edge-Cloud Computing and Federated-Split Learning in Internet of Things—Second Edition)
Open Access Article
Bi-Scale Mahalanobis Detection for Reactive Jamming in UAV OFDM Links
by
Nassim Aich, Zakarya Oubrahim, Hachem Ait Talount and Ahmed Abbou
Future Internet 2025, 17(10), 474; https://doi.org/10.3390/fi17100474 - 17 Oct 2025
Abstract
Reactive jamming remains a critical threat to low-latency telemetry of Unmanned Aerial Vehicles (UAVs) using Orthogonal Frequency Division Multiplexing (OFDM). In this paper, a Bi-scale Mahalanobis approach is proposed to detect and classify reactive jamming attacks on UAVs; it jointly exploits window-level energy and the Sevcik fractal dimension and employs self-adapting thresholds to track drift in additive white Gaussian noise (AWGN), fading effects, or Radio Frequency (RF) gain. The simulations were conducted on 5000 frames of OFDM signals, which were distorted by Rayleigh fading, a ±10 kHz frequency drift, and log-normal power shadowing. The simulation results achieved a precision of 99.4%, a recall of 100%, an F1 score of 99.7%, an area under the receiver operating characteristic curve (AUC) of 0.9997, and a mean alarm latency of 80 μs. The method reinforces jamming resilience in low-power commercial UAVs while requiring no extra RF hardware and avoiding heavy deep learning computation.
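Both window-level features named in the abstract are short to compute. Below is a NumPy sketch of the Sevcik fractal dimension (normalize the waveform to the unit square and relate its curve length to a dimension estimate) and a Mahalanobis score of feature windows against a jam-free reference set; the self-adapting thresholds are omitted.

```python
import numpy as np

def sevcik_fd(x: np.ndarray) -> float:
    """Sevcik fractal dimension of a 1-D signal."""
    n = len(x)
    y = (x - x.min()) / (x.max() - x.min() + 1e-12)  # amplitude -> [0, 1]
    t = np.linspace(0.0, 1.0, n)                     # time -> [0, 1]
    L = np.sum(np.hypot(np.diff(t), np.diff(y)))     # normalized curve length
    return 1.0 + np.log(L) / np.log(2.0 * (n - 1))

def mahalanobis_score(feats: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Mahalanobis distance of feature vectors (energy, fractal dim, ...)
    from a jam-free reference window set."""
    mu = ref.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))
    d = feats - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))
```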
Full article
(This article belongs to the Special Issue Intelligent IoT and Wireless Communication)
Open Access Article
QL-AODV: Q-Learning-Enhanced Multi-Path Routing Protocol for 6G-Enabled Autonomous Aerial Vehicle Networks
by
Abdelhamied A. Ateya, Nguyen Duc Tu, Ammar Muthanna, Andrey Koucheryavy, Dmitry Kozyrev and János Sztrik
Future Internet 2025, 17(10), 473; https://doi.org/10.3390/fi17100473 - 16 Oct 2025
Abstract
With the arrival of sixth-generation (6G) wireless systems comes radical potential for the deployment of autonomous aerial vehicle (AAV) swarms in mission-critical applications, ranging from disaster rescue to intelligent transportation. However, 6G-supporting AAV environments present challenges such as dynamic three-dimensional topologies, highly restrictive energy constraints, and extremely low latency demands, which substantially degrade the efficiency of conventional routing protocols. To this end, this work presents a Q-learning-enhanced ad hoc on-demand distance vector protocol (QL-AODV). This intelligent routing protocol uses reinforcement learning within the AODV protocol to support adaptive, data-driven route selection in highly dynamic aerial networks. QL-AODV offers four novelties: a multipath route set collection methodology that retains up to ten candidate routes for each destination using an extended route reply (RREP) waiting mechanism; a more detailed RREP message format with cumulative node buffer usage, enabling informed decision-making; a normalized 3D state space model recording hop count, average buffer occupancy, and peak buffer saturation, tailored to aerial network dynamics; and a lightweight distributed Q-learning approach at the source node that uses an ε-greedy policy to balance exploration and exploitation. Large-scale simulations conducted with NS-3.34 across various node densities and mobility conditions confirm the superior performance of QL-AODV compared to conventional AODV. In high-mobility environments, QL-AODV offers up to a 9.8% improvement in packet delivery ratio and up to a 12.1% increase in throughput, while remaining persistently scalable across network sizes. The results show that QL-AODV is a reliable, scalable, and intelligent routing method for next-generation AAV networks operating in the demanding environments expected for 6G.
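The source-side learning loop reduces to familiar tabular Q-learning. The sketch below shows ε-greedy selection over a candidate route set and the one-step update; representing a route by a hashable key and collapsing the 3D state into per-route values are simplifications of the scheme described, not the paper's exact formulation.

```python
import random

def choose_route(q_table: dict, routes: list, epsilon: float = 0.1):
    """epsilon-greedy selection over the candidate route set
    (up to ten routes per destination, per the RREP collection)."""
    if random.random() < epsilon:
        return random.choice(routes)  # explore
    return max(routes, key=lambda r: q_table.get(r, 0.0))  # exploit

def update_q(q_table: dict, route, reward: float,
             alpha: float = 0.3, gamma: float = 0.9,
             next_best: float = 0.0) -> None:
    """One-step Q-learning update after observing delivery feedback."""
    old = q_table.get(route, 0.0)
    q_table[route] = old + alpha * (reward + gamma * next_best - old)

# A route key could encode the normalized state, e.g.
# route = ("dst42", hops, round(avg_buf, 2), round(peak_buf, 2))
```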
Full article
(This article belongs to the Special Issue Moving Towards 6G Wireless Technologies—2nd Edition)
Open Access Article
Security Analysis and Designing Advanced Two-Party Lattice-Based Authenticated Key Establishment and Key Transport Protocols for Mobile Communication
by
Mani Rajendran, Dharminder Chaudhary, S. A. Lakshmanan and Cheng-Chi Lee
Future Internet 2025, 17(10), 472; https://doi.org/10.3390/fi17100472 - 16 Oct 2025
Abstract
In this paper, we propose a two-party authenticated key establishment (AKE) protocol and an authenticated key transport protocol based on lattice-based cryptography, aiming to provide security against quantum attacks for secure communication. These protocols enable two parties, who may share long-term public keys, to securely establish a shared session key, or to transport a session key from the server, while achieving mutual authentication. Our construction leverages the hardness of the Ring Learning With Errors (Ring-LWE) lattice problem, ensuring robustness against quantum and classical adversaries. Unlike traditional schemes, whose security depends upon number-theoretic assumptions that are vulnerable to quantum attacks, our protocol ensures security in the post-quantum era. The proposed AKE protocol ensures forward secrecy, providing security even if the long-term key is compromised, and also guarantees key freshness and resistance against man-in-the-middle, impersonation, replay, and key mismatch attacks. The proposed key transport protocol additionally provides anonymity alongside these properties. A two-party key transport protocol is a cryptographic protocol in which one party (typically a trusted key distribution center or sender) securely generates and sends a session key to another party; unlike key exchange protocols, where both parties contribute to key generation, key transport relies on one party to generate the key and deliver it securely. The protocols require a minimum number of exchanged messages and reduce the number of communication rounds, helping to minimize communication overhead.
Full article
Open Access Article
Explainable AI-Based Semantic Retrieval from an Expert-Curated Oncology Knowledge Graph for Clinical Decision Support
by
Sameer Mushtaq, Marcello Trovati and Nik Bessis
Future Internet 2025, 17(10), 471; https://doi.org/10.3390/fi17100471 - 16 Oct 2025
Abstract
The modern oncology landscape is characterised by a deluge of high-dimensional data from genomic sequencing, medical imaging, and electronic health records, negatively impacting the analytical capacity of clinicians and health practitioners. This field is not new, and it has drawn significant attention from the research community. However, one of the main limiting issues is the data itself: despite the vast amount of available data, most of it lacks scalability, quality, and semantic information. This work is motivated by the data platform provided by OncoProAI, an AI-driven clinical decision support platform designed to address this challenge by enabling highly personalised, precision cancer care. The platform is built on a comprehensive knowledge graph, formally modelled as a directed acyclic graph, which has been manually populated, assessed, and maintained to provide a unique data ecosystem. This enables targeted and bespoke information extraction and assessment.
Full article
(This article belongs to the Special Issue Artificial Intelligence for Smart Healthcare: Methods, Applications, and Challenges)
Open Access Article
Multifractality and Its Sources in the Digital Currency Market
by
Stanisław Drożdż, Robert Kluszczyński, Jarosław Kwapień and Marcin Wątorek
Future Internet 2025, 17(10), 470; https://doi.org/10.3390/fi17100470 - 13 Oct 2025
Abstract
Multifractality in time series analysis characterizes the presence of multiple scaling exponents, indicating heterogeneous temporal structures and complex dynamical behaviors beyond simple monofractal models. In the context of digital currency markets, multifractal properties arise due to the interplay of long-range temporal correlations and heavy-tailed distributions of returns, reflecting intricate market microstructure and trader interactions. Incorporating multifractal analysis into the modeling of cryptocurrency price dynamics enhances the understanding of market inefficiencies. It may also improve volatility forecasting and facilitate the detection of critical transitions or regime shifts. Based on the multifractal cross-correlation analysis (MFCCA), whose special case is the multifractal detrended fluctuation analysis (MFDFA), the most commonly used practical tools for quantifying multifractality, we applied a recently proposed method of disentangling sources of multifractality in time series to the most representative instruments from the digital market. They include Bitcoin (BTC), Ethereum (ETH), decentralized exchanges (DEX) and non-fungible tokens (NFT). The results indicate the significant role of heavy tails in generating a broad multifractal spectrum. However, they also clearly demonstrate that the primary source of multifractality encompasses the temporal correlations in the series, and without them, multifractality fades out. It appears characteristic that these temporal correlations, to a large extent, do not depend on the thickness of the tails of the fluctuation distribution. These observations, made here in the context of the digital currency market, provide a further strong argument for the validity of the proposed methodology of disentangling sources of multifractality in time series.
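For readers unfamiliar with the machinery, the q-th order fluctuation functions at the heart of MFDFA can be sketched compactly in NumPy; slopes of log F_q(s) versus log s give the generalized Hurst exponents h(q), whose q-dependence signals multifractality. The scale grid, polynomial order, and the bidirectional segmentation used in practice are simplified here.

```python
import numpy as np

def mfdfa_fq(x: np.ndarray, scales, q_list, order: int = 2):
    """Multifractal DFA fluctuation functions F_q(s): build the profile,
    detrend it in windows of size s with a polynomial of given order,
    and aggregate window variances with q-th order moments."""
    profile = np.cumsum(x - np.mean(x))
    F = np.zeros((len(q_list), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        var = []
        for k in range(n_seg):
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            var.append(np.mean((seg - trend) ** 2))
        var = np.asarray(var)
        for i, q in enumerate(q_list):
            if q == 0:  # q -> 0 limit uses a logarithmic average
                F[i, j] = np.exp(0.5 * np.mean(np.log(var)))
            else:
                F[i, j] = np.mean(var ** (q / 2.0)) ** (1.0 / q)
    return F  # h(q) is the slope of log F_q(s) vs log s
```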
Full article
Journal Menu
- Future Internet Home
- Aims & Scope
- Editorial Board
- Reviewer Board
- Topical Advisory Panel
- Instructions for Authors
- Special Issues
- Topics
- Sections & Collections
- Article Processing Charge
- Indexing & Archiving
- Editor’s Choice Articles
- Most Cited & Viewed
- Journal Statistics
- Journal History
- Journal Awards
- Conferences
- Editorial Office
Journal Browser
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in
Education Sciences, Future Internet, Information, Sustainability
Advances in Online and Distance Learning
Topic Editors: Neil Gordon, Han Reichgelt
Deadline: 31 December 2025
Topic in
Applied Sciences, Electronics, Future Internet, IoT, Technologies, Inventions, Sensors, Vehicles
Next-Generation IoT and Smart Systems for Communication and Sensing
Topic Editors: Dinh-Thuan Do, Vitor Fialho, Luis Pires, Francisco Rego, Ricardo Santos, Vasco Velez
Deadline: 31 January 2026
Topic in
Entropy, Future Internet, Healthcare, Sensors, Data
Communications Challenges in Health and Well-Being, 2nd Edition
Topic Editors: Dragana Bajic, Konstantinos Katzis, Gordana Gardasevic
Deadline: 28 February 2026
Topic in
AI, Computers, Education Sciences, Societies, Future Internet, Technologies
AI Trends in Teacher and Student Training
Topic Editors: José Fernández-Cerero, Marta Montenegro-Rueda
Deadline: 11 March 2026
Conferences
Special Issues
Special Issue in
Future Internet
Digital Twins in Next-Generation IoT Networks
Guest Editors: Junhui Jiang, Yu Zhao, Mengmeng Yu, Dongwoo Kim
Deadline: 25 October 2025
Special Issue in
Future Internet
Internet of Things and Cyber-Physical Systems, 3rd Edition
Guest Editor: Iwona Grobelna
Deadline: 30 October 2025
Special Issue in
Future Internet
IoT–Edge–Cloud Computing and Decentralized Applications for Smart Cities
Guest Editors: Antonis Litke, Rodger Lea, Takuro Yonezawa
Deadline: 30 October 2025
Special Issue in
Future Internet
Intelligent Decision Support Systems and Prediction Models in IoT-Based Scenarios
Guest Editors: Mario Casillo, Marco Lombardi, Domenico Santaniello
Deadline: 31 October 2025
Topical Collections
Topical Collection in
Future Internet
Information Systems Security
Collection Editor: Luis Javier Garcia Villalba
Topical Collection in
Future Internet
Innovative People-Centered Solutions Applied to Industries, Cities and Societies
Collection Editors: Dino Giuli, Filipe Portela
Topical Collection in
Future Internet
Computer Vision, Deep Learning and Machine Learning with Applications
Collection Editors: Remus Brad, Arpad Gellert
Topical Collection in
Future Internet
5G/6G Networks for the Internet of Things: Communication Technologies and Challenges
Collection Editor: Sachin Sharma