Future Internet, Volume 17, Issue 9 (September 2025) – 39 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
40 pages, 2568 KB  
Review
Intelligent Edge Computing and Machine Learning: A Survey of Optimization and Applications
by Sebastián A. Cajas Ordóñez, Jaydeep Samanta, Andrés L. Suárez-Cetrulo and Ricardo Simón Carbajo
Future Internet 2025, 17(9), 417; https://doi.org/10.3390/fi17090417 - 11 Sep 2025
Abstract
Intelligent edge machine learning has emerged as a paradigm for deploying smart applications across resource-constrained devices in next-generation network infrastructures. This survey addresses the critical challenges of implementing machine learning models on edge devices within distributed network environments, including computational limitations, memory constraints, and energy-efficiency requirements for real-time intelligent inference. We provide a comprehensive analysis of soft computing optimization strategies essential for intelligent edge deployment, systematically examining model compression techniques including pruning, quantization methods, knowledge distillation, and low-rank decomposition approaches. The survey explores intelligent MLOps frameworks tailored for network edge environments, addressing continuous model adaptation, monitoring under data drift, and federated learning for distributed intelligence while preserving privacy in next-generation networks. Our work covers practical applications across intelligent smart agriculture, energy management, healthcare, and industrial monitoring within network infrastructures, highlighting domain-specific challenges and emerging solutions. We analyze specialized hardware architectures, cloud offloading strategies, and distributed learning approaches that enable intelligent edge computing in heterogeneous network environments. The survey identifies critical research gaps in multimodal model deployment, streaming learning under concept drift, and integration of soft computing techniques with intelligent edge orchestration frameworks for network applications. These gaps directly manifest as open challenges in balancing computational efficiency with model robustness due to limited multimodal optimization techniques, developing sustainable intelligent edge AI systems arising from inadequate streaming learning adaptation, and creating adaptive network applications for dynamic environments resulting from insufficient soft computing integration. This comprehensive roadmap synthesizes current intelligent edge machine learning solutions with emerging soft computing approaches, providing researchers and practitioners with insights for developing next-generation intelligent edge computing systems that leverage machine learning capabilities in distributed network infrastructures.
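As a concrete illustration of two of the compression techniques the survey examines, the sketch below applies magnitude pruning and post-training dynamic quantization to a toy model with PyTorch's built-in utilities; the model and the 50% sparsity level are illustrative choices, not drawn from the survey.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for an edge-deployed network.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Magnitude pruning: zero the 50% smallest-magnitude weights of a layer,
# then make the pruning permanent so the mask is folded into the tensor.
prune.l1_unstructured(model[0], name="weight", amount=0.5)
prune.remove(model[0], "weight")

# Post-training dynamic quantization: store Linear weights as int8 and
# quantize activations on the fly, shrinking memory and speeding inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```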

19 pages, 609 KB  
Article
Prompt-Driven and Kubernetes Error Report-Aware Container Orchestration
by Niklas Beuter, André Drews and Nane Kratzke
Future Internet 2025, 17(9), 416; https://doi.org/10.3390/fi17090416 - 11 Sep 2025
Abstract
Background: Container orchestration systems like Kubernetes rely heavily on declarative manifest files, which serve as orchestration blueprints. However, managing these manifest files is often complex and requires substantial DevOps expertise. Methodology: This study investigates the use of Large Language Models (LLMs) to automate the creation of Kubernetes manifest files from natural language specifications, utilizing prompt engineering techniques within an innovative error- and warning-report-aware refinement process. We assess the capabilities of these LLMs using Zero-Shot, Few-Shot, Prompt-Chaining, and Self-Refine methods to address DevOps needs and support fully automated deployment pipelines. Results: Our findings show that LLMs can generate Kubernetes manifests with varying levels of manual intervention. Notably, GPT-4 and GPT-3.5 demonstrate strong potential for deployment automation. Interestingly, smaller models sometimes outperform larger ones, challenging the assumption that larger models always yield better results. Conclusions: This research highlights the crucial impact of prompt engineering on LLM performance for Kubernetes tasks and recommends further exploration of prompt techniques and model comparisons, outlining a promising path for integrating LLMs into automated deployment workflows.
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
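A minimal sketch of the error-report-aware refinement idea described above: generate a manifest, validate it with a server-side dry run, and feed kubectl's error report back into the next prompt. `llm_generate` is a hypothetical stand-in for whichever chat-completion API is used; the loop bound and prompts are illustrative.

```python
import subprocess
from typing import Callable

def refine_manifest(spec: str, llm_generate: Callable[[str], str],
                    max_rounds: int = 3) -> str:
    """llm_generate: any prompt -> completion function (hypothetical stand-in)."""
    manifest = llm_generate(f"Write a Kubernetes manifest for: {spec}")
    for _ in range(max_rounds):
        # Server-side dry run surfaces the same errors/warnings a real apply would.
        result = subprocess.run(
            ["kubectl", "apply", "--dry-run=server", "-f", "-"],
            input=manifest, capture_output=True, text=True,
        )
        if result.returncode == 0:
            break  # the cluster accepted the manifest
        manifest = llm_generate(
            f"Fix this manifest:\n{manifest}\nkubectl reported:\n{result.stderr}"
        )
    return manifest
```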

28 pages, 3252 KB  
Article
Toward Secure SDN Infrastructure in Smart Cities: Kafka-Enabled Machine Learning Framework for Anomaly Detection
by Gayathri Karthick, Glenford Mapp and Jon Crowcroft
Future Internet 2025, 17(9), 415; https://doi.org/10.3390/fi17090415 - 11 Sep 2025
Abstract
As smart cities evolve, the demand for real-time, secure, and adaptive network monitoring continues to grow. Software-Defined Networking (SDN) offers a centralized approach to managing network flows; however, anomaly detection within SDN environments remains a significant challenge, particularly at the intelligent edge. This paper presents a conceptual Kafka-enabled ML framework for scalable, real-time analytics in SDN environments, supported by offline evaluation and a prototype streaming demonstration. A range of supervised ML models covering traditional methods and ensemble approaches (Random Forest, Linear Regression, and XGBoost) were trained and validated using the InSDN intrusion detection dataset. These models were tested against multiple cyber threats, including botnets, DoS, DDoS, network reconnaissance, brute-force, and web attacks, achieving up to 99% accuracy for ensemble classifiers under offline conditions. A Dockerized prototype demonstrates Kafka’s role in offline data ingestion, processing, and visualization through PostgreSQL and Grafana. While full ML pipeline integration into Kafka remains part of future work, the proposed architecture establishes a foundation for secure and intelligent Software-Defined Vehicular Networking (SDVN) infrastructure in smart cities.
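A minimal sketch of the offline-train, stream-publish split the prototype describes: a Random Forest is fitted on labeled flow features, and per-flow verdicts are published to a Kafka topic for downstream storage and dashboards. The placeholder training data, topic name, and broker address are assumptions.

```python
import json
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from kafka import KafkaProducer  # pip install kafka-python

# Offline: train on labeled flow features (random placeholders stand in
# for the InSDN feature columns here).
X_train = np.random.rand(1000, 10)
y_train = np.random.randint(0, 2, 1000)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# Online: publish classifications to a Kafka topic for the dashboard layer.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
flow = np.random.rand(1, 10)
producer.send("sdn-anomalies", {"label": int(clf.predict(flow)[0])})
producer.flush()
```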

22 pages, 2537 KB  
Article
GraphRAG-Enhanced Dialogue Engine for Domain-Specific Question Answering: A Case Study on the Civil IoT Taiwan Platform
by Hui-Hung Yu, Wei-Tsun Lin, Chih-Wei Kuan, Chao-Chi Yang and Kuan-Min Liao
Future Internet 2025, 17(9), 414; https://doi.org/10.3390/fi17090414 - 10 Sep 2025
Abstract
The proliferation of sensor technology has led to an explosion in data volume, making the retrieval of specific information from large repositories increasingly challenging. While Retrieval-Augmented Generation (RAG) can enhance Large Language Models (LLMs), such systems often lack precision in specialized domains. Taking the Civil IoT Taiwan Data Service Platform as a case study, this study addresses this gap by developing a dialogue engine enhanced with a GraphRAG framework, aiming to provide accurate, context-aware responses to user queries. Our method involves constructing a domain-specific knowledge graph by extracting entities (e.g., ‘Dataset’, ‘Agency’) and their relationships from the platform’s documentation. For query processing, the system interprets natural language inputs, identifies corresponding paths within the knowledge graph, and employs a recursive self-reflection mechanism to ensure the final answer aligns with the user’s intent. The final answer is transformed into natural language using the TAIDE (Trustworthy AI Dialogue Engine) model. The implemented framework successfully translates complex, multi-constraint questions into executable graph queries, moving beyond keyword matching to navigate semantic pathways. This results in highly accurate and verifiable answers grounded in the source data. In conclusion, this research validates that applying a GraphRAG-enhanced engine is a robust solution for building intelligent dialogue systems for specialized data platforms, significantly improving the precision and usability of information retrieval and offering a replicable model for other knowledge-intensive domains.
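A minimal sketch of the knowledge-graph side of the approach: entities such as ‘Dataset’ and ‘Agency’ become typed nodes, and a query walks labeled edges rather than matching keywords. The node names and the relation label are illustrative, not taken from the platform's documentation.

```python
import networkx as nx

kg = nx.DiGraph()
kg.add_node("Water Level Sensors", type="Dataset")
kg.add_node("Water Resources Agency", type="Agency")
kg.add_edge("Water Resources Agency", "Water Level Sensors", relation="publishes")

def datasets_published_by(agency: str):
    # Follow only "publishes" edges that end at a Dataset node.
    return [
        dst for _, dst, attrs in kg.out_edges(agency, data=True)
        if attrs.get("relation") == "publishes" and kg.nodes[dst]["type"] == "Dataset"
    ]

print(datasets_published_by("Water Resources Agency"))
```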

19 pages, 9954 KB  
Article
Improved Generation of Drawing Sequences Using Variational and Skip-Connected Deep Networks for a Drawing Support System
by Atomu Nakamura, Homari Matsumoto, Koharu Chiba and Shun Nishide
Future Internet 2025, 17(9), 413; https://doi.org/10.3390/fi17090413 - 10 Sep 2025
Abstract
This study presents a deep generative model designed to predict intermediate stages in the drawing process of character illustrations. To enhance generalization and robustness, the model integrates a variational bottleneck based on the Variational Autoencoder (VAE) and employs Gaussian noise augmentation during training. We also investigate the effect of U-Net-style skip connections, which allow for the direct propagation of low-level features, on autoregressive sequence generation. Comparative experiments with baseline models demonstrate that the proposed VAE with noise augmentation outperforms both CNN- and RNN-based baselines in long-term stability and visual fidelity. While skip connections improve local detail retention, they also introduce instability in extended sequences, suggesting a trade-off between spatial precision and temporal coherence. The findings highlight the advantages of probabilistic modeling and data augmentation for sequential image generation and provide practical insights for designing intelligent drawing support systems.
(This article belongs to the Special Issue Intelligent Agents and Their Application)
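A minimal PyTorch sketch of the variational bottleneck with Gaussian noise augmentation described above; the layer sizes, noise level, and KL weighting are illustrative rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class VAEBottleneck(nn.Module):
    def __init__(self, dim=256, latent=64):
        super().__init__()
        self.to_mu = nn.Linear(dim, latent)
        self.to_logvar = nn.Linear(dim, latent)

    def forward(self, h, noise_std=0.1):
        h = h + noise_std * torch.randn_like(h)  # Gaussian noise augmentation
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl  # z feeds the decoder; kl joins the reconstruction loss

z, kl = VAEBottleneck()(torch.randn(8, 256))
print(z.shape, float(kl))
```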

29 pages, 734 KB  
Article
GPT-4.1 Sets the Standard in Automated Experiment Design Using Novel Python Libraries
by Nuno Fachada, Daniel Fernandes, Carlos M. Fernandes, Bruno D. Ferreira-Saraiva and João P. Matos-Carvalho
Future Internet 2025, 17(9), 412; https://doi.org/10.3390/fi17090412 - 8 Sep 2025
Abstract
Large language models (LLMs) have advanced rapidly as tools for automating code generation in scientific research, yet their ability to interpret and use unfamiliar Python APIs for complex computational experiments remains poorly characterized. This study systematically benchmarks a selection of state-of-the-art LLMs in generating functional Python code for two increasingly challenging scenarios: conversational data analysis with the ParShift library, and synthetic data generation and clustering using pyclugen and scikit-learn. Both experiments use structured, zero-shot prompts specifying detailed requirements but omitting in-context examples. Model outputs are evaluated quantitatively for functional correctness and prompt compliance over multiple runs, and qualitatively by analyzing the errors produced when code execution fails. Results show that only a small subset of models consistently generate correct, executable code. GPT-4.1 achieved a 100% success rate across all runs in both experimental tasks, whereas most other models succeeded in fewer than half of the runs, with only Grok-3 and Mistral-Large approaching comparable performance. In addition to benchmarking LLM performance, this approach helps identify shortcomings in third-party libraries, such as unclear documentation or obscure implementation bugs. Overall, these findings highlight current limitations of LLMs for end-to-end scientific automation and emphasize the need for careful prompt design, comprehensive library documentation, and continued advances in language model capabilities.
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
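A minimal sketch of the kind of harness such a benchmark needs: execute each generated script in a subprocess, count runs that exit cleanly, and surface the final error line when they do not. The script path, run count, and timeout are illustrative assumptions.

```python
import subprocess

def success_rate(script_path: str, runs: int = 10, timeout: int = 120) -> float:
    ok = 0
    for _ in range(runs):
        try:
            result = subprocess.run(
                ["python", script_path],
                capture_output=True, text=True, timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            continue  # a hang counts as a failed run
        if result.returncode == 0:
            ok += 1
        elif result.stderr:
            print(result.stderr.splitlines()[-1])  # last line names the exception
    return ok / runs

print(success_rate("generated_experiment.py"))  # hypothetical generated script
```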

25 pages, 5281 KB  
Article
Detection and Mitigation in IoT Ecosystems Using oneM2M Architecture and Edge-Based Machine Learning
by Yu-Yong Luo, Yu-Hsun Chiu and Chia-Hsin Cheng
Future Internet 2025, 17(9), 411; https://doi.org/10.3390/fi17090411 - 8 Sep 2025
Abstract
Distributed denial-of-service (DDoS) attacks are a prevalent threat to resource-constrained IoT deployments. We present an edge-based detection and mitigation system integrated with the oneM2M architecture. Using a Raspberry Pi 4 client and five Raspberry Pi 3 attack nodes in a smart-home testbed, we collected 200,000 packets with 19 features across four traffic states (normal, SYN/UDP/ICMP floods), trained Decision Tree, 2D-CNN, and LSTM models, and deployed the best model on an edge computer for real-time inference. The edge node classifies traffic and triggers per-attack defenses on the device (SYN cookies, UDP/ICMP iptables rules). On a held-out test set, the 2D-CNN achieved 98.45% accuracy, outperforming the LSTM (96.14%) and Decision Tree (93.77%). In end-to-end trials, the system sustained service during SYN floods (time to capture 200 packets increased from 5.05 s to 5.51 s after enabling SYN cookies), mitigated ICMP floods via rate limiting, and flagged UDP floods for administrator intervention due to residual performance degradation. These results show that lightweight, edge-deployed learning with targeted controls can harden oneM2M-based IoT systems against common DDoS vectors.
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
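A minimal sketch of the per-attack defenses named above, triggered from the edge node through standard Linux tooling (requires root). The specific rate limit and rule ordering are assumptions; the paper's exact rules are not reproduced here.

```python
import subprocess

def mitigate(attack: str) -> None:
    if attack == "syn_flood":
        # SYN cookies let the kernel answer handshakes without queueing state.
        subprocess.run(["sysctl", "-w", "net.ipv4.tcp_syncookies=1"], check=True)
    elif attack == "icmp_flood":
        # Accept at most one echo request per second, drop the rest.
        subprocess.run([
            "iptables", "-A", "INPUT", "-p", "icmp",
            "--icmp-type", "echo-request",
            "-m", "limit", "--limit", "1/second", "-j", "ACCEPT",
        ], check=True)
        subprocess.run([
            "iptables", "-A", "INPUT", "-p", "icmp",
            "--icmp-type", "echo-request", "-j", "DROP",
        ], check=True)
```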

24 pages, 664 KB  
Article
Fuzzy-Based MEC-Assisted Video Adaptation Framework for HTTP Adaptive Streaming
by Waqas ur Rahman
Future Internet 2025, 17(9), 410; https://doi.org/10.3390/fi17090410 - 8 Sep 2025
Abstract
As the demand for high-quality video streaming applications continues to rise, multi-access edge computing (MEC)-assisted streaming schemes have emerged as a viable solution within the context of HTTP adaptive streaming (HAS). These schemes aim to enhance both quality of experience (QoE) and utilization of network resources. HAS faces a significant challenge when applied to mobile cellular networks: designing a scheme that fairly allocates bitrates to users, ensures a high QoE, and optimizes bandwidth utilization. To this end, we designed an MEC- and client-assisted adaptation framework for HAS, facilitating collaboration between the edge and client to enhance users’ quality of experience. The proposed framework employs fuzzy logic at the user end to determine the upper limit for the video streaming rate. On the MEC side, we developed an integer nonlinear programming (INLP) optimization model that collectively enhances the QoE of video clients by considering the upper limit set by the client. Due to the NP-hardness of the problem, we utilized a greedy algorithm to efficiently solve the quality adaptation optimization problem. The results demonstrate that, compared to state-of-the-art approaches, the proposed framework on average (i) improves users’ QoE by 30%, (ii) improves fairness of bitrate allocation by 22.6%, and (iii) enhances network utilization by 4.2%. In addition, the proposed approach prevents playback interruptions regardless of the client’s buffer size and video segment duration.
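A minimal hand-rolled sketch of the client-side fuzzy step: map buffer level and measured throughput to an upper limit on the requested bitrate. The membership functions and rule weights are illustrative, not the paper's.

```python
def shoulder(x, a, b):
    """Membership rising linearly from 0 at a to 1 at b, clamped to [0, 1]."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def upper_bitrate_mbps(buffer_s, throughput_mbps):
    low_buf = 1.0 - shoulder(buffer_s, 0.0, 15.0)   # "buffer is low"
    high_buf = shoulder(buffer_s, 10.0, 30.0)       # "buffer is high"
    # Rules: low buffer -> cap below measured throughput (drain protection);
    #        high buffer -> allow requesting above it (quality upgrade).
    conservative = low_buf * 0.8 * throughput_mbps
    aggressive = high_buf * 1.2 * throughput_mbps
    return (conservative + aggressive) / (low_buf + high_buf)  # defuzzify

print(upper_bitrate_mbps(buffer_s=20.0, throughput_mbps=8.0))  # -> 9.6
```
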
25 pages, 693 KB  
Review
Survey of Federated Learning for Cyber Threat Intelligence in Industrial IoT: Techniques, Applications and Deployment Models
by Abin Kumbalapalliyil Tom, Ansam Khraisat, Tony Jan, Md Whaiduzzaman, Thien D. Nguyen and Ammar Alazab
Future Internet 2025, 17(9), 409; https://doi.org/10.3390/fi17090409 - 8 Sep 2025
Abstract
The Industrial Internet of Things (IIoT) is transforming industrial operations through connected devices and real-time automation but also introduces significant cybersecurity risks. Cyber threat intelligence (CTI) is critical for detecting and mitigating such threats, yet traditional centralized CTI approaches face limitations in latency, scalability, and data privacy. Federated learning (FL) offers a privacy-preserving alternative by enabling decentralized model training without sharing raw data. This survey explores how FL can enhance CTI in IIoT environments. It reviews FL architectures, orchestration strategies, and aggregation methods, and maps their applications to domains such as intrusion detection, malware analysis, botnet mitigation, anomaly detection, and trust management. Among its contributions is an empirical synthesis comparing FL aggregation strategies—including FedAvg, FedProx, Krum, ClippedAvg, and Multi-Krum—across accuracy, robustness, and efficiency under IIoT constraints. The paper also presents a taxonomy of FL-based CTI approaches and outlines future research directions to support the development of secure, scalable, and decentralized threat intelligence systems for industrial ecosystems.
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)
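A minimal numpy sketch of FedAvg, the baseline aggregation strategy in the survey's comparison: clients' model weights are averaged, weighted by local dataset size. The toy two-client setup is illustrative.

```python
import numpy as np

def fedavg(client_weights: list[list[np.ndarray]], n_samples: list[int]):
    """Average per-layer weights, weighted by each client's sample count."""
    total = sum(n_samples)
    return [
        sum(n / total * client[layer]
            for n, client in zip(n_samples, client_weights))
        for layer in range(len(client_weights[0]))
    ]

# Two clients, one weight matrix each:
clients = [[np.ones((2, 2))], [np.zeros((2, 2))]]
print(fedavg(clients, n_samples=[30, 10])[0])  # -> 0.75 everywhere
```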

38 pages, 790 KB  
Article
A GHZ-Based Protocol for the Dining Information Brokers Problem
by Theodore Andronikos, Constantinos Bitsakos, Konstantinos Nikas, Georgios I. Goumas and Nectarios Koziris
Future Internet 2025, 17(9), 408; https://doi.org/10.3390/fi17090408 - 6 Sep 2025
Abstract
This article introduces the innovative Quantum Dining Information Brokers Problem, presenting a novel entanglement-based quantum protocol to address it. The scenario involves n information brokers, all located in distinct geographical regions, engaging in a metaphorical virtual dinner. The objective is for each broker to share a unique piece of information with all the others simultaneously. Unlike previous approaches, this protocol enables a fully parallel, single-step communication exchange among all the brokers, regardless of their physical locations. A key feature of this protocol is its ability to ensure that both the anonymity and privacy of all the participants are preserved, meaning that no broker can discern the identity of the sender of any received information. At its core, the Quantum Dining Information Brokers Problem serves as a conceptual framework for achieving anonymous, untraceable, and massively parallel information exchange in a distributed system. The proposed protocol introduces three significant advancements. First, while quantum protocols for one-to-many simultaneous information transmission have been developed, this is, to the best of our knowledge, one of the first quantum protocols to facilitate many-to-many simultaneous information exchange. Second, it guarantees complete anonymity and untraceability for all senders, a critical improvement over sequential applications of one-to-many protocols, which fail to ensure such robust anonymity. Third, leveraging quantum entanglement, the protocol operates in a fully distributed manner, accommodating brokers in diverse spatial locations. This approach marks a substantial advancement in secure, scalable, and anonymous communication, with potential applications in distributed environments where privacy and parallelism are paramount.
(This article belongs to the Special Issue Advanced 5G and Beyond Networks)
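A minimal Qiskit sketch of preparing the n-qubit GHZ state on which this family of entanglement-based protocols is built; the protocol's information-encoding and measurement steps are beyond this fragment, and n = 3 is an illustrative choice.

```python
from qiskit import QuantumCircuit

n = 3
qc = QuantumCircuit(n)
qc.h(0)             # put the first qubit in superposition
for i in range(1, n):
    qc.cx(0, i)     # entangle the rest: (|00...0> + |11...1>)/sqrt(2)
qc.measure_all()
print(qc)
```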

20 pages, 1328 KB  
Article
From Divergence to Alignment: Evaluating the Role of Large Language Models in Facilitating Agreement Through Adaptive Strategies
by Loukas Triantafyllopoulos and Dimitris Kalles
Future Internet 2025, 17(9), 407; https://doi.org/10.3390/fi17090407 - 6 Sep 2025
Abstract
Achieving consensus in group decision-making often involves overcoming significant challenges, particularly reconciling diverse perspectives and mitigating biases that hinder agreement. Traditional methods relying on human facilitators are limited in scalability and efficiency, especially in large-scale, fast-paced discussions. To address these challenges, this study proposes a novel real-time facilitation framework, employing large language models (LLMs) as automated facilitators within a custom-built multi-user chat system. This framework is distinguished by its real-time adaptive system architecture, which enables dynamic adjustments to facilitation strategies based on ongoing discussion dynamics. Leveraging cosine similarity as a core metric, this approach evaluates the ability of three state-of-the-art LLMs—ChatGPT 4.0, Mistral Large 2, and AI21 Jamba-Instruct—to synthesize consensus proposals that align with participants’ viewpoints. Unlike conventional techniques, the system integrates adaptive facilitation strategies, including clarifying misunderstandings, summarizing discussions, and proposing compromises, enabling the LLMs to iteratively refine consensus proposals based on user feedback. Experimental results indicate that ChatGPT 4.0 achieved the highest alignment with participant opinions and required fewer iterations to reach consensus. A one-way ANOVA confirmed that differences in performance between models were statistically significant. Moreover, descriptive analyses revealed nuanced differences in model behavior across various sustainability-focused discussion topics, including climate action, quality education, good health and well-being, and access to clean water and sanitation. These findings highlight the promise of LLM-driven facilitation for improving collective decision-making processes and underscore the need for further research into robust evaluation metrics, ethical considerations, and cross-cultural adaptability.
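A minimal sketch of the core metric: mean cosine similarity between embeddings of participant opinions and a candidate consensus proposal. The embedding step is assumed to come from any sentence-embedding model and is not shown.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def alignment(opinion_vecs: list[np.ndarray], proposal_vec: np.ndarray) -> float:
    """Average similarity of each participant's opinion to the proposal."""
    return float(np.mean([cosine(v, proposal_vec) for v in opinion_vecs]))

opinions = [np.random.rand(384) for _ in range(5)]  # placeholder embeddings
print(alignment(opinions, np.random.rand(384)))
```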

21 pages, 6118 KB  
Article
3D Spatial Path Planning Based on Improved Particle Swarm Optimization
by Junxia Ma, Zixu Yang and Ming Chen
Future Internet 2025, 17(9), 406; https://doi.org/10.3390/fi17090406 - 5 Sep 2025
Abstract
Three-dimensional path planning is critical for the successful operation of unmanned aerial vehicles (UAVs), automated guided vehicles (AGVs), and robots in industrial Internet of Things (IIoT) applications. In 3D path planning, the standard Particle Swarm Optimization (PSO) algorithm suffers from premature convergence and a tendency to fall into local optima, leading to significant deviations from the optimal path. This paper proposes an improved PSO (IPSO) algorithm that enhances particle diversity and randomness through the introduction of logistic chaotic mapping, while employing dynamic learning factors and nonlinear inertia weights to improve global search capability. Experimental results demonstrate that IPSO outperforms traditional methods in terms of path length and computational efficiency, showing potential for real-time path planning in complex environments.
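A minimal sketch of two IPSO ingredients named above: logistic chaotic mapping to diversify the initial swarm, and a nonlinear inertia weight schedule. The iteration counts and coefficients are illustrative, not the paper's exact values.

```python
import numpy as np

def logistic_chaos_init(n_particles, dims, r=4.0, warmup=50):
    """Spread initial particles via the logistic map (chaotic at r = 4)."""
    x = np.random.rand(n_particles, dims)
    for _ in range(warmup):
        x = r * x * (1.0 - x)
    return x  # values in (0, 1); scale to the 3D search space as needed

def inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Nonlinear decay: broad exploration early, fine exploitation late."""
    return w_min + (w_max - w_min) * (1.0 - (t / t_max) ** 2)

positions = logistic_chaos_init(30, 3)
print(positions.shape, inertia(50, 200))
```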

28 pages, 15259 KB  
Article
1D-CNN-Based Performance Prediction in IRS-Enabled IoT Networks for 6G Autonomous Vehicle Applications
by Radwa Ahmed Osman
Future Internet 2025, 17(9), 405; https://doi.org/10.3390/fi17090405 - 5 Sep 2025
Abstract
To foster the performance of wireless communication while saving energy, the integration of Intelligent Reflecting Surfaces (IRS) into autonomous vehicle (AV) communication networks is considered a powerful technique. This paper proposes a novel IRS-assisted vehicular communication model that combines Lagrange optimization and Gradient-Based Phase Optimization to determine the optimal transmission power, optimal interference transmission power, and IRS phase shifts. The proposed model helps increase the Signal-to-Interference-plus-Noise Ratio (SINR) by utilizing the IRS, which maximizes energy efficiency and the achievable data rate under a variety of environmental conditions while guaranteeing that resource limits are satisfied. To represent dense vehicular environments, practical constraints, such as IRS reflection efficiency and interference from multiple sources, namely Device-to-Device (D2D), Vehicle-to-Vehicle (V2V), Vehicle-to-Base Station (V2B), and Cellular User Equipment (CUE) links, have been incorporated into the system model. A Lagrangian optimization approach has been implemented to determine the required transmission and interference power levels and the best IRS phase designs in order to enhance system performance. A one-dimensional convolutional neural network is then trained on the optimized data produced by this framework. This deep learning model learns to predict the required optimal IRS settings quickly, allowing for real-time adaptation in dynamic wireless environments. The simulation results show that the combined optimization and prediction strategy considerably enhances system reliability and energy efficiency over baseline techniques. This study lays a solid foundation for implementing IRS-assisted AV networks in real-world settings, hence facilitating the development of next-generation vehicular communication systems that are both performance-driven and energy-efficient.
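A minimal PyTorch sketch of a 1D-CNN of the kind described, mapping a channel/interference feature vector to predicted IRS settings; the channel counts, input length, and output size (here eight phase shifts) are illustrative.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),  # collapse the sequence dimension
    nn.Flatten(),
    nn.Linear(32, 8),         # e.g., predicted phase shifts for 8 IRS elements
)

x = torch.randn(4, 1, 64)     # batch of 64-sample feature vectors
print(model(x).shape)         # -> torch.Size([4, 8])
```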

50 pages, 2360 KB  
Review
The Rise of Agentic AI: A Review of Definitions, Frameworks, Architectures, Applications, Evaluation Metrics, and Challenges
by Ajay Bandi, Bhavani Kongari, Roshini Naguru, Sahitya Pasnoor and Sri Vidya Vilipala
Future Internet 2025, 17(9), 404; https://doi.org/10.3390/fi17090404 - 4 Sep 2025
Abstract
Agentic AI systems are a recently emerged and important approach that goes beyond traditional AI, generative AI, and autonomous systems by focusing on autonomy, adaptability, and goal-driven reasoning. This study provides a clear review of agentic AI systems by bringing together their definitions, frameworks, and architectures, and by comparing them with related areas like generative AI, autonomic computing, and multi-agent systems. To do this, we reviewed 143 primary studies on current LLM-based and non-LLM-driven agentic systems and examined how they support planning, memory, reflection, and goal pursuit. Furthermore, we classified architectural models, input–output mechanisms, and applications by the task domains where agentic AI is applied, supported by tabular summaries that highlight real-world case studies. Evaluation metrics were classified into qualitative and quantitative measures, alongside available methods for testing the performance and reliability of agentic AI systems. This study also highlights the main challenges and limitations of agentic AI, covering technical, architectural, coordination, ethical, and security issues. By organizing the conceptual foundations, available tools, architectures, and evaluation metrics, this research defines a structured foundation for understanding and advancing agentic AI. These findings aim to help researchers and developers build better, clearer, and more adaptable systems that support responsible deployment in different domains.

26 pages, 4880 KB  
Article
Cell-Sequence-Based Covert Signal for Tor De-Anonymization Attacks
by Ran Xin, Yapeng Wang, Xiaohong Huang, Xu Yang and Sio Kei Im
Future Internet 2025, 17(9), 403; https://doi.org/10.3390/fi17090403 - 4 Sep 2025
Abstract
This research introduces a novel de-anonymization technique targeting the Tor network, addressing limitations in prior attack models, particularly concerning router positioning following the introduction of bridge relays. Our method exploits two specific, inherent protocol-level vulnerabilities: the absence of a continuity check for circuit-level cells and anomalous residual values in RELAY_EARLY cell counters, working by manipulating cell headers to embed a covert signal. This signal is composed of reserved fields, start and end delimiters, and a payload that encodes target identifiers. Using this signal, malicious routers can effectively mark data flows for later identification. These routers employ a finite state machine (FSM) to adaptively switch between signal injection and detection. Experimental evaluations, conducted within a controlled environment using attacker-controlled onion routers, demonstrated that the embedded signals are undetectable by standard Tor routers, cause no noticeable performance degradation, and allow reliable correlation of Tor users with public services and deanonymization of hidden service IP addresses. This work reveals a fundamental design trade-off in Tor: the decision to conceal circuit length inadvertently exposes cell transmission characteristics. This creates a bidirectional vector for stealthy, protocol-level de-anonymization attacks, even though Tor payloads remain encrypted.

21 pages, 1293 KB  
Article
Dynamic Resource Management in 5G-Enabled Smart Elderly Care Using Deep Reinforcement Learning
by Krishnapriya V. Shaji, Srilakshmi S. Rethy, Simi Surendran, Livya George, Namita Suresh and Hrishika Dayan
Future Internet 2025, 17(9), 402; https://doi.org/10.3390/fi17090402 - 2 Sep 2025
Abstract
The increasing elderly population presents major challenges to traditional healthcare due to the need for continuous care, a shortage of skilled professionals, and increasing medical costs. To address this, smart elderly care homes where multiple residents live with the support of caregivers and IoT-based assistive technologies have emerged as a promising solution. For their effective operation, a reliable high speed network like 5G is essential, along with intelligent resource allocation to ensure efficient service delivery. This study proposes a deep reinforcement learning (DRL)-based resource management framework for smart elderly homes, formulated as a Markov decision process. The framework dynamically allocates computing and network resources in response to real-time application demands and system constraints. We implement and compare two DRL algorithms, emphasizing their strengths in optimizing edge utilization and throughput. System performance is evaluated across balanced, high-demand, and resource-constrained scenarios. The results demonstrate that the proposed DRL approach effectively learns adaptive resource management policies, making it a promising solution for next-generation intelligent elderly care environments.

19 pages, 6002 KB  
Article
UAV Deployment Design Under Incomplete Information with a Connectivity Constraint for UAV-Assisted Networks
by Takumi Sakamoto, Tomotaka Kimura and Kouji Hirata
Future Internet 2025, 17(9), 401; https://doi.org/10.3390/fi17090401 - 2 Sep 2025
Abstract
In this paper, we introduce an Unmanned Aerial Vehicle (UAV) deployment design with a connectivity constraint for UAV-assisted communication networks. In such networks, multiple UAVs are collaboratively deployed in the air to form a network that realizes efficient relay communications from ground mobile clients to the base station. We consider a scenario where ground clients are widely distributed in a target area, with their population significantly outnumbering available UAVs. The goal is to enable UAVs to collect and relay all client data to the base station by continuously moving while preserving end-to-end connectivity with the base station. To achieve this, we propose two dynamic UAV deployment methods: genetic algorithm-based and modified ε-greedy algorithm-based methods. These methods are designed to efficiently collect data from mobile clients while maintaining UAV connectivity, based solely on local information about nearby client positions. Through numerical experiments, we demonstrate that the proposed methods dynamically form UAV-assisted networks to efficiently and rapidly collect client data transmitted to the base station.

32 pages, 808 KB  
Article
Real-Time Detection and Mitigation Strategies for Newly Appearing DDoS Profiles
by Peter Orosz, Balazs Nagy and Pal Varga
Future Internet 2025, 17(9), 400; https://doi.org/10.3390/fi17090400 - 1 Sep 2025
Abstract
The recent worldwide turbulence, from the pandemic lockdown through increased industrial digitization to geopolitical unease, has shifted the primary targets of the latest generation of DDoS threats. Although certain characteristics of current DDoS attack patterns existed before the pandemic or the cloud platform boom, they have now gained prominence and reached their current level of sophistication. In addition to employing innovative methods and tools, the frequency, scale, and complexity of these attacks have also experienced a significant surge. The amalgamation of diverse attack vectors has paved the way for multi-vector attacks, incorporating a distinctive combination of L3–L7 attacking profiles. The integration of the hit-and-run strategy with the multi-vector approach has notably bolstered the success rate. This paper centers on two main aspects. Firstly, it explores the characteristics of the most recent DDoS attacks identified within actual data center infrastructures. To underscore the changes in attack profiles, we reference samples collected recently from diverse data center networks. Secondly, it offers an extensive overview of the cutting-edge methods and techniques for detecting and mitigating recent attacks. The paper places particular emphasis on the precision and speed of these detection and mitigation approaches, predominantly those related to networking. Additionally, we establish criteria, both quantitative and qualitative, to aid in the development of detection methods capable of addressing the latest threat profiles.
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)

3 pages, 128 KB  
Editorial
IoT Security: Threat Detection, Analysis, and Defense
by Olivier Markowitch and Jean-Michel Dricot
Future Internet 2025, 17(9), 399; https://doi.org/10.3390/fi17090399 - 30 Aug 2025
Abstract
In recent years, the rapid growth of Internet of Things (IoT) technologies has created numerous opportunities across fields such as smart cities, transportation, energy, and healthcare [...]
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
21 pages, 2482 KB  
Article
SwiftKV: A Metadata Indexing Scheme Integrating LSM-Tree and Learned Index for Distributed KV Stores
by Zhenfei Wang, Jianxun Feng, Longxiang Dun, Ziliang Bao and Chunfeng Du
Future Internet 2025, 17(9), 398; https://doi.org/10.3390/fi17090398 - 30 Aug 2025
Viewed by 311
Abstract
Optimizing metadata indexing remains critical for enhancing distributed file system performance. The traditional Log-Structured Merge-Tree (LSM-Tree) architecture, while effective for write-intensive operations, exhibits significant limitations when handling massive metadata workloads, particularly manifesting as suboptimal read performance and substantial indexing overhead. Although existing learned indexes perform well on read-only workloads, they struggle to support modifications such as inserts and updates effectively. This paper proposes SwiftKV, a novel metadata indexing scheme that combines LSM-Trees and learned indexes to address these issues. Firstly, SwiftKV employs a dynamic partition strategy to narrow the metadata search range. Secondly, a two-level learned index block, consisting of Greedy Piecewise Linear Regression (Greedy-PLR) and Linear Regression (LR) models, replaces the typical Sorted String Table (SSTable) index block for faster location prediction than binary search. Thirdly, SwiftKV incorporates a load-aware construction mechanism and parallel optimization to minimize training overhead and enhance efficiency. This work bridges the gap between LSM-Trees’ write efficiency and learned indexes’ query performance, offering a scalable and high-performance solution for modern distributed file systems. We implement a prototype of SwiftKV based on RocksDB. The experimental results show that it reduces the memory usage of index blocks by 30.06% and read latency by 1.19×–1.60× without affecting write performance. Furthermore, SwiftKV’s two-level learned index achieves a 15.13% reduction in query latency and a 44.03% reduction in memory overhead compared to a single-level model. For all YCSB workloads, SwiftKV outperforms other schemes.
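A minimal sketch of the learned-index idea behind SwiftKV: fit a model from key to position in a sorted array, predict a location, and correct within a precomputed error bound. The two-level Greedy-PLR/LR structure is collapsed into a single linear model here, and the key distribution is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
keys = np.sort(rng.integers(0, 10**9, size=10_000))
pos = np.arange(len(keys))

slope, intercept = np.polyfit(keys, pos, deg=1)                 # the LR model
err = int(np.max(np.abs(slope * keys + intercept - pos))) + 1   # error bound

def lookup(k):
    guess = int(slope * k + intercept)
    lo = max(0, guess - err)
    hi = min(len(keys), guess + err + 1)
    i = lo + int(np.searchsorted(keys[lo:hi], k))  # bounded local search
    return i if i < len(keys) and keys[i] == k else None

print(lookup(int(keys[123])))  # index of that key (123 barring duplicates)
```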

28 pages, 1711 KB  
Article
Identifying Literary Microgenres and Writing Style Differences in Romanian Novels with ReaderBench and Large Language Models
by Aura Cristina Udrea, Stefan Ruseti, Vlad Pojoga, Stefan Baghiu, Andrei Terian and Mihai Dascalu
Future Internet 2025, 17(9), 397; https://doi.org/10.3390/fi17090397 - 30 Aug 2025
Abstract
Recent developments in natural language processing, particularly large language models (LLMs), create new opportunities for literary analysis in underexplored languages like Romanian. This study investigates stylistic heterogeneity and genre blending in 175 late 19th- and early 20th-century Romanian novels, each classified by literary historians into one of 17 genres. Our findings reveal that most novels do not adhere to a single genre label but instead combine elements of multiple (micro)genres, challenging traditional single-label classification approaches. We employed a dual computational methodology, combining analysis based on Romanian-tailored linguistic features with general-purpose LLMs. ReaderBench, a Romanian-specific framework, was utilized to extract surface, syntactic, semantic, and discourse features, capturing fine-grained linguistic patterns. In parallel, we prompted two LLMs (Llama3.3 70B and DeepSeek-R1 70B) to predict genres at the paragraph level, leveraging their ability to detect contextual and thematic coherence across multiple narrative scales. Statistical analyses using Kruskal–Wallis and Mann–Whitney tests identified genre-defining features at both novel and chapter levels. The integration of these complementary approaches enhances microgenre detection beyond traditional classification capabilities. ReaderBench provides quantifiable linguistic evidence, while LLMs capture broader contextual patterns; together, they provide a multi-layered perspective on literary genre that reflects the complex and heterogeneous character of fictional texts. Our results argue that both language-specific and general-purpose computational tools can effectively detect stylistic diversity in Romanian fiction, opening new avenues for computational literary analysis in low-resource languages.
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
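A minimal SciPy sketch of the statistical step: a Kruskal–Wallis omnibus test over a linguistic feature grouped by genre, followed by a pairwise Mann–Whitney comparison. The genre names and feature values are illustrative placeholders.

```python
from scipy.stats import kruskal, mannwhitneyu

# One linguistic feature (e.g., a discourse-cohesion score) per novel, by genre.
feature_by_genre = {
    "rural": [0.42, 0.38, 0.45, 0.40],
    "sentimental": [0.55, 0.60, 0.52, 0.58],
    "adventure": [0.47, 0.49, 0.44, 0.50],
}

h, p = kruskal(*feature_by_genre.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")
if p < 0.05:  # follow up pairwise where the omnibus test rejects
    u, p_pair = mannwhitneyu(feature_by_genre["rural"],
                             feature_by_genre["sentimental"])
    print(f"rural vs sentimental: U={u}, p={p_pair:.4f}")
```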

28 pages, 5322 KB  
Review
Reinforcement Learning in Medical Imaging: Taxonomy, LLMs, and Clinical Challenges
by A. B. M. Kamrul Islam Riad, Md. Abdul Barek, Hossain Shahriar, Guillermo Francia III and Sheikh Iqbal Ahamed
Future Internet 2025, 17(9), 396; https://doi.org/10.3390/fi17090396 - 30 Aug 2025
Abstract
Reinforcement learning (RL) is increasingly used in medical imaging for segmentation, detection, registration, and classification. This survey provides a comprehensive overview of RL techniques applied in this domain, categorizing the literature by clinical task, imaging modality, learning paradigm, and algorithmic design. We introduce a unified taxonomy that supports reproducibility, highlights design guidance, and identifies underexplored intersections. Furthermore, we examine the integration of Large Language Models (LLMs) for automation and interpretability, and discuss privacy-preserving extensions using Differential Privacy (DP) and Federated Learning (FL). Finally, we address deployment challenges and outline future research directions toward trustworthy and scalable medical RL systems.

23 pages, 5508 KB  
Article
From CSI to Coordinates: An IoT-Driven Testbed for Individual Indoor Localization
by Diana Macedo, Miguel Loureiro, Óscar G. Martins, Joana Coutinho Sousa, David Belo and Marco Gomes
Future Internet 2025, 17(9), 395; https://doi.org/10.3390/fi17090395 - 30 Aug 2025
Abstract
Indoor wireless networks face increasing challenges in maintaining stable coverage and performance, particularly with the widespread use of high-frequency Wi-Fi and growing demands from smart home devices. Traditional methods to improve signal quality, such as adding access points, often fall short in dynamic environments where user movement and physical obstructions affect signal behavior. In this work, we propose a system that leverages existing Internet of Things (IoT) devices to perform real-time user localization and network adaptation using fine-grained Channel State Information (CSI) and Received Signal Strength Indicator (RSSI) measurements. We deploy multiple ESP-32 microcontroller-based receivers in fixed positions to capture wireless signal characteristics and process them through a pipeline that includes filtering, segmentation, and feature extraction. Using supervised machine learning, we accurately predict the user’s location within a defined indoor grid. Our system achieves over 82% accuracy in a realistic laboratory setting and shows improved performance when excluding redundant sensors. The results demonstrate the potential of communication-based sensing to enhance both user tracking and wireless connectivity without requiring additional infrastructure.
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems, 2nd Edition)
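A minimal scikit-learn sketch of the supervised step: per-window CSI/RSSI feature vectors in, grid-cell labels out. The random placeholder features, grid size, and model choice are assumptions standing in for the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(600, 40)        # per-window CSI/RSSI feature vectors
y = np.random.randint(0, 9, 600)   # labels for a 3x3 grid of locations

model = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=200))
print(cross_val_score(model, X, y, cv=5).mean())  # chance level on random data
```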

27 pages, 5936 KB  
Article
Elasticsearch-Based Threat Hunting to Detect Privilege Escalation Using Registry Modification and Process Injection Attacks
by Akashdeep Bhardwaj, Luxmi Sapra and Shawon Rahman
Future Internet 2025, 17(9), 394; https://doi.org/10.3390/fi17090394 - 29 Aug 2025
Abstract
Malicious actors often exploit persistence mechanisms, such as unauthorized modifications to Windows startup directories or registry keys, to achieve privilege escalation and maintain access on compromised systems. While information technology (IT) teams legitimately use these AutoStart Extension Points (ASEPs), adversaries frequently deploy malicious binaries with non-standard naming conventions or execute files from transient directories (e.g., Temp or Public folders). This study proposes a threat-hunting framework using a custom Elasticsearch Security Information and Event Management (SIEM) system to detect such persistence tactics. Two hypothesis-driven investigations were conducted: the first focused on identifying unauthorized ASEP registry key modifications during user logon events, while the second targeted malicious Dynamic Link Library (DLL) injections within temporary directories. By correlating Sysmon event logs (e.g., registry key creation/modification and process creation events), the researchers identified attack chains involving sequential registry edits and malicious file executions. Analysis confirmed that Sysmon Event ID 12 (registry object creation) and Event ID 7 (DLL loading) provided critical forensic evidence for detecting these tactics. The findings underscore the efficacy of real-time event correlation in SIEM systems in disrupting adversarial workflows, enabling rapid mitigation through the removal of malicious entries. This approach advances proactive defense strategies against privilege escalation and persistence, emphasizing the need for granular monitoring of registry and filesystem activities in enterprise environments.
(This article belongs to the Special Issue Security of Computer System and Network)
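A minimal sketch of one such hunt expressed as an Elasticsearch query (elasticsearch-py 8.x style): Sysmon Event ID 12 records whose registry path touches a Run key. The index pattern and ECS-style field names depend on the ingest pipeline and are assumptions, not the paper's exact queries.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
resp = es.search(
    index="winlogbeat-*",  # assumed index pattern for forwarded Sysmon logs
    query={
        "bool": {
            "filter": [
                {"term": {"event.code": "12"}},  # registry object created/deleted
                {"wildcard": {"registry.path": "*CurrentVersion*Run*"}},
            ]
        }
    },
    size=20,
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("registry", {}).get("path"))
```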

20 pages, 3143 KB  
Article
RS-MADDPG: Routing Strategy Based on Multi-Agent Deep Deterministic Policy Gradient for Differentiated QoS Services
by Shi Kuang, Jinyu Zheng, Shilin Liang, Yingying Li, Siyuan Liang and Wanwei Huang
Future Internet 2025, 17(9), 393; https://doi.org/10.3390/fi17090393 - 29 Aug 2025
Abstract
As network environments become increasingly dynamic and users’ Quality of Service (QoS) demands grow more diverse, efficient and adaptive routing strategies are urgently needed. However, traditional routing strategies suffer from limitations such as poor adaptability to fluctuating traffic, lack of differentiated service handling, and slow convergence in complex network scenarios. To this end, we propose a routing strategy based on multi-agent deep deterministic policy gradient for differentiated QoS services (RS-MADDPG) in a software-defined networking (SDN) environment. First, network state information is collected in real time and transmitted to the control layer for processing. Then, the processed information is forwarded to the intelligent layer. In this layer, multiple agents cooperate during training to learn routing policies that adapt to dynamic network conditions. Finally, the learned policies enable agents to perform adaptive routing decisions that explicitly address differentiated QoS requirements by incorporating a custom reward structure that dynamically balances throughput, delay, and packet loss according to traffic type. Simulation results demonstrate that RS-MADDPG achieves convergence approximately 30 training cycles earlier than baseline methods, while improving average throughput by 3%, reducing latency by 7%, and lowering packet loss rate by 2%.
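A minimal sketch of a traffic-type-aware reward of the kind described, trading off throughput, delay, and packet loss with per-class weights; the weights and the linear form are illustrative, not the paper's formulation.

```python
def reward(throughput, delay, loss, traffic_type):
    """All inputs normalized to [0, 1]; higher reward is better."""
    weights = {
        "video": (0.6, 0.2, 0.2),  # throughput-sensitive
        "voip": (0.2, 0.6, 0.2),   # delay-sensitive
        "bulk": (0.3, 0.1, 0.6),   # loss-sensitive
    }
    a, b, c = weights[traffic_type]
    return a * throughput - b * delay - c * loss

print(reward(throughput=0.8, delay=0.1, loss=0.02, traffic_type="voip"))
```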

28 pages, 57007 KB  
Article
Hybrid B5G-DTN Architecture with Federated Learning for Contextual Communication Offloading
by Manuel Jesús-Azabal, Meichun Zheng and Vasco N. G. J. Soares
Future Internet 2025, 17(9), 392; https://doi.org/10.3390/fi17090392 - 29 Aug 2025
Abstract
In dense urban environments and large-scale events, Internet infrastructure often becomes overloaded due to high communication demand. Many of these communications are local and short-lived, exchanged between users in close proximity but still relying on global infrastructure, leading to unnecessary network stress. In this context, delay-tolerant networks (DTNs) offer an alternative by enabling device-to-device (D2D) communication without requiring constant connectivity. However, DTNs face significant challenges in routing due to unpredictable node mobility and intermittent contacts, making reliable delivery difficult. Considering these challenges, this paper presents a hybrid Beyond 5G (B5G) DTN architecture to provide private context-aware routing in dense scenarios. In this proposal, dynamic contextual notifications are shared among relevant local nodes, combining federated learning (FL) and edge artificial intelligence (AI) to estimate the optimal relay paths based on variables such as mobility patterns and contact history. To keep the local FL models updated with the evolving context, edge nodes, integrated as part of the B5G architecture, act as coordinating entities for model aggregation and redistribution. The proposed architecture has been implemented and evaluated in simulation testbeds, studying its performance and sensitivity to node density in a realistic scenario. In high-density scenarios, the architecture outperforms state-of-the-art routing schemes, achieving an average delivery probability of 77% with limited latency and overhead, demonstrating its technical viability.
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)

24 pages, 1689 KB  
Article
Safeguarding Brand and Platform Credibility Through AI-Based Multi-Model Fake Profile Detection
by Vishwas Chakranarayan, Fadheela Hussain, Fayzeh Abdulkareem Jaber, Redha J. Shaker and Ali Rizwan
Future Internet 2025, 17(9), 391; https://doi.org/10.3390/fi17090391 - 29 Aug 2025
Viewed by 428
Abstract
The proliferation of fake profiles on social media presents critical cybersecurity and misinformation challenges, necessitating robust and scalable detection mechanisms. Such profiles weaken consumer trust, reduce user engagement, and ultimately harm brand reputation and platform credibility. As adversarial tactics and synthetic identity generation evolve, traditional rule-based and machine learning approaches struggle to detect evolving and deceptive behavioral patterns embedded in dynamic user-generated content. This study aims to develop an AI-driven, multi-modal deep learning detection system that identifies fake profiles by fusing textual, visual, and social network features to enhance detection accuracy. It also seeks to ensure scalability, adversarial robustness, and real-time threat detection capabilities suitable for practical deployment in industrial cybersecurity environments. To achieve these objectives, the current study proposes an integrated AI system that combines the Robustly Optimized BERT Pretraining Approach (RoBERTa) for deep semantic textual analysis, ConvNeXt for high-resolution profile image verification, and Heterogeneous Graph Attention Networks (Hetero-GAT) for modeling complex social interactions. The extracted features from all three modalities are fused through an attention-based late fusion strategy, enhancing interpretability, robustness, and cross-modal learning. Experimental evaluations on large-scale social media datasets demonstrate that the proposed RoBERTa-ConvNeXt-HeteroGAT model significantly outperforms baseline models, including Support Vector Machine (SVM), Random Forest, and Long Short-Term Memory (LSTM). The model achieves 98.9% accuracy, 98.4% precision, and a 98.6% F1-score, with a per-profile processing time of 15.7 milliseconds, enabling real-time applicability. Moreover, the model proves resilient against various adversarial attacks on text, images, and network activity. This study advances the application of AI in cybersecurity by introducing a highly interpretable, multi-modal detection system that strengthens digital trust, supports identity verification, and enhances the security of social media platforms. This alignment of technical robustness with brand trust highlights the system's value not only in cybersecurity but also in sustaining platform credibility and consumer confidence. The system provides practical value to a wide range of stakeholders, including platform providers, AI researchers, cybersecurity professionals, and public sector regulators, by enabling real-time detection, improving operational efficiency, and safeguarding online ecosystems. Full article
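The attention-based late fusion strategy can be sketched as follows; the embedding dimensions (768 for RoBERTa-style text features, 1024 for image features, 256 for graph features), the single-layer attention scorer, and the binary head are illustrative assumptions, not the authors' architecture.

```python
# Minimal attention-based late-fusion sketch: three modality embeddings
# (text, image, graph) are projected to a shared size and combined with
# learned attention weights before a fake/genuine classification head.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, d_text=768, d_img=1024, d_graph=256, d_shared=256):
        super().__init__()
        self.proj = nn.ModuleList([
            nn.Linear(d_text, d_shared),
            nn.Linear(d_img, d_shared),
            nn.Linear(d_graph, d_shared),
        ])
        self.attn = nn.Linear(d_shared, 1)   # scores each modality
        self.head = nn.Linear(d_shared, 2)   # fake vs. genuine logits

    def forward(self, text, img, graph):
        # (batch, 3, d_shared): one projected vector per modality
        feats = torch.stack(
            [p(x) for p, x in zip(self.proj, (text, img, graph))], dim=1)
        weights = torch.softmax(self.attn(feats), dim=1)  # (batch, 3, 1)
        fused = (weights * feats).sum(dim=1)              # weighted sum
        return self.head(fused)

model = LateFusion()
logits = model(torch.randn(4, 768), torch.randn(4, 1024), torch.randn(4, 256))
print(logits.shape)  # torch.Size([4, 2])
```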
21 pages, 1457 KB  
Article
A Framework for Data Lifecycle Model Selection
by Mauro Iacono, Michele Mastroianni, Christian Riccio and Bruna Viscardi
Future Internet 2025, 17(9), 390; https://doi.org/10.3390/fi17090390 - 28 Aug 2025
Viewed by 307
Abstract
The selection of Data Lifecycle Models (DLMs) in complex data management scenarios requires balancing quantitative and qualitative characteristics to ensure regulatory compliance, improve performance, and meet governance requirements. In this context, an interactive web application based on AHP-Express has been developed as a user-friendly tool to support decision-making processes related to DLM selection. The application supports customized decision matrices, aggregates multiple expert interviews with distinct weights, calculates local and global priorities, and delivers final DLM rankings by consolidating sub-criteria scores into weighted macro-category values, accompanied by graphical representations. Key functions encompass consistency checks, sensitivity analysis for macro-category weight variations, and visualizations (bar charts, radar maps, sensitivity charts) that highlight strengths, shortcomings, and the robustness of rankings. In a suggested application for sensor-based artifact monitoring at the Museo del Carbone, the tool swiftly identified the most appropriate DLM, which exhibited consistent performance across diverse weight scenarios. The results of the Museo del Carbone case confirm that AHP-Express enables rapid, transparent, and reproducible DLM selection, reducing cognitive load while maintaining scientific rigor. The tool's modular architecture and visualization features enable informed decision making across a variety of data management problems. Full article
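A minimal sketch of an AHP-Express-style calculation, in which each alternative is compared only against a single reference alternative and priorities are the normalized ratios; the DLM names, ratio values, and macro-category weights below are hypothetical, not taken from the paper.

```python
# AHP-Express-style priorities: each alternative is rated only against a
# reference alternative (ratio 1.0), avoiding full pairwise matrices.
# All names, ratios, and weights below are hypothetical.

def ahp_express(ratios):
    """ratios[alt] = preference of alt relative to the reference (= 1.0)."""
    total = sum(ratios.values())
    return {alt: r / total for alt, r in ratios.items()}

def global_scores(local, weights):
    """Weight per-criterion local priorities into a global DLM ranking."""
    alts = next(iter(local.values()))
    return {a: sum(w * local[c][a] for c, w in weights.items()) for a in alts}

local = {
    "governance":  ahp_express({"DLM-A": 1.0, "DLM-B": 3.0, "DLM-C": 0.5}),
    "performance": ahp_express({"DLM-A": 1.0, "DLM-B": 0.5, "DLM-C": 2.0}),
}
print(global_scores(local, {"governance": 0.6, "performance": 0.4}))
```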
33 pages, 4233 KB  
Article
A Comparative Study of PEGASUS, BART, and T5 for Text Summarization Across Diverse Datasets
by Eman Daraghmi, Lour Atwe and Areej Jaber
Future Internet 2025, 17(9), 389; https://doi.org/10.3390/fi17090389 - 28 Aug 2025
Viewed by 629
Abstract
This study conducts a comprehensive comparative evaluation of three transformer-based models, PEGASUS, BART, and T5 variants (SMALL and BASE), for the task of abstractive text summarization. The evaluation spans three benchmark datasets: CNN/DailyMail (long-form news articles), Xsum (extreme single-sentence summaries of BBC articles), and Samsum (conversational dialogues). Each dataset presents unique challenges in terms of length, style, and domain, enabling a robust assessment of the models' capabilities. All models were fine-tuned under controlled experimental settings using filtered and preprocessed subsets, with token length limits applied to maintain consistency and prevent truncation. The evaluation leveraged ROUGE-1, ROUGE-2, and ROUGE-L scores to measure summary quality, while efficiency metrics such as training time were also considered. An additional qualitative assessment was conducted through expert human evaluation of fluency, relevance, and conciseness. Results indicate that PEGASUS achieved the highest ROUGE scores on CNN/DailyMail and BART excelled on Xsum and Samsum, while the T5 models, particularly T5-Base, narrowed the performance gap with the larger models while offering efficiency advantages over PEGASUS and BART. These findings highlight the trade-offs between model performance and computational efficiency, offering practical insights into model scaling: T5-Small favors lightweight efficiency, and T5-Base provides stronger accuracy without excessive resource demands. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
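A small harness in the spirit of this comparison can be built from publicly available checkpoints and the `evaluate` ROUGE metric; the specific checkpoints, generation lengths, and single toy example below are assumptions for illustration, whereas the paper fine-tunes the models on full datasets.

```python
# Generate summaries with three public checkpoints and score them with
# ROUGE (requires the `transformers`, `evaluate`, and `rouge_score`
# packages). The article/reference pair is a toy example.
from transformers import pipeline
import evaluate

article = ("The city council approved a new transit plan on Tuesday, "
           "adding bus lanes and extending service hours across downtown.")
reference = "City council approves transit plan with new bus lanes."

rouge = evaluate.load("rouge")
for ckpt in ["facebook/bart-large-cnn", "google/pegasus-xsum", "t5-small"]:
    summarizer = pipeline("summarization", model=ckpt)
    pred = summarizer(article, max_length=32, min_length=8)[0]["summary_text"]
    scores = rouge.compute(predictions=[pred], references=[reference])
    print(ckpt, {k: round(v, 3) for k, v in scores.items()})
```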
34 pages, 7213 KB  
Article
Design and Implementation of a Scalable LoRaWAN-Based Air Quality Monitoring Infrastructure for the Kurdistan Region of Iraq
by Nasih Abdulkarim Muhammed and Bakhtiar Ibrahim Saeed
Future Internet 2025, 17(9), 388; https://doi.org/10.3390/fi17090388 - 28 Aug 2025
Viewed by 406
Abstract
Air pollution threatens human and environmental health worldwide. A Harvard study conducted in partnership with UK institutions found that fossil fuel pollution killed over 8 million people in 2018, accounting for 1 in 5 deaths worldwide. Iraq, including the Kurdistan Region, faces a major environmental problem: in 2022 it ranked second worst worldwide due to high levels of particulate matter, specifically PM2.5. In this paper, a LoRa-based infrastructure for environmental monitoring in the Kurdistan Region has been designed and developed. The infrastructure encompasses end-node devices, an open-source network server, and an IoT platform. Two AirQNodes were prototyped and deployed to measure particulate matter, temperature, humidity, and atmospheric pressure using manufacturer-calibrated PM sensors and combined temperature, humidity, and pressure sensors. An open-source network server is adopted to manage the AirQNodes and the entire network. In addition, an IoT platform has been designed and implemented to visualize and analyze the collected data. The platform processes and stores the data, making it accessible to the public and to decision makers. The infrastructure was tested, and its results were validated, by deploying the two AirQNodes at separate locations adjacent to existing air quality monitoring stations that served as reference points. The findings demonstrated that the AirQNodes reliably mirrored the trends and patterns observed in the reference monitors. This research establishes a comprehensive infrastructure for monitoring air quality in the Kurdistan Region of Iraq. Furthermore, complete ownership of the system can be attained by possessing and overseeing the critical components of the infrastructure: the end devices, the network server, and the IoT platform. This integrated strategy is especially important for the Kurdistan Region of Iraq, where cost-efficiency and long-term sustainability are vital, yet such an infrastructure has so far been absent. Full article
(This article belongs to the Special Issue Wireless Sensor Networks and Internet of Things)
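LoRaWAN uplinks favor compact payloads, so an AirQNode-style reading can be packed into a few bytes; the field order, scaling factors, and 10-byte layout in the sketch below are illustrative assumptions rather than the authors' payload format.

```python
# Illustrative compact payload encoder/decoder for an AirQNode-style
# uplink: PM2.5, PM10, temperature, humidity, and pressure packed into
# 10 bytes to respect LoRaWAN payload limits. Scaling/order are assumed.
import struct

def encode(pm25, pm10, temp_c, rh, press_hpa):
    """Pack readings as 16-bit fields with fixed-point scaling."""
    return struct.pack(">HHhHH",
                       int(pm25 * 10),       # ug/m3, 0.1 resolution
                       int(pm10 * 10),
                       int(temp_c * 100),    # signed, 0.01 deg C
                       int(rh * 100),        # 0.01 %RH
                       int(press_hpa * 10))  # 0.1 hPa

def decode(payload):
    pm25, pm10, t, rh, p = struct.unpack(">HHhHH", payload)
    return pm25 / 10, pm10 / 10, t / 100, rh / 100, p / 10

msg = encode(12.4, 20.1, 23.75, 41.2, 1013.2)
print(len(msg), decode(msg))   # 10 bytes -> original readings
```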