Future Internet, Volume 17, Issue 9 (September 2025) – 53 articles

Cover Story: This study presents a framework to enhance large-scale group decision-making using large language models. Unlike traditional methods, it captures how preferences are expressed, emphasizing clarity and trust. Experts are grouped into behavioural profiles with representative preferences and weighted influence. A sentiment-informed consensus mechanism integrates these profiles into a collective decision. This approach improves scalability, fairness, and interpretability, fostering balanced outcomes in digital platforms. Our case study shows that behavioural signals enhance decision quality, highlighting LLMs’ potential for next-generation decision support.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
20 pages, 31775 KB  
Review
Deep Learning Approaches for Automatic Livestock Detection in UAV Imagery: State-of-the-Art and Future Directions
by Muhammad Adam, Jianchao Song, Wei Yu and Qingqing Li
Future Internet 2025, 17(9), 431; https://doi.org/10.3390/fi17090431 - 21 Sep 2025
Viewed by 275
Abstract
Accurate livestock monitoring is critical for precision agriculture, supporting effective farm management, disease prevention, and sustainable resource allocation. Deep learning and remote sensing are recent technological advancements that have created promising opportunities for the development of livestock monitoring systems. This paper presents a comprehensive survey of deep learning approaches for automatic livestock detection in unmanned aerial vehicle (UAV) imagery, highlighting key deep learning techniques, livestock detection challenges, and emerging trends. We analyze the innovations of popular deep learning models in the area of object detection, including You Only Look Once (YOLO) versions, Region-based Convolutional Neural Networks (R-CNN), anchor-based networks, and transformer models, to discuss their suitability for scalable and cost-efficient UAV-based livestock detection scenarios. To complement the survey, a case study is conducted on a custom UAV cattle dataset to benchmark representative detection models. Evaluation results demonstrate trade-offs among precision, recall, F1 score, IoU, mAP@50, mAP@50-95, inference speed, and model size. The case study results provide a clear understanding of selecting and adapting deep learning models for UAV-based livestock monitoring and outline future directions for lightweight, domain-adapted frameworks in precision farming applications. Full article

22 pages, 2036 KB  
Article
AI-Driven Transformations in Manufacturing: Bridging Industry 4.0, 5.0, and 6.0 in Sustainable Value Chains
by Andrés Fernández-Miguel, Fernando Enrique García-Muiña, Susana Ortíz-Marcos, Mariano Jiménez-Calzado, Alfonso P. Fernández del Hoyo and Davide Settembre-Blundo
Future Internet 2025, 17(9), 430; https://doi.org/10.3390/fi17090430 - 21 Sep 2025
Viewed by 410
Abstract
This study investigates how AI-driven innovations are reshaping manufacturing value chains through the transition from Industry 4.0 to Industry 6.0, particularly in resource-intensive sectors such as ceramics. Addressing a gap in the literature, the research situates the evolution of manufacturing within the broader context of digital transformation, sustainability, and regulatory demands. A mixed-methods approach was employed, combining semi-structured interviews with key industry stakeholders and an extensive review of secondary data, to develop an Industry 6.0 model tailored to the ceramics industry. The findings demonstrate that artificial intelligence, digital twins, and cognitive automation significantly enhance predictive maintenance, real-time supply chain optimization, and regulatory compliance, notably with the Corporate Sustainability Reporting Directive (CSRD). These technological advancements also facilitate circular economy practices and cognitive logistics, thereby fostering greater transparency and sustainability in B2B manufacturing networks. The study concludes that integrating AI-driven automation and cognitive logistics into digital ecosystems and supply chain management serves as a strategic enabler of operational resilience, regulatory alignment, and long-term competitiveness. While the industry-specific focus may limit generalizability, the study underscores the need for further research in diverse manufacturing sectors and longitudinal analyses to fully assess the long-term impact of AI-enabled Industry 6.0 frameworks. Full article
(This article belongs to the Special Issue Artificial Intelligence and Control Systems for Industry 4.0 and 5.0)

18 pages, 1356 KB  
Article
A Behavior-Aware Caching Architecture for Web Applications Using Static, Dynamic, and Burst Segmentation
by Carlos Gómez-Pantoja, Daniela Baeza-Rocha and Alonso Inostrosa-Psijas
Future Internet 2025, 17(9), 429; https://doi.org/10.3390/fi17090429 - 20 Sep 2025
Viewed by 216
Abstract
This work proposes a behavior-aware caching architecture that improves cache hit rates by up to 10.8% over LRU and 36% over LFU in large-scale web applications, reducing redundant traffic and alleviating backend server load. The architecture partitions the cache into three sections—static, dynamic, and burst—according to query reuse patterns derived from user behavior. Static queries remain permanently stored, dynamic queries have time-bound validity, and burst queries are detected in real time using a statistical monitoring mechanism to prioritize sudden, high-demand requests. The proposed architecture was evaluated through simulation experiments using real-world query logs (a one-month trace of 1.5 billion queries from a commercial search engine) under multiple cache capacity configurations ranging from 1000 to 100,000 entries and in combination with the Least Recently Used (LRU) and Least Frequently Used (LFU) replacement policies. The results show that the proposed architecture consistently achieves higher performance than the baselines, with the largest relative gains in smaller cache configurations and applicability to distributed and hybrid caching environments without fundamental design changes. The integration of user-behavior modeling and burst-aware segmentation delivers a practical and reproducible framework that optimizes cache allocation policies in high-traffic and distributed environments. Full article
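A minimal sketch of the segmentation idea described above, assuming illustrative routing rules rather than the authors' exact design: static entries are pinned, dynamic entries carry a time-to-live with LRU eviction, and a simple arrival-rate threshold stands in for the paper's statistical burst detector.

```python
import time
from collections import OrderedDict, deque

class SegmentedCache:
    """Toy three-section cache: static (pinned), dynamic (TTL + LRU), burst (rate-triggered)."""

    def __init__(self, dynamic_capacity=1000, burst_capacity=200, ttl=300.0,
                 burst_window=10.0, burst_threshold=20):
        self.static = {}                        # permanently stored query results
        self.dynamic = OrderedDict()            # query -> (result, expiry), LRU order
        self.burst = OrderedDict()              # results for suddenly popular queries
        self.dynamic_capacity = dynamic_capacity
        self.burst_capacity = burst_capacity
        self.ttl = ttl
        self.burst_window = burst_window        # seconds over which arrivals are counted
        self.burst_threshold = burst_threshold  # arrivals per window that flag a burst
        self.arrivals = {}                      # query -> deque of recent arrival times

    def _is_burst(self, query, now):
        times = self.arrivals.setdefault(query, deque())
        times.append(now)
        while times and now - times[0] > self.burst_window:
            times.popleft()
        return len(times) >= self.burst_threshold

    def get(self, query):
        now = time.time()
        if query in self.static:
            return self.static[query]
        if query in self.dynamic:
            result, expiry = self.dynamic.pop(query)
            if expiry > now:
                self.dynamic[query] = (result, expiry)   # refresh LRU position
                return result
        if query in self.burst:
            self.burst.move_to_end(query)
            return self.burst[query]
        return None                                       # cache miss

    def put(self, query, result, static=False):
        now = time.time()
        if static:
            self.static[query] = result
        elif self._is_burst(query, now):
            self.burst[query] = result
            if len(self.burst) > self.burst_capacity:
                self.burst.popitem(last=False)            # evict oldest burst entry
        else:
            self.dynamic[query] = (result, now + self.ttl)
            if len(self.dynamic) > self.dynamic_capacity:
                self.dynamic.popitem(last=False)          # evict least recently used
```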

13 pages, 9467 KB  
Article
Collaborative Fusion Attention Mechanism for Vehicle Fault Prediction
by Hong Jia, Dalin Qian, Fanghua Chen and Wei Zhou
Future Internet 2025, 17(9), 428; https://doi.org/10.3390/fi17090428 - 19 Sep 2025
Viewed by 177
Abstract
In this study, we investigate a deep learning-based vehicle fault prediction model aimed at achieving accurate prediction of vehicle faults by analyzing the correlations among different faults and the impact of critical faults on future fault development. To this end, we propose a collaborative modeling approach utilizing multiple attention mechanisms. This approach incorporates a graph attention mechanism for the fusion representation of fault correlation information and employs a novel learning method that combines a Long Short-Term Memory (LSTM) network with an attention mechanism to capture the impact of key faults. Based on experimental validation using real-world vehicle fault record data, the model significantly outperforms existing prediction models in terms of fault prediction accuracy. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence, 3rd Edition)

29 pages, 2889 KB  
Systematic Review
Exploring the Evolution of Big Data Technologies: A Systematic Literature Review of Trends, Challenges, and Future Directions
by Tahani Ali Hakami, Yasser M. Alginahi and Omar Sabri
Future Internet 2025, 17(9), 427; https://doi.org/10.3390/fi17090427 - 19 Sep 2025
Viewed by 327
Abstract
This study examines the evolution and impact of Big Data technologies across sectors, emphasizing key algorithms, emerging trends, and organizational challenges in their adoption. Special attention is given to ethical concerns related to data privacy, security, and scalability, underscoring the importance of responsible governance frameworks. The review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines to ensure transparency and methodological rigor. A comprehensive literature search identified 83 peer-reviewed articles from high-indexed journals, and a complementary bibliometric analysis of 1108 Scopus-sourced articles (2015–2024) was conducted using R Biblioshiny. This dual-method approach offers both qualitative depth and quantitative insights into major trends, influential sources, and leading countries in Big Data research. Key findings reveal that real-time data processing and AI integration have significantly enhanced data management capabilities, supporting faster and more informed organizational decision-making. This study concludes by highlighting the importance of ethical governance and recommending future research on sector-specific adoption patterns and strategic frameworks that maximize Big Data’s value while safeguarding privacy and trust. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)

29 pages, 3242 KB  
Article
A Platform-Agnostic Publish–Subscribe Architecture with Dynamic Optimization
by Ahmed Twabi, Yepeng Ding and Tohru Kondo
Future Internet 2025, 17(9), 426; https://doi.org/10.3390/fi17090426 - 19 Sep 2025
Viewed by 185
Abstract
Real-time media streaming over publish–subscribe platforms is increasingly vital in scenarios that demand the scalability of event-driven architectures while ensuring timely media delivery. This is especially true in multi-modal and resource-constrained environments, such as IoT, Physical Activity Recognition and Measure (PARM), and Internet of Video Things (IoVT), where integrating sensor data with media streams often leads to complex hybrid setups that compromise consistency and maintainability. Publish–subscribe (pub/sub) platforms like Kafka and MQTT offer scalability and decoupled communication but fall short in supporting real-time video streaming due to platform-dependent design, rigid optimization, and poor sub-second media handling. This paper presents FrameMQ, a layered, platform-agnostic architecture designed to overcome these limitations by decoupling application logic from platform-specific configurations and enabling dynamic real-time optimization. FrameMQ exposes tunable parameters such as compression and segmentation, allowing integration with external optimizers. Using Particle Swarm Optimization (PSO) as an exemplary optimizer, FrameMQ reduces total latency from over 2300 ms to below 400 ms under stable conditions (over an 80% improvement) and maintains up to a 52% reduction under adverse network conditions. These results demonstrate FrameMQ’s ability to meet the demands of latency-sensitive applications, such as real-time streaming, IoT, and surveillance, while offering portability, extensibility, and platform independence without modifying the core application logic. Full article

32 pages, 6375 KB  
Article
Design and Evaluation of a Research-Oriented Open-Source Platform for Smart Grid Metering: A Comprehensive Review and Experimental Intercomparison of Smart Meter Technologies
by Nikolaos S. Korakianitis, Panagiotis Papageorgas, Georgios A. Vokas, Dimitrios D. Piromalis, Stavros D. Kaminaris, George Ch. Ioannidis and Ander Ochoa de Zuazola
Future Internet 2025, 17(9), 425; https://doi.org/10.3390/fi17090425 - 19 Sep 2025
Viewed by 295
Abstract
Smart meters (SMs) are essential components of modern smart grids, enabling real-time and accurate monitoring of electricity consumption. However, their evaluation is often hindered by proprietary communication protocols and the high cost of commercial testing tools. This study presents a low-cost, open-source experimental platform for smart meter validation, using a microcontroller and light sensor to detect optical pulses emitted by standard SMs. This non-intrusive approach circumvents proprietary restrictions while enabling transparent and reproducible comparisons. A case study was conducted comparing the GAMA 300 static meter, manufactured by Elgama-Elektronika Ltd. (Vilnius, Lithuania), which is a closed-source commercial meter, with the Texas Instruments EVM430-F67641 evaluation module, manufactured by Texas Instruments Inc. (Dallas, TX, USA), which serves as an open-source reference design. Statistical analyses—based on confidence intervals and ANOVA—revealed a mean deviation of less than 1.5% between the devices, confirming the platform’s reliability. The system supports indirect power monitoring without hardware modification or access to internal data, making it suitable for both educational and applied contexts. Compared to existing tools, it offers enhanced accessibility, modularity, and open-source compatibility. Its scalable design supports IoT and environmental sensor integration, aligning with Internet of Energy (IoE) principles. The platform facilitates transparent, reproducible, and cost-effective smart meter evaluations, supporting the advancement of intelligent energy systems. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technologies in Greece 2024–2025)
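As a rough illustration of the optical-pulse principle behind such platforms, the snippet below converts the interval between two meter LED pulses into instantaneous power. The 1000 imp/kWh pulse constant is a common value for static meters and is an assumption here, not a parameter from the paper.

```python
def power_from_pulse_interval(interval_s: float, imp_per_kwh: float = 1000.0) -> float:
    """Estimate instantaneous power (W) from the time between two meter LED pulses.

    Each pulse corresponds to 1/imp_per_kwh kWh of energy, i.e.
    3,600,000 / imp_per_kwh joules, so P = energy_per_pulse / interval.
    """
    joules_per_pulse = 3_600_000.0 / imp_per_kwh
    return joules_per_pulse / interval_s

# Example: pulses 3.6 s apart on a 1000 imp/kWh meter correspond to a 1000 W load.
print(power_from_pulse_interval(3.6))
```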

3 pages, 129 KB  
Editorial
Special Issue: Intrusion Detection and Resiliency in Cyber-Physical Systems and Networks
by Olusola T. Odeyomi and Temitayo O. Olowu
Future Internet 2025, 17(9), 424; https://doi.org/10.3390/fi17090424 - 18 Sep 2025
Viewed by 186
Abstract
The rapid expansion of cyber-physical systems (CPSs) and networked environments—including the Internet of Things (IoT), Industrial IoT (IIoT), and the Internet of Vehicles (IoV)—has transformed modern infrastructures, enabling unprecedented connectivity, automation, and data-driven intelligence [...] Full article
24 pages, 921 KB  
Article
A Game Theoretic Approach for D2D Assisted Uncoded Caching in IoT Networks
by Jiajie Ren and Chang Guo
Future Internet 2025, 17(9), 423; https://doi.org/10.3390/fi17090423 - 18 Sep 2025
Viewed by 198
Abstract
Content caching and exchange through device-to-device (D2D) communications can offload data from the centralized base station and improve the quality of users’ experience. However, existing studies often overlook the selfish nature of user equipment (UE) and the heterogeneity of content preferences, which limits their practical applicability. In this paper, we propose a novel incentive-driven uncoded caching framework modeled as a Stackelberg game between a base station (BS) and cache-enabled UEs. The BS acts as the leader by determining the unit incentive reward, while UEs jointly optimize their caching strategies as followers. The particular challenge in our formulation is that the uncoded caching decisions turn the UEs’ total utility maximization problem into a non-convex integer programming problem. To address this, we map the UEs’ total utility maximization problem into a potential sub-game and design a potential game-based distributed caching (PGDC) algorithm that guarantees convergence to the optimal joint caching strategy. Building on this, we further develop a dynamic iterative algorithm to derive the Stackelberg equilibrium by jointly optimizing the BS’s cost and the total utility of UEs. The simulation results confirm the existence of the Stackelberg equilibrium and demonstrate that the proposed PGDC algorithm significantly outperforms benchmark caching schemes. Full article

32 pages, 1572 KB  
Article
Intercepting and Monitoring Potentially Malicious Payloads with Web Honeypots
by Rareș-Mihail Visalom, Maria-Elena Mihăilescu, Răzvan Rughiniș and Dinu Țurcanu
Future Internet 2025, 17(9), 422; https://doi.org/10.3390/fi17090422 - 17 Sep 2025
Viewed by 386
Abstract
The rapid development of an increasing volume of web apps and the improper testing of the resulting code invariably provide more attack surfaces to potentially exploit. This leads to higher chances of facing cybersecurity breaches that can negatively impact both the users and providers of web services. Moreover, current data leaks resulting from breaches are most probably the fuel of future breaches and social engineering attacks. Given the context, a better analysis and understanding of web attacks are of the utmost priority. Our study provides practical insights into developing, implementing, deploying, and actively monitoring a web application-agnostic honeypot with the objective of improving the odds of defending against web attacks. Full article
(This article belongs to the Section Cybersecurity)
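To give a flavor of the interception step, a minimal application-agnostic catch-all endpoint can log every request path, header set, and payload, as in the Flask sketch below. This is an illustrative example, not the authors' honeypot; the log path is hypothetical, and a real deployment would add isolation and safe payload handling.

```python
import json
import time
from flask import Flask, request

app = Flask(__name__)
LOG_FILE = "honeypot_events.jsonl"   # hypothetical log location

@app.route("/", defaults={"path": ""}, methods=["GET", "POST", "PUT", "DELETE", "PATCH"])
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE", "PATCH"])
def capture(path):
    # Record everything an attacker sends, regardless of which web app they think they hit.
    event = {
        "ts": time.time(),
        "remote_addr": request.remote_addr,
        "method": request.method,
        "path": "/" + path,
        "query": request.query_string.decode(errors="replace"),
        "headers": dict(request.headers),
        "body": request.get_data(as_text=True)[:4096],   # truncate very large payloads
    }
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(event) + "\n")
    return "OK", 200   # generic response so automated probes keep talking

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```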

16 pages, 623 KB  
Review
A Digital Twin Architecture for Forest Restoration: Integrating AI, IoT, and Blockchain for Smart Ecosystem Management
by Nophea Sasaki and Issei Abe
Future Internet 2025, 17(9), 421; https://doi.org/10.3390/fi17090421 - 15 Sep 2025
Viewed by 564
Abstract
Meeting global forest restoration targets by 2030 requires a transition from labor-intensive and opaque practices to scalable, intelligent, and verifiable systems. This paper introduces a cyber–physical digital twin architecture for forest restoration, structured across four layers: (i) a Physical Layer with drones and IoT-enabled sensors for in situ environmental monitoring; (ii) a Data Layer for secure and structured transmission of spatiotemporal data; (iii) an Intelligence Layer applying AI-driven modeling, simulation, and predictive analytics to forecast biomass, biodiversity, and risk; and (iv) an Application Layer providing stakeholder dashboards, milestone-based smart contracts, and automated climate finance flows. Evidence from Dronecoria, Flash Forest, and AirSeed Technologies shows that digital twins can reduce per-tree planting costs from USD 2.00–3.75 to USD 0.11–1.08, while enhancing accuracy, scalability, and community participation. The paper further outlines policy directions for integrating digital MRV systems into the Enhanced Transparency Framework (ETF) and Article 5 of the Paris Agreement. By embedding simulation, automation, and participatory finance into a unified ecosystem, digital twins offer a resilient, interoperable, and climate-aligned pathway for next-generation forest restoration. Full article
(This article belongs to the Special Issue Advances in Smart Environments and Digital Twin Technologies)

22 pages, 398 KB  
Article
Dynamic Channel Selection for Rendezvous in Cognitive Radio Networks
by Mohammed Hawa, Ramzi Saifan, Talal A. Edwan and Oswa M. Amro
Future Internet 2025, 17(9), 420; https://doi.org/10.3390/fi17090420 - 15 Sep 2025
Viewed by 280
Abstract
In an attempt to improve utilization of the frequency spectrum left vacant by license holders, cognitive radio networks (CRNs) permit secondary users (SUs) to utilize such spectrum when the license holders, known as primary users (PUs), are inactive. When a pair of SUs wants to communicate over the CRN, they need to converge simultaneously on one of the vacant channels, in a process known as rendezvous. In this work, we attempt to reduce the rendezvous time for SUs executing the well-known enhanced jump-stay (EJS) channel hopping procedure. We achieve this by modifying EJS in order to search the vacant spectrum around a specific favorite channel, instead of hopping across the whole spectrum. Moreover, the search process is carefully designed in order to accommodate the dynamic nature of CRNs, where PUs repeatedly become active and inactive, resulting in disturbances to the rendezvous process. A main feature of our proposed technique, named dynamic jump-stay (DJS), is that the SUs do not need any prior coordination over a common control channel (CCC), thereby allowing for scalable and more robust distributed CRNs. Simulations are used to quantify the resulting performance improvement in terms of expected time to rendezvous, maximum time to rendezvous, and interference on PUs. Full article

30 pages, 1306 KB  
Article
SAVE: Securing Avatars in Virtual Healthcare Through Environmental Fingerprinting for Elder Safety Monitoring
by Qian Qu, Yu Chen and Erik Blasch
Future Internet 2025, 17(9), 419; https://doi.org/10.3390/fi17090419 - 15 Sep 2025
Viewed by 398
Abstract
The rapid adoption of Metaverse technologies in healthcare, particularly for elder safety monitoring, has introduced new security challenges related to the authenticity of virtual representations. As healthcare providers increasingly rely on avatars and digital twins to monitor and interact with elderly patients remotely, ensuring the integrity of these virtual entities becomes paramount. This paper introduces SAVE (Securing Avatars in Virtual Environments), an emerging framework that leverages environmental fingerprinting based on Electric Network Frequency (ENF) signals to authenticate avatars and detect potential deepfake attacks in virtual healthcare settings. Unlike conventional authentication methods that rely solely on digital credentials, SAVE anchors virtual entities to the physical world by utilizing the unique temporal and spatial characteristics of ENF signals. We implement and evaluate SAVE in a Microverse-based nursing home environment designed for monitoring elderly individuals living alone, using a prototype system with Raspberry Pi devices and multiple environmental sensors over a 30-minute experimental window. Across three distinct attack scenarios, namely unauthorized device attacks, device ID spoofing, and replay attacks using intercepted data, the system demonstrates high detection accuracy with minimal false positives. Results show that by comparing ENF fingerprints embedded in transmitted data with reference ENF signals, SAVE can effectively identify tampering and ensure the authenticity of avatar updates in real time. The SAVE approach enhances the security of virtual healthcare monitoring without requiring additional user intervention, making it particularly suitable for elderly care applications where ease of use is essential. Our findings highlight the potential of physical environmental fingerprints as a robust security layer for virtual healthcare systems, contributing to safer and more trustworthy remote monitoring solutions for vulnerable populations. Full article
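The core check, comparing an ENF fingerprint embedded in received data against a reference trace, can be illustrated with a normalized cross-correlation. The signal lengths, noise levels, and decision threshold below are assumptions for illustration, not parameters from the paper.

```python
import numpy as np

def enf_match_score(embedded: np.ndarray, reference: np.ndarray) -> tuple[float, int]:
    """Return the peak normalized cross-correlation and its lag (in samples)."""
    e = (embedded - embedded.mean()) / (embedded.std() + 1e-12)
    r = (reference - reference.mean()) / (reference.std() + 1e-12)
    corr = np.correlate(r, e, mode="valid") / len(e)
    lag = int(np.argmax(corr))
    return float(corr[lag]), lag

# Synthetic example: a 60 Hz grid frequency wandering a few mHz, sampled once per second.
rng = np.random.default_rng(0)
reference = 60.0 + np.cumsum(rng.normal(0, 0.001, 600))      # 10-minute reference trace
embedded = reference[120:420] + rng.normal(0, 0.0005, 300)   # genuine 5-minute segment

score, lag = enf_match_score(embedded, reference)
authentic = score > 0.9   # illustrative threshold
print(f"peak correlation {score:.3f} at lag {lag} s -> authentic={authentic}")
```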

27 pages, 432 KB  
Article
Refactoring Loops in the Era of LLMs: A Comprehensive Study
by Alessandro Midolo and Emiliano Tramontana
Future Internet 2025, 17(9), 418; https://doi.org/10.3390/fi17090418 - 12 Sep 2025
Viewed by 403
Abstract
Java 8 brought functional programming to the Java language and library, enabling more expressive and concise code to replace loops by using streams. Despite such advantages, for-loops remain prevalent in current codebases as the transition to the functional paradigm requires a significant shift in the developer mindset. Traditional approaches for assisting refactoring loops into streams check a set of strict preconditions to ensure correct transformation, hence limiting their applicability. Conversely, generative artificial intelligence (AI), particularly ChatGPT, is a promising tool for automating software engineering tasks, including refactoring. While prior studies examined ChatGPT’s assistance in various development contexts, none have specifically investigated its ability to refactor for-loops into streams. This paper addresses such a gap by evaluating ChatGPT’s effectiveness in transforming loops into streams. We analyzed 2132 loops extracted from four open-source GitHub repositories and classified them according to traditional refactoring templates and preconditions. We then tasked ChatGPT with the refactoring of such loops and evaluated the correctness and quality of the generated code. Our findings revealed that ChatGPT could successfully refactor many more loops than traditional approaches, although it struggled with complex control flows and implicit dependencies. This study provides new insights into the strengths and limitations of ChatGPT in loop-to-stream refactoring and outlines potential improvements for future AI-driven refactoring tools. Full article

40 pages, 2568 KB  
Review
Intelligent Edge Computing and Machine Learning: A Survey of Optimization and Applications
by Sebastián A. Cajas Ordóñez, Jaydeep Samanta, Andrés L. Suárez-Cetrulo and Ricardo Simón Carbajo
Future Internet 2025, 17(9), 417; https://doi.org/10.3390/fi17090417 - 11 Sep 2025
Cited by 1 | Viewed by 1025
Abstract
Intelligent edge machine learning has emerged as a paradigm for deploying smart applications across resource-constrained devices in next-generation network infrastructures. This survey addresses the critical challenges of implementing machine learning models on edge devices within distributed network environments, including computational limitations, memory constraints, and energy-efficiency requirements for real-time intelligent inference. We provide comprehensive analysis of soft computing optimization strategies essential for intelligent edge deployment, systematically examining model compression techniques including pruning, quantization methods, knowledge distillation, and low-rank decomposition approaches. The survey explores intelligent MLOps frameworks tailored for network edge environments, addressing continuous model adaptation, monitoring under data drift, and federated learning for distributed intelligence while preserving privacy in next-generation networks. Our work covers practical applications across intelligent smart agriculture, energy management, healthcare, and industrial monitoring within network infrastructures, highlighting domain-specific challenges and emerging solutions. We analyze specialized hardware architectures, cloud offloading strategies, and distributed learning approaches that enable intelligent edge computing in heterogeneous network environments. The survey identifies critical research gaps in multimodal model deployment, streaming learning under concept drift, and integration of soft computing techniques with intelligent edge orchestration frameworks for network applications. These gaps directly manifest as open challenges in balancing computational efficiency with model robustness due to limited multimodal optimization techniques, developing sustainable intelligent edge AI systems arising from inadequate streaming learning adaptation, and creating adaptive network applications for dynamic environments resulting from insufficient soft computing integration. This comprehensive roadmap synthesizes current intelligent edge machine learning solutions with emerging soft computing approaches, providing researchers and practitioners with insights for developing next-generation intelligent edge computing systems that leverage machine learning capabilities in distributed network infrastructures. Full article
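Of the compression techniques surveyed, post-training quantization is the easiest to illustrate. The sketch below maps float32 weights to int8 with a symmetric per-tensor scale, which is one common scheme among the many the survey covers, not a specific method from it.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = max(float(np.max(np.abs(weights))) / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(42).normal(0, 0.05, size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
error = np.abs(dequantize(q, scale) - w).mean()

# int8 storage is 4x smaller than float32, while the mean reconstruction error stays small.
print(f"scale={scale:.6f}, mean abs error={error:.6f}, size ratio={q.nbytes / w.nbytes:.2f}")
```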

19 pages, 611 KB  
Article
Prompt-Driven and Kubernetes Error Report-Aware Container Orchestration
by Niklas Beuter, André Drews and Nane Kratzke
Future Internet 2025, 17(9), 416; https://doi.org/10.3390/fi17090416 - 11 Sep 2025
Viewed by 291
Abstract
Background: Container orchestration systems like Kubernetes rely heavily on declarative manifest files, which serve as orchestration blueprints. However, managing these manifest files is often complex and requires substantial DevOps expertise. Methodology: This study investigates the use of Large Language Models (LLMs) to automate the creation of Kubernetes manifest files from natural language specifications, utilizing prompt engineering techniques within an innovative error- and warning-report–aware refinement process. We assess the capabilities of these LLMs using Zero-Shot, Few-Shot, Prompt-Chaining, and Self-Refine methods to address DevOps needs and support fully automated deployment pipelines. Results: Our findings show that LLMs can generate Kubernetes manifests with varying levels of manual intervention. Notably, GPT-4 and GPT-3.5 demonstrate strong potential for deployment automation. Interestingly, smaller models sometimes outperform larger ones, challenging the assumption that larger models always yield better results. Conclusions: This research highlights the crucial impact of prompt engineering on LLM performance for Kubernetes tasks and recommends further exploration of prompt techniques and model comparisons, outlining a promising path for integrating LLMs into automated deployment workflows. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
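The error-report-aware refinement idea can be sketched as a generate-validate-repair loop. In the sketch below, `call_llm` is a hypothetical stand-in for whichever model API is used, and candidate manifests are checked with `kubectl apply --dry-run=client`, which validates a manifest without creating resources; this is an assumed validation step, not necessarily the one used in the paper.

```python
import subprocess
import tempfile

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a client for GPT-4, GPT-3.5, or another model."""
    raise NotImplementedError

def validate_manifest(manifest_yaml: str) -> tuple[bool, str]:
    """Client-side dry-run validation; returns (ok, error/warning report)."""
    with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
        f.write(manifest_yaml)
        path = f.name
    result = subprocess.run(
        ["kubectl", "apply", "--dry-run=client", "-f", path],
        capture_output=True, text=True,
    )
    return result.returncode == 0, result.stderr

def generate_manifest(spec: str, max_rounds: int = 3) -> str:
    """Zero-shot generation followed by error-report-aware self-refinement."""
    prompt = f"Write a Kubernetes manifest for the following deployment:\n{spec}"
    manifest = call_llm(prompt)
    for _ in range(max_rounds):
        ok, report = validate_manifest(manifest)
        if ok:
            return manifest
        # Feed the error report back into the prompt and ask for a corrected manifest.
        prompt = (
            "The following Kubernetes manifest failed validation.\n"
            f"Error report:\n{report}\n"
            f"Manifest:\n{manifest}\n"
            "Return a corrected manifest only."
        )
        manifest = call_llm(prompt)
    return manifest
```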

28 pages, 3252 KB  
Article
Toward Secure SDN Infrastructure in Smart Cities: Kafka-Enabled Machine Learning Framework for Anomaly Detection
by Gayathri Karthick, Glenford Mapp and Jon Crowcroft
Future Internet 2025, 17(9), 415; https://doi.org/10.3390/fi17090415 - 11 Sep 2025
Viewed by 325
Abstract
As smart cities evolve, the demand for real-time, secure, and adaptive network monitoring continues to grow. Software-Defined Networking (SDN) offers a centralized approach to managing network flows; however, anomaly detection within SDN environments remains a significant challenge, particularly at the intelligent edge. This paper presents a conceptual Kafka-enabled ML framework for scalable, real-time analytics in SDN environments, supported by offline evaluation and a prototype streaming demonstration. A range of supervised ML models covering traditional methods and ensemble approaches (Random Forest, Linear Regression, and XGBoost) were trained and validated using the InSDN intrusion detection dataset. These models were tested against multiple cyber threats, including botnet, DoS, DDoS, network reconnaissance, brute-force, and web attacks, achieving up to 99% accuracy for ensemble classifiers under offline conditions. A Dockerized prototype demonstrates Kafka’s role in offline data ingestion, processing, and visualization through PostgreSQL and Grafana. While full ML pipeline integration into Kafka remains part of future work, the proposed architecture establishes a foundation for secure and intelligent Software-Defined Vehicular Networking (SDVN) infrastructure in smart cities. Full article
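To make the streaming side concrete, a minimal sketch using the kafka-python client is shown below: flow-feature records are consumed from a topic and scored by a pre-trained classifier. The topic name, feature fields, and model file are illustrative assumptions, not details taken from the paper.

```python
import json
import joblib
import numpy as np
from kafka import KafkaConsumer

FEATURES = ["duration", "pkt_count", "byte_count", "flow_rate"]  # assumed flow features

model = joblib.load("flow_classifier.joblib")   # e.g. a previously trained Random Forest

consumer = KafkaConsumer(
    "sdn-flows",                                 # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="latest",
)

for message in consumer:
    flow = message.value
    x = np.array([[flow[f] for f in FEATURES]])
    label = model.predict(x)[0]
    if label != "normal":
        # A full pipeline would raise an alert or publish to an 'anomalies' topic here.
        print(f"anomalous flow detected: {label} -> {flow}")
```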

22 pages, 2537 KB  
Article
GraphRAG-Enhanced Dialogue Engine for Domain-Specific Question Answering: A Case Study on the Civil IoT Taiwan Platform
by Hui-Hung Yu, Wei-Tsun Lin, Chih-Wei Kuan, Chao-Chi Yang and Kuan-Min Liao
Future Internet 2025, 17(9), 414; https://doi.org/10.3390/fi17090414 - 10 Sep 2025
Viewed by 389
Abstract
The proliferation of sensor technology has led to an explosion in data volume, making the retrieval of specific information from large repositories increasingly challenging. While Retrieval-Augmented Generation (RAG) can enhance Large Language Models (LLMs), such systems often lack precision in specialized domains. Taking the Civil IoT Taiwan Data Service Platform as a case study, this study addresses this gap by developing a dialogue engine enhanced with a GraphRAG framework, aiming to provide accurate, context-aware responses to user queries. Our method involves constructing a domain-specific knowledge graph by extracting entities (e.g., ‘Dataset’, ‘Agency’) and their relationships from the platform’s documentation. For query processing, the system interprets natural language inputs, identifies corresponding paths within the knowledge graph, and employs a recursive self-reflection mechanism to ensure the final answer aligns with the user’s intent. The final answer is transformed into natural language using the TAIDE (Trustworthy AI Dialogue Engine) model. The implemented framework successfully translates complex, multi-constraint questions into executable graph queries, moving beyond keyword matching to navigate semantic pathways. This results in highly accurate and verifiable answers grounded in the source data. In conclusion, this research validates that applying a GraphRAG-enhanced engine is a robust solution for building intelligent dialogue systems for specialized data platforms, significantly improving the precision and usability of information retrieval and offering a replicable model for other knowledge-intensive domains. Full article

19 pages, 9954 KB  
Article
Improved Generation of Drawing Sequences Using Variational and Skip-Connected Deep Networks for a Drawing Support System
by Atomu Nakamura, Homari Matsumoto, Koharu Chiba and Shun Nishide
Future Internet 2025, 17(9), 413; https://doi.org/10.3390/fi17090413 - 10 Sep 2025
Viewed by 372
Abstract
This study presents a deep generative model designed to predict intermediate stages in the drawing process of character illustrations. To enhance generalization and robustness, the model integrates a variational bottleneck based on the Variational Autoencoder (VAE) and employs Gaussian noise augmentation during training. We also investigate the effect of U-Net-style skip connections, which allow for the direct propagation of low-level features, on autoregressive sequence generation. Comparative experiments with baseline models demonstrate that the proposed VAE with noise augmentation outperforms both CNN- and RNN-based baselines in long-term stability and visual fidelity. While skip connections improve local detail retention, they also introduce instability in extended sequences, suggesting a trade-off between spatial precision and temporal coherence. The findings highlight the advantages of probabilistic modeling and data augmentation for sequential image generation and provide practical insights for designing intelligent drawing support systems. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)

28 pages, 734 KB  
Article
GPT-4.1 Sets the Standard in Automated Experiment Design Using Novel Python Libraries
by Nuno Fachada, Daniel Fernandes, Carlos M. Fernandes, Bruno D. Ferreira-Saraiva and João P. Matos-Carvalho
Future Internet 2025, 17(9), 412; https://doi.org/10.3390/fi17090412 - 8 Sep 2025
Viewed by 399
Abstract
Large language models (LLMs) have advanced rapidly as tools for automating code generation in scientific research, yet their ability to interpret and use unfamiliar Python APIs for complex computational experiments remains poorly characterized. This study systematically benchmarks a selection of state-of-the-art LLMs in generating functional Python code for two increasingly challenging scenarios: conversational data analysis with the ParShift library, and synthetic data generation and clustering using pyclugen and scikit-learn. Both experiments use structured, zero-shot prompts specifying detailed requirements but omitting in-context examples. Model outputs are evaluated quantitatively for functional correctness and prompt compliance over multiple runs, and qualitatively by analyzing the errors produced when code execution fails. Results show that only a small subset of models consistently generate correct, executable code. GPT-4.1 achieved a 100% success rate across all runs in both experimental tasks, whereas most other models succeeded in fewer than half of the runs, with only Grok-3 and Mistral-Large approaching comparable performance. In addition to benchmarking LLM performance, this approach helps identify shortcomings in third-party libraries, such as unclear documentation or obscure implementation bugs. Overall, these findings highlight current limitations of LLMs for end-to-end scientific automation and emphasize the need for careful prompt design, comprehensive library documentation, and continued advances in language model capabilities. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))

25 pages, 5281 KB  
Article
Detection and Mitigation in IoT Ecosystems Using oneM2M Architecture and Edge-Based Machine Learning
by Yu-Yong Luo, Yu-Hsun Chiu and Chia-Hsin Cheng
Future Internet 2025, 17(9), 411; https://doi.org/10.3390/fi17090411 - 8 Sep 2025
Viewed by 319
Abstract
Distributed denial-of-service (DDoS) attacks are a prevalent threat to resource-constrained IoT deployments. We present an edge-based detection and mitigation system integrated with the oneM2M architecture. By using a Raspberry Pi 4 client and five Raspberry Pi 3 attack nodes in a smart-home testbed, we collected 200,000 packets with 19 features across four traffic states (normal, SYN/UDP/ICMP floods), trained Decision Tree, 2D-CNN, and LSTM models, and deployed the best model on an edge computer for real-time inference. The edge node classifies traffic and triggers per-attack defenses on the device (SYN cookies, UDP/ICMP iptables rules). On a held-out test set, the 2D-CNN achieved 98.45% accuracy, outperforming the LSTM (96.14%) and Decision Tree (93.77%). In end-to-end trials, the system sustained service during SYN floods (time to capture 200 packets increased from 5.05 s to 5.51 s after enabling SYN cookies), mitigated ICMP floods via rate limiting, and flagged UDP floods for administrator intervention due to residual performance degradation. These results show that lightweight, edge-deployed learning with targeted controls can harden oneM2M-based IoT systems against common DDoS vectors. Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
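The per-attack defenses mentioned above can be sketched as a dispatcher that applies a mitigation when the edge model flags a traffic class. The sysctl and iptables commands below are standard Linux controls (SYN cookies, ICMP rate limiting) used for illustration; they are not necessarily the authors' exact rules.

```python
import subprocess

def run(cmd: list[str]) -> None:
    print("applying:", " ".join(cmd))
    subprocess.run(cmd, check=True)

def mitigate(attack_class: str) -> None:
    """Map a predicted traffic class to a defensive action on the device."""
    if attack_class == "syn_flood":
        # Enable SYN cookies so half-open connections no longer exhaust the backlog.
        run(["sysctl", "-w", "net.ipv4.tcp_syncookies=1"])
    elif attack_class == "icmp_flood":
        # Rate-limit echo requests and drop the excess.
        run(["iptables", "-A", "INPUT", "-p", "icmp", "--icmp-type", "echo-request",
             "-m", "limit", "--limit", "1/second", "-j", "ACCEPT"])
        run(["iptables", "-A", "INPUT", "-p", "icmp", "--icmp-type", "echo-request",
             "-j", "DROP"])
    elif attack_class == "udp_flood":
        # Flag for the administrator; blanket UDP drops would break legitimate services.
        print("UDP flood detected: notifying administrator")
    # "normal" traffic requires no action.

# Example: act on the label produced by the deployed classifier.
mitigate("syn_flood")
```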

23 pages, 1658 KB  
Article
Fuzzy-Based MEC-Assisted Video Adaptation Framework for HTTP Adaptive Streaming
by Waqas ur Rahman
Future Internet 2025, 17(9), 410; https://doi.org/10.3390/fi17090410 - 8 Sep 2025
Viewed by 278
Abstract
As the demand for high-quality video streaming applications continues to rise, multi-access edge computing (MEC)-assisted streaming schemes have emerged as a viable solution within the context of HTTP adaptive streaming (HAS). These schemes aim to enhance both quality of experience (QoE) and utilization of network resources. HAS faces a significant challenge when applied to mobile cellular networks. Designing a HAS scheme that fairly allocates bitrates to users, ensures a high QoE, and optimizes bandwidth utilization remains a challenge. To this end, we designed an MEC- and client-assisted adaptation framework for HAS, facilitating collaboration between the edge and client to enhance users’ quality of experience. The proposed framework employs fuzzy logic at the user end to determine the upper limit for the video streaming rate. On the MEC side, we developed an integer nonlinear programming (INLP) optimization model that collectively enhances the QoE of video clients by considering the upper limit set by the client. Due to the NP-hardness of the problem, we utilized a greedy algorithm to efficiently solve the quality adaptation optimization problem. The results demonstrate that the proposed framework, on average, (i) improves users’ QoE by 30%, (ii) improves the fairness of bitrate allocation by 22.6%, and (iii) enhances network utilization by 4.2% compared to state-of-the-art approaches. In addition, the proposed approach prevents playback interruptions regardless of the client’s buffer size and video segment duration. Full article
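A rough sketch of the greedy idea, assuming a generic marginal-utility rule rather than the paper's exact INLP objective: every client starts at the lowest representation, and the remaining edge bandwidth is repeatedly given to the upgrade with the best QoE gain per bit, without exceeding each client's fuzzy-logic upper limit.

```python
import math

def greedy_allocate(bitrate_ladder, clients, capacity_kbps):
    """Assign one representation index per client under a shared capacity budget.

    bitrate_ladder: ascending list of representation bitrates (kbps).
    clients: list of dicts with an assumed 'max_kbps' upper limit from the client side.
    Utility is modeled as log(bitrate), a common concave QoE proxy (an assumption).
    """
    alloc = [0] * len(clients)                        # everyone starts at the lowest rung
    used = bitrate_ladder[0] * len(clients)
    if used > capacity_kbps:
        raise ValueError("capacity cannot serve even the lowest representation")

    while True:
        best, best_gain = None, 0.0
        for i, client in enumerate(clients):
            nxt = alloc[i] + 1
            if nxt >= len(bitrate_ladder):
                continue
            cur_b, nxt_b = bitrate_ladder[alloc[i]], bitrate_ladder[nxt]
            if nxt_b > client["max_kbps"] or used - cur_b + nxt_b > capacity_kbps:
                continue
            gain = (math.log(nxt_b) - math.log(cur_b)) / (nxt_b - cur_b)  # QoE gain per kbps
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            return alloc                              # no feasible upgrade remains
        used += bitrate_ladder[alloc[best] + 1] - bitrate_ladder[alloc[best]]
        alloc[best] += 1

ladder = [400, 800, 1600, 2400, 4800]                 # kbps, illustrative values
clients = [{"max_kbps": 4800}, {"max_kbps": 1600}, {"max_kbps": 2400}]
print(greedy_allocate(ladder, clients, capacity_kbps=6000))
```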

25 pages, 693 KB  
Review
Survey of Federated Learning for Cyber Threat Intelligence in Industrial IoT: Techniques, Applications and Deployment Models
by Abin Kumbalapalliyil Tom, Ansam Khraisat, Tony Jan, Md Whaiduzzaman, Thien D. Nguyen and Ammar Alazab
Future Internet 2025, 17(9), 409; https://doi.org/10.3390/fi17090409 - 8 Sep 2025
Viewed by 654
Abstract
The Industrial Internet of Things (IIoT) is transforming industrial operations through connected devices and real-time automation but also introduces significant cybersecurity risks. Cyber threat intelligence (CTI) is critical for detecting and mitigating such threats, yet traditional centralized CTI approaches face limitations in latency, scalability, and data privacy. Federated learning (FL) offers a privacy-preserving alternative by enabling decentralized model training without sharing raw data. This survey explores how FL can enhance CTI in IIoT environments. It reviews FL architectures, orchestration strategies, and aggregation methods, and maps their applications to domains such as intrusion detection, malware analysis, botnet mitigation, anomaly detection, and trust management. Among its contributions is an empirical synthesis comparing FL aggregation strategies—including FedAvg, FedProx, Krum, ClippedAvg, and Multi-Krum—across accuracy, robustness, and efficiency under IIoT constraints. The paper also presents a taxonomy of FL-based CTI approaches and outlines future research directions to support the development of secure, scalable, and decentralized threat intelligence systems for industrial ecosystems. Full article
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)
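Among the aggregation strategies compared, FedAvg is the simplest to illustrate: each client's parameter update is weighted by its local sample count. The sketch below assumes models are exchanged as lists of NumPy arrays and is a generic illustration, not the survey's experimental setup.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: weight each client's parameters by its number of samples.

    client_weights: list of models, each a list of NumPy arrays (one per layer).
    client_sizes:   list of local dataset sizes, aligned with client_weights.
    """
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        layer_sum = sum(
            (n / total) * w[layer] for w, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_sum)
    return aggregated

# Three IIoT sites with different amounts of local traffic data.
rng = np.random.default_rng(1)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=(2,))] for _ in range(3)]
global_model = fedavg(clients, client_sizes=[5000, 1200, 300])
print([layer.shape for layer in global_model])
```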

38 pages, 790 KB  
Article
A GHZ-Based Protocol for the Dining Information Brokers Problem
by Theodore Andronikos, Constantinos Bitsakos, Konstantinos Nikas, Georgios I. Goumas and Nectarios Koziris
Future Internet 2025, 17(9), 408; https://doi.org/10.3390/fi17090408 - 6 Sep 2025
Viewed by 292
Abstract
This article introduces the innovative Quantum Dining Information Brokers Problem, presenting a novel entanglement-based quantum protocol to address it. The scenario involves n information brokers, all located in distinct geographical regions, engaging in a metaphorical virtual dinner. The objective is for each broker to share a unique piece of information with all the others simultaneously. Unlike previous approaches, this protocol enables a fully parallel, single-step communication exchange among all the brokers, regardless of their physical locations. A key feature of this protocol is its ability to ensure that both the anonymity and privacy of all the participants are preserved, meaning that no broker can discern the identity of the sender of any received information. At its core, the Quantum Dining Information Brokers Problem serves as a conceptual framework for achieving anonymous, untraceable, and massively parallel information exchange in a distributed system. The proposed protocol introduces three significant advancements. First, while quantum protocols for one-to-many simultaneous information transmission have been developed, this is, to the best of our knowledge, one of the first quantum protocols to facilitate many-to-many simultaneous information exchange. Second, it guarantees complete anonymity and untraceability for all senders, a critical improvement over sequential applications of one-to-many protocols, which fail to ensure such robust anonymity. Third, leveraging quantum entanglement, the protocol operates in a fully distributed manner, accommodating brokers in diverse spatial locations. This approach marks a substantial advancement in secure, scalable, and anonymous communication, with potential applications in distributed environments where privacy and parallelism are paramount. Full article
(This article belongs to the Special Issue Advanced 5G and Beyond Networks)
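For context, the n-party resource named in the title is the Greenberger–Horne–Zeilinger (GHZ) state, whose standard textbook definition (not the paper's own notation) is

```latex
\[
  |\mathrm{GHZ}_n\rangle \;=\; \frac{1}{\sqrt{2}}\bigl(|0\rangle^{\otimes n} + |1\rangle^{\otimes n}\bigr).
\]
```

Each broker would hold one qubit of such a state, and measurement outcomes on it are perfectly correlated across all parties, which is the kind of shared correlation that entanglement-based multiparty protocols typically exploit.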

20 pages, 1328 KB  
Article
From Divergence to Alignment: Evaluating the Role of Large Language Models in Facilitating Agreement Through Adaptive Strategies
by Loukas Triantafyllopoulos and Dimitris Kalles
Future Internet 2025, 17(9), 407; https://doi.org/10.3390/fi17090407 - 6 Sep 2025
Viewed by 410
Abstract
Achieving consensus in group decision-making often involves overcoming significant challenges, particularly reconciling diverse perspectives and mitigating biases hindering agreement. Traditional methods relying on human facilitators are usually constrained by scalability and efficiency, especially in large-scale, fast-paced discussions. To address these challenges, this study proposes a novel real-time facilitation framework, employing large language models (LLMs) as automated facilitators within a custom-built multi-user chat system. This framework is distinguished by its real-time adaptive system architecture, which enables dynamic adjustments to facilitation strategies based on ongoing discussion dynamics. Leveraging cosine similarity as a core metric, this approach evaluates the ability of three state-of-the-art LLMs—ChatGPT 4.0, Mistral Large 2, and AI21 Jamba-Instruct—to synthesize consensus proposals that align with participants’ viewpoints. Unlike conventional techniques, the system integrates adaptive facilitation strategies, including clarifying misunderstandings, summarizing discussions, and proposing compromises, enabling the LLMs to refine consensus proposals based on user feedback iteratively. Experimental results indicate that ChatGPT 4.0 achieved the highest alignment with participant opinions and required fewer iterations to reach consensus. A one-way ANOVA confirmed that differences in performance between models were statistically significant. Moreover, descriptive analyses revealed nuanced differences in model behavior across various sustainability-focused discussion topics, including climate action, quality education, good health and well-being, and access to clean water and sanitation. These findings highlight the promise of LLM-driven facilitation for improving collective decision-making processes and underscore the need for further research into robust evaluation metrics, ethical considerations, and cross-cultural adaptability. Full article
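Since cosine similarity is the core alignment metric above, a small sketch shows how a synthesized consensus proposal could be scored against participant viewpoints once both are embedded as vectors. The 4-dimensional vectors below are toy values, not real sentence embeddings, and the encoder is left unspecified as it is in the abstract.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy "embeddings" of a consensus proposal and two participant opinions.
proposal = np.array([0.8, 0.1, 0.3, 0.5])
opinions = [np.array([0.7, 0.2, 0.4, 0.5]),   # closely aligned participant
            np.array([0.1, 0.9, 0.1, 0.0])]   # diverging participant

scores = [cosine_similarity(proposal, o) for o in opinions]
print([round(s, 3) for s in scores])          # higher score = better alignment
```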

21 pages, 6118 KB  
Article
3D Spatial Path Planning Based on Improved Particle Swarm Optimization
by Junxia Ma, Zixu Yang and Ming Chen
Future Internet 2025, 17(9), 406; https://doi.org/10.3390/fi17090406 - 5 Sep 2025
Viewed by 314
Abstract
Three-dimensional path planning is critical for the successful operation of unmanned aerial vehicles (UAVs), automated guided vehicles (AGVs), and robots in industrial Internet of Things (IIoT) applications. In 3D path planning, the standard Particle Swarm Optimization (PSO) algorithm suffers from premature convergence and a tendency to fall into local optima, leading to significant deviations from the optimal path. This paper proposes an improved PSO (IPSO) algorithm that enhances particle diversity and randomness through the introduction of logistic chaotic mapping, while employing dynamic learning factors and nonlinear inertia weights to improve global search capability. Experimental results demonstrate that IPSO outperforms traditional methods in terms of path length and computational efficiency, showing potential for real-time path planning in complex environments. Full article
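The two modifications named above are easy to sketch: a logistic chaotic map spreads the initial swarm over the search space, and nonlinearly varying inertia and learning factors shift the swarm from exploration to exploitation. The specific constants (map parameter r = 4, weight bounds, decay exponent) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def logistic_map_init(n_particles, dim, lower, upper, r=4.0):
    """Chaotic initialization: iterate x <- r*x*(1-x) and scale into [lower, upper]."""
    x = np.linspace(0.13, 0.87, dim)          # distinct seeds per dimension
    positions = np.empty((n_particles, dim))
    for i in range(n_particles):
        x = r * x * (1.0 - x)
        positions[i] = lower + x * (upper - lower)
    return positions

def nonlinear_inertia(t, t_max, w_max=0.9, w_min=0.4, power=2.0):
    """Inertia weight that decays nonlinearly from w_max to w_min over the run."""
    return w_min + (w_max - w_min) * (1.0 - t / t_max) ** power

def dynamic_learning_factors(t, t_max, c_start=2.5, c_end=0.5):
    """Cognitive factor shrinks while the social factor grows as iterations proceed."""
    c1 = c_start + (c_end - c_start) * t / t_max
    c2 = c_end + (c_start - c_end) * t / t_max
    return c1, c2

# One PSO velocity/position update using these ingredients (2-D toy example).
rng = np.random.default_rng(0)
pos = logistic_map_init(n_particles=20, dim=2, lower=-10.0, upper=10.0)
vel = np.zeros_like(pos)
pbest, gbest = pos.copy(), pos[0].copy()

t, t_max = 10, 100
w = nonlinear_inertia(t, t_max)
c1, c2 = dynamic_learning_factors(t, t_max)
vel = (w * vel
       + c1 * rng.random(pos.shape) * (pbest - pos)
       + c2 * rng.random(pos.shape) * (gbest - pos))
pos = pos + vel
print(f"w={w:.3f}, c1={c1:.3f}, c2={c2:.3f}, first particle at {pos[0]}")
```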

28 pages, 15259 KB  
Article
1D-CNN-Based Performance Prediction in IRS-Enabled IoT Networks for 6G Autonomous Vehicle Applications
by Radwa Ahmed Osman
Future Internet 2025, 17(9), 405; https://doi.org/10.3390/fi17090405 - 5 Sep 2025
Viewed by 291
Abstract
To foster the performance of wireless communication while saving energy, the integration of Intelligent Reflecting Surfaces (IRS) into autonomous vehicle (AV) communication networks is considered a powerful technique. This paper proposes a novel IRS-assisted vehicular communication model that combines Lagrange optimization and Gradient-Based Phase Optimization to determine the optimal transmission power, optimal interference transmission power, and IRS phase shifts. Additionally, the proposed model helps increase the Signal-to-Interference-plus-Noise Ratio (SINR) by utilizing IRS, which maximizes energy efficiency and the achievable data rate under a variety of environmental conditions, while guaranteeing that resource limits are satisfied. In order to represent dense vehicular environments, practical constraints have been incorporated into the system model, such as IRS reflection efficiency and interference from multiple sources, namely Device-to-Device (D2D), Vehicle-to-Vehicle (V2V), Vehicle-to-Base Station (V2B), and Cellular User Equipment (CUE) links. A Lagrangian optimization approach has been implemented to determine the required transmission interference power and the best IRS phase designs in order to enhance the system performance. Consequently, a one-dimensional convolutional neural network has been trained on the optimized data provided by this framework. This deep learning model learns to predict the required optimal IRS settings quickly, allowing for real-time adaptation in dynamic wireless environments. The simulation results show that the combined optimization and prediction strategy considerably enhances the system reliability and energy efficiency over baseline techniques. This study lays a solid foundation for implementing IRS-assisted AV networks in real-world settings, hence facilitating the development of next-generation vehicular communication systems that are both performance-driven and energy-efficient. Full article
50 pages, 2359 KB  
Review
The Rise of Agentic AI: A Review of Definitions, Frameworks, Architectures, Applications, Evaluation Metrics, and Challenges
by Ajay Bandi, Bhavani Kongari, Roshini Naguru, Sahitya Pasnoor and Sri Vidya Vilipala
Future Internet 2025, 17(9), 404; https://doi.org/10.3390/fi17090404 - 4 Sep 2025
Cited by 1 | Viewed by 2911
Abstract
Agentic AI systems are an important, recently emerged approach that goes beyond traditional AI, generative AI, and autonomous systems by focusing on autonomy, adaptability, and goal-driven reasoning. This study provides a clear review of agentic AI systems by bringing together their definitions, frameworks, and architectures, and by comparing them with related areas such as generative AI, autonomic computing, and multi-agent systems. To do this, we reviewed 143 primary studies on current LLM-based and non-LLM-driven agentic systems and examined how they support planning, memory, reflection, and goal pursuit. Furthermore, we classified architectural models, input–output mechanisms, and applications by the task domains in which agentic AI is applied, supported by tabular summaries that highlight real-world case studies. Evaluation metrics were classified into qualitative and quantitative measures, along with available testing methods for assessing the performance and reliability of agentic AI systems. This study also highlights the main challenges and limitations of agentic AI, covering technical, architectural, coordination, ethical, and security issues. By organizing the conceptual foundations, available tools, architectures, and evaluation metrics, this research defines a structured foundation for understanding and advancing agentic AI. These findings aim to help researchers and developers build better, clearer, and more adaptable systems that support responsible deployment across domains. Full article
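As a rough illustration of the plan-act-reflect cycle that the reviewed systems share, the following Python skeleton sketches an agent with episodic memory and goal-driven termination. The `call_llm` and `execute_tool` functions are hypothetical placeholders, not the API of any framework covered in the review.

```python
# Schematic skeleton of a plan-act-reflect loop for an LLM-based agent.
# `call_llm` and `execute_tool` are stand-ins to be replaced with real clients/tools.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call used for planning and reflection."""
    return "noop"

def execute_tool(action: str) -> str:
    """Placeholder for tool execution (search, code run, API call)."""
    return f"observation for: {action}"

@dataclass
class Agent:
    goal: str
    memory: list[str] = field(default_factory=list)    # episodic memory of past steps

    def plan(self) -> str:
        context = "\n".join(self.memory[-5:])          # short-term context window
        return call_llm(f"Goal: {self.goal}\nHistory:\n{context}\nNext action?")

    def reflect(self, action: str, observation: str) -> None:
        critique = call_llm(f"Did '{action}' move us toward '{self.goal}'? Obs: {observation}")
        self.memory.append(f"action={action} | obs={observation} | reflection={critique}")

    def run(self, max_steps: int = 5) -> None:
        for _ in range(max_steps):
            action = self.plan()
            if action.strip().lower() == "done":       # goal-driven termination
                break
            observation = execute_tool(action)
            self.reflect(action, observation)

Agent(goal="summarise today's sensor logs").run()
```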
26 pages, 4880 KB  
Article
Cell-Sequence-Based Covert Signal for Tor De-Anonymization Attacks
by Ran Xin, Yapeng Wang, Xiaohong Huang, Xu Yang and Sio Kei Im
Future Internet 2025, 17(9), 403; https://doi.org/10.3390/fi17090403 - 4 Sep 2025
Viewed by 628
Abstract
This research introduces a novel de-anonymization technique targeting the Tor network, addressing limitations in prior attack models, particularly concerning router positioning after the introduction of bridge relays. Our method exploits two specific, inherent protocol-level vulnerabilities: the absence of a continuity check for circuit-level cells and anomalous residual values in RELAY_EARLY cell counters. It works by manipulating cell headers to embed a covert signal composed of reserved fields, start and end delimiters, and a payload that encodes target identifiers. Using this signal, malicious routers can effectively mark data flows for later identification; these routers employ a finite state machine (FSM) to adaptively switch between signal injection and detection. Experimental evaluations, conducted in a controlled environment using attacker-controlled onion routers, demonstrated that the embedded signals are undetectable by standard Tor routers, cause no noticeable performance degradation, and allow reliable correlation of Tor users with public services as well as de-anonymization of hidden service IP addresses. This work reveals a fundamental design trade-off in Tor: the decision to conceal circuit length inadvertently exposes cell transmission characteristics, creating a bidirectional vector for stealthy, protocol-level de-anonymization attacks even though Tor payloads remain encrypted. Full article
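The framing idea behind the covert signal can be sketched abstractly in Python: a target identifier is wrapped between start and end delimiters, and a finite state machine scans the incoming stream to recover it. The one-bit-per-cell abstraction and the delimiter values are assumptions made for illustration; real Tor cell headers and the RELAY_EARLY counter behaviour are not modelled.

```python
# Abstract illustration of a delimiter-framed covert signal and an FSM detector.
# One "bit" per cell is a simplification for clarity.
START, END = [1, 1, 1, 0], [0, 1, 1, 1]     # assumed start/end delimiters

def encode_signal(target_id: int, width: int = 16) -> list[int]:
    """Frame the target identifier as start delimiter + payload bits + end delimiter."""
    payload = [(target_id >> i) & 1 for i in range(width - 1, -1, -1)]
    return START + payload + END

def detect_signal(cell_bits: list[int], width: int = 16):
    """FSM: SCAN for the start delimiter, COLLECT the payload, then verify the end delimiter."""
    state, buf, skip = "SCAN", [], 0
    for i, bit in enumerate(cell_bits):
        if state == "SCAN":
            if cell_bits[i:i + len(START)] == START:
                state, buf, skip = "COLLECT", [], i + len(START)
        elif state == "COLLECT" and i >= skip:
            buf.append(bit)
            if len(buf) == width:
                tail = cell_bits[i + 1:i + 1 + len(END)]
                if tail == END:
                    return sum(b << (width - 1 - k) for k, b in enumerate(buf))
                state, buf = "SCAN", []
    return None

stream = [0, 1, 0] + encode_signal(0xBEEF) + [1, 0]   # signal embedded among background cells
print(hex(detect_signal(stream)))                      # -> 0xbeef
```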
21 pages, 1293 KB  
Article
Dynamic Resource Management in 5G-Enabled Smart Elderly Care Using Deep Reinforcement Learning
by Krishnapriya V. Shaji, Srilakshmi S. Rethy, Simi Surendran, Livya George, Namita Suresh and Hrishika Dayan
Future Internet 2025, 17(9), 402; https://doi.org/10.3390/fi17090402 - 2 Sep 2025
Viewed by 433
Abstract
The increasing elderly population presents major challenges to traditional healthcare due to the need for continuous care, a shortage of skilled professionals, and rising medical costs. To address this, smart elderly care homes, where multiple residents live with the support of caregivers and IoT-based assistive technologies, have emerged as a promising solution. Their effective operation requires a reliable, high-speed network such as 5G, along with intelligent resource allocation to ensure efficient service delivery. This study proposes a deep reinforcement learning (DRL)-based resource management framework for smart elderly homes, formulated as a Markov decision process. The framework dynamically allocates computing and network resources in response to real-time application demands and system constraints. We implement and compare two DRL algorithms, emphasizing their strengths in optimizing edge utilization and throughput. System performance is evaluated across balanced, high-demand, and resource-constrained scenarios. The results demonstrate that the proposed DRL approach effectively learns adaptive resource management policies, making it a promising solution for next-generation intelligent elderly care environments. Full article
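A minimal sketch of the Markov-decision-process view of this problem is shown below: the state captures per-application resource demands, the action is a resource split across applications, and the reward trades served demand against capacity violations. The state, action, and reward definitions and the random placeholder policy are illustrative assumptions; the paper's two DRL algorithms would replace that policy.

```python
# Minimal sketch of resource allocation cast as an MDP for a smart care home.
# Numbers and reward shaping are assumed for illustration only.
import numpy as np

N_APPS = 4            # e.g. fall detection, vitals monitoring, video call, ambient sensing
EDGE_CAPACITY = 10.0  # available edge compute units (assumed)
BANDWIDTH = 8.0       # available 5G bandwidth units (assumed)

rng = np.random.default_rng(1)

def reset():
    """State = current per-application demand for (compute, bandwidth)."""
    return rng.uniform(0.5, 3.0, size=(N_APPS, 2))

def step(state, action):
    """Action = fraction of each resource granted to each application (N_APPS x 2)."""
    grant_cpu = action[:, 0] * EDGE_CAPACITY
    grant_bw = action[:, 1] * BANDWIDTH
    served = np.minimum(grant_cpu, state[:, 0]) + np.minimum(grant_bw, state[:, 1])
    over = max(grant_cpu.sum() - EDGE_CAPACITY, 0) + max(grant_bw.sum() - BANDWIDTH, 0)
    reward = served.sum() - 5.0 * over        # served demand minus capacity-violation penalty
    return reset(), reward                    # new demands arrive each time slot

state, total = reset(), 0.0
for _ in range(100):                          # random policy as a placeholder for a DRL agent
    action = rng.dirichlet(np.ones(N_APPS), size=2).T   # allocations sum to 1 per resource
    state, r = step(state, action)
    total += r
print("average reward:", total / 100)
```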