Future Internet, Volume 17, Issue 10 (October 2025) – 39 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
23 pages, 1965 KB  
Article
Multifractality and Its Sources in the Digital Currency Market
by Stanisław Drożdż, Robert Kluszczyński, Jarosław Kwapień and Marcin Wątorek
Future Internet 2025, 17(10), 470; https://doi.org/10.3390/fi17100470 (registering DOI) - 13 Oct 2025
Abstract
Multifractality in time series analysis characterizes the presence of multiple scaling exponents, indicating heterogeneous temporal structures and complex dynamical behaviors beyond simple monofractal models. In the context of digital currency markets, multifractal properties arise from the interplay of long-range temporal correlations and heavy-tailed distributions of returns, reflecting intricate market microstructure and trader interactions. Incorporating multifractal analysis into the modeling of cryptocurrency price dynamics enhances the understanding of market inefficiencies; it may also improve volatility forecasting and facilitate the detection of critical transitions or regime shifts. Based on multifractal cross-correlation analysis (MFCCA), whose special case, multifractal detrended fluctuation analysis (MFDFA), is the most commonly used practical tool for quantifying multifractality, we applied a recently proposed method of disentangling sources of multifractality in time series to the most representative instruments of the digital market: Bitcoin (BTC), Ethereum (ETH), decentralized exchanges (DEX), and non-fungible tokens (NFT). The results indicate the significant role of heavy tails in generating a broad multifractal spectrum. However, they also clearly demonstrate that the primary source of multifractality lies in the temporal correlations in the series; without them, multifractality fades out. Characteristically, these temporal correlations are largely independent of the thickness of the tails of the fluctuation distribution. These observations, made here in the context of the digital currency market, provide a further strong argument for the validity of the proposed methodology of disentangling sources of multifractality in time series. Full article
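The MFDFA computation the abstract refers to can be sketched compactly; the following is an illustrative stand-alone implementation (not the authors' code) that computes the fluctuation functions F_q(s) whose scale dependence reveals multifractality:

```python
import math
import random

def mfdfa(series, scales, qs):
    """Toy MFDFA: return the fluctuation functions F_q(s).

    Steps: build the profile (cumulative sum of the demeaned series),
    split it into non-overlapping windows of length s, remove a linear
    trend in each window, and aggregate the q-th moments of the
    per-window residual variances.
    """
    n = len(series)
    mean = sum(series) / n
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)

    results = {q: {} for q in qs}
    for s in scales:
        variances = []
        for start in range(0, n - s + 1, s):
            seg = profile[start:start + s]
            # Least-squares linear detrending within the window.
            xm = (s - 1) / 2.0
            ym = sum(seg) / s
            sxx = sum((i - xm) ** 2 for i in range(s))
            sxy = sum((i - xm) * (y - ym) for i, y in enumerate(seg))
            slope = sxy / sxx
            var = sum((y - (ym + slope * (i - xm))) ** 2
                      for i, y in enumerate(seg)) / s
            variances.append(var)
        for q in qs:
            if q == 0:  # logarithmic average, the standard q -> 0 limit
                fq = math.exp(sum(math.log(v) for v in variances)
                              / (2.0 * len(variances)))
            else:
                fq = (sum(v ** (q / 2.0) for v in variances)
                      / len(variances)) ** (1.0 / q)
            results[q][s] = fq
    return results
```

For each q, the generalized Hurst exponent h(q) is the slope of log F_q(s) against log s; a q-dependent h(q) signals multifractality, while a monofractal series gives a q-independent slope.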

26 pages, 930 KB  
Article
Modular Microservices Architecture for Generative Music Integration in Digital Audio Workstations via VST Plugin
by Adriano N. Raposo and Vasco N. G. J. Soares
Future Internet 2025, 17(10), 469; https://doi.org/10.3390/fi17100469 (registering DOI) - 12 Oct 2025
Abstract
This paper presents the design and implementation of a modular cloud-based architecture that enables generative music capabilities in Digital Audio Workstations through a MIDI microservices backend and a user-friendly VST plugin frontend. The system comprises a generative harmony engine deployed as a standalone service, a microservice layer that orchestrates communication and exposes an API, and a VST plugin that interacts with the backend to retrieve harmonic sequences and MIDI data. Among the microservices is a dedicated component that converts textual chord sequences into MIDI files. The VST plugin allows the user to drag and drop the generated chord progressions directly into a DAW’s MIDI track timeline. This architecture prioritizes modularity, cloud scalability, and seamless integration into existing music production workflows, while abstracting away technical complexity from end users. The proposed system demonstrates how microservice-based design and cross-platform plugin development can be effectively combined to support generative music workflows, offering both researchers and practitioners a replicable and extensible framework. Full article
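The chord-sequence-to-MIDI microservice rests on a mapping from textual chord symbols to MIDI note numbers; a minimal sketch of that parsing step follows (the chord vocabulary and function name are illustrative assumptions, not the paper's API):

```python
# Map chord symbols such as "Cmaj", "Am", or "G7" to MIDI note numbers.
NOTE_OFFSETS = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3,
                "E": 4, "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8,
                "Ab": 8, "A": 9, "A#": 10, "Bb": 10, "B": 11}
QUALITY_INTERVALS = {"maj": (0, 4, 7), "m": (0, 3, 7),
                     "dim": (0, 3, 6), "7": (0, 4, 7, 10)}

def chord_to_midi(symbol, octave=4):
    """Return MIDI note numbers for a chord symbol (C4 = 60 convention)."""
    has_accidental = len(symbol) > 1 and symbol[1] in "#b"
    root_name = symbol[:2] if has_accidental else symbol[:1]
    quality = symbol[len(root_name):] or "maj"
    root = 12 * (octave + 1) + NOTE_OFFSETS[root_name]
    return [root + interval for interval in QUALITY_INTERVALS[quality]]
```

From these note numbers, the conversion microservice would emit note-on/note-off events into a standard MIDI file that the VST plugin can drag into the DAW's timeline.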

23 pages, 3251 KB  
Article
Intelligent Control Approaches for Warehouse Performance Optimisation in Industry 4.0 Using Machine Learning
by Ádám Francuz and Tamás Bányai
Future Internet 2025, 17(10), 468; https://doi.org/10.3390/fi17100468 (registering DOI) - 11 Oct 2025
Abstract
In conventional logistics optimization problems, an objective function describes the relationship between parameters. However, in many industrial practices, such a relationship is unknown, and only observational data is available. The objective of the research is to use machine learning-based regression models to uncover patterns in the warehousing dataset and use them to generate an accurate objective function. The models are not only suitable for prediction, but also for interpreting the effect of input variables. This data-driven approach is consistent with the automated, intelligent systems of Industry 4.0, while Industry 5.0 provides opportunities for sustainable, flexible, and collaborative development. In this research, machine learning (ML) models were tested on a fictional dataset using Automated Machine Learning (AutoML), through which Light Gradient Boosting Machine (LightGBM) was selected as the best method (R2 = 0.994). Feature Importance and Partial Dependence Plots revealed the key factors influencing storage performance and their functional relationships. Defining performance as a cost indicator allowed us to interpret optimization as cost minimization, demonstrating that ML-based methods can uncover hidden patterns and support efficiency improvements in warehousing. The proposed approach not only achieves outstanding predictive accuracy, but also transforms model outputs into actionable, interpretable insights for warehouse optimization. By combining automation, interpretability, and optimization, this research advances the practical realization of intelligent warehouse systems in the era of Industry 4.0. Full article
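The Feature Importance analysis described above can be illustrated with a model-agnostic permutation variant: shuffle one input column and measure how much the model's error grows. The sketch below uses synthetic data and a hand-written linear model in place of LightGBM (all names and data are illustrative):

```python
import random

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Importance of feature j = mean increase in MSE after shuffling column j."""
    rng = random.Random(seed)

    def mse(Xm):
        return sum((predict(row) - t) ** 2 for row, t in zip(Xm, y)) / len(y)

    baseline = mse(X)
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature-target link for column j
            Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            deltas.append(mse(Xp) - baseline)
        importances.append(sum(deltas) / n_repeats)
    return importances

# Synthetic warehouse-style data: performance depends strongly on feature 0.
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [10 * a + 0.1 * b for a, b in X]
model = lambda row: 10 * row[0] + 0.1 * row[1]  # stand-in for the fitted model
imp = permutation_importance(model, X, y)
```

Shuffling the dominant feature inflates the error far more than shuffling a weak one, which is how such plots expose the key drivers of warehouse performance.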
(This article belongs to the Special Issue Artificial Intelligence and Control Systems for Industry 4.0 and 5.0)

38 pages, 5895 KB  
Article
Beyond Accuracy: Benchmarking Machine Learning Models for Efficient and Sustainable SaaS Decision Support
by Efthimia Mavridou, Eleni Vrochidou, Michail Selvesakis and George A. Papakostas
Future Internet 2025, 17(10), 467; https://doi.org/10.3390/fi17100467 (registering DOI) - 11 Oct 2025
Abstract
Machine learning (ML) methods have been successfully employed to support decision-making for Software as a Service (SaaS) providers. While most published research primarily emphasizes prediction accuracy, other important aspects, such as cloud deployment efficiency and environmental impact, have received comparatively less attention, even though factors such as training time, prediction time, and carbon footprint are critical in production. SaaS decision support systems use the output of ML models to provide actionable recommendations, such as running reactivation campaigns for users who are likely to churn. To this end, in this paper we present a benchmarking comparison of 17 different ML models for churn prediction in SaaS that includes cloud deployment efficiency metrics (e.g., latency and prediction time) and sustainability metrics (e.g., CO2 emissions and consumed energy) alongside predictive performance metrics (e.g., AUC and Log Loss). Two public datasets are employed, and experiments are repeated on four different machines, locally and on the cloud, while a new Green Efficiency Weighted Score (GEWS) is introduced as a step towards choosing the simpler, greener, and more efficient ML model. Experimental results indicated XGBoost and LightGBM as the models capable of offering a good balance of predictive performance, fast training and inference times, and limited emissions, while the importance of region selection in minimizing the carbon footprint of ML models was confirmed. Full article
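The general shape of a composite score like GEWS, a weighted average of predictive, efficiency, and sustainability metrics with lower-is-better metrics inverted, can be sketched as follows (the weights, metric names, and normalization here are illustrative assumptions, not the paper's calibration):

```python
def gews(metrics, weights, higher_is_better):
    """Weighted composite score, assuming all metrics are normalized to [0, 1].

    Metrics where lower is better (e.g. CO2, training time) are inverted
    before weighting, so a larger composite is always better.
    """
    score = 0.0
    for name, w in weights.items():
        v = metrics[name]
        if not higher_is_better[name]:
            v = 1.0 - v
        score += w * v
    return score / sum(weights.values())

weights = {"auc": 0.4, "train_time": 0.2, "co2": 0.4}        # assumed weights
higher = {"auc": True, "train_time": False, "co2": False}
model_a = {"auc": 0.90, "train_time": 0.30, "co2": 0.20}     # normalized metrics
model_b = {"auc": 0.92, "train_time": 0.80, "co2": 0.70}
score_a = gews(model_a, weights, higher)
score_b = gews(model_b, weights, higher)
```

Here model_b wins marginally on AUC but loses the composite because its training cost and emissions dominate, which is exactly the trade-off such a score is meant to surface.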
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)

45 pages, 4909 KB  
Review
Building Trust in Autonomous Aerial Systems: A Review of Hardware-Rooted Trust Mechanisms
by Sagir Muhammad Ahmad, Mohammad Samie and Barmak Honarvar Shakibaei Asli
Future Internet 2025, 17(10), 466; https://doi.org/10.3390/fi17100466 - 10 Oct 2025
Abstract
Unmanned aerial vehicles (UAVs) are redefining both civilian and defense operations, with swarm-based architectures unlocking unprecedented scalability and autonomy. However, these advancements introduce critical security challenges, particularly in location verification and authentication. This review provides a comprehensive synthesis of hardware security primitives (HSPs)—including Physical Unclonable Functions (PUFs), Trusted Platform Modules (TPMs), and blockchain-integrated frameworks—as foundational enablers of trust in UAV ecosystems. We systematically analyze communication architectures, cybersecurity vulnerabilities, and deployment constraints, followed by a comparative evaluation of HSP-based techniques in terms of energy efficiency, scalability, and operational resilience. The review further identifies unresolved research gaps and highlights transformative trends such as AI-augmented environmental PUFs, post-quantum secure primitives, and RISC-V-based secure control systems. By bridging current limitations with emerging innovations, this work underscores the pivotal role of hardware-rooted security in shaping the next generation of autonomous aerial networks. Full article
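A challenge-response PUF can be illustrated with the standard additive linear-delay model of an arbiter PUF, in which per-device random stage delays determine each response bit (a simulation sketch of the concept, not a hardware design):

```python
import random

class ToyArbiterPUF:
    """Additive linear-delay model of an arbiter PUF.

    Each device draws random stage delays at 'manufacture' time; the
    response to a challenge is the sign of the accumulated delay
    difference between the two racing signal paths.
    """
    def __init__(self, n_stages, device_seed):
        rng = random.Random(device_seed)      # stands in for process variation
        self.w = [rng.gauss(0, 1) for _ in range(n_stages + 1)]

    def response(self, challenge):
        # Standard parity ("phi") transform of the challenge bits.
        phi, prod = [], 1
        for bit in reversed(challenge):
            prod *= 1 - 2 * bit
            phi.append(prod)
        phi = list(reversed(phi)) + [1]
        delta = sum(w * p for w, p in zip(self.w, phi))
        return 1 if delta > 0 else 0
```

The same device answers a challenge identically every time, while two devices disagree on many challenges, which is the property that makes PUF responses usable as device-unique authentication material.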
(This article belongs to the Special Issue Security and Privacy Issues in the Internet of Cloud—2nd Edition)

28 pages, 3474 KB  
Article
OptoBrain: A Wireless Sensory Interface for Optogenetics
by Rodrigo de Albuquerque Pacheco Andrade, Helder Eiki Oshiro, Gabriel Augusto Ginja, Eduardo Colombari, Maria Celeste Dias, José A. Afonso and João Paulo Pereira do Carmo
Future Internet 2025, 17(10), 465; https://doi.org/10.3390/fi17100465 (registering DOI) - 9 Oct 2025
Abstract
Optogenetics leverages light to control neural circuits, but traditional systems are often bulky and tethered, limiting their use. This work introduces OptoBrain, a novel, portable wireless system for optogenetics designed to overcome these challenges. The system integrates modules for multichannel data acquisition, smart neurostimulation, and continuous processing, with a focus on low-power and low-voltage operation. OptoBrain features up to eight neuronal acquisition channels with a low input-referred noise (e.g., 0.99 µVRMS at 250 sps with 1 V/V gain), and reliably streams data via a Bluetooth 5.0 link at a measured throughput of up to 400 kbps. Experimental results demonstrate robust performance, highlighting its potential as a simple, practical, and low-cost solution for emerging optogenetics research centers and enabling new avenues in neuroscience. Full article

17 pages, 1076 KB  
Article
Adaptive Cyber Defense Through Hybrid Learning: From Specialization to Generalization
by Muhammad Omer Farooq
Future Internet 2025, 17(10), 464; https://doi.org/10.3390/fi17100464 - 9 Oct 2025
Abstract
This paper introduces a hybrid learning framework that synergistically combines Reinforcement Learning (RL) and Supervised Learning (SL) to train autonomous cyber-defense agents capable of operating effectively in dynamic and adversarial environments. The proposed approach leverages RL for strategic exploration and policy development, while incorporating SL to distill high-reward trajectories into refined policy updates, enhancing sample efficiency, learning stability, and robustness. The framework first targets specialized agent training, where each agent is optimized against a specific adversarial behavior. Subsequently, it is extended to enable the training of a generalized agent that learns to counter multiple, diverse attack strategies through multi-task and curriculum learning techniques. Comprehensive experiments conducted in the CybORG simulation environment demonstrate that the hybrid RL–SL framework consistently outperforms pure RL baselines across both specialized and generalized settings, achieving higher cumulative rewards. Specifically, hybrid-trained agents achieve up to 23% higher cumulative rewards in specialized defense tasks and approximately 18% improvements in generalized defense scenarios compared to RL-only agents. Moreover, incorporating temporal context into the observation space yields a further 4–6% performance gain in policy robustness. Furthermore, we investigate the impact of augmenting the observation space with historical actions and rewards, revealing consistent, albeit incremental, gains in SL-based learning performance. Key contributions of this work include: (i) a novel hybrid learning paradigm that integrates RL and SL for effective cyber-defense policy learning, (ii) a scalable extension for training generalized agents across heterogeneous threat models, and (iii) empirical analysis on the role of temporal context in agent observability and decision-making. Collectively, the results highlight the promise of hybrid learning strategies for building intelligent, resilient, and adaptable cyber-defense systems in evolving threat landscapes. Full article
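The explore-then-distill loop can be reduced to a toy bandit setting: collect episodes with an epsilon-greedy RL learner, then "distill" the highest-reward trajectories into a policy by supervised majority vote (an illustrative reduction; CybORG scenarios and the paper's agents are far richer):

```python
import random

def hybrid_train(arm_means, episodes=500, top_frac=0.2, seed=0):
    """Epsilon-greedy exploration (RL), then distillation of the top-reward
    episodes into a policy by majority vote over their actions (the SL step)."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n
    values = [0.0] * n
    history = []  # (action, reward) pairs = one-step trajectories
    for _ in range(episodes):
        if rng.random() < 0.2:                      # exploration
            a = rng.randrange(n)
        else:                                        # greedy exploitation
            a = max(range(n), key=lambda i: values[i])
        r = arm_means[a] + rng.gauss(0, 0.1)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]     # incremental mean update
        history.append((a, r))
    # SL distillation: keep the highest-reward episodes, fit by majority vote.
    history.sort(key=lambda ar: ar[1], reverse=True)
    top = history[:max(1, int(top_frac * episodes))]
    votes = [0] * n
    for a, _ in top:
        votes[a] += 1
    return max(range(n), key=lambda i: votes[i])
```

The distilled policy concentrates on the action that dominates the high-reward trajectories, which is the intuition behind using SL to stabilize and sharpen what RL exploration discovers.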
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)

19 pages, 1318 KB  
Article
Quantifying Website Privacy Posture Through Technical and Policy-Based Assessment
by Ioannis Fragkiadakis, Stefanos Gritzalis and Costas Lambrinoudakis
Future Internet 2025, 17(10), 463; https://doi.org/10.3390/fi17100463 - 9 Oct 2025
Abstract
With the rapid growth of digital interactions, safeguarding user privacy on websites has become a critical concern. This paper introduces a comprehensive framework that integrates both technical and policy-based factors to assess a website’s level of privacy protection. The framework employs a scoring system that evaluates key technical elements, such as HTTP security headers, email authentication protocols (SPF, DKIM, DMARC), SSL/TLS certificate usage, domain reputation, DNSSEC, and cookie practices. In parallel, it examines the clarity and GDPR compliance of privacy policies. The resulting score reflects not only the technical strength of a website’s defenses but also the transparency with which data processing practices are communicated to users. To demonstrate its effectiveness, the framework was applied to two similarly sized private hospitals, generating comparative privacy scores under a unified metric. The results confirm the framework’s value in producing measurable insights that enable cross-organizational privacy benchmarking. By combining policy evaluation with technical analysis, this work addresses a significant gap in existing research and offers a reproducible, extensible methodology for assessing website privacy posture from a visitor’s perspective. Full article
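The scoring system described above can be sketched as a weighted average of per-check pass ratios scaled to 0-100 (the check names and weights below are illustrative assumptions, not the paper's calibration):

```python
def privacy_score(checks, weights):
    """Each check is a pass ratio in [0, 1]; the posture score is the
    weighted average of the checks, scaled to 0-100."""
    total = sum(weights.values())
    return 100.0 * sum(weights[k] * checks[k] for k in weights) / total

weights = {                   # illustrative weights
    "security_headers": 2.0,  # HSTS, CSP, X-Frame-Options, ...
    "email_auth": 1.5,        # SPF / DKIM / DMARC present
    "tls": 2.0,               # valid certificate, modern protocol
    "dnssec": 1.0,
    "cookies": 1.5,           # flags, consent behaviour
    "policy_clarity": 2.0,    # GDPR-relevant policy content
}
site = {"security_headers": 0.6, "email_auth": 1.0, "tls": 1.0,
        "dnssec": 0.0, "cookies": 0.5, "policy_clarity": 0.7}
score = privacy_score(site, weights)
```

Because all organizations are scored against the same weighted checklist, the resulting numbers support the cross-organizational benchmarking the paper demonstrates on the two hospitals.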

14 pages, 427 KB  
Article
Performance Modeling of Cloud Systems by an Infinite-Server Queue Operating in Rarely Changing Random Environment
by Svetlana Moiseeva, Evgeny Polin, Alexander Moiseev and Janos Sztrik
Future Internet 2025, 17(10), 462; https://doi.org/10.3390/fi17100462 - 8 Oct 2025
Abstract
This paper considers a heterogeneous queuing system with an unlimited number of servers, where the parameters are determined by a random environment. A distinctive feature is that the parameters of the exponential distribution of the request processing time do not change their values until the end of service; thus, the devices in the system under consideration are heterogeneous. We consider the following problem: a cloud node accepts requests of one type that have a similar arrival intensity and processing duration, and sometimes an input scheduler switches to accepting requests of another type with a different intensity and processing duration. We model the system as an infinite-server queue in a random environment that influences the arrival intensity and service time of new requests. The random environment is modeled by a Markov chain with a finite number of states. Arrivals follow a Poisson process whose intensity depends on the state of the random environment, and service times are exponentially distributed with rates that depend on the state of the environment at the moment the request arrived. When the environment changes its state, requests already in the system do not change their service times, so requests of different types (serviced with different rates) are present in the system at the same time. For the study, a method of asymptotic analysis is proposed under the condition of extremely rare changes in the states of the random environment (the entries of the generator of the corresponding Markov chain tend to zero). A multi-dimensional joint steady-state probability distribution of the number of requests of different types present in the system is obtained. Several numerical examples illustrate the comparisons of asymptotic results to simulations. Full article
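Under rare switching, the environment dwells in each state long enough for the system to approach the state-conditional M/M/∞ steady state, so the long-run mean number of requests should be well approximated by mixing the per-state means λ_k/μ_k with the environment's stationary probabilities. A numeric sketch for a two-state environment (illustrative parameters, not the paper's examples):

```python
def stationary_two_state(q12, q21):
    """Stationary distribution of a two-state CTMC with transition rates
    q12 (state 1 -> 2) and q21 (state 2 -> 1): pi = (q21, q12) / (q12 + q21)."""
    total = q12 + q21
    return (q21 / total, q12 / total)

def mixed_mean(arrival, service, pi):
    """Approximate mean number in system: sum_k pi_k * lambda_k / mu_k."""
    return sum(p * lam / mu for p, lam, mu in zip(pi, arrival, service))

# Two request types with different arrival intensities and service rates;
# tiny generator entries model the rarely changing environment.
pi = stationary_two_state(q12=0.001, q21=0.003)
m = mixed_mean(arrival=(10.0, 4.0), service=(2.0, 1.0), pi=pi)
```

With these numbers the environment spends 75% of the time in state 1 (per-state mean 5) and 25% in state 2 (per-state mean 4), giving a mixed mean of 4.75 requests.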

16 pages, 4740 KB  
Article
Measuring Inter-Bias Effects and Fairness-Accuracy Trade-Offs in GNN-Based Recommender Systems
by Nikzad Chizari, Keywan Tajfar and María N. Moreno-García
Future Internet 2025, 17(10), 461; https://doi.org/10.3390/fi17100461 - 8 Oct 2025
Abstract
Bias in artificial intelligence is a critical issue because these technologies increasingly influence decision-making in a wide range of areas. The recommender system field is one of them, where biases can lead to unfair or skewed outcomes. The origin usually lies in data biases coming from historical inequalities or irregular sampling. Recommendation algorithms using such data contribute to a greater or lesser extent to amplify and perpetuate those imbalances. On the other hand, different types of biases can be found in the outputs of recommender systems, and they can be evaluated by a variety of metrics specific to each of them. However, biases should not be treated independently, as they are interrelated and can potentiate or mask each other. Properly assessing the biases is crucial for ensuring fair and equitable recommendations. This work focuses on analyzing the interrelationship between different types of biases and proposes metrics designed to jointly evaluate multiple interrelated biases, with particular emphasis on those biases that tend to mask or obscure discriminatory treatment against minority or protected demographic groups, evaluated in terms of disparities in recommendation quality outcomes. This approach enables a more comprehensive assessment of algorithmic performance in terms of both fairness and predictive accuracy. Special attention is given to Graph Neural Network-based recommender systems, due to their strong performance in this application domain. Full article
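One simple instance of a group-aware quality metric is the gap between the best and worst per-group mean recommendation quality (an illustrative disparity measure, not the metrics proposed in the paper):

```python
def group_disparity(hits, groups):
    """Gap between the best and worst per-group mean hit rate.

    hits:   per-user recommendation quality (e.g. hit rate in top-k)
    groups: per-user group label (e.g. a protected attribute)
    """
    totals, counts = {}, {}
    for h, g in zip(hits, groups):
        totals[g] = totals.get(g, 0.0) + h
        counts[g] = counts.get(g, 0) + 1
    means = {g: totals[g] / counts[g] for g in totals}
    return max(means.values()) - min(means.values()), means

# Toy per-user hit rates for a majority and a minority group.
hits = [0.8, 0.6, 0.7, 0.3, 0.4, 0.2]
groups = ["majority"] * 3 + ["minority"] * 3
gap, means = group_disparity(hits, groups)
```

A model can look strong on aggregate accuracy while a large gap like this reveals systematically worse recommendations for the minority group, which is exactly the masking effect joint bias metrics aim to expose.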
(This article belongs to the Special Issue Deep Learning in Recommender Systems)

24 pages, 1582 KB  
Article
Future Internet Applications in Healthcare: Big Data-Driven Fraud Detection with Machine Learning
by Konstantinos P. Fourkiotis and Athanasios Tsadiras
Future Internet 2025, 17(10), 460; https://doi.org/10.3390/fi17100460 - 8 Oct 2025
Abstract
Hospital fraud detection has often relied on periodic audits that miss evolving, internet-mediated patterns in electronic claims. We develop an artificial intelligence and machine learning pipeline that is leakage-safe, imbalance-aware, and aligned with operational capacity for large healthcare datasets. The preprocessing stack integrates four tables, engineers 13 features, and applies imputation, categorical encoding, power transformation, Boruta selection, and denoising autoencoder representations, with class balancing via SMOTE-ENN evaluated inside cross-validation folds. Eight algorithms are compared under a fraud-oriented composite productivity index that weighs recall, precision, MCC, F1, ROC-AUC, and G-Mean, with per-fold threshold calibration and explicit reporting of Type I and Type II errors. A multilayer perceptron attains the highest composite index, while CatBoost offers the strongest control of false positives with high accuracy. SMOTE-ENN provides limited gains once representations regularize class geometry. The calibrated scores support prepayment triage, postpayment audit, and provider-level profiling, linking alert volume to expected recovery and protecting investigator workload. Situated in the Future Internet context, this work targets internet-mediated claim flows and web-accessible provider registries. Governance procedures for drift monitoring, fairness assessment, and change control complete an internet-ready deployment path. The results indicate that disciplined preprocessing and evaluation, more than classifier choice alone, translate AI improvements into measurable economic value and sustainable fraud prevention in digital health ecosystems. Full article
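Per-fold threshold calibration, choosing the decision cut-off that maximizes a fraud-oriented objective on the validation fold rather than defaulting to 0.5, can be sketched as follows (toy scores and plain F1 as the objective; the paper's composite index weighs more metrics):

```python
def calibrate_threshold(scores, labels, candidates=None):
    """Return the threshold (and its F1) maximizing F1 on a validation fold."""
    if candidates is None:
        candidates = sorted(set(scores))
    best_t, best_f1 = 0.5, -1.0
    for t in candidates:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Imbalanced toy fold: most claims legitimate, fraud gets higher scores.
scores = [0.05, 0.10, 0.20, 0.30, 0.55, 0.60, 0.80, 0.90]
labels = [0,    0,    0,    0,    0,    1,    1,    1]
t, f1 = calibrate_threshold(scores, labels)
```

Tuning the cut-off per fold also lets alert volume be matched to investigator capacity, since raising the threshold directly trades recall for fewer, higher-precision alerts.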

22 pages, 2631 KB  
Article
Adversarial Robustness Evaluation for Multi-View Deep Learning Cybersecurity Anomaly Detection
by Min Li, Yuansong Qiao and Brian Lee
Future Internet 2025, 17(10), 459; https://doi.org/10.3390/fi17100459 - 8 Oct 2025
Abstract
In the evolving cyberthreat landscape, a critical challenge for intrusion detection systems (IDSs) lies in defending against meticulously crafted adversarial attacks. Traditional single-view detection frameworks, constrained by their reliance on limited and unidimensional feature representations, are often inadequate for identifying maliciously manipulated samples. To address these limitations, this study proposes a key hypothesis: a detection architecture that adopts a multi-view fusion strategy can significantly enhance the system’s resilience to attacks. To validate the proposed hypothesis, this study developed a multi-view fusion architecture and conducted a series of comparative experiments. A two-pronged validation framework was employed. First, we examined whether the multi-view fusion model demonstrates superior robustness compared to a single-view model in intrusion detection tasks, thereby providing empirical evidence for the effectiveness of multi-view strategies. Second, we evaluated the generalization capability of the multi-view model under varying levels of attack intensity and coverage, assessing its stability in complex adversarial scenarios. Methodologically, a dual-axis training assessment scheme was introduced, comprising (i) continuous gradient testing of perturbation intensity, with the ε parameter increasing from 0.01 to 0.2, and (ii) variation in attack density, with sample contamination rates ranging from 80% to 90%. Adversarial test samples were generated using the Fast Gradient Sign Method (FGSM) on the TON_IoT and UNSW-NB15 datasets. Furthermore, we propose a validation mechanism that integrates both performance and robustness testing. The model is evaluated on clean and adversarial test sets, respectively. By analyzing performance retention and adversarial robustness, we provide a comprehensive assessment of the stability of the multi-view model under varying evaluation conditions. The experimental results provide clear support for the research hypothesis: The multi-view fusion model is more robust than the single-view model under adversarial scenarios. Even under high-intensity attack scenarios, the multi-view model consistently demonstrates superior robustness and stability. More importantly, the multi-view model, through its architectural feature diversity, effectively resists targeted attacks to which the single-view model is vulnerable, confirming the critical role of feature space redundancy in enhancing adversarial robustness. Full article
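FGSM perturbs an input by a step of size ε along the sign of the input gradient of the loss. For a logistic-regression "detector" the gradient is analytic, which allows a fully self-contained sketch (a toy model, not the paper's IDS architectures):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """x' = x + eps * sign(d loss / d x) for binary cross-entropy loss.

    For logistic regression, d loss / d x_i = (sigmoid(w.x + b) - y) * w_i.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

# Toy 'detector': classifies an input as an attack when w.x + b > 0.
w, b = [2.0, -1.0], 0.0
x, y = [0.4, 0.2], 1          # attack sample, correctly classified (score 0.6)
x_adv = fgsm(x, y, w, b, eps=0.2)
score = lambda v: sum(wi * vi for wi, vi in zip(w, v)) + b
```

A perturbation of only 0.2 per feature drives the decision score from 0.6 down to the boundary, illustrating why ε sweeps from 0.01 to 0.2 meaningfully stress a detector.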

13 pages, 748 KB  
Article
Lattice-Based Identity Authentication Protocol with Enhanced Privacy and Scalability for Vehicular Ad Hoc Networks
by Kuo-Yu Tsai and Ying-Hsuan Yang
Future Internet 2025, 17(10), 458; https://doi.org/10.3390/fi17100458 - 7 Oct 2025
Abstract
Vehicular ad hoc networks (VANETs) demand authentication mechanisms that are both secure and privacy-preserving, particularly in light of emerging quantum-era threats. In this work, we propose a lattice-based identity authentication protocol that leverages pseudo-IDs to safeguard user privacy, while allowing the Trusted Authority (TA) to trace misbehaving vehicles when necessary. Compared with existing approaches, the proposed scheme strengthens accountability, improves scalability, and offers resistance against quantum attacks. A comprehensive complexity analysis is presented, addressing computational, communication, and storage overhead. Analysis results under practical parameter settings demonstrate that the protocol delivers robust security with manageable overhead, maintaining authentication latency within the real-time requirements of VANET applications. Full article

19 pages, 1327 KB  
Article
An IoT Architecture for Sustainable Urban Mobility: Towards Energy-Aware and Low-Emission Smart Cities
by Manuel J. C. S. Reis, Frederico Branco, Nishu Gupta and Carlos Serôdio
Future Internet 2025, 17(10), 457; https://doi.org/10.3390/fi17100457 - 4 Oct 2025
Abstract
The rapid growth of urban populations intensifies congestion, air pollution, and energy demand. Green mobility is central to sustainable smart cities, and the Internet of Things (IoT) offers a means to monitor, coordinate, and optimize transport systems in real time. This paper presents an Internet of Things (IoT)-based architecture integrating heterogeneous sensing with edge–cloud orchestration and AI-driven control for green routing and coordinated Electric Vehicle (EV) charging. The framework supports adaptive traffic management, energy-aware charging, and multimodal integration through standards-aware interfaces and auditable Key Performance Indicators (KPIs). We hypothesize that, relative to a static shortest-path baseline, the integrated green routing and EV-charging coordination reduce (H1) mean travel time per trip by ≥7%, (H2) CO2 intensity (g/km) by ≥6%, and (H3) station peak load by ≥20% under moderate-to-high demand conditions. These hypotheses are tested in Simulation of Urban MObility (SUMO) with Handbook Emission Factors for Road Transport (HBEFA) emission classes, using 10 independent random seeds and reporting means with 95% confidence intervals and formal significance testing. The results confirm the hypotheses: average travel time decreases by approximately 9.8%, CO2 intensity by approximately 8%, and peak load by approximately 25% under demand multipliers ≥1.2 and EV shares ≥20%. Gains are attenuated under light demand, where congestion effects are weaker. We further discuss scalability, interoperability, privacy/security, and the simulation-to-deployment gap, and outline priorities for reproducible field pilots. 
In summary, a pragmatic edge–cloud IoT stack has the potential to lower congestion, reduce per-kilometer emissions, and smooth charging demand, provided it is supported by reliable data integration, resilient edge services, and standards-compliant interoperability, thereby contributing to sustainable urban mobility in line with the objectives of SDG 11 (Sustainable Cities and Communities). Full article
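The evaluation protocol above (10 independent seeds, means with 95% confidence intervals, relative reductions against a static baseline) can be sketched in a few lines. The per-seed travel times below are invented for illustration only; they are not values from the paper.

```python
import statistics as stats

def mean_with_ci95(samples):
    """Mean and half-width of a two-sided 95% t-interval."""
    n = len(samples)
    m = stats.mean(samples)
    t_crit = 2.262  # Student's t critical value for df = 9 (10 seeds)
    half = t_crit * stats.stdev(samples) / n ** 0.5
    return m, half

def relative_reduction(baseline, treatment):
    """Percent reduction of the treatment mean relative to the baseline mean."""
    return 100.0 * (stats.mean(baseline) - stats.mean(treatment)) / stats.mean(baseline)

# Hypothetical per-seed mean travel times in seconds (10 SUMO seeds each)
baseline = [612, 598, 605, 620, 589, 610, 602, 615, 595, 608]
routed   = [550, 541, 546, 560, 533, 552, 543, 556, 538, 549]

m, half = mean_with_ci95(routed)
print(f"green routing: {m:.1f} ± {half:.1f} s, "
      f"reduction {relative_reduction(baseline, routed):.1f}%")
```

A hypothesis such as H1 then amounts to checking that the lower confidence bound of the reduction clears the ≥7% threshold.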
25 pages, 666 KB  
Article
Continual Learning for Intrusion Detection Under Evolving Network Threats
by Chaoqun Guo, Xihan Li, Jubao Cheng, Shunjie Yang and Huiquan Gong
Future Internet 2025, 17(10), 456; https://doi.org/10.3390/fi17100456 - 4 Oct 2025
Abstract
In the face of ever-evolving cyber threats, modern intrusion detection systems (IDS) must achieve long-term adaptability without sacrificing performance on previously encountered attacks. Traditional IDS approaches often rely on static training assumptions, making them prone to forgetting old patterns, underperforming in label-scarce conditions, and struggling with imbalanced class distributions as new attacks emerge. To overcome these limitations, we present a continual learning framework tailored for adaptive intrusion detection. Unlike prior methods, our approach is designed to operate under real-world network conditions characterized by high-dimensional, sparse traffic data and task-agnostic learning sequences. The framework combines three core components: a clustering-based memory strategy that selectively retains informative historical samples using DP-Means; multi-level knowledge distillation that aligns current and previous model states at output and intermediate feature levels; and a meta-learning-driven class reweighting mechanism that dynamically adjusts to shifting attack distributions. Empirical evaluations on benchmark intrusion detection datasets demonstrate the framework’s ability to maintain high detection accuracy while effectively mitigating forgetting. Notably, it delivers reliable performance in continually changing environments where the availability of labeled data is limited, making it well-suited for real-world cybersecurity systems. Full article
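The DP-Means step of the memory strategy can be sketched compactly: unlike k-means, the number of clusters is not fixed, so a sample farther than a threshold λ from every centroid opens a new cluster instead of distorting an existing one. This is a generic DP-Means sketch, not the paper's implementation; exemplar selection (e.g. retaining the samples nearest each centroid) would sit on top of it.

```python
import math

def dp_means(points, lam, iters=10):
    """DP-Means clustering: k grows with the data whenever a point lies
    farther than `lam` from all current centroids."""
    centroids = [list(points[0])]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            dists = [math.dist(p, c) for c in centroids]
            j = min(range(len(dists)), key=dists.__getitem__)
            if dists[j] > lam:                 # novel pattern -> new cluster
                centroids.append(list(p))
                j = len(centroids) - 1
            assign[i] = j
        # recompute each centroid as the mean of its members
        for j in range(len(centroids)):
            members = [points[i] for i in range(len(points)) if assign[i] == j]
            if members:
                centroids[j] = [sum(x) / len(members) for x in zip(*members)]
    return centroids, assign
```

Choosing λ trades memory size against coverage: small λ keeps more, finer-grained exemplars of past attack patterns.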
27 pages, 1588 KB  
Article
Toward the Theoretical Foundations of Industry 6.0: A Framework for AI-Driven Decentralized Manufacturing Control
by Andrés Fernández-Miguel, Susana Ortíz-Marcos, Mariano Jiménez-Calzado, Alfonso P. Fernández del Hoyo, Fernando E. García-Muiña and Davide Settembre-Blundo
Future Internet 2025, 17(10), 455; https://doi.org/10.3390/fi17100455 - 3 Oct 2025
Abstract
This study advances toward establishing the theoretical foundations of Industry 6.0 by developing a comprehensive framework that integrates artificial intelligence (AI), decentralized control systems, and cyber–physical production environments for intelligent, sustainable, and adaptive manufacturing. The research employs a tri-modal methodology (deductive, inductive, and abductive reasoning) to construct a theoretical architecture grounded in five interdependent constructs: advanced technology integration, decentralized organizational structures, mass customization and sustainability strategies, cultural transformation, and innovation enhancement. Unlike prior conceptualizations of Industry 6.0, the proposed framework explicitly emphasizes the cyclical feedback between innovation and organizational design, as well as the role of cultural transformation as a binding element across technological, organizational, and strategic domains. The resulting framework demonstrates that AI-driven decentralized control systems constitute the cornerstone of Industry 6.0, enabling autonomous real-time decision-making, predictive zero-defect manufacturing, and strategic organizational agility through distributed intelligent control architectures. This work contributes foundational theory and actionable guidance for transitioning from centralized control paradigms to AI-driven distributed intelligent manufacturing control systems, establishing a conceptual foundation for the emerging Industry 6.0 paradigm. Full article
(This article belongs to the Special Issue Artificial Intelligence and Control Systems for Industry 4.0 and 5.0)
15 pages, 1705 KB  
Article
Enhancing Two-Step Random Access in LEO Satellite Internet with an Attack-Aware Adaptive Backoff Indicator (AA-BI)
by Jiajie Dong, Yong Wang, Qingsong Zhao, Ruiqian Ma and Jiaxiong Yang
Future Internet 2025, 17(10), 454; https://doi.org/10.3390/fi17100454 - 1 Oct 2025
Abstract
Low-Earth-Orbit Satellite Internet (LEO SI), with its capability for seamless global coverage, is a key solution for connecting IoT devices in areas beyond terrestrial network reach, playing a vital role in building a future ubiquitous IoT system. Inspired by the IEEE 802.15.4 Improved Adaptive Backoff Algorithm (I-ABA), this paper proposes an Attack-Aware Adaptive Backoff Indicator (AA-BI) mechanism to enhance the security and robustness of the two-step random access process in LEO SI. The mechanism constructs a composite threat intensity indicator that incorporates collision probability, Denial-of-Service (DoS) attack strength, and replay attack intensity. This quantified threat level is smoothly mapped to a dynamic backoff window to achieve adaptive backoff adjustment. Simulation results demonstrate that, with 200 pieces of user equipment (UE), the AA-BI mechanism significantly improves the access success rate (ASR) and jamming resistance rate (JRR) under various attack scenarios compared to the I-ABA and Binary Exponential Backoff (BEB) algorithms. Notably, under high-attack conditions, AA-BI improves ASR by up to 25.1% and 56.6% over I-ABA and BEB, respectively. Moreover, under high-load conditions with 800 users, AA-BI still maintains superior performance, achieving an ASR of 0.42 and a JRR of 0.68, thereby effectively ensuring the access performance and reliability of satellite Internet in malicious environments. Full article
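The core of the mechanism, a composite threat score mapped smoothly onto a backoff window, can be illustrated as follows. The weights, window bounds, and logistic steepness here are illustrative assumptions, not the paper's calibration.

```python
import math

def threat_level(p_collision, dos_strength, replay_intensity,
                 weights=(0.4, 0.35, 0.25)):
    """Composite threat in [0, 1]: a weighted mix of collision probability,
    DoS attack strength, and replay attack intensity (weights illustrative)."""
    parts = (p_collision, dos_strength, replay_intensity)
    return sum(w * max(0.0, min(1.0, x)) for w, x in zip(weights, parts))

def backoff_indicator(threat, bi_min=4, bi_max=64, k=4.0):
    """Smooth (logistic) map from threat level to a backoff window size,
    so the window adapts gradually rather than jumping between extremes."""
    s = 1.0 / (1.0 + math.exp(-k * (threat - 0.5)))   # logistic squash into (0, 1)
    return round(bi_min + (bi_max - bi_min) * s)
```

Under benign conditions the window stays near its minimum, preserving access latency; as the composite threat rises, UEs spread their retries over a wider window, which is the effect the ASR/JRR gains rely on.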
37 pages, 5285 KB  
Article
Assessing Student Engagement: A Machine Learning Approach to Qualitative Analysis of Institutional Effectiveness
by Abbirah Ahmed, Martin J. Hayes and Arash Joorabchi
Future Internet 2025, 17(10), 453; https://doi.org/10.3390/fi17100453 - 1 Oct 2025
Abstract
In higher education, institutional quality is traditionally assessed through metrics such as academic programs, research output, educational resources, and community services. However, it is important that institutional activities align with student expectations, particularly in relation to interactive learning environments, learning management system interaction, curricular and co-curricular activities, accessibility, support services and other learning resources that ensure academic success and, jointly, career readiness. The growing popularity of student engagement metrics as one of the key measures to evaluate institutional efficacy is now a feature across higher education. By monitoring student engagement, institutions assess the impact of existing resources and make necessary improvements or interventions to ensure student success. This study presents a comprehensive analysis of student feedback from the StudentSurvey.ie dataset (2016–2022), which consists of approximately 275,000 student responses, focusing on student self-perception of engagement in the learning process. By using classical topic modelling techniques such as Latent Dirichlet Allocation (LDA) and Bi-term Topic Modelling (BTM), along with the advanced transformer-based BERTopic model, we identify key themes in student responses that can impact institutional strength performance metrics. BTM proved more effective than LDA for short text analysis, whereas BERTopic offered greater semantic coherence and uncovered hidden themes using deep learning embeddings. Moreover, a custom Named Entity Recognition (NER) model successfully extracted entities such as university personnel, digital tools, and educational resources, with improved performance as the training data size increased. To enable students to offer actionable feedback, suggesting areas of improvement, an n-gram and bigram network analysis was used to focus on common modifiers such as “more” and “better” and trends across student groups.
This study introduces a fully automated, scalable pipeline that integrates topic modelling, NER, and n-gram analysis to interpret student feedback, offering reportable insights and supporting structured enhancements to the student learning experience. Full article
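The modifier-focused bigram pass can be sketched simply: tokenize each comment and count bigrams whose first token is a modifier such as "more" or "better". The tokenizer and modifier list below are minimal stand-ins for the paper's pipeline, not its actual implementation.

```python
import re
from collections import Counter

MODIFIERS = {"more", "better"}  # illustrative subset of improvement modifiers

def modifier_bigrams(comments):
    """Count (modifier, target) bigrams such as ('more', 'feedback') across
    free-text comments -- a simple proxy for actionable improvement requests."""
    counts = Counter()
    for text in comments:
        tokens = re.findall(r"[a-z']+", text.lower())
        for a, b in zip(tokens, tokens[1:]):
            if a in MODIFIERS:
                counts[(a, b)] += 1
    return counts
```

Aggregating these counts per student group then surfaces trends such as which cohorts most often ask for "more feedback" versus "better facilities".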
(This article belongs to the Special Issue Machine Learning and Natural Language Processing)
19 pages, 944 KB  
Article
Robust Optimization for IRS-Assisted SAGIN Under Channel Uncertainty
by Xu Zhu, Litian Kang and Ming Zhao
Future Internet 2025, 17(10), 452; https://doi.org/10.3390/fi17100452 - 1 Oct 2025
Abstract
With the widespread adoption of space–air–ground integrated networks (SAGINs) in next-generation wireless communications, intelligent reflecting surfaces (IRSs) have emerged as a key technology for enhancing system performance through passive link reinforcement. This paper addresses the prevalent issue of channel state information (CSI) uncertainty in practical systems by constructing an IRS-assisted multi-hop SAGIN communication model. To capture the performance degradation caused by channel estimation errors, a norm-bounded uncertainty model is introduced. A simulated annealing (SA)-based phase optimization algorithm is proposed to enhance system robustness and improve worst-case communication quality. Simulation results demonstrate that the proposed method significantly outperforms traditional multiple access strategies (SDMA and NOMA) under various user densities and perturbation levels, highlighting its stability and scalability in complex environments. Full article
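A simulated-annealing sweep over discrete IRS phase shifts looks roughly as follows. The single-hop coherent-combining objective is a toy stand-in for the paper's worst-case multi-hop SAGIN model, and the cooling schedule is an assumption.

```python
import cmath
import math
import random

def sa_phase_opt(channel_phases, levels=8, iters=2000, t0=1.0, alpha=0.999, seed=0):
    """Simulated annealing over discrete IRS phase shifts (2*pi*k/levels),
    maximizing the magnitude of the coherently combined reflected signal."""
    rng = random.Random(seed)
    n = len(channel_phases)

    def gain(conf):
        return abs(sum(cmath.exp(1j * (channel_phases[i] + 2 * math.pi * conf[i] / levels))
                       for i in range(n)))

    conf = [rng.randrange(levels) for _ in range(n)]
    best, best_g, t = list(conf), gain(conf), t0
    for _ in range(iters):
        cand = list(conf)
        cand[rng.randrange(n)] = rng.randrange(levels)   # perturb one element
        d = gain(cand) - gain(conf)
        if d > 0 or rng.random() < math.exp(d / t):      # accept uphill, sometimes downhill
            conf = cand
            if gain(conf) > best_g:
                best, best_g = list(conf), gain(conf)
        t *= alpha                                       # cool down
    return best, best_g
```

The robust variant in the paper would evaluate `gain` at the worst case over a norm-bounded CSI error set rather than at the nominal channel phases.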
18 pages, 2045 KB  
Article
TwinP2G: A Software Application for Optimal Power-to-Gas Planning
by Eugenia Skepetari, Sotiris Pelekis, Hercules Koutalidis, Alexandros Menelaos Tzortzis, Georgios Kormpakis, Christos Ntanos and Dimitris Askounis
Future Internet 2025, 17(10), 451; https://doi.org/10.3390/fi17100451 - 30 Sep 2025
Abstract
This paper presents TwinP2G, a software application for optimal planning of investments in power-to-gas (PtG) systems. TwinP2G provides simulation and optimization services for the techno-economic analysis of user-customized energy networks. The core of TwinP2G is based on power flow simulation; however, it supports energy sector coupling, including electricity, green hydrogen, natural gas, and synthetic methane. The framework provides a user-friendly user interface (UI) suitable for various user roles, including data scientists and energy experts, using visualizations and metrics on the assessed investments. An identity and access management mechanism also serves the security and authorization needs of the framework. Finally, TwinP2G revolutionizes the concept of data availability and data sharing by granting its users access to distributed energy datasets available in the EnerShare Data Space. These data are available to TwinP2G users for conducting their experiments and extracting useful insights on optimal PtG investments for the energy grid. Full article
18 pages, 654 KB  
Article
Trustworthy Face Recognition as a Service: A Multi-Layered Approach for Mitigating Spoofing and Ensuring System Integrity
by Mostafa Kira, Zeyad Alajamy, Ahmed Soliman, Yusuf Mesbah and Manuel Mazzara
Future Internet 2025, 17(10), 450; https://doi.org/10.3390/fi17100450 - 30 Sep 2025
Abstract
Facial recognition systems are increasingly used for authentication across domains such as finance, e-commerce, and public services, but their growing adoption raises significant concerns about spoofing attacks enabled by printed photos, replayed videos, or AI-generated deepfakes. To address this gap, we introduce a multi-layered Face Recognition-as-a-Service (FRaaS) platform that integrates passive liveness detection with active challenge–response mechanisms, thereby defending against both low-effort and sophisticated presentation attacks. The platform is designed as a scalable cloud-based solution, complemented by an open-source SDK for seamless third-party integration, and guided by ethical AI principles of fairness, transparency, and privacy. A comprehensive evaluation validates the system’s logic and implementation: (i) Frontend audits using Lighthouse consistently scored above 96% in performance, accessibility, and best practices; (ii) SDK testing achieved over 91% code coverage with reliable OAuth flow and error resilience; (iii) Passive liveness layer employed the DeepPixBiS model, which achieves an Average Classification Error Rate (ACER) of 0.4 on the OULU–NPU benchmark, outperforming prior state-of-the-art methods; and (iv) Load simulations confirmed high throughput (276 req/s), low latency (95th percentile at 1.51 ms), and zero error rates. Together, these results demonstrate that the proposed platform is robust, scalable, and trustworthy for security-critical applications. Full article
35 pages, 2054 KB  
Article
Evaluating Learning Success, Engagement, and Usability of Moalemy: An Arabic Rule-Based Chatbot
by Dalal Al Faia and Khalid Alomar
Future Internet 2025, 17(10), 449; https://doi.org/10.3390/fi17100449 - 30 Sep 2025
Abstract
A rule-based chatbot is a type of chatbot that responds by matching users’ queries with pre-defined rules. In e-learning, chatbots can enhance the learning experience by assisting teachers in delivering learning materials pleasantly. This research introduces Moalemy, an Arabic rule-based chatbot designed to provide a personalized learning experience by tailoring educational content to each learner’s prior knowledge. This empirical study evaluates learning outcomes, user engagement, and system usability using both subjective and objective metrics. It compares the effectiveness of a proposed Arabic rule-based chatbot with adaptive personalization to that of a static, non-personalized chatbot. The comparison was conducted across three levels of task difficulty (easy, medium, and hard) using a 2 × 3 within-subject experimental design with 34 participants. Descriptive statistics revealed higher mean values of usability and engagement in the adaptive method. Although the analysis revealed no significant variations in learning outcomes and SUS scores, it showed statistically significant differences in user satisfaction in favor of the adaptive method, p = 0.003. Analyses showed no significant differences between the two learning methods in terms of effectiveness, efficiency, and engagement. Across difficulty levels, the adaptive method outperforms the static method in terms of efficiency and effectiveness at the medium level, and in engagement at the easy level. Full article
36 pages, 2113 KB  
Article
Self-Sovereign Identities and Content Provenance: VeriTrust—A Blockchain-Based Framework for Fake News Detection
by Maruf Farhan, Usman Butt, Rejwan Bin Sulaiman and Mansour Alraja
Future Internet 2025, 17(10), 448; https://doi.org/10.3390/fi17100448 - 30 Sep 2025
Abstract
The widespread circulation of digital misinformation exposes a critical shortcoming in prevailing detection strategies, namely, the absence of robust mechanisms to confirm the origin and authenticity of online content. This study addresses this by introducing VeriTrust, a conceptual and provenance-centric framework designed to establish content-level trust by integrating Self-Sovereign Identity (SSI), blockchain-based anchoring, and AI-assisted decentralized verification. The proposed system is designed to operate through three key components: (1) issuing Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) through Hyperledger Aries and Indy; (2) anchoring cryptographic hashes of content metadata to an Ethereum-compatible blockchain using Merkle trees and smart contracts; and (3) enabling a community-led verification model enhanced by federated learning with future extensibility toward zero-knowledge proof techniques. Theoretical projections, derived from established performance benchmarks, suggest the framework offers low latency and high scalability for content anchoring and minimal on-chain transaction fees. It also prioritizes user privacy by ensuring no on-chain exposure of personal data. VeriTrust redefines misinformation mitigation by shifting from reactive content-based classification to proactive provenance-based verification, forming a verifiable link between digital content and its creator. VeriTrust, while currently at the conceptual and theoretical validation stage, holds promise for enhancing transparency, accountability, and resilience against misinformation attacks across journalism, academia, and online platforms. Full article
(This article belongs to the Special Issue AI and Blockchain: Synergies, Challenges, and Innovations)
26 pages, 1076 KB  
Article
NL-COMM: Enabling High-Performing Next-Generation Networks via Advanced Non-Linear Processing
by Chathura Jayawardena, George Ntavazlis Katsaros and Konstantinos Nikitopoulos
Future Internet 2025, 17(10), 447; https://doi.org/10.3390/fi17100447 - 30 Sep 2025
Abstract
Future wireless networks are expected to deliver enhanced spectral efficiency while being energy efficient. MIMO and other non-orthogonal transmission schemes, such as non-orthogonal multiple access (NOMA), offer substantial theoretical spectral efficiency gains. However, these gains have yet to translate into practical deployments, largely due to limitations in current signal processing methods. Linear transceiver processing, though widely adopted, fails to fully exploit non-orthogonal transmissions, forcing massive MIMO systems to use a disproportionately large number of RF chains for relatively few streams, increasing power consumption. Non-linear processing can unlock the full potential of non-orthogonal schemes but is hindered by high computational complexity and integration challenges. Moreover, existing message-passing receivers for NOMA depend on specially designed sparse signals, limiting resource allocation flexibility and efficiency. This work presents NL-COMM, an efficient non-linear processing framework that translates the theoretical gains of non-orthogonal transmissions into practical benefits for both the uplink and downlink. NL-COMM delivers over 200% spectral efficiency gains, enables 50% reductions in antennas and RF chains (and thus base station power consumption), and increases concurrently supported users by 450%. In distributed MIMO deployments, the antenna reduction halves fronthaul bandwidth requirements, mitigating a key system bottleneck. Furthermore, NL-COMM offers the flexibility to unlock new NOMA schemes. Finally, we present both hardware and software architectures for NL-COMM that support massively parallel execution, demonstrating how advanced non-linear processing can be realized in practice to meet the demands of next-generation networks. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)
33 pages, 1715 KB  
Article
A Dependency-Aware Task Stealing Framework for Mobile Crowd Computing
by Sanjay Segu Nagesh, Niroshinie Fernando, Seng W. Loke, Azadeh Ghari Neiat and Pubudu N. Pathirana
Future Internet 2025, 17(10), 446; https://doi.org/10.3390/fi17100446 - 29 Sep 2025
Abstract
Mobile crowd computing (MCdC) leverages the collective computational resources of nearby mobile devices to execute complex tasks without relying on remote cloud infrastructure. However, existing MCdC systems struggle with device heterogeneity and complex application dependencies, often leading to inefficient resource utilization and poor scalability. This paper presents Honeybee-Tx, a novel dependency-aware work stealing framework designed for heterogeneous mobile device clusters. The framework introduces three key contributions: (1) capability-aware job selection that matches computational tasks to device capabilities through lightweight profiling and dynamic scoring, (2) static dependency-aware work stealing that respects predefined task dependencies while maintaining decentralized execution, and (3) staged result transfers that minimize communication overhead by selectively transmitting intermediate results. We evaluate Honeybee-Tx using two applications: Human Activity Recognition (HAR) for sensor analytics and multi-camera video processing for compute-intensive workflows. The experimental results on five heterogeneous Android devices (OnePlus 5T, Pixel 6 Pro, and Pixel 7) demonstrate performance improvements over monolithic execution. For HAR workloads, Honeybee-Tx achieves up to 4.72× speed-up while reducing per-device energy consumption by 63% (from 1.5% to 0.56% battery usage). For video processing tasks, the framework delivers 2.06× speed-up compared to monolithic execution, with 51.4% energy reduction and 71.6% memory savings, while generating 42% less network traffic than non-dependency-aware approaches. These results demonstrate that Honeybee-Tx successfully addresses key challenges in heterogeneous MCdC environments, enabling efficient execution of dependency-aware applications across diverse mobile device capabilities. 
The framework provides a practical foundation for collaborative mobile computing applications in scenarios where cloud connectivity is limited or unavailable. Full article
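The capability-aware selection step can be sketched as a scoring pass over dependency-ready tasks. The task/device fields and the fit formula below are illustrative assumptions, not Honeybee-Tx's actual profiler or scoring calibration.

```python
def ready_tasks(tasks, done):
    """Tasks not yet completed whose declared dependencies have all completed."""
    return [t for t in tasks if t["id"] not in done
            and all(d in done for d in t["deps"])]

def select_job(tasks, done, device):
    """Capability-aware selection: among dependency-ready tasks, pick the one
    whose resource demand best fits the stealing device's profiled capability."""
    candidates = ready_tasks(tasks, done)
    if not candidates:
        return None

    def score(t):
        cpu_fit = min(device["cpu"] / t["cpu"], 1.0)      # penalize under-provisioned CPU
        mem_ok = 1.0 if device["mem"] >= t["mem"] else 0.0  # hard memory constraint
        return mem_ok * cpu_fit

    return max(candidates, key=score)
```

Restricting stealing to dependency-ready tasks is what keeps execution decentralized while still respecting the static task graph; the staged result transfers then only ship the intermediate outputs that successor tasks actually consume.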
23 pages, 832 KB  
Article
Sentiment Analysis in Mexican Spanish: A Comparison Between Fine-Tuning and In-Context Learning with Large Language Models
by Tomás Bernal-Beltrán, Mario Andrés Paredes-Valverde, María del Pilar Salas-Zárate, José Antonio García-Díaz and Rafael Valencia-García
Future Internet 2025, 17(10), 445; https://doi.org/10.3390/fi17100445 - 29 Sep 2025
Abstract
The proliferation of social media has made Sentiment Analysis an essential tool for understanding user opinions, particularly in underrepresented language variants such as Mexican Spanish. Recent advances in Large Language Models have enabled effective sentiment analysis through in-context learning techniques, reducing the need for supervised training. This study compares zero- and few-shot performance with traditional fine-tuning approaches on tourism-related texts in Mexican Spanish. Two annotated datasets from the REST-MEX 2022 and 2023 shared tasks were used for this purpose. Results show that fine-tuning, particularly with the MarIA model, achieves the best overall performance. However, modern LLMs that use in-context learning strategies, such as Mixtral 8x7B for zero-shot and Mistral 7B for few-shot, demonstrate strong potential in low-resource settings by closely approximating the accuracy of fine-tuned models, suggesting that in-context learning is a viable alternative to fine-tuning for sentiment analysis in Mexican Spanish when labeled data is limited. These approaches can enable intelligent, data-driven digital services with applications in tourism platforms and urban information systems that enhance user experience and trust in large-scale socio-technical ecosystems. Full article
24 pages, 1641 KB  
Article
Intellectual Property Protection Through Blockchain: Introducing the Novel SmartRegistry-IP for Secure Digital Ownership
by Abeer S. Al-Humaimeedy
Future Internet 2025, 17(10), 444; https://doi.org/10.3390/fi17100444 - 29 Sep 2025
Abstract
The rise of digital content has made the need for reliable and practical intellectual property (IP) management systems more critical than ever. Most traditional IP systems are prone to issues such as delays, inefficiency, and data security breaches. This paper introduces SmartRegistry-IP, a system developed to simplify the registration, licensing, and transfer of intellectual property assets in a secure and scalable decentralized environment. By utilizing the InterPlanetary File System (IPFS) for decentralized storage, SmartRegistry-IP achieves a low storage latency of 300 milliseconds, outperforming both cloud storage (500 ms) and local storage (700 ms). The system also supports a high transaction throughput of 120 transactions per second. Through the use of smart contracts, licensing agreements are automatically and securely enforced, reducing the need for intermediaries and lowering operational costs. Additionally, the proof-of-work process verifies all transactions, ensuring higher security and maintaining data consistency. The platform integrates an intuitive graphical user interface that enables seamless asset uploads, license management, and analytics visualization in real time. SmartRegistry-IP demonstrates superior efficiency compared to traditional systems, achieving a blockchain delay of 300 ms, which is half the latency of standard systems, averaging 600 ms. According to this study, adopting SmartRegistry-IP provides IP organizations with enhanced security and transparent management, ensuring they can overcome operational challenges regardless of their size. As a result, the use of blockchain for intellectual property management is expected to increase, helping maintain precise records and reducing time spent on online copyright registration. Full article
29 pages, 1644 KB  
Article
Commercial Off-the-Shelf IoT-Based Infant Car Seat Application for Preventing the Forgotten Baby Syndrome
by Apostolos Panagiotopoulos and Vasileios Karyotis
Future Internet 2025, 17(10), 443; https://doi.org/10.3390/fi17100443 - 29 Sep 2025
Abstract
The Forgotten Baby Syndrome (FBS), the accidental abandonment of infants in vehicles, continues to result in otherwise preventable tragedies worldwide. This work presents a prototype system called SafeCuddle, designed to mitigate the risks associated with FBS. The proposed solution utilizes an Arduino UNO R4 WiFi microcontroller integrated with low-cost IoT sensors for real-time data acquisition and processing. Processed signals are visualized via a Python-based desktop application. A key feature of the system is its ability to issue immediate alerts to the driver upon detecting their departure from the vehicle while an infant remains seated. An extensive review of the syndrome’s etiology identifies disrupted routines and the high demands of modern life as primary contributing factors. In response, the proposed system can be easily implemented with commercial off-the-shelf components and aims to support caregivers by acting as a fail-safe mechanism. The paper is structured into two primary sections: (i) an analytical overview of FBS and its contributing factors and (ii) a detailed account of the system’s design, implementation, operational workflow, and evaluation results. The unique contribution of this work lies in the integration of a low-cost, real-time alert system within a modular and easily deployable architecture that, if properly commercialized, can be integrated into existing infant car seats as an aftermarket solution, specifically tailored to prevent FBS through immediate driver feedback at the critical moment of risk. Full article
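The driver-departure alert at the heart of such a system reduces to a small debounced rule: alert only when the seat stays occupied while the driver has been absent for several consecutive sensor samples. The grace threshold below is an illustrative value, not SafeCuddle's calibration.

```python
from dataclasses import dataclass

@dataclass
class SeatMonitor:
    """Debounced FBS alert logic: raise an alert only after the driver has
    been absent for `grace` consecutive samples while the seat is occupied."""
    grace: int = 3          # consecutive driver-absent samples before alerting
    _absent: int = 0

    def sample(self, seat_occupied: bool, driver_present: bool) -> bool:
        if not seat_occupied or driver_present:
            self._absent = 0    # safe state: reset the debounce counter
            return False
        self._absent += 1
        return self._absent >= self.grace
```

Debouncing matters because low-cost pressure and proximity sensors produce transient dropouts; a single noisy sample should not trigger the alarm, but sustained driver absence must.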
(This article belongs to the Special Issue Wireless Sensor Networks and Internet of Things)
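The abstract's core alert condition (the driver has just left the vehicle while the infant seat is still occupied) can be sketched as a simple state check. This is a minimal illustration only: the sensor names and data structure below are assumptions for the sketch, not the paper's actual SafeCuddle implementation.

```python
from dataclasses import dataclass

@dataclass
class CabinState:
    # Hypothetical sensor readings; field names are illustrative.
    seat_occupied: bool    # e.g., pressure sensor under the infant seat
    driver_present: bool   # e.g., driver-seat or driver-door sensor
    engine_on: bool

def should_alert(prev: CabinState, curr: CabinState) -> bool:
    """Alert at the critical moment: the driver departed between the
    two samples while the infant seat remains occupied."""
    driver_departed = prev.driver_present and not curr.driver_present
    return driver_departed and curr.seat_occupied

# Driver leaves while the seat is still occupied -> alert fires.
before = CabinState(seat_occupied=True, driver_present=True, engine_on=True)
after = CabinState(seat_occupied=True, driver_present=False, engine_on=False)
alert = should_alert(before, after)  # True
```

Comparing consecutive samples, rather than the current state alone, captures the departure *event* the abstract emphasizes, so the alert triggers once at the moment of risk instead of continuously.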
27 pages, 4067 KB  
Article
Opportunities for Adapting Data Write Latency in Geo-Distributed Replicas of Multicloud Systems
by Olha Kozina, José Machado, Maksym Volk, Hennadii Heiko, Volodymyr Panchenko, Mykyta Kozin and Maryna Ivanova
Future Internet 2025, 17(10), 442; https://doi.org/10.3390/fi17100442 - 28 Sep 2025
Abstract
This paper proposes an AI-based approach to adapting the data write latency in multicloud systems (MCSs) that maintains data consistency across geo-distributed replicas of cloud service providers (CSPs). The approach dynamically forms adaptation scenarios based on a proposed model of multi-criteria optimization of data write latency. The generated scenarios aim to maintain the required data write latency under changes in the intensity of the incoming request flow and in the network transmission time between replicas in CSPs. To generate adaptation scenarios, the features of the algorithmic Latord method of data consistency are used. To determine the threshold values and predict the external parameters affecting data write latency, we propose using trained AI models. An artificial neural network forms the rules for changing the parameters of the Latord method when the external operating conditions of the MCS change. The features of the Latord method that influence data write latency are demonstrated through simulation experiments on three MCSs with different configurations. To confirm the effectiveness of the developed approach, an adaptation scenario was evaluated that reduces the data write latency by 13% when the standard deviation of the network transmission time between the data centers (DCs) of the MCS changes. Full article
(This article belongs to the Special Issue Artificial Intelligence and Control Systems for Industry 4.0 and 5.0)
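The adaptation loop described above can be pictured as a threshold-driven scenario selector: predicted write latency is compared against the requirement, and an adaptation scenario is chosen when the requirement would be violated. The thresholds, scenario names, and decision rule below are purely schematic assumptions; the paper's actual Latord parameters and trained AI models are not reproduced here.

```python
def choose_scenario(predicted_latency_ms: float,
                    required_latency_ms: float,
                    net_stddev_ms: float) -> str:
    """Schematic scenario selection for a geo-distributed MCS.
    All thresholds and scenario labels are hypothetical."""
    if predicted_latency_ms <= required_latency_ms:
        # Requirement still met: keep the current configuration.
        return "keep-current-parameters"
    if net_stddev_ms > 10.0:
        # High variance in inter-replica transmission time suggests
        # retuning the consistency method's timing parameters.
        return "retune-consistency-timing"
    # Otherwise attribute the violation to request-flow intensity.
    return "redistribute-write-load"

scenario = choose_scenario(predicted_latency_ms=62.0,
                           required_latency_ms=50.0,
                           net_stddev_ms=14.5)
```

In the paper, the thresholds and predictions come from learned AI models rather than fixed constants; this sketch only shows where such outputs would plug into the decision.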
46 pages, 1768 KB  
Article
Healing Intelligence: A Bio-Inspired Metaheuristic Optimization Method Using Recovery Dynamics
by Vasileios Charilogis and Ioannis G. Tsoulos
Future Internet 2025, 17(10), 441; https://doi.org/10.3390/fi17100441 - 27 Sep 2025
Abstract
BioHealing Optimization (BHO) is a bio-inspired metaheuristic that operationalizes the injury–recovery paradigm through an iterative loop of recombination, stochastic injury, and guided healing. The algorithm is further enhanced by adaptive mechanisms, including a scar map, hot-dimension focusing, RAGE/hyper-RAGE bursts (Rapid Aggressive Global Exploration), and healing-rate modulation, enabling a dynamic balance between exploration and exploitation. Across 17 benchmark problems with 30 runs each, under a fixed budget of 1.5·10^5 function evaluations, BHO achieves the lowest overall rank in both the “best-of-runs” (47) and the “mean-of-runs” (48) comparisons, giving an overall rank sum of 95 and an average rank of 2.794. Representative first-place results include Frequency-Modulated Sound Waves, the Lennard–Jones potential, and Electricity Transmission Pricing. In contrast to prior healing-inspired optimizers such as Wound Healing Optimization (WHO) and Synergistic Fibroblast Optimization (SFO), BHO uniquely integrates (i) an explicit tri-phasic architecture (DE/best/1/bin recombination → Gaussian/Lévy injury → guided healing), (ii) per-dimension stateful adaptation (scar map, hot dimensions), and (iii) stagnation-triggered bursts (RAGE/hyper-RAGE). These features provide a principled exploration–exploitation separation that is absent in WHO and SFO. Full article
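The tri-phasic loop named in the abstract (DE/best/1/bin recombination → Gaussian injury → guided healing) can be sketched as follows. This is a minimal illustration of the three phases only, with assumed parameter values; BHO's adaptive mechanisms (scar map, hot-dimension focusing, RAGE bursts, healing-rate modulation, Lévy injury) are deliberately omitted.

```python
import numpy as np

def tri_phasic_sketch(f, lo, hi, pop_size=20, iters=200, F=0.5, cr=0.9,
                      injury_sigma=0.02, heal_rate=0.5, seed=0):
    """Sketch of BHO's core loop: recombination, injury, healing,
    with greedy replacement. Parameter values are assumptions."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        best = pop[np.argmin(fit)]
        for i in range(pop_size):
            # Phase 1: DE/best/1/bin recombination.
            r1, r2 = rng.choice([j for j in range(pop_size) if j != i],
                                size=2, replace=False)
            mutant = best + F * (pop[r1] - pop[r2])
            mask = rng.random(dim) < cr
            mask[rng.integers(dim)] = True  # at least one mutant gene
            trial = np.where(mask, mutant, pop[i])
            # Phase 2: stochastic Gaussian "injury".
            trial = trial + rng.normal(0.0, injury_sigma, dim) * (hi - lo)
            # Phase 3: guided "healing" toward the current best.
            trial = trial + heal_rate * rng.random(dim) * (best - trial)
            trial = np.clip(trial, lo, hi)
            ft = f(trial)
            if ft < fit[i]:  # greedy replacement
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], float(fit.min())

# Example: minimize the sphere function in 5 dimensions.
x, fx = tri_phasic_sketch(lambda v: float(np.sum(v**2)),
                          np.full(5, -5.0), np.full(5, 5.0))
```

The injury phase keeps perturbing accepted solutions while the healing phase contracts them toward the incumbent best, which is the exploration–exploitation separation the abstract attributes to the tri-phasic design.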