Journal Description
Future Internet is an international, peer-reviewed, open access journal on internet technologies and the information society, published monthly online by MDPI.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, Inspec, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q1 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.9 days after submission; acceptance to publication takes 2.7 days (median values for papers published in this journal in the second half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.8 (2023); 5-Year Impact Factor: 3.0 (2023)
Latest Articles
Inter-Data Center RDMA: Challenges, Status, and Future Directions
Future Internet 2025, 17(6), 242; https://doi.org/10.3390/fi17060242 (registering DOI) - 29 May 2025
Abstract
Remote Direct Memory Access (RDMA) has been widely implemented in data centers (DCs) due to its high-bandwidth, low-latency, and low-overhead characteristics. In recent years, as various applications relying on inter-DC interconnection have continuously emerged, the demand for deploying RDMA across DCs has been on the rise. Numerous studies have focused on intra-DC RDMA. However, research on inter-DC RDMA is relatively scarce, yet it is showing a growing tendency. Inspired by this trend, this article identifies and discusses specific challenges in inter-DC RDMA deployment, such as congestion control and load balancing. Subsequently, it surveys the recent progress in enhancing the applicability of inter-DC RDMA. Finally, it presents future research directions and opportunities. As the first review article focusing on inter-DC RDMA, this article aims to provide valuable insights and guidance for future research in this emerging field. By systematically reviewing the current state of inter-DC RDMA, we hope to establish a foundation that will inspire and direct subsequent studies.
Full article
Open Access Article
Navigating Data Corruption in Machine Learning: Balancing Quality, Quantity, and Imputation Strategies
by
Qi Liu and Wanjing Ma
Future Internet 2025, 17(6), 241; https://doi.org/10.3390/fi17060241 - 29 May 2025
Abstract
Data corruption, including missing and noisy entries, is a common challenge in real-world machine learning. This paper examines its impact and mitigation strategies through two experimental setups: supervised NLP tasks (NLP-SL) and deep reinforcement learning for traffic signal control (Signal-RL). This study analyzes how varying corruption levels affect model performance, evaluates imputation strategies, and assesses whether expanding datasets can counteract corruption effects. The results indicate that performance degradation follows a diminishing-return pattern, well modeled by an exponential function. Noisy data harm performance more than missing data, especially in sequential tasks like Signal-RL, where errors may compound. Imputation helps recover missing data but can introduce noise, with its effectiveness depending on corruption severity and imputation accuracy. This study identifies clear boundaries between when imputation is beneficial versus harmful, and classifies tasks as either noise-sensitive or noise-insensitive. Larger datasets reduce corruption effects but offer diminishing gains at high corruption levels. These insights guide the design of robust systems, emphasizing smart data collection, imputation decisions, and preprocessing strategies in noisy environments.
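The diminishing-return degradation described in the abstract can be illustrated with a simple curve fit. The sketch below is a hedged illustration on synthetic data, not the authors' code: the model form `degradation(c, p_max, a, k)` and all parameter values are assumptions inferred from the abstract's mention of an exponential fit.

```python
# Minimal sketch (assumed model form, synthetic data): fit an exponential
# diminishing-return curve to performance vs. corruption level.
import numpy as np
from scipy.optimize import curve_fit

def degradation(c, p_max, a, k):
    """Hypothetical form: performance drops exponentially toward (p_max - a)."""
    return p_max - a * (1.0 - np.exp(-k * c))

corruption = np.linspace(0.0, 1.0, 21)            # fraction of corrupted entries
rng = np.random.default_rng(0)
accuracy = degradation(corruption, 0.92, 0.35, 4.0) + rng.normal(0, 0.01, corruption.size)

params, _ = curve_fit(degradation, corruption, accuracy, p0=[0.9, 0.3, 3.0])
print("fitted p_max, a, k:", np.round(params, 3))
```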
Full article
(This article belongs to the Special Issue Smart Technology: Artificial Intelligence, Robotics and Algorithms)
Open Access Article
Enhancing Customer Quality of Experience Through Omnichannel Digital Strategies: Evidence from a Service Environment in an Emerging Context
by
Fabricio Miguel Moreno-Menéndez, Victoriano Eusebio Zacarías-Rodríguez, Sara Ricardina Zacarías-Vallejos, Vicente González-Prida, Pedro Emil Torres-Quillatupa, Hilario Romero-Girón, José Francisco Vía y Rada-Vittes and Luis Ángel Huaynate-Espejo
Future Internet 2025, 17(6), 240; https://doi.org/10.3390/fi17060240 (registering DOI) - 29 May 2025
Abstract
The proliferation of digital platforms and interactive technologies has transformed the way service providers engage with their customers, particularly in emerging economies, where digital inclusion is an ongoing process. This study explores the relationship between omnichannel strategies and customer satisfaction, conceptualized here as a proxy for Quality of Experience (QoE), within a smart service station located in a digitally underserved region. Grounded in customer journey theory and the expectancy–disconfirmation paradigm, the study investigates how data integration, digital payment systems, and logistical flexibility—key components of intelligent e-service systems—influence user perceptions and satisfaction. Based on a correlational design with a non-probabilistic sample of 108 customers, the findings reveal a moderate association between overall omnichannel integration and satisfaction (ρ = 0.555, p < 0.01). However, a multiple regression analysis indicates that no individual dimension significantly predicts satisfaction (adjusted R² = 0.002). These results suggest that while users value system integration and interaction flexibility, no single technical feature drives satisfaction independently. The study contributes to the growing field of intelligent human-centric service systems by contextualizing QoE and digital inclusion within emerging markets and by emphasizing the importance of perceptual factors in ICT-enabled environments.
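For readers unfamiliar with the statistics reported above, the following sketch shows how a Spearman correlation and an adjusted R² from a multiple regression of this kind are typically computed. The data are synthetic and the variable names (`data_integration`, `digital_payment`, `logistics_flex`) are hypothetical stand-ins for the study's dimensions.

```python
# Minimal sketch with synthetic Likert-style survey data (n = 108, as in the study).
import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 108
data_integration = rng.integers(1, 6, n)   # hypothetical 1-5 scores
digital_payment = rng.integers(1, 6, n)
logistics_flex = rng.integers(1, 6, n)
satisfaction = rng.integers(1, 6, n)

# Spearman rho between overall omnichannel integration and satisfaction.
overall = data_integration + digital_payment + logistics_flex
rho, p_value = spearmanr(overall, satisfaction)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")

# Multiple regression of satisfaction on the individual dimensions.
X = sm.add_constant(np.column_stack([data_integration, digital_payment, logistics_flex]))
ols = sm.OLS(satisfaction, X).fit()
print(f"adjusted R^2 = {ols.rsquared_adj:.3f}")
```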
Full article
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)
Open Access Article
Multi Stage Retrieval for Web Search During Crisis
by
Claudiu Constantin Tcaciuc, Daniele Rege Cambrin and Paolo Garza
Future Internet 2025, 17(6), 239; https://doi.org/10.3390/fi17060239 - 29 May 2025
Abstract
During crisis events, digital information volume can increase by over 500% within hours, with social media platforms alone generating millions of crisis-related posts. This volume creates critical challenges for emergency responders who require timely access to the concise subset of accurate information they are interested in. Existing approaches strongly rely on the power of large language models. However, the use of large language models limits the scalability of the retrieval procedure and may introduce hallucinations. This paper introduces a novel multi-stage text retrieval framework to enhance information retrieval during crises. Our framework employs a novel three-stage extractive pipeline where (1) a topic modeling component filters candidates based on thematic relevance, (2) an initial high-recall lexical retriever identifies a broad candidate set, and (3) a dense retriever reranks the remaining documents. This architecture balances computational efficiency with retrieval effectiveness, prioritizing high recall in early stages while refining precision in later stages. The framework avoids the introduction of hallucinations, achieving a 15% improvement in BERT-Score compared to existing solutions without requiring any costly abstractive model. Moreover, our sequential approach accelerates the search process by 5% compared to a single-stage dense retrieval approach, with minimal effect on the performance in terms of BERT-Score.
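The staged-cascade idea can be sketched in a few lines. The scorers below are deliberately simple stand-ins (keyword-overlap topic filter, term-overlap lexical ranking, TF-IDF cosine as a proxy for a dense reranker), not the components used in the paper; documents, the query, and the topic-term list are all made up.

```python
# Minimal sketch of a three-stage retrieval cascade over crisis documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "flood warning issued for the river district, evacuation routes open",
    "concert tickets on sale this weekend downtown",
    "shelter locations and emergency supplies for flood victims",
    "hurricane season fashion trends",
]
query = "where are flood shelters and evacuation routes"

# Stage 1: topic filter -- keep documents sharing at least one crisis-topic term.
topic_terms = {"flood", "evacuation", "shelter", "emergency", "hurricane"}
candidates = [d for d in docs if topic_terms & set(d.split())]

# Stage 2: high-recall lexical retrieval -- rank by raw term overlap with the query.
q_terms = set(query.split())
lexical = sorted(candidates, key=lambda d: len(q_terms & set(d.split())), reverse=True)[:3]

# Stage 3: rerank survivors with a vector model (TF-IDF cosine as a dense-retriever proxy).
vec = TfidfVectorizer().fit(lexical + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(lexical))[0]
ranked = [d for _, d in sorted(zip(sims, lexical), reverse=True)]
print(ranked[0])
```

The design point is that the cheap early stages shrink the candidate set so the more expensive reranker only sees a handful of documents.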
Full article
(This article belongs to the Collection Innovative People-Centered Solutions Applied to Industries, Cities and Societies)
Open Access Systematic Review
Security Challenges for Users of Extensible Smart Home Hubs: A Systematic Literature Review
by
Tobias Rødahl Thingnes and Per Håkon Meland
Future Internet 2025, 17(6), 238; https://doi.org/10.3390/fi17060238 - 28 May 2025
Abstract
Smart home devices and home automation systems, which control features such as lights, blinds, heaters, door locks, cameras, and speakers, have become increasingly popular and can be found in homes worldwide. Central to these systems are smart home hubs, which serve as the primary control units, allowing users to manage connected devices from anywhere in the world. While this feature is convenient, it also makes smart home hubs attractive targets for cyberattacks. Unfortunately, the average user lacks substantial cybersecurity knowledge, making the security of these systems crucial. This is particularly important as smart home systems are expected to safeguard users’ privacy and security within their homes. This paper synthesizes eight prevalent cybersecurity challenges associated with smart home hubs through a systematic literature review. The review process involved identifying relevant keywords, searching, and screening 713 papers in multiple rounds to arrive at a final selection of 16 papers, which were then summarized and synthesized. This process included research from Scopus published between January 2019 and November 2024 and excluded papers on prototypes or individual features. The study is limited by scarce academic sources on open-source smart home hubs, strict selection criteria, rapid technological changes, and some subjectivity in study inclusion. The security of extensible smart home hubs is a complex and evolving issue. This review provides a foundation for understanding the key challenges and potential solutions, which is useful for future research and development to secure this increasingly important part of our everyday homes.
Full article
(This article belongs to the Special Issue Human-Centered Cybersecurity)
Open Access Article
Machine Learning and Deep Learning-Based Atmospheric Duct Interference Detection and Mitigation in TD-LTE Networks
by
Rasendram Muralitharan, Upul Jayasinghe, Roshan G. Ragel and Gyu Myoung Lee
Future Internet 2025, 17(6), 237; https://doi.org/10.3390/fi17060237 - 27 May 2025
Abstract
The variations in the atmospheric refractivity in the lower atmosphere create a natural phenomenon known as atmospheric ducts. The atmospheric ducts allow radio signals to travel long distances. This can adversely affect telecommunication systems, as cells with similar frequencies can interfere with each other due to frequency reuse, which is intended to optimize resource allocation. Thus, the downlink signals of one base station will travel a long distance via the atmospheric duct and interfere with the uplink signals of another base station. This scenario is known as atmospheric duct interference (ADI). ADI could be mitigated using digital signal processing, machine learning, and hybrid approaches. To address this challenge, we explore machine learning and deep learning techniques for ADI prediction and mitigation in Time-Division Long-Term Evolution (TD-LTE) networks. Our results show that the Random Forest algorithm achieves the highest prediction accuracy, while a convolutional neural network demonstrates the best mitigation performance. Additionally, we propose optimizing special subframe configurations in TD-LTE networks using machine learning-based methods to effectively reduce ADI.
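As a rough illustration of the prediction step, the sketch below trains a Random Forest on synthetic per-cell features. The feature names (`uplink_noise_rise`, `refractivity_gradient`, `distance_to_coast`) and the labelling rule are invented stand-ins, not the paper's measurements.

```python
# Minimal sketch: Random Forest classifier for ADI presence on synthetic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
n = 2000
uplink_noise_rise = rng.normal(3.0, 1.5, n)      # dB
refractivity_gradient = rng.normal(-60, 40, n)   # N-units/km
distance_to_coast = rng.uniform(0, 200, n)       # km
X = np.column_stack([uplink_noise_rise, refractivity_gradient, distance_to_coast])
# Toy label: ducting-driven interference when noise rise is high and the gradient is strongly negative.
y = ((uplink_noise_rise > 3.5) & (refractivity_gradient < -80)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("ADI prediction accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```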
Full article
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)
Open Access Article
Position Accuracy and Distributed Beamforming Performance in WSNs: A Simulation Study
by
José Casca, Prabhat Gupta, Marco Gomes, Vitor Silva and Rui Dinis
Future Internet 2025, 17(6), 236; https://doi.org/10.3390/fi17060236 - 27 May 2025
Abstract
This work investigates the performance of distributed beamforming in Wireless Sensor Networks (WSNs), focusing on the impact of node position errors. A comprehensive simulation testbed was developed to assess how varying network topologies and position uncertainties impact system performance. Our results reveal that distributed beamforming in the near-field is highly sensitive to position errors, resulting in a noticeable degradation in performance, particularly in terms of Bit Error Rate (BER). The Cramér–Rao Lower Bound (CRB) was used to analyse the theoretical limitations of position estimation accuracy and how these limitations affect beamforming performance. These findings underscore the critical importance of accurate localisation techniques and robust beamforming algorithms to fully realise the potential of distributed beamforming in practical WSN applications.
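The sensitivity described above can be demonstrated numerically: phase errors induced by node position jitter reduce the coherent beamforming gain. The sketch below is an idealised free-space model with assumed parameters (carrier frequency, node layout, receiver location, error magnitudes), not the paper's simulation testbed.

```python
# Minimal sketch: coherent gain of N distributed nodes toward a receiver,
# with Gaussian position errors corrupting the phase pre-compensation.
import numpy as np

c, f = 3e8, 2.4e9                 # assumed carrier frequency
lam = c / f
rng = np.random.default_rng(0)

n_nodes = 16
nodes = rng.uniform(0, 5, size=(n_nodes, 2))      # true node positions (m)
receiver = np.array([50.0, 0.0])                  # receiver location (m)

def coherent_gain(position_error_std):
    """Average power gain (relative to one node) over random error draws."""
    gains = []
    for _ in range(500):
        est = nodes + rng.normal(0, position_error_std, nodes.shape)
        # Each node pre-compensates its phase using the *estimated* distance.
        true_phase = 2 * np.pi * np.linalg.norm(nodes - receiver, axis=1) / lam
        comp_phase = 2 * np.pi * np.linalg.norm(est - receiver, axis=1) / lam
        field = np.exp(1j * (true_phase - comp_phase)).sum()
        gains.append(abs(field) ** 2 / n_nodes)
    return np.mean(gains)

for sigma in [0.0, 0.005, 0.01, 0.02]:            # position error std in metres
    print(f"sigma = {sigma*100:.1f} cm -> gain ~ {coherent_gain(sigma):.1f}x")
```

With perfect positions the gain equals the number of nodes; centimetre-level errors at 2.4 GHz already erode a visible fraction of it, which is the effect the paper quantifies via the CRB.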
Full article
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems)
Open Access Article
LLM Performance in Low-Resource Languages: Selecting an Optimal Model for Migrant Integration Support in Greek
by
Alexandros Tassios, Stergios Tegos, Christos Bouas, Konstantinos Manousaridis, Maria Papoutsoglou, Maria Kaltsa, Eleni Dimopoulou, Thanassis Mavropoulos, Stefanos Vrochidis and Georgios Meditskos
Future Internet 2025, 17(6), 235; https://doi.org/10.3390/fi17060235 - 26 May 2025
Abstract
The integration of Large Language Models (LLMs) into chatbot applications is gaining momentum. However, to successfully deploy such systems, the underlying capabilities of LLMs must be carefully considered, especially when dealing with low-resource languages and specialized fields. This paper presents the results of a comprehensive evaluation of several LLMs conducted in the context of a chatbot agent designed to assist migrants in their integration process. Our aim is to identify the optimal LLM that can effectively process and generate text in Greek and provide accurate information, addressing the specific needs of migrant populations. The design of the evaluation methodology leverages input from experts on social assistance initiatives, social impact and technological solutions, as well as from automated LLM self-evaluations. Given the linguistic challenges specific to the Greek language and the application domain, research findings indicate that Claude 3.7 Sonnet and Gemini 2.0 Flash demonstrate superior performance across all criteria, with Claude 3.7 Sonnet emerging as the leading candidate for the chatbot. Moreover, the results suggest that automated custom evaluations of LLMs can align with human assessments, offering a viable option for preliminary low-cost analysis to assist stakeholders in selecting the optimal LLM based on user and application domain requirements.
Full article

Open Access Article
Federated XAI IDS: An Explainable and Safeguarding Privacy Approach to Detect Intrusion Combining Federated Learning and SHAP
by
Kazi Fatema, Samrat Kumar Dey, Mehrin Anannya, Risala Tasin Khan, Mohammad Mamunur Rashid, Chunhua Su and Rashed Mazumder
Future Internet 2025, 17(6), 234; https://doi.org/10.3390/fi17060234 - 26 May 2025
Abstract
An intrusion detection system (IDS) is a crucial element of cyber security: a safeguarding module designed to identify unauthorized activities in network environments. The importance of constructing IDSs has never been greater, given the growing number of attacks on network layers. This research work draws attention to a different aspect of intrusion detection, considering privacy and the contribution of individual features to attack classes. At present, the majority of existing IDSs are built on centralized infrastructure, which raises serious security concerns because network data from one system are exposed to another system. Sharing the original network data with another server can undermine existing arrangements for protecting privacy within the network. In addition, existing IDS models are merely tools for identifying attack categories, without further analysis of how individual network features influence the attacks. In this article, we propose a novel framework, FEDXAIIDS, converging federated learning and explainable AI. The proposed approach enables IDS models to be collaboratively trained across multiple decentralized devices while ensuring that local data remain securely on edge nodes, thus mitigating privacy risks. The primary objectives of the proposed study are to reveal the privacy concerns of centralized systems and to identify the most significant features in order to understand their contribution to the final output. The proposed model fuses federated learning (FL) with Shapley additive explanations (SHAP), using an artificial neural network (ANN) as the local model. The framework comprises a server and four client devices, each holding its own data set. The server distributes the primary ANN model among the local clients; each client trains the distributed model on its own partition of the data set and shares its feedback with the central end. The central end then applies the FedAvg aggregator to assemble the separate client results into one global output. Finally, the contribution of the ten most significant features is evaluated using SHAP. The entire research work was executed on CICIoT2023; the data set was partitioned into four parts and distributed among the four local ends. The proposed method demonstrated efficacy in intrusion detection, achieving 88.4% training and 88.2% testing accuracy. Furthermore, the SHAP analysis identified UDP as the most significant network-layer feature. Simultaneously, the incorporation of federated learning safeguards the confidentiality of each participant's network information. This enhances transparency and ensures that the model is both reliable and interpretable. Federated XAI IDS effectively addresses privacy concerns and feature interpretability issues in modern IDS frameworks, contributing to the advancement of secure, interpretable, and decentralized intrusion detection systems. Our findings accelerate the development of cyber security solutions that leverage federated learning and explainable AI (XAI), paving the way for future research and practical implementations in real-world network security environments.
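The FedAvg step described above amounts to a weighted average of client models. The sketch below is a generic illustration with made-up client updates; the function name `fed_avg`, the layer names, and the client sizes are all hypothetical, not taken from the paper.

```python
# Minimal sketch of FedAvg: the server averages client weight tensors,
# weighted by each client's number of local samples.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: list of dicts {layer_name: ndarray}; client_sizes: samples per client."""
    total = float(sum(client_sizes))
    aggregated = {}
    for name in client_weights[0]:
        aggregated[name] = sum(
            w[name] * (n / total) for w, n in zip(client_weights, client_sizes)
        )
    return aggregated

# Four hypothetical clients, each with a tiny two-layer ANN.
rng = np.random.default_rng(0)
clients = [{"dense1": rng.normal(size=(4, 8)), "out": rng.normal(size=(8, 2))} for _ in range(4)]
sizes = [12000, 9000, 15000, 11000]   # e.g., sizes of four data partitions

global_model = fed_avg(clients, sizes)
print({k: v.shape for k, v in global_model.items()})
```

In the paper's setting, the aggregated model is redistributed to the clients for the next round, and SHAP is applied afterwards to explain the converged global model.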
Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
Open Access Article
Secure and Trustworthy Open Radio Access Network (O-RAN) Optimization: A Zero-Trust and Federated Learning Framework for 6G Networks
by
Mohammed El-Hajj
Future Internet 2025, 17(6), 233; https://doi.org/10.3390/fi17060233 - 25 May 2025
Abstract
The Open Radio Access Network (O-RAN) paradigm promises unprecedented flexibility and cost efficiency for 6G networks but introduces critical security risks due to its disaggregated, AI-driven architecture. This paper proposes a secure optimization framework integrating zero-trust principles and privacy-preserving Federated Learning (FL) to address vulnerabilities in O-RAN’s RAN Intelligent Controllers (RICs) and xApps/rApps. We first establish a novel threat model targeting O-RAN’s optimization processes, highlighting risks such as adversarial Machine Learning (ML) attacks on resource allocation models and compromised third-party applications. To mitigate these, we design a Zero-Trust Architecture (ZTA) enforcing continuous authentication and micro-segmentation for RIC components, coupled with an FL framework that enables collaborative ML training across operators without exposing raw network data. A differential privacy mechanism is applied to global model updates to prevent inference attacks. We validate our framework using the DAWN Dataset (5G/6G traffic traces with slicing configurations) and the OpenRAN Gym Dataset (O-RAN-compliant resource utilization metrics) to simulate energy efficiency optimization under adversarial conditions. A dynamic DU sleep scheduling case study demonstrates 32% energy savings with <5% latency degradation, even when data poisoning attacks compromise 15% of the FL participants. Comparative analysis shows that our ZTA reduces unauthorized RIC access attempts by 89% compared to conventional O-RAN security baselines. This work bridges the gap between performance optimization and trustworthiness in next-generation O-RAN, offering actionable insights for 6G standardization.
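The differential-privacy mechanism applied to global model updates can be illustrated with the standard clip-and-add-Gaussian-noise recipe. This is a hedged sketch: the clip norm, noise multiplier, update dimensionality, and number of participants below are placeholders, not the values used in the paper.

```python
# Minimal sketch: clip each participant's model update to an L2 bound,
# average, then add calibrated Gaussian noise (Gaussian mechanism).
import numpy as np

def dp_aggregate(updates, rng, clip_norm=1.0, noise_multiplier=1.1):
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    mean_update = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(updates)   # noise std for the averaged update
    return mean_update + rng.normal(0.0, sigma, mean_update.shape)

# Hypothetical per-operator updates to a shared resource-allocation model.
rng = np.random.default_rng(42)
updates = [rng.normal(scale=0.5, size=128) for _ in range(5)]
noisy_global = dp_aggregate(updates, rng)
print("noisy global update norm:", round(float(np.linalg.norm(noisy_global)), 3))
```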
Full article
(This article belongs to the Special Issue Secure and Trustworthy Next Generation O-RAN Optimisation)
Open Access Article
Grouping-Based Dynamic Routing, Core, and Spectrum Allocation Method for Avoiding Spectrum Fragmentation and Inter-Core Crosstalk in Multi-Core Fiber Networks
by
Funa Fukui, Tomotaka Kimura, Yutaka Fukuchi and Kouji Hirata
Future Internet 2025, 17(6), 232; https://doi.org/10.3390/fi17060232 - 23 May 2025
Abstract
In this paper, we propose a grouping-based dynamic routing, core, and spectrum allocation (RCSA) method for preventing spectrum fragmentation and inter-core crosstalk in elastic optical path networks based on multi-core fiber environments. Multi-core fibers enable us to considerably enhance the transmission capacity of optical links; however, this induces inter-core crosstalk, which degrades the quality of optical signals. We should thus avoid using the same frequency bands in adjacent cores in order to ensure high-quality communications. However, this simple strategy leads to inefficient use of frequency-spectrum resources, resulting in spectrum fragmentation and a high blocking probability for lightpath establishment. The proposed method allows one to overcome this difficulty by grouping lightpath-setup requests according to their required number of frequency slots. By assigning lightpath-setup requests belonging to the same group to cores according to their priority, the proposed method aims to suppress inter-core crosstalk. Furthermore, the proposed method is designed to mitigate spectrum fragmentation by determining the prioritized frequency bandwidth for lightpath-setup requests according to their required number of frequency slots. We show that the proposed method reduces the blocking of lightpath establishment while suppressing inter-core crosstalk through simulation experiments.
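The allocation idea (grouping requests by slot count, steering each group to a preferred band, and avoiding the same slots in adjacent cores) can be sketched for a single link as below. This is a heavily simplified toy model, not the proposed RCSA method: the ring adjacency, the preferred-band mapping in `preferred_start`, and the request mix are all assumptions.

```python
# Minimal sketch of grouping-based core/spectrum allocation on one multi-core link.
import numpy as np

N_CORES, N_SLOTS = 7, 64
spectrum = np.zeros((N_CORES, N_SLOTS), dtype=bool)      # True = slot occupied
adjacent = {c: [(c - 1) % N_CORES, (c + 1) % N_CORES] for c in range(N_CORES)}  # toy ring layout

def preferred_start(n_slots):
    """Steer small/medium/large requests to different bands to limit fragmentation."""
    return {1: 0, 2: 16, 4: 32}.get(n_slots, 48)

def allocate(n_slots):
    for core in range(N_CORES):
        start0 = preferred_start(n_slots)
        for start in list(range(start0, N_SLOTS - n_slots + 1)) + list(range(0, start0)):
            band = slice(start, start + n_slots)
            free_here = not spectrum[core, band].any()
            free_neighbours = all(not spectrum[a, band].any() for a in adjacent[core])
            if free_here and free_neighbours:                 # avoid inter-core crosstalk
                spectrum[core, band] = True
                return core, start
    return None                                               # request blocked

rng = np.random.default_rng(0)
blocked = sum(allocate(int(rng.choice([1, 2, 4, 8]))) is None for _ in range(200))
print("blocked requests:", blocked)
```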
Full article

Open Access Article
AI-Driven Framework for Evaluating Climate Misinformation and Data Quality on Social Media
by
Zeinab Shahbazi, Rezvan Jalali and Zahra Shahbazi
Future Internet 2025, 17(6), 231; https://doi.org/10.3390/fi17060231 - 22 May 2025
Abstract
In the digital age, climate change content on social media is frequently distorted by misinformation, driven by unrestricted content sharing and monetization incentives. This paper proposes a novel AI-based framework to evaluate the data quality of climate-related discourse across platforms like Twitter and YouTube. Data quality is defined using key dimensions of credibility, accuracy, relevance, and sentiment polarity, and a pipeline is developed using transformer-based NLP models, sentiment classifiers, and misinformation detection algorithms. The system processes user-generated content to detect sentiment drift, engagement patterns, and trustworthiness scores. Datasets were collected from three major platforms, encompassing over 1 million posts between 2018 and 2024. Evaluation metrics such as precision, recall, F1-score, and AUC were used to assess model performance. Results demonstrate a 9.2% improvement in misinformation filtering and 11.4% enhancement in content credibility detection compared to baseline models. These findings provide actionable insights for researchers, media outlets, and policymakers aiming to improve climate communication and reduce content-driven polarization on social platforms.
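A pipeline of this kind typically combines a sentiment classifier with a credibility or misinformation scorer. The sketch below uses generic Hugging Face pipelines with their default models (which are downloaded on first use); it is an assumption-laden stand-in, not the models, labels, or datasets used in the paper.

```python
# Minimal sketch: score climate posts for sentiment and a misinformation-style label
# using off-the-shelf transformer pipelines.
from transformers import pipeline

posts = [
    "New study shows Arctic sea ice hit a record low this decade.",
    "Climate change is a hoax invented to sell solar panels.",
]

sentiment = pipeline("sentiment-analysis")
zero_shot = pipeline("zero-shot-classification")
labels = ["likely misinformation", "credible information"]   # hypothetical label set

for post in posts:
    s = sentiment(post)[0]
    z = zero_shot(post, candidate_labels=labels)
    print(f"{post[:40]!r}: sentiment={s['label']} ({s['score']:.2f}), "
          f"top label={z['labels'][0]} ({z['scores'][0]:.2f})")
```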
Full article
(This article belongs to the Special Issue Information Communication Technologies and Social Media)
Open Access Article
A Deep Learning Approach for Multiclass Attack Classification in IoT and IIoT Networks Using Convolutional Neural Networks
by
Ali Abdi Seyedkolaei, Fatemeh Mahmoudi and José García
Future Internet 2025, 17(6), 230; https://doi.org/10.3390/fi17060230 - 22 May 2025
Abstract
The rapid expansion of the Internet of Things (IoT) and industrial Internet of Things (IIoT) ecosystems has introduced new security challenges, particularly the need for robust intrusion detection systems (IDSs) capable of adapting to increasingly sophisticated cyberattacks. In this study, we propose a novel intrusion detection approach based on convolutional neural networks (CNNs), designed to automatically extract spatial patterns from network traffic data. Leveraging the DNN-EdgeIIoT dataset, which includes a wide range of attack types and traffic scenarios, we conduct comprehensive experiments to compare the CNN-based model against traditional machine learning techniques, including decision trees, random forests, support vector machines, and K-nearest neighbors. Our approach consistently outperforms baseline models across multiple performance metrics—such as F1 score, precision, and recall—in both binary (benign vs. attack) and multiclass settings (6-class and 15-class classification). The CNN model achieves F1 scores of 1.00, 0.994, and 0.946, respectively, highlighting its strong generalization ability across diverse attack categories. These results demonstrate the effectiveness of deep-learning-based IDSs in enhancing the security posture of IoT and IIoT infrastructures, paving the way for intelligent, adaptive, and scalable threat detection systems.
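A CNN of the kind described treats each preprocessed flow record as a one-dimensional feature sequence. The sketch below is a minimal stand-in with an assumed architecture, feature count, and class count, trained on synthetic data rather than the DNN-EdgeIIoT preprocessing used in the paper.

```python
# Minimal sketch: small 1D CNN for multiclass traffic classification on synthetic flows.
import numpy as np
import tensorflow as tf

n_features, n_classes = 64, 6          # hypothetical: 64 flow features, 6 attack classes
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Synthetic stand-in for preprocessed flow records.
X = np.random.rand(1000, n_features, 1).astype("float32")
y = np.random.randint(0, n_classes, size=1000)
model.fit(X, y, epochs=2, batch_size=64, validation_split=0.2)
```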
Full article
(This article belongs to the Special Issue Intrusion Detection and Resiliency in Cyber-Physical Systems and Networks)
Open Access Article
Optimization of Ground Station Energy Saving in LEO Satellite Constellations for Earth Observation Applications
by
Francesco Valente, Francesco Giacinto Lavacca, Marco Polverini, Tiziana Fiori and Vincenzo Eramo
Future Internet 2025, 17(6), 229; https://doi.org/10.3390/fi17060229 - 22 May 2025
Abstract
Orbital Edge Computing (OEC) capability on board satellites in Earth Observation (EO) constellations would surely enable a more effective usage of bandwidth, since the possibility to process images on board enables extracting and sending only useful information to the ground. However, OEC can also help to reduce the amount of energy required to process EO data on Earth. In fact, even though energy is a valuable resource on satellites, the on-board energy is pre-allocated due to the presence of solar panels and batteries and it is always generated and available, regardless of its actual need and use in time. Instead, energy consumption on the ground is strictly dependent on the demand, and it increases with the increase in EO data to be processed by ground stations. In this work, we first define and solve an optimization problem to jointly allocate resources and place processing within a constellation-wide network to leverage in-orbit processing as much as possible. This aims to reduce the amount of data to be processed on the ground, and thus, to maximize the energy saving in ground stations. Given the NP hardness of the proposed optimization problem, we also propose the Ground Station Energy-Saving Heuristic (GSESH) algorithm to evaluate the energy saving we would obtain in ground stations in a real orbital scenario. After validating the GSESH algorithm by means of a comparison with the results of the optimal solution, we have compared it to a benchmark algorithm in a typical scenario and we have verified that the GSESH algorithm allows for energy saving in the ground station up to 40% higher than the one achieved with the benchmark solution.
Full article
(This article belongs to the Special Issue Cloud Computing and High Performance Computing (HPC) Advances for Next Generation Internet—2nd Edition)
Open Access Article
Analysis of Digital Skills and Infrastructure in EU Countries Based on DESI 2024 Data
by
Kvitoslava Obelovska, Andrii Abziatov, Anastasiya Doroshenko, Ivanna Dronyuk, Oleh Liskevych and Rostyslav Liskevych
Future Internet 2025, 17(6), 228; https://doi.org/10.3390/fi17060228 - 22 May 2025
Abstract
This paper presents an analysis of digital skills and network infrastructure in the European Union (EU) countries based on data from the Digital Economy and Society Index (DESI) 2024. We analyze the current state of digital skills and network infrastructure in EU countries, which in the DESI framework is called digital infrastructure, identifying key trends and differences between EU member states. The analysis shows significant differences in the relative share of citizens with a certain level of digital skills across countries, both among ordinary users of digital services and among information and communication technology professionals. The analysis of digital infrastructure includes fixed broadband coverage, mobile broadband, and edge networks, the latter of which are expected to become an important component of future digital infrastructure. The results highlight the importance of harmonizing the development of digital skills and digital infrastructure to support the EU’s digital transformation. Significant attention is paid to 5G technology. The feasibility of including a new additional indicator in DESI for next-generation 5G technology in the frequency range of 24.25–52.6 GHz is shown. The value of this indicator can be used to assess the readiness of the EU economy for technological leaps that place extremely high demands on reliability and data transmission delays. The results of the current state and the analysis of digital skills and infrastructure contribute to understanding the potential for the future development of the EU digital economy.
Full article
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)
Open Access Article
Heuristic Fuzzy Approach to Traffic Flow Modelling and Control on Urban Networks
by
Alexander Gegov, Boriana Vatchova, Yordanka Boneva and Alexandar Ichtev
Future Internet 2025, 17(5), 227; https://doi.org/10.3390/fi17050227 - 20 May 2025
Abstract
Computer-aided transport modelling is essential for testing different control strategies for traffic lights. One approach to modelling traffic control is by heuristically defining fuzzy rules for the control of traffic light systems and applying them to a network of hierarchically dependent crossroads. In this paper, such a network is investigated through modelling the geometry of the network in the simulation environment Aimsun. This environment is based on real-world traffic data and is used in this paper with the MATLAB R2019a-Fuzzy toolbox. It focuses on the development of a network of intersections, as well as four fuzzy models and the behaviour of these models on the investigated intersections. The transport network consists of four intersections. The novelty of the proposed approach is in the application of heuristic fuzzy rules to the modelling and control of traffic flow through these intersections. The motivation behind the use of this approach is to address inherent uncertainties using a fuzzy method and analyse its main findings in relation to a classical deterministic approach.
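Heuristic fuzzy control of this kind evaluates hand-written rules over fuzzified traffic measurements. The sketch below is a plain-Python analogue (the paper uses Aimsun with the MATLAB Fuzzy toolbox); the membership ranges, the two rules, and the output scale are illustrative assumptions.

```python
# Minimal sketch of heuristic fuzzy control for a green-phase extension,
# using triangular membership functions and two hand-written rules.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def green_extension(queue_len, arrival_rate):
    """Return extra green time (s) for one approach, in 0..20."""
    # Fuzzify inputs (ranges are illustrative, not calibrated values).
    queue_long = tri(queue_len, 5, 15, 25)
    queue_short = tri(queue_len, -1, 0, 10)
    arrivals_high = tri(arrival_rate, 0.3, 0.8, 1.3)
    arrivals_low = tri(arrival_rate, -0.1, 0.0, 0.5)

    # Rule 1: long queue AND high arrivals -> large extension (20 s).
    # Rule 2: short queue AND low arrivals -> no extension (0 s).
    w1 = min(queue_long, arrivals_high)
    w2 = min(queue_short, arrivals_low)
    if w1 + w2 == 0:
        return 5.0                                   # fallback mid-value
    return (w1 * 20.0 + w2 * 0.0) / (w1 + w2)        # weighted-average defuzzification

print(green_extension(queue_len=18, arrival_rate=0.9))   # busy approach -> long extension
print(green_extension(queue_len=2, arrival_rate=0.1))    # quiet approach -> no extension
```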
Full article

Open Access Article
Explainable AI Assisted IoMT Security in Future 6G Networks
by
Navneet Kaur and Lav Gupta
Future Internet 2025, 17(5), 226; https://doi.org/10.3390/fi17050226 - 20 May 2025
Abstract
The rapid integration of the Internet of Medical Things (IoMT) is transforming healthcare through real-time monitoring, AI-driven diagnostics, and remote treatment. However, the growing reliance on IoMT devices, such as robotic surgical systems, life-support equipment, and wearable health monitors, has expanded the attack surface, exposing healthcare systems to cybersecurity risks like data breaches, device manipulation, and potentially life-threatening disruptions. While 6G networks offer significant benefits for healthcare, such as ultra-low latency, extensive connectivity, and AI-native capabilities, as highlighted in the ITU 6G (IMT-2030) framework, they are expected to introduce new and potentially more severe security challenges. These advancements put critical medical systems at greater risk, highlighting the need for more robust security measures. This study leverages AI techniques to systematically identify security vulnerabilities within 6G-enabled healthcare environments. Additionally, the proposed approach strengthens AI-driven security through use of multiple XAI techniques cross-validated against each other. Drawing on the insights provided by XAI, we tailor our mitigation strategies to the ITU-defined 6G usage scenarios, with a focus on their applicability to medical IoT networks. We propose that these strategies will effectively address potential vulnerabilities and enhance the security of medical systems leveraging IoT and 6G networks.
Full article
(This article belongs to the Special Issue Toward 6G Networks: Challenges and Technologies)
Open Access Article
Trajectory Optimization for UAV-Aided IoT Secure Communication Against Multiple Eavesdroppers
by
Lingfeng Shen, Jiangtao Nie, Ming Li, Guanghui Wang, Qiankun Zhang and Xin He
Future Internet 2025, 17(5), 225; https://doi.org/10.3390/fi17050225 - 19 May 2025
Abstract
This study concentrates on physical layer security (PLS) in UAV-aided Internet of Things (IoT) networks and proposes an innovative approach to enhance security by optimizing the trajectory of unmanned aerial vehicles (UAVs). In an IoT system with multiple eavesdroppers, formulating the optimal UAV trajectory poses a non-convex and non-differentiable optimization challenge. The paper utilizes the successive convex approximation (SCA) method in conjunction with hypograph theory to address this challenge. First, a set of trajectory increment variables is introduced to replace the original UAV trajectory coordinates, thereby converting the original non-convex problem into a sequence of convex subproblems. Subsequently, hypograph theory is employed to convert these non-differentiable subproblems into standard convex forms, which can be solved using the CVX toolbox. Simulation results demonstrate the UAV’s trajectory fluctuations under different parameters, affirming that trajectory optimization significantly improves PLS performance in IoT systems.
Full article
(This article belongs to the Section Internet of Things)
Open Access Article
Research on Advancing Radio Wave Source Localization Technology Through UAV Path Optimization
by
Tomoroh Takahashi and Gia Khanh Tran
Future Internet 2025, 17(5), 224; https://doi.org/10.3390/fi17050224 - 16 May 2025
Abstract
With an increasing number of illegal radio stations, connected cars, and IoT devices, high-accuracy radio source localization techniques are in demand. Traditional methods such as GPS positioning and triangulation suffer from accuracy degradation in NLOS (non-line-of-sight) environments due to obstructions. In contrast, the fingerprinting method builds a database of pre-collected radio information and estimates the source location via pattern matching, maintaining relatively high accuracy in NLOS environments. This study aims to improve the accuracy of fingerprinting-based localization by optimizing UAV flight paths. Previous research mainly relied on RSSI-based localization, but we introduce an AOA model considering AOA (angle of arrival) and EOA (elevation of arrival), as well as a HYBRID model that integrates multiple radio features with weighting. Using Wireless Insite, we conducted ray-tracing simulations based on the Institute of Science Tokyo’s Ookayama campus and optimized UAV flight paths with PSO (Particle Swarm Optimization). Results show that the HYBRID model achieved the highest accuracy, limiting the maximum error to 20 m. Sequential estimation improved accuracy for high-error sources, particularly when RSSI was used first, followed by AOA or HYBRID. Future work includes estimating unknown frequency sources, refining sequential estimation, and implementing cooperative localization.
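Fingerprinting, as used above, matches a measured vector of radio features against a pre-collected database and returns the best-matching grid point. The sketch below is a toy model: the log-distance path loss, the AOA computation, the UAV waypoints, and the HYBRID-style feature weighting `w` are all assumptions, not the ray-tracing database or PSO-optimized paths from the paper.

```python
# Minimal sketch of fingerprint-based radio source localization.
import numpy as np

rng = np.random.default_rng(3)
grid = np.array([(x, y) for x in range(0, 100, 10) for y in range(0, 100, 10)], dtype=float)

def features(src, points):
    """[RSSI, AOA] observed at UAV positions `points` for a source at `src` (toy model)."""
    d = np.linalg.norm(points - src, axis=1) + 1.0
    rssi = -40.0 - 20.0 * np.log10(d)                        # log-distance path loss (dB)
    aoa = np.degrees(np.arctan2(src[1] - points[:, 1], src[0] - points[:, 0]))
    return np.column_stack([rssi, aoa])

waypoints = rng.uniform(0, 100, size=(8, 2))                 # UAV measurement positions
fingerprints = {tuple(g): features(g, waypoints) for g in grid}  # pre-collected database

true_source = np.array([43.0, 67.0])
measured = features(true_source, waypoints) + rng.normal(0, [2.0, 5.0], (8, 2))  # noisy observation

w = np.array([1.0, 0.5])                                     # HYBRID-style feature weighting (assumed)
best = min(fingerprints, key=lambda g: np.sum(w * (fingerprints[g] - measured) ** 2))
print("estimated source location:", best)
```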
Full article

Open Access Article
Real-Time Identification of Look-Alike Medical Vials Using Mixed Reality-Enabled Deep Learning
by
Bahar Uddin Mahmud, Guanyue Hong, Virinchi Ravindrakumar Lalwani, Nicholas Brown and Zachary D. Asher
Future Internet 2025, 17(5), 223; https://doi.org/10.3390/fi17050223 - 16 May 2025
Abstract
The accurate identification of look-alike medical vials is essential for patient safety, particularly when similar vials contain different substances, volumes, or concentrations. Traditional methods, such as manual selection or barcode-based identification, are prone to human error or face reliability issues under varying lighting conditions. This study addresses these challenges by introducing a real-time deep learning-based vial identification system, leveraging a Lightweight YOLOv4 model optimized for edge devices. The system is integrated into a Mixed Reality (MR) environment, enabling the real-time detection and annotation of vials with immediate operator feedback. Compared to standard barcode-based methods and the baseline YOLOv4-Tiny model, the proposed approach improves identification accuracy while maintaining low computational overhead. The experimental evaluations demonstrate a mean average precision (mAP) of 98.76 percent, with an inference speed of 68 milliseconds per frame on HoloLens 2, achieving real-time performance. The results highlight the model’s robustness in diverse lighting conditions and its ability to mitigate misclassifications of visually similar vials. By combining deep learning with MR, this system offers a more reliable and efficient alternative for pharmaceutical and medical applications, paving the way for AI-driven MR-assisted workflows in critical healthcare environments.
Full article
(This article belongs to the Special Issue Smart Technology: Artificial Intelligence, Robotics and Algorithms)
Journal Menu
- Future Internet Home
- Aims & Scope
- Editorial Board
- Reviewer Board
- Topical Advisory Panel
- Instructions for Authors
- Special Issues
- Topics
- Sections & Collections
- Article Processing Charge
- Indexing & Archiving
- Editor’s Choice Articles
- Most Cited & Viewed
- Journal Statistics
- Journal History
- Journal Awards
- Conferences
- Editorial Office
Journal Browser
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in
Applied Sciences, Electronics, Future Internet, Machines, Systems, Technologies, Biomimetics
Theoretical and Applied Problems in Human-Computer Intelligent Systems
Topic Editors: Jiahui Yu, Charlie Yang, Zhenyu Wen, Dalin Zhou, Dongxu Gao, Changting Lin
Deadline: 30 June 2025
Topic in
Algorithms, Future Internet, Information, Mathematics, Symmetry
Research on Data Mining of Electronic Health Records Using Deep Learning Methods
Topic Editors: Dawei Yang, Yu Zhu, Hongyi Xin
Deadline: 31 August 2025
Topic in
Algorithms, Applied Sciences, Future Internet, Information, Mathematics
Soft Computing and Machine Learning
Topic Editors: Rui Araújo, António Pedro Aguiar, Nuno Lau, Rodrigo Ventura, João Fabro
Deadline: 30 September 2025
Topic in
Education Sciences, Future Internet, Information, Sustainability
Advances in Online and Distance Learning
Topic Editors: Neil Gordon, Han Reichgelt
Deadline: 31 December 2025

Conferences
Special Issues
Special Issue in
Future Internet
Convergence of Edge Computing and Next Generation Networking
Guest Editors: Armir Bujari, Gabriele Elia, Johann M. Marquez-Barja
Deadline: 31 May 2025
Special Issue in
Future Internet
Emerging Technologies for Cybersecurity in the Internet of Things (IoT)
Guest Editors: Fei Tong, Guanghui Wang
Deadline: 31 May 2025
Special Issue in
Future Internet
IoT Architecture for Smart Environments: Mechanisms, Approaches, and Applications
Guest Editors: Manuel José Cabral dos Santos Reis, Carlos Serôdio
Deadline: 31 May 2025
Special Issue in
Future Internet
AI Based Natural Language Processing: Emerging Approaches and Applications
Guest Editors: William Hsu, Huichen Yang
Deadline: 31 May 2025
Topical Collections
Topical Collection in
Future Internet
Innovative People-Centered Solutions Applied to Industries, Cities and Societies
Collection Editors: Dino Giuli, Filipe Portela
Topical Collection in
Future Internet
Computer Vision, Deep Learning and Machine Learning with Applications
Collection Editors: Remus Brad, Arpad Gellert
Topical Collection in
Future Internet
Machine Learning Approaches for User Identity
Collection Editors: Kaushik Roy, Mustafa Atay, Ajita Rattani
Topical Collection in
Future Internet
5G/6G Networks for the Internet of Things: Communication Technologies and Challenges
Collection Editor: Sachin Sharma