Future Internet, Volume 17, Issue 6 (June 2025) – 29 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 791 KiB  
Article
Building Equi-Width Histograms on Homomorphically Encrypted Data
by Dragoș Lazea, Anca Hangan and Tudor Cioara
Future Internet 2025, 17(6), 256; https://doi.org/10.3390/fi17060256 - 10 Jun 2025
Abstract
Histograms are widely used for summarizing data distributions, detecting anomalies, and improving machine learning models’ accuracy. However, traditional histogram-based methods require access to raw data, raising privacy concerns, particularly in sensitive IoT applications. Encryption-based techniques offer potential solutions; however, they secure the data only in transit or storage, requiring decryption during analysis, which exposes raw data to potential privacy risks. In this paper, we propose a method for constructing privacy-preserving histograms directly on homomorphically encrypted IoT data, leveraging the Fast Fully Homomorphic Encryption over the Torus (TFHE) scheme implemented in the Concrete framework. To overcome the challenges posed by homomorphic encryption, we redesign the traditional histogram construction algorithm, optimizing it for secure computation by addressing constraints related to nested loops and conditional statements. As an evaluation use case, we consider an outlier detection mechanism based on histogram frequency counts, ensuring that all data and computations remain encrypted throughout the process. Our method achieves results consistent with plaintext-based outlier detection while maintaining reasonable computational overhead compared to that reported in the existing literature.
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
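The algorithmic change this abstract alludes to, recasting histogram construction so that no control flow depends on the data, can be sketched in plaintext Python. This is an illustrative sketch only, not the authors' Concrete/TFHE implementation: the bin layout, the 0/1-indicator arithmetic, and the low-frequency outlier rule are generic assumptions.

```python
def equi_width_histogram(values, lo, hi, n_bins):
    """Equi-width histogram with no data-dependent control flow.

    Every value contributes a 0/1 arithmetic indicator to every bin,
    so loops and branches depend only on public parameters -- the
    structural constraint homomorphic evaluation imposes.
    """
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for v in values:
        for b in range(n_bins):          # static loop over public bins
            left = lo + b * width
            if b < n_bins - 1:           # branch on public bin index, not data
                indicator = int(left <= v < left + width)
            else:                        # last bin is closed on the right
                indicator = int(left <= v <= hi)
            counts[b] += indicator
    return counts

def low_frequency_outliers(values, lo, hi, n_bins, min_count):
    """Flag values landing in bins with fewer than min_count entries."""
    counts = equi_width_histogram(values, lo, hi, n_bins)
    width = (hi - lo) / n_bins
    index = lambda v: min(int((v - lo) / width), n_bins - 1)
    return [v for v in values if counts[index(v)] < min_count]
```

Under encryption, each indicator would become a homomorphic comparison, and the loops stay static because they range over public bin bounds rather than the data.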

20 pages, 1102 KiB  
Article
Exact and Approximation Algorithms for Task Offloading with Service Caching and Dependency in Mobile Edge Computing
by Bowen Cui and Jianwei Zhang
Future Internet 2025, 17(6), 255; https://doi.org/10.3390/fi17060255 - 10 Jun 2025
Abstract
With the continuous development of the Internet of Things (IoT) and communication technologies, the demand for low latency in practical applications is becoming increasingly significant. Mobile edge computing, as a promising computational model, is receiving growing attention. However, most existing studies fail to consider two critical factors: task dependency and service caching. Additionally, the majority of proposed solutions offer no guarantee with respect to the optimal solution. We investigate the task offloading problem in mobile edge computing. Considering the requirements of applications for service caching and task dependency, we define an optimization problem to minimize the delay under a maximum-completion-cost constraint and present a (1+ϵ)-approximation algorithm and an exact algorithm. Specifically, the offloading scheme is determined based on the relationships between tasks as well as the cost and delay incurred by data transmission and task execution. Simulation results demonstrate that, in all cases, the offloading schemes obtained by our algorithms consistently outperform those of other algorithms. Moreover, the approximation ratio of the approximation algorithm is validated to be less than (1+ϵ), and the exact algorithm consistently produces the optimal solution.

29 pages, 9734 KiB  
Article
Internet of Things (IoT)-Based Solutions for Uneven Roads and Balanced Vehicle Systems Using YOLOv8
by Momotaz Begum, Abm Kamrul Islam Riad, Abdullah Al Mamun, Thofazzol Hossen, Salah Uddin, Md Nurul Absur and Hossain Shahriar
Future Internet 2025, 17(6), 254; https://doi.org/10.3390/fi17060254 - 9 Jun 2025
Abstract
Uneven roads pose significant challenges to vehicle stability, passenger comfort, and safety, especially in snowy and mountainous regions. These problems are often complex and challenging to resolve with traditional detection and stabilization methods. This paper presents a dual-method approach to improving vehicle stability by identifying road irregularities and dynamically adjusting the balance. The proposed solution combines YOLOv8 for real-time road anomaly detection with a GY-521 sensor to track the speed of servo motors, facilitating immediate stabilization. YOLOv8 achieves a peak precision of 0.99 at a confidence threshold of 1.0 in surface recognition, surpassing conventional sensor-based detection. The vehicle design is divided into two sections: an upper passenger seating area and a lower section that contains the engine and wheels. The GY-521 sensor is strategically placed to monitor road conditions, while the servomotor stabilizes the upper section, ensuring passenger comfort and reducing the risk of accidents. This setup maintains stability even on uneven terrain. Furthermore, the proposed solution significantly reduces collision risk, vehicle wear, and maintenance costs while improving operational efficiency. Its compatibility with a variety of vehicles makes it an excellent candidate for enhancing road safety and the driving experience in challenging environments. This work marks a crucial step towards a safer, more sustainable, and more comfortable transportation system.

18 pages, 1289 KiB  
Article
Topology-Aware Anchor Node Selection Optimization for Enhanced DV-Hop Localization in IoT
by Haixu Niu, Yonghai Li, Shuaixin Hou, Tianfei Chen, Lijun Sun, Mingyang Gu and Muhammad Irsyad Abdullah
Future Internet 2025, 17(6), 253; https://doi.org/10.3390/fi17060253 - 8 Jun 2025
Abstract
Node localization is a critical challenge in Internet of Things (IoT) applications. The DV-Hop algorithm, which relies on hop counts for localization, assumes that network nodes are uniformly distributed and estimates actual distances between nodes from the number of hops. However, in practical IoT networks, node distribution is often non-uniform, leading to complex and irregular topologies that significantly reduce the localization accuracy of the original DV-Hop algorithm. To improve localization performance in non-uniform topologies, we propose an enhanced DV-Hop algorithm using Grey Wolf Optimization (GWO). First, the impact of non-uniform node distribution on hop count and average hop distance is analyzed. A binary Grey Wolf Optimization algorithm (BGWO) is then applied to develop an optimal anchor node selection strategy. This strategy eliminates anchor nodes with high estimation errors and selects a subset of high-quality anchors to improve the localization of unknown nodes. Second, in the multilateration stage, the traditional least-squares method is replaced by a continuous GWO algorithm to solve the distance equations with higher precision. Simulation results show that the proposed GWO-enhanced DV-Hop algorithm significantly improves localization accuracy in non-uniform topologies.
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
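For context, the classic DV-Hop estimate that the enhanced algorithm refines can be sketched as follows. The topology, anchors, and coordinates are hypothetical, and the paper's BGWO anchor-selection and GWO multilateration stages are not reproduced here.

```python
import math
from collections import deque

def hop_counts(adj, source):
    """Minimum hop counts from source via breadth-first search."""
    hops = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops

def dv_hop_distances(adj, anchors, unknown):
    """Classic DV-Hop distance estimates from `unknown` to each anchor.

    Each anchor's average hop distance is the sum of its true distances
    to the other anchors divided by the corresponding hop counts; the
    unknown node's estimated distance to an anchor is that hop size
    times its hop count.
    """
    hops = {a: hop_counts(adj, a) for a in anchors}
    estimates = {}
    for a, (ax, ay) in anchors.items():
        total_dist = total_hops = 0.0
        for b, (bx, by) in anchors.items():
            if b != a:
                total_dist += math.hypot(ax - bx, ay - by)
                total_hops += hops[a][b]
        hop_size = total_dist / total_hops
        estimates[a] = hop_size * hops[a][unknown]
    return estimates
```

In non-uniform topologies the single per-anchor hop size is exactly what degrades, which is what the paper's anchor-selection strategy targets.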

37 pages, 2907 KiB  
Review
LLM4Rec: A Comprehensive Survey on the Integration of Large Language Models in Recommender Systems—Approaches, Applications and Challenges
by Sarama Shehmir and Rasha Kashef
Future Internet 2025, 17(6), 252; https://doi.org/10.3390/fi17060252 - 4 Jun 2025
Abstract
The synthesis of large language models (LLMs) and recommender systems has been a game-changer in tailored content delivery, with applications ranging from e-commerce, social media, and education to health care. This survey covers the usage of LLMs for content recommendation (LLM4Rec). LLM4Rec has opened up a whole set of challenges in terms of scale, real-time processing, and data privacy, all of which we touch upon, along with potential future directions for research in areas such as multimodal recommendations and reinforcement learning for long-term engagement. This survey consolidates existing developments and outlines possible future ones, serving as a point of reference for researchers and practitioners developing the future of LLM-based recommendation systems.
(This article belongs to the Special Issue Deep Learning in Recommender Systems)

23 pages, 750 KiB  
Article
Hybrid Model for Novel Attack Detection Using a Cluster-Based Machine Learning Classification Approach for the Internet of Things (IoT)
by Naveed Ahmed, Md Asri Ngadi, Abdulaleem Ali Almazroi and Nouf Atiahallah Alghanmi
Future Internet 2025, 17(6), 251; https://doi.org/10.3390/fi17060251 - 31 May 2025
Abstract
To combat the growing danger of zero-day attacks on IoT networks, this study introduces a Cluster-Based Classification (CBC) method. Security vulnerabilities have become more apparent with the growth of IoT devices, calling for new approaches to identify unique threats quickly. The hybrid CBC approach uses optimized k-means clustering to find commonalities across different abnormalities, with the goal of quickly identifying and classifying unknown harmful attacks in a varied IoT network. The technique is fine-tuned for eight-class and two-class classification, supporting different attacks using the CICIoT2023 dataset and SelectKBest feature selection. Robust analysis is achieved by evaluating and aggregating the performance of machine learning classifiers such as XGBoost, AdaBoost, KNN, and Random Forest. In two-class classification, Random Forest achieves 95.11% accuracy, while in eight-class classification, KNN performs best at 88.24%. These results demonstrate noteworthy accuracy. The proposed CBC technique is effective, as shown by comparisons with state-of-the-art approaches. Despite several caveats tied to the dataset, this study provides a useful tool for academics and practitioners in the ever-changing field of cybersecurity by suggesting a method to strengthen the security of IoT networks against new threats.
(This article belongs to the Special Issue Privacy and Security Issues in IoT Systems)
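The cluster-then-classify idea at the heart of CBC can be sketched generically: cluster the training traffic with k-means, give each cluster the majority label of its members, and assign new samples to the label of the nearest centroid. A minimal sketch under those assumptions, not the paper's tuned pipeline (no SelectKBest, no classifier ensemble):

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on tuples of coordinates."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            groups[nearest].append(p)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [
            tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids

def cluster_based_classifier(points, labels, k):
    """Train: cluster, then label each cluster by majority vote.
    Predict: return the label of the nearest centroid."""
    centroids = kmeans(points, k)
    votes = [{} for _ in range(k)]
    for p, y in zip(points, labels):
        c = min(range(k), key=lambda j: math.dist(p, centroids[j]))
        votes[c][y] = votes[c].get(y, 0) + 1
    cluster_label = [max(v, key=v.get) if v else None for v in votes]

    def predict(p):
        c = min(range(k), key=lambda j: math.dist(p, centroids[j]))
        return cluster_label[c]
    return predict
```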

25 pages, 5629 KiB  
Article
Signal Preprocessing for Enhanced IoT Device Identification Using Support Vector Machine
by Rene Francisco Santana-Cruz, Martin Moreno, Daniel Aguilar-Torres, Román Arturo Valverde-Domínguez and Rubén Vázquez-Medina
Future Internet 2025, 17(6), 250; https://doi.org/10.3390/fi17060250 - 31 May 2025
Abstract
Device identification based on radio frequency fingerprinting is widely used to improve the security of Internet of Things systems. However, noise and acquisition inconsistencies in raw radio frequency signals can affect the effectiveness of the classification, identification, and authentication algorithms used to distinguish Bluetooth devices. This study investigates how RF signal preprocessing techniques affect the performance of a support vector machine classifier based on radio frequency fingerprinting. Four options derived from an RF signal preprocessing technique are evaluated, each applied to the raw radio frequency signals in an attempt to improve the consistency between signals emitted by the same Bluetooth device. Experiments conducted on raw Bluetooth signals from twenty-four smartphone radios drawn from two public databases of RF signals show that selecting an appropriate RF signal preprocessing approach can significantly improve the effectiveness of a support vector machine classifier-based algorithm used to discriminate Bluetooth devices.
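Preprocessing of this kind aims to make repeated captures of the same radio comparable before classification. The two steps below (DC-offset removal and energy normalization) are common generic choices, not necessarily among the four options the paper evaluates:

```python
import math

def remove_dc(signal):
    """Subtract the mean so acquisitions with different DC offsets align."""
    m = sum(signal) / len(signal)
    return [s - m for s in signal]

def unit_energy(signal):
    """Scale to unit energy so amplitude differences caused by distance
    or gain settings do not dominate the fingerprint."""
    e = math.sqrt(sum(s * s for s in signal))
    return [s / e for s in signal]

def preprocess(signal):
    """One candidate preprocessing chain for raw RF captures."""
    return unit_energy(remove_dc(signal))
```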

55 pages, 17044 KiB  
Review
Perspectives and Research Challenges in Wireless Communications Hardware for the Future Internet and Its Applications Services
by Dimitrios G. Arnaoutoglou, Tzichat M. Empliouk, Theodoros N. F. Kaifas, Constantinos L. Zekios and George A. Kyriacou
Future Internet 2025, 17(6), 249; https://doi.org/10.3390/fi17060249 - 31 May 2025
Abstract
The transition from 5G to 6G wireless systems introduces new challenges at the physical layer, including the need for higher-frequency operation, massive MIMO deployment, advanced beamforming techniques, and sustainable energy harvesting mechanisms. A plethora of feature articles, reviews, white papers, and roadmaps elaborate on the perspectives and research challenges of wireless systems in general, covering both the unified physical and cyber space. Hence, this paper presents a comprehensive review of the technological challenges and recent advancements in wireless communication hardware that underpin the development of next-generation networks, particularly 6G. Emphasizing the physical layer, the study explores critical enabling technologies, including beamforming, massive MIMO, reconfigurable intelligent surfaces (RIS), millimeter-wave (mmWave) and terahertz (THz) communications, wireless power transfer, and energy harvesting. These technologies are analyzed in terms of their functional roles, implementation challenges, and integration into future wireless infrastructure. Beyond traditional physical-layer components, the paper also discusses the role of reconfigurable RF front-ends, innovative antenna architectures, and user-end devices that contribute to the adaptability and efficiency of emerging communication systems. In addition, the inclusion of application-driven paradigms such as digital twins highlights how new use cases are shaping design requirements and pushing the boundaries of hardware capabilities. By linking foundational physical-layer technologies with evolving application demands, this work provides a holistic perspective aimed at guiding future research directions and informing the design of scalable, energy-efficient, and resilient wireless communication platforms for the Future Internet. Specifically, we first identify the demands and, in turn, explore existing or emerging technologies that have the potential to meet them. In particular, we provide an extended review of state-of-the-art antennas for massive MIMO terrestrial and non-terrestrial networks.
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems)

31 pages, 1011 KiB  
Article
A Tale of Many Networks: Splitting and Merging of Chord-like Overlays in Partitioned Networks
by Tobias Amft and Kalman Graffi
Future Internet 2025, 17(6), 248; https://doi.org/10.3390/fi17060248 - 31 May 2025
Abstract
Peer-to-peer overlays define an approach to operating data management platforms that is robust against censorship attempts by countries or large enterprises. The robustness of such overlays is endangered by national Internet isolations, as was the case in recent years during political revolutions. In this paper, we focus on splits and, with stronger emphasis, on the merging of ring-based overlays in the presence of network partitioning in the underlying Internet. We present a new merging algorithm named the Ring Reunion Algorithm and highlight a method for reducing the number of messages in both separated and united overlay states. The algorithm is parallelized for accelerated merging and is able to automatically detect overlay partitioning and start the corresponding merging processes. Through simulations, we evaluate the new Ring Reunion Algorithm, in its simple and parallelized forms, against a plain Chord algorithm, the Chord–Zip algorithm, and two versions of the Ring-Unification Algorithm. The evaluation shows that only our parallelized Ring Reunion Algorithm allows the merging of two, three, and more isolated overlay networks in parallel. Our approach quickly merges the overlays, even under churn, and stabilizes the node contacts in the overlay with small traffic overhead.
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
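The invariant any ring-merging algorithm must restore is that, over the union of both node sets, every node's successor is the next identifier clockwise on the ring. A centralized sketch of that target state, useful as a correctness oracle but not as the decentralized Ring Reunion procedure itself:

```python
def merged_ring(ring_a, ring_b):
    """Successor pointers of the ideal Chord ring over both partitions.

    ring_a, ring_b: iterables of node identifiers from two previously
    separated overlays. Sorting the union and linking each node to the
    next identifier (wrapping around at the top) yields the state a
    decentralized merge algorithm must converge to.
    """
    ids = sorted(set(ring_a) | set(ring_b))
    return {node: ids[(i + 1) % len(ids)] for i, node in enumerate(ids)}
```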

27 pages, 1766 KiB  
Article
Enhanced Peer-to-Peer Botnet Detection Using Differential Evolution for Optimized Feature Selection
by Sangita Baruah, Vaskar Deka, Dulumani Das, Utpal Barman and Manob Jyoti Saikia
Future Internet 2025, 17(6), 247; https://doi.org/10.3390/fi17060247 - 30 May 2025
Abstract
With the growing prevalence of cybercrime, botnets have emerged as a significant threat, infiltrating an increasing number of legitimate computers annually. Challenges that botnet attacks pose to organizations, educational institutions, and individuals include distributed denial of service (DDoS) attacks, phishing, extortion, spam generation, and identity theft. The stealthy nature of botnets, characterized by constant alterations in network structures, attack methodologies, and data transmission patterns, makes their detection increasingly difficult. This paper introduces an innovative strategy for mitigating botnet threats. Employing differential evolution, we propose a feature selection approach that enhances the ability to discern peer-to-peer (P2P) botnet traffic amidst evolving cyber threats. Differential evolution is a population-based meta-heuristic technique that can be applied to nonlinear and non-differentiable optimization problems owing to its fast convergence and use of few control parameters. In addition, an ensemble learning algorithm is employed to support and enhance the detection phase, providing a robust defense against the dynamic and sophisticated nature of modern P2P botnets. The results demonstrate that our model achieves 99.99% accuracy, 99.49% precision, 98.98% recall, and a 99.23% F1-score, outperforming state-of-the-art P2P detection approaches.
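A minimal sketch of differential evolution applied to feature selection: DE/rand/1/bin over continuous vectors in [0, 1], thresholded at 0.5 into binary feature masks. The binarization, parameter values, and toy fitness below are assumptions for illustration; the paper's encoding and classifier-based fitness may differ.

```python
import random

def de_feature_selection(fitness, n_features, pop_size=20, gens=40,
                         f=0.8, cr=0.9, seed=1):
    """DE/rand/1/bin over [0, 1]^n, thresholded at 0.5 into feature masks."""
    rng = random.Random(seed)
    mask = lambda vec: tuple(x > 0.5 for x in vec)
    pop = [[rng.random() for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: combine three distinct other members (rand/1).
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jr = rng.randrange(n_features)  # force at least one mutated dim
            trial = [
                min(1.0, max(0.0, a[d] + f * (b[d] - c[d])))
                if (d == jr or rng.random() < cr) else pop[i][d]
                for d in range(n_features)
            ]
            # Greedy selection on the binarized mask's fitness.
            if fitness(mask(trial)) >= fitness(mask(pop[i])):
                pop[i] = trial
    return mask(max(pop, key=lambda v: fitness(mask(v))))
```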

22 pages, 698 KiB  
Article
An AI-Driven Framework for Integrated Security and Privacy in Internet of Things Using Quantum-Resistant Blockchain
by Mahmoud Elkhodr
Future Internet 2025, 17(6), 246; https://doi.org/10.3390/fi17060246 - 30 May 2025
Abstract
The growing deployment of the Internet of Things (IoT) across various sectors introduces significant security and privacy challenges. Although numerous individual solutions exist, comprehensive frameworks that effectively combine advanced technologies to address evolving threats are lacking. This paper presents the Integrated Adaptive Security Framework for IoT (IASF-IoT), which integrates artificial intelligence, blockchain technology, and quantum-resistant cryptography into a unified solution tailored for IoT environments. Central to the framework is an adaptive AI-driven security orchestration mechanism, complemented by blockchain-based identity management, lightweight quantum-resistant protocols, and Digital Twins to predict and proactively mitigate threats. A theoretical performance model and large-scale simulation involving 1000 heterogeneous IoT devices were used to evaluate the framework. Results showed that IASF-IoT achieved detection accuracy between 85% and 99%, with simulated energy consumption remaining below 1.5 mAh per day and response times averaging around 2 s. These findings suggest that the framework offers strong potential for scalable, low-overhead security in resource-constrained IoT environments.
(This article belongs to the Special Issue Security and Privacy in AI-Powered Systems)

27 pages, 5632 KiB  
Article
Semantic Fusion of Health Data: Implementing a Federated Virtualized Knowledge Graph Framework Leveraging Ontop System
by Abid Ali Fareedi, Stephane Gagnon, Ahmad Ghazawneh and Raul Valverde
Future Internet 2025, 17(6), 245; https://doi.org/10.3390/fi17060245 - 30 May 2025
Abstract
Data integration (DI) and semantic interoperability (SI) are critical in healthcare, enabling seamless, patient-centric data sharing across systems to meet the demand for instant, unambiguous access to health information. Federated information systems (FIS) face persistent DI and SI challenges stemming from diverse data sources and models. We present a hybrid ontology-based design science research engineering (ODSRE) methodology that combines design science activities with ontology engineering principles to address these issues. ODSRE constructs a systematic mechanism leveraging the Ontop virtual paradigm to establish a federated virtual knowledge graph framework (FVKG) that embeds a virtualized knowledge graph approach to mitigate the aforementioned challenges effectively. The proposed FVKG constructs a virtualized data federation leveraging the Ontop semantic query engine, which effectively resolves data bottlenecks. Through virtualization, the FVKG reduces data migration, ensures low latency and dynamic freshness, and facilitates real-time access while upholding integrity and coherence throughout the federation system. As a result, we suggest a customized framework for constructing monolithic ontological semantic artifacts, especially in FIS. The proposed FVKG incorporates ontology-based data access (OBDA) to build a monolithic virtualized repository that integrates various ontology-driven artifacts and ensures semantic alignment using schema-mapping techniques.

27 pages, 4256 KiB  
Article
A Robust Conformal Framework for IoT-Based Predictive Maintenance
by Alberto Moccardi, Claudia Conte, Rajib Chandra Ghosh and Francesco Moscato
Future Internet 2025, 17(6), 244; https://doi.org/10.3390/fi17060244 - 30 May 2025
Abstract
This study, set within the vast and varied research field of industrial Internet of Things (IoT) systems, proposes a methodology to address uncertainty quantification (UQ) issues in predictive maintenance (PdM) practices. At its core, this paper leverages the commercial modular aero-propulsion system simulation (CMAPSS) dataset to evaluate different artificial intelligence (AI) prognostic algorithms for remaining useful life (RUL) forecasting while supporting the estimation of a robust confidence interval (CI). The methodology first compares statistical learning (SL), machine learning (ML), and deep learning (DL) techniques on each CMAPSS scenario, evaluating performance through a tailored metric, the S-score, and then benchmarks diverse conformal-based uncertainty estimation techniques, namely naive, weighted, and bootstrapping, offering a more suitable and reliable alternative to classical RUL prediction. The results highlight the peculiarities and benefits of the conformal approach, even though probabilistic models favor the adoption of complex models when the machine operates under multiple conditions; they suggest weighted conformal practices in non-exchangeability conditions and recommend bootstrapping alternatives for contexts with a more substantial presence of noise in the data.
(This article belongs to the Special Issue Artificial Intelligence-Enabled Internet of Things (IoT))
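The naive conformal variant benchmarked above can be sketched in a few lines: residuals on a held-out calibration set yield a quantile that is placed symmetrically around each new point prediction. A generic split-conformal sketch, not the paper's exact procedure:

```python
import math

def conformal_interval(predict, calib_x, calib_y, x_new, alpha=0.1):
    """Naive split-conformal interval around a point prediction.

    Under exchangeability of calibration and test data, the returned
    interval covers the true value with probability >= 1 - alpha.
    """
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = sorted(abs(y - predict(x)) for x, y in zip(calib_x, calib_y))
    n = len(scores)
    rank = math.ceil((n + 1) * (1 - alpha))  # conformal quantile rank
    q = scores[min(rank, n) - 1]
    center = predict(x_new)
    return (center - q, center + q)
```

The weighted and bootstrapping variants the paper benchmarks modify how these scores are weighted or resampled; only the naive form is shown here.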

18 pages, 546 KiB  
Article
Resource Allocation for Federated Learning with Heterogeneous Computing Capability in Cloud–Edge–Client IoT Architecture
by Xubo Zhang and Yang Luo
Future Internet 2025, 17(6), 243; https://doi.org/10.3390/fi17060243 - 30 May 2025
Abstract
A federated learning (FL) framework for cloud–edge–client collaboration performs local aggregation of model parameters through edges, reducing communication overhead from clients to the cloud. This framework is particularly suitable for Internet of Things (IoT)-based secure computing scenarios that require extensive computation and frequent parameter updates, as it leverages the distributed nature of IoT devices to enhance data privacy and reduce latency. To address the issue of high-computation-capability clients waiting due to varying computing capabilities under heterogeneous device conditions, this paper proposes an improved resource allocation scheme based on a three-layer FL framework. This scheme optimizes the communication parameter volume from clients to the edge by implementing a method based on random dropout and parameter completion before and after communication, ensuring that local models can be transmitted to the edge simultaneously, regardless of different computation times. This scheme effectively resolves the problem of high-computation-capability clients experiencing long waiting times. Additionally, it optimizes the similarity pairing method, the Shapley Value (SV) aggregation strategy, and the client selection method to better accommodate heterogeneous computing capabilities found in IoT environments. Experiments demonstrate that this improved scheme is more suitable for heterogeneous IoT client scenarios, reducing system latency and energy consumption while enhancing model performance.
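The random-dropout-plus-completion idea, where clients upload only a subset of parameters sized to their compute budget and the edge fills the gaps with current global values before aggregating, can be sketched as follows. Parameter names and the per-client keep fraction are illustrative assumptions, not the paper's settings.

```python
import random

def drop_parameters(params, keep_frac, seed):
    """Client side: keep each parameter independently with probability
    keep_frac, so slower clients can shrink their upload volume."""
    rng = random.Random(seed)
    return {k: v for k, v in params.items() if rng.random() < keep_frac}

def complete_parameters(partial, global_params):
    """Edge side: fill dropped entries with the current global values
    so every client contributes a full-shaped update to aggregation."""
    return {k: partial.get(k, g) for k, g in global_params.items()}
```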

22 pages, 2851 KiB  
Review
Inter-Data Center RDMA: Challenges, Status, and Future Directions
by Xiaoying Huang and Jingwei Wang
Future Internet 2025, 17(6), 242; https://doi.org/10.3390/fi17060242 - 29 May 2025
Abstract
Remote Direct Memory Access (RDMA) has been widely implemented in data centers (DCs) due to its high-bandwidth, low-latency, and low-overhead characteristics. In recent years, as various applications relying on inter-DC interconnection have continuously emerged, the demand for deploying RDMA across DCs has been on the rise. Numerous studies have focused on intra-DC RDMA, whereas research on inter-DC RDMA remains relatively scarce, though it is growing. Inspired by this trend, this article identifies and discusses specific challenges in inter-DC RDMA deployment, such as congestion control and load balancing. It then surveys recent progress in enhancing the applicability of inter-DC RDMA and presents future research directions and opportunities. As the first review article focusing on inter-DC RDMA, it aims to provide valuable insights and guidance for future research in this emerging field. By systematically reviewing the current state of inter-DC RDMA, we hope to establish a foundation that will inspire and direct subsequent studies.

21 pages, 14175 KiB  
Article
Navigating Data Corruption in Machine Learning: Balancing Quality, Quantity, and Imputation Strategies
by Qi Liu and Wanjing Ma
Future Internet 2025, 17(6), 241; https://doi.org/10.3390/fi17060241 - 29 May 2025
Abstract
Data corruption, including missing and noisy entries, is a common challenge in real-world machine learning. This paper examines its impact and mitigation strategies through two experimental setups: supervised NLP tasks (NLP-SL) and deep reinforcement learning for traffic signal control (Signal-RL). This study analyzes how varying corruption levels affect model performance, evaluates imputation strategies, and assesses whether expanding datasets can counteract corruption effects. The results indicate that performance degradation follows a diminishing-return pattern, well modeled by an exponential function. Noisy data harm performance more than missing data, especially in sequential tasks like Signal-RL, where errors may compound. Imputation helps recover missing data but can introduce noise, with its effectiveness depending on corruption severity and imputation accuracy. This study identifies clear boundaries between when imputation is beneficial versus harmful, and classifies tasks as either noise-sensitive or noise-insensitive. Larger datasets reduce corruption effects but offer diminishing gains at high corruption levels. These insights guide the design of robust systems, emphasizing smart data collection, imputation decisions, and preprocessing strategies in noisy environments. Full article
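The diminishing-return degradation pattern the abstract describes can be illustrated with a simple exponential decay model. The functional form and all parameter values below are illustrative assumptions, not figures from the paper:

```python
import math

def predicted_performance(corruption, p_clean=0.92, p_floor=0.50, k=3.0):
    """Exponential model of performance vs. corruption fraction in [0, 1].
    p_clean: performance on clean data; p_floor: asymptotic worst case;
    k: decay rate. All values here are hypothetical."""
    return p_floor + (p_clean - p_floor) * math.exp(-k * corruption)

# Degradation is steepest at low corruption levels (diminishing returns):
drop_early = predicted_performance(0.0) - predicted_performance(0.1)
drop_late = predicted_performance(0.5) - predicted_performance(0.6)
assert drop_early > drop_late
```

Under such a model, each additional increment of corruption costs less performance than the previous one, which matches the diminishing-return pattern reported in the results.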
(This article belongs to the Special Issue Smart Technology: Artificial Intelligence, Robotics and Algorithms)

26 pages, 1761 KiB  
Article
Enhancing Customer Quality of Experience Through Omnichannel Digital Strategies: Evidence from a Service Environment in an Emerging Context
by Fabricio Miguel Moreno-Menéndez, Victoriano Eusebio Zacarías-Rodríguez, Sara Ricardina Zacarías-Vallejos, Vicente González-Prida, Pedro Emil Torres-Quillatupa, Hilario Romero-Girón, José Francisco Vía y Rada-Vittes and Luis Ángel Huaynate-Espejo
Future Internet 2025, 17(6), 240; https://doi.org/10.3390/fi17060240 - 29 May 2025
Abstract
The proliferation of digital platforms and interactive technologies has transformed the way service providers engage with their customers, particularly in emerging economies, where digital inclusion is an ongoing process. This study explores the relationship between omnichannel strategies and customer satisfaction, conceptualized here as a proxy for Quality of Experience (QoE), within a smart service station located in a digitally underserved region. Grounded in customer journey theory and the expectancy–disconfirmation paradigm, the study investigates how data integration, digital payment systems, and logistical flexibility—key components of intelligent e-service systems—influence user perceptions and satisfaction. Based on a correlational design with a non-probabilistic sample of 108 customers, the findings reveal a moderate association between overall omnichannel integration and satisfaction (ρ = 0.555, p < 0.01). However, a multiple regression analysis indicates that no individual dimension significantly predicts satisfaction (adjusted R2 = 0.002). These results suggest that while users value system integration and interaction flexibility, no single technical feature drives satisfaction independently. The study contributes to the growing field of intelligent human-centric service systems by contextualizing QoE and digital inclusion within emerging markets and by emphasizing the importance of perceptual factors in ICT-enabled environments. Full article
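The association reported above (ρ = 0.555) is a Spearman rank correlation; as a reference point, the coefficient can be computed with the classic rank-difference formula. This sketch assumes no tied ranks, which real survey data would require handling:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)); no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly monotone data gives rho = 1.
assert spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]) == 1.0
```

A value such as 0.555 therefore indicates a moderate monotone association between overall omnichannel integration and satisfaction, without implying that any single dimension predicts it.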
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)

17 pages, 1272 KiB  
Article
Multi Stage Retrieval for Web Search During Crisis
by Claudiu Constantin Tcaciuc, Daniele Rege Cambrin and Paolo Garza
Future Internet 2025, 17(6), 239; https://doi.org/10.3390/fi17060239 - 29 May 2025
Abstract
During crisis events, digital information volume can increase by over 500% within hours, with social media platforms alone generating millions of crisis-related posts. This volume creates critical challenges for emergency responders, who require timely access to the concise subset of accurate information they are interested in. Existing approaches rely heavily on large language models; however, the use of large language models limits the scalability of the retrieval procedure and may introduce hallucinations. This paper introduces a novel multi-stage text retrieval framework to enhance information retrieval during crises. Our framework employs a novel three-stage extractive pipeline in which (1) a topic modeling component filters candidates based on thematic relevance, (2) a high-recall lexical retriever identifies a broad candidate set, and (3) a dense retriever reranks the remaining documents. This architecture balances computational efficiency with retrieval effectiveness, prioritizing high recall in the early stages while refining precision in the later ones. The framework avoids introducing hallucinations, achieving a 15% improvement in BERT-Score compared to existing solutions without requiring any costly abstractive model. Moreover, our sequential approach accelerates the search process by 5% compared to a single-stage dense retrieval approach, with minimal effect on performance in terms of BERT-Score. Full article
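The three-stage ordering (topic filter, then high-recall lexical retrieval, then dense reranking) can be sketched with toy scoring functions. The token-overlap scorer and the length-based "dense" score below are placeholders for the BM25-style and embedding-based models a real system would use:

```python
def topic_filter(docs, topic_terms):
    """Stage 1: keep only documents sharing at least one topic term."""
    return [d for d in docs if topic_terms & set(d.lower().split())]

def lexical_retrieve(docs, query, k):
    """Stage 2: high-recall ranking by token overlap (BM25 stand-in)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def dense_rerank(docs, score_fn):
    """Stage 3: rerank the survivors with a (hypothetical) dense score."""
    return sorted(docs, key=score_fn, reverse=True)

docs = [
    "flood warning issued for river district",
    "concert tickets on sale tomorrow",
    "volunteers needed for flood relief effort",
]
topic = {"flood", "earthquake", "wildfire"}
candidates = lexical_retrieve(topic_filter(docs, topic), "flood relief", k=2)
ranked = dense_rerank(candidates, score_fn=lambda d: len(d))  # placeholder
assert "concert tickets on sale tomorrow" not in ranked
```

Each stage only ever filters or reorders existing documents, which is why an extractive pipeline like this cannot hallucinate content the way an abstractive model can.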

17 pages, 661 KiB  
Systematic Review
Security Challenges for Users of Extensible Smart Home Hubs: A Systematic Literature Review
by Tobias Rødahl Thingnes and Per Håkon Meland
Future Internet 2025, 17(6), 238; https://doi.org/10.3390/fi17060238 - 28 May 2025
Abstract
Smart home devices and home automation systems, which control features such as lights, blinds, heaters, door locks, cameras, and speakers, have become increasingly popular and can be found in homes worldwide. Central to these systems are smart home hubs, which serve as the primary control units, allowing users to manage connected devices from anywhere in the world. While this feature is convenient, it also makes smart home hubs attractive targets for cyberattacks. Unfortunately, the average user lacks substantial cybersecurity knowledge, making the security of these systems crucial. This is particularly important as smart home systems are expected to safeguard users’ privacy and security within their homes. This paper synthesizes eight prevalent cybersecurity challenges associated with smart home hubs through a systematic literature review. The review process involved identifying relevant keywords, searching, and screening 713 papers in multiple rounds to arrive at a final selection of 16 papers, which were then summarized and synthesized. This process included research from Scopus published between January 2019 and November 2024 and excluded papers on prototypes or individual features. The study is limited by scarce academic sources on open-source smart home hubs, strict selection criteria, rapid technological changes, and some subjectivity in study inclusion. The security of extensible smart home hubs is a complex and evolving issue. This review provides a foundation for understanding the key challenges and potential solutions, which is useful for future research and development to secure this increasingly important part of our everyday homes. Full article
(This article belongs to the Special Issue Human-Centered Cybersecurity)

29 pages, 2566 KiB  
Article
Machine Learning and Deep Learning-Based Atmospheric Duct Interference Detection and Mitigation in TD-LTE Networks
by Rasendram Muralitharan, Upul Jayasinghe, Roshan G. Ragel and Gyu Myoung Lee
Future Internet 2025, 17(6), 237; https://doi.org/10.3390/fi17060237 - 27 May 2025
Abstract
The variations in atmospheric refractivity in the lower atmosphere create a natural phenomenon known as atmospheric ducts, which allow radio signals to travel long distances. This can adversely affect telecommunication systems, as cells with similar frequencies can interfere with each other due to frequency reuse, which is intended to optimize resource allocation. Thus, the downlink signals of one base station can travel a long distance via an atmospheric duct and interfere with the uplink signals of another base station, a scenario known as atmospheric duct interference (ADI). ADI can be mitigated using digital signal processing, machine learning, and hybrid approaches. To address this challenge, we explore machine learning and deep learning techniques for ADI prediction and mitigation in Time-Division Long-Term Evolution (TD-LTE) networks. Our results show that the Random Forest algorithm achieves the highest prediction accuracy, while a convolutional neural network demonstrates the best mitigation performance. Additionally, we propose optimizing special subframe configurations in TD-LTE networks using machine learning-based methods to effectively reduce ADI. Full article
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)

12 pages, 604 KiB  
Article
Position Accuracy and Distributed Beamforming Performance in WSNs: A Simulation Study
by José Casca, Prabhat Gupta, Marco Gomes, Vitor Silva and Rui Dinis
Future Internet 2025, 17(6), 236; https://doi.org/10.3390/fi17060236 - 27 May 2025
Abstract
This work investigates the performance of distributed beamforming in Wireless Sensor Networks (WSNs), focusing on the impact of node position errors. A comprehensive simulation testbed was developed to assess how varying network topologies and position uncertainties impact system performance. Our results reveal that distributed beamforming in the near-field is highly sensitive to position errors, resulting in a noticeable degradation in performance, particularly in terms of Bit Error Rate (BER). The Cramer–Rao Lower Bound (CRB) was used to analyse the theoretical limits of position estimation accuracy and how these limits affect beamforming performance. These findings underscore the critical importance of accurate localisation techniques and robust beamforming algorithms to fully realise the potential of distributed beamforming in practical WSN applications. Full article
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems)

24 pages, 614 KiB  
Article
LLM Performance in Low-Resource Languages: Selecting an Optimal Model for Migrant Integration Support in Greek
by Alexandros Tassios, Stergios Tegos, Christos Bouas, Konstantinos Manousaridis, Maria Papoutsoglou, Maria Kaltsa, Eleni Dimopoulou, Thanassis Mavropoulos, Stefanos Vrochidis and Georgios Meditskos
Future Internet 2025, 17(6), 235; https://doi.org/10.3390/fi17060235 - 26 May 2025
Abstract
The integration of Large Language Models (LLMs) in chatbot applications is gaining momentum. However, to successfully deploy such systems, the underlying capabilities of LLMs must be carefully considered, especially when dealing with low-resource languages and specialized fields. This paper presents the results of a comprehensive evaluation of several LLMs conducted in the context of a chatbot agent designed to assist migrants in their integration process. Our aim is to identify the optimal LLM that can effectively process and generate text in Greek and provide accurate information, addressing the specific needs of migrant populations. The design of the evaluation methodology leverages input from experts on social assistance initiatives, social impact and technological solutions, as well as from automated LLM self-evaluations. Given the linguistic challenges specific to the Greek language and the application domain, research findings indicate that Claude 3.7 Sonnet and Gemini 2.0 Flash demonstrate superior performance across all criteria, with Claude 3.7 Sonnet emerging as the leading candidate for the chatbot. Moreover, the results suggest that automated custom evaluations of LLMs can align with human assessments, offering a viable option for preliminary low-cost analysis to assist stakeholders in selecting the optimal LLM based on user and application domain requirements. Full article

23 pages, 1191 KiB  
Article
Federated XAI IDS: An Explainable and Safeguarding Privacy Approach to Detect Intrusion Combining Federated Learning and SHAP
by Kazi Fatema, Samrat Kumar Dey, Mehrin Anannya, Risala Tasin Khan, Mohammad Mamunur Rashid, Chunhua Su and Rashed Mazumder
Future Internet 2025, 17(6), 234; https://doi.org/10.3390/fi17060234 - 26 May 2025
Abstract
An intrusion detection system (IDS) is a crucial element of cyber security: a safeguarding module designed to identify unauthorized activities in network environments. Constructing IDSs has never been more important given the growing number of attacks on network layers. This work addresses a different aspect of intrusion detection, considering both privacy and the contribution of individual features to attack classes. At present, the majority of existing IDSs are built on centralized infrastructure, which raises serious security concerns because network data from one system are exposed to another; sharing original network data with another server further undermines privacy protection within the network. In addition, existing IDS models merely identify attack categories without analyzing how individual network features influence the detected attacks. In this article, we propose a novel framework, FEDXAIIDS, converging federated learning and explainable AI. The proposed approach enables IDS models to be collaboratively trained across multiple decentralized devices while ensuring that local data remain securely on edge nodes, thus mitigating privacy risks. The primary objectives of this study are to reveal the privacy concerns of centralized systems and to identify the most significant features so as to understand their contribution to the final output. The proposed model fuses federated learning (FL) with Shapley additive explanations (SHAPs), using an artificial neural network (ANN) as the local model. The framework consists of a server and four client devices, each holding its own data set. The server distributes the primary ANN model among the local clients. Each client trains the distributed model on its own partition of the data set and shares its updates with the server, which then applies the FedAvg aggregation algorithm to combine the clients' results into a single global model. Finally, the contribution of the ten most significant features is evaluated with SHAP. All experiments were conducted on the CICIoT2023 data set, which was partitioned into four parts and distributed among the four local ends. The proposed method demonstrated efficacy in intrusion detection, achieving 88.4% training and 88.2% testing accuracy. The SHAP analysis identified UDP as the most significant network-layer feature, while the incorporation of federated learning safeguarded the confidentiality of each participant's network information. This enhances transparency and ensures that the model is both reliable and interpretable. Federated XAI IDS effectively addresses privacy concerns and feature interpretability issues in modern IDS frameworks, contributing to the advancement of secure, interpretable, and decentralized intrusion detection systems. Our findings accelerate the development of cyber security solutions that leverage federated learning and explainable AI (XAI), paving the way for future research and practical implementations in real-world network security environments. Full article
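The FedAvg aggregation step mentioned above is, at its core, a data-size-weighted average of client parameter vectors. The four-client setup below mirrors the paper's arrangement, but the weights and data-set sizes are toy values:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: average client parameter vectors, weighted by data-set size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]

# Four clients (as in the paper's setup), each with a toy 2-parameter model.
weights = [[1.0, 0.0], [3.0, 2.0], [1.0, 2.0], [3.0, 0.0]]
sizes = [10, 10, 10, 10]
assert fed_avg(weights, sizes) == [2.0, 1.0]
```

Because only these parameter vectors leave the clients, the raw network records never reach the server, which is the privacy property the framework relies on.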
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)

60 pages, 633 KiB  
Article
Secure and Trustworthy Open Radio Access Network (O-RAN) Optimization: A Zero-Trust and Federated Learning Framework for 6G Networks
by Mohammed El-Hajj
Future Internet 2025, 17(6), 233; https://doi.org/10.3390/fi17060233 - 25 May 2025
Abstract
The Open Radio Access Network (O-RAN) paradigm promises unprecedented flexibility and cost efficiency for 6G networks but introduces critical security risks due to its disaggregated, AI-driven architecture. This paper proposes a secure optimization framework integrating zero-trust principles and privacy-preserving Federated Learning (FL) to address vulnerabilities in O-RAN’s RAN Intelligent Controllers (RICs) and xApps/rApps. We first establish a novel threat model targeting O-RAN’s optimization processes, highlighting risks such as adversarial Machine Learning (ML) attacks on resource allocation models and compromised third-party applications. To mitigate these, we design a Zero-Trust Architecture (ZTA) enforcing continuous authentication and micro-segmentation for RIC components, coupled with an FL framework that enables collaborative ML training across operators without exposing raw network data. A differential privacy mechanism is applied to global model updates to prevent inference attacks. We validate our framework using the DAWN Dataset (5G/6G traffic traces with slicing configurations) and the OpenRAN Gym Dataset (O-RAN-compliant resource utilization metrics) to simulate energy efficiency optimization under adversarial conditions. A dynamic DU sleep scheduling case study demonstrates 32% energy savings with <5% latency degradation, even when data poisoning attacks compromise 15% of the FL participants. Comparative analysis shows that our ZTA reduces unauthorized RIC access attempts by 89% compared to conventional O-RAN security baselines. This work bridges the gap between performance optimization and trustworthiness in next-generation O-RAN, offering actionable insights for 6G standardization. Full article
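A differential privacy mechanism on global model updates, as mentioned above, typically clips each update to a bounded norm and adds calibrated noise. The sketch below shows the general pattern only; the clip norm and noise scale are arbitrary illustrative values, not the paper's mechanism:

```python
import math
import random

def dp_sanitize(update, clip_norm, noise_std, rng):
    """Clip an update to a bounded L2 norm, then add Gaussian noise (DP sketch)."""
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [u * scale + rng.gauss(0.0, noise_std) for u in update]

rng = random.Random(42)
# A [3, 4] update has L2 norm 5, so it is scaled down to norm 1 before noise.
noisy = dp_sanitize([3.0, 4.0], clip_norm=1.0, noise_std=0.1, rng=rng)
assert len(noisy) == 2
```

Clipping bounds any single participant's influence on the global model, and the noise masks individual contributions, which is what blunts inference attacks on the shared updates.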
(This article belongs to the Special Issue Secure and Trustworthy Next Generation O-RAN Optimisation)

20 pages, 1305 KiB  
Article
Grouping-Based Dynamic Routing, Core, and Spectrum Allocation Method for Avoiding Spectrum Fragmentation and Inter-Core Crosstalk in Multi-Core Fiber Networks
by Funa Fukui, Tomotaka Kimura, Yutaka Fukuchi and Kouji Hirata
Future Internet 2025, 17(6), 232; https://doi.org/10.3390/fi17060232 - 23 May 2025
Abstract
In this paper, we propose a grouping-based dynamic routing, core, and spectrum allocation (RCSA) method for preventing spectrum fragmentation and inter-core crosstalk in elastic optical path networks based on multi-core fiber environments. Multi-core fibers enable us to considerably enhance the transmission capacity of optical links; however, this induces inter-core crosstalk, which degrades the quality of optical signals. We should thus avoid using the same frequency bands in adjacent cores in order to ensure high-quality communications. However, this simple strategy leads to inefficient use of frequency-spectrum resources, resulting in spectrum fragmentation and a high blocking probability for lightpath establishment. The proposed method overcomes this difficulty by grouping lightpath-setup requests according to their required number of frequency slots. By assigning lightpath-setup requests belonging to the same group to cores according to their priority, the proposed method aims to suppress inter-core crosstalk. Furthermore, the proposed method is designed to mitigate spectrum fragmentation by determining the prioritized frequency bandwidth for lightpath-setup requests according to their required number of frequency slots. Through simulation experiments, we show that the proposed method reduces the blocking of lightpath establishment while suppressing inter-core crosstalk. Full article
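The grouping idea, in which requests needing the same number of frequency slots are steered toward a prioritized band, can be sketched with a toy first-fit allocator on a single core. The band boundaries, slot counts, and request names below are hypothetical, and a real RCSA method would also handle routing and multiple cores:

```python
def first_fit(occupied, n_slots, band):
    """Find the first run of n_slots contiguous free slots inside a band."""
    start, end = band
    for s in range(start, end - n_slots + 1):
        if all(i not in occupied for i in range(s, s + n_slots)):
            return s
    return None  # request would be blocked

def allocate(requests, bands, total_slots=16):
    """Group requests by slot count; each group gets its own band (sketch)."""
    occupied = set()
    placement = {}
    for name, n_slots in requests:
        band = bands.get(n_slots, (0, total_slots))
        s = first_fit(occupied, n_slots, band)
        if s is not None:
            occupied.update(range(s, s + n_slots))
            placement[name] = s
    return placement

# Hypothetical bands: 2-slot requests use slots 0-7, 4-slot requests 8-15.
bands = {2: (0, 8), 4: (8, 16)}
plan = allocate([("a", 2), ("b", 4), ("c", 2)], bands)
assert plan == {"a": 0, "b": 8, "c": 2}
```

Keeping equally sized requests in the same band leaves fewer odd-sized gaps behind when lightpaths are torn down, which is the fragmentation-avoidance intuition behind the grouping.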

21 pages, 3480 KiB  
Article
AI-Driven Framework for Evaluating Climate Misinformation and Data Quality on Social Media
by Zeinab Shahbazi, Rezvan Jalali and Zahra Shahbazi
Future Internet 2025, 17(6), 231; https://doi.org/10.3390/fi17060231 - 22 May 2025
Abstract
In the digital age, climate change content on social media is frequently distorted by misinformation, driven by unrestricted content sharing and monetization incentives. This paper proposes a novel AI-based framework to evaluate the data quality of climate-related discourse across platforms like Twitter and YouTube. Data quality is defined using key dimensions of credibility, accuracy, relevance, and sentiment polarity, and a pipeline is developed using transformer-based NLP models, sentiment classifiers, and misinformation detection algorithms. The system processes user-generated content to detect sentiment drift, engagement patterns, and trustworthiness scores. Datasets were collected from three major platforms, encompassing over 1 million posts between 2018 and 2024. Evaluation metrics such as precision, recall, F1-score, and AUC were used to assess model performance. Results demonstrate a 9.2% improvement in misinformation filtering and 11.4% enhancement in content credibility detection compared to baseline models. These findings provide actionable insights for researchers, media outlets, and policymakers aiming to improve climate communication and reduce content-driven polarization on social platforms. Full article
(This article belongs to the Special Issue Information Communication Technologies and Social Media)

21 pages, 2229 KiB  
Article
A Deep Learning Approach for Multiclass Attack Classification in IoT and IIoT Networks Using Convolutional Neural Networks
by Ali Abdi Seyedkolaei, Fatemeh Mahmoudi and José García
Future Internet 2025, 17(6), 230; https://doi.org/10.3390/fi17060230 - 22 May 2025
Abstract
The rapid expansion of the Internet of Things (IoT) and industrial Internet of Things (IIoT) ecosystems has introduced new security challenges, particularly the need for robust intrusion detection systems (IDSs) capable of adapting to increasingly sophisticated cyberattacks. In this study, we propose a novel intrusion detection approach based on convolutional neural networks (CNNs), designed to automatically extract spatial patterns from network traffic data. Leveraging the DNN-EdgeIIoT dataset, which includes a wide range of attack types and traffic scenarios, we conduct comprehensive experiments to compare the CNN-based model against traditional machine learning techniques, including decision trees, random forests, support vector machines, and K-nearest neighbors. Our approach consistently outperforms baseline models across multiple performance metrics—such as F1 score, precision, and recall—in both binary (benign vs. attack) and multiclass settings (6-class and 15-class classification). The CNN model achieves F1 scores of 1.00, 0.994, and 0.946, respectively, highlighting its strong generalization ability across diverse attack categories. These results demonstrate the effectiveness of deep-learning-based IDSs in enhancing the security posture of IoT and IIoT infrastructures, paving the way for intelligent, adaptive, and scalable threat detection systems. Full article

32 pages, 2147 KiB  
Article
Optimization of Ground Station Energy Saving in LEO Satellite Constellations for Earth Observation Applications
by Francesco Valente, Francesco Giacinto Lavacca, Marco Polverini, Tiziana Fiori and Vincenzo Eramo
Future Internet 2025, 17(6), 229; https://doi.org/10.3390/fi17060229 - 22 May 2025
Abstract
Orbital Edge Computing (OEC) capability on board satellites in Earth Observation (EO) constellations would surely enable a more effective usage of bandwidth, since the possibility to process images on board enables extracting and sending only useful information to the ground. However, OEC can also help to reduce the amount of energy required to process EO data on Earth. In fact, even though energy is a valuable resource on satellites, the on-board energy is pre-allocated due to the presence of solar panels and batteries, and it is always generated and available, regardless of its actual need and use over time. Instead, energy consumption on the ground is strictly dependent on demand, and it increases with the amount of EO data to be processed by ground stations. In this work, we first define and solve an optimization problem to jointly allocate resources and place processing within a constellation-wide network to leverage in-orbit processing as much as possible. This aims to reduce the amount of data to be processed on the ground, and thus, to maximize the energy saving in ground stations. Given the NP hardness of the proposed optimization problem, we also propose the Ground Station Energy-Saving Heuristic (GSESH) algorithm to evaluate the energy saving we would obtain in ground stations in a real orbital scenario. After validating the GSESH algorithm against the results of the optimal solution, we compared it to a benchmark algorithm in a typical scenario and verified that GSESH achieves ground-station energy savings up to 40% higher than those of the benchmark solution. Full article
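The intuition that in-orbit processing cuts ground-side load can be sketched with a toy greedy placement. The capacity model, the 10% data-reduction factor, and the assumption that ground energy scales with the data processed on the ground are all simplifications for illustration, not the GSESH algorithm itself:

```python
def place_processing(tasks, sat_capacity, reduction=0.1):
    """Greedy sketch: process in orbit while satellite capacity allows.
    In-orbit processing shrinks the data sent down by `reduction`
    (a hypothetical factor); overflow is processed raw on the ground."""
    ground_data = 0.0
    remaining = dict(sat_capacity)
    for sat, data in tasks:
        if remaining.get(sat, 0.0) >= data:
            remaining[sat] -= data
            ground_data += data * reduction   # only extracted info goes down
        else:
            ground_data += data               # raw data handled on the ground
    return ground_data

tasks = [("sat1", 4.0), ("sat1", 4.0), ("sat2", 6.0)]
capacity = {"sat1": 5.0, "sat2": 10.0}
# sat1 handles its first task in orbit; the second overflows to the ground.
assert abs(place_processing(tasks, capacity) - 5.0) < 1e-9
```

Even this crude placement shows the lever the paper exploits: every task kept in orbit replaces a full raw download with a small extracted-information transfer, so ground stations process, and pay energy for, far less data.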

20 pages, 6315 KiB  
Article
Analysis of Digital Skills and Infrastructure in EU Countries Based on DESI 2024 Data
by Kvitoslava Obelovska, Andrii Abziatov, Anastasiya Doroshenko, Ivanna Dronyuk, Oleh Liskevych and Rostyslav Liskevych
Future Internet 2025, 17(6), 228; https://doi.org/10.3390/fi17060228 - 22 May 2025
Abstract
This paper presents an analysis of digital skills and network infrastructure in the European Union (EU) countries based on data from the Digital Economy and Society Index (DESI) 2024. We analyze the current state of digital skills and network infrastructure in EU countries, which in the DESI framework is called digital infrastructure, identifying key trends and differences between EU member states. The analysis shows significant differences in the relative share of citizens with a certain level of digital skills across countries, both among ordinary users of digital services and among information and communication technology professionals. The analysis of digital infrastructure includes fixed broadband coverage, mobile broadband, and edge networks, the latter of which are expected to become an important component of future digital infrastructure. The results highlight the importance of harmonizing the development of digital skills and digital infrastructure to support the EU's digital transformation. Significant attention is paid to 5G technology. The feasibility of including a new additional DESI indicator for next-generation 5G technology in the 24.25–52.6 GHz frequency range is shown; its value can be used to assess the readiness of the EU economy for technological leaps that place extremely high demands on reliability and data transmission delays. This analysis of the current state of digital skills and infrastructure contributes to understanding the potential for the future development of the EU digital economy. Full article
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)
