Future Internet, Volume 17, Issue 4 (April 2025) – 53 articles

Cover Story: This study employs secrecy energy efficiency (SEE) as a key performance metric to evaluate the trade-off between power consumption and secure communication efficiency. Additionally, a multi-objective improved biogeography-based optimization (MOIBBO) algorithm is utilized to optimize hyperparameters, ensuring an improved balance between convergence speed and model performance. Extensive simulation results demonstrate that the proposed MOIBBO-CNN–LSTM framework achieves superior SEE performance compared to benchmark schemes. These findings confirm that MOIBBO-CNN–LSTM offers an effective solution for optimizing SEE in CF m-MIMO-based IoT networks, paving the way for more energy-efficient and secure IoT communications.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
21 pages, 1078 KiB  
Article
Mitigating Quantization Errors Due to Activation Spikes in Gated Linear Unit-Based Large Language Models
by Jaewoo Yang, Hayun Kim, Junyung Ji and Younghoon Kim
Future Internet 2025, 17(4), 185; https://doi.org/10.3390/fi17040185 - 21 Apr 2025
Viewed by 182
Abstract
Modern large language models (LLMs) achieve state-of-the-art performance through architectural advancements but require high computational costs for inference. Post-training quantization is a widely adopted approach to reduce these costs by quantizing weights and activations to lower precision, such as INT8. However, we identify a critical challenge in activation quantization for GLU (Gated Linear Unit) variants, which are commonly used in the feed-forward networks of modern LLMs like the LLaMA family. Specifically, severe local quantization errors arise due to excessively large activation magnitudes, which we refer to as activation spikes, leading to significant degradation in model performance. Our analysis reveals a systematic pattern of these spikes: they predominantly occur in the FFN (feed-forward network) layers at the early and late layers of the model and are concentrated on a small subset of tokens rather than being uniformly distributed across a token sequence. To mitigate this issue, we propose two empirical methods: Quantization-free Module (QFeM) and Quantization-free Prefix (QFeP), which isolate activation spikes during quantization. Extensive experiments demonstrated that our methods effectively improve activation quantization, particularly in coarse-grained quantization schemes, enhancing the performance of LLMs with GLU variants and addressing the limitations of existing quantization techniques. The code for implementing our methods and reproducing the experiments is publicly available in our GitHub repository. Full article
(This article belongs to the Special Issue Machine Learning and Natural Language Processing)
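To make the idea of isolating activation spikes concrete, here is a minimal sketch that excludes spike tokens from the INT8 scale computation; the median-based threshold and function names are illustrative assumptions, not the paper's QFeM/QFeP logic.

```python
# Minimal sketch (not the paper's code): compute an INT8 scale while
# isolating tokens whose activations spike far above the typical magnitude.
import numpy as np

def quantize_int8_excluding_spikes(acts: np.ndarray, spike_factor: float = 10.0):
    """acts: (tokens, hidden) activation matrix from one FFN layer."""
    per_token_max = np.abs(acts).max(axis=1)
    spike_mask = per_token_max > spike_factor * np.median(per_token_max)
    # The scale is computed only from non-spike tokens ...
    scale = np.abs(acts[~spike_mask]).max() / 127.0
    q = np.clip(np.round(acts / scale), -128, 127).astype(np.int8)
    return q, scale, spike_mask  # spike tokens would be kept in FP16 instead

acts = np.random.randn(64, 512).astype(np.float32)
acts[3] *= 400.0  # inject an activation spike on one token
q, scale, mask = quantize_int8_excluding_spikes(acts)
print(scale, mask.sum())  # small scale preserved; one token flagged
```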
27 pages, 3313 KiB  
Article
Big-Delay Estimation for Speech Separation in Assisted Living Environments
by Swarnadeep Bagchi and Ruairí de Fréin
Future Internet 2025, 17(4), 184; https://doi.org/10.3390/fi17040184 - 21 Apr 2025
Viewed by 188
Abstract
Phase wraparound due to large inter-sensor spacings in multi-channel demixing renders the DUET and AdRess source separation algorithms—known for their low computational complexity and effective speech demixing performance—unsuitable for hearing-assisted living applications, where such configurations are needed. DUET is limited to relative delays of up to 7 samples, given a sampling rate of F_s = 16 kHz in anechoic scenarios, while the AdRess algorithm is constrained to instantaneous mixing problems. The aim of this paper is to improve the performance of DUET-type time–frequency (TF) masks when microphones are placed far apart. A significant challenge in assistive hearing scenarios is phase wraparound caused by large relative delays. We evaluate the performance of a large relative delay estimation method, called the Elevatogram, in the presence of significant phase wraparound. We present extensions of DUET and AdRess, termed Elevato-DUET and Elevato-AdRess, which are effective in scenarios with relative delays of up to 200 samples. The findings demonstrate that Elevato-AdRess not only outperforms Elevato-DUET in terms of objective separation quality metrics—BSS_Eval and PEASS—but also achieves higher intelligibility scores, as measured by Perceptual Evaluation of Speech Quality (PESQ) Mean Opinion Scores (MOS). These findings suggest that the phase wraparound limitations of the DUET and AdRess algorithms in assistive hearing scenarios involving large inter-microphone spacing can be addressed by introducing the Elevatogram-based Elevato-DUET and Elevato-AdRess algorithms. These algorithms improve separation quality and intelligibility, with Elevato-AdRess demonstrating the best overall performance. Full article
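For intuition on why explicit delay estimation sidesteps phase wraparound, the sketch below recovers a large relative delay by locating the peak of a time-domain cross-correlation; this is a generic stand-in, not the Elevatogram itself.

```python
# Illustrative sketch: time-domain cross-correlation handles delays far
# beyond the ~7-sample phase-based limit, because no phase unwrapping occurs.
import numpy as np

def estimate_relative_delay(x1, x2):
    """Estimate how many samples x2 lags behind x1 via cross-correlation."""
    c = np.correlate(x1, x2, mode="full")
    lag = np.argmax(np.abs(c)) - (len(x2) - 1)
    return -lag  # positive result means x2 is a delayed copy of x1

rng = np.random.default_rng(0)
x1 = rng.standard_normal(4096)
delay = 150  # well beyond DUET's ~7-sample limit at F_s = 16 kHz
x2 = np.concatenate([np.zeros(delay), x1])[: len(x1)]
print(estimate_relative_delay(x1, x2))  # -> 150
```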
41 pages, 1419 KiB  
Systematic Review
Securing Decentralized Ecosystems: A Comprehensive Systematic Review of Blockchain Vulnerabilities, Attacks, and Countermeasures and Mitigation Strategies
by Md Kamrul Siam, Bilash Saha, Md Mehedi Hasan, Md Jobair Hossain Faruk, Nafisa Anjum, Sharaban Tahora, Aiasha Siddika and Hossain Shahriar
Future Internet 2025, 17(4), 183; https://doi.org/10.3390/fi17040183 - 21 Apr 2025
Viewed by 186
Abstract
Blockchain technology has emerged as a transformative innovation, providing a transparent, immutable, and decentralized platform that underpins critical applications across industries such as cryptocurrencies, supply chain management, healthcare, and finance. Despite their promise of enhanced security and trust, the increasing sophistication of cyberattacks has exposed vulnerabilities within blockchain ecosystems, posing severe threats to their integrity, reliability, and adoption. This study presents a comprehensive and systematic review of blockchain vulnerabilities by categorizing and analyzing potential threats, including network-level attacks, consensus-based exploits, smart contract vulnerabilities, and user-centric risks. Furthermore, the research evaluates existing countermeasures and mitigation strategies by examining their effectiveness, scalability, and adaptability to diverse blockchain architectures and use cases. The study highlights the critical need for context-aware security solutions that address the unique requirements of various blockchain applications and proposes a framework for advancing proactive and resilient security designs. By bridging gaps in the existing literature, this research offers valuable insights for academics, industry practitioners, and policymakers, contributing to the ongoing development of robust and secure decentralized ecosystems. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT III)
31 pages, 4356 KiB  
Article
Cybersecurity Intelligence Through Textual Data Analysis: A Framework Using Machine Learning and Terrorism Datasets
by Mohammed Salem Atoum, Ala Abdulsalam Alarood, Eesa Alsolami, Adamu Abubakar, Ahmad K. Al Hwaitat and Izzat Alsmadi
Future Internet 2025, 17(4), 182; https://doi.org/10.3390/fi17040182 - 21 Apr 2025
Viewed by 169
Abstract
This study examines multi-lexical data sources, utilizing an extracted dataset from an open-source corpus and the Global Terrorism Datasets (GTDs), to predict lexical patterns that are directly linked to terrorism. This is essential as specific patterns within a textual context can facilitate the identification of terrorism-related content. The research methodology focuses on generating a corpus from various published works and extracting texts pertinent to “terrorism”. Afterwards, we extract additional lexical contexts of GTDs that directly relate to terrorism. The integration of multi-lexical data sources generates lexical patterns linked to terrorism. Machine learning models were then trained on the dataset. We conducted two primary experiments and analyzed the results. The analysis of data obtained from open sources reveals that while the Extra Trees model achieved the highest accuracy at 94.31%, the XGBoost model demonstrated superior overall performance with a higher recall (81.32%) and F1-score (83.06%) after tuning, indicating a better balance between sensitivity and precision. Similarly, on the GTD dataset, XGBoost consistently outperformed other models in recall and the F1-score, making it a more suitable candidate for tasks where minimizing false negatives is critical. This implies that specific co-occurrences and contexts can be established within the terrorism dataset from multiple lexical data sources, effectively identifying multi-lexical patterns such as “Suicide Attack/Casualty”, “Civilians/Victims”, and “Hostage Taking/Abduction” across various applications or contexts. This will facilitate the development of a framework for understanding the lexical patterns associated with terrorism. Full article
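For orientation, a minimal sketch of the kind of pipeline compared in the study (TF-IDF features feeding XGBoost); the four-document corpus and labels below are placeholders, not the study's data.

```python
# Illustrative text-classification pipeline: TF-IDF n-grams + XGBoost.
from sklearn.feature_extraction.text import TfidfVectorizer
from xgboost import XGBClassifier

texts = [  # placeholder corpus; the study builds its own from published works
    "hostage taking and abduction reported near the border",
    "local festival draws record crowds downtown",
    "suicide attack causes multiple casualties",
    "city council approves new park funding",
]
labels = [1, 0, 1, 0]  # 1 = terrorism-related lexical pattern

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(texts)

clf = XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss")
clf.fit(X, labels)
print(clf.predict(vectorizer.transform(["abduction near checkpoint"])))
```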
24 pages, 592 KiB  
Article
Analysis of Universal Decoding Techniques for 6G Ultra-Reliable and Low-Latency Communication Scenario
by Abhilasha Gautam, Prabhat Thakur and Ghanshyam Singh
Future Internet 2025, 17(4), 181; https://doi.org/10.3390/fi17040181 - 21 Apr 2025
Viewed by 107
Abstract
Ultra-reliable and low-latency communication (URLLC) in 6G networks is characterized by very high reliability and very low latency to enable mission-critical applications. The ability of a coding scheme to support diverse use cases requires flexibility on the part of the decoder. High reliability and low latency require decoders with improved error rate performance and reduced complexity. This article investigates candidate universal decoding algorithms for 6G communication scenarios. Universal decoders work on a wide range of error-correcting codes, making them scalable for different communication protocols. This article undertakes the comparative analysis and performance evaluation of the code-agnostic decoding schemes, including automorphism ensemble (AED), guessing random additive noise (GRAND), ordered statistics (OSD), belief propagation (BPD), bit flipping (BFD), and their variants. Simulations are carried out in MATLAB (R2024a) for the error rate performance of decoders, and plots are provided for the comparative analysis from the results of inferred data. The key findings in this paper highlight the competitive advantage of universal decoding techniques in comparison to the standardized CA-SCL decoding of polar code. Consequently, this work will help in identifying more efficient decoding algorithms for potential 6G URLLC applications. We aim to provide an insight into the scalability of universal decoding techniques by exploring their key performance metrics and comparing their performances. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
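To make the "universal decoder" idea concrete, here is a toy sketch of GRAND on the (7,4) Hamming code: noise patterns are guessed lightest-first and tested against the parity checks. Real GRAND variants order guesses by channel soft information, and the parity-check matrix is just one textbook choice.

```python
# Toy GRAND (Guessing Random Additive Noise Decoding) sketch: the decoder
# never needs the code structure beyond a codebook-membership test.
from itertools import combinations
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])  # (7,4) Hamming parity checks

def grand_decode(y, max_weight=3):
    n = len(y)
    for w in range(max_weight + 1):               # lightest noise first
        for flips in combinations(range(n), w):   # all weight-w patterns
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            cand = (y + e) % 2
            if not (H @ cand % 2).any():          # codebook membership test
                return cand, e
    return None, None

y = np.array([0, 0, 1, 1, 0, 1, 0])  # codeword 1011010 with its first bit flipped
codeword, noise = grand_decode(y)
print(codeword, noise)  # recovers the codeword and the weight-1 noise pattern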
34 pages, 5804 KiB  
Article
AI-MDD-UX: Revolutionizing E-Commerce User Experience with Generative AI and Model-Driven Development
by Adel Alti and Abderrahim Lakehal
Future Internet 2025, 17(4), 180; https://doi.org/10.3390/fi17040180 - 20 Apr 2025
Viewed by 125
Abstract
E-commerce applications have emerged as key drivers of digital transformation, reshaping consumer behavior and driving demand for seamless online transactions. Despite the growth of smart mobile technologies, existing methods rely on fixed UI content that cannot adjust to local cultural preferences and fluctuating user behaviors. This paper explores the combination of generative Artificial Intelligence (AI) technologies with Model-Driven Development (MDD) to enhance personalization, engagement, and adaptability in e-commerce. Unlike static adaptation approaches, generative AI enables real-time, adaptive interactions tailored to individual needs, providing a more engaging and adaptable user experience. The proposed framework follows a three-tier architecture: first, it collects and analyzes user behavior data from UI interactions; second, it leverages MDD to model and personalize user personas and interactions and third, AI techniques, including generative AI and multi-agent reinforcement learning, are applied to refine and optimize UI/UX design. This automation-driven approach uses a multi-agent system to continuously enhance AI-generated layouts. Technical validation demonstrated strong user engagement across diverse platforms and superior performance in UI optimization, achieving an average user satisfaction improvement of 2.3% compared to GAN-based models, 18.6% compared to Bootstrap-based designs, and 11.8% compared to rule-based UI adaptation. These results highlight generative AI-driven MDD tools as a promising tool for e-commerce, enhancing engagement, personalization, and efficiency. Full article
26 pages, 2006 KiB  
Article
Edge AI for Real-Time Anomaly Detection in Smart Homes
by Manuel J. C. S. Reis and Carlos Serôdio
Future Internet 2025, 17(4), 179; https://doi.org/10.3390/fi17040179 - 18 Apr 2025
Viewed by 477
Abstract
The increasing adoption of smart home technologies has intensified the demand for real-time anomaly detection to improve security, energy efficiency, and device reliability. Traditional cloud-based approaches introduce latency, privacy concerns, and network dependency, making Edge AI a compelling alternative for low-latency, on-device processing. This paper presents an Edge AI-based anomaly detection framework that combines Isolation Forest (IF) and Long Short-Term Memory Autoencoder (LSTM-AE) models to identify anomalies in IoT sensor data. The system is evaluated on both synthetic and real-world smart home datasets, including temperature, motion, and energy consumption signals. Experimental results show that LSTM-AE achieves higher detection accuracy (up to 93.6%) and recall but requires more computational resources. In contrast, IF offers faster inference and lower power consumption, making it suitable for constrained environments. A hybrid architecture integrating both models is proposed to balance accuracy and efficiency, achieving sub-50 ms inference latency on embedded platforms such as the Raspberry Pi and NVIDIA Jetson Nano. Optimization strategies such as quantization reduced LSTM-AE inference time by 76% and power consumption by 35%. Adaptive learning mechanisms, including federated learning, are also explored to minimize cloud dependency and enhance data privacy. These findings demonstrate the feasibility of deploying real-time, privacy-preserving, and energy-efficient anomaly detection directly on edge devices. The proposed framework can be extended to other domains such as smart buildings and industrial IoT. Future work will investigate self-supervised learning, transformer-based detection, and deployment in real-world operational settings. Full article
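A minimal sketch of the hybrid gating idea, assuming the cheap Isolation Forest screens every reading and only flagged windows are escalated to the heavier LSTM-AE stage; the data and the confirm_with_lstm_ae() stub are hypothetical.

```python
# Hybrid edge-anomaly sketch: IF as a fast first stage, LSTM-AE on demand.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
train = rng.normal(21.0, 0.5, size=(5000, 1))         # normal temperature readings
stream = np.vstack([rng.normal(21.0, 0.5, (98, 1)), [[35.0], [2.0]]])

iforest = IsolationForest(contamination=0.01, random_state=1).fit(train)
flags = iforest.predict(stream) == -1                  # -1 marks suspected anomalies

for idx in np.flatnonzero(flags):
    window = stream[max(0, idx - 10): idx + 1]
    # confirm_with_lstm_ae(window) -> hypothetical second stage that compares
    # the autoencoder's reconstruction error against an adaptive threshold
    print(f"sample {idx}: value {stream[idx, 0]:.1f} escalated to LSTM-AE stage")
```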
27 pages, 3907 KiB  
Article
Detecting Disinformation in Croatian Social Media Comments
by Igor Ljubi, Zdravko Grgić, Marin Vuković and Gordan Gledec
Future Internet 2025, 17(4), 178; https://doi.org/10.3390/fi17040178 - 17 Apr 2025
Viewed by 249
Abstract
The frequency with which fake news or misinformation is published on social networks is constantly increasing. Users of social networks are confronted with many different posts every day, often with sensationalist titles and content of dubious veracity. The problem is particularly common in times of sensitive social or political situations, such as epidemics of contagious diseases or elections. As such messages can have an impact on democratic processes or cause panic among the population, many countries and the European Commission itself have recently stepped up their activities to combat disinformation campaigns on social networks. Since previous research has shown that there are no tools available to combat disinformation in the Croatian language, we proposed a framework to detect potentially misinforming content in the comments on social media. The case study was conducted with real public comments published on Croatian Facebook pages. The initial results of this framework were encouraging as it can successfully classify and detect disinformation content. Full article
(This article belongs to the Collection Information Systems Security)
47 pages, 2579 KiB  
Systematic Review
Enhancing Transplantation Care with eHealth: Benefits, Challenges, and Key Considerations for the Future
by Ilaisaane Falevai and Farkhondeh Hassandoust
Future Internet 2025, 17(4), 177; https://doi.org/10.3390/fi17040177 - 17 Apr 2025
Viewed by 172
Abstract
eHealth has transformed transplantation care by enhancing communication between patients and clinics, supporting self-management, and improving adherence to medical advice. However, existing research on eHealth in transplantation remains fragmented, lacking a comprehensive understanding of its diverse users, associated benefits and challenges, and key considerations for intervention development. This systematic review, conducted following the PRISMA guidelines, analyzed the literature on eHealth in transplantation published between 2018 and September 2023 across multiple databases. A total of 60 studies were included, highlighting benefits such as improved patient engagement, accessibility, empowerment, and cost-efficiency. Three primary categories of barriers were identified: knowledge and access barriers, usability and implementation challenges, and trust issues. Additionally, patient-centered design and readiness were found to be crucial factors in developing effective eHealth solutions. These findings underscore the need for tailored, patient-centric interventions to maximize the potential of eHealth in transplantation care. Moreover, the success of eHealth interventions in transplantation is increasingly dependent on robust networking infrastructure, cloud-based telemedicine systems, and secure data-sharing platforms. These technologies facilitate real-time communication between transplant teams and patients, ensuring continuous care and monitoring. Full article
(This article belongs to the Section Techno-Social Smart Systems)
25 pages, 3269 KiB  
Article
Augmentation and Classification of Requests in Moroccan Dialect to Improve Quality of Public Service: A Comparative Study of Algorithms
by Hajar Zaidani, Rim Koulali, Abderrahim Maizate and Mohamed Ouzzif
Future Internet 2025, 17(4), 176; https://doi.org/10.3390/fi17040176 - 17 Apr 2025
Viewed by 143
Abstract
Moroccan Law 55.19 aims to streamline administrative procedures, fostering trust between citizens and public administrations. To implement this law effectively and enhance public service quality, it is essential to use the Moroccan dialect to involve a wide range of people by leveraging Natural Language Processing (NLP) techniques customized to its specific linguistic characteristics. It is worth noting that the Moroccan dialect presents a unique linguistic landscape, marked by the coexistence of multiple scripts. Though it has emerged as the preferred medium of communication on social media, reaching wide audiences, its perceived difficulty of comprehension remains unaddressed. This article introduces a new approach to addressing these challenges. First, we compiled and processed a dataset of Moroccan dialect requests for public administration documents, employing a new augmentation technique to enhance its size and diversity. Second, we conducted text classification experiments using various machine learning algorithms, ranging from traditional methods to advanced large language models (LLMs), to categorize the requests into three classes. The results indicate promising outcomes, with an accuracy of more than 80% for LLMs. Finally, we propose a chatbot system architecture for deploying the most efficient classification algorithm. This solution also contains a voice assistant system that can contribute to the social inclusion of illiterate people. The article concludes by outlining potential avenues for future research. Full article
54 pages, 5836 KiB  
Review
A Survey on Edge Computing (EC) Security Challenges: Classification, Threats, and Mitigation Strategies
by Abdul Manan Sheikh, Md. Rafiqul Islam, Mohamed Hadi Habaebi, Suriza Ahmad Zabidi, Athaur Rahman Bin Najeeb and Adnan Kabbani
Future Internet 2025, 17(4), 175; https://doi.org/10.3390/fi17040175 - 16 Apr 2025
Viewed by 664
Abstract
Edge computing (EC) is a distributed computing approach to processing data at the network edge, either by the device or a local server, instead of centralized data centers or the cloud. EC proximity to the data source can provide faster insights, response time, and bandwidth utilization. However, the distributed architecture of EC makes it vulnerable to data security breaches and diverse attack vectors. The edge paradigm has limited availability of resources like memory and battery power. Also, the heterogeneous nature of the hardware, diverse communication protocols, and difficulty in timely updating security patches exist. A significant number of researchers have presented countermeasures for the detection and mitigation of data security threats in an EC paradigm. However, an approach that differs from traditional data security and privacy-preserving mechanisms already used in cloud computing is required. Artificial Intelligence (AI) greatly improves EC security through advanced threat detection, automated responses, and optimized resource management. When combined with Physical Unclonable Functions (PUFs), AI further strengthens data security by leveraging PUFs’ unique and unclonable attributes alongside AI’s adaptive and efficient management features. This paper investigates various edge security strategies and cutting-edge solutions. It presents a comparison between existing strategies, highlighting their benefits and limitations. Additionally, the paper offers a detailed discussion of EC security threats, including their characteristics and the classification of different attack types. The paper also provides an overview of the security and privacy needs of the EC, detailing the technological methods employed to address threats. Its goal is to assist future researchers in pinpointing potential research opportunities. Full article
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)
40 pages, 6881 KiB  
Article
Distributed Reputation for Accurate Vehicle Misbehavior Reporting (DRAMBR)
by Dimah Almani, Tim Muller and Steven Furnell
Future Internet 2025, 17(4), 174; https://doi.org/10.3390/fi17040174 - 15 Apr 2025
Viewed by 235
Abstract
Vehicle-to-Vehicle (V2V) communications technology offers enhanced road safety, traffic efficiency, and connectivity. In V2V, vehicles cooperate by broadcasting safety messages to detect and avoid dangerous situations in time and to reduce congestion. However, vehicles might misbehave, creating false information and sharing it with neighboring vehicles, such as failing to report an observed accident or falsely reporting one when none exists. If other vehicles detect such misbehavior, they can report it. However, false accusations also constitute misbehavior. In disconnected areas with limited infrastructure, the potential for misbehavior increases due to the scarcity of Roadside Units (RSUs) necessary for verifying the truthfulness of communications. In such a situation, identifying malicious behavior using a standard misbehavior management system is ineffective in areas with limited connectivity. This paper presents a novel mechanism, Distributed Reputation for Accurate Misbehavior Reporting (DRAMBR), offering a fully integrated reputation solution that utilizes reputation to enhance the accuracy of the reporting system by identifying misbehavior in rural networks. The system operates in two phases: offline, using the Local Misbehavior Detection Mechanism (LMDM), where vehicles detect misbehavior and store reports locally, and online, where these reports are sent to a central reputation server. DRAMBR aggregates the reports and integrates DBSCAN for clustering spatial and temporal misbehavior reports, Isolation Forest for anomaly detection, and Gaussian Mixture Models for probabilistic classification of reports. Additionally, Random Forest and XGBoost models are combined to improve decision accuracy. DRAMBR distinguishes between honest mistakes, intentional deception, and malicious reporting. Using an existing mechanism, the updated reputation is available even in an offline environment. Through simulations, we evaluate our proposed reputation system’s performance, demonstrating its effectiveness in achieving a reporting accuracy of approximately 98%. The findings highlight the potential of reputation-based strategies to minimize misbehavior and improve the reliability and security of V2V communications, particularly in rural areas with limited infrastructure, ultimately contributing to safer and more reliable transportation systems. Full article
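As an illustration of the aggregation step, the sketch below clusters misbehavior reports in space and time with DBSCAN; coordinates, scaling factors, and eps are assumptions, not values from the paper.

```python
# Sketch: group reports about the same incident, flag isolated ones.
import numpy as np
from sklearn.cluster import DBSCAN

# Each report: (x metres, y metres, time seconds); scale time so that 60 s
# counts as "as close" as 50 m before running a single-eps DBSCAN.
reports = np.array([
    [100.0, 200.0,   0.0],
    [105.0, 198.0,  20.0],
    [102.0, 203.0,  45.0],   # three reports corroborating one incident
    [900.0,  50.0, 600.0],   # isolated (possibly false) report
])
scaled = reports / np.array([50.0, 50.0, 60.0])

labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(scaled)
print(labels)  # e.g. [0 0 0 -1]: a corroborated cluster plus one outlier
```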
22 pages, 2235 KiB  
Article
Multimodal Fall Detection Using Spatial–Temporal Attention and Bi-LSTM-Based Feature Fusion
by Jungpil Shin, Abu Saleh Musa Miah, Rei Egawa, Najmul Hassan, Koki Hirooka and Yoichi Tomioka
Future Internet 2025, 17(4), 173; https://doi.org/10.3390/fi17040173 - 15 Apr 2025
Viewed by 596
Abstract
Human fall detection is a significant healthcare concern, particularly among the elderly, due to its links to muscle weakness, cardiovascular issues, and locomotive syndrome. Accurate fall detection is crucial for timely intervention and injury prevention, which has led many researchers to work on developing effective detection systems. However, existing unimodal systems that rely solely on skeleton or sensor data face challenges such as poor robustness, computational inefficiency, and sensitivity to environmental conditions. While some multimodal approaches have been proposed, they often struggle to capture long-range dependencies effectively. In order to address these challenges, we propose a multimodal fall detection framework that integrates skeleton and sensor data. The system uses a Graph-based Spatial-Temporal Convolutional and Attention Neural Network (GSTCAN) to capture spatial and temporal relationships from skeleton and motion data information in stream-1, while a Bi-LSTM with Channel Attention (CA) processes sensor data in stream-2, extracting both spatial and temporal features. The GSTCAN model uses AlphaPose for skeleton extraction, calculates motion between consecutive frames, and applies a graph convolutional network (GCN) with a CA mechanism to focus on relevant features while suppressing noise. In parallel, the Bi-LSTM with CA processes inertial signals, with Bi-LSTM capturing long-range temporal dependencies and CA refining feature representations. The features from both branches are fused and passed through a fully connected layer for classification, providing a comprehensive understanding of human motion. The proposed system was evaluated on the Fall Up and UR Fall datasets, achieving a classification accuracy of 99.09% and 99.32%, respectively, surpassing existing methods. This robust and efficient system demonstrates strong potential for accurate fall detection and continuous healthcare monitoring. Full article
(This article belongs to the Special Issue Artificial Intelligence-Enabled Smart Healthcare)
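A compact PyTorch sketch of the two-stream fusion: a Bi-LSTM with a simple channel-attention gate processes inertial sequences, and its features are concatenated with a placeholder skeleton-stream vector; all shapes and layer sizes are illustrative assumptions (the GSTCAN graph stream is reduced here to a feature vector).

```python
# Two-stream fusion sketch: Bi-LSTM + channel attention, then concatenation.
import torch
import torch.nn as nn

class BiLSTMWithChannelAttention(nn.Module):
    def __init__(self, in_dim=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Sequential(nn.Linear(2 * hidden, 2 * hidden), nn.Sigmoid())

    def forward(self, x):           # x: (batch, time, in_dim) inertial signals
        h, _ = self.lstm(x)         # (batch, time, 2*hidden)
        h = h.mean(dim=1)           # temporal pooling
        return h * self.att(h)      # channel attention re-weights features

class FusionFallDetector(nn.Module):
    def __init__(self, skel_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.sensor_stream = BiLSTMWithChannelAttention(hidden=hidden)
        self.head = nn.Linear(skel_dim + 2 * hidden, n_classes)

    def forward(self, skel_feat, imu_seq):
        fused = torch.cat([skel_feat, self.sensor_stream(imu_seq)], dim=1)
        return self.head(fused)

model = FusionFallDetector()
logits = model(torch.randn(8, 128), torch.randn(8, 100, 6))
print(logits.shape)  # torch.Size([8, 2])
```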
18 pages, 620 KiB  
Article
C3: Leveraging the Native Messaging Application Programming Interface for Covert Command and Control
by Efstratios Chatzoglou and Georgios Kambourakis
Future Internet 2025, 17(4), 172; https://doi.org/10.3390/fi17040172 - 14 Apr 2025
Viewed by 143
Abstract
Traditional command and control (C2) frameworks struggle with evasion, automation, and resilience against modern detection techniques. This paper introduces covert C2 (C3), a novel C2 framework designed to enhance operational security and minimize detection. C3 employs a decentralized architecture, enabling independent victim communication with the C2 server for covert persistence. Its adaptable design supports diverse post-exploitation and lateral movement techniques for optimized results across various environments. Through optimized performance and the use of the native messaging API, C3 agents achieve a demonstrably low detection rate against prevalent Endpoint Detection and Response (EDR) solutions. A publicly available proof-of-concept implementation demonstrates C3’s effectiveness in real-world adversarial simulations, specifically in direct code execution for privilege escalation and lateral movement. Our findings indicate that integrating novel techniques, such as the native messaging API, and a decentralized architecture significantly improves the stealth, efficiency, and reliability of offensive operations. The paper further analyzes C3’s post-exploitation behavior, explores relevant defense strategies, and compares it with existing C2 solutions, offering practical insights for enhancing network security. Full article
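The native messaging framing that C3 builds on is public: each message is a 4-byte native-endian length prefix followed by UTF-8 JSON over stdin/stdout. A minimal, benign echo host looks like the sketch below (not C3's implementation).

```python
# Minimal native messaging host: length-prefixed JSON over standard streams.
import json
import struct
import sys

def read_message():
    raw_len = sys.stdin.buffer.read(4)
    if len(raw_len) < 4:
        return None                        # browser closed the pipe
    (msg_len,) = struct.unpack("=I", raw_len)
    return json.loads(sys.stdin.buffer.read(msg_len))

def send_message(obj):
    data = json.dumps(obj).encode("utf-8")
    sys.stdout.buffer.write(struct.pack("=I", len(data)) + data)
    sys.stdout.buffer.flush()

while (msg := read_message()) is not None:
    send_message({"echo": msg})            # a real host would dispatch here
```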
37 pages, 3696 KiB  
Article
Design Analysis for a Distributed Business Innovation System Employing Generated Expert Profiles, Matchmaking, and Blockchain Technology
by Adrian Alexandrescu, Delia-Elena Bărbuță, Cristian Nicolae Buțincu, Alexandru Archip, Silviu-Dumitru Pavăl, Cătălin Mironeanu and Gabriel-Alexandru Scînteie
Future Internet 2025, 17(4), 171; https://doi.org/10.3390/fi17040171 - 14 Apr 2025
Viewed by 163
Abstract
Innovation ecosystems often face challenges such as inadequate coordination, insufficient protection of intellectual property, limited access to quality expertise, and inefficient matchmaking between innovators and experts. This paper provides an in-depth design analysis of SPARK-IT, a novel business innovation platform specifically addressing these challenges. The platform leverages advanced AI to precisely match innovators with suitable mentors, supported by a distributed web scraper that constructs expert profiles from reliable sources (e.g., LinkedIn and BrainMap). Data privacy and security are prioritized through robust encryption that restricts sensitive content exclusively to innovators and mentors, preventing unauthorized access even by platform administrators. Additionally, documents are stored encrypted on decentralized storage, with their cryptographic hashes anchored on blockchain to ensure transparency, traceability, non-repudiation, and immutability. To incentivize active participation, SPARK-IT utilizes a dual-token approach comprising reward and reputation tokens. The reward tokens, SparkCoins, are wrapped stablecoins with tangible monetary value, enabling seamless internal transactions and external exchanges. Finally, the paper discusses key design challenges and critical architectural trade-offs and evaluates the socio-economic impacts of implementing this innovative solution. Full article
(This article belongs to the Section Internet of Things)
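A simplified sketch of the storage flow described: encrypt the document, store the ciphertext off-chain, and anchor only its SHA-256 digest on-chain; anchor_on_chain() is a hypothetical stub for the blockchain call.

```python
# Encrypt-then-anchor sketch: only the hash of the ciphertext goes on-chain.
import hashlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # shared only by innovator and mentor
cipher = Fernet(key)

document = b"confidential innovation proposal"
ciphertext = cipher.encrypt(document)  # what decentralized storage receives
digest = hashlib.sha256(ciphertext).hexdigest()

# anchor_on_chain(digest)              # hypothetical blockchain call
print(digest)                          # later recomputed to prove integrity
```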
37 pages, 3006 KiB  
Article
Employing Streaming Machine Learning for Modeling Workload Patterns in Multi-Tiered Data Storage Systems
by Edson Ramiro Lucas Filho, George Savva, Lun Yang, Kebo Fu, Jianqiang Shen and Herodotos Herodotou
Future Internet 2025, 17(4), 170; https://doi.org/10.3390/fi17040170 - 11 Apr 2025
Viewed by 335
Abstract
Modern multi-tiered data storage systems optimize file access by managing data across a hybrid composition of caches and storage tiers while using policies whose decisions can severely impact the storage system’s performance. Recently, different Machine-Learning (ML) algorithms have been used to model access patterns from complex workloads. Yet, current approaches train their models offline in a batch-based approach, even though storage systems are processing a stream of file requests with dynamic workloads. In this manuscript, we advocate the streaming ML paradigm for modeling access patterns in multi-tiered storage systems as it introduces various advantages, including high efficiency, high accuracy, and high adaptability. Moreover, representative file access patterns, including temporal, spatial, length, and frequency patterns, are identified for individual files, directories, and file formats, and used as features. Streaming ML models are developed, trained, and tested on different file system traces for making two types of predictions: the next offset to be read in a file and the future file hotness. An extensive evaluation is performed with production traces provided by Huawei Technologies, showing that the models are practical, with low memory consumption (<1.3 MB) and low training delay (<1.8 ms per training instance), and can make accurate predictions online (0.98 F1 score and 0.07 MAE on average). Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
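A small sketch of the streaming-ML workflow using the river library: the model predicts the next read offset, is scored, and then trains on each arriving instance; the feature names and three-step trace are illustrative, not the Huawei traces from the paper.

```python
# Streaming (online) regression sketch: predict, score, then learn_one.
from river import linear_model, metrics, preprocessing

model = preprocessing.StandardScaler() | linear_model.LinearRegression()
mae = metrics.MAE()

trace = [  # (features of the current read, next offset actually read)
    ({"last_offset": 0, "read_len": 4096, "seq_reads": 1}, 4096),
    ({"last_offset": 4096, "read_len": 4096, "seq_reads": 2}, 8192),
    ({"last_offset": 8192, "read_len": 4096, "seq_reads": 3}, 12288),
]

for x, y in trace:
    y_pred = model.predict_one(x)   # predict before the truth arrives ...
    mae.update(y, y_pred)
    model.learn_one(x, y)           # ... then train on that single instance
print(mae)
```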
29 pages, 1763 KiB  
Article
Energy-Efficient Secure Cell-Free Massive MIMO for Internet of Things: A Hybrid CNN–LSTM-Based Deep-Learning Approach
by Ali Vaziri, Pardis Sadatian Moghaddam, Mehrdad Shoeibi and Masoud Kaveh
Future Internet 2025, 17(4), 169; https://doi.org/10.3390/fi17040169 - 11 Apr 2025
Viewed by 299
Abstract
The Internet of Things (IoT) has revolutionized modern communication systems by enabling seamless connectivity among low-power devices. However, the increasing demand for high-performance wireless networks necessitates advanced frameworks that optimize both energy efficiency (EE) and security. Cell-free massive multiple-input multiple-output (CF m-MIMO) has emerged as a promising solution for IoT networks, offering enhanced spectral efficiency, low-latency communication, and robust connectivity. Nevertheless, balancing EE and security in such systems remains a significant challenge due to the stringent power and computational constraints of IoT devices. This study employs secrecy energy efficiency (SEE) as a key performance metric to evaluate the trade-off between power consumption and secure communication efficiency. By jointly considering energy consumption and secrecy rate, our analysis provides a comprehensive assessment of security-aware energy efficiency in CF m-MIMO-based IoT networks. To enhance SEE, we introduce a hybrid deep-learning (DL) framework that integrates convolutional neural networks (CNN) and long short-term memory (LSTM) networks for joint EE and security optimization. The CNN extracts spatial features, while the LSTM captures temporal dependencies, enabling a more robust and adaptive modeling of dynamic IoT communication patterns. Additionally, a multi-objective improved biogeography-based optimization (MOIBBO) algorithm is utilized to optimize hyperparameters, ensuring an improved balance between convergence speed and model performance. Extensive simulation results demonstrate that the proposed MOIBBO-CNN–LSTM framework achieves superior SEE performance compared to benchmark schemes. Specifically, MOIBBO-CNN–LSTM attains an SEE gain of up to 38% compared to LSTM and 22% over CNN while converging significantly faster at early training epochs. Furthermore, our results reveal that SEE improves with increasing AP transmit power up to a saturation point (approximately 9.5 Mb/J at P_AP^max = 500 mW), beyond which excessive power consumption limits efficiency gains. Additionally, SEE decreases as the number of APs increases, underscoring the need for adaptive AP selection strategies to mitigate static power consumption in backhaul links. These findings confirm that MOIBBO-CNN–LSTM offers an effective solution for optimizing SEE in CF m-MIMO-based IoT networks, paving the way for more energy-efficient and secure IoT communications. Full article
(This article belongs to the Special Issue Moving Towards 6G Wireless Technologies—2nd Edition)
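As a rough sketch of the hybrid DL architecture described (and of the cover story above), the following Keras model chains a Conv1D feature extractor with an LSTM; the layer sizes, 32-step window, and 8 input features are illustrative assumptions, and the MOIBBO hyperparameter search is out of scope here.

```python
# Generic CNN-LSTM regressor sketch: spatial features, then temporal modeling.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 8)),              # 32 time steps, 8 features
    tf.keras.layers.Conv1D(32, 3, activation="relu"),  # spatial feature extraction
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),                          # temporal dependencies
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                          # predicted SEE (Mb/J)
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```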
14 pages, 522 KiB  
Article
NUDIF: A Non-Uniform Deployment Framework for Distributed Inference in Heterogeneous Edge Clusters
by Peng Li, Chen Qing and Hao Liu
Future Internet 2025, 17(4), 168; https://doi.org/10.3390/fi17040168 - 11 Apr 2025
Viewed by 228
Abstract
Distributed inference in resource-constrained heterogeneous edge clusters is fundamentally limited by disparities in device capabilities and load imbalance issues. Existing methods predominantly focus on optimizing single-pipeline allocation schemes for partitioned sub-models. However, such approaches often lead to load imbalance and suboptimal resource utilization under concurrent batch processing scenarios. To address these challenges, we propose a non-uniform deployment inference framework (NUDIF), which achieves high-throughput distributed inference service by adapting to heterogeneous resources and balancing inter-stage processing capabilities. Formulated as a mixed-integer nonlinear programming (MINLP) problem, NUDIF is responsible for planning the number of instances for each sub-model and determining the specific devices for deploying these instances, while considering computational capacity, memory constraints, and communication latency. This optimization minimizes inter-stage processing discrepancies and maximizes resource utilization. Experimental evaluations demonstrate that NUDIF enhances system throughput by an average of 9.95% compared to traditional single-pipeline optimization methods under various scales of cluster device configurations. Full article
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
24 pages, 687 KiB  
Article
Analyzing Impact and Systemwide Effects of the SlowROS Attack in an Industrial Automation Scenario
by Ivan Cibrario Bertolotti, Luca Durante and Enrico Cambiaso
Future Internet 2025, 17(4), 167; https://doi.org/10.3390/fi17040167 - 11 Apr 2025
Viewed by 271
Abstract
The ongoing adoption of Robot Operating Systems (ROSs) not only for research-oriented projects but also for industrial applications demands a more thorough assessment of its security than in the past. This paper highlights that a key ROS component—the ROS Master—is indeed vulnerable to a novel kind of Slow Denial of Service (slow DoS) attack, the root cause of this vulnerability being an extremely high idle connection timeout. The effects of vulnerability exploitation have been evaluated in detail by means of a realistic test bed, showing how it leads to a systemwide and potentially dangerous disruption of ROS system operations. Moreover, it has been shown how some basic forms of built-in protection of the Linux kernel can be easily circumvented, and are therefore ineffective against this kind of threat. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
22 pages, 2477 KiB  
Article
Reinforcement Learning-Based Dynamic Fuzzy Weight Adjustment for Adaptive User Interfaces in Educational Software
by Christos Troussas, Akrivi Krouska, Phivos Mylonas and Cleo Sgouropoulou
Future Internet 2025, 17(4), 166; https://doi.org/10.3390/fi17040166 - 9 Apr 2025
Viewed by 292
Abstract
Adaptive educational systems are essential for addressing the diverse learning needs of students by dynamically adjusting instructional content and user interfaces (UI) based on real-time performance. Traditional adaptive learning environments often rely on static fuzzy logic rules, which lack the flexibility to evolve with learners’ changing behaviors. To address this limitation, this paper presents an adaptive UI system for educational software in Java programming, integrating fuzzy logic and reinforcement learning (RL) to personalize learning experiences. The system consists of two main modules: (a) the Fuzzy Inference Module, which classifies learners into Fast, Moderate, or Slow categories based on triangular membership functions, and (b) the Reinforcement Learning Optimization Module, which dynamically adjusts the fuzzy membership function thresholds to enhance personalization over time. By refining the timing and necessity of UI modifications, the system optimizes hints, difficulty levels, and structured guidance, ensuring interventions are neither premature nor delayed. The system was evaluated in educational software for Java programming, with 100 postgraduate students. The evaluation, based on learning efficiency, engagement, and usability metrics, demonstrated promising results, particularly for slow and moderate learners, confirming that reinforcement learning-driven fuzzy weight adjustments significantly improve adaptive UI effectiveness. Full article
(This article belongs to the Special Issue Advances and Perspectives in Human-Computer Interaction—2nd Edition)
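A minimal sketch of the Fuzzy Inference Module's classification step, assuming triangular membership functions over a normalized performance score; the breakpoints below are placeholders of the kind the RL module would adjust over time.

```python
# Triangular fuzzy membership sketch: classify a learner from one score.
def triangular(x, a, b, c):
    """Classic triangular membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_learner(score):
    memberships = {
        "Slow":     triangular(score, -0.01, 0.0, 0.45),
        "Moderate": triangular(score, 0.25, 0.5, 0.75),
        "Fast":     triangular(score, 0.55, 1.0, 1.01),
    }
    return max(memberships, key=memberships.get), memberships

print(classify_learner(0.62))  # -> ('Moderate', {...})
```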
24 pages, 2548 KiB  
Article
CPCROK: A Communication-Efficient and Privacy-Preserving Scheme for Low-Density Vehicular Ad Hoc Networks
by Junchao Wang, Honglin Li, Yan Sun, Chris Phillips, Alexios Mylonas and Dimitris Gritzalis
Future Internet 2025, 17(4), 165; https://doi.org/10.3390/fi17040165 - 9 Apr 2025
Viewed by 253
Abstract
The mix-zone method is effective in preserving real-time vehicle identity and location privacy in Vehicular Ad Hoc Networks (VANETs). However, it has limitations in low-vehicle-density scenarios, where adversaries can still identify the real trajectories of the victim vehicle. To address this issue, researchers often generate numerous fake beacons to deceive attackers, but this increases transmission overhead significantly. Therefore, we propose the Communication-Efficient Pseudonym-Changing Scheme within the Restricted Online Knowledge Scheme (CPCROK) to protect vehicle privacy without causing significant communication overhead in low-density VANETs by generating highly authentic fake beacons to form a single fabricated trajectory. Specifically, the CPCROK consists of three main modules: firstly, a special Kalman filter module that provides real-time, coarse-grained vehicle trajectory estimates to reduce the need for real-time vehicle state information; secondly, a Recurrent Neural Network (RNN) module that enhances predictions within the mix zone by incorporating offline data engineering and considering online vehicle steering angles; and finally, a trajectory generation module that collaborates with the first two to generate highly convincing fake trajectories outside the mix zone. The experimental results confirm that CPCROK effectively reduces the attack success rate by over 90%, outperforming the plain mix-zone scheme and beating other fake beacon schemes by more than 60%. Additionally, CPCROK effectively minimizes transmission overhead by 67%, all while ensuring a high level of protection. Full article
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities)
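For intuition, here is a minimal 1-D constant-velocity Kalman filter of the kind the CPCROK pipeline uses for coarse trajectory estimates; the matrices and noise levels are illustrative assumptions, and the RNN refinement stage is out of scope.

```python
# Constant-velocity Kalman filter sketch: predict, then update per beacon.
import numpy as np

dt = 0.1                                  # beacon interval in seconds
F = np.array([[1, dt], [0, 1]])           # state transition: position, velocity
H = np.array([[1.0, 0.0]])                # we only observe position
Q = 0.01 * np.eye(2)                      # process noise
R = np.array([[0.5]])                     # measurement noise

x = np.zeros((2, 1))                      # state estimate [pos, vel]
P = np.eye(2)                             # estimate covariance

for z in [0.0, 1.1, 2.0, 2.9, 4.2]:       # noisy position measurements
    x, P = F @ x, F @ P @ F.T + Q         # predict
    y = np.array([[z]]) - H @ x           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y                         # update
    P = (np.eye(2) - K @ H) @ P
print(x.ravel())                          # fused position and velocity estimate
```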
12 pages, 386 KiB  
Article
A Transformer-Based Autoencoder with Isolation Forest and XGBoost for Malfunction and Intrusion Detection in Wireless Sensor Networks for Forest Fire Prediction
by Ahshanul Haque and Hamdy Soliman
Future Internet 2025, 17(4), 164; https://doi.org/10.3390/fi17040164 - 9 Apr 2025
Viewed by 282
Abstract
Wireless Sensor Networks (WSNs) play a critical role in environmental monitoring and early forest fire detection. However, they are susceptible to sensor malfunctions and network intrusions, which can compromise data integrity and lead to false alarms or missed detections. This study presents a hybrid anomaly detection framework that integrates a Transformer-based Autoencoder, Isolation Forest, and XGBoost to effectively classify normal sensor behavior, malfunctions, and intrusions. The Transformer Autoencoder models spatiotemporal dependencies in sensor data, while adaptive thresholding dynamically adjusts sensitivity to anomalies. Isolation Forest provides unsupervised anomaly validation, and XGBoost further refines classification, enhancing detection precision. Experimental evaluation using real-world sensor data demonstrates that our model achieves 95% accuracy, with high recall for intrusion detection, minimizing false negatives. The proposed approach improves the reliability of WSN-based fire monitoring by reducing false alarms, adapting to dynamic environmental conditions, and distinguishing between hardware failures and security threats. Full article
(This article belongs to the Special Issue Wireless Sensor Networks and Internet of Things)
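A small sketch of the adaptive-thresholding idea, assuming a running mean-plus-k-sigma rule over reconstruction errors; the synthetic error stream below stands in for the Transformer autoencoder's output.

```python
# Adaptive thresholding sketch: flag errors far above their recent history.
import numpy as np

def adaptive_flags(errors, k=3.0, window=50):
    flags = []
    for i, e in enumerate(errors):
        hist = errors[max(0, i - window):i] or [e]   # recent history only
        mu, sigma = np.mean(hist), np.std(hist) + 1e-9
        flags.append(e > mu + k * sigma)
    return flags

errs = list(np.random.default_rng(2).normal(0.1, 0.02, 200))
errs[120] = 0.9  # a malfunction or intrusion produces a reconstruction spike
print([i for i, f in enumerate(adaptive_flags(errs)) if f])  # includes 120
```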
40 pages, 470 KiB  
Systematic Review
A Systematic Review on the Combination of VR, IoT and AI Technologies, and Their Integration in Applications
by Dimitris Kostadimas, Vlasios Kasapakis and Konstantinos Kotis
Future Internet 2025, 17(4), 163; https://doi.org/10.3390/fi17040163 - 7 Apr 2025
Viewed by 841
Abstract
The convergence of Virtual Reality (VR), Artificial Intelligence (AI), and the Internet of Things (IoT) offers transformative potential across numerous sectors. However, existing studies often examine these technologies independently or in limited pairings, which overlooks the synergistic possibilities of their combined usage. This systematic review adheres to the PRISMA guidelines in order to critically analyze peer-reviewed literature from highly recognized academic databases related to the intersection of VR, AI, and IoT, and identify application domains, methodologies, tools, and key challenges. By focusing on real-life implementations and working prototypes, this review highlights state-of-the-art advancements and uncovers gaps that hinder practical adoption, such as data collection issues, interoperability barriers, and user experience challenges. The findings reveal that digital twins (DTs), AIoT systems, and immersive XR environments are promising emerging technologies (ET), but require further development to achieve scalability and real-world impact, while certain fields have so far received only limited research attention. This review bridges theory and practice, providing a targeted foundation for future interdisciplinary research aimed at advancing practical, scalable solutions across domains such as healthcare, smart cities, industry, education, cultural heritage, and beyond. The study found that the integration of VR, AI, and IoT holds significant potential across various domains, with DTs, IoT systems, and immersive XR environments showing promising applications, but challenges such as data interoperability, user experience limitations, and scalability barriers hinder widespread adoption. Full article
(This article belongs to the Special Issue Advances in Extended Reality for Smart Cities)
14 pages, 274 KiB  
Article
Multi-Class Intrusion Detection in Internet of Vehicles: Optimizing Machine Learning Models on Imbalanced Data
by Ágata Palma, Mário Antunes, Jorge Bernardino and Ana Alves
Future Internet 2025, 17(4), 162; https://doi.org/10.3390/fi17040162 - 7 Apr 2025
Viewed by 295
Abstract
The Internet of Vehicles (IoV) presents complex cybersecurity challenges, particularly against Denial-of-Service (DoS) and spoofing attacks targeting the Controller Area Network (CAN) bus. This study leverages the CICIoV2024 dataset, comprising six distinct classes of benign traffic and various types of attacks, to evaluate advanced machine learning techniques for intrusion detection systems (IDS). The models XGBoost, Random Forest, AdaBoost, Extra Trees, Logistic Regression, and Deep Neural Network were tested under realistic, imbalanced data conditions, ensuring that the evaluation reflects real-world scenarios where benign traffic dominates. Using hyperparameter optimization with Optuna, we achieved significant improvements in detection accuracy and robustness. Ensemble methods such as XGBoost and Random Forest consistently demonstrated superior performance, achieving perfect accuracy and macro-average F1-scores, even when detecting minority attack classes, in contrast to previous results for the CICIoV2024 dataset. The integration of optimized hyperparameter tuning and a broader methodological scope culminated in an IDS framework capable of addressing diverse attack scenarios with exceptional precision. Full article
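A compact sketch of the hyperparameter-optimization loop described (Optuna driving an ensemble classifier on imbalanced data); the search space, classifier choice, and synthetic data are placeholders, not the study's configuration.

```python
# Optuna tuning sketch: maximize macro-F1 on an imbalanced stand-in dataset.
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Imbalanced stand-in for CAN traffic: 95% benign, 5% attacks.
X, y = make_classification(n_samples=2000, n_classes=2, weights=[0.95, 0.05],
                           random_state=0)

def objective(trial):
    clf = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 50, 400),
        max_depth=trial.suggest_int("max_depth", 3, 20),
        class_weight="balanced",       # counteract the benign-heavy skew
        random_state=0,
    )
    return cross_val_score(clf, X, y, cv=3, scoring="f1_macro").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params, study.best_value)
```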

39 pages, 4156 KiB  
Review
Enabling Green Cellular Networks: A Review and Proposal Leveraging Software-Defined Networking, Network Function Virtualization, and Cloud-Radio Access Network
by Radheshyam Singh, Line M. P. Larsen, Eder Ollora Zaballa, Michael Stübert Berger, Christian Kloch and Lars Dittmann
Future Internet 2025, 17(4), 161; https://doi.org/10.3390/fi17040161 - 5 Apr 2025
Viewed by 216
Abstract
The increasing demand for enhanced communication systems, driven by applications such as real-time video streaming, online gaming, critical operations, and Internet-of-Things (IoT) services, has necessitated the optimization of cellular networks to meet evolving requirements while addressing power consumption challenges. In this context, various [...] Read more.
The increasing demand for enhanced communication systems, driven by applications such as real-time video streaming, online gaming, critical operations, and Internet-of-Things (IoT) services, has necessitated the optimization of cellular networks to meet evolving requirements while addressing power consumption challenges. In this context, various initiatives undertaken by industry, academia, and researchers to reduce the power consumption of cellular network systems are comprehensively reviewed. Particular attention is given to emerging technologies, including Software-Defined Networking (SDN), Network Function Virtualization (NFV), and Cloud-Radio Access Network (C-RAN), which are identified as key enablers for reshaping cellular infrastructure. Their collective potential to enhance energy efficiency while addressing convergence challenges is analyzed, and solutions for sustainable network evolution are proposed. A conceptual architecture based on SDN, NFV, and C-RAN is presented as an illustrative example of integrating these technologies to achieve significant power savings. The proposed framework outlines an approach to developing energy-efficient cellular networks, capable of reducing power consumption by approximately 40 to 50% through the optimal placement of virtual network functions. Full article
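The roughly 40 to 50% power saving attributed to optimal virtual network function placement can be pictured as a consolidation problem: packing VNFs onto as few active servers as possible so that idle hardware can be switched off. The toy first-fit-decreasing sketch below, with invented server capacities, VNF loads, and power figures, illustrates the idea only; it is not the authors' placement method.

```python
# Toy VNF consolidation via first-fit-decreasing bin packing; all numbers are
# made up. Fewer active servers -> lower aggregate power draw.

SERVER_CAPACITY = 100                   # abstract CPU units per server (assumption)
IDLE_POWER, PEAK_POWER = 90.0, 200.0    # watts, illustrative values only

vnf_loads = [55, 40, 35, 30, 20, 15, 10, 5]   # hypothetical VNF demands

def place_first_fit_decreasing(loads, capacity):
    """Assign each VNF to the first server with room, opening servers as needed."""
    servers = []  # residual capacity per active server
    for load in sorted(loads, reverse=True):
        for i, free in enumerate(servers):
            if load <= free:
                servers[i] -= load
                break
        else:
            servers.append(capacity - load)
    return servers

def power(num_servers, total_load):
    """Linear power model: idle cost per active server plus a load-proportional term."""
    return num_servers * IDLE_POWER + total_load / SERVER_CAPACITY * (PEAK_POWER - IDLE_POWER)

naive = len(vnf_loads)  # one VNF per server
packed = len(place_first_fit_decreasing(vnf_loads, SERVER_CAPACITY))
total = sum(vnf_loads)
print(f"servers: {naive} -> {packed}, "
      f"power: {power(naive, total):.0f} W -> {power(packed, total):.0f} W")
```

Under these invented numbers the packed placement cuts power from 951 W to 501 W, about 47%, which is in the ballpark the review reports for optimal placement.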

26 pages, 9869 KiB  
Article
Comparative Feature-Guided Regression Network with a Model-Eye Pretrained Model for Online Refractive Error Screening
by Jiayi Wang, Tianyou Zheng, Yang Zhang, Tianli Zheng and Weiwei Fu
Future Internet 2025, 17(4), 160; https://doi.org/10.3390/fi17040160 - 3 Apr 2025
Viewed by 186
Abstract
With the development of the internet, the incidence of myopia is showing a trend towards younger ages, making routine vision screening increasingly essential. This paper designs an online refractive error screening solution centered on the CFGN (Comparative Feature-Guided Network), a refractive error screening [...] Read more.
With the development of the internet, the incidence of myopia is trending toward younger ages, making routine vision screening increasingly essential. This paper designs an online refractive error screening solution centered on CFGN (Comparative Feature-Guided Network), a screening network based on the eccentric photorefraction method. Additionally, a training strategy incorporating a pretrained model built on an objective model eye is introduced to enhance screening accuracy. Specifically, we acquire six-channel infrared eccentric photorefraction pupil images to enrich image information and, based on the characteristics of each channel image, design a comparative feature-guided module and a multi-channel information fusion module to enhance network performance. Experimental results show that CFGN achieves an accuracy exceeding 92% within a ±1.00 D refractive error range across datasets from two regions, with mean absolute errors (MAEs) of 0.168 D and 0.108 D, outperforming traditional models and meeting vision screening requirements. The pretrained model also improves performance when training samples are scarce. The vision screening scheme proposed in this study is more efficient and accurate than existing networks, and the cost-effectiveness of the pretrained model with transfer learning provides a technical foundation for subsequent rapid online screening and routine tracking via networking. Full article
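The input/output contract of such a screening network, six-channel pupil images in and a refractive error in diopters out, trained against an L1 (MAE) objective, can be sketched minimally as below. The actual comparative feature-guided and multi-channel fusion modules of CFGN are not reproduced here; this is a generic stand-in architecture.

```python
# Minimal six-channel CNN regressor trained with an L1 (MAE) objective,
# sketching only the input/output contract described in the abstract.
import torch
import torch.nn as nn

class SixChannelRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # predicted refractive error in diopters

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SixChannelRegressor()
images = torch.randn(4, 6, 128, 128)       # fake batch of six-channel pupil images
targets = torch.randn(4, 1)                # fake refractive errors (D)
loss = nn.L1Loss()(model(images), targets) # L1 loss is exactly the MAE metric
loss.backward()
print(f"MAE on fake batch: {loss.item():.3f} D")
```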

30 pages, 3565 KiB  
Systematic Review
Internet of Things and Deep Learning for Citizen Security: A Systematic Literature Review on Violence and Crime
by Chrisbel Simisterra-Batallas, Pablo Pico-Valencia, Jaime Sayago-Heredia and Xavier Quiñónez-Ku
Future Internet 2025, 17(4), 159; https://doi.org/10.3390/fi17040159 - 3 Apr 2025
Viewed by 364
Abstract
This study conducts a systematic literature review following the PRISMA framework and the guidelines of Kitchenham and Charters to analyze the application of Internet of Things (IoT) technologies and deep learning models in monitoring violent actions and criminal activities in smart cities. A [...] Read more.
This study conducts a systematic literature review following the PRISMA framework and the guidelines of Kitchenham and Charters to analyze the application of Internet of Things (IoT) technologies and deep learning models in monitoring violent actions and criminal activities in smart cities. A total of 45 studies published between 2010 and 2024 were selected, revealing that most research, primarily from India and China, focuses on cybersecurity in IoT networks (76%), while fewer studies address the surveillance of physical violence and crime-related events (17%). Advanced neural network models, such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and hybrid approaches, have demonstrated high accuracy in detecting suspicious behaviors, averaging 97.44%. These models perform well in identifying anomalies in IoT security; however, they have primarily been tested in simulation environments (91% of the analyzed studies), most of which nonetheless incorporate real-world data. From a legal perspective, existing proposals mainly emphasize security and privacy. This study contributes to the development of smart cities by promoting IoT-based security methodologies that enhance surveillance and crime prevention in cities in developing countries. Full article
(This article belongs to the Special Issue Internet of Things (IoT) in Smart City)

17 pages, 2956 KiB  
Article
A3C-R: A QoS-Oriented Energy-Saving Routing Algorithm for Software-Defined Networks
by Sunan Wang, Rong Song, Xiangyu Zheng, Wanwei Huang and Hongchang Liu
Future Internet 2025, 17(4), 158; https://doi.org/10.3390/fi17040158 - 3 Apr 2025
Viewed by 213
Abstract
With the rapid growth of Internet applications and network traffic, existing routing algorithms often struggle to guarantee quality of service (QoS) indicators such as delay, bandwidth, and packet loss rate, as well as network energy consumption, for the various data flows with [...] Read more.
With the rapid growth of Internet applications and network traffic, existing routing algorithms often struggle to guarantee quality of service (QoS) indicators such as delay, bandwidth, and packet loss rate, as well as network energy consumption, for the various data flows with differing business characteristics; they suffer from unbalanced traffic scheduling and unreasonable allocation of network resources. To address these problems, this paper proposes A3C-R, a QoS-oriented energy-saving routing algorithm for the software-defined network (SDN) environment. Building on the asynchronous updates of the asynchronous advantage Actor-Critic (A3C) algorithm and the independent interaction of multiple agents with the environment, A3C-R converges more effectively. The algorithm takes QoS indicators such as delay, bandwidth, and packet loss rate, together with the energy consumption of each link, as input. It then creates multiple agents for asynchronous training; the Actor and Critic in each agent are continuously updated, and model parameters are periodically synchronized to the global model. After training converges, the algorithm outputs link weights for the network topology, from which intelligent routing strategies that meet QoS requirements and lower network energy consumption can be computed. The experimental results indicate that, compared to the baseline algorithms ECMP, I-DQN, and DDPG-EEFS, A3C-R reduces delay by approximately 9.4%, increases throughput by approximately 7.0%, decreases the packet loss rate by approximately 9.5%, and improves the energy-saving percentage by approximately 10.8%. Full article
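The output stage of such an approach can be illustrated independently of the reinforcement learning loop: once per-link weights reflecting delay, bandwidth, packet loss, and energy are available, a shortest-path query yields the route. The sketch below uses an invented linear weight combination and made-up link metrics; in A3C-R the link weights come from the trained agents, not from a fixed formula.

```python
# Illustrative routing stage: fold per-link QoS/energy metrics into one edge
# weight, then run a shortest-path query. Coefficients and metrics are invented.
import networkx as nx

# (u, v): delay ms, available bandwidth Mbps, loss rate, energy cost (made up)
links = {
    ("a", "b"): (5, 100, 0.01, 2.0),
    ("b", "d"): (5, 100, 0.02, 2.0),
    ("a", "c"): (8, 400, 0.001, 1.2),
    ("c", "d"): (7, 400, 0.001, 1.1),
}

def link_weight(delay, bw, loss, energy):
    # Lower delay/loss/energy and higher bandwidth should mean a lighter edge.
    return 0.4 * delay + 0.2 * (1000.0 / bw) + 30.0 * loss + 0.4 * energy

G = nx.Graph()
for (u, v), metrics in links.items():
    G.add_edge(u, v, weight=link_weight(*metrics))

path = nx.shortest_path(G, "a", "d", weight="weight")
print("selected path:", path)  # picks a-c-d: higher bandwidth, lower loss/energy
```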

24 pages, 3782 KiB  
Article
The New CAP Theorem on Blockchain Consensus Systems
by Aristidis G. Anagnostakis and Euripidis Glavas
Future Internet 2025, 17(4), 157; https://doi.org/10.3390/fi17040157 - 2 Apr 2025
Viewed by 297
Abstract
One of the most emblematic theorems in the theory of distributed databases is Eric Brewer’s CAP theorem. It stresses the tradeoffs between Consistency, Availability, and Partition tolerance and states that it is impossible to guarantee all three simultaneously. Inspired by this, we [...] Read more.
One of the most emblematic theorems in the theory of distributed databases is Eric Brewer’s CAP theorem. It stresses the tradeoffs between Consistency, Availability, and Partition tolerance and states that it is impossible to guarantee all three simultaneously. Inspired by this, we introduce the new CAP theorem for autonomous consensus systems and demonstrate that, in the generic case, at most two of the three elementary properties, Consensus achievement (C), Autonomy (A), and entropic Performance (P), can be optimized simultaneously. This places a theoretical limit on the decentralization of Blockchain systems, impacting their scalability, security, and real-world adoption. To formalize and analyze this tradeoff, we utilize the IoT micro-Blockchain as a universal, minimal, consensus-enabling framework. We define a set of quantitative functions relating each property to the number of event witnesses in the system. We identify the existing mutual exclusions and formally prove, for a homogeneous system, that (A), (C), and (P) cannot be optimized simultaneously. This shows that a requirement for concurrent optimization of the three properties cannot be satisfied in the generic case and reveals an intrinsic limitation on the design and optimization of distributed Blockchain consensus mechanisms. Our findings are formally proved within the IoT micro-Blockchain framework and validated against empirical benchmarking data from large-scale Blockchain systems, i.e., Bitcoin, Ethereum, and Hyperledger Fabric. Full article
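The flavor of the tradeoff can be conveyed with a toy model: if consensus confidence grows with the number of event witnesses n while autonomy and entropic performance shrink with it, no single n maximizes all three. The functional forms below are invented for illustration and are not the quantitative functions defined in the paper.

```python
# Toy illustration of the C/A/P tension: three invented monotone functions of
# the witness count n. No single n maximizes all of them at once.
import math

def consensus(n):    # confidence in agreement grows with more witnesses
    return 1.0 - math.exp(-0.5 * n)

def autonomy(n):     # dependence on other nodes grows with n, autonomy falls
    return 1.0 / n

def performance(n):  # coordination overhead grows with n (illustrative form)
    return 1.0 / (1.0 + 0.2 * n * math.log2(n + 1))

for n in (1, 2, 4, 8, 16, 32):
    print(f"n={n:2d}  C={consensus(n):.3f}  A={autonomy(n):.3f}  P={performance(n):.3f}")
# C improves as n grows, while A and P peak at n = 1: the optima conflict.
```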

23 pages, 2670 KiB  
Article
Database Security and Performance: A Case of SQL Injection Attacks Using Docker-Based Virtualisation and Its Effect on Performance
by Ade Dotun Ajasa, Hassan Chizari and Abu Alam
Future Internet 2025, 17(4), 156; https://doi.org/10.3390/fi17040156 - 2 Apr 2025
Viewed by 452
Abstract
Modern database systems are critical for storing sensitive information but are increasingly targeted by cyber threats, including SQL injection (SQLi) attacks. This research proposes a robust security framework leveraging Docker-based virtualisation to enhance database security and mitigate the impact of SQLi attacks. A [...] Read more.
Modern database systems are critical for storing sensitive information but are increasingly targeted by cyber threats, including SQL injection (SQLi) attacks. This research proposes a robust security framework leveraging Docker-based virtualisation to enhance database security and mitigate the impact of SQLi attacks. A controlled experimental methodology evaluated the framework’s effectiveness using the Damn Vulnerable Web Application (DVWA) and Acunetix databases. The findings reveal that Docker significantly reduces vulnerability to SQLi attacks by isolating database instances, thereby safeguarding user data and system integrity. Although Docker introduces a significant increase in CPU utilisation during high-traffic scenarios, this trade-off is justified by the enhanced security and reliability it delivers for real-world applications. This study highlights Docker’s potential as a practical solution for addressing evolving database security challenges in distributed and cloud environments. Full article
(This article belongs to the Collection Information Systems Security)
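The isolation idea can be sketched with the Docker SDK for Python (docker-py): attaching the database container to an internal bridge network keeps it reachable only from containers on that same network, never from the host's external interfaces. The image name and credentials below are placeholders, not the paper's experimental setup.

```python
# Hedged sketch of database isolation with docker-py. An "internal" bridge
# network has no route to external interfaces, so the DB is reachable only by
# containers attached to the same network.
import docker

client = docker.from_env()

# Internal network: no external connectivity in or out.
net = client.networks.create("db_isolated", driver="bridge", internal=True)

db = client.containers.run(
    "mysql:8",                                       # placeholder image
    name="isolated-db",
    network="db_isolated",
    environment={"MYSQL_ROOT_PASSWORD": "example"},  # placeholder credential
    detach=True,
)

# The web tier would join the same network and address the DB by container
# name; no database port is ever published on the host.
print(db.name, "running on", net.name)
```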
