Future Internet, Volume 17, Issue 12 (December 2025) – 53 articles

Cover Story: The Internet’s evolution from Web 1.0 to Web 3.0 has expanded the attack surface for cybercriminals, making malware attacks more sophisticated. The use of AI in antivirus solutions challenges traditional detection methods. Research shows that converting malware files into textured images enhances resistance to obfuscation and improves the detection of zero-day threats. This paper explores the use of image quality assessment (IQA) to enhance the curation of visual malware datasets. We propose MalScore, a no-reference IQA algorithm designed to evaluate dataset quality and guide future dataset development. Our evaluation demonstrates that MalScore effectively differentiates dataset quality, with MalNet Tiny scoring 95% and NARAD 50%, highlighting its potential for visual malware detection and the integration of IQA techniques.
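For readers unfamiliar with the visual malware representation the cover story builds on, the sketch below shows the standard byte-to-grayscale-image conversion; the file path and row width are placeholders, and this is not the MalScore algorithm itself.

```python
# Minimal sketch (not the paper's MalScore algorithm): converting a binary file
# into a grayscale "texture" image, the visual-malware representation the cover
# story refers to. The file path and row width are hypothetical.
import numpy as np
from PIL import Image

def bytes_to_image(path: str, width: int = 256) -> Image.Image:
    data = np.fromfile(path, dtype=np.uint8)      # raw bytes as 0-255 pixel values
    rows = len(data) // width                     # drop the ragged tail
    pixels = data[: rows * width].reshape(rows, width)
    return Image.fromarray(pixels, mode="L")      # 8-bit grayscale image

# img = bytes_to_image("sample.bin")
# img.save("sample.png")
```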
21 pages, 1171 KB  
Article
Methodology for Detecting Suspicious Claims in Health Insurance Using Supervised Machine Learning
by Jose Villegas-Ortega, Luis Napoleon Quiroz Aviles, Juan Nazario Arancibia, Wilder Carpio Montenegro, Rosa Delgadillo and David Mauricio
Future Internet 2025, 17(12), 584; https://doi.org/10.3390/fi17120584 - 18 Dec 2025
Viewed by 366
Abstract
Health insurance fraud (HIF) places a substantial economic burden on global health systems. While supervised machine learning (SML) offers a promising solution for its detection, most approaches are ad hoc and lack a systematic methodological framework that ensures replicability, adaptability, and effectiveness, especially in contexts with severe class imbalance. We developed PDHIF (Phases for Detecting Fraud in Health Insurance), a six-phase systematic methodology that introduces a holistic focus that integrates fraud theory, actors, manifestations, and factors with the complete SML lifecycle. We applied this methodology in a case study using a dataset of 8.5 million claims from a public health insurance system in Peru. We trained and evaluated three SML models (Random Forest, XGBoost, and multilayer perceptron) in two experimental scenarios: one with the original, highly unbalanced dataset and another with a training set balanced via the K-means SMOTE technique. When PDHIF was applied, the results revealed a stark contrast: in the unbalanced scenario, the models were ineffective at detecting fraud (F1 score < 0.521) despite high accuracy (>98%). In the balanced scenario, the performance improved dramatically. The best-performing model, RF, achieved an F1 score of 0.994, a sensitivity of 0.994, and an AUC of 0.994 on the test set, demonstrating a robust ability to distinguish suspicious claims. Full article
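As a rough illustration of the balanced-training scenario described above, the sketch below combines imbalanced-learn's K-means SMOTE with a Random Forest on synthetic data; the feature set, imbalance ratio, and hyperparameters are placeholders rather than the paper's PDHIF configuration.

```python
# Sketch of the balancing-and-training step: K-means SMOTE on the training split
# only, then a Random Forest scored with F1 and AUC. Synthetic data stands in
# for the (non-public) claims dataset, which is far more imbalanced.
import numpy as np
from imblearn.over_sampling import KMeansSMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)        # minority class = "suspicious"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Cluster parameters may need tuning for extreme real-world imbalance.
X_bal, y_bal = KMeansSMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
pred = clf.predict(X_te)
print("F1:", f1_score(y_te, pred),
      "AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```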

33 pages, 1981 KB  
Article
DSGTA: A Dynamic and Stochastic Game-Theoretic Allocation Model for Scalable and Efficient Resource Management in Multi-Tenant Cloud Environments
by Said El Kafhali and Oumaima Ghandour
Future Internet 2025, 17(12), 583; https://doi.org/10.3390/fi17120583 - 17 Dec 2025
Viewed by 246
Abstract
Efficient resource allocation is a central challenge in multi-tenant cloud, fog, and edge environments, where heterogeneous tenants compete for shared resources under dynamic and uncertain workloads. Static or purely heuristic methods often fail to capture strategic tenant behavior, whereas many existing game-theoretic approaches overlook stochastic demand variability, fairness, or scalability. This paper proposes a Dynamic and Stochastic Game-Theoretic Allocation (DSGTA) model that jointly models non-cooperative tenant interactions, repeated strategy adaptation, and random workload fluctuations. The framework combines a Nash-like dynamic equilibrium, achieved via a lightweight best-response update rule, with an approximate Shapley-value-based fairness mechanism that remains tractable for large tenant populations. The model is evaluated on synthetic scenarios, with a trace-driven setup built from the Google 2019 Cluster dataset, and a scalability study is conducted with up to K=500 heterogeneous tenants. Using a consistent set of core metrics (tenant utility, resource cost, fairness index, and SLA satisfaction rate), DSGTA is compared against a static game-theoretic allocation (SGTA) and a dynamic pricing-based allocation (DPBA). The results, supported by statistical significance tests, show that DSGTA achieves higher utility, lower average cost, improved fairness and competitive utilization across diverse strategy profiles and stochastic conditions, thereby demonstrating its practical relevance for scalable, fair, and economically efficient resource allocation in realistic multi-tenant cloud environments. Full article
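The following toy sketch illustrates the general idea of a lightweight best-response update among competing tenants; the quadratic utility, congestion price, and parameters are illustrative assumptions, not the DSGTA formulation.

```python
# Toy best-response dynamics for tenants sharing a capacity-priced resource pool.
# The utility and price functions are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
K, capacity = 8, 100.0
value = rng.uniform(2.0, 5.0, K)           # each tenant's marginal valuation

def best_response(i, x):
    # maximize value[i]*x_i - (1 + (x_i + others)/capacity)*x_i - 0.05*x_i**2
    others = x.sum() - x[i]
    # closed-form first-order condition of this concave toy objective
    xi = (value[i] - 1.0 - others / capacity) / (2 * 0.05 + 2.0 / capacity)
    return max(xi, 0.0)

x = np.zeros(K)
for _ in range(200):                        # repeated strategy adaptation
    for i in range(K):
        x[i] = best_response(i, x)
print("equilibrium demands:", np.round(x, 2), "total:", round(x.sum(), 2))
```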

38 pages, 8382 KB  
Article
Ontology-Driven Emotion Multi-Class Classification and Influence Analysis of User Opinions on Online Travel Agency
by Putri Utami Rukmana, Muharman Lubis, Hanif Fakhrurroja, Asriana and Alif Noorachmad Muttaqin
Future Internet 2025, 17(12), 582; https://doi.org/10.3390/fi17120582 - 17 Dec 2025
Viewed by 383
Abstract
The rise of social media has transformed Online Travel Agencies (OTAs) into platforms where users actively share their experiences and opinions. However, conventional opinion mining approaches often fail to capture nuanced emotional expressions or connect them to user influence. To address this gap, this study introduces an ontology-driven opinion mining framework that integrates multi-class emotion classification, aspect-based analysis, and influence modeling using Indonesian-language discussions from the social media platform X. The framework combines an OTA-specific ontology that formally represents service aspects such as booking support, financial, platform experience, and event; fine-tuned IndoBERT for emotion recognition and sentiment polarity detection; and Social Network Analysis (SNA) enhanced by entropy weighting and TOPSIS to quantify and rank user influence. The results show that the fine-tuned IndoBERT performs strongly with respect to identification and sentiment polarity detection, with moderate results for multi-class emotion classification. Emotion labels enrich the ontology by linking user opinions to their affective context, enabling the deeper interpretation of customer experiences and service-related issues. The influence analysis further reveals that structural network properties, particularly betweenness, closeness, and eigenvector centrality, serve as the primary determinants of user influence, while engagement indicators act as discriminative amplifiers that highlight users whose content attains high visibility. Overall, the proposed framework offers a comprehensive and interpretable approach to understanding public perception in Indonesian-language OTA discussions. It advances opinion mining for low-resource languages by bridging semantic ontology modeling, emotional understanding, and influence analysis, while providing practical insights for OTAs to enhance service responsiveness, manage emotional engagement, and strengthen digital communication strategies. Full article
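The entropy-weighting and TOPSIS ranking step can be sketched as follows; the criteria values are fabricated, and the code is only a generic illustration of the technique named in the abstract, not the paper's implementation.

```python
# Sketch of entropy weighting + TOPSIS for ranking user influence. The three
# example criteria follow the abstract (betweenness, closeness, eigenvector
# centrality); the numbers are made up.
import numpy as np

def entropy_weights(M):
    P = M / M.sum(axis=0)                               # column-normalize
    ent = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(M))
    d = 1 - ent                                         # degree of diversification
    return d / d.sum()

def topsis(M, w):
    N = M / np.linalg.norm(M, axis=0) * w               # weighted, vector-normalized
    best, worst = N.max(axis=0), N.min(axis=0)          # all criteria treated as benefits
    d_best = np.linalg.norm(N - best, axis=1)
    d_worst = np.linalg.norm(N - worst, axis=1)
    return d_worst / (d_best + d_worst)                 # closeness coefficient

users = ["u1", "u2", "u3", "u4"]
M = np.array([[0.10, 0.42, 0.31],                       # rows: users, cols: criteria
              [0.55, 0.61, 0.72],
              [0.22, 0.38, 0.15],
              [0.48, 0.70, 0.66]])
scores = topsis(M, entropy_weights(M))
print(sorted(zip(users, scores.round(3)), key=lambda t: -t[1]))
```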

32 pages, 1043 KB  
Article
Modeling Student Acceptance of AI Technologies in Higher Education: A Hybrid SEM–ANN Approach
by Charmine Sheena R. Saflor
Future Internet 2025, 17(12), 581; https://doi.org/10.3390/fi17120581 - 17 Dec 2025
Viewed by 541
Abstract
This study examines the role of different factors in supporting the sustainable use of Artificial Intelligence (AI) technologies in higher education, particularly in the context of student interactions with intelligent and human-centered learning tools. Using Structural Equation Modeling (SEM) and Artificial Neural Networks (ANN) within the Technology Acceptance Model (TAM), the research provides a detailed look at how trust influences students’ attitudes and behaviors toward AI-based learning platforms. Data were gathered from 200 students at Occidental Mindoro State College to analyze the effects of social influence, self-efficacy, perceived ease of use, perceived risk, attitude toward use, behavioral intention, acceptance, and actual use. Results from SEM indicate that perceived risk and ease of use have a stronger impact on AI adoption than perceived usefulness and trust. The ANN analysis further shows that acceptance is the most important factor influencing actual AI use, reflecting the complex, non-linear relationships between trust, risk, and adoption. These findings highlight the need for AI systems that are adaptive, transparent, and designed with the user experience in mind. By building interfaces that are more intuitive and reliable, educators and designers can strengthen human–AI interaction and promote responsible and lasting integration of AI in education. Full article
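A minimal sketch of the ANN stage, using an MLP and permutation importance on simulated construct scores as a stand-in for the study's sensitivity analysis; the data and effect sizes below are fabricated.

```python
# Sketch: an MLP predicts actual AI use from TAM-style constructs, and
# permutation importance ranks the predictors. Simulated scores only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
cols = ["social_influence", "self_efficacy", "ease_of_use", "perceived_risk",
        "attitude", "behavioral_intention", "acceptance"]
X = rng.normal(size=(200, len(cols)))               # standardized construct scores
y = 0.6 * X[:, 6] + 0.3 * X[:, 2] - 0.2 * X[:, 3] + rng.normal(0, 0.3, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000,
                   random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(ann, X_te, y_te, n_repeats=30, random_state=0)
for name, m in sorted(zip(cols, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>20s}: {m:.3f}")
```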

25 pages, 821 KB  
Article
Enhancing Microservice Security Through Adaptive Moving Target Defense Policies to Mitigate DDoS Attacks in Cloud-Native Environments
by Yuyang Zhou, Guang Cheng and Kang Du
Future Internet 2025, 17(12), 580; https://doi.org/10.3390/fi17120580 - 16 Dec 2025
Viewed by 225
Abstract
Cloud-native microservice architectures offer scalability and resilience but introduce complex interdependencies and new attack surfaces, making them vulnerable to resource-exhaustion Distributed Denial-of-Service (DDoS) attacks. These attacks propagate along service call chains, closely mimic legitimate traffic, and evade traditional detection and mitigation techniques, resulting in cascading bottlenecks and degraded Quality of Service (QoS). Existing Moving Target Defense (MTD) approaches lack adaptive, cost-aware policy guidance and are often ineffective against spatiotemporally adaptive adversaries. To address these challenges, this paper proposes ScaleShield, an adaptive MTD framework powered by Deep Reinforcement Learning (DRL) that learns coordinated, attack-aware defense policies for microservices. ScaleShield formulates defense as a Markov Decision Process (MDP) over multi-dimensional discrete actions, leveraging a Multi-Dimensional Double Deep Q-Network (MD3QN) to optimize service availability and minimize operational overhead. Experimental results demonstrate that ScaleShield achieves near 100% defense success rates and reduces compromised nodes to zero within approximately 5 steps, significantly outperforming state-of-the-art baselines. It lowers service latency by up to 72% under dynamic attacks while maintaining over 94% resource efficiency, providing robust and cost-effective protection against resource-exhaustion DDoS attacks in cloud-native environments. Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
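A minimal PyTorch skeleton of the multi-dimensional Q-value idea behind an MD3QN-style agent (one output head per defense action dimension); the sizes and action semantics are illustrative, not ScaleShield's.

```python
# Minimal skeleton of a Q-network with one output head per action dimension,
# the kind of multi-dimensional discrete action handling an MD3QN agent implies.
import torch
import torch.nn as nn

class MultiHeadQNet(nn.Module):
    def __init__(self, state_dim=12, action_dims=(3, 4, 2)):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                   nn.Linear(64, 64), nn.ReLU())
        # one Q-value head per defense dimension (e.g., migrate, scale, shuffle)
        self.heads = nn.ModuleList([nn.Linear(64, n) for n in action_dims])

    def forward(self, state):
        h = self.trunk(state)
        return [head(h) for head in self.heads]     # per-dimension Q-values

net = MultiHeadQNet()
q_values = net(torch.randn(1, 12))
action = [q.argmax(dim=1).item() for q in q_values]  # greedy joint action
print("selected multi-dimensional action:", action)
```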

27 pages, 5763 KB  
Article
SatNet-B3: A Lightweight Deep Edge Intelligence Framework for Satellite Imagery Classification
by Tarbia Hasan, Jareen Anjom, Md. Ishan Arefin Hossain and Zia Ush Shamszaman
Future Internet 2025, 17(12), 579; https://doi.org/10.3390/fi17120579 - 16 Dec 2025
Viewed by 369
Abstract
Accurate weather classification plays a vital role in disaster management and minimizing economic losses. However, satellite-based weather classification remains challenging due to high inter-class similarity; the computational complexity of existing deep learning models, which limits real-time deployment on resource-constrained edge devices; and the limited interpretability of model decisions in practical environments. To address these challenges, this study proposes SatNet-B3, a quantized, lightweight deep learning framework that integrates an EfficientNetB3 backbone with custom classification layers to enable accurate and edge-deployable weather event recognition from satellite imagery. SatNet-B3 is evaluated on the LSCIDMR dataset and demonstrates high-precision performance, achieving 98.20% accuracy and surpassing existing benchmarks. Ten CNN models, including SatNet-B3, were evaluated for classifying eight weather conditions (Tropical Cyclone, Extratropical Cyclone, Snow, Low Water Cloud, High Ice Cloud, Vegetation, Desert, and Ocean), with SatNet-B3 yielding the best results. The model addresses class imbalance and inter-class similarity through extensive preprocessing and augmentation, and the pipeline supports the efficient handling of high-resolution geospatial imagery. Post-training quantization reduced the model size by 90.98% while retaining accuracy, and deployment on a Raspberry Pi 4 achieved a 0.3 s inference time. Integrating explainable AI tools such as LIME and CAM enhances interpretability for intelligent climate monitoring. Full article
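The backbone-plus-quantization recipe the abstract outlines can be sketched with standard Keras and TFLite tooling; the input size, head layers, and output file name below are assumptions, not the paper's exact configuration.

```python
# Sketch: EfficientNetB3 backbone + custom head, then post-training quantization
# with the TFLite converter for edge deployment (e.g., Raspberry Pi 4).
import tensorflow as tf

base = tf.keras.applications.EfficientNetB3(include_top=False, weights="imagenet",
                                            input_shape=(300, 300, 3))
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(8, activation="softmax"),   # eight LSCIDMR classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # training omitted

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]      # post-training quantization
tflite_model = converter.convert()
with open("satnet_b3.tflite", "wb") as f:
    f.write(tflite_model)
```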

27 pages, 814 KB  
Article
Concurrency Bug Detection via Static Analysis and Large Language Models
by Zuocheng Feng, Yiming Chen, Kaiwen Zhang, Xiaofeng Li and Guanjun Liu
Future Internet 2025, 17(12), 578; https://doi.org/10.3390/fi17120578 - 15 Dec 2025
Viewed by 429
Abstract
Concurrency bugs originate from complex and improper synchronization of shared resources, presenting a significant challenge for detection. Traditional static analysis relies heavily on expert knowledge and frequently fails when code is non-compilable. Conversely, large language models struggle with semantic sparsity, inadequate comprehension of concurrent semantics, and the tendency to hallucinate. To address the limitations of static analysis in capturing complex concurrency semantics and the hallucination risks associated with large language models, this study proposes ConSynergy. This novel framework integrates the structural rigor of static analysis with the semantic reasoning capabilities of large language models. The core design employs a robust task decomposition strategy that decomposes concurrency bug detection into a four-stage pipeline: shared resource identification, concurrency-aware slicing, data-flow reasoning, and formal verification. This approach fundamentally mitigates hallucinations from large language models caused by insufficient program context. First, the framework identifies shared resources and applies a concurrency-aware program slicing technique to precisely extract concurrency-related structural features, thereby alleviating semantic sparsity. Second, to enhance the large language model’s comprehension of concurrent semantics, we design a concurrency data-flow analysis based on Chain-of-Thought prompting. Third, the framework incorporates a Satisfiability Modulo Theories solver to ensure the reliability of detection results, alongside an iterative repair mechanism based on large language models that dramatically reduces dependency on code compilability. Extensive experiments on three mainstream concurrency bug datasets, including DataRaceBench, the concurrency subset of Juliet, and DeepRace, demonstrate that ConSynergy achieves an average precision and recall of 80.0% and 87.1%, respectively. ConSynergy outperforms state-of-the-art baselines by 10.9% to 68.2% in average F1 score, demonstrating significant potential for practical application. Full article
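As a toy illustration of the final SMT verification stage, the sketch below asks Z3 whether two accesses can race given their locksets; this is not ConSynergy's actual encoding, and the extracted "facts" are hypothetical.

```python
# Toy SMT check: can two accesses to a shared variable run without holding any
# common lock while at least one of them writes?
from z3 import Bool, Solver, And, Or, Not, sat

locks = ["m1", "m2"]
holds_a = {l: Bool(f"a_holds_{l}") for l in locks}   # lockset of access A
holds_b = {l: Bool(f"b_holds_{l}") for l in locks}   # lockset of access B
a_writes, b_writes = Bool("a_writes"), Bool("b_writes")

s = Solver()
# Facts from a hypothetical static slice: A writes under m1, B reads lock-free.
s.add(a_writes, holds_a["m1"], Not(holds_a["m2"]))
s.add(Not(b_writes), Not(holds_b["m1"]), Not(holds_b["m2"]))

no_common_lock = And(*[Not(And(holds_a[l], holds_b[l])) for l in locks])
s.add(no_common_lock, Or(a_writes, b_writes))        # race-condition query
print("potential data race" if s.check() == sat else "no race under these facts")
```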

22 pages, 1380 KB  
Article
Selection of Optimal Cluster Head Using MOPSO and Decision Tree for Cluster-Oriented Wireless Sensor Networks
by Rahul Mishra, Sudhanshu Kumar Jha, Shiv Prakash and Rajkumar Singh Rathore
Future Internet 2025, 17(12), 577; https://doi.org/10.3390/fi17120577 - 15 Dec 2025
Viewed by 276
Abstract
Wireless sensor networks (WSNs) consist of distributed nodes that monitor various physical and environmental parameters. The sensor nodes (SNs) are usually constrained in resources such as power source, communication, and computation capacity. In a WSN, energy consumption varies depending on the distance between sender and receiver SNs, and communication between distant SNs requires significantly more energy, which negatively affects network longevity. To address these issues, WSNs are deployed using multi-hop routing. Multi-hop routing reduces communication distance and communication cost, but finding an optimal cluster head (CH) and route remains an issue. An optimal CH reduces energy consumption and maintains reliable data transmission throughout the network. To improve the performance of multi-hop routing in WSNs, we propose a model that combines Multi-Objective Particle Swarm Optimization (MOPSO) and a Decision Tree for dynamic CH selection. The proposed model consists of two phases, namely, the offline phase and the online phase. In the offline phase, various network scenarios with different node densities, initial energy levels, and base station (BS) positions are simulated, the required features are collected, and MOPSO is applied to the collected features to generate a Pareto front of optimal CH nodes that optimizes energy efficiency, coverage, and load balancing. Each node is labeled by MOPSO as a CH or not, and the labeled dataset is then used to train a Decision Tree classifier, which yields a lightweight and interpretable model for CH prediction. In the online phase, the trained model is used in the deployed network to quickly and adaptively select CHs by classifying each node as a CH or non-CH from its features. The predicted CH nodes broadcast this information and manage intra-cluster communication, data aggregation, and routing to the BS. CH selection is re-initiated when residual energy drops below a threshold, the load saturates, or coverage degrades. The simulation results demonstrate that the proposed model outperforms protocols such as LEACH, HEED, and standard PSO regarding energy efficiency and network lifetime, making it highly suitable for applications in green computing, environmental monitoring, precision agriculture, healthcare, and industrial IoT. Full article
(This article belongs to the Special Issue Clustered Federated Learning for Networks)
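The online CH-prediction phase can be sketched with a scikit-learn decision tree; the node features and the MOPSO-derived labels below are synthetic placeholders, since the offline MOPSO phase is not reproduced here.

```python
# Sketch of the online phase: a Decision Tree trained on (placeholder) MOPSO-labeled
# node features predicts cluster heads.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 2000
residual_energy = rng.uniform(0, 1, n)
dist_to_bs = rng.uniform(5, 120, n)
neighbor_count = rng.integers(2, 25, n)
X = np.column_stack([residual_energy, dist_to_bs, neighbor_count])
# Placeholder for MOPSO's Pareto-front labeling: high-energy, central,
# well-connected nodes tend to be chosen as CHs.
is_ch = ((residual_energy > 0.6) & (dist_to_bs < 60) & (neighbor_count > 10)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, is_ch, stratify=is_ch, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, tree.predict(X_te)))

# Online use on a freshly sensed node: [residual energy, distance to BS, neighbors]
print("CH?", bool(tree.predict([[0.8, 40.0, 14]])[0]))
```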

19 pages, 1724 KB  
Article
Smart IoT-Based Temperature-Sensing Device for Energy-Efficient Glass Window Monitoring
by Vaclav Mach, Jiri Vojtesek, Milan Adamek, Pavel Drabek, Pavel Stoklasek, Stepan Dlabaja, Lukas Kopecek and Ales Mizera
Future Internet 2025, 17(12), 576; https://doi.org/10.3390/fi17120576 - 15 Dec 2025
Viewed by 309
Abstract
This paper presents the development and validation of an IoT-enabled temperature-sensing device for real-time monitoring of the thermal insulation properties of glass windows. The system integrates contact and non-contact temperature sensors into a compact PCB platform equipped with WiFi connectivity, enabling seamless integration into smart home and building management frameworks. By continuously assessing window insulation performance, the device addresses the challenge of energy loss in buildings, where glazing efficiency often degrades over time. The collected data can be transmitted to cloud-based services or local IoT infrastructures, allowing for advanced analytics, remote access, and adaptive control of heating, ventilation, and air-conditioning (HVAC) systems. Experimental results demonstrate the accuracy and reliability of the proposed system, confirming its potential to contribute to energy conservation and sustainable living practices. Beyond energy efficiency, the device provides a scalable approach to environmental monitoring within the broader future internet ecosystem, supporting the evolution of intelligent, connected, and human-centered living environments. Full article
(This article belongs to the Special Issue Artificial Intelligence and Control Systems for Industry 4.0 and 5.0)
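A minimal sketch of how such a sensing node could publish readings to an IoT backend over MQTT; the broker address, topic, and payload schema are assumptions rather than the device's actual firmware or protocol.

```python
# Sketch: a window-mounted node pushes contact and non-contact temperature
# readings to a (hypothetical) smart-home MQTT broker once per minute.
import json, time
import paho.mqtt.publish as publish

BROKER = "broker.example.local"          # hypothetical broker hostname
TOPIC = "home/windows/livingroom/thermal"

def read_sensors():
    # Placeholders for the contact (glass surface) and non-contact (IR)
    # temperature measurements the device provides.
    return {"contact_c": 12.4, "noncontact_c": 18.9, "ts": time.time()}

while True:
    publish.single(TOPIC, json.dumps(read_sensors()), hostname=BROKER, port=1883)
    time.sleep(60)
```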

26 pages, 614 KB  
Systematic Review
Cybersecurity in Higher Education Institutions: A Systematic Review of Emerging Trends, Challenges and Solutions
by Oladele Afolalu and Mohohlo Samuel Tsoeu
Future Internet 2025, 17(12), 575; https://doi.org/10.3390/fi17120575 - 15 Dec 2025
Viewed by 595
Abstract
Higher education institutions (HEIs) are increasingly becoming vulnerable to cyberattacks as they adopt digital technologies to support their administrative, research and academic activities. These institutions, which typically operate in open and decentralized environments, face serious challenges as a result of the growing complexity of cyberattacks such as phishing, ransomware and data breaches. This systematic review synthesizes existing literature on cybersecurity in HEIs, identifying key challenges, emerging solutions and current trends. The review analyses the adoption of advanced technologies such as zero trust architectures (ZTAs), artificial intelligence (AI)-driven security and cloud-based systems. Furthermore, it investigates the underlying causes of cybersecurity vulnerabilities, including fragmented security procedures, lack of proper awareness about cybersecurity among users and associated technology gaps. The review also examines how governance frameworks, institutional policies and the incorporation of state-of-the-art security technologies can significantly mitigate these threats. Findings reveal that considerable progress has been made by some institutions in implementing security measures. However, comprehensive cybersecurity plans that integrate technological solutions with a robust institutional culture of cybersecurity awareness are still critically needed. The review concludes by highlighting the need for HEIs to collaborate and foster institution-wide partnership to strengthen cybersecurity measures. Finally, an in-depth study into the strategies and best practices for handling emerging cyberthreats in the HEIs is recommended. Full article
(This article belongs to the Special Issue Cybersecurity in the Age of AI, IoT, and Edge Computing)

30 pages, 10600 KB  
Article
Edge-to-Cloud Continuum Orchestrator Based on Heterogeneous Nodes for Urban Traffic Monitoring
by Pietro Ruiu, Andrea Lagorio, Claudio Rubattu, Matteo Anedda, Michele Sanna and Mauro Fadda
Future Internet 2025, 17(12), 574; https://doi.org/10.3390/fi17120574 - 13 Dec 2025
Viewed by 349
Abstract
This paper presents an edge-to-cloud orchestrator capable of supporting services running at the edge on heterogeneous nodes based on general-purpose processing units and a Field Programmable Gate Array (FPGA) platform (i.e., the AMD Kria K26 SoM) in an urban environment, integrated with a series of cloud-based services and capable of minimizing energy consumption. A use case of vehicle traffic monitoring is considered in a mobility scenario involving computing nodes equipped with video acquisition systems to evaluate the feasibility of the system. Since the use case concerns the monitoring of vehicular traffic by AI-based image and video processing, specific support for application orchestration in the form of containers was required. The development concerned the feasibility of managing containers with hardware acceleration derived from the Vitis AI design flow, leveraged to accelerate AI inference on the AMD Kria K26 SoM. A Kubernetes-based controller node was designed to facilitate the tracking and monitoring of specific vehicles. These vehicles may either be flagged by law enforcement authorities due to legal concerns or identified by the system itself through detection mechanisms deployed in computing nodes. Strategically distributed across the city, these nodes continuously analyze traffic, identifying vehicles that match the search criteria. Using containerized microservices and Kubernetes orchestration, the infrastructure ensures that tracking operations remain uninterrupted even in high-traffic scenarios. Full article
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
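A small sketch of how a Kubernetes controller could discover FPGA-equipped edge nodes before placing the hardware-accelerated containers; the node label key is hypothetical, and the paper's cluster may use a different convention.

```python
# Sketch: list cluster nodes and pick out those labeled as Kria FPGA edge nodes.
from kubernetes import client, config

config.load_kube_config()                   # or load_incluster_config() in-cluster
v1 = client.CoreV1Api()

FPGA_LABEL = "accelerator/kria-k26"         # assumed node label for Kria SoMs

for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    alloc = node.status.allocatable or {}
    if labels.get(FPGA_LABEL) == "true":
        print(f"{node.metadata.name}: FPGA edge node, "
              f"cpu={alloc.get('cpu')}, memory={alloc.get('memory')}")
# Pods for the accelerated detector would then carry a matching nodeSelector
# ({FPGA_LABEL: "true"}) in their Deployment spec.
```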

14 pages, 508 KB  
Article
Cross-Gen: An Efficient Generator Network for Adversarial Attacks on Cross-Modal Hashing Retrieval
by Chao Hu, Li Chen, Sisheng Li, Yin Yi, Yu Zhan, Chengguang Liu, Jianling Liu and Ronghua Shi
Future Internet 2025, 17(12), 573; https://doi.org/10.3390/fi17120573 - 13 Dec 2025
Viewed by 200
Abstract
Research on deep neural network (DNN)-based multi-dimensional data visualization has thoroughly explored cross-modal hash retrieval (CMHR) systems, yet their vulnerability to malicious adversarial examples remains evident. Recent work improves the robustness of CMHR networks by augmenting training datasets with adversarial examples. Prior approaches typically formulate the generation of cross-modal adversarial examples as an optimization problem solved through iterative methods. Although effective, such techniques often suffer from slow generation speed, limiting research efficiency. To address this, we propose a generative-based method that enables rapid synthesis of adversarial examples via a carefully designed adversarial generator network. Specifically, we introduce Cross-Gen, a parallel cross-modal framework that constructs semantic triplet data by interacting with the target model through query-based feedback. The generator is optimized using a tailored objective comprising adversarial loss, reconstruction loss, and quantization loss. The experimental results show that Cross-Gen generates adversarial examples significantly faster than iterative methods while achieving competitive attack performance. Full article
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security)
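A generic sketch of a generator objective combining the three loss terms named in the abstract (adversarial, reconstruction, quantization); the concrete forms and weights below are assumptions, not Cross-Gen's definitions.

```python
# Sketch of a combined generator loss for a hashing-oriented adversarial generator.
import torch
import torch.nn.functional as F

def generator_loss(x, x_adv, hash_adv, hash_target, lambdas=(1.0, 10.0, 0.1)):
    l_adv, l_rec, l_qnt = lambdas
    # push the adversarial example's hash away from the benign target code
    adversarial = -F.mse_loss(torch.tanh(hash_adv), hash_target)
    reconstruction = F.l1_loss(x_adv, x)                 # stay close to the input
    quantization = ((torch.tanh(hash_adv).abs() - 1.0) ** 2).mean()  # near-binary codes
    return l_adv * adversarial + l_rec * reconstruction + l_qnt * quantization

x = torch.rand(4, 3, 224, 224)
x_adv = x + 0.01 * torch.randn_like(x)
hash_adv = torch.randn(4, 64, requires_grad=True)
hash_target = torch.sign(torch.randn(4, 64))
print(generator_loss(x, x_adv, hash_adv, hash_target).item())
```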

20 pages, 3900 KB  
Article
A Conceptual Model of a Digital Twin Driven Co-Pilot for Speed Coordination in Congested Urban Traffic
by Adrian Vasile Olteanu, Maximilian Nicolae, Bianca Alexe and Stefan Mocanu
Future Internet 2025, 17(12), 572; https://doi.org/10.3390/fi17120572 - 13 Dec 2025
Viewed by 272
Abstract
Digital Twins (DTs) are increasingly used to support real-time decision making in connected mobility systems, where network latency and uncertainty limit the effectiveness of conventional control strategies. This paper proposes a conceptual model for a DT-driven Co-Pilot designed to provide adaptive speed recommendations in congested urban traffic. The system combines live data from a mobile client with a prediction engine that executes multiple short-horizon SUMO simulations in parallel, enabling the DT to anticipate local traffic evolution faster than real time. A lightweight clock-alignment mechanism and latency evaluation over LAN, Cloudflare-tunneled connections, and 4G/5G networks demonstrate that the Co-Pilot can operate reliably using existing communication infrastructures. Experimental results show that moderate speeds (35–50 km/h) yield throughput and delay performance comparable to higher speeds, while improving flow stability—an important property for safe platooning and collaborative driving. The parallel execution of ten SUMO instances completes within 2–3 s for a 600 s simulation horizon, confirming the feasibility of embedding domain-specific ITS logic into a predictive DT architecture. The findings demonstrate that Digital Twin–based anticipatory simulation can compensate for communication latency and support real-time speed coordination, providing a practical pathway toward scalable, deployable DT-enabled traffic assistance systems. Full article
(This article belongs to the Section Internet of Things)
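The faster-than-real-time prediction engine can be approximated by launching several short SUMO runs in parallel, as sketched below; the per-candidate configuration files and the exact command-line flags are assumptions.

```python
# Sketch: evaluate several candidate speed advisories by running short SUMO
# simulations in parallel, one pre-generated config per candidate.
import subprocess
from concurrent.futures import ProcessPoolExecutor

CANDIDATE_SPEEDS = [35, 40, 45, 50, 55]              # km/h advisory candidates

def run_scenario(speed_kmh: int) -> int:
    cfg = f"scenario_{speed_kmh}kmh.sumocfg"         # hypothetical config file
    return subprocess.run(
        ["sumo", "-c", cfg, "--end", "600", "--no-step-log", "true"],
        capture_output=True, text=True).returncode

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=len(CANDIDATE_SPEEDS)) as pool:
        codes = list(pool.map(run_scenario, CANDIDATE_SPEEDS))
    print(dict(zip(CANDIDATE_SPEEDS, codes)))        # 0 means the run completed
```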

31 pages, 2824 KB  
Article
A Digital Health Platform for Remote and Multimodal Monitoring in Neurodegenerative Diseases
by Adrian-Victor Vevera, Marilena Ianculescu and Adriana Alexandru
Future Internet 2025, 17(12), 571; https://doi.org/10.3390/fi17120571 - 13 Dec 2025
Viewed by 449
Abstract
Continuous and personalized monitoring are beneficial for patients suffering from neurodegenerative diseases such as Alzheimer’s disease, Parkinson’s disease and multiple sclerosis. However, such levels of monitoring are seldom ensured by traditional models of care. This paper presents NeuroPredict, a secure edge–cloud Internet of Medical Things (IoMT) platform that addresses this problem by integrating commercial wearables and in-house sensors with cognitive and behavioral evaluations. The NeuroPredict platform links high-frequency physiological signals with periodic cognitive tests through the use of a modular architecture with lightweight device connectivity, a semantic integration layer for timestamp alignment and feature harmonization across heterogeneous streams, and multi-timescale data fusion. Its use of encrypted transport and storage, role-based access control, token-based authentication, identifier separation, and GDPR-aligned governance addresses security and privacy concerns. Moreover, the platform’s user interface was built by considering human-centered design principles and includes role-specific dashboards, alerts, and patient-facing summaries that are meant to encourage engagement and decision-making for patients and healthcare providers. Experimental evaluation demonstrated the NeuroPredict platform’s data acquisition reliability, coherence in multimodal synchronization, and correctness in role-based personalization and reporting. The NeuroPredict platform provides a smart system infrastructure for eHealth and remote monitoring in neurodegenerative care, aligned with priorities on wearables/IoMT integration, data security and privacy, interoperability, and human-centered design. Full article
(This article belongs to the Special Issue eHealth and mHealth—2nd Edition)
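A minimal sketch of the timestamp-alignment idea behind the semantic integration layer, using a nearest-preceding-sample join of a high-frequency wearable stream onto sparser cognitive-test records; column names and tolerances are assumptions.

```python
# Sketch: align cognitive-test events with the most recent heart-rate sample
# within a tolerance window, using pandas' as-of merge.
import pandas as pd

heart_rate = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-10 09:00:00", "2025-01-10 09:00:30",
                          "2025-01-10 09:01:00", "2025-01-10 09:05:00"]),
    "hr_bpm": [72, 75, 74, 81],
})
cognitive = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-10 09:01:05", "2025-01-10 09:05:10"]),
    "test_score": [27, 25],
})

aligned = pd.merge_asof(cognitive.sort_values("ts"), heart_rate.sort_values("ts"),
                        on="ts", direction="backward",
                        tolerance=pd.Timedelta("2min"))
print(aligned)
```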

18 pages, 3718 KB  
Article
Population Estimation and Scanning System Using LEO Satellites Based on Wireless LAN Signals for Post-Disaster Areas
by Futo Noda and Gia Khanh Tran
Future Internet 2025, 17(12), 570; https://doi.org/10.3390/fi17120570 - 12 Dec 2025
Viewed by 241
Abstract
Many countries around the world repeatedly suffer from natural disasters such as earthquakes, tsunamis, floods, and hurricanes due to geographical factors, including plate boundaries, tropical cyclone zones, and coastal regions. Representative examples include Hurricane Katrina, which struck the United States in 2005, and the Great East Japan Earthquake in 2011. Both were large-scale disasters that occurred in developed countries and caused enormous human and economic losses regardless of disaster type or location. As the occurrence of such catastrophic events remains inevitable, establishing effective preparedness and rapid response systems for large-scale disasters has become an urgent global challenge. One of the critical issues in disaster response is the rapid estimation of the number of affected individuals required for effective rescue operations. During large-scale disasters, terrestrial communication infrastructure is often rendered unusable, which severely hampers the collection of situational information. If the population within a disaster-affected area can be estimated without relying on ground-based communication networks, rescue resources can be more appropriately allocated based on the estimated number of people in need, thereby accelerating rescue operations and potentially reducing casualties. In this study, we propose a population-estimation system that remotely senses radio signals emitted from smartphones in disaster areas using Low Earth Orbit (LEO) satellites. Through numerical analysis conducted in MATLAB R2023b, the feasibility of the proposed system is examined. The numerical results demonstrate that, under ideal conditions, the proposed system can estimate the number of smartphones within the observation area with an average error of 2.254 devices. Furthermore, an additional evaluation incorporating a 3D urban model demonstrates that the proposed system can estimate the number of smartphones with an average error of 19.03 devices. To the best of our knowledge, this is the first attempt to estimate post-disaster population using wireless LAN signals sensed by LEO satellites, offering a novel remote-sensing-based approach for rapid disaster response. Full article

24 pages, 29424 KB  
Article
High-Degree Connectivity Sensor Networks: Applications in Pastured Cow Herd Monitoring
by Geunho Lee, Teruyuki Yamane, Kota Okabe, Fumiaki Sugino and Yeunwoong Kyung
Future Internet 2025, 17(12), 569; https://doi.org/10.3390/fi17120569 - 12 Dec 2025
Viewed by 431
Abstract
This paper explores the application of mobile sensor networks in cow herds, focusing on the challenge of achieving local communication under minimal computational constraints such as restricted locality, limited memory, and implicit coordination. To address this, we propose a high connectivity based sensor network scheme that enables individual sensors to self-organize and dynamically adapt to topological variations caused by cow movements. In this scheme, each sensor acquires local distribution data from neighboring sensors, identifies those with high connectivity, and forms a local network with a star topology. The overlap of these local networks results in a globally interconnected mesh topology. Furthermore, information exchanged through broadcasting and overhearing allows each sensor to incrementally update and adapt to dynamic changes in its local network. To validate the proposed scheme, a custom wireless sensor tag was developed and mounted on the necks of individual cows for experimental testing. Furthermore, large-scale simulations were performed to evaluate performance in herd environments. Both experimental and simulation results confirmed that the scheme effectively maintains network coverage and connectivity under dynamic herd conditions. Full article
(This article belongs to the Special Issue Intelligent Telecommunications Mobile Networks)
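The local star-formation rule can be sketched in a few lines with networkx; a random geometric graph stands in for the herd's proximity topology, and the rule shown (attach to the highest-degree neighbor) is a simplification of the scheme.

```python
# Sketch: each sensor attaches to its highest-connectivity neighbor, so the
# overlapping star-shaped local networks merge into a global mesh.
import networkx as nx

herd = nx.random_geometric_graph(40, radius=0.25, seed=3)   # nodes = collar tags

hub_links = set()
for v in herd.nodes:
    neighbors = list(herd.neighbors(v))
    if not neighbors:
        continue                                            # isolated animal
    hub = max(neighbors, key=lambda u: herd.degree(u))      # highest-degree neighbor
    hub_links.add(tuple(sorted((v, hub))))

overlay = nx.Graph(list(hub_links))
print("hub links:", overlay.number_of_edges(),
      "connected components:", nx.number_connected_components(overlay))
```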

22 pages, 13391 KB  
Article
LSCNet: A Lightweight Shallow Feature Cascade Network for Small Object Detection in UAV Imagery
by Zening Wang and Amiya Nayak
Future Internet 2025, 17(12), 568; https://doi.org/10.3390/fi17120568 - 11 Dec 2025
Viewed by 340
Abstract
Unmanned Aerial Vehicles have become essential mobile sensing nodes in Internet of Things ecosystems, with applications ranging from disaster monitoring to traffic surveillance. However, wireless bandwidth is severely strained when sending enormous amounts of high-resolution aerial video to ground stations. To address these communication limitations, the current research paradigm is shifting toward UAV-assisted edge computing, where visual data is processed locally to extract semantic information for transmitting results to the ground or making autonomous decisions. Although deep detection networks are the dominant trend in general object detection, their heavy computational burden makes it difficult to meet the stringent efficiency requirements of airborne edge platforms. Consequently, although recently proposed single-stage models like YOLOv10 can quickly detect objects in natural images, their over-dependence on deep features for computation results in wasted computational resources, as shallow information is crucial for small object detection in aerial scenes. In this paper, we propose LSCNet (Lightweight Shallow Feature Cascade Network), a novel lightweight architecture designed for UAV edge computing to handle aerial object detection tasks. Our lightweight Cascade Network focuses on feature extraction and shallow feature enhancement. LSCNet achieves 44.6% mAP50 on VisDrone2019 and 36.1% mAP50 on UAVDT, while decreasing parameters by 33% to 1.48 M. These results not only show how effective LSCNet is for real-time object detection but also provide a foundation for future developments in semantic communication within aerial networks. Full article

29 pages, 3021 KB  
Article
Fog-Aware Hierarchical Autoencoder with Density-Based Clustering for AI-Driven Threat Detection in Smart Farming IoT Systems
by Manikandan Thirumalaisamy, Sumendra Yogarayan, Md Shohel Sayeed, Siti Fatimah Abdul Razak and Ramesh Shunmugam
Future Internet 2025, 17(12), 567; https://doi.org/10.3390/fi17120567 - 10 Dec 2025
Viewed by 282
Abstract
Smart farming relies heavily on IoT automation and data-driven decision making, but this growing connectivity also increases exposure to cyberattacks. Flow-based unsupervised intrusion detection is a privacy-preserving alternative to signature and payload inspection, yet it still faces three challenges: loss of subtle anomaly cues during Autoencoder (AE) compression, instability of fixed reconstruction-error thresholds, and performance degradation of clustering in noisy high-dimensional spaces. To address these issues, we propose a fog-aware two-stage hierarchical AE with latent-space gating, followed by Density-Based Spatial Clustering of Applications with Noise (DBSCAN) for attack categorization. A shallow AE compresses the input into a compact 21-dimensional latent space, reducing computational demand for fog-node deployment. A deep AE then computes reconstruction-error scores to isolate malicious behavior while denoising latent features. Only high-error latent vectors are forwarded to DBSCAN, which improves cluster separability, reduces noise sensitivity, and avoids predefined cluster counts or labels. The framework is evaluated on two benchmark datasets. On CIC IoT-DIAD 2024, it achieves 98.99% accuracy, 0.9897 F1-score, 0.895 Adjusted Rand Index (ARI), and 0.019 Davies–Bouldin Index (DBI). To examine generalizability beyond smart farming traffic, we also evaluate the framework on the CSE-CIC-IDS2018 benchmark, where it achieves 99.33% accuracy, 0.9928 F1-score, 0.9013 ARI, and 0.0174 DBI. These results confirm that the proposed model can reliably detect and categorize major cyberattack families across distinct IoT threat landscapes while remaining compatible with resource-constrained fog computing environments. Full article
(This article belongs to the Special Issue Clustered Federated Learning for Networks)
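A condensed sketch of the reconstruction-error gating plus DBSCAN idea follows; the paper's two-stage shallow/deep design is collapsed into one small autoencoder here, and the layer sizes, error threshold, and synthetic flows are all assumptions.

```python
# Sketch: an autoencoder's reconstruction error flags anomalous flows, and
# DBSCAN groups only the high-error latent vectors.
import numpy as np
import tensorflow as tf
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
benign = rng.normal(0, 1, (5000, 40)).astype("float32")
attack = rng.normal(4, 1, (300, 40)).astype("float32")   # shifted "malicious" flows
X = np.vstack([benign, attack])

encoder = tf.keras.Sequential([tf.keras.Input(shape=(40,)),
                               tf.keras.layers.Dense(21, activation="relu")])  # compact latent
decoder = tf.keras.Sequential([tf.keras.Input(shape=(21,)),
                               tf.keras.layers.Dense(40)])
ae = tf.keras.Sequential([encoder, decoder])
ae.compile(optimizer="adam", loss="mse")
ae.fit(benign, benign, epochs=10, batch_size=128, verbose=0)  # train on benign only

recon_err = np.mean((ae.predict(X, verbose=0) - X) ** 2, axis=1)
threshold = np.percentile(recon_err[: len(benign)], 99)       # benign-derived cutoff
suspect = X[recon_err > threshold]

labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(encoder.predict(suspect, verbose=0))
print("flagged flows:", len(suspect), "clusters:", len(set(labels) - {-1}))
```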

32 pages, 544 KB  
Article
Explainability, Safety Cues, and Trust in GenAI Advisors: A SEM–ANN Hybrid Study
by Stefanos Balaskas, Ioannis Stamatiou and George Androulakis
Future Internet 2025, 17(12), 566; https://doi.org/10.3390/fi17120566 - 9 Dec 2025
Viewed by 590
Abstract
“GenAI” assistants are gradually being integrated into daily tasks and learning, but their uptake is no less contingent on perceptions of credibility or safety than on their capabilities per se. The current study proposes and tests a dual-path model consisting of two interface-level constructs, namely perceived transparency (PT) and perceived safety/guardrails (PSG), which influence behavioral intention (BI) both directly and indirectly via two socio-cognitive mediators: trust in automation (TR) and psychological reactance (RE). Furthermore, we also provide formulations for the evaluative lenses of perceived usefulness (PU) and perceived risk (PR). Employing survey data with a sample of 365 responses and partial least squares structural equation modeling (PLS-SEM) with bootstrap techniques in SMART-PLS 4, we discovered that PT is the most influential factor in BI, supported by TR, with some contributions from PSG/PU, but none from PR/RE. Mediation testing revealed significant partial mediations, with PT exhibiting indirect-only mediated relationships via TR, while the other variables were nonsignificant via reactance-driven paths. To uncover non-linearity and non-compensation, a Stage 2 multilayer perceptron was implemented, confirming the SEM ranking, complemented by a variable-importance and sensitivity analysis. In practical terms, the study’s findings support the primacy of explanatory clarity and the importance of clear, rigorously enforced rules, with usefulness subordinated to credibility once the latter is achieved. The integration of SEM and ANN improves explanation and prediction, providing valuable insights for policy, managerial, or educational decision-makers about the implementation of GenAI. Full article

41 pages, 3181 KB  
Article
Transmission-Path Selection with Joint Computation and Communication Resource Allocation in 6G MEC Networks with RIS and D2D Support
by Yao-Liang Chung
Future Internet 2025, 17(12), 565; https://doi.org/10.3390/fi17120565 - 6 Dec 2025
Viewed by 399
Abstract
This paper proposes a transmission-path selection algorithm with joint computation and communication resource allocation for sixth-generation (6G) mobile edge computing (MEC) networks enhanced by helper-assisted device-to-device (D2D) communication and reconfigurable intelligent surfaces (RIS). The novelties of this work lie in the joint design of three key components: a helper-assisted D2D uplink scheme, a packet-partitioning cooperative MEC offloading mechanism, and RIS-assisted downlink transmission and deployment design. These components collectively enable diverse transmission paths under strict latency constraints, helping mitigate overload and reduce delay. To demonstrate its performance advantages, the proposed algorithm is compared with a baseline algorithm without helper-assisted D2D or RIS support, under two representative scheduling policies—modified maximum rate and modified proportional fair. Simulation results in single-base station (BS) and dual-BS environments show that the proposed algorithm consistently achieves a higher effective packet-delivery success percentage, defined as the fraction of packets whose total delay (uplink, MEC computation, and downlink) satisfies service-specific latency thresholds, and a lower average total delay, defined as the mean total delay of all successfully delivered packets, regardless of whether individual delays exceed their thresholds. Both metrics are evaluated separately for ultra-reliable low-latency communications, enhanced mobile broadband, and massive machine-type communications services. These results indicate that the proposed algorithm provides solid performance and robustness in supporting diverse 6G services under stringent latency requirements across different scheduling policies and deployment scenarios. Full article
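The two evaluation metrics defined above can be sketched as follows; the delay distributions and latency thresholds are made-up numbers chosen only to show how the success percentage and average total delay are computed per service class.

```python
# Sketch: effective packet-delivery success percentage (total delay within a
# service-specific threshold) and average total delay, per service class.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
uplink = rng.exponential(2.0, n)     # ms, uplink (direct, D2D, or RIS-assisted path)
compute = rng.exponential(3.0, n)    # ms, MEC processing
downlink = rng.exponential(1.5, n)   # ms, downlink
total = uplink + compute + downlink

thresholds = {"URLLC": 10.0, "eMBB": 50.0, "mMTC": 100.0}   # ms, assumed budgets
service = rng.choice(list(thresholds), n)

for svc, th in thresholds.items():
    d = total[service == svc]
    print(f"{svc}: success {100 * np.mean(d <= th):.1f}%  "
          f"avg total delay {d.mean():.2f} ms")
```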

41 pages, 2890 KB  
Article
STREAM: A Semantic Transformation and Real-Time Educational Adaptation Multimodal Framework in Personalized Virtual Classrooms
by Leyli Nouraei Yeganeh, Yu Chen, Nicole Scarlett Fenty, Amber Simpson and Mohsen Hatami
Future Internet 2025, 17(12), 564; https://doi.org/10.3390/fi17120564 - 5 Dec 2025
Viewed by 688
Abstract
Most adaptive learning systems personalize around content sequencing and difficulty adjustment rather than transforming instructional material within the lesson itself. This paper presents the STREAM (Semantic Transformation and Real-Time Educational Adaptation Multimodal) framework. This modular pipeline decomposes multimodal educational content into semantically tagged, pedagogically annotated units for regeneration into alternative formats while preserving source traceability. STREAM is designed to integrate automatic speech recognition, transformer-based natural language processing, and planned computer vision components to extract instructional elements from teacher explanations, slides, and embedded media. Each unit receives metadata, including time codes, instructional type, cognitive demand, and prerequisite concepts, designed to enable format-specific regeneration with explicit provenance links. For a predefined visual-learner profile, the system generates annotated path diagrams, two-panel instructional guides, and entity pictograms with complete back-link coverage. Ablation studies confirm that individual components contribute measurably to output completeness without compromising traceability. This paper reports results from a tightly scoped feasibility pilot that processes a single five-minute elementary STEM video offline under clean audio–visual conditions. We position the pilot’s limitations as testable hypotheses that require validation across diverse content domains, authentic deployments with ambient noise and bandwidth constraints, multiple learner profiles, including multilingual students and learners with disabilities, and controlled comprehension studies. The contribution is a transparent technical demonstration of feasibility and a methodological scaffold for investigating whether within-lesson content transformation can support personalized learning at scale. Full article
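A sketch of what a semantically tagged, provenance-linked content unit might look like; the field names are illustrative, not the paper's actual schema.

```python
# Sketch of a STREAM-style annotated unit with back-links to the source video.
from dataclasses import dataclass, field

@dataclass
class ContentUnit:
    unit_id: str
    source_video: str
    start_s: float                     # time code of the originating segment
    end_s: float
    modality: str                      # "speech", "slide", "embedded_media"
    instructional_type: str            # e.g., "definition", "worked_example"
    cognitive_demand: str              # e.g., "remember", "apply", "analyze"
    prerequisites: list[str] = field(default_factory=list)
    text: str = ""

unit = ContentUnit(
    unit_id="u-014", source_video="fractions_lesson.mp4",
    start_s=83.0, end_s=97.5, modality="speech",
    instructional_type="worked_example", cognitive_demand="apply",
    prerequisites=["u-009"], text="To add 1/4 and 2/4, keep the denominator...")
print(unit.unit_id, f"{unit.start_s}-{unit.end_s}s", unit.instructional_type)
```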

18 pages, 3274 KB  
Article
MEC-Chain: Towards a New Framework for a MEC-Enabled Mobile Blockchain Network Under the PoS Consensus
by Rima Grati, Khouloud Boukadi and Safa Elleuch
Future Internet 2025, 17(12), 563; https://doi.org/10.3390/fi17120563 - 5 Dec 2025
Viewed by 273
Abstract
The Proof of Stake (PoS) consensus mechanism is increasingly used in blockchain systems; however, resource allocation for PoS-based mobile blockchain networks remains underexplored, particularly given the constraints of mobile devices. This work introduces MEC-Chain, a new framework that integrates Mobile Edge Computing (MEC) with mobile blockchain to support efficient validator-node execution under PoS. MEC-Chain formalizes a multi-objective resource-allocation problem that jointly considers latency, reliability, and cost from both the validator and MEC-provider perspectives. To address this challenge, we develop a deep reinforcement learning-based allocation agent using the Proximal Policy Optimization (PPO) algorithm. Experimental results show that PPO achieves a 30–40% reduction in total execution time, 25–35% lower transmission latency, and 10–15% higher reliability compared to A2C (Advantage Actor–Critic) and DQN (Deep Q-Network), while offering comparable cost savings across all methods. These results demonstrate the effectiveness of MEC-Chain in enabling low-latency, reliable, and resource-efficient PoS validation within mobile blockchain environments. Full article
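A minimal sketch of a reward that trades off the three criteria the PPO agent optimizes; the weights, normalization, and budgets are assumptions, not MEC-Chain's definitions.

```python
# Sketch of a multi-objective allocation reward combining latency, reliability,
# and cost. All weights and budgets are illustrative.
def allocation_reward(latency_ms, reliability, cost_usd,
                      w_latency=0.5, w_rel=0.3, w_cost=0.2,
                      latency_budget_ms=200.0, cost_budget_usd=1.0):
    # normalize each term to roughly [0, 1] before weighting
    latency_term = max(0.0, 1.0 - latency_ms / latency_budget_ms)
    cost_term = max(0.0, 1.0 - cost_usd / cost_budget_usd)
    return w_latency * latency_term + w_rel * reliability + w_cost * cost_term

# e.g., a validator task offloaded to a nearby MEC host
print(allocation_reward(latency_ms=80, reliability=0.98, cost_usd=0.35))
```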

33 pages, 2277 KB  
Article
Artificial Intelligence for Pneumonia Detection: A Federated Deep Learning Approach in Smart Healthcare
by Ana-Mihaela Vasilevschi, Călin-Alexandru Coman, Marilena Ianculescu and Oana Andreia Coman
Future Internet 2025, 17(12), 562; https://doi.org/10.3390/fi17120562 - 4 Dec 2025
Viewed by 454
Abstract
Artificial Intelligence (AI) plays an important role in driving innovation in smart healthcare by providing accurate, scalable, and privacy-preserving diagnostic options. Pneumonia is still a major global health issue, and early detection is key to improving patient outcomes. This study proposes a federated deep learning (FL) approach for automatic pneumonia detection using chest X-ray images, considering both diagnostic efficacy and data privacy. Two models were developed and tested: a custom-developed convolutional neural network and a VGG16 transfer learning architecture. The framework evaluates diagnostic efficacy in both centralized and federated scenarios, taking into account heterogeneous client distributions and class imbalance. F1-score and accuracy values for the federated models indicate competitive performance, with F1-scores greater than 0.90 for pneumonia, and remain robust even when the data are not independent and identically distributed. Results confirm that FL could be tested as a privacy-preserving way to manage medical imaging and intelligence across distributed healthcare. This study provides a potential proof of concept of how to incorporate federated AI into smart healthcare and gives direction toward clinically tested and real-world applications. Full article
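The federated aggregation step implied by this setup can be sketched as sample-weighted averaging of client model weights (FedAvg-style); the toy arrays below stand in for the clients' CNN/VGG16 parameters (e.g., Keras model.get_weights() lists).

```python
# Sketch: combine client model weights in proportion to local sample counts.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: list of per-client lists of numpy arrays (same shapes)."""
    total = float(sum(client_sizes))
    avg = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(w[layer_idx] * (n / total)
                    for w, n in zip(client_weights, client_sizes))
        avg.append(layer)
    return avg

# toy check with two "clients" holding 300 and 700 X-rays
a = [np.ones((3, 3)), np.zeros(3)]
b = [np.full((3, 3), 3.0), np.ones(3)]
for layer in fed_avg([a, b], [300, 700]):
    print(layer)
```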

48 pages, 10659 KB  
Article
Evaluating Synthetic Malicious Network Traffic Generated by GAN and VAE Models: A Data Quality Perspective
by Nikolaos Peppes, Theodoros Alexakis, Emmanouil Daskalakis and Evgenia Adamopoulou
Future Internet 2025, 17(12), 561; https://doi.org/10.3390/fi17120561 - 4 Dec 2025
Viewed by 546
Abstract
The limited availability and imbalance of labeled malicious network traffic data remain major obstacles in developing effective AI-driven cybersecurity solutions. To mitigate these challenges, this study investigates the use of deep generative models, specifically Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), for producing realistic synthetic attack data. A comprehensive data quality assessment (DQA) framework is proposed to thoroughly evaluate the fidelity, diversity, and practical utility of the generated data samples. The findings support the adoption of data synthesis as a viable strategy to address data scarcity, improving robustness and reliability in modern cybersecurity applications and sectors. Full article
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security)
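One building block such a DQA framework could include is a per-feature distributional fidelity check between real and synthetic traffic. The sketch below runs a two-sample Kolmogorov–Smirnov test on random placeholder data; it is only an illustration of this kind of check, not the framework proposed in the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

# Minimal sketch of a per-feature fidelity check: a two-sample
# Kolmogorov-Smirnov test comparing real and synthetic feature
# distributions. The data below is random and purely illustrative.

rng = np.random.default_rng(0)
real = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))        # "real" features
synthetic = rng.normal(loc=0.1, scale=1.1, size=(1000, 3))   # generator output

for j in range(real.shape[1]):
    stat, p = ks_2samp(real[:, j], synthetic[:, j])
    print(f"feature {j}: KS statistic = {stat:.3f}, p-value = {p:.3f}")
```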
20 pages, 1272 KB  
Article
Hybrid PON–RoF LTE Video Transmission with Experimental BLER Analysis and Amplifier Trade-Off
by Berenice Arguero, Mateo Leiva, Kevin Christopher Pozo Guerrero, Germán V. Arévalo, Miltón N. Tipán, Christian Tipantuña and Michela Meo
Future Internet 2025, 17(12), 560; https://doi.org/10.3390/fi17120560 - 4 Dec 2025
Viewed by 389
Abstract
This study evaluates the performance of a hybrid passive optical network–radio over fiber (PON–RoF) architecture for long-term evolution (LTE)-based video transmission, focusing on the analysis of the block error rate (BLER) with and without an external RF amplifier. The results show that removing [...] Read more.
This study evaluates the performance of a hybrid passive optical network–radio over fiber (PON–RoF) architecture for long-term evolution (LTE)-based video transmission, focusing on the analysis of the block error rate (BLER) with and without an external RF amplifier. The results show that removing the external amplifier improves receiver sensitivity by 4.04 dB in the optical link and 16 dB in the hybrid RoF link. The internal gain control of the USRP-2944R (Universal Software Radio Peripheral) is sufficient for signal processing without saturating the receiver. Furthermore, the received power levels are consistent with typical GPON sensitivity and overload ranges reported in standards, although the experimental setup corresponds to a continuous point-to-point laboratory link rather than a full GPON burst-mode configuration. Full article
(This article belongs to the Special Issue Cyber-Physical Systems in Industrial Communication Systems)
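For readers unfamiliar with the metric, the sketch below shows how a BLER figure is typically obtained from received transport blocks, i.e., the fraction of blocks whose CRC check fails. The block counts are synthetic and do not correspond to the experimental measurements in the paper.

```python
# Minimal sketch of a block error rate (BLER) calculation: the fraction of
# received transport blocks that failed their CRC check. The block list
# below is synthetic and purely illustrative.

def bler(crc_ok_flags):
    """BLER = erroneous blocks / total blocks received."""
    errors = sum(1 for ok in crc_ok_flags if not ok)
    return errors / len(crc_ok_flags)

received = [True] * 960 + [False] * 40    # 1000 blocks, 40 failed CRC
print(f"BLER = {bler(received):.3f}")     # 0.040
```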
22 pages, 529 KB  
Article
Understanding Generation Z’s Tourism Purchasing Decisions Through Internet Technologies: The Case of Influencer Marketing
by Petra Vašaničová, Zuzana Kosťová, Ivana Hodorová, Natália Kolková, Viliam Obšut and Michal Češkovič
Future Internet 2025, 17(12), 559; https://doi.org/10.3390/fi17120559 - 3 Dec 2025
Viewed by 1264
Abstract
In the era of rapidly evolving Internet technologies, influencer marketing has emerged as a transformative force in digital tourism, reshaping how travelers discover, evaluate, and choose destinations and accommodations. This study investigates the relationships between key dimensions of influencer marketing (credibility, authenticity, content [...] Read more.
In the era of rapidly evolving Internet technologies, influencer marketing has emerged as a transformative force in digital tourism, reshaping how travelers discover, evaluate, and choose destinations and accommodations. This study investigates the relationships between key dimensions of influencer marketing (credibility, authenticity, content format, perceived effectiveness, campaign frequency, and geographic proximity) and consumer behavior in tourism, with particular emphasis on trust formation and decision-making regarding accommodations and destinations among Slovak Generation Z (born 1997–2012) travelers. A structured electronic questionnaire was administered in December 2024 to assess respondents’ perceptions of influencer marketing in the context of travel-related choices. The instrument comprised 12 items measured on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree), capturing various aspects of influencers’ impact on accommodation and destination choices. The final sample included 337 Generation Z participants (65% women and 35% men) aged 18–27 years. Data were analyzed using Spearman’s rank correlation to test six hypotheses concerning the influence of influencer marketing on tourism decision-making. The results supported all six hypotheses, revealing significant positive relationships between the examined dimensions of influencer marketing and consumer behavioral outcomes. These findings emphasize the expanding role of influencer marketing as a central mechanism in digital tourism strategies and highlight its importance in understanding how Internet technologies shape the purchasing behavior of Slovak Generation Z travelers. Full article
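The hypothesis tests rest on Spearman's rank correlation between Likert-scale items. The sketch below reproduces that kind of computation on synthetic responses; the variable names and data are illustrative and are not the survey dataset from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Minimal sketch of Spearman's rank correlation between two 5-point
# Likert items. The responses below are synthetic placeholders.

rng = np.random.default_rng(1)
credibility = rng.integers(1, 6, size=337)                          # item A
trust = np.clip(credibility + rng.integers(-1, 2, size=337), 1, 5)  # item B

rho, p_value = spearmanr(credibility, trust)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
```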
27 pages, 2900 KB  
Article
Graph-SENet: An Unsupervised Learning-Based Graph Neural Network for Skeleton Extraction from Point Cloud
by Jie Li, Wei Guo and Wenli Zhang
Future Internet 2025, 17(12), 558; https://doi.org/10.3390/fi17120558 - 3 Dec 2025
Viewed by 354
Abstract
Extracting 3D skeletons from point clouds is a challenging task in computer vision. Most existing deep learning methods rely heavily on supervised data requiring extensive manual annotation. Consequently, re-labeling is often necessary for cross-category applications, while the process of 3D point cloud annotation [...] Read more.
Extracting 3D skeletons from point clouds is a challenging task in computer vision. Most existing deep learning methods rely heavily on supervised data requiring extensive manual annotation. Consequently, re-labeling is often necessary for cross-category applications, while the process of 3D point cloud annotation is inherently time-consuming and expensive. Simultaneously, existing unsupervised methods often suffer from significant skeleton point deviations due to limited capabilities in modeling local structures. To address these limitations, we propose Graph-SENet, an unsupervised learning-based graph neural network method for skeleton extraction. This method integrates dynamic graph convolution with a multi-level feature fusion mechanism to more comprehensively capture local geometric relationships. Through a multi-dimensional unsupervised feature loss, it learns the structural representation of skeleton points, significantly improving the precision and stability of skeleton point localization under annotation-free conditions. Furthermore, we propose a graph autoencoder structure optimized by cosine similarity to predict topological connections between skeleton points, thereby recovering semantically consistent and structurally complete 3D skeleton representations in an end-to-end manner. Experimental results on multiple datasets, including ShapeNet, ITOP, and Soybean-MVS, demonstrate that Graph-SENet outperforms existing mainstream unsupervised methods in terms of Chamfer Distance and F1-score. It exhibits superior accuracy, robustness, and cross-category generalization capabilities, effectively reducing manual annotation costs while enhancing the completeness and semantic consistency of skeleton recovery. These results validate the application potential and practical value of Graph-SENet in 3D structure understanding and downstream 3D analysis tasks. Full article
(This article belongs to the Special Issue Algorithms and Models for Next-Generation Vision Systems)
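Since the evaluation relies on Chamfer Distance between predicted and reference skeleton points, the sketch below gives a plain NumPy implementation of the symmetric Chamfer Distance on random placeholder point sets. It is a generic metric implementation, not the paper's evaluation code.

```python
import numpy as np

# Minimal sketch of the symmetric Chamfer Distance between two point sets.
# The point clouds below are random placeholders, not the datasets
# evaluated in the paper.

def chamfer_distance(a, b):
    """Mean nearest-neighbour squared distance, averaged in both directions."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # pairwise distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

pred = np.random.rand(128, 3)   # predicted skeleton points
ref = np.random.rand(100, 3)    # reference skeleton points
print(f"Chamfer distance: {chamfer_distance(pred, ref):.4f}")
```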
26 pages, 1005 KB  
Article
A Context-Aware Lightweight Framework for Source Code Vulnerability Detection
by Yousef Sanjalawe, Budoor Allehyani and Salam Al-E’mari
Future Internet 2025, 17(12), 557; https://doi.org/10.3390/fi17120557 - 3 Dec 2025
Viewed by 439
Abstract
As software systems grow increasingly complex and interconnected, detecting vulnerabilities in source code has become a critical and challenging task. Traditional static analysis methods often fall short in capturing deep, context-dependent vulnerabilities and adapting to rapidly evolving threat landscapes. Recent efforts have explored [...] Read more.
As software systems grow increasingly complex and interconnected, detecting vulnerabilities in source code has become a critical and challenging task. Traditional static analysis methods often fall short in capturing deep, context-dependent vulnerabilities and adapting to rapidly evolving threat landscapes. Recent efforts have explored knowledge graphs and transformer-based models to enhance semantic understanding; however, these solutions frequently rely on static knowledge bases, exhibit high computational overhead, and lack adaptability to emerging threats. To address these limitations, we propose DynaKG-NER++, a novel and lightweight framework for context-aware vulnerability detection in source code. Our approach integrates lexical, syntactic, and semantic features using a transformer-based token encoder, dynamic knowledge graph embeddings, and a Graph Attention Network (GAT). We further introduce contrastive learning on vulnerability–patch pairs to improve discriminative capacity and design an attention-based fusion module to combine token and entity representations adaptively. A key innovation of our method is the dynamic construction and continual update of the knowledge graph, allowing the model to incorporate newly published CVEs and evolving relationships without retraining. We evaluate DynaKG-NER++ on five benchmark datasets, demonstrating superior performance across span-level F1 (89.3%), token-level accuracy (93.2%), and AUC-ROC (0.936), while achieving the lowest false positive rate (5.1%) among state-of-the-art baselines. Statistical significance tests confirm that these improvements are robust and meaningful. Overall, DynaKG-NER++ establishes a new standard in vulnerability detection, balancing accuracy, adaptability, and efficiency, making it highly suitable for deployment in real-world static analysis pipelines and resource-constrained environments. Full article
(This article belongs to the Topic Addressing Security Issues Related to Modern Software)
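As a loose illustration of combining token and entity representations adaptively, the sketch below implements a simple sigmoid-gated fusion of two embedding vectors. The gating form, dimensions, and weights are assumptions chosen for illustration; this is not the paper's attention-based fusion module.

```python
import numpy as np

# Minimal sketch of gated fusion of a token embedding with a knowledge-graph
# entity embedding. Dimensions and weights are random placeholders, not the
# learned parameters of DynaKG-NER++.

def fuse(token_vec, entity_vec, w_gate, b_gate):
    """A sigmoid gate decides, per dimension, how much entity context to mix in."""
    pre = w_gate @ np.concatenate([token_vec, entity_vec]) + b_gate
    gate = 1.0 / (1.0 + np.exp(-pre))
    return gate * token_vec + (1.0 - gate) * entity_vec

d = 8
rng = np.random.default_rng(2)
token = rng.normal(size=d)
entity = rng.normal(size=d)
fused = fuse(token, entity, w_gate=rng.normal(size=(d, 2 * d)), b_gate=np.zeros(d))
print(fused.shape)  # (8,)
```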
25 pages, 925 KB  
Article
A Proposal of a Scale to Evaluate Attitudes of People Towards a Social Metaverse
by Stefano Mottura and Marta Mondellini
Future Internet 2025, 17(12), 556; https://doi.org/10.3390/fi17120556 - 3 Dec 2025
Viewed by 293
Abstract
Big players in information and communication technologies are investing in the metaverse for their businesses. Meta, as the main player in social media worldwide, is massively developing its “social” metaverse as a new paradigm by depicting it with nice and endless features and [...] Read more.
Major players in information and communication technologies are investing in the metaverse for their businesses. Meta, the leading social media company worldwide, is heavily developing its “social” metaverse as a new paradigm, presenting it as offering rich, ever-expanding features and expecting current social media to become embedded within it. What is the attitude of users towards this future scenario? Very few studies specifically addressing this question were found. In this work, a scale for assessing people’s attitudes towards the social metaverse was developed. A questionnaire composed of 38 Likert items, inspired by features of the social metaverse, was generated and administered to 184 Italian subjects. The responses were analyzed with exploratory factor analysis, and the final scale comprises 15 items covering four interpreted factors. The factors are consistent with both the authors’ preliminary work and some previous studies, and the results are also discussed in relation to the analysis of Meta’s content. Full article
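For a sense of the analysis pipeline, the sketch below runs a factor-analysis step on a random Likert-response matrix using scikit-learn's FactorAnalysis as a stand-in. The item and respondent counts mirror the study's setup, but the data and the factor-retention and rotation choices are illustrative, not the authors' procedure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Minimal sketch of an exploratory-factor-analysis step on Likert-type
# responses. The response matrix is random; FactorAnalysis here is only a
# stand-in for the EFA procedure actually used in the study.

rng = np.random.default_rng(3)
responses = rng.integers(1, 6, size=(184, 38)).astype(float)  # respondents x items

fa = FactorAnalysis(n_components=4, random_state=0)
fa.fit(responses)
loadings = fa.components_.T   # item-by-factor loadings
print(loadings.shape)         # (38, 4)
```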
15 pages, 1380 KB  
Article
Optimizing LoRaWAN Performance Through Learning Automata-Based Channel Selection
by Luka Aime Atadet, Richard Musabe, Eric Hitimana and Omar Gatera
Future Internet 2025, 17(12), 555; https://doi.org/10.3390/fi17120555 - 2 Dec 2025
Viewed by 291
Abstract
The rising demand for long-range, low-power wireless communication in applications such as monitoring, smart metering, and wide-area sensor networks has emphasized the critical need for efficient spectrum utilization in LoRaWAN (Long Range Wide Area Network). In response to this challenge, this paper proposes [...] Read more.
The rising demand for long-range, low-power wireless communication in applications such as monitoring, smart metering, and wide-area sensor networks has emphasized the critical need for efficient spectrum utilization in LoRaWAN (Long Range Wide Area Network). In response to this challenge, this paper proposes a novel channel selection framework based on Hierarchical Discrete Pursuit Learning Automata (HDPA), aimed at enhancing the adaptability and reliability of LoRaWAN operations in dynamic and interference-prone environments. HDPA leverages a tree-structured reinforcement learning model to monitor and respond to transmission success in real time, dynamically updating channel probabilities based on environmental feedback. Simulations conducted in MATLAB R2023b demonstrate that HDPA significantly outperforms conventional algorithms such as Hierarchical Continuous Pursuit Automata (HCPA) in terms of convergence speed, selection accuracy, and throughput performance. Specifically, HDPA achieved 98.78% accuracy with a mean convergence of 6279 iterations, compared to HCPA’s 93.89% accuracy and 6778 iterations in an eight-channel setup. Unlike the Tug-of-War-based Multi-Armed Bandit strategy, which emphasizes fairness in real-world heterogeneous networks, HDPA offers a computationally lightweight and highly adaptive solution tailored to LoRaWAN’s stochastic channel dynamics. These results position HDPA as a promising framework for improving reliability and spectrum utilization in future IoT deployments. Full article
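To illustrate the pursuit-automaton idea underlying HDPA, the sketch below implements a flat (non-hierarchical) discrete pursuit learning automaton selecting among a handful of channels. The channel success probabilities, resolution step, and iteration budget are assumptions, and the snippet is a simplification for illustration rather than the HDPA algorithm itself.

```python
import numpy as np

# Minimal sketch of a flat discrete pursuit learning automaton for channel
# selection. Success probabilities, step size, and trial counts are
# illustrative assumptions, not parameters from the HDPA paper.

rng = np.random.default_rng(4)
true_success = np.array([0.5, 0.6, 0.9, 0.4])   # unknown per-channel quality
n = len(true_success)
p = np.full(n, 1.0 / n)          # channel-selection probabilities
rewards = np.zeros(n)            # running success totals
counts = np.zeros(n)             # times each channel was tried
delta = 0.01                     # discrete probability step

for ch in range(n):              # seed each estimate with a few trials
    for _ in range(10):
        counts[ch] += 1
        rewards[ch] += rng.random() < true_success[ch]

for _ in range(5000):
    ch = rng.choice(n, p=p)
    counts[ch] += 1
    rewards[ch] += rng.random() < true_success[ch]
    estimates = rewards / np.maximum(counts, 1)
    best = int(np.argmax(estimates))
    # discrete pursuit: shift a fixed amount of probability mass toward
    # the channel with the current best reward estimate
    p = np.maximum(p - delta, 0.0)
    p[best] += 1.0 - p.sum()

print("estimated success rates:", np.round(estimates, 2))
print("preferred channel:", int(np.argmax(p)))  # tends to settle on index 2 (0.9)
```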