Computers, Volume 14, Issue 7 (July 2025) – 54 articles

Cover Story: The 50th anniversary of the integration of the School of Industrial Technical Engineering of Alcoi into the UPV became an immersive experience thanks to virtual reality, the metaverse, and digital twins. The Ferrándiz–Carbonell building was recreated in an interactive virtual environment that was globally accessible, allowing students, faculty, and the public to relive memories and celebrate without physical limits. The digital twin received a score of 88.39/100 on the SUS scale. The article analyzes how these technologies are transforming social interaction, education, and accessibility in future digital ecosystems.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 1359 KiB  
Article
Enhanced Multi-Level Recommender System Using Turnover-Based Weighting for Predicting Regional Preferences
by Venkatesan Thillainayagam, Ramkumar Thirunavukarasu and J. Arun Pandian
Computers 2025, 14(7), 294; https://doi.org/10.3390/computers14070294 - 20 Jul 2025
Viewed by 225
Abstract
In the realm of recommender systems, the prediction of diverse customer preferences has emerged as a compelling research challenge, particularly for multi-state business organizations operating across various geographical regions. Collaborative filtering, a widely utilized recommendation technique, has demonstrated its efficacy in sectors such as e-commerce, tourism, hotel management, and entertainment-based customer services. In the item-based collaborative filtering approach, users’ evaluations of purchased items are considered uniformly, without assigning weight to the participatory data sources and users’ ratings. This approach results in the ‘relevance problem’ when assessing the generated recommendations. In such scenarios, filtering collaborative patterns based on regional and local characteristics, while emphasizing the significance of branches and user ratings, could enhance the accuracy of recommendations. This paper introduces a turnover-based weighting model utilizing a big data processing framework to mine multi-level collaborative filtering patterns. The proposed weighting model assigns weights to participatory data sources based on the turnover cost of the branches, where turnover refers to the revenue generated through total business transactions conducted by the branch. Furthermore, the proposed big data framework eliminates the forced integration of branch data into a centralized repository and avoids the complexities associated with data movement. To validate the proposed work, experimental studies were conducted using a benchmarking dataset, namely the ‘Movie Lens Dataset’. The proposed approach uncovers multi-level collaborative pattern bases, including global, sub-global, and local levels, with improved predicted ratings compared with results generated by traditional recommender systems. The findings of the proposed approach would be highly beneficial to the strategic management of an interstate business organization, enabling them to leverage regional implications from user preferences. Full article
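The turnover-based weighting idea can be illustrated with a minimal item-based collaborative filtering sketch in which each branch's ratings are weighted by its share of total turnover before item similarities are computed. The branch names, turnover figures, and rating matrices below are invented placeholders, not data or code from the paper.

```python
import numpy as np

# Minimal sketch of turnover-weighted item-based collaborative filtering.
# Ratings per branch: rows = users, columns = items (0 = not rated).
branch_ratings = {
    "branch_north": np.array([[5, 3, 0], [4, 0, 2], [0, 4, 5]], dtype=float),
    "branch_south": np.array([[2, 5, 4], [0, 3, 4], [5, 0, 1]], dtype=float),
}
branch_turnover = {"branch_north": 8.0e6, "branch_south": 2.0e6}  # revenue per branch

# Weight each branch (participatory data source) by its share of total turnover.
total = sum(branch_turnover.values())
weights = {b: t / total for b, t in branch_turnover.items()}

# Aggregate ratings across branches, weighting each branch's contribution.
num = sum(weights[b] * r for b, r in branch_ratings.items())
den = sum(weights[b] * (r > 0) for b, r in branch_ratings.items())
global_ratings = np.divide(num, den, out=np.zeros_like(num), where=den > 0)

# Item-item cosine similarity on the aggregated matrix.
norms = np.linalg.norm(global_ratings, axis=0, keepdims=True)
sim = (global_ratings.T @ global_ratings) / np.clip(norms.T @ norms, 1e-9, None)

# Predict user 0's rating for item 2 from the other items they already rated.
user, target = 0, 2
rated = np.nonzero(global_ratings[user])[0]
rated = rated[rated != target]
pred = (sim[target, rated] @ global_ratings[user, rated]) / np.clip(
    np.abs(sim[target, rated]).sum(), 1e-9, None)
print(f"predicted rating for user {user}, item {target}: {pred:.2f}")
```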
37 pages, 2776 KiB  
Article
Design of Identical Strictly and Rearrangeably Nonblocking Folded Clos Networks with Equally Sized Square Crossbars
by Yamin Li
Computers 2025, 14(7), 293; https://doi.org/10.3390/computers14070293 - 20 Jul 2025
Viewed by 194
Abstract
Clos networks and their folded versions, fat trees, are widely adopted in interconnection network designs for data centers and supercomputers. There are two main types of Clos networks: strictly nonblocking Clos networks and rearrangeably nonblocking Clos networks. Strictly nonblocking Clos networks can connect an idle input to an idle output without interfering with existing connections. Rearrangeably nonblocking Clos networks can connect an idle input to an idle output with rearrangements of existing connections. Traditional strictly nonblocking Clos networks have two drawbacks. One drawback is the use of crossbars with different numbers of input and output ports, whereas the currently available switches are square crossbars with the same number of input and output ports. Another drawback is that every connection goes through a fixed number of stages, increasing the length of the communication path. A drawback of traditional fat trees is that the root stage uses differently sized crossbar switches than the other stages. To solve these problems, this paper proposes an Identical Strictly NonBlocking folded Clos (ISNBC) network that uses equally sized square crossbars for all switches. Correspondingly, this paper also proposes an Identical Rearrangeably NonBlocking folded Clos (IRNBC) network. Both ISNBC and IRNBC networks can have any number of stages, can use equally sized square crossbars with no unused switch ports, and can utilize shortcut connections to reduce communication path lengths. Moreover, both ISNBC and IRNBC networks have a lower switch crosspoint cost ratio relative to a single crossbar than their corresponding traditional Clos networks. Specifically, ISNBC networks use 46.43% to 87.71% crosspoints of traditional strictly nonblocking folded Clos networks, and IRNBC networks use 53.85% to 60.00% crosspoints of traditional rearrangeably nonblocking folded Clos networks. Full article
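For context on the crosspoint-ratio comparison, the following sketch computes crosspoint counts for the classical three-stage Clos network C(m, n, r) and compares them with a single N x N crossbar. It reproduces only the textbook construction that the proposed ISNBC/IRNBC designs are measured against, not the designs themselves.

```python
# Crosspoint counts for a classical three-stage Clos network C(m, n, r)
# with N = n * r terminals: r ingress switches of size n x m, m middle
# switches of size r x r, and r egress switches of size m x n.

def clos_crosspoints(n: int, r: int, strictly_nonblocking: bool) -> int:
    # Strictly nonblocking needs m >= 2n - 1; rearrangeable needs m >= n.
    m = 2 * n - 1 if strictly_nonblocking else n
    return 2 * r * n * m + m * r * r

def single_crossbar_crosspoints(n: int, r: int) -> int:
    N = n * r
    return N * N

if __name__ == "__main__":
    for n, r in [(8, 8), (16, 16), (32, 32)]:
        N = n * r
        snb = clos_crosspoints(n, r, strictly_nonblocking=True)
        rnb = clos_crosspoints(n, r, strictly_nonblocking=False)
        xbar = single_crossbar_crosspoints(n, r)
        print(f"N={N:5d}  SNB/crossbar={snb / xbar:.2f}  RNB/crossbar={rnb / xbar:.2f}")
```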
18 pages, 2423 KiB  
Article
A New AI Framework to Support Social-Emotional Skills and Emotion Awareness in Children with Autism Spectrum Disorder
by Andrea La Fauci De Leo, Pooneh Bagheri Zadeh, Kiran Voderhobli and Akbar Sheikh Akbari
Computers 2025, 14(7), 292; https://doi.org/10.3390/computers14070292 - 20 Jul 2025
Viewed by 879
Abstract
This research highlights the importance of Emotion Aware Technologies (EAT) and their implementation in serious games to assist children with Autism Spectrum Disorder (ASD) in developing social-emotional skills. As AI is gaining popularity, such tools can be used in mobile applications as invaluable teaching tools. In this paper, a new AI framework application is discussed that will help children with ASD develop efficient social-emotional skills. It uses the Jetpack Compose framework and Google Cloud Vision API as emotion-aware technology. The framework is developed with two main features designed to help children reflect on their emotions, internalise them, and train them how to express these emotions. Each activity is based on similar features from literature with enhanced functionalities. A diary feature allows children to take pictures of themselves, and the application categorises their facial expressions, saving the picture in the appropriate space. The three-level minigame consists of a series of prompts depicting a specific emotion that children have to match. The results of the framework offer a good starting point for similar applications to be developed further, especially by training custom models to be used with ML Kit. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
25 pages, 2509 KiB  
Article
A Lightweight Intrusion Detection System for IoT and UAV Using Deep Neural Networks with Knowledge Distillation
by Treepop Wisanwanichthan and Mason Thammawichai
Computers 2025, 14(7), 291; https://doi.org/10.3390/computers14070291 - 19 Jul 2025
Viewed by 573
Abstract
Deep neural networks (DNNs) are highly effective for intrusion detection systems (IDS) due to their ability to learn complex patterns and detect potential anomalies within the systems. However, their high memory and computation requirements make them difficult to deploy on low-powered platforms. This study explores the use of knowledge distillation (KD) to reduce power and hardware demands and improve real-time inference speed while maintaining high detection accuracy across all attack types. The technique transfers knowledge from DNN (teacher) models to more lightweight shallow neural network (student) models. KD achieves a significant parameter reduction (92–95%) and faster inference (7–11%) while improving overall detection performance (by up to 6.12%). Experimental results on the NSL-KDD, UNSW-NB15, CIC-IDS2017, IoTID20, and UAV IDS datasets demonstrate the effectiveness of DNNs with KD in achieving high accuracy, precision, F1 score, and area under the curve (AUC). These findings confirm KD's potential as an edge computing strategy for IoT and UAV devices, making it well suited to resource-constrained environments and enabling real-time anomaly detection for next-generation distributed systems. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
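The distillation step described above can be sketched with the standard knowledge distillation objective: a temperature-softened KL term between teacher and student logits plus cross-entropy on the true labels. The layer sizes, temperature, and weighting below are illustrative assumptions, not the paper's architecture or hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard KD loss: KL divergence between temperature-softened
    teacher/student distributions plus cross-entropy on the true labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Illustrative teacher/student sizes for flow-feature inputs; placeholders only.
teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 256),
                        nn.ReLU(), nn.Linear(256, 5))
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 5))

x = torch.randn(128, 64)            # a batch of flow features
y = torch.randint(0, 5, (128,))     # attack-class labels
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")
```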
28 pages, 7608 KiB  
Article
A Forecasting Method for COVID-19 Epidemic Trends Using VMD and TSMixer-BiKSA Network
by Yuhong Li, Guihong Bi, Taonan Tong and Shirui Li
Computers 2025, 14(7), 290; https://doi.org/10.3390/computers14070290 - 18 Jul 2025
Viewed by 184
Abstract
The spread of COVID-19 is influenced by multiple factors, including control policies, virus characteristics, individual behaviors, and environmental conditions, exhibiting highly complex nonlinear dynamic features. The time series of new confirmed cases shows significant nonlinearity and non-stationarity. Traditional prediction methods that rely solely on one-dimensional case data struggle to capture the multi-dimensional features of the data and are limited in handling nonlinear and non-stationary characteristics. Their prediction accuracy and generalization capabilities remain insufficient, and most existing studies focus on single-step forecasting, with limited attention to multi-step prediction. To address these challenges, this paper proposes a multi-module fusion prediction model—TSMixer-BiKSA network—that integrates multi-feature inputs, Variational Mode Decomposition (VMD), and a dual-branch parallel architecture for 1- to 3-day-ahead multi-step forecasting of new COVID-19 cases. First, variables highly correlated with the target sequence are selected through correlation analysis to construct a feature matrix, which serves as one input branch. Simultaneously, the case sequence is decomposed using VMD to extract low-complexity, highly regular multi-scale modal components as the other input branch, enhancing the model’s ability to perceive and represent multi-source information. The two input branches are then processed in parallel by the TSMixer-BiKSA network model. Specifically, the TSMixer module employs a multilayer perceptron (MLP) structure to alternately model along the temporal and feature dimensions, capturing cross-time and cross-variable dependencies. The BiGRU module extracts bidirectional dynamic features of the sequence, improving long-term dependency modeling. The KAN module introduces hierarchical nonlinear transformations to enhance high-order feature interactions. Finally, the SA attention mechanism enables the adaptive weighted fusion of multi-source information, reinforcing inter-module synergy and enhancing the overall feature extraction and representation capability. Experimental results based on COVID-19 case data from Italy and the United States demonstrate that the proposed model significantly outperforms existing mainstream methods across various error metrics, achieving higher prediction accuracy and robustness. Full article
17 pages, 1019 KiB  
Article
Blockchain-Based Decentralized Identity Management System with AI and Merkle Trees
by Hoang Viet Anh Le, Quoc Duy Nam Nguyen, Nakano Tadashi and Thi Hong Tran
Computers 2025, 14(7), 289; https://doi.org/10.3390/computers14070289 - 18 Jul 2025
Viewed by 325
Abstract
The Blockchain-based Decentralized Identity Management System (BDIMS) is an innovative framework designed for digital identity management, utilizing the unique attributes of blockchain technology. The BDIMS categorizes entities into three distinct groups: identity providers, service providers, and end-users. The system’s efficiency in identifying and extracting information from identification cards is enhanced by the integration of artificial intelligence (AI) algorithms. These algorithms decompose the extracted fields into smaller units, facilitating optical character recognition (OCR) and user authentication processes. By employing Merkle Trees, the BDIMS ensures secure authentication with service providers without the need to disclose any personal information. This advanced system empowers users to maintain control over their private information, ensuring its protection with maximum effectiveness and security. Experimental results confirm that the BDIMS effectively mitigates identity fraud while maintaining the confidentiality and integrity of sensitive data. Full article
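The Merkle tree mechanism, proving that one identity field matches a committed root without revealing the other fields, can be sketched as follows. The identity fields are hypothetical examples, and the tree construction is a generic one rather than the exact BDIMS scheme.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return the list of levels, from leaf hashes up to the root."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Collect the sibling hashes needed to recompute the root for one leaf."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2 == 0))
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

# Hypothetical identity-card fields; only "dob:1990-01-01" is disclosed.
fields = [b"name:Alice", b"dob:1990-01-01", b"id:12345678", b"addr:Osaka"]
levels = build_tree(fields)
root = levels[-1][0]
proof = prove(levels, 1)
print("verified:", verify(b"dob:1990-01-01", proof, root))
```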
20 pages, 709 KiB  
Article
SKGRec: A Semantic-Enhanced Knowledge Graph Fusion Recommendation Algorithm with Multi-Hop Reasoning and User Behavior Modeling
by Siqi Xu, Ziqian Yang, Jing Xu and Ping Feng
Computers 2025, 14(7), 288; https://doi.org/10.3390/computers14070288 - 18 Jul 2025
Viewed by 244
Abstract
To address the limitations of existing knowledge graph-based recommendation algorithms, including insufficient utilization of semantic information and inadequate modeling of user behavior motivations, we propose SKGRec, a novel recommendation model that integrates knowledge graph and semantic features. The model constructs a semantic interaction graph (USIG) of user behaviors and employs a self-attention mechanism and a ranked optimization loss function to mine user interactions in fine-grained semantic associations. A relationship-aware aggregation module is designed to dynamically integrate higher-order relational features in the knowledge graph through the attention scoring function. In addition, a multi-hop relational path inference mechanism is introduced to capture long-distance dependencies to improve the depth of user interest modeling. Experiments on the Amazon-Book and Last-FM datasets show that SKGRec significantly outperforms several state-of-the-art recommendation algorithms on the Recall@20 and NDCG@20 metrics. Comparison experiments validate the effectiveness of semantic analysis of user behavior and multi-hop path inference, while cold-start experiments further confirm the robustness of the model in sparse-data scenarios. This study provides a new optimization approach for knowledge graph and semantic-driven recommendation systems, enabling more accurate capture of user preferences and alleviating the problem of noise interference. Full article
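The two reported ranking metrics can be sketched directly; the ranked list and held-out items below are toy values, not results from the paper.

```python
import numpy as np

def recall_at_k(ranked_items, relevant, k=20):
    """Fraction of a user's relevant items that appear in the top-k ranking."""
    hits = len(set(ranked_items[:k]) & relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(ranked_items, relevant, k=20):
    """Normalized discounted cumulative gain with binary relevance."""
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

# Toy example: a model's ranked item list and the user's held-out test items.
ranking = [42, 7, 19, 3, 55, 8, 61, 2, 90, 11]
held_out = {7, 2, 99}
print("Recall@20:", recall_at_k(ranking, held_out))
print("NDCG@20:  ", round(ndcg_at_k(ranking, held_out), 4))
```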
23 pages, 2250 KiB  
Article
Machine Learning Techniques for Uncertainty Estimation in Dynamic Aperture Prediction
by Carlo Emilio Montanari, Robert B. Appleby, Davide Di Croce, Massimo Giovannozzi, Tatiana Pieloni, Stefano Redaelli and Frederik F. Van der Veken
Computers 2025, 14(7), 287; https://doi.org/10.3390/computers14070287 - 18 Jul 2025
Viewed by 254
Abstract
The dynamic aperture is an essential concept in circular particle accelerators, providing the extent of the phase space region where particle motion remains stable over multiple turns. The accurate prediction of the dynamic aperture is key to optimising performance in accelerators such as the CERN Large Hadron Collider and is crucial for designing future accelerators like the CERN Future Circular Hadron Collider. Traditional methods for computing the dynamic aperture are computationally demanding and involve extensive numerical simulations with numerous initial phase space conditions. In our recent work, we have devised surrogate models to predict the dynamic aperture boundary both efficiently and accurately. These models have been further refined by incorporating them into a novel active learning framework. This framework enhances performance through continual retraining and intelligent data generation based on informed sampling driven by error estimation. A critical attribute of this framework is the precise estimation of uncertainty in dynamic aperture predictions. In this study, we investigate various machine learning techniques for uncertainty estimation, including Monte Carlo dropout, bootstrap methods, and aleatory uncertainty quantification. We evaluated these approaches to determine the most effective method for reliable uncertainty estimation in dynamic aperture predictions using machine learning techniques. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
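Monte Carlo dropout, one of the uncertainty estimation techniques compared, keeps dropout active at inference and uses the spread of repeated stochastic predictions as the uncertainty. The network below is a placeholder surrogate, not the authors' model.

```python
import torch
import torch.nn as nn

# Placeholder surrogate: maps machine/optics parameters to a DA estimate.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    """Run repeated forward passes with dropout enabled and return the
    predictive mean and standard deviation across samples."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(5, 10)              # five hypothetical machine configurations
mean, std = mc_dropout_predict(model, x)
for m, s in zip(mean.squeeze(1), std.squeeze(1)):
    print(f"DA prediction: {m:.3f} ± {s:.3f}")
```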
21 pages, 2105 KiB  
Article
Implementing Virtual Reality for Fire Evacuation Preparedness at Schools
by Rashika Tasnim Keya, Ilona Heldal, Daniel Patel, Pietro Murano and Cecilia Hammar Wijkmark
Computers 2025, 14(7), 286; https://doi.org/10.3390/computers14070286 - 18 Jul 2025
Viewed by 540
Abstract
Emergency preparedness training in organizations frequently involves simple evacuation drills triggered by fire alarms, limiting the opportunities for broader skill development. Digital technologies, particularly virtual reality (VR), offer promising methods to enhance learning for handling incidents and evacuations. However, implementing VR-based training remains challenging due to unclear integration strategies within organizational practices and a lack of empirical evidence of VR’s effectiveness. This paper explores how VR-based training tools can be implemented in schools to enhance emergency preparedness among students, teachers, and staff. Following a design science research process, data were collected from a questionnaire-based study involving 12 participants and an exploratory study with 13 participants. The questionnaire-based study investigates initial attitudes and willingness to adopt VR training, while the exploratory study assesses the VR prototype’s usability, realism, and perceived effectiveness for emergency preparedness training. Despite a limited sample size and technical constraints of the early prototype, findings indicate strong student enthusiasm for gamified and immersive learning experiences. Teachers emphasized the need for technical and instructional support to regularly utilize VR training modules, while firefighters acknowledged the potential of VR tools, but also highlighted the critical importance of regular drills and professional validation. The relevance of the results of utilizing VR in this context is further discussed in terms of how it can be integrated into university curricula and aligned with other accessible digital preparedness tools. Full article
25 pages, 2870 KiB  
Article
Performance Evaluation and QoS Optimization of Routing Protocols in Vehicular Communication Networks Under Delay-Sensitive Conditions
by Alaa Kamal Yousif Dafhalla, Hiba Mohanad Isam, Amira Elsir Tayfour Ahmed, Ikhlas Saad Ahmed, Lutfieh S. Alhomed, Amel Mohamed essaket Zahou, Fawzia Awad Elhassan Ali, Duria Mohammed Ibrahim Zayan, Mohamed Elshaikh Elobaid and Tijjani Adam
Computers 2025, 14(7), 285; https://doi.org/10.3390/computers14070285 - 17 Jul 2025
Viewed by 277
Abstract
Vehicular Communication Networks (VCNs) are essential to intelligent transportation systems, where real-time data exchange between vehicles and infrastructure supports safety, efficiency, and automation. However, achieving high Quality of Service (QoS)—especially under delay-sensitive conditions—remains a major challenge due to the high mobility and dynamic topology of vehicular environments. While some efforts have explored routing protocol optimization, few have systematically compared multiple optimization approaches tailored to distinct traffic and delay conditions. This study addresses this gap by evaluating and enhancing two widely used routing protocols, QOS-AODV and GPSR, through their improved versions, CM-QOS-AODV and CM-GPSR. Two distinct optimization models are proposed: the Traffic-Oriented Model (TOM), designed to handle variable and high-traffic conditions, and the Delay-Efficient Model (DEM), focused on reducing latency for time-critical scenarios. Performance was evaluated using key QoS metrics: throughput (rate of successful data delivery), packet delivery ratio (PDR) (percentage of successfully delivered packets), and end-to-end delay (latency between sender and receiver). Simulation results reveal that TOM-optimized protocols achieve up to 10% higher PDR, maintain throughput above 0.40 Mbps, and reduce delay to as low as 0.01 s, making them suitable for applications such as collision avoidance and emergency alerts. DEM-based variants offer balanced, moderate improvements, making them better suited for general-purpose VCN applications. These findings underscore the importance of traffic- and delay-aware protocol design in developing robust, QoS-compliant vehicular communication systems. Full article
(This article belongs to the Special Issue Application of Deep Learning to Internet of Things Systems)
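The three reported QoS metrics can be sketched from a per-packet send/receive trace; the trace below is a toy placeholder rather than output from the simulations.

```python
# Each record: (packet_id, bytes, send_time_s, recv_time_s or None if lost).
trace = [
    (1, 512, 0.00, 0.012),
    (2, 512, 0.05, 0.066),
    (3, 512, 0.10, None),      # dropped packet
    (4, 512, 0.15, 0.163),
    (5, 512, 0.20, 0.215),
]

delivered = [p for p in trace if p[3] is not None]

pdr = len(delivered) / len(trace)                             # packet delivery ratio
delays = [recv - send for _, _, send, recv in delivered]
avg_delay = sum(delays) / len(delays)                         # end-to-end delay (s)

duration = max(recv for *_, recv in delivered) - min(send for _, _, send, _ in trace)
throughput_mbps = sum(b for _, b, _, r in trace if r is not None) * 8 / duration / 1e6

print(f"PDR: {pdr:.0%}  avg delay: {avg_delay*1000:.1f} ms  "
      f"throughput: {throughput_mbps:.3f} Mbps")
```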
19 pages, 5755 KiB  
Article
A Context-Aware Doorway Alignment and Depth Estimation Algorithm for Assistive Wheelchairs
by Shanelle Tennekoon, Nushara Wedasingha, Anuradhi Welhenge, Nimsiri Abhayasinghe and Iain Murray
Computers 2025, 14(7), 284; https://doi.org/10.3390/computers14070284 - 17 Jul 2025
Viewed by 265
Abstract
Navigating through doorways remains a daily challenge for wheelchair users, often leading to frustration, collisions, or dependence on assistance. These challenges highlight a pressing need for intelligent doorway detection algorithms for assistive wheelchairs that go beyond traditional object detection. This study presents the algorithmic development of a lightweight, vision-based doorway detection and alignment module with contextual awareness. It integrates channel and spatial attention, semantic feature fusion, unsupervised depth estimation, and doorway alignment that offers real-time navigational guidance to the wheelchair's control system. The model achieved a mean average precision of 95.8% and an F1 score of 93%, while maintaining low computational demands suitable for future deployment on embedded systems. By eliminating the need for depth sensors and enabling contextual awareness, this study offers a robust solution to improve indoor mobility and deliver actionable feedback to support safe and independent doorway traversal for wheelchair users. Full article
(This article belongs to the Special Issue AI for Humans and Humans for AI (AI4HnH4AI))
18 pages, 533 KiB  
Article
Comparative Analysis of Deep Learning Models for Intrusion Detection in IoT Networks
by Abdullah Waqas, Sultan Daud Khan, Zaib Ullah, Mohib Ullah and Habib Ullah
Computers 2025, 14(7), 283; https://doi.org/10.3390/computers14070283 - 17 Jul 2025
Viewed by 282
Abstract
The Internet of Things (IoT) holds transformative potential in fields such as power grid optimization, defense networks, and healthcare. However, the constrained processing capacities and resource limitations of IoT networks make them especially susceptible to cyber threats. This study addresses the problem of detecting intrusions in IoT environments by evaluating the performance of deep learning (DL) models under different data and algorithmic conditions. We conducted a comparative analysis of three widely used DL models—Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), and Bidirectional LSTM (biLSTM)—across four benchmark IoT intrusion detection datasets: BoTIoT, CiCIoT, ToNIoT, and WUSTL-IIoT-2021. Each model was assessed under balanced and imbalanced dataset configurations and evaluated using three loss functions (cross-entropy, focal loss, and dual focal loss). By analyzing model efficacy across these datasets, we highlight the importance of generalizability and adaptability to varied data characteristics that are essential for real-world applications. The results demonstrate that the CNN trained using the cross-entropy loss function consistently outperforms the other models, particularly on balanced datasets. On the other hand, LSTM and biLSTM show strong potential in temporal modeling, but their performance is highly dependent on the characteristics of the dataset. By analyzing the performance of multiple DL models under diverse datasets, this research provides actionable insights for developing secure, interpretable IoT systems that can meet the challenges of designing a secure IoT system. Full article
(This article belongs to the Special Issue Application of Deep Learning to Internet of Things Systems)
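The focal loss used for the imbalanced configurations can be sketched as follows (the dual focal variant is not reproduced); the gamma and alpha values and the batch shapes are illustrative, not the study's settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=1.0):
    """Focal loss for multi-class intrusion detection: down-weights easy,
    well-classified examples so minority attack classes dominate the gradient."""
    ce = F.cross_entropy(logits, targets, reduction="none")   # per-sample CE
    pt = torch.exp(-ce)                                       # prob of the true class
    return (alpha * (1.0 - pt) ** gamma * ce).mean()

# Toy batch: 6 flows, 5 traffic classes (the sizes are placeholders).
logits = torch.randn(6, 5, requires_grad=True)
labels = torch.tensor([0, 0, 0, 1, 3, 4])
loss = focal_loss(logits, labels)
loss.backward()
print(f"focal loss: {loss.item():.4f}")
```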
16 pages, 1251 KiB  
Article
Enhanced Detection of Intrusion Detection System in Cloud Networks Using Time-Aware and Deep Learning Techniques
by Nima Terawi, Huthaifa I. Ashqar, Omar Darwish, Anas Alsobeh, Plamen Zahariev and Yahya Tashtoush
Computers 2025, 14(7), 282; https://doi.org/10.3390/computers14070282 - 17 Jul 2025
Viewed by 327
Abstract
This study introduces an enhanced Intrusion Detection System (IDS) framework for Denial-of-Service (DoS) attacks, utilizing network traffic inter-arrival time (IAT) analysis. By examining the timing between packets and other statistical features, we detected patterns of malicious activity, allowing early and effective DoS threat mitigation. We generate real DoS traffic, including normal, Internet Control Message Protocol (ICMP), Smurf attack, and Transmission Control Protocol (TCP) classes, and develop nine predictive algorithms, combining traditional machine learning and advanced deep learning techniques with optimization methods, including the synthetic minority sampling technique (SMOTE) and grid search (GS). Our findings reveal that while traditional machine learning achieved moderate accuracy, it struggled with imbalanced datasets. In contrast, Deep Neural Network (DNN) models showed significant improvements with optimization, with DNN combined with GS (DNN-GS) reaching 89% accuracy. However, we also used Recurrent Neural Networks (RNNs) combined with SMOTE and GS (RNN-SMOTE-GS), which emerged as the best-performing with a precision of 97%, demonstrating the effectiveness of combining SMOTE and GS and highlighting the critical role of advanced optimization techniques in enhancing the detection capabilities of IDS models for the accurate classification of various types of network traffic and attacks. Full article
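Combining SMOTE with grid search can be sketched with scikit-learn and imbalanced-learn; the synthetic features and the small MLP below stand in for the study's IAT features and RNN, so everything shown is an assumption for illustration.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic, imbalanced stand-in for IAT-based traffic features
# (normal, ICMP flood, Smurf, TCP flood).
X, y = make_classification(n_samples=4000, n_features=12, n_classes=4,
                           n_informative=8, weights=[0.7, 0.1, 0.1, 0.1],
                           random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=0)),          # oversample minority classes
    ("clf", MLPClassifier(max_iter=500, random_state=0)),
])

grid = GridSearchCV(
    pipe,
    param_grid={"clf__hidden_layer_sizes": [(32,), (64, 32)],
                "clf__alpha": [1e-4, 1e-3]},
    cv=3, scoring="f1_macro", n_jobs=-1,
)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("best macro-F1:", round(grid.best_score_, 3))
```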
46 pages, 8887 KiB  
Article
One-Class Anomaly Detection for Industrial Applications: A Comparative Survey and Experimental Study
by Davide Paolini, Pierpaolo Dini, Ettore Soldaini and Sergio Saponara
Computers 2025, 14(7), 281; https://doi.org/10.3390/computers14070281 - 16 Jul 2025
Viewed by 393
Abstract
This article aims to evaluate the runtime effectiveness of various one-class classification (OCC) techniques for anomaly detection in an industrial scenario reproduced in a laboratory setting. To address the limitations posed by restricted access to proprietary data, the study explores OCC methods that learn solely from legitimate network traffic, without requiring labeled malicious samples. After analyzing major publicly available datasets, such as KDD Cup 1999 and TON-IoT, as well as the most widely used OCC techniques, a lightweight and modular intrusion detection system (IDS) was developed in Python. The system was tested in real time on an experimental platform based on Raspberry Pi, within a simulated client–server environment using the NFSv4 protocol over TCP/UDP. Several OCC models were compared, including One-Class SVM, Autoencoder, VAE, and Isolation Forest. The results showed strong performance in terms of detection accuracy and low latency, with the best outcomes achieved using the UNSW-NB15 dataset. The article concludes with a discussion of additional strategies to enhance the runtime analysis of these algorithms, offering insights into potential future applications and improvement directions. Full article
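The one-class setup, training only on legitimate traffic and flagging deviations, can be sketched with two of the compared models; the synthetic flow features below stand in for the public datasets named above.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Synthetic stand-ins: benign flow features cluster tightly, attacks drift away.
benign_train = rng.normal(0.0, 1.0, size=(2000, 8))
benign_test = rng.normal(0.0, 1.0, size=(200, 8))
attack_test = rng.normal(4.0, 1.5, size=(200, 8))

scaler = StandardScaler().fit(benign_train)
X_train = scaler.transform(benign_train)

models = {
    "One-Class SVM": OneClassSVM(kernel="rbf", nu=0.05, gamma="scale"),
    "Isolation Forest": IsolationForest(contamination=0.05, random_state=0),
}

for name, model in models.items():
    model.fit(X_train)                       # trained on legitimate traffic only
    benign_pred = model.predict(scaler.transform(benign_test))   # +1 = normal
    attack_pred = model.predict(scaler.transform(attack_test))   # -1 = anomaly
    fpr = np.mean(benign_pred == -1)
    detection = np.mean(attack_pred == -1)
    print(f"{name}: detection rate {detection:.2%}, false positives {fpr:.2%}")
```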
17 pages, 1301 KiB  
Article
Carbon-Aware, Energy-Efficient, and SLA-Compliant Virtual Machine Placement in Cloud Data Centers Using Deep Q-Networks and Agglomerative Clustering
by Maraga Alex, Sunday O. Ojo and Fred Mzee Awuor
Computers 2025, 14(7), 280; https://doi.org/10.3390/computers14070280 - 15 Jul 2025
Viewed by 303
Abstract
The rapid expansion of cloud computing has increased carbon emissions and energy usage in cloud data centers, making creative solutions for sustainable resource management increasingly necessary. This work presents a new algorithm—Carbon-Aware, Energy-Efficient, and SLA-Compliant Virtual Machine Placement using Deep Q-Networks (DQNs) and Agglomerative Clustering (CARBON-DQN)—that intelligently balances environmental sustainability, service level agreement (SLA) compliance, and energy efficiency. The method combines a deep reinforcement learning model that learns optimal placement strategies over time, carbon-aware data center profiling, and the hierarchical clustering of virtual machines (VMs) according to their resource requirements. Extensive simulations show that CARBON-DQN significantly outperforms conventional and state-of-the-art algorithms such as GRVMP, NSGA-II, RLVMP, GMPR, and MORLVMP. Across a range of virtual machine configurations—including micro, small, high-CPU, and extra-large instances—it delivers the lowest carbon emissions, fewer SLA violations, and the lowest energy usage. Driven by real-time input, the adaptive decision-making capacity of the algorithm allows it to react dynamically to changing data center conditions and workloads. These findings highlight the suitability of CARBON-DQN as a sustainable and intelligent virtual machine placement strategy for cloud systems. To further improve scalability, environmental impact, and practical applicability, future work will investigate the integration of renewable energy forecasts, dynamic pricing models, and deployment across multi-cloud and edge computing environments. Full article
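The agglomerative clustering step, grouping VM requests by resource profile before placement, can be sketched as below; the VM sizes are hypothetical and the DQN placement policy itself is not reproduced.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import MinMaxScaler

# Hypothetical VM requests: (vCPUs, memory GiB, expected utilization).
vm_requests = np.array([
    [1,  1,  0.2],   # micro
    [1,  2,  0.3],   # small
    [2,  4,  0.4],
    [8,  8,  0.7],   # high-CPU
    [8, 16,  0.8],
    [16, 64, 0.9],   # extra-large
])

X = MinMaxScaler().fit_transform(vm_requests)
clusterer = AgglomerativeClustering(n_clusters=3, linkage="ward")
labels = clusterer.fit_predict(X)

for cluster_id in np.unique(labels):
    members = vm_requests[labels == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} VMs, "
          f"mean profile {members.mean(axis=0).round(2)}")
# A placement agent (e.g., a DQN) could then choose hosts per cluster,
# trading off energy, carbon intensity, and SLA risk.
```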
27 pages, 1817 KiB  
Article
A Large Language Model-Based Approach for Multilingual Hate Speech Detection on Social Media
by Muhammad Usman, Muhammad Ahmad, Grigori Sidorov, Irina Gelbukh and Rolando Quintero Tellez
Computers 2025, 14(7), 279; https://doi.org/10.3390/computers14070279 - 15 Jul 2025
Viewed by 694
Abstract
The proliferation of hate speech on social media platforms poses significant threats to digital safety, social cohesion, and freedom of expression. Detecting such content—especially across diverse languages—remains a challenging task due to linguistic complexity, cultural context, and resource limitations. To address these challenges, this study introduces a comprehensive approach for multilingual hate speech detection. To facilitate robust hate speech detection across diverse languages, this study makes several key contributions. First, we created a novel trilingual hate speech dataset consisting of 10,193 manually annotated tweets in English, Spanish, and Urdu. Second, we applied two innovative techniques—joint multilingual and translation-based approaches—for cross-lingual hate speech detection that have not been previously explored for these languages. Third, we developed detailed hate speech annotation guidelines tailored specifically to all three languages to ensure consistent and high-quality labeling. Finally, we conducted 41 experiments employing machine learning models with TF–IDF features, deep learning models utilizing FastText and GloVe embeddings, and transformer-based models leveraging advanced contextual embeddings to comprehensively evaluate our approach. Additionally, we employed a large language model with advanced contextual embeddings to identify the best solution for the hate speech detection task. The experimental results showed that our GPT-3.5-turbo model significantly outperforms strong baselines, achieving up to an 8% improvement over XLM-R in Urdu hate speech detection and an average gain of 4% across all three languages. This research not only contributes a high-quality multilingual dataset but also offers a scalable and inclusive framework for hate speech detection in underrepresented languages. Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
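The TF-IDF plus classical classifier tier of the experiments can be sketched as follows; the example texts are invented placeholders, and the transformer and GPT-3.5-turbo pipelines are not shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Invented placeholder tweets with binary labels (1 = hateful, 0 = not).
texts = [
    "I respect everyone in this community",
    "those people should be banned from our city",
    "what a beautiful day to learn something new",
    "they are worthless and do not belong here",
]
labels = [0, 1, 0, 1]

baseline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
baseline.fit(texts, labels)

print(baseline.predict(["everyone deserves respect",
                        "they should all be banned"]))
```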
31 pages, 17130 KiB  
Article
A Space-Time Plume Algorithm to Represent and Compute Dynamic Places
by Brent Dell and May Yuan
Computers 2025, 14(7), 278; https://doi.org/10.3390/computers14070278 - 15 Jul 2025
Viewed by 303
Abstract
Contrary to what is represented in geospatial databases, places are dynamic and shaped by events. Point clustering analysis commonly assumes events occur in an empty space and therefore ignores geospatial features where events take place. This research introduces relational density, a novel concept redefining density as relative to the spatial structure of geospatial features rather than an absolute measure. Building on this, we developed Space-Time Plume, a new algorithm for detecting and tracking evolving event clusters as smoke plumes in space and time, representing dynamic places. Unlike conventional density-based methods, Space-Time Plume dynamically adapts spatial reachability based on the underlying spatial structure and other zone-based parameters across multiple temporal intervals to capture hierarchical plume dynamics. The algorithm tracks plume progression, identifies spatiotemporal relationships, and reveals the emergence, evolution, and disappearance of event-driven places. A case study of crime events in Dallas, Texas, USA, demonstrates the algorithm’s performance and its capacity to represent and compute criminogenic places. We further enhance metaball rendering with Perlin noise to visualize plume structures and their spatiotemporal evolution. A comparative analysis with ST-DBSCAN shows Space-Time Plume’s competitive computational efficiency and ability to represent dynamic places with richer geographic insights. Full article
28 pages, 392 KiB  
Article
Predicting Risk and Complications of Diabetes Through Built-In Artificial Intelligence
by Siana Sagar Bontha, Sastry Kodanda Rama Jammalamadaka, Chandra Prakash Vudatha, Sasi Bhanu Jammalamadaka, Balakrishna Kamesh Duvvuri and Bala Chandrika Vudatha
Computers 2025, 14(7), 277; https://doi.org/10.3390/computers14070277 - 15 Jul 2025
Viewed by 450
Abstract
The global healthcare system faces significant challenges posed by diabetes and its complications, highlighting the need for innovative strategies to improve early diagnosis and treatment. Machine learning models help in the early detection of diseases and recommendations for taking safety measures and treating the disease. A comparative analysis of existing machine learning (ML) models is necessary to identify the most suitable model while uniformly fixing the model parameters. Assessing risk based on biomarker measurement and computing overall risk is important for accurate prediction. Early prediction of complications that may arise, based on the risk of diabetes and biomarkers, using machine learning models, is key to helping patients. In this paper, a comparative model is presented to evaluate ML models based on common model characteristics. Additionally, a risk assessment model and a prediction model are presented to help predict the occurrence of complications. Random Forest (RF) is the best model for predicting the occurrence of Type 2 Diabetes (T2D) based on biomarker input. It has also been shown that the prediction of diabetes complications using neural networks is highly accurate, reaching a level of 98%. Full article
37 pages, 2921 KiB  
Article
A Machine-Learning-Based Data Science Framework for Effectively and Efficiently Processing, Managing, and Visualizing Big Sequential Data
by Alfredo Cuzzocrea, Islam Belmerabet, Abderraouf Hafsaoui and Carson K. Leung
Computers 2025, 14(7), 276; https://doi.org/10.3390/computers14070276 - 14 Jul 2025
Viewed by 600
Abstract
In recent years, the open data initiative has led to the willingness of many governments, researchers, and organizations to share their data and make it publicly available. Healthcare, disease, and epidemiological data, such as privacy statistics on patients who have suffered from epidemic diseases such as the Coronavirus disease 2019 (COVID-19), are examples of open big data. Therefore, huge volumes of valuable data have been generated and collected at high speed from a wide variety of rich data sources. Analyzing these open big data can be of social benefit. For example, people gain a better understanding of disease by analyzing and mining disease statistics, which can inspire them to participate in disease prevention, detection, control, and combat. Visual representation further improves data understanding and corresponding results for analysis and mining, as a picture is worth a thousand words. In this paper, we present a visual data science solution for the visualization and visual analysis of large sequence data. These ideas are illustrated by the visualization and visual analysis of sequences of real epidemiological data of COVID-19. Through our solution, we enable users to visualize the epidemiological data of COVID-19 over time. It also allows people to visually analyze data and discover relationships between popular features associated with COVID-19 cases. The effectiveness of our visual data science solution in improving the user experience of visualization and visual analysis of large sequence data is demonstrated by the real-life evaluation of these sequenced epidemiological data of COVID-19. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2024 (ICCSA 2024))
17 pages, 1514 KiB  
Article
Examining the Flow Dynamics of Artificial Intelligence in Real-Time Classroom Applications
by Zoltán Szűts, Tünde Lengyelné Molnár, Réka Racskó, Geoffrey Vaughan, Szabolcs Ceglédi and Dalma Lilla Dominek
Computers 2025, 14(7), 275; https://doi.org/10.3390/computers14070275 - 14 Jul 2025
Viewed by 440
Abstract
The integration of artificial intelligence (AI) into educational environments is fundamentally transforming the learning process, raising new questions regarding student engagement and motivation. This empirical study investigates the relationship between AI-based learning support and the experience of flow, defined as the optimal state of deep attention and intrinsic motivation, among university students. Building on Csíkszentmihályi’s flow theory and current models of technology-enhanced learning, we applied a validated, purposefully developed AI questionnaire (AIFLQ) to 142 students from two Hungarian universities: the Ludovika University of Public Service and Eszterházy Károly Catholic University. The participants used generative AI tools (e.g., ChatGPT 4, SUNO) during their academic tasks. Based on the results of the Mann–Whitney U test, significant differences were found between students from the two universities in the immersion and balance factors, as well as in the overall flow score, while the AI-related factor showed no statistically significant differences. The sustainability of the flow experience appears to be linked more to pedagogical methodological factors than to institutional ones, highlighting the importance of instructional support in fostering optimal learning experiences. Demographic variables also influenced the flow experience. In gender comparisons, female students showed significantly higher values for the immersion factor. According to the Kruskal–Wallis test, educational attainment also affected the flow experience, with students holding higher education degrees achieving higher flow scores. Our findings suggest that through the conscious design of AI tools and learning environments, taking into account instructional support and learner characteristics, it is possible to promote the development of optimal learning states. This research provides empirical evidence at the intersection of AI and motivational psychology, contributing to both domestic and international discourse in educational psychology and digital pedagogy. Full article
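The two nonparametric tests reported can be sketched with SciPy; the scores below are randomly generated placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(42)

# Placeholder flow scores (1-5 scale) for the two universities; not the study's data.
ludovika = rng.normal(3.8, 0.5, size=70).clip(1, 5)
eszterhazy = rng.normal(3.5, 0.5, size=72).clip(1, 5)

u_stat, p_value = mannwhitneyu(ludovika, eszterhazy, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

# Placeholder groups by highest completed education level.
bsc = rng.normal(3.4, 0.5, size=60).clip(1, 5)
msc = rng.normal(3.7, 0.5, size=50).clip(1, 5)
phd = rng.normal(3.9, 0.5, size=32).clip(1, 5)

h_stat, p_value = kruskal(bsc, msc, phd)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
```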
23 pages, 17084 KiB  
Article
Training First Responders Through VR-Based Situated Digital Twins
by Nikolaos Partarakis, Theodoros Evdaimon, Menelaos Katsantonis and Xenophon Zabulis
Computers 2025, 14(7), 274; https://doi.org/10.3390/computers14070274 - 11 Jul 2025
Viewed by 520
Abstract
This study examines first responder training to deliver realistic, adaptable, and scalable solutions aimed at equipping personnel to handle high-risk, rapidly developing scenarios. The proposed method leverages Virtual Reality, Augmented Reality, and digital twins to enable immersive and situationally relevant training for security-critical incidents. The method is structured into three distinct phases: definition, digitization, and implementation. The outcome of this approach is the creation of virtual training scenarios that simulate real situations and incident dynamics. The methodology employs photogrammetric reconstruction, simulation of human behavior through locomotion, and virtual security systems, such as surveillance and drone technology. Alongside the methodology, a case study of a large public event is presented to illustrate its feasibility in real-world applications. This study offers a comprehensive and adaptive structure for the design and deployment of digitally augmented training systems. This provides a practical basis for enhancing readiness in a range of operational domains. Full article
25 pages, 9056 KiB  
Article
Creating Digital Twins to Celebrate Commemorative Events in the Metaverse
by Vicente Jover and Silvia Sempere
Computers 2025, 14(7), 273; https://doi.org/10.3390/computers14070273 - 10 Jul 2025
Viewed by 599
Abstract
This paper explores the potential and implications arising from the convergence of virtual reality, the metaverse, and digital twins in translating a real-world commemorative event into a virtual environment. It emphasizes how such integration influences digital transformation processes, particularly in reshaping models of social interaction. Virtual reality is conceptualized as an immersive technology, enabling advanced multisensory experiences within persistent virtual spaces, such as the metaverse. Furthermore, this study delves into the concept of digital twins—high-fidelity virtual representations of physical systems, processes, and objects—highlighting their application in simulation, analysis, forecasting, prevention, and operational enhancement. In the context of virtual events, the convergence of these technologies is examined as a means to create interactive, adaptable, and scalable environments capable of accommodating diverse social groups and facilitating global accessibility. As a practical application, a digital twin of the Ferrándiz and Carbonell buildings—the most iconic architectural ensemble on the Alcoi campus—was developed to host a virtual event commemorating the 50th anniversary of the integration of the Alcoi School of Industrial Technical Engineering into the Universitat Politècnica de València in 1972. The virtual environment was subsequently evaluated by a sample of users, including students and faculty, to assess usability and functionality, and to identify areas for improvement. The digital twin achieved a score of 88.39 out of 100 on the System Usability Scale (SUS). The findings underscore the key opportunities and challenges associated with the adoption of these emerging technologies, particularly regarding their adaptability in reconfiguring digital environments for work, social interaction, and education. Using this case study as a foundation, this paper offers insights into the strategic role of the metaverse in extending environmental perception and its transformative potential for the future digital ecosystem through the implementation of digital twins. Full article
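The 88.39/100 figure comes from the standard System Usability Scale scoring procedure, which can be sketched as follows; the questionnaire responses below are invented.

```python
# Standard System Usability Scale scoring: 10 items rated 1-5, odd items
# positively worded, even items negatively worded.

def sus_score(responses):
    """responses: list of 10 Likert answers (1-5) for one participant."""
    score = 0
    for i, r in enumerate(responses, start=1):
        score += (r - 1) if i % 2 == 1 else (5 - r)
    return score * 2.5   # map the 0-40 raw sum onto a 0-100 scale

# Invented responses from three participants, not the study's data.
participants = [
    [5, 1, 5, 2, 4, 1, 5, 1, 5, 2],
    [4, 2, 4, 1, 5, 2, 4, 2, 4, 1],
    [5, 1, 4, 2, 5, 1, 5, 2, 5, 1],
]
scores = [sus_score(p) for p in participants]
print("per-participant SUS:", scores)
print("mean SUS:", round(sum(scores) / len(scores), 2))
```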
21 pages, 1179 KiB  
Article
ELFA-Log: Cross-System Log Anomaly Detection via Enhanced Pseudo-Labeling and Feature Alignment
by Xiaowei Zhao, Kaiwei Guo, Mingting Huang, Shaojian Qiu and Lu Lu
Computers 2025, 14(7), 272; https://doi.org/10.3390/computers14070272 - 10 Jul 2025
Viewed by 342
Abstract
Existing log-based anomaly detection methods typically require large volumes of labeled data for training, presenting significant challenges when applied to new systems with limited labeled data. This limitation has spurred the need for cross-system log anomaly detection (CSLAD) methods. However, current CSLAD approaches often face challenges in effectively handling distributional differences in log data across systems. To address this issue, we propose ELFA-Log, a transfer learning-based approach for cross-system log anomaly detection. By enhancing pseudo-label generation with uncertainty estimation and feature alignment, ELFA-Log improves detection performance even in the presence of data distribution shifts. It uses entropy-based metrics to generate high-confidence pseudo-labels, minimizing reliance on labeled data. Additionally, a distance-based loss function optimizes the shared representation of cross-system log features. Experimental results on benchmark datasets demonstrate that ELFA-Log enhances the performance of CSLAD, offering a practical solution to the challenge of high labeling costs in real-world applications. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
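The entropy-based selection of high-confidence pseudo-labels can be sketched as below; the detector outputs and the threshold are placeholders, not ELFA-Log's actual model or settings.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of each predicted class distribution (rows)."""
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_pseudo_labels(probs, max_entropy=0.25):
    """Keep only target-system predictions whose entropy is below a threshold,
    i.e. where the model is confident, and use the argmax as the pseudo-label."""
    ent = entropy(probs)
    keep = ent < max_entropy
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

# Placeholder softmax outputs of a source-trained detector on unlabeled
# target-system log sequences (columns: normal, anomalous).
probs = np.array([
    [0.98, 0.02],   # confident normal  -> pseudo-labeled
    [0.55, 0.45],   # uncertain         -> discarded
    [0.03, 0.97],   # confident anomaly -> pseudo-labeled
    [0.60, 0.40],   # uncertain         -> discarded
])
idx, labels = select_pseudo_labels(probs)
print("kept indices:", idx, "pseudo-labels:", labels)
```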
23 pages, 1590 KiB  
Article
A Decision Support System for Classifying Suppliers Based on Machine Learning Techniques: A Case Study in the Aeronautics Industry
by Ana Claudia Andrade Ferreira, Alexandre Ferreira de Pinho, Matheus Brendon Francisco, Laercio Almeida de Siqueira, Jr. and Guilherme Augusto Vilas Boas Vasconcelos
Computers 2025, 14(7), 271; https://doi.org/10.3390/computers14070271 - 10 Jul 2025
Viewed by 391
Abstract
This paper presents the application of four machine learning algorithms to segment suppliers in a real industrial case. The algorithms used were K-Means, Hierarchical K-Means, Agglomerative Nesting (AGNES), and Fuzzy Clustering. The analyzed company's suppliers were clustered using attributes such as the number of non-conformities, location, and quantity supplied, among others. The CRISP-DM methodology guided the development of the work. The proposed methodology is relevant to both industry and academia: it helps managers make decisions about the quality of their suppliers, and its comparison of four different algorithms for this purpose offers a useful reference for future studies. The K-Means algorithm achieved the best performance, both in terms of the clustering metrics obtained and its simplicity of use. To date, no study has applied these four algorithms together to an industrial case, and this work demonstrates such an application. The use of artificial intelligence in industry is essential in the Industry 4.0 era, giving companies data-driven means to make better decisions. Full article
26 pages, 4876 KiB  
Article
A Systematic Approach to Evaluate the Use of Chatbots in Educational Contexts: Learning Gains, Engagements and Perceptions
by Wei Qiu, Chit Lin Su, Nurabidah Binti Jamil, Maung Thway, Samuel Soo Hwee Ng, Lei Zhang, Fun Siong Lim and Joel Weijia Lai
Computers 2025, 14(7), 270; https://doi.org/10.3390/computers14070270 - 9 Jul 2025
Viewed by 757
Abstract
As generative artificial intelligence (GenAI) chatbots gain traction in educational settings, a growing number of studies explore their potential for personalized, scalable learning. However, methodological fragmentation has limited the comparability and generalizability of findings across the field. This study proposes a unified, learning analytics–driven framework for evaluating the impact of GenAI chatbots on student learning. Grounded in the collection, analysis, and interpretation of diverse learner data, the framework integrates assessment outcomes, conversational interactions, engagement metrics, and student feedback. We demonstrate its application through a multi-week, quasi-experimental study using a Socratic-style chatbot designed with pedagogical intent. Using clustering techniques and statistical analysis, we identified patterns in student–chatbot interaction and linked them to changes in learning outcomes. This framework provides researchers and educators with a replicable structure for evaluating GenAI interventions and advancing coherence in learning analytics–based educational research. Full article
(This article belongs to the Special Issue Smart Learning Environments)
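A learning-analytics pipeline of the kind described above can be sketched briefly: cluster students by engagement metrics, then compare learning gains per cluster. The column names and values below are illustrative assumptions, not the study's data.

```python
# Minimal sketch (not from the article): clustering student-chatbot engagement
# metrics and comparing learning gains per cluster. Schema is an assumption.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy per-student records: messages sent, session minutes, pre/post scores
df = pd.DataFrame({
    "messages":   [5, 42, 18, 60, 3, 25, 50, 11],
    "minutes":    [10, 55, 30, 70, 8, 35, 65, 20],
    "pre_score":  [40, 35, 50, 30, 45, 55, 38, 48],
    "post_score": [45, 70, 62, 72, 47, 68, 75, 55],
})
df["gain"] = df["post_score"] - df["pre_score"]

features = StandardScaler().fit_transform(df[["messages", "minutes"]])
df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Mean learning gain per interaction cluster
print(df.groupby("cluster")["gain"].mean())
```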

21 pages, 2170 KiB  
Article
IoT-Driven Intelligent Energy Management: Leveraging Smart Monitoring Applications and Artificial Neural Networks (ANN) for Sustainable Practices
by Azza Mohamed, Ibrahim Ismail and Mohammed AlDaraawi
Computers 2025, 14(7), 269; https://doi.org/10.3390/computers14070269 - 9 Jul 2025
Cited by 1 | Viewed by 386
Abstract
The growing mismanagement of energy resources is a pressing issue that poses significant risks to both individuals and the environment. As energy consumption continues to rise, the consequences become increasingly severe, necessitating urgent action. In response, the rapid expansion of Internet of Things (IoT) devices offers a promising solution due to their adaptability, low power consumption, and transformative potential in energy management. This study describes a novel strategy that integrates IoT and Artificial Neural Networks (ANNs) in a smart monitoring mobile application intended to optimize energy usage and promote sustainability in residential settings. While IoT and ANN technologies have been investigated separately in previous research, the novelty of this work lies in the practical integration of both into a real-time, user-adaptive framework. The application enables continuous energy monitoring via modern IoT devices and wireless sensor networks, while ANN-based prediction models evaluate consumption data to dynamically optimize energy use and reduce environmental impact. The system's key features include simulated consumption scenarios and adaptive user profiles, which account for differences in household behaviors and occupancy patterns, allowing tailored recommendations and energy control strategies. The architecture supports remote device control, real-time feedback, and scenario-based simulations, making the system suitable for a wide range of home contexts. The proposed system's feasibility and effectiveness are demonstrated through detailed simulations, highlighting its potential to increase energy efficiency and encourage sustainable habits. This study contributes to the rapidly evolving field of intelligent energy management by providing a scalable, integrated, and user-centric solution that bridges the gap between theoretical models and practical implementation. Full article
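The ANN component described above can be illustrated with a small regression sketch: a neural network trained to predict consumption from simulated sensor readings. The features, data, and network size are assumptions for illustration, not the study's actual model.

```python
# Minimal sketch (not from the article): a small neural network predicting
# household energy consumption from simulated sensor features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
# Toy features: hour of day, outdoor temperature, occupancy count
X = np.column_stack([rng.integers(0, 24, n), rng.normal(25, 5, n), rng.integers(0, 5, n)])
# Toy consumption (kWh) with some structure plus noise
y = (0.2 * X[:, 2] + 0.05 * np.abs(X[:, 1] - 22)
     + 0.1 * np.sin(X[:, 0] / 24 * 2 * np.pi) + rng.normal(0, 0.05, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```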

22 pages, 1350 KiB  
Article
From Patterns to Predictions: Spatiotemporal Mobile Traffic Forecasting Using AutoML, TimeGPT and Traditional Models
by Hassan Ayaz, Kashif Sultan, Muhammad Sheraz and Teong Chee Chuah
Computers 2025, 14(7), 268; https://doi.org/10.3390/computers14070268 - 8 Jul 2025
Viewed by 374
Abstract
Call Detail Records (CDRs) from mobile networks offer valuable insights into both network performance and user behavior. With the growing importance of data analytics, analyzing CDRs has become critical for optimizing network resources by forecasting demand across spatial and temporal dimensions. In this study, we examine publicly available CDR data from Telecom Italia to explore the spatiotemporal dynamics of mobile network activity in Milan. This analysis reveals key patterns in traffic distribution, highlighting both high- and low-demand regions as well as notable variations in usage over time. To anticipate future network usage, we employ both Automated Machine Learning (AutoML) and the transformer-based TimeGPT model, comparing their performance against traditional forecasting methods such as Long Short-Term Memory (LSTM), ARIMA and SARIMA. Model accuracy is assessed using standard evaluation metrics, including root mean square error (RMSE), mean absolute error (MAE) and the coefficient of determination (R²). Results show that AutoML delivers the most accurate forecasts, with significantly lower RMSE (2.4990 vs. 14.8226) and MAE (1.0284 vs. 7.7789) compared to TimeGPT and a higher R² score (99.96% vs. 98.62%). Our findings underscore the strengths of modern predictive models in capturing complex traffic behaviors and demonstrate their value in resource planning, congestion management and overall network optimization. Importantly, this study is one of the first to comprehensively assess AutoML and TimeGPT on a high-resolution, real-world CDR dataset from a major European city. By merging machine learning techniques with advanced temporal modeling, this study provides a strong framework for scalable and intelligent mobile traffic prediction. It thus highlights the capacity of AutoML to simplify model development and the potential of TimeGPT to extend transformer-based prediction to the telecommunications domain. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
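As a brief illustration of the traditional baselines and evaluation metrics mentioned above, the following Python sketch fits a SARIMA model to a synthetic hourly traffic series and scores it with RMSE, MAE and R². The data, model orders, and split are assumptions, not the study's setup.

```python
# Minimal sketch (not from the article): SARIMA baseline on synthetic hourly
# traffic, scored with RMSE, MAE and R^2.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(7)
hours = np.arange(24 * 14)                                  # two weeks of hourly data
traffic = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

train, test = traffic[:-24], traffic[-24:]                  # hold out the last day
model = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24)).fit(disp=False)
pred = model.forecast(steps=24)

print("RMSE:", round(float(np.sqrt(mean_squared_error(test, pred))), 3))
print("MAE: ", round(float(mean_absolute_error(test, pred)), 3))
print("R^2: ", round(float(r2_score(test, pred)), 3))
```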

25 pages, 3142 KiB  
Article
Mobile Augmented Reality Games Towards Smart Learning City Environments: Learning About Sustainability
by Margarida M. Marques, João Ferreira-Santos, Rita Rodrigues and Lúcia Pombo
Computers 2025, 14(7), 267; https://doi.org/10.3390/computers14070267 - 7 Jul 2025
Cited by 1 | Viewed by 327
Abstract
This study explores the potential of mobile augmented reality games (MARGs) in promoting sustainability competencies within the context of a smart learning city environment. Anchored in the EduCITY project, which integrates location-based AR-enhanced games into an interactive mobile app, the research investigates how these tools support Education for Sustainable Development (ESD). A mixed-methods approach was employed: data were collected through the GreenComp-based Questionnaire (GCQuest) and anonymous gameplay logs generated by the app. Thematic analysis of 358 responses revealed four key learning domains: 'cultural awareness', 'environmental protection', 'sustainability awareness', and 'contextual knowledge'. Quantitative performance data from game logs highlighted substantial variation across games, with the highest performance found in those with more frequent AR integration and multiple iterative refinements. Participants who engaged with the optional AR-enhanced features outperformed those who did not. This study provides empirical evidence for the use of MARGs to cultivate sustainability-related knowledge, skills, and attitudes, particularly when grounded in local realities and enhanced through thoughtful design. Beyond the EduCITY project, the study proposes a replicable model for assessing sustainability competencies, with implications for broader integration of AR across educational contexts in ESD. The paper concludes with a critical reflection on methodological limitations and suggests future directions, including adapting the GCQuest for use with younger learners in primary education. Full article
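The log-based comparison described above can be sketched as a simple aggregation of gameplay records by game and by use of the optional AR features. The log schema and values below are illustrative assumptions, not the EduCITY data.

```python
# Minimal sketch (not from the article): summarising anonymous gameplay logs
# by game and by AR usage. Schema and values are illustrative assumptions.
import pandas as pd

logs = pd.DataFrame({
    "game":      ["Park", "Park", "Campus", "Campus", "Harbour", "Harbour"],
    "used_ar":   [True, False, True, False, True, False],
    "score_pct": [82, 64, 75, 60, 88, 70],
})

# Mean performance per game and per AR usage
print(logs.groupby("game")["score_pct"].mean())
print(logs.groupby("used_ar")["score_pct"].mean())
```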

12 pages, 349 KiB  
Article
Agentic AI for Cultural Heritage: Embedding Risk Memory in Semantic Digital Twins
by George Pavlidis
Computers 2025, 14(7), 266; https://doi.org/10.3390/computers14070266 - 7 Jul 2025
Viewed by 748
Abstract
Cultural heritage preservation increasingly relies on data-driven technologies, yet most existing systems lack the cognitive and temporal depth required to support meaningful, transparent, and policy-informed decision-making. This paper proposes a conceptual framework for memory-enabled, semantically grounded AI agents in the cultural domain, showing how the integration of the ICCROM/CCI ABC method for risk assessment into the Panoptes ontology enables the structured encoding of risk cognition over time. This structured risk memory becomes the foundation for agentic reasoning, supporting prioritization, justification, and long-term preservation planning. It is argued that this approach constitutes a principled step toward the development of Cultural Agentic AI: autonomous systems that remember, reason, and act in alignment with cultural values. Proof-of-concept simulations illustrate how memory-enabled agents can trace evolving risk patterns, trigger policy responses, and evaluate mitigation outcomes through structured, explainable reasoning. Full article
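As a small illustration of how a structured risk memory might be encoded, the following Python sketch stores risk records with the additive A + B + C magnitude used in the ABC risk-assessment method and ranks them for prioritization. The field names and example values are assumptions and do not reflect the Panoptes ontology's actual terms.

```python
# Minimal sketch (not from the article): a structured risk-memory entry using
# the additive A + B + C magnitude from the ABC method. Field names are
# illustrative assumptions, not Panoptes ontology terms.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskRecord:
    hazard: str
    a_frequency: float      # A: how often the event is expected
    b_loss: float           # B: loss of value per affected object
    c_fraction: float       # C: fraction of the collection affected
    assessed_on: date = field(default_factory=date.today)

    @property
    def magnitude(self) -> float:
        return self.a_frequency + self.b_loss + self.c_fraction

memory = [
    RiskRecord("flooding of storage room", 2.5, 4.0, 3.0),
    RiskRecord("UV fading in gallery", 4.5, 2.0, 1.5),
]
# Prioritise remembered risks by magnitude
for r in sorted(memory, key=lambda r: r.magnitude, reverse=True):
    print(f"{r.hazard}: MR = {r.magnitude}")
```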

20 pages, 632 KiB  
Article
Bridging or Burning? Digital Sustainability and PY Students’ Intentions to Adopt AI-NLP in Educational Contexts
by Mostafa Aboulnour Salem
Computers 2025, 14(7), 265; https://doi.org/10.3390/computers14070265 - 7 Jul 2025
Cited by 1 | Viewed by 409
Abstract
The current study examines the determinants influencing preparatory year (PY) students' intentions to adopt AI-powered natural language processing (NLP) models, such as Copilot, ChatGPT, and Gemini, and how these intentions shape their conceptions of digital sustainability. The extended unified theory of acceptance and use of technology (UTAUT) was integrated with a set of educational constructs, including content availability (CA), learning engagement (LE), learning motivation (LM), learner involvement (LI), and AI satisfaction (AS). Responses from 274 PY students at Saudi universities were analysed using partial least squares structural equation modelling (PLS-SEM) to evaluate both the measurement and structural models. The findings indicated that CA (β = 0.25), LE (β = 0.22), LM (β = 0.20), and LI (β = 0.18) significantly predicted user intention (UI), explaining 52.2% of its variance (R² = 0.522). In turn, UI significantly predicted students' digital sustainability conceptions (DSC) (β = 0.35, R² = 0.451). However, AI satisfaction (AS) did not exhibit a moderating effect, suggesting uniformly high satisfaction levels among students. The study therefore concludes that AI-powered NLP models are being adopted not only as learning assistant technologies but also as catalysts in promoting sustainable digital conceptions. The study contributes both theoretically and practically by conceptualising digital sustainability as a learner-driven construct and linking educational technology adoption to its advancement, in line with global frameworks such as Sustainable Development Goals (SDGs) 4 and 9. It highlights AI's transformative potential in higher education by examining how user intention influences digital sustainability conceptions among preparatory year students in Saudi Arabia. Given the demographic focus of the study, further research, particularly longitudinal studies, is recommended to track changes over time across genders, academic specialisations, and cultural contexts. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
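As a simplified stand-in for the structural part of the model above, the following Python sketch estimates standardised weights from the educational constructs to user intention with ordinary least squares. This is not PLS-SEM, and the construct scores, sample, and coefficients are illustrative assumptions only.

```python
# Minimal sketch (not from the article): a simplified OLS stand-in for the
# structural paths from constructs to user intention. Data are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 274
constructs = {"CA": rng.normal(size=n), "LE": rng.normal(size=n),
              "LM": rng.normal(size=n), "LI": rng.normal(size=n)}
X = np.column_stack(list(constructs.values()))
ui = (0.25 * X[:, 0] + 0.22 * X[:, 1] + 0.20 * X[:, 2] + 0.18 * X[:, 3]
      + rng.normal(0, 0.7, n))

Xs = StandardScaler().fit_transform(X)
uis = (ui - ui.mean()) / ui.std()
reg = LinearRegression().fit(Xs, uis)
for name, beta in zip(constructs, reg.coef_):
    print(f"{name} -> UI: beta = {beta:.2f}")
print("R^2 =", round(reg.score(Xs, uis), 3))
```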
