Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.3 days after submission; acceptance to publication takes 3.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 4.2 (2024)
5-Year Impact Factor: 3.5 (2024)
Latest Articles
EEG-Based Biometric Identification and Emotion Recognition: An Overview
Computers 2025, 14(8), 299; https://doi.org/10.3390/computers14080299 - 23 Jul 2025
Abstract
This overview examines recent advancements in EEG-based biometric identification, focusing on integrating emotional recognition to enhance the robustness and accuracy of biometric systems. By leveraging the unique physiological properties of EEG signals, biometric systems can identify individuals based on neural responses. The overview discusses the influence of emotional states on EEG signals and the consequent impact on biometric reliability. It also evaluates recent emotion recognition techniques, including machine learning methods such as support vector machines (SVMs), convolutional neural networks (CNNs), and long short-term memory networks (LSTMs). Additionally, the role of multimodal EEG datasets in enhancing emotion recognition accuracy is explored. Findings from key studies are synthesized to highlight the potential of EEG for secure, adaptive biometric systems that account for emotional variability. This overview emphasizes the need for future research on resilient biometric identification that integrates emotional context, aiming to establish EEG as a viable component of advanced biometric technologies.
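As a hedged illustration of the pipeline such systems share (not code from any surveyed study), the sketch below classifies EEG epochs from per-channel band-power features with an SVM; the sampling rate, frequency bands, and array shapes are assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def bandpower_features(epochs, fs=128):
    """Per-channel band-power features for EEG epochs.

    epochs: array of shape (n_epochs, n_channels, n_samples); fs and the
    band limits below are illustrative assumptions.
    """
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    freqs, psd = welch(epochs, fs=fs, axis=-1)   # psd: (n_epochs, n_channels, n_freqs)
    feats = []
    for lo, hi in bands.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))  # mean power in this band
    return np.concatenate(feats, axis=-1)           # (n_epochs, 3 * n_channels)

# Hypothetical usage: identify subjects from labeled training epochs.
# X = bandpower_features(train_epochs)
# clf = SVC(kernel="rbf").fit(X, subject_ids)
```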
Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
Open Access Review
Deep Learning Techniques for Retinal Layer Segmentation to Aid Ocular Disease Diagnosis: A Review
by Oliver Jonathan Quintana-Quintana, Marco Antonio Aceves-Fernández, Jesús Carlos Pedraza-Ortega, Gendry Alfonso-Francia and Saul Tovar-Arriaga
Computers 2025, 14(8), 298; https://doi.org/10.3390/computers14080298 - 22 Jul 2025
Abstract
Age-related ocular conditions like macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma are leading causes of irreversible vision loss globally. Optical coherence tomography (OCT) provides essential non-invasive visualization of retinal structures for early diagnosis, but manual analysis of these images is labor-intensive and prone to variability. Deep learning (DL) techniques have emerged as powerful tools for automating the segmentation of the retinal layer in OCT scans, potentially improving diagnostic efficiency and consistency. This review systematically evaluates the state of the art in DL-based retinal layer segmentation using the PRISMA methodology. We analyze various architectures (including CNNs, U-Net variants, GANs, and transformers), examine the characteristics and availability of datasets, discuss common preprocessing and data augmentation strategies, identify frequently targeted retinal layers, and compare performance evaluation metrics across studies. Our synthesis highlights significant progress, particularly with U-Net-based models, which often achieve Dice scores exceeding 0.90 for well-defined layers, such as the retinal pigment epithelium (RPE). However, it also identifies ongoing challenges, including dataset heterogeneity, inconsistent evaluation protocols, difficulties in segmenting specific layers (e.g., OPL, RNFL), and the need for improved clinical integration. This review provides a comprehensive overview of current strengths, limitations, and future directions to guide research towards more robust and clinically applicable automated segmentation tools for enhanced ocular disease diagnosis.
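Since the review compares studies by Dice score, the following minimal sketch shows how that metric is typically computed for one retinal layer from integer-labeled segmentation masks; this is the standard formula, not any particular paper's implementation.

```python
import numpy as np

def dice_score(pred, target, layer_id):
    """Dice coefficient for one retinal layer in integer-labeled masks."""
    p = (pred == layer_id)
    t = (target == layer_id)
    intersection = np.logical_and(p, t).sum()
    denom = p.sum() + t.sum()
    # If the layer is absent from both masks, count it as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0
```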
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
Open Access Article
Optimizing Cybersecurity Education: A Comparative Study of On-Premises and Cloud-Based Lab Environments Using AWS EC2
by Adil Khan and Azza Mohamed
Computers 2025, 14(8), 297; https://doi.org/10.3390/computers14080297 - 22 Jul 2025
Abstract
The increasing complexity of cybersecurity risks highlights the critical need for novel teaching techniques that provide students with the necessary skills and information. Traditional on-premises laboratory setups frequently lack the scalability, flexibility, and accessibility necessary for efficient training in today’s dynamic world. This study compares the efficacy of cloud-based solutions—specifically, Amazon Web Services (AWS) Elastic Compute Cloud (EC2)—against traditional settings like VirtualBox, with the goal of determining their potential to improve cybersecurity education. The study conducts systematic experimentation to compare lab environments based on parameters such as lab completion time, CPU and RAM use, and ease of access. The results show that AWS EC2 outperforms VirtualBox by shortening lab completion times, optimizing resource usage, and providing more remote accessibility. Additionally, the cloud-based strategy provides scalable, cost-effective implementation via a pay-per-use model, serving a wide range of pedagogical needs. These findings show that incorporating cloud technology into cybersecurity curricula can lead to more efficient, adaptable, and inclusive learning experiences, thereby boosting pedagogical methods in the field.
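As a rough sketch of how such a cloud lab might be provisioned programmatically (the study itself does not publish code), the snippet below launches a single lab VM with boto3; the AMI ID, key pair name, and tags are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one lab VM per student from a prepared lab image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical cybersecurity-lab AMI
    InstanceType="t3.micro",           # small, pay-per-use footprint
    MinCount=1,
    MaxCount=1,
    KeyName="student-lab-key",         # hypothetical key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Purpose", "Value": "cybersec-lab"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```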
Full article
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
Open Access Article
Unlocking the Potential of Smart Environments Through Deep Learning
by Adnan Ramakić and Zlatko Bundalo
Computers 2025, 14(8), 296; https://doi.org/10.3390/computers14080296 - 22 Jul 2025
Abstract
This paper examines the potential of using artificial intelligence in smart environments. Various environments, such as houses and residential and commercial buildings, are becoming smarter through the use of various technologies, i.e., various sensors, smart devices and elements based on artificial intelligence. These technologies are used, for example, to achieve different levels of security, to provide personalized comfort and control, and for ambient assisted living. We investigated the deep learning approach and describe its use in this context. Accordingly, we developed four deep learning models: for hand gesture recognition, emotion recognition, face recognition and gait recognition. These models are intended for use in smart environments for various tasks. To present the possible applications of the models, a house is used as an example of a smart environment. The models were developed using the TensorFlow platform together with Keras. Four different datasets were used to train and validate the models. The results are promising and are presented in this paper.
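As a hedged sketch of what one such TensorFlow/Keras recognition model might look like (the paper's exact architectures are not given here), this minimal CNN classifies hand-gesture images; the input shape and class count are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Input shape and number of gesture classes are illustrative, not the paper's values.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),   # e.g., 10 hand gestures
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
```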
Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
Open Access Article
A Three-Dimensional Convolutional Neural Network for Dark Web Traffic Classification Based on Multi-Channel Image Deep Learning
by Junwei Li, Zhisong Pan and Kaolin Jiang
Computers 2025, 14(8), 295; https://doi.org/10.3390/computers14080295 - 22 Jul 2025
Abstract
Dark web traffic classification is an important research direction in cybersecurity; however, traditional classification methods have many limitations. Although deep learning architectures like CNN and LSTM, as well as multi-structural fusion frameworks, have demonstrated partial success, they remain constrained by shallow feature representation, localized decision boundaries, and poor generalization capacity. To improve the prediction accuracy and classification precision of dark web traffic, we propose a novel dark web traffic classification model integrating multi-channel image deep learning and a three-dimensional convolutional neural network (3D-CNN). The proposed framework leverages spatial–temporal feature fusion to enhance discriminative capability, while the 3D-CNN structure effectively captures complex traffic patterns across multiple dimensions. The experimental results show that, compared to common 2D-CNN and 1D-CNN classification models, the proposed method improves classification accuracy by 5.1% and 3.3%, respectively, while maintaining a smaller total number of parameters and feature recognition parameters, effectively reducing the computational complexity of the model. Comparative experiments validate the model's superiority in accuracy and computational efficiency over state-of-the-art methods, offering a promising solution for dark web traffic monitoring and security applications.
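The sketch below shows a minimal 3D-CNN of the kind the abstract describes, built in Keras on stacks of multi-channel traffic images; the tensor shapes, layer sizes, and class count are assumptions, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Each flow is rendered as a short stack of multi-channel images:
# (frames, height, width, channels); all dimensions are assumptions.
model = models.Sequential([
    layers.Input(shape=(8, 32, 32, 3)),
    layers.Conv3D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Conv3D(32, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling3D(),
    layers.Dense(8, activation="softmax"),   # e.g., 8 dark web traffic classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```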
Full article
Open Access Article
Enhanced Multi-Level Recommender System Using Turnover-Based Weighting for Predicting Regional Preferences
by Venkatesan Thillainayagam, Ramkumar Thirunavukarasu and J. Arun Pandian
Computers 2025, 14(7), 294; https://doi.org/10.3390/computers14070294 - 20 Jul 2025
Abstract
In the realm of recommender systems, the prediction of diverse customer preferences has emerged as a compelling research challenge, particularly for multi-state business organizations operating across various geographical regions. Collaborative filtering, a widely utilized recommendation technique, has demonstrated its efficacy in sectors such as e-commerce, tourism, hotel management, and entertainment-based customer services. In the item-based collaborative filtering approach, users’ evaluations of purchased items are considered uniformly, without assigning weight to the participatory data sources and users’ ratings. This approach results in the ‘relevance problem’ when assessing the generated recommendations. In such scenarios, filtering collaborative patterns based on regional and local characteristics, while emphasizing the significance of branches and user ratings, could enhance the accuracy of recommendations. This paper introduces a turnover-based weighting model utilizing a big data processing framework to mine multi-level collaborative filtering patterns. The proposed weighting model assigns weights to participatory data sources based on the turnover cost of the branches, where turnover refers to the revenue generated through total business transactions conducted by the branch. Furthermore, the proposed big data framework eliminates the forced integration of branch data into a centralized repository and avoids the complexities associated with data movement. To validate the proposed work, experimental studies were conducted using a benchmarking dataset, namely the ‘Movie Lens Dataset’. The proposed approach uncovers multi-level collaborative pattern bases, including global, sub-global, and local levels, with improved predicted ratings compared with results generated by traditional recommender systems. The findings of the proposed approach would be highly beneficial to the strategic management of an interstate business organization, enabling them to leverage regional implications from user preferences.
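A toy version of the turnover-based weighting idea: each branch's ratings are weighted by its revenue before aggregation, so high-turnover branches pull the predicted rating toward their users' preferences. The numbers, column names, and turnover figures below are invented for illustration.

```python
import pandas as pd

# Toy ratings from two branches; turnover acts as the data-source weight.
ratings = pd.DataFrame({
    "branch": ["A", "A", "B", "B"],
    "item":   ["i1", "i1", "i1", "i2"],
    "rating": [4.0, 5.0, 2.0, 4.0],
})
turnover = {"A": 9_000_000, "B": 1_000_000}   # hypothetical branch revenue

ratings["weight"] = ratings["branch"].map(turnover)
ratings["weighted"] = ratings["rating"] * ratings["weight"]
grouped = ratings.groupby("item")[["weighted", "weight"]].sum()
print(grouped["weighted"] / grouped["weight"])   # i1 is pulled toward branch A
```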
Full article
Open Access Article
Design of Identical Strictly and Rearrangeably Nonblocking Folded Clos Networks with Equally Sized Square Crossbars
by Yamin Li
Computers 2025, 14(7), 293; https://doi.org/10.3390/computers14070293 - 20 Jul 2025
Abstract
Clos networks and their folded versions, fat trees, are widely adopted in interconnection network designs for data centers and supercomputers. There are two main types of Clos networks: strictly nonblocking Clos networks and rearrangeably nonblocking Clos networks. Strictly nonblocking Clos networks can connect an idle input to an idle output without interfering with existing connections. Rearrangeably nonblocking Clos networks can connect an idle input to an idle output with rearrangements of existing connections. Traditional strictly nonblocking Clos networks have two drawbacks. One drawback is the use of crossbars with different numbers of input and output ports, whereas the currently available switches are square crossbars with the same number of input and output ports. Another drawback is that every connection goes through a fixed number of stages, increasing the length of the communication path. A drawback of traditional fat trees is that the root stage uses differently sized crossbar switches than the other stages. To solve these problems, this paper proposes an Identical Strictly NonBlocking folded Clos (ISNBC) network that uses equally sized square crossbars for all switches. Correspondingly, this paper also proposes an Identical Rearrangeably NonBlocking folded Clos (IRNBC) network. Both ISNBC and IRNBC networks can have any number of stages, can use equally sized square crossbars with no unused switch ports, and can utilize shortcut connections to reduce communication path lengths. Moreover, both ISNBC and IRNBC networks have a lower switch crosspoint cost ratio relative to a single crossbar than their corresponding traditional Clos networks. Specifically, ISNBC networks use 46.43% to 87.71% crosspoints of traditional strictly nonblocking folded Clos networks, and IRNBC networks use 53.85% to 60.00% crosspoints of traditional rearrangeably nonblocking folded Clos networks.
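For context, the classic 3-stage Clos crosspoint arithmetic that such cost comparisons build on can be reproduced in a few lines; the formulas below are the textbook ones for a symmetric C(m, n, r), not the paper's ISNBC/IRNBC constructions.

```python
def clos_crosspoints(n, r, strictly_nonblocking=True):
    """Crosspoint count of a classic 3-stage Clos network C(m, n, r).

    N = n * r endpoints; m = 2n - 1 middle switches give strict
    nonblocking, m = n gives rearrangeable nonblocking (textbook
    results, not the paper's folded constructions).
    """
    m = 2 * n - 1 if strictly_nonblocking else n
    # ingress stage + egress stage + middle stage
    return 2 * r * n * m + m * r * r

N = 64 * 64                      # endpoints for n = r = 64
single_crossbar = N * N          # cost of one flat N x N crossbar
for snb in (True, False):
    cost = clos_crosspoints(n=64, r=64, strictly_nonblocking=snb)
    print("SNB" if snb else "RNB", cost, round(cost / single_crossbar, 4))
```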
Full article
Open Access Article
A New AI Framework to Support Social-Emotional Skills and Emotion Awareness in Children with Autism Spectrum Disorder
by Andrea La Fauci De Leo, Pooneh Bagheri Zadeh, Kiran Voderhobli and Akbar Sheikh Akbari
Computers 2025, 14(7), 292; https://doi.org/10.3390/computers14070292 - 20 Jul 2025
Abstract
This research highlights the importance of Emotion Aware Technologies (EAT) and their implementation in serious games to assist children with Autism Spectrum Disorder (ASD) in developing social-emotional skills. As AI gains popularity, such tools can serve as invaluable teaching aids in mobile applications. In this paper, a new AI framework application is discussed that will help children with ASD develop efficient social-emotional skills. It uses the Jetpack Compose framework and the Google Cloud Vision API as emotion-aware technology. The framework is developed with two main features designed to help children reflect on their emotions, internalise them, and practice expressing them. Each activity is based on similar features from the literature, with enhanced functionalities. A diary feature allows children to take pictures of themselves, and the application categorises their facial expressions, saving each picture in the appropriate space. The three-level minigame consists of a series of prompts depicting a specific emotion that children have to match. The framework offers a good starting point for similar applications to be developed further, especially by training custom models to be used with ML Kit.
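A minimal sketch of the emotion-categorisation step, assuming the google-cloud-vision client's face_detection call and its emotion likelihood fields (the surrounding diary logic is the paper's own and not reproduced): it reads one photo and picks the strongest of the four emotion likelihoods the API reports. The file name is a placeholder.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("selfie.jpg", "rb") as f:      # hypothetical diary photo
    image = vision.Image(content=f.read())

# Assumes at least one face was detected in the photo.
face = client.face_detection(image=image).face_annotations[0]
likelihoods = {
    "joy": face.joy_likelihood,
    "sorrow": face.sorrow_likelihood,
    "anger": face.anger_likelihood,
    "surprise": face.surprise_likelihood,
}
# Likelihoods are ordered enum values, so the max is the strongest signal.
emotion = max(likelihoods, key=lambda k: likelihoods[k])
print(emotion, likelihoods[emotion])
```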
Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
Open Access Article
A Lightweight Intrusion Detection System for IoT and UAV Using Deep Neural Networks with Knowledge Distillation
by Treepop Wisanwanichthan and Mason Thammawichai
Computers 2025, 14(7), 291; https://doi.org/10.3390/computers14070291 - 19 Jul 2025
Abstract
Deep neural networks (DNNs) are highly effective for intrusion detection systems (IDS) due to their ability to learn complex patterns and detect potential anomalies within the systems. However, their high resource requirements, including memory and computation, make them difficult to deploy on low-powered platforms. This study explores the possibility of using knowledge distillation (KD) to reduce power and hardware consumption and improve real-time inference speed while maintaining high detection accuracy in IDS across all attack types. The technique transfers knowledge from a DNN (teacher) model to a more lightweight shallow neural network (student) model. KD achieves significant parameter reduction (92–95%) and faster inference speed (7–11%) while improving overall detection performance (up to 6.12%). Experimental results on datasets such as NSL-KDD, UNSW-NB15, CIC-IDS2017, IoTID20, and UAV IDS demonstrate the effectiveness of DNNs with KD in achieving high accuracy, precision, F1 score, and area under the curve (AUC) metrics. These findings confirm KD's potential as an edge computing strategy for IoT and UAV devices, suitable for resource-constrained environments and enabling real-time anomaly detection for next-generation distributed systems.
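A common form of the distillation objective the abstract refers to, sketched in TensorFlow: soft teacher targets at temperature T blended with hard-label cross-entropy. The temperature and weighting below are typical defaults, not the paper's settings.

```python
import tensorflow as tf

def distillation_loss(labels, student_logits, teacher_logits, T=4.0, alpha=0.3):
    """Blend hard-label cross-entropy with soft teacher targets.

    T softens both distributions; alpha weights the hard-label term.
    Both values are common defaults, not the study's configuration.
    """
    soft_teacher = tf.nn.softmax(teacher_logits / T)
    log_soft_student = tf.nn.log_softmax(student_logits / T)
    # KL-style term; the T**2 factor keeps gradient scale comparable.
    kd = -tf.reduce_sum(soft_teacher * log_soft_student, axis=-1) * T**2
    ce = tf.keras.losses.sparse_categorical_crossentropy(
        labels, student_logits, from_logits=True)
    return tf.reduce_mean(alpha * ce + (1.0 - alpha) * kd)
```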
Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
Open Access Article
A Forecasting Method for COVID-19 Epidemic Trends Using VMD and TSMixer-BiKSA Network
by Yuhong Li, Guihong Bi, Taonan Tong and Shirui Li
Computers 2025, 14(7), 290; https://doi.org/10.3390/computers14070290 - 18 Jul 2025
Abstract
The spread of COVID-19 is influenced by multiple factors, including control policies, virus characteristics, individual behaviors, and environmental conditions, exhibiting highly complex nonlinear dynamic features. The time series of new confirmed cases shows significant nonlinearity and non-stationarity. Traditional prediction methods that rely solely on one-dimensional case data struggle to capture the multi-dimensional features of the data and are limited in handling nonlinear and non-stationary characteristics. Their prediction accuracy and generalization capabilities remain insufficient, and most existing studies focus on single-step forecasting, with limited attention to multi-step prediction. To address these challenges, this paper proposes a multi-module fusion prediction model—TSMixer-BiKSA network—that integrates multi-feature inputs, Variational Mode Decomposition (VMD), and a dual-branch parallel architecture for 1- to 3-day-ahead multi-step forecasting of new COVID-19 cases. First, variables highly correlated with the target sequence are selected through correlation analysis to construct a feature matrix, which serves as one input branch. Simultaneously, the case sequence is decomposed using VMD to extract low-complexity, highly regular multi-scale modal components as the other input branch, enhancing the model’s ability to perceive and represent multi-source information. The two input branches are then processed in parallel by the TSMixer-BiKSA network model. Specifically, the TSMixer module employs a multilayer perceptron (MLP) structure to alternately model along the temporal and feature dimensions, capturing cross-time and cross-variable dependencies. The BiGRU module extracts bidirectional dynamic features of the sequence, improving long-term dependency modeling. The KAN module introduces hierarchical nonlinear transformations to enhance high-order feature interactions. Finally, the SA attention mechanism enables the adaptive weighted fusion of multi-source information, reinforcing inter-module synergy and enhancing the overall feature extraction and representation capability. Experimental results based on COVID-19 case data from Italy and the United States demonstrate that the proposed model significantly outperforms existing mainstream methods across various error metrics, achieving higher prediction accuracy and robustness.
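As an illustration of the VMD input branch, the sketch below decomposes a 1-D case series into K band-limited modes using the vmdpy package, assuming its VMD(f, alpha, tau, K, DC, init, tol) signature; the file name, K, and penalty settings are assumptions, as is the package choice.

```python
import numpy as np
from vmdpy import VMD   # assumes the vmdpy package and its VMD() signature

cases = np.loadtxt("daily_new_cases.csv")   # hypothetical 1-D case series

# Decompose into K low-complexity modes; these modes (plus correlated
# exogenous features) would then feed the forecasting network.
K = 5                                        # number of modes (assumption)
alpha, tau, DC, init, tol = 2000, 0.0, 0, 1, 1e-7
modes, modes_hat, omega = VMD(cases, alpha, tau, K, DC, init, tol)
print(modes.shape)                           # roughly (K, len(cases))
```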
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
Open Access Article
Blockchain-Based Decentralized Identity Management System with AI and Merkle Trees
by Hoang Viet Anh Le, Quoc Duy Nam Nguyen, Nakano Tadashi and Thi Hong Tran
Computers 2025, 14(7), 289; https://doi.org/10.3390/computers14070289 - 18 Jul 2025
Abstract
The Blockchain-based Decentralized Identity Management System (BDIMS) is an innovative framework designed for digital identity management, utilizing the unique attributes of blockchain technology. The BDIMS categorizes entities into three distinct groups: identity providers, service providers, and end-users. The system’s efficiency in identifying and extracting information from identification cards is enhanced by the integration of artificial intelligence (AI) algorithms. These algorithms decompose the extracted fields into smaller units, facilitating optical character recognition (OCR) and user authentication processes. By employing Merkle Trees, the BDIMS ensures secure authentication with service providers without the need to disclose any personal information. This advanced system empowers users to maintain control over their private information, ensuring its protection with maximum effectiveness and security. Experimental results confirm that the BDIMS effectively mitigates identity fraud while maintaining the confidentiality and integrity of sensitive data.
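A minimal sketch of the Merkle-tree idea: hash the OCR-extracted identity fields into leaves, fold them to a single root, and let a verifier check one field against that root without seeing the others. The field names below are hypothetical, and the proof-verification step is only outlined in the comment.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a Merkle tree built over hashed identity fields."""
    level = [h(leaf.encode()) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical ID-card fields extracted by the OCR stage.
fields = ["name:Alice", "dob:1990-01-01", "id:X123", "nationality:VN"]
root = merkle_root(fields)

# A service provider holding only `root` can verify one disclosed field
# from a proof (the sibling hashes along its path) without ever seeing
# the other fields, which is the privacy property the abstract describes.
```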
Full article
(This article belongs to the Special Issue Harnessing the Blockchain Technology in Unveiling Futuristic Applications)
Open Access Article
SKGRec: A Semantic-Enhanced Knowledge Graph Fusion Recommendation Algorithm with Multi-Hop Reasoning and User Behavior Modeling
by Siqi Xu, Ziqian Yang, Jing Xu and Ping Feng
Computers 2025, 14(7), 288; https://doi.org/10.3390/computers14070288 - 18 Jul 2025
Abstract
To address the limitations of existing knowledge graph-based recommendation algorithms, including insufficient utilization of semantic information and inadequate modeling of user behavior motivations, we propose SKGRec, a novel recommendation model that integrates knowledge graph and semantic features. The model constructs a semantic interaction graph (USIG) of user behaviors and employs a self-attention mechanism and a ranked optimization loss function to mine user interactions in fine-grained semantic associations. A relationship-aware aggregation module is designed to dynamically integrate higher-order relational features in the knowledge graph through the attention scoring function. In addition, a multi-hop relational path inference mechanism is introduced to capture long-distance dependencies to improve the depth of user interest modeling. Experiments on the Amazon-Book and Last-FM datasets show that SKGRec significantly outperforms several state-of-the-art recommendation algorithms on the Recall@20 and NDCG@20 metrics. Comparison experiments validate the effectiveness of semantic analysis of user behavior and multi-hop path inference, while cold-start experiments further confirm the robustness of the model in sparse-data scenarios. This study provides a new optimization approach for knowledge graph and semantic-driven recommendation systems, enabling more accurate capture of user preferences and alleviating the problem of noise interference.
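For reference, Recall@20 and NDCG@20, the metrics used in the evaluation, can be computed as below; this is the standard formulation, independent of SKGRec.

```python
import numpy as np

def recall_at_k(ranked_items, relevant, k=20):
    """Fraction of a user's held-out items that appear in the top-k list."""
    hits = len(set(ranked_items[:k]) & relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(ranked_items, relevant, k=20):
    """Binary-relevance NDCG: position-discounted hits over the ideal ordering."""
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal else 0.0

# ranked_items: the model's top-N item ids for a user;
# relevant: that user's held-out test items, as a set.
```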
Full article
Open Access Article
Machine Learning Techniques for Uncertainty Estimation in Dynamic Aperture Prediction
by Carlo Emilio Montanari, Robert B. Appleby, Davide Di Croce, Massimo Giovannozzi, Tatiana Pieloni, Stefano Redaelli and Frederik F. Van der Veken
Computers 2025, 14(7), 287; https://doi.org/10.3390/computers14070287 - 18 Jul 2025
Abstract
The dynamic aperture is an essential concept in circular particle accelerators, providing the extent of the phase space region where particle motion remains stable over multiple turns. The accurate prediction of the dynamic aperture is key to optimising performance in accelerators such as the CERN Large Hadron Collider and is crucial for designing future accelerators like the CERN Future Circular Hadron Collider. Traditional methods for computing the dynamic aperture are computationally demanding and involve extensive numerical simulations with numerous initial phase space conditions. In our recent work, we have devised surrogate models to predict the dynamic aperture boundary both efficiently and accurately. These models have been further refined by incorporating them into a novel active learning framework. This framework enhances performance through continual retraining and intelligent data generation based on informed sampling driven by error estimation. A critical attribute of this framework is the precise estimation of uncertainty in dynamic aperture predictions. In this study, we investigate various machine learning techniques for uncertainty estimation, including Monte Carlo dropout, bootstrap methods, and aleatory uncertainty quantification. We evaluated these approaches to determine the most effective method for reliable uncertainty estimation in dynamic aperture predictions using machine learning techniques.
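One of the techniques investigated, Monte Carlo dropout, is easy to sketch in Keras: keep dropout stochastic at inference and treat the spread of repeated predictions as the uncertainty. The helper below assumes a generic Keras model with Dropout layers, not the authors' surrogate models.

```python
import numpy as np
import tensorflow as tf

def mc_dropout_predict(model, x, n_samples=50):
    """Monte Carlo dropout: sample the network with dropout left active.

    Returns the predictive mean and standard deviation; the std serves as
    the uncertainty estimate attached to each prediction.
    """
    # Calling the model with training=True keeps dropout masks stochastic.
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

# `model` is any Keras regressor containing Dropout layers; n_samples
# trades compute for a smoother uncertainty estimate.
```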
Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
Open Access Article
Implementing Virtual Reality for Fire Evacuation Preparedness at Schools
by Rashika Tasnim Keya, Ilona Heldal, Daniel Patel, Pietro Murano and Cecilia Hammar Wijkmark
Computers 2025, 14(7), 286; https://doi.org/10.3390/computers14070286 - 18 Jul 2025
Abstract
Emergency preparedness training in organizations frequently involves simple evacuation drills triggered by fire alarms, limiting the opportunities for broader skill development. Digital technologies, particularly virtual reality (VR), offer promising methods to enhance learning for handling incidents and evacuations. However, implementing VR-based training remains challenging due to unclear integration strategies within organizational practices and a lack of empirical evidence of VR's effectiveness. This paper explores how VR-based training tools can be implemented in schools to enhance emergency preparedness among students, teachers, and staff. Following a design science research process, data were collected from a questionnaire-based study involving 12 participants and an exploratory study with 13 participants. The questionnaire-based study investigates initial attitudes and willingness to adopt VR training, while the exploratory study assesses the VR prototype's usability, realism, and perceived effectiveness for emergency preparedness training. Despite a limited sample size and technical constraints of the early prototype, findings indicate strong student enthusiasm for gamified and immersive learning experiences. Teachers emphasized the need for technical and instructional support to regularly utilize VR training modules, while firefighters acknowledged the potential of VR tools but also highlighted the critical importance of regular drills and professional validation. The relevance of utilizing VR in this context is further discussed in terms of how it can be integrated into university curricula and aligned with other accessible digital preparedness tools.
Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications (2nd Edition))
Open Access Article
Performance Evaluation and QoS Optimization of Routing Protocols in Vehicular Communication Networks Under Delay-Sensitive Conditions
by Alaa Kamal Yousif Dafhalla, Hiba Mohanad Isam, Amira Elsir Tayfour Ahmed, Ikhlas Saad Ahmed, Lutfieh S. Alhomed, Amel Mohamed essaket Zahou, Fawzia Awad Elhassan Ali, Duria Mohammed Ibrahim Zayan, Mohamed Elshaikh Elobaid and Tijjani Adam
Computers 2025, 14(7), 285; https://doi.org/10.3390/computers14070285 - 17 Jul 2025
Abstract
Vehicular Communication Networks (VCNs) are essential to intelligent transportation systems, where real-time data exchange between vehicles and infrastructure supports safety, efficiency, and automation. However, achieving high Quality of Service (QoS)—especially under delay-sensitive conditions—remains a major challenge due to the high mobility and dynamic topology of vehicular environments. While some efforts have explored routing protocol optimization, few have systematically compared multiple optimization approaches tailored to distinct traffic and delay conditions. This study addresses this gap by evaluating and enhancing two widely used routing protocols, QOS-AODV and GPSR, through their improved versions, CM-QOS-AODV and CM-GPSR. Two distinct optimization models are proposed: the Traffic-Oriented Model (TOM), designed to handle variable and high-traffic conditions, and the Delay-Efficient Model (DEM), focused on reducing latency for time-critical scenarios. Performance was evaluated using key QoS metrics: throughput (rate of successful data delivery), packet delivery ratio (PDR) (percentage of successfully delivered packets), and end-to-end delay (latency between sender and receiver). Simulation results reveal that TOM-optimized protocols achieve up to 10% higher PDR, maintain throughput above 0.40 Mbps, and reduce delay to as low as 0.01 s, making them suitable for applications such as collision avoidance and emergency alerts. DEM-based variants offer balanced, moderate improvements, making them better suited for general-purpose VCN applications. These findings underscore the importance of traffic- and delay-aware protocol design in developing robust, QoS-compliant vehicular communication systems.
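A small sketch of how the three QoS metrics can be derived from simulation packet logs; the log format (dicts keyed by packet id) is an assumption for illustration, not tied to any particular simulator.

```python
def qos_metrics(sent, received):
    """Compute PDR, throughput, and mean end-to-end delay from packet logs.

    sent: {packet_id: send_time_s}
    received: {packet_id: (recv_time_s, size_bits)}
    Assumes at least one packet was delivered.
    """
    delivered = [p for p in sent if p in received]
    pdr = len(delivered) / len(sent)                      # packet delivery ratio
    duration = max(t for t, _ in received.values()) - min(sent.values())
    throughput = sum(size for _, size in received.values()) / duration  # bits/s
    delay = sum(received[p][0] - sent[p] for p in delivered) / len(delivered)
    return pdr, throughput, delay
```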
Full article
(This article belongs to the Special Issue Application of Deep Learning to Internet of Things Systems)
Open Access Article
A Context-Aware Doorway Alignment and Depth Estimation Algorithm for Assistive Wheelchairs
by Shanelle Tennekoon, Nushara Wedasingha, Anuradhi Welhenge, Nimsiri Abhayasinghe and Iain Murray
Computers 2025, 14(7), 284; https://doi.org/10.3390/computers14070284 - 17 Jul 2025
Abstract
Navigating through doorways remains a daily challenge for wheelchair users, often leading to frustration, collisions, or dependence on assistance. These challenges highlight a pressing need for intelligent doorway detection algorithms for assistive wheelchairs that go beyond traditional object detection. This study presents the algorithmic development of a lightweight, vision-based doorway detection and alignment module with contextual awareness. It integrates channel and spatial attention, semantic feature fusion, unsupervised depth estimation, and doorway alignment that offers real-time navigational guidance to the wheelchair's control system. The model achieved a mean average precision of 95.8% and an F1 score of 93%, while maintaining low computational demands suitable for future deployment on embedded systems. By eliminating the need for depth sensors and enabling contextual awareness, this study offers a robust solution to improve indoor mobility and deliver actionable feedback to support safe and independent doorway traversal for wheelchair users.
Full article
(This article belongs to the Special Issue AI for Humans and Humans for AI (AI4HnH4AI))
Open Access Article
Comparative Analysis of Deep Learning Models for Intrusion Detection in IoT Networks
by Abdullah Waqas, Sultan Daud Khan, Zaib Ullah, Mohib Ullah and Habib Ullah
Computers 2025, 14(7), 283; https://doi.org/10.3390/computers14070283 - 17 Jul 2025
Abstract
The Internet of Things (IoT) holds transformative potential in fields such as power grid optimization, defense networks, and healthcare. However, the constrained processing capacities and resource limitations of IoT networks make them especially susceptible to cyber threats. This study addresses the problem of detecting intrusions in IoT environments by evaluating the performance of deep learning (DL) models under different data and algorithmic conditions. We conducted a comparative analysis of three widely used DL models—Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), and Bidirectional LSTM (biLSTM)—across four benchmark IoT intrusion detection datasets: BoTIoT, CiCIoT, ToNIoT, and WUSTL-IIoT-2021. Each model was assessed under balanced and imbalanced dataset configurations and evaluated using three loss functions (cross-entropy, focal loss, and dual focal loss). By analyzing model efficacy across these datasets, we highlight the importance of generalizability and adaptability to varied data characteristics, which are essential for real-world applications. The results demonstrate that the CNN trained using the cross-entropy loss function consistently outperforms the other models, particularly on balanced datasets. On the other hand, LSTM and biLSTM show strong potential in temporal modeling, but their performance is highly dependent on the characteristics of the dataset. By analyzing the performance of multiple DL models under diverse datasets, this research provides actionable insights for developing secure, interpretable IoT systems that can meet real-world security challenges.
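Of the loss functions compared, focal loss is the least standard; a common sparse-label form is sketched below in TensorFlow (gamma = 2 is a typical default, not necessarily the study's setting).

```python
import tensorflow as tf

def sparse_focal_loss(gamma=2.0):
    """Focal loss for integer labels: down-weights easy, well-classified
    examples so rare attack classes contribute more to the gradient."""
    def loss(y_true, logits):
        probs = tf.nn.softmax(logits)
        y_true = tf.cast(tf.reshape(y_true, [-1]), tf.int32)
        p_t = tf.gather(probs, y_true, batch_dims=1)   # prob of the true class
        return -tf.reduce_mean(tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t + 1e-8))
    return loss

# Hypothetical usage with a Keras classifier:
# model.compile(optimizer="adam", loss=sparse_focal_loss(gamma=2.0))
```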
Full article
(This article belongs to the Special Issue Application of Deep Learning to Internet of Things Systems)
Open Access Article
Enhanced Detection of Intrusion Detection System in Cloud Networks Using Time-Aware and Deep Learning Techniques
by Nima Terawi, Huthaifa I. Ashqar, Omar Darwish, Anas Alsobeh, Plamen Zahariev and Yahya Tashtoush
Computers 2025, 14(7), 282; https://doi.org/10.3390/computers14070282 - 17 Jul 2025
Abstract
This study introduces an enhanced Intrusion Detection System (IDS) framework for Denial-of-Service (DoS) attacks, utilizing network traffic inter-arrival time (IAT) analysis. By examining the timing between packets and other statistical features, we detected patterns of malicious activity, allowing early and effective DoS threat mitigation. We generate real DoS traffic, including normal, Internet Control Message Protocol (ICMP), Smurf attack, and Transmission Control Protocol (TCP) classes, and develop nine predictive algorithms, combining traditional machine learning and advanced deep learning techniques with optimization methods, including the synthetic minority oversampling technique (SMOTE) and grid search (GS). Our findings reveal that while traditional machine learning achieved moderate accuracy, it struggled with imbalanced datasets. In contrast, Deep Neural Network (DNN) models showed significant improvements with optimization, with DNN combined with GS (DNN-GS) reaching 89% accuracy. However, Recurrent Neural Networks (RNNs) combined with SMOTE and GS (RNN-SMOTE-GS) emerged as the best-performing model with a precision of 97%. This demonstrates the effectiveness of combining SMOTE and GS and highlights the critical role of advanced optimization techniques in enhancing the detection capabilities of IDS models for the accurate classification of various types of network traffic and attacks.
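A hedged sketch of the SMOTE-plus-grid-search recipe using scikit-learn and imbalanced-learn, with an MLP standing in for the study's DNN; the hyperparameter grid and feature description are illustrative assumptions.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# SMOTE inside an imblearn Pipeline is applied only during fit,
# so cross-validation folds are oversampled correctly.
pipe = Pipeline([
    ("smote", SMOTE(random_state=42)),       # oversample minority attack classes
    ("clf", MLPClassifier(max_iter=300)),    # shallow stand-in for the study's DNN
])
param_grid = {
    "clf__hidden_layer_sizes": [(64,), (128, 64)],
    "clf__alpha": [1e-4, 1e-3],
}
search = GridSearchCV(pipe, param_grid, scoring="f1_macro", cv=3)
# search.fit(X_train, y_train)   # X: IAT and statistical features per flow
# print(search.best_params_, search.best_score_)
```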
Full article
Open Access Article
One-Class Anomaly Detection for Industrial Applications: A Comparative Survey and Experimental Study
by Davide Paolini, Pierpaolo Dini, Ettore Soldaini and Sergio Saponara
Computers 2025, 14(7), 281; https://doi.org/10.3390/computers14070281 - 16 Jul 2025
Abstract
This article aims to evaluate the runtime effectiveness of various one-class classification (OCC) techniques for anomaly detection in an industrial scenario reproduced in a laboratory setting. To address the limitations posed by restricted access to proprietary data, the study explores OCC methods that learn solely from legitimate network traffic, without requiring labeled malicious samples. After analyzing major publicly available datasets, such as KDD Cup 1999 and TON-IoT, as well as the most widely used OCC techniques, a lightweight and modular intrusion detection system (IDS) was developed in Python. The system was tested in real time on an experimental platform based on Raspberry Pi, within a simulated client–server environment using the NFSv4 protocol over TCP/UDP. Several OCC models were compared, including One-Class SVM, Autoencoder, VAE, and Isolation Forest. The results showed strong performance in terms of detection accuracy and low latency, with the best outcomes achieved using the UNSW-NB15 dataset. The article concludes with a discussion of additional strategies to enhance the runtime analysis of these algorithms, offering insights into potential future applications and improvement directions.
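The one-class setup is easy to reproduce with scikit-learn: fit on legitimate traffic only, then flag deviations. The synthetic features below stand in for real network features; model choices and parameters are illustrative, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(1000, 8))          # stand-in for benign traffic features
X_live = np.vstack([rng.normal(size=(50, 8)),              # benign traffic
                    rng.normal(loc=6.0, size=(5, 8))])     # anomalous bursts

for model in (OneClassSVM(kernel="rbf", nu=0.01),
              IsolationForest(contamination=0.01, random_state=0)):
    model.fit(X_normal)                # train on legitimate traffic only
    verdict = model.predict(X_live)    # +1 = normal, -1 = anomaly
    print(type(model).__name__, int((verdict == -1).sum()), "flagged")
```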
Full article
(This article belongs to the Special Issue Intrusion Detection and Trust Provisioning in Edge-of-Things Environment)
Open Access Article
Carbon-Aware, Energy-Efficient, and SLA-Compliant Virtual Machine Placement in Cloud Data Centers Using Deep Q-Networks and Agglomerative Clustering
by Maraga Alex, Sunday O. Ojo and Fred Mzee Awuor
Computers 2025, 14(7), 280; https://doi.org/10.3390/computers14070280 - 15 Jul 2025
Abstract
The fast expansion of cloud computing has raised carbon emissions and energy usage in cloud data centers, making creative solutions for sustainable resource management all the more necessary. This work presents a new algorithm—Carbon-Aware, Energy-Efficient, and SLA-Compliant Virtual Machine Placement using Deep Q-Networks (DQNs) and Agglomerative Clustering (CARBON-DQN)—that intelligently balances environmental sustainability, service level agreement (SLA) compliance, and energy efficiency. The method combines carbon-aware data center profiling, the hierarchical clustering of virtual machines (VMs) depending on resource constraints, and a deep reinforcement learning model that learns optimal placement strategies over time. Extensive simulations show that CARBON-DQN significantly outperforms conventional and state-of-the-art algorithms such as GRVMP, NSGA-II, RLVMP, GMPR, and MORLVMP. Across many virtual machine configurations—including micro, small, high-CPU, and extra-large instances—it delivers the lowest carbon emissions, the fewest SLA violations, and the lowest energy usage. Driven by real-time input, the adaptive decision-making capacity of the algorithm allows it to react dynamically to changing data center conditions and workloads. These findings establish CARBON-DQN as a sustainable and intelligent virtual machine placement system for cloud environments. To further improve scalability, environmental impact, and practical applicability, future work will investigate the integration of renewable energy forecasts, dynamic pricing models, and deployment across multi-cloud and edge computing environments.
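The clustering half of the pipeline is straightforward to sketch with scikit-learn; the VM resource vectors below are invented for illustration, and the DQN placement stage that consumes the clusters is not shown.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical VM resource demands: [vCPUs, RAM (GB), disk (GB)].
vms = np.array([
    [1, 1, 10],   [1, 2, 20],     # micro / small
    [8, 4, 50],   [8, 8, 80],     # high-CPU
    [16, 64, 500],                # extra-large
])

# Group VMs by resource profile; each cluster can then be handled
# by the placement policy (here, hypothetically, a DQN agent).
labels = AgglomerativeClustering(n_clusters=3).fit_predict(vms)
print(labels)
```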
Full article
Topics
Topic in Applied Sciences, Automation, Computers, Electronics, Sensors, JCP, Mathematics
Intelligent Optimization, Decision-Making and Privacy Preservation in Cyber–Physical Systems
Topic Editors: Lijuan Zha, Jinliang Liu, Jian Liu; Deadline: 31 August 2025
Topic in Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja; Deadline: 31 October 2025
Topic in Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies and Applications
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee; Deadline: 31 December 2025
Topic in Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic, Sebastian Lapuschkin; Deadline: 31 January 2026

Conferences
Special Issues
Special Issue in Computers
Natural Language Processing (NLP) and Large Language Modelling
Guest Editor: Ming Liu; Deadline: 31 July 2025
Special Issue in Computers
IT in Production and Logistics
Guest Editors: Markus Rabe, Anne Antonia Scheidler, Marc Stautner, Simon J. E. Taylor; Deadline: 31 July 2025
Special Issue in Computers
Application of Deep Learning to Internet of Things Systems
Guest Editor: Rytis Maskeliunas; Deadline: 31 July 2025
Special Issue in Computers
Artificial Intelligence in Industrial IoT Applications
Guest Editor: Isidro Calvo; Deadline: 31 July 2025