Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.3 days after submission; accepted papers are published 3.8 days after acceptance (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Artificial Intelligence Approach for Waste-Printed Circuit Board Recycling: A Systematic Review
Computers 2025, 14(8), 304; https://doi.org/10.3390/computers14080304 - 27 Jul 2025
Abstract
The rapid advancement of technology has led to a substantial increase in Waste Electrical and Electronic Equipment (WEEE), which poses significant environmental threats and increases pressure on the planet’s limited natural resources. In response, Artificial Intelligence (AI) has emerged as a key enabler of the Circular Economy (CE), particularly in improving the speed and precision of waste sorting through machine learning and computer vision techniques. Despite this progress, to our knowledge, no comprehensive, systematic review has focused specifically on the role of AI in disassembling and recycling Waste-Printed Circuit Boards (WPCBs). This paper addresses this gap by systematically reviewing recent advancements in AI-driven disassembly and sorting approaches with a focus on machine learning and vision-based methodologies. The review is structured around three areas: (1) the availability and use of datasets for AI-based WPCB recycling; (2) state-of-the-art techniques for selective disassembly and component recognition to enable fast WPCB recycling; and (3) key challenges and possible solutions aimed at enhancing the recovery of critical raw materials (CRMs) from WPCBs.
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
Open Access Article
White Matter Microstructure Differences Between Congenital and Acquired Hearing Loss Patients Using Diffusion Tensor Imaging (DTI) and Machine Learning
by
Fatimah Kayla Kameela, Fikri Mirza Putranto, Prasandhya Astagiri Yusuf, Arierta Pujitresnani, Vanya Vabrina Valindria, Dodi Sudiana and Mia Rizkinia
Computers 2025, 14(8), 303; https://doi.org/10.3390/computers14080303 - 25 Jul 2025
Abstract
Diffusion tensor imaging (DTI) metrics provide insights into neural pathways, which can be pivotal in differentiating congenital and acquired hearing loss to support diagnosis, especially for those diagnosed late. In this study, we analyzed DTI parameters and developed machine learning models to classify these two patient groups. The study included 29 patients with congenital hearing loss and 6 with acquired hearing loss. DTI scans were performed to obtain metrics such as fractional anisotropy (FA), axial diffusivity (AD), radial diffusivity (RD), and mean diffusivity (MD). Statistical analyses based on p-values highlighted the cortical auditory system’s prominence in differentiating between groups, with FA and RD emerging as pivotal metrics. Three machine learning models were trained to classify hearing loss types for each of five dataset scenarios. Random forest (RF) trained on a dataset consisting of significant features demonstrated superior performance, achieving a specificity of 87.12% and F1 score of 96.88%. This finding highlights the critical role of DTI metrics in the classification of hearing loss. The experimental results also emphasized the critical role of FA in distinguishing between the two types of hearing loss, underscoring its potential clinical utility. DTI parameters, combined with machine learning, can effectively distinguish between congenital and acquired hearing loss, offering a robust tool for clinical diagnosis and treatment planning. Further research with larger and balanced cohorts is warranted to validate these findings.
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
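The per-metric group comparison the abstract describes (p-values over FA, RD, etc.) can be sketched with a plain Welch's t-statistic; the FA values below are invented toy numbers, not the study's data, and the study's actual statistical procedure may differ:

```python
import math

def welch_t(a, b):
    """Welch's t-statistic for two independent samples of unequal size/variance."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical fractional anisotropy (FA) values for one tract:
# the congenital group is given lower FA purely for illustration.
fa_congenital = [0.38, 0.41, 0.40, 0.37, 0.39, 0.42]
fa_acquired   = [0.48, 0.50, 0.47, 0.49]

t = welch_t(fa_congenital, fa_acquired)
print(round(t, 2))  # ≈ -9.0: a large-magnitude t flags FA as a candidate feature
```

A feature whose |t| clears a significance threshold would then be kept for the classifier, mirroring the "significant features" dataset scenario mentioned above.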
Open Access Article
Novel Models for the Warm-Up Phase of Recommendation Systems
by
Nourah AlRossais
Computers 2025, 14(8), 302; https://doi.org/10.3390/computers14080302 - 24 Jul 2025
Abstract
In the recommendation system (RS) literature, a distinction exists between studies dedicated to fully operational (known users/items) and cold-start (new users/items) RSs. The warm-up phase—the transition between the two—is not widely researched, despite evidence that attrition rates are highest for users and content providers during such periods. RS formulations, particularly deep learning models, do not easily allow for a warm-up phase. Herein, we propose two independent and complementary models to increase RS performance during the warm-up phase. The models apply to any cold-start RS expressible as a function of all user features, item features, and existing users’ preferences for existing items. We demonstrate substantial improvements: accuracy-oriented metrics improved by up to 14%, and non-accuracy-oriented metrics, including serendipity and fairness, by up to 12%, compared with not handling warm-up explicitly. The improvements were independent of the cold-start RS algorithm. Additionally, this paper introduces a method of examining the performance metrics of an RS during the warm-up phase as a function of the number of user–item interactions. We discuss problems such as data leakage and temporal consistency of training/testing—often neglected during the offline evaluation of RSs.
Full article
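The evaluation idea above, plotting a metric against the number of user–item interactions accumulated so far, can be sketched with a toy hit-rate computation; the interaction log is hypothetical, and processing events in time order is what avoids the data-leakage pitfall the abstract mentions:

```python
from collections import defaultdict

# Hypothetical log: (timestamp, user, item, hit), where hit = 1 means the
# recommender's top-N list contained the item the user actually chose.
log = [
    (1, "u1", "i1", 0), (2, "u1", "i2", 0), (3, "u1", "i3", 1),
    (4, "u2", "i1", 0), (5, "u1", "i4", 1), (6, "u2", "i5", 1),
    (7, "u2", "i2", 1), (8, "u1", "i6", 1),
]

def hit_rate_by_warmup(log):
    """Hit rate bucketed by how many interactions the user had accumulated
    *before* the current one (warm-up depth). Time order prevents leakage."""
    seen = defaultdict(int)       # interactions observed so far, per user
    buckets = defaultdict(list)
    for _, user, _, hit in sorted(log):
        buckets[seen[user]].append(hit)
        seen[user] += 1
    return {d: sum(h) / len(h) for d, h in sorted(buckets.items())}

print(hit_rate_by_warmup(log))
```

On this toy log the hit rate climbs with warm-up depth, which is the kind of curve such an analysis would surface.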
Open Access Article
A Hybrid Approach Using Graph Neural Networks and LSTM for Attack Vector Reconstruction
by
Yelizaveta Vitulyova, Tetiana Babenko, Kateryna Kolesnikova, Nikolay Kiktev and Olga Abramkina
Computers 2025, 14(8), 301; https://doi.org/10.3390/computers14080301 - 24 Jul 2025
Abstract
The escalating complexity of cyberattacks necessitates advanced strategies for their detection and mitigation. This study presents a hybrid model that integrates Graph Neural Networks (GNNs) with Long Short-Term Memory (LSTM) networks to reconstruct and predict attack vectors in cybersecurity. GNNs are employed to analyze the structural relationships within the MITRE ATT&CK framework, while LSTM networks are utilized to model the temporal dynamics of attack sequences, effectively capturing the evolution of cyber threats. The combined approach harnesses the complementary strengths of these methods to deliver precise, interpretable, and adaptable solutions for addressing cybersecurity challenges. Experimental evaluation on the CICIDS2017 dataset reveals the model’s strong performance, achieving an Area Under the Curve (AUC) of 0.99 on both balanced and imbalanced test sets, an F1-score of 0.85 for technique prediction, and a Mean Squared Error (MSE) of 0.05 for risk assessment. These findings underscore the model’s capability to accurately reconstruct attack paths and forecast future techniques, offering a promising avenue for strengthening proactive defense mechanisms against evolving cyber threats.
Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
Open Access Article
Assessing Blockchain Health Devices: A Multi-Framework Method for Integrating Usability and User Acceptance
by
Polina Bobrova and Paolo Perego
Computers 2025, 14(8), 300; https://doi.org/10.3390/computers14080300 - 23 Jul 2025
Abstract
Integrating blockchain into healthcare devices offers the potential for improved data control but faces significant usability and acceptance challenges. This study addresses this gap by evaluating CipherPal, an improved blockchain-enabled Smart Fidget Toy prototype, using a multi-framework approach to understand the interplay between technology, design, and user experience. We synthesized insights from three complementary frameworks: an expert review assessing adherence to Web3 Design Guidelines, a User Acceptance Toolkit assessment with professionals based on UTAUT2, and an extended three-day user testing study. The findings revealed that users valued CipherPal’s satisfying tactile interaction and perceived benefits for well-being, such as stress relief. However, significant usability barriers emerged, primarily related to challenging device–application connectivity and data synchronization. The multi-framework approach proved valuable in revealing these core tensions. While the device was conceptually accepted, the blockchain integration added significant interaction friction that overshadowed its potential benefits during the study. This research underscores the critical need for user-centered design in health-related blockchain applications, emphasizing that seamless usability and abstracting technical complexity are paramount for adoption.
Full article
(This article belongs to the Special Issue When Blockchain Meets IoT: Challenges and Potentials)
Open Access Review
EEG-Based Biometric Identification and Emotion Recognition: An Overview
by
Miguel A. Becerra, Carolina Duque-Mejia, Andres Castro-Ospina, Leonardo Serna-Guarín, Cristian Mejía and Eduardo Duque-Grisales
Computers 2025, 14(8), 299; https://doi.org/10.3390/computers14080299 - 23 Jul 2025
Abstract
This overview examines recent advancements in EEG-based biometric identification, focusing on integrating emotional recognition to enhance the robustness and accuracy of biometric systems. By leveraging the unique physiological properties of EEG signals, biometric systems can identify individuals based on neural responses. The overview discusses the influence of emotional states on EEG signals and the consequent impact on biometric reliability. It also evaluates recent emotion recognition techniques, including machine learning methods such as support vector machines (SVMs), convolutional neural networks (CNNs), and long short-term memory networks (LSTMs). Additionally, the role of multimodal EEG datasets in enhancing emotion recognition accuracy is explored. Findings from key studies are synthesized to highlight the potential of EEG for secure, adaptive biometric systems that account for emotional variability. This overview emphasizes the need for future research on resilient biometric identification that integrates emotional context, aiming to establish EEG as a viable component of advanced biometric technologies.
Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
Open Access Review
Deep Learning Techniques for Retinal Layer Segmentation to Aid Ocular Disease Diagnosis: A Review
by
Oliver Jonathan Quintana-Quintana, Marco Antonio Aceves-Fernández, Jesús Carlos Pedraza-Ortega, Gendry Alfonso-Francia and Saul Tovar-Arriaga
Computers 2025, 14(8), 298; https://doi.org/10.3390/computers14080298 - 22 Jul 2025
Abstract
Age-related ocular conditions like macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma are leading causes of irreversible vision loss globally. Optical coherence tomography (OCT) provides essential non-invasive visualization of retinal structures for early diagnosis, but manual analysis of these images is labor-intensive and prone to variability. Deep learning (DL) techniques have emerged as powerful tools for automating the segmentation of the retinal layer in OCT scans, potentially improving diagnostic efficiency and consistency. This review systematically evaluates the state of the art in DL-based retinal layer segmentation using the PRISMA methodology. We analyze various architectures (including CNNs, U-Net variants, GANs, and transformers), examine the characteristics and availability of datasets, discuss common preprocessing and data augmentation strategies, identify frequently targeted retinal layers, and compare performance evaluation metrics across studies. Our synthesis highlights significant progress, particularly with U-Net-based models, which often achieve Dice scores exceeding 0.90 for well-defined layers, such as the retinal pigment epithelium (RPE). However, it also identifies ongoing challenges, including dataset heterogeneity, inconsistent evaluation protocols, difficulties in segmenting specific layers (e.g., OPL, RNFL), and the need for improved clinical integration. This review provides a comprehensive overview of current strengths, limitations, and future directions to guide research towards more robust and clinically applicable automated segmentation tools for enhanced ocular disease diagnosis.
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
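The Dice scores quoted above compare a predicted layer mask against ground truth; a minimal sketch over binary masks (the toy 1-D masks stand in for one retinal layer's pixels):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flattened lists of 0/1 values."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # both empty: perfect agreement

truth = [0, 1, 1, 1, 1, 0, 0, 0]
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
print(dice(pred, truth))  # 0.75: 2*3 overlapping pixels / (4 + 4)
```

A score above 0.90, as reported for well-defined layers like the RPE, means the predicted and true masks overlap almost entirely.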
Open Access Article
Optimizing Cybersecurity Education: A Comparative Study of On-Premises and Cloud-Based Lab Environments Using AWS EC2
by
Adil Khan and Azza Mohamed
Computers 2025, 14(8), 297; https://doi.org/10.3390/computers14080297 - 22 Jul 2025
Abstract
The increasing complexity of cybersecurity risks highlights the critical need for novel teaching techniques that provide students with the necessary skills and information. Traditional on-premises laboratory setups frequently lack the scalability, flexibility, and accessibility necessary for efficient training in today’s dynamic world. This study compares the efficacy of cloud-based solutions—specifically, Amazon Web Services (AWS) Elastic Compute Cloud (EC2)—against traditional settings like VirtualBox, with the goal of determining their potential to improve cybersecurity education. The study conducts systematic experimentation to compare lab environments based on parameters such as lab completion time, CPU and RAM use, and ease of access. The results show that AWS EC2 outperforms VirtualBox by shortening lab completion times, optimizing resource usage, and providing greater remote accessibility. Additionally, the cloud-based strategy provides scalable, cost-effective implementation via a pay-per-use model, serving a wide range of pedagogical needs. These findings show that incorporating cloud technology into cybersecurity curricula can lead to more efficient, adaptable, and inclusive learning experiences, thereby boosting pedagogical methods in the field.
Full article
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
Open Access Article
Unlocking the Potential of Smart Environments Through Deep Learning
by
Adnan Ramakić and Zlatko Bundalo
Computers 2025, 14(8), 296; https://doi.org/10.3390/computers14080296 - 22 Jul 2025
Abstract
This paper examines the potential of using artificial intelligence in smart environments. Various environments, such as houses and residential and commercial buildings, are becoming smarter through the use of various technologies, i.e., sensors, smart devices, and elements based on artificial intelligence. These technologies are used, for example, to achieve different levels of security, to provide personalized comfort and control, and to support ambient assisted living. We investigated the deep learning approach and describe its use in this context. Accordingly, we developed four deep learning models, for hand gesture recognition, emotion recognition, face recognition, and gait recognition, intended for various tasks in smart environments. To present possible applications of the models, a house is used in this paper as an example of a smart environment. The models were developed using the TensorFlow platform together with Keras, and four different datasets were used to train and validate them. The results are promising and are presented in this paper.
Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
Open Access Article
A Three-Dimensional Convolutional Neural Network for Dark Web Traffic Classification Based on Multi-Channel Image Deep Learning
by
Junwei Li, Zhisong Pan and Kaolin Jiang
Computers 2025, 14(8), 295; https://doi.org/10.3390/computers14080295 - 22 Jul 2025
Abstract
Dark web traffic classification is an important research direction in cybersecurity; however, traditional classification methods have many limitations. Although deep learning architectures like CNN and LSTM, as well as multi-structural fusion frameworks, have demonstrated partial success, they remain constrained by shallow feature representation, localized decision boundaries, and poor generalization capacity. To improve prediction accuracy and classification precision for dark web traffic, we propose a novel classification model integrating multi-channel image deep learning and a three-dimensional convolutional neural network (3D-CNN). The proposed framework leverages spatial–temporal feature fusion to enhance discriminative capability, while the 3D-CNN structure effectively captures complex traffic patterns across multiple dimensions. Experimental results show that, compared to common 2D-CNN and 1D-CNN classification models, the proposed method improves classification accuracy by 5.1% and 3.3%, respectively, while maintaining a smaller total number of parameters and feature recognition parameters, effectively reducing the model’s computational complexity. Comparative experiments validate the model’s superiority in accuracy and computational efficiency over state-of-the-art methods, offering a promising solution for dark web traffic monitoring and security applications.
Full article
Open Access Article
Enhanced Multi-Level Recommender System Using Turnover-Based Weighting for Predicting Regional Preferences
by
Venkatesan Thillainayagam, Ramkumar Thirunavukarasu and J. Arun Pandian
Computers 2025, 14(7), 294; https://doi.org/10.3390/computers14070294 - 20 Jul 2025
Abstract
In the realm of recommender systems, the prediction of diverse customer preferences has emerged as a compelling research challenge, particularly for multi-state business organizations operating across various geographical regions. Collaborative filtering, a widely utilized recommendation technique, has demonstrated its efficacy in sectors such as e-commerce, tourism, hotel management, and entertainment-based customer services. In the item-based collaborative filtering approach, users’ evaluations of purchased items are considered uniformly, without assigning weight to the participatory data sources and users’ ratings. This approach results in the ‘relevance problem’ when assessing the generated recommendations. In such scenarios, filtering collaborative patterns based on regional and local characteristics, while emphasizing the significance of branches and user ratings, could enhance the accuracy of recommendations. This paper introduces a turnover-based weighting model utilizing a big data processing framework to mine multi-level collaborative filtering patterns. The proposed weighting model assigns weights to participatory data sources based on the turnover of each branch, where turnover refers to the revenue generated through the total business transactions conducted by the branch. Furthermore, the proposed big data framework eliminates the forced integration of branch data into a centralized repository and avoids the complexities associated with data movement. To validate the proposed work, experimental studies were conducted using a benchmark dataset, the MovieLens dataset. The proposed approach uncovers multi-level collaborative pattern bases, including global, sub-global, and local levels, with improved predicted ratings compared with those generated by traditional recommender systems. The findings would be highly beneficial to the strategic management of an interstate business organization, enabling it to leverage regional implications from user preferences.
Full article
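The turnover-based weighting idea can be illustrated with a toy aggregate in which each branch's rating of an item contributes in proportion to that branch's revenue; branch names and figures are hypothetical, and the paper's actual multi-level pattern mining is far richer than this single weighted mean:

```python
# Hypothetical per-branch ratings (1-5) for one item, and branch turnovers
# (revenue), which act as the weights of the participatory data sources.
branch_ratings  = {"north": 4.0, "south": 2.0, "metro": 5.0}
branch_turnover = {"north": 1_000_000, "south": 250_000, "metro": 2_750_000}

def turnover_weighted_rating(ratings, turnover):
    """Aggregate rating with each branch weighted by its share of turnover."""
    total = sum(turnover[b] for b in ratings)
    return sum(ratings[b] * turnover[b] / total for b in ratings)

print(round(turnover_weighted_rating(branch_ratings, branch_turnover), 4))
```

The high-revenue "metro" branch dominates the aggregate, which is the intended effect: preferences from branches doing more business count for more.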
Open Access Article
Design of Identical Strictly and Rearrangeably Nonblocking Folded Clos Networks with Equally Sized Square Crossbars
by
Yamin Li
Computers 2025, 14(7), 293; https://doi.org/10.3390/computers14070293 - 20 Jul 2025
Abstract
Clos networks and their folded versions, fat trees, are widely adopted in interconnection network designs for data centers and supercomputers. There are two main types of Clos networks: strictly nonblocking Clos networks and rearrangeably nonblocking Clos networks. Strictly nonblocking Clos networks can connect an idle input to an idle output without interfering with existing connections. Rearrangeably nonblocking Clos networks can connect an idle input to an idle output with rearrangements of existing connections. Traditional strictly nonblocking Clos networks have two drawbacks. One drawback is the use of crossbars with different numbers of input and output ports, whereas the currently available switches are square crossbars with the same number of input and output ports. Another drawback is that every connection goes through a fixed number of stages, increasing the length of the communication path. A drawback of traditional fat trees is that the root stage uses differently sized crossbar switches than the other stages. To solve these problems, this paper proposes an Identical Strictly NonBlocking folded Clos (ISNBC) network that uses equally sized square crossbars for all switches. Correspondingly, this paper also proposes an Identical Rearrangeably NonBlocking folded Clos (IRNBC) network. Both ISNBC and IRNBC networks can have any number of stages, can use equally sized square crossbars with no unused switch ports, and can utilize shortcut connections to reduce communication path lengths. Moreover, both ISNBC and IRNBC networks have a lower switch crosspoint cost ratio relative to a single crossbar than their corresponding traditional Clos networks. Specifically, ISNBC networks use 46.43% to 87.71% crosspoints of traditional strictly nonblocking folded Clos networks, and IRNBC networks use 53.85% to 60.00% crosspoints of traditional rearrangeably nonblocking folded Clos networks.
Full article
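For context on the crosspoint ratios quoted above, the classical three-stage Clos accounting can be computed directly. This is the textbook C(m, n, r) construction, not the paper's ISNBC/IRNBC designs: m >= 2n-1 middle switches give strict nonblocking, m >= n rearrangeable nonblocking.

```python
def clos_crosspoints(n, r, m):
    """Crosspoints of a 3-stage Clos network C(m, n, r): r ingress n*m
    switches, m middle r*r switches, and r egress m*n switches."""
    return 2 * r * n * m + m * r * r

def ratio_vs_crossbar(n, r, m):
    """Crosspoint cost relative to one N*N crossbar, N = n*r ports."""
    N = n * r
    return clos_crosspoints(n, r, m) / (N * N)

n, r = 8, 8                                 # a 64-port network
snb = ratio_vs_crossbar(n, r, 2 * n - 1)    # strictly nonblocking: m = 15
rnb = ratio_vs_crossbar(n, r, n)            # rearrangeable: m = 8
print(round(snb, 3), round(rnb, 3))         # 0.703 0.375
```

Even the classical construction beats a single crossbar at this size; the ISNBC/IRNBC networks described above reduce the crosspoint count further while using identically sized square switches.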
Open Access Article
A New AI Framework to Support Social-Emotional Skills and Emotion Awareness in Children with Autism Spectrum Disorder
by
Andrea La Fauci De Leo, Pooneh Bagheri Zadeh, Kiran Voderhobli and Akbar Sheikh Akbari
Computers 2025, 14(7), 292; https://doi.org/10.3390/computers14070292 - 20 Jul 2025
Abstract
This research highlights the importance of Emotion Aware Technologies (EAT) and their implementation in serious games to assist children with Autism Spectrum Disorder (ASD) in developing social-emotional skills. As AI gains popularity, such tools can serve as invaluable teaching aids in mobile applications. In this paper, a new AI framework application is discussed that helps children with ASD develop efficient social-emotional skills. It is built with the Jetpack Compose framework and uses the Google Cloud Vision API as its emotion-aware technology. The framework provides two main features designed to help children reflect on their emotions, internalise them, and practise expressing them. Each activity is based on similar features from the literature, with enhanced functionalities. A diary feature allows children to take pictures of themselves; the application categorises their facial expressions and saves each picture in the appropriate space. A three-level minigame presents a series of prompts depicting specific emotions that children have to match. The results offer a good starting point for similar applications to be developed further, especially by training custom models for use with ML Kit.
Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
Open Access Article
A Lightweight Intrusion Detection System for IoT and UAV Using Deep Neural Networks with Knowledge Distillation
by
Treepop Wisanwanichthan and Mason Thammawichai
Computers 2025, 14(7), 291; https://doi.org/10.3390/computers14070291 - 19 Jul 2025
Abstract
Deep neural networks (DNNs) are highly effective for intrusion detection systems (IDS) due to their ability to learn complex patterns and detect potential anomalies within a system. However, their high memory and computation requirements make them difficult to deploy on low-powered platforms. This study explores using knowledge distillation (KD) to reduce power and hardware requirements and improve real-time inference speed while maintaining high detection accuracy across all attack types. The technique transfers knowledge from a DNN (teacher) model to a more lightweight shallow neural network (student) model. KD achieved significant parameter reduction (92–95%) and faster inference (7–11%) while improving overall detection performance (by up to 6.12%). Experimental results on the NSL-KDD, UNSW-NB15, CIC-IDS2017, IoTID20, and UAV IDS datasets demonstrate the effectiveness of DNNs with KD in achieving high accuracy, precision, F1 score, and area under the curve (AUC). These findings confirm KD’s potential as an edge computing strategy for IoT and UAV devices, suitable for resource-constrained environments and real-time anomaly detection in next-generation distributed systems.
Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
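The teacher-to-student transfer described above is commonly implemented as a temperature-softened loss. A dependency-free sketch of a Hinton-style KD objective follows; the logits are toy values, and the temperature T and mixing weight alpha are illustrative defaults, not the paper's settings:

```python
import math

def softmax(logits, T=1.0):
    """Softmax over logits, optionally softened by temperature T."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, true_idx, T=4.0, alpha=0.5):
    """Blend of soft-target cross-entropy (teacher at temperature T, scaled
    by T^2) and hard-label cross-entropy: the usual KD objective."""
    soft_t = softmax(teacher_logits, T)
    soft_s = softmax(student_logits, T)
    soft_ce = -sum(t * math.log(s) for t, s in zip(soft_t, soft_s)) * T * T
    hard_ce = -math.log(softmax(student_logits)[true_idx])
    return alpha * soft_ce + (1 - alpha) * hard_ce

teacher = [6.0, 1.0, -2.0]   # toy teacher logits for 3 attack classes
student = [4.0, 0.5, -1.0]   # toy student logits
print(distillation_loss(student, teacher, true_idx=0))
```

Minimizing this loss pulls the small student toward both the ground-truth labels and the teacher's softened class probabilities, which is how the parameter reductions above are achieved without losing detection accuracy.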
Open Access Article
A Forecasting Method for COVID-19 Epidemic Trends Using VMD and TSMixer-BiKSA Network
by
Yuhong Li, Guihong Bi, Taonan Tong and Shirui Li
Computers 2025, 14(7), 290; https://doi.org/10.3390/computers14070290 - 18 Jul 2025
Abstract
The spread of COVID-19 is influenced by multiple factors, including control policies, virus characteristics, individual behaviors, and environmental conditions, exhibiting highly complex nonlinear dynamic features. The time series of new confirmed cases shows significant nonlinearity and non-stationarity. Traditional prediction methods that rely solely on one-dimensional case data struggle to capture the multi-dimensional features of the data and are limited in handling nonlinear and non-stationary characteristics. Their prediction accuracy and generalization capabilities remain insufficient, and most existing studies focus on single-step forecasting, with limited attention to multi-step prediction. To address these challenges, this paper proposes a multi-module fusion prediction model—TSMixer-BiKSA network—that integrates multi-feature inputs, Variational Mode Decomposition (VMD), and a dual-branch parallel architecture for 1- to 3-day-ahead multi-step forecasting of new COVID-19 cases. First, variables highly correlated with the target sequence are selected through correlation analysis to construct a feature matrix, which serves as one input branch. Simultaneously, the case sequence is decomposed using VMD to extract low-complexity, highly regular multi-scale modal components as the other input branch, enhancing the model’s ability to perceive and represent multi-source information. The two input branches are then processed in parallel by the TSMixer-BiKSA network model. Specifically, the TSMixer module employs a multilayer perceptron (MLP) structure to alternately model along the temporal and feature dimensions, capturing cross-time and cross-variable dependencies. The BiGRU module extracts bidirectional dynamic features of the sequence, improving long-term dependency modeling. The KAN module introduces hierarchical nonlinear transformations to enhance high-order feature interactions. 
Finally, the SA attention mechanism enables the adaptive weighted fusion of multi-source information, reinforcing inter-module synergy and enhancing the overall feature extraction and representation capability. Experimental results based on COVID-19 case data from Italy and the United States demonstrate that the proposed model significantly outperforms existing mainstream methods across various error metrics, achieving higher prediction accuracy and robustness.
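The correlation-based feature selection step described in this abstract can be illustrated with a minimal sketch (not the authors' code; the variable names, threshold, and toy data are assumptions):

```python
import numpy as np

def select_correlated_features(target, candidates, names, threshold=0.6):
    """Keep candidate series whose absolute Pearson correlation
    with the target sequence exceeds the threshold."""
    selected = []
    for series, name in zip(candidates, names):
        r = np.corrcoef(target, series)[0, 1]
        if abs(r) >= threshold:
            selected.append((name, r))
    return selected

# Toy example: one candidate tracks the target, one is pure noise.
rng = np.random.default_rng(0)
target = np.arange(100, dtype=float)
tracking = 2.0 * target + rng.normal(0, 5, 100)   # strongly correlated
noise = rng.normal(0, 1, 100)                      # uncorrelated
picked = select_correlated_features(target, [tracking, noise],
                                    ["mobility", "noise"])
print(picked)  # only the correlated series should pass the threshold
```

Selected series would then form the feature-matrix branch, alongside the VMD modal components, as the paper's two parallel inputs.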
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
Open Access Article
Blockchain-Based Decentralized Identity Management System with AI and Merkle Trees
by Hoang Viet Anh Le, Quoc Duy Nam Nguyen, Nakano Tadashi and Thi Hong Tran
Computers 2025, 14(7), 289; https://doi.org/10.3390/computers14070289 - 18 Jul 2025
Abstract
The Blockchain-based Decentralized Identity Management System (BDIMS) is an innovative framework designed for digital identity management, utilizing the unique attributes of blockchain technology. The BDIMS categorizes entities into three distinct groups: identity providers, service providers, and end-users. The system’s efficiency in identifying and extracting information from identification cards is enhanced by the integration of artificial intelligence (AI) algorithms. These algorithms decompose the extracted fields into smaller units, facilitating optical character recognition (OCR) and user authentication processes. By employing Merkle Trees, the BDIMS ensures secure authentication with service providers without the need to disclose any personal information. This advanced system empowers users to maintain control over their private information, ensuring its protection with maximum effectiveness and security. Experimental results confirm that the BDIMS effectively mitigates identity fraud while maintaining the confidentiality and integrity of sensitive data.
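The selective-disclosure idea behind the Merkle-tree authentication can be sketched in a few lines (an illustration only, not the BDIMS implementation; the field names and the SHA-256/duplicate-last-node conventions are assumptions): a user proves one identity field belongs to a committed root without revealing the other fields.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(leaves):
    """All tree levels, bottom-up, duplicating the last node on odd levels."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def merkle_proof(leaves, index):
    """Sibling hashes proving that leaf `index` belongs to the root."""
    proof = []
    for level in merkle_levels(leaves)[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, is_left)
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# Hypothetical identity fields extracted (e.g., by OCR) from an ID card.
fields = [b"name:Alice", b"dob:1990-01-01", b"id:12345", b"addr:Osaka"]
root = merkle_levels(fields)[-1][0]
proof = merkle_proof(fields, 2)        # prove the ID number only
print(verify(fields[2], proof, root))  # True, other fields stay private
```

The service provider checks the proof against the on-chain root; the undisclosed fields never leave the user's device.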
Full article
(This article belongs to the Special Issue Harnessing the Blockchain Technology in Unveiling Futuristic Applications)
Open Access Article
SKGRec: A Semantic-Enhanced Knowledge Graph Fusion Recommendation Algorithm with Multi-Hop Reasoning and User Behavior Modeling
by Siqi Xu, Ziqian Yang, Jing Xu and Ping Feng
Computers 2025, 14(7), 288; https://doi.org/10.3390/computers14070288 - 18 Jul 2025
Abstract
To address the limitations of existing knowledge graph-based recommendation algorithms, including insufficient utilization of semantic information and inadequate modeling of user behavior motivations, we propose SKGRec, a novel recommendation model that integrates knowledge graph and semantic features. The model constructs a semantic interaction graph (USIG) of user behaviors and employs a self-attention mechanism and a ranked optimization loss function to mine fine-grained semantic associations from user interactions. A relationship-aware aggregation module is designed to dynamically integrate higher-order relational features in the knowledge graph through the attention scoring function. In addition, a multi-hop relational path inference mechanism is introduced to capture long-distance dependencies to improve the depth of user interest modeling. Experiments on the Amazon-Book and Last-FM datasets show that SKGRec significantly outperforms several state-of-the-art recommendation algorithms on the Recall@20 and NDCG@20 metrics. Comparison experiments validate the effectiveness of semantic analysis of user behavior and multi-hop path inference, while cold-start experiments further confirm the robustness of the model in sparse-data scenarios. This study provides a new optimization approach for knowledge graph and semantic-driven recommendation systems, enabling more accurate capture of user preferences and alleviating the problem of noise interference.
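The two reported metrics follow their standard definitions with binary relevance; a small sketch makes them concrete (the toy item IDs and lists are invented, not the paper's data):

```python
import math

def recall_at_k(ranked, relevant, k=20):
    """Fraction of relevant items that appear in the top-k ranking."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked, relevant, k=20):
    """DCG of the ranking, normalised by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal

ranked = ["b1", "b7", "b3", "b9", "b2"]   # model's top-5 recommendations
relevant = {"b3", "b2", "b8"}             # items the user actually chose
print(recall_at_k(ranked, relevant, k=5))            # 2 of 3 retrieved
print(round(ndcg_at_k(ranked, relevant, k=5), 3))    # position-discounted
```

NDCG additionally rewards placing the relevant items near the top, which is why both metrics are usually reported together.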
Full article
Open Access Article
Machine Learning Techniques for Uncertainty Estimation in Dynamic Aperture Prediction
by Carlo Emilio Montanari, Robert B. Appleby, Davide Di Croce, Massimo Giovannozzi, Tatiana Pieloni, Stefano Redaelli and Frederik F. Van der Veken
Computers 2025, 14(7), 287; https://doi.org/10.3390/computers14070287 - 18 Jul 2025
Abstract
The dynamic aperture is an essential concept in circular particle accelerators, providing the extent of the phase space region where particle motion remains stable over multiple turns. The accurate prediction of the dynamic aperture is key to optimising performance in accelerators such as the CERN Large Hadron Collider and is crucial for designing future accelerators like the CERN Future Circular Hadron Collider. Traditional methods for computing the dynamic aperture are computationally demanding and involve extensive numerical simulations with numerous initial phase space conditions. In our recent work, we have devised surrogate models to predict the dynamic aperture boundary both efficiently and accurately. These models have been further refined by incorporating them into a novel active learning framework. This framework enhances performance through continual retraining and intelligent data generation based on informed sampling driven by error estimation. A critical attribute of this framework is the precise estimation of uncertainty in dynamic aperture predictions. In this study, we investigate various machine learning techniques for uncertainty estimation, including Monte Carlo dropout, bootstrap methods, and aleatory uncertainty quantification. We evaluated these approaches to determine the most effective method for reliable uncertainty estimation in dynamic aperture predictions using machine learning techniques.
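Of the techniques listed, the bootstrap is the simplest to sketch: retrain a surrogate on resampled data and take the spread of the ensemble's predictions as the uncertainty. Below is a toy illustration with a polynomial surrogate standing in for the actual dynamic-aperture model; all data is synthetic and the hyperparameters are assumptions:

```python
import numpy as np

def bootstrap_uncertainty(x, y, x_query, n_models=200, degree=3, seed=0):
    """Fit one polynomial surrogate per bootstrap resample and report
    the mean and standard deviation of the ensemble's predictions."""
    rng = np.random.default_rng(seed)
    preds = np.empty((n_models, len(x_query)))
    for m in range(n_models):
        idx = rng.integers(0, len(x), len(x))   # resample with replacement
        coeffs = np.polyfit(x[idx], y[idx], degree)
        preds[m] = np.polyval(coeffs, x_query)
    return preds.mean(axis=0), preds.std(axis=0)

# Synthetic stand-in for dynamic-aperture data: a smooth decay plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
y = np.exp(-3 * x) + rng.normal(0, 0.02, 60)
x_query = np.array([0.1, 0.5, 0.9, 1.2])        # last point extrapolates
mean, std = bootstrap_uncertainty(x, y, x_query)
print(std)  # the spread grows outside the sampled region
```

The same pattern applies to Monte Carlo dropout, with stochastic forward passes of one network replacing the resampled fits.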
Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
Open Access Article
Implementing Virtual Reality for Fire Evacuation Preparedness at Schools
by Rashika Tasnim Keya, Ilona Heldal, Daniel Patel, Pietro Murano and Cecilia Hammar Wijkmark
Computers 2025, 14(7), 286; https://doi.org/10.3390/computers14070286 - 18 Jul 2025
Abstract
Emergency preparedness training in organizations frequently involves simple evacuation drills triggered by fire alarms, limiting the opportunities for broader skill development. Digital technologies, particularly virtual reality (VR), offer promising methods to enhance learning for handling incidents and evacuations. However, implementing VR-based training remains challenging due to unclear integration strategies within organizational practices and a lack of empirical evidence of VR’s effectiveness. This paper explores how VR-based training tools can be implemented in schools to enhance emergency preparedness among students, teachers, and staff. Following a design science research process, data were collected from a questionnaire-based study involving 12 participants and an exploratory study with 13 participants. The questionnaire-based study investigates initial attitudes and willingness to adopt VR training, while the exploratory study assesses the VR prototype’s usability, realism, and perceived effectiveness for emergency preparedness training. Despite a limited sample size and technical constraints of the early prototype, findings indicate strong student enthusiasm for gamified and immersive learning experiences. Teachers emphasized the need for technical and instructional support to regularly utilize VR training modules, while firefighters acknowledged the potential of VR tools, but also highlighted the critical importance of regular drills and professional validation. The relevance of the results is further discussed in terms of how VR training can be integrated into university curricula and aligned with other accessible digital preparedness tools.
Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications (2nd Edition))
Open Access Article
Performance Evaluation and QoS Optimization of Routing Protocols in Vehicular Communication Networks Under Delay-Sensitive Conditions
by Alaa Kamal Yousif Dafhalla, Hiba Mohanad Isam, Amira Elsir Tayfour Ahmed, Ikhlas Saad Ahmed, Lutfieh S. Alhomed, Amel Mohamed essaket Zahou, Fawzia Awad Elhassan Ali, Duria Mohammed Ibrahim Zayan, Mohamed Elshaikh Elobaid and Tijjani Adam
Computers 2025, 14(7), 285; https://doi.org/10.3390/computers14070285 - 17 Jul 2025
Abstract
Vehicular Communication Networks (VCNs) are essential to intelligent transportation systems, where real-time data exchange between vehicles and infrastructure supports safety, efficiency, and automation. However, achieving high Quality of Service (QoS)—especially under delay-sensitive conditions—remains a major challenge due to the high mobility and dynamic topology of vehicular environments. While some efforts have explored routing protocol optimization, few have systematically compared multiple optimization approaches tailored to distinct traffic and delay conditions. This study addresses this gap by evaluating and enhancing two widely used routing protocols, QOS-AODV and GPSR, through their improved versions, CM-QOS-AODV and CM-GPSR. Two distinct optimization models are proposed: the Traffic-Oriented Model (TOM), designed to handle variable and high-traffic conditions, and the Delay-Efficient Model (DEM), focused on reducing latency for time-critical scenarios. Performance was evaluated using key QoS metrics: throughput (rate of successful data delivery), packet delivery ratio (PDR) (percentage of successfully delivered packets), and end-to-end delay (latency between sender and receiver). Simulation results reveal that TOM-optimized protocols achieve up to 10% higher PDR, maintain throughput above 0.40 Mbps, and reduce delay to as low as 0.01 s, making them suitable for applications such as collision avoidance and emergency alerts. DEM-based variants offer balanced, moderate improvements, making them better suited for general-purpose VCN applications. These findings underscore the importance of traffic- and delay-aware protocol design in developing robust, QoS-compliant vehicular communication systems.
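The three QoS metrics defined in the abstract can be computed directly from a packet trace, as in this minimal sketch (the trace format and values are hypothetical, not from the authors' simulations):

```python
def qos_metrics(packets, duration_s):
    """Compute PDR (%), throughput (Mbps), and mean end-to-end delay (s).
    Each trace entry: (sent_time_s, recv_time_s or None, size_bytes)."""
    delivered = [p for p in packets if p[1] is not None]
    pdr = 100.0 * len(delivered) / len(packets)
    throughput_mbps = sum(p[2] for p in delivered) * 8 / duration_s / 1e6
    mean_delay_s = sum(p[1] - p[0] for p in delivered) / len(delivered)
    return pdr, throughput_mbps, mean_delay_s

# Hypothetical trace: 4 of 5 packets delivered over a 1-second window.
trace = [
    (0.00, 0.01, 12500),
    (0.20, 0.22, 12500),
    (0.40, None, 12500),   # dropped en route
    (0.60, 0.61, 12500),
    (0.80, 0.83, 12500),
]
pdr, tput, delay = qos_metrics(trace, duration_s=1.0)
print(pdr, tput, delay)    # 80% PDR, 0.4 Mbps throughput
```

Comparing such traces under the TOM and DEM configurations is what yields the reported PDR, throughput, and delay differences.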
Full article
(This article belongs to the Special Issue Application of Deep Learning to Internet of Things Systems)
Topics
Topic in Applied Sciences, Automation, Computers, Electronics, Sensors, JCP, Mathematics
Intelligent Optimization, Decision-Making and Privacy Preservation in Cyber–Physical Systems
Topic Editors: Lijuan Zha, Jinliang Liu, Jian Liu; Deadline: 31 August 2025
Topic in Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja; Deadline: 31 October 2025
Topic in Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies and Applications
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee; Deadline: 31 December 2025
Topic in Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic, Sebastian Lapuschkin; Deadline: 31 January 2026

Special Issues
Special Issue in Computers
Application of Deep Learning to Internet of Things Systems
Guest Editor: Rytis Maskeliunas; Deadline: 31 July 2025
Special Issue in Computers
Natural Language Processing (NLP) and Large Language Modelling
Guest Editor: Ming Liu; Deadline: 31 July 2025
Special Issue in Computers
IT in Production and Logistics
Guest Editors: Markus Rabe, Anne Antonia Scheidler, Marc Stautner, Simon J. E. Taylor; Deadline: 31 July 2025
Special Issue in Computers
Artificial Intelligence in Industrial IoT Applications
Guest Editor: Isidro Calvo; Deadline: 31 July 2025