Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.3 days after submission; acceptance to publication takes 3.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Healthcare AI for Physician-Centered Decision-Making: Case Study of Applying Deep Learning to Aid Medical Professionals
Computers 2025, 14(8), 320; https://doi.org/10.3390/computers14080320 - 7 Aug 2025
Abstract
This paper aims to leverage artificial intelligence (AI) to assist physicians by integrating advanced deep learning models into the electronic health records (EHRs) of medical information systems (MISes), which have been in use for over 15 years in health centers across the Republic of Serbia. This paper presents a human-centered AI approach that emphasizes physician decision-making supported by AI models. The study presents two deep neural network (DNN) models developed and implemented in the EHR. Both models were based on data collected during the COVID-19 outbreak and were evaluated using five-fold cross-validation. The convolutional neural network (CNN), based on the pre-trained VGG19 architecture for classifying chest X-ray images, was trained on a publicly available smaller dataset containing 196 entries and achieved an average classification accuracy of 91.83 ± 2.82%. The DNN model for optimizing patient appointment scheduling was trained on a large dataset (341,569 entries) with a rich feature design extracted from the MIS, which is used daily in Serbia, achieving an average classification accuracy of 77.51 ± 0.70%. Both models show consistent performance and good generalization. The architecture of the realized MIS, including the positioning of the developed AI tools that encompass both models, is also presented in this study.
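The reported 91.83 ± 2.82% is a mean and spread over five cross-validation folds. A minimal sketch of that aggregation, with a simple contiguous fold splitter and hypothetical per-fold accuracies (the paper does not list them):

```python
import statistics

def kfold_indices(n, k):
    """Partition indices 0..n-1 into k near-equal contiguous folds (illustrative splitter)."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def summarize(fold_accuracies):
    """Mean and sample standard deviation across folds, as reported 'mean ± std'."""
    return statistics.mean(fold_accuracies), statistics.stdev(fold_accuracies)

folds = kfold_indices(196, 5)           # the X-ray dataset has 196 entries
accs = [94.9, 89.7, 92.3, 88.1, 94.2]   # hypothetical per-fold accuracies
mean, std = summarize(accs)
print(f"{mean:.2f} ± {std:.2f}%")
```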
Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
Open Access Article
Optimizing Data Pipelines for Green AI: A Comparative Analysis of Pandas, Polars, and PySpark for CO2 Emission Prediction
by Youssef Mekouar, Mohammed Lahmer and Mohammed Karim
Computers 2025, 14(8), 319; https://doi.org/10.3390/computers14080319 - 7 Aug 2025
Abstract
This study evaluates the performance and energy trade-offs of three popular data processing libraries—Pandas, PySpark, and Polars—applied to GreenNav, a CO2 emission prediction pipeline for urban traffic. GreenNav is an eco-friendly navigation app designed to predict CO2 emissions and determine low-carbon routes using a hybrid CNN-LSTM model integrated into a complete pipeline for the ingestion and processing of large, heterogeneous geospatial and road data. Our study quantifies the end-to-end execution time, cumulative CPU load, and maximum RAM consumption for each library when applied to the GreenNav pipeline; it then converts these metrics into energy consumption and CO2 equivalents. Experiments conducted on datasets ranging from 100 MB to 8 GB demonstrate that Polars in lazy mode offers substantial gains, reducing the processing time by a factor of more than twenty, memory consumption by about two-thirds, and energy consumption by about 60%, while maintaining the predictive accuracy of the model (R2 ≈ 0.91). These results clearly show that the careful selection of data processing libraries can reconcile high computing performance and environmental sustainability in large-scale machine learning applications.
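The study converts runtime, CPU load, and RAM figures into energy and CO2 equivalents. A minimal sketch of such a conversion, in which the CPU power draw, utilisation figures, and grid emission factor are assumptions for illustration, not the paper's measurements:

```python
def energy_kwh(runtime_s, avg_cpu_util, tdp_watts):
    """Approximate energy as average power draw x wall-clock time, in kWh."""
    return tdp_watts * avg_cpu_util * runtime_s / 3_600_000  # W*s -> kWh

def co2_grams(kwh, grid_gco2_per_kwh=475.0):
    """Convert energy to CO2-equivalents via an assumed grid emission factor."""
    return kwh * grid_gco2_per_kwh

# Hypothetical pipeline runs: (runtime in seconds, average CPU utilisation)
runs = {"pandas": (1200.0, 0.65), "polars-lazy": (55.0, 0.80)}
for lib, (t, util) in runs.items():
    e = energy_kwh(t, util, tdp_watts=65.0)
    print(f"{lib}: {e * 1000:.1f} Wh, {co2_grams(e):.1f} g CO2e")
```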
Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
Open Access Review
Integrating Large Language Models into Digital Manufacturing: A Systematic Review and Research Agenda
by Chourouk Ouerghemmi and Myriam Ertz
Computers 2025, 14(8), 318; https://doi.org/10.3390/computers14080318 - 7 Aug 2025
Abstract
Industry 4.0 and 5.0 build on technological advances, notably large language models (LLMs), which are making a significant contribution to the transition to smart factories. Although considerable research has explored this phenomenon, the literature remains fragmented and lacks an integrative framework highlighting the multifaceted implications of using LLMs in digital manufacturing. To address this limitation, we conducted a systematic literature review, analyzing 53 papers selected according to predefined inclusion and exclusion criteria. Our descriptive and thematic analyses, respectively, mapped new trends and identified emerging themes, classified into three axes: (1) manufacturing process optimization, (2) data structuring and innovation, and (3) human–machine interaction and ethical challenges. Our results revealed that LLMs can enhance operational performance and foster innovation while redistributing human roles. Our research offers an in-depth understanding of the implications of LLMs. Finally, we propose a research agenda to guide future studies.
Full article
(This article belongs to the Special Issue AI in Complex Engineering Systems)
Open Access Article
Ensuring Zero Trust in GDPR-Compliant Deep Federated Learning Architecture
by Zahra Abbas, Sunila Fatima Ahmad, Adeel Anjum, Madiha Haider Syed, Saif Ur Rehman Malik and Semeen Rehman
Computers 2025, 14(8), 317; https://doi.org/10.3390/computers14080317 - 4 Aug 2025
Abstract
Deep Federated Learning (DFL) revolutionizes machine learning (ML) by enabling collaborative model training across diverse, decentralized data sources without direct data sharing, emphasizing user privacy and data sovereignty. Despite its potential, DFL’s application in sensitive sectors is hindered by the challenge of meeting rigorous standards like the GDPR, with traditional setups struggling to ensure compliance and maintain trust. Addressing these issues, our research introduces an innovative Zero Trust-based DFL architecture designed for GDPR-compliant systems, integrating advanced security and privacy mechanisms to ensure safe and transparent cross-node data processing. Our base paper proposed the basic GDPR-compliant DFL architecture; here, we validate that architecture by formally verifying it using High-Level Petri Nets (HLPNs). This Zero Trust-based framework facilitates secure, decentralized model training without direct data sharing. Furthermore, we have implemented a case study using the MNIST and CIFAR-10 datasets to compare the proposed Zero Trust-based DFL methodology with the existing approach. Our experiments confirmed its effectiveness in enhancing trust, complying with the GDPR, and promoting DFL adoption in privacy-sensitive areas, achieving secure, ethical Artificial Intelligence (AI) with transparent and efficient data processing.
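The "collaborative model training without direct data sharing" at the heart of this abstract is commonly realized by federated averaging. A generic sketch (not the paper's architecture) in which only weight vectors, never raw data, leave a client:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg step: average client model weights, weighted by local dataset size.

    Raw training data never leaves a client; only the weight vectors are shared
    with the aggregator, which is what preserves data sovereignty.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with different amounts of local data (toy 3-parameter model)
w_a, w_b = [0.2, 0.4, 0.6], [0.8, 0.0, 0.2]
global_w = federated_average([w_a, w_b], client_sizes=[100, 300])
print(global_w)
```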
Full article
Open Access Review
Multi-Objective Evolutionary Algorithms in Waste Disposal Systems: A Comprehensive Review of Applications, Case Studies, and Future Directions
by Saad Talal Alharbi
Computers 2025, 14(8), 316; https://doi.org/10.3390/computers14080316 - 4 Aug 2025
Abstract
Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful optimization tools for addressing the complex, often conflicting goals present in modern waste disposal systems. This review explores recent advances and practical applications of MOEAs in key areas, including waste collection routing, waste-to-energy (WTE) systems, and facility location and allocation. Real-world case studies from cities like Braga, Lisbon, Uppsala, and Cyprus demonstrate how MOEAs can enhance operational efficiency, boost energy recovery, and reduce environmental impacts. While these algorithms offer significant advantages, challenges remain in computational complexity, adapting to dynamic environments, and integrating with emerging technologies. Future research directions highlight the potential of combining MOEAs with machine learning and real-time data to create more flexible and responsive waste management strategies. By leveraging these advancements, MOEAs can play a pivotal role in developing sustainable, efficient, and adaptive waste disposal systems capable of meeting the growing demands of urbanization and stricter environmental regulations.
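The "conflicting goals" that MOEAs optimize are formalized by Pareto dominance. A minimal sketch of extracting the non-dominated set that an MOEA approximates, with hypothetical cost/emission pairs standing in for waste-collection routes:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only non-dominated solutions -- the trade-off set every MOEA tries to approximate."""
    return [s for s in solutions if not any(dominates(o, s) for o in solutions if o != s)]

# Hypothetical waste-collection routes: (travel cost, emissions), both to be minimized
routes = [(10, 9), (8, 12), (12, 7), (9, 10), (11, 11)]
print(pareto_front(routes))
```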
Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
Open Access Article
Lexicon-Based Random Substitute and Word-Variant Voting Models for Detecting Textual Adversarial Attacks
by Tarik El Lel, Mominul Ahsan and Majid Latifi
Computers 2025, 14(8), 315; https://doi.org/10.3390/computers14080315 - 2 Aug 2025
Abstract
Adversarial attacks in Natural Language Processing (NLP) present a critical challenge, particularly in sentiment analysis, where subtle input modifications can significantly alter model predictions. In search of more robust defenses against adversarial attacks on sentiment analysis, this research work introduces two novel defense mechanisms: the Lexicon-Based Random Substitute Model (LRSM) and the Word-Variant Voting Model (WVVM). LRSM employs randomized substitutions from a dataset-specific lexicon to generate diverse input variations, disrupting adversarial strategies by introducing unpredictability. Unlike traditional defenses requiring synonym dictionaries or precomputed semantic relationships, LRSM directly substitutes words with random lexicon alternatives, reducing overhead while maintaining robustness. Notably, LRSM not only neutralizes adversarial perturbations but occasionally surpasses the original accuracy by correcting inherent model misclassifications. Building on LRSM, WVVM integrates LRSM, Frequency-Guided Word Substitution (FGWS), and Synonym Random Substitution and Voting (RS&V) in an ensemble framework that adaptively combines their outputs. Logistic Regression (LR) emerged as the optimal ensemble configuration, leveraging its regularization parameters to balance the contributions of individual defenses. WVVM consistently outperformed standalone defenses, demonstrating superior restored accuracy and F1 scores across adversarial scenarios. The proposed defenses were evaluated on two well-known sentiment analysis benchmarks: the IMDB Sentiment Dataset and the Yelp Polarity Dataset. The IMDB dataset, comprising 50,000 labeled movie reviews, and the Yelp Polarity dataset, containing labeled business reviews, provided diverse linguistic challenges for assessing adversarial robustness. Both datasets were tested using 4000 adversarial examples generated by established attacks, including Probability Weighted Word Saliency, TextFooler, and BERT-based Adversarial Examples. WVVM and LRSM demonstrated superior performance in restoring accuracy and F1 scores across both datasets, with WVVM excelling through its ensemble learning framework. LRSM improved restored accuracy from 75.66% to 83.7% when compared to the second-best individual model, RS&V, while the Support Vector Classifier WVVM variation further improved restored accuracy to 93.17%. Logistic Regression WVVM achieved an F1 score of 86.26% compared to 76.80% for RS&V. These findings establish LRSM and WVVM as robust frameworks for defending against adversarial text attacks in sentiment analysis.
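The substitute-and-vote idea behind these LRSM/RS&V-style defenses can be sketched as follows; the toy lexicon, cue-word classifier, and parameters are illustrative stand-ins, not the paper's models:

```python
import random
from collections import Counter

def random_substitute(tokens, lexicon, rate, rng):
    """Replace each token with a random lexicon word with probability `rate`."""
    return [rng.choice(lexicon) if rng.random() < rate else t for t in tokens]

def vote_predict(classify, tokens, lexicon, n_variants=15, rate=0.25, seed=0):
    """Classify several randomized variants of the input and return the majority label.

    The randomness disrupts word-level adversarial perturbations, which were
    crafted against the exact original token sequence.
    """
    rng = random.Random(seed)
    votes = Counter(
        classify(random_substitute(tokens, lexicon, rate, rng))
        for _ in range(n_variants)
    )
    return votes.most_common(1)[0][0]

# Toy sentiment classifier: counts positive vs negative cue words.
POS, NEG = {"great", "good", "fine"}, {"bad", "awful", "dull"}
def classify(tokens):
    score = sum(t in POS for t in tokens) - sum(t in NEG for t in tokens)
    return "pos" if score >= 0 else "neg"

adversarial = ["movie", "was", "dull", "dull", "good"]  # perturbed toward "neg"
print(vote_predict(classify, adversarial, lexicon=["film", "plot", "good"]))
```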
Full article
(This article belongs to the Special Issue When Natural Language Processing Meets Machine Learning—Opportunities, Challenges and Solutions)
Open Access Article
Multimodal Detection of Emotional and Cognitive States in E-Learning Through Deep Fusion of Visual and Textual Data with NLP
by Qamar El Maazouzi and Asmaa Retbi
Computers 2025, 14(8), 314; https://doi.org/10.3390/computers14080314 - 2 Aug 2025
Abstract
In distance learning environments, learner engagement directly impacts attention, motivation, and academic performance. Signs of fatigue, negative affect, or critical remarks can warn of growing disengagement and potential dropout. However, most existing approaches rely on a single modality, visual or text-based, without providing a general view of learners’ cognitive and affective states. We propose a multimodal system that integrates three complementary analyses: (1) a CNN-LSTM model augmented with warning signs such as PERCLOS and yawning frequency for fatigue detection, (2) facial emotion recognition using EmoNet and an LSTM to handle temporal dynamics, and (3) sentiment analysis of feedback with a fine-tuned BERT model. The system was evaluated on three public benchmarks: DAiSEE for fatigue, AffectNet for emotion, and MOOC Review (Coursera) for sentiment analysis. The results show a precision of 88.5% for fatigue detection, 70% for emotion detection, and 91.5% for sentiment analysis. Aggregating these cues enables an accurate identification of disengagement periods and triggers individualized pedagogical interventions. These results, although based on independently sourced datasets, demonstrate the feasibility of an integrated approach to detecting disengagement and open the door to emotionally intelligent learning systems, with potential future work in real-time content personalization and adaptive learning assistance.
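PERCLOS, one of the fatigue cues named in the abstract, is the proportion of recent frames in which the eyes are mostly closed. A minimal sketch, with an assumed 80% closure threshold and window size (the paper's exact parameters are not given here):

```python
from collections import deque

class PerclosMonitor:
    """Track PERCLOS: the fraction of recent frames in which the eyes are mostly closed.

    `closure` is per-frame eyelid closure in [0, 1]; frames at or above
    `closed_threshold` count as closed (80% is a common convention).
    """
    def __init__(self, window_frames=900, closed_threshold=0.8):
        self.window = deque(maxlen=window_frames)
        self.closed_threshold = closed_threshold

    def update(self, closure):
        self.window.append(closure >= self.closed_threshold)
        return sum(self.window) / len(self.window)

monitor = PerclosMonitor(window_frames=10)
readings = [0.1, 0.9, 0.85, 0.2, 0.95, 0.1, 0.9, 0.3, 0.88, 0.92]
perclos = [monitor.update(c) for c in readings][-1]
print(f"PERCLOS = {perclos:.2f}")  # 6 of the last 10 frames count as closed
```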
Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
Open Access Review
A Map of the Research About Lighting Systems in the 1995–2024 Time Frame
by Gaetanino Paolone, Andrea Piazza, Francesco Pilotti, Romolo Paesani, Jacopo Camplone and Paolino Di Felice
Computers 2025, 14(8), 313; https://doi.org/10.3390/computers14080313 - 1 Aug 2025
Abstract
Lighting Systems (LSs) are a key component of modern cities. Across the years, thousands of articles have been published on this topic; nevertheless, a map of the state of the art of the extant literature is lacking. The present review reports on an analysis of the network of the co-occurrences of the authors’ keywords from 12,148 Scopus-indexed articles on LSs published between 1995 and 2024. This review addresses the following research questions: (RQ1) What are the major topics explored by scholars in connection with LSs within the 1995–2024 time frame? (RQ2) How do they group together? The investigation leveraged VOSviewer, open-source software widely used for performing bibliometric analyses. The number of thematic clusters returned by VOSviewer was determined by the value of the minimum number of occurrences needed for the authors’ keywords to be admitted into the analysis. If such a number is not chosen properly, the consequence is a set of clusters that do not represent meaningful patterns in the input dataset. In the present study, to overcome this issue, the threshold value balanced the score of four independent clustering validity indices against the authors’ judgment of a meaningful partition of the input dataset. In addition, our review delved into the impact that the use or non-use of a thesaurus of the authors’ keywords had on the number and composition of the thematic clusters returned by VOSviewer and, ultimately, on how this choice affected the correctness of the interpretation of the clusters. The study adhered to a well-known protocol, whose implementation is reported in detail. Thus, the workflow is transparent and replicable.
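The keyword co-occurrence network and the occurrence threshold that drives the clustering can be sketched as follows; the keyword lists are invented, and the clustering step itself (VOSviewer's) is omitted:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(papers, min_occurrences):
    """Build a VOSviewer-style keyword co-occurrence network.

    Keywords below the occurrence threshold are dropped before edges are
    counted; this threshold is what determines how many clusters emerge later.
    """
    occurrences = Counter(kw for kws in papers for kw in kws)
    kept = {kw for kw, n in occurrences.items() if n >= min_occurrences}
    edges = Counter()
    for kws in papers:
        for a, b in combinations(sorted(set(kws) & kept), 2):
            edges[(a, b)] += 1  # one co-occurrence per paper mentioning both
    return kept, edges

# Hypothetical author-keyword lists from four articles
papers = [
    ["led", "street lighting", "energy saving"],
    ["led", "street lighting"],
    ["led", "smart city"],
    ["street lighting", "smart city"],
]
kept, edges = cooccurrence_network(papers, min_occurrences=2)
print(sorted(kept), edges[("led", "street lighting")])
```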
Full article
Open Access Article
User-Centered Design of a Computer Vision System for Monitoring PPE Compliance in Manufacturing
by Luis Alberto Trujillo-Lopez, Rodrigo Alejandro Raymundo-Guevara and Juan Carlos Morales-Arevalo
Computers 2025, 14(8), 312; https://doi.org/10.3390/computers14080312 - 1 Aug 2025
Abstract
In manufacturing environments, the proper use of Personal Protective Equipment (PPE) is essential to prevent workplace accidents. Despite this need, existing PPE monitoring methods remain largely manual and suffer from limited coverage, significant errors, and inefficiencies. This article addresses this deficiency by designing a computer vision desktop application for automated monitoring of PPE use. The system uses lightweight YOLOv8 models, developed to run locally and operate even in industrial locations with limited network connectivity. Using a Lean UX approach, the development of the system involved creating empathy maps, assumptions, and a product backlog, followed by high-fidelity prototype interface components. C4 and physical diagrams helped define the system architecture to facilitate modifiability, scalability, and maintainability. Usability was verified using the System Usability Scale (SUS), with a score of 87.6/100 indicating “excellent” usability. The findings demonstrate that a user-centered design approach, considering user experience and technical flexibility, can significantly advance the utility and adoption of AI-based safety tools, especially in small- and medium-sized manufacturing operations. This article delivers a validated, user-centered design solution for integrating machine vision systems into manufacturing safety processes, simplifying the complexities of applying advanced AI technologies in resource-limited environments.
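The 87.6/100 figure comes from the standard SUS scoring formula: odd (positive) items contribute their rating minus one, even (negative) items contribute five minus their rating, and the sum is scaled by 2.5. A sketch with one hypothetical respondent's ratings:

```python
def sus_score(responses):
    """System Usability Scale: ten items rated 1-5, scored to a 0-100 scale."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings on a 1-5 scale")
    raw = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return raw * 2.5

# One hypothetical respondent's ratings for items 1..10
print(sus_score([5, 2, 4, 1, 5, 2, 4, 1, 5, 2]))  # → 87.5
```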
Full article
(This article belongs to the Topic Visual Computing and Understanding: New Developments and Trends)
Open Access Article
Ontology-Based Data Pipeline for Semantic Reaction Classification and Research Data Management
by Hendrik Borgelt, Frederick Gabriel Kitel and Norbert Kockmann
Computers 2025, 14(8), 311; https://doi.org/10.3390/computers14080311 - 1 Aug 2025
Abstract
Catalysis research is complex and interdisciplinary, involving diverse physical effects and challenging data practices. Research data often captures only selected aspects, such as specific reactants and products, limiting its utility for machine learning and the implementation of FAIR (Findable, Accessible, Interoperable, Reusable) workflows. To improve this, semantic structuring through ontologies is essential. This work extends established ontologies by refining logical relations and integrating semantic tools such as the Web Ontology Language and the Shape Constraint Language. It incorporates application programming interfaces from chemical databases, such as the Kyoto Encyclopedia of Genes and Genomes and the National Institutes of Health’s PubChem database, and builds upon established ontologies. A key innovation lies in automatically decomposing chemical substances through database entries and chemical identifier representations to identify functional groups, enabling more generalized reaction classification. Using new semantic functionality, functional groups are flexibly addressed, improving the classification of reactions such as saponification and ester cleavage with simultaneous oxidation. A graphical user interface (GUI) supports user interaction with the knowledge graph, enabling ontological reasoning and querying. This approach demonstrates improved specificity of the newly established ontology over its predecessors and offers a more user-friendly interface for engaging with structured chemical knowledge. Future work will focus on expanding ontology coverage to support a wider range of reactions in catalysis research.
Full article
Open Access Article
Spectral Graph Compression in Deploying Recommender Algorithms on Quantum Simulators
by Chenxi Liu, W. Bernard Lee and Anthony G. Constantinides
Computers 2025, 14(8), 310; https://doi.org/10.3390/computers14080310 - 1 Aug 2025
Abstract
This follow-up scientific case study builds on prior research to explore the computational challenges of applying quantum algorithms to financial asset management, focusing specifically on solving the graph-cut problem for investment recommendation. Unlike our prior study, which focused on idealized QAOA performance, this work introduces a graph compression pipeline that enables QAOA deployment under real quantum hardware constraints. This study investigates quantum-accelerated spectral graph compression for financial asset recommendations, addressing scalability and regulatory constraints in portfolio management. We propose a hybrid framework combining the Quantum Approximate Optimization Algorithm (QAOA) with spectral graph theory to solve the Max-Cut problem for investor clustering. Our methodology leverages quantum simulators (cuQuantum and Cirq-GPU) to evaluate performance against classical brute-force enumeration, with graph compression techniques enabling deployment on resource-constrained quantum hardware. The results underscore that efficient graph compression is crucial for successful implementation. The framework bridges theoretical quantum advantage with practical financial use cases, though hardware limitations (qubit counts, coherence times) necessitate hybrid quantum-classical implementations. These findings advance the deployment of quantum algorithms in mission-critical financial systems, particularly for high-dimensional investor profiling under regulatory constraints.
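The classical brute-force enumeration that the QAOA results are evaluated against can be sketched as follows; the small weighted investor-similarity graph is invented for illustration:

```python
from itertools import product

def max_cut_bruteforce(n_nodes, weighted_edges):
    """Enumerate all 2^n bipartitions and return the best cut.

    This is the classical baseline a QAOA solution is compared with; it is
    only feasible for small (or compressed) graphs, which is why the paper's
    graph compression step matters.
    """
    best_value, best_partition = -1.0, None
    for bits in product((0, 1), repeat=n_nodes):
        value = sum(w for u, v, w in weighted_edges if bits[u] != bits[v])
        if value > best_value:
            best_value, best_partition = value, bits
    return best_value, best_partition

# Hypothetical investor-similarity graph: edges as (u, v, weight)
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0), (2, 3, 2.0)]
value, partition = max_cut_bruteforce(4, edges)
print(value, partition)
```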
Full article
(This article belongs to the Section AI-Driven Innovations)
Open Access Review
Review of Deep Learning Applications for Detecting Special Components in Agricultural Products
by Yifeng Zhao and Qingqing Xie
Computers 2025, 14(8), 309; https://doi.org/10.3390/computers14080309 - 30 Jul 2025
Abstract
The rapid evolution of deep learning (DL) has fundamentally transformed the paradigm for detecting special components in agricultural products, addressing critical challenges in food safety, quality control, and precision agriculture. This comprehensive review systematically analyzes many seminal studies to evaluate cutting-edge DL applications across three core domains: contaminant surveillance (heavy metals, pesticides, and mycotoxins), nutritional component quantification (soluble solids, polyphenols, and pigments), and structural/biomarker assessment (disease symptoms, gel properties, and physiological traits). Emerging hybrid architectures—including attention-enhanced convolutional neural networks (CNNs) for lesion localization, wavelet-coupled autoencoders for spectral denoising, and multi-task learning frameworks for joint parameter prediction—demonstrate unprecedented accuracy in decoding complex agricultural matrices. Particularly noteworthy are sensor fusion strategies integrating hyperspectral imaging (HSI), Raman spectroscopy, and microwave detection with deep feature extraction, achieving industrial-grade performance ( > 3.0) while reducing detection time by 30–100× versus conventional methods. Nevertheless, persistent barriers in the “black-box” nature of complex models, severe lack of standardized data and protocols, computational inefficiency, and poor field robustness hinder the reliable deployment and adoption of DL for detecting special components in agricultural products. This review provides an essential foundation and roadmap for future research to bridge the gap between laboratory DL models and their effective, trusted application in real-world agricultural settings.
Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
Open Access Article
Protecting Power System Infrastructure Against Disruptive Agents Considering Demand Response
by Jesús M. López-Lezama, Nicolás Muñoz-Galeano, Sergio D. Saldarriaga-Zuluaga and Santiago Bustamante-Mesa
Computers 2025, 14(8), 308; https://doi.org/10.3390/computers14080308 - 30 Jul 2025
Abstract
Power system infrastructure is exposed to a range of threats, including both naturally occurring events and intentional attacks. Traditional vulnerability assessment models, typically based on the N-1 criterion, do not account for the intentionality of disruptive agents. This paper presents a game-theoretic approach to protecting power system infrastructure against deliberate attacks, taking into account the effects of demand response. The interaction between the disruptive agent and the system operator is modeled as a leader–follower Stackelberg game. The leader, positioned in the upper-level optimization problem, must decide which elements to render out of service, anticipating the reaction of the follower (the system operator), who occupies the lower-level problem. The Stackelberg game is reformulated as a bilevel optimization model and solved using a metaheuristic approach. To evaluate the applicability of the proposed method, a 24-bus test system was employed. The results demonstrate that integrating demand response significantly enhances system resilience, compelling the disruptive agent to adopt alternative attack strategies that lead to lower overall disruption. The proposed model serves as a valuable decision-support tool for system operators and planners seeking to improve the robustness and security of electrical networks against disruptive agents.
Full article
Open Access Article
Smart Wildlife Monitoring: Real-Time Hybrid Tracking Using Kalman Filter and Local Binary Similarity Matching on Edge Network
by Md. Auhidur Rahman, Stefano Giordano and Michele Pagano
Computers 2025, 14(8), 307; https://doi.org/10.3390/computers14080307 - 30 Jul 2025
Abstract
Real-time wildlife monitoring on edge devices poses significant challenges due to limited power, constrained bandwidth, and unreliable connectivity, especially in remote natural habitats. Conventional object detection systems often transmit redundant data of the same animals detected across multiple consecutive frames as part of a single event, resulting in increased power consumption and inefficient bandwidth usage. Furthermore, maintaining consistent animal identities in the wild is difficult due to occlusions, variable lighting, and complex environments. In this study, we propose a lightweight hybrid tracking framework built on the YOLOv8m deep neural network, combining motion-based Kalman filtering with Local Binary Pattern (LBP) similarity for appearance-based re-identification using texture and color features. To handle ambiguous cases, we further incorporate Hue-Saturation-Value (HSV) color space similarity. This approach enhances identity consistency across frames while reducing redundant transmissions. The framework is optimized for real-time deployment on edge platforms such as the NVIDIA Jetson Orin Nano and Raspberry Pi 5. We evaluate our method against state-of-the-art trackers using event-based metrics such as MOTA, HOTA, and IDF1, with a focus on occlusion handling, trajectory analysis, and counting of detected animals during both day and night. Our approach significantly enhances tracking robustness, reduces ID switches, and provides more accurate detection and counting compared to existing methods. When transmitting time-series data and detected frames, it achieves up to 99.87% bandwidth savings and 99.67% power reduction, making it highly suitable for edge-based wildlife monitoring in resource-constrained environments.
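The LBP texture codes used here for appearance matching can be sketched for a single 3×3 patch as follows (the clockwise neighbour ordering is one common convention; the tracker compares histograms of such codes, which this sketch omits):

```python
def lbp_code(patch):
    """8-neighbour Local Binary Pattern code for the centre of a 3x3 patch.

    Each neighbour at least as bright as the centre sets one bit, clockwise
    from the top-left, giving a texture code in 0..255.
    """
    c = patch[1][1]
    neighbours = [
        patch[0][0], patch[0][1], patch[0][2],  # top row, left to right
        patch[1][2],                            # right
        patch[2][2], patch[2][1], patch[2][0],  # bottom row, right to left
        patch[1][0],                            # left
    ]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code

# Hypothetical grayscale intensities around one pixel
patch = [[90, 120, 60],
         [110, 100, 95],
         [130, 100, 80]]
print(lbp_code(patch))  # → 226
```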
Full article
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
Open Access Article
Artificial Intelligence and Immersive Technologies: Virtual Assistants in AR/VR for Special Needs Learners
by
Azza Mohamed, Rouhi Faisal, Ahmed Al-Gindy and Khaled Shaalan
Computers 2025, 14(8), 306; https://doi.org/10.3390/computers14080306 - 28 Jul 2025
Abstract
This article investigates the revolutionary potential of AI-powered virtual assistants in augmented reality (AR) and virtual reality (VR) environments, concentrating primarily on their impact on special needs schooling. We investigate the complex characteristics of these virtual assistants, the influential elements affecting their development and implementation, and the joint efforts of educational institutions and technology developers, using a rigorous quantitative approach. Our research also looks at strategic initiatives aimed at effectively integrating AI into educational practices, addressing critical issues including infrastructure, teacher preparedness, equitable access, and ethical considerations. Our findings highlight the promise of AI technology, emphasizing the ability of AI-powered virtual assistants to provide individualized, immersive learning experiences adapted to the different needs of students with special needs. Furthermore, we find strong relationships between these virtual assistants’ features and deployment tactics and their subsequent impact on educational achievements. This study contributes to the increasing conversation on harnessing cutting-edge technology to improve educational results for all learners by synthesizing current research and employing a strong methodological framework. Our analysis not only highlights the promise of AI in increasing student engagement and comprehension but also emphasizes the importance of tackling ethical and infrastructure concerns to enable responsible and fair adoption.
Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications (2nd Edition))
Open Access Article
Assessment of Machine Learning-Driven Retrievals of Arctic Sea Ice Thickness from L-Band Radiometry Remote Sensing
by
Ferran Hernández-Macià, Gemma Sanjuan Gomez, Carolina Gabarró and Maria José Escorihuela
Computers 2025, 14(8), 305; https://doi.org/10.3390/computers14080305 - 28 Jul 2025
Abstract
This study evaluates machine learning-based methods for retrieving thin Arctic sea ice thickness (SIT) from L-band radiometry, using data from the European Space Agency’s (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite. In addition to the operational ESA product, three alternative approaches are assessed: a Random Forest (RF) algorithm, a Convolutional Neural Network (CNN) that incorporates spatial coherence, and a Long Short-Term Memory (LSTM) neural network designed to capture temporal coherence. Validation against in situ data from the Beaufort Gyre Exploration Project (BGEP) moorings and the ESA SMOSice campaign demonstrates that the RF algorithm achieves robust performance comparable to the ESA product, despite its simplicity and lack of explicit spatial or temporal modeling. The CNN exhibits a tendency to overestimate SIT and shows higher dispersion, suggesting limited added value when spatial coherence is already present in the input data. The LSTM approach does not improve retrieval accuracy, likely due to the mismatch between satellite resolution and the temporal variability of sea ice conditions. These results highlight the importance of L-band sea ice emission modeling over increasing algorithm complexity and suggest that simpler, adaptable methods such as RF offer a promising foundation for future SIT retrieval efforts. The findings are relevant for refining current methods used with SMOS and for developing upcoming satellite missions, such as ESA’s Copernicus Imaging Microwave Radiometer (CIMR).
Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
Open Access Systematic Review
Artificial Intelligence Approach for Waste-Printed Circuit Board Recycling: A Systematic Review
by
Muhammad Mohsin, Stefano Rovetta, Francesco Masulli and Alberto Cabri
Computers 2025, 14(8), 304; https://doi.org/10.3390/computers14080304 - 27 Jul 2025
Abstract
The rapid advancement of technology has led to a substantial increase in Waste Electrical and Electronic Equipment (WEEE), which poses significant environmental threats and increases pressure on the planet’s limited natural resources. In response, Artificial Intelligence (AI) has emerged as a key enabler of the Circular Economy (CE), particularly in improving the speed and precision of waste sorting through machine learning and computer vision techniques. Despite this progress, to our knowledge, no comprehensive, systematic review has focused specifically on the role of AI in disassembling and recycling Waste-Printed Circuit Boards (WPCBs). This paper addresses this gap by systematically reviewing recent advancements in AI-driven disassembly and sorting approaches with a focus on machine learning and vision-based methodologies. The review is structured around three areas: (1) the availability and use of datasets for AI-based WPCB recycling; (2) state-of-the-art techniques for selective disassembly and component recognition to enable fast WPCB recycling; and (3) key challenges and possible solutions aimed at enhancing the recovery of critical raw materials (CRMs) from WPCBs.
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
Open Access Article
White Matter Microstructure Differences Between Congenital and Acquired Hearing Loss Patients Using Diffusion Tensor Imaging (DTI) and Machine Learning
by
Fatimah Kayla Kameela, Fikri Mirza Putranto, Prasandhya Astagiri Yusuf, Arierta Pujitresnani, Vanya Vabrina Valindria, Dodi Sudiana and Mia Rizkinia
Computers 2025, 14(8), 303; https://doi.org/10.3390/computers14080303 - 25 Jul 2025
Abstract
Diffusion tensor imaging (DTI) metrics provide insights into neural pathways, which can be pivotal in differentiating congenital and acquired hearing loss to support diagnosis, especially for those diagnosed late. In this study, we analyzed DTI parameters and developed machine learning models to classify these two patient groups. The study included 29 patients with congenital hearing loss and 6 with acquired hearing loss. DTI scans were performed to obtain metrics such as fractional anisotropy (FA), axial diffusivity (AD), radial diffusivity (RD), and mean diffusivity (MD). Statistical analyses based on p-values highlighted the cortical auditory system's prominence in differentiating between groups, with FA and RD emerging as pivotal metrics. Three machine learning models were trained to classify hearing loss types for each of five dataset scenarios. A random forest (RF) model trained on a dataset consisting of the significant features demonstrated superior performance, achieving a specificity of 87.12% and an F1 score of 96.88%. This finding highlights the critical role of DTI metrics in the classification of hearing loss. The experimental results also emphasized the critical role of FA in distinguishing between the two types of hearing loss, underscoring its potential clinical utility. DTI parameters, combined with machine learning, can effectively distinguish between congenital and acquired hearing loss, offering a robust tool for clinical diagnosis and treatment planning. Further research with larger and balanced cohorts is warranted to validate these findings.
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
Open Access Article
Novel Models for the Warm-Up Phase of Recommendation Systems
by
Nourah AlRossais
Computers 2025, 14(8), 302; https://doi.org/10.3390/computers14080302 - 24 Jul 2025
Abstract
In the recommendation system (RS) literature, a distinction exists between studies dedicated to fully operational (known users/items) and cold-start (new users/items) RSs. The warm-up phase—the transition between the two—is not widely researched, despite evidence that attrition rates are the highest for users and content providers during such periods. RS formulations, particularly deep learning models, do not easily allow for a warm-up phase. Herein, we propose two independent and complementary models to increase RS performance during the warm-up phase. The models apply to any cold-start RS expressible as a function of all user features, item features, and existing users’ preferences for existing items. We demonstrate substantial improvements: Accuracy-oriented metrics improved by up to 14% compared with not handling warm-up explicitly. Non-accuracy-oriented metrics, including serendipity and fairness, improved by up to 12% compared with not handling warm-up explicitly. The improvements were independent of the cold-start RS algorithm. Additionally, this paper introduces a method of examining the performance metrics of an RS during the warm-up phase as a function of the number of user–item interactions. We discuss problems such as data leakage and temporal consistencies of training/testing—often neglected during the offline evaluation of RSs.
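The warm-up transition described above, from a cold-start model toward a fully operational one as user-item interactions accumulate, can be sketched as a simple score blend. Everything here is a hypothetical illustration: the exponential decay schedule, the `half_life` parameter, and the function names are assumptions, not the paper's proposed models.

```python
# Hypothetical sketch of a warm-up blend (assumed, not the paper's method):
# weight shifts from the cold-start score toward the collaborative score
# as the number of observed interactions for a user grows.

import math

def warmup_weight(n_interactions, half_life=10.0):
    """Weight on the cold-start score; decays from 1.0 toward 0.0 with activity."""
    return math.exp(-n_interactions / half_life)

def blended_score(cold_score, collab_score, n_interactions):
    """Convex combination of the two scores, controlled by interaction count."""
    w = warmup_weight(n_interactions)
    return w * cold_score + (1.0 - w) * collab_score
```

Evaluating such a blend at successive interaction counts is one way to plot a performance metric as a function of user-item interactions, as the paper's offline-evaluation method proposes.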
Full article

Open Access Article
A Hybrid Approach Using Graph Neural Networks and LSTM for Attack Vector Reconstruction
by
Yelizaveta Vitulyova, Tetiana Babenko, Kateryna Kolesnikova, Nikolay Kiktev and Olga Abramkina
Computers 2025, 14(8), 301; https://doi.org/10.3390/computers14080301 - 24 Jul 2025
Abstract
The escalating complexity of cyberattacks necessitates advanced strategies for their detection and mitigation. This study presents a hybrid model that integrates Graph Neural Networks (GNNs) with Long Short-Term Memory (LSTM) networks to reconstruct and predict attack vectors in cybersecurity. GNNs are employed to analyze the structural relationships within the MITRE ATT&CK framework, while LSTM networks are utilized to model the temporal dynamics of attack sequences, effectively capturing the evolution of cyber threats. The combined approach harnesses the complementary strengths of these methods to deliver precise, interpretable, and adaptable solutions for addressing cybersecurity challenges. Experimental evaluation on the CICIDS2017 dataset reveals the model’s strong performance, achieving an Area Under the Curve (AUC) of 0.99 on both balanced and imbalanced test sets, an F1-score of 0.85 for technique prediction, and a Mean Squared Error (MSE) of 0.05 for risk assessment. These findings underscore the model’s capability to accurately reconstruct attack paths and forecast future techniques, offering a promising avenue for strengthening proactive defense mechanisms against evolving cyber threats.
Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
Topics
Topic in
Applied Sciences, Automation, Computers, Electronics, Sensors, JCP, Mathematics
Intelligent Optimization, Decision-Making and Privacy Preservation in Cyber–Physical Systems
Topic Editors: Lijuan Zha, Jinliang Liu, Jian Liu
Deadline: 31 August 2025
Topic in
Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies and Applications
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 December 2025
Topic in
Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic, Sebastian Lapuschkin
Deadline: 31 January 2026

Special Issues
Special Issue in
Computers
When Blockchain Meets IoT: Challenges and Potentials
Guest Editors: Andres Marin Lopez, David Arroyo
Deadline: 31 August 2025
Special Issue in
Computers
Emerging Trends in Machine Learning and Artificial Intelligence
Guest Editor: Thuseethan Selvarajah
Deadline: 31 August 2025
Special Issue in
Computers
Edge and Fog Computing for Internet of Things Systems (2nd Edition)
Guest Editors: Luís Nogueira, Jorge Coelho
Deadline: 30 September 2025
Special Issue in
Computers
Present and Future of E-Learning Technologies (2nd Edition)
Guest Editor: Antonio Sarasa Cabezuelo
Deadline: 30 September 2025