Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.3 days after submission; the time from acceptance to publication is 3.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 4.2 (2024)
5-Year Impact Factor: 3.5 (2024)
Latest Articles
Simulation Application of Adaptive Strategy Hybrid Secretary Bird Optimization Algorithm in Multi-UAV 3D Path Planning
Computers 2025, 14(10), 439; https://doi.org/10.3390/computers14100439 - 15 Oct 2025
Abstract
Multi-UAV three-dimensional (3D) path planning is formulated as a high-dimensional multi-constraint optimization problem involving costs such as path length, flight altitude, avoidance cost, and smoothness. To address this challenge, we propose an Adaptive Strategy Hybrid Secretary Bird Optimization Algorithm (ASHSBOA), an enhanced variant of the Secretary Bird Optimization Algorithm (SBOA). ASHSBOA integrates a weighted multi-direction dynamic learning strategy, an adaptive strategy-selection mechanism, and a hybrid elite-guided boundary-repair scheme to enhance the ability to identify local optima and balance exploration and exploitation. The algorithm is tested on benchmark suites CEC-2017 and CEC-2022 against nine classic or state-of-the-art optimizers. Non-parametric tests show that ASHSBOA consistently achieves superior performance and ranks first among competitors. Finally, we applied ASHSBOA to a multi-UAV 3D path planning model. In Scenario 1, the path cost planned by ASHSBOA decreased by 124.9 compared to the second-ranked QHSBOA. In the more complex Scenario 2, this figure reached 1137.9. Simulation results demonstrate that ASHSBOA produces lower-cost flight paths and more stable convergence behavior compared to comparative methods. These results validate the robustness and practicality of ASHSBOA in UAV path planning.
Open Access Review
Machine and Deep Learning in Agricultural Engineering: A Comprehensive Survey and Meta-Analysis of Techniques, Applications, and Challenges
by Samuel Akwasi Frimpong, Mu Han, Wenyi Zheng, Xiaowei Li, Ernest Akpaku and Ama Pokuah Obeng
Computers 2025, 14(10), 438; https://doi.org/10.3390/computers14100438 - 15 Oct 2025
Abstract
Machine learning and deep learning techniques integrated with advanced sensing technologies have revolutionized agricultural engineering, addressing complex challenges in food production, quality assessment, and environmental monitoring. This survey presents a systematic review and meta-analysis of recent developments by examining the peer-reviewed literature from 2015 to 2024. The analysis reveals computational approaches ranging from traditional algorithms like support vector machines and random forests to deep learning architectures, including convolutional and recurrent neural networks. Deep learning models often demonstrate superior performance, showing 5–10% accuracy improvements over traditional methods and achieving 93–99% accuracy in image-based applications. Three primary application domains are identified: agricultural product quality assessment using hyperspectral imaging, crop and field management through precision optimization, and agricultural automation with machine vision systems. Dataset taxonomy shows spectral data predominating at 42.1%, followed by image data at 26.2%, indicating preference for non-destructive approaches. Current challenges include data limitations, model interpretability issues, and computational complexity. Future trends emphasize lightweight model development, ensemble learning, and expanding applications. This analysis provides a comprehensive understanding of current capabilities and future directions for machine learning in agricultural engineering, supporting the development of efficient and sustainable agricultural systems for global food security.
Open Access Systematic Review
A Systematic Review of Machine Learning in Credit Card Fraud Detection Under Original Class Imbalance
by Nazerke Baisholan, J. Eric Dietz, Sergiy Gnatyuk, Mussa Turdalyuly, Eric T. Matson and Karlygash Baisholanova
Computers 2025, 14(10), 437; https://doi.org/10.3390/computers14100437 - 15 Oct 2025
Abstract
Credit card fraud remains a significant concern for financial institutions due to its low prevalence, evolving tactics, and the operational demand for timely, accurate detection. Machine learning (ML) has emerged as a core approach, capable of processing large-scale transactional data and adapting to new fraud patterns. However, much of the literature modifies the natural class distribution through resampling, potentially inflating reported performance and limiting real-world applicability. This systematic literature review examines only studies that preserve the original class imbalance during both training and evaluation. Following PRISMA 2020 guidelines, strict inclusion and exclusion criteria were applied to ensure methodological rigor and relevance. Four research questions guided the analysis, focusing on dataset usage, ML algorithm adoption, evaluation metric selection, and the integration of explainable artificial intelligence (XAI). The synthesis reveals dominant reliance on a small set of benchmark datasets, a preference for tree-based ensemble methods, limited use of AUC-PR despite its suitability for skewed data, and rare implementation of operational explainability, most notably through SHAP. The findings highlight the need for semantics-preserving benchmarks, cost-aware evaluation frameworks, and analyst-oriented interpretability tools, offering a research agenda to improve reproducibility and enable effective, transparent fraud detection under real-world imbalance conditions.
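The review's observation about AUC-PR being underused despite its suitability for skewed data can be made concrete. Below is a minimal pure-Python sketch of average precision (a common scalar summary of the precision-recall curve) on a hypothetical imbalanced toy set; the labels and scores are invented for illustration only:

```python
def average_precision(labels, scores):
    """Average precision: mean of the precision values observed at each
    true-positive rank when instances are sorted by decreasing score."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    tp = 0
    ap = 0.0
    for i, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / i  # precision at this recall step
    return ap / max(tp, 1)

# Toy imbalanced data: 2 fraud cases among 10 transactions.
labels = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
scores = [0.1, 0.2, 0.9, 0.3, 0.2, 0.1, 0.4, 0.8, 0.1, 0.2]
print(average_precision(labels, scores))  # 1.0: both frauds ranked first
```

Unlike accuracy, this summary is unaffected by the large pool of easy negatives, which is why it is recommended for fraud-detection evaluation under the original class imbalance.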
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
Open Access Article
Sparse Keyword Data Analysis Using Bayesian Pattern Mining
by Sunghae Jun
Computers 2025, 14(10), 436; https://doi.org/10.3390/computers14100436 - 14 Oct 2025
Abstract
Keyword data analysis aims to extract and interpret meaningful relationships from large collections of text documents. A major challenge in this process arises from the extreme sparsity of document–keyword matrices, where the majority of elements are zeros due to zero inflation. To address this issue, this study proposes a probabilistic framework called Bayesian Pattern Mining (BPM), which integrates Bayesian inference into association rule mining (ARM). The proposed method estimates both the expected values and credible intervals of interestingness measures such as confidence and lift, providing a probabilistic evaluation of keyword associations. Experiments conducted on 9436 quantum computing patent documents, from which 175 representative keywords were extracted, demonstrate that BPM yields more stable and interpretable associations than conventional ARM. By incorporating credible intervals, BPM reduces the risk of biased decisions under sparsity and enhances the reliability of keyword-based technology analysis, offering a rigorous approach for knowledge discovery in zero-inflated text data.
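The credible-interval idea can be illustrated with a small sketch. This is not the paper's estimator; it assumes a simple Beta-Bernoulli model of rule confidence P(B|A) with a uniform Beta(1, 1) prior, sampled with Python's standard library:

```python
import random

def credible_interval_confidence(n_a, n_ab, draws=20000, level=0.95, seed=0):
    """Posterior mean and credible interval for rule confidence P(B|A),
    using a Beta(1, 1) prior updated with n_ab co-occurrences out of n_a
    documents containing keyword A (Monte Carlo over the Beta posterior)."""
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(1 + n_ab, 1 + n_a - n_ab)
                     for _ in range(draws))
    lo = samples[int((1 - level) / 2 * draws)]
    hi = samples[int((1 + level) / 2 * draws) - 1]
    mean = sum(samples) / draws
    return mean, (lo, hi)

# Sparse case: keyword A appears in only 12 documents, co-occurs with B in 9.
mean, (lo, hi) = credible_interval_confidence(n_a=12, n_ab=9)
print(f"confidence ~ {mean:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With so few supporting documents, the interval is wide; reporting it alongside the point estimate is what guards against over-confident conclusions from zero-inflated keyword matrices.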
(This article belongs to the Special Issue Recent Advances in Data Mining: Methods, Trends, and Emerging Applications)
Open Access Article
A Web-Based Digital Twin Framework for Interactive E-Learning in Engineering Education
by Peter Weis, Ronald Bašťovanský and Matúš Vereš
Computers 2025, 14(10), 435; https://doi.org/10.3390/computers14100435 - 14 Oct 2025
Abstract
Traditional engineering education struggles to bridge the theory–practice gap in the Industry 4.0 era, as static 2D schematics inadequately convey complex spatial relationships. While advanced visualization tools exist, their adoption is frequently hindered by requirements for specialized hardware and software, limiting accessibility. This study details the development and evaluation of a novel, web-based Digital Twin framework designed for accessible, intuitive e-learning that requires no client-side installation. The framework, centered on a high-fidelity 3D model of a historic radial engine, was assessed through a qualitative pilot case study with seven engineering professionals. Data was collected via a “think-aloud” protocol and a mixed-methods survey with a Likert scale and open-ended questions. Findings revealed an overwhelmingly positive reception; quantitative data showed high mean scores for usability, educational impact, and professional training potential (M > 4.2). Qualitative analysis confirmed the framework’s success in enhancing spatial understanding via features like dynamic cross-sections, improving the efficiency of accessing integrated documentation, and demonstrating high value as an onboarding tool. This work provides strong preliminary evidence that an accessible, web-based Digital Twin is a powerful and scalable solution for technical education that significantly enhances spatial comprehension and knowledge transfer.
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
Open Access Article
Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games
by Sehar Shahzad Farooq, Hameedur Rahman, Samiya Abdul Wahid, Muhammad Alyan Ansari, Saira Abdul Wahid and Hosu Lee
Computers 2025, 14(10), 434; https://doi.org/10.3390/computers14100434 - 13 Oct 2025
Abstract
Games are considered a suitable and standard benchmark for checking the performance of artificial intelligence-based algorithms in terms of training, evaluating, and comparing the performance of AI agents. In this research, an application of the Intrinsic Curiosity Module (ICM) and the Asynchronous Advantage Actor–Critic (A3C) algorithm is explored using action games. Having been proven successful in several gaming environments, its effectiveness in action games is rarely explored. Providing efficient learning and adaptation facilities, this research aims to assess whether integrating ICM with A3C promotes curiosity-driven explorations and adaptive learning in action games. Using the MAME Toolkit library, we interface with the game environments, preprocess game screens to focus on relevant visual elements, and create diverse game episodes for training. The A3C policy is optimized using the Proximal Policy Optimization (PPO) algorithm with tuned hyperparameters. Comparisons are made with baseline methods, including vanilla A3C, ICM with pixel-based predictions, and state-of-the-art exploration techniques. Additionally, we evaluate the agent’s generalization capability in separate environments. The results demonstrate that ICM and A3C effectively promote curiosity-driven exploration in action games, with the agent learning exploration behaviors without relying solely on external rewards. Notably, we also observed an improved efficiency and learning speed compared to baseline approaches. This research contributes to curiosity-driven exploration in reinforcement learning-based virtual environments and provides insights into the exploration of complex action games. Successfully applying ICM and A3C in action games presents exciting opportunities for adaptive learning and efficient exploration in challenging real-world environments.
(This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning)
Open Access Article
Hybrid CDN Architecture Integrating Edge Caching, MEC Offloading, and Q-Learning-Based Adaptive Routing
by Aymen D. Salman, Akram T. Zeyad, Asia Ali Salman Al-karkhi, Safanah M. Raafat and Amjad J. Humaidi
Computers 2025, 14(10), 433; https://doi.org/10.3390/computers14100433 - 13 Oct 2025
Abstract
Content Delivery Networks (CDNs) have evolved to meet surging data demands and stringent low-latency requirements driven by emerging applications like high-definition video streaming, virtual reality, and IoT. This paper proposes a hybrid CDN architecture that synergistically combines edge caching, Multi-access Edge Computing (MEC) offloading, and reinforcement learning (Q-learning) for adaptive routing. In the proposed system, popular content is cached at radio access network edges (e.g., base stations) and computation-intensive tasks are offloaded to MEC servers, while a Q-learning agent dynamically routes user requests to the optimal service node (cache, MEC server, or origin) based on the network state. The study presented detailed system design and provided comprehensive simulation-based evaluation. The results demonstrate that the proposed hybrid approach significantly improves cache hit ratios and reduces end-to-end latency compared to traditional CDNs and simpler edge architectures. The Q-learning-enabled routing adapts to changing load and content popularity, converging to efficient policies that outperform static baselines. The proposed hybrid model has been tested against variants lacking MEC, edge caching, or the RL-based controller to isolate each component’s contributions. The paper concludes with a discussion on practical considerations, limitations, and future directions for intelligent CDN networking at the edge.
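As a rough illustration of the routing idea, here is a deliberately simplified, single-state (bandit-style) version of Q-learning node selection. The node set and latency figures are assumptions for the sketch, not values from the paper, and the real agent conditions on a richer network state:

```python
import random

# Hypothetical toy model: one request type, three candidate service nodes
# with different (noisy) mean latencies in milliseconds.
NODES = ["edge_cache", "mec_server", "origin"]
MEAN_LATENCY = {"edge_cache": 10.0, "mec_server": 25.0, "origin": 80.0}

def train_router(episodes=3000, alpha=0.1, epsilon=0.2, seed=42):
    rng = random.Random(seed)
    q = {n: 0.0 for n in NODES}
    for _ in range(episodes):
        # Epsilon-greedy action selection over the three nodes.
        if rng.random() < epsilon:
            node = rng.choice(NODES)
        else:
            node = max(q, key=q.get)
        latency = rng.gauss(MEAN_LATENCY[node], 2.0)
        reward = -latency                      # lower latency => higher reward
        q[node] += alpha * (reward - q[node])  # single-state Q-update
    return q

q = train_router()
print(max(q, key=q.get))  # converges to routing requests to "edge_cache"
```

The paper's agent extends this pattern with a state (cache contents, load, content popularity) and a next-state bootstrap term, which is what lets the learned policy adapt as conditions change.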
(This article belongs to the Special Issue Edge and Fog Computing for Internet of Things Systems (2nd Edition))
Open Access Article
A Novel Multimodal Hand Gesture Recognition Model Using Combined Approach of Inter-Frame Motion and Shared Attention Weights
by Xiaorui Zhang, Shuaitong Li, Xianglong Zeng, Peisen Lu and Wei Sun
Computers 2025, 14(10), 432; https://doi.org/10.3390/computers14100432 - 13 Oct 2025
Abstract
Dynamic hand gesture recognition based on computer vision aims at enabling computers to understand the semantic meaning conveyed by hand gestures in videos. Existing methods predominately rely on spatiotemporal attention mechanisms to extract hand motion features in a large spatiotemporal scope. However, they cannot accurately focus on the moving hand region for hand feature extraction because frame sequences contain a substantial amount of redundant information. Although multimodal techniques can extract a wider variety of hand features, they are less successful at utilizing information interactions between various modalities for accurate feature extraction. To address these challenges, this study proposes a multimodal hand gesture recognition model combining inter-frame motion and shared attention weights. By jointly using an inter-frame motion attention (IFMA) mechanism and adaptive down-sampling (ADS), the spatiotemporal search scope can be effectively narrowed down to the hand-related regions based on the characteristic of hands exhibiting obvious movements. The proposed inter-modal attention weight (IMAW) loss enables RGB and Depth modalities to share attention, allowing each to adjust its distribution based on the other. Experimental results on the EgoGesture, NVGesture, and Jester datasets demonstrate the superiority of our proposed model over existing state-of-the-art methods in terms of hand motion feature extraction and hand gesture recognition accuracy.
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
Open Access Article
CourseEvalAI: Rubric-Guided Framework for Transparent and Consistent Evaluation of Large Language Models
by Catalin Anghel, Marian Viorel Craciun, Emilia Pecheanu, Adina Cocu, Andreea Alexandra Anghel, Paul Iacobescu, Calina Maier, Constantin Adrian Andrei, Cristian Scheau and Serban Dragosloveanu
Computers 2025, 14(10), 431; https://doi.org/10.3390/computers14100431 - 11 Oct 2025
Abstract
Background and objectives: Large language models (LLMs) show promise in automating open-ended evaluation tasks, yet their reliability in rubric-based assessment remains uncertain. Variability in scoring, feedback, and rubric adherence raises concerns about transparency and pedagogical validity in educational contexts. This study introduces CourseEvalAI, a framework designed to enhance consistency and fidelity in rubric-guided evaluation by fine-tuning a general-purpose LLM with authentic university-level instructional content. Methods: The framework employs supervised fine-tuning with Low-Rank Adaptation (LoRA) on rubric-annotated answers and explanations drawn from undergraduate computer science exams. Responses generated by both the base and fine-tuned models were independently evaluated by two human raters and two LLM judges, applying dual-layer rubrics for answers (technical or argumentative) and explanations. Inter-rater reliability was reported as intraclass correlation coefficient (ICC(2,1)), Krippendorff’s α, and quadratic-weighted Cohen’s κ (QWK), and statistical analyses included Welch’s t tests with Holm–Bonferroni correction, Hedges’ g with bootstrap confidence intervals, and Levene’s tests. All responses, scores, feedback, and metadata were stored in a Neo4j graph database for structured exploration. Results: The fine-tuned model consistently outperformed the base version across all rubric dimensions, achieving higher scores for both answers and explanations. After multiple-testing correction, only the GPT-4-judged Technical Answer contrast remains statistically significant; other contrasts show positive trends without passing the adjusted threshold, and no additional significance is claimed for explanation-level results. Variance in scoring decreased, inter-model agreement increased, and evaluator feedback for fine-tuned outputs contained fewer vague or critical remarks, indicating stronger rubric alignment and greater pedagogical coherence. Inter-rater reliability analyses indicated moderate human–human agreement and weaker alignment of LLM judges to the human mean. Originality: CourseEvalAI integrates rubric-guided fine-tuning, dual-layer evaluation, and graph-based storage into a unified framework. This combination provides a replicable and interpretable methodology that enhances the consistency, transparency, and pedagogical value of LLM-based evaluators in higher education and beyond.
Open Access Article
Automated OSINT Techniques for Digital Asset Discovery and Cyber Risk Assessment
by Tetiana Babenko, Kateryna Kolesnikova, Olga Abramkina and Yelizaveta Vitulyova
Computers 2025, 14(10), 430; https://doi.org/10.3390/computers14100430 - 9 Oct 2025
Abstract
Cyber threats are becoming increasingly sophisticated, especially in distributed infrastructures where systems are deeply interconnected. To address this, we developed a framework that automates how organizations discover their digital assets and assess which ones are the most at risk. The approach integrates diverse public information sources, including WHOIS records, DNS data, and SSL certificates, into a unified analysis pipeline without relying on intrusive probing. For risk scoring we applied Gradient Boosted Decision Trees, which proved more robust with messy real-world data than other models we tested. DBSCAN clustering was used to detect unusual exposure patterns across assets. In validation on organizational data, the framework achieved 93.3% accuracy in detecting known vulnerabilities and an F1-score of 0.92 for asset classification. More importantly, security teams spent about 58% less time on manual triage and false alarm handling. The system also demonstrated reasonable scalability, indicating that automated OSINT analysis can provide a practical and resource-efficient way for organizations to maintain visibility over their attack surface.
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
Open Access Review
Security Requirements Engineering: A Review and Analysis
by Aftab Alam Janisar, Ayman Meidan, Khairul Shafee bin Kalid, Abdul Rehman Gilal and Aliza Bt Sarlan
Computers 2025, 14(10), 429; https://doi.org/10.3390/computers14100429 - 9 Oct 2025
Abstract
Security is crucial, especially as software systems become increasingly complex. Both practitioners and researchers advocate for the early integration of security requirements (SR) into the Software Development Life Cycle (SDLC). However, ensuring the validation and assurance of security requirements is still a major challenge in developing secure systems. To investigate this issue, a two-phase study was carried out. First phase: a literature review was conducted on 45 relevant studies related to Security Requirements Engineering (SRE) and Security Requirements Assurance (SRA). Nine SRE techniques were examined across multiple parameters, including major categories, requirements engineering stages, project scale, and the integration of standards involving 17 distinct activities. Second phase: An empirical survey of 58 industry professionals revealed a clear disparity between the understanding of Security Requirements Engineering (SRE) and the implementation of Security Requirements Assurance (SRA). While statistical analyses (ANOVA, regression, correlation, Kruskal–Wallis) confirmed a moderate grasp of SRE practices, SRA remains poorly understood and underapplied. Unlike prior studies focused on isolated models, this research combines practical insights with comparative analysis, highlighting the systemic neglect of SRA in current practices. The findings indicate the need for stronger security assurance in early development phases, offering targeted, data-driven recommendations for bridging this gap.
(This article belongs to the Topic Innovation, Communication and Engineering)
Open Access Article
Measure Student Aptitude in Learning Programming in Higher Education—A Data Analysis
by João Pires, Ana Rosa Borges, Jorge Bernardino, Fernanda Brito Correia and Anabela Gomes
Computers 2025, 14(10), 428; https://doi.org/10.3390/computers14100428 - 9 Oct 2025
Abstract
Analyzing student performance in Introductory Programming courses in Higher Education is critical for early intervention and improved learning outcomes. This study explores the potential of a cognitive test for student success in an Introductory Programming course by analyzing data from 180 students, including Freshmen and Repeating Students, using descriptive statistics, correlation analysis, Categorical Principal Component Analysis and Item Response Theory models analysis. Analysis of the cognitive test revealed that some reasoning questions presented a statistically significant correlation, albeit of weak magnitude, with the course grades, particularly for freshman students. The development of models for predicting student performance in Introductory Programming using cognitive tests is also being explored. This study found that reasoning skills, namely logical reasoning and sequence completion, were more predictive of success in programming than general ability. The study also showed that a Programming Cognitive Test can be a useful tool for identifying students at risk of failure, particularly for freshmen students.
Open Access Review
LLMs for Commit Messages: A Survey and an Agent-Based Evaluation Protocol on CommitBench
by Mohamed Mehdi Trigui and Wasfi G. Al-Khatib
Computers 2025, 14(10), 427; https://doi.org/10.3390/computers14100427 - 7 Oct 2025
Abstract
Commit messages are vital for traceability, maintenance, and onboarding in modern software projects, yet their quality is frequently inconsistent. Recent large language models (LLMs) can transform code diffs into natural language summaries, offering a path to more consistent and informative commit messages. This paper makes two contributions: (i) it provides a systematic survey of automated commit message generation with LLMs, critically comparing prompt-only, fine-tuned, and retrieval-augmented approaches; and (ii) it specifies a transparent, agent-based evaluation blueprint centered on CommitBench. Unlike prior reviews, we include a detailed dataset audit, preprocessing impacts, evaluation metrics, and error taxonomy. The protocol defines dataset usage and splits, prompting and context settings, scoring and selection rules, and reporting guidelines (results by project, language, and commit type), along with an error taxonomy to guide qualitative analysis. Importantly, this work emphasizes methodology and design rather than presenting new empirical benchmarking results. The blueprint is intended to support reproducibility and comparability in future studies.
Open Access Article
Hardware–Software System for Biomass Slow Pyrolysis: Characterization of Solid Yield via Optimization Algorithms
by Ismael Urbina-Salas, David Granados-Lieberman, Juan Pablo Amezquita-Sanchez, Martin Valtierra-Rodriguez and David Aaron Rodriguez-Alejandro
Computers 2025, 14(10), 426; https://doi.org/10.3390/computers14100426 - 5 Oct 2025
Abstract
Biofuels represent a sustainable alternative that supports global energy development without compromising environmental balance. This work introduces a novel hardware–software platform for the experimental characterization of biomass solid yield during the slow pyrolysis process, integrating physical experimentation with advanced computational modeling. The hardware consists of a custom-designed pyrolizer equipped with temperature and weight sensors, a dedicated control unit, and a user-friendly interface. On the software side, a two-step kinetic model was implemented and coupled with three optimization algorithms, i.e., Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Nelder–Mead (N-M), to estimate the Arrhenius kinetic parameters governing biomass degradation. Slow pyrolysis experiments were performed on wheat straw (WS), pruning waste (PW), and biosolids (BS) at a heating rate of 20 °C/min within 250–500 °C, with a 120 min residence time favoring biochar production. The comparative analysis shows that the N-M method achieved the highest accuracy (100% fit in estimating solid yield), with a convergence time of 4.282 min, while GA converged faster (1.675 min), with a fit of 99.972%, and PSO had the slowest convergence time at 6.409 min and a fit of 99.943%. These results highlight both the versatility of the system and the potential of optimization techniques to provide accurate predictive models of biomass decomposition as a function of time and temperature. Overall, the main contributions of this work are the development of a low-cost, custom MATLAB-based experimental platform and the tailored implementation of optimization algorithms for kinetic parameter estimation across different biomasses, together providing a robust framework for biomass pyrolysis characterization.
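To make the optimization setup concrete, here is a minimal sketch of the forward model and least-squares objective that such optimizers would minimize. It assumes a simplified one-step first-order Arrhenius scheme (the paper uses a two-step kinetic model) with purely illustrative parameter values, not the paper's fitted ones:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def solid_yield(A, E, t_end_min, heat_rate=20.0, T0=523.15, dt=0.5):
    """Euler-integrate dm/dt = -k(T) * (m - m_char), heating from 250 °C
    at 20 °C/min and holding at 500 °C; m is the normalized solid mass."""
    m, m_char = 1.0, 0.25  # assumed final char fraction (illustrative)
    T, t = T0, 0.0
    while t < t_end_min * 60:
        k = A * math.exp(-E / (R * T))     # Arrhenius rate constant
        m += -k * (m - m_char) * dt
        t += dt
        T = min(773.15, T0 + (heat_rate / 60.0) * t)  # ramp, then hold
    return m

def sse(params, observed):
    """Sum-of-squares objective a PSO/GA/Nelder-Mead search would minimize
    over the Arrhenius parameters (A, E)."""
    A, E = params
    return sum((solid_yield(A, E, t) - y) ** 2 for t, y in observed)

# Illustrative values: yield decays toward the assumed char fraction.
print(round(solid_yield(A=1e4, E=8e4, t_end_min=120), 3))  # ≈ 0.25
```

In the paper's setting, `observed` would hold the weight-sensor measurements from the pyrolyzer, and each optimizer proposes candidate (A, E) pairs until the simulated yield curve matches the experiment.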
Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Innovations in Resilient Energy Systems)
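The estimation loop described above (a two-step kinetic scheme whose Arrhenius parameters are fitted by Nelder–Mead) can be sketched as follows; the reaction scheme, parameter values, and the ramp-only temperature profile are illustrative assumptions, not the authors' MATLAB implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

R = 8.314            # universal gas constant, J/(mol K)
BETA = 20.0 / 60.0   # heating rate: 20 °C/min expressed in K/s
T0 = 250.0 + 273.15  # start of the temperature ramp, K

def rates(t, y, A1, E1, A2, E2, nu):
    # Two-step scheme (assumed): biomass -> intermediate -> char (+ volatiles),
    # with Arrhenius rate constants k_i = A_i * exp(-E_i / (R * T)).
    T = T0 + BETA * t
    k1 = A1 * np.exp(-E1 / (R * T))
    k2 = A2 * np.exp(-E2 / (R * T))
    B, I, C = y
    return [-k1 * B, k1 * B - k2 * I, nu * k2 * I]

def solid_yield(params, t_eval):
    sol = solve_ivp(rates, (0.0, t_eval[-1]), [1.0, 0.0, 0.0],
                    t_eval=t_eval, args=tuple(params), method="LSODA")
    return sol.y.sum(axis=0)  # remaining solid = biomass + intermediate + char

# Synthetic "measurements" from made-up reference parameters (illustrative only)
t = np.linspace(0.0, 250.0 / BETA, 60)  # ramp from 250 to 500 °C
data = solid_yield([1e5, 8.0e4, 1e3, 6.0e4, 0.4], t)

# Nelder-Mead fit of the Arrhenius parameters, as in the paper's N-M variant
x0 = [5e4, 7.0e4, 5e2, 5.0e4, 0.5]
sse = lambda p: float(np.sum((solid_yield(p, t) - data) ** 2))
fit = minimize(sse, x0, method="Nelder-Mead", options={"maxiter": 500})
print("fitted SSE:", sse(list(fit.x)))
```

Because Nelder–Mead only ever replaces the worst simplex vertex, the fitted sum of squared errors can never exceed that of the starting guess.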
Open Access Systematic Review
Rethinking Blockchain Governance with AI: The VOPPA Framework
by
Catalin Daniel Morar, Daniela Elena Popescu, Ovidiu Constantin Novac and David Ghiurău
Computers 2025, 14(10), 425; https://doi.org/10.3390/computers14100425 - 4 Oct 2025
Abstract
Blockchain governance has become central to the performance and resilience of decentralized systems, yet current models face recurring issues of participation, coordination, and adaptability. This article offers a structured analysis of governance frameworks and highlights their limitations through recent high-impact case studies. It then examines how artificial intelligence (AI) is being integrated into governance processes, ranging from proposal summarization and anomaly detection to autonomous agent-based voting. In response to existing gaps, this paper proposes the Voting Via Parallel Predictive Agents (VOPPA) framework, a multi-agent architecture aimed at enabling predictive, diverse, and decentralized decision-making. Strengthening blockchain governance will require not just decentralization but also intelligent, adaptable, and accountable decision-making systems.
Full article
(This article belongs to the Special Issue Blockchain Technology—a Breakthrough Innovation for Modern Industries (2nd Edition))
Open Access Article
ZDBERTa: Advancing Zero-Day Cyberattack Detection in Internet of Vehicle with Zero-Shot Learning
by
Amal Mirza, Sobia Arshad, Muhammad Haroon Yousaf and Muhammad Awais Azam
Computers 2025, 14(10), 424; https://doi.org/10.3390/computers14100424 - 3 Oct 2025
Abstract
The Internet of Vehicles (IoV) is becoming increasingly vulnerable to zero-day (ZD) cyberattacks, which often bypass conventional intrusion detection systems. To mitigate this challenge, this study proposes the Zero-Day Bidirectional Encoder Representations from Transformers approach (ZDBERTa), a zero-shot learning (ZSL)-based framework for ZD attack detection, evaluated on the CICIoV2024 dataset. Unlike conventional AI models, ZSL enables the classification of attack types not previously encountered during the training phase. Two dataset variants are formed: Variant 1, created through synthetic traffic generation using a mixture of pattern-based, crossover, and mutation techniques, and Variant 2, augmented with a Generative Adversarial Network (GAN). To replicate realistic zero-day conditions, denial-of-service (DoS) attacks were omitted during training and introduced only at testing. The proposed ZDBERTa incorporates a Byte-Pair Encoding (BPE) tokenizer, a multi-layer transformer encoder, and a classification head for prediction, enabling the model to capture semantic patterns and identify previously unseen threats. The experimental results demonstrate that ZDBERTa achieves 86.677% accuracy on Variant 1, highlighting the complexity of zero-day detection, while performance significantly improves to 99.315% on Variant 2, underscoring the effectiveness of GAN-based augmentation. To the best of our knowledge, this is the first research to explore ZD detection within CICIoV2024, contributing a novel direction toward resilient IoV cybersecurity.
Full article
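ZDBERTa's preprocessing relies on a Byte-Pair Encoding tokenizer; a minimal sketch of the classic BPE training loop on a toy corpus is shown below. The corpus and helper name are illustrative, and production tokenizers add vocabulary and byte-level handling that this omits:

```python
from collections import Counter

def bpe_train(words, num_merges):
    """Learn BPE merges: repeatedly fuse the most frequent adjacent symbol pair."""
    vocab = Counter(tuple(w) + ("</w>",) for w in words)  # word -> frequency
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for sym, freq in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = {}
        for sym, freq in vocab.items():  # apply the merge to every word
            out, i = [], 0
            while i < len(sym):
                if i < len(sym) - 1 and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1]); i += 2
                else:
                    out.append(sym[i]); i += 1
            merged[tuple(out)] = freq
        vocab = merged
    return merges

merges = bpe_train(["low", "lower", "lowest", "low"], 3)
print(merges[0])  # -> ('l', 'o'): the most frequent adjacent pair merges first
```

Repeating the merge step grows subword units (('l','o') then ('lo','w')), which is how a BPE tokenizer compresses frequent patterns into single tokens.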
Open Access Article
Mapping the Chemical Space of Antiviral Peptides with Half-Space Proximal and Metadata Networks Through Interactive Data Mining
by
Daniela de Llano García, Yovani Marrero-Ponce, Guillermin Agüero-Chapin, Hortensia Rodríguez, Francesc J. Ferri, Edgar A. Márquez, José R. Mora, Felix Martinez-Rios and Yunierkis Pérez-Castillo
Computers 2025, 14(10), 423; https://doi.org/10.3390/computers14100423 - 3 Oct 2025
Abstract
Antiviral peptides (AVPs) are promising therapeutic candidates, yet the rapid growth of sequence data and the field’s emphasis on predictors have left a gap: the lack of an integrated view linking peptide chemistry with biological context. Here, we map the AVP landscape through interactive data mining using Half-Space Proximal Networks (HSPNs) and Metadata Networks (MNs) in the StarPep toolbox. HSPNs minimize edges and avoid fixed thresholds, reducing computational cost while enabling high-resolution analysis. A threshold-free HSPN resolved eight chemically and biologically distinct communities, while MNs contextualized AVPs by source, function, and target, revealing structural–functional relationships. To capture diversity compactly, we applied centrality-guided scaffold extraction with redundancy removal (90–50% identity), producing four representative subsets suitable for modeling and similarity searches. Alignment-free motif discovery yielded 33 validated motifs, including 10 overlapping with reported AVP signatures and 23 apparently novel. Motifs displayed category-specific enrichment across antimicrobial classes, and sequences carrying multiple motifs (≥4–5) consistently showed higher predicted antiviral probabilities. Beyond computational insights, scaffolds provide representative “entry points” into AVP chemical space, while motifs serve as modular building blocks for rational design. Together, these resources provide an integrated framework that may inform AVP discovery and support scaffold- and motif-guided therapeutic design.
Full article
(This article belongs to the Special Issue Recent Advances in Data Mining: Methods, Trends, and Emerging Applications)
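The Half-Space Proximal construction used above keeps few edges without any fixed distance threshold; a minimal sketch of one common greedy formulation (take the nearest remaining point, then discard everything closer to it than to the source) is shown below, with toy 2D points standing in for peptide similarity space:

```python
import math

def hsp_neighbors(u, points):
    """Half-Space Proximal neighbors of u (one common greedy formulation):
    repeatedly take the nearest remaining point v, keep the edge (u, v),
    then discard every point lying closer to v than to u (v's half-space)."""
    d = math.dist
    remaining = [p for p in points if p != u]
    out = []
    while remaining:
        v = min(remaining, key=lambda p: d(u, p))
        out.append(v)
        # keep only points at least as close to u as to the new neighbor v
        remaining = [p for p in remaining if d(p, v) >= d(p, u)]
    return out

pts = [(0, 0), (1, 0), (2, 0), (0, 1)]
print(hsp_neighbors((0, 0), pts))  # -> [(1, 0), (0, 1)]; (2, 0) is pruned
```

Note how (2, 0) never becomes a neighbor of (0, 0): it falls in the half-space of (1, 0), which is the edge-minimizing behavior the abstract attributes to HSPNs.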
Open Access Article
Hybrid MOCPO–AGE-MOEA for Efficient Bi-Objective Constrained Minimum Spanning Trees
by
Dana Faiq Abd, Haval Mohammed Sidqi and Omed Hasan Ahmed
Computers 2025, 14(10), 422; https://doi.org/10.3390/computers14100422 - 2 Oct 2025
Abstract
The constrained bi-objective Minimum Spanning Tree (MST) problem is a fundamental challenge in network design, as it simultaneously requires minimizing both total edge weight and maximum hop distance under strict feasibility limits; however, most existing algorithms tend to emphasize one objective over the other, resulting in imbalanced solutions, limited Pareto fronts, or poor scalability on larger instances. To overcome these shortcomings, this study introduces a Hybrid MOCPO–AGE-MOEA algorithm that strategically combines the exploratory strength of Multi-Objective Crested Porcupines Optimization (MOCPO) with the exploitative refinement of the Adaptive Geometry-based Evolutionary Algorithm (AGE-MOEA), while a Kruskal-based repair operator is integrated to strictly enforce feasibility and preserve solution diversity. Through extensive experiments conducted on Euclidean graphs with 11–100 nodes, the hybrid consistently demonstrates superior performance compared with five state-of-the-art baselines: it generates Pareto fronts up to four times larger, achieves nearly 20% reductions in hop counts, and delivers order-of-magnitude runtime improvements with near-linear scalability. Importantly, the results reveal that allocating 85% of offspring to MOCPO exploration and 15% to AGE-MOEA exploitation yields the best balance between diversity, efficiency, and feasibility. The Hybrid MOCPO–AGE-MOEA therefore not only addresses critical gaps in constrained MST optimization but also establishes itself as a practical and scalable solution with strong applicability to domains such as software-defined networking, wireless mesh systems, and adaptive routing, where both computational efficiency and solution diversity are paramount.
Full article
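The Kruskal-based repair operator extends the classic union-find MST construction; a minimal sketch of that core (single objective, toy edge list as an assumption) is shown below. The paper's operator additionally enforces hop and feasibility constraints, which this sketch does not model:

```python
def kruskal(n, edges):
    """Classic Kruskal MST on nodes 0..n-1.
    edges: list of (weight, u, v); assumes the graph is connected."""
    parent = list(range(n))

    def find(x):
        # Path-compressed union-find root lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):       # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                    # skip edges that would close a cycle
            parent[ru] = rv
            mst.append((w, u, v))
            if len(mst) == n - 1:
                break
    return mst

tree = kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (1, 2, 3)])
print(sum(w for w, _, _ in tree))  # -> 4 (edges 0-1, 2-3, 1-2)
```

A repair operator in the hybrid's spirit would run this greedy pass over an infeasible offspring's edge pool, so the result is always a cycle-free spanning tree.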

Open Access Article
A Study to Determine the Feasibility of Combining Mobile Augmented Reality and an Automatic Pill Box to Support Older Adults’ Medication Adherence
by
Osslan Osiris Vergara-Villegas, Vianey Guadalupe Cruz-Sánchez, Abel Alejandro Rubín-Alvarado, Saulo Abraham Gante-Díaz, Jonathan Axel Cruz-Vazquez, Brandon Areyzaga-Mendizábal, Jesús Yaljá Montiel-Pérez, Juan Humberto Sossa-Azuela, Iliac Huerta-Trujillo and Rodolfo Romero-Herrera
Computers 2025, 14(10), 421; https://doi.org/10.3390/computers14100421 - 2 Oct 2025
Abstract
Because of the increased prevalence of chronic diseases, older adults frequently take many medications. However, adhering to a medication treatment tends to be difficult, and a lack of medication adherence can cause health problems or even patient death. This paper describes the methodology used in developing a mobile augmented reality (MAR) pill box that supports patients in adhering to their medication treatment. First, we explain the design and construction of the automatic pill box, which includes alarms and uses QR codes recognized by the MAR system to provide medication information. Then, we explain the development of the MAR system. We conducted a preliminary survey with 30 participants to assess the feasibility of the MAR app; one hundred older adults then participated in the main survey. After one week of using the proposal, each patient answered a survey regarding its functionality. The results revealed that 88% of the participants strongly agreed, and 11% agreed, that the app supports adherence to medical treatment. Finally, we conducted a study comparing the scheduled time for taking each medication with the time it was actually consumed. The results from 189 records showed that, using the proposal, 63.5% of the patients took their medication with a maximum delay of 4.5 min. The results also showed that the alarm always sounded at the scheduled time and that the QR code displayed always corresponded to the medication that had to be consumed.
Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications (2nd Edition))
Open Access Article
Machine Learning-Driven Security and Privacy Analysis of a Dummy-ABAC Model for Cloud Computing
by
Baby Marina, Irfana Memon, Fizza Abbas Alvi, Ubaidullah Rajput and Mairaj Nabi
Computers 2025, 14(10), 420; https://doi.org/10.3390/computers14100420 - 2 Oct 2025
Abstract
The Attribute-Based Access Control (ABAC) model provides access control decisions based on subject, object (resource), and contextual attributes. However, the use of sensitive attributes in access control decisions poses many security and privacy challenges, particularly in cloud environments where third parties are involved. To address this shortcoming, we present a novel privacy-preserving Dummy-ABAC model that obfuscates real attributes with dummy attributes before transmission to the cloud server. In the proposed model, only dummy attributes are stored in the cloud database, whereas real attributes and mapping tokens are stored in a local machine database. Only dummy attributes are used for access request evaluation in the cloud, and real data are retrieved in the post-decision mechanism using secure tokens. The security of the proposed model was assessed using simulated threat scenarios, including attribute inference, policy injection, and reverse mapping attacks. Experimental evaluation using machine learning classifiers (Decision Tree, DT; Random Forest, RF) demonstrated that inference accuracy dropped from ~0.65 on real attributes to ~0.25 on dummy attributes, confirming improved resistance to inference attacks. Furthermore, the model rejects malformed and unauthorized policies. Performance analysis of dummy generation, token generation, encoding, and nearest-neighbor search demonstrated minimal latency in both local and cloud environments. Overall, the proposed model ensures efficient, secure, and privacy-preserving access control in cloud environments.
Full article
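The dummy-attribute flow described above can be sketched as follows; the class name, token sizes, and in-memory mapping store are hypothetical stand-ins for the paper's local database and cloud-side evaluation:

```python
import secrets

class DummyABAC:
    """Sketch of the Dummy-ABAC idea: real attributes never leave the local
    machine; only dummy attributes are sent to (and stored in) the cloud."""

    def __init__(self):
        self._local = {}  # token -> real attribute (stands in for the local DB)

    def obfuscate(self, real_attr):
        # Generate an unlinkable dummy for the cloud and a secure mapping token
        # that stays local for post-decision retrieval.
        token = secrets.token_hex(8)
        dummy = "dummy_" + secrets.token_hex(4)
        self._local[token] = real_attr
        return dummy, token

    def resolve(self, token):
        # Post-decision mechanism: recover the real attribute locally.
        return self._local[token]

abac = DummyABAC()
dummy, tok = abac.obfuscate("role=admin")
print(dummy.startswith("dummy_"), abac.resolve(tok))
```

Since the cloud only ever sees the random `dummy_…` value, an inference attack there has nothing meaningful to learn, which is the intuition behind the reported drop in classifier accuracy.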

Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in
Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies and Applications
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 December 2025
Topic in
Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic, Sebastian Lapuschkin
Deadline: 31 January 2026
Topic in
AI, Computers, Education Sciences, Societies, Future Internet, Technologies
AI Trends in Teacher and Student Training
Topic Editors: José Fernández-Cerero, Marta Montenegro-Rueda
Deadline: 11 March 2026

Conferences
Special Issues
Special Issue in
Computers
Blockchain Technology—a Breakthrough Innovation for Modern Industries (2nd Edition)
Guest Editors: Nino Adamashvili, Radu State, Caterina Tricase, Roberto Tonelli
Deadline: 15 October 2025
Special Issue in
Computers
Artificial Intelligence-Driven Innovations in Resilient Energy Systems
Guest Editors: Morteza Nazari Heris, Mostafa Mohammadpourfard, Qiushi Cui
Deadline: 22 October 2025
Special Issue in
Computers
AI for Humans and Humans for AI (AI4HnH4AI)
Guest Editors: Amit Kumar Mishra, Deepak Puthal
Deadline: 31 October 2025
Special Issue in
Computers
Multimedia Data and Network Security
Guest Editor: Zahid Akhtar
Deadline: 31 October 2025