- Cybercrime Resilience in the Era of Advanced Technologies: Evidence from the Financial Sector of a Developing Country
- A Literature Review on Security in the Internet of Things: Identifying and Analysing Critical Categories
- Machine Learning and Deep Learning Paradigms: From Techniques to Practical Applications and Research Frontiers
Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access—free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q2 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 15.5 days after submission; acceptance to publication takes 3.8 days (median values for papers published in this journal in the second half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.6 (2023); 5-Year Impact Factor: 2.4 (2023)
Latest Articles
M18K: A Multi-Purpose Real-World Dataset for Mushroom Detection, 3D Pose Estimation, and Growth Monitoring
Computers 2025, 14(5), 199; https://doi.org/10.3390/computers14050199 - 20 May 2025
Abstract
Automating agricultural processes holds significant promise for enhancing efficiency and sustainability in various farming practices. This paper contributes to the automation of agricultural processes by providing a dedicated mushroom detection dataset related to automated harvesting, 3D pose estimation, and growth monitoring of the button mushroom produced by the fungus Agaricus bisporus. With a total of 2000 images for object detection, instance segmentation, and 3D pose estimation—containing over 100,000 mushroom instances—and an additional 3838 images for yield estimation featuring eight mushroom scenes covering the complete growth period, it fills the gap in mushroom-specific datasets and serves as a benchmark for detection and instance segmentation as well as 3D pose estimation algorithms in smart mushroom agriculture. The dataset, featuring realistic growth environment scenarios with comprehensive 2D and 3D annotations, is assessed using advanced detection and instance segmentation algorithms. This paper details the dataset’s characteristics, presents detailed statistics on mushroom growth and yield, evaluates algorithmic performance, and, for broader applicability, makes all resources publicly available, including images, code, and trained models, via our GitHub repository. (accessed on 22 March 2025).
Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
Open Access Article
Few-Shot Data Augmentation by Morphology-Constrained Latent Diffusion for Enhanced Nematode Recognition
by Xiong Ouyang, Jiayan Zhuang, Jianfeng Gu and Sichao Ye
Computers 2025, 14(5), 198; https://doi.org/10.3390/computers14050198 - 19 May 2025
Abstract
Plant-parasitic nematodes represent a significant biosecurity threat in cross-border plant quarantine, necessitating precise identification for effective border control. While deep learning (DL) models have demonstrated success in nematode image classification based on morphological features, the limited availability of high-quality samples and the species-specific nature of nematodes result in insufficient training data, which constrains model performance. Although generative models have shown promise in data augmentation, they often struggle to balance morphological fidelity and phenotypic diversity. This paper proposes a novel few-shot data augmentation framework based on a morphology-constrained latent diffusion model, which, for the first time, integrates morphological constraints into the latent diffusion process. By geometrically parameterizing nematode morphology, the proposed approach enhances topological fidelity in the generated images and addresses key limitations of traditional generative models in controlling biological shapes. This framework is designed to augment nematode image datasets and improve classification performance under limited data conditions. The framework consists of three key components: First, we incorporate a fine-tuning strategy that preserves the generalization capability of the model in few-shot settings. Second, we extract morphological constraints from nematode images using edge detection and a moving least squares method, capturing key structural details. Finally, we embed these constraints into the latent space of the diffusion model, ensuring generated images maintain both fidelity and diversity. Experimental results demonstrate that our approach significantly enhances classification accuracy. For imbalanced datasets, the Top-1 accuracy of multiple classification models improved by 7.34–14.66% compared to models trained without augmentation, and by 2.0–5.67% compared to models using traditional data augmentation.
Additionally, when replacing up to 25% of real images with generated ones in a balanced dataset, model performance remained nearly unchanged, indicating the robustness and effectiveness of the method. Ablation experiments demonstrate that the morphology-guided strategy achieves superior image quality compared to both unconstrained and edge-based constraint methods, with a Fréchet Inception Distance of 12.95 and an Inception Score of 1.21 ± 0.057. These results indicate that the proposed method effectively balances morphological fidelity and phenotypic diversity in image generation.
Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
Open Access Article
A Novel MaxViT Model for Accelerated and Precise Soybean Leaf and Seed Disease Identification
by Al Shahriar Uddin Khondakar Pranta, Hasib Fardin, Jesika Debnath, Amira Hossain, Anamul Haque Sakib, Md. Redwan Ahmed, Rezaul Haque, Ahmed Wasif Reza and M. Ali Akber Dewan
Computers 2025, 14(5), 197; https://doi.org/10.3390/computers14050197 - 18 May 2025
Abstract
Timely diagnosis of soybean diseases is essential to protect yields and limit global economic loss, yet current deep learning approaches suffer from small, imbalanced datasets, single‑organ focus, and limited interpretability. We propose MaxViT‑XSLD (MaxViT XAI-Seed–Leaf-Diagnostic), a Vision Transformer that integrates multiaxis attention with MBConv layers to jointly classify soybean leaf and seed diseases while remaining lightweight and explainable. Two benchmark datasets were upscaled through elastic deformation, Gaussian noise, brightness shifts, rotation, and flipping, enlarging ASDID from 10,722 to 16,000 images (eight classes) and the SD set from 5513 to 10,000 images (five classes). Under identical augmentation and hyperparameters, MaxViT‑XSLD delivered 99.82% accuracy on ASDID and 99.46% on SD, surpassing competitive ViT, CNN, and lightweight SOTA variants. High PR‑AUC and MCC values, confirmed via 10‑fold stratified cross‑validation and Wilcoxon tests, demonstrate robust generalization across data splits. Explainable AI (XAI) techniques further enhanced interpretability by highlighting biologically relevant features influencing predictions. Its modular design also enables future model compression for edge deployment in resource‑constrained settings. Finally, we deploy the model in SoyScan, a real‑time web tool that streams predictions and visual explanations to growers and agronomists. These findings establish a scalable, interpretable system for precision crop health monitoring and lay the groundwork for edge‑oriented, multimodal agricultural diagnostics.
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
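Several of the augmentations listed in the abstract above (flipping, rotation, Gaussian noise, brightness shifts) are simple array operations; elastic deformation is more involved and is omitted here. A minimal NumPy sketch on a synthetic image, purely illustrative, with parameters chosen arbitrarily rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, size=(64, 64, 3))  # stand-in for a leaf/seed photo

flipped = np.flip(img, axis=1)                               # horizontal flip
rotated = np.rot90(img, k=1, axes=(0, 1))                    # 90-degree rotation
noisy = np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)  # Gaussian noise
brighter = np.clip(img * 1.2, 0, 1)                          # brightness shift

augmented = [flipped, rotated, noisy, brighter]
print(len(augmented), flipped.shape)  # 4 (64, 64, 3)
```

Each operation preserves the image shape, so the augmented copies can be mixed directly into a training set alongside the originals.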
Open Access Article
Improving Real-Time Economic Decisions Through Edge Computing: Implications for Financial Contagion Risk Management
by Ștefan Ionescu, Camelia Delcea and Ionuț Nica
Computers 2025, 14(5), 196; https://doi.org/10.3390/computers14050196 - 18 May 2025
Abstract
In the face of accelerating digitalization and growing systemic vulnerabilities, the ability to make accurate, real-time economic decisions has become a critical capability for financial and institutional stability. This study investigates how edge computing infrastructures influence decision-making accuracy, responsiveness, and risk containment in economic systems, particularly under the threat of financial contagion. A synthetic dataset simulating the interaction between economic indicators and edge performance metrics was constructed to emulate real-time decision environments. Composite indicators were developed to quantify key dynamics, and a range of machine learning models, including XGBoost, Random Forest, and Neural Networks, were applied to classify economic decision outcomes. The results indicate that low latency, efficient resource use, and balanced workload distribution are significantly associated with higher decision quality. XGBoost outperformed all other models, achieving 97% accuracy and a ROC-AUC of 0.997. The findings suggest that edge computing performance metrics can act as predictive signals for systemic fragility and may be integrated into early warning systems for financial risk management. This study contributes to the literature by offering a novel framework for modeling the economic implications of edge intelligence and provides policy insights for designing resilient, real-time financial infrastructures.
Full article
(This article belongs to the Special Issue Distributed Computing Paradigms for the Internet of Things: Exploring Cloud, Edge, and Fog Solutions)
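The ROC-AUC of 0.997 reported above has a simple rank interpretation: the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one (ties counting half). A minimal, dependency-free sketch of that definition, unrelated to the authors' pipeline and using made-up scores:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney statistic: fraction of positive/negative
    pairs in which the positive example receives the higher score."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# One positive (0.35) is out-ranked by one negative (0.4): 3 of 4 pairs win.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

A perfect ranker scores every positive above every negative and reaches 1.0; random scoring hovers around 0.5.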
Open Access Article
HFC-YOLO11: A Lightweight Model for the Accurate Recognition of Tiny Remote Sensing Targets
by Jinyin Bai, Wei Zhu, Zongzhe Nie, Xin Yang, Qinglin Xu and Dong Li
Computers 2025, 14(5), 195; https://doi.org/10.3390/computers14050195 - 18 May 2025
Abstract
To address critical challenges in tiny object detection within remote sensing imagery, including resolution–semantic imbalance, inefficient feature fusion, and insufficient localization accuracy, this study proposes Hierarchical Feature Compensation You Only Look Once 11 (HFC-YOLO11), a lightweight detection model based on hierarchical feature compensation. Firstly, by reconstructing the feature pyramid architecture, we preserve the high-resolution P2 feature layer in shallow networks to enhance the fine-grained feature representation for tiny targets, while eliminating redundant P5 layers to reduce the computational complexity. In addition, a depth-aware differentiated module design strategy is proposed: GhostBottleneck modules are adopted in shallow layers to improve feature reuse efficiency, while standard Bottleneck modules are maintained in deep layers to strengthen the semantic feature extraction. Furthermore, an Extended Intersection over Union (EIoU) loss function is developed, incorporating boundary alignment penalty terms and scale-adaptive weight mechanisms to optimize the sub-pixel-level localization accuracy. Experimental results on the AI-TOD and VisDrone2019 datasets demonstrate that the improved model achieves mAP50 improvements of 3.4% and 2.7%, respectively, compared to the baseline YOLO11s, while reducing its parameters by 27.4%. Ablation studies validate the balanced performance of the hierarchical feature compensation strategy in the preservation of resolution and computational efficiency. Visualization results confirm an enhanced robustness against complex background interference. HFC-YOLO11 exhibits superior accuracy and generalization capability in tiny object detection tasks, effectively meeting practical application requirements for tiny object recognition.
Full article
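The EIoU loss described above extends the standard Intersection-over-Union measure; its boundary alignment penalties and scale-adaptive weights are specific to the paper and not detailed in the abstract, so the sketch below shows only the plain IoU computation that all IoU-family losses build on (a minimal illustration, not the authors' implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; the max(0, ...) clamps handle disjoint boxes.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0 (identical boxes)
print(iou((0, 0, 1, 1), (2, 2, 3, 3)))  # 0.0 (disjoint boxes)
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, partial overlap
```

An IoU-based loss is typically 1 − IoU plus penalty terms; for tiny targets, sub-pixel errors move IoU sharply, which is why refinements like the paper's EIoU matter.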
Open Access Review
Revolutionizing Data Exchange Through Intelligent Automation: Insights and Trends
by Yeison Nolberto Cardona-Álvarez, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(5), 194; https://doi.org/10.3390/computers14050194 - 17 May 2025
Abstract
This review paper presents a comprehensive analysis of the evolving landscape of data exchange, with a particular focus on the transformative role of emerging technologies such as blockchain, field-programmable gate arrays (FPGAs), and artificial intelligence (AI). We explore how the integration of these technologies into data management systems enhances operational efficiency, precision, and security through intelligent automation and advanced machine learning techniques. The paper also critically examines the key challenges facing data exchange today, including issues of interoperability, the demand for real-time processing, and the stringent requirements of regulatory compliance. Furthermore, it underscores the urgent need for robust ethical frameworks to guide the responsible use of AI and to protect data privacy. In addressing these challenges, the paper calls for innovative research aimed at overcoming current limitations in scalability and security. It advocates for interdisciplinary approaches that harmonize technological innovation with legal and ethical considerations. Ultimately, this review highlights the pivotal role of collaboration among researchers, industry stakeholders, and policymakers in fostering a digitally inclusive future—one that strengthens data exchange practices while upholding global standards of fairness, transparency, and accountability.
Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
Open Access Article
Accounting Support Using Artificial Intelligence for Bank Statement Classification
by Marco Lecci and Thomas Hanne
Computers 2025, 14(5), 193; https://doi.org/10.3390/computers14050193 - 15 May 2025
Abstract
Artificial Intelligence is a disruptive technology that is revolutionizing the accounting sector, e.g., by reducing costs, detecting fraud, and generating reports. However, the manual maintenance of booking ledgers remains a significant challenge, particularly for small and medium-sized enterprises. The usage of AI technologies in this area is rarely considered in the literature despite a significant interest in using AI for other accounting-related activities. Our study, which was conducted during 2023–2024, utilizes natural language processing and machine learning to construct a predictive model that accurately matches bank transaction statements with accounting records. The study employs Feedforward Neural Networks and Support Vector Machines with various settings and compares their performance with that of previous models embedded in similar predictive tasks. Additionally, as a baseline model, we use Contofox, a rule-based system capable of classifying accounting records by manually creating rules to match bank statements with accounting records. Furthermore, this study evaluates the business value of the model through an interview with an accounting expert, highlighting the potential benefits of such artifacts in enhancing accounting processes.
Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
Open Access Article
A Case Study of Computational Thinking Analysis Using SOLO Taxonomy in Scientific–Mathematical Learning
by Alejandro De la Hoz Serrano, Andrés Álvarez-Murillo, Eladio José Fernández Torrado, Miguel Ángel González Maestre and Lina Viviana Melo Niño
Computers 2025, 14(5), 192; https://doi.org/10.3390/computers14050192 - 15 May 2025
Abstract
Education nowadays requires a certain variety of resources that allow for the acquisition of 21st-century skills, including computational thinking. Educational robotics emerges as a digital resource that supports the development of these skills in both male and female students across different educational stages. However, it is necessary to conduct in-depth evaluations that analyze the acquisition of Computational Thinking skills in pre-service teachers, especially when scientific and mathematical content learning programs are designed. This study aims to analyze Computational Thinking skills using the SOLO taxonomy, with an approach to science and mathematics learning, through an intervention based on programming and Educational Robotics. A quasi-experimental design was used on a total sample of 116 pre-service teachers. The SOLO taxonomy categorization was used to associate each level of the taxonomy with the computational concepts analyzed through a quantitative questionnaire. The taxonomy levels associated with Computational Thinking skills correspond to uni-structural and multi-structural levels. Males presented better results before the intervention, while subsequently, females presented better levels of Computational Thinking, as well as a greater association with the higher complexity level of learning analyzed. In turn, there was a trend between the levels of the SOLO taxonomy and computational concepts, such that an increase in skill for a concept occurs similarly at both the uni-structural level and the multi-structural level. The SOLO taxonomy is presented as a suitable tool for learning assessment since it allows for a more detailed understanding of the quality of students’ learning. Therefore, the SOLO taxonomy serves as a valuable resource in the evaluation of Computational Thinking skills.
Full article
(This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning)
Open Access Systematic Review
Analyzing Visitor Behavior to Enhance Personalized Experiences in Smart Museums: A Systematic Literature Review
by Rosen Ivanov and Victoria Velkova
Computers 2025, 14(5), 191; https://doi.org/10.3390/computers14050191 - 14 May 2025
Abstract
This systematic review provides an analysis of information gathered from 33 chosen publications during the past decade. The analysis reveals the primary methodologies applied and identifies the visitor behaviors that enable personalized content delivery. Statistical and Data Analysis is the predominant methodology, present in 97% of the reviewed publications. AI and Machine Learning (63.6%) and Mobile/Interactive Technologies (60.6%) are most frequently paired with this methodology. Behavioral Analytics Platforms and Mobile/Wearable Devices are the most used technologies (42.4%) for delivering personalized content. A total of 39.4% of publications utilize Location Tracking Systems. The most frequent visitor behavior analysis focuses on Interactive Engagement and Movement Patterns, which occur 72.7% of the time, ahead of Learning Patterns and Physical Positioning, which occur 63.6% of the time. The behavioral analysis of Group Dynamics (27.3%) and Emotional Response (18.2%) represents the least common practice when museums personalize their content, despite the significance of social interaction analysis among visitors. The leading content personalization methods currently include real-time personalization systems combined with AI-driven systems and location-based technologies. Personalized content delivery systems face challenges including privacy protection and scalability issues, paired with expensive implementation costs, which especially affect smaller museums. Researchers should explore how new technologies, such as virtual reality, augmented reality, and advanced biometric systems, can be integrated into future developments.
Full article
Open Access Article
Energy Transitions over Five Decades: A Statistical Perspective on Global Energy Trends
by Francina Pali, Roschlynn Dsouza, Yeeon Ryu, Jennifer Oishee, Joel Aikkarakudiyil, Manali Avinash Gaikwad, Payam Norouzzadeh, Steven Buckner and Bahareh Rahmani
Computers 2025, 14(5), 190; https://doi.org/10.3390/computers14050190 - 13 May 2025
Abstract
This study analyzes global energy trends from January 1973 to November 2022, using the “World Energy Statistics” dataset from Kaggle, which includes data on the production, consumption, import, and export of fossil fuels, nuclear energy, and renewable energy. The analysis employs statistical techniques such as correlation analysis, quantile–quantile (Q–Q) plots, seasonal decomposition, and seasonal autoregressive integrated moving average (SARIMA) modeling. The results reveal strong positive correlations between nuclear energy production and consumption, as well as between renewable energy production and consumption. Seasonal decomposition highlights annual patterns in renewable energy use and a declining trend in fossil fuel dependency. SARIMA modeling forecasts continued growth in renewable energy consumption and a gradual reduction in fossil fuel reliance. These findings provide critical insights into long-term energy patterns and offer data-driven implications for global energy policy and strategic planning.
Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Innovations in Resilient Energy Systems)
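The seasonal differencing at the heart of the SARIMA modeling described above can be illustrated without the full model: differencing a monthly series at lag 12 removes an annual cycle, and a further lag-1 difference removes a linear trend, leaving a roughly stationary residual that the ARMA part then fits. A NumPy sketch on synthetic data (illustrative only; the dataset and model orders are the authors', and the trend and amplitude values below are made up):

```python
import numpy as np

# Toy monthly series: linear trend + 12-month cycle + noise, mimicking
# the kind of structure SARIMA assumes in long-run energy data.
rng = np.random.default_rng(42)
t = np.arange(240)  # 20 years of months
series = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, t.size)

# Seasonal (lag-12) difference removes the annual cycle exactly;
# a further lag-1 difference removes the remaining linear trend.
seasonal_diff = series[12:] - series[:-12]
stationary = seasonal_diff[1:] - seasonal_diff[:-1]

print(series.std(), stationary.std())  # differencing shrinks the variation sharply
```

In SARIMA terms these two steps are the D = 1 (seasonal, s = 12) and d = 1 differencing orders; the forecast is built on the differenced series and then integrated back.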
Open Access Article
AHA: Design and Evaluation of Compute-Intensive Hardware Accelerators for AMD-Xilinx Zynq SoCs Using HLS IP Flow
by David Berrazueta-Mena and Byron Navas
Computers 2025, 14(5), 189; https://doi.org/10.3390/computers14050189 - 13 May 2025
Abstract
The increasing complexity of algorithms in embedded applications has amplified the demand for high-performance computing. Heterogeneous embedded systems, particularly FPGA-based systems-on-chip (SoCs), enhance execution speed by integrating hardware accelerator intellectual property (IP) cores. However, traditional low-level IP-core design presents significant challenges. High-level synthesis (HLS) offers a promising alternative, enabling efficient FPGA development through high-level programming languages. Yet, effective methodologies for designing and evaluating heterogeneous FPGA-based SoCs remain crucial. This study surveys HLS tools and design concepts and presents the development of the AHA IP cores, a set of five benchmarking accelerators for rapid Zynq-based SoC evaluation. These accelerators target compute-intensive tasks, including matrix multiplication, Fast Fourier Transform (FFT), Advanced Encryption Standard (AES), Back-Propagation Neural Network (BPNN), and Artificial Neural Network (ANN). We establish a streamlined design flow using AMD-Xilinx tools for rapid prototyping and testing FPGA-based heterogeneous platforms. We outline criteria for selecting algorithms to improve speed and resource efficiency in HLS design. Our performance evaluation across various configurations highlights performance–resource trade-offs and demonstrates that ANN and BPNN achieve significant parallelism, while AES optimization increases resource utilization the most. Matrix multiplication shows strong optimization potential, whereas FFT is constrained by data dependencies.
Full article
Open Access Article
A Novel Autonomous Robotic Vehicle-Based System for Real-Time Production and Safety Control in Industrial Environments
by Athanasios Sidiropoulos, Dimitrios Konstantinidis, Xenofon Karamanos, Theofilos Mastos, Konstantinos Apostolou, Theocharis Chatzis, Maria Papaspyropoulou, Kalliroi Marini, Georgios Karamitsos, Christina Theodoridou, Andreas Kargakos, Matina Vogiatzi, Angelos Papadopoulos, Dimitrios Giakoumis, Dimitrios Bechtsis, Kosmas Dimitropoulos and Dimitrios Vlachos
Computers 2025, 14(5), 188; https://doi.org/10.3390/computers14050188 - 12 May 2025
Abstract
Industry 4.0 has revolutionized the way companies manufacture, improve, and distribute their products through the use of new technologies, such as artificial intelligence, robotics, and machine learning. Autonomous Mobile Robots (AMRs), especially, have gained a lot of attention, supporting workers with daily industrial tasks and boosting overall performance by delivering vital information about the status of the production line. To this end, this work presents the novel Q-CONPASS system that aims to introduce AMRs in production lines with the ultimate goal of gathering important information that can assist in production and safety control. More specifically, the Q-CONPASS system is based on an AMR equipped with a plethora of machine learning algorithms that enable the vehicle to safely navigate in a dynamic industrial environment, avoiding humans, moving machines, and stationary objects while performing important tasks. These tasks include the identification of the following: (i) missing objects during product packaging and (ii) extreme skeletal poses of workers that can lead to musculoskeletal disorders. Finally, the Q-CONPASS system was validated in a real-life environment (i.e., the lift manufacturing industry), showcasing the importance of collecting and processing data in real-time to boost productivity and improve the well-being of workers.
Full article
Open Access Article
Interpretable Deep Learning for Diabetic Retinopathy: A Comparative Study of CNN, ViT, and Hybrid Architectures
by Weijie Zhang, Veronika Belcheva and Tatiana Ermakova
Computers 2025, 14(5), 187; https://doi.org/10.3390/computers14050187 - 12 May 2025
Abstract
Diabetic retinopathy (DR) is a leading cause of vision impairment worldwide, requiring early detection for effective treatment. Deep learning models have been widely used for automated DR classification, with Convolutional Neural Networks (CNNs) being the most established approach. Recently, Vision Transformers (ViTs) have shown promise, but a direct comparison of their performance and interpretability remains limited. Additionally, hybrid models that combine CNN and transformer-based architectures have not been extensively studied. This work systematically evaluates CNNs (ResNet-50), ViTs (Vision Transformer and SwinV2-Tiny), and hybrid models (Convolutional Vision Transformer, LeViT-256, and CvT-13) on DR classification using publicly available retinal image datasets. The models are assessed based on classification accuracy and interpretability, applying Grad-CAM and Attention-Rollout to analyze decision-making patterns. Results indicate that hybrid models outperform both standalone CNNs and ViTs, achieving a better balance between local feature extraction and global context awareness. The best-performing model (CvT-13) achieved a Quadratic Weighted Kappa (QWK) score of 0.84 and an AUC of 0.93 on the test set. Interpretability analysis shows that CNNs focus on fine-grained lesion details, while ViTs exhibit broader but less localized attention. These findings provide valuable insights for optimizing deep learning models in medical imaging, supporting the development of clinically viable AI-driven DR screening systems.
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
Open Access Article
Test-Time Training with Adaptive Memory for Traffic Accident Severity Prediction
by
Duo Peng and Weiqi Yan
Computers 2025, 14(5), 186; https://doi.org/10.3390/computers14050186 - 10 May 2025
Abstract
Traffic accident prediction is essential for improving road safety and optimizing intelligent transportation systems. However, deep learning models often struggle with distribution shifts and class imbalance, leading to degraded performance in real-world applications. While distribution shift is a common challenge in machine learning, Transformer-based models—despite their ability to capture long-term dependencies—often lack mechanisms for dynamic adaptation during inferencing. In this paper, we propose a TTT-Enhanced Transformer that incorporates Test-Time Training (TTT), enabling the model to refine its parameters during inferencing through a self-supervised auxiliary task. To further boost performance, an Adaptive Memory Layer (AML), a Feature Pyramid Network (FPN), Class-Balanced Attention (CBA), and Focal Loss are integrated to address multi-scale, long-term, and imbalance-related challenges. Our experimental results show that our model achieved an overall accuracy of 96.86% and a severe accident recall of 95.8%, outperforming the strongest Transformer baseline by 5.65% in accuracy and 9.6% in recall. The results of our confusion matrix and ROC analyses confirm our model’s superior classification balance and discriminatory power. These findings highlight the potential of our approach in enhancing real-time adaptability and robustness under shifting data distributions and class imbalances in intelligent transportation systems.
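The Focal Loss integrated above down-weights easy, well-classified samples so that rare severe-accident cases dominate the gradient. The paper's implementation is not shown; this is a minimal single-sample sketch of the standard formulation FL(p_t) = −α·(1 − p_t)^γ·log(p_t):

```python
import math

def focal_loss(p_true_class, alpha=0.25, gamma=2.0):
    """Focal Loss for one sample.

    p_true_class is the model's predicted probability of the
    ground-truth class; alpha and gamma are the usual defaults."""
    return -alpha * (1.0 - p_true_class) ** gamma * math.log(p_true_class)

# A well-classified sample (p_t = 0.9) contributes far less loss than a
# hard one (p_t = 0.1), focusing training on the minority severe class.
easy = focal_loss(0.9)
hard = focal_loss(0.1)
print(easy < hard)  # True
```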
Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
Open Access Systematic Review
The Influence of Artificial Intelligence Tools on Learning Outcomes in Computer Programming: A Systematic Review and Meta-Analysis
by
Manal Alanazi, Ben Soh, Halima Samra and Alice Li
Computers 2025, 14(5), 185; https://doi.org/10.3390/computers14050185 - 9 May 2025
Abstract
This systematic review and meta-analysis investigates the impact of artificial intelligence (AI) tools, including ChatGPT 3.5 and GitHub Copilot, on learning outcomes in computer programming courses. A total of 35 controlled studies published between 2020 and 2024 were analysed to assess the effectiveness of AI-assisted learning. The results indicate that students using AI tools outperformed those without such aids. The meta-analysis findings revealed that AI-assisted learning significantly reduced task completion time (SMD = −0.69, 95% CI [−2.13, −0.74], I2 = 95%, p = 0.34) and improved student performance scores (SMD = 0.86, 95% CI [0.36, 1.37], p = 0.0008, I2 = 54%). However, AI tools did not provide a statistically significant advantage in learning success or ease of understanding (SMD = 0.16, 95% CI [−0.23, 0.55], p = 0.41, I2 = 55%), with sensitivity analysis suggesting result variability. Student perceptions of AI tools were overwhelmingly positive, with a pooled estimate of 1.0 (95% CI [0.92, 1.00], I2 = 0%). While AI tools enhance computer programming proficiency and efficiency, their effectiveness depends on factors such as tool functionality and course design. To maximise benefits and mitigate over-reliance, tailored pedagogical strategies are essential. This study underscores the transformative role of AI in computer programming education and provides evidence-based insights for optimising AI-assisted learning.
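The pooled effect sizes above are standardized mean differences (SMDs) with 95% confidence intervals. The review's analysis code is not shown; this sketch computes Cohen's d and its large-sample CI for one hypothetical two-group study, with illustrative numbers:

```python
import math

def smd_with_ci(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d standardized mean difference with an approximate 95% CI,
    as used when pooling continuous outcomes across studies."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    # Large-sample standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical study: AI-assisted group scores higher than control.
d, (lo, hi) = smd_with_ci(mean1=78.0, sd1=10.0, n1=30, mean2=70.0, sd2=10.0, n2=30)
print(round(d, 2))  # 0.8
```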
Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
Open Access Article
Driver Distraction Detection in Extreme Conditions Using Kolmogorov–Arnold Networks
by
János Hollósi, Gábor Kovács, Mykola Sysyn, Dmytro Kurhan, Szabolcs Fischer and Viktor Nagy
Computers 2025, 14(5), 184; https://doi.org/10.3390/computers14050184 - 9 May 2025
Abstract
Driver distraction can have severe safety consequences, particularly in public transportation. This paper presents a novel approach for detecting bus driver actions, such as mobile phone usage and interactions with passengers, using Kolmogorov–Arnold networks (KANs). The adversarial FGSM attack method was applied to assess the robustness of KANs in extreme driving conditions such as adverse weather, heavy traffic, and poor visibility. In this research, a custom dataset was created in collaboration with a partner company in the field of public transportation, allowing the efficiency of Kolmogorov–Arnold network solutions to be verified on real data. The results suggest that KANs can enhance driver distraction detection under challenging conditions, with improved resilience against adversarial attacks, particularly in low-complexity networks.
Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
Open Access Article
Domain- and Language-Adaptable Natural Language Interface for Property Graphs
by
Ioannis Tsampos and Emmanouil Marakakis
Computers 2025, 14(5), 183; https://doi.org/10.3390/computers14050183 - 9 May 2025
Abstract
Despite the growing adoption of Property Graph Databases, like Neo4j, interacting with them remains difficult for non-technical users due to the reliance on formal query languages. Natural Language Interfaces (NLIs) address this by translating natural language (NL) into Cypher. However, existing solutions are typically limited to high-resource languages; are difficult to adapt to evolving domains with limited annotated data; and often depend on Machine Learning (ML) approaches, including Large Language Models (LLMs), that demand substantial computational resources and advanced expertise for training and maintenance. We address these limitations by introducing a novel dependency-based, training-free, schema-agnostic Natural Language Interface (NLI) that converts NL queries into Cypher for querying Property Graphs. Our system employs a modular pipeline integrating entity and relationship extraction, Named Entity Recognition (NER), semantic mapping, triple creation via syntactic dependencies, and validation against an automatically extracted Schema Graph. The distinctive feature of this approach is the reduction in candidate entity pairs using syntactic analysis and schema validation, eliminating the need for candidate query generation and ranking. The schema-agnostic design enables adaptation across domains and languages. Our system supports single- and multi-hop queries, conjunctions, comparisons, aggregations, and complex questions through an explainable process. Evaluations on real-world queries demonstrate reliable translation results.
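The final stage of such a pipeline renders a schema-validated (subject, relation, object) triple as a Cypher pattern. The paper's actual translation code is not shown; this toy sketch, with entirely illustrative function and parameter names, shows the shape of that mapping:

```python
def triple_to_cypher(subj_label, rel_type, obj_label, obj_name):
    """Render one validated triple as a parameterized Cypher query.

    The node labels and relationship type are assumed to have already
    passed validation against the extracted Schema Graph."""
    query = (
        f"MATCH (s:{subj_label})-[:{rel_type}]->"
        f"(o:{obj_label} {{name: $obj_name}}) RETURN s"
    )
    return query, {"obj_name": obj_name}

query, params = triple_to_cypher("Person", "WORKS_AT", "Company", "Acme")
print(query)
# MATCH (s:Person)-[:WORKS_AT]->(o:Company {name: $obj_name}) RETURN s
```

Passing the object's name as a query parameter (`$obj_name`) rather than interpolating it keeps the generated Cypher safe against injection.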
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
Open Access Article
AFQSeg: An Adaptive Feature Quantization Network for Instance-Level Surface Crack Segmentation
by
Shaoliang Fang, Lu Lu, Zhu Lin, Zhanyu Yang and Shaosheng Wang
Computers 2025, 14(5), 182; https://doi.org/10.3390/computers14050182 - 9 May 2025
Abstract
Concrete surface crack detection plays a crucial role in infrastructure maintenance and safety. Deep learning-based methods have shown great potential in this task. However, under real-world conditions such as poor image quality, environmental interference, and complex crack patterns, existing models still face challenges in detecting fine cracks and often rely on large training parameters, limiting their practicality in complex environments. To address these issues, this paper proposes a crack detection model based on adaptive feature quantization, which primarily consists of a maximum soft pooling module, an adaptive crack feature quantization module, and a trainable crack post-processing module. Specifically, the maximum soft pooling module improves the continuity and integrity of detected cracks. The adaptive crack feature quantization module enhances the contrast between cracks and background features and strengthens the model’s focus on critical regions through spatial feature fusion. The trainable crack post-processing module incorporates edge-guided post-processing algorithms to correct false predictions and refine segmentation results. Experiments conducted on the Crack500 Road Crack Dataset show that the proposed model achieves notable improvements in detection accuracy and efficiency, with an average F1-score improvement of 2.81% and a precision gain of 2.20% over the baseline methods. In addition, the model significantly reduces computational cost, achieving a 78.5–88.7% reduction in parameter size and up to 96.8% improvement in inference speed, making it more efficient and deployable for real-world crack detection applications.
Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
Open Access Article
Parallel Sort Implementation and Evaluation in a Dataflow-Based Polymorphic Computing Architecture
by
David Hentrich, Erdal Oruklu and Jafar Saniie
Computers 2025, 14(5), 181; https://doi.org/10.3390/computers14050181 - 7 May 2025
Abstract
This work presents two variants of an odd–even sort algorithm that are implemented in a dataflow-based polymorphic computing architecture. The two odd–even sort algorithms are the “fully unrolled” variant and the “compact” variant. They are used as test kernels to evaluate the polymorphic computing architecture. Incidentally, these two odd–even sort algorithm variants can be readily adapted to ASIC (Application-Specific Integrated Circuit) and FPGA (Field Programmable Gate Array) designs. Additionally, two methods of placing the sort algorithms’ instructions in different configurations of the polymorphic computing architecture to achieve performance gains are furnished: a genetic-algorithm-based instruction placement method and a deterministic instruction placement method. Finally, a comparative study of the odd–even sort algorithm in several configurations of the polymorphic computing architecture is presented. The results show that scaling up the number of processing cores in the polymorphic architecture to the maximum amount of instantaneously exploitable parallelism improves the speed of the sort algorithms. Additionally, the sort algorithms that were placed in the polymorphic computing architecture configurations by the genetic instruction placement algorithm generally performed better than when they were placed by the deterministic instruction placement algorithm.
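Odd-even (transposition) sort suits dataflow hardware because every compare-exchange within a phase is independent and can run in parallel. The paper's dataflow implementation is not shown; this sequential sketch only illustrates the alternating phase structure that both the "fully unrolled" and "compact" variants exploit:

```python
def odd_even_sort(values):
    """Odd-even transposition sort on a copy of the input.

    n phases suffice for n elements; within each phase the
    compare-exchanges touch disjoint pairs and are independent."""
    a = list(values)
    n = len(a)
    for phase in range(n):
        # Even phases compare pairs (0,1),(2,3),...; odd phases (1,2),(3,4),...
        start = phase % 2
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```

The fixed, data-independent exchange pattern is also what makes the algorithm readily adaptable to ASIC and FPGA designs, as the abstract notes.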
Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
Open Access Article
Teachers’ Experiences with Flipped Classrooms in Senior Secondary Mathematics Instruction
by
Adebayo Akinyinka Omoniyi, Loyiso Currell Jita and Thuthukile Jita
Computers 2025, 14(5), 180; https://doi.org/10.3390/computers14050180 - 6 May 2025
Abstract
The quest for effective pedagogical practices in mathematics education has increasingly highlighted the flipped classroom model. This model has been shown to be particularly successful in higher education settings within developed countries, where resources and technological infrastructure are readily available. However, its implementation in secondary education, especially in developing nations, has been a critical area of investigation. Building on our earlier research, which found that students rated the flipped classroom model positively, this mixed-method study explores teachers’ experiences with implementing the model for mathematics instruction at the senior secondary level. Since teachers play a pivotal role as facilitators of this pedagogical approach, their understanding and perceptions of it can significantly impact its effectiveness. To gather insights into teachers’ experiences, this study employs both close-ended questionnaires and semi-structured interviews. A quantitative analysis of participants’ responses to the questionnaires, including mean scores, standard deviations and Kruskal–Wallis H tests, reveals that teachers generally record positive experiences teaching senior secondary mathematics through flipped classrooms, although there are notable differences in their experiences. A thematic analysis of qualitative interview responses highlights the specific support systems essential for teachers’ successful adoption of the flipped classroom model in senior secondary mathematics instruction.
Full article
Topics
Topic in
Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe Hou
Deadline: 31 May 2025
Topic in
Applied Sciences, Computers, Electronics, Sensors, Virtual Worlds, IJGI
Simulations and Applications of Augmented and Virtual Reality, 2nd Edition
Topic Editors: Radu Comes, Dorin-Mircea Popovici, Calin Gheorghe Dan Neamtu, Jing-Jing Fang
Deadline: 20 June 2025
Topic in
Applied Sciences, Automation, Computers, Electronics, Sensors, JCP, Mathematics
Intelligent Optimization, Decision-Making and Privacy Preservation in Cyber–Physical Systems
Topic Editors: Lijuan Zha, Jinliang Liu, Jian Liu
Deadline: 31 August 2025
Topic in
Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025

Special Issues
Special Issue in
Computers
Future Trends in Computer Programming Education
Guest Editor: Stelios Xinogalos
Deadline: 31 May 2025
Special Issue in
Computers
Harnessing the Blockchain Technology in Unveiling Futuristic Applications
Guest Editors: Raman Singh, Shantanu Pal
Deadline: 15 June 2025
Special Issue in
Computers
Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024
Guest Editor: Xuhui Chen
Deadline: 30 June 2025
Special Issue in
Computers
When Natural Language Processing Meets Machine Learning—Opportunities, Challenges and Solutions
Guest Editors: Lu Bai, Huiru Zheng, Zhibao Wang
Deadline: 30 June 2025