Computers, Volume 14, Issue 5 (May 2025) – 47 articles

Cover Story: This study revolutionizes tsunami occurrence forecasting by leveraging machine learning, specifically Random Forest and Logistic Regression models, trained on seismic data from 1995 to 2023. Achieving 90% accuracy, the research integrates diverse datasets (seismic, geospatial, and environmental) to predict tsunami-generating earthquakes with improved lead times. Exploratory data analysis reveals high-risk regions, offering insights for enhanced disaster preparedness. Future applications include real-time warning systems and resilient infrastructure planning, promising to mitigate the global impact of tsunamis.
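As a hedged illustration of the cover story's setup, the sketch below trains a Random Forest and a Logistic Regression on tabular seismic features with a binary tsunami label. The features are random stand-ins (think magnitude, depth, latitude, longitude), not the study's 1995 to 2023 catalog.

```python
# Minimal sketch, not the study's code: two classifiers on synthetic
# stand-in features with a binary "tsunami generated" label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))               # magnitude, depth, lat, lon
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for model in (RandomForestClassifier(n_estimators=300, random_state=0),
              LogisticRegression(max_iter=1000)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))
```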
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
19 pages, 1146 KiB  
Review
Exploring Deep Learning Model Opportunities for Cervical Cancer Screening in Vulnerable Public Health Regions
by Renan Chaves de Lima and Juarez Antonio Simões Quaresma
Computers 2025, 14(5), 202; https://doi.org/10.3390/computers14050202 - 21 May 2025
Viewed by 99
Abstract
Deep learning models offer innovative solutions for cervical cancer screening in vulnerable regions such as the Brazilian Amazon. These tools are particularly relevant in areas with limited access to healthcare services, where the high prevalence of the disease severely affects riverine and indigenous populations. Artificial intelligence can overcome the limitations of traditional screening methods, providing faster and more accurate diagnoses. This enables early disease detection and reduces mortality, improving equitable access to healthcare. Furthermore, the application of these technologies complements global efforts to eliminate cervical cancer, aligning with the WHO strategies. This review emphasizes the need for model adaptation to local realities, which is essential to ensure their effectiveness in low-infrastructure areas, reinforcing their potential to reduce health disparities and expand access to quality diagnostics. Full article

23 pages, 1311 KiB  
Article
Educational Robotics and Game-Based Interventions for Overcoming Dyscalculia: A Pilot Study
by Fabrizio Stasolla, Enza Curcio, Angela Borgese, Anna Passaro, Mariacarla Di Gioia, Antonio Zullo and Elvira Martini
Computers 2025, 14(5), 201; https://doi.org/10.3390/computers14050201 - 21 May 2025
Viewed by 83
Abstract
Dyscalculia is a specific learning disorder that affects numerical comprehension, arithmetic reasoning, and problem-solving skills, significantly impacting academic performance and daily life activities. Traditional teaching methods often fail to address the unique cognitive challenges faced by students with dyscalculia, highlighting the need for innovative educational approaches. Recent studies suggest that educational robotics and game-based learning can provide engaging and adaptive learning environments, enhancing numerical cognition and motivation in students with mathematical difficulties. Building on this evidence, the present pilot intervention was designed to improve calculation skills, problem-solving strategies, and overall engagement in mathematics. The study involved 73 secondary students, divided into three classes, among whom only a specific group had been diagnosed with dyscalculia. Data were collected through pre- and post-intervention assessments evaluating improvements in numerical accuracy, processing speed, and motivation. Preliminary findings indicate that robotics and gamification create an interactive, less anxiety-inducing learning experience, facilitating conceptual understanding and retention of mathematical concepts. The results suggest that these tools hold promise as supplementary interventions for children with dyscalculia. Future research should explore long-term effects, optimal implementation strategies, and their integration within formal educational settings. Full article
(This article belongs to the Special Issue Advanced Human–Robot Interaction 2024)

15 pages, 7036 KiB  
Article
Detection of Fiber-Flaw on Pill Surface Based on Lightweight Network SA-MGhost-DVGG
by Jipei Lou, Hongyi Wang, Haodong Liang and Ziwei Wu
Computers 2025, 14(5), 200; https://doi.org/10.3390/computers14050200 - 21 May 2025
Viewed by 59
Abstract
Fiber-flaw detection on pill surfaces is a critical yet challenging task in industrial pharmacy due to diverse defect characteristics. To overcome the limitations of traditional methods in accuracy and real-time performance, this study introduces SA-MGhost-DVGG, a novel lightweight network for enhanced detection. The proposed network integrates an MGhost module for reducing parameters and computational load, a mixed-channel spatial attention (SA) module to refine features specific to fiber regions, and depthwise separable convolutions (DepSepConv) for efficient dimensionality reduction while preserving feature information. Experimental evaluations demonstrate that SA-MGhost-DVGG achieves a mean detection accuracy of 99.01% with an average inference time of 2.23 ms per pill. The findings confirm that SA-MGhost-DVGG effectively balances high accuracy with computational efficiency, offering a robust solution for industrial applications. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
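As a rough PyTorch illustration of two building blocks named in the abstract, the sketch below implements a depthwise separable convolution and a Ghost-style module that generates part of its output with cheap depthwise convolutions. It follows the published GhostNet idea; the paper's exact MGhost and SA designs are not reproduced here.

```python
import torch
import torch.nn as nn

class DepSepConv(nn.Module):
    """Depthwise 3x3 followed by pointwise 1x1 convolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class GhostModule(nn.Module):
    """Half the output from a 1x1 conv, the rest from cheap depthwise ops."""
    def __init__(self, c_in, c_out, ratio=2):
        super().__init__()
        primary = c_out // ratio
        self.primary = nn.Conv2d(c_in, primary, 1)
        self.cheap = nn.Conv2d(primary, c_out - primary, 3,
                               padding=1, groups=primary)

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

x = torch.randn(1, 32, 64, 64)
print(GhostModule(32, 64)(x).shape, DepSepConv(32, 64)(x).shape)
```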

18 pages, 10587 KiB  
Article
M18K: A Multi-Purpose Real-World Dataset for Mushroom Detection, 3D Pose Estimation, and Growth Monitoring
by Abdollah Zakeri, Mulham Fawakherji, Jiming Kang, Bikram Koirala, Venkatesh Balan, Weihang Zhu, Driss Benhaddou and Fatima A. Merchant
Computers 2025, 14(5), 199; https://doi.org/10.3390/computers14050199 - 20 May 2025
Viewed by 152
Abstract
Automating agricultural processes holds significant promise for enhancing efficiency and sustainability in various farming practices. This paper contributes to the automation of agricultural processes by providing a dedicated mushroom detection dataset related to automated harvesting, 3D pose estimation, and growth monitoring of the button mushroom (Agaricus bisporus). With a total of 2000 images for object detection, instance segmentation, and 3D pose estimation (containing over 100,000 mushroom instances) and an additional 3838 images for yield estimation featuring eight mushroom scenes covering the complete growth period, it fills the gap in mushroom-specific datasets and serves as a benchmark for detection and instance segmentation as well as 3D pose estimation algorithms in smart mushroom agriculture. The dataset, featuring realistic growth environment scenarios with comprehensive 2D and 3D annotations, is assessed using advanced detection and instance segmentation algorithms. This paper details the dataset’s characteristics, presents detailed statistics on mushroom growth and yield, evaluates algorithmic performance, and, for broader applicability, makes all resources publicly available, including images, code, and trained models, via our GitHub repository (accessed on 22 March 2025). Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)

17 pages, 11121 KiB  
Article
Few-Shot Data Augmentation by Morphology-Constrained Latent Diffusion for Enhanced Nematode Recognition
by Xiong Ouyang, Jiayan Zhuang, Jianfeng Gu and Sichao Ye
Computers 2025, 14(5), 198; https://doi.org/10.3390/computers14050198 - 19 May 2025
Viewed by 96
Abstract
Plant-parasitic nematodes represent a significant biosecurity threat in cross-border plant quarantine, necessitating precise identification for effective border control. While deep learning (DL) models have demonstrated success in nematode image classification based on morphological features, the limited availability of high-quality samples and the species-specific nature of nematodes result in insufficient training data, which constrains model performance. Although generative models have shown promise in data augmentation, they often struggle to balance morphological fidelity and phenotypic diversity. This paper proposes a novel few-shot data augmentation framework based on a morphology-constrained latent diffusion model, which, for the first time, integrates morphological constraints into the latent diffusion process. By geometrically parameterizing nematode morphology, the proposed approach enhances topological fidelity in the generated images and addresses key limitations of traditional generative models in controlling biological shapes. This framework is designed to augment nematode image datasets and improve classification performance under limited data conditions. The framework consists of three key components: First, we incorporate a fine-tuning strategy that preserves the generalization capability of the model in few-shot settings. Second, we extract morphological constraints from nematode images using edge detection and a moving least squares method, capturing key structural details. Finally, we embed these constraints into the latent space of the diffusion model, ensuring generated images maintain both fidelity and diversity. Experimental results demonstrate that our approach significantly enhances classification accuracy. For imbalanced datasets, the Top-1 accuracy of multiple classification models improved by 7.34–14.66% compared to models trained without augmentation, and by 2.0–5.67% compared to models using traditional data augmentation. Additionally, when replacing up to 25% of real images with generated ones in a balanced dataset, model performance remained nearly unchanged, indicating the robustness and effectiveness of the method. Ablation experiments demonstrate that the morphology-guided strategy achieves superior image quality compared to both unconstrained and edge-based constraint methods, with a Fréchet Inception Distance of 12.95 and an Inception Score of 1.21 ± 0.057. These results indicate that the proposed method effectively balances morphological fidelity and phenotypic diversity in image generation. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
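The constraint-extraction step can be pictured with a short OpenCV sketch: Canny edges, then the largest external contour as a geometric parameterization of the worm's outline. This is a deliberate simplification on a synthetic image; the paper additionally applies a moving least squares method, omitted here.

```python
import cv2
import numpy as np

# Synthetic stand-in for a nematode micrograph: a bright curved band
canvas = np.zeros((128, 128), dtype=np.uint8)
cv2.ellipse(canvas, (64, 64), (40, 12), 30, 0, 300, 255, thickness=4)

edges = cv2.Canny(canvas, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
outline = max(contours, key=cv2.contourArea)   # (N, 1, 2) boundary points
print("constraint points:", outline.reshape(-1, 2).shape)
```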

35 pages, 13580 KiB  
Article
A Novel MaxViT Model for Accelerated and Precise Soybean Leaf and Seed Disease Identification
by Al Shahriar Uddin Khondakar Pranta, Hasib Fardin, Jesika Debnath, Amira Hossain, Anamul Haque Sakib, Md. Redwan Ahmed, Rezaul Haque, Ahmed Wasif Reza and M. Ali Akber Dewan
Computers 2025, 14(5), 197; https://doi.org/10.3390/computers14050197 - 18 May 2025
Viewed by 199
Abstract
Timely diagnosis of soybean diseases is essential to protect yields and limit global economic loss, yet current deep learning approaches suffer from small, imbalanced datasets, single-organ focus, and limited interpretability. We propose MaxViT-XSLD (MaxViT XAI-Seed–Leaf-Diagnostic), a Vision Transformer that integrates multiaxis attention with MBConv layers to jointly classify soybean leaf and seed diseases while remaining lightweight and explainable. Two benchmark datasets were upscaled through elastic deformation, Gaussian noise, brightness shifts, rotation, and flipping, enlarging ASDID from 10,722 to 16,000 images (eight classes) and the SD set from 5513 to 10,000 images (five classes). Under identical augmentation and hyperparameters, MaxViT-XSLD delivered 99.82% accuracy on ASDID and 99.46% on SD, surpassing competitive ViT, CNN, and lightweight SOTA variants. High PR-AUC and MCC values, confirmed via 10-fold stratified cross-validation and Wilcoxon tests, demonstrate robust generalization across data splits. Explainable AI (XAI) techniques further enhanced interpretability by highlighting biologically relevant features influencing predictions. Its modular design also enables future model compression for edge deployment in resource-constrained settings. Finally, we deploy the model in SoyScan, a real-time web tool that streams predictions and visual explanations to growers and agronomists. These findings establish a scalable, interpretable system for precision crop health monitoring and lay the groundwork for edge-oriented, multimodal agricultural diagnostics. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
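The augmentation recipe listed in the abstract (elastic deformation, Gaussian noise, brightness shifts, rotation, flipping) maps naturally onto the albumentations library; the probabilities and limits below are placeholder choices, not the authors' settings.

```python
import albumentations as A
import numpy as np

augment = A.Compose([
    A.ElasticTransform(p=0.5),
    A.GaussNoise(p=0.3),
    A.RandomBrightnessContrast(p=0.5),
    A.Rotate(limit=30, p=0.5),
    A.HorizontalFlip(p=0.5),
])

leaf = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # dummy image
print(augment(image=leaf)["image"].shape)
```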

32 pages, 4255 KiB  
Article
Improving Real-Time Economic Decisions Through Edge Computing: Implications for Financial Contagion Risk Management
by Ștefan Ionescu, Camelia Delcea and Ionuț Nica
Computers 2025, 14(5), 196; https://doi.org/10.3390/computers14050196 - 18 May 2025
Viewed by 142
Abstract
In the face of accelerating digitalization and growing systemic vulnerabilities, the ability to make accurate, real-time economic decisions has become a critical capability for financial and institutional stability. This study investigates how edge computing infrastructures influence decision-making accuracy, responsiveness, and risk containment in economic systems, particularly under the threat of financial contagion. A synthetic dataset simulating the interaction between economic indicators and edge performance metrics was constructed to emulate real-time decision environments. Composite indicators were developed to quantify key dynamics, and a range of machine learning models, including XGBoost, Random Forest, and Neural Networks, were applied to classify economic decision outcomes. The results indicate that low latency, efficient resource use, and balanced workload distribution are significantly associated with higher decision quality. XGBoost outperformed all other models, achieving 97% accuracy and a ROC-AUC of 0.997. The findings suggest that edge computing performance metrics can act as predictive signals for systemic fragility and may be integrated into early warning systems for financial risk management. This study contributes to the literature by offering a novel framework for modeling the economic implications of edge intelligence and provides policy insights for designing resilient, real-time financial infrastructures. Full article
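As a hedged sketch of the classification setup, the snippet below fits a gradient-boosted-tree classifier on random stand-in features (think latency, resource utilization, workload balance) and reports ROC-AUC, the headline metric above; it is not the paper's dataset or tuned model.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))          # latency, resource use, workload, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```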

21 pages, 5452 KiB  
Article
HFC-YOLO11: A Lightweight Model for the Accurate Recognition of Tiny Remote Sensing Targets
by Jinyin Bai, Wei Zhu, Zongzhe Nie, Xin Yang, Qinglin Xu and Dong Li
Computers 2025, 14(5), 195; https://doi.org/10.3390/computers14050195 - 18 May 2025
Viewed by 231
Abstract
To address critical challenges in tiny object detection within remote sensing imagery, including resolution–semantic imbalance, inefficient feature fusion, and insufficient localization accuracy, this study proposes Hierarchical Feature Compensation You Only Look Once 11 (HFC-YOLO11), a lightweight detection model based on hierarchical feature compensation. Firstly, by reconstructing the feature pyramid architecture, we preserve the high-resolution P2 feature layer in shallow networks to enhance the fine-grained feature representation for tiny targets, while eliminating redundant P5 layers to reduce the computational complexity. In addition, a depth-aware differentiated module design strategy is proposed: GhostBottleneck modules are adopted in shallow layers to improve its feature reuse efficiency, while standard Bottleneck modules are maintained in deep layers to strengthen the semantic feature extraction. Furthermore, an Extended Intersection over Union loss function (EIoU) is developed, incorporating boundary alignment penalty terms and scale-adaptive weight mechanisms to optimize the sub-pixel-level localization accuracy. Experimental results on the AI-TOD and VisDrone2019 datasets demonstrate that the improved model achieves mAP50 improvements of 3.4% and 2.7%, respectively, compared to the baseline YOLO11s, while reducing its parameters by 27.4%. Ablation studies validate the balanced performance of the hierarchical feature compensation strategy in the preservation of resolution and computational efficiency. Visualization results confirm an enhanced robustness against complex background interference. HFC-YOLO11 exhibits superior accuracy and generalization capability in tiny object detection tasks, effectively meeting practical application requirements for tiny object recognition. Full article
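For reference, the sketch below implements the standard EIoU formulation for axis-aligned boxes in (x1, y1, x2, y2) form: one minus IoU, plus penalties on center distance and on width/height differences relative to the smallest enclosing box. The paper's variant adds boundary-alignment terms and scale-adaptive weights on top of this idea, which are not reproduced here.

```python
import torch

def eiou_loss(pred, target, eps=1e-7):
    # Intersection over union
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box, center distance, width/height differences
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    rho2 = (((pred[:, :2] + pred[:, 2:]) / 2
             - (target[:, :2] + target[:, 2:]) / 2) ** 2).sum(dim=1)
    dw = (pred[:, 2] - pred[:, 0]) - (target[:, 2] - target[:, 0])
    dh = (pred[:, 3] - pred[:, 1]) - (target[:, 3] - target[:, 1])
    return (1 - iou + rho2 / (cw**2 + ch**2 + eps)
            + dw**2 / (cw**2 + eps) + dh**2 / (ch**2 + eps))

print(eiou_loss(torch.tensor([[0., 0., 10., 10.]]),
                torch.tensor([[1., 1., 11., 11.]])))
```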

28 pages, 340 KiB  
Review
Revolutionizing Data Exchange Through Intelligent Automation: Insights and Trends
by Yeison Nolberto Cardona-Álvarez, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(5), 194; https://doi.org/10.3390/computers14050194 - 17 May 2025
Viewed by 121
Abstract
This review paper presents a comprehensive analysis of the evolving landscape of data exchange, with a particular focus on the transformative role of emerging technologies such as blockchain, field-programmable gate arrays (FPGAs), and artificial intelligence (AI). We explore how the integration of these technologies into data management systems enhances operational efficiency, precision, and security through intelligent automation and advanced machine learning techniques. The paper also critically examines the key challenges facing data exchange today, including issues of interoperability, the demand for real-time processing, and the stringent requirements of regulatory compliance. Furthermore, it underscores the urgent need for robust ethical frameworks to guide the responsible use of AI and to protect data privacy. In addressing these challenges, the paper calls for innovative research aimed at overcoming current limitations in scalability and security. It advocates for interdisciplinary approaches that harmonize technological innovation with legal and ethical considerations. Ultimately, this review highlights the pivotal role of collaboration among researchers, industry stakeholders, and policymakers in fostering a digitally inclusive future—one that strengthens data exchange practices while upholding global standards of fairness, transparency, and accountability. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)

18 pages, 922 KiB  
Article
Accounting Support Using Artificial Intelligence for Bank Statement Classification
by Marco Lecci and Thomas Hanne
Computers 2025, 14(5), 193; https://doi.org/10.3390/computers14050193 - 15 May 2025
Viewed by 174
Abstract
Artificial Intelligence is a disruptive technology that is revolutionizing the accounting sector, e.g., by reducing costs, detecting fraud, and generating reports. However, the manual maintenance of booking ledgers remains a significant challenge, particularly for small and medium-sized enterprises. The usage of AI technologies in this area is rarely considered in the literature despite a significant interest in using AI for other accounting-related activities. Our study, which was conducted during 2023–2024, utilizes natural language processing and machine learning to construct a predictive model that accurately matches bank transaction statements with accounting records. The study employs Feedforward Neural Networks and Support Vector Machines with various settings and compares their performance with that of previous models embedded in similar predictive tasks. Additionally, Contofox, a rule-based software system capable of classifying accounting records via manually created rules that match bank statements with accounting records, is used as a baseline. Furthermore, this study evaluates the business value of the model through an interview with an accounting expert, highlighting the potential benefits of artifacts in enhancing accounting processes. Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
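The underlying task can be pictured with a tiny scikit-learn sketch: TF-IDF features over free-text bank-statement lines feed a linear SVM that predicts a ledger account. The statement texts and account labels are invented placeholders, and the paper's Feedforward Neural Network variant is omitted.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

statements = ["SBB ticket Zurich-Basel", "Migros supermarket purchase",
              "Salary payment ACME AG", "Swisscom invoice March"]
accounts = ["travel", "groceries", "salary", "telecom"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(statements, accounts)
print(model.predict(["Coop supermarket purchase"]))
```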

17 pages, 1468 KiB  
Article
A Case Study of Computational Thinking Analysis Using SOLO Taxonomy in Scientific–Mathematical Learning
by Alejandro De la Hoz Serrano, Andrés Álvarez-Murillo, Eladio José Fernández Torrado, Miguel Ángel González Maestre and Lina Viviana Melo Niño
Computers 2025, 14(5), 192; https://doi.org/10.3390/computers14050192 - 15 May 2025
Viewed by 250
Abstract
Education nowadays requires a certain variety of resources that allow for the acquisition of 21st-century skills, including computational thinking. Educational robotics emerges as a digital resource that supports the development of these skills in both male and female students across different educational stages. However, in-depth evaluations are needed that analyze the acquisition of Computational Thinking skills in pre-service teachers, especially when scientific and mathematical content learning programs are designed. This study aims to analyze Computational Thinking skills using the SOLO taxonomy, with an approach to science and mathematics learning, through an intervention based on programming and Educational Robotics. A quasi-experimental design was used on a total sample of 116 pre-service teachers. The SOLO taxonomy categorization was used to associate each level of the taxonomy with the computational concepts analyzed through a quantitative questionnaire. The taxonomy levels associated with Computational Thinking skills correspond to uni-structural and multi-structural levels. Males presented better results before the intervention, while subsequently, females presented better levels of Computational Thinking, as well as a greater association with the higher complexity level of learning analyzed. In turn, there was a trend between the levels of the SOLO taxonomy and computational concepts, so that an increase in skill for a concept occurs similarly at both the uni-structural level and the multi-structural level. The SOLO taxonomy is presented as a proper tool for learning assessment since it allows for a more detailed understanding of the quality of students’ learning. Therefore, the SOLO taxonomy serves as a valuable resource in the evaluation of Computational Thinking skills. Full article

43 pages, 2755 KiB  
Systematic Review
Analyzing Visitor Behavior to Enhance Personalized Experiences in Smart Museums: A Systematic Literature Review
by Rosen Ivanov and Victoria Velkova
Computers 2025, 14(5), 191; https://doi.org/10.3390/computers14050191 - 14 May 2025
Viewed by 335
Abstract
This systematic review provides an analysis of information gathered from 33 chosen publications during the past decade. The analysis reveals the primary methodologies applied and identifies the visitor behaviors that enable personalized content delivery. Statistical and Data Analysis is the predominant methodology, present in 97% of the reviewed publications; AI and Machine Learning (63.6%) and Mobile/Interactive Technologies (60.6%) are most frequently paired with it. Behavioral Analytics Platforms and Mobile/Wearable Devices are the most used technologies (42.4%) for delivering personalized content, and 39.4% of publications utilize Location Tracking Systems. Visitor behavior analysis most frequently focuses on Interactive Engagement and Movement Patterns (72.7%), ahead of Learning Patterns and Physical Positioning (63.6%). The behavioral analysis of Group Dynamics (27.3%) and Emotional Response (18.2%) remains the least common practice when museums personalize their content, despite the significance of social interaction analysis among visitors. The leading content personalization methods currently include real-time personalization systems combined with AI-driven systems and location-based technologies. Personalized content delivery systems face challenges including privacy protection and scalability issues paired with expensive implementation costs, which especially affect smaller museums. Researchers should explore how new technologies, such as virtual reality, augmented reality, and advanced biometric systems, can be integrated into future developments. Full article

11 pages, 3574 KiB  
Article
Energy Transitions over Five Decades: A Statistical Perspective on Global Energy Trends
by Francina Pali, Roschlynn Dsouza, Yeeon Ryu, Jennifer Oishee, Joel Aikkarakudiyil, Manali Avinash Gaikwad, Payam Norouzzadeh, Steven Buckner and Bahareh Rahmani
Computers 2025, 14(5), 190; https://doi.org/10.3390/computers14050190 - 13 May 2025
Viewed by 285
Abstract
This study analyzes global energy trends from January 1973 to November 2022, using the “World Energy Statistics” dataset from Kaggle, which includes data on the production, consumption, import, and export of fossil fuels, nuclear energy, and renewable energy. The analysis employs statistical techniques such as correlation analysis, quantile–quantile (Q–Q) plots, seasonal decomposition, and seasonal autoregressive integrated moving average (SARIMA) modeling. The results reveal strong positive correlations between nuclear energy production and consumption, as well as between renewable energy production and consumption. Seasonal decomposition highlights annual patterns in renewable energy use and a declining trend in fossil fuel dependency. SARIMA modeling forecasts continued growth in renewable energy consumption and a gradual reduction in fossil fuel reliance. These findings provide critical insights into long-term energy patterns and offer data-driven implications for global energy policy and strategic planning. Full article
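A minimal statsmodels rendering of the SARIMA step looks as follows, on a synthetic monthly series spanning January 1973 to November 2022; the (p, d, q)(P, D, Q, s) orders are placeholder choices, not the paper's fitted model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

idx = pd.date_range("1973-01", periods=599, freq="MS")  # Jan 1973 - Nov 2022
trend = np.linspace(10, 40, 599)                        # synthetic stand-in
season = 5 * np.sin(np.arange(599) * 2 * np.pi / 12)
y = pd.Series(trend + season, index=idx)

fit = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
print(fit.forecast(steps=12))                           # 12-month forecast
```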

35 pages, 2630 KiB  
Article
AHA: Design and Evaluation of Compute-Intensive Hardware Accelerators for AMD-Xilinx Zynq SoCs Using HLS IP Flow
by David Berrazueta-Mena and Byron Navas
Computers 2025, 14(5), 189; https://doi.org/10.3390/computers14050189 - 13 May 2025
Viewed by 289
Abstract
The increasing complexity of algorithms in embedded applications has amplified the demand for high-performance computing. Heterogeneous embedded systems, particularly FPGA-based systems-on-chip (SoCs), enhance execution speed by integrating hardware accelerator intellectual property (IP) cores. However, traditional low-level IP-core design presents significant challenges. High-level synthesis (HLS) offers a promising alternative, enabling efficient FPGA development through high-level programming languages. Yet, effective methodologies for designing and evaluating heterogeneous FPGA-based SoCs remain crucial. This study surveys HLS tools and design concepts and presents the development of the AHA IP cores, a set of five benchmarking accelerators for rapid Zynq-based SoC evaluation. These accelerators target compute-intensive tasks, including matrix multiplication, Fast Fourier Transform (FFT), Advanced Encryption Standard (AES), Back-Propagation Neural Network (BPNN), and Artificial Neural Network (ANN). We establish a streamlined design flow using AMD-Xilinx tools for rapid prototyping and testing FPGA-based heterogeneous platforms. We outline criteria for selecting algorithms to improve speed and resource efficiency in HLS design. Our performance evaluation across various configurations highlights performance–resource trade-offs and demonstrates that ANN and BPNN achieve significant parallelism, while AES optimization increases resource utilization the most. Matrix multiplication shows strong optimization potential, whereas FFT is constrained by data dependencies. Full article

27 pages, 11866 KiB  
Article
A Novel Autonomous Robotic Vehicle-Based System for Real-Time Production and Safety Control in Industrial Environments
by Athanasios Sidiropoulos, Dimitrios Konstantinidis, Xenofon Karamanos, Theofilos Mastos, Konstantinos Apostolou, Theocharis Chatzis, Maria Papaspyropoulou, Kalliroi Marini, Georgios Karamitsos, Christina Theodoridou, Andreas Kargakos, Matina Vogiatzi, Angelos Papadopoulos, Dimitrios Giakoumis, Dimitrios Bechtsis, Kosmas Dimitropoulos and Dimitrios Vlachos
Computers 2025, 14(5), 188; https://doi.org/10.3390/computers14050188 - 12 May 2025
Viewed by 203
Abstract
Industry 4.0 has revolutionized the way companies manufacture, improve, and distribute their products through the use of new technologies, such as artificial intelligence, robotics, and machine learning. Autonomous Mobile Robots (AMRs) in particular have gained a lot of attention, supporting workers with daily industrial tasks and boosting overall performance by delivering vital information about the status of the production line. To this end, this work presents the novel Q-CONPASS system that aims to introduce AMRs in production lines with the ultimate goal of gathering important information that can assist in production and safety control. More specifically, the Q-CONPASS system is based on an AMR equipped with a plethora of machine learning algorithms that enable the vehicle to safely navigate in a dynamic industrial environment, avoiding humans, moving machines, and stationary objects while performing important tasks. These tasks include the identification of the following: (i) missing objects during product packaging and (ii) extreme skeletal poses of workers that can lead to musculoskeletal disorders. Finally, the Q-CONPASS system was validated in a real-life environment (i.e., the lift manufacturing industry), showcasing the importance of collecting and processing data in real-time to boost productivity and improve the well-being of workers. Full article

24 pages, 58563 KiB  
Article
Interpretable Deep Learning for Diabetic Retinopathy: A Comparative Study of CNN, ViT, and Hybrid Architectures
by Weijie Zhang, Veronika Belcheva and Tatiana Ermakova
Computers 2025, 14(5), 187; https://doi.org/10.3390/computers14050187 - 12 May 2025
Viewed by 356
Abstract
Diabetic retinopathy (DR) is a leading cause of vision impairment worldwide, requiring early detection for effective treatment. Deep learning models have been widely used for automated DR classification, with Convolutional Neural Networks (CNNs) being the most established approach. Recently, Vision Transformers (ViTs) have shown promise, but a direct comparison of their performance and interpretability remains limited. Additionally, hybrid models that combine CNN and transformer-based architectures have not been extensively studied. This work systematically evaluates CNNs (ResNet-50), ViTs (Vision Transformer and SwinV2-Tiny), and hybrid models (Convolutional Vision Transformer, LeViT-256, and CvT-13) on DR classification using publicly available retinal image datasets. The models are assessed based on classification accuracy and interpretability, applying Grad-CAM and Attention-Rollout to analyze decision-making patterns. Results indicate that hybrid models outperform both standalone CNNs and ViTs, achieving a better balance between local feature extraction and global context awareness. The best-performing model (CvT-13) achieved a Quadratic Weighted Kappa (QWK) score of 0.84 and an AUC of 0.93 on the test set. Interpretability analysis shows that CNNs focus on fine-grained lesion details, while ViTs exhibit broader but less localized attention. These findings provide valuable insights for optimizing deep learning models in medical imaging, supporting the development of clinically viable AI-driven DR screening systems. Full article
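The Quadratic Weighted Kappa reported above is available directly in scikit-learn; a toy computation with made-up DR grades (0 to 4) is shown below.

```python
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 3, 4, 2, 1, 0]   # hypothetical reference grades
y_pred = [0, 1, 2, 2, 4, 3, 1, 0]   # hypothetical model predictions
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```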

20 pages, 4795 KiB  
Article
Test-Time Training with Adaptive Memory for Traffic Accident Severity Prediction
by Duo Peng and Weiqi Yan
Computers 2025, 14(5), 186; https://doi.org/10.3390/computers14050186 - 10 May 2025
Viewed by 239
Abstract
Traffic accident prediction is essential for improving road safety and optimizing intelligent transportation systems. However, deep learning models often struggle with distribution shifts and class imbalance, leading to degraded performance in real-world applications. While distribution shift is a common challenge in machine learning, Transformer-based models—despite their ability to capture long-term dependencies—often lack mechanisms for dynamic adaptation during inferencing. In this paper, we propose a TTT-Enhanced Transformer that incorporates Test-Time Training (TTT), enabling the model to refine its parameters during inferencing through a self-supervised auxiliary task. To further boost performance, an Adaptive Memory Layer (AML), a Feature Pyramid Network (FPN), Class-Balanced Attention (CBA), and Focal Loss are integrated to address multi-scale, long-term, and imbalance-related challenges. Our experimental results show that our model achieved an overall accuracy of 96.86% and a severe accident recall of 95.8%, outperforming the strongest Transformer baseline by 5.65% in accuracy and 9.6% in recall. The results of our confusion matrix and ROC analyses confirm our model’s superior classification balance and discriminatory power. These findings highlight the potential of our approach in enhancing real-time adaptability and robustness under shifting data distributions and class imbalances in intelligent transportation systems. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
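The core TTT pattern can be sketched in a few lines of PyTorch: before predicting on a test batch, take one gradient step on a self-supervised auxiliary loss (here, input reconstruction) that shares the encoder with the main head. This shows only the generic pattern; the paper's AML, FPN, and CBA components are not modeled.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
classifier = nn.Linear(32, 4)     # main task: four severity classes
decoder = nn.Linear(32, 16)       # auxiliary task: reconstruct the input

opt = torch.optim.SGD(encoder.parameters(), lr=1e-3)

def predict_with_ttt(x):
    # Self-supervised adaptation step on the unlabeled test batch
    opt.zero_grad()
    aux_loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    aux_loss.backward()
    opt.step()
    with torch.no_grad():
        return classifier(encoder(x)).argmax(dim=1)

print(predict_with_ttt(torch.randn(8, 16)))
```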

48 pages, 6522 KiB  
Systematic Review
The Influence of Artificial Intelligence Tools on Learning Outcomes in Computer Programming: A Systematic Review and Meta-Analysis
by Manal Alanazi, Ben Soh, Halima Samra and Alice Li
Computers 2025, 14(5), 185; https://doi.org/10.3390/computers14050185 - 9 May 2025
Viewed by 392
Abstract
This systematic review and meta-analysis investigates the impact of artificial intelligence (AI) tools, including ChatGPT 3.5 and GitHub Copilot, on learning outcomes in computer programming courses. A total of 35 controlled studies published between 2020 and 2024 were analysed to assess the effectiveness of AI-assisted learning. The results indicate that students using AI tools outperformed those without such aids. The meta-analysis findings revealed that AI-assisted learning significantly reduced task completion time (SMD = −0.69, 95% CI [−2.13, −0.74], I² = 95%, p = 0.34) and improved student performance scores (SMD = 0.86, 95% CI [0.36, 1.37], p = 0.0008, I² = 54%). However, AI tools did not provide a statistically significant advantage in learning success or ease of understanding (SMD = 0.16, 95% CI [−0.23, 0.55], p = 0.41, I² = 55%), with sensitivity analysis suggesting result variability. Student perceptions of AI tools were overwhelmingly positive, with a pooled estimate of 1.0 (95% CI [0.92, 1.00], I² = 0%). While AI tools enhance computer programming proficiency and efficiency, their effectiveness depends on factors such as tool functionality and course design. To maximise benefits and mitigate over-reliance, tailored pedagogical strategies are essential. This study underscores the transformative role of AI in computer programming education and provides evidence-based insights for optimising AI-assisted learning. Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
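The pooled effects above are standardized mean differences; for orientation, a single study's SMD (Cohen's d with the Hedges' g small-sample correction) can be computed from group summary statistics as follows, with made-up numbers.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    return d * (1 - 3 / (4 * (n1 + n2) - 9))   # small-sample bias correction

# e.g., AI-assisted group vs. control on a performance score (invented)
print(hedges_g(m1=78.0, sd1=10.0, n1=30, m2=70.0, sd2=11.0, n2=30))
```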

25 pages, 7588 KiB  
Article
Driver Distraction Detection in Extreme Conditions Using Kolmogorov–Arnold Networks
by János Hollósi, Gábor Kovács, Mykola Sysyn, Dmytro Kurhan, Szabolcs Fischer and Viktor Nagy
Computers 2025, 14(5), 184; https://doi.org/10.3390/computers14050184 - 9 May 2025
Viewed by 231
Abstract
Driver distraction can have severe safety consequences, particularly in public transportation. This paper presents a novel approach for detecting bus driver actions, such as mobile phone usage and interactions with passengers, using Kolmogorov–Arnold networks (KANs). The adversarial FGSM attack method was applied to assess the robustness of KANs in extreme driving conditions, like adverse weather, high-traffic situations, and bad visibility conditions. In this research, a custom dataset was used in collaboration with a partner company in the field of public transportation. This allows the efficiency of Kolmogorov–Arnold network solutions to be verified using real data. The results suggest that KANs can enhance driver distraction detection under challenging conditions, with improved resilience against adversarial attacks, particularly in low-complexity networks. Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
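FGSM itself is compact enough to show directly: perturb each input pixel one epsilon-step in the sign of the loss gradient. The dummy classifier and epsilon below are placeholders, not the paper's KAN models or settings.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 5))  # dummy net
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, label, eps=0.03):
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), label).backward()
    # One step in the direction that maximally increases the loss
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x_adv = fgsm(torch.rand(1, 3, 64, 64), torch.tensor([2]))
print(x_adv.shape)
```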

33 pages, 2131 KiB  
Article
Domain- and Language-Adaptable Natural Language Interface for Property Graphs
by Ioannis Tsampos and Emmanouil Marakakis
Computers 2025, 14(5), 183; https://doi.org/10.3390/computers14050183 - 9 May 2025
Viewed by 297
Abstract
Despite the growing adoption of Property Graph Databases, like Neo4j, interacting with them remains difficult for non-technical users due to the reliance on formal query languages. Natural Language Interfaces (NLIs) address this by translating natural language (NL) into Cypher. However, existing solutions are typically limited to high-resource languages; are difficult to adapt to evolving domains with limited annotated data; and often depend on Machine Learning (ML) approaches, including Large Language Models (LLMs), that demand substantial computational resources and advanced expertise for training and maintenance. We address these limitations by introducing a novel dependency-based, training-free, schema-agnostic Natural Language Interface (NLI) that converts NL queries into Cypher for querying Property Graphs. Our system employs a modular pipeline integrating entity and relationship extraction, Named Entity Recognition (NER), semantic mapping, triple creation via syntactic dependencies, and validation against an automatically extracted Schema Graph. The distinctive feature of this approach is the reduction in candidate entity pairs using syntactic analysis and schema validation, eliminating the need for candidate query generation and ranking. The schema-agnostic design enables adaptation across domains and languages. Our system supports single- and multi-hop queries, conjunctions, comparisons, aggregations, and complex questions through an explainable process. Evaluations on real-world queries demonstrate reliable translation results. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
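A toy sketch of the dependency-based idea: extract a subject-verb-object triple from a question with spaCy and emit a Cypher pattern. The real system layers NER, semantic mapping, and schema validation on top, and the label and relationship naming below is purely hypothetical.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

def question_to_cypher(text):
    doc = nlp(text)
    subj = next(t for t in doc if t.dep_ in ("nsubj", "nsubjpass"))
    obj = next(t for t in doc if t.dep_ in ("dobj", "pobj"))
    verb = subj.head.lemma_
    return (f"MATCH (a:{subj.lemma_.capitalize()})-[:{verb.upper()}]->"
            f"(b:{obj.lemma_.capitalize()}) RETURN a, b")

print(question_to_cypher("Which actors starred in movies?"))
```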

14 pages, 4391 KiB  
Article
AFQSeg: An Adaptive Feature Quantization Network for Instance-Level Surface Crack Segmentation
by Shaoliang Fang, Lu Lu, Zhu Lin, Zhanyu Yang and Shaosheng Wang
Computers 2025, 14(5), 182; https://doi.org/10.3390/computers14050182 - 9 May 2025
Viewed by 225
Abstract
Concrete surface crack detection plays a crucial role in infrastructure maintenance and safety. Deep learning-based methods have shown great potential in this task. However, under real-world conditions such as poor image quality, environmental interference, and complex crack patterns, existing models still face challenges in detecting fine cracks and often rely on large training parameters, limiting their practicality in complex environments. To address these issues, this paper proposes a crack detection model based on adaptive feature quantization, which primarily consists of a maximum soft pooling module, an adaptive crack feature quantization module, and a trainable crack post-processing module. Specifically, the maximum soft pooling module improves the continuity and integrity of detected cracks. The adaptive crack feature quantization module enhances the contrast between cracks and background features and strengthens the model’s focus on critical regions through spatial feature fusion. The trainable crack post-processing module incorporates edge-guided post-processing algorithms to correct false predictions and refine segmentation results. Experiments conducted on the Crack500 Road Crack Dataset show that the proposed model achieves notable improvements in detection accuracy and efficiency, with an average F1-score improvement of 2.81% and a precision gain of 2.20% over the baseline methods. In addition, the model significantly reduces computational cost, achieving a 78.5–88.7% reduction in parameter size and up to 96.8% improvement in inference speed, making it more efficient and deployable for real-world crack detection applications. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
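One way to realize a "maximum soft pooling" operation, following the published SoftPool idea rather than the paper's exact module, is to pool with weights proportional to exp(activation), so strong crack responses dominate smoothly instead of being hard-selected:

```python
import torch
import torch.nn.functional as F

def soft_pool2d(x, kernel_size=2):
    w = torch.exp(x)                       # exponential activation weights
    return (F.avg_pool2d(x * w, kernel_size)
            / F.avg_pool2d(w, kernel_size))

x = torch.randn(1, 8, 32, 32)
print(soft_pool2d(x).shape)                # (1, 8, 16, 16)
```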

24 pages, 1223 KiB  
Article
Parallel Sort Implementation and Evaluation in a Dataflow-Based Polymorphic Computing Architecture
by David Hentrich, Erdal Oruklu and Jafar Saniie
Computers 2025, 14(5), 181; https://doi.org/10.3390/computers14050181 - 7 May 2025
Viewed by 125
Abstract
This work presents two variants of an odd–even sort algorithm that are implemented in a dataflow-based polymorphic computing architecture. The two odd–even sort algorithms are the “fully unrolled” variant and the “compact” variant. They are used as test kernels to evaluate the polymorphic computing architecture. Incidentally, these two odd–even sort algorithm variants can be readily adapted to ASIC (Application-Specific Integrated Circuit) and FPGA (Field Programmable Gate Array) designs. Additionally, two methods of placing the sort algorithms’ instructions in different configurations of the polymorphic computing architecture to achieve performance gains are furnished: a genetic-algorithm-based instruction placement method and a deterministic instruction placement method. Finally, a comparative study of the odd–even sort algorithm in several configurations of the polymorphic computing architecture is presented. The results show that scaling up the number of processing cores in the polymorphic architecture to the maximum amount of instantaneously exploitable parallelism improves the speed of the sort algorithms. Additionally, the sort algorithms that were placed in the polymorphic computing architecture configurations by the genetic instruction placement algorithm generally performed better than when they were placed by the deterministic instruction placement algorithm. Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
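The "compact" variant of odd-even (transposition) sort is short enough to state in full: each pass alternates compare-exchange steps over odd and even index pairs, exactly the structure that makes the algorithm easy to unroll into parallel hardware or dataflow instructions. A plain-Python rendering, not the paper's dataflow implementation:

```python
def odd_even_sort(a):
    a = list(a)
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for phase in (1, 0):                  # odd pass, then even pass
            for i in range(phase, n - 1, 2):  # independent compare-exchanges
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
    return a

print(odd_even_sort([5, 1, 4, 2, 8, 0]))
```

The "fully unrolled" variant instead fixes the schedule at n alternating phases and drops the early-exit check, trading extra comparisons for a fixed, fully parallelizable sequence of compare-exchange operations.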

28 pages, 1136 KiB  
Article
Teachers’ Experiences with Flipped Classrooms in Senior Secondary Mathematics Instruction
by Adebayo Akinyinka Omoniyi, Loyiso Currell Jita and Thuthukile Jita
Computers 2025, 14(5), 180; https://doi.org/10.3390/computers14050180 - 6 May 2025
Viewed by 279
Abstract
The quest for effective pedagogical practices in mathematics education has increasingly highlighted the flipped classroom model. This model has been shown to be particularly successful in higher education settings within developed countries, where resources and technological infrastructure are readily available. However, its implementation in secondary education, especially in developing nations, has been a critical area of investigation. Building on our earlier research, which found that students rated the flipped classroom model positively, this mixed-method study explores teachers’ experiences with implementing the model for mathematics instruction at the senior secondary level. Since teachers play a pivotal role as facilitators of this pedagogical approach, their understanding and perceptions of it can significantly impact its effectiveness. To gather insights into teachers’ experiences, this study employs both close-ended questionnaires and semi-structured interviews. A quantitative analysis of participants’ responses to the questionnaires, including mean scores, standard deviations and Kruskal–Wallis H tests, reveals that teachers generally record positive experiences teaching senior secondary mathematics through flipped classrooms, although there are notable differences in their experiences. A thematic analysis of qualitative interview responses highlights the specific support systems essential for teachers’ successful adoption of the flipped classroom model in senior secondary mathematics instruction. Full article

21 pages, 2435 KiB  
Article
Property-Based Testing for Cybersecurity: Towards Automated Validation of Security Protocols
by Manuel J. C. S. Reis
Computers 2025, 14(5), 179; https://doi.org/10.3390/computers14050179 - 6 May 2025
Viewed by 170
Abstract
The validation of security protocols remains a complex and critical task in the cybersecurity landscape, often relying on labor-intensive testing or formal verification techniques with limited scalability. In this paper, we explore property-based testing (PBT) as a powerful yet underutilized methodology for the automated validation of security protocols. PBT enables the generation of large and diverse input spaces guided by declarative properties, making it well-suited to uncover subtle vulnerabilities in protocol logic, state transitions, and access control flows. We introduce the principles of PBT and demonstrate its applicability through selected use cases involving authentication mechanisms, cryptographic APIs, and session protocols. We further discuss integration strategies with existing security pipelines and highlight key challenges such as property specification, oracle design, and scalability. Finally, we outline future research directions aimed at bridging the gap between PBT and formal methods, with the goal of advancing the automation and reliability of secure system development. Full article
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
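In Python, the style of testing the paper describes looks like this with the Hypothesis library: declare an invariant ("decrypting reverses encrypting") and let the framework generate many inputs. The XOR cipher is a deliberately trivial stand-in for a real cryptographic API.

```python
from hypothesis import given, strategies as st

def xor_cipher(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)      # toy symmetric 'encryption'

@given(st.binary(), st.integers(min_value=0, max_value=255))
def test_roundtrip(data, key):
    assert xor_cipher(xor_cipher(data, key), key) == data

test_roundtrip()   # Hypothesis runs the property over many generated cases
```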

19 pages, 2491 KiB  
Article
A Hybrid Deep Learning Approach for Secure Biometric Authentication Using Fingerprint Data
by Abdulrahman Hussian, Foud Murshed, Mohammed Nasser Alandoli and Ghalib Aljafari
Computers 2025, 14(5), 178; https://doi.org/10.3390/computers14050178 - 6 May 2025
Viewed by 344
Abstract
Despite significant advancements in fingerprint-based authentication, existing models still suffer from challenges such as high false acceptance and rejection rates, computational inefficiency, and vulnerability to spoofing attacks. Addressing these limitations is crucial for ensuring reliable biometric security in real-world applications, including law enforcement, financial transactions, and border security. This study proposes a hybrid deep learning approach that integrates Convolutional Neural Networks (CNNs) with Long Short-Term Memory (LSTM) networks to enhance fingerprint authentication accuracy and robustness. The CNN component efficiently extracts intricate fingerprint patterns, while the LSTM module captures sequential dependencies to refine feature representation. The proposed model achieves a classification accuracy of 99.42%, reducing the false acceptance rate (FAR) to 0.31% and the false rejection rate (FRR) to 0.27%, demonstrating a 12% improvement over traditional CNN-based models. Additionally, the optimized architecture reduces computational overheads, ensuring faster processing suitable for real-time authentication systems. These findings highlight the superiority of hybrid deep learning techniques in biometric security by providing a quantifiable enhancement in both accuracy and efficiency. This research contributes to the advancement of secure, adaptive, and high-performance fingerprint authentication systems, bridging the gap between theoretical advancements and real-world applications. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
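A schematic PyTorch sketch of a CNN-plus-LSTM hybrid of the kind described: convolutional features are read row by row as a sequence by an LSTM before classification. Layer sizes and the 96x96 input are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CnnLstm(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.lstm = nn.LSTM(input_size=32 * 24, hidden_size=64,
                            batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (B, 1, 96, 96)
        f = self.cnn(x)                          # (B, 32, 24, 24)
        seq = f.permute(0, 2, 1, 3).flatten(2)   # rows as a 24-step sequence
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1])                  # classify from last state

print(CnnLstm()(torch.randn(2, 1, 96, 96)).shape)   # (2, 10)
```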

15 pages, 980 KiB  
Article
Development and Evaluation of a Machine Learning Model for Predicting 30-Day Readmission in General Internal Medicine
by Abdullah M. Al Alawi, Mariya Al Abdali, Al Zahraa Ahmed Al Mezeini, Thuraiya Al Rawahia, Eid Al Amri, Maisam Al Salmani, Zubaida Al-Falahi, Adhari Al Zaabi, Amira Al Aamri, Hatem Al Farhan and Juhaina Salim Al Maqbali
Computers 2025, 14(5), 177; https://doi.org/10.3390/computers14050177 - 5 May 2025
Viewed by 261
Abstract
Background/Objectives: Hospital readmissions within 30 days are a major challenge in general internal medicine (GIM), impacting patient outcomes and healthcare costs. This study aimed to develop and evaluate machine learning (ML) models for predicting 30-day readmissions in patients admitted under a GIM unit and to identify key predictors to guide targeted interventions. Methods: A prospective study was conducted on 443 patients admitted to the Unit of General Internal Medicine at Sultan Qaboos University Hospital between May and September 2023. Sixty-two variables were collected, including demographics, comorbidities, laboratory markers, vital signs, and medication data. Data preprocessing included handling missing values, standardizing continuous variables, and applying one-hot encoding to categorical variables. Four ML models—logistic regression, random forest, gradient boosting, and support vector machine (SVM)—were trained and evaluated. An ensemble model combining soft voting and weighted voting was developed to enhance performance, particularly recall. Results: The overall 30-day readmission rate was 14.2%. Among all models, logistic regression had the highest clinical relevance due to its balanced recall (70.6%) and area under the curve (AUC = 0.735). While random forest and SVM models showed higher precision, they had lower recall compared to logistic regression. The ensemble model improved recall to 70.6% through adjusted thresholds and model weighting, though precision declined. The most significant predictors of readmission included length of hospital stay, weight, age, number of medications, and abnormalities in liver enzymes. Conclusions: ML models, particularly ensemble approaches, can effectively predict 30-day readmissions in GIM patients. Tailored interventions using key predictors may help reduce readmission rates, although model calibration is essential to optimize performance trade-offs. Full article
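The weighted soft-voting ensemble the abstract describes can be sketched with scikit-learn; the synthetic data, voting weights, and the 0.3 decision threshold below are illustrative assumptions, not the authors' tuned values.

```python
# A minimal scikit-learn sketch of a weighted soft-voting ensemble with a
# lowered decision threshold to favor recall, as the abstract describes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 62-variable cohort; ~14% positives mirrors the
# reported readmission rate.
X, y = make_classification(n_samples=443, n_features=62, weights=[0.86],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",
    weights=[2, 1],  # assumed: favor the higher-recall logistic model
)
ensemble.fit(X_tr, y_tr)

# Lowering the threshold below 0.5 trades precision for recall, matching the
# trade-off reported in the study.
proba = ensemble.predict_proba(X_te)[:, 1]
y_pred = (proba >= 0.3).astype(int)
```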
13 pages, 3063 KiB  
Article
Exploring Factors Influencing Students’ Continuance Intention to Use E-Learning System for Iraqi University Students
by Ahmed Rashid Alkhuwaylidee
Computers 2025, 14(5), 176; https://doi.org/10.3390/computers14050176 - 5 May 2025
Viewed by 265
Abstract
In recent years, the education sector has been repeatedly disrupted by epidemics, with COVID-19 a prominent example, making the search for alternative educational methods necessary; e-learning is among the best replacements for traditional instruction. This study therefore presents a comprehensive examination of Iraqi university students’ perceptions of e-learning and of the factors affecting their intention to keep using it to increase learning effectiveness. The Expectation-Confirmation Model was used as the research framework, and SPSS v21 and AMOS v21 were used to analyze a questionnaire that yielded 360 valid responses. According to the findings, students’ perceived usefulness of e-learning systems is influenced by system quality, content quality, and confirmation, whereas technical support has no effect on perceived usefulness. Content quality, system quality, and technical support are three critical antecedents of confirmation. Furthermore, satisfaction was positively affected by both confirmation and perceived usefulness, and continuance intention to use e-learning was positively affected by both satisfaction and perceived usefulness. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
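The structural model implied by these findings can be expressed compactly; the sketch below uses the semopy package as a Python stand-in for AMOS, with construct names taken from the abstract and a hypothetical data file.

```python
# A hedged sketch of the study's structural equation model using semopy
# (the authors used AMOS; semopy is a stand-in). The CSV file and its column
# names are hypothetical.
import pandas as pd
from semopy import Model

desc = """
perceived_usefulness ~ system_quality + content_quality + confirmation
confirmation ~ system_quality + content_quality + technical_support
satisfaction ~ confirmation + perceived_usefulness
continuance_intention ~ satisfaction + perceived_usefulness
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical 360-response file
model = Model(desc)
model.fit(data)
print(model.inspect())  # estimated path coefficients and p-values
```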
25 pages, 8763 KiB  
Article
Forecasting the Unseen: Enhancing Tsunami Occurrence Predictions with Machine-Learning-Driven Analytics
by Snehal Satish, Hari Gonaygunta, Akhila Reddy Yadulla, Deepak Kumar, Mohan Harish Maturi, Karthik Meduri, Elyson De La Cruz, Geeta Sandeep Nadella and Guna Sekhar Sajja
Computers 2025, 14(5), 175; https://doi.org/10.3390/computers14050175 - 4 May 2025
Viewed by 418
Abstract
This research explores the improvement of tsunami occurrence forecasting with machine learning predictive models using earthquake-related data analytics. The primary goal is to develop a predictive framework that integrates a wide range of data sources, including seismic, geospatial, and ecological data, toward improving the accuracy and lead times of tsunami occurrence predictions. The study employs machine learning methods, including Random Forest and Logistic Regression, for binary classification of tsunami events. Data collection is performed using a Kaggle dataset spanning 1995–2023, with preprocessing and exploratory analysis to identify critical patterns. The Random Forest model achieved superior performance with an accuracy of 0.90 and precision of 0.88 compared to Logistic Regression (accuracy: 0.89, precision: 0.87). These results underscore Random Forest’s effectiveness in handling imbalanced data. Challenges such as improving data quality and model interpretability are discussed, with recommendations for future improvements in real-time warning systems. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
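The binary classification set-up described above can be sketched as follows; the file name and feature columns are assumptions about the Kaggle earthquake dataset, not its actual schema.

```python
# A minimal sketch of the Random Forest vs. Logistic Regression comparison
# for binary tsunami classification. Column names are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("earthquakes_1995_2023.csv")  # hypothetical filename
features = ["magnitude", "depth", "latitude", "longitude"]  # assumed columns
X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["tsunami"], stratify=df["tsunami"], random_state=42)

for name, clf in [("RandomForest", RandomForestClassifier(random_state=42)),
                  ("LogisticRegression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, accuracy_score(y_te, pred), precision_score(y_te, pred))
```

Stratifying the split preserves the rare-event class ratio, which matters for the imbalanced-data setting the abstract highlights.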
20 pages, 3977 KiB  
Article
Investigation of Multiple Hybrid Deep Learning Models for Accurate and Optimized Network Slicing
by Ahmed Raoof Nasser and Omar Younis Alani
Computers 2025, 14(5), 174; https://doi.org/10.3390/computers14050174 - 2 May 2025
Viewed by 285
Abstract
In 5G wireless communication, network slicing is considered a key network element: it aims to provide services with high availability, low latency, maximized data throughput, and ultra-reliability while conserving network resources. Given the exponential growth in the number of cellular network users and in new applications, delivering the desired Quality of Service (QoS) requires an accurate and fast network slicing mechanism. In this paper, hybrid deep learning (DL) approaches are investigated using convolutional neural networks (CNNs), Long Short-Term Memory (LSTM), recurrent neural networks (RNNs), and Gated Recurrent Units (GRUs) to provide an accurate network slicing model. The proposed hybrid approaches are CNN-LSTM, CNN-RNN, and CNN-GRU, where a CNN is first used for effective feature extraction and then LSTM, RNN, and GRU layers are applied to achieve accurate network slice classification. To optimize model performance in terms of accuracy and complexity, the hyperparameters of each algorithm are selected using the Bayesian optimization algorithm. The obtained results illustrate that the optimized hybrid CNN-GRU algorithm provides the best performance, with a slicing accuracy of 99.31% and low model complexity. Full article
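A CNN-GRU hybrid with Bayesian hyperparameter search can be sketched with Keras and KerasTuner; the feature count, the three slice classes, and the search ranges below are illustrative assumptions, not the authors' configuration.

```python
# A hedged sketch of a CNN-GRU slice classifier tuned by Bayesian
# optimization via KerasTuner. Input width and slice count are assumed.
import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers, models

N_FEATURES, N_SLICES = 16, 3  # assumed request attributes / slice types

def build_model(hp):
    model = models.Sequential([
        layers.Input(shape=(N_FEATURES, 1)),
        # Conv1D extracts local feature patterns from the request vector.
        layers.Conv1D(hp.Int("filters", 16, 64, step=16), 3,
                      activation="relu", padding="same"),
        # GRU refines the extracted features before classification.
        layers.GRU(hp.Int("gru_units", 32, 128, step=32)),
        layers.Dense(N_SLICES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.BayesianOptimization(build_model, objective="val_accuracy",
                                max_trials=10)
# tuner.search(X_train, y_train, validation_split=0.2)  # data not shown
```

Bayesian search evaluates far fewer candidate configurations than grid search while modeling which regions of the hyperparameter space look promising, which is why it suits the accuracy-versus-complexity trade-off described above.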
21 pages, 2185 KiB  
Article
Combining the Strengths of LLMs and Persuasive Technology to Combat Cyberhate
by Malik Almaliki, Abdulqader M. Almars, Khulood O. Aljuhani and El-Sayed Atlam
Computers 2025, 14(5), 173; https://doi.org/10.3390/computers14050173 - 2 May 2025
Viewed by 200
Abstract
Cyberhate presents a multifaceted, context-sensitive challenge that existing detection methods often struggle to tackle effectively. Large language models (LLMs) exhibit considerable potential for improving cyberhate detection due to their advanced contextual understanding. However, detection alone is insufficient; it is crucial for software to also promote healthier user behaviors and empower individuals to actively confront the spread of cyberhate. This study investigates whether integrating LLMs with persuasive technology (PT) can effectively detect cyberhate and encourage prosocial user behavior in digital spaces. Through an empirical study, we examine users’ perceptions of a self-monitoring persuasive strategy designed to reduce cyberhate. Specifically, the study introduces the Comment Analysis Feature to limit the spread of cyberhate, utilizing a prompt-based fine-tuning approach combined with LLMs. By framing users’ comments within the relevant cyberhate context, the feature classifies input as either cyberhate or non-cyberhate and, when necessary, generates context-aware alternative statements to encourage more positive communication. A case study evaluated its real-world performance, examining user comments, detection accuracy, and the impact of alternative statements on user engagement and perception. The findings indicate that while most users (83%) found the suggestions clear and helpful, some resisted them, either because they felt the changes were irrelevant or misaligned with their intended expression (15%) or because they perceived them as a form of censorship (36%). However, a substantial number of users (40%) believed the interventions enhanced their language and overall commenting tone, with 68% suggesting they could have a positive long-term impact on reducing cyberhate. These insights highlight the potential of combining LLMs and PT to promote healthier online discourse while underscoring the need to address user concerns regarding relevance, intent, and freedom of expression. Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
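The classify-and-rephrase loop described above can be sketched with a generic LLM backend; the sketch below uses the OpenAI client as a stand-in, and the prompt text, model name, and function are all illustrative assumptions rather than the paper's implementation.

```python
# A hedged sketch of an LLM-backed comment-analysis step: classify a comment
# as cyberhate or not and, if needed, suggest an alternative phrasing.
# Prompt wording and model choice are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Classify the user comment as CYBERHATE or NON-CYBERHATE. "
    "If CYBERHATE, also suggest a context-aware alternative phrasing that "
    "preserves the user's intent without hostility.\n\nComment: {c}"
)

def analyze_comment(comment: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": PROMPT.format(c=comment)}],
    )
    return response.choices[0].message.content
```

In a persuasive-technology setting, the returned suggestion would be surfaced to the user as a self-monitoring nudge rather than an enforced rewrite, which is the behavior-change mechanism the study evaluates.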