Next Issue
Volume 15, February
Previous Issue
Volume 14, December
 
 

Computers, Volume 15, Issue 1 (January 2026) – 70 articles

Cover Story (view full-size image): AI increasingly reshapes control mechanisms governing MRI, enabling faster, safer, and more adaptive operation of the scanner’s subsystems. This review surveys AI-driven advances in control design, SAR prediction, motion-dependent field modeling, and gradient system characterization and correction. Neural networks act as efficient surrogates for lengthy simulations, motion tracking, and gradient modeling, delivering subject-specific predictions in subseconds, reducing the need for extensive calibration and promoting real-time, seamless integration of new applications. Remaining challenges include unified multiphysics models and generalization across systems and scan modalities. Yet, these advances position AI-driven control as a cornerstone of next-generation, personalized MRI. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both, html and pdf forms. To view the papers in pdf format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Order results
Result details
Section
Select all
Export citation of selected articles as:
18 pages, 632 KB  
Article
Decision Making in Wood Supply Chain Operations Using Simulation-Based Many-Objective Optimization for Enhancing Delivery Performance and Robustness
by Karin Westlund and Amos H. C. Ng
Computers 2026, 15(1), 70; https://doi.org/10.3390/computers15010070 - 22 Jan 2026
Viewed by 71
Abstract
Wood supply chains are complex, involving many stakeholders, intricate processes, and logistical challenges to ensure the timely and accurate delivery of wood products to customers. Weather-related variations in forest road accessibility further complicate operations. This paper explores the challenges faced by forest managers [...] Read more.
Wood supply chains are complex, involving many stakeholders, intricate processes, and logistical challenges to ensure the timely and accurate delivery of wood products to customers. Weather-related variations in forest road accessibility further complicate operations. This paper explores the challenges faced by forest managers in targeting many delivery requirements—four or more. To address this, simulation-based optimization, using NSGA-III, a many-objective optimization algorithm, is proposed to simultaneously optimize often conflicting objectives primarily by minimizing delivery lead time, delivery deviations in backlogs, and delivery variation. NSGA-III enables the exploration of a diverse set of Pareto-optimal solutions that show trade-offs across a flexible set of four, or more, delivery objectives. A Discrete Event Simulation model is integrated to evaluate objectives in a complex wood supply chain. The implementation of NSGA-III within the framework allows forestry decision-makers to navigate between different harvest schedules and evaluate how they target a set of preference-based delivery objectives. The simulation can also provide detailed insights into how a specific harvest schedule affects the supply chain when post-processing possible solutions, facilitating decision making. This study shows that NSGA-III could substitute NSGA-II to optimize the wood supply chain for more than three objective functions. Full article
Show Figures

Figure 1

20 pages, 390 KB  
Systematic Review
Systematic Review of Quantization-Optimized Lightweight Transformer Architectures for Real-Time Fruit Ripeness Detection on Edge Devices
by Donny Maulana and R Kanesaraj Ramasamy
Computers 2026, 15(1), 69; https://doi.org/10.3390/computers15010069 - 19 Jan 2026
Viewed by 439
Abstract
Real-time visual inference on resource-constrained hardware remains a core challenge for edge computing and embedded artificial intelligence systems. Recent deep learning architectures, particularly Vision Transformers (ViTs) and Detection Transformers (DETRs), achieve high detection accuracy but impose substantial computational and memory demands that limit [...] Read more.
Real-time visual inference on resource-constrained hardware remains a core challenge for edge computing and embedded artificial intelligence systems. Recent deep learning architectures, particularly Vision Transformers (ViTs) and Detection Transformers (DETRs), achieve high detection accuracy but impose substantial computational and memory demands that limit their deployment on low-power edge platforms such as NVIDIA Jetson and Raspberry Pi devices. This paper presents a systematic review of model compression and optimization strategies—specifically quantization, pruning, and knowledge distillation—applied to lightweight object detection architectures for edge deployment. Following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, peer-reviewed studies were analyzed from Scopus, IEEE Xplore, and ScienceDirect to examine the evolution of efficient detectors from convolutional neural networks to transformer-based models. The synthesis highlights a growing focus on real-time transformer variants, including Real-Time DETR (RT-DETR) and low-bit quantized approaches such as Q-DETR, alongside optimized YOLO-based architectures. While quantization enables substantial theoretical acceleration (e.g., up to 16× operation reduction), aggressive low-bit precision introduces accuracy degradation, particularly in transformer attention mechanisms, highlighting a critical efficiency-accuracy tradeoff. The review further shows that Quantization-Aware Training (QAT) consistently outperforms Post-Training Quantization (PTQ) in preserving performance under low-precision constraints. Finally, this review identifies critical open research challenges, emphasizing the efficiency–accuracy tradeoff and the high computational demands imposed by Transformer architectures. Future directions are proposed, including hardware-aware optimization, robustness to imbalanced datasets, and multimodal sensing integration, to ensure reliable real-time inference in practical agricultural edge computing environments. Full article
Show Figures

Figure 1

17 pages, 552 KB  
Article
Videogame Programming & Education: Enhancing Programming Skills Through Unity Visual Scripting
by Álvaro Villagómez-Palacios, Claudia De la Fuente-Burdiles and Cristian Vidal-Silva
Computers 2026, 15(1), 68; https://doi.org/10.3390/computers15010068 - 18 Jan 2026
Viewed by 257
Abstract
Videogames (VGs) are highly attractive for children and young people. Although videogames were once viewed mainly as sources of distraction and leisure, they are now widely recognised as powerful tools for competence development across diverse domains. Designing and implementing a videogame is even [...] Read more.
Videogames (VGs) are highly attractive for children and young people. Although videogames were once viewed mainly as sources of distraction and leisure, they are now widely recognised as powerful tools for competence development across diverse domains. Designing and implementing a videogame is even more appealing for children and novice students than merely playing it, but developing programming competencies using a text-based language often constitutes a significant barrier to entry. This article presents the implementation and evaluation of a videogame development experience with university students using the Unity engine and its Visual Scripting block-based tool. Students worked in teams and successfully completed videogame projects, demonstrating substantial gains in programming and game construction skills. The adopted methodology facilitated learning, collaboration, and engagement. Building on a quasi-experimental design that compared a prior unit based on C# and MonoGame with a subsequent unit based on Unity Visual Scripting, the study analyses differences in performance, development effort, and motivational indicators. The results show statistically significant improvements in grades, reduced development time for core mechanics, and higher self-reported confidence when Visual Scripting is employed. The evidence supports the view of Visual Scripting as an effective educational strategy to introduce programming concepts without the syntactic and semantic barriers of traditional text-based languages. The findings further suggest that Unity Visual Scripting can act as a didactic bridge towards advanced programming, and that its adoption in secondary and primary education is promising both for reinforcing traditional subjects (history, language, mathematics) and for fostering foundational programming and videogame development skills in an inclusive manner. Full article
Show Figures

Figure 1

28 pages, 3209 KB  
Article
Fast Computation for Square Matrix Factorization
by Artyom M. Grigoryan
Computers 2026, 15(1), 67; https://doi.org/10.3390/computers15010067 - 17 Jan 2026
Viewed by 215
Abstract
In this work, we discuss a method for the QR-factorization of N×N matrices where N3 which is based on transformations which are called discrete signal-induced heap transformations (DsiHTs). These transformations are generated by given signals and can be composed [...] Read more.
In this work, we discuss a method for the QR-factorization of N×N matrices where N3 which is based on transformations which are called discrete signal-induced heap transformations (DsiHTs). These transformations are generated by given signals and can be composed by elementary rotations. The data processing order, or the path of the transformations, is an important characteristic of it, and the correct choice of such paths can lead to a significant reduction in the operation when calculating the factorization for large matrices. Such paths are called fast paths of the N-point DsiHTs, and they define sparse matrices with more zero coefficients than when calculating QR-factorization in the traditional path, that is, when processing data in the natural order x0,x1,x2,. For example, in the first stage of the factorization of a 512 × 512 matrix, a matrix is used with 257,024 zero coefficients out of a total of 262,144 coefficients when using the fast paths. For comparison, the calculations in the natural order require a 512 × 512 matrix with only 130,305 zero coefficients at this stage. The Householder reflection matrix has no zero coefficients. The number of multiplication operations for the QR-factorization by the fast DsiHTs is more than 40 times smaller than when using the Householder reflections and 20 times smaller when using DsiHTs with the natural paths. Examples with the 4 × 4, 5 × 5, and 8 × 8 matrices are described in detail. The concept of complex DsiHT with fast paths is also described and applied in the QR-factorization of complex square matrices. An example of the QR-factorization of a 256 × 256 complex matrix is also described and compared with the method of Householder reflections which is used in programming language MATLAB R2024b. Full article
Show Figures

Figure 1

29 pages, 1232 KB  
Article
A Business-Oriented Approach to Automated Threat Analysis for Large-Scale Infrastructure Systems
by Chiaki Otahara, Hiroki Uchiyama and Makoto Kayashima
Computers 2026, 15(1), 66; https://doi.org/10.3390/computers15010066 - 16 Jan 2026
Viewed by 308
Abstract
Security design for large-scale infrastructure systems requires substantial effort and often causes development delays. In line with NIST guidance, such systems should consider security design throughout a system development lifecycle. Nevertheless, performing security design in early phases of the lifecycle is difficult due [...] Read more.
Security design for large-scale infrastructure systems requires substantial effort and often causes development delays. In line with NIST guidance, such systems should consider security design throughout a system development lifecycle. Nevertheless, performing security design in early phases of the lifecycle is difficult due to frequent specification changes and variability in analyst expertise, which causes repeated rework. The workload is particularly critical in threat analysis, the key activity of security design, because rework can inflate the workload. To address this challenge, we propose an automated threat-analysis method. Specifically, (i) we systematize past security design cases and develop “templates” that organize the system-configuration and security information required for threat analysis into a reusable 5W-based format (When, Where, Who, Why, What); (ii) we define dependencies among the templates and design an algorithm that automatically generates threat-analysis results; and (iii) observing that threat analysis of large-scale systems often yield overlaps, we introduce “business operations” as an analytical asset, which includes encompassing information, function, and physical resources. We apply our method to an actual large-scale operational system and confirm that it reduces the workload by up to 84% relative to conventional manual analysis, while maintaining both the coverage and the accuracy of the analysis. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
Show Figures

Graphical abstract

24 pages, 3292 KB  
Article
Comparing Emerging and Hybrid Quantum–Kolmogorov Architectures for Image Classification
by Lelio Campanile, Mariarosaria Castaldo, Stefano Marrone and Fabio Napoli
Computers 2026, 15(1), 65; https://doi.org/10.3390/computers15010065 - 16 Jan 2026
Viewed by 348
Abstract
The rapid evolution of Artificial Intelligence has enabled significant progress in image classification, with emerging approaches extending traditional deep learning paradigms. This article presents an extended version of a paper originally introduced at ICCSA 2025, providing a broader comparative analysis of classical, spline-based, [...] Read more.
The rapid evolution of Artificial Intelligence has enabled significant progress in image classification, with emerging approaches extending traditional deep learning paradigms. This article presents an extended version of a paper originally introduced at ICCSA 2025, providing a broader comparative analysis of classical, spline-based, and quantum machine learning architectures. The study evaluates Convolutional Neural Networks (CNNs), Kolmogorov–Arnold Networks (KANs), Convolutional KANs (CKANs), and Quantum Convolutional Neural Networks (QCNNs) on the Labeled Faces in the Wild dataset. In addition to these baselines, two novel architectures are introduced: a fully quantum Kolmogorov–Arnold model (F-QKAN) and a hybrid KAN–Quantum network (H-QKAN) that combines spline-based feature extraction with variational quantum classification. Rather than targeting state-of-the-art performance, the evaluation focuses on analyzing the behaviour of these architectures in terms of accuracy, computational efficiency, and interpretability under a unified experimental protocol. Results show that the fully quantum F-QKAN achieves a test accuracy above 80%. The hybrid H-QKAN obtains the best overall performance, exceeding 92% accuracy with rapid convergence and stable training dynamics. Classical CNNs models remain state-of-the-art in terms of predictive performance, whereas CKANs offer a favorable balance between accuracy and efficiency. QCNNs show potential in ideal noise-free settings but are significantly affected by realistic noise conditions, motivating further investigation into hybrid quantum–classical designs. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2025 (ICCSA 2025))
Show Figures

Figure 1

18 pages, 10974 KB  
Article
Exploring Slow Responses in International Large-Scale Assessments Using Sequential Process Analysis
by Daniel Jerez, Elisabetta Mazzullo and Okan Bulut
Computers 2026, 15(1), 64; https://doi.org/10.3390/computers15010064 - 16 Jan 2026
Viewed by 286
Abstract
Slow responding in International Large-Scale Assessments (ILSAs) has received far less attention than rapid guessing, despite its potential to reveal heterogeneous response processes. Unlike disengaged rapid responders, slow responders may differ in time management, off-task behavior, or specific cognitive operations. This exploratory study [...] Read more.
Slow responding in International Large-Scale Assessments (ILSAs) has received far less attention than rapid guessing, despite its potential to reveal heterogeneous response processes. Unlike disengaged rapid responders, slow responders may differ in time management, off-task behavior, or specific cognitive operations. This exploratory study uses sequence analysis of log-file data from a complex problem-solving item in PISA 2012 to examine whether slow responders can be grouped into homogeneous subtypes. The item required students to explore causal relations and externalize them in a diagram. Results indicate two distinct clusters among slow responders, each marked by characteristic interaction patterns and difficulties at different stages of the solution process. One cluster exhibited long pauses interspersed with repeated, inefficient attempts at representing causal relationships; the other showed shorter pauses coupled with inefficient exploratory actions targeting those relationships. These findings demonstrate that sequence analysis can parsimoniously identify clusters of action sequences associated with slow responding, offering a finer-grained account of aberrant behavior in low-stakes, digital assessments. More broadly, the approach illustrates how process data can be leveraged to differentiate mechanisms underlying slow response behaviors, with implications for validity arguments, diagnostic feedback, and the design of mitigation strategies in ILSAs. Directions for future research to better understand the differences among slow responders are provided. Full article
Show Figures

Figure 1

17 pages, 2852 KB  
Article
A Lightweight Edge-AI System for Disease Detection and Three-Level Leaf Spot Severity Assessment in Strawberry Using YOLOv10n and MobileViT-S
by Raikhan Amanova, Baurzhan Belgibayev, Madina Mansurova, Madina Suleimenova, Gulshat Amirkhanova and Gulnur Tyulepberdinova
Computers 2026, 15(1), 63; https://doi.org/10.3390/computers15010063 - 16 Jan 2026
Viewed by 252
Abstract
Mobile edge-AI plant monitoring systems enable automated disease control in greenhouses and open fields, reducing dependence on manual inspection and the variability of visual diagnostics. This paper proposes a lightweight two-stage edge-AI system for strawberries, in which a YOLOv10n detector on board a [...] Read more.
Mobile edge-AI plant monitoring systems enable automated disease control in greenhouses and open fields, reducing dependence on manual inspection and the variability of visual diagnostics. This paper proposes a lightweight two-stage edge-AI system for strawberries, in which a YOLOv10n detector on board a mobile agricultural robot locates leaves affected by seven common diseases (including Leaf Spot) with real-time capability on an embedded platform. Patches are then automatically extracted for leaves classified as Leaf Spot and transmitted to the second module—a compact MobileViT-S-based classifier with ordinal output that assesses the severity of Leaf Spot on three levels (S1—mild, S2—moderate, S3—severe) on a specialised set of 373 manually labelled leaf patches. In a comparative experiment with lightweight architectures ResNet-18, EfficientNet-B0, MobileNetV3-Small and Swin-Tiny, the proposed Ordinal MobileViT-S demonstrated the highest accuracy in assessing the severity of Leaf Spot (accuracy ≈ 0.97 with 4.9 million parameters), surpassing both the baseline models and the standard MobileViT-S with a cross-entropy loss function. On the original image set, the YOLOv10n detector achieves an mAP@0.5 of 0.960, an F1 score of 0.93 and a recall of 0.917, ensuring reliable detection of affected leaves for subsequent Leaf Spot severity assessment. The results show that the “YOLOv10n + Ordinal MobileViT-S” cascade provides practical severity-aware Leaf Spot diagnosis on a mobile agricultural robot and can serve as the basis for real-time strawberry crop health monitoring systems. Full article
Show Figures

Figure 1

18 pages, 3987 KB  
Article
Low-Latency Autonomous Surveillance in Defense Environments: A Hybrid RTSP-WebRTC Architecture with YOLOv11
by Juan José Castro-Castaño, William Efrén Chirán-Alpala, Guillermo Alfonso Giraldo-Martínez, José David Ortega-Pabón, Edison Camilo Rodríguez-Amézquita, Diego Ferney Gallego-Franco and Yeison Alberto Garcés-Gómez
Computers 2026, 15(1), 62; https://doi.org/10.3390/computers15010062 - 16 Jan 2026
Viewed by 341
Abstract
This article presents the Intelligent Monitoring System (IMS), an AI-assisted, low-latency surveillance platform designed for defense environments. The study addresses the need for real-time autonomous situational awareness by integrating high-speed video transmission with advanced computer vision analytics in constrained network settings. The IMS [...] Read more.
This article presents the Intelligent Monitoring System (IMS), an AI-assisted, low-latency surveillance platform designed for defense environments. The study addresses the need for real-time autonomous situational awareness by integrating high-speed video transmission with advanced computer vision analytics in constrained network settings. The IMS employs a hybrid transmission architecture based on RTSP for ingestion and WHEP/WebRTC for distribution, orchestrated via MediaMTX, with the objective of achieving end-to-end latencies below one second. The methodology includes a comparative evaluation of video streaming protocols (JPEG-over-WebSocket, HLS, WebRTC, etc.) and AI frameworks, alongside the modular architectural design and prolonged experimental validation. The detection module integrates YOLOv11 models fine-tuned on the VisDrone dataset to optimize performance for small objects, aerial views, and dense scenes. Experimental results, obtained through over 300 h of operational tests using IP cameras and aerial platforms, confirmed the stability and performance of the chosen architecture, maintaining latencies close to 500 ms. The YOLOv11 family was adopted as the primary detection framework, providing an effective trade-off between accuracy and inference performance in real-time scenarios. The YOLOv11n model was trained and validated on a Tesla T4 GPU, and YOLOv11m will be validated on the target platform in subsequent experiments. The findings demonstrate the technical viability and operational relevance of the IMS as a core component for autonomous surveillance systems in defense, satisfying strict requirements for speed, stability, and robust detection of vehicles and pedestrians. Full article
Show Figures

Figure 1

15 pages, 1607 KB  
Article
Using Steganography and Artificial Neural Network for Data Forensic Validation and Counter Image Deepfakes
by Matimu Caswell Nkuna, Ebenezer Esenogho and Ahmed Ali
Computers 2026, 15(1), 61; https://doi.org/10.3390/computers15010061 - 15 Jan 2026
Viewed by 272
Abstract
The merging of the Internet of Things (IoT) and Artificial Intelligence (AI) advances has intensified challenges related to data authenticity and security. These advancements necessitate a multi-layered security approach to ensure the security, reliability, and integrity of critical infrastructure and intelligent surveillance systems. [...] Read more.
The merging of the Internet of Things (IoT) and Artificial Intelligence (AI) advances has intensified challenges related to data authenticity and security. These advancements necessitate a multi-layered security approach to ensure the security, reliability, and integrity of critical infrastructure and intelligent surveillance systems. This paper proposes a two-layered security approach that combines a discrete cosine transform least significant bit 2 (DCT-LSB-2) with artificial neural networks (ANNs) for data forensic validation and mitigating deepfakes. The proposed model encodes validation codes within the LSBs of cover images captured by an IoT camera on the sender side, leveraging the DCT approach to enhance the resilience against steganalysis. On the receiver side, a reverse DCT-LSB-2 process decodes the embedded validation code, which is subjected to authenticity verification by a pre-trained ANN model. The ANN validates the integrity of the decoded code and ensures that only device-originated, untampered images are accepted. The proposed framework achieved an average SSIM of 0.9927 across the entire investigated embedding capacity, ranging from 0 to 1.988 bpp. DCT-LSB-2 showed a stable Peak Signal-to-Noise Ratio (average 42.44 dB) under various evaluated payloads ranging from 0 to 100 kB. The proposed model achieved a resilient and robust multi-layered data forensic validation system. Full article
(This article belongs to the Special Issue Multimedia Data and Network Security)
Show Figures

Graphical abstract

36 pages, 2621 KB  
Article
The Integration of ISO 27005 and NIST SP 800-30 for Security Operation Center (SOC) Framework Effectiveness in the Non-Bank Financial Industry
by Muharman Lubis, Muhammad Irfan Luthfi, Rd. Rohmat Saedudin, Alif Noorachmad Muttaqin and Arif Ridho Lubis
Computers 2026, 15(1), 60; https://doi.org/10.3390/computers15010060 - 15 Jan 2026
Viewed by 287
Abstract
A Security Operation Center (SOC) is a security control center for monitoring, detecting, analyzing, and responding to cybersecurity threats. PT (Perseroan Terbatas) Non-Bank Financial Company (NBFC) has implemented an SOC to secure its information systems, but challenges remain to be solved. [...] Read more.
A Security Operation Center (SOC) is a security control center for monitoring, detecting, analyzing, and responding to cybersecurity threats. PT (Perseroan Terbatas) Non-Bank Financial Company (NBFC) has implemented an SOC to secure its information systems, but challenges remain to be solved. These include the absence of impact analysis on financial and regulatory requirements, cost, and effort estimation for recovery; established Key Performance Indicators (KPIs) and Key Risk Indicators (KRIs) for monitoring security controls; and an official program for insider threats. This study evaluates SOC effectiveness at PT NBFC using the ISO 27005:2018 and NIST SP 800-30 frameworks. The research results in a proposed SOC assessment framework, integrating risk assessment, risk treatment, risk acceptance, and monitoring. Additionally, a maturity level assessment was conducted for ISO 27005:2018, NIST SP 800-30, and the proposed framework. The proposed framework achieves good maturity, with two domains meeting the target maturity value and one domain reaching level 4 (Managed and Measurable). By incorporating domains from both ISO 27005:2018 and NIST SP 800-30, the new framework offers a more comprehensive risk management approach, covering strategic, managerial, and technical aspects. Full article
Show Figures

Figure 1

23 pages, 1486 KB  
Article
AI-Based Emoji Recommendation for Early Childhood Education Using Deep Learning Techniques
by Shaya A. Alshaya
Computers 2026, 15(1), 59; https://doi.org/10.3390/computers15010059 - 15 Jan 2026
Viewed by 279
Abstract
The integration of emojis into Early Childhood Education (ECE) presents a promising avenue for enhancing student engagement, emotional expression, and comprehension. While prior studies suggest the benefit of visual aids in learning, systematic frameworks for pedagogically aligned emoji recommendation remain underdeveloped. This paper [...] Read more.
The integration of emojis into Early Childhood Education (ECE) presents a promising avenue for enhancing student engagement, emotional expression, and comprehension. While prior studies suggest the benefit of visual aids in learning, systematic frameworks for pedagogically aligned emoji recommendation remain underdeveloped. This paper presents EduEmoji-ECE, a pedagogically annotated dataset of early-childhood learning text segments. Specifically, the proposed model incorporates Bidirectional Encoder Representations from Transformers (BERTs) for contextual embedding extraction, Gated Recurrent Units (GRUs) for sequential pattern recognition, Deep Neural Networks (DNNs) for classification and emoji recommendation, and DECOC for improving emoji class prediction robustness. This hybrid BERT-GRU-DNN-DECOC architecture effectively captures textual semantics, emotional tone, and pedagogical intent, ensuring the alignment of emoji class recommendation with learning objectives. The experimental results show that the system is effective, with an accuracy of 95.3%, a precision of 93%, a recall of 91.8%, and an F1-score of 92.3%, outperforming baseline models in terms of contextual understanding and overall accuracy. This work helps fill a gap in AI-based education by combining learning with visual support for young children. The results suggest an association between emoji-enhanced materials and improved engagement/comprehension indicators in our exploratory classroom setting; however, causal attribution to the AI placement mechanism is not supported by the current study design. Full article
Show Figures

Figure 1

19 pages, 2837 KB  
Article
An Open-Source System for Public Transport Route Data Curation Using OpenTripPlanner in Australia
by Kiki Adhinugraha, Yusuke Gotoh and David Taniar
Computers 2026, 15(1), 58; https://doi.org/10.3390/computers15010058 - 14 Jan 2026
Viewed by 286
Abstract
Access to large-scale public transport journey data is essential for analysing accessibility, equity, and urban mobility. Although digital platforms such as Google Maps provide detailed routing for individual users, their licensing and access restrictions prevent systematic data extraction for research purposes. Open-source routing [...] Read more.
Access to large-scale public transport journey data is essential for analysing accessibility, equity, and urban mobility. Although digital platforms such as Google Maps provide detailed routing for individual users, their licensing and access restrictions prevent systematic data extraction for research purposes. Open-source routing engines such as OpenTripPlanner offer a transparent alternative, but are often limited to local or technical deployments that restrict broader use. This study evaluates the feasibility of deploying a publicly accessible, open-source routing platform based on OpenTripPlanner to support large-scale public transport route simulation across multiple cities. Using Australian metropolitan areas as a case study, the platform integrates GTFS and OpenStreetMap data to enable repeatable journey queries through a web interface, an API, and bulk processing tools. Across eight metropolitan regions, the system achieved itinerary coverage above 90 percent and sustained approximately 3000 routing requests per minute under concurrent access. These results demonstrate that open-source routing infrastructure can support reliable, large-scale route simulation using open data. Beyond performance, the platform enables public transport accessibility studies that are not feasible with proprietary routing services, supporting reproducible research, transparent decision-making, and evidence-based transport planning across diverse urban contexts. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2025 (ICCSA 2025))
Show Figures

Graphical abstract

29 pages, 2558 KB  
Article
IDN-MOTSCC: Integration of Deep Neural Network with Hybrid Meta-Heuristic Model for Multi-Objective Task Scheduling in Cloud Computing
by Mohit Kumar, Rama Kant, Brijesh Kumar Gupta, Azhar Shadab, Ashwani Kumar and Krishna Kant
Computers 2026, 15(1), 57; https://doi.org/10.3390/computers15010057 - 14 Jan 2026
Viewed by 396
Abstract
Cloud computing covers a wide range of practical applications and diverse domains, yet resource scheduling and task scheduling remain significant challenges. To address this, different task scheduling algorithms are implemented across various computing systems to allocate tasks to machines, thereby enhancing performance through [...] Read more.
Cloud computing covers a wide range of practical applications and diverse domains, yet resource scheduling and task scheduling remain significant challenges. To address this, different task scheduling algorithms are implemented across various computing systems to allocate tasks to machines, thereby enhancing performance through data mapping. To meet these challenges, a novel task scheduling model is proposed using a hybrid meta-heuristic integration with a deep learning approach. We employed this novel task scheduling model to integrate deep learning with an optimized DNN, fine-tuned using improved grey wolf–horse herd optimization, with the aim of optimizing cloud-based task allocation and overcoming makespan constraints. Initially, a user initiates a task or request within the cloud environment. Then, these tasks are assigned to Virtual Machines (VMs). Since the scheduling algorithm is constrained by the makespan objective, an optimized Deep Neural Network (DNN) model is developed to perform optimal task scheduling. Random solutions are provided to the optimized DNN, where the hidden neuron count is tuned optimally by the proposed Improved Grey Wolf–Horse Herd Optimization (IGW-HHO) algorithm. The proposed IGW-HHO algorithm is derived from both conventional Grey Wolf Optimization (GWO) and Horse Herd Optimization (HHO). The optimal solutions are acquired from the optimized DNN and processed by the proposed algorithm to efficiently allocate tasks to VMs. The experimental results are validated using various error measures and convergence analysis. The proposed DNN-IGW-HHO model achieved a lower cost function compared to other optimization methods, with a reduction of 1% compared to PSO, 3.5% compared to WOA, 2.7% compared to GWO, and 0.7% compared to HHO. The proposed task scheduling model achieved the minimal Mean Absolute Error (MAE), with performance improvements of 31% over PSO, 20.16% over WOA, 41.72% over GWO, and 9.11% over HHO. Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
Show Figures

Figure 1

18 pages, 1020 KB  
Article
Implementing Learning Analytics in Education: Enhancing Actionability and Adoption
by Dimitrios E. Tzimas and Stavros N. Demetriadis
Computers 2026, 15(1), 56; https://doi.org/10.3390/computers15010056 - 14 Jan 2026
Viewed by 264
Abstract
The broader aim of this research is to examine how Learning Analytics (LA) can become ethically sound, pedagogically actionable, and realistically adopted in educational practice. To address this overarching challenge, the study investigates three interrelated research questions: ethics by design, learning impact, and [...] Read more.
The broader aim of this research is to examine how Learning Analytics (LA) can become ethically sound, pedagogically actionable, and realistically adopted in educational practice. To address this overarching challenge, the study investigates three interrelated research questions: ethics by design, learning impact, and adoption conditions. Methodologically, the research follows an exploratory sequential multi-method design. First, a meta-synthesis of 53 studies is conducted to identify key ethical challenges in LA and to derive an ethics-by-design framework. Second, a quasi-experimental study examines the impact of interface-based LA guidance (strong versus minimal) on students’ self-regulated learning skills and academic performance. Third, a mixed-methods adoption study, combining surveys, focus groups, and ethnographic observations, investigates the factors that encourage or hinder teachers’ adoption of LA in K–12 education. The findings indicate that strong LA-based guidance leads to statistically significant improvements in students’ self-regulated learning skills and academic performance compared to minimal guidance. Furthermore, the adoption analysis reveals that performance expectancy, social influence, human-centred design, and positive emotions facilitate LA adoption, whereas effort expectancy, limited facilitating conditions, ethical concerns, and cultural resistance inhibit it. Overall, the study demonstrates that ethics by design, effective pedagogical guidance, and adoption conditions are mutually reinforcing dimensions. It argues that LA can support intelligent, responsive, and human-centred learning environments when ethical safeguards, instructional design, and stakeholder involvement are systematically aligned. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
Show Figures

Figure 1

27 pages, 1930 KB  
Article
SteadyEval: Robust LLM Exam Graders via Adversarial Training and Distillation
by Catalin Anghel, Marian Viorel Craciun, Adina Cocu, Andreea Alexandra Anghel and Adrian Istrate
Computers 2026, 15(1), 55; https://doi.org/10.3390/computers15010055 - 14 Jan 2026
Viewed by 217
Abstract
Large language models (LLMs) are increasingly used as rubric-guided graders for short-answer exams, but their decisions can be unstable across prompts and vulnerable to answer-side prompt injection. In this paper, we study SteadyEval, a guardrailed exam-grading pipeline in which an adversarially trained LoRA [...] Read more.
Large language models (LLMs) are increasingly used as rubric-guided graders for short-answer exams, but their decisions can be unstable across prompts and vulnerable to answer-side prompt injection. In this paper, we study SteadyEval, a guardrailed exam-grading pipeline in which an adversarially trained LoRA filter (SteadyEval-7B-deep) preprocesses student answers to remove answer-side prompt injection, after which the original Mistral-7B-Instruct rubric-guided grader assigns the final score. We build two exam-grading pipelines on top of Mistral-7B-Instruct: a baseline pipeline that scores student answers directly, and a guardrailed pipeline in which a LoRA-based filter (SteadyEval-7B-deep) first removes injection content from the answer and a downstream grader then assigns the final score. Using two rubric-guided short-answer datasets in machine learning and computer networking, we generate grouped families of clean answers and four classes of answer-side attacks, and we evaluate the impact of these attacks on score shifts, attack success rates, stability across prompt variants, and alignment with human graders. On the pooled dataset, answer-side attacks inflate grades in the unguarded baseline by an average of about +1.2 points on a 1–10 scale, and substantially increase score dispersion across prompt variants. The guardrailed pipeline largely removes this systematic grade inflation and reduces instability for many items, especially in the machine-learning exam, while keeping mean absolute error with respect to human reference scores in a similar range to the unguarded baseline on clean answers, with a conservative shift in networking that motivates per-course calibration. Chief-panel comparisons further show that the guardrailed pipeline tracks human grading more closely on machine-learning items, but tends to under-score networking answers. These findings are best interpreted as a proof-of-concept guardrail and require per-course validation and calibration before operational use. Full article
Show Figures

Figure 1

19 pages, 2960 KB  
Article
Gabor Transform-Based Deep Learning System Using CNN for Melanoma Detection
by S. Deivasigamani, C. Senthilpari, Siva Sundhara Raja. D, A. Thankaraj, G. Narmadha and K. Gowrishankar
Computers 2026, 15(1), 54; https://doi.org/10.3390/computers15010054 - 13 Jan 2026
Viewed by 147
Abstract
Melanoma is highly dangerous and can spread rapidly to other parts of the body. It has an increasing fatality rate among different types of cancer. Timely detection of skin malignancies can reduce overall mortality. Therefore, clinical screening methods require more time and accuracy [...] Read more.
Melanoma is highly dangerous and can spread rapidly to other parts of the body. It has an increasing fatality rate among different types of cancer. Timely detection of skin malignancies can reduce overall mortality. Therefore, clinical screening methods require more time and accuracy for diagnosis. An automated, computer-aided system would facilitate earlier melanoma detection, thereby increasing patient survival rates. This paper identifies melanoma images using a Convolutional Neural Network. Skin images are preprocessed using Histogram Equalization and Gabor transforms. A Gabor filter-based Convolutional Neural Network (CNN) classifier trains and classifies the extracted features. We adopt Gabor filters because they are bandpass filters that transform a pixel into a multi-resolution kernel matrix, providing detailed information about the image. This study suggests a method with accuracy, sensitivity, and specificity of 98.58%, 98.66%, and 98.75%, respectively. This research supports SDGs 3 and 4 by facilitating early melanoma detection and enhancing AI-driven medical education. Full article
Show Figures

Figure 1

21 pages, 20581 KB  
Article
Stereo-Based Single-Shot Hand-to-Eye Calibration for Robot Arms
by Pushkar Kadam, Gu Fang, Farshid Amirabdollahian, Ju Jia Zou and Patrick Holthaus
Computers 2026, 15(1), 53; https://doi.org/10.3390/computers15010053 - 13 Jan 2026
Viewed by 186
Abstract
Robot hand-to-eye calibration is a necessary process for a robot arm to perceive and interact with its environment. Past approaches required collecting multiple images using a calibration board placed at different locations relative to the robot. When the robot or camera is displaced [...] Read more.
Robot hand-to-eye calibration is a necessary process for a robot arm to perceive and interact with its environment. Past approaches required collecting multiple images using a calibration board placed at different locations relative to the robot. When the robot or camera is displaced from its calibrated position, hand–eye calibration must be redone using the same tedious process. In this research, we developed a novel method that uses a semi-automatic process to perform hand-to-eye calibration with a stereo camera, generating a transformation matrix from the world to the camera coordinate frame from a single image. We use a robot-pointer tool attached to the robot’s end-effector to manually establish a relationship between the world and the robot coordinate frame. Then, we establish the relationship between the camera and the robot using a transformation matrix that maps points observed in the stereo image frame from two-dimensional space to the robot’s three-dimensional coordinate frame. Our analysis of the stereo calibration showed a reprojection error of 0.26 pixels. An evaluation metric was developed to test the camera-to-robot transformation matrix, and the experimental results showed median root mean square errors of less than 1 mm in the x and y directions and less than 2 mm in the z directions in the robot coordinate frame. The results show that, with this work, we contribute a hand-to-eye calibration method that uses three non-collinear points in a single stereo image to map camera-to-robot coordinate-frame transformations. Full article
(This article belongs to the Special Issue Advanced Human–Robot Interaction 2025)
Show Figures

Figure 1

15 pages, 1527 KB  
Article
Learning Complementary Representations for Targeted Multimodal Sentiment Analysis
by Binfen Ding, Jieyu An and Yumeng Lei
Computers 2026, 15(1), 52; https://doi.org/10.3390/computers15010052 - 13 Jan 2026
Viewed by 175
Abstract
Targeted multimodal sentiment classification is frequently impeded by the semantic sparsity of social media content, where text is brief and context is implicit. Traditional methods that rely on direct concatenation of textual and visual features often fail to resolve the ambiguity of specific [...] Read more.
Targeted multimodal sentiment classification is frequently impeded by the semantic sparsity of social media content, where text is brief and context is implicit. Traditional methods that rely on direct concatenation of textual and visual features often fail to resolve the ambiguity of specific targets due to a lack of alignment between modalities. In this paper, we propose the Complementary Description Network (CDNet) to bridge this informational gap. CDNet incorporates automatically generated image descriptions as an additional semantic bridge, in contrast to methods that handle text and images as distinct streams. The framework enhances the input representation by directly translating visual content into text, allowing for more accurate interactions between the opinion target and the visual narrative. We further introduce a complementary reconstruction module that functions as a regularizer, forcing the model to retain deep semantic cues during fusion. Empirical results on the Twitter-2015 and Twitter-2017 benchmarks confirm that CDNet outperforms existing baselines. The findings suggest that visual-to-text augmentation is an effective strategy for compensating for the limited context inherent in short texts. Full article
(This article belongs to the Section AI-Driven Innovations)
Show Figures

Figure 1

14 pages, 617 KB  
Article
Integrating ESP32-Based IoT Architectures and Cloud Visualization to Foster Data Literacy in Early Engineering Education
by Jael Zambrano-Mieles, Miguel Tupac-Yupanqui, Salutar Mari-Loardo and Cristian Vidal-Silva
Computers 2026, 15(1), 51; https://doi.org/10.3390/computers15010051 - 13 Jan 2026
Viewed by 251
Abstract
This study presents the design and implementation of a full-stack IoT ecosystem based on ESP32 microcontrollers and web-based visualization dashboards to support scientific reasoning in first-year engineering students. The proposed architecture integrates a four-layer model—perception, network, service, and application—enabling students to deploy real-time [...] Read more.
This study presents the design and implementation of a full-stack IoT ecosystem based on ESP32 microcontrollers and web-based visualization dashboards to support scientific reasoning in first-year engineering students. The proposed architecture integrates a four-layer model—perception, network, service, and application—enabling students to deploy real-time environmental monitoring systems for agriculture and beekeeping. Through a sixteen-week Project-Based Learning (PBL) intervention with 91 participants, we evaluated how this technological stack influences technical proficiency. Results indicate that the transition from local code execution to cloud-based telemetry increased perceived learning confidence from μ=3.9 (Challenge phase) to μ=4.6 (Reflection phase) on a 5-point scale. Furthermore, 96% of students identified the visualization dashboards as essential Human–Computer Interfaces (HCI) for debugging, effectively bridging the gap between raw sensor data and evidence-based argumentation. These findings demonstrate that integrating open-source IoT architectures provides a scalable mechanism to cultivate data literacy in early engineering education. Full article
Show Figures

Figure 1

33 pages, 729 KB  
Review
A Comprehensive Review of Energy Efficiency in 5G Networks: Past Strategies, Present Advances, and Future Research Directions
by Narjes Lassoued and Noureddine Boujnah
Computers 2026, 15(1), 50; https://doi.org/10.3390/computers15010050 - 12 Jan 2026
Viewed by 389
Abstract
The rapid evolution of wireless communication toward Fifth Generation (5G) networks has enabled unprecedented performance improvement in terms of data rate, latency, reliability, sustainability, and connectivity. Recent years have witnessed an excessive deployment of new 5G networks worldwide. This deployment lead to an [...] Read more.
The rapid evolution of wireless communication toward Fifth Generation (5G) networks has enabled unprecedented performance improvement in terms of data rate, latency, reliability, sustainability, and connectivity. Recent years have witnessed an excessive deployment of new 5G networks worldwide. This deployment lead to an exponential growth in traffic flow and a massive number of connected devices requiring a new generation of energy-hungry base stations (BSs). This results in increased power consumption, higher operational costs, and greater environmental impact, making energy efficiency (EE) a critical research challenge. This paper presents a comprehensive survey of EE optimization strategies in 5G networks. It reviews the transition from traditional methods such as resources allocation, energy harvesting, BS sleep modes, and power control to modern artificial intelligence (AI)-driven solutions employing machine learning, deep reinforcement learning, and self-organizing networks (SON). Comparative analyses highlight the trade-offs between energy savings, network performance, and implementation complexity. Finally, the paper outlines key open issues and future directions toward sustainable 5G and beyond-5G (B5G/Sixth Generation (6G)) systems, emphasizing explainable AI, zero-energy communications, and holistic green network design. Full article
Show Figures

Figure 1

30 pages, 998 KB  
Systematic Review
Artificial Intelligence in K-12 Education: A Systematic Review of Teachers’ Professional Development Needs for AI Integration
by Spyridon Aravantinos, Konstantinos Lavidas, Vassilis Komis, Thanassis Karalis and Stamatios Papadakis
Computers 2026, 15(1), 49; https://doi.org/10.3390/computers15010049 - 12 Jan 2026
Viewed by 970
Abstract
Artificial intelligence (AI) is reshaping how learning environments are designed and experienced, offering new possibilities for personalization, creativity, and immersive engagement. This systematic review synthesizes 43 empirical studies (Scopus, Web of Science) to examine the training needs and practices of primary and secondary [...] Read more.
Artificial intelligence (AI) is reshaping how learning environments are designed and experienced, offering new possibilities for personalization, creativity, and immersive engagement. This systematic review synthesizes 43 empirical studies (Scopus, Web of Science) to examine the training needs and practices of primary and secondary education teachers for effective AI integration and overall professional development (PD). Following PRISMA guidelines, the review gathers teachers’ needs and practices related to AI integration, identifying key themes including training practices, teachers’ perceptions and attitudes, ongoing PD programs, multi-level support, AI literacy, and ethical and responsible use. The findings show that technical training alone is not sufficient, and that successful integration of AI requires a combination of pedagogical knowledge, positive attitudes, organizational support, and continuous training. Based on empirical data, a four-level, process-oriented PD framework is proposed, which bridges research with educational practice and offers practical guidance for the design of AI training interventions. Limitations and future research are discussed. Full article
Show Figures

Figure 1

16 pages, 4099 KB  
Article
A Machine Learning Approach to Wrist Angle Estimation Under Multiple Load Conditions Using Surface EMG
by Songpon Pumjam, Sarut Panjan, Tarinee Tonggoed and Anan Suebsomran
Computers 2026, 15(1), 48; https://doi.org/10.3390/computers15010048 - 12 Jan 2026
Viewed by 138
Abstract
Surface electromyography (sEMG) is widely used for decoding motion intent in prosthetic control and rehabilitation, yet the impact of external load on sEMG-to-kinematics mapping remains insufficiently characterized, particularly for wrist flexion-extension This pilot study investigates wrist angle estimation (0–90°) under four discrete counter-torque [...] Read more.
Surface electromyography (sEMG) is widely used for decoding motion intent in prosthetic control and rehabilitation, yet the impact of external load on sEMG-to-kinematics mapping remains insufficiently characterized, particularly for wrist flexion-extension This pilot study investigates wrist angle estimation (0–90°) under four discrete counter-torque levels (0, 25, 50, and 75 N·cm) using a multilayer perceptron neural network (MLPNN) regressor with mean absolute value (MAV) features. Multi-channel sEMG was acquired from three healthy participants while performing isotonic wrist extension (clockwise) and flexion (counterclockwise) in a constrained single-degree-of-freedom setup with potentiometer-based ground truth. Signals were filtered and normalized, and MAV features were extracted using a 200 ms sliding window with a 20 ms step. Across all load levels, the within-subject models achieved very high accuracy (R2 = 0.9946–0.9982) with test MSE of 1.23–3.75 deg2; extension yielded lower error than flexion, and the largest error was observed in flexion at 25 N·cm. Because the cohort is small (n = 3), the movement is highly constrained, and subject-independent validation and embedded implementation were not evaluated, these results should be interpreted as a best-case baseline rather than evidence of deployable rehabilitation performance. Future work should test multi-DoF wrist motion, freer movement conditions, richer feature sets, and subject-independent validation. Full article
(This article belongs to the Special Issue Wearable Computing and Activity Recognition)
Show Figures

Graphical abstract

20 pages, 2221 KB  
Article
Hybrid Web Architecture with AI and Mobile Notifications to Optimize Incident Management in the Public Sector
by Luis Alberto Pfuño Alccahuamani, Anthony Meza Bautista and Hesmeralda Rojas
Computers 2026, 15(1), 47; https://doi.org/10.3390/computers15010047 - 12 Jan 2026
Viewed by 222
Abstract
This study addresses the persistent inefficiencies in incident management within regional public institutions, where dispersed offices and limited digital infrastructure constrain timely technical support. The research aims to evaluate whether a hybrid web architecture integrating AI-assisted interaction and mobile notifications can significantly improve [...] Read more.
This study addresses the persistent inefficiencies in incident management within regional public institutions, where dispersed offices and limited digital infrastructure constrain timely technical support. The research aims to evaluate whether a hybrid web architecture integrating AI-assisted interaction and mobile notifications can significantly improve efficiency in this context. The ITIMS (Intelligent Technical Incident Management System) was designed using a Laravel 10 MVC backend, a responsive Bootstrap 5 interface, and a relational MariaDB/MySQL model optimized with migrations and composite indexes, and incorporated two low-cost integrations: a stateless AI chatbot through the OpenRouter API and asynchronous mobile notifications using the Telegram Bot API managed via Laravel Queues and webhooks. Developed through four Scrum sprints and deployed on an institutional XAMPP environment, the solution was evaluated from January to April 2025 with 100 participants using operational metrics and the QWU usability instrument. Results show a reduction in incident resolution time from 120 to 31 min (74.17%), an 85.48% chatbot interaction success rate, a 94.12% notification open rate, and a 99.34% incident resolution rate, alongside an 88% usability score. These findings indicate that a modular, low-cost, and scalable architecture can effectively strengthen digital transformation efforts in the public sector, especially in regions with resource and connectivity constraints. Full article
Show Figures

Graphical abstract

27 pages, 848 KB  
Article
Model of Acceptance of Artificial Intelligence Devices in Higher Education
by Luis Salazar and Luis Rivera
Computers 2026, 15(1), 46; https://doi.org/10.3390/computers15010046 - 12 Jan 2026
Viewed by 320
Abstract
Artificial intelligence (AI) has become a highly relevant tool in higher education. However, its acceptance by university students depends not only on technical or functional characteristics, but also on cognitive, contextual, and emotional factors. This study proposes and validates a model of acceptance [...] Read more.
Artificial intelligence (AI) has become a highly relevant tool in higher education. However, its acceptance by university students depends not only on technical or functional characteristics, but also on cognitive, contextual, and emotional factors. This study proposes and validates a model of acceptance of the use of AI devices (MIDA) in the university context. The model considers contextual variables such as anthropomorphism (AN), perceived value (PV) and perceived risk (PR). It also considers cognitive variables such as performance expectancy (PEX) and perceived effort expectancy (PEE). In addition, it considers emotional variables such as anxiety (ANX), stress (ST) and trust (TR). For its validation, data were collected from 517 university students and analysed using structural equations (CB-SEM). The results indicate that perceived value, anthropomorphism and perceived risk influence the willingness to accept the use of AI devices indirectly through performance expectancy and perceived effort. Likewise, performance expectancy significantly reduces anxiety and stress and increases trust, while effort expectancy increases both anxiety and stress. Trust is the main predictor of willingness to accept the use of AI devices, while stress has a significant negative effect on this willingness. These findings contribute to the literature on the acceptance of AI devices by highlighting the mediating role of emotions and offer practical implications for the design of AI devices aimed at improving their acceptance in educational contexts. Full article
(This article belongs to the Section Human–Computer Interactions)
Show Figures

Figure 1

19 pages, 7451 KB  
Article
PPE-EYE: A Deep Learning Approach to Personal Protective Equipment Compliance Detection
by Atta Rahman, Mohammed Salih Ahmed, Khaled Naif AlBugami, Abdullah Yousef Alabbad, Abdullah Abdulaziz AlFantoukh, Yousef Hassan Alshaikhahmed, Ziyad Saleh Alzahrani, Mohammad Aftab Alam Khan, Mustafa Youldash and Saeed Matar Alshahrani
Computers 2026, 15(1), 45; https://doi.org/10.3390/computers15010045 - 11 Jan 2026
Viewed by 352
Abstract
Safety on construction sites is an essential yet challenging issue due to the inherently hazardous nature of these sites. Workers are expected to wear Personal Protective Equipment (PPE), such as helmets, vests, and safety glasses, to prevent or minimize their exposure to injuries. [...] Read more.
Safety on construction sites is an essential yet challenging issue due to the inherently hazardous nature of these sites. Workers are expected to wear Personal Protective Equipment (PPE), such as helmets, vests, and safety glasses, to prevent or minimize their exposure to injuries. However, ensuring compliance remains difficult, particularly in large or complex sites, which require a time-consuming and usually error-prone manual inspection process. The research proposes an automated PPE detection system utilizing the deep learning model YOLO11, which is trained on the CHVG dataset, to identify in real-time whether workers are adequately equipped with the necessary gear. The proposed PPE-EYE method, using YOLO11x, achieved a mAP50 of 96.9% and an inference time of 7.3 ms, which is sufficient for real-time PPE detection systems, in contrast to previous approaches involving the same dataset, which required 170 ms. The model achieved these results by employing data augmentation and fine-tuning. The proposed solution provides continuous monitoring with reduced human oversight and ensures timely alerts if non-compliance is detected, allowing the site manager to act promptly. It further enhances the effectiveness and reliability of safety inspections, overall site safety, and reduces accidents, ensuring consistency in follow-through of safety procedures to create a safer and more productive working environment for all involved in construction activities. Full article
(This article belongs to the Section AI-Driven Innovations)
Show Figures

Figure 1

25 pages, 705 KB  
Article
Privacy-Preserving Set Intersection Protocol Based on SM2 Oblivious Transfer
by Zhibo Guan, Hai Huang, Haibo Yao, Qiong Jia, Kai Cheng, Mengmeng Ge, Bin Yu and Chao Ma
Computers 2026, 15(1), 44; https://doi.org/10.3390/computers15010044 - 10 Jan 2026
Viewed by 207
Abstract
Private Set Intersection (PSI) is a fundamental cryptographic primitive in privacy-preserving computation and has been widely applied in federated learning, secure data sharing, and privacy-aware data analytics. However, most existing PSI protocols rely on RSA or standard elliptic curve cryptography, which limits their [...] Read more.
Private Set Intersection (PSI) is a fundamental cryptographic primitive in privacy-preserving computation and has been widely applied in federated learning, secure data sharing, and privacy-aware data analytics. However, most existing PSI protocols rely on RSA or standard elliptic curve cryptography, which limits their applicability in scenarios requiring domestic cryptographic standards and often leads to high computational and communication overhead when processing large-scale datasets. In this paper, we propose a novel PSI protocol based on the Chinese commercial cryptographic standard SM2, referred to as SM2-OT-PSI. The proposed scheme constructs an oblivious transfer-based Oblivious Pseudorandom Function (OPRF) using SM2 public-key cryptography and the SM3 hash function, enabling efficient multi-point OPRF evaluation under the semi-honest adversary model. A formal security analysis demonstrates that the protocol satisfies privacy and correctness guarantees assuming the hardness of the Elliptic Curve Discrete Logarithm Problem. To further improve practical performance, we design a software–hardware co-design architecture that offloads SM2 scalar multiplication and SM3 hashing operations to a domestic reconfigurable cryptographic accelerator (RSP S20G). Experimental results show that, for datasets with up to millions of elements, the presented protocol significantly outperforms several representative PSI schemes in terms of execution time and communication efficiency, especially in medium and high-bandwidth network environments. The proposed SM2-OT-PSI protocol provides a practical and efficient solution for large-scale privacy-preserving set intersection under national cryptographic standards, making it suitable for deployment in real-world secure computing systems. Full article
(This article belongs to the Special Issue Mobile Fog and Edge Computing)
Show Figures

Figure 1

21 pages, 58532 KB  
Article
Joint Inference of Image Enhancement and Object Detection via Cross-Domain Fusion Transformer
by Bingxun Zhao and Yuan Chen
Computers 2026, 15(1), 43; https://doi.org/10.3390/computers15010043 - 10 Jan 2026
Viewed by 172
Abstract
Underwater vision is fundamental to ocean exploration, yet it is frequently impaired by underwater degradation including low contrast, color distortion and blur, thereby presenting significant challenges for underwater object detection (UOD). Most existing methods employ underwater image enhancement as a preprocessing step to [...] Read more.
Underwater vision is fundamental to ocean exploration, yet it is frequently impaired by underwater degradation including low contrast, color distortion and blur, thereby presenting significant challenges for underwater object detection (UOD). Most existing methods employ underwater image enhancement as a preprocessing step to improve visual quality prior to detection. However, image enhancement and object detection are optimized for fundamentally different objectives, and directly cascading them leads to feature distribution mismatch. Moreover, prevailing dual-branch architectures process enhancement and detection independently, overlooking multi-scale interactions across domains and thus constraining the learning of cross-domain feature representation. To overcome these limitations, We propose an underwater cross-domain fusion Transformer detector (UCF-DETR). UCF-DETR jointly leverages image enhancement and object detection by exploiting the complementary information from the enhanced and original image domains. Specifically, an underwater image enhancement module is employed to improve visibility. We then design a cross-domain feature pyramid to integrate fine-grained structural details from the enhanced domain with semantic representations from the original domain. Cross-domain query interaction mechanism is introduced to model inter-domain query relationships, leading to accurate object localization and boundary delineation. Extensive experiments on the challenging DUO and UDD benchmarks demonstrate that UCF-DETR consistently outperforms state-of-the-art methods for UOD. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
Show Figures

Figure 1

15 pages, 544 KB  
Article
Preparation for Inclusive and Technology-Enhanced Pedagogy: A Cluster Analysis of Secondary Special Education Teachers
by Evaggelos Foykas, Eleftheria Beazidou, Natassa Raikou and Nikolaos C. Zygouris
Computers 2026, 15(1), 42; https://doi.org/10.3390/computers15010042 - 9 Jan 2026
Viewed by 284
Abstract
This study examines the profiles of secondary special education teachers regarding their readiness for inclusive teaching, with technology-enhanced practices operationalized through participation in STEAM-related professional development. A total of 323 teachers from vocational high schools and integration classes participated. Four indicators of professional [...] Read more.
This study examines the profiles of secondary special education teachers regarding their readiness for inclusive teaching, with technology-enhanced practices operationalized through participation in STEAM-related professional development. A total of 323 teachers from vocational high schools and integration classes participated. Four indicators of professional preparation were assessed: years of teaching experience, formal STEAM training, exposure to students with special educational needs (SEN), and perceived success in inclusive teaching, operationalized as self-reported competence in adaptive instruction, classroom management, positive attitudes toward inclusion, and collaborative engagement. Cluster analysis revealed three distinct teacher profiles: less experienced teachers with moderate perceived success and limited exposure to students with SEN; well-prepared teachers with high levels across all indicators; and highly experienced teachers with lower STEAM training and perceived success. These findings underscore the need for targeted professional development that integrates inclusive and technology-enhanced pedagogy through STEAM and is tailored to teachers’ experience levels. By integrating inclusive readiness, STEAM-related preparation, and technology-enhanced pedagogy within a person-centered profiling approach, this study offers actionable teacher profiles to inform differentiated professional development in secondary special education. Full article
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)
Show Figures

Figure 1

37 pages, 1413 KB  
Systematic Review
Emerging Technologies in Financial Services: From Virtualization and Cloud Infrastructures to Edge Computing Applications
by Georgios Lambropoulos, Sarandis Mitropoulos and Christos Douligeris
Computers 2026, 15(1), 41; https://doi.org/10.3390/computers15010041 - 9 Jan 2026
Viewed by 530
Abstract
The financial services sector is experiencing unprecedented transformation through the adoption of virtualization technologies, encompassing cloud computing and edge computing digitalization initiatives that fundamentally alter operational paradigms and competitive dynamics within the industry. This systematic literature review employed a comprehensive methodology, analyzing peer-reviewed [...] Read more.
The financial services sector is experiencing unprecedented transformation through the adoption of virtualization technologies, encompassing cloud computing and edge computing digitalization initiatives that fundamentally alter operational paradigms and competitive dynamics within the industry. This systematic literature review employed a comprehensive methodology, analyzing peer-reviewed articles, systematic reviews, and industry reports published between 2016 and 2025 across three primary technological domains, utilizing thematic content analysis to synthesize findings and identify key implementation patterns, performance outcomes, and emerging challenges. The analysis reveals consistent evidence of positive long-term performance outcomes from virtualization technology adoption, including average transaction processing time reductions of 69% through edge computing implementations, substantial operational cost savings and efficiency improvements through cloud computing adoption, while simultaneously identifying critical challenges related to regulatory compliance, security management, and organizational transformation requirements. Virtualization technology offers transformative potential for financial services through improved operational efficiency, enhanced customer experience, and competitive advantage creation, though successful implementation requires sophisticated approaches to standardization, regulatory compliance, and change management, with future research needed to develop integrative frameworks addressing technology convergence and emerging applications in decentralized finance and digital currency systems. Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
Show Figures

Figure 1

Previous Issue
Next Issue
Back to TopTop