Journal Description
Computers
Computers
is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access— free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.5 days after submission; acceptance to publication is undertaken in 3.9 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds and Computers.
Impact Factor:
4.2 (2024);
5-Year Impact Factor:
3.5 (2024)
Latest Articles
Videogame Programming & Education: Enhancing Programming Skills Through Unity Visual Scripting
Computers 2026, 15(1), 68; https://doi.org/10.3390/computers15010068 (registering DOI) - 18 Jan 2026
Abstract
Videogames (VGs) are highly attractive for children and young people. Although videogames were once viewed mainly as sources of distraction and leisure, they are now widely recognised as powerful tools for competence development across diverse domains. Designing and implementing a videogame is even
[...] Read more.
Videogames (VGs) are highly attractive for children and young people. Although videogames were once viewed mainly as sources of distraction and leisure, they are now widely recognised as powerful tools for competence development across diverse domains. Designing and implementing a videogame is even more appealing for children and novice students than merely playing it, but developing programming competencies using a text-based language often constitutes a significant barrier to entry. This article presents the implementation and evaluation of a videogame development experience with university students using the Unity engine and its Visual Scripting block-based tool. Students worked in teams and successfully completed videogame projects, demonstrating substantial gains in programming and game construction skills. The adopted methodology facilitated learning, collaboration, and engagement. Building on a quasi-experimental design that compared a prior unit based on C# and MonoGame with a subsequent unit based on Unity Visual Scripting, the study analyses differences in performance, development effort, and motivational indicators. The results show statistically significant improvements in grades, reduced development time for core mechanics, and higher self-reported confidence when Visual Scripting is employed. The evidence supports the view of Visual Scripting as an effective educational strategy to introduce programming concepts without the syntactic and semantic barriers of traditional text-based languages. The findings further suggest that Unity Visual Scripting can act as a didactic bridge towards advanced programming, and that its adoption in secondary and primary education is promising both for reinforcing traditional subjects (history, language, mathematics) and for fostering foundational programming and videogame development skills in an inclusive manner.
Full article
(This article belongs to the Special Issue Advances in Game-Based Learning, Gamification in Education and Serious Games)
Open AccessArticle
Fast Computation for Square Matrix Factorization
by
Artyom M. Grigoryan
Computers 2026, 15(1), 67; https://doi.org/10.3390/computers15010067 (registering DOI) - 17 Jan 2026
Abstract
In this work, we discuss a method for the QR-factorization of N × N matrices where N ≥ 3 which is based on transformations which are called discrete signal-induced heap transformations (DsiHTs). These transformations are generated by given signals and can be composed
[...] Read more.
In this work, we discuss a method for the QR-factorization of N × N matrices where N ≥ 3 which is based on transformations which are called discrete signal-induced heap transformations (DsiHTs). These transformations are generated by given signals and can be composed by elementary rotations. The data processing order, or the path of the transformations, is an important characteristic of it, and the correct choice of such paths can lead to a significant reduction in the operation when calculating the factorization for large matrices. Such paths are called fast paths of the N-point DsiHTs, and they define sparse matrices with more zero coefficients than when calculating QR-factorization in the traditional path, that is, when processing data in the natural order x0, x1, x2, …. For example, in the first stage of the factorization of a 512 × 512 matrix, a matrix is used with 257,024 zero coefficients out of a total of 262,144 coefficients when using the fast paths. For comparison, the calculations in the natural order require a 512 × 512 matrix with only 130,305 zero coefficients at this stage. The Householder reflection matrix has no zero coefficients. The number of multiplication operations for the QR-factorization by the fast DsiHTs is more than 40 times smaller than when using the Householder reflections and 20 times smaller when using DsiHTs with the natural paths. Examples with the 4 × 4, 5 × 5, and 8 × 8 matrices are described in detail. The concept of complex DsiHT with fast paths is also described and applied in the QR-factorization of complex square matrices. An example of the QR-factorization of a 256 × 256 complex matrix is also described and compared with the method of Householder reflections which is used in programming language MATLAB R2024b.
Full article
Open AccessArticle
A Business-Oriented Approach to Automated Threat Analysis for Large-Scale Infrastructure Systems
by
Chiaki Otahara, Hiroki Uchiyama and Makoto Kayashima
Computers 2026, 15(1), 66; https://doi.org/10.3390/computers15010066 - 16 Jan 2026
Abstract
Security design for large-scale infrastructure systems requires substantial effort and often causes development delays. In line with NIST guidance, such systems should consider security design throughout a system development lifecycle. Nevertheless, performing security design in early phases of the lifecycle is difficult due
[...] Read more.
Security design for large-scale infrastructure systems requires substantial effort and often causes development delays. In line with NIST guidance, such systems should consider security design throughout a system development lifecycle. Nevertheless, performing security design in early phases of the lifecycle is difficult due to frequent specification changes and variability in analyst expertise, which causes repeated rework. The workload is particularly critical in threat analysis, the key activity of security design, because rework can inflate the workload. To address this challenge, we propose an automated threat-analysis method. Specifically, (i) we systematize past security design cases and develop “templates” that organize the system-configuration and security information required for threat analysis into a reusable 5W-based format (When, Where, Who, Why, What); (ii) we define dependencies among the templates and design an algorithm that automatically generates threat-analysis results; and (iii) observing that threat analysis of large-scale systems often yield overlaps, we introduce “business operations” as an analytical asset, which includes encompassing information, function, and physical resources. We apply our method to an actual large-scale operational system and confirm that it reduces the workload by up to 84% relative to conventional manual analysis, while maintaining both the coverage and the accuracy of the analysis.
Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
►▼
Show Figures

Graphical abstract
Open AccessArticle
Comparing Emerging and Hybrid Quantum–Kolmogorov Architectures for Image Classification
by
Lelio Campanile, Mariarosaria Castaldo, Stefano Marrone and Fabio Napoli
Computers 2026, 15(1), 65; https://doi.org/10.3390/computers15010065 - 16 Jan 2026
Abstract
The rapid evolution of Artificial Intelligence has enabled significant progress in image classification, with emerging approaches extending traditional deep learning paradigms. This article presents an extended version of a paper originally introduced at ICCSA 2025, providing a broader comparative analysis of classical, spline-based,
[...] Read more.
The rapid evolution of Artificial Intelligence has enabled significant progress in image classification, with emerging approaches extending traditional deep learning paradigms. This article presents an extended version of a paper originally introduced at ICCSA 2025, providing a broader comparative analysis of classical, spline-based, and quantum machine learning architectures. The study evaluates Convolutional Neural Networks (CNNs), Kolmogorov–Arnold Networks (KANs), Convolutional KANs (CKANs), and Quantum Convolutional Neural Networks (QCNNs) on the Labeled Faces in the Wild dataset. In addition to these baselines, two novel architectures are introduced: a fully quantum Kolmogorov–Arnold model (F-QKAN) and a hybrid KAN–Quantum network (H-QKAN) that combines spline-based feature extraction with variational quantum classification. Rather than targeting state-of-the-art performance, the evaluation focuses on analyzing the behaviour of these architectures in terms of accuracy, computational efficiency, and interpretability under a unified experimental protocol. Results show that the fully quantum F-QKAN achieves a test accuracy above 80%. The hybrid H-QKAN obtains the best overall performance, exceeding 92% accuracy with rapid convergence and stable training dynamics. Classical CNNs models remain state-of-the-art in terms of predictive performance, whereas CKANs offer a favorable balance between accuracy and efficiency. QCNNs show potential in ideal noise-free settings but are significantly affected by realistic noise conditions, motivating further investigation into hybrid quantum–classical designs.
Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2025 (ICCSA 2025))
►▼
Show Figures

Figure 1
Open AccessArticle
Exploring Slow Responses in International Large-Scale Assessments Using Sequential Process Analysis
by
Daniel Jerez, Elisabetta Mazzullo and Okan Bulut
Computers 2026, 15(1), 64; https://doi.org/10.3390/computers15010064 - 16 Jan 2026
Abstract
Slow responding in International Large-Scale Assessments (ILSAs) has received far less attention than rapid guessing, despite its potential to reveal heterogeneous response processes. Unlike disengaged rapid responders, slow responders may differ in time management, off-task behavior, or specific cognitive operations. This exploratory study
[...] Read more.
Slow responding in International Large-Scale Assessments (ILSAs) has received far less attention than rapid guessing, despite its potential to reveal heterogeneous response processes. Unlike disengaged rapid responders, slow responders may differ in time management, off-task behavior, or specific cognitive operations. This exploratory study uses sequence analysis of log-file data from a complex problem-solving item in PISA 2012 to examine whether slow responders can be grouped into homogeneous subtypes. The item required students to explore causal relations and externalize them in a diagram. Results indicate two distinct clusters among slow responders, each marked by characteristic interaction patterns and difficulties at different stages of the solution process. One cluster exhibited long pauses interspersed with repeated, inefficient attempts at representing causal relationships; the other showed shorter pauses coupled with inefficient exploratory actions targeting those relationships. These findings demonstrate that sequence analysis can parsimoniously identify clusters of action sequences associated with slow responding, offering a finer-grained account of aberrant behavior in low-stakes, digital assessments. More broadly, the approach illustrates how process data can be leveraged to differentiate mechanisms underlying slow response behaviors, with implications for validity arguments, diagnostic feedback, and the design of mitigation strategies in ILSAs. Directions for future research to better understand the differences among slow responders are provided.
Full article
(This article belongs to the Special Issue Recent Advances in Data Mining: Methods, Trends, and Emerging Applications)
►▼
Show Figures

Figure 1
Open AccessArticle
A Lightweight Edge-AI System for Disease Detection and Three-Level Leaf Spot Severity Assessment in Strawberry Using YOLOv10n and MobileViT-S
by
Raikhan Amanova, Baurzhan Belgibayev, Madina Mansurova, Madina Suleimenova, Gulshat Amirkhanova and Gulnur Tyulepberdinova
Computers 2026, 15(1), 63; https://doi.org/10.3390/computers15010063 - 16 Jan 2026
Abstract
►▼
Show Figures
Mobile edge-AI plant monitoring systems enable automated disease control in greenhouses and open fields, reducing dependence on manual inspection and the variability of visual diagnostics. This paper proposes a lightweight two-stage edge-AI system for strawberries, in which a YOLOv10n detector on board a
[...] Read more.
Mobile edge-AI plant monitoring systems enable automated disease control in greenhouses and open fields, reducing dependence on manual inspection and the variability of visual diagnostics. This paper proposes a lightweight two-stage edge-AI system for strawberries, in which a YOLOv10n detector on board a mobile agricultural robot locates leaves affected by seven common diseases (including Leaf Spot) with real-time capability on an embedded platform. Patches are then automatically extracted for leaves classified as Leaf Spot and transmitted to the second module—a compact MobileViT-S-based classifier with ordinal output that assesses the severity of Leaf Spot on three levels (S1—mild, S2—moderate, S3—severe) on a specialised set of 373 manually labelled leaf patches. In a comparative experiment with lightweight architectures ResNet-18, EfficientNet-B0, MobileNetV3-Small and Swin-Tiny, the proposed Ordinal MobileViT-S demonstrated the highest accuracy in assessing the severity of Leaf Spot (accuracy ≈ 0.97 with 4.9 million parameters), surpassing both the baseline models and the standard MobileViT-S with a cross-entropy loss function. On the original image set, the YOLOv10n detector achieves an mAP@0.5 of 0.960, an F1 score of 0.93 and a recall of 0.917, ensuring reliable detection of affected leaves for subsequent Leaf Spot severity assessment. The results show that the “YOLOv10n + Ordinal MobileViT-S” cascade provides practical severity-aware Leaf Spot diagnosis on a mobile agricultural robot and can serve as the basis for real-time strawberry crop health monitoring systems.
Full article

Figure 1
Open AccessArticle
Low-Latency Autonomous Surveillance in Defense Environments: A Hybrid RTSP-WebRTC Architecture with YOLOv11
by
Juan José Castro-Castaño, William Efrén Chirán-Alpala, Guillermo Alfonso Giraldo-Martínez, José David Ortega-Pabón, Edison Camilo Rodríguez-Amézquita, Diego Ferney Gallego-Franco and Yeison Alberto Garcés-Gómez
Computers 2026, 15(1), 62; https://doi.org/10.3390/computers15010062 - 16 Jan 2026
Abstract
►▼
Show Figures
This article presents the Intelligent Monitoring System (IMS), an AI-assisted, low-latency surveillance platform designed for defense environments. The study addresses the need for real-time autonomous situational awareness by integrating high-speed video transmission with advanced computer vision analytics in constrained network settings. The IMS
[...] Read more.
This article presents the Intelligent Monitoring System (IMS), an AI-assisted, low-latency surveillance platform designed for defense environments. The study addresses the need for real-time autonomous situational awareness by integrating high-speed video transmission with advanced computer vision analytics in constrained network settings. The IMS employs a hybrid transmission architecture based on RTSP for ingestion and WHEP/WebRTC for distribution, orchestrated via MediaMTX, with the objective of achieving end-to-end latencies below one second. The methodology includes a comparative evaluation of video streaming protocols (JPEG-over-WebSocket, HLS, WebRTC, etc.) and AI frameworks, alongside the modular architectural design and prolonged experimental validation. The detection module integrates YOLOv11 models fine-tuned on the VisDrone dataset to optimize performance for small objects, aerial views, and dense scenes. Experimental results, obtained through over 300 h of operational tests using IP cameras and aerial platforms, confirmed the stability and performance of the chosen architecture, maintaining latencies close to 500 ms. The YOLOv11 family was adopted as the primary detection framework, providing an effective trade-off between accuracy and inference performance in real-time scenarios. The YOLOv11n model was trained and validated on a Tesla T4 GPU, and YOLOv11m will be validated on the target platform in subsequent experiments. The findings demonstrate the technical viability and operational relevance of the IMS as a core component for autonomous surveillance systems in defense, satisfying strict requirements for speed, stability, and robust detection of vehicles and pedestrians.
Full article

Figure 1
Open AccessArticle
Using Steganography and Artificial Neural Network for Data Forensic Validation and Counter Image Deepfakes
by
Matimu Caswell Nkuna, Ebenezer Esenogho and Ahmed Ali
Computers 2026, 15(1), 61; https://doi.org/10.3390/computers15010061 - 15 Jan 2026
Abstract
The merging of the Internet of Things (IoT) and Artificial Intelligence (AI) advances has intensified challenges related to data authenticity and security. These advancements necessitate a multi-layered security approach to ensure the security, reliability, and integrity of critical infrastructure and intelligent surveillance systems.
[...] Read more.
The merging of the Internet of Things (IoT) and Artificial Intelligence (AI) advances has intensified challenges related to data authenticity and security. These advancements necessitate a multi-layered security approach to ensure the security, reliability, and integrity of critical infrastructure and intelligent surveillance systems. This paper proposes a two-layered security approach that combines a discrete cosine transform least significant bit 2 (DCT-LSB-2) with artificial neural networks (ANNs) for data forensic validation and mitigating deepfakes. The proposed model encodes validation codes within the LSBs of cover images captured by an IoT camera on the sender side, leveraging the DCT approach to enhance the resilience against steganalysis. On the receiver side, a reverse DCT-LSB-2 process decodes the embedded validation code, which is subjected to authenticity verification by a pre-trained ANN model. The ANN validates the integrity of the decoded code and ensures that only device-originated, untampered images are accepted. The proposed framework achieved an average SSIM of 0.9927 across the entire investigated embedding capacity, ranging from 0 to 1.988 bpp. DCT-LSB-2 showed a stable Peak Signal-to-Noise Ratio (average 42.44 dB) under various evaluated payloads ranging from 0 to 100 kB. The proposed model achieved a resilient and robust multi-layered data forensic validation system.
Full article
(This article belongs to the Special Issue Multimedia Data and Network Security)
►▼
Show Figures

Graphical abstract
Open AccessArticle
The Integration of ISO 27005 and NIST SP 800-30 for Security Operation Center (SOC) Framework Effectiveness in the Non-Bank Financial Industry
by
Muharman Lubis, Muhammad Irfan Luthfi, Rd. Rohmat Saedudin, Alif Noorachmad Muttaqin and Arif Ridho Lubis
Computers 2026, 15(1), 60; https://doi.org/10.3390/computers15010060 - 15 Jan 2026
Abstract
►▼
Show Figures
A Security Operation Center (SOC) is a security control center for monitoring, detecting, analyzing, and responding to cybersecurity threats. PT (Perseroan Terbatas) Non-Bank Financial Company (NBFC) has implemented an SOC to secure its information systems, but challenges remain to be solved.
[...] Read more.
A Security Operation Center (SOC) is a security control center for monitoring, detecting, analyzing, and responding to cybersecurity threats. PT (Perseroan Terbatas) Non-Bank Financial Company (NBFC) has implemented an SOC to secure its information systems, but challenges remain to be solved. These include the absence of impact analysis on financial and regulatory requirements, cost, and effort estimation for recovery; established Key Performance Indicators (KPIs) and Key Risk Indicators (KRIs) for monitoring security controls; and an official program for insider threats. This study evaluates SOC effectiveness at PT NBFC using the ISO 27005:2018 and NIST SP 800-30 frameworks. The research results in a proposed SOC assessment framework, integrating risk assessment, risk treatment, risk acceptance, and monitoring. Additionally, a maturity level assessment was conducted for ISO 27005:2018, NIST SP 800-30, and the proposed framework. The proposed framework achieves good maturity, with two domains meeting the target maturity value and one domain reaching level 4 (Managed and Measurable). By incorporating domains from both ISO 27005:2018 and NIST SP 800-30, the new framework offers a more comprehensive risk management approach, covering strategic, managerial, and technical aspects.
Full article

Figure 1
Open AccessArticle
AI-Based Emoji Recommendation for Early Childhood Education Using Deep Learning Techniques
by
Shaya A. Alshaya
Computers 2026, 15(1), 59; https://doi.org/10.3390/computers15010059 - 15 Jan 2026
Abstract
►▼
Show Figures
The integration of emojis into Early Childhood Education (ECE) presents a promising avenue for enhancing student engagement, emotional expression, and comprehension. While prior studies suggest the benefit of visual aids in learning, systematic frameworks for pedagogically aligned emoji recommendation remain underdeveloped. This paper
[...] Read more.
The integration of emojis into Early Childhood Education (ECE) presents a promising avenue for enhancing student engagement, emotional expression, and comprehension. While prior studies suggest the benefit of visual aids in learning, systematic frameworks for pedagogically aligned emoji recommendation remain underdeveloped. This paper presents EduEmoji-ECE, a pedagogically annotated dataset of early-childhood learning text segments. Specifically, the proposed model incorporates Bidirectional Encoder Representations from Transformers (BERTs) for contextual embedding extraction, Gated Recurrent Units (GRUs) for sequential pattern recognition, Deep Neural Networks (DNNs) for classification and emoji recommendation, and DECOC for improving emoji class prediction robustness. This hybrid BERT-GRU-DNN-DECOC architecture effectively captures textual semantics, emotional tone, and pedagogical intent, ensuring the alignment of emoji class recommendation with learning objectives. The experimental results show that the system is effective, with an accuracy of 95.3%, a precision of 93%, a recall of 91.8%, and an F1-score of 92.3%, outperforming baseline models in terms of contextual understanding and overall accuracy. This work helps fill a gap in AI-based education by combining learning with visual support for young children. The results suggest an association between emoji-enhanced materials and improved engagement/comprehension indicators in our exploratory classroom setting; however, causal attribution to the AI placement mechanism is not supported by the current study design.
Full article

Figure 1
Open AccessArticle
An Open-Source System for Public Transport Route Data Curation Using OpenTripPlanner in Australia
by
Kiki Adhinugraha, Yusuke Gotoh and David Taniar
Computers 2026, 15(1), 58; https://doi.org/10.3390/computers15010058 - 14 Jan 2026
Abstract
Access to large-scale public transport journey data is essential for analysing accessibility, equity, and urban mobility. Although digital platforms such as Google Maps provide detailed routing for individual users, their licensing and access restrictions prevent systematic data extraction for research purposes. Open-source routing
[...] Read more.
Access to large-scale public transport journey data is essential for analysing accessibility, equity, and urban mobility. Although digital platforms such as Google Maps provide detailed routing for individual users, their licensing and access restrictions prevent systematic data extraction for research purposes. Open-source routing engines such as OpenTripPlanner offer a transparent alternative, but are often limited to local or technical deployments that restrict broader use. This study evaluates the feasibility of deploying a publicly accessible, open-source routing platform based on OpenTripPlanner to support large-scale public transport route simulation across multiple cities. Using Australian metropolitan areas as a case study, the platform integrates GTFS and OpenStreetMap data to enable repeatable journey queries through a web interface, an API, and bulk processing tools. Across eight metropolitan regions, the system achieved itinerary coverage above 90 percent and sustained approximately 3000 routing requests per minute under concurrent access. These results demonstrate that open-source routing infrastructure can support reliable, large-scale route simulation using open data. Beyond performance, the platform enables public transport accessibility studies that are not feasible with proprietary routing services, supporting reproducible research, transparent decision-making, and evidence-based transport planning across diverse urban contexts.
Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2025 (ICCSA 2025))
►▼
Show Figures

Figure 1
Open AccessArticle
IDN-MOTSCC: Integration of Deep Neural Network with Hybrid Meta-Heuristic Model for Multi-Objective Task Scheduling in Cloud Computing
by
Mohit Kumar, Rama Kant, Brijesh Kumar Gupta, Azhar Shadab, Ashwani Kumar and Krishna Kant
Computers 2026, 15(1), 57; https://doi.org/10.3390/computers15010057 - 14 Jan 2026
Abstract
Cloud computing covers a wide range of practical applications and diverse domains, yet resource scheduling and task scheduling remain significant challenges. To address this, different task scheduling algorithms are implemented across various computing systems to allocate tasks to machines, thereby enhancing performance through
[...] Read more.
Cloud computing covers a wide range of practical applications and diverse domains, yet resource scheduling and task scheduling remain significant challenges. To address this, different task scheduling algorithms are implemented across various computing systems to allocate tasks to machines, thereby enhancing performance through data mapping. To meet these challenges, a novel task scheduling model is proposed using a hybrid meta-heuristic integration with a deep learning approach. We employed this novel task scheduling model to integrate deep learning with an optimized DNN, fine-tuned using improved grey wolf–horse herd optimization, with the aim of optimizing cloud-based task allocation and overcoming makespan constraints. Initially, a user initiates a task or request within the cloud environment. Then, these tasks are assigned to Virtual Machines (VMs). Since the scheduling algorithm is constrained by the makespan objective, an optimized Deep Neural Network (DNN) model is developed to perform optimal task scheduling. Random solutions are provided to the optimized DNN, where the hidden neuron count is tuned optimally by the proposed Improved Grey Wolf–Horse Herd Optimization (IGW-HHO) algorithm. The proposed IGW-HHO algorithm is derived from both conventional Grey Wolf Optimization (GWO) and Horse Herd Optimization (HHO). The optimal solutions are acquired from the optimized DNN and processed by the proposed algorithm to efficiently allocate tasks to VMs. The experimental results are validated using various error measures and convergence analysis. The proposed DNN-IGW-HHO model achieved a lower cost function compared to other optimization methods, with a reduction of 1% compared to PSO, 3.5% compared to WOA, 2.7% compared to GWO, and 0.7% compared to HHO. The proposed task scheduling model achieved the minimal Mean Absolute Error (MAE), with performance improvements of 31% over PSO, 20.16% over WOA, 41.72% over GWO, and 9.11% over HHO.
Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
►▼
Show Figures

Figure 1
Open AccessArticle
Implementing Learning Analytics in Education: Enhancing Actionability and Adoption
by
Dimitrios E. Tzimas and Stavros N. Demetriadis
Computers 2026, 15(1), 56; https://doi.org/10.3390/computers15010056 - 14 Jan 2026
Abstract
The broader aim of this research is to examine how Learning Analytics (LA) can become ethically sound, pedagogically actionable, and realistically adopted in educational practice. To address this overarching challenge, the study investigates three interrelated research questions: ethics by design, learning impact, and
[...] Read more.
The broader aim of this research is to examine how Learning Analytics (LA) can become ethically sound, pedagogically actionable, and realistically adopted in educational practice. To address this overarching challenge, the study investigates three interrelated research questions: ethics by design, learning impact, and adoption conditions. Methodologically, the research follows an exploratory sequential multi-method design. First, a meta-synthesis of 53 studies is conducted to identify key ethical challenges in LA and to derive an ethics-by-design framework. Second, a quasi-experimental study examines the impact of interface-based LA guidance (strong versus minimal) on students’ self-regulated learning skills and academic performance. Third, a mixed-methods adoption study, combining surveys, focus groups, and ethnographic observations, investigates the factors that encourage or hinder teachers’ adoption of LA in K–12 education. The findings indicate that strong LA-based guidance leads to statistically significant improvements in students’ self-regulated learning skills and academic performance compared to minimal guidance. Furthermore, the adoption analysis reveals that performance expectancy, social influence, human-centred design, and positive emotions facilitate LA adoption, whereas effort expectancy, limited facilitating conditions, ethical concerns, and cultural resistance inhibit it. Overall, the study demonstrates that ethics by design, effective pedagogical guidance, and adoption conditions are mutually reinforcing dimensions. It argues that LA can support intelligent, responsive, and human-centred learning environments when ethical safeguards, instructional design, and stakeholder involvement are systematically aligned.
Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
►▼
Show Figures

Figure 1
Open AccessArticle
SteadyEval: Robust LLM Exam Graders via Adversarial Training and Distillation
by
Catalin Anghel, Marian Viorel Craciun, Adina Cocu, Andreea Alexandra Anghel and Adrian Istrate
Computers 2026, 15(1), 55; https://doi.org/10.3390/computers15010055 - 14 Jan 2026
Abstract
Large language models (LLMs) are increasingly used as rubric-guided graders for short-answer exams, but their decisions can be unstable across prompts and vulnerable to answer-side prompt injection. In this paper, we study SteadyEval, a guardrailed exam-grading pipeline in which an adversarially trained LoRA
[...] Read more.
Large language models (LLMs) are increasingly used as rubric-guided graders for short-answer exams, but their decisions can be unstable across prompts and vulnerable to answer-side prompt injection. In this paper, we study SteadyEval, a guardrailed exam-grading pipeline in which an adversarially trained LoRA filter (SteadyEval-7B-deep) preprocesses student answers to remove answer-side prompt injection, after which the original Mistral-7B-Instruct rubric-guided grader assigns the final score. We build two exam-grading pipelines on top of Mistral-7B-Instruct: a baseline pipeline that scores student answers directly, and a guardrailed pipeline in which a LoRA-based filter (SteadyEval-7B-deep) first removes injection content from the answer and a downstream grader then assigns the final score. Using two rubric-guided short-answer datasets in machine learning and computer networking, we generate grouped families of clean answers and four classes of answer-side attacks, and we evaluate the impact of these attacks on score shifts, attack success rates, stability across prompt variants, and alignment with human graders. On the pooled dataset, answer-side attacks inflate grades in the unguarded baseline by an average of about +1.2 points on a 1–10 scale, and substantially increase score dispersion across prompt variants. The guardrailed pipeline largely removes this systematic grade inflation and reduces instability for many items, especially in the machine-learning exam, while keeping mean absolute error with respect to human reference scores in a similar range to the unguarded baseline on clean answers, with a conservative shift in networking that motivates per-course calibration. Chief-panel comparisons further show that the guardrailed pipeline tracks human grading more closely on machine-learning items, but tends to under-score networking answers. These findings are best interpreted as a proof-of-concept guardrail and require per-course validation and calibration before operational use.
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling (2nd Edition))
►▼
Show Figures

Figure 1
Open AccessArticle
Gabor Transform-Based Deep Learning System Using CNN for Melanoma Detection
by
S. Deivasigamani, C. Senthilpari, Siva Sundhara Raja. D, A. Thankaraj, G. Narmadha and K. Gowrishankar
Computers 2026, 15(1), 54; https://doi.org/10.3390/computers15010054 - 13 Jan 2026
Abstract
Melanoma is highly dangerous and can spread rapidly to other parts of the body. It has an increasing fatality rate among different types of cancer. Timely detection of skin malignancies can reduce overall mortality. Therefore, clinical screening methods require more time and accuracy
[...] Read more.
Melanoma is highly dangerous and can spread rapidly to other parts of the body. It has an increasing fatality rate among different types of cancer. Timely detection of skin malignancies can reduce overall mortality. Therefore, clinical screening methods require more time and accuracy for diagnosis. An automated, computer-aided system would facilitate earlier melanoma detection, thereby increasing patient survival rates. This paper identifies melanoma images using a Convolutional Neural Network. Skin images are preprocessed using Histogram Equalization and Gabor transforms. A Gabor filter-based Convolutional Neural Network (CNN) classifier trains and classifies the extracted features. We adopt Gabor filters because they are bandpass filters that transform a pixel into a multi-resolution kernel matrix, providing detailed information about the image. This study suggests a method with accuracy, sensitivity, and specificity of 98.58%, 98.66%, and 98.75%, respectively. This research supports SDGs 3 and 4 by facilitating early melanoma detection and enhancing AI-driven medical education.
Full article
(This article belongs to the Topic AI, Deep Learning, and Machine Learning in Veterinary Science Imaging)
►▼
Show Figures

Figure 1
Open AccessArticle
Stereo-Based Single-Shot Hand-to-Eye Calibration for Robot Arms
by
Pushkar Kadam, Gu Fang, Farshid Amirabdollahian, Ju Jia Zou and Patrick Holthaus
Computers 2026, 15(1), 53; https://doi.org/10.3390/computers15010053 - 13 Jan 2026
Abstract
Robot hand-to-eye calibration is a necessary process for a robot arm to perceive and interact with its environment. Past approaches required collecting multiple images using a calibration board placed at different locations relative to the robot. When the robot or camera is displaced
[...] Read more.
Robot hand-to-eye calibration is a necessary process for a robot arm to perceive and interact with its environment. Past approaches required collecting multiple images using a calibration board placed at different locations relative to the robot. When the robot or camera is displaced from its calibrated position, hand–eye calibration must be redone using the same tedious process. In this research, we developed a novel method that uses a semi-automatic process to perform hand-to-eye calibration with a stereo camera, generating a transformation matrix from the world to the camera coordinate frame from a single image. We use a robot-pointer tool attached to the robot’s end-effector to manually establish a relationship between the world and the robot coordinate frame. Then, we establish the relationship between the camera and the robot using a transformation matrix that maps points observed in the stereo image frame from two-dimensional space to the robot’s three-dimensional coordinate frame. Our analysis of the stereo calibration showed a reprojection error of 0.26 pixels. An evaluation metric was developed to test the camera-to-robot transformation matrix, and the experimental results showed median root mean square errors of less than 1 mm in the x and y directions and less than 2 mm in the z directions in the robot coordinate frame. The results show that, with this work, we contribute a hand-to-eye calibration method that uses three non-collinear points in a single stereo image to map camera-to-robot coordinate-frame transformations.
Full article
(This article belongs to the Special Issue Advanced Human–Robot Interaction 2025)
►▼
Show Figures

Figure 1
Open AccessArticle
Learning Complementary Representations for Targeted Multimodal Sentiment Analysis
by
Binfen Ding, Jieyu An and Yumeng Lei
Computers 2026, 15(1), 52; https://doi.org/10.3390/computers15010052 - 13 Jan 2026
Abstract
Targeted multimodal sentiment classification is frequently impeded by the semantic sparsity of social media content, where text is brief and context is implicit. Traditional methods that rely on direct concatenation of textual and visual features often fail to resolve the ambiguity of specific
[...] Read more.
Targeted multimodal sentiment classification is frequently impeded by the semantic sparsity of social media content, where text is brief and context is implicit. Traditional methods that rely on direct concatenation of textual and visual features often fail to resolve the ambiguity of specific targets due to a lack of alignment between modalities. In this paper, we propose the Complementary Description Network (CDNet) to bridge this informational gap. CDNet incorporates automatically generated image descriptions as an additional semantic bridge, in contrast to methods that handle text and images as distinct streams. The framework enhances the input representation by directly translating visual content into text, allowing for more accurate interactions between the opinion target and the visual narrative. We further introduce a complementary reconstruction module that functions as a regularizer, forcing the model to retain deep semantic cues during fusion. Empirical results on the Twitter-2015 and Twitter-2017 benchmarks confirm that CDNet outperforms existing baselines. The findings suggest that visual-to-text augmentation is an effective strategy for compensating for the limited context inherent in short texts.
Full article
(This article belongs to the Section AI-Driven Innovations)
►▼
Show Figures

Figure 1
Open AccessArticle
Integrating ESP32-Based IoT Architectures and Cloud Visualization to Foster Data Literacy in Early Engineering Education
by
Jael Zambrano-Mieles, Miguel Tupac-Yupanqui, Salutar Mari-Loardo and Cristian Vidal-Silva
Computers 2026, 15(1), 51; https://doi.org/10.3390/computers15010051 - 13 Jan 2026
Abstract
►▼
Show Figures
This study presents the design and implementation of a full-stack IoT ecosystem based on ESP32 microcontrollers and web-based visualization dashboards to support scientific reasoning in first-year engineering students. The proposed architecture integrates a four-layer model—perception, network, service, and application—enabling students to deploy real-time
[...] Read more.
This study presents the design and implementation of a full-stack IoT ecosystem based on ESP32 microcontrollers and web-based visualization dashboards to support scientific reasoning in first-year engineering students. The proposed architecture integrates a four-layer model—perception, network, service, and application—enabling students to deploy real-time environmental monitoring systems for agriculture and beekeeping. Through a sixteen-week Project-Based Learning (PBL) intervention with 91 participants, we evaluated how this technological stack influences technical proficiency. Results indicate that the transition from local code execution to cloud-based telemetry increased perceived learning confidence from (Challenge phase) to (Reflection phase) on a 5-point scale. Furthermore, 96% of students identified the visualization dashboards as essential Human–Computer Interfaces (HCI) for debugging, effectively bridging the gap between raw sensor data and evidence-based argumentation. These findings demonstrate that integrating open-source IoT architectures provides a scalable mechanism to cultivate data literacy in early engineering education.
Full article

Figure 1
Open AccessReview
A Comprehensive Review of Energy Efficiency in 5G Networks: Past Strategies, Present Advances, and Future Research Directions
by
Narjes Lassoued and Noureddine Boujnah
Computers 2026, 15(1), 50; https://doi.org/10.3390/computers15010050 - 12 Jan 2026
Abstract
The rapid evolution of wireless communication toward Fifth Generation (5G) networks has enabled unprecedented performance improvement in terms of data rate, latency, reliability, sustainability, and connectivity. Recent years have witnessed an excessive deployment of new 5G networks worldwide. This deployment lead to an
[...] Read more.
The rapid evolution of wireless communication toward Fifth Generation (5G) networks has enabled unprecedented performance improvement in terms of data rate, latency, reliability, sustainability, and connectivity. Recent years have witnessed an excessive deployment of new 5G networks worldwide. This deployment lead to an exponential growth in traffic flow and a massive number of connected devices requiring a new generation of energy-hungry base stations (BSs). This results in increased power consumption, higher operational costs, and greater environmental impact, making energy efficiency (EE) a critical research challenge. This paper presents a comprehensive survey of EE optimization strategies in 5G networks. It reviews the transition from traditional methods such as resources allocation, energy harvesting, BS sleep modes, and power control to modern artificial intelligence (AI)-driven solutions employing machine learning, deep reinforcement learning, and self-organizing networks (SON). Comparative analyses highlight the trade-offs between energy savings, network performance, and implementation complexity. Finally, the paper outlines key open issues and future directions toward sustainable 5G and beyond-5G (B5G/Sixth Generation (6G)) systems, emphasizing explainable AI, zero-energy communications, and holistic green network design.
Full article
(This article belongs to the Special Issue Shaping the Future of Green Networking: Integrated Approaches of Joint Intelligence, Communication, Sensing, and Resilience for 6G)
►▼
Show Figures

Figure 1
Open AccessSystematic Review
Artificial Intelligence in K-12 Education: A Systematic Review of Teachers’ Professional Development Needs for AI Integration
by
Spyridon Aravantinos, Konstantinos Lavidas, Vassilis Komis, Thanassis Karalis and Stamatios Papadakis
Computers 2026, 15(1), 49; https://doi.org/10.3390/computers15010049 - 12 Jan 2026
Abstract
Artificial intelligence (AI) is reshaping how learning environments are designed and experienced, offering new possibilities for personalization, creativity, and immersive engagement. This systematic review synthesizes 43 empirical studies (Scopus, Web of Science) to examine the training needs and practices of primary and secondary
[...] Read more.
Artificial intelligence (AI) is reshaping how learning environments are designed and experienced, offering new possibilities for personalization, creativity, and immersive engagement. This systematic review synthesizes 43 empirical studies (Scopus, Web of Science) to examine the training needs and practices of primary and secondary education teachers for effective AI integration and overall professional development (PD). Following PRISMA guidelines, the review gathers teachers’ needs and practices related to AI integration, identifying key themes including training practices, teachers’ perceptions and attitudes, ongoing PD programs, multi-level support, AI literacy, and ethical and responsible use. The findings show that technical training alone is not sufficient, and that successful integration of AI requires a combination of pedagogical knowledge, positive attitudes, organizational support, and continuous training. Based on empirical data, a four-level, process-oriented PD framework is proposed, which bridges research with educational practice and offers practical guidance for the design of AI training interventions. Limitations and future research are discussed.
Full article
(This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning)
►▼
Show Figures

Figure 1
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in
AI, Computers, Electronics, Information, MAKE, Signals
Recent Advances in Label Distribution Learning
Topic Editors: Xin Geng, Ning Xu, Liangxiao JiangDeadline: 31 January 2026
Topic in
Applied Sciences, Computers, JSAN, Technologies, BDCC, Sensors, Telecom, Electronics
Electronic Communications, IOT and Big Data, 2nd Volume
Topic Editors: Teen-Hang Meen, Charles Tijus, Cheng-Chien Kuo, Kuei-Shu Hsu, Jih-Fu TuDeadline: 31 March 2026
Topic in
AI, Buildings, Computers, Drones, Entropy, Symmetry
Applications of Machine Learning in Large-Scale Optimization and High-Dimensional Learning
Topic Editors: Jeng-Shyang Pan, Junzo Watada, Vaclav Snasel, Pei HuDeadline: 30 April 2026
Topic in
Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe HouDeadline: 31 May 2026
Conferences
Special Issues
Special Issue in
Computers
Future Trends in Computer Programming Education
Guest Editor: Stelios XinogalosDeadline: 31 January 2026
Special Issue in
Computers
AI in Complex Engineering Systems
Guest Editor: Sandi Baressi ŠegotaDeadline: 31 January 2026
Special Issue in
Computers
Computational Science and Its Applications 2025 (ICCSA 2025)
Guest Editor: Osvaldo GervasiDeadline: 31 January 2026
Special Issue in
Computers
Advances in Semantic Multimedia and Personalized Digital Content
Guest Editors: Phivos Mylonas, Christos Troussas, Akrivi Krouska, Manolis Wallace, Cleo SgouropoulouDeadline: 25 February 2026



