Journal Description
Computers
Computers
is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access— free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.5 days after submission; acceptance to publication is undertaken in 3.9 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds and Computers.
Impact Factor:
4.2 (2024);
5-Year Impact Factor:
3.5 (2024)
Latest Articles
Using Steganography and Artificial Neural Network for Data Forensic Validation and Counter Image Deepfakes
Computers 2026, 15(1), 61; https://doi.org/10.3390/computers15010061 - 15 Jan 2026
Abstract
The merging of the Internet of Things (IoT) and Artificial Intelligence (AI) advances has intensified challenges related to data authenticity and security. These advancements necessitate a multi-layered security approach to ensure the security, reliability, and integrity of critical infrastructure and intelligent surveillance systems.
[...] Read more.
The merging of the Internet of Things (IoT) and Artificial Intelligence (AI) advances has intensified challenges related to data authenticity and security. These advancements necessitate a multi-layered security approach to ensure the security, reliability, and integrity of critical infrastructure and intelligent surveillance systems. This paper proposes a two-layered security approach that combines a discrete cosine transform least significant bit 2 (DCT-LSB-2) with artificial neural networks (ANNs) for data forensic validation and mitigating deepfakes. The proposed model encodes validation codes within the LSBs of cover images captured by an IoT camera on the sender side, leveraging the DCT approach to enhance the resilience against steganalysis. On the receiver side, a reverse DCT-LSB-2 process decodes the embedded validation code, which is subjected to authenticity verification by a pre-trained ANN model. The ANN validates the integrity of the decoded code and ensures that only device-originated, untampered images are accepted. The proposed framework achieved an average SSIM of 0.9927 across the entire investigated embedding capacity, ranging from 0 to 1.988 bpp. DCT-LSB-2 showed a stable Peak Signal-to-Noise Ratio (average 42.44 dB) under various evaluated payloads ranging from 0 to 100 kB. The proposed model achieved a resilient and robust multi-layered data forensic validation system.
Full article
(This article belongs to the Special Issue Multimedia Data and Network Security)
►
Show Figures
Open AccessArticle
The Integration of ISO 27005 and NIST SP 800-30 for Security Operation Center (SOC) Framework Effectiveness in the Non-Bank Financial Industry
by
Muharman Lubis, Muhammad Irfan Luthfi, Rd. Rohmat Saedudin, Alif Noorachmad Muttaqin and Arif Ridho Lubis
Computers 2026, 15(1), 60; https://doi.org/10.3390/computers15010060 - 15 Jan 2026
Abstract
►▼
Show Figures
A Security Operation Center (SOC) is a security control center for monitoring, detecting, analyzing, and responding to cybersecurity threats. PT (Perseroan Terbatas) Non-Bank Financial Company (NBFC) has implemented an SOC to secure its information systems, but challenges remain to be solved.
[...] Read more.
A Security Operation Center (SOC) is a security control center for monitoring, detecting, analyzing, and responding to cybersecurity threats. PT (Perseroan Terbatas) Non-Bank Financial Company (NBFC) has implemented an SOC to secure its information systems, but challenges remain to be solved. These include the absence of impact analysis on financial and regulatory requirements, cost, and effort estimation for recovery; established Key Performance Indicators (KPIs) and Key Risk Indicators (KRIs) for monitoring security controls; and an official program for insider threats. This study evaluates SOC effectiveness at PT NBFC using the ISO 27005:2018 and NIST SP 800-30 frameworks. The research results in a proposed SOC assessment framework, integrating risk assessment, risk treatment, risk acceptance, and monitoring. Additionally, a maturity level assessment was conducted for ISO 27005:2018, NIST SP 800-30, and the proposed framework. The proposed framework achieves good maturity, with two domains meeting the target maturity value and one domain reaching level 4 (Managed and Measurable). By incorporating domains from both ISO 27005:2018 and NIST SP 800-30, the new framework offers a more comprehensive risk management approach, covering strategic, managerial, and technical aspects.
Full article

Figure 1
Open AccessArticle
AI-Based Emoji Recommendation for Early Childhood Education Using Deep Learning Techniques
by
Shaya A. Alshaya
Computers 2026, 15(1), 59; https://doi.org/10.3390/computers15010059 - 15 Jan 2026
Abstract
►▼
Show Figures
The integration of emojis into Early Childhood Education (ECE) presents a promising avenue for enhancing student engagement, emotional expression, and comprehension. While prior studies suggest the benefit of visual aids in learning, systematic frameworks for pedagogically aligned emoji recommendation remain underdeveloped. This paper
[...] Read more.
The integration of emojis into Early Childhood Education (ECE) presents a promising avenue for enhancing student engagement, emotional expression, and comprehension. While prior studies suggest the benefit of visual aids in learning, systematic frameworks for pedagogically aligned emoji recommendation remain underdeveloped. This paper presents EduEmoji-ECE, a pedagogically annotated dataset of early-childhood learning text segments. Specifically, the proposed model incorporates Bidirectional Encoder Representations from Transformers (BERTs) for contextual embedding extraction, Gated Recurrent Units (GRUs) for sequential pattern recognition, Deep Neural Networks (DNNs) for classification and emoji recommendation, and DECOC for improving emoji class prediction robustness. This hybrid BERT-GRU-DNN-DECOC architecture effectively captures textual semantics, emotional tone, and pedagogical intent, ensuring the alignment of emoji class recommendation with learning objectives. The experimental results show that the system is effective, with an accuracy of 95.3%, a precision of 93%, a recall of 91.8%, and an F1-score of 92.3%, outperforming baseline models in terms of contextual understanding and overall accuracy. This work helps fill a gap in AI-based education by combining learning with visual support for young children. The results suggest an association between emoji-enhanced materials and improved engagement/comprehension indicators in our exploratory classroom setting; however, causal attribution to the AI placement mechanism is not supported by the current study design.
Full article

Figure 1
Open AccessArticle
An Open-Source System for Public Transport Route Data Curation Using OpenTripPlanner in Australia
by
Kiki Adhinugraha, Yusuke Gotoh and David Taniar
Computers 2026, 15(1), 58; https://doi.org/10.3390/computers15010058 - 14 Jan 2026
Abstract
Access to large-scale public transport journey data is essential for analysing accessibility, equity, and urban mobility. Although digital platforms such as Google Maps provide detailed routing for individual users, their licensing and access restrictions prevent systematic data extraction for research purposes. Open-source routing
[...] Read more.
Access to large-scale public transport journey data is essential for analysing accessibility, equity, and urban mobility. Although digital platforms such as Google Maps provide detailed routing for individual users, their licensing and access restrictions prevent systematic data extraction for research purposes. Open-source routing engines such as OpenTripPlanner offer a transparent alternative, but are often limited to local or technical deployments that restrict broader use. This study evaluates the feasibility of deploying a publicly accessible, open-source routing platform based on OpenTripPlanner to support large-scale public transport route simulation across multiple cities. Using Australian metropolitan areas as a case study, the platform integrates GTFS and OpenStreetMap data to enable repeatable journey queries through a web interface, an API, and bulk processing tools. Across eight metropolitan regions, the system achieved itinerary coverage above 90 percent and sustained approximately 3000 routing requests per minute under concurrent access. These results demonstrate that open-source routing infrastructure can support reliable, large-scale route simulation using open data. Beyond performance, the platform enables public transport accessibility studies that are not feasible with proprietary routing services, supporting reproducible research, transparent decision-making, and evidence-based transport planning across diverse urban contexts.
Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2025 (ICCSA 2025))
►▼
Show Figures

Figure 1
Open AccessArticle
IDN-MOTSCC: Integration of Deep Neural Network with Hybrid Meta-Heuristic Model for Multi-Objective Task Scheduling in Cloud Computing
by
Mohit Kumar, Rama Kant, Brijesh Kumar Gupta, Azhar Shadab, Ashwani Kumar and Krishna Kant
Computers 2026, 15(1), 57; https://doi.org/10.3390/computers15010057 - 14 Jan 2026
Abstract
Cloud computing covers a wide range of practical applications and diverse domains, yet resource scheduling and task scheduling remain significant challenges. To address this, different task scheduling algorithms are implemented across various computing systems to allocate tasks to machines, thereby enhancing performance through
[...] Read more.
Cloud computing covers a wide range of practical applications and diverse domains, yet resource scheduling and task scheduling remain significant challenges. To address this, different task scheduling algorithms are implemented across various computing systems to allocate tasks to machines, thereby enhancing performance through data mapping. To meet these challenges, a novel task scheduling model is proposed using a hybrid meta-heuristic integration with a deep learning approach. We employed this novel task scheduling model to integrate deep learning with an optimized DNN, fine-tuned using improved grey wolf–horse herd optimization, with the aim of optimizing cloud-based task allocation and overcoming makespan constraints. Initially, a user initiates a task or request within the cloud environment. Then, these tasks are assigned to Virtual Machines (VMs). Since the scheduling algorithm is constrained by the makespan objective, an optimized Deep Neural Network (DNN) model is developed to perform optimal task scheduling. Random solutions are provided to the optimized DNN, where the hidden neuron count is tuned optimally by the proposed Improved Grey Wolf–Horse Herd Optimization (IGW-HHO) algorithm. The proposed IGW-HHO algorithm is derived from both conventional Grey Wolf Optimization (GWO) and Horse Herd Optimization (HHO). The optimal solutions are acquired from the optimized DNN and processed by the proposed algorithm to efficiently allocate tasks to VMs. The experimental results are validated using various error measures and convergence analysis. The proposed DNN-IGW-HHO model achieved a lower cost function compared to other optimization methods, with a reduction of 1% compared to PSO, 3.5% compared to WOA, 2.7% compared to GWO, and 0.7% compared to HHO. The proposed task scheduling model achieved the minimal Mean Absolute Error (MAE), with performance improvements of 31% over PSO, 20.16% over WOA, 41.72% over GWO, and 9.11% over HHO.
Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
►▼
Show Figures

Figure 1
Open AccessArticle
Implementing Learning Analytics in Education: Enhancing Actionability and Adoption
by
Dimitrios E. Tzimas and Stavros N. Demetriadis
Computers 2026, 15(1), 56; https://doi.org/10.3390/computers15010056 - 14 Jan 2026
Abstract
The broader aim of this research is to examine how Learning Analytics (LA) can become ethically sound, pedagogically actionable, and realistically adopted in educational practice. To address this overarching challenge, the study investigates three interrelated research questions: ethics by design, learning impact, and
[...] Read more.
The broader aim of this research is to examine how Learning Analytics (LA) can become ethically sound, pedagogically actionable, and realistically adopted in educational practice. To address this overarching challenge, the study investigates three interrelated research questions: ethics by design, learning impact, and adoption conditions. Methodologically, the research follows an exploratory sequential multi-method design. First, a meta-synthesis of 53 studies is conducted to identify key ethical challenges in LA and to derive an ethics-by-design framework. Second, a quasi-experimental study examines the impact of interface-based LA guidance (strong versus minimal) on students’ self-regulated learning skills and academic performance. Third, a mixed-methods adoption study, combining surveys, focus groups, and ethnographic observations, investigates the factors that encourage or hinder teachers’ adoption of LA in K–12 education. The findings indicate that strong LA-based guidance leads to statistically significant improvements in students’ self-regulated learning skills and academic performance compared to minimal guidance. Furthermore, the adoption analysis reveals that performance expectancy, social influence, human-centred design, and positive emotions facilitate LA adoption, whereas effort expectancy, limited facilitating conditions, ethical concerns, and cultural resistance inhibit it. Overall, the study demonstrates that ethics by design, effective pedagogical guidance, and adoption conditions are mutually reinforcing dimensions. It argues that LA can support intelligent, responsive, and human-centred learning environments when ethical safeguards, instructional design, and stakeholder involvement are systematically aligned.
Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
►▼
Show Figures

Figure 1
Open AccessArticle
SteadyEval: Robust LLM Exam Graders via Adversarial Training and Distillation
by
Catalin Anghel, Marian Viorel Craciun, Adina Cocu, Andreea Alexandra Anghel and Adrian Istrate
Computers 2026, 15(1), 55; https://doi.org/10.3390/computers15010055 - 14 Jan 2026
Abstract
Large language models (LLMs) are increasingly used as rubric-guided graders for short-answer exams, but their decisions can be unstable across prompts and vulnerable to answer-side prompt injection. In this paper, we study SteadyEval, a guardrailed exam-grading pipeline in which an adversarially trained LoRA
[...] Read more.
Large language models (LLMs) are increasingly used as rubric-guided graders for short-answer exams, but their decisions can be unstable across prompts and vulnerable to answer-side prompt injection. In this paper, we study SteadyEval, a guardrailed exam-grading pipeline in which an adversarially trained LoRA filter (SteadyEval-7B-deep) preprocesses student answers to remove answer-side prompt injection, after which the original Mistral-7B-Instruct rubric-guided grader assigns the final score. We build two exam-grading pipelines on top of Mistral-7B-Instruct: a baseline pipeline that scores student answers directly, and a guardrailed pipeline in which a LoRA-based filter (SteadyEval-7B-deep) first removes injection content from the answer and a downstream grader then assigns the final score. Using two rubric-guided short-answer datasets in machine learning and computer networking, we generate grouped families of clean answers and four classes of answer-side attacks, and we evaluate the impact of these attacks on score shifts, attack success rates, stability across prompt variants, and alignment with human graders. On the pooled dataset, answer-side attacks inflate grades in the unguarded baseline by an average of about +1.2 points on a 1–10 scale, and substantially increase score dispersion across prompt variants. The guardrailed pipeline largely removes this systematic grade inflation and reduces instability for many items, especially in the machine-learning exam, while keeping mean absolute error with respect to human reference scores in a similar range to the unguarded baseline on clean answers, with a conservative shift in networking that motivates per-course calibration. Chief-panel comparisons further show that the guardrailed pipeline tracks human grading more closely on machine-learning items, but tends to under-score networking answers. These findings are best interpreted as a proof-of-concept guardrail and require per-course validation and calibration before operational use.
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling (2nd Edition))
►▼
Show Figures

Figure 1
Open AccessArticle
Gabor Transform-Based Deep Learning System Using CNN for Melanoma Detection
by
S. Deivasigamani, C. Senthilpari, Siva Sundhara Raja. D, A. Thankaraj, G. Narmadha and K. Gowrishankar
Computers 2026, 15(1), 54; https://doi.org/10.3390/computers15010054 - 13 Jan 2026
Abstract
Melanoma is highly dangerous and can spread rapidly to other parts of the body. It has an increasing fatality rate among different types of cancer. Timely detection of skin malignancies can reduce overall mortality. Therefore, clinical screening methods require more time and accuracy
[...] Read more.
Melanoma is highly dangerous and can spread rapidly to other parts of the body. It has an increasing fatality rate among different types of cancer. Timely detection of skin malignancies can reduce overall mortality. Therefore, clinical screening methods require more time and accuracy for diagnosis. An automated, computer-aided system would facilitate earlier melanoma detection, thereby increasing patient survival rates. This paper identifies melanoma images using a Convolutional Neural Network. Skin images are preprocessed using Histogram Equalization and Gabor transforms. A Gabor filter-based Convolutional Neural Network (CNN) classifier trains and classifies the extracted features. We adopt Gabor filters because they are bandpass filters that transform a pixel into a multi-resolution kernel matrix, providing detailed information about the image. This study suggests a method with accuracy, sensitivity, and specificity of 98.58%, 98.66%, and 98.75%, respectively. This research supports SDGs 3 and 4 by facilitating early melanoma detection and enhancing AI-driven medical education.
Full article
(This article belongs to the Topic AI, Deep Learning, and Machine Learning in Veterinary Science Imaging)
►▼
Show Figures

Figure 1
Open AccessArticle
Stereo-Based Single-Shot Hand-to-Eye Calibration for Robot Arms
by
Pushkar Kadam, Gu Fang, Farshid Amirabdollahian, Ju Jia Zou and Patrick Holthaus
Computers 2026, 15(1), 53; https://doi.org/10.3390/computers15010053 - 13 Jan 2026
Abstract
Robot hand-to-eye calibration is a necessary process for a robot arm to perceive and interact with its environment. Past approaches required collecting multiple images using a calibration board placed at different locations relative to the robot. When the robot or camera is displaced
[...] Read more.
Robot hand-to-eye calibration is a necessary process for a robot arm to perceive and interact with its environment. Past approaches required collecting multiple images using a calibration board placed at different locations relative to the robot. When the robot or camera is displaced from its calibrated position, hand–eye calibration must be redone using the same tedious process. In this research, we developed a novel method that uses a semi-automatic process to perform hand-to-eye calibration with a stereo camera, generating a transformation matrix from the world to the camera coordinate frame from a single image. We use a robot-pointer tool attached to the robot’s end-effector to manually establish a relationship between the world and the robot coordinate frame. Then, we establish the relationship between the camera and the robot using a transformation matrix that maps points observed in the stereo image frame from two-dimensional space to the robot’s three-dimensional coordinate frame. Our analysis of the stereo calibration showed a reprojection error of 0.26 pixels. An evaluation metric was developed to test the camera-to-robot transformation matrix, and the experimental results showed median root mean square errors of less than 1 mm in the x and y directions and less than 2 mm in the z directions in the robot coordinate frame. The results show that, with this work, we contribute a hand-to-eye calibration method that uses three non-collinear points in a single stereo image to map camera-to-robot coordinate-frame transformations.
Full article
(This article belongs to the Special Issue Advanced Human–Robot Interaction 2025)
►▼
Show Figures

Figure 1
Open AccessArticle
Learning Complementary Representations for Targeted Multimodal Sentiment Analysis
by
Binfen Ding, Jieyu An and Yumeng Lei
Computers 2026, 15(1), 52; https://doi.org/10.3390/computers15010052 - 13 Jan 2026
Abstract
Targeted multimodal sentiment classification is frequently impeded by the semantic sparsity of social media content, where text is brief and context is implicit. Traditional methods that rely on direct concatenation of textual and visual features often fail to resolve the ambiguity of specific
[...] Read more.
Targeted multimodal sentiment classification is frequently impeded by the semantic sparsity of social media content, where text is brief and context is implicit. Traditional methods that rely on direct concatenation of textual and visual features often fail to resolve the ambiguity of specific targets due to a lack of alignment between modalities. In this paper, we propose the Complementary Description Network (CDNet) to bridge this informational gap. CDNet incorporates automatically generated image descriptions as an additional semantic bridge, in contrast to methods that handle text and images as distinct streams. The framework enhances the input representation by directly translating visual content into text, allowing for more accurate interactions between the opinion target and the visual narrative. We further introduce a complementary reconstruction module that functions as a regularizer, forcing the model to retain deep semantic cues during fusion. Empirical results on the Twitter-2015 and Twitter-2017 benchmarks confirm that CDNet outperforms existing baselines. The findings suggest that visual-to-text augmentation is an effective strategy for compensating for the limited context inherent in short texts.
Full article
(This article belongs to the Section AI-Driven Innovations)
►▼
Show Figures

Figure 1
Open AccessArticle
Integrating ESP32-Based IoT Architectures and Cloud Visualization to Foster Data Literacy in Early Engineering Education
by
Jael Zambrano-Mieles, Miguel Tupac-Yupanqui, Salutar Mari-Loardo and Cristian Vidal-Silva
Computers 2026, 15(1), 51; https://doi.org/10.3390/computers15010051 - 13 Jan 2026
Abstract
►▼
Show Figures
This study presents the design and implementation of a full-stack IoT ecosystem based on ESP32 microcontrollers and web-based visualization dashboards to support scientific reasoning in first-year engineering students. The proposed architecture integrates a four-layer model—perception, network, service, and application—enabling students to deploy real-time
[...] Read more.
This study presents the design and implementation of a full-stack IoT ecosystem based on ESP32 microcontrollers and web-based visualization dashboards to support scientific reasoning in first-year engineering students. The proposed architecture integrates a four-layer model—perception, network, service, and application—enabling students to deploy real-time environmental monitoring systems for agriculture and beekeeping. Through a sixteen-week Project-Based Learning (PBL) intervention with 91 participants, we evaluated how this technological stack influences technical proficiency. Results indicate that the transition from local code execution to cloud-based telemetry increased perceived learning confidence from (Challenge phase) to (Reflection phase) on a 5-point scale. Furthermore, 96% of students identified the visualization dashboards as essential Human–Computer Interfaces (HCI) for debugging, effectively bridging the gap between raw sensor data and evidence-based argumentation. These findings demonstrate that integrating open-source IoT architectures provides a scalable mechanism to cultivate data literacy in early engineering education.
Full article

Figure 1
Open AccessReview
A Comprehensive Review of Energy Efficiency in 5G Networks: Past Strategies, Present Advances, and Future Research Directions
by
Narjes Lassoued and Noureddine Boujnah
Computers 2026, 15(1), 50; https://doi.org/10.3390/computers15010050 - 12 Jan 2026
Abstract
The rapid evolution of wireless communication toward Fifth Generation (5G) networks has enabled unprecedented performance improvement in terms of data rate, latency, reliability, sustainability, and connectivity. Recent years have witnessed an excessive deployment of new 5G networks worldwide. This deployment lead to an
[...] Read more.
The rapid evolution of wireless communication toward Fifth Generation (5G) networks has enabled unprecedented performance improvement in terms of data rate, latency, reliability, sustainability, and connectivity. Recent years have witnessed an excessive deployment of new 5G networks worldwide. This deployment lead to an exponential growth in traffic flow and a massive number of connected devices requiring a new generation of energy-hungry base stations (BSs). This results in increased power consumption, higher operational costs, and greater environmental impact, making energy efficiency (EE) a critical research challenge. This paper presents a comprehensive survey of EE optimization strategies in 5G networks. It reviews the transition from traditional methods such as resources allocation, energy harvesting, BS sleep modes, and power control to modern artificial intelligence (AI)-driven solutions employing machine learning, deep reinforcement learning, and self-organizing networks (SON). Comparative analyses highlight the trade-offs between energy savings, network performance, and implementation complexity. Finally, the paper outlines key open issues and future directions toward sustainable 5G and beyond-5G (B5G/Sixth Generation (6G)) systems, emphasizing explainable AI, zero-energy communications, and holistic green network design.
Full article
(This article belongs to the Special Issue Shaping the Future of Green Networking: Integrated Approaches of Joint Intelligence, Communication, Sensing, and Resilience for 6G)
►▼
Show Figures

Figure 1
Open AccessSystematic Review
Artificial Intelligence in K-12 Education: A Systematic Review of Teachers’ Professional Development Needs for AI Integration
by
Spyridon Aravantinos, Konstantinos Lavidas, Vassilis Komis, Thanassis Karalis and Stamatios Papadakis
Computers 2026, 15(1), 49; https://doi.org/10.3390/computers15010049 - 12 Jan 2026
Abstract
Artificial intelligence (AI) is reshaping how learning environments are designed and experienced, offering new possibilities for personalization, creativity, and immersive engagement. This systematic review synthesizes 43 empirical studies (Scopus, Web of Science) to examine the training needs and practices of primary and secondary
[...] Read more.
Artificial intelligence (AI) is reshaping how learning environments are designed and experienced, offering new possibilities for personalization, creativity, and immersive engagement. This systematic review synthesizes 43 empirical studies (Scopus, Web of Science) to examine the training needs and practices of primary and secondary education teachers for effective AI integration and overall professional development (PD). Following PRISMA guidelines, the review gathers teachers’ needs and practices related to AI integration, identifying key themes including training practices, teachers’ perceptions and attitudes, ongoing PD programs, multi-level support, AI literacy, and ethical and responsible use. The findings show that technical training alone is not sufficient, and that successful integration of AI requires a combination of pedagogical knowledge, positive attitudes, organizational support, and continuous training. Based on empirical data, a four-level, process-oriented PD framework is proposed, which bridges research with educational practice and offers practical guidance for the design of AI training interventions. Limitations and future research are discussed.
Full article
(This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning)
►▼
Show Figures

Figure 1
Open AccessArticle
A Machine Learning Approach to Wrist Angle Estimation Under Multiple Load Conditions Using Surface EMG
by
Songpon Pumjam, Sarut Panjan, Tarinee Tonggoed and Anan Suebsomran
Computers 2026, 15(1), 48; https://doi.org/10.3390/computers15010048 - 12 Jan 2026
Abstract
Surface electromyography (sEMG) is widely used for decoding motion intent in prosthetic control and rehabilitation, yet the impact of external load on sEMG-to-kinematics mapping remains insufficiently characterized, particularly for wrist flexion-extension This pilot study investigates wrist angle estimation (0–90°) under four discrete counter-torque
[...] Read more.
Surface electromyography (sEMG) is widely used for decoding motion intent in prosthetic control and rehabilitation, yet the impact of external load on sEMG-to-kinematics mapping remains insufficiently characterized, particularly for wrist flexion-extension This pilot study investigates wrist angle estimation (0–90°) under four discrete counter-torque levels (0, 25, 50, and 75 N·cm) using a multilayer perceptron neural network (MLPNN) regressor with mean absolute value (MAV) features. Multi-channel sEMG was acquired from three healthy participants while performing isotonic wrist extension (clockwise) and flexion (counterclockwise) in a constrained single-degree-of-freedom setup with potentiometer-based ground truth. Signals were filtered and normalized, and MAV features were extracted using a 200 ms sliding window with a 20 ms step. Across all load levels, the within-subject models achieved very high accuracy (R2 = 0.9946–0.9982) with test MSE of 1.23–3.75 deg2; extension yielded lower error than flexion, and the largest error was observed in flexion at 25 N·cm. Because the cohort is small (n = 3), the movement is highly constrained, and subject-independent validation and embedded implementation were not evaluated, these results should be interpreted as a best-case baseline rather than evidence of deployable rehabilitation performance. Future work should test multi-DoF wrist motion, freer movement conditions, richer feature sets, and subject-independent validation.
Full article
(This article belongs to the Special Issue Wearable Computing and Activity Recognition)
►▼
Show Figures

Graphical abstract
Open AccessArticle
Hybrid Web Architecture with AI and Mobile Notifications to Optimize Incident Management in the Public Sector
by
Luis Alberto Pfuño Alccahuamani, Anthony Meza Bautista and Hesmeralda Rojas
Computers 2026, 15(1), 47; https://doi.org/10.3390/computers15010047 - 12 Jan 2026
Abstract
►▼
Show Figures
This study addresses the persistent inefficiencies in incident management within regional public institutions, where dispersed offices and limited digital infrastructure constrain timely technical support. The research aims to evaluate whether a hybrid web architecture integrating AI-assisted interaction and mobile notifications can significantly improve
[...] Read more.
This study addresses the persistent inefficiencies in incident management within regional public institutions, where dispersed offices and limited digital infrastructure constrain timely technical support. The research aims to evaluate whether a hybrid web architecture integrating AI-assisted interaction and mobile notifications can significantly improve efficiency in this context. The ITIMS (Intelligent Technical Incident Management System) was designed using a Laravel 10 MVC backend, a responsive Bootstrap 5 interface, and a relational MariaDB/MySQL model optimized with migrations and composite indexes, and incorporated two low-cost integrations: a stateless AI chatbot through the OpenRouter API and asynchronous mobile notifications using the Telegram Bot API managed via Laravel Queues and webhooks. Developed through four Scrum sprints and deployed on an institutional XAMPP environment, the solution was evaluated from January to April 2025 with 100 participants using operational metrics and the QWU usability instrument. Results show a reduction in incident resolution time from 120 to 31 min (74.17%), an 85.48% chatbot interaction success rate, a 94.12% notification open rate, and a 99.34% incident resolution rate, alongside an 88% usability score. These findings indicate that a modular, low-cost, and scalable architecture can effectively strengthen digital transformation efforts in the public sector, especially in regions with resource and connectivity constraints.
Full article

Graphical abstract
Open AccessArticle
Model of Acceptance of Artificial Intelligence Devices in Higher Education
by
Luis Salazar and Luis Rivera
Computers 2026, 15(1), 46; https://doi.org/10.3390/computers15010046 - 12 Jan 2026
Abstract
Artificial intelligence (AI) has become a highly relevant tool in higher education. However, its acceptance by university students depends not only on technical or functional characteristics, but also on cognitive, contextual, and emotional factors. This study proposes and validates a model of acceptance
[...] Read more.
Artificial intelligence (AI) has become a highly relevant tool in higher education. However, its acceptance by university students depends not only on technical or functional characteristics, but also on cognitive, contextual, and emotional factors. This study proposes and validates a model of acceptance of the use of AI devices (MIDA) in the university context. The model considers contextual variables such as anthropomorphism (AN), perceived value (PV) and perceived risk (PR). It also considers cognitive variables such as performance expectancy (PEX) and perceived effort expectancy (PEE). In addition, it considers emotional variables such as anxiety (ANX), stress (ST) and trust (TR). For its validation, data were collected from 517 university students and analysed using structural equations (CB-SEM). The results indicate that perceived value, anthropomorphism and perceived risk influence the willingness to accept the use of AI devices indirectly through performance expectancy and perceived effort. Likewise, performance expectancy significantly reduces anxiety and stress and increases trust, while effort expectancy increases both anxiety and stress. Trust is the main predictor of willingness to accept the use of AI devices, while stress has a significant negative effect on this willingness. These findings contribute to the literature on the acceptance of AI devices by highlighting the mediating role of emotions and offer practical implications for the design of AI devices aimed at improving their acceptance in educational contexts.
Full article
(This article belongs to the Section Human–Computer Interactions)
►▼
Show Figures

Figure 1
Open AccessArticle
PPE-EYE: A Deep Learning Approach to Personal Protective Equipment Compliance Detection
by
Atta Rahman, Mohammed Salih Ahmed, Khaled Naif AlBugami, Abdullah Yousef Alabbad, Abdullah Abdulaziz AlFantoukh, Yousef Hassan Alshaikhahmed, Ziyad Saleh Alzahrani, Mohammad Aftab Alam Khan, Mustafa Youldash and Saeed Matar Alshahrani
Computers 2026, 15(1), 45; https://doi.org/10.3390/computers15010045 - 11 Jan 2026
Abstract
Safety on construction sites is an essential yet challenging issue due to the inherently hazardous nature of these sites. Workers are expected to wear Personal Protective Equipment (PPE), such as helmets, vests, and safety glasses, to prevent or minimize their exposure to injuries.
[...] Read more.
Safety on construction sites is an essential yet challenging issue due to the inherently hazardous nature of these sites. Workers are expected to wear Personal Protective Equipment (PPE), such as helmets, vests, and safety glasses, to prevent or minimize their exposure to injuries. However, ensuring compliance remains difficult, particularly in large or complex sites, which require a time-consuming and usually error-prone manual inspection process. The research proposes an automated PPE detection system utilizing the deep learning model YOLO11, which is trained on the CHVG dataset, to identify in real-time whether workers are adequately equipped with the necessary gear. The proposed PPE-EYE method, using YOLO11x, achieved a mAP50 of 96.9% and an inference time of 7.3 ms, which is sufficient for real-time PPE detection systems, in contrast to previous approaches involving the same dataset, which required 170 ms. The model achieved these results by employing data augmentation and fine-tuning. The proposed solution provides continuous monitoring with reduced human oversight and ensures timely alerts if non-compliance is detected, allowing the site manager to act promptly. It further enhances the effectiveness and reliability of safety inspections, overall site safety, and reduces accidents, ensuring consistency in follow-through of safety procedures to create a safer and more productive working environment for all involved in construction activities.
Full article
(This article belongs to the Section AI-Driven Innovations)
►▼
Show Figures

Figure 1
Open AccessArticle
Privacy-Preserving Set Intersection Protocol Based on SM2 Oblivious Transfer
by
Zhibo Guan, Hai Huang, Haibo Yao, Qiong Jia, Kai Cheng, Mengmeng Ge, Bin Yu and Chao Ma
Computers 2026, 15(1), 44; https://doi.org/10.3390/computers15010044 - 10 Jan 2026
Abstract
Private Set Intersection (PSI) is a fundamental cryptographic primitive in privacy-preserving computation and has been widely applied in federated learning, secure data sharing, and privacy-aware data analytics. However, most existing PSI protocols rely on RSA or standard elliptic curve cryptography, which limits their
[...] Read more.
Private Set Intersection (PSI) is a fundamental cryptographic primitive in privacy-preserving computation and has been widely applied in federated learning, secure data sharing, and privacy-aware data analytics. However, most existing PSI protocols rely on RSA or standard elliptic curve cryptography, which limits their applicability in scenarios requiring domestic cryptographic standards and often leads to high computational and communication overhead when processing large-scale datasets. In this paper, we propose a novel PSI protocol based on the Chinese commercial cryptographic standard SM2, referred to as SM2-OT-PSI. The proposed scheme constructs an oblivious transfer-based Oblivious Pseudorandom Function (OPRF) using SM2 public-key cryptography and the SM3 hash function, enabling efficient multi-point OPRF evaluation under the semi-honest adversary model. A formal security analysis demonstrates that the protocol satisfies privacy and correctness guarantees assuming the hardness of the Elliptic Curve Discrete Logarithm Problem. To further improve practical performance, we design a software–hardware co-design architecture that offloads SM2 scalar multiplication and SM3 hashing operations to a domestic reconfigurable cryptographic accelerator (RSP S20G). Experimental results show that, for datasets with up to millions of elements, the presented protocol significantly outperforms several representative PSI schemes in terms of execution time and communication efficiency, especially in medium and high-bandwidth network environments. The proposed SM2-OT-PSI protocol provides a practical and efficient solution for large-scale privacy-preserving set intersection under national cryptographic standards, making it suitable for deployment in real-world secure computing systems.
Full article
(This article belongs to the Special Issue Mobile Fog and Edge Computing)
►▼
Show Figures

Figure 1
Open AccessArticle
Joint Inference of Image Enhancement and Object Detection via Cross-Domain Fusion Transformer
by
Bingxun Zhao and Yuan Chen
Computers 2026, 15(1), 43; https://doi.org/10.3390/computers15010043 - 10 Jan 2026
Abstract
Underwater vision is fundamental to ocean exploration, yet it is frequently impaired by underwater degradation including low contrast, color distortion and blur, thereby presenting significant challenges for underwater object detection (UOD). Most existing methods employ underwater image enhancement as a preprocessing step to
[...] Read more.
Underwater vision is fundamental to ocean exploration, yet it is frequently impaired by underwater degradation including low contrast, color distortion and blur, thereby presenting significant challenges for underwater object detection (UOD). Most existing methods employ underwater image enhancement as a preprocessing step to improve visual quality prior to detection. However, image enhancement and object detection are optimized for fundamentally different objectives, and directly cascading them leads to feature distribution mismatch. Moreover, prevailing dual-branch architectures process enhancement and detection independently, overlooking multi-scale interactions across domains and thus constraining the learning of cross-domain feature representation. To overcome these limitations, We propose an underwater cross-domain fusion Transformer detector (UCF-DETR). UCF-DETR jointly leverages image enhancement and object detection by exploiting the complementary information from the enhanced and original image domains. Specifically, an underwater image enhancement module is employed to improve visibility. We then design a cross-domain feature pyramid to integrate fine-grained structural details from the enhanced domain with semantic representations from the original domain. Cross-domain query interaction mechanism is introduced to model inter-domain query relationships, leading to accurate object localization and boundary delineation. Extensive experiments on the challenging DUO and UDD benchmarks demonstrate that UCF-DETR consistently outperforms state-of-the-art methods for UOD.
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
►▼
Show Figures

Figure 1
Open AccessArticle
Preparation for Inclusive and Technology-Enhanced Pedagogy: A Cluster Analysis of Secondary Special Education Teachers
by
Evaggelos Foykas, Eleftheria Beazidou, Natassa Raikou and Nikolaos C. Zygouris
Computers 2026, 15(1), 42; https://doi.org/10.3390/computers15010042 - 9 Jan 2026
Abstract
This study examines the profiles of secondary special education teachers regarding their readiness for inclusive teaching, with technology-enhanced practices operationalized through participation in STEAM-related professional development. A total of 323 teachers from vocational high schools and integration classes participated. Four indicators of professional
[...] Read more.
This study examines the profiles of secondary special education teachers regarding their readiness for inclusive teaching, with technology-enhanced practices operationalized through participation in STEAM-related professional development. A total of 323 teachers from vocational high schools and integration classes participated. Four indicators of professional preparation were assessed: years of teaching experience, formal STEAM training, exposure to students with special educational needs (SEN), and perceived success in inclusive teaching, operationalized as self-reported competence in adaptive instruction, classroom management, positive attitudes toward inclusion, and collaborative engagement. Cluster analysis revealed three distinct teacher profiles: less experienced teachers with moderate perceived success and limited exposure to students with SEN; well-prepared teachers with high levels across all indicators; and highly experienced teachers with lower STEAM training and perceived success. These findings underscore the need for targeted professional development that integrates inclusive and technology-enhanced pedagogy through STEAM and is tailored to teachers’ experience levels. By integrating inclusive readiness, STEAM-related preparation, and technology-enhanced pedagogy within a person-centered profiling approach, this study offers actionable teacher profiles to inform differentiated professional development in secondary special education.
Full article
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)
►▼
Show Figures

Figure 1
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in
AI, Computers, Electronics, Information, MAKE, Signals
Recent Advances in Label Distribution Learning
Topic Editors: Xin Geng, Ning Xu, Liangxiao JiangDeadline: 31 January 2026
Topic in
Applied Sciences, Computers, JSAN, Technologies, BDCC, Sensors, Telecom, Electronics
Electronic Communications, IOT and Big Data, 2nd Volume
Topic Editors: Teen-Hang Meen, Charles Tijus, Cheng-Chien Kuo, Kuei-Shu Hsu, Jih-Fu TuDeadline: 31 March 2026
Topic in
AI, Buildings, Computers, Drones, Entropy, Symmetry
Applications of Machine Learning in Large-Scale Optimization and High-Dimensional Learning
Topic Editors: Jeng-Shyang Pan, Junzo Watada, Vaclav Snasel, Pei HuDeadline: 30 April 2026
Topic in
Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe HouDeadline: 31 May 2026
Conferences
Special Issues
Special Issue in
Computers
Future Trends in Computer Programming Education
Guest Editor: Stelios XinogalosDeadline: 31 January 2026
Special Issue in
Computers
AI in Complex Engineering Systems
Guest Editor: Sandi Baressi ŠegotaDeadline: 31 January 2026
Special Issue in
Computers
Computational Science and Its Applications 2025 (ICCSA 2025)
Guest Editor: Osvaldo GervasiDeadline: 31 January 2026
Special Issue in
Computers
Advances in Semantic Multimedia and Personalized Digital Content
Guest Editors: Phivos Mylonas, Christos Troussas, Akrivi Krouska, Manolis Wallace, Cleo SgouropoulouDeadline: 25 February 2026




