Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.5 days after submission; acceptance to publication takes 3.9 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds and Computers.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Learning Complementary Representations for Targeted Multimodal Sentiment Analysis
Computers 2026, 15(1), 52; https://doi.org/10.3390/computers15010052 - 13 Jan 2026
Abstract
Targeted multimodal sentiment classification is frequently impeded by the semantic sparsity of social media content, where text is brief and context is implicit. Traditional methods that rely on direct concatenation of textual and visual features often fail to resolve the ambiguity of specific targets due to a lack of alignment between modalities. In this paper, we propose the Complementary Description Network (CDNet) to bridge this informational gap. CDNet incorporates automatically generated image descriptions as an additional semantic bridge, in contrast to methods that handle text and images as distinct streams. The framework enhances the input representation by directly translating visual content into text, allowing for more accurate interactions between the opinion target and the visual narrative. We further introduce a complementary reconstruction module that functions as a regularizer, forcing the model to retain deep semantic cues during fusion. Empirical results on the Twitter-2015 and Twitter-2017 benchmarks confirm that CDNet outperforms existing baselines. The findings suggest that visual-to-text augmentation is an effective strategy for compensating for the limited context inherent in short texts.
(This article belongs to the Section AI-Driven Innovations)
Open Access Article
Integrating ESP32-Based IoT Architectures and Cloud Visualization to Foster Data Literacy in Early Engineering Education
by Jael Zambrano-Mieles, Miguel Tupac-Yupanqui, Salutar Mari-Loardo and Cristian Vidal-Silva
Computers 2026, 15(1), 51; https://doi.org/10.3390/computers15010051 - 13 Jan 2026
Abstract
This study presents the design and implementation of a full-stack IoT ecosystem based on ESP32 microcontrollers and web-based visualization dashboards to support scientific reasoning in first-year engineering students. The proposed architecture integrates a four-layer model—perception, network, service, and application—enabling students to deploy real-time environmental monitoring systems for agriculture and beekeeping. Through a sixteen-week Project-Based Learning (PBL) intervention with 91 participants, we evaluated how this technological stack influences technical proficiency. Results indicate that the transition from local code execution to cloud-based telemetry increased perceived learning confidence from (Challenge phase) to (Reflection phase) on a 5-point scale. Furthermore, 96% of students identified the visualization dashboards as essential Human–Computer Interfaces (HCI) for debugging, effectively bridging the gap between raw sensor data and evidence-based argumentation. These findings demonstrate that integrating open-source IoT architectures provides a scalable mechanism to cultivate data literacy in early engineering education.
Open Access Review
A Comprehensive Review of Energy Efficiency in 5G Networks: Past Strategies, Present Advances, and Future Research Directions
by Narjes Lassoued and Noureddine Boujnah
Computers 2026, 15(1), 50; https://doi.org/10.3390/computers15010050 - 12 Jan 2026
Abstract
The rapid evolution of wireless communication toward Fifth Generation (5G) networks has enabled unprecedented performance improvements in terms of data rate, latency, reliability, sustainability, and connectivity. Recent years have witnessed extensive deployment of new 5G networks worldwide. This deployment has led to exponential growth in traffic flow and a massive number of connected devices requiring a new generation of energy-hungry base stations (BSs). The result is increased power consumption, higher operational costs, and greater environmental impact, making energy efficiency (EE) a critical research challenge. This paper presents a comprehensive survey of EE optimization strategies in 5G networks. It reviews the transition from traditional methods such as resource allocation, energy harvesting, BS sleep modes, and power control to modern artificial intelligence (AI)-driven solutions employing machine learning, deep reinforcement learning, and self-organizing networks (SON). Comparative analyses highlight the trade-offs between energy savings, network performance, and implementation complexity. Finally, the paper outlines key open issues and future directions toward sustainable 5G and beyond-5G (B5G/Sixth Generation (6G)) systems, emphasizing explainable AI, zero-energy communications, and holistic green network design.
(This article belongs to the Special Issue Shaping the Future of Green Networking: Integrated Approaches of Joint Intelligence, Communication, Sensing, and Resilience for 6G)
Open Access Systematic Review
Artificial Intelligence in K-12 Education: A Systematic Review of Teachers’ Professional Development Needs for AI Integration
by Spyridon Aravantinos, Konstantinos Lavidas, Vassilis Komis, Thanassis Karalis and Stamatios Papadakis
Computers 2026, 15(1), 49; https://doi.org/10.3390/computers15010049 - 12 Jan 2026
Abstract
Artificial intelligence (AI) is reshaping how learning environments are designed and experienced, offering new possibilities for personalization, creativity, and immersive engagement. This systematic review synthesizes 43 empirical studies (Scopus, Web of Science) to examine the training needs and practices of primary and secondary education teachers for effective AI integration and overall professional development (PD). Following PRISMA guidelines, the review gathers teachers’ needs and practices related to AI integration, identifying key themes including training practices, teachers’ perceptions and attitudes, ongoing PD programs, multi-level support, AI literacy, and ethical and responsible use. The findings show that technical training alone is not sufficient, and that successful integration of AI requires a combination of pedagogical knowledge, positive attitudes, organizational support, and continuous training. Based on empirical data, a four-level, process-oriented PD framework is proposed, which bridges research with educational practice and offers practical guidance for the design of AI training interventions. Limitations and future research are discussed.
(This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning)
Open Access Article
A Machine Learning Approach to Wrist Angle Estimation Under Multiple Load Conditions Using Surface EMG
by Songpon Pumjam, Sarut Panjan, Tarinee Tonggoed and Anan Suebsomran
Computers 2026, 15(1), 48; https://doi.org/10.3390/computers15010048 - 12 Jan 2026
Abstract
Surface electromyography (sEMG) is widely used for decoding motion intent in prosthetic control and rehabilitation, yet the impact of external load on sEMG-to-kinematics mapping remains insufficiently characterized, particularly for wrist flexion–extension. This pilot study investigates wrist angle estimation (0–90°) under four discrete counter-torque levels (0, 25, 50, and 75 N·cm) using a multilayer perceptron neural network (MLPNN) regressor with mean absolute value (MAV) features. Multi-channel sEMG was acquired from three healthy participants while performing isotonic wrist extension (clockwise) and flexion (counterclockwise) in a constrained single-degree-of-freedom setup with potentiometer-based ground truth. Signals were filtered and normalized, and MAV features were extracted using a 200 ms sliding window with a 20 ms step. Across all load levels, the within-subject models achieved very high accuracy (R2 = 0.9946–0.9982) with test MSE of 1.23–3.75 deg²; extension yielded lower error than flexion, and the largest error was observed in flexion at 25 N·cm. Because the cohort is small (n = 3), the movement is highly constrained, and subject-independent validation and embedded implementation were not evaluated, these results should be interpreted as a best-case baseline rather than evidence of deployable rehabilitation performance. Future work should test multi-DoF wrist motion, freer movement conditions, richer feature sets, and subject-independent validation.
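The abstract's feature-extraction step (mean absolute value over a 200 ms sliding window with a 20 ms step) can be sketched in a few lines. This is a generic illustration, not the authors' code: the sampling rate `fs` is an assumption the paper does not restate here, and the MLPNN regressor is not shown.

```python
import numpy as np

def mav_features(emg, fs=1000, win_ms=200, step_ms=20):
    """Mean-absolute-value features over a sliding window.

    emg: 1-D array holding one sEMG channel.
    fs is assumed (1 kHz); win_ms/step_ms follow the 200 ms / 20 ms
    values reported in the abstract.
    """
    win = int(fs * win_ms / 1000)    # samples per window
    step = int(fs * step_ms / 1000)  # samples per hop
    feats = [np.mean(np.abs(emg[i:i + win]))
             for i in range(0, len(emg) - win + 1, step)]
    return np.array(feats)
```

For a 1 s signal at the assumed 1 kHz rate, this yields 41 windows (hops of 20 samples from index 0 to 800), each summarized by one non-negative MAV value that would feed the regressor.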
(This article belongs to the Special Issue Wearable Computing and Activity Recognition)
Open Access Article
Hybrid Web Architecture with AI and Mobile Notifications to Optimize Incident Management in the Public Sector
by Luis Alberto Pfuño Alccahuamani, Anthony Meza Bautista and Hesmeralda Rojas
Computers 2026, 15(1), 47; https://doi.org/10.3390/computers15010047 - 12 Jan 2026
Abstract
This study addresses the persistent inefficiencies in incident management within regional public institutions, where dispersed offices and limited digital infrastructure constrain timely technical support. The research aims to evaluate whether a hybrid web architecture integrating AI-assisted interaction and mobile notifications can significantly improve efficiency in this context. The ITIMS (Intelligent Technical Incident Management System) was designed using a Laravel 10 MVC backend, a responsive Bootstrap 5 interface, and a relational MariaDB/MySQL model optimized with migrations and composite indexes, and incorporated two low-cost integrations: a stateless AI chatbot through the OpenRouter API and asynchronous mobile notifications using the Telegram Bot API managed via Laravel Queues and webhooks. Developed through four Scrum sprints and deployed on an institutional XAMPP environment, the solution was evaluated from January to April 2025 with 100 participants using operational metrics and the QWU usability instrument. Results show a reduction in incident resolution time from 120 to 31 min (74.17%), an 85.48% chatbot interaction success rate, a 94.12% notification open rate, and a 99.34% incident resolution rate, alongside an 88% usability score. These findings indicate that a modular, low-cost, and scalable architecture can effectively strengthen digital transformation efforts in the public sector, especially in regions with resource and connectivity constraints.
Open Access Article
Model of Acceptance of Artificial Intelligence Devices in Higher Education
by Luis Salazar and Luis Rivera
Computers 2026, 15(1), 46; https://doi.org/10.3390/computers15010046 - 12 Jan 2026
Abstract
Artificial intelligence (AI) has become a highly relevant tool in higher education. However, its acceptance by university students depends not only on technical or functional characteristics, but also on cognitive, contextual, and emotional factors. This study proposes and validates a model of acceptance of the use of AI devices (MIDA) in the university context. The model considers contextual variables such as anthropomorphism (AN), perceived value (PV) and perceived risk (PR). It also considers cognitive variables such as performance expectancy (PEX) and perceived effort expectancy (PEE). In addition, it considers emotional variables such as anxiety (ANX), stress (ST) and trust (TR). For its validation, data were collected from 517 university students and analysed using covariance-based structural equation modelling (CB-SEM). The results indicate that perceived value, anthropomorphism and perceived risk influence the willingness to accept the use of AI devices indirectly through performance expectancy and perceived effort. Likewise, performance expectancy significantly reduces anxiety and stress and increases trust, while effort expectancy increases both anxiety and stress. Trust is the main predictor of willingness to accept the use of AI devices, while stress has a significant negative effect on this willingness. These findings contribute to the literature on the acceptance of AI devices by highlighting the mediating role of emotions and offer practical implications for the design of AI devices aimed at improving their acceptance in educational contexts.
(This article belongs to the Section Human–Computer Interactions)
Open Access Article
PPE-EYE: A Deep Learning Approach to Personal Protective Equipment Compliance Detection
by Atta Rahman, Mohammed Salih Ahmed, Khaled Naif AlBugami, Abdullah Yousef Alabbad, Abdullah Abdulaziz AlFantoukh, Yousef Hassan Alshaikhahmed, Ziyad Saleh Alzahrani, Mohammad Aftab Alam Khan, Mustafa Youldash and Saeed Matar Alshahrani
Computers 2026, 15(1), 45; https://doi.org/10.3390/computers15010045 - 11 Jan 2026
Abstract
Safety on construction sites is an essential yet challenging issue due to the inherently hazardous nature of these sites. Workers are expected to wear Personal Protective Equipment (PPE), such as helmets, vests, and safety glasses, to prevent or minimize their exposure to injuries. However, ensuring compliance remains difficult, particularly in large or complex sites, which require a time-consuming and usually error-prone manual inspection process. The research proposes an automated PPE detection system utilizing the deep learning model YOLO11, which is trained on the CHVG dataset, to identify in real-time whether workers are adequately equipped with the necessary gear. The proposed PPE-EYE method, using YOLO11x, achieved a mAP50 of 96.9% and an inference time of 7.3 ms, which is sufficient for real-time PPE detection systems, in contrast to previous approaches involving the same dataset, which required 170 ms. The model achieved these results by employing data augmentation and fine-tuning. The proposed solution provides continuous monitoring with reduced human oversight and ensures timely alerts when non-compliance is detected, allowing the site manager to act promptly. It further enhances the effectiveness and reliability of safety inspections, improves overall site safety, and reduces accidents, ensuring consistent follow-through of safety procedures to create a safer and more productive working environment for all involved in construction activities.
(This article belongs to the Section AI-Driven Innovations)
Open Access Article
Privacy-Preserving Set Intersection Protocol Based on SM2 Oblivious Transfer
by Zhibo Guan, Hai Huang, Haibo Yao, Qiong Jia, Kai Cheng, Mengmeng Ge, Bin Yu and Chao Ma
Computers 2026, 15(1), 44; https://doi.org/10.3390/computers15010044 - 10 Jan 2026
Abstract
Private Set Intersection (PSI) is a fundamental cryptographic primitive in privacy-preserving computation and has been widely applied in federated learning, secure data sharing, and privacy-aware data analytics. However, most existing PSI protocols rely on RSA or standard elliptic curve cryptography, which limits their applicability in scenarios requiring domestic cryptographic standards and often leads to high computational and communication overhead when processing large-scale datasets. In this paper, we propose a novel PSI protocol based on the Chinese commercial cryptographic standard SM2, referred to as SM2-OT-PSI. The proposed scheme constructs an oblivious transfer-based Oblivious Pseudorandom Function (OPRF) using SM2 public-key cryptography and the SM3 hash function, enabling efficient multi-point OPRF evaluation under the semi-honest adversary model. A formal security analysis demonstrates that the protocol satisfies privacy and correctness guarantees assuming the hardness of the Elliptic Curve Discrete Logarithm Problem. To further improve practical performance, we design a software–hardware co-design architecture that offloads SM2 scalar multiplication and SM3 hashing operations to a domestic reconfigurable cryptographic accelerator (RSP S20G). Experimental results show that, for datasets with up to millions of elements, the presented protocol significantly outperforms several representative PSI schemes in terms of execution time and communication efficiency, especially in medium and high-bandwidth network environments. The proposed SM2-OT-PSI protocol provides a practical and efficient solution for large-scale privacy-preserving set intersection under national cryptographic standards, making it suitable for deployment in real-world secure computing systems.
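The data flow of a PRF-masked PSI can be illustrated with a toy sketch. This is emphatically not the paper's SM2-OT-PSI: HMAC-SHA256 stands in for the SM2/SM3-based OPRF, and a real oblivious transfer would keep the PRF key hidden from the client, which this single-process demo cannot model.

```python
import hmac
import hashlib

def psi_toy(server_set, client_set, key=b"demo-key"):
    """Toy private-set-intersection flow under a keyed PRF.

    Both sides' elements are masked with the same PRF (HMAC-SHA256
    here, as a stand-in for an SM2/SM3-based OPRF) and the client
    intersects masked values. Only the data flow is illustrative;
    the key distribution and OT steps of the real protocol are omitted.
    """
    prf = lambda x: hmac.new(key, x.encode(), hashlib.sha256).hexdigest()
    masked_server = {prf(x) for x in server_set}  # server sends these
    # client matches its own masked elements against the server's set
    return {x for x in client_set if prf(x) in masked_server}
```

For example, `psi_toy({"a", "b", "c"}, {"b", "c", "d"})` returns `{"b", "c"}`: the client learns only the common elements, never the server's non-matching items in the clear.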
(This article belongs to the Special Issue Mobile Fog and Edge Computing)
Open Access Article
Joint Inference of Image Enhancement and Object Detection via Cross-Domain Fusion Transformer
by Bingxun Zhao and Yuan Chen
Computers 2026, 15(1), 43; https://doi.org/10.3390/computers15010043 - 10 Jan 2026
Abstract
Underwater vision is fundamental to ocean exploration, yet it is frequently impaired by underwater degradation including low contrast, color distortion, and blur, thereby presenting significant challenges for underwater object detection (UOD). Most existing methods employ underwater image enhancement as a preprocessing step to improve visual quality prior to detection. However, image enhancement and object detection are optimized for fundamentally different objectives, and directly cascading them leads to feature distribution mismatch. Moreover, prevailing dual-branch architectures process enhancement and detection independently, overlooking multi-scale interactions across domains and thus constraining the learning of cross-domain feature representations. To overcome these limitations, we propose an underwater cross-domain fusion Transformer detector (UCF-DETR). UCF-DETR jointly leverages image enhancement and object detection by exploiting the complementary information from the enhanced and original image domains. Specifically, an underwater image enhancement module is employed to improve visibility. We then design a cross-domain feature pyramid to integrate fine-grained structural details from the enhanced domain with semantic representations from the original domain. A cross-domain query interaction mechanism is introduced to model inter-domain query relationships, leading to accurate object localization and boundary delineation. Extensive experiments on the challenging DUO and UDD benchmarks demonstrate that UCF-DETR consistently outperforms state-of-the-art methods for UOD.
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
Open Access Article
Preparation for Inclusive and Technology-Enhanced Pedagogy: A Cluster Analysis of Secondary Special Education Teachers
by Evaggelos Foykas, Eleftheria Beazidou, Natassa Raikou and Nikolaos C. Zygouris
Computers 2026, 15(1), 42; https://doi.org/10.3390/computers15010042 - 9 Jan 2026
Abstract
This study examines the profiles of secondary special education teachers regarding their readiness for inclusive teaching, with technology-enhanced practices operationalized through participation in STEAM-related professional development. A total of 323 teachers from vocational high schools and integration classes participated. Four indicators of professional preparation were assessed: years of teaching experience, formal STEAM training, exposure to students with special educational needs (SEN), and perceived success in inclusive teaching, operationalized as self-reported competence in adaptive instruction, classroom management, positive attitudes toward inclusion, and collaborative engagement. Cluster analysis revealed three distinct teacher profiles: less experienced teachers with moderate perceived success and limited exposure to students with SEN; well-prepared teachers with high levels across all indicators; and highly experienced teachers with lower STEAM training and perceived success. These findings underscore the need for targeted professional development that integrates inclusive and technology-enhanced pedagogy through STEAM and is tailored to teachers’ experience levels. By integrating inclusive readiness, STEAM-related preparation, and technology-enhanced pedagogy within a person-centered profiling approach, this study offers actionable teacher profiles to inform differentiated professional development in secondary special education.
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)
Open Access Systematic Review
Emerging Technologies in Financial Services: From Virtualization and Cloud Infrastructures to Edge Computing Applications
by Georgios Lambropoulos, Sarandis Mitropoulos and Christos Douligeris
Computers 2026, 15(1), 41; https://doi.org/10.3390/computers15010041 - 9 Jan 2026
Abstract
The financial services sector is experiencing unprecedented transformation through the adoption of virtualization technologies, encompassing cloud computing and edge computing digitalization initiatives that fundamentally alter operational paradigms and competitive dynamics within the industry. This systematic literature review analyzed peer-reviewed articles, systematic reviews, and industry reports published between 2016 and 2025 across three primary technological domains, using thematic content analysis to synthesize findings and identify key implementation patterns, performance outcomes, and emerging challenges. The analysis reveals consistent evidence of positive long-term performance outcomes from virtualization technology adoption, including average transaction processing time reductions of 69% through edge computing implementations and substantial operational cost savings and efficiency improvements through cloud computing adoption, while also identifying critical challenges related to regulatory compliance, security management, and organizational transformation. Virtualization technology offers transformative potential for financial services through improved operational efficiency, enhanced customer experience, and competitive advantage, though successful implementation requires sophisticated approaches to standardization, regulatory compliance, and change management. Future research is needed to develop integrative frameworks addressing technology convergence and emerging applications in decentralized finance and digital currency systems.
(This article belongs to the Section Cloud Continuum and Enabled Applications)
Open Access Review
Deep Reinforcement Learning in the Era of Foundation Models: A Survey
by Ibomoiye Domor Mienye, Ebenezer Esenogho and Cameron Modisane
Computers 2026, 15(1), 40; https://doi.org/10.3390/computers15010040 - 9 Jan 2026
Abstract
Deep reinforcement learning (DRL) and large foundation models (FMs) have reshaped modern artificial intelligence (AI) by enabling systems that learn from interaction while leveraging broad generalization and multimodal reasoning capabilities. This survey examines the growing convergence of these paradigms and reviews how reinforcement learning from human feedback (RLHF), reinforcement learning from AI feedback (RLAIF), world-model pretraining, and preference-based optimization refine foundation model capabilities. We organize existing work into a taxonomy of model-centric, RL-centric, and hybrid DRL–FM integration pathways, and synthesize applications across language and multimodal agents, autonomous control, scientific discovery, and societal and ethical alignment. We also identify technical, behavioral, and governance challenges that hinder scalable and reliable DRL–FM integration, and outline emerging research directions that suggest how reinforcement-driven adaptation may shape the next generation of intelligent systems. This review provides researchers and practitioners with a structured overview of the current state and future trajectory of DRL in the era of foundation models.
Open Access Article
Efficient Low-Precision GEMM on Ascend NPU: HGEMM’s Synergy of Pipeline Scheduling, Tiling, and Memory Optimization
by Erkun Zhang, Pengxiang Xu and Lu Lu
Computers 2026, 15(1), 39; https://doi.org/10.3390/computers15010039 - 8 Jan 2026
Abstract
As one of the most widely used high-performance kernels, General Matrix Multiplication, or GEMM, plays a pivotal role in diverse application fields. With the growing prevalence of training for Convolutional Neural Networks (CNNs) and Large Language Models (LLMs), the design and implementation of high-efficiency, low-precision GEMM on modern Neural Processing Unit (NPU) platforms are of great significance. In this work, HGEMM for Ascend NPU is presented, which enables collaborative processing of different computation types by Cube units and Vector units. The major contributions of this work are the following: (i) dual-stream pipeline scheduling is implemented, which synchronizes padding operations, matrix–matrix multiplications, and element-wise instructions across hierarchical buffers and compute units; (ii) a suite of tiling strategies and a corresponding strategy selection mechanism are developed, comprehensively accounting for the impacts from the M, N, and K directions; and (iii) SplitK and ShuffleK methods are proposed to address the challenges of memory access efficiency and AI Core utilization. Extensive evaluations demonstrate that our proposed HGEMM achieves an average 3.56× speedup over the CATLASS template-based implementation under identical Ascend NPU configurations, and an average 2.10× speedup relative to the cuBLAS implementation on Nvidia A800 GPUs under general random workloads. It also achieves a maximum computational utilization exceeding 90% under benchmark workloads. Moreover, the proposed HGEMM not only significantly outperforms the CATLASS template-based implementation but also delivers efficiency comparable to the cuBLAS implementation in OPT-based bandwidth-limited LLM inference workloads.
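The tiling idea the abstract refers to, partitioning the M, N, and K dimensions and accumulating partial products per K-tile, can be sketched in plain NumPy. This is a didactic sketch only: the tile sizes `tm`/`tn`/`tk` are arbitrary placeholders, and the paper's pipeline scheduling, strategy selection, and NPU-specific SplitK/ShuffleK mechanics are not modeled.

```python
import numpy as np

def tiled_gemm(A, B, tm=32, tn=32, tk=32):
    """Blocked matrix multiply illustrating M/N/K tiling.

    Each (i, j) output tile accumulates partial products over K-tiles,
    the same accumulation pattern that SplitK-style schemes parallelize.
    Slicing handles ragged edge tiles when dims are not tile multiples.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=np.result_type(A, B))
    for i in range(0, M, tm):          # tile over rows of A / C
        for j in range(0, N, tn):      # tile over columns of B / C
            for k in range(0, K, tk):  # accumulate over the K dimension
                C[i:i+tm, j:j+tn] += A[i:i+tm, k:k+tk] @ B[k:k+tk, j:j+tn]
    return C
```

On real hardware the payoff comes from keeping each tile resident in fast local buffers while it is reused; in this NumPy sketch the loop order only demonstrates that the tiled result equals the untiled product `A @ B`.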
Open Access Article
Hypergraph Conversational Recommendation System Fusing Pairwise Relationships
by
Liping Wu, Jiajian Li, Di Jiang, Lei Su and Chunping Pang
Computers 2026, 15(1), 38; https://doi.org/10.3390/computers15010038 - 7 Jan 2026
Abstract
Conversational recommendation systems aim to provide high-quality recommendations based on user needs through multiple rounds of interaction with users. Hypergraphs have been introduced into conversational recommendation because of their ability to express and model complex relationships among multiple entities, enabling the capture of complex multi-entity interactions in dialog history. However, existing hypergraph-based methods treat all entities within the same hyperedge as sharing a single relationship, ignoring the fact that multiple types of semantic relationships coexist among entities within the same hyperedge. This leads to ambiguous entity representations and makes it difficult to accurately characterize complex user preferences. To address this issue, this paper proposes the Hypergraph Conversational Recommendation System Fusing Pairwise Relationships (HCRS-PR) model. While preserving the overall high-order semantics of the hypergraph, it constructs a fine-grained pairwise relationship graph for each entity interaction within a hyperedge, capturing specific interaction patterns between entities and significantly improving the accuracy of conversational context representation. During the inference stage, to enhance the diversity of generated responses, the model adopts a beam search strategy based on multinomial distribution sampling. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed method on conversational recommendation tasks.
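The core construction step — expanding each hyperedge into its constituent entity pairs so that pair-specific relations can later be modelled — can be sketched in a few lines. The entity names and the flat pair set below are placeholders; the paper's actual relation typing and graph encoding are not reproduced here.

```python
from itertools import combinations

def hyperedge_to_pairs(hyperedges):
    """Expand each hyperedge (a set of entities sharing one high-order
    relation) into its pairwise edges, so that each entity pair can
    later carry its own fine-grained relation type."""
    pair_edges = set()
    for edge in hyperedges:
        # sorted() gives a canonical (u, v) ordering per pair
        for u, v in combinations(sorted(edge), 2):
            pair_edges.add((u, v))
    return pair_edges

# Two hyperedges from a dialog history; entities may co-occur in both.
hyperedges = [{"user", "movie_A", "director_X"}, {"user", "movie_B"}]
pairs = hyperedge_to_pairs(hyperedges)
```

Keeping the original hyperedges alongside this pairwise expansion is what lets a model preserve high-order semantics while still learning pair-specific interactions.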
Open Access Article
Numerical Study of the Dynamics of Medical Data Security in Information Systems
by
Dinargul Mukhammejanova, Assel Mukasheva and Siming Chen
Computers 2026, 15(1), 37; https://doi.org/10.3390/computers15010037 - 7 Jan 2026
Abstract
Background: Integrated medical information systems process large volumes of sensitive clinical data and are exposed to persistent cyber threats. Artificial intelligence (AI) is increasingly used for anomaly detection and incident response, yet its systemic effect on the dynamics of security indicators is not fully quantified. Aim: To develop and numerically study a nonlinear dynamical model describing the joint evolution of system vulnerability, threat activity, compromise level, AI detection quality, and response resources in a medical data protection context. Method: A five-dimensional system of ordinary differential equations was formulated for these five state variables, with parameters characterizing the appearance and elimination of vulnerabilities, attack intensity, AI learning and degradation, and resource consumption. The corresponding Cauchy (initial-value) problem was solved numerically using a fourth-order Runge–Kutta method. Results: Numerical modelling showed convergence to a favourable steady regime. Averaged over the interval t ∈ [195, 200], the initial 10% compromise is reduced by more than 99.9%, AI detection quality stabilizes at around 0.58, and response capacity increases 25-fold. Conclusions: The model quantitatively confirms that the integration of AI detection and a managed response capacity enables the system to reach a stable state with virtually zero compromised medical data even with non-zero threat activity.
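The classical fourth-order Runge–Kutta scheme used in the study can be sketched for a generic system y' = f(t, y); the right-hand side below is a stand-in, since the paper's specific coupling terms and parameter values are not given in the abstract.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    where y is a list of state variables."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h/6 * (a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def integrate(f, y0, t0, t1, h):
    """March the solution from t0 to t1 with fixed step h."""
    t, y = t0, list(y0)
    while t < t1 - 1e-12:
        y = rk4_step(f, t, y, min(h, t1 - t))
        t = min(t + h, t1)
    return y

# Sanity check on the scalar test problem y' = -y, y(0) = 1,
# whose exact solution is e^{-t}.
y_end = integrate(lambda t, y: [-y[0]], [1.0], 0.0, 1.0, 0.01)
```

For the five-dimensional security model, `y` would hold the vulnerability, threat, compromise, detection-quality, and resource states, and `f` would encode the paper's coupling parameters.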
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
Open Access Article
Breaking the Speed–Accuracy Trade-Off: A Novel Embedding-Based Framework with Coarse Screening-Refined Verification for Zero-Shot Named Entity Recognition
by
Meng Yang, Shuo Wang, Hexin Yang and Ning Chen
Computers 2026, 15(1), 36; https://doi.org/10.3390/computers15010036 - 7 Jan 2026
Abstract
Although fine-tuning pretrained language models has brought remarkable progress to zero-shot named entity recognition (NER), current generative approaches still suffer from inherent limitations. Their autoregressive decoding mechanism requires token-by-token generation, resulting in low inference efficiency, while the massive parameter scale leads to high computational and deployment costs. In contrast, span-based methods avoid autoregressive decoding but often face large candidate spaces and severe noise redundancy, which hinder efficient entity localization in long-text scenarios. To overcome these challenges, we propose an efficient Embedding-based NER framework that achieves an optimal balance between performance and efficiency. Specifically, the framework first introduces a lightweight dynamic feature matching module for coarse-grained entity localization, enabling rapid filtering of potential entity regions. Then, a hierarchical progressive entity filtering mechanism is applied for fine-grained recognition and noise suppression. Experimental results demonstrate that the proposed model, which is trained on a single RTX 5090 GPU for only 24 h, attains approximately 90% of the performance of the SOTA GNER-T5 11B model while using only one-seventh of its parameters. Moreover, by eliminating the redundancy of autoregressive decoding, the proposed framework achieves a 17× faster inference speed compared to GNER-T5 11B and significantly surpasses traditional span-based approaches in efficiency.
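The coarse-screening-then-refined-verification pattern — a cheap first pass that prunes the candidate span space before an expensive scorer runs — can be sketched generically. The scoring functions and thresholds below are placeholders, not the paper's actual matching and filtering modules.

```python
def coarse_then_fine(candidates, cheap_score, expensive_score,
                     keep_top=10, threshold=0.5):
    """Two-stage filtering: rank all candidates with a cheap scorer,
    keep only the top few, then run the expensive scorer on the
    survivors and accept those above a confidence threshold."""
    survivors = sorted(candidates, key=cheap_score, reverse=True)[:keep_top]
    return [c for c in survivors if expensive_score(c) >= threshold]

# Toy example: candidates are (start, end) spans with made-up scores.
spans = [(0, 2), (1, 3), (4, 5), (6, 9)]
cheap = {(0, 2): 0.9, (1, 3): 0.2, (4, 5): 0.8, (6, 9): 0.1}.get
fine = lambda s: {(0, 2): 0.95, (4, 5): 0.3}.get(s, 0.0)
result = coarse_then_fine(spans, cheap, fine, keep_top=2)
```

The speedup comes from the expensive scorer seeing only `keep_top` spans instead of the full quadratic span space, which is what makes this pattern attractive for long texts.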
(This article belongs to the Special Issue Adaptive Decision Making Across Industries with AI and Machine Learning: Frameworks, Challenges, and Innovations)
Open Access Article
Hybrid Sine–Cosine with Hummingbird Foraging Algorithm for Engineering Design Optimisation
by
Jamal Zraqou, Ahmad Sami Al-Shamayleh, Riyad Alrousan, Hussam Fakhouri, Faten Hamad and Niveen Halalsheh
Computers 2026, 15(1), 35; https://doi.org/10.3390/computers15010035 - 7 Jan 2026
Abstract
We introduce AHA–SCA, a compact hybrid optimiser that alternates the wave-based exploration of the Sine–Cosine Algorithm (SCA) with the exploitation skills of the Artificial Hummingbird Algorithm (AHA) within a single population. Even iterations perform SCA moves with a linearly decaying sinusoidal amplitude to explore widely around the current best solution, while odd iterations invoke guided and territorial hummingbird flights using axial, diagonal, and omnidirectional patterns to intensify the search in promising regions. This simple interleaving yields an explicit and tunable balance between exploration and exploitation and incurs negligible overhead beyond evaluating candidate solutions. The proposed approach is evaluated on the CEC2014, CEC2017, and CEC2022 benchmark suites and on several constrained engineering design problems, including welded beam, pressure vessel, tension/compression spring, speed reducer, and cantilever beam designs. Across these diverse tasks, AHA–SCA demonstrates competitive or superior performance relative to stand-alone SCA, AHA, and a broad panel of recent metaheuristics, delivering faster early-phase convergence and robust final solutions. Statistical analyses using non-parametric tests confirm that improvements are significant on many functions, and the method respects problem constraints without parameter tuning. The results suggest that alternating wave-driven exploration with hummingbird-inspired refinement is a promising general strategy for continuous engineering optimisation.
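The even/odd alternation can be sketched as a minimal loop: even iterations use the standard SCA position update with a linearly decaying amplitude, and odd iterations use a heavily simplified axial "flight" around the current best. This is an illustrative hybrid under those assumptions, not the paper's exact AHA operators.

```python
import math, random

def aha_sca_sketch(objective, dim, bounds, iters=200, pop=20, seed=1):
    """Minimal hybrid loop: even iterations apply SCA sine/cosine moves
    with a linearly decaying amplitude; odd iterations apply a simplified
    axial flight that perturbs one coordinate of the current best."""
    rnd = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: min(max(v, lo), hi)
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = list(min(X, key=objective))
    for t in range(iters):
        r1 = 2.0 * (1 - t / iters)  # decaying exploration amplitude
        for x in X:
            if t % 2 == 0:  # SCA exploration around the best solution
                for j in range(dim):
                    r2 = rnd.uniform(0, 2 * math.pi)
                    r3, r4 = rnd.uniform(0, 2), rnd.random()
                    wave = math.sin(r2) if r4 < 0.5 else math.cos(r2)
                    x[j] = clip(x[j] + r1 * wave * abs(r3 * best[j] - x[j]))
            else:  # simplified axial flight: perturb one best coordinate
                x[:] = best
                j = rnd.randrange(dim)
                x[j] = clip(best[j] + rnd.gauss(0, 1) * (hi - lo) * 0.05)
            if objective(x) < objective(best):  # greedy best update
                best = list(x)
    return best

# Minimize the sphere function sum(z^2) on [-5, 5]^3.
best = aha_sca_sketch(lambda v: sum(z * z for z in v), dim=3, bounds=(-5.0, 5.0))
```

The interleaving makes the exploration/exploitation split explicit: the SCA phase scatters widely early on (large `r1`), while the flight phase refines around the incumbent.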
(This article belongs to the Special Issue AI in Complex Engineering Systems)
Open Access Article
Exploring Risk Factors of Mycotoxin Contamination in Fresh Eggs Using Machine Learning Techniques
by
Eman Omar, Eman Alsaidi, Abdullah Aref, Sharaf Omar, Wafa’ Bani Mustafa and Hind Milhem
Computers 2026, 15(1), 34; https://doi.org/10.3390/computers15010034 - 7 Jan 2026
Abstract
Mycotoxins are toxic compounds produced by certain fungi, whose health effects may be significant when they contaminate fresh eggs. Conventional methods of mycotoxin analysis, while accurate, are labor-intensive, time-consuming, and impractical for large-scale screening applications. This study uses machine learning techniques to predict the concentration and presence of deoxynivalenol (DON), aflatoxin B1 (AFB1), and ochratoxin A (OTA) in fresh eggs from Jordan. Rather than replacing analytical detection methods, the proposed approach can enable a risk-based prioritization of samples for laboratory testing by identifying high-risk samples based on environmental and production factors. A dataset of 1250 poultry egg samples, collected between January and July 2024 and covering environmental conditions and chemical assay results for mycotoxin content, was used. Several machine learning algorithms were used in this study to build predictive models, including decision trees, support vector machines, and neural networks. The results indicate that machine learning can accurately and reliably predict mycotoxin contamination, which demonstrates the potential for integrating machine learning into food safety protocols. This study contributes toward developing predictive analytics for food safety and lays the groundwork for future research aimed at improving contamination monitoring systems.
(This article belongs to the Special Issue Machine Learning: Techniques, Industry Applications, Code Sharing, and Future Trends)
Open Access Article
X-HEM: An Explainable and Trustworthy AI-Based Framework for Intelligent Healthcare Diagnostics
by
Mohammad F. Al-Hammouri, Bandi Vamsi, Islam T. Almalkawi and Ali Al Bataineh
Computers 2026, 15(1), 33; https://doi.org/10.3390/computers15010033 - 7 Jan 2026
Abstract
Intracranial Hemorrhage (ICH) remains a critical life-threatening condition where timely and accurate diagnosis using non-contrast Computed Tomography (CT) scans is vital to reduce mortality and long-term disability. Deep learning methods have shown strong potential for automated hemorrhage detection, yet most existing approaches lack confidence quantification and clinical interpretability, which limits their adoption in high-stakes care. This study presents X-HEM, an explainable hemorrhage ensemble model for reliable detection of Intracranial Hemorrhage (ICH) on non-contrast head CT scans. The aim is to improve diagnostic accuracy, interpretability, and confidence for real-time clinical decision support. X-HEM integrates three convolutional backbones (VGG16, ResNet50, DenseNet121) through soft voting. Bayesian uncertainty is estimated using Monte Carlo Dropout, while Grad-CAM++ and SHAP provide spatial and global interpretability. Training and validation were conducted on the RSNA ICH dataset, with external testing on CQ500. The model achieved AUCs of 0.96 (RSNA) and 0.94 (CQ500), demonstrated well-calibrated confidence (low Brier/ECE), and provided explanations that aligned with radiologist-marked regions. The integration of ensemble learning, Bayesian uncertainty, and dual explainability enables X-HEM to deliver confidence-aware, interpretable ICH predictions suitable for clinical use.
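Two numerical ingredients of this design — soft voting across backbones and Monte Carlo Dropout uncertainty — can be sketched concretely: average the per-class probabilities of several models, and summarize repeated stochastic forward passes by their mean and variance. The "models" below are simple stand-ins for the actual VGG16/ResNet50/DenseNet121 backbones.

```python
import random, statistics

def soft_vote(prob_lists):
    """Average per-class probabilities from several models (soft voting)."""
    n = len(prob_lists)
    return [sum(p[i] for p in prob_lists) / n
            for i in range(len(prob_lists[0]))]

def mc_dropout_uncertainty(stochastic_model, x, passes=50):
    """Mean and variance of a scalar prediction over repeated stochastic
    forward passes (the Monte Carlo Dropout approximation: dropout stays
    active at inference, so each pass gives a slightly different output)."""
    samples = [stochastic_model(x) for _ in range(passes)]
    return statistics.mean(samples), statistics.pvariance(samples)

# Three fixed 'backbones' voting on a 2-class (hemorrhage / no) problem.
probs = soft_vote([[0.8, 0.2], [0.7, 0.3], [0.9, 0.1]])

# A stochastic stand-in model whose dropout-like noise we summarize.
rnd = random.Random(0)
mean, var = mc_dropout_uncertainty(lambda x: 0.7 + rnd.uniform(-0.1, 0.1), None)
```

A high variance across passes flags a low-confidence prediction, which is the signal a clinician would use to deprioritize the automated output.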
(This article belongs to the Special Issue AI-Powered IoT (AIoT) Systems: Advancements in Security, Sustainability, and Intelligence)
Topics
Topic in
AI, Computers, Electronics, Information, MAKE, Signals
Recent Advances in Label Distribution Learning
Topic Editors: Xin Geng, Ning Xu, Liangxiao Jiang
Deadline: 31 January 2026
Topic in
Applied Sciences, Computers, JSAN, Technologies, BDCC, Sensors, Telecom, Electronics
Electronic Communications, IOT and Big Data, 2nd Volume
Topic Editors: Teen-Hang Meen, Charles Tijus, Cheng-Chien Kuo, Kuei-Shu Hsu, Jih-Fu Tu
Deadline: 31 March 2026
Topic in
AI, Buildings, Computers, Drones, Entropy, Symmetry
Applications of Machine Learning in Large-Scale Optimization and High-Dimensional Learning
Topic Editors: Jeng-Shyang Pan, Junzo Watada, Vaclav Snasel, Pei Hu
Deadline: 30 April 2026
Topic in
Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe Hou
Deadline: 31 May 2026
Special Issues
Special Issue in
Computers
Systems and Technologies for IT/OT Integration in Industry 4/5.0 Environments (SITE)
Guest Editors: Riccardo Venanzi, Paolo Bellavista
Deadline: 15 January 2026
Special Issue in
Computers
Future Trends in Computer Programming Education
Guest Editor: Stelios Xinogalos
Deadline: 31 January 2026
Special Issue in
Computers
AI in Complex Engineering Systems
Guest Editor: Sandi Baressi Šegota
Deadline: 31 January 2026
Special Issue in
Computers
Computational Science and Its Applications 2025 (ICCSA 2025)
Guest Editor: Osvaldo Gervasi
Deadline: 31 January 2026