Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.5 days after submission; the time from acceptance to publication is 3.9 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds and Computers.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Privacy-Preserving Set Intersection Protocol Based on SM2 Oblivious Transfer
Computers 2026, 15(1), 44; https://doi.org/10.3390/computers15010044 (registering DOI) - 10 Jan 2026
Abstract
Private Set Intersection (PSI) is a fundamental cryptographic primitive in privacy-preserving computation and has been widely applied in federated learning, secure data sharing, and privacy-aware data analytics. However, most existing PSI protocols rely on RSA or standard elliptic curve cryptography, which limits their applicability in scenarios requiring domestic cryptographic standards and often leads to high computational and communication overhead when processing large-scale datasets. In this paper, we propose a novel PSI protocol based on the Chinese commercial cryptographic standard SM2, referred to as SM2-OT-PSI. The proposed scheme constructs an oblivious transfer-based Oblivious Pseudorandom Function (OPRF) using SM2 public-key cryptography and the SM3 hash function, enabling efficient multi-point OPRF evaluation under the semi-honest adversary model. A formal security analysis demonstrates that the protocol satisfies privacy and correctness guarantees assuming the hardness of the Elliptic Curve Discrete Logarithm Problem. To further improve practical performance, we design a software–hardware co-design architecture that offloads SM2 scalar multiplication and SM3 hashing operations to a domestic reconfigurable cryptographic accelerator (RSP S20G). Experimental results show that, for datasets with up to millions of elements, the presented protocol significantly outperforms several representative PSI schemes in terms of execution time and communication efficiency, especially in medium and high-bandwidth network environments. The proposed SM2-OT-PSI protocol provides a practical and efficient solution for large-scale privacy-preserving set intersection under national cryptographic standards, making it suitable for deployment in real-world secure computing systems.
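To make the OPRF-style masking behind protocols of this kind concrete, the following is a minimal, illustrative Python sketch of a semi-honest Diffie–Hellman-based PSI flow. It stands in for, and does not reproduce, the paper's SM2-OT construction: integer modular exponentiation replaces SM2 elliptic-curve operations, SHA-256 replaces SM3, and the group parameters and example sets are toy values chosen for readability, not security.

```python
import hashlib

# Toy group parameters (illustrative only, NOT secure).
P = 2**127 - 1          # Mersenne prime modulus, used purely for demonstration
A_SECRET = 0x1234567    # party A's secret exponent
B_SECRET = 0x89ABCDE    # party B's secret exponent

def h(element: str) -> int:
    """Hash a set element into the group (stand-in for hash-to-curve)."""
    return int.from_bytes(hashlib.sha256(element.encode()).digest(), "big") % P

def mask(values, secret):
    """Blind hashed elements with a secret exponent: H(x)^secret mod P."""
    return [pow(v, secret, P) for v in values]

# Each party hashes and blinds its own set, then exchanges the blinded values.
set_a = ["alice@example.com", "bob@example.com", "carol@example.com"]
set_b = ["bob@example.com", "dave@example.com"]

a_masked = mask([h(x) for x in set_a], A_SECRET)   # sent A -> B
b_masked = mask([h(y) for y in set_b], B_SECRET)   # sent B -> A

# Each side applies its own secret to the other's blinded values, yielding
# H(.)^(ab) on both sides; only equal double-masked values are revealed.
a_double = set(mask(b_masked, A_SECRET))           # computed by A
b_double = mask(a_masked, B_SECRET)                # computed by B, returned in A's order

intersection = [x for x, v in zip(set_a, b_double) if v in a_double]
print(intersection)   # ['bob@example.com']
```

In a real protocol the returned values would be shuffled and the group would be a standard elliptic curve; the sketch only shows why matching double-masked values reveal the intersection and nothing else.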
Full article
(This article belongs to the Special Issue Mobile Fog and Edge Computing)
Open Access Article
Joint Inference of Image Enhancement and Object Detection via Cross-Domain Fusion Transformer
by
Bingxun Zhao and Yuan Chen
Computers 2026, 15(1), 43; https://doi.org/10.3390/computers15010043 (registering DOI) - 10 Jan 2026
Abstract
Underwater vision is fundamental to ocean exploration, yet it is frequently impaired by underwater degradation, including low contrast, color distortion, and blur, thereby presenting significant challenges for underwater object detection (UOD). Most existing methods employ underwater image enhancement as a preprocessing step to improve visual quality prior to detection. However, image enhancement and object detection are optimized for fundamentally different objectives, and directly cascading them leads to feature distribution mismatch. Moreover, prevailing dual-branch architectures process enhancement and detection independently, overlooking multi-scale interactions across domains and thus constraining the learning of cross-domain feature representations. To overcome these limitations, we propose an underwater cross-domain fusion Transformer detector (UCF-DETR). UCF-DETR jointly leverages image enhancement and object detection by exploiting the complementary information from the enhanced and original image domains. Specifically, an underwater image enhancement module is employed to improve visibility. We then design a cross-domain feature pyramid to integrate fine-grained structural details from the enhanced domain with semantic representations from the original domain. A cross-domain query interaction mechanism is introduced to model inter-domain query relationships, leading to accurate object localization and boundary delineation. Extensive experiments on the challenging DUO and UDD benchmarks demonstrate that UCF-DETR consistently outperforms state-of-the-art methods for UOD.
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
Open Access Article
Preparation for Inclusive and Technology-Enhanced Pedagogy: A Cluster Analysis of Secondary Special Education Teachers
by
Evaggelos Foykas, Eleftheria Beazidou, Natassa Raikou and Nikolaos C. Zygouris
Computers 2026, 15(1), 42; https://doi.org/10.3390/computers15010042 - 9 Jan 2026
Abstract
This study examines the profiles of secondary special education teachers regarding their readiness for inclusive teaching, with technology-enhanced practices operationalized through participation in STEAM-related professional development. A total of 323 teachers from vocational high schools and integration classes participated. Four indicators of professional preparation were assessed: years of teaching experience, formal STEAM training, exposure to students with special educational needs (SEN), and perceived success in inclusive teaching, operationalized as self-reported competence in adaptive instruction, classroom management, positive attitudes toward inclusion, and collaborative engagement. Cluster analysis revealed three distinct teacher profiles: less experienced teachers with moderate perceived success and limited exposure to students with SEN; well-prepared teachers with high levels across all indicators; and highly experienced teachers with lower STEAM training and perceived success. These findings underscore the need for targeted professional development that integrates inclusive and technology-enhanced pedagogy through STEAM and is tailored to teachers’ experience levels. By integrating inclusive readiness, STEAM-related preparation, and technology-enhanced pedagogy within a person-centered profiling approach, this study offers actionable teacher profiles to inform differentiated professional development in secondary special education.
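As a rough illustration of the person-centered profiling approach described in the abstract above, the sketch below clusters synthetic teacher records on four standardized indicators using scikit-learn's KMeans. The choice of k-means, the synthetic data, and the indicator scales are assumptions for demonstration; only the four indicator names and the three-profile outcome come from the abstract.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins for the four preparation indicators named in the abstract.
X = np.column_stack([
    rng.normal(12, 6, 300),    # years of teaching experience
    rng.normal(20, 10, 300),   # formal STEAM training (hours)
    rng.normal(5, 3, 300),     # exposure to students with SEN (classes taught)
    rng.normal(3.5, 0.8, 300), # perceived success in inclusive teaching (1-5 scale)
])

# Standardize so that no single indicator dominates the Euclidean distances.
X_std = StandardScaler().fit_transform(X)

# Three clusters, matching the three teacher profiles reported in the study.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_std)
print("cluster sizes:", np.bincount(km.labels_))
print("cluster centers (standardized):\n", km.cluster_centers_.round(2))
```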
Full article
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)
Open Access Systematic Review
Emerging Technologies in Financial Services: From Virtualization and Cloud Infrastructures to Edge Computing Applications
by
Georgios Lambropoulos, Sarandis Mitropoulos and Christos Douligeris
Computers 2026, 15(1), 41; https://doi.org/10.3390/computers15010041 - 9 Jan 2026
Abstract
The financial services sector is experiencing unprecedented transformation through the adoption of virtualization technologies, encompassing cloud computing and edge computing digitalization initiatives that fundamentally alter operational paradigms and competitive dynamics within the industry. This systematic literature review employed a comprehensive methodology, analyzing peer-reviewed articles, systematic reviews, and industry reports published between 2016 and 2025 across three primary technological domains, and used thematic content analysis to synthesize findings and identify key implementation patterns, performance outcomes, and emerging challenges. The analysis reveals consistent evidence of positive long-term performance outcomes from virtualization technology adoption, including average transaction processing time reductions of 69% through edge computing implementations and substantial operational cost savings and efficiency improvements through cloud computing adoption, while also identifying critical challenges related to regulatory compliance, security management, and organizational transformation requirements. Virtualization technology offers transformative potential for financial services through improved operational efficiency, enhanced customer experience, and competitive advantage creation. Successful implementation, however, requires sophisticated approaches to standardization, regulatory compliance, and change management, and future research is needed to develop integrative frameworks addressing technology convergence and emerging applications in decentralized finance and digital currency systems.
Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
Open Access Review
Deep Reinforcement Learning in the Era of Foundation Models: A Survey
by
Ibomoiye Domor Mienye, Ebenezer Esenogho and Cameron Modisane
Computers 2026, 15(1), 40; https://doi.org/10.3390/computers15010040 - 9 Jan 2026
Abstract
Deep reinforcement learning (DRL) and large foundation models (FMs) have reshaped modern artificial intelligence (AI) by enabling systems that learn from interaction while leveraging broad generalization and multimodal reasoning capabilities. This survey examines the growing convergence of these paradigms and reviews how reinforcement learning from human feedback (RLHF), reinforcement learning from AI feedback (RLAIF), world-model pretraining, and preference-based optimization refine foundation model capabilities. We organize existing work into a taxonomy of model-centric, RL-centric, and hybrid DRL–FM integration pathways, and synthesize applications across language and multimodal agents, autonomous control, scientific discovery, and societal and ethical alignment. We also identify technical, behavioral, and governance challenges that hinder scalable and reliable DRL–FM integration, and outline emerging research directions that suggest how reinforcement-driven adaptation may shape the next generation of intelligent systems. This review provides researchers and practitioners with a structured overview of the current state and future trajectory of DRL in the era of foundation models.
Full article
Open Access Article
Efficient Low-Precision GEMM on Ascend NPU: HGEMM’s Synergy of Pipeline Scheduling, Tiling, and Memory Optimization
by
Erkun Zhang, Pengxiang Xu and Lu Lu
Computers 2026, 15(1), 39; https://doi.org/10.3390/computers15010039 - 8 Jan 2026
Abstract
As one of the most widely used high-performance kernels, General Matrix Multiplication, or GEMM, plays a pivotal role in diverse application fields. With the growing prevalence of training for Convolutional Neural Networks (CNNs) and Large Language Models (LLMs), the design and implementation of high-efficiency, low-precision GEMM on modern Neural Processing Unit (NPU) platforms are of great significance. In this work, HGEMM for Ascend NPU is presented, which enables collaborative processing of different computation types by Cube units and Vector units. The major contributions of this work are the following: (i) dual-stream pipeline scheduling is implemented, which synchronizes padding operations, matrix–matrix multiplications, and element-wise instructions across hierarchical buffers and compute units; (ii) a suite of tiling strategies and a corresponding strategy selection mechanism are developed, comprehensively accounting for the impacts of the M, N, and K dimensions; and (iii) SplitK and ShuffleK methods are proposed to address the challenges of memory access efficiency and AI Core utilization. Extensive evaluations demonstrate that our proposed HGEMM achieves an average 3.56× speedup over the CATLASS template-based implementation under identical Ascend NPU configurations, and an average 2.10× speedup relative to the cuBLAS implementation on Nvidia A800 GPUs under general random workloads. It also achieves a maximum computational utilization exceeding 90% under benchmark workloads. Moreover, the proposed HGEMM not only significantly outperforms the CATLASS template-based implementation but also delivers efficiency comparable to the cuBLAS implementation in OPT-based bandwidth-limited LLM inference workloads.
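The blocking idea at the heart of high-performance GEMM kernels can be illustrated with a plain NumPy sketch: the output is produced one (tile_m x tile_n) block at a time while the K dimension is walked in chunks, so each working set stays small. This is a didactic model of tiling only; the paper's Ascend-specific dual-stream pipeline, Cube/Vector co-processing, and SplitK/ShuffleK mechanisms are not reproduced here.

```python
import numpy as np

def tiled_gemm(A, B, tile_m=64, tile_n=64, tile_k=64):
    """Blocked C = A @ B, accumulating one (tile_m x tile_n) output tile at a time."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=np.float32)
    for m0 in range(0, M, tile_m):
        for n0 in range(0, N, tile_n):
            acc = np.zeros((min(tile_m, M - m0), min(tile_n, N - n0)), dtype=np.float32)
            # Walk the K dimension in blocks, the dimension that SplitK-style schemes partition.
            for k0 in range(0, K, tile_k):
                a_blk = A[m0:m0 + tile_m, k0:k0 + tile_k]
                b_blk = B[k0:k0 + tile_k, n0:n0 + tile_n]
                acc += a_blk.astype(np.float32) @ b_blk.astype(np.float32)
            C[m0:m0 + tile_m, n0:n0 + tile_n] = acc
    return C

A = np.random.rand(200, 300).astype(np.float16)   # low-precision inputs, fp32 accumulation
B = np.random.rand(300, 150).astype(np.float16)
print(np.allclose(tiled_gemm(A, B), A.astype(np.float32) @ B.astype(np.float32), atol=1e-2))
```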
Full article
Open Access Article
Hypergraph Conversational Recommendation System Fusing Pairwise Relationships
by
Liping Wu, Jiajian Li, Di Jiang, Lei Su and Chunping Pang
Computers 2026, 15(1), 38; https://doi.org/10.3390/computers15010038 - 7 Jan 2026
Abstract
Conversational recommendation systems aim to provide high-quality recommendations based on user needs through multiple rounds of interaction with users. Hypergraphs are introduced into conversational recommendation because they can express and model complex relationships among multiple entities, enabling the capture of complex multi-entity interactions in dialogue history. However, existing hypergraph-based methods treat all entities within the same hyperedge as sharing a single relationship, ignoring the fact that multiple types of semantic relationships coexist among entities within the same hyperedge. This leads to ambiguous entity representations and makes it difficult to accurately characterize complex user preferences. To address this issue, this paper proposes a Hypergraph Conversational Recommendation System Fusing Pairwise Relationships (HCRS-PR) model that integrates pairwise relationships. While preserving the overall high-order semantics of the hypergraph, it constructs a fine-grained pairwise relationship graph for each entity interaction within a hyperedge, capturing specific interaction patterns between entities and significantly improving the accuracy of conversational context representation. During the model inference stage, to enhance the diversity of generated responses, this paper adopts a beam search strategy based on multinomial distribution sampling. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed method in conversational recommendation tasks.
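The core modelling move, keeping each hyperedge intact while also materializing its pairwise relationships, can be sketched in a few lines of Python. The dialogue entities below are invented for illustration, and the paper's relation typing and graph encoders are not shown.

```python
from itertools import combinations

# A hyperedge groups all entities mentioned in one dialogue turn (illustrative data).
hyperedges = [
    {"user", "Inception", "sci-fi", "Christopher Nolan"},
    {"user", "Interstellar", "sci-fi"},
]

# High-order view: each hyperedge is kept whole (entity, hyperedge id) in an incidence list.
incidence = [(entity, edge_id) for edge_id, edge in enumerate(hyperedges) for entity in edge]

# Fine-grained view: every pair of co-occurring entities becomes an explicit pairwise edge,
# so entity-to-entity interactions are no longer collapsed into a single shared relation.
pairwise_edges = sorted(
    {tuple(sorted(pair)) for edge in hyperedges for pair in combinations(edge, 2)}
)

print(len(incidence), "incidence entries;", len(pairwise_edges), "pairwise edges")
for e in pairwise_edges[:5]:
    print(e)
```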
Full article
Open Access Article
Numerical Study of the Dynamics of Medical Data Security in Information Systems
by
Dinargul Mukhammejanova, Assel Mukasheva and Siming Chen
Computers 2026, 15(1), 37; https://doi.org/10.3390/computers15010037 - 7 Jan 2026
Abstract
Background: Integrated medical information systems process large volumes of sensitive clinical data and are exposed to persistent cyber threats. Artificial intelligence (AI) is increasingly used for anomaly detection and incident response, yet its systemic effect on the dynamics of security indicators is not fully quantified. Aim: To develop and numerically study a nonlinear dynamical model describing the joint evolution of system vulnerability, threat activity, compromise level, AI detection quality, and response resources in a medical data protection context. Method: A five-dimensional system of ordinary differential equations was formulated over these five state variables. Its parameters characterize the appearance and elimination of vulnerabilities, attack intensity, AI learning and degradation, and resource consumption. The corresponding Cauchy (initial-value) problem was solved numerically using a fourth-order Runge–Kutta method. Results: Numerical modelling showed convergence to a favourable steady regime: on the interval t ∈ [195, 200] the initial 10% compromise is reduced by more than 99.9%, AI detection quality stabilizes at around 0.58, and response capacity increases 25-fold. Conclusions: The model quantitatively confirms that the integration of AI detection and a managed response capacity enables the system to reach a stable state with virtually zero compromised medical data even with non-zero threat activity.
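The numerical scheme named in the abstract, a classical fourth-order Runge–Kutta integration of a five-dimensional ODE system, can be sketched generically as follows. The right-hand side used here is a deliberately simple placeholder with arbitrary coefficients and is not the model studied in the paper; only the five state-variable roles and the 10% initial compromise are taken from the abstract.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rhs(t, y):
    """Placeholder dynamics for (vulnerability, threats, compromise, AI quality, response).
    Illustrative coefficients only; NOT the equations studied in the paper."""
    v, a, c, q, r = y
    return np.array([
        0.05 - 0.10 * v,                  # vulnerabilities appear and are patched
        0.02 * v - 0.05 * a,              # threat activity driven by exposed vulnerabilities
        0.10 * a * v - 0.50 * q * c,      # compromise grows with attacks, shrinks with detection
        0.03 * (1 - q) - 0.01 * q,        # AI detection quality learns toward a plateau
        0.20 * q - 0.02 * r,              # response capacity builds on detection quality
    ])

t, h = 0.0, 0.01
y = np.array([0.3, 0.2, 0.10, 0.1, 0.05])   # 10% initial compromise, as in the abstract
while t < 200.0:
    y = rk4_step(rhs, t, y, h)
    t += h
print("state at t=200:", y.round(4))
```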
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
Open Access Article
Breaking the Speed–Accuracy Trade-Off: A Novel Embedding-Based Framework with Coarse Screening-Refined Verification for Zero-Shot Named Entity Recognition
by
Meng Yang, Shuo Wang, Hexin Yang and Ning Chen
Computers 2026, 15(1), 36; https://doi.org/10.3390/computers15010036 - 7 Jan 2026
Abstract
Although fine-tuning pretrained language models has brought remarkable progress to zero-shot named entity recognition (NER), current generative approaches still suffer from inherent limitations. Their autoregressive decoding mechanism requires token-by-token generation, resulting in low inference efficiency, while the massive parameter scale leads to high computational and deployment costs. In contrast, span-based methods avoid autoregressive decoding but often face large candidate spaces and severe noise redundancy, which hinder efficient entity localization in long-text scenarios. To overcome these challenges, we propose an efficient Embedding-based NER framework that achieves an optimal balance between performance and efficiency. Specifically, the framework first introduces a lightweight dynamic feature matching module for coarse-grained entity localization, enabling rapid filtering of potential entity regions. Then, a hierarchical progressive entity filtering mechanism is applied for fine-grained recognition and noise suppression. Experimental results demonstrate that the proposed model, which is trained on a single RTX 5090 GPU for only 24 h, attains approximately 90% of the performance of the SOTA GNER-T5 11B model while using only one-seventh of its parameters. Moreover, by eliminating the redundancy of autoregressive decoding, the proposed framework achieves a 17× faster inference speed compared to GNER-T5 11B and significantly surpasses traditional span-based approaches in efficiency.
Full article
(This article belongs to the Special Issue Adaptive Decision Making Across Industries with AI and Machine Learning: Frameworks, Challenges, and Innovations)
Open Access Article
Hybrid Sine–Cosine with Hummingbird Foraging Algorithm for Engineering Design Optimisation
by
Jamal Zraqou, Ahmad Sami Al-Shamayleh, Riyad Alrousan, Hussam Fakhouri, Faten Hamad and Niveen Halalsheh
Computers 2026, 15(1), 35; https://doi.org/10.3390/computers15010035 - 7 Jan 2026
Abstract
We introduce AHA–SCA, a compact hybrid optimiser that alternates the wave-based exploration of the Sine–Cosine Algorithm (SCA) with the exploitation skills of the Artificial Hummingbird Algorithm (AHA) within a single population. Even iterations perform SCA moves with a linearly decaying sinusoidal amplitude to explore widely around the current best solution, while odd iterations invoke guided and territorial hummingbird flights using axial, diagonal, and omnidirectional patterns to intensify the search in promising regions. This simple interleaving yields an explicit and tunable balance between exploration and exploitation and incurs negligible overhead beyond evaluating candidate solutions. The proposed approach is evaluated on the CEC2014, CEC2017, and CEC2022 benchmark suites and on several constrained engineering design problems, including welded beam, pressure vessel, tension/compression spring, speed reducer, and cantilever beam designs. Across these diverse tasks, AHA–SCA demonstrates competitive or superior performance relative to stand-alone SCA, AHA, and a broad panel of recent metaheuristics, delivering faster early-phase convergence and robust final solutions. Statistical analyses using non-parametric tests confirm that improvements are significant on many functions, and the method respects problem constraints without parameter tuning. The results suggest that alternating wave-driven exploration with hummingbird-inspired refinement is a promising general strategy for continuous engineering optimisation.
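The alternation scheme described above, sinusoidal exploration around the incumbent best on even iterations and localized refinement on odd iterations, can be sketched as follows. The SCA update follows the commonly published form, but the "hummingbird" step is reduced here to a simple axial perturbation and is only a stand-in for AHA's guided and territorial flights; the sphere objective and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                      # toy objective for demonstration
    return float(np.sum(x ** 2))

dim, pop_size, iters = 10, 30, 200
lo, hi = -5.0, 5.0
X = rng.uniform(lo, hi, (pop_size, dim))
fitness = np.array([sphere(x) for x in X])
best = X[fitness.argmin()].copy()

for t in range(iters):
    a = 2.0 * (1 - t / iters)                     # linearly decaying amplitude
    for i in range(pop_size):
        if t % 2 == 0:
            # SCA-style move: oscillate around the current best solution.
            r1 = a * rng.random(dim)
            r2 = 2 * np.pi * rng.random(dim)
            r3 = 2 * rng.random(dim)
            wave = np.where(rng.random(dim) < 0.5, np.sin(r2), np.cos(r2))
            cand = X[i] + r1 * wave * np.abs(r3 * best - X[i])
        else:
            # Simplified "hummingbird" refinement: axial random walk near the individual.
            axis = rng.integers(dim)
            cand = X[i].copy()
            cand[axis] += rng.normal(0.0, 0.1 * (hi - lo) * (1 - t / iters))
        cand = np.clip(cand, lo, hi)
        f = sphere(cand)
        if f < fitness[i]:                        # greedy replacement
            X[i], fitness[i] = cand, f
    best = X[fitness.argmin()].copy()

print("best objective:", fitness.min())
```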
Full article
(This article belongs to the Special Issue AI in Complex Engineering Systems)
Open Access Article
Exploring Risk Factors of Mycotoxin Contamination in Fresh Eggs Using Machine Learning Techniques
by
Eman Omar, Eman Alsaidi, Abdullah Aref, Sharaf Omar, Wafa’ Bani Mustafa and Hind Milhem
Computers 2026, 15(1), 34; https://doi.org/10.3390/computers15010034 - 7 Jan 2026
Abstract
Mycotoxins are toxic compounds produced by certain fungi, whose health effects may be significant when they contaminate fresh eggs. Conventional methods of mycotoxin analysis, while accurate, are labor-intensive, time-consuming, and impractical for large-scale screening applications. This study uses machine learning techniques to predict the concentration and presence of deoxynivalenol (DON), aflatoxin B1 (AFB1), and ochratoxin A (OTA) in fresh eggs from Jordan. Rather than replacing analytical detection methods, the proposed approach can enable a risk-based prioritization of samples for laboratory testing by identifying high-risk samples based on environmental and production factors. A dataset of 1250 poultry egg samples collected between January and July 2024 was used, covering environmental conditions, production factors, and chemical assay results for mycotoxin content in the eggs. Several machine learning algorithms were used in this study to build predictive models, including decision trees, support vector machines, and neural networks. The results indicate that machine learning can accurately and reliably predict mycotoxin contamination, which demonstrates the potential for integrating machine learning into food safety protocols. This study contributes toward developing predictive analytics for food safety and lays the groundwork for future research aimed at improving contamination monitoring systems.
Full article
(This article belongs to the Special Issue Machine Learning: Techniques, Industry Applications, Code Sharing, and Future Trends)
Open Access Article
X-HEM: An Explainable and Trustworthy AI-Based Framework for Intelligent Healthcare Diagnostics
by
Mohammad F. Al-Hammouri, Bandi Vamsi, Islam T. Almalkawi and Ali Al Bataineh
Computers 2026, 15(1), 33; https://doi.org/10.3390/computers15010033 - 7 Jan 2026
Abstract
Intracranial Hemorrhage (ICH) remains a critical life-threatening condition where timely and accurate diagnosis using non-contrast Computed Tomography (CT) scans is vital to reduce mortality and long-term disability. Deep learning methods have shown strong potential for automated hemorrhage detection, yet most existing approaches lack confidence quantification and clinical interpretability, which limits their adoption in high-stakes care. This study presents X-HEM, an explainable hemorrhage ensemble model for reliable detection of Intracranial Hemorrhage (ICH) on non-contrast head CT scans. The aim is to improve diagnostic accuracy, interpretability, and confidence for real-time clinical decision support. X-HEM integrates three convolutional backbones (VGG16, ResNet50, DenseNet121) through soft voting. Bayesian uncertainty is estimated using Monte Carlo Dropout, while Grad-CAM++ and SHAP provide spatial and global interpretability. Training and validation were conducted on the RSNA ICH dataset, with external testing on CQ500. The model achieved AUCs of 0.96 (RSNA) and 0.94 (CQ500), demonstrated well-calibrated confidence (low Brier/ECE), and provided explanations that aligned with radiologist-marked regions. The integration of ensemble learning, Bayesian uncertainty, and dual explainability enables X-HEM to deliver confidence-aware, interpretable ICH predictions suitable for clinical use.
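Two of the ingredients named in the abstract, soft voting over per-model probabilities and a Brier score to check how trustworthy those probabilities are, can be illustrated without the CT backbones themselves. The probability arrays below are fabricated stand-ins for the outputs of the three CNN backbones and carry no clinical meaning.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 8                                         # toy batch of CT slices

# Fabricated per-model hemorrhage probabilities (stand-ins for VGG16/ResNet50/DenseNet121).
p_vgg   = rng.uniform(0, 1, n)
p_res   = np.clip(p_vgg + rng.normal(0, 0.05, n), 0, 1)
p_dense = np.clip(p_vgg + rng.normal(0, 0.05, n), 0, 1)
y_true  = (p_vgg > 0.5).astype(int)           # fabricated labels, for demonstration only

# Soft voting: average the predicted probabilities rather than the hard votes.
p_ensemble = (p_vgg + p_res + p_dense) / 3.0

# Brier score: mean squared error between predicted probability and outcome (lower is better).
brier = np.mean((p_ensemble - y_true) ** 2)
print("ensemble probabilities:", p_ensemble.round(3))
print("Brier score:", round(float(brier), 4))
```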
Full article
(This article belongs to the Special Issue AI-Powered IoT (AIoT) Systems: Advancements in Security, Sustainability, and Intelligence)
Open Access Article
Monitoring IoT and Robotics Data for Sustainable Agricultural Practices Using a New Edge–Fog–Cloud Architecture
by
Mohamed El-Ouati, Sandro Bimonte and Nicolas Tricot
Computers 2026, 15(1), 32; https://doi.org/10.3390/computers15010032 - 7 Jan 2026
Abstract
Modern agricultural operations generate high-volume and diverse data (historical and stream) from various sources, including IoT devices, robots, and drones. This paper presents a novel smart farming architecture specifically designed to efficiently manage and process this complex data landscape. The proposed architecture comprises five distinct, interconnected layers: the Source Layer, the Ingestion Layer, the Batch Layer, the Speed Layer, and the Governance Layer. The Source Layer serves as the unified entry point, accommodating structured, spatial, and image data from sensors, drones, and ROS-equipped robots. The Ingestion Layer uses a hybrid fog/cloud architecture, with Kafka for real-time streams and a separate path for batch ingestion of historical data. Data is then segregated for processing: the cloud-deployed Batch Layer employs a Hadoop cluster, Spark, Hive, and Drill for large-scale historical analysis, while the Speed Layer utilizes GeoFlink and PostGIS for low-latency, real-time geovisualization. Finally, the Governance Layer guarantees data quality, lineage, and organization across all components using OpenMetadata. This layered, hybrid approach provides a scalable and resilient framework capable of transforming raw agricultural data into timely, actionable insights, addressing the critical need for advanced data management in smart farming.
Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2025 (ICCSA 2025))
Open Access Article
Contribution-Driven Task Design: Multi-Task Optimization Algorithm for Large-Scale Constrained Multi-Objective Problems
by
Huai Li and Tianyu Liu
Computers 2026, 15(1), 31; https://doi.org/10.3390/computers15010031 - 6 Jan 2026
Abstract
Large-scale constrained multi-objective optimization problems (LSCMOPs) are highly challenging due to the need to optimize multiple conflicting objectives under complex constraints within a vast search space. To address this challenge, this paper proposes a multi-task optimization algorithm based on contribution-driven task design (MTO-CDTD). The algorithm constructs a multi-task optimization framework comprising one original task and multiple auxiliary tasks. Guided by an optimal contribution objective assignment strategy, each auxiliary task optimizes a subset of decision variables that contribute most to a specific objective function. A contribution-guided initialization strategy is then employed to generate high-quality initial populations for the auxiliary tasks. Furthermore, a knowledge transfer strategy based on multi-population collaboration is developed to integrate optimization information from the auxiliary tasks, thereby effectively guiding the original task in searching the large-scale decision space. Extensive experiments on three benchmark test suites—LIRCMOP, CF, and ZXH_CF—with 100, 500, and 1000 decision variables demonstrate that the proposed MTO-CDTD algorithm achieves significant advantages in solving complex LSCMOPs.
Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
Open Access Article
A Fuzzy QFD-Based Methodology for Systematic Generation IT Project Management Plan and Scope Plan Elements
by
Anita Jansone and Ovinda Dilshan Nawalage
Computers 2026, 15(1), 30; https://doi.org/10.3390/computers15010030 - 6 Jan 2026
Abstract
The study presents a methodology that supports the development of the Information Technology Project Management Plan (PMP) and Scope Plan (SP) elements by formulating structured sentences from Quality Function Deployment (QFD) outputs produced through the House of Quality (HoQ) matrix. Rather than proposing QFD as a new planning tool, the novelty lies in systematically mapping HoQ results to newly structured PMP and SP elements based on established standards and then transforming these results into planning statements through an integrated fuzzy logic layer. Additionally, the introduced fuzzy logic component addresses the uncertainty, prioritization needs, and subjectivity inherent in stakeholder inputs. This enables more accurate and consistent assistance in formulating plan elements, while strengthening the alignment between customer needs and project deliverables. Finally, the usefulness of the proposed methodology is demonstrated through an applied IT project case study that evaluates selected elements and highlights the concrete benefits of improving planning efficiency.
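One common way a fuzzy layer over QFD inputs works in practice is to express stakeholder importance ratings as triangular fuzzy numbers, aggregate them, and defuzzify to a crisp priority weight. The sketch below shows that generic pattern only; the linguistic scale, requirement names, and ratings are invented and do not come from the paper's case study.

```python
# Triangular fuzzy numbers (l, m, u) for linguistic importance ratings (illustrative scale).
SCALE = {
    "low":    (1.0, 2.0, 3.0),
    "medium": (3.0, 5.0, 7.0),
    "high":   (7.0, 9.0, 9.0),
}

def aggregate(ratings):
    """Average several stakeholders' triangular ratings component-wise."""
    ls, ms, us = zip(*(SCALE[r] for r in ratings))
    n = len(ratings)
    return (sum(ls) / n, sum(ms) / n, sum(us) / n)

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# Hypothetical customer requirements rated by three stakeholders.
requirements = {
    "on-time delivery": ["high", "high", "medium"],
    "budget adherence": ["medium", "medium", "low"],
    "stakeholder communication": ["high", "medium", "medium"],
}

for name, ratings in requirements.items():
    crisp = defuzzify(aggregate(ratings))
    print(f"{name}: priority weight = {crisp:.2f}")
```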
Full article
Open Access Article
Mechanical Optimizations with Variable Mesh Size, Using Differential Evolution Algorithm
by
David Robledo-Jimenez, Carlos Gustavo Manriquez-Padilla, Arturo Yosimar Jaen Cuellar, Angel Perez-Cruz and Juan Jose Saucedo-Dorantes
Computers 2026, 15(1), 29; https://doi.org/10.3390/computers15010029 - 6 Jan 2026
Abstract
Structural problems are a common topic in optimization research; with the use of finite element analysis (FEA), the aim of these works is to improve the mechanical behavior of the distinct elements or bodies involved in the optimization problem. However, the impact of the mesh discretization on the outcome of the optimization process has not been studied in previous works. The present work investigates the effect of mesh element size on the mechanical optimization of two case studies: the first is a modal optimization of a cantilever beam, and the second is a cellular beam whose weight is to be reduced under static load. In these two optimization problems, variables commonly used in the literature were employed, while additionally including the mesh size as an extra variable. The computational framework is implemented in MATLAB R2022a, and the modal and weight optimizations are carried out through APDL (ANSYS Parametric Design Language) executed in batch mode. The results demonstrate that treating the mesh element size as a variable can reduce the computational time required for the mechanical optimization, achieving a 96% reduction in computation time compared with using the finest element size in case 1 and a 90% reduction for the second case study.
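A minimal way to see how mesh size can ride along as an extra design variable is to hand it to a differential evolution solver together with the geometric variables and let the objective penalize both structural response and solver cost. The sketch below uses SciPy's differential_evolution with an analytic stand-in objective; the paper's actual evaluations run through MATLAB and APDL/ANSYS batch jobs, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import differential_evolution

def objective(x):
    """Stand-in for an FEA evaluation: x = (width, height, mesh_size).
    Penalizes weight and deflection, plus a cost term that grows as the mesh is refined."""
    width, height, mesh_size = x
    weight = width * height                      # proxy for beam mass
    deflection = 1.0 / (width * height ** 3)     # proxy for stiffness-driven response
    solver_cost = 0.01 / mesh_size               # finer mesh -> more elements -> more time
    return weight + 50.0 * deflection + solver_cost

bounds = [
    (0.02, 0.20),   # width  [m]
    (0.02, 0.30),   # height [m]
    (0.002, 0.02),  # mesh element size [m], treated as a decision variable
]

result = differential_evolution(objective, bounds, seed=0, maxiter=100, polish=True)
print("best design (width, height, mesh size):", result.x.round(4))
print("objective value:", round(result.fun, 4))
```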
Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
Open Access Article
Millimeter-Wave Radar and Mixed Reality Virtual Reality System for Agility Analysis of Table Tennis Players
by
Yung-Hoh Sheu, Li-Wei Tai, Li-Chun Chang, Tz-Yun Chen and Sheng-K Wu
Computers 2026, 15(1), 28; https://doi.org/10.3390/computers15010028 - 6 Jan 2026
Abstract
This study proposes an integrated agility assessment system that combines Millimeter-Wave (MMW) radar, Ultra-Wideband (UWB) ranging, and Mixed Reality (MR) technologies to quantitatively evaluate athlete performance with high accuracy. The system utilizes the fine motion-tracking capability of MMW radar and the immersive real-time visualization provided by MR to ensure reliable operation under low-light conditions and multi-object occlusion, thereby enabling precise measurement of mobility, reaction time, and movement distance. To address the challenge of player identification during doubles testing, a one-to-one UWB configuration was adopted, in which each base station was paired with a wearable tag to distinguish individual athletes. UWB identification was not required during single-player tests. The experimental protocol included three specialized agility assessments—Table Tennis Agility Test I (TTAT I), Table Tennis Doubles Agility Test II (TTAT II), and the Agility T-Test (ATT)—conducted with more than 80 table tennis players of different technical levels (80% male and 20% female). Each athlete completed two sets of two trials to ensure measurement consistency and data stability. Experimental results demonstrated that the proposed system effectively captured displacement trajectories, movement speed, and reaction time. The MMW radar achieved an average measurement error of less than 10%, and the overall classification model attained an accuracy of 91%, confirming the reliability and robustness of the integrated sensing pipeline. Beyond local storage and MR-based live visualization, the system also supports cloud-based data uploading for graphical analysis and enables MR content to be mirrored on connected computer displays. This feature allows coaches to monitor performance in real time and provide immediate feedback. By integrating the environmental adaptability of MMW radar, the real-time visualization capability of MR, UWB-assisted athlete identification, and cloud-based data management, the proposed system demonstrates strong potential for professional sports training, technical diagnostics, and tactical optimization. It delivers timely and accurate performance metrics and contributes to the advancement of data-driven sports science applications.
Full article
(This article belongs to the Section Human–Computer Interactions)
Open Access Review
A Review of AI-Powered Controls in the Field of Magnetic Resonance Imaging
by
Mads Sloth Vinding and Torben Ellegaard Lund
Computers 2026, 15(1), 27; https://doi.org/10.3390/computers15010027 - 5 Jan 2026
Abstract
Artificial intelligence (AI) is increasingly reshaping the control mechanisms that govern magnetic resonance imaging (MRI), enabling faster, safer, and more adaptive operation of the scanner’s physical subsystems. This review provides a comprehensive survey of recent AI-driven advances in core control domains: radio frequency (RF) pulse design and specific absorption rate (SAR) prediction, motion-dependent modeling of the B0 and B1 fields, and gradient system characterization and correction. Across these domains, deep learning models—convolutional, recurrent, generative, and temporal convolutional networks—have emerged as powerful computational surrogates for numerical electromagnetic simulations, Bloch simulations, motion tracking, and gradient impulse response modeling. These networks achieve subject-specific field or SAR predictions within seconds or milliseconds, mitigating long-standing limitations associated with inter-subject variability, non-linear system behavior, and the need for extensive calibration. We highlight methodological themes such as physics-guided training, reinforcement learning for RF pulse design, subject-specific fine-tuning, uncertainty considerations, and the integration of learned models into real-time MRI workflows. Open challenges and future directions include unified multi-physics frameworks, deep learning approaches for generalizing across anatomies and coil configurations, robust validation across vendors and field strengths, and safety-aware AI design. Overall, AI-powered control strategies are poised to become foundational components of next-generation, high-performance, and personalized MRI systems.
Full article
(This article belongs to the Special Issue Artificial Intelligence in Control)
Open Access Article
Hybrid GNN–LSTM Architecture for Probabilistic IoT Botnet Detection with Calibrated Risk Assessment
by
Tetiana Babenko, Kateryna Kolesnikova, Yelena Bakhtiyarova, Damelya Yeskendirova, Kanibek Sansyzbay, Askar Sysoyev and Oleksandr Kruchinin
Computers 2026, 15(1), 26; https://doi.org/10.3390/computers15010026 - 5 Jan 2026
Abstract
Detecting botnets in IoT environments is difficult because most intrusion detection systems treat network events as independent observations. In practice, infections spread through device relationships and evolve through distinct temporal phases. A system that ignores either aspect will miss important patterns. This paper explores a hybrid architecture combining Graph Neural Networks with Long Short-Term Memory networks to capture both structural and temporal dynamics. The GNN component models behavioral similarity between traffic flows in feature space, while the LSTM tracks how patterns change as attacks progress. The two components are trained jointly so that relational context is preserved during temporal learning. We evaluated the approach on two datasets with different characteristics. N-BaIoT contains traffic from nine devices infected with Mirai and BASHLITE, while CICIoT2023 covers 105 devices across 33 attack types. On N-BaIoT, the model achieved 99.88% accuracy with F1 of 0.9988 and Brier score of 0.0015. Cross-validation on CICIoT2023 yielded 99.73% accuracy with Brier score of 0.0030. The low Brier scores suggest that probability outputs are reasonably well calibrated for risk-based decision making. Consistent performance across both datasets provides some evidence that the architecture generalizes beyond a single benchmark setting.
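The structural-plus-temporal idea, one pass of neighborhood aggregation over a flow-similarity graph followed by an LSTM over the resulting sequence, can be sketched in plain PyTorch as below. The hand-built graph convolution, layer sizes, and random inputs are illustrative assumptions, not the paper's architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Minimal graph convolution: average neighbor features, then a linear projection."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin(adj @ x / deg))

class GnnLstmDetector(nn.Module):
    """Relational encoding of each time step's flow graph, then temporal modelling."""
    def __init__(self, feat_dim=16, hidden=32):
        super().__init__()
        self.gnn = GraphConv(feat_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x_seq, adj):
        # x_seq: (batch, time, nodes, feat); adj: (nodes, nodes) shared similarity graph
        b, t, n, f = x_seq.shape
        g = self.gnn(x_seq.reshape(b * t, n, f), adj)        # per-step relational encoding
        g = g.mean(dim=1).reshape(b, t, -1)                  # pool nodes, restore time axis
        out, _ = self.lstm(g)
        return torch.sigmoid(self.head(out[:, -1]))          # botnet probability per sequence

x = torch.randn(4, 10, 20, 16)            # 4 sequences, 10 steps, 20 flows, 16 features
adj = (torch.rand(20, 20) > 0.7).float()  # toy flow-similarity adjacency
print(GnnLstmDetector()(x, adj).shape)    # torch.Size([4, 1])
```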
Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
Open Access Systematic Review
Machine Learning and Ensemble Methods for Cardiovascular Disease Prediction: A Systematic Review of Approaches, Performance Trends, and Research Challenges
by
Ghazala Gul, Imtiaz Ali Korejo, Dil Nawaz Hakro, Haitham Alqahtani, Abdullah Abbasi, Muhammad Babar, Osama Al Rahbi and Najma Imtiaz Ali
Computers 2026, 15(1), 25; https://doi.org/10.3390/computers15010025 - 5 Jan 2026
Abstract
Knowledge discovery helps mitigate the shortcomings of classical machine learning, especially the challenges posed by imbalanced, high-dimensional, and noisy data. Ensemble learning is characterized by the adaptive combination of multiple models, voting and other data fusion strategies, and the incorporation of disparate information fusion methods, with the goal of improving a predictive model’s accuracy, stability, and generalization. This paper provides a summary of the important approaches to ensemble learning and their real-world uses, emphasizing challenges and opportunities for future work. It also discusses how ensemble learning integrates with emergent areas such as deep learning and reinforcement learning, and describes the most important machine learning methods for predicting heart disease, including decision trees, support vector machines, artificial neural networks, Naïve Bayes, random forest, and K-nearest neighbors.
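As a concrete illustration of the soft-voting ensembling the review surveys, the sketch below combines three of the classifier families it names (decision tree, SVM, and k-nearest neighbors) with scikit-learn's VotingClassifier on a synthetic, mildly imbalanced binary dataset. The data and hyperparameters are placeholders, not any study's clinical records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic, mildly imbalanced stand-in for a heart-disease dataset.
X, y = make_classification(n_samples=1000, n_features=13, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),   # probability=True enables soft voting
        ("knn", KNeighborsClassifier(n_neighbors=7)),
    ],
    voting="soft",            # average predicted probabilities across base models
)
ensemble.fit(X_tr, y_tr)
print("ROC-AUC:", round(roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]), 3))
```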
Full article
(This article belongs to the Special Issue Adaptive Decision Making Across Industries with AI and Machine Learning: Frameworks, Challenges, and Innovations)
Topics
Topic in
AI, Computers, Electronics, Information, MAKE, Signals
Recent Advances in Label Distribution Learning
Topic Editors: Xin Geng, Ning Xu, Liangxiao Jiang
Deadline: 31 January 2026
Topic in
Applied Sciences, Computers, JSAN, Technologies, BDCC, Sensors, Telecom, Electronics
Electronic Communications, IOT and Big Data, 2nd Volume
Topic Editors: Teen-Hang Meen, Charles Tijus, Cheng-Chien Kuo, Kuei-Shu Hsu, Jih-Fu Tu
Deadline: 31 March 2026
Topic in
AI, Buildings, Computers, Drones, Entropy, Symmetry
Applications of Machine Learning in Large-Scale Optimization and High-Dimensional Learning
Topic Editors: Jeng-Shyang Pan, Junzo Watada, Vaclav Snasel, Pei Hu
Deadline: 30 April 2026
Topic in
Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe Hou
Deadline: 31 May 2026
Special Issues
Special Issue in
Computers
Systems and Technologies for IT/OT Integration in Industry 4/5.0 Environments (SITE)
Guest Editors: Riccardo Venanzi, Paolo Bellavista
Deadline: 15 January 2026
Special Issue in
Computers
Future Trends in Computer Programming Education
Guest Editor: Stelios Xinogalos
Deadline: 31 January 2026
Special Issue in
Computers
AI in Complex Engineering Systems
Guest Editor: Sandi Baressi Šegota
Deadline: 31 January 2026
Special Issue in
Computers
Computational Science and Its Applications 2025 (ICCSA 2025)
Guest Editor: Osvaldo Gervasi
Deadline: 31 January 2026