Journal Description
Computers
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.5 days after submission; acceptance to publication takes 3.9 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds and Computers.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
A Self-Adaptive LLM-Based Framework for Automated Extraction and Structuring of Earthquake Information from Heterogeneous Web Sources
Computers 2026, 15(5), 294; https://doi.org/10.3390/computers15050294 - 5 May 2026
Abstract
The rapid growth of heterogeneous web sources has created significant challenges for the automated extraction and structuring of critical domain-specific information, particularly in real-time seismic monitoring scenarios. Despite the existence of official governmental reporting systems, relevant earthquake-related data are often distributed across diverse online platforms with highly variable and dynamically evolving HTML (HyperText Markup Language) structures, leading to incomplete, delayed, or inconsistent information retrieval. Existing rule-based and semi-automated approaches lack scalability and robustness under such conditions. To address this gap, this study proposes a self-adaptive framework based on large language models (LLMs) for the automated extraction and structuring of earthquake-related web content. The proposed approach integrates transformer-based schema generation, repository-guided schema matching, and an iterative refinement mechanism, enabling the system to dynamically adapt to heterogeneous document structures. A formal utility-based decision mechanism is introduced to optimize schema selection and reuse, while embedding-based similarity modeling facilitates efficient transfer of extraction patterns across structurally related webpages. The experimental evaluation was conducted on a heterogeneous benchmark dataset comprising multiple web domains with diverse structural characteristics. The results demonstrate that the proposed framework achieves a success rate of 85% across all evaluated models, with the best-performing configuration reaching an extraction accuracy of 96.5% and a final composite score of 84.26. Additional analysis reveals significant improvements in extraction completeness, reduction in false positives and false negatives, and effective reuse of a compact set of robust schemas. Error analysis indicates that the primary challenges are associated with noisy HTML structures and incorrect DOM (Document Object Model) element selection, rather than deficiencies in textual content. The findings confirm that combining lightweight transformer models with adaptive memory and schema reuse mechanisms enables the development of scalable, robust, and high-performance web extraction systems. The proposed approach is particularly suitable for real-time information retrieval in safety-critical domains, where timely and accurate data aggregation from heterogeneous sources is essential.
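A minimal sketch of how the utility-based schema selection could look, assuming a schema repository that stores an embedding and a past success rate per schema; the helper names, the blending weight alpha, and the reuse threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of repository-guided schema matching: embed a page's
# structural signature, then reuse the stored schema whose utility score
# (similarity blended with past extraction success) clears a threshold.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_schema(page_vec, repository, alpha=0.7, threshold=0.6):
    """repository: list of dicts with 'embedding', 'success_rate', 'schema'."""
    best, best_utility = None, -1.0
    for entry in repository:
        utility = alpha * cosine(page_vec, entry["embedding"]) \
                  + (1 - alpha) * entry["success_rate"]
        if utility > best_utility:
            best, best_utility = entry, utility
    if best is not None and best_utility >= threshold:
        return best["schema"]          # reuse an existing schema
    return None                        # fall back to LLM schema generation
```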
Full article
(This article belongs to the Special Issue Trustworthy and Efficient Large Language Models: Methods, Systems and Applications)
Open Access Article
Non-Standard Squat Posture Detection Method Using Human Skeleton
by Leiyue Yao, Zhiqiang Dai and Keyun Xiong
Computers 2026, 15(5), 293; https://doi.org/10.3390/computers15050293 - 5 May 2026
Abstract
Squats are essential for assessing lower limb strength. However, performing them incorrectly without professional guidance often leads to sports injuries. Currently, most detection methods rely heavily on deep neural networks and massive datasets. This approach brings several downsides. It involves high data labeling costs and heavy computing demands. It is also difficult to achieve low-latency feedback on mobile devices. Furthermore, these models often lack robustness when dealing with individual body differences. To tackle these issues, we propose a new real-time squat detection method. Our approach is built on prior rules and statistical models. Here is how it works. First, we use MediaPipe to track the body’s skeleton joints in real time from video feeds, calculating the hip and knee angles frame by frame. Next, we build a hip-knee coordination model using linear regression. This step helps us measure how these joints move together dynamically. Finally, we verify the squat depth using a geometry-based tolerance mechanism. This feature accounts for measurement noise and natural body variations, allowing us to accurately judge whether the overall posture is standard. We tested our approach on three different squat styles. The results show that our method catches improper forms quickly and efficiently in real time, achieving an accuracy of 90%.
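A minimal sketch of the rule-based pipeline described above, assuming 2-D keypoints (e.g., from MediaPipe) are already available per frame; the toy angle values and the 12-degree tolerance are illustrative only, not the authors' code.

```python
# Sketch: joint angles from 2-D keypoints, a linear hip-knee coordination
# fit, and a tolerance-based check of whether a frame stays coordinated.
import numpy as np

def joint_angle(a, b, c):
    """Angle at b (degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Fit hip angle as a linear function of knee angle over recorded frames.
knee = np.array([170, 150, 120, 95, 80], dtype=float)   # toy values (deg)
hip  = np.array([175, 150, 115, 85, 70], dtype=float)
slope, intercept = np.polyfit(knee, hip, deg=1)

def is_coordinated(knee_angle, hip_angle, tol_deg=12.0):
    """Flag frames whose hip angle strays from the fitted relation."""
    return abs(hip_angle - (slope * knee_angle + intercept)) <= tol_deg
```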
Full article
Open Access Article
A Language for Modeling Declarative Knowledge Bases in the Context of Model-Driven Engineering
by Aleksandr Yurin and Nikita Dorodnykh
Computers 2026, 15(5), 292; https://doi.org/10.3390/computers15050292 - 4 May 2026
Abstract
End-user development (EUD) and model-driven engineering (MDE) are particularly valuable for building classical intelligent systems that rely on declarative knowledge bases. In these knowledge bases, the key dependencies of the domain can be described in the form of logical rules. The general-purpose modeling language used in MDE, specifically UML, enables modeling of static data structures and the dynamics of object behavior; however, it provides no dedicated support for modeling logical rules. In this paper, we propose a rule visual modeling language inspired by UML—Rule Visual Modeling Language (RVML)—which expands the capabilities of MDE in terms of using domain-specific visual languages. This approach substantially supports end-users in constructing declarative knowledge bases. We present the formal semantics, visual syntax, and features of RVML, along with two industrial case studies. We empirically evaluate the effectiveness of RVML in development compared to other graphical notations used for modeling logical rules. Our evaluation demonstrates that RVML provides superior expressiveness and better preservation of semantic integrity.
Full article
Open Access Article
FetalNet 1.0: TOPSIS-Guided Ensemble Learning with Genetic Feature Selection and SHAP Explainability for Fetal Health Classification from Cardiotocography
by Shweta, Neha Gupta, Meenakshi Gupta, Massimo Donelli, Yogita Arora and Achin Jain
Computers 2026, 15(5), 291; https://doi.org/10.3390/computers15050291 - 2 May 2026
Abstract
Fetal health assessment is a crucial aspect of prenatal care, aimed at the early detection of potential complications to ensure optimal outcomes for both mother and child. Traditional methods, such as the visual analysis of cardiotocography (CTG) data by healthcare professionals, are valuable but often subjective and time-consuming. This work investigates the application of machine learning techniques, with a focus on ensemble learning, to enhance the accuracy and efficiency of fetal health classification based on CTG data. A Genetic Algorithm (GA) is employed for optimal feature selection, identifying the most discriminative subset of CTG attributes to improve model performance and reduce computational complexity. We employ a combination of advanced machine learning models, including AdaBoost, Gaussian Naïve Bayes, Decision Tree, k-nearest neighbors (KNN), and Logistic Regression. The top two models were selected based on comprehensive performance metrics using the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method. These models were then integrated through ensemble learning approaches, such as stacking, Particle Swarm Optimization (PSO) weighted averaging, and soft voting, to improve prediction reliability. Our proposed stacking ensemble model achieves a remarkable accuracy of 97.9%, demonstrating its potential as a robust, data-driven tool for fetal health monitoring and the early identification of at-risk pregnancies. The results indicate that machine learning can effectively complement traditional fetal health assessment methods by providing an objective framework to support clinical decision-making.
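TOPSIS itself is a standard published procedure; the sketch below shows how it would rank candidate models from a metric matrix of benefit criteria. The example scores and weights are invented for illustration, not taken from the paper.

```python
# Illustrative TOPSIS ranking of candidate models from a metric matrix
# (rows = models, columns = benefit criteria such as accuracy, F1, AUC).
import numpy as np

def topsis(scores, weights):
    m = scores / np.linalg.norm(scores, axis=0)      # vector normalization
    v = m * weights                                   # weighted matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)        # benefit criteria only
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)               # closeness in [0, 1]

scores = np.array([[0.96, 0.95, 0.98],    # e.g. model A
                   [0.91, 0.90, 0.94],    # e.g. model B
                   [0.88, 0.86, 0.92]])   # e.g. model C
weights = np.array([0.4, 0.3, 0.3])       # illustrative criterion weights
print(np.argsort(topsis(scores, weights))[::-1])  # best-to-worst indices
```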
Full article
(This article belongs to the Section AI-Driven Innovations)
Open Access Article
Performance Evaluation of Post-Quantum Digital Signature in QPSK- and 16QAM-Based WDM Communication Systems
by Duaa J. Khalaf, Arwa A. Moosa and Tayseer S. Atia
Computers 2026, 15(5), 290; https://doi.org/10.3390/computers15050290 - 1 May 2026
Abstract
The integration of post-quantum digital signature (PQDS) algorithms into coherent wavelength-division multiplexing (WDM) optical networks introduces a non-negligible cryptographic overhead that fundamentally alters physical-layer performance characteristics. Unlike conventional studies that treat security and transmission independently, this work provides a cross-layer evaluation of PQDS-induced payload expansion and its direct impact on coherent optical system behavior under realistic, DSP-aligned conditions. A structured and reproducible evaluation framework is proposed to systematically analyze this interaction across multiple transmission scenarios, ranging from a single-channel QPSK baseline to a 16-channel WDM system employing both QPSK and 16QAM modulation formats. Key system parameters—including launch power, local oscillator power, bit rate, and fiber length—are jointly optimized, while performance is rigorously assessed in terms of bit error rate (BER), Q-factor, and maximum transmission reach. The results demonstrate a clear performance degradation trend driven by both spectral efficiency scaling and cryptographic payload expansion. The single-channel QPSK system achieves a maximum reach of 203 km, which decreases to 194 km in the 16-channel WDM QPSK configuration due to inter-channel interference and nonlinear effects. In contrast, the 16-channel WDM 16QAM system exhibits a significantly reduced reach of 103 km, reflecting its heightened sensitivity to noise, chromatic dispersion, and fiber nonlinearities. Furthermore, increased payload size associated with PQDS schemes is shown to exacerbate transmission impairments by extending frame duration and intensifying inter-channel interactions. These findings identify PQDS-induced overhead as a critical system-level constraint that directly governs transmission efficiency, scalability, and performance limits. The study highlights the necessity of cross-layer co-design strategies, where cryptographic mechanisms and physical-layer parameters are jointly optimized to enable efficient, reliable, and quantum-safe coherent optical communication systems.
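As a rough illustration of why signature overhead matters at the physical layer, the arithmetic below extends a frame by one post-quantum signature; the payload size, bit rate, and the roughly 2.4 kB signature (the order of an ML-DSA-44 signature) are assumptions, and the paper's results come from full coherent-system simulation, not this calculation.

```python
# Back-of-the-envelope effect of PQDS payload expansion on frame duration
# (illustrative numbers only).
payload_bits = 1500 * 8          # assumed baseline frame payload (bits)
sig_bits = 2420 * 8              # assumed PQDS signature (~2.4 kB)
bit_rate = 10e9                  # assumed 10 Gb/s per channel

t_plain = payload_bits / bit_rate
t_signed = (payload_bits + sig_bits) / bit_rate
print(f"frame duration grows {t_signed / t_plain:.2f}x "
      f"({(t_signed - t_plain) * 1e9:.1f} ns longer)")
```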
Full article
(This article belongs to the Special Issue Emerging Trends in Network Security and Applied Cryptography)
Open Access Article
Generative AI for Education in Infrastructure Systems: Lessons from a BIM-Based Rule-Checking
by Islem Sahraoui, Kinam Kim, Lu Gao, Zia Ud Din and Ahmed Senouci
Computers 2026, 15(5), 289; https://doi.org/10.3390/computers15050289 - 1 May 2026
Abstract
This study investigates the educational potential of Large Language Models (LLMs) for automating rule-checking tasks in Building Information Modeling (BIM) instruction. A quasi-experimental classroom implementation was conducted over two consecutive semesters with 55 graduate students in a Construction Management program. In Fall 2024, students were taught manual rule-checking techniques, whereas in Spring 2025, students received additional instruction in LLM-based prompting and Python code generation for automated compliance checking. A mixed-methods evaluation was conducted using surveys, NASA Task Load Index ratings, assignment-based learning outcomes, and structured interviews. Compared with the manual-only cohort, the LLM-assisted cohort reported significantly lower mental, temporal, and frustration demands, as well as higher perceived time efficiency and overall effectiveness. The LLM-assisted group also achieved significantly higher performance in violation detection and method accuracy, although no significant differences were observed in code interpretation or reflective analysis. Qualitative findings further revealed both the efficiency benefits of AI-assisted automation and persistent challenges related to prompt refinement, debugging, and output validation. These findings suggest that LLMs can enhance BIM instruction when paired with structured pedagogical scaffolding to support critical oversight and novice learners.
Full article
(This article belongs to the Special Issue The Digital Transformation of Education: Trends, Technologies, and Responsible Innovation)
Open Access Article
AIGU-DPFL: Adaptive Differentially Private Federated Learning with Importance-Based Gradient Updates
by Fangfang Shan, Zhuo Chen, Yifan Mao, Yuhang Liu, Lulu Fan and Yanlong Lu
Computers 2026, 15(5), 288; https://doi.org/10.3390/computers15050288 - 1 May 2026
Abstract
Federated learning, a decentralized machine learning framework, allows multiple participants to jointly train models while keeping their raw data local and unshared. Nevertheless, during the exchange of model updates, the communicated information can still introduce privacy vulnerabilities and potentially result in the exposure of user data. Over the past few years, differential privacy methods have been broadly incorporated into federated learning frameworks to strengthen the protection of sensitive data. Nevertheless, the noise required to satisfy differential privacy guarantees often causes significant degradation in model performance. Prior studies have typically employed a fixed noise-injection strategy following gradient clipping. Although such methods provide privacy protection, they overlook the varying importance of different gradient dimensions, resulting in noise being injected into unimportant or redundant parameters, thereby causing unnecessary performance loss. To address these limitations, we propose an adaptive differentially private federated learning scheme with importance-based gradient updates (AIGU-DPFL). Specifically, we focus on coordinates with high information content and introduce an adaptive noise injection mechanism, which perturbs gradient updates to satisfy differential privacy guarantees while dynamically controlling noise intensity, thereby achieving sparse and noise-effective gradient updates. AIGU-DPFL markedly enhances the training effectiveness of federated learning models. Comprehensive evaluations conducted on real-world datasets indicate that the proposed method achieves superior performance compared to existing differentially private federated learning techniques.
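A conceptual sketch of the importance-based update idea, assuming plain NumPy gradients: keep only the top-k coordinates by magnitude, clip, and perturb just those. Real differential-privacy guarantees also require calibrated privacy accounting, which this toy deliberately omits.

```python
# Conceptual sketch of importance-based DP updates: retain the k largest
# gradient coordinates, clip the sparse update, and add Gaussian noise
# only on the retained coordinates.
import numpy as np

def private_sparse_update(grad, k, clip_norm=1.0, sigma=0.8, rng=None):
    rng = rng or np.random.default_rng()
    idx = np.argsort(np.abs(grad))[-k:]            # most informative coords
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    norm = np.linalg.norm(sparse)
    if norm > clip_norm:                           # per-update clipping
        sparse *= clip_norm / norm
    sparse[idx] += rng.normal(0.0, sigma * clip_norm, size=k)
    return sparse
```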
Full article
(This article belongs to the Special Issue Next-Generation Cyber Defense: AI, Automation and Adaptive Security)
Open Access Article
On-Device Transformer Architectures for Speech Evaluation in Neurodegenerative Disease Detection
by Lara Marie Reimer, Leonard Pries, Florian Schweizer, Leon Nissen and Stephan M. Jonas
Computers 2026, 15(5), 287; https://doi.org/10.3390/computers15050287 - 1 May 2026
Abstract
Speech alterations are early markers of neurodegenerative diseases. Transformer-based speech models such as Whisper have advanced automated speech assessment, but most systems rely on cloud-based computation, raising privacy concerns. On-device processing could offer a scalable and privacy-preserving alternative. This research’s objective was to evaluate whether a fully on-device speech analysis pipeline can achieve competitive accuracy in detecting Alzheimer’s disease and to quantify the contributions of acoustic, linguistic, and embedding features. Therefore, we developed an iOS application running all components, including acoustic analysis, two transformer-based speech-to-text modules (WhisperBase and quantized CrisperWhisper), linguistic feature extraction, and embedding generation, directly on the device. Using the ADReSS Challenge 2020 dataset (N = 156), we trained classical machine-learning classifiers across 20 configurations and evaluated them via a stratified 10-fold cross-validation. Area under the receiver operating characteristic curve (AUC), accuracy, precision, recall, and F1 scores were used as performance metrics. An ablation study examined the relevance of each feature group. The best-performing setup (Random Forest with CrisperWhisper transcription and Apple embeddings) achieved an accuracy of 85.4% and an AUC of 0.85. Performance was 5–7% below benchmark models relying on manual transcripts or server-based processing. Embedding features provided the strongest individual contribution, but the highest accuracy required combining acoustic, linguistic, and embedding information. A fully on-device pipeline for Alzheimer’s disease detection from speech is feasible and achieves competitive accuracy while maintaining strict data privacy. These findings highlight the potential of on-device transformer architectures for scalable, privacy-preserving digital screening. Future work should validate the approach in larger and more diverse cohorts.
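A sketch of the feature-group ablation protocol, with random stand-in matrices in place of the real acoustic, linguistic, and embedding features; the classifier choice mirrors the best-performing setup, but the code is a reconstruction, not the study's pipeline.

```python
# Sketch: 10-fold stratified CV accuracy for every subset of the three
# feature groups, approximating the paper's ablation design.
from itertools import combinations
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
groups = {"acoustic": rng.normal(size=(156, 20)),
          "linguistic": rng.normal(size=(156, 15)),
          "embedding": rng.normal(size=(156, 64))}   # stand-in matrices
y = rng.integers(0, 2, size=156)                     # AD vs. control labels

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for r in range(1, len(groups) + 1):
    for names in combinations(groups, r):
        X = np.hstack([groups[n] for n in names])
        acc = cross_val_score(RandomForestClassifier(random_state=0),
                              X, y, cv=cv).mean()
        print("+".join(names), f"{acc:.3f}")
```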
Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) in Medical Informatics)
Open Access Article
Multi-Centre Liver Tumour Classification via Federated Learning: Investigating Data Heterogeneity, Transfer Learning, and Model Efficiency
by Degang Zhu, Shiqi Wei and Xinming Zhang
Computers 2026, 15(5), 286; https://doi.org/10.3390/computers15050286 - 1 May 2026
Abstract
This paper investigates federated multi-centre liver tumour classification from contrast-enhanced CT under realistic data heterogeneity and domain shift. To address the practical constraint that medical data are often siloed across institutions, we develop a FedProx-based federated learning pipeline that enables collaborative training without exchanging raw patient data. Using the LiTS dataset as the training domain, we construct a slice-level binary classification task based on voxel-level annotations, while rigorously assessing out-of-distribution generalisation on an external held-out dataset, 3D-IRCADb. We conduct comprehensive experiments across multiple backbone architectures, including ResNet-50, EfficientNet-B3, ViT-B/16, and MobileNetV3-Small, comparing FedProx and FedAvg under three heterogeneity intensities (IID, mild non-IID, and severe non-IID). Furthermore, we evaluate transfer learning strategies, ranging from frozen backbones to partial fine-tuning of the last stage, and perform ablations on the proximal coefficient μ and the number of local epochs E to characterise optimisation behaviour. Our results show that FedProx is generally comparable to FedAvg, with slightly more stable behaviour in some heterogeneous settings. We also observe a clear validation-to-external gap, indicating that external-domain robustness remains challenging and requires cautious interpretation for deployment. ImageNet pretraining yields consistent gains, particularly for data-sparse clients, while partial fine-tuning enhances adaptation to CT-specific features. Finally, MobileNetV3-Small offers a favourable performance–efficiency trade-off by reducing communication payload and computation cost, supporting practical deployment on resource-constrained clinical edge devices.
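The published FedProx method augments each client's task loss with a proximal term (mu/2) * ||w - w_global||^2 that penalizes drift from the current global weights. Below is a PyTorch sketch of one local step; the wrapper function and argument names are illustrative, not this paper's code.

```python
# One FedProx-style local update: task loss plus a proximal penalty
# toward the global model received at the start of the round.
import torch

def local_step(model, global_params, batch, loss_fn, optimizer, mu=0.01):
    x, y = batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    prox = sum(((w - g.detach()) ** 2).sum()
               for w, g in zip(model.parameters(), global_params))
    (loss + 0.5 * mu * prox).backward()
    optimizer.step()
    return float(loss)
```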
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain (3rd Edition))
Open Access Article
A Rigorous Comparative Study of Supervised Machine Learning Techniques for Network Anomaly Detection: Empirical Insights from the UNSW-NB15 Dataset
by Nouf Alkhater
Computers 2026, 15(5), 285; https://doi.org/10.3390/computers15050285 - 1 May 2026
Abstract
The increasing complexity of modern network infrastructures has intensified the need for reliable and efficient intrusion detection systems. While advanced deep learning approaches have demonstrated strong performance, their high computational cost and limited interpretability restrict their practical deployment in real-time environments. This study presents a systematic empirical evaluation of four supervised machine learning models—Decision Tree, Random Forest, Support Vector Machine (SVM), and XGBoost—for network anomaly detection using the UNSW-NB15 dataset. To ensure methodological rigor, a structured preprocessing pipeline and a five-fold stratified cross-validation framework were employed. Model performance was assessed using multiple evaluation metrics, including accuracy, precision, recall, F1-score, and area under the ROC curve (AUC). In addition, a feature importance analysis was conducted to identify the most influential network traffic attributes contributing to anomaly detection. The results show that ensemble-based methods outperform individual classifiers, with XGBoost achieving the best overall performance (accuracy = 0.97, AUC = 0.98) along with high stability across validation folds. The analysis further reveals that a subset of flow-based and temporal features—such as sttl, sload, and dload—plays a critical role in distinguishing between normal and malicious traffic. This study provides a rigorous, interpretable, and reproducible benchmarking framework for supervised machine learning in network anomaly detection. The findings provide practical insights for developing efficient and scalable intrusion detection systems suitable for real-world deployment.
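A plausible reconstruction of the evaluation protocol (not the author's code): five-fold stratified cross-validation of XGBoost with the reported metric set, using the scikit-learn API.

```python
# Sketch: 5-fold stratified CV for XGBoost over preprocessed UNSW-NB15
# features, reporting the paper's metric set.
from sklearn.model_selection import StratifiedKFold, cross_validate
from xgboost import XGBClassifier

def evaluate(X, y):
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]
    res = cross_validate(XGBClassifier(eval_metric="logloss"),
                         X, y, cv=cv, scoring=scoring)
    return {m: res[f"test_{m}"].mean() for m in scoring}
```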
Full article
(This article belongs to the Special Issue Intelligent Systems Security: AI-Driven Approaches for Attacks, Detection & Explainability)
Open Access Article
Immersive VR-MoCap for Creative Motion Design in Character Animation Training: A Classroom-Based Comparative Study
by Xinyi Jiang, Muying Luo, Zainuddin Ibrahim, Azlan Abdul Aziz and Azhar Jamil
Computers 2026, 15(5), 284; https://doi.org/10.3390/computers15050284 - 30 Apr 2026
Abstract
Although motion capture has become integral to contemporary animation pipelines, university teaching still asks students to learn motion largely through screen-based keyframing. To address this gap, this classroom-based comparative study evaluated one structured motion-design lesson within an immersive MoCap-supported training module. Sixty-eight undergraduates in a computer animation course completed the same task in either a Keyframe condition (n = 33) or a VR-MoCap condition (n = 35), with instructional delivery mode as the only difference. Creative performance was assessed in originality, fluency, aesthetic quality, clarity, and a composite score. MANOVA revealed a significant multivariate effect of condition (Pillai’s trace = 0.454, F(4, 63) = 13.12, p < 0.001). Relative to keyframe instruction, VR-MoCap produced significantly higher originality, fluency, clarity, and composite performance, whereas aesthetic quality did not differ significantly. Supplementary group-interview responses further indicated that students experienced the immersive condition as more engaging, more intuitive, and better suited to immediate feedback and embodied movement exploration. Immersive VR-MoCap appears most useful in the early phases of motion design and is better understood as complementing, rather than replacing, conventional keyframe training.
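The reported multivariate test can be reproduced in statsmodels as sketched below; the dataframe here is synthetic toy data with illustrative column names, so its output will not match the paper's statistics.

```python
# Sketch: MANOVA of four creativity outcomes on instruction condition,
# whose output table includes Pillai's trace.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 34  # toy rows per condition, roughly the study's group sizes
df = pd.DataFrame({
    "condition":   ["keyframe"] * n + ["vr_mocap"] * n,
    "originality": np.r_[rng.normal(3.0, 0.5, n), rng.normal(4.0, 0.5, n)],
    "fluency":     np.r_[rng.normal(3.2, 0.5, n), rng.normal(4.1, 0.5, n)],
    "aesthetic":   np.r_[rng.normal(3.6, 0.5, n), rng.normal(3.7, 0.5, n)],
    "clarity":     np.r_[rng.normal(3.1, 0.5, n), rng.normal(4.0, 0.5, n)],
})
mv = MANOVA.from_formula(
    "originality + fluency + aesthetic + clarity ~ condition", data=df)
print(mv.mv_test())
```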
Full article
(This article belongs to the Special Issue Innovative Research in Human–Computer Interactions)
Open Access Article
YOLOv12-WCIRS: An Improved YOLOv12-Based Framework for Small Intestinal Lesion Detection in WCE
by Shiren Ye, Liangjing Li, Zetong Zhang and Haipeng Ma
Computers 2026, 15(5), 283; https://doi.org/10.3390/computers15050283 - 29 Apr 2026
Abstract
Accurate detection of small intestinal lesions in wireless capsule endoscopy (WCE) images remains challenging because lesions are often small, weakly contrasted, irregular in shape, and easily confused with complex mucosal backgrounds. To address these difficulties, this study proposes YOLOv12-WCIRS, a WCE-oriented improvement of YOLOv12 that jointly enhances local feature extraction, selective multi-scale fusion, background suppression, localization sensitivity, and scale-aware optimization. The proposed framework incorporates a Weighted Convolution (WConv) module, a Contextual Selection Fusion Module (CSFM), an Information Integration Attention Fusion (IIA_Fusion) module, a Receptive Field Attention-based detection head (RFAHeadDetect), and a Scale Dynamic Loss (SD Loss). Experiments on the SEE-AI dataset show that YOLOv12-WCIRS achieves 83.4% mAP@0.5 and 61.1% mAP@0.5:0.95, improving mAP@0.5 from 76.9% to 83.4% over the direct baseline YOLOv12 while maintaining competitive efficiency. Additional analyses, including cross-dataset validation on overlapping categories in Kvasir-Capsule, normal-frame false-alarm evaluation, false-positive/false-negative breakdown, and repeated-run statistical testing, further support the robustness and practical value of the proposed framework. These results indicate that YOLOv12-WCIRS provides an effective solution for automated lesion detection in WCE images and shows promise for computer-aided capsule endoscopy analysis.
Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) in Medical Informatics)
Open Access Article
Empirical Performance and Operational Analysis of Monolithic and Distributed Database Architectures in Kubernetes Environments
by Jasmin Redžepagić, Ana Kapulica, Nikola Malešević and Vedran Dakić
Computers 2026, 15(5), 282; https://doi.org/10.3390/computers15050282 - 29 Apr 2026
Abstract
This study presents a systematic empirical evaluation of monolithic and distributed database architectures deployed in Kubernetes environments. As containerized and cloud-native infrastructures become increasingly prevalent, understanding the performance implications of running stateful data systems under orchestration platforms has become critical. We evaluate five widely used database systems—PostgreSQL, MySQL, MongoDB, Redis, and Cassandra—using standardized workload generation frameworks, including pgbench, sysbench, YCSB, redis-benchmark, and cassandra-stress. Controlled experiments were conducted across varying concurrency levels and workload types to measure throughput, latency, and scalability in both single-node and distributed deployments. Redis achieves a maximum throughput of 4.2 million operations per second with sub-millisecond latency. In contrast, Cassandra delivers 214,743 distributed read operations per second at ONE consistency, approaching Redis’s non-pipelined baseline throughput (257,732–262,467 ops/sec) within a Kubernetes cluster. The write throughput of Cassandra decreases by 45.2% when the consistency level is elevated to QUORUM, accompanied by an elevenfold increase in run-to-run variability (CV from 7.1% to 84.7%), indicating that the consistency level is the primary performance determinant in distributed systems. PostgreSQL experiences a 72% decrease in write throughput in Kubernetes (74,072 → 20,805 TPS). In contrast, MySQL PXC anomalously attains a 37.3% increase in write throughput in Kubernetes compared to its monolithic deployment—the sole reversal noted among the five systems. These findings underscore a critical trade-off between vertical efficiency and horizontal scalability, illustrating that hybrid database architecture can be an effective solution for contemporary cloud-native applications compared to either paradigm independently.
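A hedged sketch of one benchmark cell for the PostgreSQL case: repeat a standard pgbench run and summarize mean throughput plus the run-to-run coefficient of variation discussed in the abstract. The database name, duration, and run count are assumptions, and pgbench must be installed and initialized separately.

```python
# Sketch: drive pgbench from Python, parse its "tps = ..." summary line,
# and report mean TPS and run-to-run CV (%) over repeated runs.
import re
import statistics
import subprocess

def run_pgbench(clients, seconds=60, db="bench", runs=3):
    tps = []
    for _ in range(runs):
        out = subprocess.run(
            ["pgbench", "-c", str(clients), "-j", "4", "-T", str(seconds), db],
            capture_output=True, text=True, check=True).stdout
        tps.append(float(re.search(r"tps = ([\d.]+)", out).group(1)))
    mean = statistics.mean(tps)
    return mean, statistics.stdev(tps) / mean * 100
```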
Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining—2nd Edition)
Open Access Article
LACE-Net: A Swin Transformer with Local Frequency-Domain Energy and Adaptive Contrast Enhancement for Fine-Grained Land Cover Classification
by Yongmei Tan, Gong Chen, Yan Huang, Hengzhou Ye and Jincheng Tang
Computers 2026, 15(5), 281; https://doi.org/10.3390/computers15050281 - 28 Apr 2026
Abstract
The Swin Transformer exhibits limitations in fine-grained land use and land cover (LULC) classification, particularly in capturing high-frequency texture details and representing low-contrast regions. To address these issues, we propose a novel network model, termed LACE-Net, which integrates local frequency-domain energy and adaptive contrast enhancement. Built upon the Swin Transformer backbone, the model introduces an innovative Local Frequency-Domain Energy-Adaptive Contrast Enhancement Multi-Scale Attention (LACE) block. This block consists of parallel branches for frequency-domain perception and contrast enhancement, which effectively combine texture and illumination physical priors. In addition, a texture-adaptive momentum adjustment mechanism is incorporated to refine the spatial enhancement attention weights dynamically. Consequently, LACE-Net greatly strengthens the modeling and representation of high-frequency details and complex spatial structural features. Experiments are performed on a self-constructed Guangxi regional dataset (denoted as GLC-30) and the publicly available remote sensing scene classification benchmark dataset NWPU-RESISC45. The results show that LACE-Net achieves a Top-1 accuracy (Top-1 Acc) of 96.48% and a macro-averaged F1 score (mF1) of 93.13%. These results outperform current mainstream vision models, particularly in mitigating the spectral confusion issue of “same spectrum, different objects.” The model exhibits superior fine-grained classification performance and robust generalization across datasets.
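The LACE block combines several components; the sketch below isolates just the local frequency-domain energy idea, computing a per-channel high-frequency power ratio with torch.fft as a texture cue. The cutoff value and how the ratio would be consumed downstream are illustrative assumptions.

```python
# Sketch: per-channel high-frequency energy ratio of a feature map,
# a simple frequency-domain texture prior.
import torch

def high_freq_energy(x, cutoff=0.25):
    """x: (B, C, H, W) feature maps; returns (B, C) high-frequency ratios."""
    spec = torch.fft.fftshift(torch.fft.fft2(x, norm="ortho"), dim=(-2, -1))
    power = spec.abs() ** 2
    H, W = x.shape[-2:]
    yy, xx = torch.meshgrid(torch.linspace(-0.5, 0.5, H),
                            torch.linspace(-0.5, 0.5, W), indexing="ij")
    ring = (yy ** 2 + xx ** 2).sqrt() > cutoff       # outside low-freq disc
    return power[..., ring].sum(-1) / power.flatten(-2).sum(-1)

prior = high_freq_energy(torch.randn(2, 3, 64, 64))  # e.g. a texture gate
```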
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
Open Access Article
xjb: Fast Float to String Algorithm
by Junbo Xiang and Tiejun Wang
Computers 2026, 15(5), 280; https://doi.org/10.3390/computers15050280 - 27 Apr 2026
Abstract
Efficiently and accurately converting floating-point numbers to decimal strings remains a fundamental challenge in numerical computation, data serialization, and human–computer interaction. While modern algorithms such as Ryū, Dragonbox, and Schubfach rigorously satisfy the Steele–White criteria for correctness and minimal output length, their performance is frequently constrained by branch mispredictions, high-precision multiplication overhead, and suboptimal utilization of instruction-level parallelism. This paper introduces xjb, a novel floating-point–string conversion algorithm derived from Schubfach that systematically overcomes these bottlenecks. By restructuring the core computation to reduce instruction dependencies, adopting branchless decision logic, and exploiting SIMD instruction sets for decimal-to-ASCII formatting, xjb delivers state-of-the-art throughput across diverse hardware platforms. The algorithm requires only a single 64-by-128-bit multiplication for IEEE 754 binary64 conversions and a single 64-by-64-bit multiplication for binary32, drastically decreasing arithmetic complexity. Extensive benchmarking on AMD R7-7840H and Apple M1/M5 processors demonstrates that xjb consistently outperforms leading contemporary implementations. Notably, on the Apple M5, xjb achieves speedups of approximately 20% and 136% for binary64 and binary32 conversions, respectively, when compared to the highly optimized zmij library. The algorithm is fully compliant with the Steele–White principle; exhaustive validation over the entire binary32 space and extensive random testing across the binary64 range confirm both its theoretical soundness and practical robustness.
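Independent of any particular converter, the Steele–White round-trip property the paper validates can be spot-checked as below; Python's repr is itself a shortest round-trip formatter and stands in for xjb here.

```python
# Harness sketch: the printed decimal string, parsed back, must recover
# the exact binary64 bit pattern (NaN payloads are skipped).
import random
import struct

def roundtrips(bits: int) -> bool:
    x = struct.unpack("<d", struct.pack("<Q", bits))[0]
    if x != x:                       # NaN: repr/parse loses the payload
        return True
    return struct.unpack("<Q", struct.pack("<d", float(repr(x))))[0] == bits

random.seed(0)
assert all(roundtrips(random.getrandbits(64)) for _ in range(100_000))
```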
Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2025 (ICCSA 2025))
Open Access Article
Reason2Decide-C: Adaptive Cycle-Consistent Training for Clinical Rationales
by H M Quamran Hasan, Housam Khalifa Bashier Babiker, Mi-Young Kim and Randy Goebel
Computers 2026, 15(5), 279; https://doi.org/10.3390/computers15050279 - 27 Apr 2026
Abstract
Large Language Models (LLMs) used for clinical decision support must not only make accurate predictions but also generate rationales that are consistent with, and sufficient for, those predictions. Building on Reason2Decide, a two-stage rationale-driven multi-task framework, we propose Reason2Decide-C (R2D-C, where C denotes cycle consistency), which augments Reason2Decide’s stage 2 training with confidence-adaptive scheduled sampling and cycle-consistent rationale-to-label training. In stage 1, we pretrain our model on rationale generation. In stage 2, we jointly train on label prediction and rationale generation, gradually replacing gold labels with model-predicted labels based on confidence. Simultaneously, we feed the rationale logits back into the model to recover the label, thus enforcing explanation sufficiency. We evaluate R2D-C on one proprietary triage dataset, as well as public biomedical QA and reasoning datasets. Across model sizes, R2D-C substantially improves rationale–prediction consistency (where stage 1 and stage 2 predictions agree) and sufficiency (where the rationale alone recovers the ground-truth label) over other baselines while matching or modestly improving predictive performance (F1); in several settings R2D-C surpasses larger foundation models. Ablations confirm that the full combination is optimal, maximizing alignment and LLM-as-a-Judge rationale quality. These results demonstrate that confidence-adaptive scheduled sampling and cycle-consistent rationale-to-label training substantially enhance explanation alignment without sacrificing accuracy.
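A minimal sketch of confidence-adaptive scheduled sampling; the linear threshold schedule here is an assumption (the abstract does not specify one). Gold labels are replaced by model predictions only when prediction confidence clears the current threshold.

```python
# Sketch: mix gold and model-predicted labels based on confidence,
# trusting predictions more as training progresses.
import torch

def mixed_labels(logits, gold, step, total_steps, base_conf=0.9):
    conf, pred = torch.softmax(logits, dim=-1).max(dim=-1)
    threshold = base_conf * (1.0 - step / total_steps)  # assumed schedule
    return torch.where(conf >= threshold, pred, gold)
```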
Full article
(This article belongs to the Special Issue Generative AI in Medicine: Emerging Applications, Challenges, and Future Directions)
Open Access Article
A Validated Design Guideline for Mobile Applications Grounded in the Participation of Deaf Users for Accessible Development
by Andrés Eduardo Fuentes-Cortázar and José Rafael Rojano-Cáceres
Computers 2026, 15(5), 278; https://doi.org/10.3390/computers15050278 - 27 Apr 2026
Abstract
Mobile devices are widely used, yet accessibility for people with disabilities remains a critical challenge. Deaf users who rely primarily on sign language (SL) frequently encounter barriers when interacting with applications not designed for their communication needs. This study proposes a design guide for developing mobile applications tailored to sign language users. The guide was developed through the active participation of three groups: Deaf individuals, usability and user experience (UX) experts, and mobile application developers. Based on their contributions, thirteen design guidelines were defined, addressing sign language integration, visual feedback, navigation, content presentation, and interface design. The guidelines were validated through usability and UX evaluations conducted with the three participant groups. A mobile application was subsequently developed following the proposed guidelines to assess their practical applicability. The evaluation results indicate that the guide effectively supports the development of more accessible and usable mobile applications for Deaf users. Incorporating sign language-centered design principles significantly improves usability and user experience for individuals with hearing disabilities, contributing to more inclusive mobile application development.
Full article
(This article belongs to the Section Human–Computer Interactions)
Open Access Article
Cognitive Grounding for Perspective Integration in Multi-LLM Systems
by Lev Sukherman, Yetunde Longe-Folajimi and Marina Konkol
Computers 2026, 15(5), 277; https://doi.org/10.3390/computers15050277 - 27 Apr 2026
Abstract
This paper investigates whether structured collaboration between multiple large language models (LLMs), each assigned a distinct cognitive role grounded in psychological theory, produces benefits beyond simple answer aggregation. We propose the Parallel Synthesis architecture, in which three cognitively specialized roles (Analyzer for hierarchical decomposition, Creative for divergent thinking, and Critic for critical evaluation) process each task independently and in parallel, and a Synthesizer integrates their outputs into a final response. To evaluate collaborative reasoning, we introduce the Emergent Reasoning Score (ERS), a composite metric that separates perspective integration (Synthesis Effectiveness) from novel concept generation (Emergent Value). Experiments on the AI2 Reasoning Challenge (ARC-Challenge; 1172 questions) and the Massive Multitask Language Understanding benchmark (MMLU; 1531 questions) show two consistent findings. First, the architecture achieves high Synthesis Effectiveness ( – ), indicating reliable integration of all three cognitive perspectives. Second, Emergent Value remains low ( – ), indicating that synthesis primarily recombines existing concepts rather than generating substantial novel content. A Majority Voting baseline achieves comparable or slightly higher answer accuracy than the Synthesizer on both benchmarks, showing that the architecture’s main contribution lies not in answer selection but in producing integrated reasoning traces that draw on multiple perspectives. These findings suggest that the practical value of cognitively grounded multi-agent architectures lies in reliable perspective integration, while ERS provides a reusable framework for distinguishing integration from genuinely novel reasoning in multi-agent LLM systems. The empirical results reported here constitute a pilot validation of the proposed framework on closed-form benchmarks, intended to establish a proof of concept and motivate larger-scale evaluation.
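An architecture sketch of Parallel Synthesis using asyncio, with a hypothetical call_llm coroutine standing in for a real LLM client; the role prompts are paraphrases, not the paper's prompts.

```python
# Sketch: three role-specialized calls run in parallel, then a
# Synthesizer call integrates the drafts into one response.
import asyncio

ROLES = {
    "analyzer": "Decompose the problem hierarchically, then answer.",
    "creative": "Generate divergent, unconventional candidate answers.",
    "critic":   "Stress-test the obvious answer and list failure modes.",
}

async def call_llm(system_prompt: str, task: str) -> str:
    await asyncio.sleep(0)          # placeholder for a real API call
    return f"[{system_prompt[:9]}...] draft answer for: {task}"

async def parallel_synthesis(task: str) -> str:
    drafts = await asyncio.gather(*(call_llm(p, task) for p in ROLES.values()))
    return await call_llm("Integrate all three perspectives into one answer.",
                          "\n\n".join(drafts))

print(asyncio.run(parallel_synthesis("Why do arches bear load well?")))
```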
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling (2nd Edition))
Open Access Article
Monitoring of Customer Segment Dynamics Using Clustering and Event-Based Alerts
by Stavroula Chatzinikolaou, Giannis Vassiliou, Efstratia Vasileiou, Sotirios Batsakis and Nikos Papadakis
Computers 2026, 15(5), 276; https://doi.org/10.3390/computers15050276 - 27 Apr 2026
Abstract
Continuous customer activity generated by modern digital platforms drives the evolution of behavioral segments over time. Traditional customer segmentation methods typically rely on periodic batch analysis of historical data, producing static snapshots that may quickly become outdated and fail to capture emerging behavioral patterns. This paper presents a monitoring-oriented framework for detecting customer segment evolution and generating timely notifications about meaningful structural changes in the customer population. The proposed system continuously ingests user activity events, incrementally updates customer profiles, and periodically recomputes behavioral segments using fixed-k KMeans clustering over standardized recency, frequency, and monetary (RFM) features. To improve robustness and interpretability, the framework incorporates adaptive event scoring, stability-aware segment validation, drift-aware centroid matching, and persistence-based filtering of transient changes. These mechanisms reduce noisy alerts caused by repeated clustering updates while preserving meaningful signals about evolving customer behavior. The framework is evaluated on the Online Retail II and Instacart datasets under streaming simulation conditions. Experimental results show that the proposed approach maintains stable clustering structures, identifies persistent segment changes, and uncovers economically meaningful customer groups. Compared with static segmentation and periodic clustering baselines, the framework improves clustering quality while enabling continuous monitoring of segment evolution. Overall, the results suggest that adaptive monitoring can extend traditional customer segmentation into a practical continuous analytics process for moderate-scale dynamic environments.
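A sketch of the periodic re-clustering step with drift-aware centroid matching, assuming an (n, 3) RFM matrix; Hungarian matching via SciPy is one plausible realization of keeping segment identities stable across refits, not necessarily the paper's mechanism.

```python
# Sketch: standardized RFM features, fixed-k KMeans, and matching of new
# centroids to the previous round's so segment labels persist.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def recluster(rfm, k, prev_centroids=None):
    """rfm: (n_customers, 3) recency/frequency/monetary matrix."""
    X = StandardScaler().fit_transform(rfm)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    if prev_centroids is None:
        return km.labels_, km.cluster_centers_
    # Minimum-total-distance assignment of new centroids to old ones.
    cost = np.linalg.norm(
        km.cluster_centers_[:, None] - prev_centroids[None], axis=-1)
    _, old_for_new = linear_sum_assignment(cost)   # new index -> old index
    relabeled = old_for_new[km.labels_]
    ordered_centroids = km.cluster_centers_[np.argsort(old_for_new)]
    return relabeled, ordered_centroids
```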
Full article
Open Access Article
The Missing Layer in Modern IT: Governance of Commitments, Not Just Compute and Data
by Rao Mikkilineni and William Patrick Kelly
Computers 2026, 15(5), 275; https://doi.org/10.3390/computers15050275 - 24 Apr 2026
Abstract
Contemporary enterprise IT operations are largely implemented on Shannon–Turing computing models in which programs execute read–compute–write cycles over data structures, while governance—fault handling, configuration control, auditability, continuity, and accounting—is applied externally through infrastructure platforms, observability stacks, and human operational processes. This separation scales analytical throughput but accumulates what we term coherence debt: locally expedient operational commitments whose provenance and revisability degrade over time until exposed by failures, security incidents, regulatory demands, or architectural transitions. This paper examines the evolution of operational computing models that integrate computation with regulation at two distinct levels. First, Distributed Intelligent Managed Elements (DIME) extend the classical Turing cycle toward a supervised execution loop—read–check-with-oracle–compute–write—by incorporating signaling overlays and FCAPS (Fault, Configuration, Accounting, Performance, and Security) supervision into computation in progress. Second, the Autopoietic Management and Orchestration System (AMOS), grounded in the General Theory of Information, the Burgin–Mikkilineni Thesis, and Deutsch’s epistemic framework, fully decouples process executors from governance by treating any Turing-equivalent engine as a replaceable execution substrate while elevating knowledge structures—encoded as local and global Digital Genomes—to first-class operational state within a governed knowledge network. Using a distributed microservice transaction testbed, we demonstrate how this approach operationalizes topology-as-data, a capability-oriented control plane, decoupled application-layer FCAPS independent of infrastructure management, and policy-selectable consistency/availability semantics. Our results show that the principal benefit of AMOS is not circumventing theoretical constraints such as the Consistency, Availability, and Partition tolerance (CAP) theorem, but governing their trade-offs as explicit, auditable commitments with defined convergence pathways and controlled return to a coherent system state, thereby reducing coherence debt and improving operational reliability in distributed AI-enabled enterprise systems.
Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
Topics
Topic in Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe Hou
Deadline: 31 May 2026
Topic in Computers, Informatics, Information, Logistics, Mathematics, Algorithms
Decision Science Applications and Models (DSAM)
Topic Editors: Daniel Riera Terrén, Angel A. Juan, Majsa Ammuriova, Laura Calvet
Deadline: 30 June 2026
Topic in AI, Computers, Education Sciences, Societies, Future Internet, Technologies
AI Trends in Teacher and Student Training
Topic Editors: José Fernández-Cerero, Marta Montenegro-Rueda
Deadline: 11 July 2026
Topic in Applied Sciences, Automation, Computers, Electronics, Sensors, JCP, Mathematics
Intelligent Optimization, Decision-Making and Privacy Preservation in Cyber–Physical Systems
Topic Editors: Lijuan Zha, Jinliang Liu, Jian Liu
Deadline: 31 August 2026
Special Issues
Special Issue in Computers
Operations Research: Trends and Applications
Guest Editors: Óscar Oliveira, Eliana Costa e Silva, Dorabela Gamboa
Deadline: 8 May 2026
Special Issue in Computers
Explainable Artificial Intelligence for Signal Processing and Recognition
Guest Editor: Andres Alvarez-Meza
Deadline: 20 May 2026
Special Issue in Computers
Intrusion Detection and Trust Provisioning in Edge-of-Things Environment
Guest Editors: Hooman Alavizadeh, Ahmad Salehi Shahraki
Deadline: 20 May 2026
Special Issue in Computers
Generative AI in Medicine: Emerging Applications, Challenges, and Future Directions
Guest Editor: Atsushi Teramoto
Deadline: 31 May 2026