Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.5 days after submission; acceptance to publication takes 3.9 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds and Computers.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
CLIP-HBD: Hierarchical Boundary-Constrained Decoding for Open-Vocabulary Semantic Segmentation
Computers 2026, 15(5), 318; https://doi.org/10.3390/computers15050318 - 15 May 2026
Abstract
Open-vocabulary semantic segmentation (OVSS) aims to achieve pixel-level object segmentation guided by arbitrary natural language descriptions. Although pre-trained vision–language models (VLMs) have significantly advanced the development of OVSS, their reliance on the Vision Transformer (ViT) architecture imposes a fundamental constraint on dense prediction. Specifically, the absence of hierarchical downsampling in ViT-based VLM results in single-scale representations that trade spatial localization for global semantics. To address these issues, this paper proposes a hierarchical boundary-constrained decoding network for OVSS, called CLIP-HBD. Our approach leverages VLM semantic priors to reconstruct multi-scale features and introduces a boundary-constrained decoding strategy to refine edge details. Specifically, CLIP-HBD leverages a ConvNeXt-based backbone alongside a hierarchical adaptation mechanism to fuse multi-layer VLM features, generating a comprehensive multi-scale representation. To address the issue of boundary inaccuracy, we perform explicit boundary prediction based on multi-scale representations, where the resulting boundary maps are subsequently transformed into structural constraints to steer the decoder’s focus toward boundary regions. By integrating structural constraints with hierarchical features, the decoding process effectively maintains semantic consistency and restores precise object boundaries. Extensive experiments demonstrate that CLIP-HBD achieves superior performance in both segmentation precision and boundary quality across multiple benchmarks.
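The boundary-constrained decoding idea rests on turning a predicted mask into an explicit boundary map that the decoder is steered toward. A minimal 4-neighbour boundary extractor conveys the notion; this is a generic hand-crafted sketch, not the paper's learned boundary predictor:

```python
def boundary_map(mask):
    """Mark mask pixels that touch the background or the image border.

    mask: 2-D list of 0/1 values. Returns a same-shaped 0/1 boundary map.
    A learned predictor would replace this hand-crafted rule.
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                # Border pixels and pixels next to background count as boundary.
                if not (0 <= ni < h and 0 <= nj < w) or not mask[ni][nj]:
                    out[i][j] = 1
                    break
    return out
```

Such a map can then be converted into per-pixel weights that concentrate the decoder's attention, and loss, near object edges.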
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (3rd Edition))
Open Access Article
Comparative Performance of Three GPT Models on Japanese Dental Board-Style Multiple-Choice Questions
by
Hikaru Fukuda, Masaki Morishita, Kosuke Muraoka, Shino Maeda, Taiji Nakamura, Manabu Habu, Shuji Awano and Kentaro Ono
Computers 2026, 15(5), 317; https://doi.org/10.3390/computers15050317 - 15 May 2026
Abstract
Large language models (LLMs) are increasingly used in professional examinations, but their relative performance on dental board-style questions remains unclear. This study compared two reasoning-optimized models, GPT-o3 and GPT-5T, with a general-purpose multimodal model, GPT-4o, using 399 Japanese dental board-style multiple-choice questions from 2018 to 2022. All questions were presented in Japanese, and items originally accompanied by charts, photographs, or other figures were analyzed separately from items without visual materials. Accuracy and item-level agreement were assessed using pairwise McNemar tests, stratified analyses according to the original presence of visual materials, the Breslow–Day test for homogeneity of odds ratios, and two-proportion z-tests. GPT-5T achieved the highest overall accuracy (294/399, 73.7%), followed by GPT-o3 (257/399, 64.4%) and GPT-4o (255/399, 63.9%). Pairwise McNemar tests showed that GPT-5T outperformed both GPT-4o (Holm-adjusted p = 0.00098) and GPT-o3 (Holm-adjusted p = 0.00072), whereas GPT-o3 and GPT-4o did not differ significantly (Holm-adjusted p = 0.920). Accuracy was lower for questions originally containing visual materials than for questions without such materials across all three models (GPT-4o: 49.7% vs. 72.2%; GPT-o3: 55.1% vs. 69.8%; GPT-5T: 59.9% vs. 81.8%). The advantage of GPT-5T was more evident in questions without visual materials, and heterogeneity across question formats was observed for GPT-5T versus GPT-o3. GPT-5T showed the strongest performance in this dataset. Questions originally containing visual materials were associated with lower accuracy across all models. Because the comparison was based on distinct item groups rather than experimentally manipulated visual conditions, this result should be interpreted as a difference across question formats and may also reflect differences in item composition and difficulty between the two groups.
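The item-level comparisons rely on exact McNemar tests over discordant answer pairs, with Holm correction across the three model pairings. A stdlib-only sketch of both steps (the counts used below are illustrative, not the study's data):

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact McNemar test on discordant counts.

    b = items model A answered correctly and model B missed,
    c = items model B answered correctly and model A missed.
    Two-sided p-value from a Binomial(b + c, 0.5) reference distribution.
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

def holm_adjust(pvals):
    """Holm step-down multiple-comparison adjustment, preserving input order."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted
```

`mcnemar_exact` gives the unadjusted p-value for one model pair; `holm_adjust` then corrects a list of such values, as in the Holm-adjusted p-values quoted in the abstract.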
(This article belongs to the Topic AI Trends in Teacher and Student Training)
Open Access Article
Fault-Tolerant QCA-Based Parity Pre-Filtering Circuits for Lightweight Edge-IoT Transaction Screening
by
Osman Selvi, Seyed-Sajad Ahmadpour, Muhammad Zohaib and Naim Ajlouni
Computers 2026, 15(5), 316; https://doi.org/10.3390/computers15050316 - 14 May 2026
Abstract
Edge Internet of Things (IoT) blockchain deployments increasingly rely on continuous transaction ingestion from resource-constrained IoT devices to nearby edge gateways over heterogeneous wireless links. In this setting, transient channel noise and packet corruption can inject invalid payloads into the edge processing pipeline and trigger unnecessary buffering, parsing, and, most critically, computationally expensive cryptographic operations such as digital signature verification. This leads to wasted computation, increased latency, and reduced energy efficiency at the edge, particularly under dense IoT traffic. This paper presents an energy-aware and fault-tolerant Quantum-Dot Cellular Automata (QCA)-based integrity pre-filter for IoT-to-edge blockchain transaction ingestion. At the circuit level, we adapt and modify a previously reported fault-tolerant five-input majority gate (MV5) structure and use it as a robust primitive for nanoscale integrity-screening circuits. Building on this modified MV5, we design a set of QCA integrity blocks, including a parity checker, a compact XNOR gate circuit, a parity-bit generation circuit, and a sender-to-channel/receiver nano-communication integrity workflow suitable for early screening of corrupted payloads. Compared with the best previously reported baseline considered in this study, the modified MV5 achieves 76.47% tolerance to single-cell omission defects, corresponding to a 17.47 percentage-point increase and an approximately 29.61% relative improvement over the prior 59% omission-tolerance result, while preserving 100% tolerance against extra-cell deposition defects. At the system level, the proposed circuit is discussed as a potential early screening stage for edge-IoT blockchain transaction ingestion. A bounded analytical model is used to estimate the possible reduction in unnecessary signature-verification workload under assumed corruption and detection conditions. 
This analysis is not intended as a deployment-level validation; full edge-node implementation, throughput measurement, queueing-delay evaluation, real traffic traces, retransmission behavior, and empirical signature-verification profiling remain future work. The proposed parity/chunk-parity pre-filter is designed for low-cost detection of random transmission-induced corruption and does not replace cryptographic authentication, hashing, digital signatures, CRC-based detection, or blockchain validation. All proposed designs are validated using QCADesigner tools.
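The role of a parity pre-filter is easy to state in software: compute cheap chunk parities at the sender, recompute at the edge, and drop mismatching payloads before any signature verification. A Python sketch of that workflow (the chunk size and even-parity convention are assumptions; the paper realizes this screening in QCA hardware):

```python
def parity_bits(payload: bytes, chunk: int = 4) -> list[int]:
    """One even-parity bit per chunk of bytes (hypothetical chunking scheme)."""
    bits = []
    for i in range(0, len(payload), chunk):
        block = payload[i:i + chunk]
        ones = sum(bin(b).count("1") for b in block)
        bits.append(ones % 2)
    return bits

def screen(payload: bytes, expected: list[int], chunk: int = 4) -> bool:
    """Cheap pre-filter: accept only payloads whose chunk parities match,
    so corrupted transactions never reach signature verification."""
    return parity_bits(payload, chunk) == expected
```

Note that parity misses an even number of flipped bits within a chunk, which is why the paper positions it strictly as a low-cost pre-filter and not a replacement for CRC or cryptographic checks.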
(This article belongs to the Special Issue IoT: Security, Privacy and Best Practices (3rd Edition))
Open Access Article
Cross-Linguistic Complexity and Language-Specific Sentiment: Multifractal Structure and Emotional Valence in Popular Music Lyrics Across Three Languages
by
Fateme Khanipour, Zeinab Shahbazi, Sara Behnamian, Fatemeh Fogh and Nathan Blood
Computers 2026, 15(5), 315; https://doi.org/10.3390/computers15050315 - 14 May 2026
Abstract
We investigate the linguistic complexity and emotional valence of popular song lyrics across English ( ), Spanish ( ), and German ( ), using an analytical corpus of 2023 tracks drawn from 2113 deduplicated tracks on Spotify’s weekly Top 200 charts (2019–2021). Transformer-based sentiment analysis is combined with complexity-science tools to characterize both the affective content and the structural organization of commercially successful lyrics. A multilingual BERT model reveals a mild negative skew across all three languages (63.7% negative overall); the 1.003-point English–German gap observed under the English-centric VADER lexicon collapses to 0.127 points under BERT, indicating that earlier cross-linguistic sentiment differences are largely measurement artifacts. Word frequency distributions follow Zipf’s law in all three languages ( ), with English steepest ( ) and German shallowest ( ). Detrended fluctuation analysis indicates persistent long-range correlations ( – ; none of the 50 shuffled surrogates exceeded the observed values), and multifractal singularity spectra are statistically indistinguishable across languages once corpus size is controlled (all pairwise Mann–Whitney ). Streaming counts within the Top 200 are concentrated (German Gini ) but, given the truncated single-snapshot sample, are reported as within-chart descriptors rather than population-level scaling.
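The Zipf part of the analysis amounts to fitting a straight line to log-frequency versus log-rank; a minimal least-squares sketch (run here on synthetic tokens, not the lyrics corpus):

```python
from collections import Counter
from math import log

def zipf_exponent(tokens):
    """Slope of log(frequency) vs log(rank) via ordinary least squares.

    Zipf's law predicts a slope near -1; steeper (more negative) slopes
    mean frequency falls off faster with rank.
    """
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [log(r) for r in range(1, len(freqs) + 1)]
    ys = [log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

On tokens whose frequencies are proportional to 1/rank, the fitted slope comes out close to -1, matching the Zipfian baseline the paper tests against.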
(This article belongs to the Special Issue Next-Generation Semantic Multimedia: Generative AI, Human-Centric Personalization, and Digital Sustainability)
Open Access Article
A Policy-Based Rough Optimization with Large Neighborhood Search for Carbon-Aware Flexible Job Shop Scheduling with Tardiness Penalty
by
Saurabh Sanjay Singh and Deepak Gupta
Computers 2026, 15(5), 314; https://doi.org/10.3390/computers15050314 - 14 May 2026
Abstract
Sustainable manufacturing requires schedules that balance environmental responsibility with delivery reliability. This paper studies the Carbon-Aware Flexible Job Shop Scheduling Problem with Tardiness Penalty (CAFJSP-T), where total carbon emissions and total tardiness penalty are the primary objectives. We propose a Policy-based Rough Optimization with a Large Neighborhood Search (Pro-LNS) framework integrating Proximal Policy Optimization (PPO) and adaptive Large Neighborhood Search (LNS). PPO constructs a feasible schedule by selecting operation-machine assignments from job-readiness, machine-availability, earliest-completion, and critical-path features. This policy-generated schedule provides a structurally informed incumbent, enabling LNS to avoid unguided search and focus destroy-and-repair refinement on high-impact operations. Both phases use the same normalized scalarized carbon-tardiness objective, which guides PPO rewards and LNS removal, reinsertion, and acceptance while preserving precedence, eligibility, and capacity constraints. Experiments on small, medium, and large workcenter benchmarks show strong due-date performance and controlled carbon emissions. Under equal objective weighting, Pro-LNS achieves a median optimality gap of 6.12% relative to the exact formulation, with all instances within 14%, while requiring 4.08 s on average and at most 10.51 s. Comparisons with PPO-only, Advantage Actor-Critic (A2C), Soft Actor-Critic (SAC), and Genetic Algorithm (GA) schedulers show that Pro-LNS attains the best weighted scalarized objective across representative instance-weight settings. Friedman and Holm-corrected Wilcoxon tests confirm significant improvements over all competitors, with average weighted-objective gains of 4.90%, 7.25%, 8.81%, and 9.51% over PPO-only, A2C, SAC, and GA, respectively. 
These results demonstrate that Pro-LNS is an effective and computationally practical hybrid approach for carbon-aware, tardiness-sensitive flexible job shop scheduling.
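The shared carbon-tardiness objective that guides both PPO rewards and LNS acceptance is a normalized weighted sum. A sketch with hypothetical normalization references, plus the simplest greedy acceptance rule (one common LNS choice; the paper's actual rule may differ):

```python
def scalarized_objective(carbon, tardiness, w_c, ref_carbon, ref_tardiness):
    """Normalized weighted scalarization of the two objectives.

    ref_carbon / ref_tardiness are hypothetical normalization references
    (e.g. worst observed values) so both terms land on a comparable scale.
    """
    return w_c * carbon / ref_carbon + (1 - w_c) * tardiness / ref_tardiness

def accept(candidate_cost, incumbent_cost):
    """Greedy LNS acceptance: keep only strictly improving schedules."""
    return candidate_cost < incumbent_cost
```

With equal weighting (`w_c = 0.5`), a schedule at half of both references scores 0.5, and destroy-and-repair moves are kept only when they lower this scalar cost.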
(This article belongs to the Special Issue Operations Research: Trends and Applications)
Open Access Article
LoRA-Based Deep Learning for High-Fidelity Satellite Image Super-Resolution in Big Data Remote Sensing
by
Noha Rashad Mahmoud, Hussam Elbehiery, Basheer Abdel Fattah Youssef and Hanaa Bayomi Ali Mobarz
Computers 2026, 15(5), 313; https://doi.org/10.3390/computers15050313 - 14 May 2026
Abstract
High-resolution satellite imagery is pivotal for accurate analysis in remote sensing applications, including land-use monitoring, urban planning, and environmental assessment. However, obtaining such data is often costly and limited. Consequently, super-resolution techniques, such as deep learning models and fine-tuning strategies like LoRA, offer a promising alternative to this critical research challenge, especially given the diversity and large scale of satellite datasets. While deep learning-based super-resolution models have shown great promise recently, their effectiveness, efficiency, and scalability across heterogeneous satellite scenes are not well studied. This work studies the performance of representative deep learning super-resolution frameworks, including the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN), Swin Transformer for Image Restoration (SwinIR), and latent diffusion models (LDM), under unified experimental conditions using the WorldStrat dataset. The main goal is to establish whether adaptation strategies for parameter efficiency can boost reconstruction quality while reducing computational and training costs. Toward this goal, we investigate hybrid sequential pipelines, ensemble averaging, and Low-Rank Adaptation (LoRA)-based fine-tuning. The experiments indicate that these multi-model pipelines achieve only marginal performance gains while incurring substantial increases in computational complexity. LoRA-based fine-tuning, by contrast, demonstrates superiority in enhancing reconstruction accuracy and quality across all model families, despite using only a small percentage of trainable parameters, and outperforms multi-model methods in both efficiency and performance. The presented results confirm that LoRA is an effective and accessible technique for high-fidelity satellite-based super-resolution image synthesis.
The manuscript identifies LoRA as one of the enabling technologies advancing the state of the art in Deep Learning-based Super Resolution for large-scale satellite-based image synthesis.
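The LoRA update at the core of this fine-tuning strategy adds a trainable low-rank term to a frozen weight. A NumPy sketch of the forward rule, following the shapes and scaling of the original LoRA formulation rather than any detail specific to this article:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """LoRA forward pass: y = W x + (alpha / r) * B (A x).

    W: (out, in) frozen pretrained weight.
    A: (r, in) trainable, small-random init; B: (out, r) trainable, zero init,
    so the adapter starts as an exact no-op. r << min(out, in) keeps the
    trainable parameter count small.
    """
    r = A.shape[0]
    return W @ x + (alpha / r) * (B @ (A @ x))
```

Because B is zero-initialized, training begins from the pretrained model's behavior, and only the r*(in + out) adapter parameters are updated, which is the source of the efficiency the abstract reports.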
(This article belongs to the Special Issue Machine Learning: Techniques, Industry Applications, Code Sharing, and Future Trends)
Open Access Article
Global Descriptors Features for Improved Detection of Textured Contact Lenses in Iris Images
by
Roqia Sailh Mahmood, Ismail Taha Ahmed and Mohamed A. Hafez
Computers 2026, 15(5), 312; https://doi.org/10.3390/computers15050312 - 14 May 2026
Abstract
Because textured contact lenses obscure the iris’s natural texture, they pose a serious threat to the accuracy of iris recognition systems and may enable identity theft. This work therefore proposes a reliable method for textured contact lens detection that combines efficient global texture descriptors with effective feature selection and classification techniques. Run-Length and Zernike Moments, two effective global texture descriptors, were extracted from preprocessed iris images acquired from the IIIT-D CLI dataset. To improve classification performance, Ant Colony Optimization (ACO) was used to reduce the dimensionality of the feature vectors. Two classifiers, Support Vector Machine (SVM) and Logistic Regression (LOG), were evaluated with different descriptor pairings. Experimental results show that Zernike features optimized by ACO and paired with LOG achieved the highest accuracy, 98.04%, substantially surpassing previous methods. The strong accuracy, recall, precision, and F1-score results demonstrate the efficacy of the proposed approach for secure and dependable iris-based biometric systems.
(This article belongs to the Special Issue AI in Bioinformatics)
Open Access Article
A Weakly Supervised Multi-Scale Cross-Modal Information Fusion Method for Wildfire Detection
by
Dawei Wen, Zhoujiang Peng and Yuan Tian
Computers 2026, 15(5), 311; https://doi.org/10.3390/computers15050311 - 14 May 2026
Abstract
In recent years, wildfires have occurred with increasing frequency. Pixel-level annotation of high-resolution remote sensing wildfire imagery is costly and labor-intensive. Therefore, there is an urgent need for a weakly supervised wildfire detection method that balances detection accuracy and annotation efficiency. To address the key limitations of existing weakly supervised approaches based on class activation maps (CAMs), including imprecise delineation of fire boundaries, insufficient utilization of cross-modal information, and limited capability in modeling temporal characteristics, this paper proposes a dual-branch multi-scale feature fusion framework for weakly supervised wildfire detection. The proposed framework consists of a multispectral branch and a shortwave infrared (SWIR) temporal branch, which are designed to capture the spatial structural information of fire regions and the temporal variation of thermal anomalies, respectively. Attention-guided feature fusion modules are introduced at each network stage to enable complementary integration of cross-modal information. In addition, a multi-scale CAM-weighted fusion strategy is designed to jointly enhance region localization accuracy and semantic discrimination capability. Experimental evaluations are conducted on a high-resolution wildfire dataset covering 29 regions and consisting of 2206 images. The results demonstrate that the proposed method achieves an IoU of 58.7% and an F1-score of 73.5%, outperforming the state-of-the-art methods by 4.6% and 3.2%, respectively. Ablation and comparative experiments further verify that the dual-branch architecture and feature fusion strategy significantly improve fire localization accuracy and effectively reduce the missed detection rate.
Open Access Article
A Deep Learning-Based Hybrid Method for Reliable and Imperceptible Data Hiding
by
Farah F. Alkhalid
Computers 2026, 15(5), 310; https://doi.org/10.3390/computers15050310 - 13 May 2026
Abstract
Problem: In the deep image steganography field, the main challenge is to achieve a balance between visual image quality, reliably recovering the message, robustness, and interpretability, especially regarding image distortion because of noise, attack, resizing, and cropping. Solution: In this paper, we propose to combine deterministic pattern-based embedding with a deep neural refinement network to achieve a strong balance between robustness, simplicity, and quality. Methodology: First of all, we embed binary messages using spatial patterns, then refine the stego image, using an encoder–decoder network and enhanced with an attention mechanism. Results: The experimental results record PSNR values between 34.9 and 37.8 dB and SSIM values above 0.99, with zero BER under no-attack, noise, and resizing conditions. Moderate degradation is observed under blur (BER ≈ 0.125), while cropping significantly affects performance (BER ≈ 0.575). Contribution: The proposed approach introduces an interpretable and stable hybrid design that effectively balances imperceptibility and robustness, while maintaining reliable message recovery in practical scenarios. The use of differentiable attacks through training enhances robustness against common distortions such as noise, blur, and resizing.
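The recovery and quality metrics quoted above (BER, PSNR) are standard; minimal reference implementations for a binary message and a given mean-squared error (the inputs in the usage below are hypothetical):

```python
import math

def bit_error_rate(sent, recovered):
    """Fraction of message bits recovered incorrectly (BER)."""
    assert len(sent) == len(recovered)
    return sum(s != r for s, r in zip(sent, recovered)) / len(sent)

def psnr(mse, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given mean-squared error
    between cover and stego images (8-bit peak by default)."""
    return 10 * math.log10(peak ** 2 / mse)
```

A BER of 0 corresponds to perfect message recovery; the paper's 34.9 to 37.8 dB PSNR range corresponds to small per-pixel distortions relative to the 255 peak.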
(This article belongs to the Special Issue Advanced Cryptographic Techniques for Digital Watermarking, Encryption, and Steganography)
Open Access Article
Hyperparameter Tuning of Inception CNNs Using Genetic Algorithms for Automatic Defect Detection
by
Ambra Korra, Anduel Kuqi and Indrit Enesi
Computers 2026, 15(5), 309; https://doi.org/10.3390/computers15050309 - 13 May 2026
Abstract
Automated defect detection in industrial casting processes is important for improving product quality while reducing the cost of manual inspection. In this work, two deep convolutional neural network (CNN) architectures, InceptionV3 and InceptionResNetV2, are evaluated for the binary classification of defects in submersible pump impellers. A genetic algorithm (GA) is used to optimize key hyperparameters, including dropout rate, learning rate, and dense layer configuration, while model complexity is assessed through Pareto-based analysis. Single-run optimization results show that InceptionV3 achieves high classification accuracy (99.0%) with lower model complexity than InceptionResNetV2 (98.75%). Repeated experiments using different random seeds demonstrate relatively stable performance across runs, with InceptionV3 achieving an accuracy of 0.9913 ± 0.003 and InceptionResNetV2 achieving 0.9860 ± 0.0076. Additional experiments were conducted using random-search baselines and classification-head ablation studies (Flatten vs. Global Average Pooling). These experiments showed that optimization strategy and architectural design choices influence both predictive performance and computational complexity. The environmental impact of the training process is evaluated using CodeCarbon, with energy consumption ranging from 0.083 to 0.098 kWh and carbon emissions ranging from 2.008 to 2.401 g CO2eq for InceptionV3 and InceptionResNetV2, respectively. Overall, the results suggest that the most effective configuration depends on the evaluated architecture and experimental setting, highlighting the importance of balancing accuracy, model complexity, and computational efficiency in industrial defect detection systems.
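The GA tuning loop can be sketched in a few lines: sample hyperparameter vectors, keep the fitter half, and refill the population by crossover and mutation. The search space, operator settings, and toy fitness below are illustrative assumptions; the paper tunes dropout rate, learning rate, and dense-layer configuration against validation accuracy:

```python
import random

random.seed(42)  # reproducible toy run

SEARCH_SPACE = {"dropout": (0.1, 0.6), "lr": (1e-5, 1e-2)}

def random_individual():
    return {k: random.uniform(*bounds) for k, bounds in SEARCH_SPACE.items()}

def crossover(a, b):
    """Uniform crossover: each gene comes from either parent."""
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(ind, rate=0.3):
    """Resample each gene with probability `rate`."""
    return {k: (random.uniform(*SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def evolve(fitness, pop_size=10, generations=15):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # elitist selection
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

In the real pipeline, `fitness` would train a candidate InceptionV3 or InceptionResNetV2 configuration and return its validation accuracy, which is what makes each GA generation expensive.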
Open Access Article
Performance Evaluation of Lightweight Cryptographic Algorithms for End-to-End Secure IoT Data Transmission over 5G Standalone
by
Gurram Saraswathi and Nagender Kumar Suryadevara
Computers 2026, 15(5), 308; https://doi.org/10.3390/computers15050308 - 13 May 2026
Abstract
The rapid growth of Internet of Things (IoT) applications over 5G networks demands secure, low-latency data transmission under strict resource constraints. However, existing studies have relied on simulations or partial implementations that fail to capture real 5G features, producing overly optimistic assessments of cryptographic performance. In addition, the absence of end-to-end validation across system layers introduces an opaque-flow effect, in which visibility is lost across the full transmission path. To address this gap, this paper presents a fully integrated end-to-end 5G IoT security framework that introduces a modified RC4-NL (nonlinear) algorithm to enhance the security of lightweight stream ciphers while preserving computational efficiency. Environmental sensor data is encrypted on a Raspberry Pi 4B and transmitted over a commercial 5G standalone network using a Quectel FG50V module to a Multi-access Edge-Computing (MEC) server. A web-based dashboard built with FastAPI, accessed securely through an Ngrok tunnel, performs real-time decryption and visualization on 5G-connected mobile devices. This architecture eliminates the opaque-flow effect and enables realistic performance evaluation, avoiding the optimistic assessments observed in simulation-based studies. The work experimentally evaluates the cryptographic algorithms Ascon, ChaCha20, AES, standard RC4, and the proposed RC4-NL under identical conditions. Experimental findings indicate that the modified RC4-NL achieved an encryption time of 977 µs and a decryption time of 456 µs at a power consumption of 0.40 W, offering a favorable trade-off between efficiency and enhanced security compared to standard RC4.
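For reference, the classical RC4 baseline that RC4-NL modifies drives a keystream from a 256-byte state permutation; the nonlinear modification itself is the paper's contribution and is not reproduced here. RC4 is cryptographically weak, which is exactly why the study compares it against hardened and modern alternatives:

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Classical RC4: key-scheduling (KSA) then keystream generation (PRGA).
    Shown as a baseline only; RC4 is broken and unsafe for real deployments."""
    S = list(range(256))
    j = 0
    for i in range(256):                     # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for _ in range(n):                       # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def rc4_crypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; encryption and decryption are the same operation."""
    ks = rc4_keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

The symmetry of the XOR stream cipher (`rc4_crypt` applied twice recovers the plaintext) is what keeps encryption and decryption costs comparable on constrained devices.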
(This article belongs to the Special Issue Shaping the Future of Green Networking: Integrated Approaches of Joint Intelligence, Communication, Sensing, and Resilience for 6G)
Open Access Article
Semantic Segmentation-Based Identification and Quantitative Analysis of Cross-Sectional Quality Features in Luzhou-Flavor Liquor Daqu
by
Zheli Song, Yi Dong, Chao Wang, Xiu Zhang, Aibao Sun, Cuiping You, Jian Mao and Shuangping Liu
Computers 2026, 15(5), 307; https://doi.org/10.3390/computers15050307 - 12 May 2026
Abstract
The objective evaluation of Daqu cross-sectional quality is challenging due to its heterogeneous structure, small features, and low contrast. This study proposes a semantic-segmentation-based framework for the automated identification and quantitative analysis of Luzhou-flavor Daqu cross-sections. Four representative architectures—including three convolutional neural network (CNN)-based models (U-Net, U-Net++, and U2-Net) and one Transformer-based model (SegFormer)—were systematically benchmarked. To address severe class imbalance and enhance model robustness, a task-specific data augmentation pipeline was implemented. With these optimized augmentation strategies, the U2-Net model demonstrated the best performance, with a peak mean Intersection over Union (mIoU) of 87.54% and a Dice score of 98.30%. Based on the predicted masks, quantitative indicators such as plaque area ratio, pizhang thickness, and fissure length were precisely extracted. The proposed framework provides an objective and scalable solution for Daqu quality inspection, offering significant practical value for industrial scenarios involving complex materials and fine-grained defect patterns.
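The segmentation scores reported here (mIoU, Dice) reduce to overlap counts on binary class masks. A minimal per-class sketch on flattened 0/1 masks (mIoU then averages the per-class IoU over classes):

```python
def iou_dice(pred, target):
    """Per-class IoU and Dice from two flattened binary masks (0/1 lists)."""
    inter = sum(p and t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0          # both masks empty: perfect
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice
```

Dice weights the intersection twice, so for the same prediction it is always at least as large as IoU, which is why Dice scores in papers typically read higher than mIoU.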
(This article belongs to the Special Issue Machine Learning: Techniques, Industry Applications, Code Sharing, and Future Trends)
Open Access Article
Machine Learning Prediction Model and Interpretability Analysis of Depression Risk in Patients with Chronic Kidney Disease
by
Hongli Yan, Xu Peng, Shuang Geng, Yueming Gao and Junfeng Liao
Computers 2026, 15(5), 306; https://doi.org/10.3390/computers15050306 - 12 May 2026
Abstract
Patients with chronic kidney disease (CKD) frequently experience depressive symptoms, which substantially impair their quality of life. To facilitate the early identification of high-risk individuals, this study aimed to develop a predictive model for assessing depression risk among CKD patients. This study was based on data from the China Health and Retirement Longitudinal Study (CHARLS) 2018 wave, including 1777 middle-aged and elderly participants with self-reported CKD diagnosed by a physician. Depressive symptoms were assessed using the 10-item Center for Epidemiologic Studies Depression Scale (CES-D 10). A total of 29 variables were included, covering lifestyle factors, health status, comorbidities, and sociodemographic characteristics. The Elastic Net algorithm was employed to select 11 features with the highest predictive value. Seven machine learning models, including XGBoost and support vector machine (SVM), were compared, with CHARLS 2020 data used as a temporal validation set. In the multi-model comparison, XGBoost demonstrated discrimination performance comparable to logistic regression (LR), SVM, and multilayer perceptron (MLP) (DeLong test, p > 0.05). However, considering its superior calibration performance and ability to capture nonlinear interactions, XGBoost was selected as the final model. In the validation set, the model achieved an area under the curve (AUC) of 0.8017 and an accuracy of 72.39%. SHAP analysis further revealed the nonlinear effects of predictors, with life satisfaction, sleep duration, and self-rated health showing high contributions and negative associations with depression risk, whereas limitations in activities of daily living (ADL), physical pain, and digestive system diseases were significantly associated with an increased risk of depression. Overall, the risk of depression in CKD patients is influenced by multiple dimensions, including psychological cognition, quality of life, physical function, and social environment. 
The predictive model developed in this study may provide a valuable reference for the early screening of high-risk populations. However, its applicability to non-CKD populations requires further validation.
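The depression outcome in this abstract comes from the CES-D 10 scale. As an illustration only, here is a minimal Python sketch of how that scale is commonly scored; the reverse-scored item positions and the score-of-10 cutoff are widely used conventions in the CHARLS literature, not details stated in the abstract.

```python
def cesd10_score(items, positive_idx=(4, 7)):
    """Total CES-D 10 score. Each of the 10 items is rated 0-3; the two
    positively worded items (zero-indexed here as 4 and 7 -- a common
    convention, assumed rather than taken from the paper) are reverse-scored."""
    assert len(items) == 10 and all(0 <= v <= 3 for v in items)
    return sum(3 - v if i in positive_idx else v for i, v in enumerate(items))

def has_depressive_symptoms(items, cutoff=10):
    """A total at or above the cutoff (>=10 is a widely used threshold,
    again an assumption here) flags clinically relevant depressive symptoms."""
    return cesd10_score(items) >= cutoff

print(has_depressive_symptoms([2, 1, 2, 1, 0, 2, 1, 0, 2, 1]))  # prints: True
```

The binary label produced this way would then be the target that the Elastic Net feature selection and XGBoost classifier described above are trained against.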
Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) in Medical Informatics)
Open Access Article
Knowledge Management in Manufacturing: Current Practices, Barriers, and Automation Potential for LLM-Supported Systems
by
Pius Finkel and Peter Wurster
Computers 2026, 15(5), 305; https://doi.org/10.3390/computers15050305 - 11 May 2026
Abstract
Knowledge management (KM) is increasingly becoming a critical success factor in Germany’s manufacturing industry due to demographic change, the shortage of a skilled workforce, and the growing need for flexible and resilient production systems. This study contributes empirical evidence on current KM practices in manufacturing and derives practice-oriented design implications for future LLM-supported KM systems. Two consecutive survey rounds involving six companies in Survey 1 and five companies in Survey 2 were conducted in order to identify current KM practices, recurring barriers, and design implications for large language model (LLM)-supported KM. The results show that KM is perceived as highly relevant, but is implemented only incompletely in practice. Across both datasets, central themes such as fragmented documentation practices, reliance on interpersonal transfer of tacit knowledge, and uneven integration of digital KM tools recur consistently. Based on the identified practices, the paper further derives areas in which LLMs may support or augment existing KM processes, particularly with regard to semantic retrieval, contextualization, onboarding, and the preservation of tacit knowledge. The findings also highlight that successful implementation of artificial intelligence (AI)-enabled KM in manufacturing will depend on technical feasibility, trust, usability, and organizational acceptance.
Full article
(This article belongs to the Special Issue AI in Knowledge Management)
Open Access Review
A Survey of Fault and Intrusion Tolerance Approaches for Scientific Workflow Scheduling in Cloud Computing
by
Mazen Farid, Oluwatosin Ahmed Amodu, Heng Siong Lim, Jamil Abedalrahim Jamil Alsayaydeh, Mohammed Fadhl Abdullah and Faten A. Saif
Computers 2026, 15(5), 304; https://doi.org/10.3390/computers15050304 - 10 May 2026
Abstract
To provide reliable services in the cloud, fault tolerance is perhaps the most important consideration. The inherent sensitivity of cloud services to failure hampers their performance and reliability. As a result, fault tolerance becomes a required characteristic for maintaining reliability, yet it is difficult to provide given the dynamic architecture and complex inter-dependencies of cloud systems. To address these reliability issues, many fault-tolerant approaches have been developed in the literature. This paper presents a survey of recent research that classifies the various fault- and intrusion-tolerance architectures. Furthermore, it provides a thorough critical analysis of existing fault-tolerance, intrusion-tolerance, and combined approaches aimed at enhancing the dependability, availability, and execution of cloud services. The survey also compares the frameworks of the studied systems against essential criteria such as cost, makespan, reliability, security, resource utilization, energy consumption, and failure ratio. This study aims to comprehensively review the subject so that researchers can draw insights from existing patterns in the literature and gain deeper perspectives on some of the challenging issues and prospects, supporting the development of highly resilient fault-tolerant and intrusion-resistant scheduling algorithms for current and future cloud applications.
Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
Open Access Article
EEG Fatigue Judgment Method Based on Approximate Nearest Neighbor Search
by
Yingjie Cui, Xu Li, Zhongxian Chen and Yan Li
Computers 2026, 15(5), 303; https://doi.org/10.3390/computers15050303 - 10 May 2026
Abstract
Fatigue seriously affects work efficiency and brings potential safety hazards, and electroencephalogram (EEG) serves as a valuable physiological indicator for fatigue monitoring, as it directly reflects underlying brain neural activity. A key characteristic in EEG fatigue research is that the feature spaces of pre-fatigue and post-fatigue EEG signals exhibit obvious spatial separation—this separation is caused by significant changes in brain electrical activity when the human body transitions from a normal awake state to a fatigue state. Existing EEG-based fatigue judgment methods mostly focus on binary classification, which fails to fully leverage the inherent spatial separation characteristic of pre-fatigue and post-fatigue feature spaces, making it difficult to achieve simple, efficient, and accurate fatigue judgment. To address this problem, this paper proposes an EEG fatigue judgment method based on feature space spatial separation and Approximate Nearest Neighbor Search (ANNS). The 16-channel pre-fatigue (Group A) and post-fatigue (Group B) EEG signals acquired from seven subjects are segmented and subjected to feature extraction, projecting the signals into a unified feature space. An ANNS index is constructed using feature vectors from both Group A and Group B, with each vector annotated by its corresponding class label. A separate test set (Group C) is utilized, and the k-nearest neighbors of each test feature vector are retrieved from the built ANNS index. The mental fatigue state is then identified via majority voting according to the class labels of the k-nearest neighbors. Experimental results demonstrate that the proposed method can effectively exploit the spatial separation between pre-fatigue and post-fatigue feature distributions, yielding an average single-subject classification accuracy of approximately 90%.
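The retrieval-and-vote step described in this abstract can be sketched in a few lines of Python. The sketch below uses brute-force k-nearest-neighbor search as a stand-in for the paper's ANNS index (production ANNS libraries trade exactness for speed); the toy feature vectors, labels, and function names are illustrative, not taken from the paper.

```python
import heapq
from collections import Counter

def sq_dist(u, v):
    """Squared Euclidean distance (monotone in the true distance, so
    ranking by it gives the same nearest neighbors)."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def knn_vote(index, query, k=5):
    """Return the majority label among the k nearest indexed vectors.

    `index` is a list of (feature_vector, label) pairs, e.g. labels
    'pre' (Group A) and 'post' (Group B) fatigue segments."""
    nearest = heapq.nsmallest(k, index, key=lambda item: sq_dist(item[0], query))
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy feature space exploiting the spatial separation the paper describes:
# pre-fatigue vectors cluster near the origin, post-fatigue ones far away.
index = [((0.1, 0.2), "pre"), ((0.0, 0.3), "pre"), ((0.2, 0.1), "pre"),
         ((2.0, 2.1), "post"), ((2.2, 1.9), "post"), ((1.9, 2.0), "post")]

print(knn_vote(index, (0.15, 0.25), k=3))  # a test vector from Group C; prints: pre
```

Because pre- and post-fatigue clusters are well separated in feature space, even a small k suffices for the vote to be decisive, which is the property the method relies on.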
Full article
(This article belongs to the Special Issue AI/ML-Driven EEG Signal Processing)
Open Access Article
An Efficient Quantum-Dot Cellular Automata Memory Architecture for Internet of Things Systems
by
B. S. Premananda, Mohsen Vahabi, Muhammad Zohaib, Seyed-Sajad Ahmadpour, M. Barath and K. R. Sreesha
Computers 2026, 15(5), 302; https://doi.org/10.3390/computers15050302 - 9 May 2026
Abstract
Internet of Things (IoT) nodes continuously acquire, buffer, and transmit sensor data under strict constraints on area, latency, and energy consumption. However, conventional complementary metal–oxide–semiconductor (CMOS)-based memory-access circuits face increasing power loss, parasitic effects, interconnect complexity, and sensitivity to process variations at the nanoscale. To address these limitations, this paper proposes a quantum-dot cellular automata (QCA)-based decoder-driven static random-access memory (SRAM)-access architecture for compact and energy-efficient IoT perception-layer memory. The proposed framework integrates three main components: a majority-logic RAM cell with feedback-based storage and non-destructive readout, a compact 2 × 4 decoder with enable and auxiliary asynchronous set/reset control, and a 1 × 4 SRAM array in which the decoder is embedded to reduce routing and clocking overhead. The circuit layouts were implemented and functionally verified using QCADesigner 2.0.3, while the energy behavior was evaluated using QCADesigner-E. Simulation results confirm correct write/read (W/R) and address-selection behavior. The proposed 2 × 4 decoder achieves 86 QCA cells, 0.08 µm2 occupied area, and one clocking unit, reducing cell count, area, and clocking by 48.19%, 50.00%, and 20.00%, respectively, compared with the best selected decoder baseline. The integrated 1 × 4 SRAM array achieves 684 cells and 14 clocking units, improving timing by 30.00% compared with the closest SRAM-array baseline. These results demonstrate that the proposed QCA-based memory-access structure provides a compact and low-overhead solution for energy-constrained IoT communication systems.
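The majority-logic building blocks this abstract refers to can be modeled behaviorally. The sketch below captures gate-level behavior only: a three-input QCA majority gate, the AND/OR gates derived from it, a hold-or-load storage loop, and a 2 × 4 decoder with enable. It is a functional illustration under those assumptions, not the authors' QCADesigner layout, and all function names are ours.

```python
def majority(a, b, c):
    """Three-input majority gate, the QCA logic primitive: M(a,b,c) = ab + ac + bc."""
    return (a & b) | (a & c) | (b & c)

def AND(a, b):
    return majority(a, b, 0)  # fixing one input to 0 yields AND

def OR(a, b):
    return majority(a, b, 1)  # fixing one input to 1 yields OR

def NOT(a):
    return a ^ 1              # inverter (valid for 0/1 bits)

def ram_next(q, d, write):
    """Next state of a feedback storage loop: load d when write=1, else hold q."""
    return OR(AND(write, d), AND(NOT(write), q))

def decoder_2to4(a1, a0, enable):
    """2x4 decoder with enable: asserts exactly one output line per address."""
    lines = []
    for idx in range(4):
        b1, b0 = (idx >> 1) & 1, idx & 1
        hit = AND(a1 ^ b1 ^ 1, a0 ^ b0 ^ 1)  # XNOR on each address bit
        lines.append(AND(hit, enable))
    return lines

print(decoder_2to4(1, 0, 1))  # address 10 selects line 2; prints: [0, 0, 1, 0]
```

In the paper's architecture the decoder output would gate the write-enable of each RAM cell, which is why embedding it in the array reduces routing and clocking overhead.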
Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data, 2nd Volume)
Open Access Article
Quantifying the Impact of Signal Simplification, Data Quantity, and Task Difficulty on Vision Transformer Performance for ECG Rhythm Classification
by
Jarod P. Hartley and W. Joseph MacInnes
Computers 2026, 15(5), 301; https://doi.org/10.3390/computers15050301 - 9 May 2026
Abstract
Vision transformers (ViTs) have demonstrated considerable promise for classifying electrocardiogram (ECG) rhythms. However, much of the existing research is conducted in highly controlled, data-sterile settings that fail to reflect the substantial variability present in real-world ECG signals. This paper seeks to address this gap by examining how signal simplification, data quantity, and task difficulty influence the performance of the SwinV2 ViT model in ECG rhythm classification. Through systematic analysis, we highlight that classifying highly abstracted signals yields only a limited impact on model performance, with all models achieving over 95% accuracy, while the amount of training data plays a crucial role with an almost 15% accuracy difference between the models trained on the most data and the least data. Finally, our analysis shows the model’s ability to effectively adapt to an increased class count, which is essential due to the varying nature of ECG diagnosis. In summary, these results highlight the importance of carefully balancing data clarity, dataset size, and diagnostic variety when designing ECG classification systems. Achieving this balance is crucial for building reliable and scalable AI solutions for cardiac assessment.
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain (3rd Edition))
Open Access Article
An Integrated Open-Source Software System for the Generation and Analysis of Subject-Specific Blood Flow Simulation Ensembles
by
Simon Leistikow, Thomas Miro, Adrian Kummerländer, Ali Nahardani, Katja Grün, Marcus Franz, Verena Hoerr, Mathias J. Krause and Lars Linsen
Computers 2026, 15(5), 300; https://doi.org/10.3390/computers15050300 - 9 May 2026
Abstract
Hemodynamic analysis of blood flow is critical for diagnosing cardiovascular diseases and investigating cardiovascular parameters, such as aneurysms and wall shear stress. For subject-specific analyses, the anatomy and blood flow of the subject can be captured non-invasively using structural and 4D Magnetic Resonance Imaging (MRI), respectively. Computational fluid dynamics (CFD), on the other hand, can be used to generate blood flow simulations. To generate and analyze subject-specific blood flow simulations, MRI and CFD have to be brought together. We present an interactive, customizable, and user-oriented visual analysis tool that integrates measured data and CFD simulations. Thus, our open-source tool supports both medical and numerical analysis workflows. It enables the creation of simulation ensembles with a high variety of parameters. Furthermore, it allows for visual and analytical examination of simulations and measurements through 2D embeddings. To demonstrate the effectiveness of our tool, we applied it to three real-world use cases, showcasing its ability to configure simulation ensembles and analyze blood flow. We evaluated our example cases together with MRI and CFD experts. By combining the strengths of both CFD and MRI, our tool provides a comprehensive understanding of hemodynamic parameters, facilitating accurate analysis of hemodynamic biomarkers.
Full article
Open Access Article
ASL Recognition and Game-Based Interaction: A Machine Learning-Driven, Gamified and Accessible Vocabulary Learning System for Deaf Learners
by
Stefanie Amiruzzaman, Raga Mouni Batchu, Md Amiruzzaman, Linh Ngo and M. Ali Akber Dewan
Computers 2026, 15(5), 299; https://doi.org/10.3390/computers15050299 - 7 May 2026
Abstract
Digital learning tools for American Sign Language (ASL) often lack the interactive depth necessary to engage learners effectively. This paper introduces a novel, browser-based word search game designed to facilitate ASL vocabulary familiarization through gamified interaction. The system employs a two-tier architecture consisting of a React-based frontend and a Flask-based backend. At its core, the application integrates a lightweight, skeleton-based Isolated Sign Language Recognition (ISLR) model, utilizing a Stacked Transformer-based Spatial-Temporal Attention Network to enable real-time webcam-based word entry during the configuration phase. This model, trained on the WLASL-100 dataset, achieves a Top-5 test accuracy of 88.48% with an average model inference latency of 141 ms, enabling real-time webcam input without proprietary hardware. Furthermore, we implement a constraint-satisfaction puzzle generation algorithm that achieves a 100% success rate in creating interlocked, multi-directional grids. Our results demonstrate that merging computer vision with pedagogical game mechanics provides an accessible, high-performance tool for the Deaf and Hard-of-Hearing (DHH) community, bridging the gap between static instruction and active linguistic practice.
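The constraint-satisfaction grid generation the abstract mentions can be sketched as a small backtracking search: a word fits a given cell and direction only if every cell along the way is empty or already holds the matching letter, and shared letters are exactly what makes the resulting grid interlocked. This is a generic illustration of the technique, not the authors' algorithm; the word list, grid size, and direction set are arbitrary.

```python
# The eight placement directions of a multi-directional word search grid.
DIRECTIONS = [(0, 1), (1, 0), (1, 1), (0, -1), (-1, 0), (-1, -1), (1, -1), (-1, 1)]

def fits(grid, word, r, c, dr, dc):
    """A word fits if every cell is in bounds and either empty ('.')
    or already holds the matching letter (this enables interlocking)."""
    n = len(grid)
    for i, ch in enumerate(word):
        rr, cc = r + i * dr, c + i * dc
        if not (0 <= rr < n and 0 <= cc < n):
            return False
        if grid[rr][cc] not in (".", ch):
            return False
    return True

def place_all(grid, words):
    """Backtracking placement: try every cell/direction for the next word,
    undo on failure. Returns True once all words are placed."""
    if not words:
        return True
    word, rest = words[0], words[1:]
    n = len(grid)
    for r in range(n):
        for c in range(n):
            for dr, dc in DIRECTIONS:
                if fits(grid, word, r, c, dr, dc):
                    saved = [row[:] for row in grid]
                    for i, ch in enumerate(word):
                        grid[r + i * dr][c + i * dc] = ch
                    if place_all(grid, rest):
                        return True
                    grid[:] = saved  # undo and keep searching
    return False

grid = [["."] * 6 for _ in range(6)]
print(place_all(grid, ["SIGN", "HAND", "ASL"]))  # prints: True
```

A production generator would additionally fill the remaining empty cells with distractor letters and, as the abstract implies, reject layouts that fail its interlocking constraints and retry.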
Full article
(This article belongs to the Special Issue Advances in Game-Based Learning, Gamification in Education and Serious Games)
Topics
Topic in
Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe Hou
Deadline: 31 May 2026
Topic in
Computers, Informatics, Information, Logistics, Mathematics, Algorithms
Decision Science Applications and Models (DSAM)
Topic Editors: Daniel Riera Terrén, Angel A. Juan, Majsa Ammuriova, Laura Calvet
Deadline: 30 June 2026
Topic in
AI, Computers, Education Sciences, Societies, Future Internet, Technologies
AI Trends in Teacher and Student Training
Topic Editors: José Fernández-Cerero, Marta Montenegro-Rueda
Deadline: 11 July 2026
Topic in
Applied Sciences, Automation, Computers, Electronics, Sensors, JCP, Mathematics
Intelligent Optimization, Decision-Making and Privacy Preservation in Cyber–Physical Systems
Topic Editors: Lijuan Zha, Jinliang Liu, Jian Liu
Deadline: 31 August 2026
Special Issues
Special Issue in
Computers
Explainable Artificial Intelligence for Signal Processing and Recognition
Guest Editor: Andres Alvarez-Meza
Deadline: 20 May 2026
Special Issue in
Computers
Intrusion Detection and Trust Provisioning in Edge-of-Things Environment
Guest Editors: Hooman Alavizadeh, Ahmad Salehi Shahraki
Deadline: 20 May 2026
Special Issue in
Computers
Advances in Failure Detection and Diagnostic Strategies: Enhancing Reliability and Safety
Guest Editor: Rafael Palacios
Deadline: 31 May 2026
Special Issue in
Computers
High-Performance Computing (HPC) and Computer Architecture
Guest Editors: Taskin Kocak, Ron (Rongyu) Lin
Deadline: 31 May 2026