Journal Description
Computers
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.5 days after submission; acceptance to publication is undertaken in 3.9 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds, and Computers.
Impact Factor: 4.2 (2024)
5-Year Impact Factor: 3.5 (2024)
Latest Articles
Artificial Intelligence-Based Models for Predicting Disease Course Risk Using Patient Data
Computers 2026, 15(2), 113; https://doi.org/10.3390/computers15020113 - 6 Feb 2026
Abstract
Nowadays, longitudinal data are common—typically high-dimensional, large, complex, and collected using various methods, with repeated outcomes. For example, the growing elderly population experiences health deterioration, including limitations in Instrumental Activities of Daily Living (IADLs), thereby increasing demand for long-term care. Understanding the risk of repeated IADLs and estimating the trajectory risk by identifying significant predictors will support effective care planning. Such data analysis requires a complex modeling framework. We illustrated a regressive modeling framework employing statistical and machine learning (ML) models on the Health and Retirement Study data to predict the trajectory of IADL risk as a function of predictors. Based on the accuracy measure, the regressive logistic regression (RLR) and the Decision Tree (DT) models showed the highest prediction accuracy: 0.90 to 0.93 for follow-ups 1–6, and 0.89 and 0.90, respectively, for follow-up 7. Analyses of the Area Under the Curve (AUC) and the Receiver Operating Characteristic (ROC) curve showed similar findings. Depression score, mobility score, large muscle score, and Difficulties of Activities of Daily Living (ADLs) score showed a significant positive association with IADLs (p < 0.05). The proposed modeling framework simplifies the analysis and risk prediction of repeated outcomes from complex datasets and could be automated by leveraging Artificial Intelligence (AI).
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
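The framework's "regressive" element, using the outcome observed at the previous follow-up as a covariate when predicting the next one, can be sketched with a plain logistic regression. This is a minimal illustration on synthetic toy data, not the authors' implementation; the covariates, coefficients, and gradient-descent fit are all assumptions:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain per-sample gradient-descent logistic regression (no regularization)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Regressive setup: predict IADL limitation at follow-up t from a covariate
# plus the outcome at follow-up t-1 (hypothetical toy data, assumed coefficients).
random.seed(0)
rows = []
for _ in range(200):
    depression = random.random()       # covariate
    prev_iadl = random.randint(0, 1)   # lagged outcome: the "regressive" term
    logit = 2.0 * depression + 1.5 * prev_iadl - 1.5
    y = 1 if random.random() < sigmoid(logit) else 0
    rows.append(([depression, prev_iadl], y))

X = [r[0] for r in rows]
y = [r[1] for r in rows]
w, b = fit_logistic(X, y)
acc = sum((sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5) == bool(yi)
          for xi, yi in zip(X, y)) / len(y)
```

Fitting each follow-up in turn, carrying the previous prediction forward, yields the trajectory of risk the abstract describes.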
Open Access Article
Multi-Objective Harris Hawks Optimization with NSGA-III for Feature Selection in Student Performance Prediction
by
Nabeel Al-Milli
Computers 2026, 15(2), 112; https://doi.org/10.3390/computers15020112 - 6 Feb 2026
Abstract
Student performance is a key factor in the success of any educational process; as a result, early detection of students at risk is critical for enabling timely and effective educational interventions. However, most educational datasets are complex and do not have a stable number of features. We therefore propose a new algorithm called MOHHO-NSGA-III, a multi-objective feature-selection framework that jointly optimizes classification performance, feature subset compactness, and prediction stability across cross-validation folds. The algorithm combines Harris Hawks Optimization (HHO), to obtain a good balance between exploration and exploitation, with NSGA-III, to preserve solution diversity along the Pareto front. Moreover, a diversity management strategy is incorporated to mitigate premature convergence. We validated the algorithm on the Portuguese and Mathematics datasets obtained from the UCI Student Performance repository. Selected features were evaluated with five classifiers (k-NN, Decision Tree, Naive Bayes, SVM, LDA) through 10-fold cross-validation repeated over 21 independent runs. MOHHO-NSGA-III consistently selected 12 out of 30 features (60% reduction) while achieving 4.5% higher average accuracy than the full feature set (Wilcoxon test, across all classifiers). The most frequently selected features were past failures, absences, and family support, aligning with educational research on student success factors. This suggests the proposed algorithm produces not just accurate but also interpretable models suitable for deployment in institutional early warning systems.
Full article
(This article belongs to the Section AI-Driven Innovations)
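The three objectives named above (classification performance, subset compactness, stability across folds) and the Pareto dominance relation that NSGA-III sorts candidates by can be sketched as follows; the objective definitions and the numbers are illustrative assumptions, not the paper's exact formulation:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def objectives(mask, fold_errors):
    """Three minimization objectives for a binary feature mask:
    mean cross-validation error, fraction of features kept, and
    error spread across folds (a simple instability proxy)."""
    err = sum(fold_errors) / len(fold_errors)
    size = sum(mask) / len(mask)
    instability = max(fold_errors) - min(fold_errors)
    return (err, size, instability)

# Hypothetical candidates: a compact subset vs. the full feature set.
subset = objectives([1, 0, 1, 0, 1, 0], [0.10, 0.12, 0.11])
full   = objectives([1, 1, 1, 1, 1, 1], [0.12, 0.15, 0.13])
```

A candidate like `subset` that dominates `full` on all three objectives survives non-dominated sorting; HHO proposes the masks, NSGA-III keeps the diverse front.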
Open Access Review
Transparency Mechanisms for Generative AI Use in Higher Education Assessment: A Systematic Scoping Review (2022–2026)
by
Itahisa Pérez-Pérez, Miriam Catalina González-Afonso, Zeus Plasencia-Carballo and David Pérez-Jorge
Computers 2026, 15(2), 111; https://doi.org/10.3390/computers15020111 - 6 Feb 2026
Abstract
The integration of generative AI in higher education has reignited debates around authorship and academic integrity, prompting approaches that emphasize transparency. This study identifies and synthesizes the transparency mechanisms described for assessment involving generative AI, recognizes implementation patterns, and analyzes the available evidence regarding compliance monitoring, rigor, workload, and acceptability. A scoping review (PRISMA 2020) was conducted using searches in Scopus, Web of Science, ERIC, and IEEE Xplore (2022–2026). Out of 92 records, 11 studies were included, and four dimensions were coded: compliance assessment approach, specified requirements, implementation patterns, and reported evidence. The results indicate limited operationalization: the absence of explicit assessment (27.3%) and unverified self-disclosure (18.2%) are predominant, along with implicit instructor judgment (18.2%). Requirements are often poorly specified (45.5%), and evidence concerning workload and acceptability is rarely reported (63.6%). Overall, the literature suggests that transparency is more feasible when it is proportionate, grounded in clear expectations, and aligned with the assessment design, while avoiding punitive or overly surveillant dynamics. The review protocol was prospectively registered in PROSPERO (CRD420261287226).
Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
Open Access Article
A Macrocognitive Design Taxonomy for Simulation-Based Training Systems: Bridging Cognitive Theory and Human–Computer Interaction
by
Jessica M. Johnson
Computers 2026, 15(2), 110; https://doi.org/10.3390/computers15020110 - 6 Feb 2026
Abstract
Simulation-based training systems are increasingly deployed to prepare learners for complex, safety-critical, and dynamic work environments. While advances in computing have enabled immersive and data-rich simulations, many systems remain optimized for procedural accuracy and surface-level task performance rather than the macrocognitive processes that underpin adaptive expertise. Macrocognition encompasses higher-order cognitive processes that are essential for performance transfer beyond controlled training conditions. When these processes are insufficiently supported, training systems risk fostering brittle strategies and negative training effects. This paper introduces a macrocognitive design taxonomy for simulation-based training systems derived from a large-scale meta-analysis examining the transfer of macrocognitive skills from immersive simulations to real-world training environments. Drawing on evidence synthesized from 111 studies spanning healthcare, industrial safety, skilled trades, and defense contexts, the taxonomy links macrocognitive theory to human–computer interaction (HCI) design affordances, computational data traces, and feedback and adaptation mechanisms shown to support transfer. Grounded in joint cognitive systems theory and learning engineering practice, the taxonomy treats macrocognition as a designable and computable system concern informed by empirical transfer effects rather than as an abstract explanatory construct.
Full article
(This article belongs to the Special Issue Innovative Research in Human–Computer Interactions)
Open Access Systematic Review
Wearable Technology in Pediatric Cardiac Care: A Scoping Review of Parent Acceptance and Patient Comfort
by
Valentina La Marca, Tara Chatty, Animesh Tandon and Colin K. Drummond
Computers 2026, 15(2), 109; https://doi.org/10.3390/computers15020109 - 6 Feb 2026
Abstract
While wearable technology has advanced pediatric medical monitoring, home-based success in cardiology depends heavily on human-centered design. This scoping review synthesizes evidence on the human factors—specifically parental acceptance, child comfort, and usability—that determine the real-world adoption of pediatric cardiac wearables. By systematically searching PubMed, Scopus, Cochrane Library, and ClinicalTrials.gov, we mapped the evidence surrounding diverse technologies, including vital sign and ECG monitors. Our findings reveal a persistent “performance-usability gap”: while devices show high clinical efficacy in controlled settings, their long-term utility is frequently compromised by poor wearability, skin irritation, and a lack of alignment with family routines. The review identifies that current research structures and regulatory pathways reward quantifiable biomedical outcomes, such as sensor accuracy, while routinely sidelining difficult-to-measure factors like parental buy-in and child autonomy. Consequently, we highlight critical gaps in the design process that prioritize clinical specifications over the lived experience of the patient. We conclude that a paradigm shift toward human-centered engineering is required to move beyond controlled study success. These results provide a necessary roadmap for developers and regulators to prioritize the “invisible” outcomes of comfort and compliance, which are essential for the effective, sustained home-based monitoring of pediatric patients.
Full article
Open Access Article
Design and Implementation of a Trusted Food Supply Chain Traceability System with Incentive Using Hyperledger Fabric
by
Zhiyang Zhou, Yaokai Feng and Kouichi Sakurai
Computers 2026, 15(2), 108; https://doi.org/10.3390/computers15020108 - 5 Feb 2026
Abstract
Effective supply chain traceability is indispensable for ensuring food safety, which is a significant social issue. Traditional traceability systems are mostly based on centralized databases, relying on a single entity or organization and facing problems such as insufficient transparency and the risk of data tampering. To address these issues, many studies have adopted blockchain technology, which offers advantages such as decentralization and immutability. However, challenges such as data credibility and insufficient protection of private data remain. This study proposes a multi-channel architecture based on blockchain (Hyperledger Fabric in this study), in which data are partitioned and managed across dedicated channels to strengthen the protection of sensitive information. Furthermore, a trust and incentive design is implemented, featuring a trust-value calculation function and a reward–penalty mechanism that encourage participants to upload more truthful data and improve the reliability of data before it is recorded on the blockchain. In this paper, the design and implementation of the proposed system are explained in detail, and its performance is examined using Hyperledger Caliper, a blockchain performance benchmark framework. Functional evaluations indicate that the proposed system is correctly implemented and supports supply chain traceability, trust and incentive management, privacy protection, and other functions as designed, while performance evaluations indicate that it maintains stable performance under higher workloads, suggesting that the proposed approach is practical and applicable to food supply chain traceability scenarios.
Full article
(This article belongs to the Special Issue Revolutionizing Industries: The Impact of Blockchain Technology)
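A minimal sketch of the kind of reward–penalty trust update described above; the update rule, rates, and bounds are assumptions for illustration, not the paper's actual trust-value function:

```python
def update_trust(trust, data_ok, reward=0.05, penalty=0.15):
    """One reward-penalty step: truthful uploads raise trust slowly,
    data flagged as false lowers it faster; trust is clamped to [0, 1]."""
    trust = trust + reward if data_ok else trust - penalty
    return max(0.0, min(1.0, trust))

# A participant starts neutral, uploads twice truthfully, is caught once,
# then uploads truthfully again (hypothetical sequence).
t = 0.5
for ok in [True, True, False, True]:
    t = update_trust(t, ok)
```

Making the penalty larger than the reward, as here, is the usual way such mechanisms discourage occasional cheating.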
Open Access Article
Unsupervised TTL-Based Deep Learning for Anomaly Detection in SIM-Tagged Network Traffic
by
Babe Haiba and Najat Rafalia
Computers 2026, 15(2), 107; https://doi.org/10.3390/computers15020107 - 4 Feb 2026
Abstract
The rise of SIM cloning, identity spoofing, and covert manipulation in mobile and IoT networks has created an urgent need for continuous post-registration verification. This work introduces an unsupervised deep learning framework for detecting behavioral anomalies in SIM-tagged network flows by modeling the intrinsic structure of benign behavioral descriptors (TTL, timing drift, payload statistics). A Temporal Deep Autoencoder (TDAE) combining Conv1D layers and an LSTM encoder is trained exclusively on normal traffic and used to identify deviations through reconstruction error, enabling one-class (label-free) training. For deployment, alarms are set using an unsupervised quantile threshold calibrated on benign traffic with a false-alarm budget; is reported only as a diagnostic reference for model comparison. To ensure realism, a large-scale corpus of 3.6 million SIM-tagged flows was constructed by enriching public IoT traffic with pseudo-operator identifiers (synthetic SIM tags derived from device identifiers) and controlled anomaly injections. In a cross-domain transfer experiment under a SIM-grouped protocol, training on clean Cassavia-like traffic and testing on attack-rich Guarascio-like flows yields a PR-AUC of 0.93 for the proposed Conv-LSTM Temporal Deep Autoencoder, outperforming Dense Autoencoder, Isolation Forest, One-Class SVM, and LOF baselines. Conversely, the reverse direction collapses to PR-AUC , confirming the absence of data leakage and the validity of one-class behavioral learning. Sensitivity analysis shows that performance is stable around the unsupervised quantile operating point. Overall, the proposed framework provides a lightweight, interpretable, and data-efficient behavioral verification layer for detecting cloned or unauthorized SIM activity, complementing existing registration mechanisms in next-generation telecom and IoT ecosystems.
Full article
(This article belongs to the Special Issue Emerging Trends in Network Security and Applied Cryptography)
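The deployment step described above, calibrating an alarm threshold as a quantile of reconstruction errors on benign traffic, can be sketched as follows; the Gaussian stand-in for TDAE reconstruction errors and the 99% quantile are illustrative assumptions:

```python
import random

def quantile_threshold(errors, q=0.99):
    """Alarm threshold = q-quantile of reconstruction errors on benign
    traffic, giving a false-alarm budget of roughly 1 - q."""
    s = sorted(errors)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]

random.seed(1)
# Stand-in for TDAE reconstruction errors on benign flows (assumption).
benign = [random.gauss(0.10, 0.02) for _ in range(1000)]
thr = quantile_threshold(benign, q=0.99)
false_alarm_rate = sum(e > thr for e in benign) / len(benign)
anomaly_error = 0.50   # an injected anomaly reconstructs poorly (assumption)
```

Any flow whose reconstruction error exceeds `thr` raises an alarm, so the benign false-alarm rate stays near the chosen budget without needing attack labels.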
Open Access Article
High-Security Image Encryption Using Baker Map Confusion and Extended PWAM Chaotic Diffusion
by
Ayman H. Abd El-Aziem, Marwa Hussien Mohamed and Ahmed Abdelhafeez
Computers 2026, 15(2), 106; https://doi.org/10.3390/computers15020106 - 3 Feb 2026
Abstract
The heavy use of digital images across network systems has become a major concern regarding data confidentiality and unauthorized access. Conventional image encryption techniques rarely achieve high security levels efficiently, especially in real-time and resource-constrained environments. These challenges motivate the development of more robust and efficient encryption mechanisms. In this paper, a dual-chaotic image encryption framework is developed in which two complementary chaotic systems are combined to effectively realize confusion and diffusion. The proposed method uses a chaotic permutation mechanism to scramble pixel positions and enhanced chaotic diffusion to change pixel values, eliminating statistical correlations. An extended family of piecewise affine chaotic maps is designed to enhance the dynamic range and complexity of the diffusion process, strengthening resistance against cryptographic attacks. Intensive experimental validation confirms that the proposed scheme effectively obscures visual information and strongly reduces pixel correlations in the encrypted images. Statistical and security analyses further evidence high entropy values, uniform histogram distributions, high resistance to differential attacks, and improved robustness compared with some conventional image encryption techniques. The results also show extremely low computational overhead, allowing for efficient implementation. The proposed encryption framework provides stronger security for digital image transmission and storage while keeping performance practical. Given its robustness, efficiency, and scalability, it is equally suitable for real-time multimedia applications and secure communication systems, offering a reliable solution for modern image protection requirements.
Full article
(This article belongs to the Special Issue Multimedia Data and Network Security)
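The confusion-then-diffusion structure described above can be sketched with a single logistic map standing in for both the Baker map permutation and the extended PWAM diffusion; this substitution, the key value, and the toy 16-pixel image are assumptions, not the paper's scheme:

```python
def logistic_seq(x, n, r=3.99):
    """Chaotic sequence from the logistic map, used here as a stand-in
    for the paper's Baker map and extended PWAM family (assumption)."""
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

def keystream(key, n):
    return [int(c * 256) % 256 for c in logistic_seq(key * 0.5, n)]

def permutation(key, n):
    chaos = logistic_seq(key, n)
    return sorted(range(n), key=lambda i: chaos[i])

def encrypt(pixels, key):
    n = len(pixels)
    perm = permutation(key, n)
    shuffled = [pixels[p] for p in perm]      # confusion: scramble pixel positions
    return [p ^ s for p, s in zip(shuffled, keystream(key, n))]  # diffusion: XOR

def decrypt(cipher, key):
    n = len(cipher)
    shuffled = [c ^ s for c, s in zip(cipher, keystream(key, n))]
    plain = [0] * n
    for i, p in enumerate(permutation(key, n)):   # invert the shuffle
        plain[p] = shuffled[i]
    return plain

img = list(range(16))            # toy 4x4 "image" of byte values
ct = encrypt(img, key=0.3141592)
```

Both the permutation and the keystream regenerate from the shared key, so decryption needs only the ciphertext and the key.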
Open Access Article
The Development of a Wildfire Early Warning System Using LoRa Technology
by
Supawee Makdee, Ponglert Sangkaphet, Chanidapa Boonprasom, Buppawan Chaleamwong and Nawara Chansiri
Computers 2026, 15(2), 105; https://doi.org/10.3390/computers15020105 - 2 Feb 2026
Abstract
Sok Chan Forest, located in Lao Suea Kok District, Ubon Ratchathani Province, Thailand, is frequently affected by wildfires during the dry season, resulting in significant environmental degradation and adverse impacts on the livelihoods of local communities. In this study, we outline the development of a prototype wildfire early warning system utilizing LoRa technology to address the long-distance data transmission limitations that are commonly encountered when using conventional Internet of Things (IoT) solutions. The proposed system comprises sensor nodes that communicate peer-to-peer with a central node, which subsequently relays the collected data to a remote database server via the internet. Real-time alerts are disseminated through both a smartphone application and a web-based platform, thereby facilitating timely notification of authorities and community members. Field experiments in Sok Chan Forest demonstrated reliable single-hop communication with a 100% packet delivery ratio at distances up to 1500 m, positive SNR, and RSSI levels above receiver sensitivity, as well as sub-second end-to-end detection latency in both single- and two-hop configurations. A controlled alarm accuracy evaluation yielded an overall classification accuracy of 91.7%, with perfect precision for the Fire class, while a user study involving five software development experts and fifteen firefighters yielded an average effectiveness score of 3.84, reflecting a high level of operational efficacy.
Full article
(This article belongs to the Special Issue Wireless Sensor Networks in IoT)
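The link-quality criteria reported above (100% packet delivery, positive SNR, RSSI above receiver sensitivity) can be sketched as a simple acceptance check; the -120 dBm sensitivity figure is an assumed typical LoRa value, not taken from the study:

```python
def link_ok(sent, received, snr_db, rssi_dbm, sensitivity_dbm=-120.0):
    """Acceptance check mirroring the reported field criteria: full packet
    delivery, positive SNR, and RSSI above receiver sensitivity.
    The -120 dBm default is an assumed typical LoRa figure."""
    pdr = received / sent if sent else 0.0
    return pdr, (pdr == 1.0 and snr_db > 0.0 and rssi_dbm > sensitivity_dbm)

# Hypothetical single-hop trial at 1500 m.
pdr, ok = link_ok(sent=200, received=200, snr_db=7.5, rssi_dbm=-102.0)
```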
Open Access Article
Zero-Inflated Data Analysis Using Graph Neural Networks with Convolution
by
Sunghae Jun
Computers 2026, 15(2), 104; https://doi.org/10.3390/computers15020104 - 2 Feb 2026
Abstract
Zero-inflated count data are characterized by an excessive frequency of zeros that cannot be adequately analyzed by a single distribution, such as Poisson or negative binomial. This problem is pervasive in many practical applications, including document–keyword matrices derived from text corpora, where most keyword frequencies are zero. Conventional statistical approaches, such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models, explicitly separate a structural zero component from a count component, but they typically assume independent observations and can be unstable when covariates are high-dimensional and sparse. To address these limitations, this paper proposes a graph-based zero-inflated learning framework that combines simple graph convolution (SGC) with zero-inflated count regression heads such as ZIP and ZINB. We first construct an observation graph by connecting similar samples, and then apply SGC to propagate and smooth features over the graph, producing convolutional representations that incorporate neighborhood information while remaining computationally lightweight. The resulting representations are used as covariates in ZIP and ZINB heads, which preserve probabilistic interpretability through maximum likelihood learning. Our experiments on simulated zero-inflated datasets with controlled zero ratios demonstrate that the proposed ZIP+SGC and ZINB+SGC consistently reduce prediction errors compared with their non-graph baselines, as measured by mean absolute error and root mean squared error. Overall, the proposed approach provides an efficient and interpretable way to integrate graph neural computation with zero-inflated modeling for sparse count prediction problems.
Full article
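The SGC step described above, propagating and smoothing features over the observation graph before they enter the ZIP/ZINB heads, can be sketched as repeated neighbourhood averaging; the row-normalized propagation with self-loops and the toy graph are illustrative assumptions:

```python
def sgc_features(adj, X, k=2):
    """Simple graph convolution: k rounds of averaging each node's
    features with its neighbours (adjacency with self-loops, row-normalized),
    producing smoothed covariates for the ZIP/ZINB regression heads."""
    n = len(adj)
    # add self-loops, then row-normalize
    A = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    A = [[v / sum(row) for v in row] for row in A]
    for _ in range(k):
        X = [[sum(A[i][j] * X[j][f] for j in range(n))
              for f in range(len(X[0]))] for i in range(n)]
    return X

# Toy observation graph: samples 0 and 1 are similar, sample 2 is isolated.
adj = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
X = [[1.0], [0.0], [5.0]]
Z = sgc_features(adj, X, k=1)
```

Connected samples get pulled toward each other's feature values while isolated samples are untouched, which is exactly the lightweight smoothing SGC contributes.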

Open Access Article
Sem4EDA: A Knowledge-Graph and Rule-Based Framework for Automated Fault Detection and Energy Optimization in EDA-IoT Systems
by
Antonios Pliatsios and Michael Dossis
Computers 2026, 15(2), 103; https://doi.org/10.3390/computers15020103 - 2 Feb 2026
Abstract
This paper presents Sem4EDA, an ontology-driven and rule-based framework for automated fault diagnosis and energy-aware optimization in Electronic Design Automation (EDA) and Internet of Things (IoT) environments. The escalating complexity of modern hardware systems, particularly within IoT and embedded domains, presents formidable challenges for traditional EDA methodologies. While EDA tools excel at design and simulation, they often operate as siloed applications, lacking the semantic context necessary for intelligent fault diagnosis and system-level optimization. Sem4EDA addresses this gap by providing a comprehensive ontological framework developed in OWL 2, creating a unified, machine-interpretable model of hardware components, EDA design processes, fault modalities, and IoT operational contexts. We present a rule-based reasoning system implemented through SPARQL queries, which operates atop this knowledge base to automate the detection of complex faults such as timing violations, power inefficiencies, and thermal issues. A detailed case study, conducted via a large-scale trace-driven co-simulation of a smart city environment, demonstrates the framework’s practical efficacy: by analyzing simulated temperature sensor telemetry and Field-Programmable Gate Array (FPGA) configurations, Sem4EDA identified specific energy inefficiencies and overheating risks, leading to actionable optimization strategies that resulted in a 23.7% reduction in power consumption and a 15.6% decrease in operating temperature for the modeled sensor cluster. This work establishes a foundational step towards more autonomous, resilient, and semantically aware hardware design and management systems.
Full article
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)
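The rule-based detection layer can be sketched in miniature; the paper encodes such rules as SPARQL queries over an OWL 2 knowledge graph, so the Python conditionals and threshold values below are stand-in assumptions:

```python
# Hypothetical thresholds; Sem4EDA expresses such rules as SPARQL queries
# over an OWL 2 knowledge graph rather than as Python conditionals.
RULES = [
    ("overheating_risk", lambda s: s["temp_c"] > 85.0),
    ("power_inefficiency", lambda s: s["power_w"] > 2.0 and s["util"] < 0.2),
]

def detect_faults(sample):
    """Return the names of all rules the telemetry sample violates."""
    return [name for name, rule in RULES if rule(sample)]

# A hot, mostly idle FPGA node drawing high power (hypothetical telemetry).
faults = detect_faults({"temp_c": 91.0, "power_w": 2.4, "util": 0.1})
```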
Open Access Review
A Comparative Review of Quantum Neural Networks and Classical Machine Learning for Cardiovascular Disease Risk Prediction
by
Nouf Ali AL Ajmi and Muhammad Shoaib
Computers 2026, 15(2), 102; https://doi.org/10.3390/computers15020102 - 2 Feb 2026
Abstract
Cardiac risk prediction is critical for the early detection and prevention of cardiovascular diseases, a leading global cause of mortality. In response to the growing volume and complexity of healthcare data, there has been increasing reliance on computational approaches to enhance clinical decision-making and improve early detection of cardiac risks. Although classical machine learning techniques have demonstrated strong performance in cardiovascular disease prediction, their efficiency and scalability are increasingly challenged by high-dimensional and large-scale medical datasets. Emerging advances in quantum computing have introduced quantum machine learning (QML) as a promising alternative, offering novel computational paradigms with the potential to outperform classical methods in terms of speed and problem-solving capability. This review analyzed twelve studies, evaluating data types, quantum architectures, performance metrics, and comparative efficacy against classical machine learning models. Our findings indicate that QNNs show promise for enhanced predictive accuracy and computational efficiency. However, significant challenges in scalability, noise resilience, and clinical integration persist. The translation of quantum advantage into clinical practice necessitates further validation on large-scale, diverse datasets.
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain (3rd Edition))
Open Access Article
A Novel Dual-Layer Quantum-Resilient Encryption Strategy for UAV–Cloud Communication Using Adaptive Lightweight Ciphers and Hybrid ECC–PQC
by
Mahmoud Aljamal, Bashar S. Khassawneh, Ayoub Alsarhan, Saif Okour, Latifa Abdullah Almusfar, Bashair Faisal AlThani and Waad Aldossary
Computers 2026, 15(2), 101; https://doi.org/10.3390/computers15020101 - 2 Feb 2026
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly integrated into Internet of Things (IoT) ecosystems for applications such as surveillance, disaster response, environmental monitoring, and logistics. These missions demand reliable and secure communication between UAVs and cloud platforms for command, control, and data storage. However, UAV communication channels are highly vulnerable to eavesdropping, spoofing, and man-in-the-middle attacks due to their wireless and often long-range nature. Traditional cryptographic schemes either impose excessive computational overhead on resource-constrained UAVs or lack sufficient robustness for cloud-level security. To address this challenge, we propose a dual-layer encryption architecture that balances lightweight efficiency with strong cryptographic guarantees. Unlike prior dual-layer approaches, the proposed framework introduces a context-aware adaptive lightweight layer for UAV-to-gateway communication and a hybrid post-quantum layer for gateway-to-cloud security, enabling dynamic cipher selection, energy-aware key scheduling, and quantum-resilient key establishment. In the first layer, UAV-to-gateway communication employs a lightweight symmetric encryption scheme optimized for low latency and minimal energy consumption. In the second layer, gateway-to-cloud communication uses post-quantum asymmetric encryption to ensure resilience against emerging quantum threats. The architecture is further reinforced with optional multi-path hardening and blockchain-assisted key lifecycle management to enhance scalability and tamper-proof auditability. Experimental evaluation using a UAV testbed and cloud integration shows that the proposed framework achieves 99.85% confidentiality preservation, reduces computational overhead on UAVs by 42%, and improves end-to-end latency by 35% compared to conventional single-layer encryption schemes. 
These results confirm that the proposed adaptive and hybridized dual-layer design provides a scalable, secure, and resource-aware solution for UAV-to-cloud communication, offering both present-day practicality and future-proof cryptographic resilience.
(This article belongs to the Special Issue Emerging Trends in Network Security and Applied Cryptography)
Open Access Article
The Cognitive Affective Model of Motion Capture Training: A Theoretical Framework for Enhancing Embodied Learning and Creative Skill Development in Computer Animation Design
by
Xinyi Jiang, Zainuddin Ibrahim, Jing Jiang, Jiafeng Wang and Gang Liu
Computers 2026, 15(2), 100; https://doi.org/10.3390/computers15020100 - 2 Feb 2026
Abstract
There has been a surge in interest in and implementation of motion capture (MoCap)-based lessons in animation, creative education, and performance training, leading to an increasing number of studies on this topic. While recent studies have summarized these developments, few synthesize the existing findings into a theoretical framework. Building upon the Cognitive Affective Model of Immersive Learning (CAMIL), this study proposes the Cognitive Affective Model of Motion Capture Training (CAMMT) as a theoretical and research-based framework for explaining how MoCap fosters creative cognition in computer animation practice. The model identifies six affective and cognitive constructs (Control and Active Learning, Reflective Thinking, Perceptual Motor Skills, Emotional Expressive, Artistic Innovation, and Collaborative Construction) that describe how MoCap’s technological affordances of immersion and interactivity support creativity in animation practice. The findings indicate that instructional and design methods from less immersive media can be effectively adapted to MoCap environments. Although originally developed for animation education, CAMMT contributes to broader theories of creative design processes by linking cognitive, affective, and performative dimensions of embodied interaction. This study offers guidance for researchers and designers exploring creative and embodied interaction across digital performance and design contexts.
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications (2nd Edition))
Open Access Systematic Review
Research Advances in Maize Crop Disease Detection Using Machine Learning and Deep Learning Approaches
by
Thangavel Murugan, Nasurudeen Ahamed Noor Mohamed Badusha, Nura Shifa Musa, Eiman Mubarak Masoud Alahbabi, Ruqayyah Ali Ahmed Alyammahi, Abebe Belay Adege, Afedi Abdi and Zemzem Mohammed Megersa
Computers 2026, 15(2), 99; https://doi.org/10.3390/computers15020099 - 2 Feb 2026
Abstract
Recent developments in machine learning (ML) and deep learning (DL) algorithms have introduced a new approach to the automatic detection of plant diseases. However, existing reviews of this field tend to be broader than maize-focused and do not offer a comprehensive synthesis of how ML and DL methods have been applied to image-based detection of maize leaf disease. Following the PRISMA guidelines, this systematic review of 102 peer-reviewed papers published between 2017 and 2025 examined methods and approaches used to classify leaf images for detecting disease in maize plants. The 102 papers were categorized by disease type, dataset, task, learning approach, architecture, and metrics used to evaluate performance. The analysis results indicate that traditional ML methods, when combined with effective feature engineering, can achieve classification accuracies of approximately 79–100%, while DL methods, especially CNNs, provide consistent, superior classification performance on controlled benchmark datasets (up to 99.9%). Yet in “real field” conditions, many of these improvements typically decrease or disappear due to dataset bias, environmental factors, and limited evaluation. The review provides a comprehensive overview of emerging trends, performance trade-offs, and ongoing gaps in developing field-ready, explainable, reliable, and scalable maize leaf disease detection systems.
(This article belongs to the Special Issue Intelligent Computing and Sensing Systems for Sustainable Precision Agriculture)
Open Access Article
Bridging the Gap in IoT Education: A Comparative Analysis of Project-Based Learning Outcomes Across Industrial, Environmental, and Electrical Engineering Disciplines
by
Verónica Guevara, Miguel Tupac-Yupanqui and Cristian Vidal-Silva
Computers 2026, 15(2), 98; https://doi.org/10.3390/computers15020098 - 2 Feb 2026
Abstract
The rapid integration of Industry 4.0 technologies into non-computer engineering curricula presents a significant pedagogical challenge: avoiding a “one-size-fits-all” approach. While Project-Based Learning (PBL) is widely advocated for teaching Internet of Things (IoT), little research addresses how students from different engineering branches—specifically Industrial, Environmental, and Electrical—respond to identical technical requirements. This study evaluates the deployment of ESP32-based IoT solutions for local agriculture and beekeeping problems in the Peruvian Andes, analyzing the performance and perception of three distinct student cohorts (Total N = 95). Results indicate a significant divergence in learning outcomes and satisfaction. The cohort predominantly composed of Industrial Engineering students (NRC-33563) demonstrated lower adherence to technical code modularization (88% vs. 97%) and lower overall course recommendation rates compared to the mixed cohorts (NRC-33562/33561), who reported higher engagement with the hardware implementation. These findings suggest that while Environmental and Electrical engineering students naturally align with the sensing and actuation layers of IoT, Industrial engineering students may require a curriculum that emphasizes process optimization and data analytics over raw firmware development. We propose a differentiated pedagogical framework to maximize engagement and competency acquisition across diverse engineering disciplines.
Open Access Article
An Interpretable Multi-Dataset Learning Framework for Breast Cancer Prediction Using Clinical and Biomedical Tabular Data
by
Muhammad Ateeb Ather, Abdullah, Zulaikha Fatima, José Luis Oropeza Rodríguez and Grigori Sidorov
Computers 2026, 15(2), 97; https://doi.org/10.3390/computers15020097 - 2 Feb 2026
Abstract
Despite numerous advances in the treatment and management of breast cancer, the disease remains a cause of death for millions of women worldwide each year; reliable diagnostic assistance tools that can predict the disease in its early stages are therefore needed. In this research, alongside the proposed framework, we performed a comprehensive comparative assessment of traditional machine learning, deep learning, and transformer-based models for predicting breast cancer in a multi-dataset environment. To improve diversity and reduce potential biases in the datasets, we combined three datasets covering biopsy morphology (WDBC), biochemical and metabolic properties (Coimbra), and cytological attributes (WBCO), exposing the models to heterogeneous feature domains and evaluating robustness under distributional variation. Based on this thorough evaluation of traditional machine learning, deep learning, and transformer models, we designed a hybrid architecture, the FT-Transformer-Attention-LSTM-SVM framework, that is well-suited to processing and analyzing tabular biomedical datasets. The proposed design achieves 99.90% accuracy in the primary test environment, a mean accuracy of 99.56% under 10-fold cross-validation, and 98.50% accuracy in the WBCO test environment, with a significance level below 0.0001 in a paired two-sample t-test. We performed feature importance assessment with the SHAP and LIME techniques and demonstrated that the model’s decisions rest on important attributes such as radius, concavity, perimeter, compactness, and texture. An ablation study further confirmed the contribution of the FT-Transformer component.
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain (3rd Edition))
Open Access Article
Exploring Net Promoter Score with Machine Learning and Explainable Artificial Intelligence: Evidence from Brazilian Broadband Services
by
Matheus Raphael Elero, Rafael Henrique Palma Lima, Bruno Samways dos Santos and Gislaine Camila Lapasini Leal
Computers 2026, 15(2), 96; https://doi.org/10.3390/computers15020096 - 2 Feb 2026
Abstract
Despite the growing use of machine learning (ML) for analyzing service quality and customer satisfaction, empirical studies based on Brazilian broadband telecommunications data remain scarce. This is especially true for studies that leverage publicly available nationwide datasets. To address this gap, this study investigates customer satisfaction with broadband internet services in Brazil using supervised ML and explainable artificial intelligence (XAI) techniques applied to survey data collected by ANATEL between 2017 and 2020. Customer satisfaction was operationalized using the Net Promoter Score (NPS) reference scale, and three formulations of the scale were evaluated: (i) a binary model grouping ratings ≥ 8 as satisfied and ≤7 as dissatisfied (so one part of the neutral range counts as satisfied and the other as dissatisfied); (ii) a binary model excluding neutral responses (ratings 7–8) and retaining only detractors (≤6) and promoters (≥9); and (iii) a multiclass model following the original NPS categories (detractors, neutrals, and promoters). Nine ML classifiers were trained and validated on tabular data for each formulation. Model interpretability was addressed through SHAP and feature importance analysis using tree-based models. The results indicate that Histogram Gradient Boosting and Random Forest achieve the most robust and stable performance, particularly in binary classification scenarios. The analysis of neutral customers reveals classification ambiguity: scores of 7 tend toward dissatisfaction, while scores of 8 tend toward satisfaction. XAI analyses consistently identify browsing speed, billing accuracy, fulfillment of advertised service conditions, and connection stability as the most influential predictors of satisfaction. By combining predictive performance with model transparency, this study provides computational evidence for explainable satisfaction modeling and highlights the value of public regulatory datasets for reproducible ML research.
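The three label formulations in the abstract follow directly from the stated thresholds and can be restated in a short sketch (an illustration of those thresholds, not the authors' code; the function names are assumptions):

```python
def nps_category(score: int) -> str:
    """Formulation (iii): original NPS categories on a 0-10 rating."""
    if score <= 6:
        return "detractor"
    if score <= 8:
        return "neutral"
    return "promoter"

def binary_split_neutrals(score: int) -> str:
    """Formulation (i): ratings >= 8 satisfied, <= 7 dissatisfied,
    so the neutral range 7-8 is split between the two classes."""
    return "satisfied" if score >= 8 else "dissatisfied"

def binary_drop_neutrals(score: int):
    """Formulation (ii): keep only detractors (<= 6) and promoters (>= 9);
    neutral ratings 7-8 are excluded (None here)."""
    if score <= 6:
        return "dissatisfied"
    if score >= 9:
        return "satisfied"
    return None

print([nps_category(s) for s in [5, 7, 8, 10]])
# ['detractor', 'neutral', 'neutral', 'promoter']
```

Note how the ambiguity the paper reports falls out of the thresholds: a rating of 7 is neutral in (iii), dissatisfied in (i), and dropped entirely in (ii).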
(This article belongs to the Topic Opportunities and Challenges in Explainable Artificial Intelligence (XAI))
Open Access Article
A Hybrid of ResNext101_32x8d and Swin Transformer Networks with XAI for Alzheimer’s Disease Detection
by
Saeed Mohsen, Amr Yousef and M. Abdel-Aziz
Computers 2026, 15(2), 95; https://doi.org/10.3390/computers15020095 - 2 Feb 2026
Abstract
Medical images obtained from advanced imaging devices play a crucial role in supporting disease diagnosis and detection. Nevertheless, acquiring such images is often costly and storage-intensive, and diagnosing individuals is time-consuming. The use of artificial intelligence (AI)-based automated diagnostic systems provides potential solutions to address the limitations of cost and diagnostic time. In particular, deep learning and explainable AI (XAI) techniques provide a reliable and robust approach to classifying medical images. This paper presents a hybrid model comprising two networks, ResNext101_32x8d and Swin Transformer, to differentiate four categories of Alzheimer’s disease: no dementia, very mild dementia, mild dementia, and moderate dementia. The combination of the two networks is applied to imbalanced data, trained on 5120 MRI images, validated on 768 images, and tested on 512 other images. Grad-CAM and LIME techniques with a saliency map are employed to interpret the predictions of the model, providing transparent and clinically interpretable decision support. The proposed combination is realized through a TensorFlow framework, incorporating hyperparameter optimization and various data augmentation methods. The performance evaluation of the proposed model is conducted through several metrics, including the confusion matrix, precision-recall (PR) and receiver operating characteristic (ROC) curves, and accuracy and loss curves. Experimental results reveal that the hybrid of ResNext101_32x8d and Swin Transformer achieved a testing accuracy of 98.83% with a corresponding loss of 0.1019. Furthermore, for the combination “ResNext101_32x8d + Swin Transformer”, the precision, F1-score, and recall were 99.39%, 99.15%, and 98.91%, respectively, while the area under the ROC curve (AUC) was 1.00. The combination of the proposed networks with XAI techniques constitutes a unique contribution to advancing medical AI systems and assisting radiologists during Alzheimer’s disease screening.
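The precision, recall, and F1 figures reported above follow the standard definitions, which a small self-contained sketch can restate (generic formulas with hypothetical counts, not the paper's evaluation code):

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Standard metrics from confusion-matrix counts:
    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one class, for illustration only.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.9 0.9 0.9
```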
(This article belongs to the Section AI-Driven Innovations)
Open Access Article
Integrating Digital Technologies into STEM Physics for Adult Learners: A Comparative Study in Second Chance Schools
by
Despina Radiopoulou, Denis Vavougios and Paraskevi Zacharia
Computers 2026, 15(2), 94; https://doi.org/10.3390/computers15020094 - 1 Feb 2026
Abstract
This study explores how integrating digital technologies into STEM-based physics instruction can transform learning outcomes for adult learners in Greek Second Chance Schools, which provide educational opportunities for adults over 18 who have not completed compulsory education. In a comparative design, participants were divided into two groups: the experimental group experienced an innovative STEM approach, combining educational robotics, mobile sensing, and 3D printing within the Biological Sciences Curriculum Study (BSCS) 5E Instructional Model; the control group received enriched lecture-based instruction. Learning gains were measured using a rigorously developed, psychometrically validated multiple-choice physics test administered before and after the intervention. Results reveal that adults exposed to technology-enhanced STEM lessons achieved statistically significant improvements, outperforming their peers in the lecture-based group, who showed no measurable progress. Notably, these gains were consistent across gender and age. The findings highlight the transformative potential of digital technologies and learner-centered STEM pedagogies in alternative education settings, offering new directions for adult education and lifelong learning.
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)
Topics
Topic in
Applied Sciences, Computers, JSAN, Technologies, BDCC, Sensors, Telecom, Electronics
Electronic Communications, IOT and Big Data, 2nd Volume
Topic Editors: Teen-Hang Meen, Charles Tijus, Cheng-Chien Kuo, Kuei-Shu Hsu, Jih-Fu Tu
Deadline: 31 March 2026
Topic in
AI, Buildings, Computers, Drones, Entropy, Symmetry
Applications of Machine Learning in Large-Scale Optimization and High-Dimensional Learning
Topic Editors: Jeng-Shyang Pan, Junzo Watada, Vaclav Snasel, Pei Hu
Deadline: 30 April 2026
Topic in
Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe Hou
Deadline: 31 May 2026
Topic in
Computers, Informatics, Information, Logistics, Mathematics, Algorithms
Decision Science Applications and Models (DSAM)
Topic Editors: Daniel Riera Terrén, Angel A. Juan, Majsa Ammuriova, Laura Calvet
Deadline: 30 June 2026
Special Issues
Special Issue in
Computers
Advances in Semantic Multimedia and Personalized Digital Content
Guest Editors: Phivos Mylonas, Christos Troussas, Akrivi Krouska, Manolis Wallace, Cleo Sgouropoulou
Deadline: 25 February 2026
Special Issue in
Computers
Advanced Image Processing and Computer Vision (2nd Edition)
Guest Editors: Selene Tomassini, M. Ali Dewan
Deadline: 28 February 2026
Special Issue in
Computers
Cloud Computing and Big Data Mining
Guest Editor: Rao Mikkilineni
Deadline: 28 February 2026
Special Issue in
Computers
Wearable Computing and Activity Recognition
Guest Editors: Yang Gao, Yincheng Jin
Deadline: 28 February 2026