Computers, Volume 14, Issue 12 (December 2025) – 65 articles

Cover Story: Entity resolution in administrative and census data is challenged by noise, ambiguity, and limited interpretability in monolithic AI systems. This work introduces a multi-agent Retrieval-Augmented Generation (RAG) framework that decomposes entity resolution into specialized, cooperating agents for direct matching, relational inference, household discovery, and movement detection. Orchestrated using LangGraph, the framework integrates deterministic preprocessing with LLM-driven reasoning and evidence-grounded retrieval. Experimental results demonstrate improved accuracy, reduced API usage, and fully traceable decision paths compared to single-LLM approaches. The proposed architecture offers a scalable and interpretable foundation for next-generation entity resolution across census, healthcare, and administrative data domains.
21 pages, 2541 KB  
Article
Blockchain Variables and Possible Attacks: A Technical Survey
by Andrei Alexandru Bordeianu and Daniela Elena Popescu
Computers 2025, 14(12), 567; https://doi.org/10.3390/computers14120567 - 18 Dec 2025
Abstract
Blockchain technology has rapidly evolved as a cornerstone of decentralized computing, transforming how trust, data integrity, and transparency are achieved in digital ecosystems. However, despite extensive adoption, significant gaps remain in understanding how key blockchain variables, such as block size, consensus mechanisms, and network latency, affect system vulnerabilities and susceptibility to cyberattacks. This survey addresses this gap by combining qualitative and quantitative analyses across multiple blockchain environments. Using simulation tools such as Ganache and Bitcoin Core, and reviewing peer-reviewed studies from 2016 to 2024, the research systematically maps blockchain parameters to cyberattack vectors including 51% attacks, Sybil attacks, and double-spending. Findings indicate that design choices like block size, block interval, and consensus type substantially influence resilience against attacks. The Blockchain Variable Quantitative Risk Framework (BVQRF) introduced here integrates NIST’s cybersecurity principles with quantitative scoring to assess risks. This framework represents a novel contribution by operationalizing theoretical security constructs into actionable evaluation metrics, enabling predictive modeling and adaptive risk mitigation strategies for blockchain systems. Full article
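The abstract describes the BVQRF only at a high level. As a rough illustration of how a quantitative risk framework can map blockchain variables to an aggregate score, the sketch below uses hypothetical variables, weights, and scales; these are our assumptions, not the paper's actual framework.

```python
# Hypothetical BVQRF-style scoring: each blockchain variable receives a
# vulnerability score in [0, 1] for a given attack vector, weighted by its
# assumed influence. Both weights and scores are illustrative assumptions.
RISK_WEIGHTS = {
    "block_size": 0.20,
    "block_interval": 0.25,
    "consensus_type": 0.35,
    "network_latency": 0.20,
}

def bvqrf_score(vulnerability):
    """Weighted aggregate risk in [0, 1] across blockchain variables."""
    return sum(RISK_WEIGHTS[k] * vulnerability[k] for k in RISK_WEIGHTS)

# Example: a small proof-of-work chain assessed against 51% attacks.
print(bvqrf_score({
    "block_size": 0.3,
    "block_interval": 0.7,
    "consensus_type": 0.8,   # low-hashrate PoW is more exposed
    "network_latency": 0.5,
}))  # -> 0.615
```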

26 pages, 1053 KB  
Article
FastTree-Guided Genetic Algorithm for Credit Scoring Feature Selection
by Rashed Bahlool, Nabil Hewahi and Youssef Harrath
Computers 2025, 14(12), 566; https://doi.org/10.3390/computers14120566 - 18 Dec 2025
Abstract
Feature selection is pivotal in enhancing the efficiency of credit scoring predictions, where misclassifications are critical because they can result in financial losses for lenders and exclusion of eligible borrowers. While traditional feature selection methods can improve accuracy and class separation, they often struggle to maintain consistent performance aligned with institutional preferences across datasets of varying size and imbalance. This study introduces a FastTree-Guided Genetic Algorithm (FT-GA) that combines gradient-boosted learning with evolutionary optimization to prioritize class separability and minimize false-risk exposure. In contrast to traditional approaches, FT-GA provides fine-grained search guidance by acknowledging that false positives and false negatives carry disproportionate consequences in high-stakes lending contexts. By embedding domain-specific weighting into its fitness function, FT-GA favors separability over raw accuracy, reflecting practical risk sensitivity in real credit decision settings. Experimental results show that FT-GA achieved similar or higher AUC values ranging from 76% to 92% while reducing the average feature set by 21% when compared with the strongest baseline techniques. It also demonstrated strong performance on small to moderately imbalanced datasets and more resilience on highly imbalanced ones. These findings indicate that FT-GA offers a risk-aware enhancement to automated credit assessment workflows, supporting lower operational risk for financial institutions while showing potential applicability to other high-stakes domains. Full article
(This article belongs to the Section AI-Driven Innovations)
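The abstract notes that FT-GA embeds domain-specific weighting into its fitness function so that false positives and false negatives are penalized asymmetrically. A minimal sketch of that idea follows; the weights, penalty constant, and function shape are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Asymmetric error costs: a false negative (a defaulting borrower approved)
# is assumed to cost more than a false positive. Values are illustrative.
W_FN, W_FP = 5.0, 1.0
FEATURE_PENALTY = 0.002   # mild pressure toward smaller feature subsets

def fitness(mask: np.ndarray, auc: float, fn: int, fp: int, n_samples: int) -> float:
    """Higher is better: separability (AUC) minus weighted error and size costs.

    mask: 0/1 chromosome selecting features; auc, fn, fp come from evaluating
    a classifier (e.g., FastTree/gradient boosting) on that feature subset.
    """
    error_cost = (W_FN * fn + W_FP * fp) / n_samples
    return auc - error_cost - FEATURE_PENALTY * int(mask.sum())
```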

22 pages, 15154 KB  
Article
Intelligent Identification of Rural Productive Landscapes in Inner Mongolia
by Xin Tian, Nan Li, Nisha Ai, Songhua Gao and Chen Li
Computers 2025, 14(12), 565; https://doi.org/10.3390/computers14120565 - 17 Dec 2025
Abstract
Productive landscapes are an important part of intangible cultural heritage, and their protection and inheritance are of great significance to the prosperity and sustainable development of national culture. They not only reflect the wisdom accumulated through the long-term interaction between human production activities and the natural environment, but also carry a strong symbolic meaning of rural culture. However, current research and investigation on productive landscapes still rely mainly on field surveys and manual records conducted by experts and scholars. This process is time-consuming and costly, and it is difficult to achieve efficient and systematic analysis and comparison, especially when dealing with large-scale and diverse types of landscapes. To address this problem, this study takes the Inner Mongolia region as the main research area and builds a productive landscape feature data framework that reflects the diversity of rural production activities and cultural landscapes. The framework covers four major types of landscapes: agriculture, animal husbandry, fishery and hunting, and sideline production and processing. Based on artificial intelligence and deep learning technologies, this study conducts comparative experiments on several convolutional neural network models to evaluate their classification performance and adaptability in complex rural environments. The results show that the improved CEM-ResNet50 model performs better than the other models in terms of accuracy, stability, and feature recognition ability, demonstrating stronger generalization and robustness. Through a semantic clustering approach in image classification, the model’s recognition process is visually interpreted, revealing the clustering patterns and possible sources of confusion among different landscape elements in the semantic space. This study reduces the time and economic cost of traditional field investigations and achieves efficient and intelligent recognition of rural productive landscapes. It also provides a new technical approach for the digital protection and cultural heritage transmission of productive landscapes, offering valuable references for future research in related fields. Full article
(This article belongs to the Special Issue Machine Learning: Innovation, Implementation, and Impact)

16 pages, 1945 KB  
Article
Error-Guided Multimodal Sample Selection with Hallucination Suppression for LVLMs
by Huanyu Cheng, Linjiang Shang, Xikang Chen, Tao Feng and Yin Zhang
Computers 2025, 14(12), 564; https://doi.org/10.3390/computers14120564 - 17 Dec 2025
Abstract
Building high-quality multimodal instruction datasets is often time-consuming and costly. Recent studies have shown that a small amount of carefully selected high-quality data can be more effective for improving LVLM performance than large volumes of low-quality data. Based on these observations, we propose an error-guided multimodal sample selection framework with hallucination suppression for LVLM fine-tuning. First, semantic embeddings of queries are clustered to form balanced subsets that preserve task diversity. A visual contrastive decoding module is then used to reduce hallucinations and expose genuinely difficult examples. For closed-ended tasks, such as object detection, we estimate sample value using prediction accuracy; for open-ended question answering, we use the perplexity of generated responses as a difficulty signal. Within each cluster, high-error or high-perplexity samples are preferentially selected to construct a compact yet informative training set. Experiments on the InsPLAD detection benchmark and the PowerQA visual question answering dataset show that our method consistently outperforms random sampling under the same data budget, achieving higher F1, cosine similarity, BLEU (Bilingual Evaluation Understudy), and GPT-4o-based evaluation scores. This demonstrates that hallucination-aware, uncertainty-driven data selection can improve LVLM robustness and data efficiency. Full article
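As a sketch of the selection step described above, the snippet below keeps, within each query cluster, the samples on which the model is least certain. The data layout and the per-cluster budget parameter are assumptions for illustration, not the paper's interface.

```python
from collections import defaultdict

def select_samples(samples, k_per_cluster):
    """samples: iterable of (sample_id, cluster_id, difficulty) triples,
    where difficulty is prediction error (closed-ended tasks) or response
    perplexity (open-ended QA). Returns ids of the hardest samples per
    cluster, preserving task diversity under a fixed data budget."""
    by_cluster = defaultdict(list)
    for sid, cid, difficulty in samples:
        by_cluster[cid].append((difficulty, sid))
    selected = []
    for items in by_cluster.values():
        items.sort(reverse=True)                    # hardest samples first
        selected.extend(sid for _, sid in items[:k_per_cluster])
    return selected
```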

37 pages, 953 KB  
Article
Scalable Univariate and Multivariate Time-Series Classifiers with Deep Learning Methods Exploiting Symbolic Representations
by Apostolos Glenis and George Vouros
Computers 2025, 14(12), 563; https://doi.org/10.3390/computers14120563 - 17 Dec 2025
Abstract
Time-series classification (TSC) is an important task across sciences. Symbolic representations (especially SFA) are very effective at combating noise. In this paper, we employ symbolic representations to create state-of-the-art time-series classifiers, with the aim to advance scalability without sacrificing accuracy. First, we create a graph representation of the time series based on SFA words. We use this representation together with graph kernels and an SVM classifier to create a scalable time-series classifier. Next, we use the graph representation together with a Graph Convolutional Neural Network to test how it fares against state-of-the-art time-series classifiers. Additionally, we devised deep neural networks exploiting the SFA representation, inspired by the text classification domain, to study how they fare against state-of-the-art classifiers. The proposed deep learning classifiers have been adapted and evaluated for the multivariate time-series case and also against state-of-the-art time-series classification algorithms based on symbolic representations. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
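For readers unfamiliar with SFA, the sketch below shows the core symbolization idea: truncate the Fourier spectrum of a series and bin the retained coefficients into letters, which then form the words used to build the graph representation. Real SFA learns bin boundaries from a corpus (multiple coefficient binning); the fixed quantile bins here are a simplifying assumption.

```python
import numpy as np

def sfa_word(series: np.ndarray, n_coeffs: int = 4, alphabet_size: int = 4) -> str:
    """Map a time series to a short symbolic word via its low-frequency DFT."""
    coeffs = np.fft.rfft(series)[:n_coeffs]          # keep low frequencies, drop noise
    feats = np.concatenate([coeffs.real, coeffs.imag])
    # Stand-in for learned MCB boundaries: equi-depth bins on this vector.
    edges = np.quantile(feats, np.linspace(0, 1, alphabet_size + 1)[1:-1])
    return "".join(chr(ord("a") + int(b)) for b in np.digitize(feats, edges))

print(sfa_word(np.sin(np.linspace(0, 8 * np.pi, 128))))  # prints an 8-letter word
```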

15 pages, 1546 KB  
Article
Collaborative AI-Integrated Model for Reviewing Educational Literature
by María-Obdulia González-Fernández, Manuela Raposo-Rivas, Ana-Belén Pérez-Torregrosa and Paula Quadros-Flores
Computers 2025, 14(12), 562; https://doi.org/10.3390/computers14120562 - 17 Dec 2025
Abstract
The increasing complexity of networked research demands approaches that combine rigor, efficiency, and collaboration. In this context, artificial intelligence (AI) emerges as a strategic ally in the analysis and organization of scientific literature, facilitating the construction of a robust state-of-the-art framework to support decisions. The present study evaluates a model that facilitates collaborative literature review by integrating AI tools. It employed a descriptive, non-experimental, cross-sectional design. Participants (N = 10) completed a purpose-built questionnaire comprising twenty-five indicators on specific aspects of the model’s use. The participants indicated a high level of knowledge regarding ICT use (M = 8.3; SD = 1.25). The results showed that the System Usability Scale for the tools demonstrated variability; Google Drive scored the highest (M = 77.75; SD = 11.45), while Rayyan.AI scored the lowest (M = 66.00; SD = 20.69). While the findings indicated that AI enhances the efficiency of documentary research and the development of ethical and digital competencies, the participants expressed a need for further training in AI tools to optimize the usability of those integrated into the model. The proposed CAIM-REL model proves to be replicable and holds potential for collaborative research. Full article

21 pages, 793 KB  
Article
Beyond the Norm: Unsupervised Anomaly Detection in Telecommunications with Mahalanobis Distance
by Aline Mefleh, Michal Patryk Debicki, Ali Mubarak, Maroun Saade and Nathanael Weill
Computers 2025, 14(12), 561; https://doi.org/10.3390/computers14120561 - 17 Dec 2025
Abstract
Anomaly Detection (AD) in telecommunication networks is critical for maintaining service reliability and performance. However, operational networks present significant challenges: high-dimensional Key Performance Indicator (KPI) data collected from thousands of network elements must be processed in near real time to enable timely responses. This paper presents an unsupervised approach leveraging Mahalanobis Distance (MD) to identify network anomalies. The MD model offers a scalable solution that capitalizes on multivariate relationships among KPIs without requiring labeled data. Our methodology incorporates preprocessing steps to adjust KPI ratios, normalize feature distributions, and account for contextual factors like sample size. Aggregated anomaly scores are calculated across hierarchical network levels—cells, sectors, and sites—to localize issues effectively. Through experimental evaluations, the MD approach demonstrates consistent performance across datasets of varying sizes, achieving competitive Area Under the Receiver Operating Characteristic Curve (AUC) values while significantly reducing computational overhead compared to baseline AD methods: Isolation Forest (IF), Local Outlier Factor (LOF) and One-Class Support Vector Machines (SVM). Case studies illustrate the model’s practical application, pinpointing the Random Access Channel (RACH) success rate as a key anomaly contributor. The analysis highlights the importance of dimensionality reduction and tailored KPI adjustments in enhancing detection accuracy. This unsupervised framework empowers telecom operators to proactively identify and address network issues, optimizing their troubleshooting workflows. By focusing on interpretable metrics and efficient computation, the proposed approach bridges the gap between AD and actionable insights, offering a practical tool for improving network reliability and user experience. Full article
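A minimal sketch of the core scoring step, omitting the paper's KPI ratio adjustments and hierarchical aggregation across cells, sectors, and sites:

```python
import numpy as np

def mahalanobis_scores(X: np.ndarray) -> np.ndarray:
    """X: (n_samples, n_kpis). Distance of each row from the KPI centroid,
    accounting for correlations between KPIs via the covariance matrix."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # pinv guards collinear KPIs
    d = X - mu
    # Quadratic form d_i^T C^{-1} d_i computed row-wise.
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))   # 1000 cells, 8 KPIs
X[0] += 6                        # inject one anomalous cell
print(mahalanobis_scores(X).argmax())  # -> 0
```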

18 pages, 1381 KB  
Article
Energy-Efficient Container Scheduling Based on Deep Reinforcement Learning in Data Centers
by Zhuohui Li, Shaofeng Zhang, Yiqian Li, Xingchen Liu, Junyang Huang and Jinlong Hu
Computers 2025, 14(12), 560; https://doi.org/10.3390/computers14120560 - 17 Dec 2025
Abstract
As data centers become essential large-scale infrastructures for data processing and intelligent computing, the efficiency of their internal scheduling systems is critical for both service quality and energy consumption. However, the rapid increase in task volume, coupled with the diversity of computing resources, poses substantial challenges to traditional scheduling approaches. Conventional container scheduling approaches typically focus on either minimizing task execution time or reducing energy consumption independently, often neglecting the importance of balancing these two objectives simultaneously. In this study, a container scheduling algorithm based on the Soft Actor–Critic framework, called SAC-CS, is proposed. This algorithm aims to enhance container execution efficiency while concurrently reducing energy consumption in data centers. It employs a maximum entropy reinforcement learning approach, enabling a flexible trade-off between energy use and task completion times. Experimental evaluations on both synthetic workloads and Alibaba cluster datasets demonstrate that the SAC-CS algorithm effectively achieves joint optimization of efficiency and energy consumption, outperforming heuristic methods and alternative reinforcement learning techniques. Full article
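A sketch of the kind of reward signal that lets a Soft Actor–Critic agent trade execution time against energy at each placement step. The blend weight and the linear utilization-to-power model are common data-center approximations and are our assumptions, not the SAC-CS formulation.

```python
def step_reward(exec_time_s: float, util_before: float, util_after: float,
                p_idle_w: float = 100.0, p_peak_w: float = 250.0,
                alpha: float = 0.5) -> float:
    """Negative cost: maximizing reward minimizes a time/energy blend."""
    # Linear utilization-to-power model: power scales between idle and peak.
    delta_power_w = (util_after - util_before) * (p_peak_w - p_idle_w)
    energy_kj = delta_power_w * exec_time_s / 1000.0
    return -(alpha * exec_time_s + (1.0 - alpha) * energy_kj)
```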

33 pages, 5077 KB  
Article
Micro-Expression Recognition Using Transformers Neural Networks
by Rodolfo Romero-Herrera, Franco Tadeo Sánchez García, Nathan Arturo Álvarez Peñaloza, Billy Yong Le López Lin and Edwin Josué Juárez Utrilla
Computers 2025, 14(12), 559; https://doi.org/10.3390/computers14120559 - 16 Dec 2025
Abstract
A person’s face can reveal their mood, and microexpressions, although brief and involuntary, are also authentic. People can recognize facial gestures; however, their accuracy is inconsistent, highlighting the importance of objective computational models. Various artificial intelligence models have classified microexpressions into three categories: positive, negative, and surprise. However, it remains important to address the basic Ekman microexpressions (joy, sadness, fear, disgust, anger, and surprise). This study proposes a Transformer-based machine learning model trained on CASME, SAMM, SMIC, and its own datasets. The model offers results comparable with other studies when working with seven classes. It applies various component-based techniques, ranging from ViT to optical flow, from a different perspective, achieving low training rates and metrics competitive with other publications, all on a laptop. These results can serve as a basis for future research. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))

27 pages, 519 KB  
Article
Dual-Algorithm Framework for Privacy-Preserving Task Scheduling Under Historical Inference Attacks
by Exiang Chen, Ayong Ye and Huina Deng
Computers 2025, 14(12), 558; https://doi.org/10.3390/computers14120558 - 16 Dec 2025
Abstract
Historical inference attacks pose a critical privacy threat in mobile edge computing (MEC), where adversaries exploit long-term task and location patterns to infer users’ sensitive information. To address this challenge, we propose a privacy-preserving task scheduling framework that adaptively balances privacy protection and system performance under dynamic vehicular environments. First, we introduce a dynamic privacy-aware adaptation mechanism that adjusts privacy levels in real time according to vehicle mobility and network dynamics. Second, we design a dual-algorithm framework composed of two complementary solutions: a Markov Approximation-Based Online Algorithm (MAOA) that achieves near-optimal scheduling with provable convergence, and a Privacy-Aware Deep Q-Network (PAT-DQN) algorithm that leverages deep reinforcement learning to enhance adaptability and long-term decision-making. Extensive simulations demonstrate that our proposed methods effectively mitigate privacy leakage while maintaining high task completion rates and low energy consumption. In particular, PAT-DQN achieves up to 14.2% lower privacy loss and 19% fewer handovers than MAOA in high-mobility scenarios, showing superior adaptability and convergence performance. Full article

14 pages, 465 KB  
Article
Optimizing Cloudlets for Faster Feedback in LLM-Based Code-Evaluation Systems
by Daniel-Florin Dosaru, Alexandru-Corneliu Olteanu and Nicolae Țăpuș
Computers 2025, 14(12), 557; https://doi.org/10.3390/computers14120557 - 16 Dec 2025
Abstract
This paper addresses the challenge of optimizing cloudlet resource allocation in a code evaluation system. The study models the relationship between system load and response time when users submit code to an online code-evaluation platform, LambdaChecker, which operates a cloudlet-based processing pipeline. The pipeline includes code correctness checks, static analysis, and design-pattern detection using a local Large Language Model (LLM). To optimize the system, we develop a mathematical model and apply it to the LambdaChecker resource management problem. The proposed approach is evaluated using both simulations and real contest data, with a focus on improvements in average response time, resource utilization efficiency, and user satisfaction. The results indicate that adaptive scheduling and workload prediction effectively reduce waiting times without substantially increasing operational costs. Overall, the study suggests that systematic cloudlet optimization can enhance the educational value of automated code evaluation systems by improving responsiveness while preserving sustainable resource usage. Full article
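The paper's load/response-time model is not reproduced in the abstract; as a stand-in, a textbook M/M/1 approximation illustrates the qualitative relationship the optimization exploits. The queueing assumptions below are ours, not the authors' model.

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system, W = 1 / (mu - lambda), valid for lambda < mu.
    Rates are submissions per second; adding cloudlets raises mu."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

print(mm1_response_time(8.0, 10.0))  # -> 0.5 s near saturation
print(mm1_response_time(8.0, 20.0))  # -> ~0.083 s with doubled capacity
```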

20 pages, 3136 KB  
Article
Design of a Digital Personnel Management System for Swine Farms
by Zhenyu Jiang, Enli Lyu, Weijia Lin, Xinyuan He, Ziwei Li and Zhixiong Zeng
Computers 2025, 14(12), 556; https://doi.org/10.3390/computers14120556 - 15 Dec 2025
Abstract
To prevent swine fever transmission, swine farms in China adopt enclosed management, making strict farm personnel biosecurity essential for minimizing the risk of pathogen introduction. However, current shower-in procedures and personnel movement records on many farms still rely on manual logging, which is prone to omissions and cannot support enterprise-level supervision. To address these limitations, this study develops a digital personnel management system designed specifically for the changing-room environment that forms the core biosecurity barrier. The proposed three-tier architecture integrates distributed identification terminals, local central controllers, and a cloud-based data platform. The system ensures reliable identity verification, synchronizes templates across terminals, and maintains continuous data availability, even in unstable network conditions. Fingerprint-based identity validation and a lightweight CAN-based communication mechanism were implemented to ensure robust operation in electrically noisy livestock facilities. System performance was evaluated through recognition tests, multi-frame template transmission experiments, and high-load CAN/MQTT communication tests. The system achieved a 91.4% overall verification success rate, lossless transmission of multi-frame fingerprint templates, and stable end-to-end communication, with mean CAN-bus processing delays of 99.96 ms and cloud-processing delays below 70.7 ms. These results demonstrate that the proposed system provides a reliable digital alternative to manual records of personnel movement and shower duration, offering a scalable foundation for biosecurity supervision. While the present implementation focuses on identity verification, data synchronization, and calculating shower duration based on the interval between check-ins, the system architecture can be extended to support movement path enforcement and integration with wider biosecurity infrastructures. Full article

15 pages, 497 KB  
Article
Learning Analytics with Scalable Bloom’s Taxonomy Labeling of Socratic Chatbot Dialogues
by Kok Wai Lee, Yee Sin Ang and Joel Weijia Lai
Computers 2025, 14(12), 555; https://doi.org/10.3390/computers14120555 - 15 Dec 2025
Abstract
Educational chatbots are increasingly deployed to scaffold student learning, yet educators lack scalable ways to assess the cognitive depth of these dialogues in situ. Bloom’s taxonomy provides a principled lens for characterizing reasoning, but manual tagging of conversational turns is costly and difficult to scale for learning analytics. We present a reproducible high-confidence pseudo-labeling pipeline for multi-label Bloom classification of Socratic student–chatbot exchanges. The dataset comprises 6716 utterances collected from conversations between a Socratic chatbot and 34 undergraduate statistics students at Nanyang Technological University. From three chronologically selected workbooks with expert Bloom annotations, we trained and compared two labeling tracks: (i) a calibrated classical approach using SentenceTransformer (all-MiniLM-L6-v2) embeddings with one-vs-rest Logistic Regression, Linear SVM, XGBoost, and MLP, followed by per-class precision–recall threshold tuning; and (ii) a lightweight LLM track using GPT-4o-mini after supervised fine-tuning. Class-specific thresholds tuned on 5-fold cross-validation were then applied in a single pass to assign high-confidence pseudo-labels to the remaining unlabeled exchanges, avoiding feedback-loop confirmation bias. Fine-tuned GPT-4o-mini achieved the highest prevalence-weighted performance (micro-F1 = 0.814), whereas calibrated classical models yielded stronger balance across Bloom levels (best macro-F1 = 0.630 with Linear SVM; best classical micro-F1 = 0.759 with Logistic Regression). Both model families reflect the corpus skew toward lower-order cognition, with LLMs excelling on common patterns and linear models better preserving rarer higher-order labels. While results should be interpreted as a proof of concept given limited gold labeling, the approach substantially reduces annotation burden and provides a practical pathway for Bloom-aware learning analytics and future real-time adaptive chatbot support. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
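A sketch of the per-class threshold tuning step using scikit-learn. Applied one-vs-rest per Bloom level, the tuned cutoffs then gate which unlabeled exchanges receive pseudo-labels; the F1-maximizing rule shown here is one natural reading of "precision-recall threshold tuning", and the paper's exact criterion may differ.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def tune_threshold(y_true: np.ndarray, y_prob: np.ndarray) -> float:
    """Pick the probability cutoff maximizing F1 for one Bloom level."""
    prec, rec, thr = precision_recall_curve(y_true, y_prob)
    f1 = 2 * prec * rec / np.clip(prec + rec, 1e-12, None)
    return float(thr[np.argmax(f1[:-1])])  # the final PR point has no threshold
```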

27 pages, 1614 KB  
Article
Comparative Analysis of Neural Network Models for Predicting Peach Maturity on Tabular Data
by Dejan Ljubobratović, Marko Vuković, Marija Brkić Bakarić, Tomislav Jemrić and Maja Matetić
Computers 2025, 14(12), 554; https://doi.org/10.3390/computers14120554 - 13 Dec 2025
Abstract
Peach maturity at harvest is a critical factor influencing fruit quality and postharvest life. Traditional destructive methods for maturity assessment, although effective, compromise fruit integrity and are unsuitable for practical implementation in modern production. This study presents a machine learning approach for non-destructive peach maturity prediction using tabular data collected from 701 ‘Redhaven’ peaches. Three neural network models suitable for small tabular datasets (TabNet, SAINT, and NODE) were applied and evaluated using classification metrics, including accuracy, F1-score, and AUC. The models demonstrated consistently strong performance across several feature configurations, with TabNet achieving the highest accuracy when all non-destructive measurements were available and providing the most robust and practical performance on the comprehensive non-destructive subset and in optimized minimal-feature settings. These findings indicate that non-destructive sensing methods, particularly when combined with modern neural architectures, can reliably predict maturity and offer potential for real-time, automated fruit selection after harvest. The integration of such models into autonomous harvesting systems, for instance, through drone-based platforms equipped with appropriate sensors, could significantly improve efficiency and fruit quality management in horticultural peach production. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)

35 pages, 3515 KB  
Review
Human–Computer Interaction in Smart Greenhouses: A Review of Interfaces, Technologies, and User-Centered Approaches
by Patricia Isabela Brăileanu
Computers 2025, 14(12), 553; https://doi.org/10.3390/computers14120553 - 12 Dec 2025
Abstract
Human–computer interaction (HCI) is essential for optimizing smart greenhouse management and for fostering efficient and sustainable agricultural practices. This review synthesizes recent advancements in diverse interfaces, including digital twins, virtual and augmented reality, mobile applications, and sensor-based controls, alongside the integration of artificial intelligence (AI), automation, and human–robot collaboration as part of advanced automation strategies. This study highlights the importance of user-centered and context-aware design to enhance usability, address challenges like simulation sickness, and cater to varied user demographics. Emphasis is placed on responsible, adaptive, and trustworthy interaction, ensuring effective decision support and promoting human–AI synergy. This review offers an integrated perspective on current developments, identifying pathways for future sustainable interaction design in controlled-environment agriculture. Full article
(This article belongs to the Special Issue Innovative Research in Human–Computer Interactions)

24 pages, 981 KB  
Article
Hybrid Methods for Automatic Collocation Extraction in Building a Learners’ Dictionary of Italian
by Damiano Perri, Osvaldo Gervasi, Sergio Tasso, Stefania Spina, Irene Fioravanti, Fabio Zanda and Luciana Forti
Computers 2025, 14(12), 552; https://doi.org/10.3390/computers14120552 - 12 Dec 2025
Abstract
The automatic construction of learners’ dictionaries requires robust methods for identifying non-literal word combinations, or collocations, which represent a significant challenge for second-language (L2) learners. This paper addresses the critical initial step of accurately extracting collocation candidates from corpora to build a learner’s dictionary for Italian. The adopted method and the implemented application are of practical value for learners of Italian. We present a comparative study of three methodologies for identifying these candidates within a 41.7-million-word Italian corpus: a Part-Of-Speech-based approach, a syntactic dependency-based approach, and a novel Hybrid method that integrates both. The analysis yielded 2,097,595 potential collocations. Results indicate that the Hybrid method achieves superior performance in terms of Recall and Benchmark Match, identifying the most significant portion of candidates, 42.35% of the total. We conducted an in-depth analysis to refine the extracted dataset, calculating multiple statistical metrics for each candidate, which are described in detail in the paper. Such analysis allows for the classification of collocations by relevance, difficulty, and frequency of use, forming the basis for the future development of a high-quality, web-based dictionary tailored to the proficiency levels of Italian learners. Full article
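A minimal sketch of one standard association metric used to rank collocation candidates: pointwise mutual information over corpus counts. The candidate extraction itself (POS patterns, dependency relations) happens upstream, and the counts below are invented for illustration.

```python
import math

def pmi(pair_count: int, w1_count: int, w2_count: int, n_tokens: int) -> float:
    """PMI = log2( P(w1, w2) / (P(w1) * P(w2)) ); higher = stronger collocation."""
    p_pair = pair_count / n_tokens
    p_w1, p_w2 = w1_count / n_tokens, w2_count / n_tokens
    return math.log2(p_pair / (p_w1 * p_w2))

# Hypothetical verb-noun pair counts in a 41.7-million-token corpus.
print(round(pmi(1200, 90000, 30000, 41_700_000), 2))  # -> 4.21
```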

33 pages, 1463 KB  
Article
Hybrid LLM-Assisted Fault Diagnosis Framework for 5G/6G Networks Using Real-World Logs
by Aymen D. Salman, Akram T. Zeyad, Shereen S. Jumaa, Safanah M. Raafat, Fanan Hikmat Jasim and Amjad J. Humaidi
Computers 2025, 14(12), 551; https://doi.org/10.3390/computers14120551 - 12 Dec 2025
Abstract
This paper presents Hy-LIFT (Hybrid LLM-Integrated Fault Diagnosis Toolkit), a multi-stage framework for interpretable and data-efficient fault diagnosis in 5G/6G networks that integrates a high-precision interpretable rule-based engine (IRBE) for known patterns, a semi-supervised classifier (SSC) that leverages scarce labels and abundant unlabeled logs via consistency regularization and pseudo-labeling, and an LLM Augmentation Engine (LAE) that generates operator-ready, context-aware explanations and zero-shot hypotheses for novel faults. Evaluations on a five-class, imbalanced Dataset-A and a simulated production setting with noise and label scarcity show that Hy-LIFT consistently attains higher macro-F1 than rule-only and standalone ML baselines while maintaining strong per-class precision/recall (≈0.85–0.93), including minority classes, indicating robust generalization under class imbalance. IRBE supplies auditable, high-confidence seeds; SSC expands coverage beyond explicit rules without sacrificing precision; and LAE improves operational interpretability and surfaces potential “unknown/novel” faults without altering classifier labels. The paper’s contributions are as follows: (i) a reproducible, interpretable baseline that doubles as a high-quality pseudo-label source; (ii) a principled semi-supervised learning objective tailored to network logs; (iii) an LLM-driven explanation layer with zero-shot capability; and (iv) an open, end-to-end toolkit with scripts to regenerate all figures and tables. Overall, Hy-LIFT narrows the gap between brittle rules and opaque black-box models by combining accuracy, data efficiency, and auditability, offering a practical path toward trustworthy AIOps in next-generation mobile networks. Full article
(This article belongs to the Section AI-Driven Innovations)

24 pages, 606 KB  
Article
A Secure Blockchain-Based MFA Dynamic Mechanism
by Vassilis Papaspirou, Ioanna Kantzavelou, Yagmur Yigit, Leandros Maglaras and Sokratis Katsikas
Computers 2025, 14(12), 550; https://doi.org/10.3390/computers14120550 - 12 Dec 2025
Abstract
Authentication mechanisms attract considerable research interest due to the protective role they offer, and when they fail, the system becomes vulnerable and immediately exposed to attacks. Blockchain technology was recently incorporated to enhance authentication mechanisms through its inherited specifications that cover higher security requirements. This article proposes a dynamic multi-factor authentication (MFA) mechanism based on blockchain technology. The approach combines a honeytoken authentication method implemented with smart contracts and deploys the dynamic change of honeytokens for enhanced security. Two additional random numbers are inserted into the honeytoken within the smart contract for protection from potential attackers, forming a triad of values. The produced set is then imported into a dynamic hash algorithm that changes daily, introducing an additional layer of complexity and unpredictability. The honeytokens are securely transferred to the user through a dedicated and safe communication channel, ensuring the integrity and confidentiality of this critical authentication factor. Extensive evaluation and threat analysis of the proposed blockchain-based MFA dynamic mechanism (BMFA) demonstrate that it meets high-security standards and possesses essential properties that give prospects for future use in many domains. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (3rd Edition))
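A sketch of the triad-plus-daily-hash idea described above: the honeytoken is combined with two random values, and the digest algorithm rotates with the calendar date. The specific algorithm set and rotation rule are illustrative assumptions, not the BMFA specification.

```python
import datetime
import hashlib
import secrets

ALGOS = ["sha3_256", "blake2b", "sha512"]  # assumed rotation set

def daily_honeytoken_digest(honeytoken: str) -> tuple[str, str]:
    # Two random values join the honeytoken to form the triad.
    triad = f"{honeytoken}:{secrets.token_hex(8)}:{secrets.token_hex(8)}"
    today = datetime.date.today()
    algo = ALGOS[today.toordinal() % len(ALGOS)]   # hash choice changes daily
    digest = hashlib.new(algo, (triad + today.isoformat()).encode()).hexdigest()
    return triad, digest
```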

30 pages, 2439 KB  
Article
A Theoretical Model for Privacy-Preserving IoMT Based on Hybrid SDAIPA Classification Approach and Optimized Homomorphic Encryption
by Mohammed Ali R. Alzahrani
Computers 2025, 14(12), 549; https://doi.org/10.3390/computers14120549 - 11 Dec 2025
Abstract
The Internet of Medical Things (IoMT) improves healthcare delivery through many medical applications. Because of medical data sensitivity and the limited resources of wearable technology, privacy and security are significant challenges. Traditional encryption does not provide secure computation on encrypted data, and many blockchain-based IoMT solutions partially rely on centralized structures. To address these issues, the study proposes a privacy-preserving IoMT framework with dynamic encryption that combines sensitivity-based classification and advanced encryption, dynamically adapting its cryptographic strategy to data sensitivity. The proposed approach uses a hybrid SDAIPA (SDAIA-HIPAA) classification model that integrates Saudi Data and Artificial Intelligence Authority (SDAIA) and Health Insurance Portability and Accountability Act (HIPAA) guidelines. This classification directly governs the selection of encryption mechanisms, where the Advanced Encryption Standard (AES) is used for low-sensitivity data, and Fully Homomorphic Encryption (FHE) is used for high-sensitivity data. The Whale Optimization Algorithm (WOA) is used to maximize the cryptographic entropy of FHE keys and improve security against attacks, resulting in an Optimized FHE that is conditionally applied based on SDAIPA outputs. This approach provides a novel scheme to dynamically align cryptographic intensity with data risk and avoids the overhead of uniform FHE use while ensuring strong privacy for critical records. Two datasets with up to 806 samples are used to assess the proposed approach. The results show that the hybrid OHE-WOA outperforms AES and RSA in the percentage sensitivity of the privacy index, by 78.3% and 12.5% on dataset 1 and by 89% and 19.7% on dataset 2, respectively, which confirms its superior ability to preserve privacy. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
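A toy sketch of the sensitivity-routing idea: classify a record against SDAIPA-style rules, then dispatch to AES for low sensitivity or FHE for high sensitivity. The field list is an invented placeholder; a real deployment would apply the hybrid classifier and call an actual FHE library with WOA-optimized keys.

```python
HIGH_SENSITIVITY_FIELDS = {"diagnosis", "genome", "mental_health"}  # assumed rules

def route_encryption(record: dict) -> str:
    """Return the cipher tier for a record under a toy sensitivity rule."""
    return "FHE" if HIGH_SENSITIVITY_FIELDS & record.keys() else "AES"

print(route_encryption({"heart_rate": 72}))                      # -> AES
print(route_encryption({"heart_rate": 72, "diagnosis": "I10"}))  # -> FHE
```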

29 pages, 6232 KB  
Article
Research on Multi-Temporal Infrared Image Generation Based on Improved CLE Diffusion
by Hua Gong, Wenfei Gao, Fang Liu and Yuanjing Ma
Computers 2025, 14(12), 548; https://doi.org/10.3390/computers14120548 - 11 Dec 2025
Abstract
To address the problems of dynamic brightness imbalance in image sequences and blurred object edges in multi-temporal infrared image generation, we propose an improved multi-temporal infrared image generation model based on CLE Diffusion. First, the model adopts CLE Diffusion to capture the dynamic evolution patterns of image sequences. By modeling brightness variation through the noise evolution of the diffusion process, it enables controllable generation across multiple time points. Second, we design a periodic time encoding strategy and a feature linear modulator and build a temporal control module. Through channel-level modulation, this module jointly models temporal information and brightness features to improve the model’s temporal representation capability. Finally, to tackle structural distortion and edge blurring in infrared images, we design a multi-scale edge pyramid strategy and build a structure consistency module based on attention mechanisms. This module jointly computes multi-scale edge and structural features to enforce edge enhancement and structural consistency. Extensive experiments on both public visible-light and self-constructed infrared multi-temporal datasets demonstrate our model’s state-of-the-art (SOTA) performance. It generates high-quality images across all time points, achieving superior performance on the PSNR, SSIM, and LPIPS metrics. The generated images have clear edges and structural consistency. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))

30 pages, 2206 KB  
Article
Digital Tech Integration in Industrial Engineer Training via Affordable Academic Tools
by Lidia M. Belmonte, Eva Segura, José L. Gómez-Sirvent, Francisco López de la Rosa, Javier de las Morenas, Antonio Fernández-Caballero and Rafael Morales
Computers 2025, 14(12), 547; https://doi.org/10.3390/computers14120547 - 11 Dec 2025
Abstract
The rapid advancement of Industry 4.0 and digital transformation is significantly impacting various sectors. Enabling digital technologies, such as big data, machine learning, and the Internet of Things, is becoming increasingly prevalent in industry. However, engineering curricula often fail to keep pace with these swift changes. This study, grounded in Kolb’s experiential learning theory, investigates the integration of enabling digital technologies into final academic projects for industrial engineering students to enhance their competencies through practical experience with affordable technologies. It presents a case study on the design of access control systems using Android, NFC, and Arduino. To demonstrate the potential of this approach, two projects are highlighted: one implementing an integrated parking access control system with NFC payment, and the other focusing on appointment management for access to services. A total of 50 industrial engineering students evaluated both projects, showing a high level of interest and a desire for similar future implementations. The findings indicate that integrating Industry 4.0 technologies into final academic projects effectively bridges the gap between industry requirements and engineering education, enhancing students’ technical skills through experiential learning. Full article

18 pages, 2385 KB  
Article
Enhancing Language Learning with Generative AI: The Case of the OpenLang Network Platform
by Alexander Mikroyannidis, Maria Perifanou and Anastasios A. Economides
Computers 2025, 14(12), 546; https://doi.org/10.3390/computers14120546 - 11 Dec 2025
Abstract
The OpenLang Network platform is a sustainable online environment designed to support language learning, intercultural exchange, and open educational practices across Europe. This paper presents the conceptual framework and design of an AI-enhanced OpenLang Network platform, in which Generative AI is embedded across all language learning services offered by the platform. The integration of Generative AI transforms the placement tests offered by the platform into adaptive diagnostic tools, extends the platform’s tandem language learning service through AI-mediated conversation, and enriches the open educational resources of the platform through automated adaptation, translation, and content generation. These innovations collectively reposition the OpenLang Network platform as a dynamic, learner-centred, and sustainable ecosystem that unites human collaboration with AI-powered personalisation. Through a pedagogically informed integration of Generative AI, the case of the OpenLang Network platform demonstrates how AI can enhance openness, collaboration, and personalisation in language learning. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))

26 pages, 1491 KB  
Article
Time and Memory Trade-Offs in Shortest-Path Algorithms Across Graph Topologies: A*, Bellman–Ford, Dijkstra, AI-Augmented A* and a Neural Baseline
by Nahier Aldhafferi
Computers 2025, 14(12), 545; https://doi.org/10.3390/computers14120545 - 10 Dec 2025
Abstract
This study presents a comparative evaluation of Dijkstra’s algorithm, A*, Bellman-Ford, AI-Augmented A* and a neural AI-based model for shortest-path computation across diverse graph topologies, with a focus on time efficiency and memory consumption under standardized experimental conditions. We analyzed grids, random graphs, and scale-free graphs of sizes up to 10³ nodes, specifically examining 100- and 1000-node grids, 100- and 1000-node random graphs, and 100-node scale-free graphs. The algorithms were benchmarked through repeated runs per condition on a server-class system equipped with an Intel Xeon Gold 6248R processor, NVIDIA Tesla V100 GPU (32 GB), 256 GB RAM, and Ubuntu 20.04. A* consistently outperformed Dijkstra’s algorithm when paired with an informative admissible heuristic, exhibiting faster runtimes by approximately 1.37× to 1.91× across various topologies. In comparison, Bellman-Ford was slower than Dijkstra’s by approximately 1.50× to 1.92×, depending on graph type and size; however, it remained a robust option in scenarios involving negative edge weights or when early-termination conditions reduced practical iterations. The AI model demonstrated the slowest performance across conditions, incurring runtimes that were 2.60× to 3.23× higher than A* and 1.62× to 2.15× higher than Bellman-Ford, offering limited gains as a direct solver. These findings underscore topology-sensitive trade-offs: A* is preferred when a suitable heuristic is available; Dijkstra’s serves as a strong baseline in the absence of heuristics; Bellman-Ford is appropriate for handling negative weights; and current AI approaches are not yet competitive for exact shortest paths but may hold promise as learned heuristics to augment A*. We provide environmental details and comparative results to support reproducibility and facilitate further investigation into hybrid learned-classical strategies. Full article
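A minimal sketch of the A*/Dijkstra comparison on a uniform-cost 4-connected grid: with the admissible Manhattan heuristic the search is A*, and with a zero heuristic it degrades to Dijkstra, so one implementation supports both. The benchmark setup in the paper is more elaborate.

```python
import heapq

def astar(width, height, start, goal, use_heuristic=True):
    """Shortest path length on an empty grid; None if unreachable."""
    def h(x, y):
        return abs(x - goal[0]) + abs(y - goal[1]) if use_heuristic else 0
    dist = {start: 0}
    frontier = [(h(*start), start)]
    while frontier:
        _, (x, y) = heapq.heappop(frontier)
        if (x, y) == goal:
            return dist[(x, y)]
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height:
                nd = dist[(x, y)] + 1
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(frontier, (nd + h(*nxt), nxt))
    return None

print(astar(100, 100, (0, 0), (99, 99)))  # -> 198 on an empty 100x100 grid
```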

43 pages, 2856 KB  
Article
Secure DNA Cryptosystem for Data Protection in Cloud Storage and Retrieval
by Thangavel Murugan, Varalakshmi Perumal and Nasurudeen Ahamed Noor Mohamed Badusha
Computers 2025, 14(12), 544; https://doi.org/10.3390/computers14120544 - 10 Dec 2025
Abstract
In today’s digital era, real-time applications rely heavily on cloud environments for computation, storage, and data retrieval. Data owners outsource sensitive information to cloud storage servers managed by service providers such as Google and Amazon, who are responsible for ensuring data confidentiality. Traditional cryptographic algorithms, though widely adopted, face challenges related to key management and computational complexity when implemented in the cloud. To overcome these limitations, this research proposes a Secure DNA Cryptosystem (SDNA) based on DNA molecular structures and biological processes. The proposed system generates encoding/decoding tables and encryption/decryption algorithms, using dynamically generated key files to secure communication between data owners and users in the cloud. The DNA-based cryptographic approach enhances data confidentiality, ensures faster computation, and increases resistance to cryptanalysis through dynamic key operations. The experimental results demonstrate the efficiency of the proposed system. For a character count of 16,384, the encryption and decryption times are 852 ms and 822 ms, respectively. Similarly, for a word count of 16,384, the encryption and decryption times are significantly reduced to 75 ms and 62 ms, respectively. These results highlight the superior computational performance and adaptability of the SDNA compared to conventional cryptographic schemes. Overall, performance and security analysis confirm that the proposed SDNA is computationally secure, faster, and flexible for implementation in cloud environments, offering a promising solution for real-time secure data storage and retrieval. Full article
(This article belongs to the Special Issue Emerging Trends in Network Security and Applied Cryptography)
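A sketch of the classic bits-to-bases step underlying DNA cryptosystems: each 2-bit pair maps to a nucleotide. The dynamic, key-driven tables the paper generates would permute this mapping per session; the fixed table below is an illustrative placeholder.

```python
B2N = {"00": "A", "01": "C", "10": "G", "11": "T"}  # fixed stand-in table
N2B = {v: k for k, v in B2N.items()}

def to_dna(data: bytes) -> str:
    """Encode bytes as a nucleotide strand, two bits per base."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(B2N[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(strand: str) -> bytes:
    """Invert the encoding: four bases reconstruct one byte."""
    bits = "".join(N2B[n] for n in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

assert from_dna(to_dna(b"cloud")) == b"cloud"
print(to_dna(b"Hi"))  # -> CAGACGGC
```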

19 pages, 10997 KB  
Article
YOLO-AEB: PCB Surface Defect Detection Based on Adaptive Multi-Branch Attention and Efficient Atrous Spatial Pyramid Pooling
by Chengzhi Deng, Yingbo Wu, Zhaoming Wu, Weiwei Zhou, You Zhang, Xiaowei Sun and Shengqian Wang
Computers 2025, 14(12), 543; https://doi.org/10.3390/computers14120543 - 10 Dec 2025
Abstract
The surface defect detection of printed circuit boards (PCBs) plays a crucial role in the field of industrial manufacturing. However, existing PCB defect detection methods face great challenges in accurately detecting tiny defects against the complex background created by the compact layout of PCBs. To address this problem, we propose a novel YOLO-AMBA-EASPP-BiFPN (YOLO-AEB) network based on the YOLOv10 framework that achieves high-precision, real-time detection of tiny defects through multi-level architecture optimization. In the backbone network, an adaptive multi-branch attention mechanism (AMBA) is first proposed, which employs an adaptive reweighting algorithm (ARA) to dynamically optimize fusion weights within the multi-branch attention mechanism (MBA), thereby strengthening the representation of tiny defects under complex background noise. Then, an efficient atrous spatial pyramid pooling (EASPP) is constructed, which fuses AMBA and atrous spatial pyramid pooling-fast (ASPF). This integration effectively mitigates feature degradation while preserving expansive receptive fields, and the extraction of defect detail features is strengthened. In the neck network, the bidirectional feature pyramid network (BiFPN) replaces the conventional path aggregation network (PAN), and its bidirectional cross-scale feature fusion mechanism improves the transfer of shallow detail features to deep networks. Comprehensive experimental evaluations demonstrate that our proposed network achieves state-of-the-art performance, with an F1 score of 95.7% and a mean average precision (mAP) of 97%, representing respective improvements of 7.1% and 5.8% over the baseline YOLOv10 model. Feature visualization analysis further verifies the effectiveness and feasibility of YOLO-AEB. Full article

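The abstract names AMBA and ARA but does not specify their internals, so the PyTorch sketch below shows only the general pattern such modules follow: several parallel attention branches (here, hypothetically, convolutions at different dilation rates) whose outputs are fused through learnable weights that are re-normalized on every forward pass.

# Generic multi-branch-attention-with-adaptive-fusion pattern; the
# branch design and weighting rule are illustrative, not the paper's.
import torch
import torch.nn as nn

class AdaptiveMultiBranchAttention(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.Sigmoid(),  # per-pixel attention map for this branch
            )
            for d in dilations
        )
        # Learnable fusion weights, re-normalized on each forward pass.
        self.fusion = nn.Parameter(torch.ones(len(dilations)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.fusion, dim=0)
        return sum(w[i] * x * branch(x) for i, branch in enumerate(self.branches))

feat = torch.randn(1, 64, 40, 40)  # a backbone feature map
print(AdaptiveMultiBranchAttention(64)(feat).shape)  # torch.Size([1, 64, 40, 40])
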
38 pages, 2967 KB  
Article
Exploring the Impact of Affective Pedagogical Agents: Enhancing Emotional Engagement in Higher Education
by Marta Arguedas, Thanasis Daradoumis, Santi Caballe, Jordi Conesa and Elvis Ortega-Ochoa
Computers 2025, 14(12), 542; https://doi.org/10.3390/computers14120542 - 10 Dec 2025
Viewed by 499
Abstract
This study examines the influence of pedagogical agents on enhancing emotional engagement in higher education settings through the provision of cognitive and affective feedback. The research focuses on students in a collaborative “Database Systems and Design” course, comparing the effects of feedback from a human teacher (control group) with those of an Affective Pedagogical Tutor (APT) (experimental group). Emotional engagement was measured through key positive emotions such as motivation, curiosity, optimism, confidence, and satisfaction, as well as the reduction in negative emotions such as boredom, anger, insecurity, and anxiety. Results suggest that APT feedback was associated with higher levels of emotional engagement than teacher feedback. Cognitive feedback from the APT was perceived as supporting learning outcomes by offering detailed, task-specific guidance, while affective feedback further supported emotional regulation and positive emotional states. Students interacting with the APT reported feeling more motivated, curious, and optimistic, which contributed to sustained participation and greater confidence in their work. At the same time, boredom and anger were notably reduced in the experimental group. These findings illustrate the potential of affective pedagogical agents to complement educational experiences by fostering positive emotional states and mitigating barriers to engagement. By integrating affective and cognitive feedback, pedagogical agents can create more emotionally supportive and engaging learning environments, particularly in collaborative and complex academic tasks. Full article

23 pages, 3559 KB  
Article
From Static Prediction to Mindful Machines: A Paradigm Shift in Distributed AI Systems
by Rao Mikkilineni and W. Patrick Kelly
Computers 2025, 14(12), 541; https://doi.org/10.3390/computers14120541 - 10 Dec 2025
Viewed by 665
Abstract
A special class of complex adaptive systems—biological and social—thrives not by passively accumulating patterns, but by engineering coherence, i.e., the deliberate alignment of prior knowledge, real-time updates, and teleonomic purposes. By contrast, today’s AI stacks—Large Language Models (LLMs) wrapped in agentic toolchains—remain rooted in a Turing-paradigm architecture: statistical world models (opaque weights) bolted onto brittle, imperative workflows. They excel at pattern completion, but they externalize governance, memory, and purpose, thereby accumulating coherence debt—a structural fragility manifested as hallucinations, shallow and siloed memory, ad hoc guardrails, and costly human oversight. The shortcoming of current AI relative to human-like intelligence is therefore less about raw performance or scaling, and more about an architectural limitation: knowledge is treated as an after-the-fact annotation on computation, rather than as an organizing substrate that shapes computation. This paper introduces Mindful Machines, a computational paradigm that operationalizes coherence as an architectural property rather than an emergent afterthought. A Mindful Machine is specified by a Digital Genome (encoding purposes, constraints, and knowledge structures) and orchestrated by an Autopoietic and Meta-Cognitive Operating System (AMOS) that runs a continuous Discover–Reflect–Apply–Share (D-R-A-S) loop. Instead of a static model embedded in a one-shot ML pipeline or deep neural network, the architecture separates (1) a structural knowledge layer (Digital Genome and knowledge graphs), (2) an autopoietic control plane (health checks, rollback, and self-repair), and (3) meta-cognitive governance (critique-then-commit gates, audit trails, and policy enforcement). We validate this approach on the classic credit default prediction problem by comparing a traditional, static logistic regression pipeline (monolithic training, fixed features, external scripting for deployment) with a distributed Mindful Machine implementation whose components can reconfigure logic, update rules, and migrate workloads at runtime. The Mindful Machine not only matches the baseline on the predictive task, but also achieves autopoiesis (self-healing services and live schema evolution), explainability (causal, event-driven audit trails), and dynamic adaptation (real-time logic and threshold switching driven by knowledge constraints), thereby reducing the coherence debt that characterizes contemporary ML- and LLM-centric AI architectures. The case study demonstrates “a hybrid, runtime-switchable combination of machine learning and rule-based simulation, orchestrated by AMOS under knowledge and policy constraints”. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)

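The abstract gives no code for this architecture, but the D-R-A-S control loop it describes can be made concrete with a deliberately toy Python sketch; the class names, the risk policy, and the audit mechanism below are hypothetical stand-ins for the Digital Genome and AMOS concepts, not the paper’s implementation.

# Toy D-R-A-S loop: every name and policy here is an illustrative
# stand-in, not the paper's AMOS implementation.
from dataclasses import dataclass, field

@dataclass
class DigitalGenome:
    purpose: str
    constraints: dict
    knowledge: dict = field(default_factory=dict)

class MindfulAgent:
    def __init__(self, genome: DigitalGenome):
        self.genome = genome

    def discover(self, event: dict) -> dict:
        return {"observation": event}

    def reflect(self, finding: dict) -> bool:
        # Critique-then-commit gate: check against genome constraints
        # before committing to any action.
        risk = finding["observation"].get("risk_score", 0.0)
        return risk <= self.genome.constraints["max_risk"]

    def apply(self, finding: dict) -> str:
        return f"approved: {finding['observation']}"

    def share(self, result: str) -> None:
        # Audit trail: every decision lands in the knowledge layer.
        self.genome.knowledge.setdefault("audit", []).append(result)

    def run(self, event: dict) -> None:
        finding = self.discover(event)
        outcome = self.apply(finding) if self.reflect(finding) else f"rejected by policy: {event}"
        self.share(outcome)

agent = MindfulAgent(DigitalGenome("credit scoring", {"max_risk": 0.5}))
agent.run({"risk_score": 0.3})
agent.run({"risk_score": 0.9})
print(agent.genome.knowledge["audit"])

The point of the sketch is the separation the paper argues for: the policy lives in the genome object rather than in the agent’s code, so in principle it can be swapped at runtime without touching the loop.
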
15 pages, 2871 KB  
Article
TD3 Reinforcement Learning Algorithm Used for Health Condition Monitoring of a Cooling Water Pump
by Miguel A. Sanz-Bobi, Inés Rodriguez, F. Javier Bellido-López, Antonio Muñoz, Javier Anguera, Daniel Gonzalez-Calvo and Tomas Alvarez-Tejedor
Computers 2025, 14(12), 540; https://doi.org/10.3390/computers14120540 - 9 Dec 2025
Viewed by 269
Abstract
In this paper, we describe the procedure for implementing a reinforcement learning algorithm, TD3, to learn the performance of a cooling water pump, and how this type of learning can be used to detect degradation and evaluate the pump’s health condition. Machine learning algorithms of this kind have not been used extensively in the scientific literature to monitor the degradation of industrial components, so this study attempts to fill that gap by presenting the main characteristics of their application in a real case. The method consists of several models that predict the expected evolution of significant behavior variables in the absence of anomalies, each characterizing a different aspect of the pump’s performance. Examples of these variables are bearing temperatures and vibrations at different pump locations. All of the data used in this paper come from the SCADA system of the power plant where the cooling water pump is located. Full article

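The monitoring logic the abstract describes (models predict how a behavior variable should evolve when the pump is healthy; sustained deviations of the measurements from those predictions signal degradation) can be sketched independently of the TD3 training itself. In the Python sketch below, the trained model is stubbed out by a toy linear function, and all variable names, numbers, and thresholds are illustrative.

# Residual-based health check; the learned model is stubbed out and
# every number here is illustrative, not from the paper.
import numpy as np

def expected_bearing_temp(flow_rate: np.ndarray) -> np.ndarray:
    # Stand-in for the trained model's anomaly-free prediction.
    return 35.0 + 0.02 * flow_rate

def health_check(measured, flow_rate, threshold=2.0, window=5):
    residual = measured - expected_bearing_temp(flow_rate)
    # Degradation flag: the rolling mean residual exceeds the threshold.
    rolling = np.convolve(residual, np.ones(window) / window, mode="valid")
    return rolling > threshold

flow = np.full(20, 500.0)                                      # steady operating point
temp = 45.0 + np.r_[np.zeros(10), np.linspace(0.0, 5.0, 10)]   # slow upward drift
print(health_check(temp, flow))  # alarms appear once the drift builds up
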
27 pages, 1212 KB  
Systematic Review
Enhancing Cybersecurity Readiness in Non-Profit Organizations Through Collaborative Research and Innovation—A Systematic Literature Review
by Maryam Roshanaei, Premkumar Krishnamurthy, Anivesh Sinha, Vikrant Gokhale, Faizan Muhammad Raza and Dušan Ramljak
Computers 2025, 14(12), 539; https://doi.org/10.3390/computers14120539 - 9 Dec 2025
Viewed by 409
Abstract
Non-profit organizations (NPOs) are crucial for building equitable and thriving communities. The majority of NPOs are small, community-based organizations that serve local needs. Despite their significance, NPOs often lack the resources to manage cybersecurity effectively, and information about them is usually found in nonacademic or practitioner sources rather than in the academic literature. The recent surge in cyberattacks on NPOs underscores the urgent need for investment in cybersecurity readiness. The absence of robust safeguards and cybersecurity preparedness not only exposes NPOs to risks and vulnerabilities but also erodes trust and diminishes the value donors and volunteers place on them. Through a systematic literature review (SLR) mapping framework, existing work on cyber threat assessment and mitigation is leveraged to develop a framework and data collection plan that address the significant cybersecurity vulnerabilities faced by NPOs. The research aims to offer actionable guidance that NPOs can implement within their resource constraints to enhance their cybersecurity posture. The SLR adheres to PRISMA 2020 guidelines in examining the state of cybersecurity readiness in NPOs. An initial 4650 records were examined on 6 March 2025. We excluded studies that did not answer our research questions or did not discuss cybersecurity readiness in NPOs. The quality of the selected studies was assessed on the basis of methodology, clarity, completeness, and transparency, resulting in 23 included studies. A further 37 studies were added by screening papers that referenced, or were referenced by, the relevant studies. Results were synthesized through quantitative topic analysis and qualitative analysis to identify key themes and patterns. This study makes the following contributions: (i) it identifies and synthesizes the top cybersecurity risks for NPOs, their service impacts, and mitigation methods; (ii) it summarizes affordable cybersecurity practices, with an emphasis on employee training and sector-specific knowledge gaps; (iii) it analyzes organizational and contextual factors (e.g., geography, budget, IT skills, cyber insurance, vendor dependencies) that shape cybersecurity readiness; and (iv) it reviews and integrates existing assessment and resilience frameworks applicable to NPOs. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)

23 pages, 722 KB  
Article
Enhancing Student Engagement and Performance Through Personalized Study Plans in Online Learning: A Proof-of-Concept Pilot Study
by Indika Karunaratne, Ranasignhe Arachchilage Ashinka Shani, Vithanage Chethani Sandamali Vithanage, Pavithra Senanayake and Ajantha Sanjeewa Atukorale
Computers 2025, 14(12), 538; https://doi.org/10.3390/computers14120538 - 9 Dec 2025
Cited by 1 | Viewed by 640
Abstract
This study examines how interaction data from Learning Management Systems (LMSs) can be leveraged to predict student performance and enhance academic outcomes through personalized study plans tailored to individual learning styles. The research followed three phases: (i) analyzing the relationship between engagement and [...] Read more.
This study examines how interaction data from Learning Management Systems (LMSs) can be leveraged to predict student performance and enhance academic outcomes through personalized study plans tailored to individual learning styles. The research followed three phases: (i) analyzing the relationship between engagement and performance, (ii) developing predictive models for academic outcomes, and (iii) generating customized study plan recommendations. Clustering analysis identified three distinct learner profiles—high-engagement–high-performance, low-engagement–high-performance, and low-engagement–low-performance—with no cases of high-engagement–low-performance, underscoring the pivotal role of engagement in academic success. Among clustering approaches, K-Means produced the most precise grouping. For prediction, Support Vector Machines (SVMs) achieved the highest accuracy (68.8%) in classifying students across 11 grade categories, supported by oversampling techniques to address class imbalance. Personalized study plans, derived using K-Nearest Neighbor (KNN) classifiers, significantly improved student performance in controlled experiments. To the best of our knowledge, this represents a novel attempt in this context to align predictive modeling with the full grading structure of undergraduate programs. These findings highlight the potential of integrating LMS data with machine learning to foster engagement and improve learning outcomes. Future work will focus on expanding datasets, refining predictive accuracy, and incorporating additional personalization features to strengthen adaptive learning. Full article
(This article belongs to the Section Human–Computer Interactions)

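The pipeline this abstract describes (unsupervised clustering of LMS engagement features, then a KNN classifier that maps a student’s profile to a recommendation) follows a standard scikit-learn pattern; the sketch below reproduces that pattern with synthetic data, and the feature columns and study-plan labels are invented for illustration, not taken from the study.

# Cluster-then-recommend pattern with synthetic stand-in data; the
# features and plan labels are hypothetical, not the paper's.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Columns: logins/week, forum posts/week, hours on LMS (all synthetic).
engagement = rng.uniform([0, 0, 0], [20, 15, 40], size=(120, 3))

# Phase (i) analogue: K-Means groups students into engagement profiles.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(engagement)

# Phase (iii) analogue: a KNN classifier maps a profile to a
# (hypothetical) study-plan label derived from the cluster assignments.
plans = np.array(["revision-heavy", "balanced", "practice-focused"])[clusters]
recommender = KNeighborsClassifier(n_neighbors=5).fit(engagement, plans)

new_student = np.array([[4.0, 1.0, 6.5]])
print(recommender.predict(new_student))
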