Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

27 pages, 1588 KB  
Article
Digital Literacy in Higher Education: Examining University Students’ Competence in Online Information Practices
by Maria Sofia Georgopoulou, Christos Troussas, Akrivi Krouska and Cleo Sgouropoulou
Computers 2025, 14(12), 528; https://doi.org/10.3390/computers14120528 - 2 Dec 2025
Cited by 5 | Viewed by 4842
Abstract
The rapid advance of digital technologies has transformed how information is accessed, processed, and shared. As this evolution accelerates, however, it brings notable challenges, including rapidly spreading misinformation and a growing demand for critical-thinking competences. Digital literacy, encompassing the ability to navigate, evaluate, and create digital content effectively, emerges as a crucial skillset for succeeding in the modern world. This study assesses the digital literacy levels of university students and their ability to critically engage with digital technologies, with a specific focus on their competences in evaluating information, utilizing technology, and engaging in online communities. A quiz-type questionnaire, informed by frameworks such as DigComp 2.2 and Eshet-Alkalai's model, was developed to assess participants' self-perceived and applied competences, with a focus on emerging challenges, such as deepfake detection, not fully covered in existing tools. The findings indicate that while most students are aware of various criteria for accessing and evaluating online content, there is room for improvement in consistently applying these criteria and in understanding the risks of misinformation and the responsible use of online sources. Exploratory analyses reveal minimal differences by department and year of study, suggesting that targeted interventions are required across all fields of study. The results underline the importance of cultivating critical and ethical digital literacy within higher education to enhance digital citizenship. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))

30 pages, 438 KB  
Article
Multi-Agent RAG Framework for Entity Resolution: Advancing Beyond Single-LLM Approaches with Specialized Agent Coordination
by Aatif Muhammad Althaf, Muzakkiruddin Ahmed Mohammed, Mariofanna Milanova, John Talburt and Mert Can Cakmak
Computers 2025, 14(12), 525; https://doi.org/10.3390/computers14120525 - 1 Dec 2025
Cited by 1 | Viewed by 4799
Abstract
Entity resolution in real-world datasets remains a persistent challenge, particularly for identifying households and detecting co-residence patterns within noisy and incomplete data. While Large Language Models (LLMs) show promise, monolithic approaches often suffer from limited scalability and interpretability. This study introduces a multi-agent Retrieval-Augmented Generation (RAG) framework that decomposes household entity resolution into coordinated, task-specialized agents implemented using LangGraph. The system includes four agents responsible for direct matching, transitive linkage, household clustering, and residential movement detection, combining rule-based preprocessing with LLM-guided reasoning. Evaluation on synthetic S12PX dataset segments containing 200–300 records demonstrates 94.3% accuracy on name variation matching and a 61% reduction in API calls compared to single-LLM baselines, while maintaining transparent and traceable decision processes. These results indicate that coordinated multi-agent specialization improves efficiency and interpretability, providing a structured and extensible approach for entity resolution in census, healthcare, and other administrative data domains. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
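The rule-based preprocessing that feeds such matching agents can be illustrated with a minimal name-variation scorer. This is a hedged sketch, not the authors' implementation: the normalization steps, the blend of token-level Jaccard overlap with character-level similarity, and the 0.7 threshold are all illustrative assumptions.

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in name)
    return " ".join(cleaned.lower().split())

def name_similarity(a: str, b: str) -> float:
    """Blend token overlap (Jaccard) with character-level similarity."""
    ta, tb = set(normalize(a).split()), set(normalize(b).split())
    jaccard = len(ta & tb) / len(ta | tb) if ta | tb else 0.0
    chars = SequenceMatcher(None, normalize(a), normalize(b)).ratio()
    return 0.5 * jaccard + 0.5 * chars

def is_match(a: str, b: str, threshold: float = 0.7) -> bool:
    """Illustrative decision rule; the threshold is an assumption."""
    return name_similarity(a, b) >= threshold
```

In a pipeline like the one described, a cheap scorer of this kind would shortlist candidate pairs so that the LLM-guided agents only reason over ambiguous cases, which is one way the reported reduction in API calls could arise.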

48 pages, 3559 KB  
Review
Evolution and Perspectives in IT Governance: A Systematic Literature Review
by Álvaro Vaya-Arboledas, Mikel Ferrer-Oliva and José Amelio Medina-Merodio
Computers 2025, 14(12), 520; https://doi.org/10.3390/computers14120520 - 28 Nov 2025
Cited by 1 | Viewed by 4278
Abstract
The study presents a systematic review of the state of the art in Information Technology (IT) governance research. Following the PRISMA 2020 protocol and drawing on Scopus and Web of Science, covering publications from 1999 to May 2025, 380 relevant articles were identified, analysed and categorised. A bibliometric analysis supported by tools such as VOSviewer and SciMaT mapped the principal thematic strands, influential authors and institutions, and revealed research gaps. The results indicate a consolidated field in which resource allocation, industrial management, strategic alignment and board-level IT governance operate as driving themes, while information management, the configuration of the IT function and the regulatory nexus between laws and information security remain emerging areas. The conclusions emphasise the theoretical implications of clarifying how IT governance shapes IT investment and initiative prioritisation, sectoral configurations and strategic alignment, and the practical implications of using these mechanisms to design and refine governance structures, processes and metrics in regulated organisations so that value creation, risk control and accountability are more explicitly aligned. Full article

17 pages, 1542 KB  
Article
Classification of Drowsiness and Alertness States Using EEG Signals to Enhance Road Safety: A Comparative Analysis of Machine Learning Algorithms and Ensemble Techniques
by Masoud Sistaninezhad, Saman Rajebi, Siamak Pedrammehr, Arian Shajari, Hussain Mohammed Dipu Kabir, Thuong Hoang, Stefan Greuter and Houshyar Asadi
Computers 2025, 14(12), 509; https://doi.org/10.3390/computers14120509 - 24 Nov 2025
Viewed by 1116
Abstract
Drowsy driving is a major contributor to road accidents, as reduced vigilance degrades situational awareness and reaction control. Reliable assessment of alertness versus drowsiness can therefore support accident prevention. Key gaps remain in physiology-based detection, including robust identification of microsleep and transient vigilance shifts, sensitivity to fatigue-related changes, and resilience to motion-related signal artifacts; practical sensing solutions are also needed. Using Electroencephalogram (EEG) recordings from the MIT-BIH Polysomnography Database (18 records; >80 h of clinically annotated data), we framed wakefulness–drowsiness discrimination as a binary classification task. From each 30 s segment, we extracted 61 handcrafted features spanning linear, nonlinear, and frequency descriptors designed to be largely robust to signal-quality variations. Three classifiers were evaluated—k-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Decision Tree (DT)—alongside a DT-based bagging ensemble. KNN achieved 99% training and 80.4% test accuracy; SVM reached 80.0% and 78.8%; and DT obtained 79.8% and 78.3%. Data standardization did not improve performance. The ensemble attained 100% training and 84.7% test accuracy. While these results indicate strong discriminative capability, the training–test gap suggests overfitting and underscores the need for validation on larger, more diverse cohorts to ensure generalizability. Overall, the findings demonstrate the potential of machine learning to identify vigilance states from EEG. We present an interpretable EEG-based classifier built on clinically scored polysomnography and discuss translation considerations; external validation in driving contexts is reserved for future work. Full article
(This article belongs to the Special Issue AI for Humans and Humans for AI (AI4HnH4AI))
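The classifier comparison can be made concrete with a from-scratch k-NN and a bagged majority vote over bootstrap resamples, mirroring the KNN-plus-ensemble setup at toy scale. This is an illustrative sketch only: the study used 61 handcrafted EEG features and standard library implementations, whereas the two-feature "segments" and distances below are invented.

```python
import random
from collections import Counter

def knn_predict(train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted(train, key=lambda fv: sum((a - b) ** 2 for a, b in zip(fv[0], x)))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def bagged_predict(train, x, n_models=5, k=3, seed=0):
    """Bagging: vote over k-NN models fit on bootstrap resamples of train."""
    rng = random.Random(seed)
    votes = Counter(
        knn_predict([rng.choice(train) for _ in train], x, k) for _ in range(n_models)
    )
    return votes.most_common(1)[0][0]

# Toy 2-feature segments: (features, label) with 0 = "awake", 1 = "drowsy"
train = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.15, 0.25), 0),
         ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.85, 0.75), 1)]
```

Bagging reduces variance by averaging over resampled models, which is one plausible reason the paper's DT-based ensemble generalized better (84.7% test accuracy) than any single classifier.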

22 pages, 765 KB  
Article
Evaluating Deployment of Deep Learning Model for Early Cyberthreat Detection in On-Premise Scenario Using Machine Learning Operations Framework
by Andrej Ralbovský, Ivan Kotuliak and Dennis Sobolev
Computers 2025, 14(12), 506; https://doi.org/10.3390/computers14120506 - 23 Nov 2025
Cited by 2 | Viewed by 1136
Abstract
Modern on-premises threat detection increasingly relies on deep learning over network and system logs, yet organizations must balance infrastructure and resource constraints with maintainability and performance. We investigate how adopting MLOps influences the deployment and runtime behavior of a recurrent-neural-network-based detector for malicious event sequences. Our investigation includes surveying modern open-source platforms to select a suitable candidate, implementing it on a two-node setup with a CPU-centric control server and a GPU worker, and evaluating the performance of a containerized, MLOps-integrated setup against bare metal. For evaluation, we use four scenarios that cross the deployment model (bare metal vs. containerized) with two versions of the software stack, using a sizable training corpus and a held-out inference subset representative of operational traffic. For training and inference, we measured execution time, CPU and RAM utilization, and peak GPU memory to find notable patterns and correlations that provide insights for organizations adopting an on-premises-first approach. Our findings show that MLOps can be adopted even in resource-constrained environments without inherent performance penalties; thus, platform choice should be guided by operational concerns (reproducibility, scheduling, tracking), while performance tuning should prioritize pinning and validating the software stack, which has a surprisingly large impact on resource utilization and the execution process. Our study offers a reproducible blueprint for on-premises cyber-analytics and clarifies where optimization yields the greatest return. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (3rd Edition))
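The per-run measurements described above can be sketched with the Python standard library. This is a stand-in, not the authors' tooling: their setup measured system-level CPU, RAM, and GPU memory, whereas `tracemalloc` below only traces Python-level allocations, and the workload is a placeholder.

```python
import time
import tracemalloc

def profile(fn, *args):
    """Measure wall-clock time and peak Python-level memory for one call."""
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
    tracemalloc.stop()
    return result, elapsed, peak

# Placeholder workload standing in for a training or inference step
result, elapsed, peak = profile(lambda n: sum(i * i for i in range(n)), 100_000)
```

Repeating such measurements across the four deployment scenarios is how the kind of stack-version effects the paper reports would surface.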

26 pages, 613 KB  
Article
AutoQALLMs: Automating Web Application Testing Using Large Language Models (LLMs) and Selenium
by Sindhupriya Mallipeddi, Muhammad Yaqoob, Javed Ali Khan, Tahir Mehmood, Alexios Mylonas and Nikolaos Pitropakis
Computers 2025, 14(11), 501; https://doi.org/10.3390/computers14110501 - 18 Nov 2025
Cited by 1 | Viewed by 3485
Abstract
Modern web applications change frequently in response to user and market needs, making their testing challenging. Manual testing and automation methods often struggle to keep up with these changes. We propose an automated testing framework, AutoQALLMs, that utilises various LLMs (Large Language Models), including GPT-4, Claude, and Grok, alongside Selenium WebDriver, BeautifulSoup, and regular expressions. This framework enables one-click testing, where users provide a URL as input and receive test results as output, thus eliminating the need for human intervention. It extracts HTML (Hypertext Markup Language) elements from the webpage and utilises the LLMs' APIs to generate Selenium-based test scripts. Regular expressions enhance the clarity and maintainability of these scripts. The scripts are executed automatically, and the results, such as pass/fail status and error details, are displayed to the tester. This streamlined input–output process forms the core of the AutoQALLMs framework. We evaluated the framework on 30 websites. The results show that the system drastically reduces the time needed to create test cases, achieves broad test coverage (96%) with the Claude 4.5 LLM, which is competitive with manual scripts (98%), and allows for rapid regeneration of tests in response to changes in webpage structure. Feedback from software testing experts confirmed that the proposed AutoQALLMs method enables faster regression testing, reduces manual effort, and maintains reliable test execution. However, some limitations remain in handling complex page changes and validation. Although Claude 4.5 achieved slightly higher test coverage in the comparative evaluation, GPT-4 was selected as the default model for AutoQALLMs due to its cost-efficiency, reproducibility, and stable script generation across diverse websites. Future improvements may focus on increasing accuracy, adding self-healing techniques, and expanding to more complex testing scenarios. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
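The element-extraction step can be sketched with the standard library's `html.parser` standing in for BeautifulSoup (which the paper actually uses). The target tag set and the sample page are illustrative assumptions; a real pipeline would pass the collected elements to an LLM prompt for Selenium script generation.

```python
from html.parser import HTMLParser

class ElementExtractor(HTMLParser):
    """Collect interactive elements a generated Selenium script would target."""
    TARGETS = {"a", "input", "button", "form", "select", "textarea"}

    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag in self.TARGETS:
            # Record the tag name plus its attributes (id, name, type, ...)
            self.elements.append({"tag": tag, **dict(attrs)})

page = """
<form id="login">
  <input type="text" name="user">
  <input type="password" name="pass">
  <button type="submit">Sign in</button>
</form>
<a href="/help">Help</a>
"""

extractor = ElementExtractor()
extractor.feed(page)
```

Serialising `extractor.elements` into the prompt gives the model stable locators (ids, names) to build test scripts against, which also makes regeneration after page changes cheap.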

19 pages, 4107 KB  
Article
Structured Prompting and Collaborative Multi-Agent Knowledge Distillation for Traffic Video Interpretation and Risk Inference
by Yunxiang Yang, Ningning Xu and Jidong J. Yang
Computers 2025, 14(11), 490; https://doi.org/10.3390/computers14110490 - 9 Nov 2025
Cited by 1 | Viewed by 1610
Abstract
Comprehensive highway scene understanding and robust traffic risk inference are vital for advancing Intelligent Transportation Systems (ITS) and autonomous driving. Traditional approaches often struggle with scalability and generalization, particularly under the complex and dynamic conditions of real-world environments. To address these challenges, we introduce a novel structured prompting and multi-agent collaborative knowledge distillation framework that enables automatic generation of high-quality traffic scene annotations and contextual risk assessments. Our framework orchestrates two large vision–language models (VLMs): GPT-4o and o3-mini, using a structured Chain-of-Thought (CoT) strategy to produce rich, multiperspective outputs. These outputs serve as knowledge-enriched pseudo-annotations for supervised fine-tuning of a much smaller student VLM. The resulting compact 3B-scale model, named VISTA (Vision for Intelligent Scene and Traffic Analysis), is capable of understanding low-resolution traffic videos and generating semantically faithful, risk-aware captions. Despite its significantly reduced parameter count, VISTA achieves strong performance across established captioning metrics (BLEU-4, METEOR, ROUGE-L, and CIDEr) when benchmarked against its teacher models. This demonstrates that effective knowledge distillation and structured role-aware supervision can empower lightweight VLMs to capture complex reasoning capabilities. The compact architecture of VISTA facilitates efficient deployment on edge devices, enabling real-time risk monitoring without requiring extensive infrastructure upgrades. Full article

17 pages, 584 KB  
Article
An Adaptive Multi-Agent Framework for Semantic-Aware Process Mining
by Xiaohan Su, Bin Liang, Zhidong Li, Yifei Dong, Justin Wang and Fang Chen
Computers 2025, 14(11), 481; https://doi.org/10.3390/computers14110481 - 5 Nov 2025
Viewed by 1386
Abstract
With rapid advancements in large language models for natural language processing, their role in semantic-aware process mining is growing. We study semantics-aware process mining, where decisions must reflect both event logs and textual rules. We propose an online, adaptive multi-agent framework that operates over a single knowledge base shared across three tasks—semantic next-activity prediction (S_NAP), trace-level semantic anomaly detection (T_SAD), and activity-level semantic anomaly detection (A_SAD). The approach has three key elements: (i) cross-task corroboration at retrieval time, formed by pooling in-domain and out-of-domain candidates to strengthen coverage; (ii) feedback-to-index calibration that converts user correctness/usefulness into propensity-debiased, smoothed priors that immediately bias recall and first-stage ordering for the next query; and (iii) stability controls—consistency-aware scoring, confidence gating with failure-driven query rewriting, task-level trust regions, and a sequential rule to select the relevance–quality interpolation. We instantiate the framework with Mistral-7B-Instruct, Llama-3-8B, GPT-3.5, and GPT-4o and evaluate it using macro-F1. Compared to in-context learning, our framework improves S_NAP, T_SAD, and A_SAD by 44.0%, 15.6%, and 7.1%, respectively, and attains the best overall profile against retrieval-only and correction-centric baselines. Ablations show that removing index priors causes the steepest degradation, cross-task corroboration yields consistent gains—most visibly on S_NAP—and confidence gating preserves robustness to difficult inputs. The result is immediate serve-time adaptivity without heavy fine-tuning, making semantic process analysis practical under drift. Full article
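The next-activity prediction task (S_NAP) can be grounded with a trivial frequency baseline: predict the most common successor of the current activity in the event log. This is a hedged illustration of the task itself, not the authors' retrieval-augmented multi-agent framework, and the example log is invented.

```python
from collections import Counter, defaultdict

def train_next_activity(traces):
    """Count activity bigrams: which activity most often follows each one."""
    follows = defaultdict(Counter)
    for trace in traces:
        for current, nxt in zip(trace, trace[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(follows, activity):
    """Return the most frequent successor, or None if the activity is unseen."""
    if activity not in follows:
        return None
    return follows[activity].most_common(1)[0][0]

log = [
    ["register", "review", "approve", "pay"],
    ["register", "review", "reject"],
    ["register", "review", "approve", "pay"],
]
model = train_next_activity(log)
```

A purely frequency-based baseline like this ignores the textual process rules the paper's framework consults, which is exactly the gap that semantic retrieval over a shared knowledge base is meant to close.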

21 pages, 17739 KB  
Article
Re_MGFE: A Multi-Scale Global Feature Embedding Spectrum Sensing Method Based on Relation Network
by Jiayi Wang, Fan Zhou, Jinyang Ren, Lizhuang Tan, Jian Wang, Peiying Zhang and Shaolin Liao
Computers 2025, 14(11), 480; https://doi.org/10.3390/computers14110480 - 4 Nov 2025
Viewed by 653
Abstract
Currently, the increasing number of Internet of Things devices makes the shortage of spectrum resources prominent. Spectrum sensing technology can effectively alleviate this problem through real-time monitoring of the spectrum. However, in practical applications it is difficult to obtain a large number of labeled samples, so neural network models cannot be fully trained, which degrades performance. Moreover, existing few-shot methods focus on capturing spatial features while ignoring how features are represented at different scales, thus reducing feature diversity. To address these issues, this paper proposes a few-shot spectrum sensing method based on multi-scale global features. To enhance feature diversity, the method employs a multi-scale feature extractor, which improves the model's ability to distinguish signals and avoids overfitting. In addition, to make full use of the frequency features at different scales, a learnable weight feature reinforcer is constructed to enhance the frequency features. Simulation results show that, for SNRs from 0 to 10 dB, the recognition accuracy of the network exceeds 81% across all task modes, outperforming existing methods and enabling accurate spectrum sensing under few-shot conditions. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
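The idea of extracting features at several scales can be sketched as windowed average-energy features over a 1-D signal. This is an illustrative simplification: the paper's multi-scale extractor is a learned neural module, and the scale set and energy statistic here are arbitrary assumptions.

```python
def multi_scale_features(signal, scales=(2, 4, 8)):
    """Average-energy features computed over non-overlapping windows
    at several window lengths (scales)."""
    feats = []
    for scale in scales:
        windows = [signal[i:i + scale]
                   for i in range(0, len(signal) - scale + 1, scale)]
        feats.extend(sum(x * x for x in w) / scale for w in windows)
    return feats
```

Small scales preserve local detail while large scales capture global structure; concatenating both is what gives a multi-scale representation its extra feature diversity.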

23 pages, 3017 KB  
Article
Real-Time Passenger Flow Analysis in Tram Stations Using YOLO-Based Computer Vision and Edge AI on Jetson Nano
by Sonia Diaz-Santos, Pino Caballero-Gil and Cándido Caballero-Gil
Computers 2025, 14(11), 476; https://doi.org/10.3390/computers14110476 - 3 Nov 2025
Cited by 1 | Viewed by 2476
Abstract
Efficient real-time computer vision-based passenger flow analysis is increasingly important for the management of intelligent transportation systems and smart cities. This paper presents the design and implementation of a system for real-time object detection, tracking, and people counting in tram stations. The proposed approach integrates YOLO-based detection with a lightweight tracking module and is deployed on an NVIDIA Jetson Nano device, enabling operation under resource constraints and demonstrating the potential of edge AI. Multiple YOLO versions, from v3 to v11, were evaluated on data collected in collaboration with Metropolitano de Tenerife. Experimental results show that YOLOv5s achieves the best balance between detection accuracy and inference speed, reaching 96.85% accuracy in counting tasks. The system demonstrates the feasibility of applying edge AI to monitor passenger flow in real time, contributing to intelligent transportation and smart city initiatives. Full article
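Downstream of detection and tracking, people counting often reduces to virtual-line crossing logic. The sketch below assumes per-track centroid y-coordinates are already available (YOLO detection and the tracking module are omitted); the line position and the example tracks are invented.

```python
def count_crossings(tracks, line_y):
    """Count entries/exits as tracked centroids cross a virtual line.

    tracks: {track_id: [y0, y1, ...]} centroid y-coordinates per frame.
    Returns (entering, exiting): crossings downward vs. upward past line_y.
    """
    entering = exiting = 0
    for ys in tracks.values():
        for prev, cur in zip(ys, ys[1:]):
            if prev < line_y <= cur:      # moved downward across the line
                entering += 1
            elif prev >= line_y > cur:    # moved upward across the line
                exiting += 1
    return entering, exiting

tracks = {
    1: [120, 160, 210, 260],  # moves down past y=200 -> one entry
    2: [300, 240, 190, 150],  # moves up past y=200   -> one exit
    3: [100, 130, 150, 170],  # never reaches the line
}
entering, exiting = count_crossings(tracks, line_y=200)
```

Keeping the counting logic this simple is what makes the whole pipeline viable on a Jetson Nano: the expensive step is detection, not bookkeeping.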

16 pages, 34460 KB  
Article
A Mixed Reality-Based Training and Guidance System for Quality Control
by Luzia Saraiva, João Henriques, José Silva, André Barbosa and Serafim M. Oliveira
Computers 2025, 14(11), 479; https://doi.org/10.3390/computers14110479 - 3 Nov 2025
Viewed by 884
Abstract
The increasing demand for customized products poses significant challenges for industry: raising performance while reducing costs. Meeting that demand requires operators to cope with greater complexity and more demanding skills at higher cognitive load, while sustaining performance and avoiding errors. To address this, a virtual instructor framework is proposed to instruct operators and support procedural quality, enabled by You Only Look Once (YOLO) models and by equipping operators with the Magic Leap 2 as a Head-Mounted Display (HMD). The framework relies on key modules: Instructor, Management, Core, Object Detection, 3D Modeling, and Storage. A use case in the automotive industry helped validate a Proof-of-Concept (PoC) of the proposed framework, which can guide the development of new tools supporting assembly operations in industry. Full article

35 pages, 8683 KB  
Article
Teaching Machine Learning to Undergraduate Electrical Engineering Students
by Gerald Fudge, Anika Rimu, William Zorn, July Ringle and Cody Barnett
Computers 2025, 14(11), 465; https://doi.org/10.3390/computers14110465 - 28 Oct 2025
Viewed by 1499
Abstract
Proficiency in machine learning (ML) and the associated computational math foundations have become critical skills for engineers. Required areas of proficiency include the ability to use available ML tools and the ability to develop new tools to solve engineering problems. Engineers also need to be proficient in using generative artificial intelligence (AI) tools in a variety of contexts, including as an aid to learning, research, writing, and code generation. Using these tools properly requires a solid understanding of the associated computational math foundation. Without this foundation, engineers will struggle with developing new tools and can easily misuse available ML/AI tools, leading to poorly designed systems that are suboptimal or even harmful to society. Teaching (and learning) these skills can be difficult due to the breadth of skills required. One contribution of this paper is that it approaches teaching this topic within an industrial engineering human factors framework. Another contribution is the detailed case study narrative describing specific pedagogical challenges, including implementation of teaching strategies (successful and unsuccessful), recent observed trends in generative AI, and student perspectives on learning this topic. Although the primary methodology is anecdotal, we also include empirical data in support of anecdotal results. Full article
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)

30 pages, 2362 KB  
Article
Bridging the Gap: Enhancing BIM Education for Sustainable Design Through Integrated Curriculum and Student Perception Analysis
by Tran Duong Nguyen and Sanjeev Adhikari
Computers 2025, 14(11), 463; https://doi.org/10.3390/computers14110463 - 25 Oct 2025
Cited by 5 | Viewed by 1986
Abstract
Building Information Modeling (BIM) is a transformative tool in Sustainable Design (SD), providing measurable benefits for efficiency, collaboration, and performance in architectural, engineering, and construction (AEC) practices. Despite its growing presence in academic curricula, a gap persists between students’ recognition of BIM’s sustainability potential and their confidence or ability to apply these concepts in real-world practice. This study examines students’ understanding and perceptions of BIM and Sustainable Design education, offering insights for enhancing curriculum integration and pedagogical strategies. The objectives are to: (1) assess students’ current understanding of BIM and Sustainable Design; (2) identify gaps and misconceptions in applying BIM to sustainability; (3) evaluate the effectiveness of existing teaching methods and curricula to inform future improvements; and (4) explore the alignment between students’ theoretical knowledge and practical abilities in using BIM for Sustainable Design. The research methodology includes a comprehensive literature review and a survey of 213 students from architecture and construction management programs. Results reveal that while most students recognize the value of BIM for early-stage sustainable design analysis, many lack confidence in their practical skills, highlighting a perception–practice gap. The paper examines current educational practices, identifies curriculum shortcomings, and proposes strategies, such as integrated, hands-on learning experiences, to better align academic instruction with industry needs. Distinct from previous studies that focused primarily on single-discipline or software-based training, this research provides an empirical, cross-program analysis of students’ perception–practice gaps and offers curriculum-level insights for sustainability-driven practice. These findings provide practical recommendations for enhancing BIM and sustainability education, thereby better preparing students to meet the demands of the evolving AEC sector. Full article

43 pages, 20477 KB  
Article
Investigation of Cybersecurity Bottlenecks of AI Agents in Industrial Automation
by Sami Shrestha, Chipiliro Banda, Amit Kumar Mishra, Fatiha Djebbar and Deepak Puthal
Computers 2025, 14(11), 456; https://doi.org/10.3390/computers14110456 - 23 Oct 2025
Cited by 4 | Viewed by 4058
Abstract
The growth of Agentic AI systems in industrial automation has introduced new cybersecurity issues that put the reliability and integrity of these systems at risk. In this study we examine the cybersecurity challenges of industrial automation, covering the threats, risks, and vulnerabilities related to Agentic AI. We conducted a systematic literature review of present-day cybersecurity practices for industrial automation and Agentic AI, and used a simulation-based approach to study the security issues and their impact on industrial automation systems. Our results identify the key areas of focus and the mitigation strategies that may be put in place to secure the integration of Agentic AI in industrial automation. These findings will support the development of more secure and reliable industrial automation systems, improving their overall cybersecurity. Full article
(This article belongs to the Special Issue AI for Humans and Humans for AI (AI4HnH4AI))

30 pages, 3604 KB  
Article
Integrated Systems Ontology (ISOnto): Integrating Engineering Design and Operational Feedback for Dependable Systems
by Haytham Younus, Felician Campean, Sohag Kabir, Pascal Bonnaud and David Delaux
Computers 2025, 14(11), 451; https://doi.org/10.3390/computers14110451 - 22 Oct 2025
Cited by 1 | Viewed by 1092
Abstract
This paper proposes an integrated ontological framework, Integrated Systems Ontology (ISOnto), for dependable systems engineering by semantically linking design models with real-world operational failure data. Building upon the recently proposed Function–Behaviour–Structure–Failure Modes (FBSFM) framework, ISOnto integrates early-stage design information with field-level evidence to support more informed, traceable, and dependable failure analysis. This extends the semantic scope of the FBSFM ontology to include operational/field feedback from warranty claims and technical inspections, enabling two-way traceability between design-phase assumptions (functions, behaviours, structures, and failure modes) and field-reported failures, causes, and effects. As a theoretical contribution, ISOnto introduces a formal semantic bridge between design and operational phases, strengthening the validation of known failure modes and the discovery of previously undocumented ones. Developed using established ontology engineering practices and formalised in OWL with Protégé, it incorporates domain-specific extensions to represent field data with structured mappings to design entities. A real-world automotive case study conducted with a global manufacturer demonstrates ISOnto’s ability to consolidate multisource lifecycle data into a coherent, machine-readable repository. The framework supports advanced reasoning, structured querying, and system-level traceability, thereby facilitating continuous improvement, data-driven validation, and more reliable decision-making across product development and reliability engineering. Full article
(This article belongs to the Special Issue Recent Trends in Dependable and High Availability Systems)

41 pages, 2159 KB  
Systematic Review
Predicting Website Performance: A Systematic Review of Metrics, Methods, and Research Gaps (2010–2024)
by Mohammad Ghattas, Suhail Odeh and Antonio M. Mora
Computers 2025, 14(10), 446; https://doi.org/10.3390/computers14100446 - 20 Oct 2025
Viewed by 2886
Abstract
Website performance directly impacts user experience, trust, and competitiveness. While numerous studies have proposed evaluation methods, there is still no comprehensive synthesis that integrates performance metrics with predictive models. This study conducts a systematic literature review (SLR) following the PRISMA framework across seven academic databases (2010–2024). From 6657 initial records, 30 high-quality studies were included after rigorous screening and quality assessment. In addition, 59 website performance metrics were identified and validated through an expert survey, resulting in 16 core indicators. The review highlights a dominant reliance on traditional evaluation metrics (e.g., Load Time, Page Size, Response Time) and reveals limited adoption of machine learning and deep learning approaches. Most existing studies focus on e-government and educational websites, with little attention to e-commerce, healthcare, and industry domains. Furthermore, the geographic distribution of research remains uneven, with a concentration in Asia and limited contributions from North America and Africa. This study contributes by (i) consolidating and validating a set of 16 critical performance metrics, (ii) critically analyzing current methodologies, and (iii) identifying gaps in domain coverage and intelligent prediction models. Future research should prioritize cross-domain benchmarks, integrate machine learning for scalable predictions, and address the lack of standardized evaluation protocols. Full article
(This article belongs to the Section Human–Computer Interactions)

69 pages, 7515 KB  
Review
Towards an End-to-End Digital Framework for Precision Crop Disease Diagnosis and Management Based on Emerging Sensing and Computing Technologies: State over Past Decade and Prospects
by Chijioke Leonard Nkwocha and Abhilash Kumar Chandel
Computers 2025, 14(10), 443; https://doi.org/10.3390/computers14100443 - 16 Oct 2025
Cited by 6 | Viewed by 4541
Abstract
Early detection and diagnosis of plant diseases is critical for ensuring global food security and sustainable agricultural practices. This review comprehensively examines the latest advancements in crop disease risk prediction and onset detection through imaging techniques, machine learning (ML), deep learning (DL), and edge computing technologies. Traditional disease detection methods, which rely on visual inspections, are time-consuming and often inaccurate. While chemical analyses are accurate, they can be time-consuming and leave less flexibility to promptly implement remedial actions. In contrast, modern techniques such as hyperspectral and multispectral imaging, thermal imaging, and fluorescence imaging, among others, can provide non-invasive and highly accurate solutions for identifying plant diseases at early stages. The integration of ML and DL models, including convolutional neural networks (CNNs) and transfer learning, has significantly improved disease classification and severity assessment. Furthermore, edge computing and the Internet of Things (IoT) facilitate real-time disease monitoring by processing and communicating data directly in and from the field, reducing latency and reliance on in-house as well as centralized cloud computing. Despite these advancements, challenges remain in multimodal dataset standardization and in integrating the individual technologies of sensing, data processing, communication, and decision-making into a complete end-to-end solution for practical implementation. In addition, the robustness of such technologies under varying field conditions and their affordability have not previously been reviewed. To this end, this review paper covers the broad areas of sensing, computing, and communication systems to outline the transformative potential of end-to-end solutions for effective crop disease management in modern agricultural systems. This review also highlights the critical potential of integrating AI-driven disease detection with predictive models capable of analyzing multimodal data on environmental factors such as temperature and humidity, as well as visible-range and thermal imagery, for early disease diagnosis and timely management. Future research should focus on developing autonomous end-to-end disease monitoring systems that incorporate these technologies, fostering comprehensive precision agriculture and sustainable crop production. Full article

23 pages, 1409 KB  
Systematic Review
A Systematic Review of Machine Learning in Credit Card Fraud Detection Under Original Class Imbalance
by Nazerke Baisholan, J. Eric Dietz, Sergiy Gnatyuk, Mussa Turdalyuly, Eric T. Matson and Karlygash Baisholanova
Computers 2025, 14(10), 437; https://doi.org/10.3390/computers14100437 - 15 Oct 2025
Cited by 4 | Viewed by 9008
Abstract
Credit card fraud remains a significant concern for financial institutions due to its low prevalence, evolving tactics, and the operational demand for timely, accurate detection. Machine learning (ML) has emerged as a core approach, capable of processing large-scale transactional data and adapting to new fraud patterns. However, much of the literature modifies the natural class distribution through resampling, potentially inflating reported performance and limiting real-world applicability. This systematic literature review examines only studies that preserve the original class imbalance during both training and evaluation. Following PRISMA 2020 guidelines, strict inclusion and exclusion criteria were applied to ensure methodological rigor and relevance. Four research questions guided the analysis, focusing on dataset usage, ML algorithm adoption, evaluation metric selection, and the integration of explainable artificial intelligence (XAI). The synthesis reveals dominant reliance on a small set of benchmark datasets, a preference for tree-based ensemble methods, limited use of AUC-PR despite its suitability for skewed data, and rare implementation of operational explainability, most notably through SHAP. The findings highlight the need for semantics-preserving benchmarks, cost-aware evaluation frameworks, and analyst-oriented interpretability tools, offering a research agenda to improve reproducibility and enable effective, transparent fraud detection under real-world imbalance conditions. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
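The review's point about AUC-PR suiting skewed data can be made concrete with a small, self-contained sketch (the dataset and scores below are invented for illustration; average precision is used as a simple stand-in for AUC-PR):

```python
def average_precision(labels, scores):
    """Average precision: mean of the precision values attained at each
    true positive when items are ranked by descending score."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    tp, ap, n_pos = 0, 0.0, sum(labels)
    for i, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / i
    return ap / n_pos

# 1 fraud case in 10 transactions: a trivial "all legitimate" classifier
# scores 90% accuracy, yet carries no ranking signal at all. AP rewards
# only rankings that surface the rare positives.
labels = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
scores = [.1, .2, .1, .9, .3, .1, .2, .1, .1, .2]  # ranks the fraud first

print(average_precision(labels, scores))  # 1.0: fraud ranked top
```

This is why a high accuracy figure can coexist with useless fraud detection, and why the review flags the limited adoption of AUC-PR as a gap.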

25 pages, 3060 KB  
Article
Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games
by Sehar Shahzad Farooq, Hameedur Rahman, Samiya Abdul Wahid, Muhammad Alyan Ansari, Saira Abdul Wahid and Hosu Lee
Computers 2025, 14(10), 434; https://doi.org/10.3390/computers14100434 - 13 Oct 2025
Cited by 2 | Viewed by 3285
Abstract
Games are considered a suitable and standard benchmark for training, evaluating, and comparing the performance of artificial intelligence-based agents. In this research, an application of the Intrinsic Curiosity Module (ICM) with the Asynchronous Advantage Actor–Critic (A3C) algorithm is explored in action games. Although this combination has proven successful in several gaming environments, its effectiveness in action games is rarely explored. This research aims to assess whether integrating ICM with A3C promotes curiosity-driven exploration and adaptive learning in action games. Using the MAME Toolkit library, we interface with the game environments, preprocess game screens to focus on relevant visual elements, and create diverse game episodes for training. The A3C policy is optimized using the Proximal Policy Optimization (PPO) algorithm with tuned hyperparameters. Comparisons are made with baseline methods, including vanilla A3C, ICM with pixel-based predictions, and state-of-the-art exploration techniques. Additionally, we evaluate the agent’s generalization capability in separate environments. The results demonstrate that ICM and A3C effectively promote curiosity-driven exploration in action games, with the agent learning exploration behaviors without relying solely on external rewards. Notably, we also observed improved efficiency and learning speed compared to baseline approaches. This research contributes to curiosity-driven exploration in reinforcement learning-based virtual environments and provides insights into the exploration of complex action games. Successfully applying ICM and A3C in action games presents exciting opportunities for adaptive learning and efficient exploration in challenging real-world environments. Full article
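The curiosity signal at the heart of ICM can be sketched in a few lines of plain Python (a minimal illustration only: the real module learns its forward model and a feature encoding with neural networks, both omitted here, and the toy model and states below are invented):

```python
def intrinsic_reward(forward_model, state, action, next_state, scale=0.5):
    """ICM-style curiosity bonus: the forward model's error in predicting
    the next state. Larger error = less-familiar dynamics = more reward."""
    pred = forward_model(state, action)
    return scale * sum((p - n) ** 2 for p, n in zip(pred, next_state))

# Toy deterministic world: the action shifts every state coordinate.
def toy_forward_model(state, action):
    return [s + action for s in state]  # a model that has learned the dynamics

# Perfectly predicted transition -> no curiosity bonus.
print(intrinsic_reward(toy_forward_model, [0.0, 1.0], 1, [1.0, 2.0]))  # 0.0

# Surprising transition -> positive bonus that drives exploration.
print(intrinsic_reward(toy_forward_model, [0.0, 1.0], 1, [3.0, 2.0]))  # 2.0
```

In training, this bonus is added to the (possibly sparse) external reward, so the agent keeps exploring even when the environment pays out nothing.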

30 pages, 2870 KB  
Article
CourseEvalAI: Rubric-Guided Framework for Transparent and Consistent Evaluation of Large Language Models
by Catalin Anghel, Marian Viorel Craciun, Emilia Pecheanu, Adina Cocu, Andreea Alexandra Anghel, Paul Iacobescu, Calina Maier, Constantin Adrian Andrei, Cristian Scheau and Serban Dragosloveanu
Computers 2025, 14(10), 431; https://doi.org/10.3390/computers14100431 - 11 Oct 2025
Cited by 3 | Viewed by 2334
Abstract
Background and objectives: Large language models (LLMs) show promise in automating open-ended evaluation tasks, yet their reliability in rubric-based assessment remains uncertain. Variability in scoring, feedback, and rubric adherence raises concerns about transparency and pedagogical validity in educational contexts. This study introduces CourseEvalAI, a framework designed to enhance consistency and fidelity in rubric-guided evaluation by fine-tuning a general-purpose LLM with authentic university-level instructional content. Methods: The framework employs supervised fine-tuning with Low-Rank Adaptation (LoRA) on rubric-annotated answers and explanations drawn from undergraduate computer science exams. Responses generated by both the base and fine-tuned models were independently evaluated by two human raters and two LLM judges, applying dual-layer rubrics for answers (technical or argumentative) and explanations. Inter-rater reliability was reported as the intraclass correlation coefficient (ICC(2,1)), Krippendorff’s α, and quadratic-weighted Cohen’s κ (QWK), and statistical analyses included Welch’s t tests with Holm–Bonferroni correction, Hedges’ g with bootstrap confidence intervals, and Levene’s tests. All responses, scores, feedback, and metadata were stored in a Neo4j graph database for structured exploration. Results: The fine-tuned model consistently outperformed the base version across all rubric dimensions, achieving higher scores for both answers and explanations. After multiple-testing correction, only the Technical Answer contrast judged by GPT-4 (Generative Pre-trained Transformer) remains statistically significant; other contrasts show positive trends without passing the adjusted threshold, and no additional significance is claimed for explanation-level results. Variance in scoring decreased, inter-model agreement increased, and evaluator feedback for fine-tuned outputs contained fewer vague or critical remarks, indicating stronger rubric alignment and greater pedagogical coherence. Inter-rater reliability analyses indicated moderate human–human agreement and weaker alignment of LLM judges to the human mean. Originality: CourseEvalAI integrates rubric-guided fine-tuning, dual-layer evaluation, and graph-based storage into a unified framework. This combination provides a replicable and interpretable methodology that enhances the consistency, transparency, and pedagogical value of LLM-based evaluators in higher education and beyond. Full article
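Of the agreement statistics listed, quadratic-weighted Cohen's κ is straightforward to compute from scratch; a minimal sketch for two raters scoring on an ordinal scale 1..k (the toy ratings below are invented, not the study's data):

```python
def quadratic_weighted_kappa(a, b, k):
    """Quadratic-weighted Cohen's kappa for two raters scoring on 1..k.
    Disagreements are penalised by the squared distance between scores."""
    n = len(a)
    obs = [[0.0] * k for _ in range(k)]          # observed joint distribution
    for x, y in zip(a, b):
        obs[x - 1][y - 1] += 1 / n
    pa = [sum(1 for x in a if x == c) / n for c in range(1, k + 1)]
    pb = [sum(1 for y in b if y == c) / n for c in range(1, k + 1)]
    w = lambda i, j: (i - j) ** 2 / (k - 1) ** 2  # quadratic disagreement weight
    num = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1 - num / den

print(quadratic_weighted_kappa([1, 2, 3, 4], [1, 2, 3, 4], k=4))  # 1.0
```

Perfect agreement yields κ = 1, chance-level agreement yields 0, and systematic opposite scoring can reach -1, which is why QWK is preferred over raw percent agreement for ordinal rubric scores.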

25 pages, 4460 KB  
Systematic Review
Rethinking Blockchain Governance with AI: The VOPPA Framework
by Catalin Daniel Morar, Daniela Elena Popescu, Ovidiu Constantin Novac and David Ghiurău
Computers 2025, 14(10), 425; https://doi.org/10.3390/computers14100425 - 4 Oct 2025
Cited by 2 | Viewed by 3245
Abstract
Blockchain governance has become central to the performance and resilience of decentralized systems, yet current models face recurring issues of participation, coordination, and adaptability. This article offers a structured analysis of governance frameworks and highlights their limitations through recent high-impact case studies. It then examines how artificial intelligence (AI) is being integrated into governance processes, ranging from proposal summarization and anomaly detection to autonomous agent-based voting. In response to existing gaps, this paper proposes the Voting Via Parallel Predictive Agents (VOPPA) framework, a multi-agent architecture aimed at enabling predictive, diverse, and decentralized decision-making. Strengthening blockchain governance will require not just decentralization but also intelligent, adaptable, and accountable decision-making systems. Full article
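As a toy illustration of the multi-agent voting idea (the agent heuristics, proposal fields, and aggregation below are our own stand-ins, not the VOPPA specification):

```python
from collections import Counter

def parallel_agent_vote(agents, proposal):
    """Sketch of predictive-agent governance: several independent agents
    each assess a proposal, and the final decision aggregates their
    verdicts by simple majority."""
    verdicts = [agent(proposal) for agent in agents]   # agents run independently
    decision, votes = Counter(verdicts).most_common(1)[0]
    return decision, votes / len(agents)               # decision + support share

# Hypothetical agents, each applying a different predictive heuristic.
agents = [
    lambda p: "approve" if p["expected_yield"] > 0 else "reject",
    lambda p: "approve" if p["risk"] < 0.5 else "reject",
    lambda p: "approve" if p["quorum"] >= 0.4 else "reject",
]
print(parallel_agent_vote(agents, {"expected_yield": 0.02, "risk": 0.3, "quorum": 0.5}))
# ('approve', 1.0)
```

The diversity of heuristics is the point: a single compromised or biased predictor cannot dictate the outcome, mirroring the framework's goal of decentralized decision-making.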

46 pages, 3210 KB  
Article
Evaluating the Usability and Ethical Implications of Graphical User Interfaces in Generative AI Systems
by Amna Batool and Waqar Hussain
Computers 2025, 14(10), 418; https://doi.org/10.3390/computers14100418 - 2 Oct 2025
Cited by 1 | Viewed by 2650
Abstract
The rapid development of generative artificial intelligence (GenAI) has revolutionized how individuals and organizations interact with technology. These systems, ranging from conversational agents to creative tools, are increasingly embedded in daily life. However, their effectiveness relies heavily on the usability of their graphical user interfaces (GUIs), which serve as the primary medium for user interaction. Moreover, the design of these interfaces must align with ethical principles such as transparency, fairness, and user autonomy to ensure responsible usage. This study evaluates the usability of the GUIs of three widely used GenAI applications: ChatGPT (GPT-4), Gemini (1.5), and Claude (3.5 Sonnet), using a heuristics-based and user-based testing approach (an experimental-qualitative investigation). A total of 12 participants from a research organization in Australia took part in structured usability evaluations, applying 14 usability heuristics to identify key issues and ethical concerns. The results indicate that Claude’s GUI is the most usable of the three, particularly due to its clean and minimalistic design. However, all applications demonstrated specific usability issues, such as insufficient error prevention, lack of shortcuts, and limited customization options, affecting the efficiency and effectiveness of user interactions. Despite these challenges, each application exhibited unique strengths, suggesting that, while functional, these interfaces need significant enhancements to fully support user satisfaction and ethical usage. The insights of this study can guide organizations in designing GenAI systems that are not only user-friendly but also ethically sound. Full article

27 pages, 2517 KB  
Article
A Guided Self-Study Platform of Integrating Documentation, Code, Visual Output, and Exercise for Flutter Cross-Platform Mobile Programming
by Safira Adine Kinari, Nobuo Funabiki, Soe Thandar Aung and Htoo Htoo Sandi Kyaw
Computers 2025, 14(10), 417; https://doi.org/10.3390/computers14100417 - 1 Oct 2025
Cited by 1 | Viewed by 1450
Abstract
Nowadays, Flutter with the Dart programming language has become widely popular in mobile development, allowing developers to build multi-platform applications from one codebase. An increasing number of companies are adopting these technologies to create scalable and maintainable mobile applications. Despite this increasing relevance, university curricula often lack structured resources for Flutter/Dart, limiting opportunities for students to learn it in academic environments. To address this gap, we previously developed the Flutter Programming Learning Assistance System (FPLAS), which supports self-learning through interactive problems focused on code comprehension via code-based exercises and visual interfaces. However, it was observed that many students who already had some knowledge of object-oriented programming (OOP) completed the exercises without fully understanding even basic concepts. As a result, they may not be able to design and implement Flutter/Dart code independently, highlighting a mismatch between the system’s outcomes and its intended learning goals. In this paper, we propose a guided self-study approach that integrates documentation, code, visual output, and exercises in FPLAS. Two existing problem types, namely Grammar Understanding Problems (GUP) and Element Fill-in-Blank Problems (EFP), are combined with documentation, code, and output into a new format called Integrated Introductory Problems (INTs). For evaluation, we generated 16 INT instances and conducted two rounds of evaluations. The first round, with 23 master’s students at Okayama University, Japan, showed high correct answer rates but low usability ratings. After revising the documentation and the system design, the second round, with 25 fourth-year undergraduate students at the same university, demonstrated high usability and consistent performance, which confirms the effectiveness of the proposal. Full article

47 pages, 3137 KB  
Article
DietQA: A Comprehensive Framework for Personalized Multi-Diet Recipe Retrieval Using Knowledge Graphs, Retrieval-Augmented Generation, and Large Language Models
by Ioannis Tsampos and Emmanouil Marakakis
Computers 2025, 14(10), 412; https://doi.org/10.3390/computers14100412 - 29 Sep 2025
Cited by 4 | Viewed by 3607
Abstract
Recipes available on the web often lack nutritional transparency and clear indicators of dietary suitability. While searching by title is straightforward, exploring recipes that meet combined dietary needs, nutritional goals, and ingredient-level preferences remains challenging. Most existing recipe search systems do not effectively support flexible multi-dietary reasoning in combination with user preferences and restrictions. For example, users may seek gluten-free and dairy-free dinners with suitable substitutions, or compound goals such as vegan and low-fat desserts. Recent systematic reviews report that most food recommender systems are content-based and often non-personalized, with limited support for dietary restrictions, ingredient-level exclusions, and multi-criteria nutrition goals. This paper introduces DietQA, an end-to-end, language-adaptable chatbot system that integrates a Knowledge Graph (KG), Retrieval-Augmented Generation (RAG), and a Large Language Model (LLM) to support personalized, dietary-aware recipe search and question answering. DietQA crawls Greek-language recipe websites to extract structured information such as titles, ingredients, and quantities. Nutritional values are calculated using validated food composition databases, and dietary tags are inferred automatically based on ingredient composition. All information is stored in a Neo4j-based knowledge graph, enabling flexible querying via Cypher. Users interact with the system through a user-friendly natural-language chatbot interface, where they can express preferences for ingredients, nutrients, dishes, and diets, and filter recipes based on multiple factors such as ingredient availability, exclusions, and nutritional goals. DietQA supports multi-diet recipe search by retrieving both compliant recipes and those adaptable via ingredient substitutions, explaining how each result aligns with user preferences and constraints. An LLM extracts intents and entities from user queries to support rule-based Cypher retrieval, while the RAG pipeline generates contextualized responses using the user query and preferences, retrieved recipes, statistical summaries, and substitution logic. The system integrates real-time updates of recipe and nutritional data, supporting up-to-date, relevant, and personalized recommendations. It is designed for language-adaptable deployment and has been developed and evaluated using Greek-language content. DietQA provides a scalable framework for transparent and adaptive dietary recommendation systems powered by conversational AI. Full article
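The automatic dietary tagging step can be sketched as a simple exclusion-rule check (the diet rules and ingredient lists below are illustrative stand-ins; DietQA's actual rules and Greek-language food data are not reproduced here):

```python
# Ingredients that disqualify a recipe from each diet tag. These lists are
# hypothetical examples, not DietQA's food composition data.
EXCLUDED = {
    "vegan":       {"beef", "chicken", "egg", "milk", "butter", "honey"},
    "gluten-free": {"wheat flour", "barley", "rye"},
    "dairy-free":  {"milk", "butter", "cheese", "yogurt"},
}

def infer_diet_tags(ingredients):
    """Tag a recipe with every diet whose excluded ingredients are absent."""
    used = {i.lower() for i in ingredients}
    return {diet for diet, banned in EXCLUDED.items() if not used & banned}

print(sorted(infer_diet_tags(["rice", "chicken", "olive oil"])))
# ['dairy-free', 'gluten-free']
```

In the full system, tags like these become node properties in the knowledge graph, so a compound query ("vegan and low-fat desserts") reduces to filtering on tags plus nutrient thresholds.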

27 pages, 2519 KB  
Article
Examining the Influence of AI on Python Programming Education: An Empirical Study and Analysis of Student Acceptance Through TAM3
by Manal Alanazi, Alice Li, Halima Samra and Ben Soh
Computers 2025, 14(10), 411; https://doi.org/10.3390/computers14100411 - 26 Sep 2025
Cited by 1 | Viewed by 2815
Abstract
This study investigates the adoption of PyChatAI, a bilingual AI-powered chatbot for Python programming education, among female computer science students at Jouf University. Guided by the Technology Acceptance Model 3 (TAM3), it examines the determinants of user acceptance and usage behaviour. A Solomon Four-Group experimental design (N = 300) was used to control pre-test effects and isolate the impact of the intervention. PyChatAI provides interactive problem-solving, code explanations, and topic-based tutorials in English and Arabic. Measurement and structural models were validated via Confirmatory Factor Analysis (CFA) and Structural Equation Modelling (SEM), achieving excellent fit (CFI = 0.980, RMSEA = 0.039). Results show that perceived usefulness (β = 0.446, p < 0.001) and perceived ease of use (β = 0.243, p = 0.005) significantly influence intention to use, which in turn predicts actual usage (β = 0.406, p < 0.001). Trust, facilitating conditions, and hedonic motivation emerged as strong antecedents of ease of use, while social influence and cognitive factors had limited impact. These findings demonstrate that AI-driven bilingual tools can effectively enhance programming engagement in gender-specific, culturally sensitive contexts, offering practical guidance for integrating intelligent tutoring systems into computer science curricula. Full article

32 pages, 1432 KB  
Review
A Review of Multi-Microgrids Operation and Control from a Cyber-Physical Systems Perspective
by Ola Ali and Osama A. Mohammed
Computers 2025, 14(10), 409; https://doi.org/10.3390/computers14100409 - 25 Sep 2025
Cited by 6 | Viewed by 1582
Abstract
Developing multi-microgrid (MMG) systems provides a new paradigm for power distribution systems with a higher degree of resilience, flexibility, and sustainability. The inclusion of communication networks as part of an MMG is critical for coordinating distributed energy resources (DERs) in real time and deploying energy management systems (EMS) efficiently. However, communication quality of service (QoS) parameters such as latency, jitter, packet loss, and throughput play an essential role in MMG control and stability, especially in highly dynamic and high-traffic situations. This paper presents a focused review of MMG systems from a cyber-physical viewpoint, particularly concerning the challenges and implications of communication network performance for energy management. The reviewed literature on MMG systems covers control strategies, models of communication infrastructure, cybersecurity challenges, and co-simulation platforms. We have identified research gaps, including, but not limited to, the need for scalable, real-time cyber-physical systems; the limited research examining communication QoS under realistic conditions and traffic; and the lack of integrated cybersecurity strategies for MMGs. We suggest future research opportunities that address these gaps to enhance the resilience, adaptability, and sustainability of modern cyber-physical MMGs. Full article
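The QoS parameters named above can be computed from per-packet timestamps with a short sketch (the timestamps are invented, and jitter is simplified to the mean absolute difference of successive one-way delays, in the spirit of RFC 3550 rather than its exact smoothed estimator):

```python
def qos_metrics(sent, received):
    """Per-flow QoS from send/receive timestamps in seconds; received[i]
    is None for a dropped packet. Returns (mean latency, jitter, loss rate)."""
    delays = [r - s for s, r in zip(sent, received) if r is not None]
    loss = 1 - len(delays) / len(sent)
    latency = sum(delays) / len(delays)
    jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / max(len(delays) - 1, 1)
    return latency, jitter, loss

sent     = [0.00, 0.10, 0.20, 0.30]
received = [0.01, 0.13, None, 0.32]   # third packet lost
lat, jit, loss = qos_metrics(sent, received)
print(round(lat, 3), round(jit, 3), loss)  # 0.02 0.015 0.25
```

In an MMG co-simulation, metrics like these are what couple the network model back to the control layer: an EMS set-point that arrives after the latency budget is effectively a lost command.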

20 pages, 2911 KB  
Article
Topological Machine Learning for Financial Crisis Detection: Early Warning Signals from Persistent Homology
by Ecaterina Guritanu, Enrico Barbierato and Alice Gatti
Computers 2025, 14(10), 408; https://doi.org/10.3390/computers14100408 - 24 Sep 2025
Cited by 2 | Viewed by 3202
Abstract
We propose a strictly causal early-warning framework for financial crises based on topological signal extraction from multivariate return streams. Sliding windows of daily log-returns are mapped to point clouds, from which Vietoris–Rips persistence diagrams are computed and summarised by persistence landscapes. A single, interpretable indicator is obtained as the L2 norm of the landscape and passed through a causal decision rule (with thresholds α, β and run-length parameters s, t) that suppresses isolated spikes and collapses bursts to time-stamped warnings. On four major U.S. equity indices (S&P 500, NASDAQ, DJIA, Russell 2000) over 1999–2021, the method, at a fixed strictly causal operating point (α = β = 3.1, s = 57, t = 16), attains a balanced precision–recall (F1 ≈ 0.50) with an average lead time of about 34 days. It anticipates two of the four canonical crises and issues a contemporaneous signal for the 2008 global financial crisis. Sensitivity analyses confirm the qualitative robustness of the detector, while comparisons with permissive spike rules and volatility-based baselines demonstrate substantially fewer false alarms at comparable recall. The approach delivers interpretable topology-based warnings and provides a reproducible route to combining persistent homology with causal event detection in financial time series. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
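The causal decision rule is only summarised in the abstract, so the following stdlib Python sketch is an illustrative interpretation of it (consecutive exceedances of a trigger threshold fire a warning; a sustained quiet spell re-arms the detector), not the authors' implementation:

```python
def causal_warnings(indicator, alpha, beta, s, t):
    """Convert an indicator series into time-stamped warnings.

    Illustrative interpretation: a warning fires the first time the
    indicator has exceeded `alpha` for `s` consecutive steps (suppressing
    isolated spikes); the detector re-arms only after the indicator has
    stayed below `beta` for `t` consecutive steps (collapsing a burst
    into a single warning).
    """
    warnings = []
    above, below, armed = 0, 0, True
    for i, x in enumerate(indicator):
        above = above + 1 if x > alpha else 0
        below = below + 1 if x < beta else 0
        if armed and above >= s:
            warnings.append(i)   # time-stamped warning
            armed = False        # suppress the rest of the burst
        elif not armed and below >= t:
            armed = True         # quiet spell over: re-arm
    return warnings
```

A spike shorter than `s` produces no warning, while a sustained elevation produces exactly one, matching the "suppresses isolated spikes and collapses bursts" behaviour described above.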
22 pages, 2003 KB  
Article
Beyond Opacity: Distributed Ledger Technology as a Catalyst for Carbon Credit Market Integrity
by Stanton Heister, Felix Kin Peng Hui, David Ian Wilson and Yaakov Anker
Computers 2025, 14(9), 403; https://doi.org/10.3390/computers14090403 - 22 Sep 2025
Cited by 1 | Viewed by 2076
Abstract
The 2015 Paris Agreement paved the way for the carbon trade economy, which has since evolved but has not attained a substantial magnitude. While carbon credit exchange is a critical mechanism for achieving global climate targets, it faces persistent challenges related to transparency, double-counting, and verification. This paper examines how Distributed Ledger Technology (DLT) can address these limitations by providing immutable transaction records, automated verification through digitally encoded smart contracts, and increased market efficiency. To assess DLT’s strategic potential for strengthening carbon markets and, more specifically, whether its implementation can reduce transaction costs and enhance market integrity, three alternative approaches that apply DLT to carbon trading were taken as case studies. A comparison of key elements in these DLT-based carbon credit platforms suggests that the proposed frameworks could be developed into a scalable global platform. The integration of existing compliance markets in the EU (case study 1), Australia (case study 2), and China (case study 3) can serve as a standard for establishing global carbon trade. The findings from these case studies suggest that while DLT offers a promising path toward more sustainable carbon markets, regulatory harmonization, standardization, and data transfer across platforms remain significant challenges. Full article
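The double-counting problem the paper highlights can be illustrated with a toy append-only registry (a sketch of the ledger idea only, not any of the case-study platforms):

```python
class CarbonLedger:
    """Toy append-only registry illustrating how a shared ledger can
    prevent double-counting: each credit ID can be issued once and
    retired once, and the transaction log is never rewritten."""

    def __init__(self):
        self.log = []    # append-only transaction history
        self.state = {}  # credit_id -> "issued" | "retired"

    def issue(self, credit_id):
        if credit_id in self.state:
            return False  # duplicate issuance rejected
        self.state[credit_id] = "issued"
        self.log.append(("issue", credit_id))
        return True

    def retire(self, credit_id):
        if self.state.get(credit_id) != "issued":
            return False  # unknown or already-retired credit
        self.state[credit_id] = "retired"
        self.log.append(("retire", credit_id))
        return True
```

A real DLT deployment adds consensus and immutability guarantees; the point here is only that a single shared record makes the second issuance or retirement of the same credit detectable.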
40 pages, 3285 KB  
Article
SemaTopic: A Framework for Semantic-Adaptive Probabilistic Topic Modeling
by Amani Drissi, Salma Sassi, Richard Chbeir, Anis Tissaoui and Abderrazek Jemai
Computers 2025, 14(9), 400; https://doi.org/10.3390/computers14090400 - 19 Sep 2025
Cited by 1 | Viewed by 1515
Abstract
Topic modeling is a crucial technique in Natural Language Processing (NLP) that automatically uncovers coherent topics from large-scale text corpora. Yet classic methods tend to suffer from poor semantic depth and topic coherence. In this regard, we present a new approach, “SemaTopic”, to improve the quality and interpretability of discovered topics. By exploiting semantic understanding and stronger clustering dynamics, our approach yields a more continuous, finer-grained, and more stable representation of the topics. Experimental results demonstrate that SemaTopic achieves a relative gain of +6.2% in semantic coherence compared to BERTopic on the 20 Newsgroups dataset (Cv = 0.5315 vs. 0.5004), while maintaining stable performance across heterogeneous and multilingual corpora. These findings highlight “SemaTopic” as a scalable and reliable solution for practical text mining and knowledge discovery. Full article
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)
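The reported relative gain can be checked directly from the two coherence scores quoted in the abstract:

```python
# Cv coherence scores quoted in the abstract.
cv_sematopic, cv_bertopic = 0.5315, 0.5004

# Relative gain over the BERTopic baseline.
relative_gain = (cv_sematopic - cv_bertopic) / cv_bertopic
# (0.5315 - 0.5004) / 0.5004 ≈ 0.0622, i.e. about +6.2%
print(round(100 * relative_gain, 1))
```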
29 pages, 4648 KB  
Article
Optimizing Teacher Portfolio Integrity with a Cost-Effective Smart Contract for School-Issued Teacher Documents
by Diana Laura Silaghi, Andrada Cristina Artenie and Daniela Elena Popescu
Computers 2025, 14(9), 395; https://doi.org/10.3390/computers14090395 - 17 Sep 2025
Viewed by 1244
Abstract
Diplomas and academic transcripts issued at the conclusion of a university cycle have been the subject of numerous studies focused on developing secure methods for their registration and access. However, in the context of high school teachers, these initial credentials mark only the starting point of a much more complex professional journey. Throughout their careers, teachers receive a wide array of certificates and attestations related to professional development, participation in educational projects, volunteering, and institutional contributions. Many of these documents are issued directly by the school administration and are often vulnerable to misplacement, unauthorized alterations, or limited portability. These challenges are amplified when teachers move between schools or are involved in teaching across multiple institutions. In response to this need, this paper proposes a blockchain-based solution built on the Ethereum platform, which ensures the integrity, traceability, and long-term accessibility of such records, preserving the professional achievements of teachers across their careers. Although most research has focused on securing highly valuable documents on blockchain, such as diplomas, certificates, and micro-credentials, this study highlights the importance of extending blockchain solutions to school-issued attestations, as they carry significant weight in teacher evaluation and the development of professional portfolios. Full article
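The integrity mechanism common to such blockchain document registries can be sketched in a few lines: anchor only a document's hash on-chain and later verify a presented file against it. This is a minimal stdlib sketch of the idea; the paper's actual Ethereum contract is not reproduced here, and the dict stands in for on-chain storage:

```python
import hashlib

def fingerprint(document: bytes) -> str:
    """SHA-256 digest that would be anchored on-chain in place of the file."""
    return hashlib.sha256(document).hexdigest()

# Hypothetical registry reduced to a dict: teacher -> list of digests.
registry = {}

def register(teacher: str, document: bytes) -> None:
    registry.setdefault(teacher, []).append(fingerprint(document))

def verify(teacher: str, document: bytes) -> bool:
    """True iff this exact file was registered for this teacher;
    any alteration changes the digest and fails verification."""
    return fingerprint(document) in registry.get(teacher, [])
```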
19 pages, 912 KB  
Article
Lightweight Embedded IoT Gateway for Smart Homes Based on an ESP32 Microcontroller
by Filippos Serepas, Ioannis Papias, Konstantinos Christakis, Nikos Dimitropoulos and Vangelis Marinakis
Computers 2025, 14(9), 391; https://doi.org/10.3390/computers14090391 - 16 Sep 2025
Cited by 4 | Viewed by 4996
Abstract
The rapid expansion of the Internet of Things (IoT) demands scalable, efficient, and user-friendly gateway solutions that seamlessly connect resource-constrained edge devices to cloud services. Low-cost, widely available microcontrollers, such as the ESP32 and its ecosystem peers, offer integrated Wi-Fi/Bluetooth connectivity, low power consumption, and a mature developer toolchain at a bill of materials cost of only a few dollars. For smart-home deployments where budgets, energy consumption, and maintainability are critical, these characteristics make MCU-class gateways a pragmatic alternative to single-board computers, enabling always-on local control with minimal overhead. This paper presents the design and implementation of an embedded IoT gateway powered by the ESP32 microcontroller. By using lightweight communication protocols such as Message Queuing Telemetry Transport (MQTT) and REST APIs, the proposed architecture supports local control, distributed intelligence, and secure on-site data storage, all while minimizing dependence on cloud infrastructure. A real-world deployment in an educational building demonstrates the gateway’s capability to monitor energy consumption, execute control commands, and provide an intuitive web-based dashboard with minimal resource overhead. Experimental results confirm that the solution offers strong performance, with RAM usage ranging between 3.6% and 6.8% of available memory (approximately 8.92 KB to 16.9 KB). The initial loading of the single-page application (SPA) results in a temporary RAM spike to 52.4%, which later stabilizes at 50.8%. These findings highlight the ESP32’s ability to serve as a functional IoT gateway with minimal resource demands. Areas for future optimization include improved device discovery mechanisms and enhanced resource management to prolong device longevity. Overall, the gateway represents a cost-effective and vendor-agnostic platform for building resilient and scalable IoT ecosystems. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
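As a consistency check, converting the reported RAM percentages back to an implied total points at roughly the same figure from both endpoints (about 248 KB, plausibly the usable heap; the total itself is an inference from the quoted numbers, not a figure stated in the paper):

```python
# Reported operating range: 3.6%–6.8% of RAM ≈ 8.92 KB – 16.9 KB.
implied_total_low = 8.92 / 0.036   # KB implied by the lower endpoint
implied_total_high = 16.9 / 0.068  # KB implied by the upper endpoint
# Both come out near ~248 KB, so the two reported pairs are consistent.
print(round(implied_total_low, 1), round(implied_total_high, 1))
```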
21 pages, 1535 KB  
Article
Integrative Federated Learning Framework for Multimodal Parkinson’s Disease Biomarker Fusion
by Ruchira Pratihar and Ravi Sankar
Computers 2025, 14(9), 388; https://doi.org/10.3390/computers14090388 - 15 Sep 2025
Viewed by 1914
Abstract
Accurate and early diagnosis of Parkinson’s Disease (PD) is challenged by the diverse manifestations of motor and non-motor symptoms across different patients. Existing studies largely rely on limited datasets and biomarkers. In this extended research, we propose a comprehensive Federated Learning (FL) framework designed to integrate heterogeneous biomarkers through multimodal combinations—such as EEG–fMRI pairs, continuous speech with vowel pronunciation, and the fusion of EEG, gait, and accelerometry data—drawn from diverse sources and modalities. By processing data separately at client nodes and performing feature and decision fusion at a central server, our method preserves privacy and enables robust PD classification. Experimental results show accuracies exceeding 85% across multiple fusion techniques, with attention-based fusion reaching 97.8% for Freezing of Gait (FoG) detection. Our framework advances scalable, privacy-preserving, multimodal diagnostics for PD. Full article
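Attention-based decision fusion of the kind reported can be sketched with softmax weights over per-modality scores. This is a stdlib illustration with hypothetical inputs, not the paper's learned model, where the attention weights would themselves be trained:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fusion(modality_scores, modality_logits):
    """Server-side decision fusion: weight each client's class score
    (e.g. from EEG, gait, accelerometry models) by a softmax attention
    weight derived from a per-modality relevance logit."""
    weights = softmax(modality_logits)
    return sum(w * s for w, s in zip(weights, modality_scores))
```

With equal logits this reduces to averaging; unequal logits let the server emphasise the more informative modality, which is the intuition behind the attention-based variant.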
32 pages, 1923 KB  
Article
Narrative-Driven Digital Gamification for Motivation and Presence: Preservice Teachers’ Experiences in a Science Education Course
by Gregorio Jiménez-Valverde, Noëlle Fabre-Mitjans and Gerard Guimerà-Ballesta
Computers 2025, 14(9), 384; https://doi.org/10.3390/computers14090384 - 14 Sep 2025
Cited by 4 | Viewed by 4807
Abstract
This mixed-methods study investigated how a personalized, narrative-integrated digital gamification framework (with FantasyClass) was associated with motivation and presence among preservice elementary teachers in a science education course. The intervention combined HEXAD-informed personalization (aligning game elements with player types) with a branching storyworld, teacher-directed AI-generated narrative emails, and multimodal cues (visuals, music, scent) to scaffold presence alongside autonomy, competence, and relatedness. Thirty-four students participated in a one-group posttest design, completing an adapted 21-item PENS questionnaire and responding to two open-ended prompts. Results, which are exploratory and not intended for broad generalization or causal inference, indicated high self-reported competence and autonomy, positive but more variable relatedness, and strong presence/immersion. Subscale correlations showed that Competence covaried with Autonomy and Relatedness, while Presence/Immersion was positively associated with all other subscales, suggesting that presence may act as a motivational conduit. Thematic analysis portrayed students as active decision-makers within the narrative, linking consequential choices, visible progress, and team-based goals to agency, effectiveness, and social connection. Additional themes included coherence and organization, fun and enjoyment, novelty, extrinsic incentives, and perceived professional transferability. Overall, findings suggest that narrative presence, when coupled with player-aligned game elements, can foster engagement and motivation in STEM-oriented teacher education. Full article
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)
18 pages, 4208 KB  
Article
Transformer Models for Paraphrase Detection: A Comprehensive Semantic Similarity Study
by Dianeliz Ortiz Martes, Evan Gunderson, Caitlin Neuman and Nezamoddin N. Kachouie
Computers 2025, 14(9), 385; https://doi.org/10.3390/computers14090385 - 14 Sep 2025
Cited by 3 | Viewed by 2589
Abstract
Semantic similarity, the task of determining whether two sentences convey the same meaning, is central to applications such as paraphrase detection, semantic search, and question answering. Despite the widespread adoption of transformer-based models for this task, their performance is influenced by both the choice of similarity measure and the underlying embedding model. This study evaluates BERT (bert-base-nli-mean-tokens), RoBERTa (all-roberta-large-v1), and MPNet (all-mpnet-base-v2) on the Microsoft Research Paraphrase Corpus (MRPC). Sentence embeddings were compared using cosine similarity, dot product, Manhattan distance, and Euclidean distance, with thresholds optimized for accuracy, balanced accuracy, and F1-score. Results indicate a consistent advantage for MPNet, which achieved the highest accuracy (75.6%), balanced accuracy (71.0%), and F1-score (0.836) when paired with cosine similarity at an optimized threshold of 0.671. BERT and RoBERTa performed competitively but exhibited greater sensitivity to the choice of similarity metric, with BERT notably underperforming when using cosine similarity compared to Manhattan or Euclidean distance. Optimal thresholds varied widely (0.334–0.867), underscoring the difficulty of establishing a single, generalizable cut-off for paraphrase classification. These findings highlight the value of tuning both similarity metrics and thresholds alongside model selection, offering practical guidance for designing high-accuracy semantic similarity systems in real-world NLP applications. Full article
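The winning configuration (cosine similarity at threshold 0.671) amounts to a one-line decision rule over sentence embeddings. A minimal sketch with placeholder vectors; real use would pass the sentence-transformer embeddings named in the abstract:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def is_paraphrase(emb1, emb2, threshold=0.671):
    """Binary paraphrase decision at the MPNet-optimal cosine threshold
    reported in the abstract."""
    return cosine(emb1, emb2) >= threshold
```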
15 pages, 5716 KB  
Article
Supersampling in Render CPOY: Total Annihilation
by Grigorie Dennis Sergiu and Stanciu Ion Rares
Computers 2025, 14(9), 383; https://doi.org/10.3390/computers14090383 - 12 Sep 2025
Cited by 2 | Viewed by 802
Abstract
This paper tackles a significant problem in gaming graphics: balancing visual fidelity with performance in real time. The article introduces CPOY SR (Continuous Procedural Output Yielder for Scaling Resolution), a dynamic resolution scaling algorithm designed to enhance both performance and visual quality in real-time gaming. Unlike traditional supersampling and anti-aliasing techniques, which suffer from fixed settings and hardware limitations, CPOY SR adapts resolution during gameplay based on system resources and user activity. The method is implemented and tested in an actual game project rather than proposed only in theory. One strong feature is that it works across diverse systems, from low-end laptops to high-end machines. The algorithm uses mathematical constraints such as Mathf.Clamp to ensure numerical robustness during scaling and avoids manual reconfiguration. Testing was carried out across multiple hardware configurations and resolutions (up to 8K); the approach demonstrated consistent visual fidelity with optimized performance. The research integrates visual rendering, resolution scaling, and anti-aliasing techniques, offering a scalable solution for immersive gameplay. This article outlines the key components and development phases behind this engaging and visually impressive gaming experience. Full article
(This article belongs to the Section Human–Computer Interactions)
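The clamping idiom the abstract mentions (Unity's Mathf.Clamp) translates directly; the frame-time controller around it below is an assumed illustration of how a dynamic-resolution step might use it, not the published algorithm:

```python
def clamp(x, lo, hi):
    """Python analogue of Unity's Mathf.Clamp."""
    return max(lo, min(x, hi))

def next_scale(scale, frame_ms, target_ms=16.7, step=0.05,
               min_scale=0.5, max_scale=2.0):
    """Illustrative dynamic-resolution step (hypothetical parameters):
    lower the render scale when a frame runs long, raise it when there is
    headroom, and clamp so the scale stays numerically well-behaved."""
    if frame_ms > target_ms:
        scale -= step  # frame over budget: render fewer pixels
    else:
        scale += step  # headroom available: sharpen the image
    return clamp(scale, min_scale, max_scale)
```

The clamp is what keeps the scale from drifting out of range on pathological frame times, which is the "numerical robustness" role described in the abstract.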
21 pages, 3806 KB  
Article
GraphTrace: A Modular Retrieval Framework Combining Knowledge Graphs and Large Language Models for Multi-Hop Question Answering
by Anna Osipjan, Hanieh Khorashadizadeh, Akasha-Leonie Kessel, Sven Groppe and Jinghua Groppe
Computers 2025, 14(9), 382; https://doi.org/10.3390/computers14090382 - 11 Sep 2025
Cited by 1 | Viewed by 1870
Abstract
This paper introduces GraphTrace, a novel retrieval framework that integrates a domain-specific knowledge graph (KG) with a large language model (LLM) to improve information retrieval for complex, multi-hop queries. Built on structured economic data related to the COVID-19 pandemic, GraphTrace adopts a modular architecture comprising entity extraction, path finding, query decomposition, semantic path ranking, and context aggregation, followed by LLM-based answer generation. GraphTrace is compared against baseline retrieval-augmented generation (RAG) and graph-based RAG (GraphRAG) approaches in both retrieval and generation settings. Experimental results show that GraphTrace consistently outperforms the baselines across evaluation metrics, particularly in handling mid-complexity (5–6-hop) queries and achieving top scores in directness during the generation evaluation. These gains are attributed to GraphTrace’s alignment of semantic reasoning with structured KG traversal, combining modular components for more targeted and interpretable retrieval. Full article
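The path-finding module can be approximated by breadth-first search over a knowledge-graph edge list. A toy sketch with a hypothetical COVID-19 economic graph (the entities and edges are illustrative, not GraphTrace's actual KG):

```python
from collections import deque

def find_path(graph, start, goal):
    """Shortest entity path in a directed KG adjacency map; a toy
    stand-in for a multi-hop path-finding component."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the query cannot be grounded in the KG

# Hypothetical domain graph.
kg = {
    "COVID-19": ["lockdowns"],
    "lockdowns": ["supply chains"],
    "supply chains": ["inflation"],
}
```

A multi-hop query such as "how did COVID-19 affect inflation?" would be answered by traversing this chain and handing the path's context to the LLM for answer generation.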
21 pages, 330 KB  
Article
Towards Navigating Ethical Challenges in AI-Driven Healthcare Ad Moderation
by Abraham Abby Sen, Jeen Mariam Joy and Murray E. Jennex
Computers 2025, 14(9), 380; https://doi.org/10.3390/computers14090380 - 11 Sep 2025
Cited by 2 | Viewed by 2465
Abstract
The growing use of AI-driven content moderation on social media platforms has intensified ethical concerns, particularly in the context of healthcare advertising and misinformation. While artificial intelligence offers scale and efficiency, it lacks the moral judgment, contextual understanding, and interpretive flexibility required to navigate complex health-related discourse. This paper addresses these challenges by integrating normative ethical theory with organizational practice to evaluate the limitations of AI in moderating healthcare content. Drawing on deontological, utilitarian, and virtue ethics frameworks, the analysis explores the tensions between ethical ideals and real-world implementation. Building on this foundation, the paper proposes a set of normative guidelines that emphasize hybrid human–AI moderation, transparency, the redesign of success metrics, and the cultivation of ethical organizational cultures. To institutionalize these principles, we introduce a governance framework that includes internal accountability structures, external oversight mechanisms, and adaptive processes for handling ambiguity, disagreement, and evolving standards. By connecting ethical theory with actionable design strategies, this study provides a roadmap for responsible and context-sensitive AI moderation in the digital healthcare ecosystem. Full article
(This article belongs to the Section AI-Driven Innovations)
27 pages, 18541 KB  
Article
Integrating Design Thinking Approach and Simulation Tools in Smart Building Systems Education: A Case Study on Computer-Assisted Learning for Master’s Students
by Andrzej Ożadowicz
Computers 2025, 14(9), 379; https://doi.org/10.3390/computers14090379 - 9 Sep 2025
Viewed by 1664
Abstract
The rapid development of smart home and building technologies requires educational methods that facilitate the integration of theoretical knowledge with practical, system-level design skills. Computer-assisted tools play a crucial role in this process by enabling students to experiment with complex Internet of Things (IoT) and building automation ecosystems in a risk-free, iterative environment. This paper proposes a pedagogical framework that integrates simulation-based prototyping with collaborative and spatial design tools, supported by elements of design thinking and blended learning. The approach was implemented in a master’s-level Smart Building Systems course to engage students in interdisciplinary projects where virtual modeling, digital collaboration, and contextualized spatial design were combined to develop user-oriented smart space concepts. Analysis of project outcomes and student feedback indicated that the use of simulation and visualization platforms may enhance technical competencies, creativity, and engagement. The proposed framework contributes to engineering education by demonstrating how computer-assisted environments can effectively support practice-oriented, user-centered learning. Its modular and scalable structure makes it applicable across IoT- and automation-focused curricula, aligning academic training with the hybrid workflows of contemporary engineering practice. Concurrently, areas for enhancement and modification were identified to optimize support for group and creative student work. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
49 pages, 670 KB  
Review
Bridging Domains: Advances in Explainable, Automated, and Privacy-Preserving AI for Computer Science and Cybersecurity
by Youssef Harrath, Oswald Adohinzin, Jihene Kaabi and Morgan Saathoff
Computers 2025, 14(9), 374; https://doi.org/10.3390/computers14090374 - 8 Sep 2025
Cited by 4 | Viewed by 5344
Abstract
Artificial intelligence (AI) is rapidly redefining both computer science and cybersecurity by enabling more intelligent, scalable, and privacy-conscious systems. While most prior surveys treat these fields in isolation, this paper provides a unified review of 256 peer-reviewed publications to bridge that gap. We examine how emerging AI paradigms, such as explainable AI (XAI), AI-augmented software development, and federated learning, are shaping technological progress across both domains. In computer science, AI is increasingly embedded throughout the software development lifecycle to boost productivity, improve testing reliability, and automate decision making. In cybersecurity, AI drives advances in real-time threat detection and adaptive defense. Our synthesis highlights powerful cross-cutting findings, including shared challenges such as algorithmic bias, interpretability gaps, and high computational costs, as well as empirical evidence that AI-enabled defenses can reduce successful breaches by up to 30%. Explainability is identified as a cornerstone for trust and bias mitigation, while privacy-preserving techniques, including federated learning and local differential privacy, emerge as essential safeguards in decentralized environments such as the Internet of Things (IoT) and healthcare. Despite transformative progress, we emphasize persistent limitations in fairness, adversarial robustness, and the sustainability of large-scale model training. By integrating perspectives from two traditionally siloed disciplines, this review delivers a unified framework that not only maps current advances and limitations but also provides a foundation for building more resilient, ethical, and trustworthy AI systems. Full article
(This article belongs to the Section AI-Driven Innovations)
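Local differential privacy, one of the safeguards the review highlights for decentralized settings, is classically illustrated by randomized response (a textbook sketch, not drawn from the surveyed papers):

```python
import random

def randomized_response(truth: bool, rng: random.Random) -> bool:
    """Each client flips a coin: heads -> answer truthfully,
    tails -> answer with a second independent coin flip.
    No single response reveals the client's true bit with certainty."""
    if rng.random() < 0.5:
        return truth
    return rng.random() < 0.5

def estimate_rate(responses):
    """Debias the aggregate: observed 'yes' rate p_obs = 0.5*p_true + 0.25,
    so p_true = 2*p_obs - 0.5."""
    p_obs = sum(responses) / len(responses)
    return 2 * p_obs - 0.5
```

The server recovers an accurate population rate while each individual response is plausibly deniable, which is the property that makes such mechanisms attractive for IoT and healthcare data collection.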
30 pages, 10155 KB  
Article
Interoperable Semantic Systems in Public Administration: AI-Driven Data Mining from Law-Enforcement Reports
by Alexandros Z. Spyropoulos and Vassilis Tsiantos
Computers 2025, 14(9), 376; https://doi.org/10.3390/computers14090376 - 8 Sep 2025
Cited by 2 | Viewed by 2842
Abstract
The digitisation of law-enforcement archives is examined with the aim of moving from static analogue records to interoperable semantic information systems. A step-by-step framework for optimal digitisation is proposed, grounded in archival best practice and enriched with artificial-intelligence and semantic-web technologies. Emphasis is placed on semantic data representation, which renders information actionable, searchable, interlinked, and automatically processable. As a proof of concept, a large language model—OpenAI ChatGPT, version o3—was applied to a corpus of narrative police reports, extracting and classifying key entities (metadata, persons, addresses, vehicles, incidents, fingerprints, and inter-entity relationships). The output was converted to Resource Description Framework triples and ingested into a triplestore, demonstrating how unstructured text can be transformed into machine-readable, interoperable data with minimal human intervention. The approach’s challenges—technical complexity, data quality assurance, information-security requirements, and staff training—are analysed alongside the opportunities it affords, such as accelerated access to records, cross-agency interoperability, and advanced analytics for investigative and strategic decision-making. Combining systematic digitisation, AI-driven data extraction, and rigorous semantic modelling ultimately delivers a fully interoperable information environment for law-enforcement agencies, enhancing efficiency, transparency, and evidentiary integrity. Full article
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)
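The extraction-to-triples step can be sketched as flattening one structured record into subject–predicate–object tuples, the shape that would be serialised as RDF and loaded into a triplestore. The record contents and predicate names below are hypothetical illustrations, not the paper's actual schema:

```python
# Hypothetical structured output of the LLM extraction step for one report.
extracted = {
    "report_id": "R-0001",
    "person": "J. Doe",
    "vehicle": "AB-1234",
    "incident": "burglary",
}

def to_triples(record):
    """Flatten one extracted record into (subject, predicate, object)
    triples, ready for serialisation as RDF."""
    subject = f"report:{record['report_id']}"
    return [
        (subject, "mentionsPerson", record["person"]),
        (subject, "mentionsVehicle", record["vehicle"]),
        (subject, "describesIncident", record["incident"]),
    ]
```

Once in a triplestore, the same entities (a person, a vehicle) extracted from different reports become interlinked, which is what enables the cross-report queries the paper describes.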
31 pages, 1545 KB  
Article
The Complexity of eHealth Architecture: Lessons Learned from Application Use Cases
by Annalisa Barsotti, Gerl Armin, Wilhelm Sebastian, Massimiliano Donati, Stefano Dalmiani and Claudio Passino
Computers 2025, 14(9), 371; https://doi.org/10.3390/computers14090371 - 4 Sep 2025
Cited by 1 | Viewed by 1946
Abstract
The rapid evolution of eHealth technologies has revolutionized healthcare, enabling data-driven decision-making and personalized care. Central to this transformation is interoperability, which ensures seamless communication among heterogeneous systems. This paper explores the critical role of interoperability, data management processes, and the use of international standards in enabling integrated healthcare solutions. We present an overview of the interoperability dimensions (technical, semantic, and organizational) and align them with data management phases in a concise eHealth architecture. Furthermore, we examine two practical European use cases to demonstrate the extent of the proposed eHealth architecture, involving patients, environments, third parties, and healthcare providers. Full article
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2025)
25 pages, 412 KB  
Article
LightCross: A Lightweight Smart Contract Vulnerability Detection Tool
by Ioannis Sfyrakis, Paolo Modesti, Lewis Golightly and Minaro Ikegima
Computers 2025, 14(9), 369; https://doi.org/10.3390/computers14090369 - 3 Sep 2025
Viewed by 2700
Abstract
Blockchain and smart contracts have transformed industries by automating complex processes and transactions. However, this innovation has introduced significant security concerns, potentially leading to loss of financial assets and data integrity. The focus of this research is to address these challenges by developing a tool that can enable developers and testers to detect vulnerabilities in smart contracts in an efficient and reliable way. The research contributions include an analysis of existing literature on smart contract security, along with the design and implementation of a lightweight vulnerability detection tool called LightCross. This tool runs two well-known detectors, Slither and Mythril, to analyse smart contracts. Experimental analysis was conducted using the SmartBugs curated dataset, which contains 143 vulnerable smart contracts with a total of 206 vulnerabilities. The results showed that LightCross achieves the same detection rate as SmartBugs when using the same backend detectors (Slither and Mythril) while eliminating SmartBugs’ need for a separate Docker container for each detector. Mythril detects 53% and Slither 48% of the vulnerabilities in the SmartBugs curated dataset. Furthermore, an assessment of the execution time across various vulnerability categories revealed that LightCross performs comparably to SmartBugs when using the Mythril detector, while LightCross is significantly faster when using the Slither detector. Finally, to enhance user-friendliness and relevance, LightCross presents the verification results based on OpenSCV, a state-of-the-art academic classification of smart contract vulnerabilities, aligned with the industry-standard CWE and offering improvements over the unmaintained SWC taxonomy. Full article
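LightCross's actual aggregation of the two detectors' outputs is not shown in the abstract; the following is a hedged sketch of one plausible merge step, de-duplicating overlapping detections so a vulnerability flagged by both Slither and Mythril is counted once. The finding schema is illustrative, not the tool's:

```python
def merge_findings(slither, mythril):
    """Union two detectors' finding lists, de-duplicated on
    (contract, line, category) so overlapping detections count once.
    Each finding is a dict with those three keys (illustrative shape)."""
    merged, seen = [], set()
    for source, findings in (("slither", slither), ("mythril", mythril)):
        for f in findings:
            key = (f["contract"], f["line"], f["category"])
            if key not in seen:
                seen.add(key)
                merged.append({**f, "detected_by": source})
    return merged
```

A category field keyed to a taxonomy such as OpenSCV (as the paper does) is what makes de-duplication across detectors with different native labels possible.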

41 pages, 966 KB  
Review
ChatGPT’s Expanding Horizons and Transformative Impact Across Domains: A Critical Review of Capabilities, Challenges, and Future Directions
by Taiwo Raphael Feyijimi, John Ogbeleakhu Aliu, Ayodeji Emmanuel Oke and Douglas Omoregie Aghimien
Computers 2025, 14(9), 366; https://doi.org/10.3390/computers14090366 - 2 Sep 2025
Cited by 3 | Viewed by 5560
Abstract
The rapid proliferation of Chat Generative Pre-trained Transformer (ChatGPT) marks a pivotal moment in artificial intelligence, eliciting responses from academic shock to industrial awe. As these technologies advance from passive tools toward proactive, agentic systems, their transformative potential and inherent risks are magnified [...] Read more.
The rapid proliferation of Chat Generative Pre-trained Transformer (ChatGPT) marks a pivotal moment in artificial intelligence, eliciting responses from academic shock to industrial awe. As these technologies advance from passive tools toward proactive, agentic systems, their transformative potential and inherent risks are magnified globally. This paper presents a comprehensive, critical review of ChatGPT’s impact across five key domains: natural language understanding (NLU), content generation, knowledge discovery, education, and engineering. While ChatGPT demonstrates profound capabilities, significant challenges remain in factual accuracy, bias, and the inherent opacity of its reasoning—a core issue termed the “Black Box Conundrum”. To analyze these evolving dynamics and the implications of this shift toward autonomous agency, this review introduces a series of conceptual frameworks, each specifically designed to illuminate the complex interactions and trade-offs within these domains: the “Specialization vs. Generalization” tension in NLU; the “Quality–Scalability–Ethics Trilemma” in content creation; the “Pedagogical Adaptation Imperative” in education; and the emergence of “Human–LLM Cognitive Symbiosis” in engineering. The analysis reveals an urgent need for proactive adaptation across sectors. Educational paradigms must shift to cultivate higher-order cognitive skills, while professional practices (including those within the education sector) must evolve to treat AI as a cognitive partner, leveraging techniques like Retrieval-Augmented Generation (RAG) and sophisticated prompt engineering. Ultimately, this paper argues for an overarching “Ethical–Technical Co-evolution Imperative”, charting a forward-looking research agenda that intertwines technological innovation with rigorous ethical and methodological standards to ensure responsible AI development and integration. 
The analysis further reveals that the challenges of factual accuracy, bias, and opacity are interconnected and acutely magnified by the emergence of agentic systems, demanding a unified, proactive approach to adaptation across all sectors. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
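The Retrieval-Augmented Generation technique named in the abstract can be illustrated with a deliberately minimal sketch (bag-of-words retrieval standing in for a real embedding model; all function names are illustrative): relevant documents are retrieved by similarity to the query and prepended to the prompt so the model answers from grounded context rather than parametric memory alone.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query (toy retriever)."""
    q = Counter(query.lower().split())
    return sorted(documents,
                  key=lambda d: cosine(q, Counter(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query, documents, k=1):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"
```

A production system would replace the bag-of-words retriever with dense embeddings and a vector index, but the control flow — retrieve, then condition generation on the retrieved text — is the same.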

18 pages, 1660 KB  
Article
AI Gem: Context-Aware Transformer Agents as Digital Twin Tutors for Adaptive Learning
by Attila Kovari
Computers 2025, 14(9), 367; https://doi.org/10.3390/computers14090367 - 2 Sep 2025
Cited by 1 | Viewed by 2313
Abstract
Recent developments in large language models allow for real time, context-aware tutoring. AI Gem, presented in this article, is a layered architecture that integrates personalization, adaptive feedback, and curricular alignment into transformer based tutoring agents. The architecture combines retrieval augmented generation, Bayesian learner [...] Read more.
Recent developments in large language models allow for real-time, context-aware tutoring. AI Gem, presented in this article, is a layered architecture that integrates personalization, adaptive feedback, and curricular alignment into transformer-based tutoring agents. The architecture combines retrieval-augmented generation, a Bayesian learner model, and policy-based dialog in a verifiable and deployable software stack. The opportunities are scalable tutoring, multimodal interaction, and augmentation of teachers through content tools and analytics. Risks include factual errors, bias, over-reliance, latency, cost, and privacy. The paper positions AI Gem as a design framework with testable hypotheses. A scenario-based walkthrough and new diagrams assign each learner step to the ten layers. Governance guidance covers data privacy across jurisdictions and operation in resource-constrained environments. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
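The “Bayesian learner model” component can be sketched with classic Bayesian Knowledge Tracing, a standard formulation for this kind of layer (the abstract does not specify AI Gem’s actual model, so parameter names and values here are illustrative): after each answer, the probability that the learner has mastered a skill is updated by Bayes’ rule and a learning-transition step.

```python
def bkt_update(p_known, correct, slip=0.1, guess=0.2, transit=0.15):
    """One Bayesian Knowledge Tracing step.

    p_known: prior probability the skill is mastered.
    slip/guess: chance of answering wrong when known / right when unknown.
    transit: chance of learning the skill after the opportunity.
    (All parameter values are illustrative, not from the paper.)
    """
    if correct:
        num = p_known * (1 - slip)
        den = num + (1 - p_known) * guess
    else:
        num = p_known * slip
        den = num + (1 - p_known) * (1 - guess)
    posterior = num / den                      # Bayes update on the observation
    return posterior + (1 - posterior) * transit  # learning transition
```

A tutoring policy layer would read this mastery estimate to decide whether to advance, remediate, or give a hint — the kind of adaptive-feedback decision the architecture describes.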

29 pages, 2571 KB  
Article
Governance Framework for Intelligent Digital Twin Systems in Battery Storage: Aligning Standards, Market Incentives, and Cybersecurity for Decision Support of Digital Twin in BESS
by April Lia Hananto and Ibham Veza
Computers 2025, 14(9), 365; https://doi.org/10.3390/computers14090365 - 2 Sep 2025
Cited by 4 | Viewed by 3553
Abstract
Digital twins represent a transformative innovation for battery energy storage systems (BESS), offering real-time virtual replicas of physical batteries that enable accurate monitoring, predictive analytics, and advanced control strategies. These capabilities promise to significantly enhance system efficiency, reliability, and lifespan. Yet, despite the [...] Read more.
Digital twins represent a transformative innovation for battery energy storage systems (BESS), offering real-time virtual replicas of physical batteries that enable accurate monitoring, predictive analytics, and advanced control strategies. These capabilities promise to significantly enhance system efficiency, reliability, and lifespan. Yet, despite the clear technical potential, large-scale deployment of digital twin-enabled battery systems faces critical governance barriers. This study identifies three major challenges: fragmented standards and lack of interoperability, weak or misaligned market incentives, and insufficient cybersecurity safeguards for interconnected systems. The central contribution of this research is the development of a comprehensive governance framework that aligns these three pillars—standards, market and regulatory incentives, and cybersecurity—into an integrated model. Findings indicate that harmonized standards reduce integration costs and build trust across vendors and operators, while supportive regulatory and market mechanisms can explicitly reward the benefits of digital twins, including improved reliability, extended battery life, and enhanced participation in energy markets. For example, simulation-based evidence suggests that digital twin-guided thermal and operational strategies can extend usable battery capacity by up to five percent, providing both technical and economic benefits. At the same time, embedding robust cybersecurity practices ensures that the adoption of digital twins does not introduce vulnerabilities that could threaten grid stability. Beyond identifying governance gaps, this study proposes an actionable implementation roadmap categorized into short-, medium-, and long-term strategies rather than fixed calendar dates, ensuring adaptability across different jurisdictions. Short-term actions include establishing terminology standards and piloting incentive programs. 
Medium-term measures involve mandating interoperability protocols and embedding digital twin requirements in market rules, and long-term strategies focus on achieving global harmonization and universal plug-and-play interoperability. International examples from Europe, North America, and Asia–Pacific illustrate how coordinated governance can accelerate adoption while safeguarding energy infrastructure. By combining technical analysis with policy and governance insights, this study advances both the scholarly and practical understanding of digital twin deployment in BESSs. The findings provide policymakers, regulators, industry leaders, and system operators with a clear framework to close governance gaps, maximize the value of digital twins, and enable more secure, reliable, and sustainable integration of energy storage into future power systems. Full article
(This article belongs to the Section AI-Driven Innovations)

21 pages, 7375 KB  
Article
Real-Time Face Mask Detection Using Federated Learning
by Tudor-Mihai David and Mihai Udrescu
Computers 2025, 14(9), 360; https://doi.org/10.3390/computers14090360 - 31 Aug 2025
Viewed by 1129
Abstract
Epidemics caused by respiratory infections have become a global and systemic threat since humankind has become highly connected via modern transportation systems. Any new pathogen with human-to-human transmission capabilities has the potential to cause public health disasters and severe disruptions of social and [...] Read more.
Epidemics caused by respiratory infections have become a global and systemic threat since humankind has become highly connected via modern transportation systems. Any new pathogen with human-to-human transmission capabilities has the potential to cause public health disasters and severe disruptions of social and economic activities. During the COVID-19 pandemic, we learned that proper mask-wearing in closed, restricted areas was one of the measures that worked to mitigate the spread of respiratory infections while allowing for continuing economic activity. Previous research approached this issue by designing hardware–software systems that determine whether individuals in the surveilled restricted area are using a mask; however, most such solutions are centralized, thus requiring massive computational resources, which makes them hard to scale up. To address such issues, this paper proposes a novel decentralized, federated learning (FL) solution to mask-wearing detection that instantiates our lightweight version of the MobileNetV2 model. The FL solution also ensures individual privacy, given that images remain at the local, device level. Importantly, we obtained a mask-wearing training accuracy of 98% (i.e., similar to centralized machine learning solutions) after only eight rounds of communication with 25 clients. We rigorously validated the reliability and robustness of our approach through repeated K-fold cross-validation. Full article
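The communication rounds described above follow the usual federated-averaging pattern: each client trains locally on its private images and only model parameters travel to the server, which combines them weighted by local dataset size. A minimal sketch of that aggregation step (parameters flattened to lists of floats for illustration; the paper’s actual implementation is not shown):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: size-weighted average of client parameters.

    client_weights: one flat list of floats per client (same length each).
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

Privacy follows from the data flow: raw images never leave the device — only these averaged parameters are exchanged in each of the eight rounds.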

16 pages, 2074 KB  
Article
Benchmarking Control Strategies for Multi-Component Degradation (MCD) Detection in Digital Twin (DT) Applications
by Atuahene Kwasi Barimah, Akhtar Jahanzeb, Octavian Niculita, Andrew Cowell and Don McGlinchey
Computers 2025, 14(9), 356; https://doi.org/10.3390/computers14090356 - 29 Aug 2025
Viewed by 998
Abstract
Digital Twins (DTs) have become central to intelligent asset management within Industry 4.0, enabling real-time monitoring, diagnostics, and predictive maintenance. However, implementing Prognostics and Health Management (PHM) strategies within DT frameworks remains a significant challenge, particularly in systems experiencing multi-component degradation (MCD). MCD [...] Read more.
Digital Twins (DTs) have become central to intelligent asset management within Industry 4.0, enabling real-time monitoring, diagnostics, and predictive maintenance. However, implementing Prognostics and Health Management (PHM) strategies within DT frameworks remains a significant challenge, particularly in systems experiencing multi-component degradation (MCD). MCD occurs when several components degrade simultaneously or in interaction, complicating detection and isolation processes. Traditional data-driven fault detection models often require extensive historical degradation data, which is costly, time-consuming, or difficult to obtain in many real-world scenarios. This paper proposes a model-based, control-driven approach to MCD detection, which reduces the need for large training datasets by leveraging reference tracking performance in closed-loop control systems. We benchmark the accuracy of four control strategies—Proportional-Integral (PI), Linear Quadratic Regulator (LQR), Model Predictive Control (MPC), and a hybrid model—within a Digital Twin-enabled hydraulic system testbed comprising multiple components, including pumps, valves, nozzles, and filters. The control strategies are evaluated under various MCD scenarios for their ability to accurately detect and isolate degradation events. Simulation results indicate that the hybrid model consistently outperforms the individual control strategies, achieving an average accuracy of 95.76% under simultaneous pump and nozzle degradation scenarios. The LQR model also demonstrated strong predictive performance, especially in identifying degradation in components such as nozzles and pumps. Also, the sequence and interaction of faults were found to influence detection accuracy, highlighting how the complexities of fault sequences affect the performance of diagnostic strategies. 
This work contributes to PHM and DT research by introducing a scalable, data-efficient methodology for MCD detection that integrates seamlessly into existing DT architectures using containerized RESTful APIs. By shifting from data-dependent to model-informed diagnostics, the proposed approach enhances early fault detection capabilities and reduces deployment timelines for real-world DT-enabled PHM applications. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
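The control-driven idea — inferring degradation from closed-loop behaviour instead of from historical fault data — can be illustrated with a toy PI loop around a first-order plant whose input gain degrades. One simple indicator (among several the paper’s benchmark could use) is steady-state control effort: integral action hides the tracking error, but a degraded gain forces the controller to work visibly harder. All gains and thresholds below are illustrative, not the paper’s testbed values.

```python
def steady_state_effort(gain, setpoint=1.0, kp=2.0, ki=0.5, dt=0.1, steps=2000):
    """Run a discrete PI loop on the plant y' = -y + gain*u to steady state.

    Returns the final control effort u; for this plant u settles at
    setpoint/gain, so a smaller (degraded) gain means a larger effort.
    """
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        y += dt * (-y + gain * u)        # first-order plant, degrading input gain
    return u

def detect_degradation(gain, margin=1.5):
    """Flag a component whose control effort is well above the healthy baseline."""
    return steady_state_effort(gain) > margin * steady_state_effort(1.0)
```

Extending this to multi-component degradation means watching several such indicators at once, which is where interacting faults start to confound one another — the effect the abstract reports for fault sequences.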

19 pages, 5181 KB  
Article
Remote Code Execution via Log4J MBeans: Case Study of Apache ActiveMQ (CVE-2022-41678)
by Alexandru Răzvan Căciulescu, Matei Bădănoiu, Răzvan Rughiniș and Dinu Țurcanu
Computers 2025, 14(9), 355; https://doi.org/10.3390/computers14090355 - 28 Aug 2025
Cited by 1 | Viewed by 1795
Abstract
Java Management Extensions (JMX) are indispensable for managing and administrating Java software solutions, yet when exposed through HTTP bridges such as Jolokia they can radically enlarge an application’s attack surface. This paper presents the first in-depth analysis of CVE-2022-41678, a vulnerability discovered by [...] Read more.
Java Management Extensions (JMX) are indispensable for managing and administering Java software solutions, yet when exposed through HTTP bridges such as Jolokia they can radically enlarge an application’s attack surface. This paper presents the first in-depth analysis of CVE-2022-41678, a vulnerability discovered by the authors in Apache ActiveMQ that combines Jolokia’s remote JMX access with Log4J2 management beans to achieve full remote code execution. Using a default installation testbed, we enumerate the Log4J MBeans surfaced by Jolokia, demonstrate arbitrary file read, file write, and server-side request-forgery primitives, and finally leverage the file-write capability to obtain a shell, all via authenticated HTTP(S) requests only. The end-to-end exploit chain requires no deserialization gadgets and is unaffected by prior Log4Shell mitigations. We have also automated the entire exploit process via proof-of-concept scripts on a stock ActiveMQ 5.17.1 instance. We discuss the broader security implications for any software exposing JMX-managed or Jolokia-managed Log4J contexts, provide concrete hardening guidelines, and outline design directions for safer remote-management stacks. The findings underscore that even “benign” management beans can become critical when surfaced through ubiquitous HTTP management gateways. Full article
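The enumeration step described above — the benign first stage, not the exploit — can be sketched against Jolokia’s documented `list` operation. Log4j2 registers its MBeans under the `org.apache.logging.log4j2` JMX domain; the console URL below is the ActiveMQ default but should be treated as an assumption for any given deployment.

```python
import json
from urllib import request

JOLOKIA_URL = "http://localhost:8161/api/jolokia"  # default ActiveMQ web-console path (assumed)

def list_mbeans(url=JOLOKIA_URL):
    """POST a Jolokia 'list' operation and return the MBean tree (domain -> beans)."""
    payload = json.dumps({"type": "list"}).encode()
    req = request.Request(url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["value"]

def log4j_mbeans(mbean_tree):
    """Pick out the Log4j2 management beans from a Jolokia MBean tree."""
    return sorted(mbean_tree.get("org.apache.logging.log4j2", {}))
```

Auditing a deployment for this class of exposure amounts to checking whether `log4j_mbeans(list_mbeans())` returns anything a remote, HTTP-authenticated user can reach — the precondition for the chain the paper analyzes.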

14 pages, 898 KB  
Article
Attention-Pool: 9-Ball Game Video Analytics with Object Attention and Temporal Context Gated Attention
by Anni Zheng and Wei Qi Yan
Computers 2025, 14(9), 352; https://doi.org/10.3390/computers14090352 - 27 Aug 2025
Cited by 1 | Viewed by 1750
Abstract
The automated analysis of pool game videos presents significant challenges due to complex object interactions, precise rule requirements, and event-driven game dynamics that traditional computer vision approaches struggle to address effectively. This research introduces TCGA-Pool, a novel video analytics framework specifically designed for [...] Read more.
The automated analysis of pool game videos presents significant challenges due to complex object interactions, precise rule requirements, and event-driven game dynamics that traditional computer vision approaches struggle to address effectively. This research introduces TCGA-Pool, a novel video analytics framework specifically designed for comprehensive 9-ball pool game understanding through advanced object attention mechanisms and temporal context modeling. Our approach addresses the critical gap in automated cue sports analysis by focusing on three essential classification tasks: clear shot detection (successful ball potting without fouls), win condition identification (game-ending scenarios), and potted ball counting (accurate enumeration of successfully pocketed balls). The proposed framework leverages a Temporal Context Gated Attention (TCGA) mechanism that dynamically focuses on salient game elements while incorporating sequential dependencies inherent in pool game sequences. Through comprehensive evaluation on a dataset comprising 58,078 annotated video frames from diverse 9-ball pool scenarios, our TCGA-Pool framework demonstrates substantial improvements over existing video analysis methods, achieving accuracy gains of 4.7%, 3.2%, and 6.2% for clear shot detection, win condition identification, and potted ball counting tasks, respectively. The framework maintains computational efficiency with only 27.3 M parameters and 13.9 G FLOPs, making it suitable for real-time applications. Our contributions include the introduction of domain-specific object attention mechanisms, the development of adaptive temporal modeling strategies for cue sports, and the implementation of a practical real-time system for automated pool game monitoring. This work establishes a foundation for intelligent sports analytics in precision-based games and demonstrates the effectiveness of specialized deep learning approaches for complex temporal video understanding tasks. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
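The general shape of a gated temporal-attention block can be sketched in a few lines (a simplified stand-in, not the paper’s TCGA module — the gate parameters and dimensions here are illustrative): scaled dot-product attention pools per-frame features into a temporal context vector, and a learned sigmoid gate then modulates how much of that context flows onward.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gated_temporal_attention(frames, query, gate_w, gate_b):
    """Attend over frame features, then gate the pooled context.

    frames: list of per-frame feature vectors (T x D).
    query: current-frame query vector (D).
    gate_w, gate_b: parameters of a scalar sigmoid gate (illustrative).
    """
    d = len(query)
    scores = [dot(f, query) / math.sqrt(d) for f in frames]  # scaled dot-product
    weights = softmax(scores)                                # temporal attention
    context = [sum(w * f[i] for w, f in zip(weights, frames))
               for i in range(d)]                            # pooled context (D)
    gate = 1.0 / (1.0 + math.exp(-(dot(context, gate_w) + gate_b)))
    return [gate * c for c in context]                       # gated context
```

In a real network the gate lets the model suppress temporal context on frames where it is uninformative (e.g. between shots) and pass it through around events such as a ball being potted.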

16 pages, 417 KB  
Article
Empowering the Operator: Fault Diagnosis and Identification in an Industrial Environment Through a User-Friendly IoT Architecture
by Annalisa Bertoli and Cesare Fantuzzi
Computers 2025, 14(9), 349; https://doi.org/10.3390/computers14090349 - 26 Aug 2025
Viewed by 1150
Abstract
In recent years, the increasing complexity of production systems driven by technological development has created new opportunities in the industrial world but has also brought challenges in the practical use of these systems by operators. One of the biggest changes is data existence [...] Read more.
In recent years, the increasing complexity of production systems driven by technological development has created new opportunities in the industrial world but has also brought challenges in the practical use of these systems by operators. One of the biggest changes is the growing availability of data and its accessibility. This work proposes an IoT architecture specifically designed for real-world industrial environments. The goal is to present a system that can be effectively implemented to monitor operations and production processes in real time. This solution improves fault detection and identification, giving the operators the critical information needed to make informed decisions. The IoT architecture is implemented in two different industrial applications, demonstrating the flexibility of the architecture across various industrial contexts. The case studies show how monitoring reduces downtime when a fault occurs by making clear both the loss in performance and the fault that caused it. Additionally, this approach supports human operators in a deeper understanding of their working environment, enabling them to make decisions based on real-time data. Full article
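The fault-detection idea at the core of such an architecture can be reduced to a small, operator-friendly sketch (illustrative only; the paper’s actual detection logic is not specified in the abstract): smooth each sensor stream with a rolling mean and raise an alert, with the offending value attached, when it drifts outside the nominal band.

```python
from collections import deque

def fault_monitor(readings, nominal, tolerance, window=5):
    """Flag samples where the rolling mean drifts outside nominal +/- tolerance.

    readings: iterable of sensor values in time order.
    Returns a list of (sample_index, rolling_mean) alerts an operator
    dashboard could display alongside the raw stream.
    """
    recent = deque(maxlen=window)
    alerts = []
    for t, value in enumerate(readings):
        recent.append(value)
        mean = sum(recent) / len(recent)
        if abs(mean - nominal) > tolerance:
            alerts.append((t, mean))
    return alerts
```

The windowed mean suppresses single-sample noise, so an alert corresponds to a sustained deviation — the kind of actionable, explainable signal the user-friendly architecture aims to surface to operators.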
