Computers, Volume 14, Issue 9 (September 2025) – 40 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 3805 KB  
Article
GraphTrace: A Modular Retrieval Framework Combining Knowledge Graphs and Large Language Models for Multi-Hop Question Answering
by Anna Osipjan, Hanieh Khorashadizadeh, Akasha-Leonie Kessel, Sven Groppe and Jinghua Groppe
Computers 2025, 14(9), 382; https://doi.org/10.3390/computers14090382 - 11 Sep 2025
Abstract
This paper introduces GraphTrace, a novel retrieval framework that integrates a domain-specific knowledge graph (KG) with a large language model (LLM) to improve information retrieval for complex, multi-hop queries. Built on structured economic data related to the COVID-19 pandemic, GraphTrace adopts a modular architecture comprising entity extraction, path finding, query decomposition, semantic path ranking, and context aggregation, followed by LLM-based answer generation. GraphTrace is compared against baseline retrieval-augmented generation (RAG) and graph-based RAG (GraphRAG) approaches in both retrieval and generation settings. Experimental results show that GraphTrace consistently outperforms the baselines across evaluation metrics, particularly in handling mid-complexity (5–6-hop) queries and achieving top scores in directness during the generation evaluation. These gains are attributed to GraphTrace’s alignment of semantic reasoning with structured KG traversal, combining modular components for more targeted and interpretable retrieval.
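To make the pipeline concrete, here is a minimal sketch of the kind of multi-hop path-finding step the abstract describes, using networkx over a toy economic KG; the entities, relations, and helper names are illustrative assumptions, not GraphTrace's actual code.

```python
# Hypothetical sketch of GraphTrace-style KG path finding between question
# entities; the toy graph and relation labels are invented for illustration.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("COVID-19", "lockdown", relation="caused")
kg.add_edge("lockdown", "retail_sales", relation="reduced")
kg.add_edge("retail_sales", "GDP", relation="contributes_to")

def find_multi_hop_paths(graph, source, target, max_hops=6):
    """Enumerate candidate evidence paths up to a hop limit."""
    return list(nx.all_simple_paths(graph, source, target, cutoff=max_hops))

def verbalize(graph, path):
    """Turn a node path into a textual context snippet for the LLM prompt."""
    steps = []
    for a, b in zip(path, path[1:]):
        steps.append(f"{a} {graph[a][b]['relation']} {b}")
    return "; ".join(steps)

for path in find_multi_hop_paths(kg, "COVID-19", "GDP"):
    print(verbalize(kg, path))
```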
19 pages, 544 KB  
Article
Scaling Linearizable Range Queries on Modern Multi-Cores
by Chen Zhang, Zhengming Yi and Xinghui Zhu
Computers 2025, 14(9), 381; https://doi.org/10.3390/computers14090381 - 11 Sep 2025
Abstract
In this paper, we introduce Range Query Timestamp Counter (RQ-TSC), a general approach to provide scalable and linearizable range query operations for highly concurrent lock-based data structures. RQ-TSC is a multi-versioned building block that relies on hardware timestamps (e.g., obtained through the hardware timestamp counter register on x86_64) to generate version timestamps, which greatly reduces contention on a shared atomic counter. To evaluate the performance of RQ-TSC, we apply it to three data structures: a linked list, a skip list, and a binary search tree. Experiments show that our approach can improve scalability significantly. Moreover, in almost all cases, range queries on these data structures built from our design perform as well as or better than state-of-the-art concurrent data structures that support linearizable range queries.
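The core idea, versioning writes with a hardware clock instead of a shared atomic counter, can be sketched as follows. Python cannot read the x86_64 timestamp counter directly, so this toy uses time.monotonic_ns() as a stand-in and omits the paper's lock-based concurrency machinery entirely.

```python
# Illustrative, single-threaded sketch of timestamp-based multi-versioning
# in the spirit of RQ-TSC; time.monotonic_ns() stands in for the hardware
# timestamp counter register.
import time

class VersionedCell:
    def __init__(self, value):
        self.versions = [(time.monotonic_ns(), value)]  # (timestamp, value)

    def write(self, value):
        # Each write is stamped from the clock rather than by incrementing
        # a shared atomic counter (the contention point the paper avoids).
        self.versions.append((time.monotonic_ns(), value))

    def read_at(self, snapshot_ts):
        # Return the newest version no later than the snapshot timestamp.
        for ts, value in reversed(self.versions):
            if ts <= snapshot_ts:
                return value
        return None

cells = [VersionedCell(i) for i in range(5)]
snapshot = time.monotonic_ns()          # linearization point of a range query
cells[2].write(99)                      # later update, invisible to the snapshot
print([c.read_at(snapshot) for c in cells])  # pre-snapshot values: [0, 1, 2, 3, 4]
```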
21 pages, 330 KB  
Article
Towards Navigating Ethical Challenges in AI-Driven Healthcare Ad Moderation
by Abraham Abby Sen, Jeen Mariam Joy and Murray E. Jennex
Computers 2025, 14(9), 380; https://doi.org/10.3390/computers14090380 - 11 Sep 2025
Abstract
The growing use of AI-driven content moderation on social media platforms has intensified ethical concerns, particularly in the context of healthcare advertising and misinformation. While artificial intelligence offers scale and efficiency, it lacks the moral judgment, contextual understanding, and interpretive flexibility required to navigate complex health-related discourse. This paper addresses these challenges by integrating normative ethical theory with organizational practice to evaluate the limitations of AI in moderating healthcare content. Drawing on deontological, utilitarian, and virtue ethics frameworks, the analysis explores the tensions between ethical ideals and real-world implementation. Building on this foundation, the paper proposes a set of normative guidelines that emphasize hybrid human–AI moderation, transparency, the redesign of success metrics, and the cultivation of ethical organizational cultures. To institutionalize these principles, we introduce a governance framework that includes internal accountability structures, external oversight mechanisms, and adaptive processes for handling ambiguity, disagreement, and evolving standards. By connecting ethical theory with actionable design strategies, this study provides a roadmap for responsible and context-sensitive AI moderation in the digital healthcare ecosystem.
(This article belongs to the Section AI-Driven Innovations)
27 pages, 18541 KB  
Article
Integrating Design Thinking Approach and Simulation Tools in Smart Building Systems Education: A Case Study on Computer-Assisted Learning for Master’s Students
by Andrzej Ożadowicz
Computers 2025, 14(9), 379; https://doi.org/10.3390/computers14090379 - 9 Sep 2025
Abstract
The rapid development of smart home and building technologies requires educational methods that facilitate the integration of theoretical knowledge with practical, system-level design skills. Computer-assisted tools play a crucial role in this process by enabling students to experiment with complex Internet of Things (IoT) and building automation ecosystems in a risk-free, iterative environment. This paper proposes a pedagogical framework that integrates simulation-based prototyping with collaborative and spatial design tools, supported by elements of design thinking and blended learning. The approach was implemented in a master’s-level Smart Building Systems course to engage students in interdisciplinary projects where virtual modeling, digital collaboration, and contextualized spatial design were combined to develop user-oriented smart space concepts. Analysis of project outcomes and student feedback indicated that the use of simulation and visualization platforms may enhance technical competencies, creativity, and engagement. The proposed framework contributes to engineering education by demonstrating how computer-assisted environments can effectively support practice-oriented, user-centered learning. Its modular and scalable structure makes it applicable across IoT- and automation-focused curricula, aligning academic training with the hybrid workflows of contemporary engineering practice. Concurrently, areas for enhancement and modification were identified to optimize support for group and creative student work.
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
30 pages, 3451 KB  
Article
Pre-During-After Software Development Documentation (PDA-SDD): A Phase-Based Approach for Comprehensive Software Documentation in Modern Development Paradigms
by Abdullah A. H. Alzahrani
Computers 2025, 14(9), 378; https://doi.org/10.3390/computers14090378 - 9 Sep 2025
Abstract
Persistent challenges in software documentation, particularly limitations in generality, simplicity, and efficiency of existing models, impede effective software development. To address these, this research proposes a novel phase-based and holistic software documentation model (PDA-SDD). This model was subsequently evaluated using a digital questionnaire distributed to 150 software development and documentation experts, achieving a 48% response rate (n = 72). The evaluation focused on assessing the proposed model’s generality, simplicity, and efficiency. Findings indicate that while certain sub-models (e.g., SRSD, RLD) were positively received across all criteria and the overall model demonstrated strong perceived generality and efficiency in specific aspects, areas for improvement were identified, particularly regarding terminological consistency and user-friendliness. This study contributes to the understanding of the complexities in achieving a universally effective software documentation model and highlights key considerations for future research and development in this critical area of software engineering.
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
29 pages, 1167 KB  
Article
The Learning Style Decoder: FSLSM-Guided Behavior Mapping Meets Deep Neural Prediction in LMS Settings
by Athanasios Angeioplastis, John Aliprantis, Markos Konstantakis, Dimitrios Varsamis and Alkiviadis Tsimpiris
Computers 2025, 14(9), 377; https://doi.org/10.3390/computers14090377 - 8 Sep 2025
Abstract
Personalized learning environments increasingly rely on learner modeling techniques that integrate both explicit and implicit data sources. This study introduces a hybrid profiling methodology that combines psychometric data from an extended Felder–Silverman Learning Style Model (FSLSM) questionnaire with behavioral analytics derived from Moodle Learning Management System interaction logs. A structured mapping process was employed to associate over 200 unique log event types with FSLSM cognitive dimensions, enabling dynamic, behavior-driven learner profiles. Experiments were conducted across three datasets: a university dataset from the International Hellenic University, a public dataset from Kaggle, and a combined dataset totaling over 7 million log entries. Deep learning models including a Sequential Neural Network, BiLSTM, and a pretrained MLSTM-FCN were trained to predict student performance across regression and classification tasks. Results indicate moderate predictive validity: binary classification achieved practical, albeit imperfect accuracy, while three-class and regression tasks performed close to baseline levels. These findings highlight both the potential and the current constraints of log-based learner modeling. The contribution of this work lies in providing a reproducible integration framework and pipeline that can be applied across datasets, offering a realistic foundation for further exploration of scalable, data-driven personalization.
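A minimal sketch of the log-to-FSLSM mapping idea: count events by mapped dimension and normalize into a behavioral profile. The event names and the mapping below are invented for illustration; the paper maps over 200 real Moodle event types.

```python
# Hypothetical mapping from LMS log event types to FSLSM dimensions;
# both the event names and the dimension assignments are illustrative.
from collections import Counter

EVENT_TO_FSLSM = {
    "forum_post_created":  "active",      # doing/discussing
    "page_viewed":         "reflective",  # reading/thinking
    "quiz_attempted":      "sensing",     # concrete practice
    "glossary_viewed":     "verbal",
    "video_played":        "visual",
}

def profile_from_logs(events):
    """Count FSLSM-mapped events and normalize to a behavioral profile."""
    counts = Counter(EVENT_TO_FSLSM[e] for e in events if e in EVENT_TO_FSLSM)
    total = sum(counts.values()) or 1
    return {dim: n / total for dim, n in counts.items()}

logs = ["page_viewed", "page_viewed", "quiz_attempted", "video_played"]
print(profile_from_logs(logs))  # e.g. {'reflective': 0.5, 'sensing': 0.25, ...}
```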
30 pages, 10155 KB  
Article
Interoperable Semantic Systems in Public Administration: AI-Driven Data Mining from Law-Enforcement Reports
by Alexandros Z. Spyropoulos and Vassilis Tsiantos
Computers 2025, 14(9), 376; https://doi.org/10.3390/computers14090376 - 8 Sep 2025
Abstract
The digitisation of law-enforcement archives is examined with the aim of moving from static analogue records to interoperable semantic information systems. A step-by-step framework for optimal digitisation is proposed, grounded in archival best practice and enriched with artificial-intelligence and semantic-web technologies. Emphasis is placed on semantic data representation, which renders information actionable, searchable, interlinked, and automatically processed. As a proof of concept, a large language model—OpenAI ChatGPT, version o3—was applied to a corpus of narrative police reports, extracting and classifying key entities (metadata, persons, addresses, vehicles, incidents, fingerprints, and inter-entity relationships). The output was converted to Resource Description Framework triples and ingested into a triplestore, demonstrating how unstructured text can be transformed into machine-readable, interoperable data with minimal human intervention. The approach’s challenges—technical complexity, data quality assurance, information-security requirements, and staff training—are analysed alongside the opportunities it affords, such as accelerated access to records, cross-agency interoperability, and advanced analytics for investigative and strategic decision-making. Combining systematic digitisation, AI-driven data extraction, and rigorous semantic modelling ultimately delivers a fully interoperable information environment for law-enforcement agencies, enhancing efficiency, transparency, and evidentiary integrity.
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)
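As a rough illustration of the report-to-triples step, the sketch below converts one hypothetical extracted record into RDF with rdflib; the namespace, the properties, and the record itself are assumptions, not the paper's actual schema.

```python
# Minimal sketch: turn an LLM-extracted record into RDF triples with rdflib.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/police/")
g = Graph()

extracted = {  # the shape of a structured record an LLM might return
    "report_id": "R-1042",
    "person": "John Doe",
    "vehicle": "ABC-123",
    "incident": "burglary",
}

report = URIRef(EX[extracted["report_id"]])
g.add((report, RDF.type, EX.Report))
g.add((report, EX.mentionsPerson, Literal(extracted["person"])))
g.add((report, EX.mentionsVehicle, Literal(extracted["vehicle"])))
g.add((report, EX.describesIncident, Literal(extracted["incident"])))

print(g.serialize(format="turtle"))  # ready for ingestion into a triplestore
```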
20 pages, 1604 KB  
Article
Rule-Based eXplainable Autoencoder for DNS Tunneling Detection
by Giacomo De Bernardi, Giovanni Battista Gaggero, Fabio Patrone, Sandro Zappatore, Mario Marchese and Maurizio Mongelli
Computers 2025, 14(9), 375; https://doi.org/10.3390/computers14090375 - 8 Sep 2025
Abstract
Artificial Intelligence (AI) and Machine Learning (ML) are employed in numerous fields and applications. Although most of these approaches offer very good performance, they are affected by the “black-box” problem. The way they operate and make decisions is complex and difficult for human users to interpret, making the systems impossible to manually adjust in case they make trivial (from a human viewpoint) errors. In this paper, we show how a “white-box” approach based on eXplainable AI (XAI) can be applied to the Domain Name System (DNS) tunneling detection problem, a cybersecurity problem already successfully addressed by “black-box” approaches, in order to make the detection explainable. The obtained results show that the proposed solution can achieve performance comparable to that of an autoencoder-based solution while offering a clear view of how the system makes its choices and the possibility of manual analysis and adjustments.
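For intuition, a rule-based detector of this kind can be as simple as explicit thresholds over interpretable query features; the features and cutoff values below are illustrative, not the rules extracted in the paper.

```python
# Toy "white-box" DNS tunneling detector: every verdict is traceable to a
# named, human-readable rule. Thresholds here are illustrative assumptions.
import math
from collections import Counter

def entropy(s):
    """Shannon entropy of a string; high for encoded/tunneled payloads."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def is_suspicious(qname, max_label_len=40, max_entropy=4.0):
    label = qname.split(".")[0]          # leftmost label carries the payload
    rules = {
        "long_label":   len(label) > max_label_len,
        "high_entropy": entropy(label) > max_entropy,
    }
    return any(rules.values()), rules    # verdict plus the rules that fired

verdict, fired = is_suspicious("dGhpc2lzYXZlcnlsb25nZW5jb2RlZHBheWxvYWQx.evil.com")
print(verdict, fired)
```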
49 pages, 670 KB  
Review
Bridging Domains: Advances in Explainable, Automated, and Privacy-Preserving AI for Computer Science and Cybersecurity
by Youssef Harrath, Oswald Adohinzin, Jihene Kaabi and Morgan Saathoff
Computers 2025, 14(9), 374; https://doi.org/10.3390/computers14090374 - 8 Sep 2025
Abstract
Artificial intelligence (AI) is rapidly redefining both computer science and cybersecurity by enabling more intelligent, scalable, and privacy-conscious systems. While most prior surveys treat these fields in isolation, this paper provides a unified review of 256 peer-reviewed publications to bridge that gap. We examine how emerging AI paradigms, such as explainable AI (XAI), AI-augmented software development, and federated learning, are shaping technological progress across both domains. In computer science, AI is increasingly embedded throughout the software development lifecycle to boost productivity, improve testing reliability, and automate decision making. In cybersecurity, AI drives advances in real-time threat detection and adaptive defense. Our synthesis highlights powerful cross-cutting findings, including shared challenges such as algorithmic bias, interpretability gaps, and high computational costs, as well as empirical evidence that AI-enabled defenses can reduce successful breaches by up to 30%. Explainability is identified as a cornerstone for trust and bias mitigation, while privacy-preserving techniques, including federated learning and local differential privacy, emerge as essential safeguards in decentralized environments such as the Internet of Things (IoT) and healthcare. Despite transformative progress, we emphasize persistent limitations in fairness, adversarial robustness, and the sustainability of large-scale model training. By integrating perspectives from two traditionally siloed disciplines, this review delivers a unified framework that not only maps current advances and limitations but also provides a foundation for building more resilient, ethical, and trustworthy AI systems.
(This article belongs to the Section AI-Driven Innovations)
32 pages, 784 KB  
Review
Electromagnetic Field Distribution Mapping: A Taxonomy and Comprehensive Review of Computational and Machine Learning Methods
by Yiannis Kiouvrekis and Theodor Panagiotakopoulos
Computers 2025, 14(9), 373; https://doi.org/10.3390/computers14090373 - 5 Sep 2025
Abstract
Electromagnetic field (EMF) exposure mapping is increasingly important for ensuring compliance with safety regulations, supporting the deployment of next-generation wireless networks, and addressing public health concerns. While numerous surveys have addressed specific aspects of radio propagation or radio environment maps, a comprehensive and unified overview of EMF mapping methodologies has been lacking. This review bridges that gap by systematically analyzing computational, geospatial, and machine learning approaches used for EMF exposure mapping across both wireless communication engineering and public health domains. A novel taxonomy is introduced to clarify overlapping terminology—encompassing radio maps, radio environment maps, and EMF exposure maps—and to classify construction methods, including analytical models, model-based interpolation, and data-driven learning techniques. In addition, the review highlights domain-specific challenges such as indoor versus outdoor mapping, data sparsity, and model generalization, while identifying emerging opportunities in hybrid modeling, big data integration, and explainable AI. By combining perspectives from communication engineering and public health, this work provides a broader and more interdisciplinary synthesis than previous surveys, offering a structured reference and roadmap for advancing robust, scalable, and socially relevant EMF mapping frameworks.
(This article belongs to the Special Issue AI in Its Ecosystem)
23 pages, 2435 KB  
Article
Explainable Deep Kernel Learning for Interpretable Automatic Modulation Classification
by Carlos Enrique Mosquera-Trujillo, Juan Camilo Lugo-Rojas, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(9), 372; https://doi.org/10.3390/computers14090372 - 5 Sep 2025
Abstract
Modern wireless communication systems increasingly rely on Automatic Modulation Classification (AMC) to enhance reliability and adaptability, especially in the presence of severe signal degradation. However, despite significant progress driven by deep learning, many AMC models still struggle with high computational overhead, suboptimal performance under low-signal-to-noise conditions, and limited interpretability, factors that hinder their deployment in real-time, resource-constrained environments. To address these challenges, we propose the Convolutional Random Fourier Features with Denoising Thresholding Network (CRFFDT-Net), a compact and interpretable deep kernel architecture that integrates Convolutional Random Fourier Features (CRFFSinCos), an automatic threshold-based denoising module, and a hybrid time-domain feature extractor composed of CNN and GRU layers. Our approach is validated on the RadioML 2016.10A benchmark dataset, encompassing eleven modulation types across a wide signal-to-noise ratio (SNR) spectrum. Experimental results demonstrate that CRFFDT-Net achieves an average classification accuracy that is statistically comparable to state-of-the-art models, while requiring significantly fewer parameters and offering lower inference latency. This highlights an exceptional accuracy–complexity trade-off. Moreover, interpretability analysis using GradCAM++ highlights the pivotal role of the Convolutional Random Fourier Features in the representation learning process, providing valuable insight into the model’s decision-making. These results underscore the promise of CRFFDT-Net as a lightweight and explainable solution for AMC in real-world, low-power communication systems.
(This article belongs to the Special Issue AI in Complex Engineering Systems)
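As background, the sin/cos random Fourier feature map that CRFF-style layers build on (Rahimi and Recht) can be sketched in a few lines of numpy: random projections whose inner products approximate an RBF kernel. Dimensions and bandwidth below are arbitrary choices for illustration.

```python
# Random Fourier features approximating an RBF kernel exp(-g * ||x - y||^2).
import numpy as np

rng = np.random.default_rng(0)
d, D, gamma = 16, 256, 0.5            # input dim, feature dim, RBF bandwidth

W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))  # w ~ N(0, 2*gamma*I)

def rff(x):
    """Map x to [sin(xW), cos(xW)] / sqrt(D) so <rff(x), rff(y)> ~ k(x, y)."""
    proj = x @ W
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1) / np.sqrt(D)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = rff(x) @ rff(y)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
print(f"approx={approx:.4f}  exact={exact:.4f}")  # close for large D
```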
31 pages, 1545 KB  
Article
The Complexity of eHealth Architecture: Lessons Learned from Application Use Cases
by Annalisa Barsotti, Armin Gerl, Sebastian Wilhelm, Massimiliano Donati, Stefano Dalmiani and Claudio Passino
Computers 2025, 14(9), 371; https://doi.org/10.3390/computers14090371 - 4 Sep 2025
Abstract
The rapid evolution of eHealth technologies has revolutionized healthcare, enabling data-driven decision-making and personalized care. Central to this transformation is interoperability, which ensures seamless communication among heterogeneous systems. This paper explores the critical role of interoperability, data management processes, and the use of international standards in enabling integrated healthcare solutions. We present an overview of interoperability dimensions—technical, semantic, and organizational—and align them with data management phases in a concise eHealth architecture. Furthermore, we examine two practical European use cases to demonstrate the extent of the proposed eHealth architecture, involving patients, environments, third parties, and healthcare providers.
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2025)
24 pages, 1766 KB  
Article
Evaluating Interaction Capability in a Serious Game for Children with ASD: An Operability-Based Approach Aligned with ISO/IEC 25010:2023
by Delia Isabel Carrión-León, Milton Paúl Lopez-Ramos, Luis Gonzalo Santillan-Valdiviezo, Damaris Sayonara Tanguila-Tapuy, Gina Marilyn Morocho-Santos, Raquel Johanna Moyano-Arias, María Elena Yautibug-Apugllón and Ana Eva Chacón-Luna
Computers 2025, 14(9), 370; https://doi.org/10.3390/computers14090370 - 4 Sep 2025
Abstract
Serious games for children with Autism Spectrum Disorder (ASD) require rigorous evaluation frameworks that capture neurodivergent interaction patterns. This pilot study designed, developed, and evaluated a serious game for children with ASD, focusing on operability assessment aligned with ISO/IEC 25010:2023 standards. A repeated-measures design involved ten children with ASD from the Carlos Garbay Special Education Institute in Riobamba, Ecuador, across 25 gameplay sessions. A bespoke operability algorithm incorporating four weighted components (ease of learning, user control, interface familiarity, and message comprehension) was developed through expert consultation with certified ASD therapists. Statistical analysis used linear mixed-effects models with Kenward–Roger correction, supplemented by thorough validation including split-half reliability and partial correlations. The operability metric demonstrated excellent internal consistency (split-half reliability = 0.94, 95% CI [0.88, 0.97]) and construct validity through partial correlations controlling for performance (difficulty: r_partial = 0.42, p = 0.037). Eighty percent of sessions achieved moderate-to-high operability levels (M = 45.07, SD = 10.52). Contrary to expectations, operability consistently improved with increasing difficulty level (Easy: M = 37.04; Medium: M = 48.71; Hard: M = 53.87), indicating that individuals with enhanced capabilities advanced to harder levels. Mixed-effects modeling indicated substantial difficulty effects (H = 9.36, p = 0.009, ε2 = 0.39). This pilot study establishes preliminary evidence for operability assessment in ASD serious games, requiring larger confirmatory validation studies (n ≥ 30) to establish broader generalizability and standardized instrument integration. The positive difficulty–operability association highlights the importance of adaptive game design in supporting skill progression.
(This article belongs to the Section Human–Computer Interactions)
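A hypothetical sketch of a four-component weighted operability score of the kind the abstract describes; the component names follow the paper, but the weights and the 0–100 scaling below are invented for illustration.

```python
# Illustrative weighted operability score; weights are assumptions, not the
# expert-derived values from the paper.
WEIGHTS = {
    "ease_of_learning":      0.30,
    "user_control":          0.30,
    "interface_familiarity": 0.20,
    "message_comprehension": 0.20,
}

def operability(scores):
    """Weighted sum of per-component session scores, each rated 0-100."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

session = {"ease_of_learning": 55, "user_control": 40,
           "interface_familiarity": 50, "message_comprehension": 35}
print(f"operability = {operability(session):.2f}")   # e.g. 45.50
```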
25 pages, 412 KB  
Article
LightCross: A Lightweight Smart Contract Vulnerability Detection Tool
by Ioannis Sfyrakis, Paolo Modesti, Lewis Golightly and Minaro Ikegima
Computers 2025, 14(9), 369; https://doi.org/10.3390/computers14090369 - 3 Sep 2025
Abstract
Blockchain and smart contracts have transformed industries by automating complex processes and transactions. However, this innovation has introduced significant security concerns, potentially leading to loss of financial assets and data integrity. The focus of this research is to address these challenges by developing a tool that can enable developers and testers to detect vulnerabilities in smart contracts in an efficient and reliable way. The research contributions include an analysis of existing literature on smart contract security, along with the design and implementation of a lightweight vulnerability detection tool called LightCross. This tool runs two well-known detectors, Slither and Mythril, to analyse smart contracts. Experimental analysis was conducted using the SmartBugs curated dataset, which contains 143 vulnerable smart contracts with a total of 206 vulnerabilities. The results showed that LightCross achieves the same detection rate as SmartBugs when using the same backend detectors (Slither and Mythril) while eliminating SmartBugs’ need for a separate Docker container for each detector. Mythril detects 53% and Slither 48% of the vulnerabilities in the SmartBugs curated dataset. Furthermore, an assessment of the execution time across various vulnerability categories revealed that LightCross performs comparably to SmartBugs when using the Mythril detector, while LightCross is significantly faster when using the Slither detector. Finally, to enhance user-friendliness and relevance, LightCross presents the verification results based on OpenSCV, a state-of-the-art academic classification of smart contract vulnerabilities, aligned with the industry-standard CWE and offering improvements over the unmaintained SWC taxonomy.
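A wrapper of this kind essentially shells out to both detectors and merges their reports. The sketch below shows the idea with subprocess; the exact CLI invocations and any parsing into OpenSCV categories are assumptions, not LightCross's actual implementation, and both tools must be installed locally.

```python
# Hedged sketch of orchestrating Slither and Mythril as subprocesses.
import subprocess

def run_detector(cmd):
    """Run one detector CLI and capture its report text."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout + result.stderr

def analyze(contract_path):
    reports = {
        "slither": run_detector(["slither", contract_path]),
        "mythril": run_detector(["myth", "analyze", contract_path]),
    }
    return reports  # a real tool would map findings onto OpenSCV categories

for tool, report in analyze("Vulnerable.sol").items():
    print(f"--- {tool} ---\n{report[:200]}")
```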
19 pages, 1153 KB  
Article
ChatGPT in Early Childhood Science Education: Can It Offer Innovative Effective Solutions to Overcome Challenges?
by Mustafa Uğraş, Zehra Çakır, Georgios Zacharis and Michail Kalogiannakis
Computers 2025, 14(9), 368; https://doi.org/10.3390/computers14090368 - 3 Sep 2025
Abstract
This study explores the potential of ChatGPT to address challenges in Early Childhood Science Education (ECSE) from the perspective of educators. A qualitative case study was conducted with 33 Early Childhood Education (ECE) teachers in Türkiye, using semi-structured interviews. Data were analyzed through content analysis with MAXQDA 24 software. The results indicate that ECE teachers perceive ChatGPT as a partial solution to the scarcity of educational resources, appreciating its ability to propose alternative material uses and creative activity ideas. Participants also recognized its potential to support differentiated instruction by suggesting activities tailored to children’s developmental needs. Furthermore, ChatGPT was seen as a useful tool for generating lesson plans and activity options, although concerns were expressed that overreliance on the tool might undermine teachers’ pedagogical skills. Additional limitations highlighted include dependence on technology, restricted access to digital tools, diminished interpersonal interactions, risks of misinformation, and ethical concerns. Overall, while educators acknowledged ChatGPT’s usefulness in supporting ECSE, they emphasized that its integration into teaching practice should be cautious and balanced, considering both its educational benefits and its limitations.
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)
18 pages, 1660 KB  
Article
AI Gem: Context-Aware Transformer Agents as Digital Twin Tutors for Adaptive Learning
by Attila Kovari
Computers 2025, 14(9), 367; https://doi.org/10.3390/computers14090367 - 2 Sep 2025
Abstract
Recent developments in large language models allow for real-time, context-aware tutoring. AI Gem, presented in this article, is a layered architecture that integrates personalization, adaptive feedback, and curricular alignment into transformer-based tutoring agents. The architecture combines retrieval-augmented generation, a Bayesian learner model, and policy-based dialog in a verifiable and deployable software stack. The opportunities are scalable tutoring, multimodal interaction, and augmentation of teachers through content tools and analytics. Risks include factual errors, bias, overreliance, latency, cost, and privacy. The paper positions AI Gem as a design framework with testable hypotheses. A scenario-based walkthrough and new diagrams assign each learner step to the ten layers. Governance guidance covers data privacy across jurisdictions and operation in resource-constrained environments.
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
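One standard way to realize a "Bayesian learner model" layer is Bayesian knowledge tracing; the sketch below shows its posterior update. The parameter values, and the choice of BKT specifically, are assumptions for illustration rather than AI Gem's documented design.

```python
# Bayesian knowledge tracing: update P(skill mastered) after each answer.
def bkt_update(p_known, correct, slip=0.1, guess=0.2, learn=0.15):
    """Posterior over skill mastery given one observed answer."""
    if correct:
        evidence = p_known * (1 - slip)
        posterior = evidence / (evidence + (1 - p_known) * guess)
    else:
        evidence = p_known * slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - guess))
    return posterior + (1 - posterior) * learn   # chance of learning this step

p = 0.3                                          # prior mastery estimate
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
    print(f"P(mastered) = {p:.3f}")              # drives what the tutor asks next
```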
41 pages, 966 KB  
Review
ChatGPT’s Expanding Horizons and Transformative Impact Across Domains: A Critical Review of Capabilities, Challenges, and Future Directions
by Taiwo Raphael Feyijimi, John Ogbeleakhu Aliu, Ayodeji Emmanuel Oke and Douglas Omoregie Aghimien
Computers 2025, 14(9), 366; https://doi.org/10.3390/computers14090366 - 2 Sep 2025
Abstract
The rapid proliferation of Chat Generative Pre-trained Transformer (ChatGPT) marks a pivotal moment in artificial intelligence, eliciting responses from academic shock to industrial awe. As these technologies advance from passive tools toward proactive, agentic systems, their transformative potential and inherent risks are magnified globally. This paper presents a comprehensive, critical review of ChatGPT’s impact across five key domains: natural language understanding (NLU), content generation, knowledge discovery, education, and engineering. While ChatGPT demonstrates profound capabilities, significant challenges remain in factual accuracy, bias, and the inherent opacity of its reasoning—a core issue termed the “Black Box Conundrum”. To analyze these evolving dynamics and the implications of this shift toward autonomous agency, this review introduces a series of conceptual frameworks, each specifically designed to illuminate the complex interactions and trade-offs within these domains: the “Specialization vs. Generalization” tension in NLU; the “Quality–Scalability–Ethics Trilemma” in content creation; the “Pedagogical Adaptation Imperative” in education; and the emergence of “Human–LLM Cognitive Symbiosis” in engineering. The analysis reveals an urgent need for proactive adaptation across sectors. Educational paradigms must shift to cultivate higher-order cognitive skills, while professional practices (including those within the education sector) must evolve to treat AI as a cognitive partner, leveraging techniques like Retrieval-Augmented Generation (RAG) and sophisticated prompt engineering. Ultimately, this paper argues for an overarching “Ethical–Technical Co-evolution Imperative”, charting a forward-looking research agenda that intertwines technological innovation with rigorous ethical and methodological standards to ensure responsible AI development and integration. Overall, the analysis reveals that the challenges of factual accuracy, bias, and opacity are interconnected and acutely magnified by the emergence of agentic systems, demanding a unified, proactive approach to adaptation across all sectors.
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
29 pages, 2570 KB  
Article
Governance Framework for Intelligent Digital Twin Systems in Battery Storage: Aligning Standards, Market Incentives, and Cybersecurity for Decision Support of Digital Twin in BESS
by April Lia Hananto and Ibham Veza
Computers 2025, 14(9), 365; https://doi.org/10.3390/computers14090365 - 2 Sep 2025
Abstract
Digital twins represent a transformative innovation for battery energy storage systems (BESS), offering real-time virtual replicas of physical batteries that enable accurate monitoring, predictive analytics, and advanced control strategies. These capabilities promise to significantly enhance system efficiency, reliability, and lifespan. Yet, despite the clear technical potential, large-scale deployment of digital twin-enabled battery systems faces critical governance barriers. This study identifies three major challenges: fragmented standards and lack of interoperability, weak or misaligned market incentives, and insufficient cybersecurity safeguards for interconnected systems. The central contribution of this research is the development of a comprehensive governance framework that aligns these three pillars—standards, market and regulatory incentives, and cybersecurity—into an integrated model. Findings indicate that harmonized standards reduce integration costs and build trust across vendors and operators, while supportive regulatory and market mechanisms can explicitly reward the benefits of digital twins, including improved reliability, extended battery life, and enhanced participation in energy markets. For example, simulation-based evidence suggests that digital twin-guided thermal and operational strategies can extend usable battery capacity by up to five percent, providing both technical and economic benefits. At the same time, embedding robust cybersecurity practices ensures that the adoption of digital twins does not introduce vulnerabilities that could threaten grid stability. Beyond identifying governance gaps, this study proposes an actionable implementation roadmap categorized into short-, medium-, and long-term strategies rather than fixed calendar dates, ensuring adaptability across different jurisdictions. Short-term actions include establishing terminology standards and piloting incentive programs. Medium-term measures involve mandating interoperability protocols and embedding digital twin requirements in market rules, and long-term strategies focus on achieving global harmonization and universal plug-and-play interoperability. International examples from Europe, North America, and Asia–Pacific illustrate how coordinated governance can accelerate adoption while safeguarding energy infrastructure. By combining technical analysis with policy and governance insights, this study advances both the scholarly and practical understanding of digital twin deployment in BESSs. The findings provide policymakers, regulators, industry leaders, and system operators with a clear framework to close governance gaps, maximize the value of digital twins, and enable more secure, reliable, and sustainable integration of energy storage into future power systems.
(This article belongs to the Section AI-Driven Innovations)
16 pages, 393 KB  
Article
Optimizing Pre-Silicon CPU Validation: Reducing Simulation Time with Unsupervised Machine Learning and Statistical Analysis
by Victor Rodriguez-Bahena, Luis Pizano-Escalante, Omar Longoria-Gandara and Luis F Gutierrez-Preciado
Computers 2025, 14(9), 364; https://doi.org/10.3390/computers14090364 - 1 Sep 2025
Abstract
In modern processor development, extensive simulation is required before manufacturing to ensure that Central Processing Unit (CPU) designs function correctly and efficiently. This pre-silicon validation process involves running a wide range of software workloads on architectural models to identify potential issues early in the design cycle. Improving pre-silicon simulation time is critical for accelerating CPU development and reducing time-to-market for high-quality processors. This study addresses the computational challenges of validating full-system simulations by leveraging unsupervised machine learning to optimize test case selection. By identifying patterns in executed instructions, the approach reduces the need for exhaustive simulations while maintaining rigorous validation standards. Notably, the optimized subset of test cases reduced simulation time by a factor of 10 and captured 97.5% of the maximum instruction entropy, ensuring nearly the same diversity in instruction coverage as the full workload set. The combination of Principal Component Analysis (PCA) and clustering algorithms effectively distinguished compute-bound and memory-bound workloads without requiring prior knowledge of the code. Statistical Model Checking with entropy-based analysis confirmed the effectiveness of this subset. This methodology significantly reduces validation effort, expedites CPU design cycles, and improves hardware efficiency. The findings highlight the potential of machine learning-driven validation strategies to enhance pre-silicon testing, enabling faster innovation and more robust processor architectures.
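The selection idea can be sketched with scikit-learn: project per-test instruction-mix vectors with PCA, cluster them, and simulate only one representative per cluster. The synthetic features below stand in for the paper's richer workload data.

```python
# Sketch of PCA + clustering for representative test-case selection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# rows = test cases, cols = normalized counts per instruction class (toy data)
features = np.vstack([rng.dirichlet(np.ones(8)) for _ in range(200)])

reduced = PCA(n_components=2).fit_transform(features)
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(reduced)

# pick the test case closest to each centroid as the cluster representative
selected = [int(np.argmin(np.linalg.norm(reduced - c, axis=1)))
            for c in km.cluster_centers_]
print(f"simulate {len(selected)} of {len(features)} tests: {sorted(selected)}")
```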
26 pages, 3739 KB  
Article
Enhancing Software Usability Through LLMs: A Prompting and Fine-Tuning Framework for Analyzing Negative User Feedback
by Nahed Alsaleh, Reem Alnanih and Nahed Alowidi
Computers 2025, 14(9), 363; https://doi.org/10.3390/computers14090363 - 1 Sep 2025
Abstract
In today’s competitive digital landscape, application usability plays a critical role in user satisfaction and retention. Negative user reviews offer valuable insights into real-world usability issues, yet traditional analysis methods often fall short in scalability and contextual understanding. This paper proposes an intelligent framework that utilizes large language models (LLMs), including GPT-4, Gemini, and BLOOM, to automate the extraction of actionable usability recommendations from negative app reviews. By applying prompting and fine-tuning techniques, the framework transforms unstructured feedback into meaningful suggestions aligned with three core usability dimensions: correctness, completeness, and satisfaction. A manually annotated dataset of Instagram negative reviews was used to evaluate model performance. Results show that GPT-4 consistently outperformed other models, achieving BLEU scores up to 0.64, ROUGE scores up to 0.80, and METEOR scores up to 0.90—demonstrating high semantic accuracy and contextual relevance in generated recommendations. Gemini and BLOOM, while improved through fine-tuning, showed significantly lower performance. This study also introduces a practical, web-based tool that enables real-time review analysis and recommendation generation, supporting data-driven, user-centered software development. These findings illustrate the potential of LLM-based frameworks to enhance software usability analysis and accelerate feedback-driven design processes.
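For flavor, a prompting step of this kind might build an instruction like the one below before calling a vendor SDK; the template wording is an assumption based only on the three usability dimensions named in the abstract.

```python
# Hypothetical prompt construction for usability-recommendation extraction.
def build_prompt(review_text):
    return (
        "You are a software usability analyst. From the negative app review "
        "below, produce one actionable recommendation for each usability "
        "dimension: correctness, completeness, and satisfaction.\n\n"
        f"Review: {review_text}\n\n"
        "Answer as three labeled bullet points."
    )

review = "The app keeps crashing when I upload a photo, and there is no undo."
print(build_prompt(review))   # send to GPT-4/Gemini/BLOOM via the vendor SDK
```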
12 pages, 421 KB  
Article
A Graph Attention Network Combining Multifaceted Element Relationships for Full Document-Level Understanding
by Lorenzo Vaiani, Davide Napolitano and Luca Cagliero
Computers 2025, 14(9), 362; https://doi.org/10.3390/computers14090362 - 1 Sep 2025
Abstract
Question answering from visually rich documents (VRDs) is the task of retrieving the correct answer to a natural language question by considering the content of textual and visual elements in the document, as well as the pages’ layout. To answer closed-ended questions that require a deep understanding of the hierarchical relationships between the elements, i.e., the full document-level understanding (FDU) task, state-of-the-art graph-based approaches model the pairwise element relationships as a graph. Although they incorporate logical links (e.g., a caption refers to a figure) and spatial ones (e.g., a caption is placed below the figure), they currently disregard the semantic similarity among multimodal document elements, thus potentially yielding suboptimal scoring of the elements’ relevance to the input question. In this paper, we propose GRAS-FDU, a new graph attention network tailored to FDU. GRAS-FDU is trained to jointly consider multiple document facets, i.e., the logical, spatial, and semantic elements’ relationships. The results show that our approach achieves superior performance compared to several baseline methods.
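For reference, a minimal single-head graph attention layer (the building block such networks extend) can be written in plain numpy; the toy adjacency and sizes below are illustrative, and the paper's multi-facet edge types are omitted.

```python
# Single-head graph attention layer (Velickovic et al.) in plain numpy.
import numpy as np

rng = np.random.default_rng(0)
N, F_in, F_out = 4, 8, 6                  # nodes (document elements), dims
H = rng.normal(size=(N, F_in))            # element embeddings
A = np.array([[1, 1, 1, 0],               # toy adjacency with self-loops
              [1, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]])

W = rng.normal(size=(F_in, F_out))        # shared linear transform
a = rng.normal(size=(2 * F_out,))         # attention vector

Wh = H @ W
# e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) on existing edges only
e = np.full((N, N), -np.inf)
for i in range(N):
    for j in range(N):
        if A[i, j]:
            z = np.concatenate([Wh[i], Wh[j]]) @ a
            e[i, j] = z if z > 0 else 0.2 * z        # LeakyReLU

alpha = np.exp(e - e.max(axis=1, keepdims=True))     # row-wise softmax
alpha /= alpha.sum(axis=1, keepdims=True)
H_out = alpha @ Wh                                   # attention-weighted update
print(H_out.shape)                                   # (4, 6)
```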
15 pages, 1780 KB  
Article
Prosodic Spatio-Temporal Feature Fusion with Attention Mechanisms for Speech Emotion Recognition
by Kristiawan Nugroho, Imam Husni Al Amin, Nina Anggraeni Noviasari and De Rosal Ignatius Moses Setiadi
Computers 2025, 14(9), 361; https://doi.org/10.3390/computers14090361 - 31 Aug 2025
Abstract
Speech Emotion Recognition (SER) plays a vital role in supporting applications such as healthcare, human–computer interaction, and security. However, many existing approaches still face challenges in achieving robust generalization and maintaining high recall, particularly for emotions related to stress and anxiety. This study proposes a dual-stream hybrid model that combines prosodic features with spatio-temporal representations derived from the Multitaper Mel-Frequency Spectrogram (MTMFS) and the Constant-Q Transform Spectrogram (CQTS). Prosodic cues, including pitch, intensity, jitter, shimmer, HNR, pause rate, and speech rate, were processed using dense layers, while MTMFS and CQTS features were encoded with CNN and BiGRU. A Multi-Head Attention mechanism was then applied to adaptively fuse the two feature streams, allowing the model to focus on the most relevant emotional cues. Evaluations conducted on the RAVDESS dataset with subject-independent 5-fold cross-validation demonstrated an accuracy of 97.64% and a macro F1-score of 0.9745. These results confirm that combining prosodic and advanced spectrogram features with attention-based fusion improves precision, recall, and overall robustness, offering a promising framework for more reliable SER systems.
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
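A compressed PyTorch sketch of the dual-stream fusion idea: a prosodic vector queries spectrogram-derived frames through multi-head attention. Layer sizes are invented, and the paper's CNN/BiGRU encoders are reduced to a single GRU stub here.

```python
# Two-stream fusion with multi-head attention, stated as a minimal sketch.
import torch
import torch.nn as nn

class FusionSER(nn.Module):
    def __init__(self, d=64, n_classes=8):
        super().__init__()
        self.prosodic = nn.Sequential(nn.Linear(7, d), nn.ReLU())  # 7 prosodic cues
        self.spectro = nn.GRU(input_size=40, hidden_size=d, batch_first=True)
        self.attn = nn.MultiheadAttention(embed_dim=d, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(d, n_classes)

    def forward(self, prosody, spectrogram):
        q = self.prosodic(prosody).unsqueeze(1)       # (B, 1, d) query
        kv, _ = self.spectro(spectrogram)             # (B, T, d) keys/values
        fused, _ = self.attn(q, kv, kv)               # prosody attends to frames
        return self.head(fused.squeeze(1))

model = FusionSER()
logits = model(torch.randn(2, 7), torch.randn(2, 100, 40))
print(logits.shape)  # torch.Size([2, 8])
```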
21 pages, 7375 KB  
Article
Real-Time Face Mask Detection Using Federated Learning
by Tudor-Mihai David and Mihai Udrescu
Computers 2025, 14(9), 360; https://doi.org/10.3390/computers14090360 - 31 Aug 2025
Abstract
Epidemics caused by respiratory infections have become a global and systemic threat since humankind has become highly connected via modern transportation systems. Any new pathogen with human-to-human transmission capabilities has the potential to cause public health disasters and severe disruptions of social and economic activities. During the COVID-19 pandemic, we learned that proper mask-wearing in closed, restricted areas was one of the measures that worked to mitigate the spread of respiratory infections while allowing for continuing economic activity. Previous research approached this issue by designing hardware–software systems that determine whether individuals in the surveilled restricted area are using a mask; however, most such solutions are centralized, thus requiring massive computational resources, which makes them hard to scale up. To address such issues, this paper proposes a novel decentralized, federated learning (FL) solution to mask-wearing detection that instantiates our lightweight version of the MobileNetV2 model. The FL solution also ensures individual privacy, given that images remain at the local, device level. Importantly, we obtained a mask-wearing training accuracy of 98% (i.e., similar to centralized machine learning solutions) after only eight rounds of communication with 25 clients. We rigorously proved the reliability and robustness of our approach after repeated K-fold cross-validation.
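The aggregation at the heart of such a federated setup is FedAvg: clients train locally and the server averages their weights by dataset size. A minimal sketch follows, with synthetic layer shapes standing in for the lightweight MobileNetV2 variant.

```python
# FedAvg aggregation step over locally trained client weights.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Average each layer across clients, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

rng = np.random.default_rng(1)
clients = [[rng.normal(size=(3, 3)), rng.normal(size=3)] for _ in range(25)]
sizes = [rng.integers(50, 500) for _ in range(25)]

global_model = fedavg(clients, sizes)    # one of the communication rounds
print([layer.shape for layer in global_model])  # [(3, 3), (3,)]
```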
37 pages, 2381 KB  
Review
Exploring the Synergy Between Ethereum Layer 2 Solutions and Machine Learning to Improve Blockchain Scalability
by Andrada Cristina Artenie, Diana Laura Silaghi and Daniela Elena Popescu
Computers 2025, 14(9), 359; https://doi.org/10.3390/computers14090359 - 29 Aug 2025
Abstract
Blockchain technologies, despite their profound transformative potential across multiple industries, continue to face significant scalability challenges. These limitations are primarily observed in restricted transaction throughput and elevated latency, which hinder the ability of blockchain networks to support widespread adoption and high-volume applications. To address these issues, research has predominantly focused on Layer 1 solutions that seek to improve blockchain performance through fundamental modifications to the core protocol and architectural design. Alternatively, Layer 2 solutions enable off-chain transaction processing, increasing throughput and reducing costs while maintaining the security of the base layer. Despite their advantages, Layer 2 approaches are less explored in the literature. To address this gap, this review conducts an in-depth analysis on Ethereum Layer 2 frameworks, emphasizing their integration with machine-learning techniques, with the goal of promoting the prevailing best practices and emerging applications; this review also identifies key technical and operational challenges hindering widespread adoption.
26 pages, 1446 KB  
Article
Privacy Threats and Privacy Preservation in Multiple Data Releases of High-Dimensional Datasets
by Surapon Riyana
Computers 2025, 14(9), 358; https://doi.org/10.3390/computers14090358 - 29 Aug 2025
Abstract
Determining how to balance data utility and data privacy when datasets are released to be utilized outside the scope of data-collecting organizations constitutes a major challenge. To achieve this aim, several privacy preservation models have been proposed, such as k-Anonymity and l-Diversity. Unfortunately, these privacy preservation models may be insufficient to address privacy violation issues in datasets that have high-dimensional attributes. For this reason, the privacy preservation models km-Anonymity and LKC-Privacy have been proposed for addressing privacy violation issues in high-dimensional datasets. However, these privacy preservation models still exhibit privacy violation issues under data comparison attacks, and they further have data utility issues that must be addressed. Therefore, a privacy preservation model that addresses privacy violation issues in high-dimensional datasets is proposed in this work, such that released datasets raise no concerns about privacy violations from data comparison attacks while remaining highly efficient and effective in data maintenance. Furthermore, we show that the proposed model is efficient and effective through extensive experiments.
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
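For context, the classic k-Anonymity check that this line of work builds on can be stated in a few lines: every combination of quasi-identifier values must occur at least k times in the released table. The records and attribute names below are illustrative.

```python
# Toy k-anonymity check over a released table.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

table = [
    {"age": "30-39", "zip": "981**", "disease": "flu"},
    {"age": "30-39", "zip": "981**", "disease": "cold"},
    {"age": "40-49", "zip": "982**", "disease": "flu"},
]
print(is_k_anonymous(table, ["age", "zip"], k=2))  # False: lone 40-49/982** row
```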
21 pages, 4084 KB  
Article
Integration of Cloud-Based Central Telemedicine System and IoT Device Networks
by Parin Sornlertlamvanich, Chatdanai Phakaket, Panya Hantula, Sarunya Kanjanawattana, Nuntawut Kaoungku and Komsan Srivisut
Computers 2025, 14(9), 357; https://doi.org/10.3390/computers14090357 - 29 Aug 2025
Abstract
The growing challenges in healthcare services, such as hospital congestion and a persistent shortage of medical personnel, significantly impede effective service delivery. This particularly complicates the continuous monitoring of patients with chronic diseases. Internet of Things (IoT)-based telemonitoring systems offer a promising solution to alleviate these challenges. However, transmitting sensitive and confidential patient health data requires a strong focus on end-to-end security. This includes securing sensitive data within the patient’s home network, during internet transmission, at the endpoint system, and throughout data management. In this study, we propose a secure and scalable architecture for a remote health monitoring system that integrates telemedicine technology with the IoT. The proposed solution includes a portable remote health monitoring device, an IoT Gateway appliance (IoT GW) in the patient’s home, and an IoT Application Gateway Endpoint (App GW Endpoint) on a cloud infrastructure. A secure communication channel was established by implementing a multi-layered security protocol stack that uses HTTPS over Quick UDP Internet Connection (QUIC), with a focus on optimal security and compatibility, prioritizing cipher suites for data confidentiality and device authentication. The cloud architecture is designed based on the Well-Architected Framework principles to ensure security, high availability, and scalability. Our study shows that storing patient health information is reliable and efficient. Furthermore, the results for processing and transmission times clearly demonstrate that the additional encryption mechanisms have a negligible effect on data transmission latency, while significantly improving security.
(This article belongs to the Section Cloud Continuum and Enabled Applications)
Show Figures

Figure 1
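
The abstract above emphasizes cipher-suite prioritization and device authentication. As a rough illustration of that layer only, here is a minimal sketch using Python's standard ssl module; the certificate file paths are hypothetical, and HTTPS over QUIC (HTTP/3) would additionally require a dedicated library, which is not shown.

```python
# Sketch of a TLS client context with prioritized cipher suites and mutual
# (device) authentication. File paths are hypothetical placeholders.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2
# Restrict TLS 1.2 negotiation to AEAD suites with forward secrecy.
context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
# Verify the cloud endpoint against a pinned CA certificate.
context.load_verify_locations(cafile="endpoint-ca.pem")
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True
# Present the device's own certificate for client (device) authentication.
context.load_cert_chain(certfile="device-cert.pem", keyfile="device-key.pem")
```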

16 pages, 2074 KB  
Article
Benchmarking Control Strategies for Multi-Component Degradation (MCD) Detection in Digital Twin (DT) Applications
by Atuahene Kwasi Barimah, Akhtar Jahanzeb, Octavian Niculita, Andrew Cowell and Don McGlinchey
Computers 2025, 14(9), 356; https://doi.org/10.3390/computers14090356 - 29 Aug 2025
Abstract
Digital Twins (DTs) have become central to intelligent asset management within Industry 4.0, enabling real-time monitoring, diagnostics, and predictive maintenance. However, implementing Prognostics and Health Management (PHM) strategies within DT frameworks remains a significant challenge, particularly in systems experiencing multi-component degradation (MCD). MCD [...] Read more.
Digital Twins (DTs) have become central to intelligent asset management within Industry 4.0, enabling real-time monitoring, diagnostics, and predictive maintenance. However, implementing Prognostics and Health Management (PHM) strategies within DT frameworks remains a significant challenge, particularly in systems experiencing multi-component degradation (MCD). MCD occurs when several components degrade simultaneously or interactively, complicating detection and isolation. Traditional data-driven fault detection models often require extensive historical degradation data, which are costly, time-consuming, or difficult to obtain in many real-world scenarios. This paper proposes a model-based, control-driven approach to MCD detection that reduces the need for large training datasets by leveraging reference-tracking performance in closed-loop control systems. We benchmark the accuracy of four control strategies, namely Proportional-Integral (PI), Linear Quadratic Regulator (LQR), Model Predictive Control (MPC), and a hybrid model, within a Digital Twin-enabled hydraulic system testbed comprising multiple components, including pumps, valves, nozzles, and filters. The control strategies are evaluated under various MCD scenarios for their ability to accurately detect and isolate degradation events. Simulation results indicate that the hybrid model consistently outperforms the individual control strategies, achieving an average accuracy of 95.76% under simultaneous pump and nozzle degradation. The LQR model also demonstrated strong predictive performance, especially in identifying degradation in components such as nozzles and pumps. In addition, the sequence and interaction of faults were found to influence detection accuracy, highlighting how fault-sequence complexity affects the performance of diagnostic strategies. This work contributes to PHM and DT research by introducing a scalable, data-efficient methodology for MCD detection that integrates seamlessly into existing DT architectures using containerized RESTful APIs. By shifting from data-dependent to model-informed diagnostics, the proposed approach enhances early fault detection and shortens deployment timelines for real-world DT-enabled PHM applications (a tracking-error sketch follows this listing). Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
Show Figures

Figure 1
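
To illustrate the core idea of detecting degradation from reference-tracking performance rather than historical fault data, here is a minimal sketch: a PI controller regulates a first-order plant, and a sustained steady-state tracking residual flags degradation. The plant model, gains, actuator limit, and threshold are all illustrative, not the article's actual hydraulic testbed.

```python
# Degradation detection from closed-loop tracking error (illustrative).
import numpy as np

def simulate(gain_loss, steps=500, dt=0.01, ref=1.0):
    """Steady-state tracking residual of a PI loop on a first-order plant."""
    kp, ki, u_max = 2.0, 5.0, 1.5   # PI gains and actuator limit (illustrative)
    x, integ = 0.0, 0.0             # plant state, integral of error
    errors = []
    for _ in range(steps):
        e = ref - x
        u = kp * e + ki * integ
        if u >= u_max:
            u = u_max               # saturation: freeze integrator (anti-windup)
        else:
            integ += e * dt
        # First-order plant; 'gain_loss' models component degradation.
        x += dt * (-x + (1.0 - gain_loss) * u)
        errors.append(abs(e))
    return float(np.mean(errors[-100:]))

THRESHOLD = 0.05                    # illustrative residual threshold
for loss in (0.0, 0.3, 0.6):
    residual = simulate(loss)
    status = "DEGRADED" if residual > THRESHOLD else "healthy"
    print(f"gain loss {loss:.1f}: residual {residual:.3f} -> {status}")
```

Mild degradation stays within the controller's compensation authority and goes undetected, while severe degradation saturates the actuator and leaves a persistent residual, which mirrors why fault severity and sequence influence detection accuracy in the study above.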

19 pages, 5181 KB  
Article
Remote Code Execution via Log4J MBeans: Case Study of Apache ActiveMQ (CVE-2022-41678)
by Alexandru Răzvan Căciulescu, Matei Bădănoiu, Răzvan Rughiniș and Dinu Țurcanu
Computers 2025, 14(9), 355; https://doi.org/10.3390/computers14090355 - 28 Aug 2025
Abstract
Java Management Extensions (JMX) are indispensable for managing and administrating Java software solutions, yet when exposed through HTTP bridges such as Jolokia they can radically enlarge an application’s attack surface. This paper presents the first in-depth analysis of CVE-2022-41678, a vulnerability discovered by [...] Read more.
Java Management Extensions (JMX) are indispensable for managing and administrating Java software solutions, yet when exposed through HTTP bridges such as Jolokia they can radically enlarge an application’s attack surface. This paper presents the first in-depth analysis of CVE-2022-41678, a vulnerability discovered by the authors in Apache ActiveMQ that combines Jolokia’s remote JMX access with Log4J2 management beans to achieve full remote code execution. Using a default installation testbed, we enumerate the Log4J MBeans surfaced by Jolokia, demonstrate arbitrary file read, file write, and server-side request forgery primitives, and finally leverage the file write capability to obtain a shell, all via authenticated HTTP(S) requests only. The end-to-end exploit chain requires no deserialization gadgets and is unaffected by prior Log4Shell mitigations. We have also automated the entire exploit process via proof-of-concept scripts on a stock ActiveMQ 5.17.1 instance. We discuss the broader security implications for any software exposing JMX-managed or Jolokia-managed Log4J contexts, provide concrete hardening guidelines, and outline design directions for safer remote-management stacks. The findings underscore that even “benign” management beans can become critical when surfaced through ubiquitous HTTP management gateways (an MBean enumeration sketch follows this listing). Full article
Show Figures

Figure 1
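
The enumeration step described above can be illustrated with a short sketch against Jolokia's search operation. The URL, credentials, and Origin header below reflect a stock ActiveMQ installation but are assumptions, and any such probing should only be run against systems you are authorized to test; the exploit chain itself is deliberately not reproduced here.

```python
# Sketch: list Log4j2 MBeans exposed through ActiveMQ's Jolokia bridge.
# Host, port, path, and admin/admin credentials are stock-install assumptions.
import requests

JOLOKIA = "http://localhost:8161/api/jolokia"
payload = {"type": "search", "mbean": "org.apache.logging.log4j2:*"}
resp = requests.post(
    JOLOKIA,
    json=payload,
    auth=("admin", "admin"),
    headers={"Origin": "http://localhost"},  # some builds enforce CORS origin checks
    timeout=10,
)
for mbean in resp.json().get("value", []):
    print(mbean)
```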

44 pages, 4216 KB  
Article
Legal AI in Low-Resource Languages: Building and Evaluating QA Systems for the Kazakh Legislation
by Diana Rakhimova, Assem Turarbek, Vladislav Karyukin, Assiya Sarsenbayeva and Rashid Alieyev
Computers 2025, 14(9), 354; https://doi.org/10.3390/computers14090354 - 27 Aug 2025
Abstract
The research focuses on the development and evaluation of a legal question–answer system for the Kazakh language, a low-resource and morphologically complex language. Four datasets were compiled from open legal sources (Adilet, Zqai, Gov, and a manually created synthetic set), containing question–answer pairs extracted from [...] Read more.
The research focuses on the development and evaluation of a legal question–answer system for the Kazakh language, a low-resource and morphologically complex language. Four datasets were compiled from open legal sources (Adilet, Zqai, Gov, and a manually created synthetic set), containing question–answer pairs extracted from official legislative documents and government portals. Seven large language models (GPT-4o mini, GEMMA, KazLLM, LLaMA, Phi, Qwen, and Mistral) were fine-tuned using structured prompt templates, quantization methods, and domain-specific training to enhance contextual understanding and efficiency. The evaluation employed both automatic metrics (ROUGE and METEOR) and expert-based manual assessment. GPT-4o mini achieved the highest overall performance, with ROUGE-1: 0.309, ROUGE-2: 0.175, ROUGE-L: 0.263, and METEOR: 0.320, and received an expert score of 3.96, indicating strong legal reasoning capabilities and adaptability to Kazakh legal contexts. The results highlight GPT-4o mini’s superiority over the other tested models in both quantitative and qualitative evaluations. This work demonstrates the feasibility and importance of developing localized legal AI solutions for low-resource languages, contributing to improved legal accessibility, transparency, and digital governance in Kazakhstan (an evaluation-metric sketch follows this listing). Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
Show Figures

Figure 1
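
For readers unfamiliar with the automatic metrics used above, here is a minimal sketch computing ROUGE-1/2/L and METEOR with the rouge-score and NLTK packages (pip install rouge-score nltk). The reference and hypothesis strings are English placeholders, not examples from the paper, and the English stemmer would not apply to Kazakh text.

```python
# Sketch of ROUGE and METEOR scoring for a QA system's generated answer.
import nltk
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

nltk.download("wordnet", quiet=True)  # METEOR relies on WordNet

reference = "the contract must be registered with the authorized body"
hypothesis = "the contract must be registered with the relevant authority"

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, hypothesis)
meteor = meteor_score([reference.split()], hypothesis.split())  # pre-tokenized

for name, score in rouge.items():
    print(f"{name}: {score.fmeasure:.3f}")
print(f"METEOR: {meteor:.3f}")
```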

19 pages, 2518 KB  
Article
An Intelligent Hybrid AI Course Recommendation Framework Integrating BERT Embeddings and Random Forest Classification
by Armaneesa Naaman Hasoon, Salwa Khalid Abdulateef, R. S. Abdulameer and Moceheb Lazam Shuwandy
Computers 2025, 14(9), 353; https://doi.org/10.3390/computers14090353 - 27 Aug 2025
Abstract
With the proliferation of online learning platforms, selecting appropriate artificial intelligence (AI) courses has become increasingly complex for learners. This study proposes a novel hybrid AI course recommendation framework that integrates Term Frequency–Inverse Document Frequency (TF-IDF) and Bidirectional Encoder Representations from Transformers (BERT) [...] Read more.
With the proliferation of online learning platforms, selecting appropriate artificial intelligence (AI) courses has become increasingly complex for learners. This study proposes a novel hybrid AI course recommendation framework that integrates Term Frequency–Inverse Document Frequency (TF-IDF) and Bidirectional Encoder Representations from Transformers (BERT) for robust textual feature extraction, enhanced by a Random Forest classifier to improve recommendation precision. A curated dataset of 2238 AI-related courses from Udemy was constructed through multi-session web scraping, followed by comprehensive data preprocessing. The system computes semantic and lexical similarity using cosine similarity and fuzzy matching to handle variations in user input. Experimental results demonstrate strong performance, with a recommendation accuracy of 91.25%, precision of 96.63%, and an F1-score of 90.77%. Compared with baseline models, the proposed framework significantly improves performance in cold-start scenarios and does not rely on historical user interactions. A Flask-based web application was developed for real-time deployment, offering instant, user-friendly recommendations. This work contributes a scalable, metadata-driven AI recommender architecture with practical deployment and promising generalization capabilities (a similarity-matching sketch follows this listing). Full article
(This article belongs to the Section AI-Driven Innovations)
Show Figures

Figure 1
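
To illustrate the lexical half of the hybrid pipeline described above, here is a minimal sketch combining TF-IDF cosine similarity with fuzzy matching to absorb typos in the user query. The course titles are made up, and the BERT-embedding and Random Forest stages of the actual framework are not reproduced here.

```python
# Sketch: typo-tolerant lexical course matching via fuzzy correction + TF-IDF.
from difflib import get_close_matches
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

courses = [
    "Deep Learning with PyTorch",
    "Natural Language Processing with Transformers",
    "Machine Learning for Beginners",
]

query = "natural language procesing"  # note the typo
# Fuzzy-correct each query token against the course-title vocabulary.
vocab = {word.lower() for title in courses for word in title.split()}
corrected = " ".join(
    (get_close_matches(w, vocab, n=1, cutoff=0.8) or [w])[0]
    for w in query.lower().split()
)

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(courses + [corrected])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = scores.argmax()
print(f"query -> '{corrected}'  best match: {courses[best]} ({scores[best]:.2f})")
```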
