Journal Description
Computers
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.3 days after submission; accepted papers are published within 3.8 days of acceptance (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 4.2 (2024)
5-Year Impact Factor: 3.5 (2024)
Latest Articles
Electromagnetic Field Distribution Mapping: A Taxonomy and Comprehensive Review of Computational and Machine Learning Methods
Computers 2025, 14(9), 373; https://doi.org/10.3390/computers14090373 - 5 Sep 2025
Abstract
Electromagnetic field (EMF) exposure mapping is increasingly important for ensuring compliance with safety regulations, supporting the deployment of next-generation wireless networks, and addressing public health concerns. While numerous surveys have addressed specific aspects of radio propagation or radio environment maps, a comprehensive and unified overview of EMF mapping methodologies has been lacking. This review bridges that gap by systematically analyzing computational, geospatial, and machine learning approaches used for EMF exposure mapping across both wireless communication engineering and public health domains. A novel taxonomy is introduced to clarify overlapping terminology—encompassing radio maps, radio environment maps, and EMF exposure maps—and to classify construction methods, including analytical models, model-based interpolation, and data-driven learning techniques. In addition, the review highlights domain-specific challenges such as indoor versus outdoor mapping, data sparsity, and model generalization, while identifying emerging opportunities in hybrid modeling, big data integration, and explainable AI. By combining perspectives from communication engineering and public health, this work provides a broader and more interdisciplinary synthesis than previous surveys, offering a structured reference and roadmap for advancing robust, scalable, and socially relevant EMF mapping frameworks.
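As an illustration of the model-based interpolation family the taxonomy covers, sparse field measurements can be turned into a map with inverse-distance weighting; the coordinates and field values below are invented for the sketch and are not taken from the review:

```python
import numpy as np

def idw_interpolate(sample_xy, sample_vals, query_xy, power=2.0, eps=1e-12):
    """Estimate field strength at query points from sparse measurements,
    weighting each sample by 1 / distance**power (closer samples dominate)."""
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)
    return (w * sample_vals).sum(axis=1) / w.sum(axis=1)

# Four hypothetical EMF measurements at (x, y), values in V/m
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([2.0, 4.0, 4.0, 6.0])
grid = np.array([[0.5, 0.5]])   # one query point at the centre of the square
est = idw_interpolate(xy, vals, grid)
```

At the centre all four samples are equidistant, so the estimate reduces to their plain mean; on a dense grid the same call produces a full exposure map.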
Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
Open Access Article
Explainable Deep Kernel Learning for Interpretable Automatic Modulation Classification
by
Carlos Enrique Mosquera-Trujillo, Juan Camilo Lugo-Rojas, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(9), 372; https://doi.org/10.3390/computers14090372 - 5 Sep 2025
Abstract
Modern wireless communication systems increasingly rely on Automatic Modulation Classification (AMC) to enhance reliability and adaptability, especially in the presence of severe signal degradation. However, despite significant progress driven by deep learning, many AMC models still struggle with high computational overhead, suboptimal performance under low-signal-to-noise conditions, and limited interpretability, factors that hinder their deployment in real-time, resource-constrained environments. To address these challenges, we propose the Convolutional Random Fourier Features with Denoising Thresholding Network (CRFFDT-Net), a compact and interpretable deep kernel architecture that integrates Convolutional Random Fourier Features (CRFFSinCos), an automatic threshold-based denoising module, and a hybrid time-domain feature extractor composed of CNN and GRU layers. Our approach is validated on the RadioML 2016.10A benchmark dataset, encompassing eleven modulation types across a wide signal-to-noise ratio (SNR) spectrum. Experimental results demonstrate that CRFFDT-Net achieves an average classification accuracy that is statistically comparable to state-of-the-art models, while requiring significantly fewer parameters and offering lower inference latency. This highlights an exceptional accuracy–complexity trade-off. Moreover, interpretability analysis using GradCAM++ highlights the pivotal role of the Convolutional Random Fourier Features in the representation learning process, providing valuable insight into the model’s decision-making. These results underscore the promise of CRFFDT-Net as a lightweight and explainable solution for AMC in real-world, low-power communication systems.
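The random Fourier feature idea behind layers such as CRFFSinCos can be sketched in its textbook form: random projections followed by a cosine with random phase give an explicit feature map whose inner product approximates an RBF kernel. This is the generic approximation, not the authors' trainable layer, and all sizes and the kernel width are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, D = 0.5, 2000   # kernel width and number of random features

def rff(X, W, b):
    """Feature map z(x) with z(x)·z(y) ≈ exp(-gamma * ||x - y||^2)."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

d = 3
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))  # frequencies ~ N(0, 2*gamma)
b = rng.uniform(0, 2 * np.pi, size=D)                  # random phases

x, y = rng.normal(size=(2, d))
approx = (rff(x[None], W, b) @ rff(y[None], W, b).T)[0, 0]
exact = np.exp(-gamma * np.sum((x - y) ** 2))
```

The approximation error shrinks roughly as 1/sqrt(D); making `W` learnable is what turns this into a deep kernel layer.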
Full article
(This article belongs to the Special Issue AI in Complex Engineering Systems)
Open Access Article
The Complexity of eHealth Architecture: Lessons Learned from Application Use Cases
by
Annalisa Barsotti, Armin Gerl, Sebastian Wilhelm, Massimiliano Donati, Stefano Dalmiani and Claudio Passino
Computers 2025, 14(9), 371; https://doi.org/10.3390/computers14090371 - 4 Sep 2025
Abstract
The rapid evolution of eHealth technologies has revolutionized healthcare, enabling data-driven decision-making and personalized care. Central to this transformation is interoperability, which ensures seamless communication among heterogeneous systems. This paper explores the critical role of interoperability, data management processes, and the use of international standards in enabling integrated healthcare solutions. We present an overview of interoperability dimensions—technical, semantic, and organizational—and align them with data management phases in a concise eHealth architecture. Furthermore, we examine two practical European use cases to demonstrate the extent of the proposed eHealth architecture, involving patients, environments, third parties, and healthcare providers.
Full article
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2025)
Open Access Article
Evaluating Interaction Capability in a Serious Game for Children with ASD: An Operability-Based Approach Aligned with ISO/IEC 25010:2023
by
Delia Isabel Carrión-León, Milton Paúl Lopez-Ramos, Luis Gonzalo Santillan-Valdiviezo, Damaris Sayonara Tanguila-Tapuy, Gina Marilyn Morocho-Santos, Raquel Johanna Moyano-Arias, María Elena Yautibug-Apugllón and Ana Eva Chacón-Luna
Computers 2025, 14(9), 370; https://doi.org/10.3390/computers14090370 - 4 Sep 2025
Abstract
Serious games for children with Autism Spectrum Disorder (ASD) require rigorous evaluation frameworks that capture neurodivergent interaction patterns. This pilot study designed, developed, and evaluated a serious game for children with ASD, focusing on operability assessment aligned with ISO/IEC 25010:2023 standards. A repeated-measures design involved ten children with ASD from the Carlos Garbay Special Education Institute in Riobamba, Ecuador, across 25 gameplay sessions. A bespoke operability algorithm incorporating four weighted components (ease of learning, user control, interface familiarity, and message comprehension) was developed through expert consultation with certified ASD therapists. Statistical analysis used linear mixed-effects models with Kenward–Roger correction, supplemented by thorough validation including split-half reliability and partial correlations. The operability metric demonstrated excellent internal consistency (split-half reliability = 0.94, 95% CI [0.88, 0.97]) and construct validity through partial correlations controlling for performance (difficulty: r_partial = 0.42, p = 0.037). Eighty percent of sessions achieved moderate-to-high operability levels (M = 45.07, SD = 10.52). Contrary to expectations, operability consistently improved with increasing difficulty level (Easy: M = 37.04; Medium: M = 48.71; Hard: M = 53.87), indicating that individuals with enhanced capabilities advanced to harder levels. Mixed-effects modeling indicated substantial difficulty effects (H = 9.36, p = 0.009, ε2 = 0.39). This pilot study establishes preliminary evidence for operability assessment in ASD serious games, requiring larger confirmatory validation studies (n ≥ 30) to establish broader generalizability and standardized instrument integration. The positive difficulty–operability association highlights the importance of adaptive game design in supporting skill progression.
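The split-half reliability reported above can be illustrated with a generic computation on synthetic item scores (the data below are invented; the Spearman–Brown correction is the standard adjustment for halving test length):

```python
import numpy as np

def split_half_reliability(items):
    """Correlate odd- and even-item half scores, then apply the
    Spearman-Brown correction to estimate full-length reliability."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Synthetic sessions: a latent trait plus per-item noise (8 items, 50 sessions)
rng = np.random.default_rng(1)
ability = rng.normal(size=50)
items = ability[:, None] + 0.5 * rng.normal(size=(50, 8))
rel = split_half_reliability(items)
```

With mostly-signal items the corrected coefficient lands near the 0.94 range the study reports; noisier items pull it down.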
Full article
(This article belongs to the Section Human–Computer Interactions)
Open Access Article
LightCross: A Lightweight Smart Contract Vulnerability Detection Tool
by
Ioannis Sfyrakis, Paolo Modesti, Lewis Golightly and Minaro Ikegima
Computers 2025, 14(9), 369; https://doi.org/10.3390/computers14090369 - 3 Sep 2025
Abstract
Blockchain and smart contracts have transformed industries by automating complex processes and transactions. However, this innovation has introduced significant security concerns, potentially leading to loss of financial assets and data integrity. The focus of this research is to address these challenges by developing a tool that can enable developers and testers to detect vulnerabilities in smart contracts in an efficient and reliable way. The research contributions include an analysis of existing literature on smart contract security, along with the design and implementation of a lightweight vulnerability detection tool called LightCross. This tool runs two well-known detectors, Slither and Mythril, to analyse smart contracts. Experimental analysis was conducted using the SmartBugs curated dataset, which contains 143 vulnerable smart contracts with a total of 206 vulnerabilities. The results showed that LightCross achieves the same detection rate as SmartBugs when using the same backend detectors (Slither and Mythril) while eliminating SmartBugs’ need for a separate Docker container for each detector. Mythril detects 53% and Slither 48% of the vulnerabilities in the SmartBugs curated dataset. Furthermore, an assessment of the execution time across various vulnerability categories revealed that LightCross performs comparably to SmartBugs when using the Mythril detector, while LightCross is significantly faster when using the Slither detector. Finally, to enhance user-friendliness and relevance, LightCross presents the verification results based on OpenSCV, a state-of-the-art academic classification of smart contract vulnerabilities, aligned with the industry-standard CWE and offering improvements over the unmaintained SWC taxonomy.
Full article
Open Access Article
ChatGPT in Early Childhood Science Education: Can It Offer Innovative Effective Solutions to Overcome Challenges?
by
Mustafa Uğraş, Zehra Çakır, Georgios Zacharis and Michail Kalogiannakis
Computers 2025, 14(9), 368; https://doi.org/10.3390/computers14090368 - 3 Sep 2025
Abstract
This study explores the potential of ChatGPT to address challenges in Early Childhood Science Education (ECSE) from the perspective of educators. A qualitative case study was conducted with 33 Early Childhood Education (ECE) teachers in Türkiye, using semi-structured interviews. Data were analyzed through content analysis with MAXQDA 24 software. The results indicate that ECE teachers perceive ChatGPT as a partial solution to the scarcity of educational resources, appreciating its ability to propose alternative material uses and creative activity ideas. Participants also recognized its potential to support differentiated instruction by suggesting activities tailored to children’s developmental needs. Furthermore, ChatGPT was seen as a useful tool for generating lesson plans and activity options, although concerns were expressed that overreliance on the tool might undermine teachers’ pedagogical skills. Additional limitations highlighted include dependence on technology, restricted access to digital tools, diminished interpersonal interactions, risks of misinformation, and ethical concerns. Overall, while educators acknowledged ChatGPT’s usefulness in supporting ECSE, they emphasized that its integration into teaching practice should be cautious and balanced, considering both its educational benefits and its limitations.
Full article
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)
Open Access Article
AI Gem: Context-Aware Transformer Agents as Digital Twin Tutors for Adaptive Learning
by
Attila Kovari
Computers 2025, 14(9), 367; https://doi.org/10.3390/computers14090367 - 2 Sep 2025
Abstract
Recent developments in large language models allow for real-time, context-aware tutoring. AI Gem, presented in this article, is a layered architecture that integrates personalization, adaptive feedback, and curricular alignment into transformer-based tutoring agents. The architecture combines retrieval-augmented generation, a Bayesian learner model, and policy-based dialog in a verifiable and deployable software stack. The opportunities are scalable tutoring, multimodal interaction, and augmentation of teachers through content tools and analytics. Risks include factual errors, bias, over-reliance, latency, cost, and privacy. The paper positions AI Gem as a design framework with testable hypotheses. A scenario-based walkthrough and new diagrams assign each learner step to the ten layers. Governance guidance covers data privacy across jurisdictions and operation in resource-constrained environments.
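The retrieval half of retrieval-augmented generation can be sketched as cosine-similarity lookup over embedded curriculum snippets (a generic sketch with made-up two-dimensional embeddings, not the AI Gem implementation):

```python
import numpy as np

def retrieve(query_vec, doc_vecs, k=2):
    """Return indices of the k snippets most similar to the query,
    ranked by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    D = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(-(D @ q))[:k]

# Three hypothetical curriculum-snippet embeddings
docs = np.array([[1.0, 0.0],
                 [0.9, 0.1],
                 [0.0, 1.0]])
top = retrieve(np.array([1.0, 0.05]), docs, k=2)
```

The retrieved snippets are then placed into the tutor prompt, which is what grounds the agent's answers in the curriculum rather than in the model's parametric memory alone.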
Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
Open Access Review
ChatGPT’s Expanding Horizons and Transformative Impact Across Domains: A Critical Review of Capabilities, Challenges, and Future Directions
by
Taiwo Raphael Feyijimi, John Ogbeleakhu Aliu, Ayodeji Emmanuel Oke and Douglas Omoregie Aghimien
Computers 2025, 14(9), 366; https://doi.org/10.3390/computers14090366 - 2 Sep 2025
Abstract
The rapid proliferation of Chat Generative Pre-trained Transformer (ChatGPT) marks a pivotal moment in artificial intelligence, eliciting responses from academic shock to industrial awe. As these technologies advance from passive tools toward proactive, agentic systems, their transformative potential and inherent risks are magnified globally. This paper presents a comprehensive, critical review of ChatGPT’s impact across five key domains: natural language understanding (NLU), content generation, knowledge discovery, education, and engineering. While ChatGPT demonstrates profound capabilities, significant challenges remain in factual accuracy, bias, and the inherent opacity of its reasoning—a core issue termed the “Black Box Conundrum”. To analyze these evolving dynamics and the implications of this shift toward autonomous agency, this review introduces a series of conceptual frameworks, each specifically designed to illuminate the complex interactions and trade-offs within these domains: the “Specialization vs. Generalization” tension in NLU; the “Quality–Scalability–Ethics Trilemma” in content creation; the “Pedagogical Adaptation Imperative” in education; and the emergence of “Human–LLM Cognitive Symbiosis” in engineering. The analysis reveals that the challenges of factual accuracy, bias, and opacity are interconnected, acutely magnified by the emergence of agentic systems, and demand a unified, proactive approach to adaptation across all sectors. Educational paradigms must shift to cultivate higher-order cognitive skills, while professional practices (including those within the education sector) must evolve to treat AI as a cognitive partner, leveraging techniques like Retrieval-Augmented Generation (RAG) and sophisticated prompt engineering. Ultimately, this paper argues for an overarching “Ethical–Technical Co-evolution Imperative”, charting a forward-looking research agenda that intertwines technological innovation with vigorous ethical and methodological standards to ensure responsible AI development and integration.
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
Open Access Article
Governance Framework for Intelligent Digital Twin Systems in Battery Storage: Aligning Standards, Market Incentives, and Cybersecurity for Decision Support of Digital Twin in BESS
by
April Lia Hananto and Ibham Veza
Computers 2025, 14(9), 365; https://doi.org/10.3390/computers14090365 - 2 Sep 2025
Abstract
Digital twins represent a transformative innovation for battery energy storage systems (BESS), offering real-time virtual replicas of physical batteries that enable accurate monitoring, predictive analytics, and advanced control strategies. These capabilities promise to significantly enhance system efficiency, reliability, and lifespan. Yet, despite the clear technical potential, large-scale deployment of digital twin-enabled battery systems faces critical governance barriers. This study identifies three major challenges: fragmented standards and lack of interoperability, weak or misaligned market incentives, and insufficient cybersecurity safeguards for interconnected systems. The central contribution of this research is the development of a comprehensive governance framework that aligns these three pillars—standards, market and regulatory incentives, and cybersecurity—into an integrated model. Findings indicate that harmonized standards reduce integration costs and build trust across vendors and operators, while supportive regulatory and market mechanisms can explicitly reward the benefits of digital twins, including improved reliability, extended battery life, and enhanced participation in energy markets. For example, simulation-based evidence suggests that digital twin-guided thermal and operational strategies can extend usable battery capacity by up to five percent, providing both technical and economic benefits. At the same time, embedding robust cybersecurity practices ensures that the adoption of digital twins does not introduce vulnerabilities that could threaten grid stability. Beyond identifying governance gaps, this study proposes an actionable implementation roadmap categorized into short-, medium-, and long-term strategies rather than fixed calendar dates, ensuring adaptability across different jurisdictions. Short-term actions include establishing terminology standards and piloting incentive programs. Medium-term measures involve mandating interoperability protocols and embedding digital twin requirements in market rules, and long-term strategies focus on achieving global harmonization and universal plug-and-play interoperability. International examples from Europe, North America, and Asia–Pacific illustrate how coordinated governance can accelerate adoption while safeguarding energy infrastructure. By combining technical analysis with policy and governance insights, this study advances both the scholarly and practical understanding of digital twin deployment in BESSs. The findings provide policymakers, regulators, industry leaders, and system operators with a clear framework to close governance gaps, maximize the value of digital twins, and enable more secure, reliable, and sustainable integration of energy storage into future power systems.
Full article
(This article belongs to the Section AI-Driven Innovations)
Open Access Article
Optimizing Pre-Silicon CPU Validation: Reducing Simulation Time with Unsupervised Machine Learning and Statistical Analysis
by
Victor Rodriguez-Bahena, Luis Pizano-Escalante, Omar Longoria-Gandara and Luis F Gutierrez-Preciado
Computers 2025, 14(9), 364; https://doi.org/10.3390/computers14090364 - 1 Sep 2025
Abstract
In modern processor development, extensive simulation is required before manufacturing to ensure that Central Processing Unit (CPU) designs function correctly and efficiently. This pre-silicon validation process involves running a wide range of software workloads on architectural models to identify potential issues early in the design cycle. Improving pre-silicon simulation time is critical for accelerating CPU development and reducing time-to-market for high-quality processors. This study addresses the computational challenges of validating full-system simulations by leveraging unsupervised machine learning to optimize test case selection. By identifying patterns in executed instructions, the approach reduces the need for exhaustive simulations while maintaining rigorous validation standards. Notably, the optimized subset of test cases reduced simulation time by a factor of 10 and captured 97.5% of the maximum instruction entropy, ensuring nearly the same diversity in instruction coverage as the full workload set. The combination of Principal Component Analysis (PCA) and clustering algorithms effectively distinguished compute-bound and memory-bound workloads without requiring prior knowledge of the code. Statistical Model Checking with entropy-based analysis confirmed the effectiveness of this subset. This methodology significantly reduces validation effort, expedites CPU design cycles, and improves hardware efficiency. The findings highlight the potential of machine learning-driven validation strategies to enhance pre-silicon testing, enabling faster innovation and more robust processor architectures.
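The PCA-plus-clustering selection step can be sketched generically in NumPy: project instruction-mix vectors onto their top principal components, cluster them, and keep one representative workload per cluster. The data and the plain k-means loop below are illustrative, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic instruction-mix vectors (ALU, load/store, branch fractions)
# drawn from two workload families: compute-bound and memory-bound
compute = rng.normal([0.8, 0.1, 0.1], 0.05, size=(20, 3))
memory = rng.normal([0.2, 0.7, 0.1], 0.05, size=(20, 3))
X = np.vstack([compute, memory])

# PCA via SVD of the centred data; project onto the top-2 components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Plain k-means with k = 2, initialised from two random points
centers = Z[rng.choice(len(Z), 2, replace=False)]
for _ in range(20):
    labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([Z[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(2)])

# Simulate one representative workload per cluster instead of all 40
reps = [int(np.argmin(((Z - c) ** 2).sum(-1))) for c in centers]
```

Because the two families separate cleanly in PC space, simulating only the two representatives preserves most of the instruction diversity of the full set, which is the intuition behind the reported 10x speedup.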
Full article
Open Access Article
Enhancing Software Usability Through LLMs: A Prompting and Fine-Tuning Framework for Analyzing Negative User Feedback
by
Nahed Alsaleh, Reem Alnanih and Nahed Alowidi
Computers 2025, 14(9), 363; https://doi.org/10.3390/computers14090363 - 1 Sep 2025
Abstract
In today’s competitive digital landscape, application usability plays a critical role in user satisfaction and retention. Negative user reviews offer valuable insights into real-world usability issues, yet traditional analysis methods often fall short in scalability and contextual understanding. This paper proposes an intelligent framework that utilizes large language models (LLMs), including GPT-4, Gemini, and BLOOM, to automate the extraction of actionable usability recommendations from negative app reviews. By applying prompting and fine-tuning techniques, the framework transforms unstructured feedback into meaningful suggestions aligned with three core usability dimensions: correctness, completeness, and satisfaction. A manually annotated dataset of Instagram negative reviews was used to evaluate model performance. Results show that GPT-4 consistently outperformed other models, achieving BLEU scores up to 0.64, ROUGE scores up to 0.80, and METEOR scores up to 0.90—demonstrating high semantic accuracy and contextual relevance in generated recommendations. Gemini and BLOOM, while improved through fine-tuning, showed significantly lower performance. This study also introduces a practical, web-based tool that enables real-time review analysis and recommendation generation, supporting data-driven, user-centered software development. These findings illustrate the potential of LLM-based frameworks to enhance software usability analysis and accelerate feedback-driven design processes.
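Of the reported metrics, ROUGE-1 recall is simple enough to compute by hand: count the unigrams of the reference recommendation that also appear in the generated one, and divide by the reference length. The two example strings below are invented:

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """Clipped unigram overlap with the reference, divided by the
    number of reference tokens (ROUGE-1 recall)."""
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum(min(c[w], r[w]) for w in r)
    return overlap / max(sum(r.values()), 1)

ref = "add an undo option to the photo editor"
cand = "add an undo option in the editor settings"
score = rouge1_recall(cand, ref)
```

Here 6 of the 8 reference tokens are recovered, giving 0.75; the paper's higher ROUGE scores reflect closer matches between generated and annotated recommendations.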
Full article
(This article belongs to the Topic Recent Advances in AI-Enhanced Software Engineering and Web Services)
Open Access Article
A Graph Attention Network Combining Multifaceted Element Relationships for Full Document-Level Understanding
by
Lorenzo Vaiani, Davide Napolitano and Luca Cagliero
Computers 2025, 14(9), 362; https://doi.org/10.3390/computers14090362 - 1 Sep 2025
Abstract
Question answering from visually rich documents (VRDs) is the task of retrieving the correct answer to a natural language question by considering the content of textual and visual elements in the document, as well as the pages’ layout. To answer closed-ended questions that require a deep understanding of the hierarchical relationships between the elements, i.e., the full document-level understanding (FDU) task, state-of-the-art graph-based approaches to FDU model the pairwise element relationships in a graph model. Although they incorporate logical links (e.g., a caption refers to a figure) and spatial ones (e.g., a caption is placed below the figure), they currently disregard the semantic similarity among multimodal document elements, thus potentially yielding suboptimal scoring of the elements’ relevance to the input question. In this paper, we propose GRAS-FDU, a new graph attention network tailored to FDU. GRAS-FDU is trained to jointly consider multiple document facets, i.e., the logical, spatial, and semantic relationships among elements. The results show that our approach achieves superior performance compared to several baseline methods.
Full article
Open Access Article
Prosodic Spatio-Temporal Feature Fusion with Attention Mechanisms for Speech Emotion Recognition
by
Kristiawan Nugroho, Imam Husni Al Amin, Nina Anggraeni Noviasari and De Rosal Ignatius Moses Setiadi
Computers 2025, 14(9), 361; https://doi.org/10.3390/computers14090361 - 31 Aug 2025
Abstract
Speech Emotion Recognition (SER) plays a vital role in supporting applications such as healthcare, human–computer interaction, and security. However, many existing approaches still face challenges in achieving robust generalization and maintaining high recall, particularly for emotions related to stress and anxiety. This study proposes a dual-stream hybrid model that combines prosodic features with spatio-temporal representations derived from the Multitaper Mel-Frequency Spectrogram (MTMFS) and the Constant-Q Transform Spectrogram (CQTS). Prosodic cues, including pitch, intensity, jitter, shimmer, HNR, pause rate, and speech rate, were processed using dense layers, while MTMFS and CQTS features were encoded with CNN and BiGRU. A Multi-Head Attention mechanism was then applied to adaptively fuse the two feature streams, allowing the model to focus on the most relevant emotional cues. Evaluations conducted on the RAVDESS dataset with subject-independent 5-fold cross-validation demonstrated an accuracy of 97.64% and a macro F1-score of 0.9745. These results confirm that combining prosodic and advanced spectrogram features with attention-based fusion improves precision, recall, and overall robustness, offering a promising framework for more reliable SER systems.
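The attention-based fusion of the two streams can be sketched with plain scaled dot-product attention, letting the pooled prosodic embedding query the spectrogram time steps. Dimensions and data are illustrative, and the paper uses a Multi-Head Attention layer rather than this single-head version:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query row attends over the keys
    and returns a convex combination of the value rows."""
    scores = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)          # softmax over key positions
    return w @ V

rng = np.random.default_rng(7)
prosodic = rng.normal(size=(1, 16))    # pooled prosodic embedding (query)
spectro = rng.normal(size=(10, 16))    # 10 spectrogram time steps (keys/values)
fused = attention(prosodic, spectro, spectro)
```

The fused vector is a weighted mix of the spectrogram frames most relevant to the prosodic cues, which is what lets the model emphasise emotionally salient moments.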
Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
Open Access Article
Real-Time Face Mask Detection Using Federated Learning
by
Tudor-Mihai David and Mihai Udrescu
Computers 2025, 14(9), 360; https://doi.org/10.3390/computers14090360 - 31 Aug 2025
Abstract
Epidemics caused by respiratory infections have become a global and systemic threat since humankind has become highly connected via modern transportation systems. Any new pathogen with human-to-human transmission capabilities has the potential to cause public health disasters and severe disruptions of social and economic activities. During the COVID-19 pandemic, we learned that proper mask-wearing in closed, restricted areas was one of the measures that worked to mitigate the spread of respiratory infections while allowing for continuing economic activity. Previous research approached this issue by designing hardware–software systems that determine whether individuals in the surveilled restricted area are using a mask; however, most such solutions are centralized, thus requiring massive computational resources, which makes them hard to scale up. To address such issues, this paper proposes a novel decentralized, federated learning (FL) solution to mask-wearing detection that instantiates our lightweight version of the MobileNetV2 model. The FL solution also ensures individual privacy, given that images remain at the local, device level. Importantly, we obtained a mask-wearing training accuracy of 98% (i.e., similar to centralized machine learning solutions) after only eight rounds of communication with 25 clients. We rigorously proved the reliability and robustness of our approach after repeated K-fold cross-validation.
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
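The federated averaging step at the heart of such an FL setup can be sketched as follows; the layer shapes and client dataset sizes are hypothetical, and a real deployment would aggregate MobileNetV2 weight tensors from the 25 clients each round:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: the server combines client model updates
    weighted by each client's local dataset size. Raw images never leave
    the devices; only weight tensors are communicated."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# three clients, each holding one layer's weights (toy 2x2 tensors)
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
global_w = fedavg(clients, client_sizes=[10, 10, 20])
print(global_w[0])  # weighted mean: (1*10 + 2*10 + 3*20) / 40 = 2.25
```

The global weights are then broadcast back to the clients for the next local training round; the privacy property rests on transmitting only these aggregates.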
Open Access Review
Exploring the Synergy Between Ethereum Layer 2 Solutions and Machine Learning to Improve Blockchain Scalability
by
Andrada Cristina Artenie, Diana Laura Silaghi and Daniela Elena Popescu
Computers 2025, 14(9), 359; https://doi.org/10.3390/computers14090359 - 29 Aug 2025
Abstract
Blockchain technologies, despite their profound transformative potential across multiple industries, continue to face significant scalability challenges. These limitations are primarily observed in restricted transaction throughput and elevated latency, which hinder the ability of blockchain networks to support widespread adoption and high-volume applications. To address these issues, research has predominantly focused on Layer 1 solutions that seek to improve blockchain performance through fundamental modifications to the core protocol and architectural design. Alternatively, Layer 2 solutions enable off-chain transaction processing, increasing throughput and reducing costs while maintaining the security of the base layer. Despite their advantages, Layer 2 approaches are less explored in the literature. To address this gap, this review conducts an in-depth analysis of Ethereum Layer 2 frameworks, emphasizing their integration with machine-learning techniques, with the goal of promoting the prevailing best practices and emerging applications; this review also identifies key technical and operational challenges hindering widespread adoption.
Full article

Open Access Article
Privacy Threats and Privacy Preservation in Multiple Data Releases of High-Dimensional Datasets
by
Surapon Riyana
Computers 2025, 14(9), 358; https://doi.org/10.3390/computers14090358 - 29 Aug 2025
Abstract
Determining how to balance data utility and data privacy when datasets are released to be utilized outside the scope of data-collecting organizations constitutes a major challenge. To achieve this aim in data collection (datasets), several privacy preservation models have been proposed, such as k-Anonymity and l-Diversity. Unfortunately, these privacy preservation models may be insufficient to address privacy violation issues in datasets that have high-dimensional attributes. For this reason, the privacy preservation models km-Anonymity and LKC-Privacy were proposed to address privacy violation issues in high-dimensional datasets. However, these models remain vulnerable to privacy violations through data comparison attacks, and they further have data utility issues that must be addressed. Therefore, this work proposes a privacy preservation model for high-dimensional datasets such that there are no concerns about privacy violations in released datasets from data comparison attacks, while remaining highly efficient and effective in data maintenance. Furthermore, we show that the proposed model is efficient and effective through extensive experiments.
Full article
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
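For readers unfamiliar with the baseline model the abstract starts from, a minimal k-anonymity check can be sketched as follows; the toy records and generalized quasi-identifier values are invented for illustration:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k of a table: the size of the smallest equivalence class
    over the chosen quasi-identifier attributes. A release is k-anonymous
    if every record shares its QI values with at least k-1 others."""
    groups = Counter(tuple(r[a] for a in quasi_identifiers) for r in records)
    return min(groups.values())

# toy release with generalized age ranges and masked ZIP codes
rows = [
    {"age": "30-39", "zip": "441**", "disease": "flu"},
    {"age": "30-39", "zip": "441**", "disease": "cold"},
    {"age": "40-49", "zip": "442**", "disease": "flu"},
    {"age": "40-49", "zip": "442**", "disease": "asthma"},
]
print(k_anonymity(rows, ["age", "zip"]))  # 2
```

High-dimensional data defeats this guarantee in practice because the number of equivalence classes explodes with the attribute count, which is the gap the km-Anonymity and LKC-Privacy models, and the model proposed here, aim to close.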
Open Access Article
Integration of Cloud-Based Central Telemedicine System and IoT Device Networks
by
Parin Sornlertlamvanich, Chatdanai Phakaket, Panya Hantula, Sarunya Kanjanawattana, Nuntawut Kaoungku and Komsan Srivisut
Computers 2025, 14(9), 357; https://doi.org/10.3390/computers14090357 - 29 Aug 2025
Abstract
The growing challenges in healthcare services, such as hospital congestion and a persistent shortage of medical personnel, significantly impede effective service delivery. This particularly presents significant challenges for the continuous monitoring of patients with chronic diseases. Internet of Things (IoT)-based telemonitoring systems offer a promising solution to alleviate these challenges. However, transmitting sensitive and confidential patient health data requires a strong focus on end-to-end security. This includes securing sensitive data within the patient’s home network, during internet transmission, and at the endpoint system, as well as managing that data securely. In this study, we propose a secure and scalable architecture for a remote health monitoring system that integrates telemedicine technology with the IoT. The proposed solution includes a portable remote health monitoring device, an IoT Gateway appliance (IoT GW) in the patient’s home, and an IoT Application Gateway Endpoint (App GW Endpoint) on a cloud infrastructure. A secure communication channel was established by implementing a multi-layered security protocol stack that uses HTTPS over Quick UDP Internet Connections (QUIC), with a focus on optimal security and compatibility, prioritizing cipher suites for data confidentiality and device authentication. The cloud architecture is designed based on the Well-Architected Framework principles to ensure security, high availability, and scalability. Our study shows that the system stores patient health information reliably and efficiently. Furthermore, the results for processing and transmission times clearly demonstrate that the additional encryption mechanisms have a negligible effect on data transmission latency, while significantly improving security.
Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
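As an illustration of the kind of application-layer integrity protection such an architecture can layer on top of the encrypted QUIC/HTTPS channel, here is a stdlib-only sketch; the field names and shared-key scheme are hypothetical, not the paper's protocol:

```python
import hashlib
import hmac
import json

def seal(payload: dict, key: bytes) -> dict:
    """Attach an HMAC-SHA256 tag so the receiving endpoint can verify that
    a sensor reading was not altered in transit (defense in depth on top
    of transport encryption)."""
    body = json.dumps(payload, sort_keys=True)
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify(msg: dict, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, msg["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])

key = b"device-shared-secret"  # hypothetical per-device provisioning key
msg = seal({"patient": "p-17", "spo2": 97, "hr": 72}, key)
print(verify(msg, key))  # True
```

A tampered body (for example, an altered vital-sign value) fails verification, which is the property the gateway relies on before persisting a reading.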
Open Access Article
Benchmarking Control Strategies for Multi-Component Degradation (MCD) Detection in Digital Twin (DT) Applications
by
Atuahene Kwasi Barimah, Akhtar Jahanzeb, Octavian Niculita, Andrew Cowell and Don McGlinchey
Computers 2025, 14(9), 356; https://doi.org/10.3390/computers14090356 - 29 Aug 2025
Abstract
Digital Twins (DTs) have become central to intelligent asset management within Industry 4.0, enabling real-time monitoring, diagnostics, and predictive maintenance. However, implementing Prognostics and Health Management (PHM) strategies within DT frameworks remains a significant challenge, particularly in systems experiencing multi-component degradation (MCD). MCD occurs when several components degrade simultaneously or in interaction, complicating detection and isolation processes. Traditional data-driven fault detection models often require extensive historical degradation data, which is costly, time-consuming, or difficult to obtain in many real-world scenarios. This paper proposes a model-based, control-driven approach to MCD detection, which reduces the need for large training datasets by leveraging reference tracking performance in closed-loop control systems. We benchmark the accuracy of four control strategies—Proportional-Integral (PI), Linear Quadratic Regulator (LQR), Model Predictive Control (MPC), and a hybrid model—within a Digital Twin-enabled hydraulic system testbed comprising multiple components, including pumps, valves, nozzles, and filters. The control strategies are evaluated under various MCD scenarios for their ability to accurately detect and isolate degradation events. Simulation results indicate that the hybrid model consistently outperforms the individual control strategies, achieving an average accuracy of 95.76% under simultaneous pump and nozzle degradation scenarios. The LQR model also demonstrated strong predictive performance, especially in identifying degradation in components such as nozzles and pumps. Also, the sequence and interaction of faults were found to influence detection accuracy, highlighting how the complexities of fault sequences affect the performance of diagnostic strategies. 
This work contributes to PHM and DT research by introducing a scalable, data-efficient methodology for MCD detection that integrates seamlessly into existing DT architectures using containerized RESTful APIs. By shifting from data-dependent to model-informed diagnostics, the proposed approach enhances early fault detection capabilities and reduces deployment timelines for real-world DT-enabled PHM applications.
Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
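The core idea of detecting degradation from closed-loop reference-tracking performance can be sketched with a toy first-order plant under PI control; the plant model, controller gains, and degradation effect below are illustrative assumptions, not the paper's hydraulic testbed:

```python
def tracking_error(plant_gain, kp=0.8, ki=0.3, setpoint=1.0, steps=200, dt=0.1):
    """Closed-loop PI control of a first-order plant; the mean absolute
    tracking error over the run serves as a degradation indicator."""
    y, integ, err_acc = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        integ += e * dt
        u = kp * e + ki * integ
        # first-order plant: dy/dt = -y + gain * u
        # the effective gain drops as the component degrades
        y += dt * (-y + plant_gain * u)
        err_acc += abs(e)
    return err_acc / steps

healthy = tracking_error(plant_gain=1.0)
degraded = tracking_error(plant_gain=0.5)  # e.g. a worn pump delivering less flow
print(healthy < degraded)  # degradation raises the sustained tracking error
```

A detector built on this signal needs no historical failure data, only the nominal model and the controller's tracking record, which is the data-efficiency argument the abstract makes.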
Open Access Article
Remote Code Execution via Log4J MBeans: Case Study of Apache ActiveMQ (CVE-2022-41678)
by
Alexandru Răzvan Căciulescu, Matei Bădănoiu, Răzvan Rughiniș and Dinu Țurcanu
Computers 2025, 14(9), 355; https://doi.org/10.3390/computers14090355 - 28 Aug 2025
Abstract
Java Management Extensions (JMX) are indispensable for managing and administrating Java software solutions, yet when exposed through HTTP bridges such as Jolokia they can radically enlarge an application’s attack surface. This paper presents the first in-depth analysis of CVE-2022-41678, a vulnerability discovered by the authors in Apache ActiveMQ that combines Jolokia’s remote JMX access with Log4J2 management beans to achieve full remote code execution. Using a default installation testbed, we enumerate the Log4J MBeans surfaced by Jolokia, demonstrate arbitrary file read, file write, and server-side request forgery primitives, and finally leverage the file-write capability to obtain a shell, all via authenticated HTTP(S) requests only. The end-to-end exploit chain requires no deserialization gadgets and is unaffected by prior Log4Shell mitigations. We have also automated the entire exploit process via proof-of-concept scripts on a stock ActiveMQ 5.17.1 instance. We discuss the broader security implications for any software exposing JMX-managed or Jolokia-managed Log4J contexts, provide concrete hardening guidelines, and outline design directions for safer remote-management stacks. The findings underscore that even “benign” management beans can become critical when surfaced through ubiquitous HTTP management gateways.
Full article
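One hardening direction such findings suggest is restricting which MBeans an HTTP bridge may reach. The sketch below is a hypothetical deny-list filter on a parsed Jolokia JSON request, not ActiveMQ's or Jolokia's actual mechanism; production deployments should prefer Jolokia's built-in policy file (jolokia-access.xml):

```python
import fnmatch

# Hypothetical deny-list: keep Log4J2 management beans unreachable
# through an HTTP JMX bridge.
DENIED_MBEAN_PATTERNS = [
    "org.apache.logging.log4j2:*",
]

def is_request_allowed(jolokia_request: dict) -> bool:
    """Coarse policy check on a parsed Jolokia request body: reject any
    request whose target MBean matches a denied pattern."""
    mbean = jolokia_request.get("mbean", "")
    return not any(fnmatch.fnmatch(mbean, p) for p in DENIED_MBEAN_PATTERNS)

req = {"type": "exec", "mbean": "org.apache.logging.log4j2:type=some_context"}
print(is_request_allowed(req))  # False
```

The broader lesson of the paper is that such filtering must be default-deny: the exploit chain used beans that looked benign in isolation.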

Open Access Article
Legal AI in Low-Resource Languages: Building and Evaluating QA Systems for the Kazakh Legislation
by
Diana Rakhimova, Assem Turarbek, Vladislav Karyukin, Assiya Sarsenbayeva and Rashid Alieyev
Computers 2025, 14(9), 354; https://doi.org/10.3390/computers14090354 - 27 Aug 2025
Abstract
The research focuses on the development and evaluation of a legal question–answer system for the Kazakh language, a low-resource and morphologically complex language. Four datasets were compiled from open legal sources—Adilet, Zqai, Gov, and a manually created synthetic set—containing question–answer pairs extracted from official legislative documents and government portals. Seven large language models (GPT-4o mini, GEMMA, KazLLM, LLaMA, Phi, Qwen, and Mistral) were fine-tuned using structured prompt templates, quantization methods, and domain-specific training to enhance contextual understanding and efficiency. The evaluation employed both automatic metrics (ROUGE and METEOR) and expert-based manual assessment. GPT-4o mini achieved the highest overall performance, with ROUGE-1: 0.309, ROUGE-2: 0.175, ROUGE-L: 0.263, and METEOR: 0.320, and received an expert score of 3.96, indicating strong legal reasoning capabilities and adaptability to Kazakh legal contexts. The results highlight GPT-4o mini’s superiority over other tested models in both quantitative and qualitative evaluations. This work demonstrates the feasibility and importance of developing localized legal AI solutions for low-resource languages, contributing to improved legal accessibility, transparency, and digital governance in Kazakhstan.
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
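The ROUGE-1 metric used in the evaluation reduces to clipped unigram overlap between a system answer and a reference answer; a minimal sketch, with invented example sentences:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: clipped unigram matches between candidate and
    reference, combined into precision/recall and their harmonic mean."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())  # per-token counts clipped by the reference
    if overlap == 0:
        return 0.0
    prec = overlap / sum(c.values())
    rec = overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

print(round(rouge1_f1("the court upheld the appeal",
                      "the court rejected the appeal"), 3))  # 0.8
```

Real evaluations of Kazakh text would also need language-aware tokenization and stemming, given the language's rich morphology; this whitespace-split version is only the skeleton of the metric.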
News
3 September 2025
Join Us at the MDPI at the University of Toronto Career Fair, 23 September 2025, Toronto, ON, Canada

1 September 2025
MDPI INSIGHTS: The CEO’s Letter #26 – CUJS, Head of Ethics, Open Peer Review, AIS 2025, Reviewer Recognition
Topics
Topic in
Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies and Applications
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 December 2025
Topic in
Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic, Sebastian Lapuschkin
Deadline: 31 January 2026
Topic in
AI, Computers, Education Sciences, Societies, Future Internet, Technologies
AI Trends in Teacher and Student Training
Topic Editors: José Fernández-Cerero, Marta Montenegro-Rueda
Deadline: 11 March 2026

Special Issues
Special Issue in
Computers
Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields
Guest Editor: Rafiqul Chowdhury
Deadline: 30 September 2025
Special Issue in
Computers
Applications of Machine Learning and Artificial Intelligence for Healthcare
Guest Editor: Elias Dritsas
Deadline: 30 September 2025
Special Issue in
Computers
Present and Future of E-Learning Technologies (2nd Edition)
Guest Editor: Antonio Sarasa Cabezuelo
Deadline: 30 September 2025
Special Issue in
Computers
Wireless Sensor Network, IoT and Cloud Computing Technologies for Smart Cities
Guest Editor: Lilatul Ferdouse
Deadline: 30 September 2025