Computers, Volume 14, Issue 10 (October 2025) – 31 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
26 pages, 3060 KB  
Article
Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games
by Sehar Shahzad Farooq, Hameedur Rahman, Samiya Abdul Wahid, Muhammad Alyan Ansari, Saira Abdul Wahid and Hosu Lee
Computers 2025, 14(10), 434; https://doi.org/10.3390/computers14100434 - 13 Oct 2025
Abstract
Games are considered a suitable and standard benchmark for training, evaluating, and comparing the performance of artificial intelligence-based agents. In this research, an application of the Intrinsic Curiosity Module (ICM) and the Asynchronous Advantage Actor–Critic (A3C) algorithm is explored using action games. Although this combination has proven successful in several gaming environments, its effectiveness in action games is rarely explored. This research aims to assess whether integrating ICM with A3C promotes curiosity-driven exploration and adaptive learning in action games. Using the MAME Toolkit library, we interface with the game environments, preprocess game screens to focus on relevant visual elements, and create diverse game episodes for training. The A3C policy is optimized using the Proximal Policy Optimization (PPO) algorithm with tuned hyperparameters. Comparisons are made with baseline methods, including vanilla A3C, ICM with pixel-based predictions, and state-of-the-art exploration techniques. Additionally, we evaluate the agent’s generalization capability in separate environments. The results demonstrate that ICM and A3C effectively promote curiosity-driven exploration in action games, with the agent learning exploration behaviors without relying solely on external rewards. Notably, we also observed improved efficiency and learning speed compared to baseline approaches. This research contributes to curiosity-driven exploration in reinforcement learning-based virtual environments and provides insights into the exploration of complex action games. Successfully applying ICM and A3C in action games presents exciting opportunities for adaptive learning and efficient exploration in challenging real-world environments. Full article
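The curiosity bonus this abstract describes can be sketched in a few lines: a learned forward model predicts the next state's features, and the agent receives the prediction error as an intrinsic reward added to the extrinsic one. This is a minimal illustration; the linear forward model, feature sizes, and the scaling constant eta are assumptions, not the paper's actual networks or hyperparameters.

```python
# Sketch of an ICM-style curiosity bonus: the forward model's prediction
# error on the next state's features becomes an intrinsic reward.
# The linear model and eta=0.01 are illustrative assumptions.

def forward_model(phi_s, action, weights):
    """Predict next-state features from current features plus a one-hot action."""
    x = phi_s + action  # concatenate feature vector and action encoding
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def intrinsic_reward(phi_s, action, phi_next, weights, eta=0.01):
    """Curiosity bonus: eta/2 * squared prediction error of the forward model."""
    pred = forward_model(phi_s, action, weights)
    return 0.5 * eta * sum((p - t) ** 2 for p, t in zip(pred, phi_next))

def shaped_reward(extrinsic, phi_s, action, phi_next, weights, eta=0.01):
    """Total reward the actor-critic learner would see."""
    return extrinsic + intrinsic_reward(phi_s, action, phi_next, weights, eta)

# Toy example: 2-D features, 2 actions, identity-like forward-model weights.
W = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0]]
r = shaped_reward(0.0, [1.0, 2.0], [1.0, 0.0], [1.0, 2.5], W)
```

A perfectly predicted transition yields zero bonus, so the agent is drawn toward transitions its model cannot yet predict.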

20 pages, 1343 KB  
Article
Hybrid CDN Architecture Integrating Edge Caching, MEC Offloading, and Q-Learning-Based Adaptive Routing
by Aymen D. Salman, Akram T. Zeyad, Asia Ali Salman Al-karkhi, Safanah M. Raafat and Amjad J. Humaidi
Computers 2025, 14(10), 433; https://doi.org/10.3390/computers14100433 - 13 Oct 2025
Abstract
Content Delivery Networks (CDNs) have evolved to meet surging data demands and stringent low-latency requirements driven by emerging applications like high-definition video streaming, virtual reality, and IoT. This paper proposes a hybrid CDN architecture that synergistically combines edge caching, Multi-access Edge Computing (MEC) offloading, and reinforcement learning (Q-learning) for adaptive routing. In the proposed system, popular content is cached at radio access network edges (e.g., base stations) and computation-intensive tasks are offloaded to MEC servers, while a Q-learning agent dynamically routes user requests to the optimal service node (cache, MEC server, or origin) based on the network state. The study presents a detailed system design and a comprehensive simulation-based evaluation. The results demonstrate that the proposed hybrid approach significantly improves cache hit ratios and reduces end-to-end latency compared to traditional CDNs and simpler edge architectures. The Q-learning-enabled routing adapts to changing load and content popularity, converging to efficient policies that outperform static baselines. The proposed hybrid model has been tested against variants lacking MEC, edge caching, or the RL-based controller to isolate each component’s contributions. The paper concludes with a discussion on practical considerations, limitations, and future directions for intelligent CDN networking at the edge. Full article
(This article belongs to the Special Issue Edge and Fog Computing for Internet of Things Systems (2nd Edition))
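The request-routing idea in this abstract maps naturally onto tabular Q-learning: the agent picks a service node per request and learns from a latency-based reward. The state encoding, latency figures, and hyperparameters below are illustrative assumptions, not the paper's simulation settings.

```python
import random

# Tabular Q-learning sketch of adaptive request routing: choose among
# edge cache, MEC server, or origin, rewarded by negative latency.
# Latency values and hyperparameters are illustrative assumptions.

ACTIONS = ["cache", "mec", "origin"]

def choose(Q, state, eps=0.1):
    """Epsilon-greedy selection over the three service nodes."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Q-learning update: Q <- Q + alpha * (r + gamma * max Q' - Q)."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

random.seed(0)
Q = {}
LATENCY_MS = {"cache": 5.0, "mec": 20.0, "origin": 80.0}  # assumed averages
for _ in range(2000):
    state = "popular"                       # request for popular content
    action = choose(Q, state)
    update(Q, state, action, -LATENCY_MS[action], state)
```

After training, the greedy policy prefers the edge cache for popular content, matching the cache-hit improvement the paper reports.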

21 pages, 3148 KB  
Article
A Novel Multimodal Hand Gesture Recognition Model Using Combined Approach of Inter-Frame Motion and Shared Attention Weights
by Xiaorui Zhang, Shuaitong Li, Xianglong Zeng, Peisen Lu and Wei Sun
Computers 2025, 14(10), 432; https://doi.org/10.3390/computers14100432 - 13 Oct 2025
Abstract
Dynamic hand gesture recognition based on computer vision aims at enabling computers to understand the semantic meaning conveyed by hand gestures in videos. Existing methods predominantly rely on spatiotemporal attention mechanisms to extract hand motion features in a large spatiotemporal scope. However, they cannot accurately focus on the moving hand region for hand feature extraction because frame sequences contain a substantial amount of redundant information. Although multimodal techniques can extract a wider variety of hand features, they are less successful at utilizing information interactions between various modalities for accurate feature extraction. To address these challenges, this study proposes a multimodal hand gesture recognition model combining inter-frame motion and shared attention weights. By jointly using an inter-frame motion attention (IFMA) mechanism and adaptive down-sampling (ADS), the spatiotemporal search scope can be effectively narrowed down to the hand-related regions based on the characteristic of hands exhibiting obvious movements. The proposed inter-modal attention weight (IMAW) loss enables RGB and Depth modalities to share attention, allowing each to adjust its distribution based on the other. Experimental results on the EgoGesture, NVGesture, and Jester datasets demonstrate the superiority of our proposed model over existing state-of-the-art methods in terms of hand motion feature extraction and hand gesture recognition accuracy. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
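The core intuition behind inter-frame motion attention, that regions changing between consecutive frames deserve higher weight, can be sketched with a simple per-pixel difference mask. This hard-threshold scheme and the toy frames are illustrative assumptions, not the paper's IFMA mechanism.

```python
# Toy sketch: weight pixels by inter-frame motion so that the moving hand
# region dominates attention. Thresholding scheme is an assumption.

def motion_attention(prev_frame, frame, threshold=0.1):
    """Per-pixel weights: 1.0 where motion exceeds the threshold, else 0.0."""
    return [[1.0 if abs(a - b) > threshold else 0.0
             for a, b in zip(row_p, row_c)]
            for row_p, row_c in zip(prev_frame, frame)]

prev_f = [[0.0, 0.0], [0.0, 0.5]]
curr = [[0.0, 0.0], [0.0, 0.9]]  # only the bottom-right "hand" pixel moved
mask = motion_attention(prev_f, curr)
```

A real model would use soft weights learned jointly with the backbone, but the narrowing of the spatial search scope follows the same principle.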

30 pages, 2870 KB  
Article
CourseEvalAI: Rubric-Guided Framework for Transparent and Consistent Evaluation of Large Language Models
by Catalin Anghel, Marian Viorel Craciun, Emilia Pecheanu, Adina Cocu, Andreea Alexandra Anghel, Paul Iacobescu, Calina Maier, Constantin Adrian Andrei, Cristian Scheau and Serban Dragosloveanu
Computers 2025, 14(10), 431; https://doi.org/10.3390/computers14100431 (registering DOI) - 11 Oct 2025
Abstract
Background and objectives: Large language models (LLMs) show promise in automating open-ended evaluation tasks, yet their reliability in rubric-based assessment remains uncertain. Variability in scoring, feedback, and rubric adherence raises concerns about transparency and pedagogical validity in educational contexts. This study introduces CourseEvalAI, a framework designed to enhance consistency and fidelity in rubric-guided evaluation by fine-tuning a general-purpose LLM with authentic university-level instructional content. Methods: The framework employs supervised fine-tuning with Low-Rank Adaptation (LoRA) on rubric-annotated answers and explanations drawn from undergraduate computer science exams. Responses generated by both the base and fine-tuned models were independently evaluated by two human raters and two LLM judges, applying dual-layer rubrics for answers (technical or argumentative) and explanations. Inter-rater reliability was reported as the intraclass correlation coefficient (ICC(2,1)), Krippendorff’s α, and quadratic-weighted Cohen’s κ (QWK), and statistical analyses included Welch’s t tests with Holm–Bonferroni correction, Hedges’ g with bootstrap confidence intervals, and Levene’s tests. All responses, scores, feedback, and metadata were stored in a Neo4j graph database for structured exploration. Results: The fine-tuned model consistently outperformed the base version across all rubric dimensions, achieving higher scores for both answers and explanations. After multiple-testing correction, only the Technical Answer contrast judged by the Generative Pre-trained Transformer (GPT-4) remains statistically significant; other contrasts show positive trends without passing the adjusted threshold, and no additional significance is claimed for explanation-level results. Variance in scoring decreased, inter-model agreement increased, and evaluator feedback for fine-tuned outputs contained fewer vague or critical remarks, indicating stronger rubric alignment and greater pedagogical coherence. Inter-rater reliability analyses indicated moderate human–human agreement and weaker alignment of LLM judges to the human mean. Originality: CourseEvalAI integrates rubric-guided fine-tuning, dual-layer evaluation, and graph-based storage into a unified framework. This combination provides a replicable and interpretable methodology that enhances the consistency, transparency, and pedagogical value of LLM-based evaluators in higher education and beyond. Full article
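One of the agreement statistics this abstract reports, quadratic-weighted Cohen's kappa, is compact enough to sketch directly for two raters on an ordinal rubric scale. The toy ratings below are assumptions, not the study's data.

```python
# Pure-Python sketch of quadratic-weighted Cohen's kappa (QWK) for two
# raters scoring on an ordinal scale [min_s, max_s].

def quadratic_weighted_kappa(r1, r2, min_s, max_s):
    """QWK = 1 - (weighted observed disagreement / weighted expected disagreement)."""
    n_cat = max_s - min_s + 1
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for a, b in zip(r1, r2):
        obs[a - min_s][b - min_s] += 1
    n = len(r1)
    marg1 = [sum(row) for row in obs]                              # rater 1 marginals
    marg2 = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    num = den = 0.0
    for i in range(n_cat):
        for j in range(n_cat):
            w = ((i - j) ** 2) / ((n_cat - 1) ** 2)                # quadratic weight
            num += w * obs[i][j]
            den += w * marg1[i] * marg2[j] / n                     # chance-expected
    return 1.0 - num / den

# Perfect agreement yields kappa = 1; a complete reversal is strongly negative.
k_perfect = quadratic_weighted_kappa([1, 2, 3, 4, 1], [1, 2, 3, 4, 1], 1, 4)
k_reversed = quadratic_weighted_kappa([1, 2, 3, 4], [4, 3, 2, 1], 1, 4)
```

The quadratic weights penalize large ordinal disagreements more than adjacent-score ones, which is why QWK suits rubric scoring better than unweighted kappa.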

54 pages, 6893 KB  
Article
Automated OSINT Techniques for Digital Asset Discovery and Cyber Risk Assessment
by Tetiana Babenko, Kateryna Kolesnikova, Olga Abramkina and Yelizaveta Vitulyova
Computers 2025, 14(10), 430; https://doi.org/10.3390/computers14100430 - 9 Oct 2025
Abstract
Cyber threats are becoming increasingly sophisticated, especially in distributed infrastructures where systems are deeply interconnected. To address this, we developed a framework that automates how organizations discover their digital assets and assess which ones are the most at risk. The approach integrates diverse public information sources, including WHOIS records, DNS data, and SSL certificates, into a unified analysis pipeline without relying on intrusive probing. For risk scoring we applied Gradient Boosted Decision Trees, which proved more robust with messy real-world data than other models we tested. DBSCAN clustering was used to detect unusual exposure patterns across assets. In validation on organizational data, the framework achieved 93.3% accuracy in detecting known vulnerabilities and an F1-score of 0.92 for asset classification. More importantly, security teams spent about 58% less time on manual triage and false alarm handling. The system also demonstrated reasonable scalability, indicating that automated OSINT analysis can provide a practical and resource-efficient way for organizations to maintain visibility over their attack surface. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
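The exposure-pattern detection step this abstract attributes to DBSCAN can be sketched with a minimal pure-Python implementation: assets become feature points, dense groups form clusters, and sparse points are flagged as noise, i.e., unusually exposed assets. The feature choice, eps, and min_pts below are illustrative assumptions.

```python
# Minimal DBSCAN sketch for flagging unusually exposed assets as noise (-1).
# Feature scaling, eps, and min_pts are illustrative assumptions.

def dbscan(points, eps=1.0, min_pts=3):
    """Return cluster labels per point; -1 marks noise (candidate anomalies)."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]
    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                    # too sparse: noise for now
            continue
        labels[i] = cluster                   # i is a core point
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster           # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:        # j is core: expand the cluster
                seeds.extend(j_nbrs)
        cluster += 1
    return labels

# Toy asset features, e.g. (open_ports_scaled, cert_age_scaled): one dense
# group of typical assets plus one isolated outlier.
assets = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
labels = dbscan(assets, eps=0.5, min_pts=3)
```

In practice a library implementation (e.g., scikit-learn's) would be used; the point is that density-based clustering needs no preset cluster count, which suits heterogeneous asset inventories.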

23 pages, 2198 KB  
Review
Security Requirements Engineering: A Review and Analysis
by Aftab Alam Janisar, Ayman Meidan, Khairul Shafee bin Kalid, Abdul Rehman Gilal and Aliza Bt Sarlan
Computers 2025, 14(10), 429; https://doi.org/10.3390/computers14100429 - 9 Oct 2025
Abstract
Security is crucial, especially as software systems become increasingly complex. Both practitioners and researchers advocate for the early integration of security requirements (SR) into the Software Development Life Cycle (SDLC). However, ensuring the validation and assurance of security requirements is still a major challenge in developing secure systems. To investigate this issue, a two-phase study was carried out. First phase: a literature review was conducted on 45 relevant studies related to Security Requirements Engineering (SRE) and Security Requirements Assurance (SRA). Nine SRE techniques were examined across multiple parameters, including major categories, requirements engineering stages, project scale, and the integration of standards involving 17 distinct activities. Second phase: An empirical survey of 58 industry professionals revealed a clear disparity between the understanding of Security Requirements Engineering (SRE) and the implementation of Security Requirements Assurance (SRA). While statistical analyses (ANOVA, regression, correlation, Kruskal–Wallis) confirmed a moderate grasp of SRE practices, SRA remains poorly understood and underapplied. Unlike prior studies focused on isolated models, this research combines practical insights with comparative analysis, highlighting the systemic neglect of SRA in current practices. The findings indicate the need for stronger security assurance in early development phases, offering targeted, data-driven recommendations for bridging this gap. Full article
(This article belongs to the Topic Innovation, Communication and Engineering)

21 pages, 1410 KB  
Article
Measure Student Aptitude in Learning Programming in Higher Education—A Data Analysis
by João Pires, Ana Rosa Borges, Jorge Bernardino, Fernanda Brito Correia and Anabela Gomes
Computers 2025, 14(10), 428; https://doi.org/10.3390/computers14100428 - 9 Oct 2025
Abstract
Analyzing student performance in Introductory Programming courses in Higher Education is critical for early intervention and improved learning outcomes. This study explores the potential of a cognitive test for student success in an Introductory Programming course by analyzing data from 180 students, including Freshmen and Repeating Students, using descriptive statistics, correlation analysis, Categorical Principal Component Analysis, and Item Response Theory model analysis. Analysis of the cognitive test revealed that some reasoning questions presented a statistically significant correlation, albeit of weak magnitude, with the course grades, particularly for freshman students. The development of models for predicting student performance in Introductory Programming using cognitive tests is also being explored. This study found that reasoning skills, namely logical reasoning and sequence completion, were more predictive of success in programming than general ability. The study also showed that a Programming Cognitive Test can be a useful tool for identifying students at risk of failure, particularly for freshman students. Full article

20 pages, 1205 KB  
Review
LLMs for Commit Messages: A Survey and an Agent-Based Evaluation Protocol on CommitBench
by Mohamed Mehdi Trigui and Wasfi G. Al-Khatib
Computers 2025, 14(10), 427; https://doi.org/10.3390/computers14100427 - 7 Oct 2025
Abstract
Commit messages are vital for traceability, maintenance, and onboarding in modern software projects, yet their quality is frequently inconsistent. Recent large language models (LLMs) can transform code diffs into natural language summaries, offering a path to more consistent and informative commit messages. This paper makes two contributions: (i) it provides a systematic survey of automated commit message generation with LLMs, critically comparing prompt-only, fine-tuned, and retrieval-augmented approaches; and (ii) it specifies a transparent, agent-based evaluation blueprint centered on CommitBench. Unlike prior reviews, we include a detailed dataset audit, preprocessing impacts, evaluation metrics, and error taxonomy. The protocol defines dataset usage and splits, prompting and context settings, scoring and selection rules, and reporting guidelines (results by project, language, and commit type), along with an error taxonomy to guide qualitative analysis. Importantly, this work emphasizes methodology and design rather than presenting new empirical benchmarking results. The blueprint is intended to support reproducibility and comparability in future studies. Full article
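The "scoring and selection rules" step in an evaluation protocol like the one outlined above can be illustrated with a simple token-overlap F1 between a generated commit message and its reference, keeping the best-scoring candidate. This metric and the toy messages are assumptions for illustration; the paper's protocol defines its own metrics.

```python
# Illustrative scoring-and-selection sketch for generated commit messages:
# token-overlap F1 against a reference, best candidate wins.

def token_f1(candidate, reference):
    """Token-overlap F1 between a generated and a reference commit message."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    common = sum(min(cand.count(t), ref.count(t)) for t in set(cand))
    if not common:
        return 0.0
    precision, recall = common / len(cand), common / len(ref)
    return 2 * precision * recall / (precision + recall)

def select_best(candidates, reference):
    """Selection rule: keep the highest-scoring candidate message."""
    return max(candidates, key=lambda c: token_f1(c, reference))

reference = "fix null pointer in parser"
candidates = ["update code", "fix null pointer dereference in parser"]
best = select_best(candidates, reference)
```

Surface-overlap metrics like this are known to miss semantic equivalence, which is exactly why the survey's error taxonomy and qualitative analysis matter.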

32 pages, 12099 KB  
Article
Hardware–Software System for Biomass Slow Pyrolysis: Characterization of Solid Yield via Optimization Algorithms
by Ismael Urbina-Salas, David Granados-Lieberman, Juan Pablo Amezquita-Sanchez, Martin Valtierra-Rodriguez and David Aaron Rodriguez-Alejandro
Computers 2025, 14(10), 426; https://doi.org/10.3390/computers14100426 - 5 Oct 2025
Abstract
Biofuels represent a sustainable alternative that supports global energy development without compromising environmental balance. This work introduces a novel hardware–software platform for the experimental characterization of biomass solid yield during the slow pyrolysis process, integrating physical experimentation with advanced computational modeling. The hardware consists of a custom-designed pyrolyzer equipped with temperature and weight sensors, a dedicated control unit, and a user-friendly interface. On the software side, a two-step kinetic model was implemented and coupled with three optimization algorithms, namely Particle Swarm Optimization (PSO), a Genetic Algorithm (GA), and Nelder–Mead (N-M), to estimate the Arrhenius kinetic parameters governing biomass degradation. Slow pyrolysis experiments were performed on wheat straw (WS), pruning waste (PW), and biosolids (BS) at a heating rate of 20 °C/min within 250–500 °C, with a 120 min residence time favoring biochar production. The comparative analysis shows that the N-M method achieved the highest accuracy (a 100% fit in estimating solid yield), with a convergence time of 4.282 min, while GA converged faster (1.675 min), with a fit of 99.972%, and PSO had the slowest convergence time at 6.409 min and a fit of 99.943%. These results highlight both the versatility of the system and the potential of optimization techniques to provide accurate predictive models of biomass decomposition as a function of time and temperature. Overall, the main contributions of this work are the development of a low-cost, custom MATLAB-based experimental platform and the tailored implementation of optimization algorithms for kinetic parameter estimation across different biomasses, together providing a robust framework for biomass pyrolysis characterization. Full article
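The parameter-estimation setup this abstract describes reduces to minimizing a least-squares objective over Arrhenius parameters. The sketch below uses a simplified one-step model (the paper uses a two-step model) integrated with explicit Euler, plus the objective an optimizer such as Nelder-Mead, GA, or PSO would minimize; all constants are illustrative assumptions.

```python
import math

# One-step Arrhenius kinetic sketch: dm/dt = -k(T) * m with a linear
# temperature ramp (20 C/min from 250 C), plus a least-squares objective
# for fitting (A, Ea). Simplified stand-in for the paper's two-step model.

R = 8.314  # J/(mol*K), universal gas constant

def simulate_yield(A, Ea, heating_rate=20.0 / 60.0, t_end=750.0, dt=1.0, T0=523.0):
    """Solid yield m/m0 over time under explicit-Euler integration."""
    m, t, out = 1.0, 0.0, []
    while t <= t_end:
        T = T0 + heating_rate * t            # K, linear ramp 250 -> 500 C
        k = A * math.exp(-Ea / (R * T))      # Arrhenius rate constant, 1/s
        m = max(m - k * m * dt, 0.0)
        out.append(m)
        t += dt
    return out

def sse(params, observed, times):
    """Sum of squared errors between simulated and observed solid yield."""
    A, Ea = params
    sim = simulate_yield(A, Ea)
    return sum((sim[int(t)] - y) ** 2 for t, y in zip(times, observed))

# Synthetic "measurements" from known parameters (A=100 1/s, Ea=80 kJ/mol).
true_curve = simulate_yield(100.0, 80000.0)
obs_times = [0, 250, 500, 750]
obs = [true_curve[t] for t in obs_times]
```

Passing `sse` to any derivative-free minimizer (e.g., a Nelder-Mead routine) recovers the parameters; the objective is zero at the generating values by construction.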

25 pages, 4460 KB  
Systematic Review
Rethinking Blockchain Governance with AI: The VOPPA Framework
by Catalin Daniel Morar, Daniela Elena Popescu, Ovidiu Constantin Novac and David Ghiurău
Computers 2025, 14(10), 425; https://doi.org/10.3390/computers14100425 - 4 Oct 2025
Abstract
Blockchain governance has become central to the performance and resilience of decentralized systems, yet current models face recurring issues of participation, coordination, and adaptability. This article offers a structured analysis of governance frameworks and highlights their limitations through recent high-impact case studies. It then examines how artificial intelligence (AI) is being integrated into governance processes, ranging from proposal summarization and anomaly detection to autonomous agent-based voting. In response to existing gaps, this paper proposes the Voting Via Parallel Predictive Agents (VOPPA) framework, a multi-agent architecture aimed at enabling predictive, diverse, and decentralized decision-making. Strengthening blockchain governance will require not just decentralization but also intelligent, adaptable, and accountable decision-making systems. Full article

24 pages, 637 KB  
Article
ZDBERTa: Advancing Zero-Day Cyberattack Detection in Internet of Vehicle with Zero-Shot Learning
by Amal Mirza, Sobia Arshad, Muhammad Haroon Yousaf and Muhammad Awais Azam
Computers 2025, 14(10), 424; https://doi.org/10.3390/computers14100424 - 3 Oct 2025
Abstract
The Internet of Vehicles (IoV) is becoming increasingly vulnerable to zero-day (ZD) cyberattacks, which often bypass conventional intrusion detection systems. To mitigate this challenge, this study proposes Zero-Day Bidirectional Encoder Representations from Transformers approach (ZDBERTa), a zero-shot learning (ZSL)-based framework for ZD attack detection, evaluated on the CICIoV2024 dataset. Unlike conventional AI models, ZSL enables the classification of attack types not previously encountered during the training phase. Two dataset variants are formed: Variant 1, created through synthetic traffic generation using a mixture of pattern-based, crossover, and mutation techniques, and Variant 2, augmented with a Generative Adversarial Network (GAN). To replicate realistic zero-day conditions, denial-of-service (DoS) attacks were omitted during training and introduced only at testing. The proposed ZDBERTa incorporates a Byte-Pair Encoding (BPE) tokenizer, a multi-layer transformer encoder, and a classification head for prediction, enabling the model to capture semantic patterns and identify previously unseen threats. The experimental results demonstrate that ZDBERTa achieves 86.677% accuracy on Variant 1, highlighting the complexity of zero-day detection, while performance significantly improves to 99.315% on Variant 2, underscoring the effectiveness of GAN-based augmentation. To the best of our knowledge, this is the first research to explore ZD detection within CICIoV2024, contributing a novel direction toward resilient IoV cybersecurity. Full article
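The Byte-Pair Encoding step this abstract mentions can be sketched with the classic merge loop: the most frequent adjacent symbol pair is merged repeatedly, building a subword vocabulary over the payload "corpus". The toy hex-style strings and merge count are assumptions; ZDBERTa's actual tokenizer is trained on CICIoV2024 traffic.

```python
from collections import Counter

# Classic BPE merge-learning loop over a toy payload corpus.
# Corpus contents and n_merges are illustrative assumptions.

def bpe_merges(corpus, n_merges):
    """Learn n_merges BPE merge rules from a list of symbol strings."""
    words = [list(w) for w in corpus]
    merges = []
    for _ in range(n_merges):
        pairs = Counter()
        for w in words:
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += 1           # count adjacent symbol pairs
        if not pairs:
            break
        best = max(pairs, key=pairs.get)     # most frequent pair wins
        merges.append(best)
        merged = best[0] + best[1]
        for w in words:                      # apply the merge in place
            i = 0
            while i < len(w) - 1:
                if (w[i], w[i + 1]) == best:
                    w[i:i + 2] = [merged]
                else:
                    i += 1
    return merges

merges = bpe_merges(["0x0055", "0x0055", "0x00aa"], 2)
```

Learned merges become the tokenizer's vocabulary, letting the transformer encoder see recurring payload fragments as single tokens even in traffic classes never observed during training.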

33 pages, 9908 KB  
Article
Mapping the Chemical Space of Antiviral Peptides with Half-Space Proximal and Metadata Networks Through Interactive Data Mining
by Daniela de Llano García, Yovani Marrero-Ponce, Guillermin Agüero-Chapin, Hortensia Rodríguez, Francesc J. Ferri, Edgar A. Márquez, José R. Mora, Felix Martinez-Rios and Yunierkis Pérez-Castillo
Computers 2025, 14(10), 423; https://doi.org/10.3390/computers14100423 - 3 Oct 2025
Abstract
Antiviral peptides (AVPs) are promising therapeutic candidates, yet the rapid growth of sequence data and the field’s emphasis on predictors have left a gap: the lack of an integrated view linking peptide chemistry with biological context. Here, we map the AVP landscape through interactive data mining using Half-Space Proximal Networks (HSPNs) and Metadata Networks (MNs) in the StarPep toolbox. HSPNs minimize edges and avoid fixed thresholds, reducing computational cost while enabling high-resolution analysis. A threshold-free HSPN resolved eight chemically and biologically distinct communities, while MNs contextualized AVPs by source, function, and target, revealing structural–functional relationships. To capture diversity compactly, we applied centrality-guided scaffold extraction with redundancy removal (90–50% identity), producing four representative subsets suitable for modeling and similarity searches. Alignment-free motif discovery yielded 33 validated motifs, including 10 overlapping with reported AVP signatures and 23 apparently novel. Motifs displayed category-specific enrichment across antimicrobial classes, and sequences carrying multiple motifs (≥4–5) consistently showed higher predicted antiviral probabilities. Beyond computational insights, scaffolds provide representative “entry points” into AVP chemical space, while motifs serve as modular building blocks for rational design. Together, these resources provide an integrated framework that may inform AVP discovery and support scaffold- and motif-guided therapeutic design. Full article

35 pages, 4926 KB  
Article
Hybrid MOCPO–AGE-MOEA for Efficient Bi-Objective Constrained Minimum Spanning Trees
by Dana Faiq Abd, Haval Mohammed Sidqi and Omed Hasan Ahmed
Computers 2025, 14(10), 422; https://doi.org/10.3390/computers14100422 - 2 Oct 2025
Abstract
The constrained bi-objective Minimum Spanning Tree (MST) problem is a fundamental challenge in network design, as it simultaneously requires minimizing both total edge weight and maximum hop distance under strict feasibility limits; however, most existing algorithms tend to emphasize one objective over the other, resulting in imbalanced solutions, limited Pareto fronts, or poor scalability on larger instances. To overcome these shortcomings, this study introduces a Hybrid MOCPO–AGE-MOEA algorithm that strategically combines the exploratory strength of Multi-Objective Crested Porcupines Optimization (MOCPO) with the exploitative refinement of the Adaptive Geometry-based Evolutionary Algorithm (AGE-MOEA), while a Kruskal-based repair operator is integrated to strictly enforce feasibility and preserve solution diversity. Moreover, through extensive experiments conducted on Euclidean graphs with 11–100 nodes, the hybrid consistently demonstrates superior performance compared with five state-of-the-art baselines, as it generates Pareto fronts up to four times larger, achieves nearly 20% reductions in hop counts, and delivers order-of-magnitude runtime improvements with near-linear scalability. Importantly, results reveal that allocating 85% of offspring to MOCPO exploration and 15% to AGE-MOEA exploitation yields the best balance between diversity, efficiency, and feasibility. Therefore, the Hybrid MOCPO–AGE-MOEA not only addresses critical gaps in constrained MST optimization but also establishes itself as a practical and scalable solution with strong applicability to domains such as software-defined networking, wireless mesh systems, and adaptive routing, where both computational efficiency and solution diversity are paramount. Full article
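A Kruskal-based repair operator of the kind this abstract names can be sketched as follows: an infeasible offspring (an edge set that is not a spanning tree) is repaired by running Kruskal's algorithm while preferring the offspring's own edges, so the repaired tree stays close to the parent solution. The preference rule, graph, and edge data are illustrative assumptions, not the paper's exact operator.

```python
# Kruskal-style repair: build a spanning tree using union-find, trying the
# offspring's edges first, then the remaining graph edges by weight.

def kruskal_repair(n, offspring_edges, all_edges):
    """Return n-1 edges forming a spanning tree, biased toward offspring_edges."""
    parent = list(range(n))
    def find(x):                              # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    offspring = set(offspring_edges)
    candidates = (sorted(offspring_edges, key=lambda e: e[2]) +
                  sorted((e for e in all_edges if e not in offspring),
                         key=lambda e: e[2]))
    tree = []
    for u, v, w in candidates:
        ru, rv = find(u), find(v)
        if ru != rv:                          # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, w))
            if len(tree) == n - 1:
                break
    return tree

# Toy 4-node graph; the offspring edge set fails to reach node 3.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 2, 5.0), (1, 3, 5.0)]
broken = [(0, 1, 1.0), (0, 2, 5.0)]
repaired = kruskal_repair(4, broken, edges)
```

The cycle check guarantees feasibility (always exactly n-1 acyclic, connecting edges), while trying offspring edges first preserves the diversity the evolutionary search generated.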

22 pages, 6620 KB  
Article
A Study to Determine the Feasibility of Combining Mobile Augmented Reality and an Automatic Pill Box to Support Older Adults’ Medication Adherence
by Osslan Osiris Vergara-Villegas, Vianey Guadalupe Cruz-Sánchez, Abel Alejandro Rubín-Alvarado, Saulo Abraham Gante-Díaz, Jonathan Axel Cruz-Vazquez, Brandon Areyzaga-Mendizábal, Jesús Yaljá Montiel-Pérez, Juan Humberto Sossa-Azuela, Iliac Huerta-Trujillo and Rodolfo Romero-Herrera
Computers 2025, 14(10), 421; https://doi.org/10.3390/computers14100421 - 2 Oct 2025
Abstract
Because of the increased prevalence of chronic diseases, older adults frequently take many medications. However, adhering to a medication treatment tends to be difficult. The lack of medication adherence can cause health problems or even patient death. This paper describes the methodology used in developing a mobile augmented reality (MAR) pill box. The proposal supports patients in adhering to their medication treatment. First, we explain the design and construction of the automatic pill box, which includes alarms and uses QR codes recognized by the MAR system to provide medication information. Then, we explain the development of the MAR system. We first conducted a preliminary survey with 30 participants to assess the feasibility of the MAR app; one hundred older adults then participated in the main study. After one week of using the proposal, each patient answered a survey regarding the proposal’s functionality. The results revealed that 88% of the participants strongly agree and 11% agree that the app is a support in adhering to medical treatment. Finally, we conducted a study to compare the time elapsed between the scheduled time for taking the medication and the time it was actually consumed. The results from 189 records showed that using the proposal, 63.5% of the patients take medication with a maximum delay of 4.5 min. The results also showed that the alarm always sounded at the scheduled time and that the QR code displayed always corresponded to the medication that had to be consumed. Full article
21 pages, 2222 KB  
Article
Machine Learning-Driven Security and Privacy Analysis of a Dummy-ABAC Model for Cloud Computing
by Baby Marina, Irfana Memon, Fizza Abbas Alvi, Ubaidullah Rajput and Mairaj Nabi
Computers 2025, 14(10), 420; https://doi.org/10.3390/computers14100420 - 2 Oct 2025
Viewed by 275
Abstract
The Attribute-Based Access Control (ABAC) model provides access control decisions based on subject, object (resource), and contextual attributes. However, the use of sensitive attributes in access control decisions poses many security and privacy challenges, particularly in cloud environments where third parties are involved. To address this shortcoming, we present a novel privacy-preserving Dummy-ABAC model that obfuscates real attributes with dummy attributes before transmission to the cloud server. In the proposed model, only dummy attributes are stored in the cloud database, whereas real attributes and mapping tokens are stored in a local machine database. Only dummy attributes are used for access request evaluation in the cloud, and real data are retrieved in a post-decision mechanism using secure tokens. The security of the proposed model was assessed using simulated threat scenarios, including attribute inference, policy injection, and reverse mapping attacks. Experimental evaluation using machine learning classifiers (Decision Tree (DT) and Random Forest (RF)) demonstrated that inference accuracy dropped from ~0.65 on real attributes to ~0.25 on dummy attributes, confirming improved resistance to inference attacks. Furthermore, the model rejects malformed and unauthorized policies. Performance analysis of dummy generation, token generation, encoding, and nearest-neighbor search demonstrated minimal latency in both local and cloud environments. Overall, the proposed model ensures efficient, secure, and privacy-preserving access control in cloud environments. Full article
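The dummy-attribute flow described in the abstract can be illustrated with a minimal sketch. Everything here (the `obfuscate`/`resolve` names and the vault structure) is our own illustration, not the paper's implementation: real attribute values never leave the client, only random dummies travel to the cloud, and a locally held token recovers the real values after the access decision.

```python
import secrets

# Illustrative sketch (our naming, not the paper's code): dummies go to the
# cloud, while the real-value mapping and tokens stay on the local machine.
LOCAL_VAULT = {}  # token -> {dummy_value: real_value}

def obfuscate(attributes):
    """Replace each real attribute value with a random dummy string."""
    token = secrets.token_hex(8)
    dummies, mapping = {}, {}
    for name, real in attributes.items():
        dummy = f"dummy_{secrets.token_hex(4)}"
        dummies[name] = dummy
        mapping[dummy] = real
    LOCAL_VAULT[token] = mapping   # mapping never leaves the client
    return token, dummies          # only `dummies` is sent to the cloud

def resolve(token, dummies):
    """Post-decision step: recover real values from the local mapping."""
    mapping = LOCAL_VAULT[token]
    return {name: mapping[dummy] for name, dummy in dummies.items()}
```

An inference attack that only ever observes the dummy values has no statistical link back to the real attributes, which is the intuition behind the accuracy drop reported above.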
21 pages, 2189 KB  
Article
Hybrid CNN-Swin Transformer Model to Advance the Diagnosis of Maxillary Sinus Abnormalities on CT Images Using Explainable AI
by Mohammad Alhumaid and Ayman G. Fayoumi
Computers 2025, 14(10), 419; https://doi.org/10.3390/computers14100419 - 2 Oct 2025
Viewed by 205
Abstract
Accurate diagnosis of sinusitis is essential due to its widespread prevalence and its considerable impact on patient quality of life. While multiple imaging techniques are available for detecting maxillary sinus abnormalities, computed tomography (CT) remains the preferred modality because of its high sensitivity and spatial resolution. Although recent advances in deep learning have led to the development of automated methods for sinusitis classification, many existing models perform poorly in the presence of complex pathological features and offer limited interpretability, which hinders their integration into clinical workflows. In this study, we propose a hybrid deep learning framework that combines EfficientNetB0, a convolutional neural network, with the Swin Transformer, a vision transformer, to improve feature representation. An attention-based fusion module is used to integrate both local and global information, thereby enhancing diagnostic accuracy. To improve transparency and support clinical adoption, the model incorporates explainable artificial intelligence (XAI) techniques using Gradient-weighted Class Activation Mapping (Grad-CAM). This allows for visualization of the regions influencing the model’s predictions, helping radiologists assess the clinical relevance of the results. We evaluate the proposed method on a curated maxillary sinus CT dataset covering four diagnostic categories: Normal, Opacified, Polyposis, and Retention Cysts. The model achieves a classification accuracy of 95.83%, with precision, recall, and F1 score all at 95%. Grad-CAM visualizations indicate that the model consistently focuses on clinically significant regions of the sinus anatomy, supporting its potential utility as a reliable diagnostic aid in medical practice. Full article
46 pages, 3207 KB  
Article
Evaluating the Usability and Ethical Implications of Graphical User Interfaces in Generative AI Systems
by Amna Batool and Waqar Hussain
Computers 2025, 14(10), 418; https://doi.org/10.3390/computers14100418 - 2 Oct 2025
Viewed by 168
Abstract
The rapid development of generative artificial intelligence (GenAI) has revolutionized how individuals and organizations interact with technology. These systems, ranging from conversational agents to creative tools, are increasingly embedded in daily life. However, their effectiveness relies heavily on the usability of their graphical user interfaces (GUIs), which serve as the primary medium for user interaction. Moreover, the design of these interfaces must align with ethical principles such as transparency, fairness, and user autonomy to ensure responsible usage. This study evaluates the usability of the GUIs of three widely used GenAI applications, ChatGPT (GPT-4), Gemini (1.5), and Claude (3.5 Sonnet), using a heuristics-based and user-based testing approach (an experimental-qualitative investigation). A total of 12 participants from a research organization in Australia took part in structured usability evaluations, applying 14 usability heuristics to identify key issues and ethical concerns. The results indicate that Claude’s GUI is the most usable of the three, particularly due to its clean and minimalistic design. However, all applications demonstrated specific usability issues, such as insufficient error prevention, a lack of shortcuts, and limited customization options, affecting the efficiency and effectiveness of user interactions. Despite these challenges, each application exhibited unique strengths, suggesting that, while functional, all three need significant enhancements to fully support user satisfaction and ethical usage. The insights of this study can guide organizations in designing GenAI systems that are not only user-friendly but also ethically sound. Full article
27 pages, 2517 KB  
Article
A Guided Self-Study Platform of Integrating Documentation, Code, Visual Output, and Exercise for Flutter Cross-Platform Mobile Programming
by Safira Adine Kinari, Nobuo Funabiki, Soe Thandar Aung and Htoo Htoo Sandi Kyaw
Computers 2025, 14(10), 417; https://doi.org/10.3390/computers14100417 - 1 Oct 2025
Cited by 1 | Viewed by 277
Abstract
Nowadays, Flutter with the Dart programming language has become widely popular in mobile development, allowing developers to build multi-platform applications from one codebase. An increasing number of companies are adopting these technologies to create scalable and maintainable mobile applications. Despite this increasing relevance, university curricula often lack structured resources for Flutter/Dart, limiting opportunities for students to learn it in academic environments. To address this gap, we previously developed the Flutter Programming Learning Assistance System (FPLAS), which supports self-learning through interactive problems focused on code comprehension via code-based exercises and visual interfaces. However, we observed that many students, particularly those who already had some knowledge of object-oriented programming (OOP), completed the exercises without fully understanding even basic concepts. As a result, they may not be able to design and implement Flutter/Dart code independently, highlighting a mismatch between the system’s outcomes and its intended learning goals. In this paper, we propose a guided self-study approach that integrates documentation, code, visual output, and exercises in FPLAS. Two existing problem types, namely, Grammar Understanding Problems (GUP) and Element Fill-in-Blank Problems (EFP), are combined with documentation, code, and output into a new format called Integrated Introductory Problems (INTs). For evaluation, we generated 16 INT instances and conducted two rounds of evaluations. The first round, with 23 master’s students at Okayama University, Japan, showed high correct answer rates but low usability ratings. After revising the documentation and the system design, the second round, with 25 fourth-year undergraduate students at the same university, demonstrated high usability and consistent performance, which confirms the effectiveness of the proposal. Full article
33 pages, 4190 KB  
Article
Preserving Songket Heritage Through Intelligent Image Retrieval: A PCA and QGD-Rotational-Based Model
by Nadiah Yusof, Nazatul Aini Abd. Majid, Amirah Ismail and Nor Hidayah Hussain
Computers 2025, 14(10), 416; https://doi.org/10.3390/computers14100416 - 1 Oct 2025
Viewed by 273
Abstract
Malay songket motifs are a vital component of Malaysia’s intangible cultural heritage, characterized by intricate visual designs and deep cultural symbolism. However, the practical digital preservation and retrieval of these motifs present challenges, particularly due to the rotational variations typical in textile imagery. This study introduces a novel Content-Based Image Retrieval (CBIR) model that integrates Principal Component Analysis (PCA) for feature extraction and Quadratic Geometric Distance (QGD) for measuring similarity. To evaluate the model’s performance, a curated dataset comprising 413 original images and 4956 synthetically rotated songket motif images was utilized. The retrieval system featured metadata-driven preprocessing, dimensionality reduction, and multi-angle similarity assessment to address the issue of rotational invariance comprehensively. Quantitative evaluations using precision, recall, and F-measure metrics demonstrated that the proposed PCAQGD + Rotation technique achieved a mean F-measure of 59.72%, surpassing four benchmark retrieval methods. These findings confirm the model’s capability to accurately retrieve relevant motifs across varying orientations, thus supporting cultural heritage preservation efforts. The integration of PCA and QGD techniques effectively narrows the semantic gap between machine perception and human interpretation of motif designs. Future research should focus on expanding motif datasets and incorporating deep learning approaches to enhance retrieval precision, scalability, and applicability within larger national heritage repositories. Full article
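The retrieval pipeline above (PCA features scored by a minimum distance over rotated variants) can be sketched roughly as follows. This is our illustration only: plain Euclidean distance stands in for the paper's QGD measure, and the function names are assumptions.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on row-vector image features: return the mean and top-k axes."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def project(x, mu, axes):
    """Project one feature vector onto the principal axes."""
    return (np.asarray(x) - mu) @ axes.T

def rotation_min_distance(query_feat, rotated_feats):
    """Score a gallery motif by its closest rotated variant
    (Euclidean distance as a stand-in for the paper's QGD)."""
    return min(float(np.linalg.norm(query_feat - r)) for r in rotated_feats)
```

Taking the minimum over the pre-rendered rotations is one simple way to make retrieval tolerant to the rotational variations the abstract describes.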
17 pages, 1563 KB  
Article
Applying the Case-Based Axiomatic Design Assistant (CADA) to a Pharmaceutical Engineering Task: Implementation and Assessment
by Roland Wölfle, Irina Saur-Amaral and Leonor Teixeira
Computers 2025, 14(10), 415; https://doi.org/10.3390/computers14100415 - 1 Oct 2025
Viewed by 228
Abstract
Modern custom machine construction and automation projects face pressure to shorten innovation cycles, reduce project durations, and manage growing system complexity. Traditional methods like Waterfall and the V-Model reach their limits where end-to-end data traceability is vital throughout the product life cycle. This study introduces the implementation of a web application that incorporates a model-based design approach to assess its applicability and effectiveness in conceptual design scenarios. At the heart of this approach is the Case-Based Axiomatic Design Assistant (CADA), which utilizes Axiomatic Design principles to break down complex tasks into structured, analyzable sub-concepts. It also incorporates Case-Based Reasoning (CBR) to systematically store and reuse design knowledge. The effectiveness of the visual assistant was evaluated through expert-led assessments across different fields. The results revealed a significant reduction in design effort when utilizing prior knowledge, thus validating both the efficiency of CADA as a model and the effectiveness of its implementation within a user-centric application, highlighting its collaborative features. The findings support this approach as a scalable solution for enhancing conceptual design quality, facilitating knowledge reuse, and promoting agile development. Full article
20 pages, 5435 KB  
Article
Do LLMs Offer a Robust Defense Mechanism Against Membership Inference Attacks on Graph Neural Networks?
by Abdellah Jnaini and Mohammed-Amine Koulali
Computers 2025, 14(10), 414; https://doi.org/10.3390/computers14100414 - 1 Oct 2025
Viewed by 348
Abstract
Graph neural networks (GNNs) are deep learning models that process structured graph data. By leveraging their graph/node classification and link prediction capabilities, they have been effectively applied in multiple domains such as community detection, location-sharing services, and drug discovery. These powerful applications and the vast availability of graphs in diverse fields have facilitated the adoption of GNNs in privacy-sensitive contexts (e.g., banking systems and healthcare). Unfortunately, GNNs are vulnerable to the leakage of sensitive information through well-defined attacks. Our main focus is on membership inference attacks (MIAs), which allow an attacker to infer whether a given sample belongs to the training dataset. To prevent this, we introduce three LLM-guided defense mechanisms applied at the posterior level: posterior encoding with noise, knowledge distillation, and secure aggregation. Our proposed approaches not only successfully reduce MIA accuracy but also maintain the model’s performance on the node classification task. Our findings, validated through extensive experiments on widely used GNN architectures, offer insights into balancing privacy preservation with predictive performance. Full article
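Of the three defenses named, posterior encoding with noise is the simplest to sketch. The version below is our illustration, not the paper's method: it perturbs the released class-probability vector while pinning the predicted label, so confidence-based membership signals are blunted without changing the node-classification output.

```python
import random

def noisy_posterior(probs, scale=0.1, seed=0):
    """Release a perturbed probability vector with the original argmax kept.

    Illustrative sketch only: real defenses would calibrate the noise
    (e.g., to a formal privacy budget) rather than use a fixed scale.
    """
    rng = random.Random(seed)
    top = max(range(len(probs)), key=probs.__getitem__)
    noisy = [max(p + rng.uniform(-scale, scale), 1e-6) for p in probs]
    noisy[top] = max(noisy) + 1e-3   # preserve the predicted class
    total = sum(noisy)
    return [p / total for p in noisy]
```

Because an MIA typically thresholds on the released confidence scores, flattening them this way degrades the attacker's signal while task accuracy (the argmax) is untouched.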
17 pages, 2399 KB  
Article
SADAMB: Advancing Spatially-Aware Vision-Language Modeling Through Datasets, Metrics, and Benchmarks
by Giorgos Papadopoulos, Petros Drakoulis, Athanasios Ntovas, Alexandros Doumanoglou and Dimitris Zarpalas
Computers 2025, 14(10), 413; https://doi.org/10.3390/computers14100413 - 29 Sep 2025
Viewed by 215
Abstract
Understanding spatial relationships between objects in images is crucial for robotic navigation, augmented reality systems, and autonomous driving applications, among others. However, existing vision-language benchmarks often overlook explicit spatial reasoning, limiting progress in this area. We attribute this limitation in part to existing open datasets and evaluation metrics, which tend to overlook spatial details. To address this gap, we make three contributions: First, we greatly extend the COCO dataset with annotations of spatial relations, providing a resource for spatially aware image captioning and visual question answering. Second, we propose a new evaluation framework encompassing metrics that assess image captions’ spatial accuracy at both the sentence and dataset levels. Third, we conduct a benchmark study of various vision encoder–text decoder transformer architectures for image captioning using the introduced dataset and metrics. Results reveal that current models capture spatial information only partially, underscoring the challenges of spatially grounded caption generation. Full article
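A sentence-level spatial-accuracy metric of the kind proposed can be sketched as triple matching. The relation vocabulary and the (subject, relation, object) annotation format below are our assumptions for illustration, not the paper's actual definitions.

```python
# Hypothetical relation vocabulary; the paper's annotation scheme may differ.
SPATIAL_RELATIONS = {"left of", "right of", "above", "below",
                     "in front of", "behind"}

def spatial_triples(triples):
    """Keep only triples whose relation is an explicit spatial relation."""
    return {t for t in triples if t[1] in SPATIAL_RELATIONS}

def sentence_spatial_accuracy(predicted, reference):
    """Fraction of reference spatial triples recovered by a caption."""
    ref = spatial_triples(reference)
    if not ref:
        return 1.0  # nothing spatial to recover
    return len(spatial_triples(predicted) & ref) / len(ref)
```

A dataset-level score would then average this quantity over all annotated images.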
47 pages, 3137 KB  
Article
DietQA: A Comprehensive Framework for Personalized Multi-Diet Recipe Retrieval Using Knowledge Graphs, Retrieval-Augmented Generation, and Large Language Models
by Ioannis Tsampos and Emmanouil Marakakis
Computers 2025, 14(10), 412; https://doi.org/10.3390/computers14100412 - 29 Sep 2025
Viewed by 401
Abstract
Recipes available on the web often lack nutritional transparency and clear indicators of dietary suitability. While searching by title is straightforward, exploring recipes that meet combined dietary needs, nutritional goals, and ingredient-level preferences remains challenging. Most existing recipe search systems do not effectively support flexible multi-dietary reasoning in combination with user preferences and restrictions. For example, users may seek gluten-free and dairy-free dinners with suitable substitutions, or compound goals such as vegan and low-fat desserts. Recent systematic reviews report that most food recommender systems are content-based and often non-personalized, with limited support for dietary restrictions, ingredient-level exclusions, and multi-criteria nutrition goals. This paper introduces DietQA, an end-to-end, language-adaptable chatbot system that integrates a Knowledge Graph (KG), Retrieval-Augmented Generation (RAG), and a Large Language Model (LLM) to support personalized, dietary-aware recipe search and question answering. DietQA crawls Greek-language recipe websites to extract structured information such as titles, ingredients, and quantities. Nutritional values are calculated using validated food composition databases, and dietary tags are inferred automatically based on ingredient composition. All information is stored in a Neo4j-based knowledge graph, enabling flexible querying via Cypher. Users interact with the system through a natural-language chatbot interface, where they can express preferences for ingredients, nutrients, dishes, and diets, and filter recipes based on multiple factors such as ingredient availability, exclusions, and nutritional goals. DietQA supports multi-diet recipe search by retrieving both compliant recipes and those adaptable via ingredient substitutions, explaining how each result aligns with user preferences and constraints.
An LLM extracts intents and entities from user queries to support rule-based Cypher retrieval, while the RAG pipeline generates contextualized responses using the user query and preferences, retrieved recipes, statistical summaries, and substitution logic. The system integrates real-time updates of recipe and nutritional data, supporting up-to-date, relevant, and personalized recommendations. It is designed for language-adaptable deployment and has been developed and evaluated using Greek-language content. DietQA provides a scalable framework for transparent and adaptive dietary recommendation systems powered by conversational AI. Full article
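The rule-based Cypher retrieval step might look like the sketch below. The node labels, relationship types, and property names are our assumptions for illustration, not the system's actual Neo4j schema, and a production version would use query parameters rather than string interpolation.

```python
def build_cypher(diets, exclude_ingredients, max_kcal):
    """Assemble a Cypher query from extracted intents (hypothetical schema)."""
    clauses = ["MATCH (r:Recipe)"]
    where = [f"'{d}' IN r.diet_tags" for d in diets]      # e.g. 'vegan'
    where.append(f"r.kcal <= {max_kcal}")                  # nutrition goal
    for ing in exclude_ingredients:                        # ingredient exclusions
        where.append(f"NOT (r)-[:CONTAINS]->(:Ingredient {{name: '{ing}'}})")
    clauses.append("WHERE " + " AND ".join(where))
    clauses.append("RETURN r.title, r.kcal ORDER BY r.kcal")
    return "\n".join(clauses)
```

The resulting query string would be executed against the knowledge graph, and the retrieved recipes fed into the RAG pipeline alongside the user's preferences.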
27 pages, 2519 KB  
Article
Examining the Influence of AI on Python Programming Education: An Empirical Study and Analysis of Student Acceptance Through TAM3
by Manal Alanazi, Alice Li, Halima Samra and Ben Soh
Computers 2025, 14(10), 411; https://doi.org/10.3390/computers14100411 - 26 Sep 2025
Viewed by 513
Abstract
This study investigates the adoption of PyChatAI, a bilingual AI-powered chatbot for Python programming education, among female computer science students at Jouf University. Guided by the Technology Acceptance Model 3 (TAM3), it examines the determinants of user acceptance and usage behaviour. A Solomon Four-Group experimental design (N = 300) was used to control pre-test effects and isolate the impact of the intervention. PyChatAI provides interactive problem-solving, code explanations, and topic-based tutorials in English and Arabic. Measurement and structural models were validated via Confirmatory Factor Analysis (CFA) and Structural Equation Modelling (SEM), achieving excellent fit (CFI = 0.980, RMSEA = 0.039). Results show that perceived usefulness (β = 0.446, p < 0.001) and perceived ease of use (β = 0.243, p = 0.005) significantly influence intention to use, which in turn predicts actual usage (β = 0.406, p < 0.001). Trust, facilitating conditions, and hedonic motivation emerged as strong antecedents of ease of use, while social influence and cognitive factors had limited impact. These findings demonstrate that AI-driven bilingual tools can effectively enhance programming engagement in gender-specific, culturally sensitive contexts, offering practical guidance for integrating intelligent tutoring systems into computer science curricula. Full article
76 pages, 904 KB  
Review
Theoretical Bases of Methods of Counteraction to Modern Forms of Information Warfare
by Akhat Bakirov and Ibragim Suleimenov
Computers 2025, 14(10), 410; https://doi.org/10.3390/computers14100410 - 26 Sep 2025
Viewed by 1460
Abstract
This review is devoted to a comprehensive analysis of modern forms of information warfare in the context of digitalization and global interconnectedness. The work considers fundamental theoretical foundations—cognitive distortions, mass communication models, network theories and concepts of cultural code. The key tools of information influence are described in detail, including disinformation, the use of botnets, deepfakes, memetic strategies and manipulations in the media space. Particular attention is paid to methods of identifying and neutralizing information threats using artificial intelligence and digital signal processing, including partial digital convolutions, Fourier–Galois transforms, residue number systems and calculations in finite algebraic structures. The ethical and legal aspects of countering information attacks are analyzed, and geopolitical examples are given, demonstrating the peculiarities of applying various strategies. The review is based on a systematic analysis of 592 publications selected from the international databases Scopus, Web of Science and Google Scholar, covering research from fundamental works to modern publications of recent years (2015–2025). It is also based on regulatory legal acts, which ensures a high degree of relevance and representativeness. The results of the review can be used in the development of technologies for monitoring, detecting and filtering information attacks, as well as in the formation of national cybersecurity strategies. Full article
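As a small worked example of one signal-processing tool the review covers, a residue number system (RNS) represents an integer by its residues modulo pairwise-coprime moduli and recovers it with the Chinese Remainder Theorem. The moduli below are chosen only for illustration.

```python
from math import prod

MODULI = (3, 5, 7)  # pairwise coprime; dynamic range is 3*5*7 = 105

def to_rns(x):
    """Encode an integer as its residues modulo each modulus."""
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    """Decode via the Chinese Remainder Theorem."""
    M = prod(MODULI)
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
    return total % M
```

Arithmetic in an RNS is carry-free: addition and multiplication act independently on each residue channel, which is why such representations suit the parallel, finite-structure computations the review discusses.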
32 pages, 1432 KB  
Review
A Review of Multi-Microgrids Operation and Control from a Cyber-Physical Systems Perspective
by Ola Ali and Osama A. Mohammed
Computers 2025, 14(10), 409; https://doi.org/10.3390/computers14100409 - 25 Sep 2025
Viewed by 346
Abstract
Developing multi-microgrid (MMG) systems provides a new paradigm for power distribution systems with a higher degree of resilience, flexibility, and sustainability. The inclusion of communication networks as part of MMGs is critical for coordinating distributed energy resources (DERs) in real time and deploying energy management systems (EMS) efficiently. However, communication quality of service (QoS) parameters such as latency, jitter, packet loss, and throughput play an essential role in MMG control and stability, especially in highly dynamic and high-traffic situations. This paper presents a focused review of MMG systems from a cyber-physical viewpoint, particularly concerning the challenges and implications of communication network performance for energy management. The literature reviewed covers control strategies, models of communication infrastructure, cybersecurity challenges, and co-simulation platforms. We identify research gaps, including the need for scalable, real-time cyber-physical systems, the scarcity of studies examining communication QoS under realistic traffic conditions, and the lack of integrated cybersecurity strategies for MMGs. We suggest future research opportunities addressing these gaps to enhance the resiliency, adaptability, and sustainability of modern cyber-physical MMGs. Full article
20 pages, 2911 KB  
Article
Topological Machine Learning for Financial Crisis Detection: Early Warning Signals from Persistent Homology
by Ecaterina Guritanu, Enrico Barbierato and Alice Gatti
Computers 2025, 14(10), 408; https://doi.org/10.3390/computers14100408 - 24 Sep 2025
Cited by 1 | Viewed by 482
Abstract
We propose a strictly causal early-warning framework for financial crises based on topological signal extraction from multivariate return streams. Sliding windows of daily log-returns are mapped to point clouds, from which Vietoris–Rips persistence diagrams are computed and summarised by persistence landscapes. A single, interpretable indicator is obtained as the L2 norm of the landscape and passed through a causal decision rule (with thresholds α, β and run-length parameters s, t) that suppresses isolated spikes and collapses bursts to time-stamped warnings. On four major U.S. equity indices (S&P 500, NASDAQ, DJIA, Russell 2000) over 1999–2021, the method, at a fixed strictly causal operating point (α = β = 3.1, s = 57, t = 16), attains a balanced precision–recall trade-off (F1 ≈ 0.50) with an average lead time of about 34 days. It anticipates two of the four canonical crises and issues a contemporaneous signal for the 2008 global financial crisis. Sensitivity analyses confirm the qualitative robustness of the detector, while comparisons with permissive spike rules and volatility-based baselines demonstrate substantially fewer false alarms at comparable recall. The approach delivers interpretable topology-based warnings and provides a reproducible route to combining persistent homology with causal event detection in financial time series. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
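The causal decision rule can be illustrated with a simplified stand-in: warn once the topological indicator has exceeded a threshold for s consecutive days, then hold a refractory window of t days so a burst collapses to a single time-stamped warning. The exact roles of α, β, s, and t in the paper may differ; this only shows the shape of the idea.

```python
def causal_warnings(indicator, alpha, s, t):
    """Simplified stand-in for the paper's decision rule (not its exact form):
    emit a warning after s consecutive exceedances of alpha, then suppress
    further warnings for t days so one burst yields one warning."""
    warnings, run, cooldown = [], 0, 0
    for day, value in enumerate(indicator):
        if cooldown:                      # refractory window after a warning
            cooldown -= 1
            continue
        run = run + 1 if value > alpha else 0
        if run >= s:                      # sustained exceedance, not a spike
            warnings.append(day)
            run, cooldown = 0, t
    return warnings
```

Requiring s consecutive exceedances is what filters out isolated spikes, and the cooldown is what collapses a burst into one time-stamped alert.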
22 pages, 858 KB  
Systematic Review
Network Data Flow Collection Methods for Cybersecurity: A Systematic Literature Review
by Alessandro Carvalho Coutinho and Luciano Vieira de Araújo
Computers 2025, 14(10), 407; https://doi.org/10.3390/computers14100407 - 24 Sep 2025
Viewed by 373
Abstract
Network flow collection has become a cornerstone of cyber defence, yet the literature still lacks a consolidated view of which technologies are effective across different environments and conditions. We conducted a systematic review of 362 publications indexed in six digital libraries between January 2019 and July 2025, of which 51 met PRISMA 2020 eligibility criteria. All extraction materials are archived on OSF. NetFlow derivatives appear in 62.7% of the studies, IPFIX in 45.1%, INT/P4 or OpenFlow mirroring in 17.6%, and sFlow in 9.8%, with totals exceeding 100% because several papers evaluate multiple protocols. In total, 17 of the 51 studies (33.3%) tested production links of at least 40 Gbps, while others remained in laboratory settings. Fewer than half reported packet-loss thresholds or privacy controls, and none adopted a shared benchmark suite. These findings highlight trade-offs between throughput, fidelity, computational cost, and privacy, as well as gaps in encrypted-traffic support and GDPR-compliant anonymisation. Most importantly, our synthesis demonstrates that flow-collection methods directly shape what can be detected: some exporters are effective for volumetric attacks such as DDoS, while others enable visibility into brute-force authentication, botnets, or IoT malware. In other words, the choice of telemetry technology determines which threats and anomalous behaviours remain visible or hidden to defenders. By mapping technologies, metrics, and gaps, this review provides a single reference point for researchers, engineers, and regulators facing the challenges of flow-aware cybersecurity. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
32 pages, 852 KB  
Article
Benchmarking the Responsiveness of Open-Source Text-to-Speech Systems
by Ha Pham Thien Dinh, Rutherford Agbeshi Patamia, Ming Liu and Akansel Cosgun
Computers 2025, 14(10), 406; https://doi.org/10.3390/computers14100406 - 23 Sep 2025
Abstract
Responsiveness—the speed at which a text-to-speech (TTS) system produces audible output—is critical for real-time voice assistants yet has received far less attention than perceptual quality metrics. Existing evaluations often touch on latency but do not establish reproducible, open-source standards that capture responsiveness as a first-class dimension. This work introduces a baseline benchmark designed to fill that gap. Our framework unifies latency distribution, tail latency, and intelligibility within a transparent and dataset-diverse pipeline, enabling a fair and replicable comparison across 13 widely used open-source TTS models. By grounding evaluation in structured input sets ranging from single words to sentence-length utterances and adopting a methodology inspired by standardized inference benchmarks, we capture both typical and worst-case user experiences. Unlike prior studies that emphasize closed or proprietary systems, our focus is on establishing open, reproducible baselines rather than ranking against commercial references. The results reveal substantial variability across architectures, with some models delivering near-instant responses while others fail to meet interactive thresholds. By centering evaluation on responsiveness and reproducibility, this study provides an infrastructural foundation for benchmarking TTS systems and lays the groundwork for more comprehensive assessments that integrate both fidelity and speed. Full article
21 pages, 1229 KB  
Article
Eghatha: A Blockchain-Based System to Enhance Disaster Preparedness
by Ayoub Ghani, Ahmed Zinedine and Mohammed El Mohajir
Computers 2025, 14(10), 405; https://doi.org/10.3390/computers14100405 - 23 Sep 2025
Abstract
Natural disasters often strike unexpectedly, leaving thousands of victims and affected individuals each year. Effective disaster preparedness is critical to reducing these consequences and accelerating recovery. This paper presents Eghatha, a blockchain-based decentralized system designed to optimize humanitarian aid delivery during crises. By enabling secure and transparent transfers of donations and relief from donors to beneficiaries, the system enhances trust and operational efficiency. All transactions are immutably recorded and verified on a blockchain network, reducing fraud and misuse while adapting to local contexts. The platform is volunteer-driven, coordinated by civil society organizations with humanitarian expertise, and supported by government agencies involved in disaster response. Eghatha’s design accounts for disaster-related constraints—including limited mobility, varying levels of technological literacy, and resource accessibility—by offering a user-friendly interface, support for local currencies, and integration with locally available technologies. These elements ensure inclusivity for diverse populations. Aligned with Morocco’s “Digital Morocco 2030” strategy, the system contributes to both immediate crisis response and long-term digital transformation. Its scalable architecture and contextual sensitivity position the platform for broader adoption in similarly affected regions worldwide, offering a practical model for ethical, decentralized, and resilient humanitarian logistics. Full article
