Computers, Volume 14, Issue 7 (July 2025) – 27 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
26 pages, 2738 KiB  
Article
Mobile Augmented Reality Games Towards Smart Learning City Environments: Learning About Sustainability
by Margarida M. Marques, João Ferreira-Santos, Rita Rodrigues and Lúcia Pombo
Computers 2025, 14(7), 267; https://doi.org/10.3390/computers14070267 - 7 Jul 2025
Abstract
This study explores the potential of mobile augmented reality games (MARGs) in promoting sustainability competencies within the context of a smart learning city environment. Anchored in the EduCITY project, which integrates location-based AR-enhanced games into an interactive mobile app, the research investigates how these tools support Education for Sustainable Development (ESD). Employing a mixed-methods approach, data were collected through the GreenComp-based Questionnaire (GCQuest) and anonymous gameplay logs generated by the app. Thematic analysis of 358 responses revealed four key learning domains: ‘cultural awareness’, ‘environmental protection’, ‘sustainability awareness’, and ‘contextual knowledge’. Quantitative performance data from game logs highlighted substantial variation across games, with the highest performance found in those with more frequent AR integration and multiple iterative refinements. Participants engaging with AR-enhanced features (optional) outperformed others. This study provides empirical evidence for the use of MARGs to cultivate sustainability-related knowledge, skills, and attitudes, particularly when grounded in local realities and enhanced through thoughtful design. Beyond the EduCITY project, the study proposes a replicable model for assessing sustainability competencies, with implications for broader integration of AR across educational contexts in ESD. The paper concludes with a critical reflection on methodological limitations and suggests future directions, including adapting the GCQuest for use with younger learners in primary education. Full article
12 pages, 349 KiB  
Article
Agentic AI for Cultural Heritage: Embedding Risk Memory in Semantic Digital Twins
by George Pavlidis
Computers 2025, 14(7), 266; https://doi.org/10.3390/computers14070266 - 7 Jul 2025
Abstract
Cultural heritage preservation increasingly relies on data-driven technologies, yet most existing systems lack the cognitive and temporal depth required to support meaningful, transparent, and policy-informed decision-making. This paper proposes a conceptual framework for memory-enabled, semantically grounded AI agents in the cultural domain, showing how the integration of the ICCROM/CCI ABC method for risk assessment into the Panoptes ontology enables the structured encoding of risk cognition over time. This structured risk memory becomes the foundation for agentic reasoning, supporting prioritization, justification, and long-term preservation planning. It is argued that this approach constitutes a principled step toward the development of Cultural Agentic AI: autonomous systems that remember, reason, and act in alignment with cultural values. Proof-of-concept simulations illustrate how memory-enabled agents can trace evolving risk patterns, trigger policy responses, and evaluate mitigation outcomes through structured, explainable reasoning. Full article
20 pages, 632 KiB  
Article
Bridging or Burning? Digital Sustainability and PY Students’ Intentions to Adopt AI-NLP in Educational Contexts
by Mostafa Aboulnour Salem
Computers 2025, 14(7), 265; https://doi.org/10.3390/computers14070265 - 7 Jul 2025
Abstract
The current study examines the determinants influencing preparatory year (PY) students’ intentions to adopt AI-powered natural language processing (NLP) models, such as Copilot, ChatGPT, and Gemini, and how these intentions shape their conceptions of digital sustainability. Additionally, the extended unified theory of acceptance and use of technology (UTAUT) was integrated with a diversity of educational constructs, including content availability (CA), learning engagement (LE), learning motivation (LM), learner involvement (LI), and AI satisfaction (AS). Furthermore, responses of 274 PY students from Saudi Universities were analysed using partial least squares structural equation modelling (PLS-SEM) to evaluate both the measurement and structural models. Likewise, the findings indicated CA (β = 0.25), LE (β = 0.22), LM (β = 0.20), and LI (β = 0.18) significantly predicted user intention (UI), explaining 52.2% of its variance (R2 = 0.522). In turn, UI significantly predicted students’ digital sustainability conceptions (DSC) (β = 0.35, R2 = 0.451). However, AI satisfaction (AS) did not exhibit a moderating effect, suggesting uniformly high satisfaction levels among students. Hence, the study concluded that AI-powered NLP models are being adopted as learning assistant technologies and are also essential catalysts in promoting sustainable digital conceptions. Similarly, this study contributes both theoretically and practically by conceptualising digital sustainability as a learner-driven construct and linking educational technology adoption to its advancement. This aligns with global frameworks such as Sustainable Development Goals (SDGs) 4 and 9. The study highlights AI’s transformative potential in higher education by examining how user intention (UI) influences digital sustainability conceptions (DSC) among preparatory year students in Saudi Arabia. Given the demographic focus of the study, further research is recommended, particularly longitudinal studies, to track changes over time across diverse genders, academic specialisations, and cultural contexts. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
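The reported path coefficients lend themselves to a compact schematic reading. The equations below are only an illustrative rendering, assuming the standard linear PLS-SEM inner-model form; the coefficients and R² values are those quoted in the abstract, while ε denotes the structural error terms.

```latex
% Illustrative inner-model equations assembled from the reported coefficients
% (standard linear PLS-SEM form assumed; \varepsilon denotes structural error).
\begin{aligned}
UI  &= 0.25\,CA + 0.22\,LE + 0.20\,LM + 0.18\,LI + \varepsilon_{UI}, &\qquad R^2_{UI}  &= 0.522,\\
DSC &= 0.35\,UI + \varepsilon_{DSC}, &\qquad R^2_{DSC} &= 0.451.
\end{aligned}
```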
15 pages, 4430 KiB  
Article
A Comprehensive Approach to Instruction Tuning for Qwen2.5: Data Selection, Domain Interaction, and Training Protocols
by Xungang Gu, Mengqi Wang, Yangjie Tian, Ning Li, Jiaze Sun, Jingfang Xu, He Zhang, Ruohua Xu and Ming Liu
Computers 2025, 14(7), 264; https://doi.org/10.3390/computers14070264 - 5 Jul 2025
Abstract
Instruction tuning plays a pivotal role in aligning large language models with diverse tasks, yet its effectiveness hinges on the interplay of data quality, domain composition, and training strategies. This study moves beyond qualitative assessment to systematically quantify these factors through extensive experiments on data selection, data mixture, and training protocols. By quantifying performance trade-offs, we demonstrate that the implicit method SuperFiltering achieves an optimal balance, whereas explicit filters can induce capability conflicts. A fine-grained analysis of cross-domain interactions quantifies a near-linear competition between code and math, while showing that tool use data exhibits minimal interference. To mitigate these measured conflicts, we compare multi-task, sequential, and multi-stage training strategies, revealing that multi-stage training significantly reduces Conflict Rates while preserving domain expertise. Our findings culminate in a unified framework for optimizing instruction tuning, offering actionable, data-driven guidelines for balancing multi-domain performance and enhancing model generalization, thus advancing the field by providing a methodology to move from intuition to systematic optimization. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
13 pages, 1502 KiB  
Article
Anomaly Detection Based on 1DCNN Self-Attention Networks for Seismic Electric Signals
by Wei Li, Huaqin Gu, Yanlin Wen, Wenzhou Zhao and Zhaobin Wang
Computers 2025, 14(7), 263; https://doi.org/10.3390/computers14070263 - 5 Jul 2025
Abstract
The application of deep learning to seismic electric signal (SES) anomaly detection remains underexplored in geophysics. This study introduces the integration of a 1D convolutional neural network (1DCNN) with a self-attention mechanism to automate SES analysis in a station in a certain place in China. Utilizing physics-informed data augmentation, our framework adapts to real-world interference scenarios, including subway operations and tidal fluctuations. The model achieves an F1-score of 0.9797 on a 7-year dataset, demonstrating superior robustness and precision compared to traditional manual interpretation. This work establishes a practical deep learning solution for real-time geoelectric anomaly monitoring, offering a transformative tool for earthquake early warning systems. Full article
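As a rough illustration of the architecture family named in the abstract (the paper's exact layer configuration is not reproduced here), the sketch below stacks 1D convolutions and a self-attention layer over the resulting feature sequence; all layer widths and the two-class output are assumptions.

```python
# Minimal sketch of a 1D-CNN + self-attention classifier for waveform windows.
# Layer sizes and the two-class output are illustrative assumptions, not the
# configuration used in the paper.
import torch
import torch.nn as nn

class CNNSelfAttention(nn.Module):
    def __init__(self, in_channels=1, hidden=64, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(embed_dim=hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)       # (batch, time, hidden)
        a, _ = self.attn(h, h, h)              # self-attention over time steps
        return self.head(a.mean(dim=1))        # pool over time, then classify

model = CNNSelfAttention()
scores = model(torch.randn(8, 1, 512))         # 8 windows of 512 samples
print(scores.shape)                            # torch.Size([8, 2])
```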
16 pages, 2358 KiB  
Article
A Hybrid Content-Aware Network for Single Image Deraining
by Guoqiang Chai, Rui Yang, Jin Ge and Yulei Chen
Computers 2025, 14(7), 262; https://doi.org/10.3390/computers14070262 - 4 Jul 2025
Abstract
Rain streaks degrade the quality of optical images and seriously affect the effectiveness of subsequent vision-based algorithms. Although the applications of a convolutional neural network (CNN) and self-attention mechanism (SA) in single image deraining have shown great success, there are still unresolved issues regarding the deraining performance and the large computational load. The work in this paper fully coordinates and utilizes the advantages between CNN and SA and proposes a hybrid content-aware deraining network (CAD) to reduce complexity and generate high-quality results. Specifically, we construct the CADBlock, including the content-aware convolution and attention mixer module (CAMM) and the multi-scale double-gated feed-forward module (MDFM). In CAMM, the attention mechanism is used for intricate windows to generate abundant features and simple convolution is used for plain windows to reduce computational costs. In MDFM, multi-scale spatial features are double-gated fused to preserve local detail features and enhance image restoration capabilities. Furthermore, a four-token contextual attention module (FTCA) is introduced to explore the content information among neighbor keys to improve the representation ability. Both qualitative and quantitative validations on synthetic and real-world rain images demonstrate that the proposed CAD can achieve a competitive deraining performance. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
22 pages, 7580 KiB  
Article
Fuzzy-Based Multi-Modal Query-Forwarding in Mini-Datacenters
by Sami J. Habib and Paulvanna Nayaki Marimuthu
Computers 2025, 14(7), 261; https://doi.org/10.3390/computers14070261 - 1 Jul 2025
Abstract
The rapid growth of Internet of Things (IoT) enabled devices in industrial environments and the associated increase in data generation are paving the way for the development of localized, distributed datacenters. In this paper, we have proposed a novel mini-datacenter in the form of wireless sensor networks to efficiently handle query-based data collection from Industrial IoT (IIoT) devices. The mini-datacenter comprises a command center, gateways, and IoT sensors, designed to manage stochastic query-response traffic flow. We have developed a duplication/aggregation query flow model, tailored to emphasize reliable transmission. We have developed a dataflow management framework that employs a multi-modal query forwarding approach to forward queries from the command center to gateways under varying environments. The query forwarding includes coarse-grain and fine-grain strategies, where the coarse-grain strategy uses a direct data flow using a single gateway at the expense of reliability, while the fine-grain approach uses redundant gateways to enhance reliability. A fuzzy-logic-based intelligence system is integrated into the framework to dynamically select the appropriate granularity of the forwarding strategy based on the resource availability and network conditions, aided by a buffer watching algorithm that tracks real-time buffer status. We carried out several experiments with gateway nodes varying from 10 to 100 to evaluate the framework’s scalability and robustness in handling the query flow under complex environments. The experimental results demonstrate that the framework provides a flexible and adaptive solution that balances buffer usage while maintaining over 95% reliability in most queries. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
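The abstract does not disclose the membership functions or rule base of the fuzzy selector, so the fragment below is only a schematic of how such a granularity decision can be wired: two crisp inputs (buffer occupancy from the buffer-watching algorithm, and link quality) are fuzzified with triangular memberships, a two-rule base is evaluated, and the stronger conclusion picks the forwarding mode. All memberships and rules are invented for illustration.

```python
# Schematic fuzzy selector for coarse- vs fine-grain query forwarding.
# Membership functions and rules are illustrative assumptions only.
def tri(x, a, b, c):
    """Triangular membership with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def forwarding_mode(buffer_occupancy, link_quality):
    # Fuzzify the crisp inputs (both assumed to lie in [0, 1]).
    buf_low   = tri(buffer_occupancy, -0.5, 0.0, 0.6)
    buf_high  = tri(buffer_occupancy,  0.4, 1.0, 1.5)
    link_bad  = tri(link_quality,     -0.5, 0.0, 0.6)
    link_good = tri(link_quality,      0.4, 1.0, 1.5)

    # Rule 1: IF buffers are lightly loaded OR the link is unreliable THEN use
    #         redundant (fine-grain) forwarding for reliability.
    # Rule 2: IF buffers are congested AND the link is good THEN use a single
    #         gateway (coarse-grain) to conserve resources.
    fine   = max(buf_low, link_bad)
    coarse = min(buf_high, link_good)
    return "fine-grain" if fine >= coarse else "coarse-grain"

print(forwarding_mode(0.2, 0.3))   # lightly loaded, poor link  -> fine-grain
print(forwarding_mode(0.9, 0.9))   # congested buffers, good link -> coarse-grain
```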
12 pages, 6638 KiB  
Article
Vision-Degree-Driven Loading Strategy for Real-Time Large-Scale Scene Rendering
by Yu Ding and Ying Song
Computers 2025, 14(7), 260; https://doi.org/10.3390/computers14070260 - 1 Jul 2025
Abstract
Large-scale scene rendering faces challenges in managing massive scene data and mitigating rendering latency caused by suboptimal loading sequences. Although current approaches utilize Level of Detail (LOD) for dynamic resource loading, two limitations remain. One is loading priority, which does not adequately consider the factors affecting visual effects such as LOD selection and visible area. The other is the insufficient trade-off between rendering quality and loading latency. To this end, we propose a loading prioritization metric called Vision Degree (VD), derived from LOD selection, loading time, and the trade-off between rendering quality and loading latency. During rendering, VDs are sorted in descending order to achieve an optimized loading and unloading sequence. At the same time, a compensation factor is proposed to further compensate for the visual loss caused by the reduced LOD level and to optimize the rendering effect. Finally, we optimize the initial viewpoint selection by minimizing the average model-to-viewpoint distance, thereby reducing the initial scene loading time. Experimental results demonstrate that our method reduces the rendering latency by 24–29% compared with the existing Area-of-Interest (AOI)-based loading strategy, while maintaining comparable visual quality. Full article
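The exact Vision Degree formula is defined in the paper itself; purely as a hypothetical reading of the factors the abstract lists (LOD selection, loading time, visible area, and a compensation factor for reduced LOD), a loader driven by such a metric might be organised as follows. The names and the scoring expression are placeholders, not the published definition.

```python
# Hypothetical loader driven by a per-tile priority score; the scoring
# expression only mirrors the factors the abstract names (LOD selection,
# loading time, LOD compensation) and is NOT the published VD formula.
from dataclasses import dataclass

@dataclass
class Tile:
    name: str
    selected_lod: int      # LOD level chosen for the current viewpoint (0 = finest)
    visible_area: float    # projected screen coverage in [0, 1]
    load_time_s: float     # estimated time to stream this tile

def vision_degree(t: Tile, compensation: float = 0.3) -> float:
    # Favour tiles that cover more of the screen, penalise slow loads,
    # and partially compensate tiles demoted to a coarser LOD.
    return t.visible_area * (1.0 + compensation * t.selected_lod) / (1.0 + t.load_time_s)

def build_loading_queue(tiles):
    # Sort by descending priority so the most visually important tiles stream first.
    return sorted(tiles, key=vision_degree, reverse=True)

queue = build_loading_queue([
    Tile("plaza",  selected_lod=0, visible_area=0.60, load_time_s=0.8),
    Tile("tower",  selected_lod=2, visible_area=0.25, load_time_s=0.4),
    Tile("suburb", selected_lod=3, visible_area=0.05, load_time_s=1.5),
])
print([t.name for t in queue])     # ['plaza', 'tower', 'suburb']
```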
14 pages, 236 KiB  
Systematic Review
Artificial Intelligence and the Future of Mental Health in a Digitally Transformed World
by Aggeliki Kelly Fanarioti and Kostas Karpouzis
Computers 2025, 14(7), 259; https://doi.org/10.3390/computers14070259 - 30 Jun 2025
Abstract
Artificial Intelligence (AI) is reshaping mental healthcare by enabling new forms of diagnosis, therapy, and patient monitoring. Yet this digital transformation raises complex policy and ethical questions that remain insufficiently addressed. In this paper, we critically examine how AI-driven innovations are being integrated into mental health systems across different global contexts, with particular attention to governance, regulation, and social justice. The study follows the PRISMA-ScR methodology to ensure transparency and methodological rigor, while also acknowledging its inherent limitations, such as the emphasis on breadth over depth and the exclusion of non-English sources. Drawing on international guidelines, academic literature, and emerging national strategies, it identifies both opportunities, such as improved access and personalized care, and threats, including algorithmic bias, data privacy risks, and diminished human oversight. Special attention is given to underrepresented populations and the risks of digital exclusion. The paper argues for a value-driven approach that centers equity, transparency, and informed consent in the deployment of AI tools. It concludes with actionable policy recommendations to support the ethical implementation of AI in mental health, emphasizing the need for cross-sectoral collaboration and global accountability mechanisms. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
26 pages, 3334 KiB  
Review
Simulation-Based Development of Internet of Cyber-Things Using DEVS
by Laurent Capocchi, Bernard P. Zeigler and Jean-Francois Santucci
Computers 2025, 14(7), 258; https://doi.org/10.3390/computers14070258 - 30 Jun 2025
Abstract
Simulation-based development is a structured approach that uses formal models to design and test system behavior before building the actual system. The Internet of Things (IoT) connects physical devices equipped with sensors and software to collect and exchange data. Cyber-Physical Systems (CPSs) integrate computing directly into physical processes to enable real-time control. This paper reviews the Discrete-Event System Specification (DEVS) formalism and explores how it can serve as a unified framework for designing, simulating, and implementing systems that combine IoT and CPS—referred to as the Internet of Cyber-Things (IoCT). Through case studies that include home automation, solar energy monitoring, conflict management, and swarm robotics, the paper reviews how DEVS enables construction of modular, scalable, and reusable models. The role of the System Entity Structure (SES) is also discussed, highlighting its contribution in organizing models and generating alternative system configurations. With this background as basis, the paper evaluates whether DEVS provides the necessary modeling power and continuity across stages to support the development of complex IoCT systems. The paper concludes that DEVS offers a robust and flexible foundation for developing IoCT systems, supporting both expressiveness and seamless transition from design to real-world deployment. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
13 pages, 3210 KiB  
Article
Bridging Tradition and Innovation: Transformative Educational Practices in Museums with AI and VR
by Michele Domenico Todino, Eliza Pitri, Argyro Fella, Antonia Michaelidou, Lucia Campitiello, Francesca Placanica, Stefano Di Tore and Maurizio Sibilio
Computers 2025, 14(7), 257; https://doi.org/10.3390/computers14070257 - 30 Jun 2025
Abstract
This paper explores the intersection of folk art, museums, and education in the 20th century, with a focus on the concept of art as experience, emphasizing the role of museums as active, inclusive learning spaces. A collaboration between the University of Salerno and the University of Nicosia has developed virtual museum environments using virtual reality (VR) to enhance engagement with cultural heritage. These projects aim to make museums more accessible and interactive, with future potential in integrating artificial intelligence-driven non-player characters (NPCs) and VR strategies for personalized visitor experiences of the Nicosia Folk Art Museum. Full article
21 pages, 1414 KiB  
Article
An xLSTM–XGBoost Ensemble Model for Forecasting Non-Stationary and Highly Volatile Gasoline Price
by Fujiang Yuan, Xia Huang, Hong Jiang, Yang Jiang, Zihao Zuo, Lusheng Wang, Yuxin Wang, Shaojie Gu and Yanhong Peng
Computers 2025, 14(7), 256; https://doi.org/10.3390/computers14070256 - 29 Jun 2025
Abstract
High-frequency fluctuations in the international crude oil market have led to multilevel characteristics in China’s domestic refined oil pricing mechanism. To address the poor fitting performance of single deep learning models on oil price data, which hampers accurate gasoline price prediction, this paper proposes a gasoline price prediction method based on a combined xLSTM–XGBoost model. Using gasoline price data from June 2000 to November 2024 in Sichuan Province as a sample, the data are decomposed via STL decomposition to extract trend, residual, and seasonal components. The xLSTM model is then employed to predict the trend and seasonal components, while XGBoost predicts the residual component. Finally, the predictions from both models are combined to produce the final forecast. The experimental results demonstrate that the proposed xLSTM–XGBoost model reduces the MAE by 14.8% compared to the second-best sLSTM–XGBoost model and by 83% compared to the traditional LSTM model, significantly enhancing prediction accuracy. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
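The decompose-then-recombine structure described above can be sketched in a few lines; since xLSTM is not a standard library component, a trivial last-value extrapolation stands in for the xLSTM stage here, so this is a structural illustration only, not the paper's model.

```python
# Structural sketch of the decompose-predict-recombine idea: STL splits the
# series into trend, seasonal, and residual parts; a trivial last-value
# forecaster stands in for the paper's xLSTM on trend + seasonal, while
# XGBoost fits lagged features of the residual. Illustrative only.
import numpy as np
from statsmodels.tsa.seasonal import STL
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
y = 7.0 + 0.01 * np.arange(300) + np.sin(np.arange(300) * 2 * np.pi / 12) \
    + 0.1 * rng.standard_normal(300)              # synthetic monthly-style price series

res = STL(y, period=12).fit()
trend = np.asarray(res.trend)
seasonal = np.asarray(res.seasonal)
resid = np.asarray(res.resid)

# Placeholder for the xLSTM stage: naive extrapolation of trend + seasonal.
trend_seasonal_forecast = trend[-1] + seasonal[-12]

# XGBoost on lagged residuals for the remaining component.
lags = 12
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
target = resid[lags:]
model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, target)
resid_forecast = model.predict(resid[-lags:].reshape(1, -1))[0]

print("next-step forecast:", trend_seasonal_forecast + resid_forecast)
```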
25 pages, 2892 KiB  
Article
Focal Correlation and Event-Based Focal Visual Content Text Attention for Past Event Search
by Pranita P. Deshmukh and S. Poonkuntran
Computers 2025, 14(7), 255; https://doi.org/10.3390/computers14070255 - 28 Jun 2025
Abstract
Every minute, vast amounts of video and image data are uploaded worldwide to the internet and social media platforms, creating a rich visual archive of human experiences—from weddings and family gatherings to significant historical events such as war crimes and humanitarian crises. When properly analyzed, this multimodal data holds immense potential for reconstructing important events and verifying information. However, challenges arise when images and videos lack complete annotations, making manual examination inefficient and time-consuming. To address this, we propose a novel event-based focal visual content text attention (EFVCTA) framework for automated past event retrieval using visual question answering (VQA) techniques. Our approach integrates a Long Short-Term Memory (LSTM) model with convolutional non-linearity and an adaptive attention mechanism to efficiently identify and retrieve relevant visual evidence alongside precise answers. The model is designed with robust weight initialization, regularization, and optimization strategies and is evaluated on the Common Objects in Context (COCO) dataset. The results demonstrate that EFVCTA achieves the highest performance across all metrics (88.7% accuracy, 86.5% F1-score, 84.9% mAP), outperforming state-of-the-art baselines. The EFVCTA framework demonstrates promising results for retrieving information about past events captured in images and videos and can be effectively applied to scenarios such as documenting training programs, workshops, conferences, and social gatherings in academic institutions. Full article
24 pages, 589 KiB  
Article
FaceCloseup: Enhancing Mobile Facial Authentication with Perspective Distortion-Based Liveness Detection
by Yingjiu Li, Yan Li and Zilong Wang
Computers 2025, 14(7), 254; https://doi.org/10.3390/computers14070254 - 27 Jun 2025
Abstract
Facial authentication has gained widespread adoption as a biometric authentication method, offering a convenient alternative to traditional password-based systems, particularly on mobile devices equipped with front-facing cameras. While this technology enhances usability and security by eliminating password management, it remains highly susceptible to spoofing attacks. Adversaries can exploit facial recognition systems using pre-recorded photos, videos, or even sophisticated 3D models of victims’ faces to bypass authentication mechanisms. The increasing availability of personal images on social media further amplifies this risk, making robust anti-spoofing mechanisms essential for secure facial authentication. To address these challenges, we introduce FaceCloseup, a novel liveness detection technique that strengthens facial authentication by leveraging perspective distortion inherent in close-up shots of real, 3D faces. Instead of relying on additional sensors or user-interactive gestures, FaceCloseup passively analyzes facial distortions in video frames captured by a mobile device’s camera, improving security without compromising user experience. FaceCloseup effectively distinguishes live faces from spoofed attacks by identifying perspective-based distortions across different facial regions. The system achieves a 99.48% accuracy in detecting common spoofing methods—including photo, video, and 3D model-based attacks—and demonstrates 98.44% accuracy in differentiating between individual users. By operating entirely on-device, FaceCloseup eliminates the need for cloud-based processing, reducing privacy concerns and potential latency in authentication. Its reliance on natural device movement ensures a seamless authentication experience while maintaining robust security. Full article
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
24 pages, 20201 KiB  
Article
EMGP-Net: A Hybrid Deep Learning Architecture for Breast Cancer Gene Expression Prediction
by Oumeima Thâalbi and Moulay A. Akhloufi
Computers 2025, 14(7), 253; https://doi.org/10.3390/computers14070253 - 26 Jun 2025
Abstract
Background: The accurate prediction of gene expression is essential in breast cancer research. However, spatial transcriptomics technologies are usually too expensive. Recent studies have used whole-slide images combined with spatial transcriptomics data to predict breast cancer gene expression. To this end, we present EMGP-Net, a novel hybrid deep learning architecture developed by combining two state-of-the-art models, MambaVision and EfficientFormer. Method: EMGP-Net was first trained on the HER2+ dataset, containing data from eight patients using a leave-one-patient-out approach. To ensure generalizability, we conducted external validation and alternately trained EMGP-Net on the HER2+ dataset and tested it on the STNet dataset, containing data from 23 patients, and vice versa. We evaluated EMGP-Net’s ability to predict the expression of 250 selected genes. EMGP-Net mixes features from both models, and uses attention mechanisms followed by fully connected layers. Results: Our model outperformed both EfficientFormer and MambaVision, which were trained separately on the HER2+ dataset, achieving the highest PCC of 0.7903 for the PTMA gene, with the top 14 genes having PCCs greater than 0.7, including other important breast cancer biomarkers such as GNAS and B2M. The external validation showed that it also outperformed models that were retrained with our approach. Conclusions: The results of EMGP-Net were better than those of existing models, showing that the combination of advanced models is an effective strategy to improve performance in this task. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
13 pages, 12530 KiB  
Article
Data Augmentation-Driven Improvements in Malignant Lymphoma Image Classification
by Sandi Baressi Šegota, Vedran Mrzljak, Ivan Lorencin and Nikola Anđelić
Computers 2025, 14(7), 252; https://doi.org/10.3390/computers14070252 - 26 Jun 2025
Abstract
Artificial intelligence (AI)-based techniques have become increasingly prevalent in the classification of medical images. However, the effectiveness of such methods is often constrained by the limited availability of annotated medical data. To address this challenge, data augmentation is frequently employed. This study investigates the impact of a novel augmentation approach on the classification performance of malignant lymphoma histopathological images. The proposed method involves slicing high-resolution images (1388 × 1040 pixels) into smaller segments (224 × 224 pixels) before applying standard augmentation techniques such as flipping and rotation. The original dataset consists of 374 images, comprising 32.6% mantle cell lymphoma, 30.2% chronic lymphocytic leukemia, and 37.2% follicular lymphoma. Through slicing, the dataset was expanded to 8976 images, and further augmented to 53,856 images. The visual geometry group with 16 layers (VGG16) convolutional neural network (CNN) was trained and evaluated on three datasets: the original, the sliced, and the sliced with augmentation. Performance was assessed using accuracy, AUC, precision, sensitivity, specificity, and F1 score. The results demonstrate a substantial improvement in classification performance when slicing was employed, with additional, albeit smaller, gains achieved through subsequent augmentation. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
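The slicing step is easy to reproduce from the stated resolutions: a 1388 × 1040 image yields 6 × 4 = 24 non-overlapping 224 × 224 tiles, and 374 × 24 = 8976 matches the sliced dataset size reported above; six flip/rotation variants per tile then give 53,856. The fragment below is a minimal sketch of that slicing and augmentation.

```python
# Slice a high-resolution histopathology image into 224x224 tiles, then
# augment each tile with flips and 90-degree rotations as described above.
import numpy as np

def slice_image(img, tile=224):
    h, w = img.shape[:2]
    return [img[r:r + tile, c:c + tile]
            for r in range(0, h - tile + 1, tile)
            for c in range(0, w - tile + 1, tile)]

def augment(tile):
    # Original, horizontal/vertical flips, and the three 90-degree rotations.
    return [tile, np.flip(tile, axis=1), np.flip(tile, axis=0),
            np.rot90(tile, 1), np.rot90(tile, 2), np.rot90(tile, 3)]

image = np.zeros((1040, 1388, 3), dtype=np.uint8)   # placeholder RGB slide image
tiles = slice_image(image)
print(len(tiles))                                   # 24 tiles (6 columns x 4 rows)
print(len(tiles) * len(augment(tiles[0])))          # 144 samples per original image
```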
67 pages, 2821 KiB  
Review
Hardware and Software Methods for Secure Obfuscation and Deobfuscation: An In-Depth Analysis
by Khaled Saleh, Dirar Darweesh, Omar Darwish, Eman Hammad and Fathi Amsaad
Computers 2025, 14(7), 251; https://doi.org/10.3390/computers14070251 - 25 Jun 2025
Abstract
The swift evolution of information technology and growing connectivity in critical applications have elevated cybersecurity, protecting and certifying software and designs against rising cyber threats. For example, software and hardware have become highly susceptible to various threats, like reverse engineering, cloning, tampering, and IP piracy. While various techniques exist to enhance software and hardware security, including encryption, native code, and secure server-side execution, obfuscation emerges as a preeminent and cost-efficient solution to address these challenges. Obfuscation purposely converts software and hardware to improve complexity for probable adversaries, targeting obscure realization operations while preserving safety and functionality. Former research has commonly engaged features of obfuscation, deobfuscation, and obfuscation detection approaches. A novel departure from conventional research methodologies, this revolutionary comprehensive article reviews these approaches in depth. It explicates the correlations and dynamics among them. Furthermore, it conducts a meticulous comparative analysis, evaluating obfuscation techniques across parameters such as the methodology, testing procedures, efficacy, associated drawbacks, market applicability, and prospects for future enhancement. This review aims to assist organizations in wisely electing obfuscation techniques for firm protection against threats and enhances the strategic choice of deobfuscation and obfuscation detection techniques to recognize vulnerabilities in software and hardware products. This empowerment permits organizations to proficiently treat security risks, guaranteeing secure software and hardware solutions, and improving user satisfaction for maximized profitability. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
39 pages, 1839 KiB  
Review
The Integration of the Internet of Things (IoT) Applications into 5G Networks: A Review and Analysis
by Aymen I. Zreikat, Zakwan AlArnaout, Ahmad Abadleh, Ersin Elbasi and Nour Mostafa
Computers 2025, 14(7), 250; https://doi.org/10.3390/computers14070250 - 25 Jun 2025
Abstract
The incorporation of Internet of Things (IoT) applications into 5G networks marks a significant step towards realizing the full potential of connected systems. 5G networks, with their ultra-low latency, high data speeds, and huge interconnection, provide a perfect foundation for IoT ecosystems to thrive. This connectivity offers a diverse set of applications, including smart cities, self-driving cars, industrial automation, healthcare monitoring, and agricultural solutions. IoT devices can improve their reliability, real-time communication, and scalability by exploiting 5G’s advanced capabilities such as network slicing, edge computing, and enhanced mobile broadband. Furthermore, the convergence of IoT with 5G fosters interoperability, allowing for smooth communication across diverse devices and networks. This study examines the fundamental technical applications, obstacles, and future perspectives for integrating IoT applications with 5G networks, emphasizing the potential benefits while also addressing essential concerns such as security, energy efficiency, and network management. The results of this review and analysis will act as a valuable resource for researchers, industry experts, and policymakers involved in the progression of 5G technologies and their incorporation with IT solutions. Full article
32 pages, 3349 KiB  
Article
The PECC Framework: Promoting Gender Sensitivity and Gender Equality in Computer Science Education
by Bernadette Spieler and Carina Girvan
Computers 2025, 14(7), 249; https://doi.org/10.3390/computers14070249 - 25 Jun 2025
Abstract
There are increasing expectations that we should live in a digitally and computationally literate society. For many young people, particularly girls, school is the one place that provides an opportunity to develop the necessary knowledge and skills. This environment can either perpetuate and reinforce or eliminate existing gender inequalities. In this article, we present the “PLAYING, ENGAGEMENT, CREATIVITY, CREATING” (PECC) Framework, a practical guide to supporting teachers in the design of gender-sensitive learning activities, bringing students’ own interests to the fore. Through a six-year, mixed-methods, design-based research approach, PECC—along with supporting resources and digital tools—was developed through iterative cycles of theoretical analysis, empirical data (both qualitative and quantitative), critical reflection, and case study research. Exploratory and instrumental case studies investigated the promise and limitations of the emerging framework, involving 43 teachers and 1453 students in secondary-school classrooms (including online during COVID-19) in Austria, Germany, and Switzerland. Quantitative data (e.g., surveys, usage metrics) and qualitative findings (e.g., interviews, observations, classroom artefacts) were analyzed across the case studies to inform successive refinements of the framework. The case study results are presented alongside the theoretically informed discussions and practical considerations that informed each stage of PECC. PECC has had a real-world, tangible impact at a national level. It provides an essential link between research and practice, offering a theoretically informed and empirically evidenced framework for teachers and policy makers. Full article
17 pages, 1372 KiB  
Article
Dark Web Traffic Classification Based on Spatial–Temporal Feature Fusion and Attention Mechanism
by Junwei Li and Zhisong Pan
Computers 2025, 14(7), 248; https://doi.org/10.3390/computers14070248 - 25 Jun 2025
Abstract
There is limited research on current traffic classification methods for dark web traffic and the classification results are not very satisfactory. To improve the prediction accuracy and classification precision of dark web traffic, a classification method (CLA) based on spatial–temporal feature fusion and an attention mechanism is proposed. When processing raw bytes, the combination of a CNN and LSTM is used to extract local spatial–temporal features from raw data packets, while an attention module is introduced to process key spatial–temporal data. The experimental results show that this model can effectively extract and utilize the spatial–temporal features of traffic data and use the attention mechanism to measure the importance of different features, thereby achieving accurate predictions of different dark web traffic. In comparative experiments, the accuracy, recall rate, and F1 score of this model are higher than those of other traditional methods. Full article
24 pages, 2258 KiB  
Article
Machine Learning for Anomaly Detection in Blockchain: A Critical Analysis, Empirical Validation, and Future Outlook
by Fouzia Jumani and Muhammad Raza
Computers 2025, 14(7), 247; https://doi.org/10.3390/computers14070247 - 25 Jun 2025
Abstract
Blockchain technology has transformed how data are stored and transactions are processed in a distributed environment. Blockchain assures data integrity by validating transactions through the consensus of a distributed ledger involving several miners as validators. Although blockchain provides multiple advantages, it has also been subject to some malicious attacks, such as a 51% attack, which is considered a potential risk to data integrity. These attacks can be detected by analyzing the anomalous node behavior of miner nodes in the network, and data analysis plays a vital role in detecting and overcoming these attacks to make a secure blockchain. Integrating machine learning algorithms with blockchain has become a significant approach to detecting anomalies such as a 51% attack and double spending. This study comprehensively analyzes various machine learning (ML) methods to detect anomalies in blockchain networks. It presents a Systematic Literature Review (SLR) and a classification to explore the integration of blockchain and ML for anomaly detection in blockchain networks. We implemented Random Forest, AdaBoost, XGBoost, K-means, and Isolation Forest ML models to evaluate their performance in detecting Blockchain anomalies, such as a 51% attack. Additionally, we identified future research directions, including challenges related to scalability, network latency, imbalanced datasets, the dynamic nature of anomalies, and the lack of standardization in blockchain protocols. This study acts as a benchmark for additional research on how ML algorithms identify anomalies in blockchain technology and aids ongoing studies in this rapidly evolving field. Full article
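As a minimal, generic illustration of the unsupervised side of the models benchmarked above (the features and data here are synthetic placeholders, not the study's dataset), an Isolation Forest can be fitted to per-node behaviour features and used to flag an outlying miner node.

```python
# Generic sketch: flag anomalous miner-node behaviour with an Isolation Forest.
# Feature names and the synthetic data are placeholders, not the study's dataset.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: blocks mined per hour, share of network hash rate, orphaned-block rate.
normal_nodes = rng.normal(loc=[5.0, 0.02, 0.01], scale=[1.0, 0.01, 0.005], size=(500, 3))
suspect_node = np.array([[40.0, 0.55, 0.20]])       # e.g. a node approaching majority hash power

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_nodes)
print(detector.predict(suspect_node))               # -1 => flagged as anomalous
print(detector.predict(normal_nodes[:3]))           # mostly +1 => considered normal
```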
14 pages, 385 KiB  
Article
A Comparative Evaluation of Time-Series Forecasting Models for Energy Datasets
by Nikitas Maragkos and Ioannis Refanidis
Computers 2025, 14(7), 246; https://doi.org/10.3390/computers14070246 - 24 Jun 2025
Abstract
Time series forecasting plays a critical role across numerous domains such as finance, energy, and healthcare. While traditional statistical models have long been employed for this task, recent advancements in deep learning have led to a new generation of state-of-the-art (SotA) models that offer improved accuracy and flexibility. However, there remains a gap in understanding how these forecasting models perform under different forecasting scenarios, especially when incorporating external variables. This paper presents a comprehensive review and empirical evaluation of seven leading deep learning models for time series forecasting. We introduce a novel dataset that combines energy consumption and weather data from 24 European countries, allowing us to benchmark model performance across various forecasting horizons, granularities, and variable types. Our findings offer practical insights into model strengths and limitations, guiding future applications and research in time series forecasting. Full article
30 pages, 2599 KiB  
Article
Exploring the Role of Artificial Intelligence in Detecting Advanced Persistent Threats
by Pedro Ramos Brandao
Computers 2025, 14(7), 245; https://doi.org/10.3390/computers14070245 - 23 Jun 2025
Abstract
The rapid evolution of cyber threats, particularly Advanced Persistent Threats (APTs), poses significant challenges to the security of information systems. This paper explores the pivotal role of Artificial Intelligence (AI) in enhancing the detection and mitigation of APTs. By leveraging machine learning algorithms and data analytics, AI systems can identify patterns and anomalies that are indicative of sophisticated cyber-attacks. This study examines various AI-driven methodologies, including anomaly detection, predictive analytics, and automated response systems, highlighting their effectiveness in real-time threat detection and response. Furthermore, we discuss the integration of AI into existing cybersecurity frameworks, emphasizing the importance of collaboration between human analysts and AI systems in combating APTs. The findings suggest that the adoption of AI technologies not only improves the accuracy and speed of threat detection but also enables organizations to proactively defend against evolving cyber threats, probably achieving a 75% reduction in alert volume. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
19 pages, 582 KiB  
Systematic Review
Human–AI Collaboration in the Modernization of COBOL-Based Legacy Systems: The Case of the Department of Government Efficiency (DOGE)
by Inês Melo, Daniel Polónia and Leonor Teixeira
Computers 2025, 14(7), 244; https://doi.org/10.3390/computers14070244 - 23 Jun 2025
Abstract
This paper aims to explore the challenges of maintaining and modernizing legacy systems, particularly COBOL-based platforms, the backbone of many financial and administrative systems. By exploring the DOGE team’s initiative to modernize government IT systems as a relevant case study, the authors analyze the pros and cons of AI and Agile methodologies in addressing the limitations of static and highly resilient legacy architectures. A systematic literature review was conducted to assess the state of the art about legacy system modernization, AI integration, and Agile methodologies. Then, the gray literature was analyzed to provide practical insights into how government agencies can modernize their IT infrastructures while addressing the growing shortage of COBOL experts. Findings suggest that AI may support interoperability, automation, and knowledge abstraction, but also introduce new risks related to cybersecurity, workforce disruption, and knowledge retention. Furthermore, the transition from Waterfall to Agile approaches poses significant epistemological and operational challenges. The results highlight the importance of adopting a hybrid human–AI model and structured governance strategies to ensure sustainable and secure system evolution. This study offers valuable insights for organizations that are facing the challenge of balancing the desire for modernization with the need to ensure their systems remain functional and manage tacit knowledge transfer. Full article
23 pages, 1779 KiB  
Article
AI_TAF: A Human-Centric Trustworthiness Risk Assessment Framework for AI Systems
by Eleni Seralidou, Kitty Kioskli, Theofanis Fotis and Nineta Polemi
Computers 2025, 14(7), 243; https://doi.org/10.3390/computers14070243 - 22 Jun 2025
Abstract
This paper presents the AI Trustworthiness Assessment Framework (AI_TAF), a comprehensive methodology for evaluating and mitigating trustworthiness risks across all stages of an AI system’s lifecycle. The framework accounts for the criticality of the system based on its intended application, the maturity level of the AI teams responsible for ensuring trust, and the organisation’s risk tolerance regarding trustworthiness. By integrating both technical safeguards and sociopsychological considerations, AI_TAF adopts a human-centric approach to risk management, supporting the development of trustworthy AI systems across diverse organisational contexts and at varying levels of human–AI maturity. Crucially, the framework underscores that achieving trust in AI requires a rigorous assessment and advancement of the trustworthiness maturity of the human actors involved in the AI lifecycle. Only through this human-centric enhancement can AI teams be adequately prepared to provide effective oversight of AI systems. Full article
(This article belongs to the Special Issue AI for Humans and Humans for AI (AI4HnH4AI))
15 pages, 500 KiB  
Article
Incremental Reinforcement Learning for Portfolio Optimisation
by Refiloe Shabe, Andries Engelbrecht and Kian Anderson
Computers 2025, 14(7), 242; https://doi.org/10.3390/computers14070242 - 21 Jun 2025
Abstract
Portfolio optimisation is a crucial decision-making task. Traditionally static, this problem is more realistically addressed as dynamic, reflecting frequent trading within financial markets. The dynamic nature of the portfolio optimisation problem makes it susceptible to rapid market changes or financial contagions, which may cause drifts in historical data. While reinforcement learning (RL) offers a framework that allows for the formulation of portfolio optimisation as a dynamic problem, existing RL approaches lack the ability to adapt to rapid market changes, such as pandemics, and fail to capture the resulting concept drift. This study introduces a recurrent proximal policy optimisation (PPO) algorithm, leveraging recurrent neural networks (RNNs), specifically the long short-term memory network (LSTM) for pattern recognition. Initial results conclusively demonstrate the recurrent PPO’s efficacy in generating quality portfolios. However, its performance declined during the COVID-19 pandemic, highlighting susceptibility to rapid market changes. To address this, an incremental recurrent PPO is developed, leveraging incremental learning to adapt to concept drift triggered by the pandemic. This enhanced algorithm not only learns from ongoing market data but also consistently identifies optimal portfolios despite significant market volatility, offering a robust tool for real-time financial decision-making. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
23 pages, 1222 KiB  
Article
A Data Quality Pipeline for Industrial Environments: Architecture and Implementation
by Teresa Peixoto, Óscar Oliveira, Eliana Costa e Silva, Bruno Oliveira and Fillipe Ribeiro
Computers 2025, 14(7), 241; https://doi.org/10.3390/computers14070241 - 20 Jun 2025
Abstract
In modern industrial environments, data-driven decision-making plays a crucial role in ensuring operational efficiency, predictive maintenance, and process optimization. However, the effectiveness of these decisions is highly dependent on the quality of the data. Industrial data is typically generated in real time by sensors integrated into IoT devices and smart manufacturing systems, resulting in high-volume, heterogeneous, and rapidly changing data streams. This paper presents the design and implementation of a data quality pipeline specifically adapted to such industrial contexts. The proposed pipeline includes modular components responsible for data ingestion, profiling, validation, and continuous monitoring, and is guided by a comprehensive set of data quality dimensions, including accuracy, completeness, consistency, and timeliness. For each dimension, appropriate metrics are applied, including accuracy measures based on dynamic intervals and validations based on consistency rules. To evaluate its effectiveness, we conducted a case study in a real manufacturing environment. By continuously monitoring data quality, problems can be proactively identified before they impact downstream processes, resulting in more reliable and timely decisions. Full article
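One accuracy check mentioned above, validation against dynamic intervals, can be illustrated with a small self-contained rule: the acceptable band for a sensor reading is derived from a rolling window of recent values rather than a fixed threshold. The window length and the k-sigma width below are assumptions, not the pipeline's configured values.

```python
# Illustrative dynamic-interval check: a reading is accepted only if it falls
# within k standard deviations of a rolling window of recent values.
# Window length and k are assumptions, not the pipeline's configured values.
from collections import deque
from statistics import mean, stdev

class DynamicIntervalCheck:
    def __init__(self, window=50, k=3.0):
        self.history = deque(maxlen=window)
        self.k = k

    def validate(self, value):
        ok = True
        if len(self.history) >= 10:                 # require some history first
            mu, sigma = mean(self.history), stdev(self.history)
            ok = abs(value - mu) <= self.k * max(sigma, 1e-9)
        self.history.append(value)
        return ok

check = DynamicIntervalCheck()
readings = [20.1, 20.3, 19.9, 20.0] * 5 + [95.0]    # last reading is clearly out of band
print([check.validate(r) for r in readings][-3:])   # [True, True, False]
```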