Journal Description
Computers
Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.3 days after submission; acceptance to publication takes 3.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 4.2 (2024)
5-Year Impact Factor: 3.5 (2024)
Latest Articles
Fuzzy-Based Multi-Modal Query-Forwarding in Mini-Datacenters
Computers 2025, 14(7), 261; https://doi.org/10.3390/computers14070261 - 1 Jul 2025
Abstract
The rapid growth of Internet of Things (IoT)-enabled devices in industrial environments and the associated increase in data generation are paving the way for localized, distributed datacenters. In this paper, we propose a novel mini-datacenter in the form of wireless sensor networks to efficiently handle query-based data collection from Industrial IoT (IIoT) devices. The mini-datacenter comprises a command center, gateways, and IoT sensors, and is designed to manage stochastic query-response traffic flow. We developed a duplication/aggregation query flow model tailored to emphasize reliable transmission, together with a dataflow management framework that employs a multi-modal query-forwarding approach to forward queries from the command center to gateways under varying environments. The forwarding includes coarse-grain and fine-grain strategies: the coarse-grain strategy routes a query directly through a single gateway at the expense of reliability, while the fine-grain strategy uses redundant gateways to enhance reliability. A fuzzy-logic-based intelligence system is integrated into the framework to dynamically select the appropriate forwarding granularity based on resource availability and network conditions, aided by a buffer-watching algorithm that tracks real-time buffer status. We carried out experiments with gateway counts varying from 10 to 100 to evaluate the framework's scalability and robustness in handling query flow under complex environments. The experimental results demonstrate that the framework provides a flexible and adaptive solution that balances buffer usage while maintaining over 95% reliability for most queries.
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
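To make the granularity decision concrete, here is a minimal Python sketch of a fuzzy strategy selector. It is an illustration only: the triangular memberships, the normalized inputs, and the two rules are assumptions, not the paper's actual rule base.

```python
# Hypothetical fuzzy selector for the forwarding granularity described above.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def select_strategy(buffer_occupancy, link_quality):
    """Pick 'fine' (redundant gateways) or 'coarse' (single gateway).

    Both inputs are assumed normalized to [0, 1]."""
    buf_high = tri(buffer_occupancy, 0.4, 1.0, 1.6)   # buffers nearly full
    link_poor = tri(link_quality, -0.6, 0.0, 0.6)     # links unreliable
    # Rule 1: poor links and spare buffers favor redundancy (fine-grain).
    fine = min(link_poor, 1.0 - buf_high)
    # Rule 2: full buffers or good links favor a single gateway (coarse-grain).
    coarse = max(buf_high, 1.0 - link_poor)
    return "fine" if fine > coarse else "coarse"

print(select_strategy(0.3, 0.2))   # poor link, spare buffers -> fine
print(select_strategy(0.9, 0.2))   # buffers nearly full      -> coarse
```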
Open Access Article
Vision-Degree-Driven Loading Strategy for Real-Time Large-Scale Scene Rendering
by
Yu Ding and Ying Song
Computers 2025, 14(7), 260; https://doi.org/10.3390/computers14070260 - 1 Jul 2025
Abstract
Large-scale scene rendering faces challenges in managing massive scene data and mitigating the rendering latency caused by suboptimal loading sequences. Although current approaches utilize Level of Detail (LOD) for dynamic resource loading, two limitations remain. One is loading priority, which does not adequately consider the factors affecting visual quality, such as LOD selection and visible area. The other is the insufficient trade-off between rendering quality and loading latency. To address these limitations, we propose a loading-prioritization metric called Vision Degree (VD), derived from LOD selection, loading time, and the trade-off between rendering quality and loading latency. During rendering, VDs are sorted in descending order to obtain an optimized loading and unloading sequence. A compensation factor is also introduced to offset the visual loss caused by reduced LOD levels and to optimize the rendering effect. Finally, we optimize the initial viewpoint selection by minimizing the average model-to-viewpoint distance, thereby reducing the initial scene loading time. Experimental results demonstrate that our method reduces rendering latency by 24–29% compared with the existing Area-of-Interest (AOI)-based loading strategy, while maintaining comparable visual quality.
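A short sketch can show how a priority of this kind might be computed and applied. The VD formula below is an assumed weighting for illustration; the paper derives its own metric from LOD selection, loading time, and the quality/latency trade-off.

```python
# Illustrative Vision-Degree-style loading priority (formula is assumed).
from dataclasses import dataclass

@dataclass
class Chunk:
    name: str
    visible_area: float   # projected on-screen area of the chunk
    lod_gain: float       # visual gain of loading its selected LOD level
    load_time: float      # estimated loading time in seconds

def vision_degree(c: Chunk, alpha: float = 0.5) -> float:
    # Larger visible area and LOD gain raise priority; longer loading time
    # lowers it. alpha balances rendering quality against loading latency.
    return (c.visible_area * c.lod_gain) / (1.0 + alpha * c.load_time)

chunks = [Chunk("terrain", 0.9, 0.8, 2.0), Chunk("tree", 0.1, 0.5, 0.2)]
# Load in descending VD order to obtain the loading sequence.
for c in sorted(chunks, key=vision_degree, reverse=True):
    print(c.name, round(vision_degree(c), 3))
```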
Open Access Systematic Review
Artificial Intelligence and the Future of Mental Health in a Digitally Transformed World
by
Aggeliki Kelly Fanarioti and Kostas Karpouzis
Computers 2025, 14(7), 259; https://doi.org/10.3390/computers14070259 - 30 Jun 2025
Abstract
Artificial Intelligence (AI) is reshaping mental healthcare by enabling new forms of diagnosis, therapy, and patient monitoring. Yet this digital transformation raises complex policy and ethical questions that remain insufficiently addressed. In this paper, we critically examine how AI-driven innovations are being integrated into mental health systems across different global contexts, with particular attention to governance, regulation, and social justice. The study follows the PRISMA-ScR methodology to ensure transparency and methodological rigor, while also acknowledging its inherent limitations, such as the emphasis on breadth over depth and the exclusion of non-English sources. Drawing on international guidelines, academic literature, and emerging national strategies, it identifies both opportunities, such as improved access and personalized care, and threats, including algorithmic bias, data privacy risks, and diminished human oversight. Special attention is given to underrepresented populations and the risks of digital exclusion. The paper argues for a value-driven approach that centers equity, transparency, and informed consent in the deployment of AI tools. It concludes with actionable policy recommendations to support the ethical implementation of AI in mental health, emphasizing the need for cross-sectoral collaboration and global accountability mechanisms.
(This article belongs to the Special Issue AI in Its Ecosystem)
Open Access Review
Simulation-Based Development of Internet of Cyber-Things Using DEVS
by
Laurent Capocchi, Bernard P. Zeigler and Jean-Francois Santucci
Computers 2025, 14(7), 258; https://doi.org/10.3390/computers14070258 - 30 Jun 2025
Abstract
Simulation-based development is a structured approach that uses formal models to design and test system behavior before building the actual system. The Internet of Things (IoT) connects physical devices equipped with sensors and software to collect and exchange data. Cyber-Physical Systems (CPSs) integrate computing directly into physical processes to enable real-time control. This paper reviews the Discrete-Event System Specification (DEVS) formalism and explores how it can serve as a unified framework for designing, simulating, and implementing systems that combine IoT and CPS—referred to as the Internet of Cyber-Things (IoCT). Through case studies that include home automation, solar energy monitoring, conflict management, and swarm robotics, the paper reviews how DEVS enables the construction of modular, scalable, and reusable models. The role of the System Entity Structure (SES) is also discussed, highlighting its contribution to organizing models and generating alternative system configurations. With this background as a basis, the paper evaluates whether DEVS provides the necessary modeling power and continuity across stages to support the development of complex IoCT systems. The paper concludes that DEVS offers a robust and flexible foundation for developing IoCT systems, supporting both expressiveness and a seamless transition from design to real-world deployment.
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
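A minimal atomic model conveys the ingredients of the DEVS formalism the review builds on: a time-advance function, an output function, and internal and external transition functions. The periodic-sensor example below is a generic Python illustration, not code from the reviewed case studies.

```python
# Sketch of a DEVS atomic model: a sensor that emits a reading every
# `period` time units while active and can be switched on/off by inputs.
class SensorDEVS:
    def __init__(self, period=5.0):
        self.period = period
        self.phase = "active"

    def time_advance(self):             # ta(s): time until next internal event
        return self.period if self.phase == "active" else float("inf")

    def output(self):                   # lambda(s): emitted before delta_int
        return {"reading": 42}          # placeholder measurement

    def delta_int(self):                # internal transition when ta expires
        pass                            # remain active; next reading scheduled

    def delta_ext(self, elapsed, msg):  # external transition on an input event
        self.phase = "active" if msg == "on" else "passive"
```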
Open Access Article
Bridging Tradition and Innovation: Transformative Educational Practices in Museums with AI and VR
by
Michele Domenico Todino, Eliza Pitri, Argyro Fella, Antonia Michaelidou, Lucia Campitiello, Francesca Placanica, Stefano Di Tore and Maurizio Sibilio
Computers 2025, 14(7), 257; https://doi.org/10.3390/computers14070257 - 30 Jun 2025
Abstract
This paper explores the intersection of folk art, museums, and education in the 20th century, with a focus on the concept of art as experience, emphasizing the role of museums as active, inclusive learning spaces. A collaboration between the University of Salerno and the University of Nicosia has developed virtual museum environments using virtual reality (VR) to enhance engagement with cultural heritage. These projects aim to make museums more accessible and interactive, with future potential in integrating artificial-intelligence-driven non-player characters (NPCs) and VR strategies for personalized visitor experiences of the Nicosia Folk Art Museum.
(This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning)
Open Access Article
An xLSTM–XGBoost Ensemble Model for Forecasting Non-Stationary and Highly Volatile Gasoline Price
by
Fujiang Yuan, Xia Huang, Hong Jiang, Yang Jiang, Zihao Zuo, Lusheng Wang, Yuxin Wang, Shaojie Gu and Yanhong Peng
Computers 2025, 14(7), 256; https://doi.org/10.3390/computers14070256 - 29 Jun 2025
Abstract
High-frequency fluctuations in the international crude oil market have led to multilevel characteristics in China’s domestic refined oil pricing mechanism. To address the poor fitting performance of single deep learning models on oil price data, which hampers accurate gasoline price prediction, this paper proposes a gasoline price prediction method based on a combined xLSTM–XGBoost model. Using gasoline price data from June 2000 to November 2024 in Sichuan Province as a sample, the data are decomposed via STL decomposition to extract trend, residual, and seasonal components. The xLSTM model is then employed to predict the trend and seasonal components, while XGBoost predicts the residual component. Finally, the predictions from both models are combined to produce the final forecast. The experimental results demonstrate that the proposed xLSTM–XGBoost model reduces the MAE by 14.8% compared to the second-best sLSTM–XGBoost model and by 83% compared to the traditional LSTM model, significantly enhancing prediction accuracy.
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
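The decompose-forecast-recombine pipeline can be sketched with standard libraries. Below, STL and XGBoost play the roles described in the abstract, while the xLSTM branch is replaced by a naive persistence stand-in; the synthetic series and all hyperparameters are assumptions.

```python
# Sketch of the STL -> (trend/seasonal, residual) -> recombine pipeline.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from xgboost import XGBRegressor

# Synthetic stand-in for the monthly gasoline price series.
prices = pd.Series(np.sin(np.linspace(0, 20, 300)) + np.linspace(5, 8, 300),
                   index=pd.date_range("2000-06-01", periods=300, freq="MS"))

parts = STL(prices, period=12).fit()      # trend + seasonal + residual

# Stand-in for the xLSTM branch: persistence forecasts of trend and season.
trend_fc = parts.trend.iloc[-1]
seasonal_fc = parts.seasonal.iloc[-12]    # same month one year earlier

# XGBoost on lagged residuals, as in the paper's residual branch.
resid = parts.resid
X = np.column_stack([resid.shift(k).to_numpy() for k in (1, 2, 3)])[3:]
y = resid.to_numpy()[3:]
resid_model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)
next_lags = resid.to_numpy()[-3:][::-1].reshape(1, -1)   # lags 1, 2, 3
resid_fc = resid_model.predict(next_lags)[0]

print("combined forecast:", trend_fc + seasonal_fc + resid_fc)
```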
Open Access Article
Focal Correlation and Event-Based Focal Visual Content Text Attention for Past Event Search
by
Pranita P. Deshmukh and S. Poonkuntran
Computers 2025, 14(7), 255; https://doi.org/10.3390/computers14070255 - 28 Jun 2025
Abstract
Every minute, vast amounts of video and image data are uploaded worldwide to the internet and social media platforms, creating a rich visual archive of human experiences—from weddings and family gatherings to significant historical events such as war crimes and humanitarian crises. When properly analyzed, this multimodal data holds immense potential for reconstructing important events and verifying information. However, challenges arise when images and videos lack complete annotations, making manual examination inefficient and time-consuming. To address this, we propose a novel event-based focal visual content text attention (EFVCTA) framework for automated past event retrieval using visual question answering (VQA) techniques. Our approach integrates a Long Short-Term Memory (LSTM) model with convolutional non-linearity and an adaptive attention mechanism to efficiently identify and retrieve relevant visual evidence alongside precise answers. The model is designed with robust weight initialization, regularization, and optimization strategies and is evaluated on the Common Objects in Context (COCO) dataset. The results demonstrate that EFVCTA achieves the highest performance across all metrics (88.7% accuracy, 86.5% F1-score, 84.9% mAP), outperforming state-of-the-art baselines. The EFVCTA framework demonstrates promising results for retrieving information about past events captured in images and videos and can be effectively applied to scenarios such as documenting training programs, workshops, conferences, and social gatherings in academic institutions.
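The attention idea can be reduced to a sketch: a question embedding scores image-region features, and the attended visual summary feeds the answer classifier. All layer sizes and the 36-region input below are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of visual-content text attention for VQA-style retrieval.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalTextAttention(nn.Module):
    def __init__(self, q_dim=256, v_dim=512, n_answers=1000):
        super().__init__()
        self.score = nn.Linear(q_dim + v_dim, 1)        # region relevance
        self.classify = nn.Linear(q_dim + v_dim, n_answers)

    def forward(self, q, regions):      # q: (B, q_dim); regions: (B, R, v_dim)
        B, R, _ = regions.shape
        q_tiled = q.unsqueeze(1).expand(B, R, -1)
        attn = F.softmax(self.score(torch.cat([q_tiled, regions], -1)), dim=1)
        focus = (attn * regions).sum(dim=1)             # attended evidence
        return self.classify(torch.cat([q, focus], -1)), attn

logits, attn = FocalTextAttention()(torch.randn(2, 256), torch.randn(2, 36, 512))
print(logits.shape, attn.shape)   # (2, 1000) answer logits, (2, 36, 1) weights
```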
Open Access Article
FaceCloseup: Enhancing Mobile Facial Authentication with Perspective Distortion-Based Liveness Detection
by
Yingjiu Li, Yan Li and Zilong Wang
Computers 2025, 14(7), 254; https://doi.org/10.3390/computers14070254 - 27 Jun 2025
Abstract
Facial authentication has gained widespread adoption as a biometric authentication method, offering a convenient alternative to traditional password-based systems, particularly on mobile devices equipped with front-facing cameras. While this technology enhances usability and security by eliminating password management, it remains highly susceptible to spoofing attacks. Adversaries can exploit facial recognition systems using pre-recorded photos, videos, or even sophisticated 3D models of victims’ faces to bypass authentication mechanisms. The increasing availability of personal images on social media further amplifies this risk, making robust anti-spoofing mechanisms essential for secure facial authentication. To address these challenges, we introduce FaceCloseup, a novel liveness detection technique that strengthens facial authentication by leveraging perspective distortion inherent in close-up shots of real, 3D faces. Instead of relying on additional sensors or user-interactive gestures, FaceCloseup passively analyzes facial distortions in video frames captured by a mobile device’s camera, improving security without compromising user experience. FaceCloseup effectively distinguishes live faces from spoofed attacks by identifying perspective-based distortions across different facial regions. The system achieves a 99.48% accuracy in detecting common spoofing methods—including photo, video, and 3D model-based attacks—and demonstrates 98.44% accuracy in differentiating between individual users. By operating entirely on-device, FaceCloseup eliminates the need for cloud-based processing, reducing privacy concerns and potential latency in authentication. Its reliance on natural device movement ensures a seamless authentication experience while maintaining robust security.
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
Open Access Article
EMGP-Net: A Hybrid Deep Learning Architecture for Breast Cancer Gene Expression Prediction
by
Oumeima Thâalbi and Moulay A. Akhloufi
Computers 2025, 14(7), 253; https://doi.org/10.3390/computers14070253 - 26 Jun 2025
Abstract
Background: The accurate prediction of gene expression is essential in breast cancer research. However, spatial transcriptomics technologies are usually too expensive. Recent studies have used whole-slide images combined with spatial transcriptomics data to predict breast cancer gene expression. To this end, we present EMGP-Net, a novel hybrid deep learning architecture that combines two state-of-the-art models, MambaVision and EfficientFormer. Method: EMGP-Net mixes features from both models and applies attention mechanisms followed by fully connected layers. It was first trained on the HER2+ dataset, which contains data from eight patients, using a leave-one-patient-out approach. To ensure generalizability, we conducted external validation, alternately training EMGP-Net on the HER2+ dataset and testing it on the STNet dataset, which contains data from 23 patients, and vice versa. We evaluated EMGP-Net's ability to predict the expression of 250 selected genes. Results: Our model outperformed both EfficientFormer and MambaVision trained separately on the HER2+ dataset, achieving the highest PCC of 0.7903 for the PTMA gene, with the top 14 genes having PCCs greater than 0.7, including other important breast cancer biomarkers such as GNAS and B2M. The external validation showed that it also outperformed models retrained with our approach. Conclusions: The results of EMGP-Net were better than those of existing models, showing that combining advanced models is an effective strategy for improving performance in this task.
(This article belongs to the Special Issue AI in Its Ecosystem)
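The fusion step can be sketched as a small head: projected features from the two backbones (random tensors stand in for MambaVision and EfficientFormer outputs here) are mixed by self-attention and regressed to 250 gene-expression values. All dimensions are assumptions.

```python
# Illustrative two-backbone feature-fusion head in the spirit of EMGP-Net.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, d1=640, d2=448, d=512, n_genes=250):
        super().__init__()
        self.proj1, self.proj2 = nn.Linear(d1, d), nn.Linear(d2, d)
        self.attn = nn.MultiheadAttention(d, num_heads=8, batch_first=True)
        self.head = nn.Sequential(nn.Linear(d, 256), nn.ReLU(),
                                  nn.Linear(256, n_genes))

    def forward(self, f_mamba, f_efficient):
        # Treat the two projected features as a 2-token sequence and let
        # self-attention mix them before pooling and regression.
        tokens = torch.stack([self.proj1(f_mamba), self.proj2(f_efficient)], 1)
        mixed, _ = self.attn(tokens, tokens, tokens)
        return self.head(mixed.mean(dim=1))

out = FusionHead()(torch.randn(4, 640), torch.randn(4, 448))
print(out.shape)   # torch.Size([4, 250]) predicted gene expression
```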
Open Access Article
Data Augmentation-Driven Improvements in Malignant Lymphoma Image Classification
by
Sandi Baressi Šegota, Vedran Mrzljak, Ivan Lorencin and Nikola Anđelić
Computers 2025, 14(7), 252; https://doi.org/10.3390/computers14070252 - 26 Jun 2025
Abstract
Artificial intelligence (AI)-based techniques have become increasingly prevalent in the classification of medical images. However, the effectiveness of such methods is often constrained by the limited availability of annotated medical data. To address this challenge, data augmentation is frequently employed. This study investigates the impact of a novel augmentation approach on the classification performance of malignant lymphoma histopathological images. The proposed method involves slicing high-resolution images (1388 × 1040 pixels) into smaller segments (224 × 224 pixels) before applying standard augmentation techniques such as flipping and rotation. The original dataset consists of 374 images, comprising 32.6% mantle cell lymphoma, 30.2% chronic lymphocytic leukemia, and 37.2% follicular lymphoma. Through slicing, the dataset was expanded to 8976 images, and further augmented to 53,856 images. The visual geometry group with 16 layers (VGG16) convolutional neural network (CNN) was trained and evaluated on three datasets: the original, the sliced, and the sliced with augmentation. Performance was assessed using accuracy, AUC, precision, sensitivity, specificity, and F1 score. The results demonstrate a substantial improvement in classification performance when slicing was employed, with additional, albeit smaller, gains achieved through subsequent augmentation.
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
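The slicing step is concrete enough to sketch directly: a 1040 × 1388 image yields 4 × 6 = 24 non-overlapping 224 × 224 tiles, which matches the reported expansion from 374 to 8976 images. The flip/rotation set below is a generic choice; the paper's subsequent augmentation multiplies the sliced set by six.

```python
# Slice high-resolution slides into 224x224 tiles, then flip/rotate them.
import numpy as np

def slice_image(img, tile=224):
    h, w = img.shape[:2]
    return [img[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]

def augment(tiles):
    out = []
    for t in tiles:
        for rot in range(4):                   # 0/90/180/270 degree rotations
            r = np.rot90(t, rot)
            out.extend([r, np.fliplr(r)])      # each plus a horizontal flip
    return out

image = np.zeros((1040, 1388, 3), dtype=np.uint8)   # placeholder slide image
tiles = slice_image(image)
print(len(tiles), len(augment(tiles)))              # 24 tiles -> 192 variants
```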
Open Access Review
Hardware and Software Methods for Secure Obfuscation and Deobfuscation: An In-Depth Analysis
by
Khaled Saleh, Dirar Darweesh, Omar Darwish, Eman Hammad and Fathi Amsaad
Computers 2025, 14(7), 251; https://doi.org/10.3390/computers14070251 - 25 Jun 2025
Abstract
The swift evolution of information technology and growing connectivity in critical applications have elevated the importance of cybersecurity, that is, of protecting and certifying software and hardware designs against rising cyber threats. Software and hardware have become highly susceptible to threats such as reverse engineering, cloning, tampering, and IP piracy. While various techniques exist to enhance software and hardware security, including encryption, native code, and secure server-side execution, obfuscation stands out as a preeminent and cost-efficient solution to address these challenges. Obfuscation deliberately transforms software and hardware to increase their complexity for potential adversaries, obscuring implementation details while preserving safety and functionality. Prior research has typically examined obfuscation, deobfuscation, and obfuscation detection approaches in isolation. Departing from that convention, this comprehensive article reviews these approaches in depth and explicates the correlations and dynamics among them. Furthermore, it conducts a meticulous comparative analysis, evaluating obfuscation techniques across parameters such as methodology, testing procedures, efficacy, associated drawbacks, market applicability, and prospects for future enhancement. This review aims to assist organizations in selecting obfuscation techniques for strong protection against threats, and to inform the strategic choice of deobfuscation and obfuscation detection techniques for recognizing vulnerabilities in software and hardware products. This enables organizations to manage security risks proficiently, deliver secure software and hardware solutions, and improve user satisfaction.
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
Open Access Review
The Integration of the Internet of Things (IoT) Applications into 5G Networks: A Review and Analysis
by
Aymen I. Zreikat, Zakwan AlArnaout, Ahmad Abadleh, Ersin Elbasi and Nour Mostafa
Computers 2025, 14(7), 250; https://doi.org/10.3390/computers14070250 - 25 Jun 2025
Abstract
The incorporation of Internet of Things (IoT) applications into 5G networks marks a significant step towards realizing the full potential of connected systems. 5G networks, with their ultra-low latency, high data speeds, and massive interconnectivity, provide an ideal foundation for IoT ecosystems to thrive. This convergence enables a diverse set of applications, including smart cities, self-driving cars, industrial automation, healthcare monitoring, and agricultural solutions. IoT devices can improve their reliability, real-time communication, and scalability by exploiting 5G's advanced capabilities such as network slicing, edge computing, and enhanced mobile broadband. Furthermore, the convergence of IoT with 5G fosters interoperability, allowing for smooth communication across diverse devices and networks. This study examines the fundamental technical applications, obstacles, and future perspectives for integrating IoT applications with 5G networks, emphasizing the potential benefits while also addressing essential concerns such as security, energy efficiency, and network management. The results of this review and analysis will serve as a valuable resource for researchers, industry experts, and policymakers involved in the progression of 5G technologies and their incorporation with IoT solutions.
(This article belongs to the Special Issue Distributed Computing Paradigms for the Internet of Things: Exploring Cloud, Edge, and Fog Solutions)
Open Access Article
The PECC Framework: Promoting Gender Sensitivity and Gender Equality in Computer Science Education
by
Bernadette Spieler and Carina Girvan
Computers 2025, 14(7), 249; https://doi.org/10.3390/computers14070249 - 25 Jun 2025
Abstract
There are increasing expectations that we should live in a digitally and computationally literate society. For many young people, particularly girls, school is the one place that provides an opportunity to develop the necessary knowledge and skills. This environment can either perpetuate and reinforce or eliminate existing gender inequalities. In this article, we present the "PLAYING, ENGAGEMENT, CREATIVITY, CREATING" (PECC) Framework, a practical guide to supporting teachers in the design of gender-sensitive learning activities, bringing students' own interests to the fore. Through a six-year, mixed-methods, design-based research approach, PECC—along with supporting resources and digital tools—was developed through iterative cycles of theoretical analysis, empirical data (both qualitative and quantitative), critical reflection, and case study research. Exploratory and instrumental case studies investigated the promise and limitations of the emerging framework, involving 43 teachers and 1453 students in secondary-school classrooms (including online during COVID-19) in Austria, Germany, and Switzerland. Quantitative data (e.g., surveys, usage metrics) and qualitative findings (e.g., interviews, observations, classroom artefacts) were analyzed across the case studies to inform successive refinements of the framework. The case study results are presented alongside the theoretically informed discussions and practical considerations that informed each stage of PECC. PECC has had a real-world, tangible impact at a national level. It provides an essential link between research and practice, offering a theoretically informed and empirically evidenced framework for teachers and policy makers.
Open Access Article
Dark Web Traffic Classification Based on Spatial–Temporal Feature Fusion and Attention Mechanism
by
Junwei Li and Zhisong Pan
Computers 2025, 14(7), 248; https://doi.org/10.3390/computers14070248 - 25 Jun 2025
Abstract
Research on classification methods for dark web traffic remains limited, and existing classification results are not satisfactory. To improve the prediction accuracy and classification precision of dark web traffic, a classification method (CLA) based on spatial–temporal feature fusion and an attention mechanism is proposed. A combination of a CNN and an LSTM extracts local spatial–temporal features from raw data packets, while an attention module weights the key spatial–temporal features. The experimental results show that the model effectively extracts and exploits the spatial–temporal features of traffic data, using the attention mechanism to measure the importance of different features and thereby achieving accurate predictions of different dark web traffic types. In comparative experiments, the model's accuracy, recall, and F1 score exceed those of other traditional methods.
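The described architecture can be sketched end to end: a 1D CNN extracts local features from raw packet bytes, an LSTM models their order, and a softmax attention weights the informative time steps. Layer sizes, the 784-byte input, and the eight traffic classes are illustrative assumptions.

```python
# Hedged sketch of a CNN + LSTM + attention classifier for raw traffic bytes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLANet(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.conv = nn.Conv1d(1, 32, kernel_size=5, padding=2)
        self.pool = nn.MaxPool1d(4)
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.attn = nn.Linear(64, 1)
        self.out = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (B, 1, n_bytes)
        h = self.pool(F.relu(self.conv(x)))     # local spatial features
        h, _ = self.lstm(h.transpose(1, 2))     # temporal features (B, T, 64)
        w = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        return self.out((w * h).sum(dim=1))     # weighted pooling -> logits

logits = CLANet()(torch.rand(2, 1, 784))        # e.g., first 784 packet bytes
print(logits.shape)                             # torch.Size([2, 8])
```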
Open Access Article
Machine Learning for Anomaly Detection in Blockchain: A Critical Analysis, Empirical Validation, and Future Outlook
by
Fouzia Jumani and Muhammad Raza
Computers 2025, 14(7), 247; https://doi.org/10.3390/computers14070247 - 25 Jun 2025
Abstract
Blockchain technology has transformed how data are stored and transactions are processed in a distributed environment. Blockchain assures data integrity by validating transactions through the consensus of a distributed ledger involving several miners as validators. Although blockchain provides multiple advantages, it has also been subject to malicious attacks, such as the 51% attack, which is considered a potential risk to data integrity. These attacks can be detected by analyzing the anomalous behavior of miner nodes in the network, and data analysis plays a vital role in detecting and overcoming such attacks to keep the blockchain secure. Integrating machine learning algorithms with blockchain has become a significant approach to detecting anomalies such as the 51% attack and double spending. This study comprehensively analyzes various machine learning (ML) methods for detecting anomalies in blockchain networks. It presents a Systematic Literature Review (SLR) and a classification to explore the integration of blockchain and ML for anomaly detection in blockchain networks. We implemented Random Forest, AdaBoost, XGBoost, K-means, and Isolation Forest models to evaluate their performance in detecting blockchain anomalies, such as the 51% attack. Additionally, we identified future research directions, including challenges related to scalability, network latency, imbalanced datasets, the dynamic nature of anomalies, and the lack of standardization in blockchain protocols. This study acts as a benchmark for further research on how ML algorithms identify anomalies in blockchain technology and aids ongoing studies in this rapidly evolving field.
(This article belongs to the Special Issue Harnessing the Blockchain Technology in Unveiling Futuristic Applications)
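One of the evaluated detectors is simple to sketch. Below, an Isolation Forest flags a hypothetical miner holding a majority hash share; the three behavioral features are invented for illustration and are not the paper's feature set.

```python
# Unsupervised anomaly detection over per-miner behavior features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 200 normal miners: small hash-rate share, few orphaned blocks,
# near-zero double-spend attempts (features are illustrative).
normal = rng.normal([0.02, 1.0, 0.0], [0.01, 0.5, 0.1], size=(200, 3))
# One candidate 51%-attacker: majority hash share, many orphaned blocks.
suspect = np.array([[0.55, 9.0, 3.0]])

X = np.vstack([normal, suspect])
labels = IsolationForest(contamination=0.01, random_state=0).fit_predict(X)
print("flagged miners:", np.where(labels == -1)[0])  # suspect is index 200
```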
Open Access Article
A Comparative Evaluation of Time-Series Forecasting Models for Energy Datasets
by
Nikitas Maragkos and Ioannis Refanidis
Computers 2025, 14(7), 246; https://doi.org/10.3390/computers14070246 - 24 Jun 2025
Abstract
Time series forecasting plays a critical role across numerous domains such as finance, energy, and healthcare. While traditional statistical models have long been employed for this task, recent advancements in deep learning have led to a new generation of state-of-the-art (SotA) models that offer improved accuracy and flexibility. However, there remains a gap in understanding how these forecasting models perform under different forecasting scenarios, especially when incorporating external variables. This paper presents a comprehensive review and empirical evaluation of seven leading deep learning models for time series forecasting. We introduce a novel dataset that combines energy consumption and weather data from 24 European countries, allowing us to benchmark model performance across various forecasting horizons, granularities, and variable types. Our findings offer practical insights into model strengths and limitations, guiding future applications and research in time series forecasting.
(This article belongs to the Special Issue Artificial Intelligence-Driven Innovations in Resilient Energy Systems)
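The benchmarking loop implied here, rolling-origin evaluation across several horizons, can be sketched as follows. The persistence baseline, the synthetic load series, and the horizon choices are stand-ins for the paper's deep learning models and its 24-country energy/weather dataset.

```python
# Walk-forward (rolling-origin) evaluation of a forecaster at several horizons.
import numpy as np

def naive_forecast(history, horizon):
    return np.repeat(history[-1], horizon)        # persistence baseline

def walk_forward_mae(series, horizon, n_origins=10):
    errors = []
    start = len(series) - horizon * n_origins     # first forecast origin
    for origin in range(start, len(series) - horizon + 1, horizon):
        pred = naive_forecast(series[:origin], horizon)
        errors.append(np.abs(series[origin:origin + horizon] - pred).mean())
    return float(np.mean(errors))

rng = np.random.default_rng(1)
load = np.sin(np.linspace(0, 60, 2000)) + rng.normal(0, 0.1, 2000)
for h in (1, 24, 168):                            # hour-, day-, week-ahead
    print(f"horizon {h}: MAE {walk_forward_mae(load, h):.3f}")
```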
Open Access Article
Exploring the Role of Artificial Intelligence in Detecting Advanced Persistent Threats
by
Pedro Ramos Brandao
Computers 2025, 14(7), 245; https://doi.org/10.3390/computers14070245 - 23 Jun 2025
Abstract
The rapid evolution of cyber threats, particularly Advanced Persistent Threats (APTs), poses significant challenges to the security of information systems. This paper explores the pivotal role of Artificial Intelligence (AI) in enhancing the detection and mitigation of APTs. By leveraging machine learning algorithms and data analytics, AI systems can identify patterns and anomalies that are indicative of sophisticated cyber-attacks. This study examines various AI-driven methodologies, including anomaly detection, predictive analytics, and automated response systems, highlighting their effectiveness in real-time threat detection and response. Furthermore, we discuss the integration of AI into existing cybersecurity frameworks, emphasizing the importance of collaboration between human analysts and AI systems in combating APTs. The findings suggest that the adoption of AI technologies not only improves the accuracy and speed of threat detection but also enables organizations to proactively defend against evolving cyber threats, potentially achieving a 75% reduction in alert volume.
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
Open Access Systematic Review
Human–AI Collaboration in the Modernization of COBOL-Based Legacy Systems: The Case of the Department of Government Efficiency (DOGE)
by
Inês Melo, Daniel Polónia and Leonor Teixeira
Computers 2025, 14(7), 244; https://doi.org/10.3390/computers14070244 - 23 Jun 2025
Abstract
This paper aims to explore the challenges of maintaining and modernizing legacy systems, particularly COBOL-based platforms, the backbone of many financial and administrative systems. Taking the DOGE team's initiative to modernize government IT systems as a case study, the authors analyze the pros and cons of AI and Agile methodologies in addressing the limitations of static and highly resilient legacy architectures. A systematic literature review was conducted to assess the state of the art in legacy system modernization, AI integration, and Agile methodologies. The gray literature was then analyzed to provide practical insights into how government agencies can modernize their IT infrastructures while addressing the growing shortage of COBOL experts. Findings suggest that AI may support interoperability, automation, and knowledge abstraction, but may also introduce new risks related to cybersecurity, workforce disruption, and knowledge retention. Furthermore, the transition from Waterfall to Agile approaches poses significant epistemological and operational challenges. The results highlight the importance of adopting a hybrid human–AI model and structured governance strategies to ensure sustainable and secure system evolution. This study offers valuable insights for organizations facing the challenge of balancing the desire for modernization with the need to keep their systems functional and to manage tacit knowledge transfer.
Open Access Article
AI_TAF: A Human-Centric Trustworthiness Risk Assessment Framework for AI Systems
by
Eleni Seralidou, Kitty Kioskli, Theofanis Fotis and Nineta Polemi
Computers 2025, 14(7), 243; https://doi.org/10.3390/computers14070243 - 22 Jun 2025
Abstract
This paper presents the AI Trustworthiness Assessment Framework (AI_TAF), a comprehensive methodology for evaluating and mitigating trustworthiness risks across all stages of an AI system’s lifecycle. The framework accounts for the criticality of the system based on its intended application, the maturity level of the AI teams responsible for ensuring trust, and the organisation’s risk tolerance regarding trustworthiness. By integrating both technical safeguards and sociopsychological considerations, AI_TAF adopts a human-centric approach to risk management, supporting the development of trustworthy AI systems across diverse organisational contexts and at varying levels of human–AI maturity. Crucially, the framework underscores that achieving trust in AI requires a rigorous assessment and advancement of the trustworthiness maturity of the human actors involved in the AI lifecycle. Only through this human-centric enhancement can AI teams be adequately prepared to provide effective oversight of AI systems.
(This article belongs to the Special Issue AI for Humans and Humans for AI (AI4HnH4AI))
Open Access Article
Incremental Reinforcement Learning for Portfolio Optimisation
by
Refiloe Shabe, Andries Engelbrecht and Kian Anderson
Computers 2025, 14(7), 242; https://doi.org/10.3390/computers14070242 - 21 Jun 2025
Abstract
Portfolio optimisation is a crucial decision-making task. Traditionally static, this problem is more realistically addressed as dynamic, reflecting frequent trading within financial markets. The dynamic nature of the portfolio optimisation problem makes it susceptible to rapid market changes or financial contagions, which may cause drifts in historical data. While reinforcement learning (RL) offers a framework that allows for the formulation of portfolio optimisation as a dynamic problem, existing RL approaches lack the ability to adapt to rapid market changes, such as pandemics, and fail to capture the resulting concept drift. This study introduces a recurrent proximal policy optimisation (PPO) algorithm, leveraging recurrent neural networks (RNNs), specifically the long short-term memory network (LSTM) for pattern recognition. Initial results conclusively demonstrate the recurrent PPO’s efficacy in generating quality portfolios. However, its performance declined during the COVID-19 pandemic, highlighting susceptibility to rapid market changes. To address this, an incremental recurrent PPO is developed, leveraging incremental learning to adapt to concept drift triggered by the pandemic. This enhanced algorithm not only learns from ongoing market data but also consistently identifies optimal portfolios despite significant market volatility, offering a robust tool for real-time financial decision-making.
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
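The recurrent policy component can be sketched as an LSTM over recent returns with a softmax head producing long-only portfolio weights. The PPO training loop, the critic, and the incremental-learning machinery are omitted; all sizes are assumptions.

```python
# Sketch of a recurrent (LSTM) policy network for portfolio weights.
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    def __init__(self, n_assets=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_assets, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_assets)

    def forward(self, returns, state=None):       # returns: (B, T, n_assets)
        h, state = self.lstm(returns, state)      # carry hidden state forward
        weights = torch.softmax(self.head(h[:, -1]), dim=-1)
        return weights, state                     # weights sum to 1 per sample

policy = RecurrentPolicy()
w, _ = policy(torch.randn(1, 30, 5))              # 30 days of 5-asset returns
print(w.sum().item())                             # ~1.0 (fully invested)
```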
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in
Applied Sciences, Automation, Computers, Electronics, Sensors, JCP, Mathematics
Intelligent Optimization, Decision-Making and Privacy Preservation in Cyber–Physical Systems
Topic Editors: Lijuan Zha, Jinliang Liu, Jian Liu
Deadline: 31 August 2025
Topic in
Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies and Applications
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 December 2025
Topic in
Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic, Sebastian Lapuschkin
Deadline: 31 January 2026

Conferences
Special Issues
Special Issue in
Computers
Generative Artificial Intelligence and Machine Learning in Industrial Processes and Manufacturing
Guest Editor: Ananda Maiti
Deadline: 31 July 2025
Special Issue in
Computers
IT in Production and Logistics
Guest Editors: Markus Rabe, Anne Antonia Scheidler, Marc Stautner, Simon J. E. Taylor
Deadline: 31 July 2025
Special Issue in
Computers
Application of Deep Learning to Internet of Things Systems
Guest Editor: Rytis Maskeliunas
Deadline: 31 July 2025
Special Issue in
Computers
Advanced Image Processing and Computer Vision (2nd Edition)
Guest Editors: Selene Tomassini, M. Ali Dewan
Deadline: 31 July 2025