Computers, Volume 14, Issue 7 (July 2025) – 34 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
23 pages, 17084 KiB  
Article
Training First Responders Through VR-Based Situated Digital Twins
by Nikolaos Partarakis, Theodoros Evdaimon, Menelaos Katsantonis and Xenophon Zabulis
Computers 2025, 14(7), 274; https://doi.org/10.3390/computers14070274 - 11 Jul 2025
Abstract
This study examines first responder training with the aim of delivering realistic, adaptable, and scalable solutions that equip personnel to handle high-risk, rapidly developing scenarios. The proposed method leverages Virtual Reality, Augmented Reality, and digital twins to enable immersive and situationally relevant training for security-critical incidents. The method is structured into three distinct phases: definition, digitization, and implementation. The outcome of this approach is the creation of virtual training scenarios that simulate real situations and incident dynamics. The methodology employs photogrammetric reconstruction, simulation of human behavior through locomotion, and virtual security systems, such as surveillance and drone technology. Alongside the methodology, a case study of a large public event is presented to illustrate its feasibility in real-world applications. This study offers a comprehensive and adaptive structure for the design and deployment of digitally augmented training systems. This provides a practical basis for enhancing readiness in a range of operational domains.

25 pages, 9056 KiB  
Article
Creating Digital Twins to Celebrate Commemorative Events in the Metaverse
by Vicente Jover and Silvia Sempere
Computers 2025, 14(7), 273; https://doi.org/10.3390/computers14070273 - 10 Jul 2025
Abstract
This paper explores the potential and implications arising from the convergence of virtual reality, the metaverse, and digital twins in translating a real-world commemorative event into a virtual environment. It emphasizes how such integration influences digital transformation processes, particularly in reshaping models of social interaction. Virtual reality is conceptualized as an immersive technology, enabling advanced multisensory experiences within persistent virtual spaces, such as the metaverse. Furthermore, this study delves into the concept of digital twins—high-fidelity virtual representations of physical systems, processes, and objects—highlighting their application in simulation, analysis, forecasting, prevention, and operational enhancement. In the context of virtual events, the convergence of these technologies is examined as a means to create interactive, adaptable, and scalable environments capable of accommodating diverse social groups and facilitating global accessibility. As a practical application, a digital twin of the Ferrándiz and Carbonell buildings—the most iconic architectural ensemble on the Alcoi campus—was developed to host a virtual event commemorating the 50th anniversary of the integration of the Alcoi School of Industrial Technical Engineering into the Universitat Politècnica de València in 1972. The virtual environment was subsequently evaluated by a sample of users, including students and faculty, to assess usability and functionality, and to identify areas for improvement. The digital twin achieved a score of 88.39 out of 100 on the System Usability Scale (SUS). The findings underscore the key opportunities and challenges associated with the adoption of these emerging technologies, particularly regarding their adaptability in reconfiguring digital environments for work, social interaction, and education. Using this case study as a foundation, this paper offers insights into the strategic role of the metaverse in extending environmental perception and its transformative potential for the future digital ecosystem through the implementation of digital twins.
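
For reference, the reported SUS score is conventionally derived from ten 5-point Likert items; the sketch below shows only the standard scoring formula (the questionnaire items themselves are not reproduced here).

```python
def sus_score(responses):
    """Standard System Usability Scale scoring for one participant's
    ten 1-5 Likert responses; returns a value on a 0-100 scale."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

# A study-level score such as 88.39 is the mean of sus_score() over all participants.
```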

21 pages, 1179 KiB  
Article
ELFA-Log: Cross-System Log Anomaly Detection via Enhanced Pseudo-Labeling and Feature Alignment
by Xiaowei Zhao, Kaiwei Guo, Mingting Huang, Shaojian Qiu and Lu Lu
Computers 2025, 14(7), 272; https://doi.org/10.3390/computers14070272 - 10 Jul 2025
Abstract
Existing log-based anomaly detection methods typically require large volumes of labeled data for training, presenting significant challenges when applied to new systems with limited labeled data. This limitation has spurred the need for cross-system log anomaly detection (CSLAD) methods. However, current CSLAD approaches often face challenges in effectively handling distributional differences in log data across systems. To address this issue, we propose ELFA-Log, a transfer learning-based approach for cross-system log anomaly detection. By enhancing pseudo-label generation with uncertainty estimation and feature alignment, ELFA-Log improves detection performance even in the presence of data distribution shifts. It uses entropy-based metrics to generate high-confidence pseudo-labels, minimizing reliance on labeled data. Additionally, a distance-based loss function optimizes the shared representation of cross-system log features. Experimental results on benchmark datasets demonstrate that ELFA-Log enhances the performance of CSLAD, offering a practical solution to the challenge of high labeling costs in real-world applications.
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
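
As a rough illustration of the entropy-based selection the abstract describes (not ELFA-Log's exact procedure; the threshold and array shapes are assumptions), high-confidence pseudo-labels could be picked like this:

```python
import numpy as np

def select_pseudo_labels(probs, max_entropy=0.2):
    """Keep predictions whose normalized entropy falls below a threshold.

    probs: (n_samples, n_classes) softmax outputs of a source-trained model.
    Returns indices of high-confidence samples and their pseudo-labels.
    """
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    entropy /= np.log(probs.shape[1])           # normalize to [0, 1]
    keep = np.where(entropy < max_entropy)[0]
    return keep, probs[keep].argmax(axis=1)
```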

23 pages, 1590 KiB  
Article
A Decision Support System for Classifying Suppliers Based on Machine Learning Techniques: A Case Study in the Aeronautics Industry
by Ana Claudia Andrade Ferreira, Alexandre Ferreira de Pinho, Matheus Brendon Francisco, Laercio Almeida de Siqueira, Jr. and Guilherme Augusto Vilas Boas Vasconcelos
Computers 2025, 14(7), 271; https://doi.org/10.3390/computers14070271 - 10 Jul 2025
Abstract
This paper presents the application of four machine learning algorithms to segment suppliers in a real case. The algorithms used were K-Means, Hierarchical K-Means, Agglomerative Nesting (AGNES), and Fuzzy Clustering. The company's suppliers were clustered using attributes such as the number of non-conformities, location, and quantity supplied, among others. The CRISP-DM methodology was used for the work development. The proposed methodology is important for both industry and academia, as it helps managers make decisions about the quality of their suppliers and compares the use of four different algorithms for this purpose, which is an important insight for new studies. The K-Means algorithm obtained the best performance, both in the metrics obtained and in simplicity of use. It is worth highlighting that no study to date has applied these four algorithms together to an industrial case, and this work demonstrates that application. In the Industry 4.0 era, the use of artificial intelligence is essential for companies to make better, data-driven decisions.
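
A minimal sketch of the K-Means variant of this supplier segmentation, assuming hypothetical feature columns in place of the company's real data:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical supplier attributes along the lines described in the abstract.
suppliers = pd.DataFrame({
    "non_conformities": [0, 3, 12, 1, 7, 0, 25, 4],
    "quantity_supplied": [500, 120, 80, 900, 300, 650, 40, 210],
    "distance_km": [12, 450, 1300, 30, 800, 95, 2100, 600],
})

X = StandardScaler().fit_transform(suppliers)       # scale before clustering
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
suppliers["cluster"] = km.labels_
print(suppliers)
print("silhouette:", silhouette_score(X, km.labels_))
```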

26 pages, 4876 KiB  
Article
A Systematic Approach to Evaluate the Use of Chatbots in Educational Contexts: Learning Gains, Engagements and Perceptions
by Wei Qiu, Chit Lin Su, Nurabidah Binti Jamil, Maung Thway, Samuel Soo Hwee Ng, Lei Zhang, Fun Siong Lim and Joel Weijia Lai
Computers 2025, 14(7), 270; https://doi.org/10.3390/computers14070270 - 9 Jul 2025
Abstract
As generative artificial intelligence (GenAI) chatbots gain traction in educational settings, a growing number of studies explore their potential for personalized, scalable learning. However, methodological fragmentation has limited the comparability and generalizability of findings across the field. This study proposes a unified, learning analytics–driven framework for evaluating the impact of GenAI chatbots on student learning. Grounded in the collection, analysis, and interpretation of diverse learner data, the framework integrates assessment outcomes, conversational interactions, engagement metrics, and student feedback. We demonstrate its application through a multi-week, quasi-experimental study using a Socratic-style chatbot designed with pedagogical intent. Using clustering techniques and statistical analysis, we identified patterns in student–chatbot interaction and linked them to changes in learning outcomes. This framework provides researchers and educators with a replicable structure for evaluating GenAI interventions and advancing coherence in learning analytics–based educational research.
(This article belongs to the Special Issue Smart Learning Environments)

21 pages, 2170 KiB  
Article
IoT-Driven Intelligent Energy Management: Leveraging Smart Monitoring Applications and Artificial Neural Networks (ANN) for Sustainable Practices
by Azza Mohamed, Ibrahim Ismail and Mohammed AlDaraawi
Computers 2025, 14(7), 269; https://doi.org/10.3390/computers14070269 - 9 Jul 2025
Abstract
The growing mismanagement of energy resources is a pressing issue that poses significant risks to both individuals and the environment. As energy consumption continues to rise, the ramifications become increasingly severe, necessitating urgent action. In response, the rapid expansion of Internet of Things (IoT) devices offers a promising and innovative solution due to their adaptability, low power consumption, and transformative potential in energy management. This study describes a novel strategy that integrates IoT and Artificial Neural Networks (ANNs) in a smart monitoring mobile application intended to optimize energy usage and promote sustainability in residential settings. While both IoT and ANN technologies have been investigated separately in previous research, the uniqueness of this work is the actual integration of both technologies into a real-time, user-adaptive framework. The application allows for continuous energy monitoring via modern IoT devices and wireless sensor networks, while ANN-based prediction models evaluate consumption data to dynamically optimize energy use and reduce environmental impact. The system’s key features include simulated consumption scenarios and adaptive user profiles, which account for differences in household behaviors and occupancy patterns, allowing for tailored recommendations and energy control techniques. The architecture allows for remote device control, real-time feedback, and scenario-based simulations, making the system suitable for a wide range of home contexts. The suggested system’s feasibility and effectiveness are demonstrated through detailed simulations, highlighting its potential to increase energy efficiency and encourage sustainable habits. This study contributes to the rapidly evolving field of intelligent energy management by providing a scalable, integrated, and user-centric solution that bridges the gap between theoretical models and actual implementation.
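
As an illustrative sketch of ANN-based consumption prediction (the features, synthetic data, and network size are assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical sensor readings: hour of day, occupancy count, outdoor temperature.
X = np.column_stack([
    rng.uniform(0, 24, 1000),
    rng.integers(0, 6, 1000),
    rng.uniform(-5, 35, 1000),
])
# Synthetic consumption (kWh) with a daily cycle, occupancy load, and noise.
y = (2.0 + 0.8 * np.sin(X[:, 0] / 24 * 2 * np.pi)
     + 0.5 * X[:, 1] + rng.normal(0, 0.2, 1000))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out data:", ann.score(X_te, y_te))
```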

22 pages, 1350 KiB  
Article
From Patterns to Predictions: Spatiotemporal Mobile Traffic Forecasting Using AutoML, TimeGPT and Traditional Models
by Hassan Ayaz, Kashif Sultan, Muhammad Sheraz and Teong Chee Chuah
Computers 2025, 14(7), 268; https://doi.org/10.3390/computers14070268 - 8 Jul 2025
Abstract
Call Detail Records (CDRs) from mobile networks offer valuable insights into both network performance and user behavior. With the growing importance of data analytics, analyzing CDRs has become critical for optimizing network resources by forecasting demand across spatial and temporal dimensions. In this study, we examine publicly available CDR data from Telecom Italia to explore the spatiotemporal dynamics of mobile network activity in Milan. This analysis reveals key patterns in traffic distribution, highlighting both high- and low-demand regions as well as notable variations in usage over time. To anticipate future network usage, we employ both Automated Machine Learning (AutoML) and the transformer-based TimeGPT model, comparing their performance against traditional forecasting methods such as Long Short-Term Memory (LSTM), ARIMA and SARIMA. Model accuracy is assessed using standard evaluation metrics, including root mean square error (RMSE), mean absolute error (MAE) and the coefficient of determination (R2). Results show that AutoML delivers the most accurate forecasts, with significantly lower RMSE (2.4990 vs. 14.8226) and MAE (1.0284 vs. 7.7789) compared to TimeGPT and a higher R2 score (99.96% vs. 98.62%). Our findings underscore the strengths of modern predictive models in capturing complex traffic behaviors and demonstrate their value in resource planning, congestion management and overall network optimization. Importantly, this study is one of the first to comprehensively assess AutoML and TimeGPT on a high-resolution, real-world CDR dataset from a major European city. By merging machine learning techniques with advanced temporal modeling, this study provides a strong framework for scalable and intelligent mobile traffic prediction. It thus highlights the ability of AutoML to simplify model development and the potential of TimeGPT to extend transformer-based prediction to the telecommunications domain.
(This article belongs to the Special Issue AI in Its Ecosystem)
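
The evaluation metrics cited above (RMSE, MAE, R2) can be reproduced with standard tooling; a small sketch with toy numbers:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def report(y_true, y_pred, name):
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    mae = mean_absolute_error(y_true, y_pred)
    r2 = r2_score(y_true, y_pred)
    print(f"{name}: RMSE={rmse:.4f}  MAE={mae:.4f}  R2={r2:.4%}")

# Toy illustration with synthetic traffic values, not the paper's data.
y_true = np.array([10.0, 12.5, 11.0, 14.2, 13.1])
y_pred = y_true + np.random.default_rng(1).normal(0, 0.5, 5)
report(y_true, y_pred, "toy model")
```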

25 pages, 3142 KiB  
Article
Mobile Augmented Reality Games Towards Smart Learning City Environments: Learning About Sustainability
by Margarida M. Marques, João Ferreira-Santos, Rita Rodrigues and Lúcia Pombo
Computers 2025, 14(7), 267; https://doi.org/10.3390/computers14070267 - 7 Jul 2025
Abstract
This study explores the potential of mobile augmented reality games (MARGs) in promoting sustainability competencies within the context of a smart learning city environment. Anchored in the EduCITY project, which integrates location-based AR-enhanced games into an interactive mobile app, the research investigates how these tools support Education for Sustainable Development (ESD). Employing a mixed-methods approach, data were collected through the GreenComp-based Questionnaire (GCQuest) and anonymous gameplay logs generated by the app. Thematic analysis of 358 responses revealed four key learning domains: ‘cultural awareness’, ‘environmental protection’, ‘sustainability awareness’, and ‘contextual knowledge’. Quantitative performance data from game logs highlighted substantial variation across games, with the highest performance found in those with more frequent AR integration and multiple iterative refinements. Participants who engaged with the optional AR-enhanced features outperformed those who did not. This study provides empirical evidence for the use of MARGs to cultivate sustainability-related knowledge, skills, and attitudes, particularly when grounded in local realities and enhanced through thoughtful design. Beyond the EduCITY project, the study proposes a replicable model for assessing sustainability competencies, with implications for broader integration of AR across educational contexts in ESD. The paper concludes with a critical reflection on methodological limitations and suggests future directions, including adapting the GCQuest for use with younger learners in primary education.

12 pages, 349 KiB  
Article
Agentic AI for Cultural Heritage: Embedding Risk Memory in Semantic Digital Twins
by George Pavlidis
Computers 2025, 14(7), 266; https://doi.org/10.3390/computers14070266 - 7 Jul 2025
Abstract
Cultural heritage preservation increasingly relies on data-driven technologies, yet most existing systems lack the cognitive and temporal depth required to support meaningful, transparent, and policy-informed decision-making. This paper proposes a conceptual framework for memory-enabled, semantically grounded AI agents in the cultural domain, showing how the integration of the ICCROM/CCI ABC method for risk assessment into the Panoptes ontology enables the structured encoding of risk cognition over time. This structured risk memory becomes the foundation for agentic reasoning, supporting prioritization, justification, and long-term preservation planning. It is argued that this approach constitutes a principled step toward the development of Cultural Agentic AI: autonomous systems that remember, reason, and act in alignment with cultural values. Proof-of-concept simulations illustrate how memory-enabled agents can trace evolving risk patterns, trigger policy responses, and evaluate mitigation outcomes through structured, explainable reasoning.
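
A minimal sketch of what a structured risk memory might look like, assuming the ICCROM/CCI ABC convention that a risk's magnitude is the sum of its A (frequency), B (loss of value), and C (assets affected) component scores; the class and field names are hypothetical, not Panoptes ontology terms:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAssessment:
    """One ABC-method risk record: magnitude = A + B + C."""
    risk_id: str
    assessed_on: date
    a_frequency: float        # how often the event is expected
    b_loss_of_value: float    # loss of value to each affected item
    c_assets_affected: float  # fraction of the collection affected

    @property
    def magnitude(self) -> float:
        return self.a_frequency + self.b_loss_of_value + self.c_assets_affected

@dataclass
class RiskMemory:
    """Time-ordered risk history for one heritage asset, standing in for
    the ontology-backed memory described in the paper."""
    history: list[RiskAssessment] = field(default_factory=list)

    def trend(self, risk_id: str) -> list[tuple[date, float]]:
        # An agent can reason over this trend to trigger policy responses.
        return [(r.assessed_on, r.magnitude)
                for r in sorted(self.history, key=lambda r: r.assessed_on)
                if r.risk_id == risk_id]
```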

20 pages, 632 KiB  
Article
Bridging or Burning? Digital Sustainability and PY Students’ Intentions to Adopt AI-NLP in Educational Contexts
by Mostafa Aboulnour Salem
Computers 2025, 14(7), 265; https://doi.org/10.3390/computers14070265 - 7 Jul 2025
Abstract
The current study examines the determinants influencing preparatory year (PY) students’ intentions to adopt AI-powered natural language processing (NLP) models, such as Copilot, ChatGPT, and Gemini, and how these intentions shape their conceptions of digital sustainability. The extended unified theory of acceptance and use of technology (UTAUT) was integrated with a range of educational constructs, including content availability (CA), learning engagement (LE), learning motivation (LM), learner involvement (LI), and AI satisfaction (AS). Responses of 274 PY students from Saudi universities were analysed using partial least squares structural equation modelling (PLS-SEM) to evaluate both the measurement and structural models. The findings indicated that CA (β = 0.25), LE (β = 0.22), LM (β = 0.20), and LI (β = 0.18) significantly predicted user intention (UI), explaining 52.2% of its variance (R2 = 0.522). In turn, UI significantly predicted students’ digital sustainability conceptions (DSC) (β = 0.35, R2 = 0.451). However, AI satisfaction (AS) did not exhibit a moderating effect, suggesting uniformly high satisfaction levels among students. The study concluded that AI-powered NLP models are being adopted as learning assistant technologies and are also essential catalysts in promoting sustainable digital conceptions. This study contributes both theoretically and practically by conceptualising digital sustainability as a learner-driven construct and linking educational technology adoption to its advancement, in line with global frameworks such as Sustainable Development Goals (SDGs) 4 and 9. The study highlights AI’s transformative potential in higher education by examining how user intention (UI) influences digital sustainability conceptions (DSC) among preparatory year students in Saudi Arabia. Given the demographic focus of the study, further research is recommended, particularly longitudinal studies, to track changes over time across diverse genders, academic specialisations, and cultural contexts.
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))

15 pages, 4430 KiB  
Article
A Comprehensive Approach to Instruction Tuning for Qwen2.5: Data Selection, Domain Interaction, and Training Protocols
by Xungang Gu, Mengqi Wang, Yangjie Tian, Ning Li, Jiaze Sun, Jingfang Xu, He Zhang, Ruohua Xu and Ming Liu
Computers 2025, 14(7), 264; https://doi.org/10.3390/computers14070264 - 5 Jul 2025
Abstract
Instruction tuning plays a pivotal role in aligning large language models with diverse tasks, yet its effectiveness hinges on the interplay of data quality, domain composition, and training strategies. This study moves beyond qualitative assessment to systematically quantify these factors through extensive experiments on data selection, data mixture, and training protocols. By quantifying performance trade-offs, we demonstrate that the implicit method SuperFiltering achieves an optimal balance, whereas explicit filters can induce capability conflicts. A fine-grained analysis of cross-domain interactions quantifies a near-linear competition between code and math, while showing that tool use data exhibits minimal interference. To mitigate these measured conflicts, we compare multi-task, sequential, and multi-stage training strategies, revealing that multi-stage training significantly reduces Conflict Rates while preserving domain expertise. Our findings culminate in a unified framework for optimizing instruction tuning, offering actionable, data-driven guidelines for balancing multi-domain performance and enhancing model generalization, thus advancing the field by providing a methodology to move from intuition to systematic optimization.
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
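
As a rough illustration of perplexity-based implicit data selection in the spirit of SuperFiltering (a simplified approximation, not the published method; the proxy model and prompt format are assumptions):

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# SuperFiltering-style scoring uses a small proxy model; GPT-2 here.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def nll(text, context=""):
    """Mean negative log-likelihood of `text`, optionally conditioned on `context`."""
    ids = tok(context + text, return_tensors="pt").input_ids
    ctx_len = tok(context, return_tensors="pt").input_ids.shape[1] if context else 0
    labels = ids.clone()
    labels[:, :ctx_len] = -100          # score only the response tokens
    return model(ids, labels=labels).loss.item()

def ifd_score(instruction, response):
    """Instruction-Following Difficulty proxy: perplexity of the response
    given the instruction over its unconditioned perplexity."""
    return math.exp(nll(response, instruction + "\n") - nll(response))

# Pairs with low IFD are 'easy' given the instruction; a filter keeps
# examples inside a chosen score band.
```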

13 pages, 1502 KiB  
Article
Anomaly Detection Based on 1DCNN Self-Attention Networks for Seismic Electric Signals
by Wei Li, Huaqin Gu, Yanlin Wen, Wenzhou Zhao and Zhaobin Wang
Computers 2025, 14(7), 263; https://doi.org/10.3390/computers14070263 - 5 Jul 2025
Abstract
The application of deep learning to seismic electric signal (SES) anomaly detection remains underexplored in geophysics. This study introduces the integration of a 1D convolutional neural network (1DCNN) with a self-attention mechanism to automate SES analysis at a monitoring station in China. Utilizing physics-informed data augmentation, our framework adapts to real-world interference scenarios, including subway operations and tidal fluctuations. The model achieves an F1-score of 0.9797 on a 7-year dataset, demonstrating superior robustness and precision compared to traditional manual interpretation. This work establishes a practical deep learning solution for real-time geoelectric anomaly monitoring, offering a transformative tool for earthquake early warning systems.
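
A minimal PyTorch sketch of the general 1DCNN-plus-self-attention pattern the abstract describes (layer sizes and the pooling/classification head are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class CNNSelfAttention(nn.Module):
    """1D-CNN front end, multi-head self-attention, binary anomaly head."""
    def __init__(self, in_channels=1, hidden=64, heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # normal vs. anomalous

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)   # -> (batch, time, hidden)
        h, _ = self.attn(h, h, h)          # self-attention over time steps
        return self.head(h.mean(dim=1))    # pool over time, then classify

logits = CNNSelfAttention()(torch.randn(8, 1, 512))
print(logits.shape)                        # torch.Size([8, 2])
```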

16 pages, 2358 KiB  
Article
A Hybrid Content-Aware Network for Single Image Deraining
by Guoqiang Chai, Rui Yang, Jin Ge and Yulei Chen
Computers 2025, 14(7), 262; https://doi.org/10.3390/computers14070262 - 4 Jul 2025
Abstract
Rain streaks degrade the quality of optical images and seriously affect the effectiveness of subsequent vision-based algorithms. Although the applications of a convolutional neural network (CNN) and self-attention mechanism (SA) in single image deraining have shown great success, there are still unresolved issues regarding the deraining performance and the large computational load. The work in this paper coordinates and exploits the complementary advantages of CNNs and SA, proposing a hybrid content-aware deraining network (CAD) to reduce complexity and generate high-quality results. Specifically, we construct the CADBlock, including the content-aware convolution and attention mixer module (CAMM) and the multi-scale double-gated feed-forward module (MDFM). In CAMM, the attention mechanism is used for intricate windows to generate abundant features and simple convolution is used for plain windows to reduce computational costs. In MDFM, multi-scale spatial features are double-gated fused to preserve local detail features and enhance image restoration capabilities. Furthermore, a four-token contextual attention module (FTCA) is introduced to explore the content information among neighbor keys to improve the representation ability. Both qualitative and quantitative validations on synthetic and real-world rain images demonstrate that the proposed CAD can achieve a competitive deraining performance.
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)

22 pages, 7580 KiB  
Article
Fuzzy-Based Multi-Modal Query-Forwarding in Mini-Datacenters
by Sami J. Habib and Paulvanna Nayaki Marimuthu
Computers 2025, 14(7), 261; https://doi.org/10.3390/computers14070261 - 1 Jul 2025
Abstract
The rapid growth of Internet of Things (IoT) enabled devices in industrial environments and the associated increase in data generation are paving the way for the development of localized, distributed datacenters. In this paper, we propose a novel mini-datacenter in the form of wireless sensor networks to efficiently handle query-based data collection from Industrial IoT (IIoT) devices. The mini-datacenter comprises a command center, gateways, and IoT sensors, designed to manage stochastic query-response traffic flow. We developed a duplication/aggregation query flow model tailored to emphasize reliable transmission, together with a dataflow management framework that employs a multi-modal query forwarding approach to forward queries from the command center to gateways under varying environments. The query forwarding includes coarse-grain and fine-grain strategies: the coarse-grain strategy uses a direct data flow through a single gateway at the expense of reliability, while the fine-grain approach uses redundant gateways to enhance reliability. A fuzzy-logic-based intelligence system is integrated into the framework to dynamically select the appropriate granularity of the forwarding strategy based on resource availability and network conditions, aided by a buffer watching algorithm that tracks real-time buffer status. We carried out several experiments with gateway nodes varying from 10 to 100 to evaluate the framework’s scalability and robustness in handling the query flow under complex environments. The experimental results demonstrate that the framework provides a flexible and adaptive solution that balances buffer usage while maintaining over 95% reliability for most queries.
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
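
A toy sketch of how a fuzzy controller might choose between the two forwarding granularities (membership functions, rule set, and input scaling are all assumptions, not the paper's design):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def forwarding_mode(buffer_free, link_quality):
    """Inputs in [0, 1]; returns the selected forwarding granularity.
    Rule 1: plentiful buffers AND good links -> fine-grain (redundant gateways).
    Rule 2: scarce buffers OR poor links    -> coarse-grain (single gateway)."""
    fine = min(tri(buffer_free, 0.4, 1.0, 1.6), tri(link_quality, 0.4, 1.0, 1.6))
    coarse = max(tri(buffer_free, -0.6, 0.0, 0.6), tri(link_quality, -0.6, 0.0, 0.6))
    return "fine-grain" if fine >= coarse else "coarse-grain"

print(forwarding_mode(buffer_free=0.8, link_quality=0.9))  # fine-grain
print(forwarding_mode(buffer_free=0.2, link_quality=0.5))  # coarse-grain
```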

12 pages, 6638 KiB  
Article
Vision-Degree-Driven Loading Strategy for Real-Time Large-Scale Scene Rendering
by Yu Ding and Ying Song
Computers 2025, 14(7), 260; https://doi.org/10.3390/computers14070260 - 1 Jul 2025
Abstract
Large-scale scene rendering faces challenges in managing massive scene data and mitigating rendering latency caused by suboptimal loading sequences. Although current approaches utilize Level of Detail (LOD) for dynamic resource loading, two limitations remain. One is loading priority, which often fails to adequately consider the factors affecting visual quality, such as LOD selection and visible area. The other is an insufficient trade-off between rendering quality and loading latency. To address these issues, we propose a loading prioritization metric called Vision Degree (VD), derived from the LOD selection, the loading time, and the trade-off between rendering quality and loading latency. During rendering, VDs are sorted in descending order to obtain an optimized loading and unloading sequence. In addition, a compensation factor is introduced to offset the visual loss caused by reduced LOD levels and to improve the rendering result. Finally, we optimize the initial viewpoint selection by minimizing the average model-to-viewpoint distance, thereby reducing the initial scene loading time. Experimental results demonstrate that our method reduces rendering latency by 24–29% compared with the existing Area-of-Interest (AOI)-based loading strategy, while maintaining comparable visual quality.
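
A hypothetical sketch of a VD-style loading priority that only mirrors the ingredients named in the abstract (LOD benefit, visible area, loading time); the actual VD formula and compensation factor are defined in the paper:

```python
def vision_degree(lod_benefit, visible_area, load_time, alpha=0.5):
    """Hypothetical priority score: visual benefit of the selected LOD
    weighted by visible area, discounted by expected loading time."""
    return (lod_benefit * visible_area) / (1.0 + alpha * load_time)

# Sort candidate assets so the highest VD loads first.
assets = [
    {"name": "tower", "vd": vision_degree(0.9, 0.30, 0.8)},
    {"name": "terrain", "vd": vision_degree(0.6, 0.55, 0.4)},
    {"name": "far_hills", "vd": vision_degree(0.3, 0.10, 1.5)},
]
for a in sorted(assets, key=lambda a: a["vd"], reverse=True):
    print(a["name"], round(a["vd"], 3))
```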

14 pages, 236 KiB  
Systematic Review
Artificial Intelligence and the Future of Mental Health in a Digitally Transformed World
by Aggeliki Kelly Fanarioti and Kostas Karpouzis
Computers 2025, 14(7), 259; https://doi.org/10.3390/computers14070259 - 30 Jun 2025
Abstract
Artificial Intelligence (AI) is reshaping mental healthcare by enabling new forms of diagnosis, therapy, and patient monitoring. Yet this digital transformation raises complex policy and ethical questions that remain insufficiently addressed. In this paper, we critically examine how AI-driven innovations are being integrated into mental health systems across different global contexts, with particular attention to governance, regulation, and social justice. The study follows the PRISMA-ScR methodology to ensure transparency and methodological rigor, while also acknowledging its inherent limitations, such as the emphasis on breadth over depth and the exclusion of non-English sources. Drawing on international guidelines, academic literature, and emerging national strategies, it identifies both opportunities, such as improved access and personalized care, and threats, including algorithmic bias, data privacy risks, and diminished human oversight. Special attention is given to underrepresented populations and the risks of digital exclusion. The paper argues for a value-driven approach that centers equity, transparency, and informed consent in the deployment of AI tools. It concludes with actionable policy recommendations to support the ethical implementation of AI in mental health, emphasizing the need for cross-sectoral collaboration and global accountability mechanisms.
(This article belongs to the Special Issue AI in Its Ecosystem)

26 pages, 3334 KiB  
Review
Simulation-Based Development of Internet of Cyber-Things Using DEVS
by Laurent Capocchi, Bernard P. Zeigler and Jean-Francois Santucci
Computers 2025, 14(7), 258; https://doi.org/10.3390/computers14070258 - 30 Jun 2025
Abstract
Simulation-based development is a structured approach that uses formal models to design and test system behavior before building the actual system. The Internet of Things (IoT) connects physical devices equipped with sensors and software to collect and exchange data. Cyber-Physical Systems (CPSs) integrate computing directly into physical processes to enable real-time control. This paper reviews the Discrete-Event System Specification (DEVS) formalism and explores how it can serve as a unified framework for designing, simulating, and implementing systems that combine IoT and CPS—referred to as the Internet of Cyber-Things (IoCT). Through case studies that include home automation, solar energy monitoring, conflict management, and swarm robotics, the paper reviews how DEVS enables the construction of modular, scalable, and reusable models. The role of the System Entity Structure (SES) is also discussed, highlighting its contribution to organizing models and generating alternative system configurations. With this background as a basis, the paper evaluates whether DEVS provides the necessary modeling power and continuity across stages to support the development of complex IoCT systems. The paper concludes that DEVS offers a robust and flexible foundation for developing IoCT systems, supporting both expressiveness and a seamless transition from design to real-world deployment.
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
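
For readers unfamiliar with DEVS, a minimal Python skeleton of an atomic model and its internal-transition loop (illustrative only; the simulators reviewed in the paper add coupled models, ports, and SES support):

```python
class AtomicDEVS:
    """Minimal atomic DEVS skeleton: state S plus ta, lambda, delta_int, delta_ext."""
    def __init__(self, state):
        self.state = state
    def time_advance(self):            # ta(s): time until the next internal event
        raise NotImplementedError
    def output(self):                  # lambda(s): output just before delta_int
        raise NotImplementedError
    def internal(self):                # delta_int(s)
        raise NotImplementedError
    def external(self, elapsed, x):    # delta_ext(s, e, x)
        raise NotImplementedError

class Blinker(AtomicDEVS):
    """A two-state light that toggles every `period` time units."""
    def __init__(self, period=1.0):
        super().__init__("off")
        self.period = period
    def time_advance(self):
        return self.period
    def output(self):
        return f"going_{'on' if self.state == 'off' else 'off'}"
    def internal(self):
        self.state = "on" if self.state == "off" else "off"

# Tiny event loop: run three internal transitions.
m, t = Blinker(), 0.0
for _ in range(3):
    t += m.time_advance()
    print(t, m.output())
    m.internal()
```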

13 pages, 3210 KiB  
Article
Bridging Tradition and Innovation: Transformative Educational Practices in Museums with AI and VR
by Michele Domenico Todino, Eliza Pitri, Argyro Fella, Antonia Michaelidou, Lucia Campitiello, Francesca Placanica, Stefano Di Tore and Maurizio Sibilio
Computers 2025, 14(7), 257; https://doi.org/10.3390/computers14070257 - 30 Jun 2025
Abstract
This paper explores the intersection of folk art, museums, and education in the 20th century, with a focus on the concept of art as experience, emphasizing the role of museums as active, inclusive learning spaces. A collaboration between the University of Salerno and the University of Nicosia has developed virtual museum environments using virtual reality (VR) to enhance engagement with cultural heritage. These projects aim to make museums more accessible and interactive, with future potential in integrating artificial-intelligence-driven non-player characters (NPCs) and VR strategies for personalized visitor experiences of the Nicosia Folk Art Museum.

21 pages, 1414 KiB  
Article
An xLSTM–XGBoost Ensemble Model for Forecasting Non-Stationary and Highly Volatile Gasoline Price
by Fujiang Yuan, Xia Huang, Hong Jiang, Yang Jiang, Zihao Zuo, Lusheng Wang, Yuxin Wang, Shaojie Gu and Yanhong Peng
Computers 2025, 14(7), 256; https://doi.org/10.3390/computers14070256 - 29 Jun 2025
Abstract
High-frequency fluctuations in the international crude oil market have led to multilevel characteristics in China’s domestic refined oil pricing mechanism. To address the poor fitting performance of single deep learning models on oil price data, which hampers accurate gasoline price prediction, this paper proposes a gasoline price prediction method based on a combined xLSTM–XGBoost model. Using gasoline price data from June 2000 to November 2024 in Sichuan Province as a sample, the data are decomposed via STL decomposition to extract trend, residual, and seasonal components. The xLSTM model is then employed to predict the trend and seasonal components, while XGBoost predicts the residual component. Finally, the predictions from both models are combined to produce the final forecast. The experimental results demonstrate that the proposed xLSTM–XGBoost model reduces the MAE by 14.8% compared to the second-best sLSTM–XGBoost model and by 83% compared to the traditional LSTM model, significantly enhancing prediction accuracy.
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
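
A condensed sketch of the decomposition-plus-residual-regression idea on synthetic data (the xLSTM branch for the trend and seasonal components is omitted; the series, lag features, and hyperparameters are assumptions):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from xgboost import XGBRegressor

# Synthetic monthly price series standing in for the Sichuan gasoline data.
idx = pd.date_range("2000-06-01", periods=294, freq="MS")
price = pd.Series(6 + 0.01 * np.arange(294)
                  + 0.3 * np.sin(np.arange(294) * 2 * np.pi / 12)
                  + np.random.default_rng(0).normal(0, 0.1, 294), index=idx)

parts = STL(price, period=12).fit()      # trend / seasonal / residual split

# Residual component -> XGBoost on lagged values.
resid = parts.resid
lags = pd.concat({f"lag{k}": resid.shift(k) for k in range(1, 13)}, axis=1).dropna()
model = XGBRegressor(n_estimators=200, max_depth=3)
model.fit(lags.iloc[:-12], resid.loc[lags.index][:-12])
print(model.predict(lags.iloc[-12:]).round(3))   # residual forecast, last year
```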

25 pages, 2892 KiB  
Article
Focal Correlation and Event-Based Focal Visual Content Text Attention for Past Event Search
by Pranita P. Deshmukh and S. Poonkuntran
Computers 2025, 14(7), 255; https://doi.org/10.3390/computers14070255 - 28 Jun 2025
Abstract
Every minute, vast amounts of video and image data are uploaded worldwide to the internet and social media platforms, creating a rich visual archive of human experiences—from weddings and family gatherings to significant historical events such as war crimes and humanitarian crises. When properly analyzed, this multimodal data holds immense potential for reconstructing important events and verifying information. However, challenges arise when images and videos lack complete annotations, making manual examination inefficient and time-consuming. To address this, we propose a novel event-based focal visual content text attention (EFVCTA) framework for automated past event retrieval using visual question answering (VQA) techniques. Our approach integrates a Long Short-Term Memory (LSTM) model with convolutional non-linearity and an adaptive attention mechanism to efficiently identify and retrieve relevant visual evidence alongside precise answers. The model is designed with robust weight initialization, regularization, and optimization strategies and is evaluated on the Common Objects in Context (COCO) dataset. The results demonstrate that EFVCTA achieves the highest performance across all metrics (88.7% accuracy, 86.5% F1-score, 84.9% mAP), outperforming state-of-the-art baselines. The EFVCTA framework demonstrates promising results for retrieving information about past events captured in images and videos and can be effectively applied to scenarios such as documenting training programs, workshops, conferences, and social gatherings in academic institutions.

24 pages, 589 KiB  
Article
FaceCloseup: Enhancing Mobile Facial Authentication with Perspective Distortion-Based Liveness Detection
by Yingjiu Li, Yan Li and Zilong Wang
Computers 2025, 14(7), 254; https://doi.org/10.3390/computers14070254 - 27 Jun 2025
Abstract
Facial authentication has gained widespread adoption as a biometric authentication method, offering a convenient alternative to traditional password-based systems, particularly on mobile devices equipped with front-facing cameras. While this technology enhances usability and security by eliminating password management, it remains highly susceptible to spoofing attacks. Adversaries can exploit facial recognition systems using pre-recorded photos, videos, or even sophisticated 3D models of victims’ faces to bypass authentication mechanisms. The increasing availability of personal images on social media further amplifies this risk, making robust anti-spoofing mechanisms essential for secure facial authentication. To address these challenges, we introduce FaceCloseup, a novel liveness detection technique that strengthens facial authentication by leveraging perspective distortion inherent in close-up shots of real, 3D faces. Instead of relying on additional sensors or user-interactive gestures, FaceCloseup passively analyzes facial distortions in video frames captured by a mobile device’s camera, improving security without compromising user experience. FaceCloseup effectively distinguishes live faces from spoofed attacks by identifying perspective-based distortions across different facial regions. The system achieves a 99.48% accuracy in detecting common spoofing methods—including photo, video, and 3D model-based attacks—and demonstrates 98.44% accuracy in differentiating between individual users. By operating entirely on-device, FaceCloseup eliminates the need for cloud-based processing, reducing privacy concerns and potential latency in authentication. Its reliance on natural device movement ensures a seamless authentication experience while maintaining robust security.
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)

24 pages, 20201 KiB  
Article
EMGP-Net: A Hybrid Deep Learning Architecture for Breast Cancer Gene Expression Prediction
by Oumeima Thâalbi and Moulay A. Akhloufi
Computers 2025, 14(7), 253; https://doi.org/10.3390/computers14070253 - 26 Jun 2025
Abstract
Background: The accurate prediction of gene expression is essential in breast cancer research. However, spatial transcriptomics technologies are usually too expensive. Recent studies have used whole-slide images combined with spatial transcriptomics data to predict breast cancer gene expression. To this end, we present EMGP-Net, a novel hybrid deep learning architecture developed by combining two state-of-the-art models, MambaVision and EfficientFormer. Method: EMGP-Net was first trained on the HER2+ dataset, which contains data from eight patients, using a leave-one-patient-out approach. To ensure generalizability, we conducted external validation, alternately training EMGP-Net on the HER2+ dataset and testing it on the STNet dataset, which contains data from 23 patients, and vice versa. We evaluated EMGP-Net’s ability to predict the expression of 250 selected genes. EMGP-Net mixes features from both models and uses attention mechanisms followed by fully connected layers. Results: Our model outperformed both EfficientFormer and MambaVision, which were trained separately on the HER2+ dataset, achieving the highest PCC of 0.7903 for the PTMA gene, with the top 14 genes having PCCs greater than 0.7, including other important breast cancer biomarkers such as GNAS and B2M. The external validation showed that it also outperformed models that were retrained with our approach. Conclusions: The results of EMGP-Net were better than those of existing models, showing that the combination of advanced models is an effective strategy to improve performance in this task.
(This article belongs to the Special Issue AI in Its Ecosystem)

13 pages, 12530 KiB  
Article
Data Augmentation-Driven Improvements in Malignant Lymphoma Image Classification
by Sandi Baressi Šegota, Vedran Mrzljak, Ivan Lorencin and Nikola Anđelić
Computers 2025, 14(7), 252; https://doi.org/10.3390/computers14070252 - 26 Jun 2025
Abstract
Artificial intelligence (AI)-based techniques have become increasingly prevalent in the classification of medical images. However, the effectiveness of such methods is often constrained by the limited availability of annotated medical data. To address this challenge, data augmentation is frequently employed. This study investigates the impact of a novel augmentation approach on the classification performance of malignant lymphoma histopathological images. The proposed method involves slicing high-resolution images (1388 × 1040 pixels) into smaller segments (224 × 224 pixels) before applying standard augmentation techniques such as flipping and rotation. The original dataset consists of 374 images, comprising 32.6% mantle cell lymphoma, 30.2% chronic lymphocytic leukemia, and 37.2% follicular lymphoma. Through slicing, the dataset was expanded to 8976 images, and further augmented to 53,856 images. The visual geometry group with 16 layers (VGG16) convolutional neural network (CNN) was trained and evaluated on three datasets: the original, the sliced, and the sliced with augmentation. Performance was assessed using accuracy, AUC, precision, sensitivity, specificity, and F1 score. The results demonstrate a substantial improvement in classification performance when slicing was employed, with additional, albeit smaller, gains achieved through subsequent augmentation.
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
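
A small sketch of the slicing-then-augmentation step, assuming Pillow: a 1388 × 1040 slide yields 6 × 4 = 24 non-overlapping 224 × 224 patches (374 × 24 = 8976 images), and keeping the original plus five flips/rotations of each patch gives the six-fold expansion to 53,856.

```python
from PIL import Image

def slice_and_augment(path, tile=224):
    """Cut a high-resolution slide into tile x tile patches, then add
    flipped and rotated copies of each patch (6 variants per patch)."""
    img = Image.open(path)
    w, h = img.size
    patches = []
    for top in range(0, h - tile + 1, tile):
        for left in range(0, w - tile + 1, tile):
            p = img.crop((left, top, left + tile, top + tile))
            patches += [
                p,
                p.transpose(Image.Transpose.FLIP_LEFT_RIGHT),
                p.transpose(Image.Transpose.FLIP_TOP_BOTTOM),
                p.rotate(90), p.rotate(180), p.rotate(270),
            ]
    return patches

# For a 1388x1040 image: 24 base patches, 144 images after augmentation.
```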

67 pages, 2821 KiB  
Review
Hardware and Software Methods for Secure Obfuscation and Deobfuscation: An In-Depth Analysis
by Khaled Saleh, Dirar Darweesh, Omar Darwish, Eman Hammad and Fathi Amsaad
Computers 2025, 14(7), 251; https://doi.org/10.3390/computers14070251 - 25 Jun 2025
Abstract
The swift evolution of information technology and growing connectivity in critical applications have elevated the importance of cybersecurity and of protecting and certifying software and hardware designs against rising cyber threats. Software and hardware have become highly susceptible to threats such as reverse engineering, cloning, tampering, and IP piracy. While various techniques exist to enhance software and hardware security, including encryption, native code, and secure server-side execution, obfuscation emerges as a preeminent and cost-efficient solution to address these challenges. Obfuscation purposely transforms software and hardware to increase their complexity for potential adversaries, obscuring implementation details while preserving safety and functionality. Prior research has typically treated obfuscation, deobfuscation, and obfuscation detection approaches separately. Departing from that convention, this comprehensive article reviews these approaches in depth and explicates the correlations and dynamics among them. Furthermore, it conducts a meticulous comparative analysis, evaluating obfuscation techniques across parameters such as methodology, testing procedures, efficacy, associated drawbacks, market applicability, and prospects for future enhancement. This review aims to assist organizations in selecting obfuscation techniques for firm protection against threats, and to inform the strategic choice of deobfuscation and obfuscation detection techniques for recognizing vulnerabilities in software and hardware products. This enables organizations to address security risks proficiently, ensuring secure software and hardware solutions and improving user satisfaction for maximized profitability.
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))

39 pages, 1839 KiB  
Review
The Integration of the Internet of Things (IoT) Applications into 5G Networks: A Review and Analysis
by Aymen I. Zreikat, Zakwan AlArnaout, Ahmad Abadleh, Ersin Elbasi and Nour Mostafa
Computers 2025, 14(7), 250; https://doi.org/10.3390/computers14070250 - 25 Jun 2025
Abstract
The incorporation of Internet of Things (IoT) applications into 5G networks marks a significant step towards realizing the full potential of connected systems. 5G networks, with their ultra-low latency, high data speeds, and huge interconnection, provide a perfect foundation for IoT ecosystems to thrive. This connectivity supports a diverse set of applications, including smart cities, self-driving cars, industrial automation, healthcare monitoring, and agricultural solutions. IoT devices can improve their reliability, real-time communication, and scalability by exploiting 5G’s advanced capabilities such as network slicing, edge computing, and enhanced mobile broadband. Furthermore, the convergence of IoT with 5G fosters interoperability, allowing for smooth communication across diverse devices and networks. This study examines the fundamental technical applications, obstacles, and future perspectives for integrating IoT applications with 5G networks, emphasizing the potential benefits while also addressing essential concerns such as security, energy efficiency, and network management. The results of this review and analysis will act as a valuable resource for researchers, industry experts, and policymakers involved in the progression of 5G technologies and their incorporation with IoT solutions.

32 pages, 3349 KiB  
Article
The PECC Framework: Promoting Gender Sensitivity and Gender Equality in Computer Science Education
by Bernadette Spieler and Carina Girvan
Computers 2025, 14(7), 249; https://doi.org/10.3390/computers14070249 - 25 Jun 2025
Abstract
There are increasing expectations that we should live in a digitally and computationally literate society. For many young people, particularly girls, school is the one place that provides an opportunity to develop the necessary knowledge and skills. This environment can either perpetuate and reinforce or eliminate existing gender inequalities. In this article, we present the “PLAYING, ENGAGEMENT, CREATIVITY, CREATING” (PECC) Framework, a practical guide to supporting teachers in the design of gender-sensitive learning activities, bringing students’ own interests to the fore. Through a six-year, mixed-methods, design-based research approach, PECC—along with supporting resources and digital tools—was developed through iterative cycles of theoretical analysis, empirical data (both qualitative and quantitative), critical reflection, and case study research. Exploratory and instrumental case studies investigated the promise and limitations of the emerging framework, involving 43 teachers and 1453 students in secondary-school classrooms (including online during COVID-19) in Austria, Germany, and Switzerland. Quantitative data (e.g., surveys, usage metrics) and qualitative findings (e.g., interviews, observations, classroom artefacts) were analyzed across the case studies to inform successive refinements of the framework. The case study results are presented alongside the theoretically informed discussions and practical considerations that informed each stage of PECC. PECC has had a real-world, tangible impact at a national level. It provides an essential link between research and practice, offering a theoretically informed and empirically evidenced framework for teachers and policy makers.

17 pages, 1372 KiB  
Article
Dark Web Traffic Classification Based on Spatial–Temporal Feature Fusion and Attention Mechanism
by Junwei Li and Zhisong Pan
Computers 2025, 14(7), 248; https://doi.org/10.3390/computers14070248 - 25 Jun 2025
Abstract
There is limited research on current traffic classification methods for dark web traffic, and the classification results are not very satisfactory. To improve the prediction accuracy and classification precision of dark web traffic, a classification method (CLA) based on spatial–temporal feature fusion and an attention mechanism is proposed. When processing raw bytes, the combination of a CNN and LSTM is used to extract local spatial–temporal features from raw data packets, while an attention module is introduced to process key spatial–temporal data. The experimental results show that this model can effectively extract and utilize the spatial–temporal features of traffic data and use the attention mechanism to measure the importance of different features, thereby achieving accurate predictions of different dark web traffic. In comparative experiments, the accuracy, recall rate, and F1 score of this model are higher than those of other traditional methods.

24 pages, 2258 KiB  
Article
Machine Learning for Anomaly Detection in Blockchain: A Critical Analysis, Empirical Validation, and Future Outlook
by Fouzia Jumani and Muhammad Raza
Computers 2025, 14(7), 247; https://doi.org/10.3390/computers14070247 - 25 Jun 2025
Abstract
Blockchain technology has transformed how data are stored and transactions are processed in a distributed environment. Blockchain assures data integrity by validating transactions through the consensus of a distributed ledger involving several miners as validators. Although blockchain provides multiple advantages, it has also been subject to some malicious attacks, such as a 51% attack, which is considered a potential risk to data integrity. These attacks can be detected by analyzing the anomalous behavior of miner nodes in the network, and data analysis plays a vital role in detecting and overcoming these attacks to make blockchain secure. Integrating machine learning algorithms with blockchain has become a significant approach to detecting anomalies such as a 51% attack and double spending. This study comprehensively analyzes various machine learning (ML) methods to detect anomalies in blockchain networks. It presents a Systematic Literature Review (SLR) and a classification to explore the integration of blockchain and ML for anomaly detection in blockchain networks. We implemented Random Forest, AdaBoost, XGBoost, K-means, and Isolation Forest ML models to evaluate their performance in detecting blockchain anomalies, such as a 51% attack. Additionally, we identified future research directions, including challenges related to scalability, network latency, imbalanced datasets, the dynamic nature of anomalies, and the lack of standardization in blockchain protocols. This study acts as a benchmark for additional research on how ML algorithms identify anomalies in blockchain technology and aids ongoing studies in this rapidly evolving field.
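
As one concrete example of the unsupervised side of this comparison, an Isolation Forest can flag miner behavior consistent with a 51%-style attack (the per-miner features and data here are synthetic, chosen purely for illustration):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Hypothetical per-miner features: blocks mined per hour, share of total
# hash power, and mean depth of forks produced.
normal = rng.normal([2.0, 0.05, 0.1], [0.5, 0.02, 0.05], size=(500, 3))
attack = rng.normal([9.0, 0.55, 1.2], [0.5, 0.05, 0.20], size=(5, 3))
X = np.vstack([normal, attack])

iso = IsolationForest(contamination=0.01, random_state=7).fit(X)
flags = iso.predict(X)                   # -1 = anomaly, 1 = normal
print("flagged rows:", np.where(flags == -1)[0])
```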

14 pages, 385 KiB  
Article
A Comparative Evaluation of Time-Series Forecasting Models for Energy Datasets
by Nikitas Maragkos and Ioannis Refanidis
Computers 2025, 14(7), 246; https://doi.org/10.3390/computers14070246 - 24 Jun 2025
Abstract
Time series forecasting plays a critical role across numerous domains such as finance, energy, and healthcare. While traditional statistical models have long been employed for this task, recent advancements in deep learning have led to a new generation of state-of-the-art (SotA) models that offer improved accuracy and flexibility. However, there remains a gap in understanding how these forecasting models perform under different forecasting scenarios, especially when incorporating external variables. This paper presents a comprehensive review and empirical evaluation of seven leading deep learning models for time series forecasting. We introduce a novel dataset that combines energy consumption and weather data from 24 European countries, allowing us to benchmark model performance across various forecasting horizons, granularities, and variable types. Our findings offer practical insights into model strengths and limitations, guiding future applications and research in time series forecasting.

30 pages, 2599 KiB  
Article
Exploring the Role of Artificial Intelligence in Detecting Advanced Persistent Threats
by Pedro Ramos Brandao
Computers 2025, 14(7), 245; https://doi.org/10.3390/computers14070245 - 23 Jun 2025
Abstract
The rapid evolution of cyber threats, particularly Advanced Persistent Threats (APTs), poses significant challenges to the security of information systems. This paper explores the pivotal role of Artificial Intelligence (AI) in enhancing the detection and mitigation of APTs. By leveraging machine learning algorithms and data analytics, AI systems can identify patterns and anomalies that are indicative of sophisticated cyber-attacks. This study examines various AI-driven methodologies, including anomaly detection, predictive analytics, and automated response systems, highlighting their effectiveness in real-time threat detection and response. Furthermore, we discuss the integration of AI into existing cybersecurity frameworks, emphasizing the importance of collaboration between human analysts and AI systems in combating APTs. The findings suggest that the adoption of AI technologies not only improves the accuracy and speed of threat detection but also enables organizations to proactively defend against evolving cyber threats, potentially achieving a 75% reduction in alert volume.
(This article belongs to the Section ICT Infrastructures for Cybersecurity)