Computers, Volume 14, Issue 9 (September 2025) – 61 articles

Cover Story: Artificial intelligence (AI) is redefining both the computer science and cybersecurity domains by enabling more intelligent, scalable, and privacy-aware systems. Prior surveys cover these fields in isolation, but this paper provides a unified review of 256 peer-reviewed publications to bridge that gap. Our dual-domain synthesis highlights powerful cross-cutting findings and shared challenges. Explainable AI, Federated Learning, and Local Differential Privacy are identified as essential safeguards in decentralized environments such as the Internet of Things (IoT) and healthcare. Despite transformative progress, we also emphasize persistent limitations in fairness and adversarial robustness. This review integrates perspectives from two disciplines to deliver a unified foundational framework mapping advances, limitations, and requirements for resilient, ethical, and trustworthy AI systems. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
22 pages, 2003 KB  
Article
Beyond Opacity: Distributed Ledger Technology as a Catalyst for Carbon Credit Market Integrity
by Stanton Heister, Felix Kin Peng Hui, David Ian Wilson and Yaakov Anker
Computers 2025, 14(9), 403; https://doi.org/10.3390/computers14090403 - 22 Sep 2025
Viewed by 231
Abstract
The 2015 Paris Agreement paved the way for the carbon trade economy, which has since evolved but has not attained a substantial magnitude. While carbon credit exchange is a critical mechanism for achieving global climate targets, it faces persistent challenges related to transparency, double-counting, and verification. This paper examines how Distributed Ledger Technology (DLT) can address these limitations by providing immutable transaction records, automated verification through digitally encoded smart contracts, and increased market efficiency. To assess DLT’s strategic potential for advancing carbon markets and, more specifically, whether its implementation can reduce transaction costs and enhance market integrity, three alternative approaches that apply DLT to carbon trading were taken as case studies. A comparison of key elements in these DLT-based carbon credit platforms indicates that the proposed frameworks could be developed into a scalable global platform. The integration of existing compliance markets in the EU (case study 1), Australia (case study 2), and China (case study 3) can serve as a standard for establishing global carbon trade. The findings from these case studies suggest that while DLT offers a promising path toward more sustainable carbon markets, regulatory harmonization, standardization, and data transfer across platforms remain significant challenges. Full article
29 pages, 2935 KB  
Article
Optimising Contextual Embeddings for Meaning Conflation Deficiency Resolution in Low-Resourced Languages
by Mosima A. Masethe, Sunday O. Ojo and Hlaudi D. Masethe
Computers 2025, 14(9), 402; https://doi.org/10.3390/computers14090402 - 22 Sep 2025
Viewed by 203
Abstract
Meaning conflation deficiency (MCD) presents a continual obstacle in natural language processing (NLP), especially for low-resourced and morphologically complex languages, where polysemy and contextual ambiguity diminish model precision in word sense disambiguation (WSD) tasks. This paper examines the optimisation of contextual embedding models, namely XLNet, ELMo, BART, and their improved variations, to tackle MCD in linguistic settings. Utilising Sesotho sa Leboa as a case study, the researchers devised an enhanced XLNet architecture with specific hyperparameter optimisation, dynamic padding, early termination, and class-balanced training. Comparative assessments reveal that the optimised XLNet attains an accuracy of 91% and exhibits balanced precision–recall metrics of 92% and 91%, respectively, surpassing both its baseline counterpart and competing models. Optimised ELMo attained the greatest overall metrics (accuracy: 92%, F1-score: 96%), whilst optimised BART demonstrated significant accuracy improvements (96%) despite a reduced recall. The results demonstrate that fine-tuning contextual embeddings using MCD-specific methodologies significantly improves semantic disambiguation for under-represented languages. This study offers a scalable and flexible optimisation approach suitable for additional low-resource language contexts. Full article
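To give a concrete sense of what optimisations such as dynamic padding and early termination look like in practice, the sketch below fine-tunes an XLNet classifier with the Hugging Face Trainer. It is an illustrative, assumption-laden setup rather than the authors' code: the data files, label count, and hyperparameter values are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's implementation): fine-tuning XLNet for a
# sentence-classification proxy of WSD with dynamic padding and early stopping.
# File names, num_labels, and hyperparameters are hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, EarlyStoppingCallback,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=4)

# Hypothetical CSV files with "text" and "label" columns standing in for a sense-annotated corpus.
ds = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})
ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="xlnet-mcd",
    per_device_train_batch_size=16,
    learning_rate=2e-5,              # example value from a small hyperparameter sweep
    num_train_epochs=10,
    eval_strategy="epoch",           # named evaluation_strategy in older transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,     # required for early stopping
    metric_for_best_model="eval_loss",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds["train"],
    eval_dataset=ds["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),             # dynamic per-batch padding
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```

Class-balanced training could be layered on top of this, for example by weighting the loss by inverse class frequency.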
20 pages, 2197 KB  
Article
Perceptual Image Hashing Fusing Zernike Moments and Saliency-Based Local Binary Patterns
by Wei Li, Tingting Wang, Yajun Liu and Kai Liu
Computers 2025, 14(9), 401; https://doi.org/10.3390/computers14090401 - 21 Sep 2025
Viewed by 270
Abstract
This paper proposes a novel perceptual image hashing scheme that robustly combines global structural features with local texture information for image authentication. The method starts with image normalization and Gaussian filtering to ensure scale invariance and suppress noise. A saliency map is then generated from a color vector angle matrix using a frequency-tuned model to identify perceptually significant regions. Local Binary Pattern (LBP) features are extracted from this map to represent fine-grained textures, while rotation-invariant Zernike moments are computed to capture global geometric structures. These local and global features are quantized and concatenated into a compact binary hash. Extensive experiments on standard databases show that the proposed method outperforms state-of-the-art algorithms in both robustness against content-preserving manipulations and discriminability across different images. Quantitative evaluations based on ROC curves and AUC values confirm its superior robustness–uniqueness trade-off, demonstrating the effectiveness of the saliency-guided fusion of Zernike moments and LBP for reliable image hashing. Full article
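A minimal sketch of the fusion idea, combining rotation-invariant Zernike moments (global structure) with an LBP histogram (local texture) into a binary hash compared by Hamming distance, is shown below. It omits the paper's saliency map and colour-vector-angle steps, and the library choices, moment degree, and quantisation rule are assumptions rather than the authors' implementation.

```python
# Simplified illustration of Zernike + LBP hash fusion; the saliency-guided
# weighting and the authors' exact quantisation scheme are omitted.
import numpy as np
import mahotas
from skimage.feature import local_binary_pattern
from skimage.transform import resize

def perceptual_hash(gray_img, lbp_points=8, lbp_radius=1):
    """gray_img: 2-D grayscale array (uint8 or float in [0, 1])."""
    img = resize(gray_img, (128, 128), anti_aliasing=True)      # normalise for scale invariance
    img_u8 = (img * 255).astype(np.uint8)
    # Global geometric structure: rotation-invariant Zernike moments.
    zernike = mahotas.features.zernike_moments(img_u8, radius=64, degree=8)
    # Local texture: histogram of uniform LBP codes.
    lbp = local_binary_pattern(img_u8, lbp_points, lbp_radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=lbp_points + 2, range=(0, lbp_points + 2), density=True)
    features = np.concatenate([zernike, hist])
    # Quantise against the median to obtain a compact binary hash.
    return (features > np.median(features)).astype(np.uint8)

def hamming_distance(hash_a, hash_b):
    return int(np.count_nonzero(hash_a != hash_b))
```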
40 pages, 3284 KB  
Article
SemaTopic: A Framework for Semantic-Adaptive Probabilistic Topic Modeling
by Amani Drissi, Salma Sassi, Richard Chbeir, Anis Tissaoui and Abderrazek Jemai
Computers 2025, 14(9), 400; https://doi.org/10.3390/computers14090400 - 19 Sep 2025
Viewed by 168
Abstract
Topic modeling is a crucial technique for Natural Language Processing (NLP) which helps to automatically uncover coherent topics from large-scale text corpora. Yet, classic methods tend to suffer from poor semantic depth and topic coherence. In this regard, we present here a new approach “SemaTopic” to improve the quality and interpretability of discovered topics. By exploiting semantic understanding and stronger clustering dynamics, our approach results in a more continuous, finer and more stable representation of the topics. Experimental results demonstrate that SemaTopic achieves a relative gain of +6.2% in semantic coherence compared to BERTopic on the 20 Newsgroups dataset (Cv=0.5315 vs. 0.5004), while maintaining stable performance across heterogeneous and multilingual corpora. These findings highlight “SemaTopic” as a scalable and reliable solution for practical text mining and knowledge discovery. Full article
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)
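As a sanity check on the reported gain, (0.5315 − 0.5004) / 0.5004 ≈ 0.062, i.e. the stated +6.2% relative improvement in coherence. The snippet below sketches the generic embed-then-cluster pattern that semantic topic models of this family follow; the embedding model, clustering algorithm, and keyword extraction are assumptions, not the SemaTopic implementation.

```python
# Generic embed-and-cluster topic sketch (not the SemaTopic pipeline): embed
# documents, cluster the embeddings, and label each cluster with its top TF-IDF terms.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def simple_semantic_topics(docs, n_topics=10, top_n=8):
    embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)
    labels = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit_predict(embeddings)
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(docs)
    vocab = np.array(vectorizer.get_feature_names_out())
    topics = {}
    for k in range(n_topics):
        mask = labels == k
        if mask.any():
            scores = np.asarray(tfidf[mask].mean(axis=0)).ravel()
            topics[k] = vocab[scores.argsort()[::-1][:top_n]].tolist()
    return topics
```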
19 pages, 1380 KB  
Article
Educational QA System-Oriented Answer Selection Model Based on Focus Fusion of Multi-Perspective Word Matching
by Xiaoli Hu, Junfei He, Zhaoyu Shou, Ziming Liu and Huibing Zhang
Computers 2025, 14(9), 399; https://doi.org/10.3390/computers14090399 - 19 Sep 2025
Viewed by 168
Abstract
Question-answering systems have become an important tool for learning and knowledge acquisition. However, current answer selection models often rely on representing features using whole sentences, which leads to neglecting individual words and losing important information. To address this challenge, the paper proposes a novel answer selection model based on focus fusion of multi-perspective word matching. First, according to the different combination relationships between sentences, focus distribution in terms of words is obtained from the matching perspectives of serial, parallel, and transfer. Then, the sentence’s key position information is inferred from its focus distribution. Finally, a method of aligning key information points is designed to fuse the focus distribution for each perspective, which obtains match scores for each candidate answer to the question. Experimental results show that the proposed model significantly outperforms the Transformer encoder fine-tuned model based on contextual embedding, achieving a 4.07% and 5.51% increase in MAP and a 1.63% and 4.86% increase in MRR, respectively. Full article
14 pages, 219 KB  
Article
Integration of Information and Communication Technology in Curriculum Practices: The Case of Preservice Accounting Teachers
by Lineo Mphatsoane-Sesoane, Loyiso Currell Jita and Molaodi Tshelane
Computers 2025, 14(9), 398; https://doi.org/10.3390/computers14090398 - 19 Sep 2025
Viewed by 201
Abstract
This empirical paper explores South African preservice accounting teachers’ perceptions of ICT integration in secondary schools’ accounting curriculum practices. Since 2020, curriculum practices have been characterised by disruptions to traditional teaching and learning methods, including those brought on by the COVID-19 pandemic. Accounting curriculum practices were no exception. These disruptions sparked discussions about pedagogical changes, academic continuity, and the future of accounting curriculum practices. The theoretical framework used to guide the research process is connectivism. The theory concerns forming connections between people and technology, and teaching and learning in a connectivist learning environment. Connectivism promotes a lifelong learning perspective by training teachers and students to adapt to a fast-changing environment. An interpretive paradigm underpins this qualitative research paper. The data were collected from semi-structured interviews with five preservice accounting teachers about how they navigated pedagogy while switching to digital curriculum practices. Thematic analysis was used. The findings revealed that preservice accounting teachers faced challenges in ICT integration during school-based training, including limited resources, inadequate infrastructure, and insufficient hands-on training. While ICT tools enhanced learner engagement, barriers such as low digital skills and a lack of technical support hindered effective use. Participants highlighted a disconnect between theoretical training and classroom practice, prompting self-directed learning to bridge skill gaps. The study underscores the need for teacher education programs to provide practical, immersive ICT training to equip future educators for technology-driven classrooms. Full article
20 pages, 2451 KB  
Article
Development of an Early Lung Cancer Diagnosis Method Based on a Neural Network
by Indira Karymsakova, Dinara Kozhakhmetova, Dariga Bekenova, Danila Ostroukh, Roza Bekbayeva, Lazat Kydyralina, Alina Bugubayeva and Dinara Kurushbayeva
Computers 2025, 14(9), 397; https://doi.org/10.3390/computers14090397 - 18 Sep 2025
Viewed by 276
Abstract
Cancer is one of the most lethal diseases in the modern world. Early diagnosis significantly contributes to prolonging the life expectancy of patients. The application of intelligent systems and AI methods is crucial for diagnosing oncological diseases. Primarily, expert systems or decision support systems are utilized in such cases. This research explores early lung cancer diagnosis through protocol-based questioning, considering the impact of nuclear testing factors. Nuclear tests conducted historically continue to affect citizens’ health. A classification of regions into five groups was proposed based on their proximity to nuclear test sites. The weighting coefficient was assigned accordingly, in proportion to the distance from the test zones. In this study, existing expert systems were analyzed and classified. Approaches used to build diagnostic expert systems for oncological diseases were grouped by how well they apply to different tumor localizations. An online questionnaire based on the lung cancer diagnostic protocol was created to gather input data for the neural network. To support this diagnostic method, a functional block diagram of the intelligent system “Oncology” was developed. The following methods were used to create the mathematical model: gradient boosting, multilayer perceptron, and Hamming network. Finally, a web application architecture for early lung cancer detection was proposed. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
22 pages, 13502 KB  
Article
AI Test Modeling for Computer Vision System—A Case Study
by Jerry Gao and Radhika Agarwal
Computers 2025, 14(9), 396; https://doi.org/10.3390/computers14090396 - 18 Sep 2025
Viewed by 343
Abstract
This paper presents an intelligent AI test modeling framework for computer vision systems, focused on image-based systems. A three-dimensional (3D) model using decision tables enables model-based function testing, automated test data generation, and comprehensive coverage analysis. A case study using the Seek by iNaturalist application demonstrates the framework’s applicability to real-world CV tasks. It effectively identifies species and non-species under varying image conditions such as distance, blur, brightness, and grayscale. This study contributes a structured methodology that advances our academic understanding of model-based CV testing while offering practical tools for improving the robustness and reliability of AI-driven vision applications. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
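The decision-table idea can be made concrete with a few lines of Python: each row of the expanded table becomes one functional test case for the vision app. The condition levels and the expected-outcome rule below are illustrative assumptions, not the tables used in the case study.

```python
# Sketch: expanding a decision table of image conditions into concrete test cases,
# as in model-based function testing of a vision app. The levels and expected
# outcomes are illustrative assumptions, not the paper's actual tables.
from itertools import product

decision_table = {
    "distance":   ["close", "medium", "far"],
    "blur":       ["none", "mild", "heavy"],
    "brightness": ["low", "normal", "high"],
    "grayscale":  [False, True],
}

def generate_test_cases(table):
    keys = list(table)
    for combo in product(*(table[k] for k in keys)):
        case = dict(zip(keys, combo))
        # Hypothetical rule: heavy blur or far distance is expected to defeat identification.
        case["expected"] = ("not_identified"
                            if case["blur"] == "heavy" or case["distance"] == "far"
                            else "identified")
        yield case

cases = list(generate_test_cases(decision_table))
print(len(cases), "test cases")   # 3 * 3 * 3 * 2 = 54 combinations
```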
29 pages, 4648 KB  
Article
Optimizing Teacher Portfolio Integrity with a Cost-Effective Smart Contract for School-Issued Teacher Documents
by Diana Laura Silaghi, Andrada Cristina Artenie and Daniela Elena Popescu
Computers 2025, 14(9), 395; https://doi.org/10.3390/computers14090395 - 17 Sep 2025
Viewed by 383
Abstract
Diplomas and academic transcripts issued at the conclusion of a university cycle have been the subject of numerous studies focused on developing secure methods for their registration and access. However, in the context of high school teachers, these initial credentials mark only the starting point of a much more complex professional journey. Throughout their careers, teachers receive a wide array of certificates and attestations related to professional development, participation in educational projects, volunteering, and institutional contributions. Many of these documents are issued directly by the school administration and are often vulnerable to misplacement, unauthorized alterations, or limited portability. These challenges are amplified when teachers move between schools or are involved in teaching across multiple institutions. In response to this need, this paper proposes a blockchain-based solution built on the Ethereum platform, which ensures the integrity, traceability, and long-term accessibility of such records, preserving the professional achievements of teachers across their careers. Although most research has focused on securing highly valuable documents on blockchain, such as diplomas, certificates, and micro-credentials, this study highlights the importance of extending blockchain solutions to school-issued attestations, as they carry significant weight in teacher evaluation and the development of professional portfolios. Full article
40 pages, 1638 KB  
Review
Fake News Detection Using Machine Learning and Deep Learning Algorithms: A Comprehensive Review and Future Perspectives
by Faisal A. Alshuwaier and Fawaz A. Alsulaiman
Computers 2025, 14(9), 394; https://doi.org/10.3390/computers14090394 - 16 Sep 2025
Viewed by 1213
Abstract
Currently, with significant developments in technology and social networks, people gain rapid access to news without focusing on its reliability. Consequently, the proportion of fake news has increased. Fake news is a significant problem afflicting societies today, as it negatively impacts many aspects of life, including politics, the economy, and society. Fake news is widely disseminated through social media and other modern digital platforms. In this paper, we conduct a comprehensive review of fake news detection using machine learning and deep learning. The review provides a survey and evaluation of existing work, a discussion of gaps, and an exploration of future perspectives, and it addresses several research questions. It also examines the importance of machine learning and deep learning for fake news detection by comparing and discussing how they are used to detect fake news. The review covers work published between 2018 and 2025, drawn mainly from IEEE, Intelligent Systems, EMNLP, ACM, Springer, Elsevier, JAIR, and other publishers, and its results can be used to identify the most effective algorithms in terms of performance; articles that did not report the algorithms used or their performance were excluded. Full article
18 pages, 269 KB  
Article
Secret Sharing Scheme with Share Verification Capability
by Nursulu Kapalova, Armanbek Haumen and Kunbolat Algazy
Computers 2025, 14(9), 393; https://doi.org/10.3390/computers14090393 - 16 Sep 2025
Viewed by 267
Abstract
This paper examines the properties of classical secret sharing schemes used in information protection systems, including the protection of valuable and confidential data. It addresses issues such as implementation complexity, limited flexibility, vulnerability to new types of attacks, the requirements for such schemes, and analyzes existing approaches to their solutions. A new secret sharing scheme is proposed as a potential solution to these challenges. The developed scheme is based on multivariable functions. The shares distributed among participants represent the values of these functions. Secret reconstruction is reduced to solving a system of linear equations composed of such functions. The structure and mathematical foundation of the scheme are presented, along with an analysis of its properties. A key feature of the proposed scheme is the incorporation of functions aimed at authenticating participants and verifying the integrity of the distributed shares. The paper also provides a cryptanalysis of the scheme, evaluates its resistance to various types of attacks, and discusses the results obtained. Thus, this work contributes to the advancement of information security methods by offering a modern and reliable solution for the secure storage and joint use of secret data. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
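To illustrate the general idea of shares being values of multivariable functions, with reconstruction reduced to solving a system of linear equations, here is a deliberately simplified n-of-n sketch over a prime field. It is not the scheme proposed in the paper and omits its participant-authentication and share-verification features.

```python
# Simplified illustration, not the paper's scheme: each share is the value of a
# random linear function of a hidden vector whose first entry is the secret;
# reconstruction solves the resulting linear system modulo a prime.
import random

P = (1 << 127) - 1  # Mersenne prime used as the field modulus

def make_shares(secret, n):
    hidden = [secret % P] + [random.randrange(P) for _ in range(n - 1)]
    shares = []
    for _ in range(n):
        coeffs = [random.randrange(1, P) for _ in range(n)]
        value = sum(c * x for c, x in zip(coeffs, hidden)) % P
        shares.append((coeffs, value))
    return shares

def reconstruct(shares):
    n = len(shares)
    # Gaussian elimination over GF(P) on the augmented matrix [A | b].
    # (The random coefficient matrix is singular only with negligible probability.)
    m = [list(coeffs) + [value] for coeffs, value in shares]
    for col in range(n):
        piv = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[piv] = m[piv], m[col]
        inv = pow(m[col][col], P - 2, P)
        m[col] = [x * inv % P for x in m[col]]
        for r in range(n):
            if r != col and m[r][col]:
                factor = m[r][col]
                m[r] = [(a - factor * b) % P for a, b in zip(m[r], m[col])]
    return m[0][n]  # the first unknown is the secret

shares = make_shares(123456789, n=4)
assert reconstruct(shares) == 123456789
```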
12 pages, 965 KB  
Article
SeismicNoiseAnalyzer: A Deep-Learning Tool for Automatic Quality Control of Seismic Stations
by Alessandro Pignatelli, Paolo Casale, Veronica Vignoli and Flavia Tavani
Computers 2025, 14(9), 392; https://doi.org/10.3390/computers14090392 - 16 Sep 2025
Viewed by 288
Abstract
SeismicNoiseAnalyzer 1.0 is a software tool designed to automatically assess the quality of seismic stations through the classification of spectral diagrams. By leveraging convolutional neural networks trained on expert-labeled data, the software emulates human visual inspection of probability density function (PDF) plots. It supports both individual image analysis and batch processing from compressed archives, providing detailed reports that summarize station health. Two classification networks are available: a binary model that distinguishes between working and malfunctioning stations and a ternary model that introduces an intermediate “doubtful” category to capture ambiguous cases. The system demonstrates high agreement with expert evaluations and enables efficient instrumentation control across large seismic networks. Its intuitive graphical interface and automated workflow make it a valuable tool for routine monitoring and data validation. Full article
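The classification step can be pictured as a small image classifier over the PDF plots. The Keras sketch below is a generic stand-in: the layer sizes, input resolution, and directory layout are assumptions, not the network shipped with SeismicNoiseAnalyzer.

```python
# Generic sketch of a small CNN that classifies spectral (PDF) plot images as
# "working" vs "malfunctioning"; the architecture and data layout are assumptions.
import tensorflow as tf

def build_classifier():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(256, 256, 3)),
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary model; a 3-way softmax would cover the ternary case
    ])

# Expert-labelled plots arranged as images/working/*.png and images/malfunctioning/*.png (hypothetical paths).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "images", image_size=(256, 256), batch_size=32, label_mode="binary")

model = build_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```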
19 pages, 912 KB  
Article
Lightweight Embedded IoT Gateway for Smart Homes Based on an ESP32 Microcontroller
by Filippos Serepas, Ioannis Papias, Konstantinos Christakis, Nikos Dimitropoulos and Vangelis Marinakis
Computers 2025, 14(9), 391; https://doi.org/10.3390/computers14090391 - 16 Sep 2025
Viewed by 528
Abstract
The rapid expansion of the Internet of Things (IoT) demands scalable, efficient, and user-friendly gateway solutions that seamlessly connect resource-constrained edge devices to cloud services. Low-cost, widely available microcontrollers, such as the ESP32 and its ecosystem peers, offer integrated Wi-Fi/Bluetooth connectivity, low power consumption, and a mature developer toolchain at a bill of materials cost of only a few dollars. For smart-home deployments where budgets, energy consumption, and maintainability are critical, these characteristics make MCU-class gateways a pragmatic alternative to single-board computers, enabling always-on local control with minimal overhead. This paper presents the design and implementation of an embedded IoT gateway powered by the ESP32 microcontroller. By using lightweight communication protocols such as Message Queuing Telemetry Transport (MQTT) and REST APIs, the proposed architecture supports local control, distributed intelligence, and secure on-site data storage, all while minimizing dependence on cloud infrastructure. A real-world deployment in an educational building demonstrates the gateway’s capability to monitor energy consumption, execute control commands, and provide an intuitive web-based dashboard with minimal resource overhead. Experimental results confirm that the solution offers strong performance, with RAM usage ranging between 3.6% and 6.8% of available memory (approximately 8.92 KB to 16.9 KB). The initial loading of the single-page application (SPA) results in a temporary RAM spike to 52.4%, which later stabilizes at 50.8%. These findings highlight the ESP32’s ability to serve as a functional IoT gateway with minimal resource demands. Areas for future optimization include improved device discovery mechanisms and enhanced resource management to prolong device longevity. Overall, the gateway represents a cost-effective and vendor-agnostic platform for building resilient and scalable IoT ecosystems. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
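From the client side, interacting with such a gateway is plain MQTT. The sketch below (assuming paho-mqtt 2.x) subscribes to telemetry and publishes a control command; the broker address, topic names, and payload fields are hypothetical, not those of the deployment described in the paper.

```python
# Hypothetical client for an MQTT-based smart-home gateway: subscribes to sensor
# readings and publishes a control command. Requires paho-mqtt >= 2.0; the broker
# address, topics, and payload fields are illustrative assumptions.
import json
import paho.mqtt.client as mqtt

BROKER = "192.168.1.50"              # hypothetical gateway/broker address
TELEMETRY_TOPIC = "home/+/energy"    # e.g. home/classroom1/energy
COMMAND_TOPIC = "home/classroom1/relay/set"

def on_connect(client, userdata, flags, reason_code, properties=None):
    client.subscribe(TELEMETRY_TOPIC)

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    print(f"{msg.topic}: {reading}")
    if reading.get("power_w", 0) > 2000:          # simple local-control rule
        client.publish(COMMAND_TOPIC, json.dumps({"state": "off"}))

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()
```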
39 pages, 2315 KB  
Review
Optimizing Kubernetes with Multi-Objective Scheduling Algorithms: A 5G Perspective
by Mazen Farid, Heng Siong Lim, Chin Poo Lee, Charilaos C. Zarakovitis and Su Fong Chien
Computers 2025, 14(9), 390; https://doi.org/10.3390/computers14090390 - 15 Sep 2025
Viewed by 540
Abstract
This review provides an in-depth examination of multi-objective scheduling algorithms within 5G networks, with a particular focus on Kubernetes-based container orchestration. As 5G systems evolve, efficient resource allocation and the optimization of Quality-of-Service (QoS) metrics, including response time, energy efficiency, scalability, and resource utilization, have become increasingly critical. Given the scheduler’s central role in orchestrating containerized workloads, this study analyzes diverse scheduling strategies designed to address these competing objectives. A novel taxonomy is introduced to categorize existing approaches, offering a structured view of deterministic, heuristic, and learning-based methods. Furthermore, the review identifies key research challenges, highlights open issues, such as QoS-aware orchestration and resilience in distributed environments, and outlines prospective directions to advance multi-objective scheduling in Kubernetes for next-generation networks. By synthesizing current knowledge and mapping research gaps, this work aims to provide both a foundation for newcomers and a practical reference for advancing scholarly and industrial efforts in the field. Full article
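A common baseline among the surveyed approaches is weighted-sum scoring of candidate nodes across competing objectives. The toy sketch below shows the shape of such a scorer; the metrics, normalisation, and weights are invented for illustration and are far simpler than the schedulers reviewed here.

```python
# Toy illustration of multi-objective node scoring via a weighted sum; the metrics,
# weights, and node data are invented for illustration only.
def score_node(node, weights):
    # Normalise "lower is better" metrics so that a higher score is always better.
    return (weights["cpu"]     * (1 - node["cpu_util"]) +
            weights["memory"]  * (1 - node["mem_util"]) +
            weights["latency"] * (1 - node["latency_ms"] / 100) +
            weights["energy"]  * (1 - node["power_ratio"]))

nodes = [
    {"name": "edge-1",  "cpu_util": 0.35, "mem_util": 0.50, "latency_ms": 5,  "power_ratio": 0.7},
    {"name": "cloud-1", "cpu_util": 0.10, "mem_util": 0.20, "latency_ms": 40, "power_ratio": 0.4},
]
weights = {"cpu": 0.3, "memory": 0.2, "latency": 0.3, "energy": 0.2}

best = max(nodes, key=lambda n: score_node(n, weights))
print("schedule pod on:", best["name"])
```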
27 pages, 5866 KB  
Article
DCGAN Feature-Enhancement-Based YOLOv8n Model in Small-Sample Target Detection
by Peng Zheng, Yun Cheng, Wei Zhu, Bo Liu, Chenhao Ye, Shijie Wang, Shuhong Liu and Jinyin Bai
Computers 2025, 14(9), 389; https://doi.org/10.3390/computers14090389 - 15 Sep 2025
Viewed by 354
Abstract
This paper proposes DCGAN-YOLOv8n, an integrated framework that significantly advances small-sample target detection by synergizing generative adversarial feature enhancement with multi-scale representation learning. The model’s core contribution lies in its novel adversarial feature enhancement module (AFEM), which leverages conditional generative adversarial networks to reconstruct discriminative multi-scale features while effectively mitigating mode collapse. Furthermore, the architecture incorporates a deformable multi-scale feature pyramid that dynamically fuses generated high-resolution features with hierarchical semantic representations through an attention mechanism. The proposed triple marginal constraint optimization jointly enhances intra-class compactness and inter-class separation, thereby structuring a highly discriminative feature space. Extensive experiments on the NWPU VHR-10 dataset demonstrate state-of-the-art performance, with the model achieving an mAP50 of 90.46% and an mAP50-95 of 57.06%, representing significant improvements of 4.52% and 4.08% over the baseline YOLOv8n, respectively. These results validate the framework’s effectiveness in addressing critical challenges of feature representation scarcity and cross-scale adaptation in data-limited scenarios. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
21 pages, 1535 KB  
Article
Integrative Federated Learning Framework for Multimodal Parkinson’s Disease Biomarker Fusion
by Ruchira Pratihar and Ravi Sankar
Computers 2025, 14(9), 388; https://doi.org/10.3390/computers14090388 - 15 Sep 2025
Viewed by 360
Abstract
Accurate and early diagnosis of Parkinson’s Disease (PD) is challenged by the diverse manifestations of motor and non-motor symptoms across different patients. Existing studies largely rely on limited datasets and biomarkers. In this extended research, we propose a comprehensive Federated Learning (FL) framework designed to integrate heterogeneous biomarkers through multimodal combinations—such as EEG–fMRI pairs, continuous speech with vowel pronunciation, and the fusion of EEG, gait, and accelerometry data—drawn from diverse sources and modalities. By processing data separately at client nodes and performing feature and decision fusion at a central server, our method preserves privacy and enables robust PD classification. Experimental results show accuracies exceeding 85% across multiple fusion techniques, with attention-based fusion reaching 97.8% for Freezing of Gait (FoG) detection. Our framework advances scalable, privacy-preserving, multimodal diagnostics for PD. Full article
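The server-side aggregation step in such a framework typically follows the FedAvg pattern: clients train locally and the server averages their weights in proportion to local dataset size. The sketch below shows that step only, with placeholder arrays; it does not reproduce the paper's multimodal feature and decision fusion.

```python
# Minimal FedAvg-style aggregation: the server averages client model weights,
# weighted by local dataset size. Generic sketch; array shapes are placeholders.
import numpy as np

def federated_average(client_weights, client_sizes):
    """client_weights: one list of np.ndarrays (layer weights) per client."""
    total = sum(client_sizes)
    coeffs = np.array(client_sizes, dtype=float) / total
    averaged = []
    for layer in zip(*client_weights):                      # iterate layer-wise across clients
        stacked = np.stack(layer)
        averaged.append(np.tensordot(coeffs, stacked, axes=1))  # weighted mean over clients
    return averaged

# Two clients with a toy two-layer model.
c1 = [np.ones((4, 3)), np.zeros(3)]
c2 = [np.full((4, 3), 3.0), np.ones(3)]
global_model = federated_average([c1, c2], client_sizes=[100, 300])
```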
21 pages, 356 KB  
Article
Integrating Large Language Models with near Real-Time Web Crawling for Enhanced Job Recommendation Systems
by David Gauhl, Kevin Kakkanattu, Melbin Mukkattu and Thomas Hanne
Computers 2025, 14(9), 387; https://doi.org/10.3390/computers14090387 - 15 Sep 2025
Viewed by 331
Abstract
This study addresses the limitations of traditional job recommendation systems that rely on static datasets, making them less responsive to dynamic job market changes. While existing job platforms address job search with an opaque logic following their business goals, job seekers may benefit from a solution that actively and dynamically crawls and evaluates job offers from a variety of sites according to their objectives. To address this gap, a hybrid system was developed that integrates large language models (LLMs) for semantic analysis with near real-time data acquisition through web crawling. The system extracts and ranks job-specific keywords from user inputs, such as resumes, while dynamically retrieving job listings from online platforms. User evaluations indicated strong performance in keyword extraction and system usability but revealed challenges in web crawler performance, affecting recommendation accuracy. In user tests comparing the prototype with a state-of-the-art commercial tool, the prototype showed lower accuracy but higher satisfaction with its functionality. Test users highlighted its great potential for further development. The results highlight the benefits of combining LLMs and web crawling while emphasizing the need for improved near real-time data handling to enhance recommendation precision and user satisfaction. Full article
23 pages, 1547 KB  
Article
An Adaptive Steganographic Method for Reversible Information Embedding in X-Ray Images
by Elmira Daiyrbayeva, Aigerim Yerimbetova, Ekaterina Merzlyakova, Ualikhan Sadyk, Aizada Sarina, Zhamilya Taichik, Irina Ismailova, Yerbolat Iztleuov and Asset Nurmangaliyev
Computers 2025, 14(9), 386; https://doi.org/10.3390/computers14090386 - 14 Sep 2025
Viewed by 303
Abstract
The rapid digitalisation of the medical field has heightened concerns over protecting patients’ personal information during the transmission of medical images. This study introduces a method for securely transmitting X-ray images that contain embedded patient data. The proposed steganographic approach ensures that the original image remains intact while the embedded data is securely hidden, a critical requirement in medical contexts. To guarantee reversibility, the Interpolation Near Pixels method was utilised, recognised as one of the most effective techniques within reversible data hiding (RDH) frameworks. Additionally, the method integrates a statistical property preservation technique, enhancing the scheme’s alignment with ideal steganographic characteristics. Specifically, the “forest fire” algorithm partitions the image into interconnected regions, where statistical analyses of low-order bits are performed, followed by arithmetic decoding to achieve a desired distribution. This process successfully maintains the original statistical features of the image. The effectiveness of the proposed method was validated through steganalysis on real-world medical images from previous studies. The results revealed high robustness, with minimal distortion of stegocontainers, as evidenced by high PSNR values ranging between 52 and 81 dB. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
18 pages, 4208 KB  
Article
Transformer Models for Paraphrase Detection: A Comprehensive Semantic Similarity Study
by Dianeliz Ortiz Martes, Evan Gunderson, Caitlin Neuman and Nezamoddin N. Kachouie
Computers 2025, 14(9), 385; https://doi.org/10.3390/computers14090385 - 14 Sep 2025
Viewed by 419
Abstract
Semantic similarity, the task of determining whether two sentences convey the same meaning, is central to applications such as paraphrase detection, semantic search, and question answering. Despite the widespread adoption of transformer-based models for this task, their performance is influenced by both the choice of similarity measure and the decision threshold. This study evaluates BERT (bert-base-nli-mean-tokens), RoBERTa (all-roberta-large-v1), and MPNet (all-mpnet-base-v2) on the Microsoft Research Paraphrase Corpus (MRPC). Sentence embeddings were compared using cosine similarity, dot product, Manhattan distance, and Euclidean distance, with thresholds optimized for accuracy, balanced accuracy, and F1-score. Results indicate a consistent advantage for MPNet, which achieved the highest accuracy (75.6%), balanced accuracy (71.0%), and F1-score (0.836) when paired with cosine similarity at an optimized threshold of 0.671. BERT and RoBERTa performed competitively but exhibited greater sensitivity to the choice of similarity metric, with BERT notably underperforming when using cosine similarity compared to Manhattan or Euclidean distance. Optimal thresholds varied widely (0.334–0.867), underscoring the difficulty of establishing a single, generalizable cut-off for paraphrase classification. These findings highlight the value of tuning both similarity metrics and thresholds alongside model selection, offering practical guidance for designing high-accuracy semantic similarity systems in real-world NLP applications. Full article
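The reported MPNet configuration translates directly into a few lines with the sentence-transformers library: encode both sentences, compute cosine similarity, and compare against the optimised threshold. The 0.671 cut-off is the value quoted above for MRPC and should be treated as dataset-specific rather than universal.

```python
# Paraphrase check with MPNet embeddings and cosine similarity, using the 0.671
# threshold reported as optimal on MRPC in the abstract above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")
THRESHOLD = 0.671  # dataset-specific value, not a general-purpose cut-off

def is_paraphrase(sent_a, sent_b, threshold=THRESHOLD):
    emb = model.encode([sent_a, sent_b], convert_to_tensor=True)
    score = util.cos_sim(emb[0], emb[1]).item()
    return score >= threshold, score

same, score = is_paraphrase("The company posted record profits this quarter.",
                            "This quarter the firm reported its highest-ever earnings.")
print(same, round(score, 3))
```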
32 pages, 1923 KB  
Article
Narrative-Driven Digital Gamification for Motivation and Presence: Preservice Teachers’ Experiences in a Science Education Course
by Gregorio Jiménez-Valverde, Noëlle Fabre-Mitjans and Gerard Guimerà-Ballesta
Computers 2025, 14(9), 384; https://doi.org/10.3390/computers14090384 - 14 Sep 2025
Viewed by 710
Abstract
This mixed-methods study investigated how a personalized, narrative-integrated digital gamification framework (with FantasyClass) was associated with motivation and presence among preservice elementary teachers in a science education course. The intervention combined HEXAD-informed personalization (aligning game elements with player types) with a branching storyworld, teacher-directed AI-generated narrative emails, and multimodal cues (visuals, music, scent) to scaffold presence alongside autonomy, competence, and relatedness. Thirty-four students participated in a one-group posttest design, completing an adapted 21-item PENS questionnaire and responding to two open-ended prompts. Results, which are exploratory and not intended for broad generalization or causal inference, indicated high self-reported competence and autonomy, positive but more variable relatedness, and strong presence/immersion. Subscale correlations showed that Competence covaried with Autonomy and Relatedness, while Presence/Immersion was positively associated with all other subscales, suggesting that presence may act as a motivational conduit. Thematic analysis portrayed students as active decision-makers within the narrative, linking consequential choices, visible progress, and team-based goals to agency, effectiveness, and social connection. Additional themes included coherence and organization, fun and enjoyment, novelty, extrinsic incentives, and perceived professional transferability. Overall, findings suggest that narrative presence, when coupled with player-aligned game elements, can foster engagement and motivation in STEM-oriented teacher education. Full article
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)
15 pages, 5716 KB  
Article
Supersampling in Render CPOY: Total Annihilation
by Grigorie Dennis Sergiu and Stanciu Ion Rares
Computers 2025, 14(9), 383; https://doi.org/10.3390/computers14090383 - 12 Sep 2025
Viewed by 297
Abstract
This paper tackles a significant problem in gaming graphics: balancing visual fidelity with performance in real time. The article introduces CPOY SR (Continuous Procedural Output Yielder for Scaling Resolution), a dynamic resolution scaling algorithm designed to enhance both performance and visual quality in real-time gaming. Unlike traditional supersampling and anti-aliasing techniques that suffer from fixed settings and hardware limitations, CPOY SR adapts resolution during gameplay based on system resources and user activity. The method is implemented and tested in an actual game project rather than only proposed theoretically. A key strength is that it works across diverse systems, from low-end laptops to high-end machines. The algorithm uses mathematical constraints such as Mathf.Clamp to ensure numerical robustness during scaling and avoids manual reconfiguration. Testing was carried out across multiple hardware configurations and resolutions (up to 8K); the approach demonstrated consistent visual fidelity with optimized performance. The research integrates visual rendering, resolution scaling, and anti-aliasing techniques, offering a scalable solution for immersive gameplay. The article outlines the key components and development phases behind this engaging and visually impressive gaming experience. Full article
(This article belongs to the Section Human–Computer Interactions)
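The clamp-based scaling loop can be sketched in a few lines; the article's implementation is in Unity/C# with Mathf.Clamp, so the Python below is only an analogy, and the frame-time target, step size, and scale bounds are assumptions.

```python
# Python analogy of clamp-based dynamic resolution scaling driven by frame time;
# the article's actual implementation uses Unity/C# (Mathf.Clamp), and the target
# and step values here are illustrative assumptions.
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

class ResolutionScaler:
    def __init__(self, target_frame_ms=16.7, min_scale=0.5, max_scale=2.0):
        self.target = target_frame_ms
        self.min_scale, self.max_scale = min_scale, max_scale
        self.scale = 1.0

    def update(self, frame_ms):
        # Lower the render scale when frames are slow, raise it (supersample) when
        # there is headroom; clamping keeps the value numerically safe.
        error = (self.target - frame_ms) / self.target
        self.scale = clamp(self.scale + 0.1 * error, self.min_scale, self.max_scale)
        return self.scale

scaler = ResolutionScaler()
for frame_ms in [30.0, 25.0, 18.0, 14.0, 12.0]:
    print(round(scaler.update(frame_ms), 3))
```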
21 pages, 3805 KB  
Article
GraphTrace: A Modular Retrieval Framework Combining Knowledge Graphs and Large Language Models for Multi-Hop Question Answering
by Anna Osipjan, Hanieh Khorashadizadeh, Akasha-Leonie Kessel, Sven Groppe and Jinghua Groppe
Computers 2025, 14(9), 382; https://doi.org/10.3390/computers14090382 - 11 Sep 2025
Viewed by 490
Abstract
This paper introduces GraphTrace, a novel retrieval framework that integrates a domain-specific knowledge graph (KG) with a large language model (LLM) to improve information retrieval for complex, multi-hop queries. Built on structured economic data related to the COVID-19 pandemic, GraphTrace adopts a modular architecture comprising entity extraction, path finding, query decomposition, semantic path ranking, and context aggregation, followed by LLM-based answer generation. GraphTrace is compared against baseline retrieval-augmented generation (RAG) and graph-based RAG (GraphRAG) approaches in both retrieval and generation settings. Experimental results show that GraphTrace consistently outperforms the baselines across evaluation metrics, particularly in handling mid-complexity (5–6-hop) queries and achieving top scores in directness during the generation evaluation. These gains are attributed to GraphTrace’s alignment of semantic reasoning with structured KG traversal, combining modular components for more targeted and interpretable retrieval. Full article
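The path-finding stage between extracted entities can be pictured with a small graph-library sketch. The nodes, relation labels, and hop cutoff below are hypothetical; entity extraction, query decomposition, and LLM-based semantic path ranking are omitted.

```python
# Sketch of enumerating candidate paths between two extracted entities in a
# knowledge graph with networkx; the graph content and cutoff are hypothetical.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("COVID-19", "Lockdown", relation="led_to")
kg.add_edge("Lockdown", "Retail sector", relation="reduced_revenue_of")
kg.add_edge("Retail sector", "Unemployment", relation="contributed_to")
kg.add_edge("COVID-19", "Stimulus package", relation="prompted")
kg.add_edge("Stimulus package", "Retail sector", relation="supported")

def candidate_paths(graph, source, target, max_hops=4):
    for path in nx.all_simple_paths(graph, source, target, cutoff=max_hops):
        relations = [graph[u][v]["relation"] for u, v in zip(path, path[1:])]
        yield path, relations

# These candidate paths would then be ranked semantically before answer generation.
for nodes, rels in candidate_paths(kg, "COVID-19", "Unemployment"):
    print(nodes, rels)
```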
19 pages, 544 KB  
Article
Scaling Linearizable Range Queries on Modern Multi-Cores
by Chen Zhang, Zhengming Yi and Xinghui Zhu
Computers 2025, 14(9), 381; https://doi.org/10.3390/computers14090381 - 11 Sep 2025
Viewed by 232
Abstract
In this paper we introduce Range Query Timestamp Counter (RQ-TSC), a general approach to provide scalable and linearizable range query operations for highly concurrent lock-based data structures. RQ-TSC is a multi-versioned building block that relies on hardware timestamps (e.g., obtained through hardware timestamp counter register on x86_64) to generate version timestamps, which greatly reduce a point of contention on a shared atomic counter. To evaluate the performance of RQ-TSC, we apply it to three data structures: a linked list, a skip list, and a binary search tree. Experiments show that our approach can improve scalability significantly. Moreover, in almost all cases, range queries on these data structures built from our design perform as well as or better than state-of-the-art concurrent data structures that support linearizable range queries. Full article
21 pages, 330 KB  
Article
Towards Navigating Ethical Challenges in AI-Driven Healthcare Ad Moderation
by Abraham Abby Sen, Jeen Mariam Joy and Murray E. Jennex
Computers 2025, 14(9), 380; https://doi.org/10.3390/computers14090380 - 11 Sep 2025
Viewed by 453
Abstract
The growing use of AI-driven content moderation on social media platforms has intensified ethical concerns, particularly in the context of healthcare advertising and misinformation. While artificial intelligence offers scale and efficiency, it lacks the moral judgment, contextual understanding, and interpretive flexibility required to navigate complex health-related discourse. This paper addresses these challenges by integrating normative ethical theory with organizational practice to evaluate the limitations of AI in moderating healthcare content. Drawing on deontological, utilitarian, and virtue ethics frameworks, the analysis explores the tensions between ethical ideals and real-world implementation. Building on this foundation, the paper proposes a set of normative guidelines that emphasize hybrid human–AI moderation, transparency, the redesign of success metrics, and the cultivation of ethical organizational cultures. To institutionalize these principles, we introduce a governance framework that includes internal accountability structures, external oversight mechanisms, and adaptive processes for handling ambiguity, disagreement, and evolving standards. By connecting ethical theory with actionable design strategies, this study provides a roadmap for responsible and context-sensitive AI moderation in the digital healthcare ecosystem. Full article
(This article belongs to the Section AI-Driven Innovations)
27 pages, 18541 KB  
Article
Integrating Design Thinking Approach and Simulation Tools in Smart Building Systems Education: A Case Study on Computer-Assisted Learning for Master’s Students
by Andrzej Ożadowicz
Computers 2025, 14(9), 379; https://doi.org/10.3390/computers14090379 - 9 Sep 2025
Viewed by 462
Abstract
The rapid development of smart home and building technologies requires educational methods that facilitate the integration of theoretical knowledge with practical, system-level design skills. Computer-assisted tools play a crucial role in this process by enabling students to experiment with complex Internet of Things (IoT) and building automation ecosystems in a risk-free, iterative environment. This paper proposes a pedagogical framework that integrates simulation-based prototyping with collaborative and spatial design tools, supported by elements of design thinking and blended learning. The approach was implemented in a master’s-level Smart Building Systems course, to engage students in interdisciplinary projects where virtual modeling, digital collaboration, and contextualized spatial design were combined to develop user-oriented smart space concepts. Analysis of project outcomes and student feedback indicated that the use of simulation and visualization platforms may enhance technical competencies, creativity, and engagement. The proposed framework contributes to engineering education by demonstrating how computer-assisted environments can effectively support practice-oriented, user-centered learning. Its modular and scalable structure makes it applicable across IoT- and automation-focused curricula, aligning academic training with the hybrid workflows of contemporary engineering practice. Concurrently, areas for enhancement and modification were identified to optimize support for group and creative student work. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
30 pages, 3507 KB  
Article
Pre-During-After Software Development Documentation (PDA-SDD): A Phase-Based Approach for Comprehensive Software Documentation in Modern Development Paradigms
by Abdullah A. H. Alzahrani
Computers 2025, 14(9), 378; https://doi.org/10.3390/computers14090378 - 9 Sep 2025
Viewed by 589
Abstract
Persistent challenges in software documentation, particularly limitations in generality, simplicity, and efficiency of existing models, impede effective software development. To address these, this research proposes a novel phase-based and holistic software documentation model (PDA-SDD). This model was subsequently evaluated using a digital questionnaire distributed to 150 software development and documentation experts, achieving a 48% response rate (n = 72). The evaluation focused on assessing the proposed model’s generality, simplicity, and efficiency. Findings indicate that while certain sub-models (e.g., SRSD, RLD) were positively received across all criteria and the overall model demonstrated strong perceived generality and efficiency in specific aspects, areas for improvement were identified, particularly regarding terminological consistency and user-friendliness. This study contributes to the understanding of the complexities in achieving a universally effective software documentation model and highlights key considerations for future research and development in this critical area of software engineering. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
29 pages, 1167 KB  
Article
The Learning Style Decoder: FSLSM-Guided Behavior Mapping Meets Deep Neural Prediction in LMS Settings
by Athanasios Angeioplastis, John Aliprantis, Markos Konstantakis, Dimitrios Varsamis and Alkiviadis Tsimpiris
Computers 2025, 14(9), 377; https://doi.org/10.3390/computers14090377 - 8 Sep 2025
Viewed by 332
Abstract
Personalized learning environments increasingly rely on learner modeling techniques that integrate both explicit and implicit data sources. This study introduces a hybrid profiling methodology that combines psychometric data from an extended Felder–Silverman Learning Style Model (FSLSM) questionnaire with behavioral analytics derived from Moodle Learning Management System interaction logs. A structured mapping process was employed to associate over 200 unique log event types with FSLSM cognitive dimensions, enabling dynamic, behavior-driven learner profiles. Experiments were conducted across three datasets: a university dataset from the International Hellenic University, a public dataset from Kaggle, and a combined dataset totaling over 7 million log entries. Deep learning models including a Sequential Neural Network, BiLSTM, and a pretrained MLSTM-FCN were trained to predict student performance across regression and classification tasks. Results indicate moderate predictive validity: binary classification achieved practical, albeit imperfect accuracy, while three-class and regression tasks performed close to baseline levels. These findings highlight both the potential and the current constraints of log-based learner modeling. The contribution of this work lies in providing a reproducible integration framework and pipeline that can be applied across datasets, offering a realistic foundation for further exploration of scalable, data-driven personalization. Full article
30 pages, 10155 KB  
Article
Interoperable Semantic Systems in Public Administration: AI-Driven Data Mining from Law-Enforcement Reports
by Alexandros Z. Spyropoulos and Vassilis Tsiantos
Computers 2025, 14(9), 376; https://doi.org/10.3390/computers14090376 - 8 Sep 2025
Viewed by 1006
Abstract
The digitisation of law-enforcement archives is examined with the aim of moving from static analogue records to interoperable semantic information systems. A step-by-step framework for optimal digitisation is proposed, grounded in archival best practice and enriched with artificial-intelligence and semantic-web technologies. Emphasis is placed on semantic data representation, which renders information actionable, searchable, interlinked, and automatically processed. As a proof of concept, a large language model—OpenAI ChatGPT, version o3—was applied to a corpus of narrative police reports, extracting and classifying key entities (metadata, persons, addresses, vehicles, incidents, fingerprints, and inter-entity relationships). The output was converted to Resource Description Framework triples and ingested into a triplestore, demonstrating how unstructured text can be transformed into machine-readable, interoperable data with minimal human intervention. The approach’s challenges—technical complexity, data quality assurance, information-security requirements, and staff training—are analysed alongside the opportunities it affords, such as accelerated access to records, cross-agency interoperability, and advanced analytics for investigative and strategic decision-making. Combining systematic digitisation, AI-driven data extraction, and rigorous semantic modelling ultimately delivers a fully interoperable information environment for law-enforcement agencies, enhancing efficiency, transparency, and evidentiary integrity. Full article
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)
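The final conversion step, from extracted entities to Resource Description Framework triples, can be sketched with rdflib. The namespace, property names, and example values below are hypothetical placeholders rather than the ontology used in the study.

```python
# Sketch: converting entities extracted from a narrative report into RDF triples
# with rdflib. The namespace, properties, and values are hypothetical examples.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/police/")
g = Graph()
g.bind("ex", EX)

incident = URIRef(EX["incident/2025-0042"])
person = URIRef(EX["person/JDoe"])
vehicle = URIRef(EX["vehicle/KA1234XY"])

g.add((incident, RDF.type, EX.Incident))
g.add((incident, EX.reportedAt, Literal("2025-03-14T22:10:00")))
g.add((person, RDF.type, EX.Person))
g.add((person, EX.name, Literal("John Doe")))
g.add((incident, EX.involves, person))
g.add((incident, EX.involvesVehicle, vehicle))

# Serialise for ingestion into a triplestore.
print(g.serialize(format="turtle"))
```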
20 pages, 1604 KB  
Article
Rule-Based eXplainable Autoencoder for DNS Tunneling Detection
by Giacomo De Bernardi, Giovanni Battista Gaggero, Fabio Patrone, Sandro Zappatore, Mario Marchese and Maurizio Mongelli
Computers 2025, 14(9), 375; https://doi.org/10.3390/computers14090375 - 8 Sep 2025
Viewed by 386
Abstract
Artificial Intelligence (AI) and Machine Learning (ML) are employed in numerous fields and applications. Although most of these approaches offer very good performance, they are affected by the “black-box” problem: the way they operate and make decisions is complex and difficult for human users to interpret, making the systems impossible to adjust manually when they make errors that are trivial from a human viewpoint. In this paper, we show how a “white-box” approach based on eXplainable AI (XAI) can be applied to the Domain Name System (DNS) tunneling detection problem, a cybersecurity problem already successfully addressed by “black-box” approaches, in order to make the detection explainable. The results show that the proposed solution can achieve performance comparable to that of an autoencoder-based solution while offering a clear view of how the system makes its choices and allowing manual analysis and adjustment. Full article
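An example of the kind of human-readable rule a white-box detector can expose is a length-and-entropy test on the queried name; long, high-entropy labels are a classic sign of data being smuggled through DNS. The features and thresholds below are invented for illustration and are not the rules extracted in the paper.

```python
# Illustrative white-box rule of the kind an explainable detector might expose:
# flag a DNS query when its leading label is unusually long and high-entropy.
# Features and thresholds are invented for illustration, not the paper's rules.
import math
from collections import Counter

def shannon_entropy(s):
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_like_tunneling(qname, max_label_len=40, entropy_threshold=3.8):
    subdomain = qname.split(".")[0]
    return len(subdomain) > max_label_len and shannon_entropy(subdomain) > entropy_threshold

print(looks_like_tunneling("mail.example.com"))   # benign-looking name
# A long, high-entropy leading label (hypothetical encoded payload) trips the rule.
print(looks_like_tunneling("dGhpcyBpcyBleGZpbHRyYXRlZCBkYXRhIGJsb2Nr0a9f3c.evil.example.com"))
```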
49 pages, 670 KB  
Review
Bridging Domains: Advances in Explainable, Automated, and Privacy-Preserving AI for Computer Science and Cybersecurity
by Youssef Harrath, Oswald Adohinzin, Jihene Kaabi and Morgan Saathoff
Computers 2025, 14(9), 374; https://doi.org/10.3390/computers14090374 - 8 Sep 2025
Viewed by 1125
Abstract
Artificial intelligence (AI) is rapidly redefining both computer science and cybersecurity by enabling more intelligent, scalable, and privacy-conscious systems. While most prior surveys treat these fields in isolation, this paper provides a unified review of 256 peer-reviewed publications to bridge that gap. We examine how emerging AI paradigms, such as explainable AI (XAI), AI-augmented software development, and federated learning, are shaping technological progress across both domains. In computer science, AI is increasingly embedded throughout the software development lifecycle to boost productivity, improve testing reliability, and automate decision making. In cybersecurity, AI drives advances in real-time threat detection and adaptive defense. Our synthesis highlights powerful cross-cutting findings, including shared challenges such as algorithmic bias, interpretability gaps, and high computational costs, as well as empirical evidence that AI-enabled defenses can reduce successful breaches by up to 30%. Explainability is identified as a cornerstone for trust and bias mitigation, while privacy-preserving techniques, including federated learning and local differential privacy, emerge as essential safeguards in decentralized environments such as the Internet of Things (IoT) and healthcare. Despite transformative progress, we emphasize persistent limitations in fairness, adversarial robustness, and the sustainability of large-scale model training. By integrating perspectives from two traditionally siloed disciplines, this review delivers a unified framework that not only maps current advances and limitations but also provides a foundation for building more resilient, ethical, and trustworthy AI systems. Full article
(This article belongs to the Section AI-Driven Innovations)