Computers, Volume 14, Issue 5 (May 2025) – 12 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
25 pages, 923 KiB  
Article
From Transformers to Voting Ensembles for Interpretable Sentiment Classification: A Comprehensive Comparison
by Konstantinos Kyritsis, Charalampos M. Liapis, Isidoros Perikos, Michael Paraskevas and Vaggelis Kapoulas
Computers 2025, 14(5), 167; https://doi.org/10.3390/computers14050167 - 29 Apr 2025
Abstract
This study conducts an in-depth investigation of the performance of six transformer models on sentiment classification using 12 different datasets—10 with three classes and two with two classes. We use these six models to generate all combinations of three-model ensembles under two voting schemes, Majority and Soft vote. In total, we compare 46 classifiers on each dataset and observe, in one case, up to a 7.6% increase in accuracy on a three-class dataset from an ensemble scheme and, in another, up to an 8.5% increase on a two-class dataset. Our study contributes to natural language processing by exploring the reasons for the predominance, in this particular task, of Majority vote over Soft vote. The conclusions are drawn after a thorough investigation of the classifiers through reliability charts, analyses of the confidence the models place in their predictions, and their metrics, concluding with statistical analyses using the Friedman test and the Nemenyi post hoc test.
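As a quick illustration of the two fusion rules the study compares, the sketch below (illustrative NumPy, not the authors' code) contrasts majority (hard) voting with soft voting over the class-probability outputs of three hypothetical models.

```python
# Illustrative sketch: majority vs. soft voting over the class-probability
# outputs of three hypothetical classifiers (3 models, 4 samples, 3 classes).
import numpy as np

probs = np.array([
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7], [0.4, 0.4, 0.2]],  # model A
    [[0.5, 0.4, 0.1], [0.1, 0.7, 0.2], [0.3, 0.3, 0.4], [0.2, 0.5, 0.3]],  # model B
    [[0.3, 0.4, 0.3], [0.4, 0.4, 0.2], [0.2, 0.1, 0.7], [0.3, 0.3, 0.4]],  # model C
])

# Majority (hard) vote: each model casts one vote per sample.
hard_votes = probs.argmax(axis=2)                      # shape (3 models, 4 samples)
majority = np.array([np.bincount(sample_votes, minlength=3).argmax()
                     for sample_votes in hard_votes.T])

# Soft vote: average the probability vectors, then pick the arg-max class.
soft = probs.mean(axis=0).argmax(axis=1)

print("majority vote:", majority)
print("soft vote:    ", soft)
```

The two rules can disagree: a model that is very confident in a minority opinion can swing the soft vote while still losing the majority vote, which is one reason the paper's confidence analyses are informative.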
16 pages, 3173 KiB  
Article
On Generating Synthetic Datasets for Photometric Stereo Applications
by Elisa Crabu and Giuseppe Rodriguez
Computers 2025, 14(5), 166; https://doi.org/10.3390/computers14050166 - 29 Apr 2025
Abstract
The mathematical model for photometric stereo makes several restrictive assumptions, which are often not fulfilled in real-life applications. Specifically, an object surface does not always satisfy Lambert’s cosine law, leading to reflection issues. Moreover, the camera and the light source, in some situations, have to be placed at a close distance from the target, rather than at an infinite distance from it. When studying algorithms for these complex situations, it is extremely useful to have synthetic datasets with known exact solutions at hand, in order to assess the accuracy of a solution method. The aim of this paper is to present a MATLAB package which constructs such datasets on the basis of a chosen exact solution, providing a tool for simulating various real camera/light configurations. This package, starting from the mathematical expression of a surface, or from a discrete sampling, allows the user to build a set of images matching a particular light configuration. Setting various parameters makes it possible to simulate different scenarios, which can be used to investigate the performance of reconstruction algorithms in several situations and test their response to non-ideal data. The ability to construct large datasets is particularly useful for training machine-learning-based algorithms.
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
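For intuition, the following sketch (a minimal NumPy stand-in for the kind of dataset the described MATLAB package generates, with an invented test surface) renders Lambertian images of a synthetic surface under three distant lights, using Lambert's cosine law: intensity equals albedo times the clamped dot product of surface normal and light direction.

```python
# Minimal sketch (assumptions, not the authors' package): render Lambertian
# images of a chosen exact surface z = f(x, y) under distant point lights.
import numpy as np

n = 128
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
z = np.exp(-(x**2 + y**2) * 3)                # chosen exact surface (a bump)

# Surface normals from the gradient: n proportional to (-dz/dx, -dz/dy, 1).
dzdy, dzdx = np.gradient(z, 2 / (n - 1))      # axis 0 is y with 'xy' meshgrid
normals = np.dstack([-dzdx, -dzdy, np.ones_like(z)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

albedo = 0.9                                  # uniform Lambertian albedo
lights = [np.array(l, float) / np.linalg.norm(l)   # three distant light directions
          for l in ([0, 0, 1], [1, 0, 1], [0, 1, 1])]

# Lambert's cosine law: I = albedo * max(0, n . l); one image per light.
images = [albedo * np.clip(normals @ l, 0, None) for l in lights]
print(images[0].shape, round(float(images[0].max()), 3))
```

Because the surface is known exactly, a reconstruction algorithm's output can be compared against ground-truth normals and depth, which is the point of such synthetic datasets.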
19 pages, 2033 KiB  
Article
DeepStego: Privacy-Preserving Natural Language Steganography Using Large Language Models and Advanced Neural Architectures
by Oleksandr Kuznetsov, Kyrylo Chernov, Aigul Shaikhanova, Kainizhamal Iklassova and Dinara Kozhakhmetova
Computers 2025, 14(5), 165; https://doi.org/10.3390/computers14050165 - 29 Apr 2025
Abstract
Modern linguistic steganography faces the fundamental challenge of balancing embedding capacity with detection resistance, particularly against advanced AI-based steganalysis. This paper presents DeepStego, a novel steganographic system leveraging GPT-4-omni’s language modeling capabilities for secure information hiding in text. Our approach combines dynamic synonym generation with semantic-aware embedding to achieve superior detection resistance while maintaining text naturalness. Through comprehensive experimentation, DeepStego demonstrates significantly lower detection rates than existing methods across multiple state-of-the-art steganalysis techniques, while supporting higher embedding capacities and preserving semantic coherence. The system also shows superior scalability. Our evaluation demonstrates perfect message recovery accuracy and significant improvements in text quality preservation compared to competing approaches. These results establish DeepStego as a significant advancement in practical steganographic applications, particularly for scenarios requiring secure covert communication with high embedding capacity.
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
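The general principle of synonym-substitution steganography can be shown with a toy sketch (hypothetical, fixed word pairs; DeepStego's dynamic, GPT-4-omni-driven candidate generation is not reproduced here): each slot offering two interchangeable synonyms encodes one message bit.

```python
# Toy illustration of synonym-substitution steganography. Each synonym slot
# has two variants, so the choice of variant encodes one bit of the message.
SYNONYMS = {
    "big": ("big", "large"),
    "fast": ("fast", "quick"),
    "smart": ("smart", "clever"),
}

def embed(cover_words, bits):
    out, i = [], 0
    for w in cover_words:
        if w in SYNONYMS and i < len(bits):
            out.append(SYNONYMS[w][bits[i]])   # pick the variant for this bit
            i += 1
        else:
            out.append(w)
    return out

def extract(stego_words):
    rev = {v: b for pair in SYNONYMS.values() for b, v in enumerate(pair)}
    return [rev[w] for w in stego_words if w in rev]

stego = embed("a big dog runs fast and looks smart".split(), [1, 0, 1])
print(" ".join(stego))   # -> "a large dog runs fast and looks clever"
print(extract(stego))    # -> [1, 0, 1]
```

A static table like this is easy to detect statistically; generating candidates dynamically with a language model, as the paper describes, is what makes the choices blend into natural word-frequency patterns.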
35 pages, 11134 KiB  
Article
Error Classification and Static Detection Methods in Tri-Programming Models: MPI, OpenMP, and CUDA
by Saeed Musaad Altalhi, Fathy Elbouraey Eassa, Sanaa Abdullah Sharaf, Ahmed Mohammed Alghamdi, Khalid Ali Almarhabi and Rana Ahmad Bilal Khalid
Computers 2025, 14(5), 164; https://doi.org/10.3390/computers14050164 - 28 Apr 2025
Abstract
The growing adoption of supercomputers across various scientific disciplines, particularly by researchers without a background in computer science, has intensified the demand for parallel applications. These applications are typically developed using a combination of programming models within languages such as C, C++, and Fortran. However, modern multi-core processors and accelerators necessitate fine-grained control to achieve effective parallelism, complicating the development process. To address this, developers commonly utilize high-level programming models such as Open Multi-Processing (OpenMP), Open Accelerators (OpenACC), Message Passing Interface (MPI), and Compute Unified Device Architecture (CUDA). These models may be used independently or combined into dual- or tri-model applications to leverage their complementary strengths. However, integrating multiple models introduces subtle and difficult-to-detect runtime errors such as data races, deadlocks, and livelocks that often elude conventional compilers. This complexity is exacerbated in applications that simultaneously incorporate MPI, OpenMP, and CUDA, where the origin of runtime errors, whether from individual models, user logic, or their interactions, becomes ambiguous. Moreover, existing tools are inadequate for detecting such errors in tri-model applications, leaving a critical gap in development support. To address this gap, the present study introduces a static analysis tool designed specifically for tri-model applications combining MPI, OpenMP, and CUDA in C++-based environments. The tool analyzes source code to identify both actual and potential runtime errors prior to execution. Central to this approach is the introduction of error dependency graphs, a novel mechanism for systematically representing and analyzing error correlations in hybrid applications. By offering both error classification and comprehensive static detection, the proposed tool enhances error visibility and reduces manual testing effort. This contributes significantly to the development of more robust parallel applications for high-performance computing (HPC) and future exascale systems.
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
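A minimal sketch of the error dependency graph idea (node names and structure are illustrative assumptions, not the authors' implementation): potential runtime errors are nodes, and a directed edge records that one error can induce another, so the analysis can trace how a fault in one programming model propagates across MPI, OpenMP, and CUDA.

```python
# Hedged sketch of an error dependency graph for a tri-model application.
from collections import defaultdict

class ErrorDependencyGraph:
    def __init__(self):
        self.edges = defaultdict(set)   # cause -> set of induced errors

    def add_dependency(self, cause, effect):
        self.edges[cause].add(effect)

    def transitive_effects(self, error):
        """All errors reachable from `error` (simple depth-first search)."""
        seen, stack = set(), [error]
        while stack:
            e = stack.pop()
            for nxt in self.edges[e] - seen:
                seen.add(nxt)
                stack.append(nxt)
        return seen

g = ErrorDependencyGraph()
g.add_dependency("OpenMP data race on shared buffer",
                 "MPI_Send of corrupted payload")
g.add_dependency("MPI_Send of corrupted payload",
                 "CUDA kernel reads invalid input")
print(g.transitive_effects("OpenMP data race on shared buffer"))
```

Representing correlations this way lets a static checker report not just a detected error but the downstream errors it may explain, which is useful when the origin is ambiguous across models.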
27 pages, 6632 KiB  
Article
A Study of COVID-19 Diagnosis Applying Artificial Intelligence to X-Rays Images
by Guilherme P. Cardim, Claudio B. Reis Neto, Eduardo S. Nascimento, Henrique P. Cardim, Wallace Casaca, Rogério G. Negri, Flávio C. Cabrera, Renivaldo J. dos Santos, Erivaldo A. da Silva and Mauricio Araujo Dias
Computers 2025, 14(5), 163; https://doi.org/10.3390/computers14050163 - 28 Apr 2025
Abstract
X-ray imaging, as a technique of non-destructive testing, has demonstrated considerable promise in COVID-19 diagnosis, particularly when supplemented with artificial intelligence (AI). Both radiologic technologists and AI researchers have raised concerns about having to use increased doses of radiation in order to obtain more refined images and, hence, enhance diagnostic precision. In this research, we assess whether the disparity in radiation dose considerably influences the reliability of AI-based diagnostic systems for COVID-19. A heterogeneous dataset of chest X-rays acquired at varying degrees of radiation exposure was run through four convolutional neural networks: VGG16, VGG19, ResNet50, and ResNet50V2. Results indicated accuracies above 91%, demonstrating that greater radiation exposure does not appreciably enhance diagnostic accuracy. Radiation exposure low enough for human radiologists is therefore adequate for AI-based diagnosis. These findings are useful to the medical community, emphasizing that maximum diagnostic accuracy using AI does not require increased doses of radiation, thus further guaranteeing the safe application of X-ray imaging in COVID-19 diagnosis and possibly other medical and veterinary applications.
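For context, a minimal transfer-learning sketch with one of the four named architectures (Keras/TensorFlow; the data pipeline and hyperparameters here are placeholders, not the authors' configuration):

```python
# Minimal sketch: VGG16 backbone with a small binary-classification head.
import tensorflow as tf
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # freeze the ImageNet features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # COVID vs. non-COVID
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical data
```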
26 pages, 610 KiB  
Article
A Black-Box Analysis of the Capacity of ChatGPT to Generate Datasets of Human-like Comments
by Alejandro Rosete, Guillermo Sosa-Gómez and Omar Rojas
Computers 2025, 14(5), 162; https://doi.org/10.3390/computers14050162 - 27 Apr 2025
Abstract
This paper examines the ability of ChatGPT to generate synthetic comment datasets that mimic those produced by humans. To this end, a collection of datasets containing human comments, freely available in the Kaggle repository, was compared to comments generated via ChatGPT. The latter were based on prompts designed to provide the necessary context for approximating human results. It was hypothesized that the responses obtained from ChatGPT would demonstrate a high degree of similarity with the human-generated datasets with regard to vocabulary usage. Two categories of prompts were analyzed, depending on whether they specified the desired length of the generated comments. The evaluation of the results primarily focused on the vocabulary used in each comment dataset, employing several analytical measures. This analysis yielded noteworthy observations, which reflect the current capabilities of ChatGPT in this particular task domain. It was observed that ChatGPT typically employs fewer words than human respondents and tends to provide repetitive answers. Furthermore, the responses of ChatGPT vary considerably when the length is specified. ChatGPT also employs a smaller vocabulary, which does not always align with human language, and the proportion of non-stop words in its output is higher than that found in human communication. Finally, each ChatGPT configuration's vocabulary is more closely aligned with the human vocabulary than the two configurations are with each other; this alignment is particularly evident in the use of stop words. While it does not fully achieve the intended purpose, the generated vocabulary serves as a reasonable approximation, enabling specific applications such as the creation of word clouds.
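The kind of vocabulary measures described can be sketched as follows (toy comments and a truncated stop-word list, purely illustrative):

```python
# Sketch: compare two comment datasets by vocabulary overlap (Jaccard
# similarity) and the proportion of non-stop words.
STOPWORDS = {"the", "a", "an", "is", "are", "and", "or", "to", "of", "it"}

def vocab_stats(comments):
    tokens = [w.lower() for c in comments for w in c.split()]
    vocab = set(tokens)
    non_stop_ratio = sum(w not in STOPWORDS for w in tokens) / max(len(tokens), 1)
    return vocab, non_stop_ratio

human = ["The movie is great and the cast is brilliant",
         "It was a bit slow but the ending is worth it"]
gpt = ["The movie is great and the plot is engaging",
       "The movie is great and the acting is engaging"]

hv, h_ratio = vocab_stats(human)
gv, g_ratio = vocab_stats(gpt)
overlap = len(hv & gv) / len(hv | gv)        # Jaccard similarity of vocabularies
print(f"human vocab {len(hv)}, gpt vocab {len(gv)}, "
      f"overlap {overlap:.2f}, non-stop ratios {h_ratio:.2f}/{g_ratio:.2f}")
```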
8 pages, 243 KiB  
Article
High-Dimensional Cross Parity Codes and Parities from Lower Than (d − 1)-Dimensional Hyperplanes
by Jörg Keller
Computers 2025, 14(5), 161; https://doi.org/10.3390/computers14050161 - 26 Apr 2025
Abstract
Cross parity codes are mostly used as 2-dimensional codes, and sometimes as 3-dimensional codes. We argue that higher dimensions can help to reduce the number of parity bits, and thus deserve further investigation. As a start, we investigate parities from (d − 2)-dimensional hyperplanes in d-dimensional parity codes, instead of parities from (d − 1)-dimensional hyperplanes as usual.
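To make the construction concrete, the sketch below (illustrative, for d = 3) computes parities by XOR-ing a d-dimensional bit array over all axes except those spanning the hyperplane: keeping one axis yields the usual (d − 1)-dimensional hyperplane parities, keeping two yields (d − 2)-dimensional ones, which are more numerous but each cover a smaller hyperplane.

```python
# Sketch: hyperplane parities of a d-dimensional bit array (here d = 3).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
d, n = 3, 4
bits = rng.integers(0, 2, size=(n,) * d)

def parities(data, kept_axes):
    """XOR the array over every axis not in `kept_axes`."""
    drop = tuple(ax for ax in range(data.ndim) if ax not in kept_axes)
    return np.bitwise_xor.reduce(data, axis=drop)

# (d-1)-dim hyperplane parities: keep one axis -> d * n parity bits.
p1 = [parities(bits, (k,)) for k in range(d)]
# (d-2)-dim hyperplane parities: keep two axes -> C(d,2) * n^2 parity bits.
p2 = [parities(bits, pair) for pair in combinations(range(d), 2)]

print(sum(p.size for p in p1), "vs", sum(p.size for p in p2), "parity bits")
```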
25 pages, 5901 KiB  
Article
Use of Explainable Artificial Intelligence for Analyzing and Explaining Intrusion Detection Systems
by Pamela Hermosilla, Mauricio Díaz, Sebastián Berríos and Héctor Allende-Cid
Computers 2025, 14(5), 160; https://doi.org/10.3390/computers14050160 - 25 Apr 2025
Abstract
The increase in malicious cyber activities has generated the need for effective tools in the field of digital forensics and incident response. Artificial intelligence (AI) and its subfields, specifically machine learning (ML) and deep learning (DL), have shown great potential to aid the task of processing and analyzing large amounts of information. However, models generated by DL are often considered “black boxes”, a name that reflects the difficulty users face in understanding their decision-making process. This research seeks to address the challenges of transparency, explainability, and reliability posed by black-box models in digital forensics. To accomplish this, explainable artificial intelligence (XAI) is explored as a solution, an approach that seeks to make DL models more interpretable and understandable by humans. The SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) methods are implemented and evaluated as model-agnostic techniques to explain the predictions of the generated models for forensic analysis. Applying these methods to XGBoost and TabNet models trained on the UNSW-NB15 dataset, the results indicated distinct global feature importance rankings between the model types and revealed greater consistency of local explanations for the tree-based XGBoost model than for the deep-learning-based TabNet. This study aims to make the decision-making process in these models transparent and to assess the confidence and consistency of XAI-generated explanations in a forensic context.
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
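A minimal sketch of the SHAP side of such an analysis (synthetic stand-in data; the study itself uses the UNSW-NB15 dataset and also evaluates LIME and TabNet):

```python
# Sketch: SHAP explanations for an XGBoost classifier on toy data.
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # toy "attack" label

model = xgboost.XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)

explainer = shap.TreeExplainer(model)            # fast path for tree models
shap_values = explainer.shap_values(X)
# Global importance: mean absolute SHAP value per feature.
print(np.abs(shap_values).mean(axis=0))
```

Here features 0 and 3 should dominate the importance ranking, mirroring how global SHAP summaries expose which inputs drive a model's decisions.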
28 pages, 2200 KiB  
Article
Fine-Tuning Network Slicing in 5G: Unveiling Mathematical Equations for Precision Classification
by Nikola Anđelić, Sandi Baressi Šegota and Vedran Mrzljak
Computers 2025, 14(5), 159; https://doi.org/10.3390/computers14050159 - 25 Apr 2025
Abstract
Modern 5G network slicing centers on the precise design of virtual, independent networks operating over a shared physical infrastructure, each configured to meet specific service requirements. This approach plays a vital role in enabling highly customized and flexible service delivery within the 5G ecosystem. In this study, we present the application of a genetic programming symbolic classifier to a dedicated network slicing dataset, resulting in the generation of accurate symbolic expressions for classifying different network slice types. To address the issue of class imbalance, we employ oversampling strategies that produce balanced variations of the dataset. Furthermore, a random search strategy is used to explore the hyperparameter space comprehensively in pursuit of optimal classification performance. The derived symbolic models, refined through threshold tuning based on prediction correctness, are subsequently evaluated on the original imbalanced dataset. The proposed method demonstrates outstanding performance, achieving a perfect classification accuracy of 1.0.
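The final step, turning an evolved symbolic expression into a classifier by tuning a decision threshold, can be sketched as follows (the expression, data, and labels are invented placeholders, not the paper's derived models):

```python
# Sketch: threshold tuning for a hypothetical GP-evolved symbolic expression.
import numpy as np

def symbolic_score(X):
    # Hypothetical evolved expression over two features.
    return np.tanh(2.0 * X[:, 0] - 0.7 * X[:, 1]) + 0.1 * X[:, 1] ** 2

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0.3 * X[:, 1]).astype(int)        # toy slice-type label

scores = symbolic_score(X)
# Scan candidate thresholds and keep the one with the best accuracy.
thresholds = np.linspace(scores.min(), scores.max(), 201)
accs = [((scores > t).astype(int) == y).mean() for t in thresholds]
best = thresholds[int(np.argmax(accs))]
print(f"best threshold {best:.3f}, accuracy {max(accs):.3f}")
```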
21 pages, 4491 KiB  
Article
PyChatAI: Enhancing Python Programming Skills—An Empirical Study of a Smart Learning System
by Manal Alanazi, Ben Soh, Halima Samra and Alice Li
Computers 2025, 14(5), 158; https://doi.org/10.3390/computers14050158 - 23 Apr 2025
Abstract
This paper presents strategies for effectively integrating AI tools into programming education and provides recommendations for enhancing student learning outcomes through intelligent educational systems. Learning computer programming is a cognitively demanding task that requires dedication, logical reasoning, and persistence. Many beginners struggle with debugging and often lack effective problem-solving strategies. To address these issues, this study investigates PyChatAI—a bilingual, AI-powered chatbot designed to support novice Python programmers by providing real-time feedback, answering coding-related questions, and fostering independent problem-solving skills. PyChatAI offers continuous, personalised assistance and is particularly beneficial for students who prefer remote or low-pressure learning environments. An empirical evaluation employing a Solomon Four-Group design revealed significant improvements across all programming skill areas, with especially strong gains in theoretical understanding, code writing, and debugging proficiency.
(This article belongs to the Special Issue Smart Learning Environments)
16 pages, 1226 KiB  
Article
Advanced Digital System for International Collaboration on Biosample-Oriented Research: A Multicriteria Query Tool for Real-Time Biosample and Patient Cohort Searches
by Alexandros Fridas, Anna Bourouliti, Loukia Touramanidou, Desislava Ivanova, Kostantinos Votis and Panagiotis Katsaounis
Computers 2025, 14(5), 157; https://doi.org/10.3390/computers14050157 - 23 Apr 2025
Abstract
The advancement of biomedical research depends on efficient data sharing, integration, and annotation to ensure reproducibility, accessibility, and cross-disciplinary collaboration. International collaborative research is crucial for advancing biomedical science and innovation but often faces significant barriers, such as data sharing limitations, inefficient sample management, and scalability challenges; existing infrastructures for biosample and data repositories face limitations that hinder large-scale research efforts. This study presents a novel platform designed to address these issues, enabling researchers to conduct high-quality research more efficiently and at reduced cost. The platform employs a modular, distributed architecture that ensures high availability, redundancy, and interoperability among diverse stakeholders, and integrates advanced features, including secure access management, comprehensive query functionalities, real-time availability reporting, and robust data mining capabilities. In addition, the platform supports dynamic, multi-criteria searches tailored to disease-specific patient profiles and biosample-related data across pre-analytical, post-analytical, and cryo-storage processes. By evaluating the platform’s modular architecture and pilot testing outcomes, this study demonstrates its potential to enhance interdisciplinary collaboration, streamline research workflows, and foster transformative advances in biomedical research. A key innovation is the real-time dynamic e-consent (DRT e-consent) system, which allows donors to update their consent status in real time, ensuring compliance with ethical and regulatory frameworks such as the GDPR and HIPAA. The system also supports multi-modal data integration, including genomic sequences, electronic health records (EHRs), and imaging data, enabling researchers to perform complex queries and generate comprehensive insights.
(This article belongs to the Special Issue Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024)
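A multi-criteria biosample query of the kind described can be sketched in a few lines (field names and records are invented; the platform's actual schema and API are not given in the abstract):

```python
# Sketch: multi-criteria filtering over hypothetical biosample records.
biosamples = [
    {"id": "BS-001", "diagnosis": "melanoma", "age": 54,
     "storage": "cryo", "consent": True},
    {"id": "BS-002", "diagnosis": "melanoma", "age": 61,
     "storage": "ffpe", "consent": True},
    {"id": "BS-003", "diagnosis": "glioma", "age": 47,
     "storage": "cryo", "consent": False},
]

def query(samples, **criteria):
    """Return samples matching every criterion; callables act as predicates."""
    def match(s):
        return all(c(s[k]) if callable(c) else s[k] == c
                   for k, c in criteria.items())
    return [s for s in samples if match(s)]

hits = query(biosamples, diagnosis="melanoma", storage="cryo",
             consent=True, age=lambda a: 50 <= a <= 70)
print([s["id"] for s in hits])    # -> ['BS-001']
```

Checking `consent` inside every query is the essential point of a dynamic e-consent system: a donor's withdrawal immediately removes their samples from all future search results.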
25 pages, 3963 KiB  
Article
Students Collaboratively Prompting ChatGPT
by Maria Perifanou and Anastasios A. Economides
Computers 2025, 14(5), 156; https://doi.org/10.3390/computers14050156 - 22 Apr 2025
Abstract
This study investigated how undergraduate students collaborated when working with ChatGPT and what teamwork approaches they used, focusing on students’ preferences, conflict resolution, reliance on AI-generated content, and perceived learning outcomes. In a course on the Applications of Information Systems, 153 undergraduate students were organized into teams of three. Team members worked together to create a report and a presentation on a specific data mining technique, exploiting ChatGPT, internet resources, and class materials. The findings revealed no strong preference for a single collaborative mode, though Modes #2, #4, and #5 were marginally favored due to clearer structures, role clarity, or increased individual autonomy. Students commonly encountered initial disagreements (averaging 30.44%), which were eventually resolved, indicating constructive debates that improve critical thinking. Data also showed that students moderately modified ChatGPT’s responses (50% on average) and based nearly half (44%) of their overall output on AI-generated content, suggesting a balanced yet varied level of reliance on AI. Notably, a statistically significant relationship emerged between students’ perceived learning and actual performance, implying that self-assessment can complement objective academic measures. Students also employed a diverse mix of communication tools, from synchronous (phone calls) to asynchronous (Instagram) and collaborative platforms (Google Drive), valuing their ease of use but facing scheduling, technical, and engagement issues. Overall, these results reveal the need for flexible collaborative patterns, more supportive AI use policies, and versatile communication methods so that educators can apply collaborative learning effectively and maintain academic integrity.