- Cybercrime Resilience in the Era of Advanced Technologies: Evidence from the Financial Sector of a Developing Country
- A Literature Review on Security in the Internet of Things: Identifying and Analysing Critical Categories
- Machine Learning and Deep Learning Paradigms: From Techniques to Practical Applications and Research Frontiers
Journal Description
Computers
Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q2 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 15.5 days after submission; the time from acceptance to publication is 3.8 days (median values for papers published in this journal in the second half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.6 (2023); 5-Year Impact Factor: 2.4 (2023)
Latest Articles
From Transformers to Voting Ensembles for Interpretable Sentiment Classification: A Comprehensive Comparison
Computers 2025, 14(5), 167; https://doi.org/10.3390/computers14050167 - 29 Apr 2025
Abstract
This study conducts an in-depth investigation of the performance of six transformer models on sentiment classification using 12 different datasets—10 with three classes and two with two classes. From these six models, we generate all combinations of triple-model ensembles under both Majority and Soft voting. In total, we compare 46 classifiers on each dataset and observe, in one case, up to a 7.6% increase in accuracy on a three-class dataset from an ensemble scheme and, in a second case, up to an 8.5% increase in accuracy on a two-class dataset. Our study contributes to the field of natural language processing by exploring the reasons for the predominance, in this particular task, of Majority vote over Soft vote. The conclusions are drawn after a thorough investigation in which the classifiers are compared with each other through reliability charts and analyses of the models’ confidence in their predictions and of their metrics, concluding with statistical analyses using the Friedman test and the Nemenyi post hoc test.
Full article
(This article belongs to the Special Issue When Natural Language Processing Meets Machine Learning—Opportunities, Challenges and Solutions)
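The two ensemble schemes compared in this abstract differ only in how per-model outputs are combined. A minimal sketch, with invented class probabilities (over three sentiment classes) standing in for transformer outputs:

```python
from collections import Counter

def majority_vote(prob_rows):
    """Each model votes for its argmax class; the most frequent label wins."""
    votes = [max(range(len(p)), key=p.__getitem__) for p in prob_rows]
    return Counter(votes).most_common(1)[0][0]

def soft_vote(prob_rows):
    """Average class probabilities across models, then take the argmax."""
    n = len(prob_rows[0])
    avg = [sum(p[c] for p in prob_rows) / len(prob_rows) for c in range(n)]
    return max(range(n), key=avg.__getitem__)

# Three hypothetical models scoring one sample over three sentiment classes.
probs = [
    [0.00, 0.55, 0.45],  # model A -> class 1
    [0.00, 0.10, 0.90],  # model B -> class 2, very confidently
    [0.00, 0.55, 0.45],  # model C -> class 1
]

print(majority_vote(probs))  # 1
print(soft_vote(probs))      # 2
```

With these numbers the two schemes disagree: two of three models prefer class 1, but one model's very confident class-2 probabilities pull the averaged (soft) vote to class 2.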
Open Access Article
On Generating Synthetic Datasets for Photometric Stereo Applications
by Elisa Crabu and Giuseppe Rodriguez
Computers 2025, 14(5), 166; https://doi.org/10.3390/computers14050166 - 29 Apr 2025
Abstract
The mathematical model for photometric stereo makes several restricting assumptions, which are often not fulfilled in real-life applications. Specifically, an object surface does not always satisfy Lambert’s cosine law, leading to reflection issues. Moreover, the camera and the light source, in some situations, have to be placed at a close distance from the target, rather than at infinite distance from it. When studying algorithms for these complex situations, it is extremely useful to have synthetic datasets with known exact solutions at one’s disposal, to assess the accuracy of a solution method. The aim of this paper is to present a MATLAB package which constructs such datasets on the basis of a chosen exact solution, providing a tool for simulating various real camera/light configurations. This package, starting from the mathematical expression of a surface, or from a discrete sampling, allows the user to build a set of images matching a particular light configuration. Setting various parameters makes it possible to simulate different scenarios, which can be used to investigate the performance of reconstruction algorithms in several situations and test their response to lack of ideality in data. The ability to construct large datasets is particularly useful for training machine learning-based algorithms.
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
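The Lambertian assumption the abstract refers to is simple to state in code. A minimal sketch of Lambert's cosine law, with made-up unit vectors (not taken from the package being described):

```python
def lambert_intensity(albedo, normal, light):
    """Lambert's cosine law: I = albedo * max(0, n . l) for unit vectors n, l."""
    dot = sum(a * b for a, b in zip(normal, light))
    return albedo * max(0.0, dot)

light = (0.0, 0.6, 0.8)                                 # unit light direction
print(lambert_intensity(1.0, (0.0, 0.0, 1.0), light))   # 0.8 (facing the camera)
print(lambert_intensity(1.0, (0.0, -0.6, 0.8), light))  # ~0.28 (tilted away, dimmer)
print(lambert_intensity(1.0, (0.0, 0.0, -1.0), light))  # 0.0 (back-facing, clamped)
```

Surfaces that violate this law (specular highlights, inter-reflections) are exactly the cases where synthetic datasets with known ground truth are valuable for testing reconstruction algorithms.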
Open Access Article
DeepStego: Privacy-Preserving Natural Language Steganography Using Large Language Models and Advanced Neural Architectures
by Oleksandr Kuznetsov, Kyrylo Chernov, Aigul Shaikhanova, Kainizhamal Iklassova and Dinara Kozhakhmetova
Computers 2025, 14(5), 165; https://doi.org/10.3390/computers14050165 - 29 Apr 2025
Abstract
Modern linguistic steganography faces the fundamental challenge of balancing embedding capacity with detection resistance, particularly against advanced AI-based steganalysis. This paper presents DeepStego, a novel steganographic system leveraging GPT-4-omni’s language modeling capabilities for secure information hiding in text. Our approach combines dynamic synonym generation with semantic-aware embedding to achieve superior detection resistance while maintaining text naturalness. Through comprehensive experimentation, DeepStego demonstrates significantly lower detection rates compared to existing methods across multiple state-of-the-art steganalysis techniques. DeepStego supports higher embedding capacities while maintaining strong detection resistance and semantic coherence. The system shows superior scalability compared to existing methods. Our evaluation demonstrates perfect message recovery accuracy and significant improvements in text quality preservation compared to competing approaches. These results establish DeepStego as a significant advancement in practical steganographic applications, particularly suitable for scenarios requiring secure covert communication with high embedding capacity.
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
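DeepStego's actual embedding pipeline is not reproduced in this listing; the following toy sketch only illustrates the general idea behind synonym-choice steganography, with an invented three-slot synonym table:

```python
# Each slot offers two interchangeable words; picking index 0 or 1 hides one bit.
SYNONYMS = [("big", "large"), ("fast", "quick"), ("smart", "clever")]

def embed(bits):
    """Render the cover text by choosing the synonym indexed by each bit."""
    return [pair[bit] for pair, bit in zip(SYNONYMS, bits)]

def extract(words):
    """Recover the bits from which synonym appears in each slot."""
    return [pair.index(word) for pair, word in zip(SYNONYMS, words)]

secret = [1, 0, 1]
cover = embed(secret)
print(" ".join(cover))        # large fast clever
assert extract(cover) == secret
```

A real system replaces the fixed table with context-dependent candidates from a language model, which is what makes the output hard to distinguish from natural text and hard for steganalysis to detect.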
Open Access Article
Error Classification and Static Detection Methods in Tri-Programming Models: MPI, OpenMP, and CUDA
by Saeed Musaad Altalhi, Fathy Elbouraey Eassa, Sanaa Abdullah Sharaf, Ahmed Mohammed Alghamdi, Khalid Ali Almarhabi and Rana Ahmad Bilal Khalid
Computers 2025, 14(5), 164; https://doi.org/10.3390/computers14050164 - 28 Apr 2025
Abstract
The growing adoption of supercomputers across various scientific disciplines, particularly by researchers without a background in computer science, has intensified the demand for parallel applications. These applications are typically developed using a combination of programming models within languages such as C, C++, and Fortran. However, modern multi-core processors and accelerators necessitate fine-grained control to achieve effective parallelism, complicating the development process. To address this, developers commonly utilize high-level programming models such as Open Multi-Processing (OpenMP), Open Accelerators (OpenACCs), Message Passing Interface (MPI), and Compute Unified Device Architecture (CUDA). These models may be used independently or combined into dual- or tri-model applications to leverage their complementary strengths. However, integrating multiple models introduces subtle and difficult-to-detect runtime errors such as data races, deadlocks, and livelocks that often elude conventional compilers. This complexity is exacerbated in applications that simultaneously incorporate MPI, OpenMP, and CUDA, where the origin of runtime errors, whether from individual models, user logic, or their interactions, becomes ambiguous. Moreover, existing tools are inadequate for detecting such errors in tri-model applications, leaving a critical gap in development support. To address this gap, the present study introduces a static analysis tool designed specifically for tri-model applications combining MPI, OpenMP, and CUDA in C++-based environments. The tool analyzes source code to identify both actual and potential runtime errors prior to execution. Central to this approach is the introduction of error dependency graphs, a novel mechanism for systematically representing and analyzing error correlations in hybrid applications. 
By offering both error classification and comprehensive static detection, the proposed tool enhances error visibility and reduces manual testing effort. This contributes significantly to the development of more robust parallel applications for high-performance computing (HPC) and future exascale systems.
Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
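The paper's tool itself is not shown in this listing. As a purely illustrative sketch of the idea of analyzing source code before execution, a static check can be as simple as scanning source text for suspicious patterns, here a mismatch between blocking MPI send and receive calls (the scanned snippet is invented):

```python
import re

def count_calls(source, call):
    """Count textual occurrences of a call in C/C++ source (toy static analysis)."""
    return len(re.findall(rf"\b{call}\s*\(", source))

snippet = """
MPI_Send(buf, n, MPI_INT, 1, 0, MPI_COMM_WORLD);
MPI_Send(buf, n, MPI_INT, 1, 1, MPI_COMM_WORLD);
MPI_Recv(buf, n, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
"""

sends = count_calls(snippet, "MPI_Send")
recvs = count_calls(snippet, "MPI_Recv")
if sends != recvs:
    print(f"potential deadlock: {sends} MPI_Send vs {recvs} MPI_Recv")
```

A production tool works on a parsed representation rather than raw text and, as the abstract describes, correlates errors across the MPI, OpenMP, and CUDA layers via error dependency graphs.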
Open Access Article
A Study of COVID-19 Diagnosis Applying Artificial Intelligence to X-Rays Images
by Guilherme P. Cardim, Claudio B. Reis Neto, Eduardo S. Nascimento, Henrique P. Cardim, Wallace Casaca, Rogério G. Negri, Flávio C. Cabrera, Renivaldo J. dos Santos, Erivaldo A. da Silva and Mauricio Araujo Dias
Computers 2025, 14(5), 163; https://doi.org/10.3390/computers14050163 - 28 Apr 2025
Abstract
X-ray imaging, as a technique of non-destructive testing, has demonstrated considerable promise in COVID-19 diagnosis, particularly when supplemented with artificial intelligence (AI). Both radiologic technologists and AI researchers have raised concerns about having to use increased doses of radiation in order to obtain more refined images and, hence, enhance diagnostic precision. In this research, we assess whether disparities in radiation dose exposure considerably influence the reliability of AI-based diagnostic systems for COVID-19. A heterogeneous dataset of chest X-rays acquired at varying degrees of radiation exposure was run through four convolutional neural networks: VGG16, VGG19, ResNet50, and ResNet50V2. Results indicated accuracies above 91%, demonstrating that greater radiation exposure does not appreciably enhance diagnostic accuracy. Radiation doses low enough for use by human radiologists are therefore adequate for AI-based diagnosis. These findings are useful to the medical community, emphasizing that maximum diagnostic accuracy using AI does not require increased doses of radiation, thus further guaranteeing the safe application of X-ray imaging in COVID-19 diagnosis and possibly other medical and veterinary applications.
Full article
(This article belongs to the Special Issue Applications of Machine Learning and Artificial Intelligence for Healthcare)
Open Access Article
A Black-Box Analysis of the Capacity of ChatGPT to Generate Datasets of Human-like Comments
by Alejandro Rosete, Guillermo Sosa-Gómez and Omar Rojas
Computers 2025, 14(5), 162; https://doi.org/10.3390/computers14050162 - 27 Apr 2025
Abstract
This paper examines the ability of ChatGPT to generate synthetic comment datasets that mimic those produced by humans. To this end, a collection of datasets containing human comments, freely available in the Kaggle repository, was compared to comments generated via ChatGPT. The latter were based on prompts designed to provide the necessary context for approximating human results. It was hypothesized that the responses obtained from ChatGPT would demonstrate a high degree of similarity with the human-generated datasets with regard to vocabulary usage. Two categories of prompts were analyzed, depending on whether they specified the desired length of the generated comments. The evaluation of the results primarily focused on the vocabulary used in each comment dataset, employing several analytical measures. This analysis yielded noteworthy observations, which reflect the current capabilities of ChatGPT in this particular task domain. ChatGPT typically employs fewer words than human respondents and tends to provide repetitive answers. Furthermore, its responses vary considerably when the length is specified. Notably, ChatGPT employs a smaller vocabulary, which does not always align with human language, and the proportion of non-stop words in its output is higher than that found in human communication. Finally, ChatGPT’s vocabulary is more closely aligned with human language than the vocabularies of the two ChatGPT configurations are with each other. This alignment is particularly evident in the use of stop words. While it does not fully achieve the intended purpose, the generated vocabulary serves as a reasonable approximation, enabling specific applications such as the creation of word clouds.
Full article
(This article belongs to the Special Issue Harnessing Artificial Intelligence for Social and Semantic Understanding)
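The vocabulary comparisons described above reduce to a few set operations. A minimal sketch with invented comments and a toy stop-word list (the study's own measures and datasets are not reproduced here):

```python
STOP_WORDS = {"the", "a", "is", "and", "of", "to"}   # toy list for illustration

def vocab(comments):
    """The set of distinct lowercase tokens across a comment dataset."""
    return {w for c in comments for w in c.lower().split()}

def jaccard(v1, v2):
    """Vocabulary overlap: |intersection| / |union|."""
    return len(v1 & v2) / len(v1 | v2)

def non_stop_ratio(comments):
    """Proportion of tokens that are not stop words."""
    words = [w for c in comments for w in c.lower().split()]
    return sum(w not in STOP_WORDS for w in words) / len(words)

human = ["the movie is great and moving", "a great cast"]
model = ["the movie is great", "the movie is great and fun"]

print(round(jaccard(vocab(human), vocab(model)), 3))  # 0.556
print(round(non_stop_ratio(model), 3))                # 0.5
```

A smaller model vocabulary shrinks the intersection and thus the Jaccard score, while a higher non-stop-word ratio is the kind of difference the abstract reports between ChatGPT output and human communication.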
Open Access Article
High-Dimensional Cross Parity Codes and Parities from Lower Than (d − 1)-Dimensional Hyperplanes
by Jörg Keller
Computers 2025, 14(5), 161; https://doi.org/10.3390/computers14050161 - 26 Apr 2025
Abstract
Cross parity codes are mostly used as 2-dimensional codes, and sometimes as 3-dimensional codes. We argue that higher dimensions can help to reduce the number of parity bits, and thus deserve further investigation. As a start, we investigate parities from hyperplanes of dimension lower than d − 1 in d-dimensional parity codes, instead of parities from (d − 1)-dimensional hyperplanes as usual.
Full article
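For readers unfamiliar with cross parity codes, the 2-dimensional case and the parity-bit count that motivates higher dimensions can be sketched as follows. The d * n count is a simple assumption of one parity bit per axis-aligned (d − 1)-dimensional hyperplane of an n^d data cube, not a formula quoted from the paper:

```python
def row_col_parities(grid):
    """2-D cross parity: one parity bit per row and per column."""
    rows = [sum(r) % 2 for r in grid]
    cols = [sum(c) % 2 for c in zip(*grid)]
    return rows, cols

grid = [[1, 0, 1],
        [0, 1, 1],
        [1, 1, 1]]
print(row_col_parities(grid))   # ([0, 0, 1], [0, 0, 1])

def parity_bits(n, d):
    """Parity bits for an n^d data cube, one per (d-1)-dimensional hyperplane."""
    return d * n

# The same 64 data bits need fewer parity bits at higher dimension:
print(parity_bits(8, 2))   # 8^2 = 64 data bits -> 16 parity bits
print(parity_bits(4, 3))   # 4^3 = 64 data bits -> 12 parity bits
```

This shrinking parity overhead at higher d is the effect the abstract argues deserves further investigation.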
Open Access Article
Use of Explainable Artificial Intelligence for Analyzing and Explaining Intrusion Detection Systems
by Pamela Hermosilla, Mauricio Díaz, Sebastián Berríos and Héctor Allende-Cid
Computers 2025, 14(5), 160; https://doi.org/10.3390/computers14050160 - 25 Apr 2025
Abstract
The increase in malicious cyber activities has generated the need for effective tools in digital forensics and incident response. Artificial intelligence (AI) and its fields, specifically machine learning (ML) and deep learning (DL), have shown great potential to aid the task of processing and analyzing large amounts of information. However, models generated by DL are often considered “black boxes”, a name derived from the difficulty users face when trying to understand the decision-making process behind their results. This research seeks to address the challenges of transparency, explainability, and reliability posed by black-box models in digital forensics. To accomplish this, explainable artificial intelligence (XAI) is explored as a solution. This approach seeks to make DL models more interpretable and understandable by humans. The SHAP (SHapley Additive eXplanations) and LIME (Local Interpretable Model-agnostic Explanations) methods are implemented and evaluated as model-agnostic techniques to explain the predictions of the generated models for forensic analysis. By applying these methods to the XGBoost and TabNet models trained on the UNSW-NB15 dataset, the results indicated distinct global feature importance rankings between the model types and revealed greater consistency of local explanations for the tree-based XGBoost model compared to the deep learning-based TabNet. This study aims to make the decision-making process in these models transparent and to assess the confidence and consistency of XAI-generated explanations in a forensic context.
Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
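SHAP's additive explanations are grounded in Shapley values from cooperative game theory. A minimal exact computation over a toy three-feature "model" (the feature names and scores are invented; real SHAP implementations approximate this efficiently for tree and deep models):

```python
from itertools import permutations

def shapley_values(names, value_fn):
    """Exact Shapley values: each feature's marginal contribution averaged
    over all feature orderings (tractable only for a handful of features)."""
    phi = {f: 0.0 for f in names}
    perms = list(permutations(names))
    for order in perms:
        present = set()
        for f in order:
            before = value_fn(present)
            present.add(f)
            phi[f] += value_fn(present) - before
    return {f: total / len(perms) for f, total in phi.items()}

def toy_model(present):
    """Score of a network-flow record given a subset of available features."""
    score = 0.0
    if "dst_port" in present:
        score += 0.6
    if "duration" in present:
        score += 0.1
    if {"dst_port", "duration"} <= present:
        score += 0.2          # interaction term, shared by the two features
    return score

phi = shapley_values(["dst_port", "duration", "proto"], toy_model)
print(phi)   # dst_port ~0.7, duration ~0.2, proto 0.0
```

The efficiency property holds here: the three attributions sum to the full-model score of 0.9, which is exactly the additivity that makes SHAP values interpretable as a decomposition of a prediction.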
Open Access Article
Fine-Tuning Network Slicing in 5G: Unveiling Mathematical Equations for Precision Classification
by Nikola Anđelić, Sandi Baressi Šegota and Vedran Mrzljak
Computers 2025, 14(5), 159; https://doi.org/10.3390/computers14050159 - 25 Apr 2025
Abstract
Modern 5G network slicing centers on the precise design of virtual, independent networks operating over a shared physical infrastructure, each configured to meet specific service requirements. This approach plays a vital role in enabling highly customized and flexible service delivery within the 5G ecosystem. In this study, we present the application of a genetic programming symbolic classifier to a dedicated network slicing dataset, resulting in the generation of accurate symbolic expressions for classifying different network slice types. To address the issue of class imbalance, we employ oversampling strategies that produce balanced variations of the dataset. Furthermore, a random search strategy is used to explore the hyperparameter space comprehensively in pursuit of optimal classification performance. The derived symbolic models, refined through threshold tuning based on prediction correctness, are subsequently evaluated on the original imbalanced dataset. The proposed method demonstrates outstanding performance, achieving a perfect classification accuracy of 1.0.
Full article
(This article belongs to the Special Issue Distributed Computing Paradigms for the Internet of Things: Exploring Cloud, Edge, and Fog Solutions)
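A genetic programming symbolic classifier outputs closed-form expressions that are thresholded into class decisions. A hypothetical sketch of that final step (the expression, feature names, threshold, and slice labels below are invented, not taken from the paper):

```python
import math

# A made-up GP-style symbolic expression over two slice features.
def symbolic_score(delay_ms, bandwidth_mbps):
    return math.tanh(0.05 * bandwidth_mbps - 0.1 * delay_ms)

def classify(delay_ms, bandwidth_mbps, threshold=0.0):
    """Threshold the symbolic expression into a slice-type decision."""
    if symbolic_score(delay_ms, bandwidth_mbps) > threshold:
        return "eMBB"    # high-bandwidth, low-delay traffic
    return "mMTC"        # low-bandwidth, delay-tolerant traffic

print(classify(1.0, 100.0))   # eMBB
print(classify(50.0, 10.0))   # mMTC
```

The threshold itself is what the paper tunes, based on prediction correctness, before evaluating the expressions on the original imbalanced dataset.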
Open Access Article
PyChatAI: Enhancing Python Programming Skills—An Empirical Study of a Smart Learning System
by Manal Alanazi, Ben Soh, Halima Samra and Alice Li
Computers 2025, 14(5), 158; https://doi.org/10.3390/computers14050158 - 23 Apr 2025
Abstract
This paper presents strategies for effectively integrating AI tools into programming education and provides recommendations for enhancing student learning outcomes through intelligent educational systems. Learning computer programming is a cognitively demanding task that requires dedication, logical reasoning, and persistence. Many beginners struggle with debugging and often lack effective problem-solving strategies. To address these issues, this study investigates PyChatAI—a bilingual, AI-powered chatbot designed to support novice Python programmers by providing real-time feedback, answering coding-related questions, and fostering independent problem-solving skills. PyChatAI offers continuous, personalised assistance and is particularly beneficial for students who prefer remote or low-pressure learning environments. An empirical evaluation employing a Solomon Four-Group design revealed significant improvements across all programming skill areas, with especially strong gains in theoretical understanding, code writing, and debugging proficiency.
Full article
(This article belongs to the Special Issue Smart Learning Environments)
Open Access Article
Advanced Digital System for International Collaboration on Biosample-Oriented Research: A Multicriteria Query Tool for Real-Time Biosample and Patient Cohort Searches
by Alexandros Fridas, Anna Bourouliti, Loukia Touramanidou, Desislava Ivanova, Kostantinos Votis and Panagiotis Katsaounis
Computers 2025, 14(5), 157; https://doi.org/10.3390/computers14050157 - 23 Apr 2025
Abstract
The advancement of biomedical research depends on efficient data sharing, integration, and annotation to ensure reproducibility, accessibility, and cross-disciplinary collaboration. International collaborative research is crucial for advancing biomedical science and innovation but often faces significant barriers, such as data sharing limitations, inefficient sample management, and scalability challenges. Existing infrastructures for biosample and data repositories face challenges that limit large-scale research efforts. This study presents a novel platform designed to address these issues, enabling researchers to conduct high-quality research more efficiently and at reduced costs. The platform employs a modular, distributed architecture that ensures high availability, redundancy, and interoperability among diverse stakeholders, and integrates advanced features, including secure access management, comprehensive query functionalities, real-time availability reporting, and robust data mining capabilities. In addition, the platform supports dynamic, multi-criteria searches tailored to disease-specific patient profiles and biosample-related data across pre-analytical, post-analytical, and cryo-storage processes. By evaluating the platform’s modular architecture and pilot testing outcomes, this study demonstrates its potential to enhance interdisciplinary collaboration, streamline research workflows, and foster transformative advancements in biomedical research. A key innovation is the real-time dynamic e-consent (DRT e-consent) system, which allows donors to update their consent status in real time, ensuring compliance with ethical and regulatory frameworks such as GDPR and HIPAA. The system also supports multi-modal data integration, including genomic sequences, electronic health records (EHRs), and imaging data, enabling researchers to perform complex queries and generate comprehensive insights.
Full article
(This article belongs to the Special Issue Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024)
Open Access Article
Students Collaboratively Prompting ChatGPT
by Maria Perifanou and Anastasios A. Economides
Computers 2025, 14(5), 156; https://doi.org/10.3390/computers14050156 - 22 Apr 2025
Abstract
This study investigated how undergraduate students collaborated when working with ChatGPT and what teamwork approaches they used, focusing on students’ preferences, conflict resolution, reliance on AI-generated content, and perceived learning outcomes. In a course on the Applications of Information Systems, 153 undergraduate students were organized into teams of 3. Team members worked together to create a report and a presentation on a specific data mining technique, exploiting ChatGPT, internet resources, and class materials. The findings revealed no strong preference for a single collaborative mode, though Modes #2, #4, and #5 were marginally favored due to clearer structures, role clarity, or increased individual autonomy. Students understandably encountered initial disagreements (averaging 30.44%), which were eventually resolved—indicating constructive debates that improve critical thinking. Data also showed that students moderately modified ChatGPT’s responses (50% on average) and based nearly half (44%) of their overall output on AI-generated content, suggesting a balanced yet varied level of reliance on AI. Notably, a statistically significant relationship emerged between students’ perceived learning and actual performance, implying that self-assessment can complement objective academic measures. Students also employed a diverse mix of communication tools, from synchronous (phone calls) to asynchronous (Instagram) and collaborative platforms (Google Drive), valuing their ease of use but facing scheduling, technical, and engagement issues. Overall, these results reveal the need for flexible collaborative patterns, more supportive AI use policies, and versatile communication methods so that educators can apply collaborative learning effectively and maintain academic integrity.
Full article
(This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning)
Open Access Article
The Ubimus Plugging Framework: Deploying FPGA-Based Prototypes for Ubiquitous Music Hardware Design
by Damián Keller, Aman Jagwani and Victor Lazzarini
Computers 2025, 14(4), 155; https://doi.org/10.3390/computers14040155 - 21 Apr 2025
Abstract
The emergent field of embedded computing presents a challenging scenario for ubiquitous music (ubimus) design. Available tools demand specific technical knowledge—as exemplified in the techniques involved in programming integrated circuits of configurable logic units, known as field-programmable gate arrays (FPGAs). Low-level hardware description languages used for handling FPGAs involve a steep learning curve. Hence, FPGA programming offers a unique challenge to probe the boundaries of ubimus frameworks as enablers of fast and versatile prototyping. State-of-the-art hardware-oriented approaches point to the use of high-level synthesis as a promising programming technique. Furthermore, current FPGA system-on-chip (SoC) hardware with an associated onboard general-purpose processor may foster the development of flexible platforms for musical signal processing. Taking into account the emergence of an FPGA-based ecology of tools, we introduce the ubimus plugging framework. The procedures employed in the construction of a modular-synthesis library based on field-programmable gate array hardware, ModFPGA, are documented, and examples of musical projects applying key design principles are discussed.
Full article
Open Access Article
Generative Artificial Intelligence as a Catalyst for Change in Higher Education Art Study Programs
by Anna Ansone, Zinta Zālīte-Supe and Linda Daniela
Computers 2025, 14(4), 154; https://doi.org/10.3390/computers14040154 - 20 Apr 2025
Abstract
Generative Artificial Intelligence (AI) has emerged as a transformative tool in art education, offering innovative avenues for creativity and learning. However, concerns persist among educators regarding the potential misuse of text-to-image generators as unethical shortcuts. This study explores how bachelor’s-level art students perceive and use generative AI in artistic composition. Ten art students participated in a lecture on composition principles and completed a practical composition task using both traditional methods and generative AI tools. Their interactions were observed, followed by the administration of a questionnaire capturing their reflections. Qualitative analysis of the data revealed that students recognize the potential of generative AI for ideation and conceptual development but find its limitations frustrating for executing nuanced artistic tasks. This study highlights the current utility of generative AI as an inspirational and conceptual mentor rather than a precise artistic tool, underscoring the need for structured training and a balanced integration of generative AI with traditional design methods. Future research should focus on larger participant samples, assess the evolving capabilities of generative AI tools, and explore their potential to teach fundamental art concepts effectively while addressing concerns about academic integrity. Enhancing the functionality of these tools could bridge gaps between creativity and pedagogy in art education.
Full article
(This article belongs to the Special Issue Smart Learning Environments)
Open Access Article
Design of an Emotional Facial Recognition Task in a 3D Environment
by Gemma Quirantes-Gutierrez, Ángeles F. Estévez, Gabriel Artés Ordoño and Ginesa López-Crespo
Computers 2025, 14(4), 153; https://doi.org/10.3390/computers14040153 - 18 Apr 2025
Abstract
The recognition of emotional facial expressions is a key skill for social adaptation. Previous studies have shown that clinical and subclinical populations, such as those diagnosed with schizophrenia or autism spectrum disorder, have a significant deficit in the recognition of emotional facial expressions. These studies suggest that this may be the cause of their social dysfunction. Given the importance of this type of recognition in social functioning, the present study aims to design a tool to measure the recognition of emotional facial expressions using Unreal Engine 4 software to develop computer graphics in a 3D environment. Additionally, we tested it in a small pilot study with a sample of 37 university students, aged between 18 and 40, to compare the results with a more classical emotional facial recognition task. We also administered the SEES Scale and a set of custom-formulated questions to both groups to assess potential differences in activation levels between the two modalities (3D environment vs. classical format). The results of this initial pilot study suggest that students who completed the task in the classical format exhibited a greater lack of activation compared to those who completed the task in the 3D environment. Regarding the recognition of emotional facial expressions, both tasks were similar in two of the seven emotions evaluated. We believe that this study represents the beginning of a new line of research that could have important clinical implications.
Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
Open Access Article
Role of Roadside Units in Cluster Head Election and Coverage Maximization for Vehicle Emergency Services
by
Ravneet Kaur, Robin Doss, Lei Pan, Chaitanya Singla and Selvarajah Thuseethan
Computers 2025, 14(4), 152; https://doi.org/10.3390/computers14040152 - 18 Apr 2025
Abstract
Efficient clustering algorithms are critical for enabling the timely dissemination of emergency messages across maximum coverage areas in vehicular networks. While existing clustering approaches demonstrate stability and scalability, little work has focused on leveraging roadside units (RSUs) for cluster head selection. This research proposes a novel framework that utilizes RSUs to facilitate cluster head election, streamlining the cluster head selection process and mitigating clustering overhead and the broadcast storm problem. The proposed scheme mandates selecting an optimal number of cluster heads to maximize information coverage and prevent traffic congestion, thereby enhancing the quality of service through improved cluster head duration, reduced cluster formation time, expanded coverage area, and decreased overhead. The framework comprises three key components: (I) an acknowledgment-based system for legitimate vehicle entry into the RSU for cluster head selection; (II) an authoritative node behavior mechanism for choosing cluster heads from received notifications; and (III) the role of bridge nodes in maximizing the coverage of the established network. The comparative analysis evaluates the clustering framework’s performance under uniform and non-uniform vehicle speed scenarios for time-barrier-based emergency message dissemination in vehicular ad hoc networks. The results demonstrate that the proposed model achieves 100% information coverage in uniform highway speed scenarios, whereas 99.55% coverage is obtained in non-uniform scenarios. Furthermore, using RSUs accelerates the clustering process by over 50%, decreasing overhead and reducing cluster head election time. The proposed approach outperforms existing methods in the number of cluster heads, cluster head election time, total cluster formation time, and maximum information coverage across varying vehicle densities.
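The RSU-assisted election the abstract describes can be illustrated with a minimal sketch (this is an assumption-laden toy, not the paper's actual algorithm): an RSU collects acknowledgments from vehicles in its range and elects as cluster head the vehicle whose radio range covers the most neighbours, favouring slower vehicles for cluster stability.

```python
# Illustrative sketch only: elect the vehicle covering the most neighbours.
# Vehicle fields, radio range, and the tie-break rule are all assumptions.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int
    position: float  # position along the highway, in metres
    speed: float     # m/s

def elect_cluster_head(vehicles, radio_range=300.0):
    """Return the vehicle whose range covers the most other vehicles."""
    def coverage(v):
        return sum(1 for u in vehicles
                   if u.vid != v.vid and abs(u.position - v.position) <= radio_range)
    # Tie-break on lower speed: a slower vehicle stays in range longer,
    # giving a more stable cluster head.
    return max(vehicles, key=lambda v: (coverage(v), -v.speed))

fleet = [Vehicle(1, 0.0, 30.0), Vehicle(2, 250.0, 28.0),
         Vehicle(3, 500.0, 31.0), Vehicle(4, 900.0, 27.0)]
head = elect_cluster_head(fleet)  # vehicle 2 covers vehicles 1 and 3
```

In the real framework the RSU would run this election only over vehicles whose entry acknowledgments it has verified, and bridge nodes would extend coverage beyond the elected head's range.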
Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
Open Access Article
Kalman Filter-Enhanced Data Aggregation in LoRaWAN-Based IoT Framework for Aquaculture Monitoring in Sargassum sp. Cultivation
by
Misbahuddin Misbahuddin, Nunik Cokrowati, Muhamad Syamsu Iqbal, Obie Farobie, Apip Amrullah and Lusi Ernawati
Computers 2025, 14(4), 151; https://doi.org/10.3390/computers14040151 - 18 Apr 2025
Abstract
This study presents a LoRaWAN-based IoT framework for robust data aggregation in Sargassum sp. cultivation, integrating multi-sensor monitoring and Kalman filter-based data enhancement. The system employs water quality sensors—including temperature, salinity, light intensity, dissolved oxygen, total dissolved solids, and pH—deployed in 6 out of 14 cultivation containers. Sensor data are transmitted via LoRaWAN to The Things Network (TTN) and processed through an MQTT-based pipeline in Node-RED before visualization in ThingSpeak. The Kalman filter is applied to improve data accuracy and detect faulty sensor readings, ensuring reliable aggregation of environmental parameters. Experimental results demonstrate that this approach effectively maintains optimal cultivation conditions, reducing ecological risks such as eutrophication and improving Sargassum sp. growth monitoring. Findings indicate that balanced light intensity plays a crucial role in photosynthesis, with optimally exposed containers exhibiting the highest survival rates and biomass. However, nutrient supplementation showed limited impact due to uneven distribution, highlighting the need for improved delivery systems. By combining real-time monitoring with advanced data processing, this framework enhances decision-making in sustainable aquaculture, demonstrating the potential of LoRaWAN and Kalman filter-based methodologies for environmental monitoring and resource management.
Full article
(This article belongs to the Special Issue The Internet of Things—Current Trends, Applications, and Future Challenges (2nd Edition))
Open Access Systematic Review
A Systematic Literature Review of Machine Unlearning Techniques in Neural Networks
by
Ivanna Daniela Cevallos, Marco E. Benalcázar, Ángel Leonardo Valdivieso Caraguay, Jonathan A. Zea and Lorena Isabel Barona-López
Computers 2025, 14(4), 150; https://doi.org/10.3390/computers14040150 - 18 Apr 2025
Abstract
This review examines the field of machine unlearning in neural networks, an area driven by data privacy regulations such as the General Data Protection Regulation and the California Consumer Privacy Act. By analyzing 37 primary studies of machine unlearning applied to neural networks in both regression and classification tasks, this review thoroughly evaluates the foundational principles, key performance metrics, and methodologies used to assess these techniques. Special attention is given to recent advancements up to December 2023, including emerging approaches and frameworks. By categorizing and detailing these unlearning techniques, this work offers deeper insights into their evolution, effectiveness, efficiency, and broader applicability, thus providing a solid foundation for future research, development, and practical implementations in the realm of data privacy, model management, and compliance with evolving legal standards. Additionally, this review addresses the challenges of selectively removing data contributions at both the client and instance levels, highlighting the balance between computational costs and privacy guarantees.
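Instance-level removal, one of the challenges the review highlights, admits an exact solution for simple models: keep the sufficient statistics of training and subtract a point's contribution on a deletion request. The sketch below (an illustrative toy, not a technique from the reviewed studies) does this for an ordinary-least-squares line fit.

```python
# Hedged sketch of exact instance-level unlearning for a least-squares line:
# the model is its sufficient statistics, so forgetting a point is exact
# and needs no retraining pass over the remaining data.
class UnlearnableLine:
    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        self.n += 1; self.sx += x; self.sy += y
        self.sxx += x * x; self.sxy += x * y

    def forget(self, x, y):
        """Exact unlearning: subtract the point's contribution."""
        self.n -= 1; self.sx -= x; self.sy -= y
        self.sxx -= x * x; self.sxy -= x * y

    def fit(self):
        denom = self.n * self.sxx - self.sx ** 2
        slope = (self.n * self.sxy - self.sx * self.sy) / denom
        intercept = (self.sy - slope * self.sx) / self.n
        return slope, intercept

m = UnlearnableLine()
for x, y in [(0, 0.1), (1, 1.0), (2, 2.1), (3, 9.0)]:  # (3, 9.0): outlier
    m.add(x, y)
m.forget(3, 9.0)  # the model is now as if (3, 9.0) was never seen
```

Neural networks lack such closed-form sufficient statistics, which is precisely why the approximate unlearning techniques surveyed in the review trade privacy guarantees against the computational cost of retraining.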
Full article
(This article belongs to the Special Issue Uncertainty-Aware Artificial Intelligence)
Open Access Article
Enhancing Smart Home Efficiency with Heuristic-Based Energy Optimization
by
Yasir Abbas Khan, Faris Kateb, Ateeq Ur Rehman, Atif Sardar Khan, Fazal Qudus Khan, Sadeeq Jan and Ali Naser Alkhathlan
Computers 2025, 14(4), 149; https://doi.org/10.3390/computers14040149 - 16 Apr 2025
Abstract
In smart homes, heavy reliance on appliance automation has increased, along with the energy demand in developing urban areas, making efficient energy management an important factor. To address the scheduling of appliances under Demand-Side Management, this article explores the use of heuristic-based optimization techniques (HOTs) in smart homes (SHs) equipped with renewable and sustainable energy resources (RSERs) and energy storage systems (ESSs). The optimal model for minimization of the peak-to-average ratio (PAR), considering user comfort constraints, is validated by using different techniques, such as the Genetic Algorithm (GA), Binary Particle Swarm Optimization (BPSO), Wind-Driven Optimization (WDO), Bacterial Foraging Optimization (BFO) and the Genetic Modified Particle Swarm Optimization (GmPSO) algorithm, to minimize electricity costs, the PAR, carbon emissions and delay discomfort. This research investigates the energy optimization results of three real-world scenarios, which demonstrate the benefits of gradually assembling RSERs and ESSs and integrating them into SHs employing HOTs. The simulation results show substantial outcomes. In the scenario of Condition 1, GmPSO decreased carbon emissions from 300 kg to 69.23 kg, a 76.9% reduction; bill prices were cut from an unplanned value of 400.00 cents to 150 cents, a 62.5% reduction; and the PAR was decreased from an unscheduled value of 4.5 to 2.2, a 51.1% reduction. In the scenario of Condition 2, GmPSO reduced the PAR from 0.5 (unscheduled) to 0.2, a 60% reduction; costs from 500.00 cents to 200.00 cents, a 60% reduction; and carbon emissions from 250.00 kg to 150 kg, a 40% reduction. In the scenario of Condition 3, where batteries and RSERs were integrated, the GmPSO algorithm reduced carbon emissions from an unscheduled value of 208.3 kg to 158.3 kg, a 24% reduction; the energy cost was decreased from an unplanned value of 500 cents to 300 cents, a 40% reduction; and the PAR value fell by 57.1%, from an unscheduled value of 2.8 to 1.2.
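The PAR objective these heuristics minimize has a simple definition: peak load divided by average load over the scheduling horizon. A minimal sketch (the load profiles below are invented for illustration) shows why shifting appliances out of peak hours lowers it.

```python
# Minimal sketch of the peak-to-average ratio (PAR) objective.
# The hourly load profiles are illustrative, not data from the paper.
def par(load_profile):
    """Peak-to-average ratio of an hourly load profile (kW)."""
    return max(load_profile) / (sum(load_profile) / len(load_profile))

unscheduled = [1, 1, 1, 1, 9, 9, 1, 1]  # appliances bunched into two peak hours
scheduled   = [3, 3, 3, 3, 3, 3, 3, 3]  # same total energy, spread evenly

# par(unscheduled) = 9 / 3 = 3.0; par(scheduled) = 3 / 3 = 1.0
```

Heuristic schedulers such as GA, BPSO or GmPSO search over appliance start times to drive this ratio down while keeping user-comfort (delay) constraints satisfied; the cost and carbon objectives are handled analogously over tariff and emission profiles.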
Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Innovations in Resilient Energy Systems)
Open Access Review
Advancing Predictive Healthcare: A Systematic Review of Transformer Models in Electronic Health Records
by
Azza Mohamed, Reem AlAleeli and Khaled Shaalan
Computers 2025, 14(4), 148; https://doi.org/10.3390/computers14040148 - 14 Apr 2025
Abstract
This systematic study seeks to evaluate the use and impact of transformer models in the healthcare domain, with a particular emphasis on their usefulness in tackling key medical difficulties and performing critical natural language processing (NLP) functions. The research questions focus on how these models can improve clinical decision-making through information extraction and predictive analytics. Our findings show that transformer models, especially in applications like named entity recognition (NER) and clinical data analysis, greatly increase the accuracy and efficiency of processing unstructured data. Notably, case studies demonstrated a 30% boost in entity recognition accuracy in clinical notes and a 90% detection rate for malignancies in medical imaging. These contributions emphasize the revolutionary potential of transformer models in healthcare, and therefore their importance in enhancing resource management and patient outcomes. Furthermore, this paper emphasizes significant obstacles, such as the reliance on restricted datasets and the need for data format standardization, and provides a road map for future research to improve the applicability and performance of these models in real-world clinical settings.
Full article
(This article belongs to the Special Issue Applications of Machine Learning and Artificial Intelligence for Healthcare)
Topics
Topic in
Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe Hou
Deadline: 31 May 2025
Topic in
Applied Sciences, Computers, Electronics, Sensors, Virtual Worlds, IJGI
Simulations and Applications of Augmented and Virtual Reality, 2nd Edition
Topic Editors: Radu Comes, Dorin-Mircea Popovici, Calin Gheorghe Dan Neamtu, Jing-Jing Fang
Deadline: 20 June 2025
Topic in
Applied Sciences, Automation, Computers, Electronics, Sensors, JCP, Mathematics
Intelligent Optimization, Decision-Making and Privacy Preservation in Cyber–Physical Systems
Topic Editors: Lijuan Zha, Jinliang Liu, Jian Liu
Deadline: 31 August 2025
Topic in
Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025
Special Issues
Special Issue in
Computers
Smart Learning Environments
Guest Editor: Ananda Maiti
Deadline: 30 April 2025
Special Issue in
Computers
Future Trends in Computer Programming Education
Guest Editor: Stelios Xinogalos
Deadline: 31 May 2025
Special Issue in
Computers
Harnessing the Blockchain Technology in Unveiling Futuristic Applications
Guest Editors: Raman Singh, Shantanu Pal
Deadline: 15 June 2025
Special Issue in
Computers
Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024
Guest Editor: Xuhui Chen
Deadline: 30 June 2025