Computers, Volume 14, Issue 4 (April 2025) – 43 articles

Cover Story: Organizations in production and logistics often face challenges in evaluating their data mining capabilities, especially during the cost-intensive data preparation phase. While maturity models exist for broader domains such as data management and artificial intelligence, none specifically addresses the data mining process. Because of the complexity of this process, its phases have to be evaluated in detail. This article reviews relevant maturity models, identifies key factors influencing data preparation, and introduces a prototype data preparation maturity model. This marks an initial step toward a framework for assessing and improving data mining maturity in industrial contexts.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
24 pages, 896 KiB  
Article
The Ubimus Plugging Framework: Deploying FPGA-Based Prototypes for Ubiquitous Music Hardware Design
by Damián Keller, Aman Jagwani and Victor Lazzarini
Computers 2025, 14(4), 155; https://doi.org/10.3390/computers14040155 - 21 Apr 2025
Viewed by 339
Abstract
The emergent field of embedded computing presents a challenging scenario for ubiquitous music (ubimus) design. Available tools demand specific technical knowledge, as exemplified by the techniques involved in programming integrated circuits of configurable logic units, known as field-programmable gate arrays (FPGAs). The low-level hardware description languages used for handling FPGAs involve a steep learning curve. Hence, FPGA programming offers a unique challenge to probe the boundaries of ubimus frameworks as enablers of fast and versatile prototyping. State-of-the-art hardware-oriented approaches point to the use of high-level synthesis as a promising programming technique. Furthermore, current FPGA system-on-chip (SoC) hardware with an associated onboard general-purpose processor may foster the development of flexible platforms for musical signal processing. Taking into account the emergence of an FPGA-based ecology of tools, we introduce the ubimus plugging framework. The procedures employed in the construction of a modular-synthesis library based on field-programmable gate array hardware, ModFPGA, are documented, and examples of musical projects applying key design principles are discussed. Full article
18 pages, 2949 KiB  
Article
Generative Artificial Intelligence as a Catalyst for Change in Higher Education Art Study Programs
by Anna Ansone, Zinta Zālīte-Supe and Linda Daniela
Computers 2025, 14(4), 154; https://doi.org/10.3390/computers14040154 - 20 Apr 2025
Viewed by 281
Abstract
Generative Artificial Intelligence (AI) has emerged as a transformative tool in art education, offering innovative avenues for creativity and learning. However, concerns persist among educators regarding the potential misuse of text-to-image generators as unethical shortcuts. This study explores how bachelor’s-level art students perceive and use generative AI in artistic composition. Ten art students participated in a lecture on composition principles and completed a practical composition task using both traditional methods and generative AI tools. Their interactions were observed, followed by the administration of a questionnaire capturing their reflections. Qualitative analysis of the data revealed that students recognize the potential of generative AI for ideation and conceptual development but find its limitations frustrating for executing nuanced artistic tasks. This study highlights the current utility of generative AI as an inspirational and conceptual mentor rather than a precise artistic tool, highlighting the need for structured training and a balanced integration of generative AI with traditional design methods. Future research should focus on larger participant samples, assess the evolving capabilities of generative AI tools, and explore their potential to teach fundamental art concepts effectively while addressing concerns about academic integrity. Enhancing the functionality of these tools could bridge gaps between creativity and pedagogy in art education. Full article
(This article belongs to the Special Issue Smart Learning Environments)
13 pages, 1181 KiB  
Article
Design of an Emotional Facial Recognition Task in a 3D Environment
by Gemma Quirantes-Gutierrez, Ángeles F. Estévez, Gabriel Artés Ordoño and Ginesa López-Crespo
Computers 2025, 14(4), 153; https://doi.org/10.3390/computers14040153 - 18 Apr 2025
Viewed by 168
Abstract
The recognition of emotional facial expressions is a key skill for social adaptation. Previous studies have shown that clinical and subclinical populations, such as those diagnosed with schizophrenia or autism spectrum disorder, have a significant deficit in the recognition of emotional facial expressions. These studies suggest that this may be the cause of their social dysfunction. Given the importance of this type of recognition in social functioning, the present study aims to design a tool to measure the recognition of emotional facial expressions using Unreal Engine 4 software to develop computer graphics in a 3D environment. Additionally, we tested it in a small pilot study with a sample of 37 university students, aged between 18 and 40, to compare the results with a more classical emotional facial recognition task. We also administered the SEES Scale and a set of custom-formulated questions to both groups to assess potential differences in activation levels between the two modalities (3D environment vs. classical format). The results of this initial pilot study suggest that students who completed the task in the classical format exhibited a greater lack of activation compared to those who completed the task in the 3D environment. Regarding the recognition of emotional facial expressions, both tasks were similar in two of the seven emotions evaluated. We believe that this study represents the beginning of a new line of research that could have important clinical implications. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
18 pages, 699 KiB  
Article
Role of Roadside Units in Cluster Head Election and Coverage Maximization for Vehicle Emergency Services
by Ravneet Kaur, Robin Doss, Lei Pan, Chaitanya Singla and Selvarajah Thuseethan
Computers 2025, 14(4), 152; https://doi.org/10.3390/computers14040152 - 18 Apr 2025
Viewed by 99
Abstract
Efficient clustering algorithms are critical for enabling the timely dissemination of emergency messages across maximum coverage areas in vehicular networks. While existing clustering approaches demonstrate stability and scalability, little work has focused on leveraging roadside units (RSUs) for cluster head selection. This research proposes a novel framework that uses RSUs to facilitate cluster head election, streamlining the cluster head selection process and mitigating clustering overhead and the broadcast storm problem. The proposed scheme selects an optimal number of cluster heads to maximize information coverage and prevent traffic congestion, thereby enhancing the quality of service through improved cluster head duration, reduced cluster formation time, expanded coverage area, and decreased overhead. The framework comprises three key components: (I) an acknowledgment-based system for legitimate vehicle entry into the RSU for cluster head selection; (II) an authoritative node behavior mechanism for choosing cluster heads from received notifications; and (III) the role of bridge nodes in maximizing the coverage of the established network. A comparative analysis evaluates the clustering framework’s performance under uniform and non-uniform vehicle speed scenarios for time-barrier-based emergency message dissemination in vehicular ad hoc networks. The results demonstrate that the proposed model achieves 100% information coverage in uniform highway speed scenarios and 99.55% in non-uniform scenarios. Furthermore, using RSUs accelerates the clustering process by over 50% while decreasing overhead and cluster head election time. The proposed approach outperforms existing methods in the number of cluster heads, cluster head election time, total cluster formation time, and maximum information coverage across varying vehicle densities. Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
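The coverage-maximization component (III) can be illustrated with a small greedy sketch. This is not the paper's actual algorithm: the 1-D positions, radio range, and greedy set-cover strategy below are all illustrative assumptions.

```python
# Illustrative sketch only (not the paper's scheme): an RSU greedily elects
# cluster heads so every registered vehicle is within radio range of at
# least one head. Positions (metres along a highway) and range are made up.

def elect_cluster_heads(positions, radio_range):
    """Greedy set-cover: repeatedly pick the vehicle that covers the most
    still-uncovered vehicles until every vehicle is covered."""
    uncovered = set(positions)
    heads = []
    while uncovered:
        best = max(
            positions,
            key=lambda v: sum(1 for u in uncovered if abs(u - v) <= radio_range),
        )
        heads.append(best)
        uncovered = {u for u in uncovered if abs(u - best) > radio_range}
    return heads

# Six vehicles on a 1-D highway segment, with a hypothetical 150 m range.
vehicles = [0, 100, 120, 400, 520, 900]
heads = elect_cluster_heads(vehicles, radio_range=150)
```

Greedy selection keeps the number of cluster heads small, which is the property the abstract ties to reduced overhead and congestion.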
23 pages, 11427 KiB  
Article
Kalman Filter-Enhanced Data Aggregation in LoRaWAN-Based IoT Framework for Aquaculture Monitoring in Sargassum sp. Cultivation
by Misbahuddin Misbahuddin, Nunik Cokrowati, Muhamad Syamsu Iqbal, Obie Farobie, Apip Amrullah and Lusi Ernawati
Computers 2025, 14(4), 151; https://doi.org/10.3390/computers14040151 - 18 Apr 2025
Viewed by 191
Abstract
This study presents a LoRaWAN-based IoT framework for robust data aggregation in Sargassum sp. cultivation, integrating multi-sensor monitoring and Kalman filter-based data enhancement. The system employs water quality sensors—including temperature, salinity, light intensity, dissolved oxygen, total dissolved solids, and pH—deployed in 6 out of 14 cultivation containers. Sensor data are transmitted via LoRaWAN to The Things Network (TTN) and processed through an MQTT-based pipeline in Node-RED before visualization in ThingSpeak. The Kalman filter is applied to improve data accuracy and detect faulty sensor readings, ensuring reliable aggregation of environmental parameters. Experimental results demonstrate that this approach effectively maintains optimal cultivation conditions, reducing ecological risks such as eutrophication and improving Sargassum sp. growth monitoring. Findings indicate that balanced light intensity plays a crucial role in photosynthesis, with optimally exposed containers exhibiting the highest survival rates and biomass. However, nutrient supplementation showed limited impact due to uneven distribution, highlighting the need for improved delivery systems. By combining real-time monitoring with advanced data processing, this framework enhances decision-making in sustainable aquaculture, demonstrating the potential of LoRaWAN and Kalman filter-based methodologies for environmental monitoring and resource management. Full article
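The Kalman-filtering step described above can be sketched as a scalar filter smoothing one noisy sensor channel. The noise variances and temperature-like readings below are illustrative values, not the study's parameters.

```python
# Minimal scalar Kalman filter for smoothing one water-quality channel
# (e.g. temperature). q and r are hypothetical noise variances.

def kalman_smooth(readings, q=1e-3, r=0.25):
    """Filter readings under a constant-state model.
    q: process noise variance, r: measurement noise variance."""
    x, p = readings[0], 1.0          # initial estimate and its variance
    estimates = [x]
    for z in readings[1:]:
        p += q                       # predict: variance grows by process noise
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update: pull estimate toward measurement
        p *= (1 - k)                 # shrink the estimate's variance
        estimates.append(x)
    return estimates

# Noisy readings around 28 °C, with one outlier at 31.0.
raw = [28.0, 28.4, 27.6, 31.0, 28.1, 27.9]
smooth = kalman_smooth(raw)
```

The outlier is damped rather than followed exactly, which is how such a filter can flag faulty readings: a measurement far from the predicted state produces a large innovation `z - x`.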
55 pages, 2668 KiB  
Systematic Review
A Systematic Literature Review of Machine Unlearning Techniques in Neural Networks
by Ivanna Daniela Cevallos, Marco E. Benalcázar, Ángel Leonardo Valdivieso Caraguay, Jonathan A. Zea and Lorena Isabel Barona-López
Computers 2025, 14(4), 150; https://doi.org/10.3390/computers14040150 - 18 Apr 2025
Viewed by 278
Abstract
This review examines the field of machine unlearning in neural networks, an area driven by data privacy regulations such as the General Data Protection Regulation and the California Consumer Privacy Act. By analyzing 37 primary studies of machine unlearning applied to neural networks in both regression and classification tasks, this review thoroughly evaluates the foundational principles, key performance metrics, and methodologies used to assess these techniques. Special attention is given to recent advancements up to December 2023, including emerging approaches and frameworks. By categorizing and detailing these unlearning techniques, this work offers deeper insights into their evolution, effectiveness, efficiency, and broader applicability, thus providing a solid foundation for future research, development, and practical implementations in the realm of data privacy, model management, and compliance with evolving legal standards. Additionally, this review addresses the challenges of selectively removing data contributions at both the client and instance levels, highlighting the balance between computational costs and privacy guarantees. Full article
(This article belongs to the Special Issue Uncertainty-Aware Artificial Intelligence)
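The conceptual baseline for the surveyed techniques is "exact" unlearning: retraining from scratch without the forgotten instance. As a hedged illustration, the toy mean predictor below stands in for a neural network; it is not any method from the review.

```python
# Illustrative baseline only: exact instance-level unlearning by retraining
# without the forgotten sample. A mean predictor replaces a neural network
# so the equivalence is trivial to verify.

def train(data):
    """'Train' a model: here, simply the mean of the training data."""
    return sum(data) / len(data)

def unlearn(data, forget_index):
    """Exact unlearning: retrain on the dataset minus the forgotten point."""
    retained = data[:forget_index] + data[forget_index + 1:]
    return train(retained)

data = [1.0, 2.0, 3.0, 10.0]
model_full = train(data)             # trained on everything
model_unlearned = unlearn(data, 3)   # retrained without the last point
```

Approximate unlearning methods aim to reach (or provably bound the distance to) this retrained model without paying the full retraining cost, which is the computation-versus-privacy trade-off the review highlights.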
31 pages, 5128 KiB  
Article
Enhancing Smart Home Efficiency with Heuristic-Based Energy Optimization
by Yasir Abbas Khan, Faris Kateb, Ateeq Ur Rehman, Atif Sardar Khan, Fazal Qudus Khan, Sadeeq Jan and Ali Naser Alkhathlan
Computers 2025, 14(4), 149; https://doi.org/10.3390/computers14040149 - 16 Apr 2025
Viewed by 529
Abstract
In smart homes, heavy reliance on appliance automation has increased, along with the energy demand in developing urban areas, making efficient energy management an important factor. To address the scheduling of appliances under Demand-Side Management, this article explores the use of heuristic-based optimization techniques (HOTs) in smart homes (SHs) equipped with renewable and sustainable energy resources (RSERs) and energy storage systems (ESSs). The optimal model for minimization of the peak-to-average ratio (PAR), considering user comfort constraints, is validated using different techniques, such as the Genetic Algorithm (GA), Binary Particle Swarm Optimization (BPSO), Wind-Driven Optimization (WDO), Bacterial Foraging Optimization (BFO) and the Genetic Modified Particle Swarm Optimization (GmPSO) algorithm, to minimize electricity costs, the PAR, carbon emissions and delay discomfort. This research investigates the energy optimization results of three real-world scenarios, which demonstrate the benefits of gradually assembling RSERs and ESSs and integrating them into SHs employing HOTs. The simulation results show substantial outcomes: in the Condition 1 scenario, GmPSO decreased carbon emissions from 300 kg to 69.23 kg (a 76.9% reduction), cut bill prices from an unscheduled 400.00 cents to 150 cents (a 62.5% reduction), and decreased the PAR from an unscheduled 4.5 to 2.2 (a 51.1% reduction). In the Condition 2 scenario, GmPSO reduced the PAR from 0.5 (unscheduled) to 0.2 (a 60% reduction), costs from 500.00 cents to 200.00 cents (a 60% reduction), and carbon emissions from 250.00 kg to 150 kg (a 60% reduction). In the Condition 3 scenario, where batteries and RSERs were integrated, GmPSO reduced carbon emissions from an unscheduled 208.3 kg to 158.3 kg (a 24% reduction), decreased the energy cost from an unplanned 500 cents to 300 cents (a 40% reduction), and achieved a 57.1% reduction in the PAR, from an unscheduled 2.8 to 1.2. Full article
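The peak-to-average ratio (PAR) that the schedulers minimize is a simple statistic of the hourly load profile. The two toy profiles below are illustrative, not the paper's simulation data.

```python
# Peak-to-average ratio of a household load profile (one value per hour).
# Appliance scheduling shifts load in time; it does not remove energy use,
# so a good schedule flattens the peak while keeping the total constant.

def par(load_profile):
    """PAR = peak hourly load / mean hourly load."""
    return max(load_profile) / (sum(load_profile) / len(load_profile))

unscheduled = [1, 1, 1, 9, 1, 1, 1, 1]   # one sharp evening peak
scheduled   = [2, 2, 2, 3, 2, 2, 2, 1]   # same total energy, flattened

assert sum(unscheduled) == sum(scheduled)  # scheduling only shifts load
```

A lower PAR means a flatter demand curve, which is why the abstract reports PAR alongside cost and emissions as a scheduling objective.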
28 pages, 1521 KiB  
Review
Advancing Predictive Healthcare: A Systematic Review of Transformer Models in Electronic Health Records
by Azza Mohamed, Reem AlAleeli and Khaled Shaalan
Computers 2025, 14(4), 148; https://doi.org/10.3390/computers14040148 - 14 Apr 2025
Viewed by 499
Abstract
This systematic study seeks to evaluate the use and impact of transformer models in the healthcare domain, with a particular emphasis on their usefulness in tackling key medical difficulties and performing critical natural language processing (NLP) functions. The research questions focus on how these models can improve clinical decision-making through information extraction and predictive analytics. Our findings show that transformer models, especially in applications like named entity recognition (NER) and clinical data analysis, greatly increase the accuracy and efficiency of processing unstructured data. Notably, case studies demonstrated a 30% boost in entity recognition accuracy in clinical notes and a 90% detection rate for malignancies in medical imaging. These contributions emphasize the revolutionary potential of transformer models in healthcare, and therefore their importance in enhancing resource management and patient outcomes. Furthermore, this paper emphasizes significant obstacles, such as the reliance on restricted datasets and the need for data format standardization, and provides a road map for future research to improve the applicability and performance of these models in real-world clinical settings. Full article
13 pages, 5610 KiB  
Article
An Approach to Thermal Management and Performance Throttling for Federated Computation on a Low-Cost 3D ESP32-S3 Package Stack
by Yi Liu, Parth Sandeepbhai Shah, Tian Xia and Dryver Huston
Computers 2025, 14(4), 147; https://doi.org/10.3390/computers14040147 - 11 Apr 2025
Viewed by 192
Abstract
The rise of 3D heterogeneous packaging holds promise for increased performance in applications such as AI by bringing compute and memory modules into close proximity. This increased performance comes with increased thermal management challenges. This research explores the use of thermal sensing and load throttling combined with federated computation to manage localized internal heating in a multi-3D chip package. The overall concept is that individual chiplets may heat at different rates due to operational and geometric factors. Shifting computational loads from hot to cooler chiplets can prevent local overheating while maintaining overall computational output. This concept is verified with experiments in a low-cost test vehicle. The test vehicle mimics a 3D chiplet stack with a tightly stacked assembly of SoC devices. These devices can sense and report internal temperature and dynamically adjust frequency. The configuration is for ESP32-S3 microcontrollers to work on a federated computational task, while reporting internal temperature to a host controller. The tight packing of processors causes temperatures to rise, with those internal to the stack rising more quickly than external ones. With real-time temperature monitoring, when the temperatures exceed a threshold, the AI system reduces the processor frequency, i.e., throttles the processor, to save power and dynamically shifts part of the workload to other ESP32-S3s with lower temperatures. This approach maximizes overall efficiency while maintaining thermal safety without compromising computational power. Experimental results with up to six processors confirm the validity of the concept. Full article
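The throttle-and-shift policy described above can be sketched as a host-side control loop. The threshold, frequencies, and chiplet names below are illustrative assumptions, not the paper's configuration.

```python
# Illustrative host-side policy (not the paper's implementation): when a
# chiplet exceeds a temperature threshold, halve its clock and move work
# units to the coolest chiplet. All values are hypothetical.

THRESHOLD_C = 70.0

def rebalance(temps, freqs, work, shift=2):
    """temps (deg C), freqs (MHz), work (units) are dicts keyed by chiplet
    id; freqs and work are mutated in place."""
    coolest = min(temps, key=temps.get)
    for chip, t in temps.items():
        if t > THRESHOLD_C and chip != coolest:
            freqs[chip] = max(80, freqs[chip] // 2)   # throttle the clock
            moved = min(shift, work[chip])
            work[chip] -= moved                        # shed work units...
            work[coolest] += moved                     # ...to the coolest chip

# Inner chiplets of the stack heat faster than outer ones.
temps = {"inner0": 78.0, "inner1": 72.5, "outer0": 55.0, "outer1": 61.0}
freqs = {c: 240 for c in temps}   # ESP32-S3 nominal 240 MHz clock
work  = {c: 4 for c in temps}
rebalance(temps, freqs, work)
```

The total work stays constant while the hot inner chiplets slow down, which mirrors the paper's goal of thermal safety without losing overall computational output.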
23 pages, 1535 KiB  
Article
Review of Maturity Models for Data Mining and Proposal of a Data Preparation Maturity Model Prototype for Data Mining
by Florian Hochkamp, Anne Antonia Scheidler and Markus Rabe
Computers 2025, 14(4), 146; https://doi.org/10.3390/computers14040146 - 11 Apr 2025
Viewed by 322
Abstract
Companies face uncertainties evaluating their own capabilities when implementing data analytics or data mining. Data mining is a valuable process used to analyze data and support decisions based on the knowledge generated, creating a pipeline from the phases of data collection to data preparation and data mining. In order to assess the target state of a company in terms of its data mining capabilities, maturity models are viable tools. However, no maturity models exist that focus on the data mining process and its particular phases. This article discusses existing maturity models in the broader field of data mining, such as data management, machine learning, or artificial intelligence. Focusing on the most cost-relevant phase of data preparation, the critical influences for the development of a maturity model for the phase of data preparation are identified and categorized. Additionally, a data preparation maturity model prototype is proposed. This provides the first step towards the design of a maturity model for data mining. Full article
(This article belongs to the Special Issue IT in Production and Logistics)
25 pages, 436 KiB  
Review
Comparison of Bioelectric Signals and Their Applications in Artificial Intelligence: A Review
by Juarez-Castro Flavio Alfonso, Toledo-Rios Juan Salvador, Aceves-Fernández Marco Antonio and Tovar-Arriaga Saul
Computers 2025, 14(4), 145; https://doi.org/10.3390/computers14040145 - 11 Apr 2025
Viewed by 398
Abstract
This review examines the role of various bioelectrical signals in conjunction with artificial intelligence (AI) and analyzes how these signals are utilized in AI applications. It focuses on the applications of electroencephalography (EEG), electroretinography (ERG), electromyography (EMG), electrooculography (EOG), and electrocardiography (ECG) in diagnostic and therapeutic systems. Signal processing techniques are discussed, and relevant studies that have utilized these signals in various clinical and research settings are highlighted. Advances in signal processing and classification methodologies powered by AI have significantly improved accuracy and efficiency in medical analysis. The integration of AI algorithms with bioelectrical signal processing for real-time monitoring and diagnosis, particularly in personalized medicine, is emphasized. AI-driven approaches are shown to have the potential to enhance diagnostic precision and improve patient outcomes. However, further research is needed to optimize these models for diverse clinical environments and fully exploit the interaction between bioelectrical signals and AI technologies. Full article
35 pages, 1189 KiB  
Article
Towards a Better Understanding of Mobile Banking App Adoption and Use: Integrating Security, Risk, and Trust into UTAUT2
by Richard Apau, Elzbieta Titis and Harjinder Singh Lallie
Computers 2025, 14(4), 144; https://doi.org/10.3390/computers14040144 - 10 Apr 2025
Viewed by 213
Abstract
This paper expands the extended unified theory of acceptance and use of technology (UTAUT2) to include four additional constructs (security, risk, institutional trust, and technology trust), providing a more comprehensive understanding of mobile banking applications (m-banking apps) adoption. It also highlights the significant role of demographic factors in moderating the impact of these constructs, offering practical insights for promoting the use of mobile devices to access and manage banking services. Data were collected using an online survey from 315 mobile banking users and analysed using covariance-based structural equation modelling (CB-SEM). Most constructs of the baseline UTAUT2 were validated in the m-banking context, with the additional constructs confirmed to affect user intention to adopt m-banking apps, except perceived risk. The model explained 79% of the variance in behavioural intention (BI), and 54.7% in use behaviour (UB), achieving higher fit than the baseline UTAUT2. Age, gender, experience, income, and education moderated the impact of perceived security and institutional trust on BI; age, education, and experience moderated technology trust on BI; and age, gender, and experience moderated perceived security on UB. The guarantee of enhanced security, advanced privacy mechanisms, and trust should be considered paramount in future strategies aimed at promoting m-banking app adoption and use. Overall, the paper advances scientific knowledge by providing a more nuanced and comprehensive framework for understanding m-banking app adoption, validating new constructs, and offering practical recommendations for promoting m-banking usage. Full article
(This article belongs to the Special Issue Multimedia Data and Network Security)
37 pages, 353 KiB  
Review
A State-of-the-Art Review of Artificial Intelligence (AI) Applications in Healthcare: Advances in Diabetes, Cancer, Epidemiology, and Mortality Prediction
by Mariano Vargas-Santiago, Diana Assaely León-Velasco, Christian Efraín Maldonado-Sifuentes and Liliana Chanona-Hernandez
Computers 2025, 14(4), 143; https://doi.org/10.3390/computers14040143 - 10 Apr 2025
Viewed by 669
Abstract
Artificial Intelligence (AI) methodologies have profoundly influenced healthcare research, particularly in chronic disease management and public health. This paper provides a comprehensive state-of-the-art review of AI’s applications across diabetes, cancer, epidemiology, and mortality prediction. The analysis highlights advancements in machine learning (ML), deep learning (DL), and natural language processing (NLP) that enable robust predictive models and decision support systems, leading to significant clinical and public health outcomes. The study examines predictive modeling, pattern recognition, and decision support applications, addressing their respective challenges and potential in real-world healthcare settings. Emphasis is placed on the emerging role of explainable AI (XAI), multimodal data fusion, and privacy-preserving techniques such as federated learning, which aim to enhance interpretability, robustness, and ethical compliance. This paper underscores the vital role of interdisciplinary collaboration and adaptive AI systems in creating resilient, scalable, and patient-centric healthcare solutions. Full article
15 pages, 3742 KiB  
Article
An Innovative Approach to Topic Clustering for Social Media and Web Data Using AI
by Ioannis Kapantaidakis, Emmanouil Perakakis, George Mastorakis and Ioannis Kopanakis
Computers 2025, 14(4), 142; https://doi.org/10.3390/computers14040142 - 10 Apr 2025
Viewed by 375
Abstract
The vast amount of social media and web data offers valuable insights for purposes such as brand reputation management, topic research, competitive analysis, product development, and public opinion surveys. However, analysing these data to identify patterns and extract valuable insights is challenging due to the vast number of posts, which can number in the thousands within a single day. One practical approach is topic clustering, which creates clusters of mentions that refer to a specific topic. Following this process will create several manageable clusters, each containing hundreds or thousands of posts. These clusters offer a more meaningful overview of the discussed topics, eliminating the need to categorise each post manually. Several topic detection algorithms can achieve clustering of posts, such as LDA, NMF, BERTopic, etc. The existing algorithms, however, have several important drawbacks, including language constraints and slow or resource-intensive data processing. Moreover, the labels for the clusters typically consist of a few keywords that may not make sense unless one explores the mentions within the cluster. Recently, with the introduction of AI large language models, such as GPT-4, new techniques can be realised for topic clustering to address the aforementioned issues. Our novel approach (AI Mention Clustering) employs LLMs at its core to produce an algorithm for efficient and accurate topic clustering of web and social data. Our solution was tested on social and web data and compared to the popular existing algorithm of BERTopic, demonstrating superior resource efficiency and absolute accuracy of clustered documents. Furthermore, it produces summaries of the clusters that are easily understood by humans instead of just representative keywords. This approach enhances the productivity of social and web data researchers by providing more meaningful and interpretable results. Full article
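The general mention-clustering idea, setting aside the LLM component (which needs an external API), can be sketched as greedy similarity grouping over embedding vectors. The 2-D vectors and the 0.9 threshold below are toy assumptions, not the authors' method.

```python
# Toy sketch of mention clustering: greedy grouping by cosine similarity
# over pre-computed embeddings. Real systems (BERTopic, or the LLM-based
# approach above) use learned embeddings and generated labels; the 2-D
# vectors and threshold here are illustrative only.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def cluster(mentions, threshold=0.9):
    """Assign each mention to the first cluster whose founding vector it
    matches above the threshold; otherwise start a new cluster."""
    clusters = []  # list of (founder_vector, [texts])
    for text, vec in mentions:
        for founder, members in clusters:
            if cosine(vec, founder) >= threshold:
                members.append(text)
                break
        else:
            clusters.append((vec, [text]))
    return [members for _, members in clusters]

mentions = [
    ("battery drains fast", (1.0, 0.1)),
    ("battery life is poor", (0.9, 0.2)),
    ("great camera quality", (0.1, 1.0)),
]
groups = cluster(mentions)
```

Each resulting group is a manageable cluster of related mentions; the paper's contribution is having an LLM both form such groups and produce human-readable summaries for them instead of bare keywords.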
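The AI Mention Clustering algorithm itself is not detailed in this abstract. As a rough illustration of the embedding-similarity grouping such a pipeline typically performs before an LLM labels and summarises each cluster, here is a minimal pure-Python sketch; the `greedy_cluster` function, the 0.8 threshold, and the toy embeddings are illustrative assumptions, not the authors' method:

```python
import math

def cosine(u, v):
    # Cosine similarity; assumes non-zero embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def greedy_cluster(embeddings, threshold=0.8):
    """Assign each mention to the first cluster whose seed embedding is
    similar enough; otherwise start a new cluster. Each cluster keeps its
    first member's embedding as the comparison point."""
    clusters = []  # list of (seed embedding, member indices)
    for i, emb in enumerate(embeddings):
        for seed, members in clusters:
            if cosine(emb, seed) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((emb, [i]))
    return [members for _, members in clusters]
```

In a full pipeline, each resulting cluster of mentions would then be passed to an LLM to produce the human-readable summary the abstract describes.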
36 pages, 1629 KiB  
Review
A Systematic Review of Blockchain-Based Initiatives in Comparison to Best Practices Used in Higher Education Institutions
by Diana Laura Silaghi and Daniela Elena Popescu
Computers 2025, 14(4), 141; https://doi.org/10.3390/computers14040141 - 8 Apr 2025
Viewed by 674
Abstract
Blockchain technology, originally introduced through Bitcoin cryptocurrency in 2008, has rapidly expanded beyond its financial roots, offering innovative solutions for secure data management across various sectors, including education. Higher education institutions, faced with challenges in managing academic records, verifying degrees, assessing skills, and safeguarding personal data, have increasingly looked to blockchain for answers. Blockchain’s transparent, immutable, and decentralized nature provides potential solutions to these longstanding problems. This systematic review assesses blockchain-based proposals for academic certificate management, aiming to highlight globally recognized best practices, explore the latest applications, and identify key challenges hindering the widespread adoption of blockchain technology in education. A thorough discussion based on the findings introduces potential solutions to mitigate these challenges and provides insights into possible future research directions that could help overcome these obstacles. Full article
30 pages, 14418 KiB  
Article
LAVID: A Lightweight and Autonomous Smart Camera System for Urban Violence Detection and Geolocation
by Mohammed Azzakhnini, Houda Saidi, Ahmed Azough, Hamid Tairi and Hassan Qjidaa
Computers 2025, 14(4), 140; https://doi.org/10.3390/computers14040140 - 7 Apr 2025
Viewed by 384
Abstract
With the rise of digital video technologies and the proliferation of processing methods and storage systems, video-surveillance systems have received increasing attention over the last decade. However, the spread of cameras installed in public and private spaces makes it more difficult for human operators to perform real-time analysis of the large amounts of data produced by surveillance systems. Due to the advancement of artificial intelligence methods, many automatic video analysis tasks like violence detection have been studied from a research perspective, and are even beginning to be commercialized in industrial solutions. Nevertheless, most of these solutions adopt centralized architectures with costly servers utilized to process streaming videos sent from different cameras. Centralized architectures do not present the ideal solution due to the high cost, processing time issues, and network bandwidth overhead. In this paper, we propose a lightweight autonomous system for the detection and geolocation of violent acts. Our proposed system, named LAVID, is based on a depthwise separable convolution model (DSCNN) combined with a bidirectional long short-term memory network (BiLSTM) and implemented on a lightweight smart camera. We provide in this study a lightweight video-surveillance system consisting of low-cost autonomous smart cameras that are capable of detecting and identifying harmful behavior and geolocating violent acts that occur over a covered area in real time. Our proposed system, implemented using Raspberry Pi boards, represents a cost-effective solution with interoperability features, making it an ideal IoT solution to be integrated with other smart city infrastructure. Furthermore, our approach, implemented using optimized deep learning models and evaluated on several public datasets, has shown good results in terms of accuracy compared to state-of-the-art methods while reducing power and computational requirements. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
27 pages, 744 KiB  
Article
Microhooks: A Novel Framework to Streamline the Development of Microservices
by Omar Iraqi, Mohamed El Kadiri El Hassani and Anass Zouine
Computers 2025, 14(4), 139; https://doi.org/10.3390/computers14040139 - 7 Apr 2025
Viewed by 337
Abstract
The microservices architectural style has gained widespread adoption in recent years thanks to its ability to deliver high scalability and maintainability. However, the development process for microservices-based applications can be complex and challenging. Indeed, it often requires developers to manage a large number of distributed components with the burden of handling low-level, recurring needs, such as inter-service communication, brokering, event management, and data replication. In this article, we present Microhooks: a novel framework designed to streamline the development of microservices by allowing developers to focus on their business logic while declaratively expressing the so-called low-level needs. Based on the inversion of control and the materialized view patterns, among others, our framework automatically generates and injects the corresponding artifacts, leveraging 100% build-time code introspection and instrumentation, as well as context building, for optimized runtime performance. We provide the first implementation for the Java world, supporting the most popular containers and brokers, and adhering to the standard Java/Jakarta Persistence API. From the user perspective, Microhooks exposes an intuitive, container-agnostic, broker-neutral, and ORM framework-independent API. Evaluation of Microhooks against state-of-the-art practices has demonstrated its effectiveness in drastically reducing code size and complexity without incurring any considerable performance cost. Based on such promising results, we believe that Microhooks has the potential to become an essential component of the microservices development ecosystem. Full article
14 pages, 3430 KiB  
Article
Optimal Selection of Sampling Rates and Mother Wavelet for an Algorithm to Classify Power Quality Disturbances
by Jonatan A. Medina-Molina, Enrique Reyes-Archundia, José A. Gutiérrez-Gnecchi, Javier A. Rodríguez-Herrejón, Marco V. Chávez-Báez, Juan C. Olivares-Rojas and Néstor F. Guerrero-Rodríguez
Computers 2025, 14(4), 138; https://doi.org/10.3390/computers14040138 - 6 Apr 2025
Viewed by 301
Abstract
The introduction of renewable energy sources, distributed energy systems, and power electronics equipment has led to the emergence of the Smart Grid. However, these developments have also caused the worsening of power quality. Selecting the correct sampling frequency and feature extraction techniques is essential for appropriately analyzing power quality disturbances. This work compares the performance of an algorithm based on a Support Vector Machine and Discrete Wavelet Transform for the classification of power quality disturbances using eight sampling rates and five different mother wavelets. The algorithm was tested in noisy and noiseless scenarios to validate the methodology. The results indicate that a success rate of 99.9% is obtained for the noiseless signals using a sampling rate of 9.6 kHz, and 95.2% for signals with a signal-to-noise ratio of 30 dB with a sampling rate of 30 kHz. Full article
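The feature-extraction step is not reproduced in this abstract, but DWT-plus-SVM classifiers of this kind commonly feed per-level wavelet energies to the classifier. Here is a minimal pure-Python sketch using the Haar wavelet, the simplest of the candidate mother wavelets; the function names and the choice of energy features are assumptions for illustration:

```python
def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail)
    coefficients; an odd trailing sample is dropped."""
    approx = [(signal[i] + signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Energy of the detail coefficients at each decomposition level,
    a common feature vector for power-quality disturbance classifiers."""
    features = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        features.append(sum(d * d for d in detail))
    return features
```

A pure sinusoid yields low detail energies, while transients such as sags, swells, or notches concentrate energy in particular levels, which is what makes these features separable by an SVM.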
16 pages, 434 KiB  
Article
Quantum Testing of Recommender Algorithms on GPU-Based Quantum Simulators
by Chenxi Liu, W. Bernard Lee and Anthony G. Constantinides
Computers 2025, 14(4), 137; https://doi.org/10.3390/computers14040137 - 6 Apr 2025
Viewed by 284
Abstract
This study explores the application of quantum computing in asset management, focusing on the use of the Quantum Approximate Optimization Algorithm (QAOA) to solve specific classes of financial asset recommendation problems. While quantum computing holds promise for combinatorial optimization tasks, its application to portfolio management faces significant challenges in scalability for practical implementations. In this work, we model the problem using a graph representation where nodes represent investors, and edges reflect significant similarities in asset choices. We test the proposed method using quantum simulators, including cuQuantum, Cirq-GPU, and Cirq with IonQ, and compare the performance of quantum optimization against classical brute-force methods. Our results suggest that quantum algorithms may offer computational advantages for certain use cases, though classical heuristics also provide competitive performance for smaller datasets. This study contributes to the ongoing investigation into the potential of quantum computing for real-time financial decision-making, providing insights into both its applicability and limitations in asset management for larger and more complex investor datasets. Full article
19 pages, 2173 KiB  
Article
Digital Twins and the Stendhal Syndrome
by Franco Niccolucci and Achille Felicetti
Computers 2025, 14(4), 136; https://doi.org/10.3390/computers14040136 - 6 Apr 2025
Viewed by 266
Abstract
The “Stendhal Syndrome” mentioned in the title refers to the first (early 19th century) documented perception of the role of intangible aspects in characterising cultural heritage. This paper addresses the semantic organisation of data concerning the digital documentation of cultural heritage, considering its intangible dimension in the framework of Digital Twins. The intangible component was one of the aspects motivating the need to set up the Heritage Digital Twin (HDT) ontology and its extensions, published in a series of papers since early 2023. In this paper, we analyse how places, persons, and things may give value to a heritage asset, being linked to and supporting its intrinsic cultural significance. This development stems from the consideration of heritage studies and research carried out by scholars and organisations such as UNESCO and ICOMOS, which underline the paramount role of the intangible component in defining heritage assets. The paper then expands the previous semantic structure of the Heritage Digital Twin ontology with respect to the intangible aspects of a heritage asset, extending the HDT concepts by defining new classes and properties related to its intangible component. These are discussed in various cases concerning places, monuments, objects, and persons, and fully developed in examples. Full article
31 pages, 469 KiB  
Article
Enhancing Cryptographic Solutions for Resource-Constrained RFID Assistive Devices: Implementing a Resource-Efficient Field Montgomery Multiplier
by Atef Ibrahim and Fayez Gebali
Computers 2025, 14(4), 135; https://doi.org/10.3390/computers14040135 - 6 Apr 2025
Viewed by 202
Abstract
Radio Frequency Identification (RFID) assistive systems, which integrate RFID devices with IoT technologies, are vital for enhancing the independence, mobility, and safety of individuals with disabilities. These systems enable applications such as RFID navigation for blind users and RFID-enabled canes that provide real-time location data. Central to these systems are resource-constrained RFID devices that rely on RFID tags to collect and transmit data, but their limited computational capabilities make them vulnerable to cyberattacks, jeopardizing user safety and privacy. Implementing the Elliptic Curve Cryptography (ECC) algorithm is essential to mitigate these risks; however, its high computational complexity exceeds the capabilities of these devices. The fundamental operation of ECC is finite field multiplication, which is crucial for securing data. Optimizing this operation allows ECC computations to be executed without overloading the devices’ limited resources. Traditional multiplication designs are often unsuitable for such devices due to their excessive area and energy requirements. Therefore, this work tackles these challenges by proposing an efficient and compact field multiplier design optimized for the Montgomery multiplication algorithm, a widely used method in cryptographic applications. The proposed design significantly reduces both space and energy consumption while maintaining computational performance, making it well-suited for resource-constrained environments. ASIC synthesis results demonstrate substantial improvements in key metrics, including area, power consumption, Power-Delay Product (PDP), and Area-Delay Product (ADP), highlighting the multiplier’s efficiency and practicality. This innovation enables the implementation of ECC on RFID assistive devices, enhancing their security and reliability, thereby allowing individuals with disabilities to engage with assistive technologies more safely and confidently. Full article
(This article belongs to the Special Issue Wearable Computing and Activity Recognition)
17 pages, 5373 KiB  
Article
Real-Time Overhead Power Line Component Detection on Edge Computing Platforms
by Nico Surantha
Computers 2025, 14(4), 134; https://doi.org/10.3390/computers14040134 - 5 Apr 2025
Viewed by 256
Abstract
Regular inspection of overhead power line (OPL) systems is required to detect damage early and ensure the efficient and uninterrupted transmission of high-voltage electric power. In the past, these checks were conducted using line crawling, inspection robots, and helicopters. Yet, these traditional solutions are slow, costly, and hazardous. Advancements in drones, edge computing platforms, deep learning, and high-resolution cameras may enable real-time OPL inspections using drones. Some research has been conducted on OPL inspection with autonomous drones. However, it is essential to explore how to achieve real-time OPL component detection effectively and efficiently. In this paper, we report our research on OPL component detection on edge computing devices. The original OPL dataset is generated in this study. In this paper, we evaluate the detection performance with several sizes of training datasets. We also implement simple data augmentation to extend the size of datasets. The performance of the YOLOv7 model is also evaluated on several edge computing platforms, such as Raspberry Pi 4B, Jetson Nano, and Jetson Orin Nano. The model quantization method is used to improve the real-time performance of the detection model. The simulation results show that the proposed YOLOv7 model can achieve a mean average precision (mAP) of over 90%, while the hardware evaluation shows that real-time detection performance can be achieved in several circumstances. Full article
33 pages, 1066 KiB  
Review
The Ontology-Based Mapping of Microservice Identification Approaches: A Systematic Study of Migration Strategies from Monolithic to Microservice Architectures
by Idris Oumoussa and Rajaa Saidi
Computers 2025, 14(4), 133; https://doi.org/10.3390/computers14040133 - 5 Apr 2025
Viewed by 192
Abstract
The Microservice Architecture Style (MSA) has emerged as a significant computing paradigm in software engineering, with companies increasingly restructuring their monolithic systems to enhance digital performance and competitiveness. However, the migration process, particularly the microservice identification phase, presents complex challenges that require careful consideration. This study aimed to provide developers and researchers with a practical roadmap for microservice identification during legacy system migration while highlighting crucial migration steps and research requirements. Through a systematic mapping study following Kitchenham and Petersen’s guidelines, we analyzed various microservice identification approaches and developed a middleweight ontology that can be queried for key inputs, data modeling, identification algorithms, and performance evaluation metrics. Our research makes several significant contributions: a comprehensive analysis of existing identification methodologies, a multi-dimensional framework for categorizing and evaluating approaches, an examination of current research trajectories and literature gaps, an ontological framework specifically designed for microservice identification, and an outline of pressing challenges and future research directions. The study concluded that microservice identification remains a significant barrier in system migration efforts, highlighting the need for more research focused on developing effective identification techniques that consider various aspects, including roles and dependencies within a microservice architecture. This comprehensive analysis provides valuable insights for professionals and researchers working on microservice migration projects. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
23 pages, 3151 KiB  
Article
Scalability and Efficiency Analysis of Hyperledger Fabric and Private Ethereum in Smart Contract Execution
by Maaz Muhammad Khan, Fahd Sikandar Khan, Muhammad Nadeem, Taimur Hayat Khan, Shahab Haider and Dani Daas
Computers 2025, 14(4), 132; https://doi.org/10.3390/computers14040132 - 3 Apr 2025
Viewed by 677
Abstract
Blockchain technology has emerged as a transformative solution for secure, immutable, and decentralized data management across diverse domains, including economics, healthcare, and supply chain management. Given its soaring adoption, it is crucial to assess the suitability of various blockchain platforms for specific applications. This study evaluates the performance of Hyperledger Fabric (HF) and private Ethereum (Geth) to analyze their scalability (node count), throughput (transactions per second (TPS)), and latency (measured in milliseconds). A benchmarking tool was developed in-house to assess the execution of key smart contract functions—QueryUser, CreateUser, TransferMoney, and IssueMoney—under varying transaction loads (10–1000 transactions) and network sizes (2–16 nodes). The results indicate that HF performs significantly better than private Ethereum for invoke functions, achieving up to 5× the throughput and up to 26× lower latency. However, private Ethereum excels in query operations because of its account-based ledger model. While Hyperledger Fabric scales efficiently at moderate transaction volumes, it experiences concurrency limitations beyond 1000 transactions, whereas private Ethereum processes up to 10,000 transactions, albeit with performance fluctuations due to gas fees. The findings offer valuable insights into the strengths and tradeoffs of both platforms, informing optimal blockchain selection for enterprise applications that require high transaction efficiency. Full article
36 pages, 2159 KiB  
Review
Employing Blockchain, NFTs, and Digital Certificates for Unparalleled Authenticity and Data Protection in Source Code: A Systematic Review
by Leonardo Juan Ramirez Lopez and Genesis Gabriela Morillo Ledezma
Computers 2025, 14(4), 131; https://doi.org/10.3390/computers14040131 - 2 Apr 2025
Viewed by 680
Abstract
In higher education, especially in programming-intensive fields like computer science, safeguarding students’ source code is crucial to prevent theft that could impact learning and future careers. Traditional storage solutions like Google Drive are vulnerable to hacking and alterations, highlighting the need for stronger protection. This work explores digital technologies that enhance source code security, with a focus on Blockchain and NFTs. Due to Blockchain’s decentralized and immutable nature, NFTs can be used to control code ownership, improving security and traceability and preventing unauthorized access. This approach effectively addresses existing gaps in protecting academic intellectual property. However, as Bennett et al. highlight, while these technologies have significant potential, challenges remain in large-scale implementation and user acceptance. Despite these hurdles, integrating Blockchain and NFTs presents a promising opportunity to enhance academic integrity. Successful adoption in educational settings may require a more inclusive and innovative strategy. Full article
(This article belongs to the Section Blockchain Infrastructures and Enabled Applications)
23 pages, 1956 KiB  
Article
Artificial Intelligence in Neoplasticism: Aesthetic Evaluation and Creative Potential
by Su Jin Mun and Won Ho Choi
Computers 2025, 14(4), 130; https://doi.org/10.3390/computers14040130 - 2 Apr 2025
Viewed by 662
Abstract
This research investigates the aesthetic evaluation of AI-generated neoplasticist artworks, exploring how well artificial intelligence systems, specifically Midjourney, replicate the core principles of neoplasticism, such as geometric forms, balance, and color harmony. The background of this study stems from ongoing debates about the legitimacy of AI-generated art and how these systems engage with established artistic movements. The purpose of the research is to assess whether AI can produce artworks that meet aesthetic standards comparable to human-created works. The research utilized Monroe C. Beardsley’s aesthetic emotion criteria and Noël Carroll’s aesthetic experience criteria as a framework for evaluating the artworks. A logistic regression analysis was conducted to identify key compositional elements in AI-generated neoplasticist works. The findings revealed that AI systems excelled in areas such as unity, color diversity, and overall artistic appeal but showed limitations in handling monochromatic elements. The implications of this research suggest that while AI can produce high-quality art, further refinement is needed for more subtle aspects of design. This study contributes to understanding the potential of AI as a tool in the creative process, offering insights for both artists and AI developers. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
35 pages, 1880 KiB  
Article
Strengthening Cybersecurity Resilience: An Investigation of Customers’ Adoption of Emerging Security Tools in Mobile Banking Apps
by Irfan Riasat, Mahmood Shah and M. Sinan Gonul
Computers 2025, 14(4), 129; https://doi.org/10.3390/computers14040129 - 1 Apr 2025
Viewed by 566
Abstract
The rise in internet-based services has raised risks of data exposure. The manipulation and exploitation of sensitive data significantly impact individuals’ resilience—the ability to protect and prepare against cyber incidents. Emerging technologies seek to enhance cybersecurity resilience by developing various security tools. This study aims to explore the adoption of security tools using a qualitative research approach. Twenty-two semi-structured interviews were conducted with users of mobile banking apps from Pakistan. Data were analyzed using thematic analysis, which revealed that biometric authentication and SMS alerts are commonly used. Limited use of multifactor authentication has been observed, mainly due to a lack of awareness or implementation knowledge. Passwords are still regarded as a trusted and secure mechanism. The findings indicate that the adoption of security tools is based on perceptions of usefulness, perceived trust, and perceived ease of use, while knowledge and awareness play a moderating role. This study also proposes a framework by extending TAM to include multiple security tools and introducing knowledge and awareness as a moderator influencing users’ perceptions. The findings inform practical implications for financial institutions, application developers, and policymakers to ensure standardized policies that include security tools in online financial platforms, thereby enhancing overall cybersecurity resilience. Full article
19 pages, 13596 KiB  
Article
SMS3D: 3D Synthetic Mushroom Scenes Dataset for 3D Object Detection and Pose Estimation
by Abdollah Zakeri, Bikram Koirala, Jiming Kang, Venkatesh Balan, Weihang Zhu, Driss Benhaddou and Fatima A. Merchant
Computers 2025, 14(4), 128; https://doi.org/10.3390/computers14040128 - 1 Apr 2025
Viewed by 273
Abstract
The mushroom farming industry struggles to automate harvesting due to limited large-scale annotated datasets and the complex growth patterns of mushrooms, which complicate detection, segmentation, and pose estimation. To address this, we introduce a synthetic dataset with 40,000 unique scenes of white Agaricus bisporus and brown baby bella mushrooms, capturing realistic variations in quantity, position, orientation, and growth stages. Our two-stage pose estimation pipeline combines 2D object detection and instance segmentation with a 3D point cloud-based pose estimation network using a Point Transformer. By employing a continuous 6D rotation representation and a geodesic loss, our method ensures precise rotation predictions. Experiments show that processing point clouds with 1024 points and the 6D Gram–Schmidt rotation representation yields optimal results, achieving an average rotational error of 1.67° on synthetic data, surpassing current state-of-the-art methods in mushroom pose estimation. The model also generalizes well to real-world data, attaining a mean angle difference of 3.68° on a subset of the M18K dataset with ground-truth annotations. This approach aims to drive automation in harvesting, growth monitoring, and quality assessment in the mushroom industry. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
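The continuous 6D Gram–Schmidt rotation representation mentioned above maps two 3D vectors to an orthonormal basis. A minimal pure-Python sketch of the commonly used formulation follows; this is an illustration of the representation, not the authors' code:

```python
def rotation_from_6d(d6):
    """Map a 6D vector (two stacked 3D vectors) to the three orthonormal
    columns of a rotation matrix via Gram–Schmidt orthogonalization."""
    a1, a2 = d6[:3], d6[3:]

    def norm(v):
        s = sum(x * x for x in v) ** 0.5
        return [x / s for x in v]

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def cross(u, v):
        return [u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0]]

    b1 = norm(a1)                                  # first column
    proj = dot(b1, a2)
    b2 = norm([x - proj * y for x, y in zip(a2, b1)])  # orthogonalized second column
    b3 = cross(b1, b2)                             # third column, right-handed
    return [b1, b2, b3]
```

Because every 6D input (with non-parallel halves) maps to a valid rotation, the representation avoids the discontinuities of quaternions and Euler angles, which is what makes it well suited to regression with a geodesic loss.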
23 pages, 2184 KiB  
Article
Lossless Compression of Malaria-Infected Erythrocyte Images Using Vision Transformer and Deep Autoencoders
by Md Firoz Mahmud, Zerin Nusrat and W. David Pan
Computers 2025, 14(4), 127; https://doi.org/10.3390/computers14040127 - 1 Apr 2025
Viewed by 306
Abstract
Lossless compression of medical images allows for rapid image data exchange and faithful recovery of the compressed data for medical image assessment. There are many useful telemedicine applications, for example in diagnosing conditions such as malaria in resource-limited regions. This paper presents a novel machine learning-based approach where lossless compression of malaria-infected erythrocyte images is assisted by cutting-edge classifiers. To this end, we first use a Vision Transformer to classify images into two categories: those cells that are infected with malaria and those that are not. We then employ distinct deep autoencoders for each category, which not only reduces the dimensions of the image data but also preserves crucial diagnostic information. To ensure no loss in reconstructed image quality, we further compress the residuals produced by these autoencoders using the Huffman code. Simulation results show that the proposed method achieves lower overall bit rates and thus higher compression ratios than traditional compression schemes such as JPEG 2000, JPEG-LS, and CALIC. This strategy holds significant potential for effective telemedicine applications and can improve diagnostic capabilities in regions impacted by malaria. Full article
15 pages, 387 KiB  
Article
Analyzing Digital Political Campaigning Through Machine Learning: An Exploratory Study for the Italian Campaign for European Union Parliament Election in 2024
by Paolo Sernani, Angela Cossiri, Giovanni Di Cosimo and Emanuele Frontoni
Computers 2025, 14(4), 126; https://doi.org/10.3390/computers14040126 - 30 Mar 2025
Viewed by 357
Abstract
The rapid digitalization of political campaigns has reshaped electioneering strategies, enabling political entities to leverage social media for targeted outreach. This study investigates the impact of digital political campaigning during the 2024 EU elections using machine learning techniques to analyze social media dynamics. We introduce a novel dataset—Political Popularity Campaign—which comprises social media posts, engagement metrics, and multimedia content from the electoral period. By applying predictive modeling, we estimate key indicators such as post popularity and assess their influence on campaign outcomes. Our findings highlight the significance of micro-targeting practices, the role of algorithmic biases, and the risks associated with disinformation in shaping public opinion. Moreover, this research contributes to the broader discussion on regulating digital campaigning by providing analytical models that can aid policymakers and public authorities in monitoring election compliance and transparency. The study underscores the necessity for robust frameworks to balance the advantages of digital political engagement with the challenges of ensuring fair democratic processes.
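The abstract's "predictive modeling" of post popularity can be illustrated in miniature. The sketch below is entirely hypothetical: the features (log follower count, posting hour, a multimedia flag) and the synthetic target are illustrative assumptions, and ordinary least squares stands in for whatever model the paper actually uses.

```python
import numpy as np

# Hypothetical per-post engagement features (all names are illustrative):
# log follower count, hour of posting, multimedia flag.
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.normal(8, 2, n),        # log followers
    rng.uniform(0, 24, n),      # hour of posting
    rng.integers(0, 2, n),      # has multimedia content
]).astype(float)

# Synthetic "popularity" target (e.g., log of like count), driven mostly by
# follower count and the multimedia flag, plus noise.
y = 0.9 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.3, n)

# Ordinary least squares as a stand-in for the paper's predictive model.
A = np.column_stack([X, np.ones(n)])          # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

On real campaign data, the same shape of pipeline (feature matrix, fitted predictor, goodness-of-fit check) applies, with richer features and a stronger model taking the place of the linear fit.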
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)