Algorithms, Volume 18, Issue 4 (April 2025) – 62 articles

Cover Story: This work addresses switching structured linear systems, a class of dynamic systems whose links between the state, input, and output variables have unknown numerical values and change according to an exogenous signal. Geometric methods and tools developed to solve control and observation problems for this class of systems are presented. In particular, the work delves into the notions of invariance, controlled invariance, and conditioned invariance and their use in stating and proving conditions for the solvability of control and observation problems. The fundamental concepts and main results are explained through worked examples, with visual aid provided by directed graphs.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
22 pages, 4631 KiB  
Article
ChurnKB: A Generative AI-Enriched Knowledge Base for Customer Churn Feature Engineering
by Maryam Shahabikargar, Amin Beheshti, Wathiq Mansoor, Xuyun Zhang, Eu Jin Foo, Alireza Jolfaei, Ambreen Hanif and Nasrin Shabani
Algorithms 2025, 18(4), 238; https://doi.org/10.3390/a18040238 - 21 Apr 2025
Abstract
Customers are the cornerstone of business success across industries. Companies invest significant resources in acquiring new customers and, more importantly, retaining existing ones. However, customer churn remains a major challenge, leading to substantial financial losses. Addressing this issue requires a deep understanding of customers’ cognitive status and behaviours, as well as early signs of churn. Predictive and Machine Learning (ML)-based analysis, when trained with appropriate features indicative of customer behaviour and cognitive status, can be highly effective in mitigating churn. A robust ML-driven churn analysis depends on a well-developed feature engineering process. Traditional churn analysis studies have primarily relied on demographic, product usage, and revenue-based features, overlooking the valuable insights embedded in customer–company interactions. Recognizing the importance of domain knowledge and human expertise in feature engineering and building on our previous work, we propose the Customer Churn-related Knowledge Base (ChurnKB) to enhance feature engineering for churn prediction. ChurnKB utilizes textual data mining techniques such as Term Frequency-Inverse Document Frequency (TF-IDF), cosine similarity, regular expressions, word tokenization, and stemming to identify churn-related features within customer-generated content, including emails. To further enrich the structure of ChurnKB, we integrate Generative AI, specifically large language models, which offer flexibility in handling unstructured text and uncovering latent features, to identify and refine features related to customer cognitive status, emotions, and behaviours. 
Additionally, feedback loops are incorporated to validate and enhance the effectiveness of ChurnKB. Integrating knowledge-based features into machine learning models (e.g., Random Forest, Logistic Regression, Multilayer Perceptron, and XGBoost) improves their predictive performance relative to the baseline, with XGBoost’s F1 score increasing from 0.5752 to 0.7891. Beyond churn prediction, this approach potentially supports applications like personalized marketing, cyberbullying detection, hate speech identification, and mental health monitoring, demonstrating its broader impact on business intelligence and online safety. Full article
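The TF-IDF and cosine-similarity step of the feature-extraction pipeline described in the abstract can be sketched as follows; the sample e-mails, the seed churn vocabulary, and the crude suffix stemmer are illustrative assumptions, not materials from the ChurnKB study.

```python
import math
import re
from collections import Counter

# Illustrative customer e-mails and a seed churn vocabulary (assumptions).
EMAILS = [
    "I want to cancel my subscription, the service is too expensive",
    "Thanks for the quick support, everything works great",
    "Very disappointed, I am switching to a competitor next month",
]
CHURN_SEED = "cancel subscription disappointed switching competitor refund"

def stem(word):
    # Crude suffix stripping standing in for a real stemmer.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def tokenize(text):
    # Lower-case regex word tokenization followed by stemming.
    return [stem(w) for w in re.findall(r"[a-z]+", text.lower())]

def tfidf_vectors(docs):
    # Standard TF-IDF: term frequency weighted by inverse document frequency.
    tokenized = [tokenize(d) for d in docs]
    n = len(tokenized)
    df = Counter(t for toks in tokenized for t in set(toks))
    return [
        {t: (c / len(toks)) * math.log(n / df[t]) for t, c in Counter(toks).items()}
        for toks in tokenized
    ]

def cosine(a, b):
    # Cosine similarity between two sparse vectors stored as dicts.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Score each e-mail against the churn vocabulary (treated as one more document).
vectors = tfidf_vectors(EMAILS + [CHURN_SEED])
seed_vec = vectors[-1]
scores = [cosine(v, seed_vec) for v in vectors[:-1]]
for email, score in zip(EMAILS, scores):
    print(f"{score:.3f}  {email}")
```

E-mails mentioning cancellation or switching score above the neutral thank-you message, which is the kind of churn-related signal the knowledge base aggregates.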

31 pages, 9472 KiB  
Article
Mathematics-Driven Analysis of Offshore Green Hydrogen Stations
by Álvaro García-Ruiz, Pablo Fernández-Arias and Diego Vergara
Algorithms 2025, 18(4), 237; https://doi.org/10.3390/a18040237 - 21 Apr 2025
Abstract
Renewable energy technologies have become an increasingly important component of the global energy supply. In recent years, photovoltaic and wind energy have been the fastest-growing renewable sources. Although oceans present harsh environments, their estimated energy generation potential is among the highest. Ocean-based solutions are gaining significant momentum, driven by the advancement of offshore wind, floating solar, tidal, and wave energy, among others. The integration of various marine energy sources with green hydrogen production can facilitate the exploitation and transportation of renewable energy. This paper presents a mathematics-driven analysis for the simulation of a technical model designed as a generic framework applicable to any location worldwide and developed to analyze the integration of solar energy generation and green hydrogen production. It evaluates the impact of key factors such as solar irradiance, atmospheric conditions, water surface flatness, as well as the parameters of photovoltaic panels, electrolyzers, and adiabatic compressors, on both energy generation and hydrogen production capacity. The proposed mathematics-based framework serves as an innovative tool for conducting multivariable parametric analyses, selecting optimal design configurations based on specific solar energy and/or hydrogen production requirements, and performing a range of additional assessments including, but not limited to, risk evaluations, cause–effect analyses, and/or degradation studies. Enhancing the efficiency of solar energy generation and hydrogen production processes can reduce the required photovoltaic surface area, thereby simplifying structural and anchoring requirements and lowering associated costs. Simpler, more reliable, and cost-effective designs will foster the expansion of floating solar energy and green hydrogen production in marine environments. Full article
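The core of the generation chain the framework models, from irradiance to hydrogen output, reduces to a short calculation; every figure below (irradiance, efficiencies, hydrogen heating value) is a generic textbook assumption, not a parameter value from the paper.

```python
# Back-of-the-envelope chain: irradiance -> PV electric power -> H2 output.
G = 800.0          # solar irradiance on the floating array, W/m^2 (assumed)
area = 1000.0      # photovoltaic surface area, m^2 (assumed)
eta_pv = 0.20      # PV conversion efficiency (assumed)
eta_el = 0.70      # electrolyzer efficiency, HHV basis (assumed)
HHV_H2 = 39.4      # higher heating value of hydrogen, kWh/kg

p_elec_kw = G * area * eta_pv / 1000.0   # electric power, kW
h2_rate = p_elec_kw * eta_el / HHV_H2    # hydrogen production rate, kg/h

print(f"PV power: {p_elec_kw:.0f} kW, H2 production: {h2_rate:.2f} kg/h")
```

Raising either efficiency directly shrinks the photovoltaic area needed for a target hydrogen rate, which is the cost lever the abstract highlights.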

20 pages, 5586 KiB  
Article
iCOR: End-to-End Electrocardiography Morphology Classification Combining Multi-Layer Filter and BiLSTM
by Siti Nurmaini, Wisnu Jatmiko, Satria Mandala, Bambang Tutuko, Erwin Erwin, Alexander Edo Tondas, Annisa Darmawahyuni, Firdaus Firdaus, Muhammad Naufal Rachmatullah, Ade Iriani Sapitri, Anggun Islami, Akhiar Wista Arum and Muhammad Ikhwan Perwira
Algorithms 2025, 18(4), 236; https://doi.org/10.3390/a18040236 - 18 Apr 2025
Abstract
Accurate delineation of ECG signals is critical for effective cardiovascular diagnosis and treatment. However, previous studies indicate that models developed for specific datasets and environments perform poorly when used with varying ECG signal morphology characteristics. This paper presents a novel approach to ECG signal delineation using a multi-layer filter (MLF) combined with a bidirectional long short-term memory (BiLSTM) model, namely iCOR. The proposed iCOR architecture enhances noise removal and feature extraction, resulting in improved classification of the P-QRS-T-wave morphology with a simpler model. Our method is evaluated on a combination of two standard ECG databases, the Lobachevsky University Electrocardiography Database (LUDB) and the QT Database (QTDB). On unseen LUDB test sets, classification accuracy exceeds 90.4% and 98% for the record-based and beat-based approaches, respectively, with the beat-based approach outperforming the record-based approach across the overall performance metrics. Similar results hold on an unseen QTDB set, where the beat-based approach achieves accuracy above 97%. These results highlight the robustness and efficacy of the iCOR model across diverse ECG signal datasets. The proposed approach offers a significant advancement in ECG signal analysis, paving the way for more reliable and precise cardiac health monitoring. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))
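The multi-layer filtering idea can be illustrated with a cascade of moving-average passes over a noisy synthetic waveform; this is a generic denoising stand-in under invented data, not the iCOR filter design.

```python
import math
import random

random.seed(3)

def moving_average(signal, width=5):
    # Centered moving average with shrinking windows at the edges.
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def multi_layer_filter(signal, layers=3, width=5):
    # Each pass further suppresses noise; a stand-in for the MLF concept.
    for _ in range(layers):
        signal = moving_average(signal, width)
    return signal

# Synthetic "beat" waveform plus Gaussian noise (illustrative, not ECG data).
clean = [math.sin(2 * math.pi * i / 50) for i in range(200)]
noisy = [s + random.gauss(0.0, 0.3) for s in clean]
filtered = multi_layer_filter(noisy)

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

print(f"MSE vs clean - noisy: {mse(noisy, clean):.4f}, filtered: {mse(filtered, clean):.4f}")
```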

32 pages, 8687 KiB  
Article
Hybrid Deep Learning Methods for Human Activity Recognition and Localization in Outdoor Environments
by Yirga Yayeh Munaye, Metadel Addis, Yenework Belayneh, Atinkut Molla and Wasyihun Admass
Algorithms 2025, 18(4), 235; https://doi.org/10.3390/a18040235 - 18 Apr 2025
Abstract
Activity recognition and localization in outdoor environments involve identifying and tracking human movements using sensor data, computer vision, or deep learning techniques. This process is crucial for applications such as smart surveillance, autonomous systems, healthcare monitoring, and human–computer interaction. However, several challenges arise in outdoor settings, including varying lighting conditions, occlusions caused by obstacles, environmental noise, and the complexity of differentiating between similar activities. This study presents a hybrid deep learning approach that integrates human activity recognition and localization in outdoor environments using Wi-Fi signal data. The study focuses on applying the hybrid long short-term memory–bi-gated recurrent unit (LSTM-BIGRU) architecture, designed to enhance the accuracy of activity recognition and location estimation. Moreover, experiments were conducted using a real-world dataset collected with the PicoScene Wi-Fi sensing device, which captures both magnitude and phase information. The results demonstrated a significant improvement in accuracy for both activity recognition and localization tasks. To mitigate data scarcity, this study utilized the conditional tabular generative adversarial network (CTGAN) to generate synthetic channel state information (CSI) data. Additionally, carrier frequency offset (CFO) and cyclic shift delay (CSD) preprocessing techniques were implemented to mitigate phase fluctuations. The experiments were conducted in a line-of-sight (LoS) outdoor environment, where CSI data were collected using the PicoScene Wi-Fi sensor platform across four different activities at outdoor locations. Finally, a comparative analysis of the experimental results highlights the superior performance of the proposed hybrid LSTM-BIGRU model, achieving 99.81% and 98.93% accuracy for activity recognition and location prediction, respectively. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

30 pages, 683 KiB  
Article
Utilizing a Bounding Procedure Based on Simulated Annealing to Effectively Locate the Bounds for the Parameters of Radial Basis Function Networks
by Ioannis G. Tsoulos, Vasileios Charilogis and Dimitrios Tsalikakis
Algorithms 2025, 18(4), 234; https://doi.org/10.3390/a18040234 - 18 Apr 2025
Abstract
Radial basis function (RBF) networks are an established parametric machine learning tool that has been extensively utilized in data classification and data fitting problems. These machine learning tools have been applied in various scientific areas, such as problems in physics, chemistry, and medicine, with excellent results. A two-step technique is usually used to adjust the parameters of these models, which is in most cases extremely effective. However, it does not effectively explore the value space of the network parameters and often results in parameter stability problems. In this paper, a bounding technique is recommended that explores the value space of the parameters of these networks using intervals generated by a procedure based on the Simulated Annealing method. After a promising range of values for the network parameters is found, a genetic algorithm is applied within this range to adjust the parameters more effectively. The new method was applied to a wide range of classification and regression datasets from the relevant literature, and the results are reported in the current manuscript. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Swarm Systems)
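The two-stage idea, simulated annealing to locate a promising interval followed by a finer search inside it, can be sketched on a toy one-dimensional objective; the objective function, cooling schedule, and all constants below are illustrative assumptions, not the paper's RBF training setup.

```python
import math
import random

random.seed(42)

def objective(x):
    # Illustrative 1-D objective; the paper tunes RBF network parameters,
    # a toy function stands in here.
    return (x - 2.0) ** 2 + 0.5 * math.sin(5.0 * x)

def anneal_bounds(lo, hi, iters=5000, t0=2.0, sigma=0.5):
    # Simulated annealing; the interval spanned by the later (colder-phase)
    # accepted states is returned as a promising range for a finer search,
    # e.g., the genetic algorithm of the paper's second stage.
    x = random.uniform(lo, hi)
    fx = objective(x)
    best_x, best_f = x, fx
    accepted = [x]
    for i in range(1, iters + 1):
        t = t0 / i                                   # cooling schedule
        cand = min(hi, max(lo, x + random.gauss(0.0, sigma)))
        fc = objective(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            accepted.append(x)
            if fx < best_f:
                best_x, best_f = x, fx
    tail = accepted[len(accepted) // 2:]             # colder-phase samples
    return min(tail), max(tail), best_x

low, high, best = anneal_bounds(-10.0, 10.0)
print(f"promising interval: [{low:.2f}, {high:.2f}], best sample: {best:.2f}")
```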

26 pages, 8000 KiB  
Article
Patient-Specific Hyperparameter Optimization of a Deep Learning-Based Tumor Autocontouring Algorithm on 2D Liver, Prostate, and Lung Cine MR Images: A Pilot Study
by Gawon Han, Keith Wachowicz, Nawaid Usmani, Don Yee, Jordan Wong, Arun Elangovan, Jihyun Yun and B. Gino Fallone
Algorithms 2025, 18(4), 233; https://doi.org/10.3390/a18040233 - 18 Apr 2025
Abstract
Linear accelerator–magnetic resonance (linac-MR) hybrid systems allow for real-time magnetic resonance imaging (MRI)-guided radiotherapy for more accurate dose delivery to the tumor and improved sparing of the adjacent healthy tissues. However, for real-time tumor detection, it is unfeasible for a human expert to manually contour (gold standard) the tumor at the fast imaging rate of a linac-MR. This study aims to develop a neural network-based tumor autocontouring algorithm with patient-specific hyperparameter optimization (HPO) and to validate its contouring accuracy using in vivo MR images of cancer patients. Two-dimensional (2D) intrafractional MR images were acquired at 4 frames/s using 3 tesla (T) MRI from 11 liver, 24 prostate, and 12 lung cancer patients. A U-Net architecture was applied for tumor autocontouring and was further enhanced by implementing HPO using the Covariance Matrix Adaptation Evolution Strategy. Six hyperparameters were optimized for each patient, for which intrafractional images and experts’ manual contours were input into the algorithm to find the optimal set of hyperparameters. For evaluation, Dice’s coefficient (DC), centroid displacement (CD), and Hausdorff distance (HD) were computed between the manual contours and autocontours. The performance of the algorithm was benchmarked against two standardized autosegmentation methods: non-optimized U-Net and nnU-Net. For the proposed algorithm, the mean (standard deviation) DC, CD, and HD of the 47 patients were 0.92 (0.04), 1.35 (1.03), and 3.63 (2.17) mm, respectively. Compared to the two benchmarking autosegmentation methods, the proposed algorithm achieved the best overall performance in terms of contouring accuracy and speed. This work presents the first tumor autocontouring algorithm applicable to the intrafractional MR images of liver and prostate cancer patients for real-time tumor-tracked radiotherapy. 
The proposed algorithm performs patient-specific HPO, enabling accurate tumor delineation comparable to that of experts. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))
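The three evaluation metrics used in the study (Dice's coefficient, centroid displacement, and Hausdorff distance) can be computed on small binary masks as follows; the toy masks and unit pixel spacing are illustrative assumptions.

```python
import math

def mask_points(mask):
    # Coordinates of all foreground pixels in a 2D binary mask.
    return [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]

def dice(a, b):
    # Dice's coefficient: 2|A ∩ B| / (|A| + |B|).
    pa, pb = set(mask_points(a)), set(mask_points(b))
    return 2 * len(pa & pb) / (len(pa) + len(pb))

def centroid_displacement(a, b):
    # Euclidean distance between the two mask centroids.
    def centroid(pts):
        return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
    (r1, c1), (r2, c2) = centroid(mask_points(a)), centroid(mask_points(b))
    return math.hypot(r1 - r2, c1 - c2)

def hausdorff(a, b):
    # Symmetric Hausdorff distance between the two point sets.
    pa, pb = mask_points(a), mask_points(b)
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    h = lambda xs, ys: max(min(d(x, y) for y in ys) for x in xs)
    return max(h(pa, pb), h(pb, pa))

# Toy manual contour vs autocontour (illustrative masks).
manual = [[0, 1, 1, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]
auto   = [[0, 1, 1, 1],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]

print(f"DC={dice(manual, auto):.3f}, "
      f"CD={centroid_displacement(manual, auto):.3f}, "
      f"HD={hausdorff(manual, auto):.3f}")
```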

25 pages, 10513 KiB  
Article
A Comparative Study of Machine Learning Techniques for Cell Annotation of scRNA-Seq Data
by Shahid Ahmad Wani, SMK Quadri, Mohammad Shuaib Mir and Yonis Gulzar
Algorithms 2025, 18(4), 232; https://doi.org/10.3390/a18040232 - 18 Apr 2025
Abstract
Accurate cell type annotation is a critical step in single-cell RNA sequencing (scRNA-seq) analysis, enabling deeper insights into cellular heterogeneity and biological processes. In this study, we conducted a comprehensive comparative evaluation of various machine learning techniques, including support vector machine (SVM), decision tree, random forest, logistic regression, gradient boosting, k-nearest neighbour, transformer, and naive Bayes, to determine their effectiveness for single-cell annotation. These methods were evaluated using four diverse datasets comprising hundreds of cell types across several tissues. Our results revealed that SVM consistently outperformed other techniques, emerging as the top performer in three out of the four datasets, followed closely by logistic regression. Most methods demonstrated robust capabilities in annotating major cell types and identifying rare cell populations, though naive Bayes was the least effective due to its inherent limitations in handling high-dimensional and interdependent data. This study provides valuable insights into the relative strengths and weaknesses of machine learning methods for single-cell annotation, offering guidance for selecting appropriate techniques in scRNA-seq analyses. Full article
(This article belongs to the Special Issue Advanced Research on Machine Learning Algorithms in Bioinformatics)

37 pages, 980 KiB  
Review
Digital Transformation in Aftersales and Warranty Management: A Review of Advanced Technologies in I4.0
by Vicente González-Prida, Carlos Parra Márquez, Pablo Viveros Gunckel, Fredy Kristjanpoller Rodríguez and Adolfo Crespo Márquez
Algorithms 2025, 18(4), 231; https://doi.org/10.3390/a18040231 - 17 Apr 2025
Abstract
This research examines how Industry 4.0 technologies such as artificial intelligence (AI), the Internet of Things (IoT), and digital twins (DT) are used in the digital transformation of warranty management. It focuses on converting traditional warranty management practices from reactive systems to predictive and proactive ones, improving operational performance and customer experiences. Building on an established eight-phase framework for warranty management, this paper reviews machine learning (ML), natural language processing (NLP), and predictive analytics, among other advanced technologies, as means to enhance warranty optimization processes. Applications in the automotive, railway, and aeronautics industries have achieved substantial gains, including optimized resource utilization, cost savings, and tailored services. The study also describes the obstacles posed by capital investment, labor training requirements, and data protection concerns, and suggests implementation sequencing and staff education approaches as solutions. In line with the ongoing evolution of Industry 4.0, the conclusion highlights how digital warranty management advancements optimize resources and reduce costs while adhering to international standards and ethical data practices. Full article

22 pages, 2988 KiB  
Article
Scalable Resource Provisioning Framework for Fog Computing Using LLM-Guided Q-Learning Approach
by Bhargavi Krishnamurthy and Sajjan G. Shiva
Algorithms 2025, 18(4), 230; https://doi.org/10.3390/a18040230 - 17 Apr 2025
Abstract
Fog computing is a growing distributed computing platform adopted by industries today because it performs real-time data analysis close to the edge of the IoT network. It offers cloud capabilities at the edge of fog networks with improved efficiency and flexibility. As the demands of Internet of Things (IoT) devices keep varying, resource allocation policies must be rapidly modified to satisfy them; constant fluctuation of demand leads to over- or under-provisioning of resources. Because the computing capability of fog nodes is small, resource provisioning policies must also reduce delay and bandwidth consumption. In this paper, a novel large language model (LLM)-guided Q-learning framework is designed and developed. The uncertainty in the fog environment, in terms of incurred delay, bandwidth usage, and heterogeneity of fog nodes, is represented by the LLM model, and the reward shaping of the Q-learning agent is enriched with the heuristic value of the LLM model. Experimental results show that the proposed framework performs well with respect to processing delay, energy consumption, load balancing, and service level agreement violation under both finite and infinite fog computing environments. The results are further validated through the expected-value-analysis statistical methodology. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

12 pages, 551 KiB  
Article
Deep-Learning-Based Optimization of the Signal/Background Ratio for Λ Particles in the CBM Experiment at FAIR
by Ivan Kisel, Robin Lakos and Gianna Zischka
Algorithms 2025, 18(4), 229; https://doi.org/10.3390/a18040229 - 16 Apr 2025
Abstract
Machine learning algorithms have become essential tools in modern physics experiments, enabling the precise and efficient analysis of large-scale experimental data. The Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) demands innovative methods for processing the vast data volumes generated at high collision rates of up to 10 MHz. This study presents a deep-learning-based approach to enhance the signal/background (S/B) ratio for Λ particles within the Kalman Filter (KF) Particle Finder framework. Using the Artificial Neural Networks for First Level Event Selection (ANN4FLES) package of CBM, a multi-layer perceptron model was designed and trained on simulated data to classify Λ particle candidates as signal or background. The model achieved over 98% classification accuracy, enabling significant reductions in background—in particular, a strong suppression of the combinatorial background that lacks physical meaning—while preserving almost the whole Λ particle signal. This approach improved the S/B ratio by a factor of 10.97, demonstrating the potential of deep learning to complement existing particle reconstruction techniques and contribute to the advancement of data analysis methods in heavy-ion physics. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)

20 pages, 3991 KiB  
Article
A New GIS-Based Detection Technique for Urban Heat Islands Using the Fuzzy C-Means Clustering Algorithm: A Case Study of Naples (Italy)
by Rosa Cafaro, Barbara Cardone, Valeria D’Ambrosio, Ferdinando Di Martino and Vittorio Miraglia
Algorithms 2025, 18(4), 228; https://doi.org/10.3390/a18040228 - 15 Apr 2025
Abstract
This study proposes a novel urban heat island detection method implemented in a GIS-based framework, designed to identify the most critical urban areas during heatwave events. The framework employs the fuzzy C-means clustering algorithm with remotely sensed land surface temperature and normalized difference vegetation index data to delineate and visualize hotspots. The proposed approach is compared with other established methods for urban heat island detection to evaluate their relative accuracy and effectiveness. This methodology integrates advanced spatial analysis with environmental indicators such as vegetation cover and permeable open spaces to assess urban vulnerability. The city of Naples, Italy, serves as a case study for testing the framework. The results from the case study indicate that the proposed method outperforms alternative methods in identifying heat hotspots, providing higher accuracy and suggesting potential adaptability to other urban contexts. This GIS-based approach not only provides a robust tool for urban climate assessment but also serves as a decision support framework that enables urban planners and policymakers to identify critical areas and prioritize interventions for climate adaptation and mitigation. Full article
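A minimal fuzzy C-means sketch on synthetic (land-surface temperature, NDVI) pixel features, assuming two clusters; the data, scaling, and cluster count are illustrative assumptions, not the Naples case study.

```python
import math
import random

random.seed(1)

# Synthetic (LST, NDVI) pixel features in [0, 1]: a warm, sparsely vegetated
# group and a cool, vegetated group (invented data).
pixels = [(random.gauss(0.8, 0.05), random.gauss(0.2, 0.05)) for _ in range(30)]
pixels += [(random.gauss(0.3, 0.05), random.gauss(0.7, 0.05)) for _ in range(30)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def fcm(points, c=2, m=2.0, iters=100):
    # Fuzzy C-means: alternate membership updates (from relative distances
    # to each centroid) and fuzzy-weighted centroid updates.
    n = len(points)
    centers = random.sample(points, c)        # initialize at two distinct pixels
    u = [[0.0] * c for _ in range(n)]
    for _ in range(iters):
        for i, p in enumerate(points):
            d = [max(dist(p, centers[j]), 1e-12) for j in range(c)]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
        for j in range(c):
            w = [u[i][j] ** m for i in range(n)]
            tot = sum(w)
            centers[j] = (
                sum(wi * p[0] for wi, p in zip(w, points)) / tot,
                sum(wi * p[1] for wi, p in zip(w, points)) / tot,
            )
    return centers, u

centers, u = fcm(pixels)
hot = max(range(2), key=lambda j: centers[j][0])   # cluster with the highest LST
print("hotspot centroid (LST, NDVI):", tuple(round(v, 2) for v in centers[hot]))
```

The cluster whose centroid has high temperature and low vegetation plays the role of the heat-island hotspot in this toy setting.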

30 pages, 10466 KiB  
Article
Prompt Once, Segment Everything: Leveraging SAM 2 Potential for Infinite Medical Image Segmentation with a Single Prompt
by Juan D. Gutiérrez, Emilio Delgado, Carlos Breuer, José M. Conejero and Roberto Rodriguez-Echeverria
Algorithms 2025, 18(4), 227; https://doi.org/10.3390/a18040227 - 14 Apr 2025
Abstract
Semantic segmentation of medical images holds significant potential for enhancing diagnostic and surgical procedures. Radiology specialists can benefit from automated segmentation tools that facilitate identifying and isolating regions of interest in medical scans. Nevertheless, to obtain precise results, sophisticated deep learning models tailored to this specific task must be developed and trained, a capability not universally accessible. Segment Anything Model (SAM) 2 is a foundational model designed for image and video segmentation tasks, built on its predecessor, SAM. This paper introduces a novel approach leveraging SAM 2’s video segmentation capabilities to reduce the prompts required to segment an entire volume of medical images. The study first compares SAM and SAM 2’s performance in medical image segmentation. Evaluation metrics such as the Jaccard index and Dice score are used to measure precision and segmentation quality. Then, our novel approach is introduced. Statistical tests include comparing precision gains and computational efficiency, focusing on the trade-off between resource use and segmentation time. The results show that SAM 2 achieves an average improvement of 1.76% in the Jaccard index and 1.49% in the Dice score compared to SAM, albeit with a ten-fold increase in segmentation time. Our novel approach to segmentation reduces the number of prompts needed to segment a volume of medical images by 99.95%. We demonstrate that it is possible to segment all the slices of a volume and, even more, of a whole dataset, with a single prompt, achieving results comparable to those obtained by state-of-the-art models explicitly trained for this task. Our approach simplifies the segmentation process, allowing specialists to devote more time to other tasks. 
The hardware and personnel requirements to obtain these results are much lower than those needed to train a deep learning model from scratch or to modify the behavior of an existing one using model modification techniques. Full article

19 pages, 2677 KiB  
Article
Proving Properties of Dekker’s Algorithm for Mutual Exclusion of N Processes
by Libero Nigro and Franco Cicirelli
Algorithms 2025, 18(4), 226; https://doi.org/10.3390/a18040226 - 13 Apr 2025
Abstract
Dekker’s algorithm for mutual exclusion of two processes is the well-known first developed correct solution based only on software mechanisms. The algorithm served as the starting point for researchers to create subsequent safe solutions both for two and N > 2 processes. In [...] Read more.
Dekker’s algorithm for mutual exclusion of two processes is well known as the first correct solution based only on software mechanisms. The algorithm served as the starting point for researchers to create subsequent safe solutions both for two and N > 2 processes. In recent years, Dekker proposed a generalization of his mutual exclusion algorithm for N > 2 processes (here referred to as Dekker-N). To the best of our knowledge, Dekker-N correctness was previously proven only by the author, using informal arguments. This paper’s original contribution consists of formal modeling and verification of Dekker-N using an approach based on timed automata (TA) and the Uppaal model checker. The Dekker-N model is checked under atomic registers and also in cases where non-atomic registers are used. This paper first demonstrates that Dekker-N is correct and fair with atomic registers and effectively ensures bounded waiting for competing processes through a linear overtaking. Unfortunately, the algorithm becomes incorrect when non-atomic registers are used. The adopted formal approach, though, allowed us to prove that by making just one single common variable safe, Dekker-N turns out to be fully correct and fair with non-atomic registers as well. The paper presents the TA-based formal approach, then presents models of Dekker-N and verifies all of its mutual-exclusion properties. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
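For reference, the two-process algorithm that Dekker-N generalizes can be sketched directly. The Python threading version below relies on CPython's effectively sequentially consistent execution, which stands in for the atomic-register assumption analyzed in the paper; on real hardware the shared variables would need atomics or fences.

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # frequent thread switches to exercise contention

# Shared state of Dekker's classic two-process algorithm.
wants = [False, False]
turn = 0
counter = 0
ITERS = 2000

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(ITERS):
        wants[me] = True                  # entry protocol: announce intent
        while wants[other]:
            if turn == other:
                wants[me] = False
                while turn == other:      # busy-wait until it is our turn
                    pass
                wants[me] = True
        counter += 1                      # critical section
        turn = other                      # exit protocol: yield priority
        wants[me] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("counter =", counter)
```

If mutual exclusion holds, the unprotected read-modify-write on `counter` never races and the final count is exactly 2 × ITERS.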
22 pages, 763 KiB  
Article
The Impact of Environmental Risk on Business Failure: A Fuzzy-Set Qualitative Comparative Analysis Approach with Extreme Gradient Boosting Feature Selection
by Mariano Romero Martínez, Pedro Carmona Ibáñez and José Pozuelo Campillo
Algorithms 2025, 18(4), 225; https://doi.org/10.3390/a18040225 - 13 Apr 2025
Viewed by 183
Abstract
Corporate performance is increasingly affected by environmental issues, but their specific role in business failure remains underexplored, leaving a gap in research that often focuses exclusively on financial metrics. By investigating the relationship between environmental financial exposure and business failure, this study addresses this gap, integrating financial ratios and environmental variables to understand how environmental performance affects financial viability. A novel dual-stage methodology was employed, first using Extreme Gradient Boosting (XGBoost) for feature selection to identify the most significant predictors of failure from a dataset of Spanish companies (N = 38,456) using 2022 ORBIS data. Next, a fuzzy-set qualitative comparative analysis (fsQCA) was applied to analyze the sufficient causal configurations leading to a high propensity for business failure. The analysis identified three distinct causal configurations associated with failure. All three highlighted indicators of poor financial performance, such as low results per employee and low profit per employee. Notably, one configuration identified high environmental risk (measured by TRUCAM) as a core condition contributing significantly to financial distress. These findings highlight the critical link between environmental responsibility and financial health, demonstrating the benefits of combining fsQCA with machine learning to identify intricate causal configurations and providing information to companies and governments that want to support long-term financial stability and corporate sustainability. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms in Sustainability)
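As background on the fsQCA step, a minimal sketch of the standard sufficiency-consistency measure (Ragin’s formula) is shown below. The fuzzy membership scores are invented for illustration; they are not the paper’s data.

```python
def consistency(x, y):
    """Sufficiency consistency of fuzzy condition X for outcome Y
    (Ragin's measure): sum(min(x_i, y_i)) / sum(x_i)."""
    num = sum(min(a, b) for a, b in zip(x, y))
    den = sum(x)
    return num / den

# Hypothetical fuzzy memberships: x = a causal configuration (e.g. low
# profit per employee AND high environmental risk), y = business failure.
x = [0.9, 0.8, 0.7, 0.2, 0.1]
y = [1.0, 0.9, 0.6, 0.3, 0.0]
print(round(consistency(x, y), 3))  # 0.926
```

Configurations whose consistency exceeds a chosen threshold (commonly around 0.8) are treated as sufficient for the outcome.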
17 pages, 8061 KiB  
Article
Optimal View Estimation Algorithm and Evaluation with Deviation Angle Analysis
by Meng Yuan and Hongjun Li
Algorithms 2025, 18(4), 224; https://doi.org/10.3390/a18040224 - 12 Apr 2025
Viewed by 249
Abstract
Image-based viewpoint estimation is a core task in image analysis; its inverse problem is selecting the best viewpoint for displaying a three-dimensional object. Currently, two issues need further exploration in image-based viewpoint estimation research: insufficient labeled data and a limited number of evaluation methods for estimation results. To address the first issue, this paper proposes a spherical viewpoint sampling method based on a combination of analytical methods and motion adjustment, and designs a viewpoint-based projection image acquisition algorithm. Considering the difference between viewpoint inference and image classification, we propose an accuracy evaluation method with deviation angle tolerance for viewpoint estimation. After constructing a new dataset with viewpoint labels, the new accuracy evaluation method was validated through experiments. The experimental results show that its estimation accuracy can reach 89% according to the new estimation evaluation indicators. Additionally, we applied our method to estimate the viewpoints of images from a furniture website and analyzed the viewpoint preferences in its furniture displays. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Image Understanding and Analysis)
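The evaluation idea of counting a prediction as correct when it falls within a deviation-angle tolerance can be sketched in a few lines. This is a generic illustration under assumed conventions (viewpoints as 3D direction vectors, a hypothetical 15° tolerance), not the paper’s exact metric.

```python
import math

def angle_deg(u, v):
    """Angle in degrees between two 3D viewpoint direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    c = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp for acos
    return math.degrees(math.acos(c))

def accuracy_with_tolerance(preds, truths, tol_deg):
    """A prediction counts as correct if its deviation angle from the
    true viewpoint is within the tolerance."""
    hits = sum(1 for p, t in zip(preds, truths) if angle_deg(p, t) <= tol_deg)
    return hits / len(truths)

preds  = [(1, 0, 0), (0, 1, 0), (0.98, 0.17, 0)]
truths = [(1, 0, 0), (1, 0, 0), (1, 0, 0)]
print(accuracy_with_tolerance(preds, truths, 15))
```

The first and third predictions deviate by 0° and about 10° (hits at a 15° tolerance), while the second deviates by 90° (a miss), giving an accuracy of 2/3.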
19 pages, 311 KiB  
Article
Improved Cryptanalysis of Some RSA Variants
by Mohammed Rahmani, Abderrahmane Nitaj and Mhammed Ziane
Algorithms 2025, 18(4), 223; https://doi.org/10.3390/a18040223 - 12 Apr 2025
Viewed by 180
Abstract
Several RSA variants enforce a constraint between their public and private keys through the relation ed ≡ 1 (mod (p² − 1)(q² − 1)), where p and q are the prime factors of their RSA modulus N = pq. In this paper, we introduce a novel attack on RSA variant schemes where the public exponent satisfies an equation of the form eu ≡ z (mod (p² − 1)(q² − 1)), with sufficiently small |z| and |u|, in a scenario where the attacker has access to an approximation of one of the prime factors. Our new attack utilizes Coppersmith’s method, combined with lattice basis reduction techniques, to efficiently recover the prime factors of the RSA modulus in these scenarios. This method offers a significant improvement over prior attacks on RSA variants with small private exponents or partial prime information. Full article
(This article belongs to the Special Issue Algorithmic Innovations in Cryptanalysis of Public Key Cryptography)
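The key equation these RSA variants enforce, ed ≡ 1 (mod (p² − 1)(q² − 1)), can be demonstrated with toy parameters. This sketch shows only key generation under that relation; the attack itself (Coppersmith’s method plus lattice reduction) is far beyond a toy example, and real moduli use large primes.

```python
from math import gcd

# Toy parameters only; real moduli use large primes.
p, q = 11, 13
N = p * q
M = (p**2 - 1) * (q**2 - 1)   # modulus of the key equation

e = 11                        # public exponent, coprime to M
assert gcd(e, M) == 1
d = pow(e, -1, M)             # private exponent from e*d ≡ 1 (mod M)
print((e * d) % M)  # 1
```

The three-argument `pow` with exponent −1 (Python 3.8+) computes the modular inverse directly.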
26 pages, 510 KiB  
Article
Integrating Feature Selection and Deep Learning: A Hybrid Approach for Smart Agriculture Applications
by Ali Roman, Md Mostafizer Rahman, Sajjad Ali Haider, Tallha Akram and Syed Rameez Naqvi
Algorithms 2025, 18(4), 222; https://doi.org/10.3390/a18040222 - 12 Apr 2025
Viewed by 223
Abstract
This research tackles the critical challenge of achieving precise and efficient feature selection in machine learning-based classification, particularly for smart agriculture, where existing methods often fail to balance exploration and exploitation in complex, high-dimensional datasets. While current approaches, such as standalone nature-inspired optimization algorithms, leverage biological behaviors for feature selection, they are limited by their inability to synergize diverse strategies, resulting in suboptimal performance and scalability. To address this, we introduce the Hybrid Predator Algorithm for Classification (HPA-C), a novel hybrid feature selection algorithm that uniquely integrates the framework of a nature-inspired feature selection technique with position update equations from other algorithms, harnessing diverse biological behaviors like echolocation, foraging, and collaborative hunting. Coupled with a custom convolutional neural network (CNN), HPA-C achieves superior classification accuracy (98.6–99.8%) on agricultural datasets (Plant Leaf Diseases, Weed Detection, Fruits-360, and Fresh n Rotten) and demonstrates exceptional adaptability across diverse imagery applications. Full article
26 pages, 14078 KiB  
Article
Proposal of a Methodology Based on Using a Wavelet Transform as a Convolution Operation in a Convolutional Neural Network for Feature Extraction Purposes
by Nora Isabel Pérez-Quezadas, Héctor Benítez-Pérez and Adrián Durán-Chavesti
Algorithms 2025, 18(4), 221; https://doi.org/10.3390/a18040221 - 11 Apr 2025
Viewed by 216
Abstract
Using methodological tools to construct feature extraction from multidimensional data is challenging. Different treatments are required to build a coherent representation with those features that can be attenuated by various phenomena inherent to the observed process. It is interesting to note that in this methodological generation, several methods converge, such as Wavelet transform, focusing on convolution processing, windowed data shifting, and classification via Self-Organizing Maps. Likewise, a case study is presented in this work, allowing us to understand the scope of this methodological tool using an information cube to detect common features, as discussed previously. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
18 pages, 1287 KiB  
Article
Guided Particle Swarm Optimization for Feature Selection: Application to Cancer Genome Data
by Simone A. Ludwig
Algorithms 2025, 18(4), 220; https://doi.org/10.3390/a18040220 - 11 Apr 2025
Viewed by 211
Abstract
Feature selection is a crucial step in the data preprocessing stage of machine learning. It involves selecting a subset of relevant features for use in model construction. Feature selection helps in improving model performance by reducing overfitting, enhancing generalization, and decreasing computational cost. Techniques for feature selection can be broadly classified into filter methods, wrapper methods, and embedded methods. This paper presents a feature selection method based on Particle Swarm Optimization (PSO). The proposed algorithm makes use of a guided particle scheme whereby three filter-based methods are incorporated. The proposed algorithm addresses the issue of premature convergence to global optima compared to other PSO feature-based methods. In addition, the algorithm is tested on very-high-dimensional genome data that include up to 44,909 features. Results of an experimental comparison with other state-of-the-art feature selection algorithms show that the proposed algorithm produces overall better results. Full article
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
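For readers unfamiliar with PSO-based feature selection, here is a minimal binary PSO sketch. It is generic, not the paper’s guided-particle scheme: the fitness function is a toy surrogate that rewards agreement with a hypothetical known-good feature mask, whereas a real wrapper would score a classifier on the selected features.

```python
import math
import random

random.seed(0)

DIM = 8
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical "relevant feature" mask

def fitness(mask):
    """Toy surrogate: reward agreement with the known-good mask."""
    return -sum(m != t for m, t in zip(mask, TARGET))

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

n, iters, w, c1, c2 = 20, 60, 0.7, 1.5, 1.5
X = [[random.randint(0, 1) for _ in range(DIM)] for _ in range(n)]
V = [[0.0] * DIM for _ in range(n)]
P = [x[:] for x in X]                       # personal bests
g = max(P, key=fitness)[:]                  # global best
best0 = fitness(g)

for _ in range(iters):
    for i in range(n):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            V[i][d] = (w * V[i][d]
                       + c1 * r1 * (P[i][d] - X[i][d])
                       + c2 * r2 * (g[d] - X[i][d]))
            # Binary PSO: the sigmoid of the velocity gives the
            # probability that the feature bit is switched on.
            X[i][d] = 1 if random.random() < sigmoid(V[i][d]) else 0
        if fitness(X[i]) > fitness(P[i]):
            P[i] = X[i][:]
    g = max(P, key=fitness)[:]

print(best0, fitness(g))
```

Because personal and global bests only ever improve, the final best fitness is at least the initial one; on this 8-bit problem the swarm typically recovers the target mask exactly.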
31 pages, 4734 KiB  
Article
Comparing an Artificial Intelligence Planner with Traditional Optimization Methods: A Case Study in the Dairy Industry
by Felipe Martins Müller, Vanessa Andréia Schneider, Olinto Cesar Bassi de Araujo, Claudio Roberto Scheer Júnior and Guilherme Lopes Weis
Algorithms 2025, 18(4), 219; https://doi.org/10.3390/a18040219 - 11 Apr 2025
Viewed by 265
Abstract
Automated Planning and Scheduling (APS) is an area of artificial intelligence dedicated to generating efficient plans to achieve goals by optimizing objectives. This case study is based on a middle-mile segment of the dairy supply chain. This article focuses on applying and analyzing APS compared to the following classical optimization methods: mathematical modeling based on Mixed-Integer Linear Programming (MILP) and the Genetic Algorithm (GA). The language used for APS modeling is the Planning Domain Definition Language (PDDL), and the temporal solver used is the OPTIC planner. Optimization methods are guided by a mathematical model developed specifically for the research scope, considering production, inventory, and transportation conditions and constraints. Dairy products are highly perishable; therefore, the main optimization objective is to minimize Tmax, i.e., the total time to meet demand, ensuring that the products are available at the distribution center with a viable shelf life for commercialization. The APS application showed limitations compared to the other optimization approaches, with the exact method proving the most efficient. Finally, all algorithms, models, and results are available on GitHub, aiming to foster further research and enhance operational efficiency in the dairy sector through optimization. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
17 pages, 303 KiB  
Article
Korovkin-Type Theorems for Positive Linear Operators Based on the Statistical Derivative of Deferred Cesàro Summability
by Hari Mohan Srivastava, Bidu Bhusan Jena, Susanta Kumar Paikray and Umakanta Misra
Algorithms 2025, 18(4), 218; https://doi.org/10.3390/a18040218 - 11 Apr 2025
Viewed by 237
Abstract
In this paper, we introduce and investigate the concept of statistical derivatives within the framework of the deferred Cesàro summability technique, supported by illustrative examples. Using this approach, we establish a novel Korovkin-type theorem for a specific set of exponential test functions, namely 1, e^(−υ) and e^(−2υ), which are defined on the Banach space C[0, ∞). Our results significantly extend several well-known Korovkin-type theorems. Additionally, we analyze the rate of convergence associated with the statistical derivatives under deferred Cesàro summability. To support our theoretical findings, we provide compelling numerical examples, followed by graphical representations generated using MATLAB software, to visually illustrate and enhance the understanding of the convergence behavior of the operators. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
27 pages, 1515 KiB  
Article
Wavelet-Based Optimization and Numerical Computing for Fault Detection Method—Signal Fault Localization and Classification Algorithm
by Nikita Sakovich, Dmitry Aksenov, Ekaterina Pleshakova and Sergey Gataullin
Algorithms 2025, 18(4), 217; https://doi.org/10.3390/a18040217 - 10 Apr 2025
Viewed by 777
Abstract
This study focuses on the development of the WONC-FD (Wavelet-Based Optimization and Numerical Computing for Fault Detection) algorithm for the accurate detection and categorization of faults in signals using wavelet analysis augmented with numerical methods. Fault detection is a key problem in areas related to seismic activity analysis, vibration assessment of industrial equipment, structural integrity control, and electrical grid reliability. In the proposed methodology, wavelet transform serves to accurately localize anomalies in the data, and optimization techniques are introduced to refine the classification based on minimizing the error function. This not only improves the accuracy of fault identification but also provides a better understanding of its nature. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
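The core idea of wavelet-based fault localization — detail coefficients respond strongly to abrupt changes — can be illustrated with a single level of the Haar transform. This is a generic sketch with a synthetic signal, not the WONC-FD algorithm itself.

```python
import math

def haar_detail(x):
    """One level of the Haar wavelet transform: detail coefficients
    d[k] = (x[2k] - x[2k+1]) / sqrt(2), which respond to abrupt changes."""
    return [(x[2 * k] - x[2 * k + 1]) / math.sqrt(2.0)
            for k in range(len(x) // 2)]

# Flat baseline with an injected fault (a spike) at sample 40.
signal = [0.0] * 64
signal[40] = 5.0

d = haar_detail(signal)
fault_at = max(range(len(d)), key=lambda k: abs(d[k]))
print(fault_at)  # 20 -> the pair of samples 40-41 at this level
```

In a full pipeline, the classification stage would then refine such candidate locations by minimizing an error function, as the abstract describes.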
23 pages, 2398 KiB  
Article
Implementation of an Intelligent Controller Based on Neural Networks for the Simulation of Pressure Swing Adsorption Systems
by Moises Ramos-Martinez, Jorge A. Brizuela-Mendoza, Carlos A. Torres-Cantero, Gerardo Ortiz-Torres, Felipe D. J. Sorcia-Vázquez, Mario A. Juarez, Jair de Jesús Cambrón Navarrete, Juan Carlos Mixteco-Sánchez, Mayra G. Mena-Enriquez, Rafael Murrieta Yescas and Jesse Y. Rumbo-Morales
Algorithms 2025, 18(4), 215; https://doi.org/10.3390/a18040215 - 10 Apr 2025
Viewed by 196
Abstract
Biohydrogen has been identified as an attractive renewable energy carrier due to its high energy density and green production from biomass and organic wastes. Efficient biohydrogen production is a challenge that demands precise control of process parameters. Regulation and optimization of biohydrogen production through advanced approaches are therefore necessary to improve its industrial viability. This study introduces an innovative proposal for controlling the Pressure Swing Adsorption (PSA) process by employing a neural network-based controller derived from a PID control framework. The neural network was trained using input–output data, enabling it to maintain biohydrogen production purity at approximately 99%. The proposed neural network effectively simulates the dynamics of the PSA model, which is traditionally controlled using a PID controller. The results demonstrate exceptional performance and strong robustness against disturbances. Specifically, the neural network enables precise tracking of the desired trajectory and effective attenuation of disturbances, achieving a biohydrogen purity level with a molar fraction of 0.99. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation (2nd Edition))
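For context on the PID framework the neural controller is derived from, here is a minimal discrete PID loop regulating a first-order plant. The plant model, gains, and setpoint are illustrative stand-ins, not the paper’s PSA dynamics.

```python
# Discrete PID control of a first-order plant y' = -a*y + b*u,
# simulated with forward Euler. Plant and gains are illustrative.
a, b, dt = 1.0, 1.0, 0.01
kp, ki, kd = 2.0, 1.0, 0.0

setpoint, y, integral, prev_err = 1.0, 0.0, 0.0, 0.0
for _ in range(2000):                 # 20 s of simulated time
    err = setpoint - y
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv
    prev_err = err
    y += dt * (-a * y + b * u)        # plant step

print(round(y, 3))  # ~1.0: output settles at the setpoint
```

The integral term drives the steady-state error to zero; a neural controller trained on input–output pairs from such a loop learns to reproduce (and generalize) this tracking behavior.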
36 pages, 12574 KiB  
Article
Electric Vehicle Routing Problem with Heterogeneous Energy Replenishment Infrastructures Under Capacity Constraints
by Bowen Song and Rui Xu
Algorithms 2025, 18(4), 216; https://doi.org/10.3390/a18040216 - 9 Apr 2025
Viewed by 194
Abstract
With the escalating environmental crisis, electric vehicles have emerged as a key solution for emission reductions in logistics due to their low-carbon attributes, prompting significant attention and extensive research on the electric vehicle routing problem (EVRP). However, existing studies often overlook charging infrastructure (CI) capacity constraints and fail to fully exploit the synergistic potential of heterogeneous energy replenishment infrastructures (HERIs). This paper addresses the EVRP with HERIs under various capacity constraints (EVRP-HERI-CC), proposing a mixed-integer programming (MIP) model and a hybrid ant colony optimization (HACO) algorithm integrated with a variable neighborhood search (VNS) mechanism. Extensive numerical experiments demonstrate HACO’s effective integration of problem-specific characteristics. The algorithm resolves charging conflicts via dynamic rescheduling while optimizing charging-battery swapping decisions under an on-demand energy replenishment strategy, achieving global cost minimization. Through small-scale instance experiments, we have verified the computational complexity of the problem and demonstrated HACO’s superior performance compared to the Gurobi solver. Furthermore, comparative studies with other advanced heuristic algorithms confirm HACO’s effectiveness in solving the EVRP-HERI-CC. Sensitivity analysis reveals that appropriate CI capacity configurations achieve economic efficiency while maximizing resource utilization, further validating the engineering value of HERI networks. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
16 pages, 1222 KiB  
Article
Probabilistic Contingent Planning Based on Hierarchical Task Network for High-Quality Plans
by Peng Zhao, Xiaoyu Liu, Xuqi Su, Di Wu, Zi Li, Kai Kang, Keqin Li and Armando Zhu
Algorithms 2025, 18(4), 214; https://doi.org/10.3390/a18040214 - 9 Apr 2025
Viewed by 244
Abstract
Deterministic hierarchical task network (HTN) planning assumes that planning evolves along a fully predictable path and neglects the quality of the plan in a partially observable environment. To bridge this research gap, this paper proposes an innovative probabilistic contingent HTN planner, named the High-Quality Contingent Planner (HQCP), designed to generate high-quality plans within partially observable contexts. Our methodology extends conventional HTN planning formalisms to accommodate partial observability and assesses these extensions based on plan cost. Additionally, we propose a novel heuristic for high-quality plans and develop the integrated planning algorithm. Empirical studies verify the effectiveness and efficiency of the planner, both in probabilistic contingent planning and in achieving plans of a high quality. Full article
18 pages, 479 KiB  
Article
Computational Aspects of L0 Linking in the Rasch Model
by Alexander Robitzsch
Algorithms 2025, 18(4), 213; https://doi.org/10.3390/a18040213 - 9 Apr 2025
Viewed by 218
Abstract
The L0 linking approach replaces the L2 loss function in mean–mean linking under the Rasch model with the L0 loss function. Using the L0 loss function offers the advantage of potential robustness against fixed differential item functioning (DIF) effects. However, its nondifferentiability necessitates differentiable approximations to ensure feasible and computationally stable estimation. This article examines alternative specifications of two approximations, each controlled by a tuning parameter ε that determines the approximation error. Results demonstrate that the optimal ε value minimizing the RMSE of the linking parameter estimate depends on the magnitude of DIF effects, the number of items, and the sample size. A data-driven selection of ε outperformed a fixed ε across all conditions in both a numerical illustration and a simulation study. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 3rd Edition)
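The robustness of L0-style linking can be illustrated with one common smooth surrogate, ρ_ε(x) = x² / (x² + ε). This is a generic sketch, not necessarily one of the article’s two approximations, and the item parameters, DIF effects, and ε value are made up.

```python
# Mean-mean linking with a differentiable L0 approximation
# rho_eps(x) = x^2 / (x^2 + eps). Item parameters are illustrative.
eps = 0.01

def rho(x):
    return x * x / (x * x + eps)

# Rasch item difficulties in two groups: a common shift of 0.5,
# plus large DIF effects on the first two items.
b1 = [0.0, 0.3, -0.4, 0.8, 1.1, -0.9, 0.2, 0.6, -0.2, 0.4]
b2 = [bi - 0.5 for bi in b1]
b2[0] += 1.0   # DIF item
b2[1] -= 1.2   # DIF item

def loss(delta):
    return sum(rho(x - y - delta) for x, y in zip(b1, b2))

# Coarse grid search over the linking constant delta.
grid = [i / 100 - 1.0 for i in range(201)]
delta_hat = min(grid, key=loss)
print(round(delta_hat, 2))  # 0.5: the two DIF items barely move it
```

Because ρ_ε counts large residuals as roughly 1 regardless of size, the two DIF items cannot drag the estimate away from the common shift, unlike an L2 (mean–mean) estimate.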
3 pages, 125 KiB  
Editorial
Editorial: Surveys in Algorithm Analysis and Complexity Theory, Part II (Special Issue)
by Jesper Jansson
Algorithms 2025, 18(4), 212; https://doi.org/10.3390/a18040212 - 9 Apr 2025
Viewed by 151
Abstract
This is the second Special Issue of surveys published by the MDPI journal Algorithms [...] Full article
(This article belongs to the Special Issue Surveys in Algorithm Analysis and Complexity Theory, Part II)
18 pages, 3802 KiB  
Article
Distributed Load-Balancing Method for CCA Parallel Component Applications
by Lei Guo, Xin Guo and Feiya Lv
Algorithms 2025, 18(4), 211; https://doi.org/10.3390/a18040211 - 9 Apr 2025
Viewed by 190
Abstract
Numerous universities and national laboratories in the United States have collaboratively established a Common Component Architecture (CCA) forum to conduct research on parallel component technology. Given the overhead associated with component connection and management, performance optimization is of utmost importance. Current research often employs static load-balancing strategies or centralized dynamic approaches for load-balancing in parallel component applications. By analyzing the operational mechanism of CCA parallel components, this paper introduces a dynamic and distributed load-balancing method for such applications. We have developed a class library of computing nodes utilizing an object-oriented approach. The resource-management node deploys component applications onto sub-clusters generated by an aggregation algorithm. Dependency among different component calls is determined through data flow analysis. We maintain the load information of computing nodes within the sub-cluster using a distributed table update algorithm. By capturing the dynamic load information of computing nodes at runtime, we implement a load-balancing strategy in a distributed manner. Our dynamic and distributed load-balancing algorithm is capable of balancing component instance tasks across different nodes in a heterogeneous cluster platform, thereby enhancing resource utilization efficiency. Compared to existing static or centralized load-balancing methods, the proposed method demonstrates superior performance and scalability. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
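The basic dispatch decision behind dynamic load balancing — send each new component-instance task to the currently least-loaded node — can be sketched in a few lines. The node names, task counts, and unit costs are illustrative; the paper’s method additionally distributes the load table itself across sub-clusters.

```python
# A toy least-loaded dispatcher: each computing node advertises its
# current load, and every new component-instance task goes to the node
# with the smallest load.
class ComputeNode:
    def __init__(self, name):
        self.name = name
        self.load = 0

    def assign(self, cost):
        self.load += cost

nodes = [ComputeNode(f"node{i}") for i in range(4)]

for _ in range(100):                       # 100 unit-cost tasks
    target = min(nodes, key=lambda n: n.load)
    target.assign(1)

print([n.load for n in nodes])  # [25, 25, 25, 25]
```

With unit-cost tasks, greedy least-loaded assignment keeps all nodes within one task of each other at every step, so the final loads are exactly balanced; heterogeneous costs or stale load information (the distributed setting) make the problem harder.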
19 pages, 7498 KiB  
Article
An Efficient Explainability of Deep Models on Medical Images
by Salim Khiat, Sidi Ahmed Mahmoudi, Sédrick Stassin, Lillia Boukerroui, Besma Senaï and Saïd Mahmoudi
Algorithms 2025, 18(4), 210; https://doi.org/10.3390/a18040210 - 9 Apr 2025
Viewed by 275
Abstract
Nowadays, Artificial Intelligence (AI) has revolutionized many fields, and the medical field is no exception. Thanks to technological advancements and the emergence of Deep Learning (DL) techniques, AI has brought new possibilities and significant improvements to medical practice. Despite the excellent results of DL models in terms of accuracy and performance, they remain black boxes, as they do not provide meaningful insights into their internal functioning. This is where the field of Explainable AI (XAI) comes in, aiming to provide insights into the underlying workings of these black-box models. In this paper, the visual explainability of deep models on chest radiography images is addressed. This research uses two datasets, the first on COVID-19, viral pneumonia, and normality (healthy patients), and the second on pulmonary opacities. Initially, the pretrained CNN models (VGG16, VGG19, ResNet50, MobileNetV2, Mixnet and EfficientNetB7) are used to classify chest radiography images. Then, the visual explainability methods (GradCAM, LIME, Vanilla Gradient, Integrated Gradients and SmoothGrad) are applied to understand and explain the decisions made by these models. The obtained results show that MobileNetV2 and VGG16 are the best models for the first and second datasets, respectively. As for the explainability methods, the results were submitted to doctors and validated by calculating the mean opinion score. The doctors deemed GradCAM, LIME and Vanilla Gradient the most effective methods, providing understandable and accurate explanations. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))
34 pages, 443 KiB  
Review
Advancements in Machine Learning-Based Intrusion Detection in IoT: Research Trends and Challenges
by Márton Bendegúz Bankó, Szymon Dyszewski, Michaela Králová, Márton Bertalan Limpek, Maria Papaioannou, Gaurav Choudhary and Nicola Dragoni
Algorithms 2025, 18(4), 209; https://doi.org/10.3390/a18040209 - 9 Apr 2025
Viewed by 684
Abstract
This paper presents a systematic literature review based on the PRISMA model on machine learning-based Distributed Denial of Service (DDoS) attacks in Internet of Things (IoT) networks. The primary objective of the review is to compare research trends on deployment options, datasets, and machine learning techniques used in the domain between 2019 and 2024. The results highlight the dominance of certain datasets (BoT-IoT and TON_IoT) in combination with Decision Tree (DT) and Random Forest (RF) models, achieving high median accuracy rates (>99%). This paper discusses various datasets that are used to train and evaluate machine learning (ML) models for detecting Distributed Denial of Service (DDoS) attacks in Internet of Things (IoT) networks and how they impact model performance. Furthermore, the findings suggest that due to hardware limitations, there is a preference for lightweight ML solutions and preprocessed datasets. Current trends indicate that larger or industry-specific datasets will continue to gain popularity alongside more complex ML models, such as deep learning. This emphasizes the need for robust and scalable deployment options, with Software-Defined Networks (SDNs) offering flexibility, edge computing being extensively explored in cloud environments, and blockchain-integrated networks emerging as a promising approach for enhancing security. Full article
(This article belongs to the Special Issue Advances in Deep Learning and Next-Generation Internet Technologies)