Algorithms, Volume 17, Issue 6 (June 2024) – 38 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
14 pages, 625 KiB  
Article
Approximating a Minimum Dominating Set by Purification
by Ernesto Parra Inza, Nodari Vakhania, José María Sigarreta Almira and José Alberto Hernández-Aguilar
Algorithms 2024, 17(6), 258; https://doi.org/10.3390/a17060258 - 12 Jun 2024
Abstract
A dominating set of a graph is a subset of vertices such that every vertex not in the subset has at least one neighbor within the subset. The corresponding optimization problem is known to be NP-hard. It has proved beneficial to separate the solution process into two stages. First, one can apply a fast greedy algorithm to obtain an initial dominating set, and then use an iterative procedure to purify (reduce) the size of this dominating set. In this work, we develop the purification stage and propose new purification algorithms. The purification procedures that we present here outperform, in practice, the earlier known purification procedure. We have tested our algorithms on over 1300 benchmark problem instances. Compared to estimates based on known upper bounds, the obtained solutions are about seven times better. Remarkably, for the 500 benchmark instances for which the optimum is known, optimal solutions were obtained for 46.33% of the tested instances, whereas the average error for the remaining instances is about 1.01. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
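As a rough illustration of the two-stage scheme described in the abstract (not the authors' implementation; the example graph, the greedy tie-breaking, and the removal order are all assumptions), a greedy dominating set followed by a simple purification pass can be sketched as:

```python
def greedy_dominating_set(adj):
    """Greedy stage: repeatedly pick the vertex whose closed neighborhood
    covers the most still-undominated vertices."""
    undominated = set(adj)
    dom = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dom.add(v)
        undominated -= {v} | adj[v]
    return dom

def purify(adj, dom):
    """Purification stage: drop any vertex whose removal still leaves
    every vertex dominated (in the set or adjacent to it)."""
    dom = set(dom)
    for v in sorted(dom):
        smaller = dom - {v}
        if smaller and all(u in smaller or adj[u] & smaller for u in adj):
            dom = smaller
    return dom

# Example: a 6-cycle, whose minimum dominating set has size 2.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
dom = purify(adj, greedy_dominating_set(adj))
```

The paper's purification procedures are more elaborate; this sketch only shows why a greedy solution can contain removable vertices.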
15 pages, 4225 KiB  
Article
NSBR-Net: A Novel Noise Suppression and Boundary Refinement Network for Breast Tumor Segmentation in Ultrasound Images
by Yue Sun, Zhaohong Huang, Guorong Cai, Jinhe Su and Zheng Gong
Algorithms 2024, 17(6), 257; https://doi.org/10.3390/a17060257 - 12 Jun 2024
Abstract
Breast tumor segmentation of ultrasound images provides valuable tumor information for early detection and diagnosis. However, speckle noise and blurred boundaries in breast ultrasound images present challenges for tumor segmentation, especially for malignant tumors with irregular shapes. Recent vision transformers have shown promising performance in handling the variation through global context modeling. Nevertheless, they are often dominated by features of large patterns and lack the ability to recognize negative information in ultrasound images, which leads to the loss of breast tumor details (e.g., boundaries and small objects). In this paper, we propose a novel noise suppression and boundary refinement network, NSBR-Net, to simultaneously alleviate speckle noise interference and blurred boundary problems of breast tumor segmentation. Specifically, we propose two innovative designs, namely, the Noise Suppression Module (NSM) and the Boundary Refinement Module (BRM). The NSM filters noise information from the coarse-grained feature maps, while the BRM progressively refines the boundaries of significant lesion objects. Our method demonstrates superior accuracy over state-of-the-art deep learning models, achieving significant improvements of 3.67% on Dataset B and 2.30% on the BUSI dataset in mDice for testing malignant tumors. Full article
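The reported gains are stated in mean Dice (mDice). As a reference for that metric only (not part of NSBR-Net; the flat 0/1 mask encoding is an assumption), Dice on binary masks can be computed as:

```python
def dice(pred, target):
    """Dice coefficient of two binary masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * inter / total

def mean_dice(preds, targets):
    """mDice: the Dice score averaged over (prediction, ground-truth) pairs."""
    scores = [dice(p, t) for p, t in zip(preds, targets)]
    return sum(scores) / len(scores)

# One overlapping pixel out of 2 + 1 foreground pixels: Dice = 2/3.
score = dice([1, 1, 0, 0], [1, 0, 0, 0])
```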
19 pages, 2454 KiB  
Article
Synthesis of Circular Antenna Arrays for Achieving Lower Side Lobe Level and Higher Directivity Using Hybrid Optimization Algorithm
by Vikas Mittal, Kanta Prasad Sharma, Narmadha Thangarasu, Udandarao Sarat, Ahmad O. Hourani and Rohit Salgotra
Algorithms 2024, 17(6), 256; https://doi.org/10.3390/a17060256 - 11 Jun 2024
Abstract
Circular antenna arrays (CAAs) find extensive utility in a range of cutting-edge communication applications such as 5G networks, the Internet of Things (IoT), and advanced beamforming technologies. In antenna design, the side lobe level (SLL) of the radiation pattern holds significant importance within communication systems, primarily due to its role in mitigating signal interference across the side lobes of the entire radiation pattern. In order to suppress the side lobes, achieve the required main lobe orientation, and improve directivity, an optimization problem is formulated in this work. This paper introduces a method aimed at enhancing the radiation pattern of a CAA by minimizing its SLL using a Hybrid Sooty Tern Naked Mole-Rat Algorithm (STNMRA). The simulation results show that the hybrid optimization method significantly reduces side lobes while maintaining reasonable directivity compared to the uniform array and other competitive metaheuristics. Full article
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
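The quantity being minimized can be probed numerically. The sketch below is an illustration of the objective only, not the STNMRA itself; the element count, ring radius, and isotropic-element assumption are all hypothetical. It evaluates the array factor of a uniform CAA over azimuth and estimates the SLL from local maxima outside the main beam:

```python
import cmath
import math

def array_factor(phi, n_elem, radius_wl, steer=0.0):
    """Magnitude of the array factor of a uniform circular array of
    isotropic elements; radius_wl is the ring radius in wavelengths."""
    k_r = 2 * math.pi * radius_wl
    af = 0j
    for n in range(n_elem):
        phi_n = 2 * math.pi * n / n_elem
        # co-phasal excitation steers the main beam toward `steer`
        af += cmath.exp(1j * k_r * (math.cos(phi - phi_n) - math.cos(steer - phi_n)))
    return abs(af)

def side_lobe_level_db(n_elem, radius_wl, samples=1440):
    """Peak side lobe relative to the main lobe, in dB, from a dense
    azimuth sweep (local maxima outside the main beam only)."""
    pat = [array_factor(2 * math.pi * i / samples, n_elem, radius_wl)
           for i in range(samples)]
    peak = max(pat)
    side = 0.0
    for i in range(samples):
        prev, nxt = pat[i - 1], pat[(i + 1) % samples]
        if prev <= pat[i] >= nxt and pat[i] < 0.999 * peak:
            side = max(side, pat[i])
    return 20 * math.log10(side / peak)

sll = side_lobe_level_db(n_elem=10, radius_wl=0.8)
```

An optimizer such as the STNMRA would then search over element excitations to push this value down while preserving directivity.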
4 pages, 226 KiB  
Editorial
Guest Editorial for the Special Issue “New Trends in Algorithms for Intelligent Recommendation Systems”
by Edward Rolando Núñez-Valdez and Vicente García-Díaz
Algorithms 2024, 17(6), 255; https://doi.org/10.3390/a17060255 - 10 Jun 2024
Abstract
Currently, the problem of information overload, a term popularized by Alvin Toffler in his book Future Shock [1], is more present than ever due to the rapid development of the Internet [...] Full article
(This article belongs to the Special Issue New Trends in Algorithms for Intelligent Recommendation Systems)
12 pages, 473 KiB  
Review
The Quest for the Application of Artificial Intelligence to Whole Slide Imaging: Unique Prospective from New Advanced Tools
by Gavino Faa, Massimo Castagnola, Luca Didaci, Fernando Coghe, Mario Scartozzi, Luca Saba and Matteo Fraschini
Algorithms 2024, 17(6), 254; https://doi.org/10.3390/a17060254 - 10 Jun 2024
Abstract
The introduction of machine learning in digital pathology has deeply impacted the field, especially with the advent of whole slide image (WSI) analysis. In this review, we tried to elucidate the role of machine learning algorithms in diagnostic precision, efficiency, and the reproducibility of the results. First, we discuss some of the most used tools, including QuPath, HistoQC, and HistomicsTK, and provide an updated overview of machine learning approaches and their application in pathology. Later, we report how these tools may simplify the automation of WSI analyses, also reducing manual workload and inter-observer variability. A novel aspect of this review is its focus on open-source tools, presented in a way that may help the adoption process for pathologists. Furthermore, we highlight the major benefits of these technologies, with the aim of making this review a practical guide for clinicians seeking to implement machine learning-based solutions in their specific workflows. Moreover, this review also emphasizes some crucial limitations related to data quality and the interpretability of the models, giving insight into future directions for research. Overall, this work tries to bridge the gap between the more recent technological progress in computer science and traditional clinical practice, supporting a broader, yet smooth, adoption of machine learning approaches in digital pathology. Full article
(This article belongs to the Special Issue AI Algorithms in Medical Imaging)
55 pages, 716 KiB  
Review
Hardware Model Checking Algorithms and Techniques
by Gianpiero Cabodi, Paolo Enrico Camurati, Marco Palena and Paolo Pasini
Algorithms 2024, 17(6), 253; https://doi.org/10.3390/a17060253 - 9 Jun 2024
Abstract
Digital systems are nowadays ubiquitous and often comprise an extremely high level of complexity. Guaranteeing the correct behavior of such systems has become an ever more pressing need for manufacturers. The correctness of digital systems can be addressed by resorting to formal verification techniques, such as model checking. Currently, it is usually impossible to determine a priori the best algorithm for a given verification task, and thus portfolio approaches have become the de facto standard in model checking verification suites. This paper describes the most relevant algorithms and techniques at the foundations of bit-level SAT-based model checking. Full article
(This article belongs to the Special Issue Surveys in Algorithm Analysis and Complexity Theory, Part II)
23 pages, 13655 KiB  
Article
Prediction of Hippocampal Signals in Mice Using a Deep Learning Approach for Neurohybrid Technology Applications
by Albina V. Lebedeva, Margarita I. Samburova, Vyacheslav V. Razin, Nikolay V. Gromov, Svetlana A. Gerasimova, Tatiana A. Levanova, Lev A. Smirnov and Alexander N. Pisarchik
Algorithms 2024, 17(6), 252; https://doi.org/10.3390/a17060252 - 7 Jun 2024
Abstract
The increasing growth in knowledge about the functioning of the nervous system of mammals and humans, as well as the significant neuromorphic technology developments in recent decades, has led to the emergence of a large number of brain–computer interfaces and neuroprosthetics for regenerative medicine tasks. Neurotechnologies have traditionally been developed for therapeutic purposes to help or replace motor, sensory or cognitive abilities damaged by injury or disease. They also have significant potential for memory enhancement. However, there are still no fully developed neurotechnologies and neural interfaces capable of restoring or expanding cognitive functions, in particular memory, in mammals or humans. In this regard, the search for new technologies in the field of the restoration of cognitive functions is an urgent task of modern neurophysiology, neurotechnology and artificial intelligence. The hippocampus is an important brain structure connected to memory and information processing in the brain. The aim of this paper is to propose an approach based on deep neural networks for the prediction of hippocampal signals in the CA1 region based on received biological input in the CA3 region. We compare the results of prediction for two widely used deep architectures: reservoir computing (RC) and long short-term memory (LSTM) networks. The proposed study can be viewed as a first step in the complex task of the development of a neurohybrid chip, which allows one to restore memory functions in the damaged rodent hippocampus. Full article
16 pages, 778 KiB  
Article
Distributed Control of Hydrogen-Based Microgrids for the Demand Side: A Multiagent Self-Triggered MPC-Based Strategy
by Tingzhe Pan, Jue Hou, Xin Jin, Zhenfan Yu, Wei Zhou and Zhijun Wang
Algorithms 2024, 17(6), 251; https://doi.org/10.3390/a17060251 - 7 Jun 2024
Abstract
With the global pursuit of renewable energy and carbon neutrality, hydrogen-based microgrids have become an important area of research, as proper design and operation are essential to achieving optimal performance from hybrid systems. This paper proposes a distributed control strategy based on multiagent self-triggered model predictive control (ST-MPC), with the aim of achieving demand-side control of hydrogen-based microgrid systems. This architecture considers a hybrid energy storage system with renewable energy as the main power source, supplemented by fuel cells based on electrolytic hydrogen. Its primary objective is to address the supply–demand balance problem of the microgrid while extending the service life of the hydrogen-based energy storage equipment, on the basis of realizing demand-side control of the hydrogen microgrid system. To accomplish this, model predictive controllers are implemented within a self-triggered framework that dynamically adjusts the sampling period. The simulation results demonstrate that the ST-MPC architecture significantly reduces the frequency of control action changes while maintaining an acceptable level of set-point tracking. These findings highlight the viability of the proposed solution for microgrids equipped with multiple types of electrochemical storage, which contributes to improved sustainability and efficiency in renewable-based microgrid systems. Full article
(This article belongs to the Special Issue Intelligent Algorithms for High-Penetration New Energy)
23 pages, 5573 KiB  
Article
Research on Distributed Fault Diagnosis Model of Elevator Based on PCA-LSTM
by Chengming Chen, Xuejun Ren and Guoqing Cheng
Algorithms 2024, 17(6), 250; https://doi.org/10.3390/a17060250 - 7 Jun 2024
Abstract
A Distributed Elevator Fault Diagnosis System (DEFDS) is developed to tackle frequent malfunctions stemming from the widespread distribution and aging of elevator systems. Due to the complexity of elevator fault data and the subtlety of fault characteristics, traditional methods such as visual inspections and basic operational tests fall short in detecting early signs of mechanical wear and electrical issues. These conventional techniques often fail to recognize subtle fault characteristics, necessitating more advanced diagnostic tools. In response, this paper introduces a Principal Component Analysis–Long Short-Term Memory (PCA-LSTM) method for fault diagnosis. The distributed system decentralizes the fault diagnosis process to individual elevator units, utilizing PCA’s feature selection capabilities in high-dimensional spaces to extract and reduce the dimensionality of fault features. Subsequently, the LSTM model is employed for fault prediction. Elevator models within the system exchange data to refine and optimize a global prediction model. The efficacy of this approach is substantiated through empirical validation with actual data, achieving an accuracy rate of 90% and thereby confirming the method’s effectiveness in facilitating distributed elevator fault diagnosis. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
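As a minimal sketch of the PCA step only (not the paper's DEFDS pipeline; the toy data and the power-iteration method are assumptions), the leading principal direction of centered sensor features can be found as:

```python
def pca_first_component(X, iters=100):
    """Leading principal component via power iteration on the sample
    covariance matrix: the dimensionality-reduction step before the LSTM."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - mean[j] for j in range(d)] for row in X]
    cov = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy "sensor" data whose variance lies almost entirely along the first axis.
X = [[1, 0.0], [2, 0.0], [3, 0.0], [4, 0.1]]
v = pca_first_component(X)
```

Projecting the fault features onto the top few such components is what feeds the LSTM a compact input.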
28 pages, 859 KiB  
Article
Simulation of Calibrated Complex Synthetic Population Data with XGBoost
by Johannes Gussenbauer, Matthias Templ, Siro Fritzmann and Alexander Kowarik
Algorithms 2024, 17(6), 249; https://doi.org/10.3390/a17060249 - 6 Jun 2024
Abstract
Synthetic data generation methods are used to transform the original data into privacy-compliant synthetic copies (twin data). With our proposed approach, synthetic data can be simulated in the same size as the input data or in any size, and in the case of finite populations, even the entire population can be simulated. The proposed XGBoost-based method is compared with known model-based approaches to generate synthetic data using a complex survey data set. The XGBoost method shows strong performance, especially with synthetic categorical variables, and outperforms other tested methods. Furthermore, the structure and relationship between variables are well preserved. The tuning of the parameters is performed automatically by a modified k-fold cross-validation. If exact population margins are known, e.g., cross-tabulated population counts on age class, gender and region, the synthetic data must be calibrated to those known population margins. For this purpose, we have implemented a simulated annealing algorithm that is able to use multiple population margins simultaneously to post-calibrate a synthetic population. The algorithm is, thus, able to calibrate simulated population data containing cluster and individual information, e.g., about persons in households, at both person and household level. Furthermore, the algorithm is efficiently implemented so that the adjustment of populations with many millions or more persons is possible. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
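The post-calibration idea can be illustrated with a toy version (a sketch under strong assumptions: a single categorical margin, record swaps with a donor pool, and a linear cooling schedule; the paper's algorithm handles multiple margins and household structure simultaneously):

```python
import math
import random

def calibrate(pop, pool, margins, key, iters=5000, t0=1.0, seed=1):
    """Post-calibrate a synthetic population toward known margins by swapping
    records with a donor pool; worse swaps are accepted with a
    temperature-controlled probability (simulated annealing)."""
    rng = random.Random(seed)

    def error(p):
        counts = {}
        for rec in p:
            g = key(rec)
            counts[g] = counts.get(g, 0) + 1
        return sum(abs(counts.get(g, 0) - m) for g, m in margins.items())

    cur = error(pop)
    for step in range(iters):
        t = t0 * (1 - step / iters) + 1e-9            # linear cooling schedule
        i, j = rng.randrange(len(pop)), rng.randrange(len(pool))
        pop[i], pool[j] = pool[j], pop[i]             # propose a swap
        new = error(pop)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                                 # accept the swap
        else:
            pop[i], pool[j] = pool[j], pop[i]         # revert it
    return pop, cur

# Toy margin: the calibrated population of 10 should contain 6 "m" and 4 "f".
pop = [{"sex": "m"} for _ in range(10)]
pool = [{"sex": "f"} for _ in range(10)]
pop, err = calibrate(pop, pool, {"m": 6, "f": 4}, key=lambda r: r["sex"])
```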
25 pages, 1790 KiB  
Article
A Non-Gradient and Non-Iterative Method for Mapping 3D Mesh Objects Based on a Summation of Dependent Random Values
by Ihar Volkau, Sergei Krasovskii, Abdul Mujeeb and Helen Balinsky
Algorithms 2024, 17(6), 248; https://doi.org/10.3390/a17060248 - 6 Jun 2024
Abstract
The manuscript presents a novel non-gradient and non-iterative method for mapping two 3D objects by matching extrema. This innovative approach utilizes the amplification of extrema through the summation of dependent random values, accompanied by a comprehensive explanation of the statistical background. The method further incorporates structural patterns based on spherical harmonic functions to calculate the rotation matrix, enabling the juxtaposition of the objects. Without utilizing gradients and iterations to improve the solution step by step, the proposed method generates a limited number of candidates, and the mapping (if it exists) is necessarily among the candidates. For instance, this method holds potential for object analysis and identification in additive manufacturing for 3D printing and protein matching. Full article
16 pages, 5093 KiB  
Article
New Multi-View Feature Learning Method for Accurate Antifungal Peptide Detection
by Sayeda Muntaha Ferdous, Shafayat Bin Shabbir Mugdha and Iman Dehzangi
Algorithms 2024, 17(6), 247; https://doi.org/10.3390/a17060247 - 6 Jun 2024
Abstract
Antimicrobial resistance, particularly the emergence of resistant strains in fungal pathogens, has become a pressing global health concern. Antifungal peptides (AFPs) have shown great potential as a promising alternative therapeutic strategy due to their inherent antimicrobial properties and potential application in combating fungal infections. However, the identification of antifungal peptides using experimental approaches is time-consuming and costly. Hence, there is a demand to propose fast and accurate computational approaches to identifying AFPs. This paper introduces a novel multi-view feature learning (MVFL) model, called AFP-MVFL, for accurate AFP identification, utilizing multi-view feature learning. By integrating the sequential and physicochemical properties of amino acids and employing a multi-view approach, the AFP-MVFL model significantly enhances prediction accuracy. It achieves 97.9%, 98.4%, 0.98, and 0.96 in terms of accuracy, precision, F1 score, and Matthews correlation coefficient (MCC), respectively, outperforming previous studies found in the literature. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
13 pages, 346 KiB  
Article
Minimizing Query Frequency to Bound Congestion Potential for Moving Entities at a Fixed Target Time
by William Evans and David Kirkpatrick
Algorithms 2024, 17(6), 246; https://doi.org/10.3390/a17060246 - 6 Jun 2024
Abstract
Consider a collection of entities moving continuously with bounded speed, but otherwise unpredictably, in some low-dimensional space. Two such entities encroach upon one another at a fixed time if their separation is less than some specified threshold. Encroachment, of concern in many settings such as collision avoidance, may be unavoidable. However, the associated difficulties are compounded if there is uncertainty about the precise location of entities, giving rise to potential encroachment and, more generally, potential congestion within the full collection. We adopt a model in which entities can be queried for their current location (at some cost) and the uncertainty region associated with an entity grows in proportion to the time since that entity was last queried. The goal is to maintain low potential congestion, measured in terms of the (dynamic) intersection graph of uncertainty regions, at specified (possibly all) times, using the lowest possible query cost. Previous work in the same uncertainty model addressed the problem of minimizing the congestion potential of point entities using location queries of some bounded frequency. It was shown that it is possible to design query schemes that are O(1)-competitive, in terms of worst-case congestion potential, with other, even clairvoyant query schemes (that exploit knowledge of the trajectories of all entities), subject to the same bound on query frequency. In this paper, we initiate the treatment of a more general problem with the complementary optimization objective: minimizing the query frequency, measured as the reciprocal of the minimum time between queries (granularity), while guaranteeing a fixed bound on congestion potential of entities with positive extent at one specified target time. This complementary objective necessitates quite different schemes and analyses. Nevertheless, our results parallel those of the earlier papers, specifically tight competitive bounds on required query frequency. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From FCT 2023)
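The uncertainty model can be made concrete with a small sketch (an illustration of the model only, not the paper's query schemes; one-dimensional positions and a unit speed bound are assumptions): each entity's uncertainty radius grows linearly with the time since its last query, and two entities potentially encroach when those regions could bring them within the threshold.

```python
def potential_congestion(positions, last_query, now, speed=1.0, threshold=1.0):
    """Count potentially encroaching pairs: the uncertainty radius of each
    entity grows in proportion to the time since it was last queried."""
    count = 0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r_i = speed * (now - last_query[i])
            r_j = speed * (now - last_query[j])
            if abs(positions[i] - positions[j]) - (r_i + r_j) < threshold:
                count += 1
    return count

# Two entities 5 apart, both queried at t = 0: no potential encroachment
# at t = 1, but by t = 3 the grown uncertainty regions may overlap.
early = potential_congestion([0.0, 5.0], [0.0, 0.0], now=1.0)
late = potential_congestion([0.0, 5.0], [0.0, 0.0], now=3.0)
```

A query scheme then chooses which entity to query (resetting its radius to zero) so as to keep this count bounded at the target time.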
18 pages, 3005 KiB  
Article
A Modified Analytic Hierarchy Process Suitable for Online Survey Preference Elicitation
by Sean Pascoe, Anna Farmery, Rachel Nichols, Sarah Lothian and Kamal Azmi
Algorithms 2024, 17(6), 245; https://doi.org/10.3390/a17060245 - 6 Jun 2024
Abstract
A key component of multi-criteria decision analysis is the estimation of criteria weights, reflecting the preference strength of different stakeholder groups related to different objectives. One common method is the Analytic Hierarchy Process (AHP). A key challenge with the AHP is the potential for inconsistency in responses, resulting in potentially unreliable preference weights. In small groups, interactions between analysts and respondents can compensate for this through reassessment of inconsistent responses. In many cases, however, stakeholders may be geographically dispersed, with online surveys being a more cost-effective means to elicit these preferences, making renegotiating with inconsistent respondents impossible. Further, the potentially large number of bivariate comparisons required using the AHP may adversely affect response rates. In this study, we test a new “modified” AHP (MAHP). The MAHP was designed to retain the key desirable features of the AHP but be more amenable to online surveys, reduce the problem of inconsistencies, and require substantially fewer comparisons. The MAHP is tested using three groups of university students through an online survey platform, along with a “traditional” AHP approach. The results indicate that the MAHP can provide statistically equivalent outcomes to the AHP but without problems arising due to inconsistencies. Full article
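For context on the weights being elicited, here is a generic AHP sketch (not the MAHP itself; the row-geometric-mean prioritization and Saaty's random consistency indices are standard but assumed here): weights come from a pairwise comparison matrix, and the consistency ratio flags unreliable responses.

```python
import math

def ahp_weights(M):
    """Priority weights from a pairwise comparison matrix via the row
    geometric mean, normalized to sum to one."""
    n = len(M)
    gm = [math.prod(row) ** (1 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M, random_index={3: 0.58, 4: 0.90, 5: 1.12}):
    """Saaty's consistency ratio, with lambda_max estimated from M @ w."""
    n = len(M)
    w = ahp_weights(M)
    mw = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(mw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / random_index[n]

# A perfectly consistent 3x3 matrix built from true weights 1 : 2 : 4.
M = [[1, 1 / 2, 1 / 4],
     [2, 1, 1 / 2],
     [4, 2, 1]]
w = ahp_weights(M)
```

Online respondents who cannot be renegotiated with would be flagged when the consistency ratio exceeds the conventional 0.1 cutoff; the MAHP is designed to avoid generating such inconsistencies in the first place.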
18 pages, 3670 KiB  
Article
Automated Recommendation of Aggregate Visualizations for Crowdfunding Data
by Mohamed A. Sharaf, Heba Helal, Nazar Zaki, Wadha Alketbi, Latifa Alkaabi, Sara Alshamsi and Fatmah Alhefeiti
Algorithms 2024, 17(6), 244; https://doi.org/10.3390/a17060244 - 6 Jun 2024
Abstract
Analyzing crowdfunding data has been the focus of many research efforts, where analysts typically explore this data to identify the main factors and characteristics of the lending process as well as to discover unique patterns and anomalies in loan distributions. However, the manual exploration and visualization of such data is clearly an ad hoc, time-consuming, and labor-intensive process. Hence, in this work, we propose LoanVis, which is an automated solution for discovering and recommending those valuable and insightful visualizations. LoanVis is a data-driven system that utilizes objective metrics to quantify the “interestingness” of a visualization and employs such metrics in the recommendation process. We demonstrate the effectiveness of LoanVis in analyzing and exploring different aspects of the Kiva crowdfunding dataset. Full article
(This article belongs to the Special Issue Recommendations with Responsibility Constraints)
18 pages, 3521 KiB  
Article
Training of Convolutional Neural Networks for Image Classification with Fully Decoupled Extended Kalman Filter
by Armando Gaytan, Ofelia Begovich-Mendoza and Nancy Arana-Daniel
Algorithms 2024, 17(6), 243; https://doi.org/10.3390/a17060243 - 6 Jun 2024
Abstract
First-order algorithms have long dominated the training of deep neural networks, excelling in tasks like image classification and natural language processing. Now there is a compelling opportunity to explore alternatives that could outperform current state-of-the-art results. From estimation theory, the Extended Kalman Filter (EKF) arose as a viable alternative and has shown advantages over backpropagation methods. Current computational advances offer the opportunity to revisit algorithms derived from the EKF, which have been almost excluded from the training of convolutional neural networks. This article revisits a decoupled EKF approach and applies the Fully Decoupled Extended Kalman Filter (FDEKF) to training convolutional neural networks for image classification tasks. The FDEKF is a second-order algorithm with some advantages over first-order algorithms: it can lead to faster convergence and higher accuracy, due to a higher probability of finding the global optimum. In this research, experiments are conducted on well-known datasets that include Fashion, Sports, and Handwritten Digits images. The FDEKF shows faster convergence than other algorithms such as the popular Adam optimizer, the sKAdam algorithm, and the reduced extended Kalman filter. Finally, motivated by the finding of the highest accuracy of the FDEKF with images of natural scenes, we show its effectiveness in a further experiment focused on outdoor terrain recognition. Full article
(This article belongs to the Special Issue Machine Learning in Pattern Recognition)
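The decoupling idea can be shown on the smallest possible case (a sketch, not the paper's FDEKF for CNNs; the single sigmoid neuron, the noise settings r and q, and the AND task are all illustrative assumptions). Full decoupling means each weight carries its own scalar covariance, so the update cost is linear in the number of weights rather than quadratic:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def fdekf_step(w, P, x, y, r=0.5, q=1e-4):
    """One fully decoupled EKF update for a single sigmoid neuron:
    each weight w[i] keeps its own scalar covariance P[i]."""
    yhat = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    h = [yhat * (1 - yhat) * xi for xi in x]      # Jacobian d(yhat)/d(w_i)
    for i in range(len(w)):
        s = h[i] * P[i] * h[i] + r                # scalar innovation variance
        k = P[i] * h[i] / s                       # scalar Kalman gain
        w[i] += k * (y - yhat)
        P[i] = P[i] - k * h[i] * P[i] + q         # covariance update + process noise
    return w, P

# Illustrative task: learn AND (the third input is a constant bias of 1).
data = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
w, P = [0.0, 0.0, 0.0], [100.0, 100.0, 100.0]
for _ in range(500):
    for x, y in data:
        w, P = fdekf_step(w, P, x, y)
preds = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for x, _ in data]
```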
24 pages, 1150 KiB  
Article
A Comparative Study of Machine Learning Methods and Text Features for Text Authorship Recognition in the Example of Azerbaijani Language Texts
by Rustam Azimov and Efthimios Providas
Algorithms 2024, 17(6), 242; https://doi.org/10.3390/a17060242 - 5 Jun 2024
Abstract
This paper explores and evaluates various machine learning methods with different text features to determine the authorship of texts, using the Azerbaijani language as an example. We consider techniques such as artificial neural networks, convolutional neural networks, random forests, and support vector machines. These techniques are used with different text features such as word length, sentence length, combined word and sentence length, n-grams, and word frequencies. The models were trained and tested on the works of many famous Azerbaijani writers. The results of computer experiments, obtained by comparing the various techniques and text features, were analyzed, and the cases where particular text features yielded better results were determined. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
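The n-gram and word-frequency setting described in the abstract can be sketched with a generic text-classification pipeline (the mini-corpus below is hypothetical English filler, not the study's Azerbaijani data, and the model choice is only one of the techniques compared):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical two-author mini-corpus standing in for the Azerbaijani texts
author_a = ["the sea was calm and the ship sailed on",
            "the ship left the harbour before the storm",
            "waves broke against the ship in the grey sea"]
author_b = ["profits rose sharply as markets recovered",
            "the bank reported strong quarterly profits",
            "investors watched the markets with caution"]

texts = author_a + author_b
labels = [0] * len(author_a) + [1] * len(author_b)

# Word n-grams (here unigrams and bigrams) as features, as in the n-gram setting
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

print(model.predict(["the ship fought the sea all night",
                     "markets and profits worried investors"]))
```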
20 pages, 368 KiB  
Article
Fitness Landscape Analysis of Product Unit Neural Networks
by Andries Engelbrecht and Robert Gouldie 
Algorithms 2024, 17(6), 241; https://doi.org/10.3390/a17060241 - 4 Jun 2024
Viewed by 147
Abstract
A fitness landscape analysis of the loss surfaces produced by product unit neural networks is performed in order to gain a better understanding of the impact of product units on the characteristics of these loss surfaces. The loss surface characteristics of product unit neural networks are then compared to those of loss surfaces produced by neural networks that use summation units. The failure of certain optimization algorithms in training product unit neural networks is explained through trends observed between loss surface characteristics and optimization algorithm performance. The paper shows that the loss surfaces of product unit neural networks have extremely large gradients, with many deep ravines and valleys, which explains why gradient-based optimization algorithms fail to train these neural networks. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning (2nd Edition))
16 pages, 3410 KiB  
Article
Feature Extraction Based on Sparse Coding Approach for Hand Grasp Type Classification
by Jirayu Samkunta, Patinya Ketthong, Nghia Thi Mai, Md Abdus Samad Kamal, Iwanori Murakami and Kou Yamada
Algorithms 2024, 17(6), 240; https://doi.org/10.3390/a17060240 - 3 Jun 2024
Viewed by 135
Abstract
The kinematics of the human hand exhibit complex and diverse characteristics unique to each individual. Various techniques such as vision-based, ultrasonic-based, and data-glove-based approaches have been employed to analyze human hand movements. However, a critical challenge remains in efficiently analyzing and classifying hand grasp types based on time-series kinematic data. In this paper, we propose a novel sparse coding feature extraction technique based on dictionary learning to address this challenge. Our method enhances model accuracy, reduces training time, and minimizes overfitting risk. We benchmarked our approach against principal component analysis (PCA) and sparse coding based on a Gaussian random dictionary. Our results demonstrate a significant improvement in classification accuracy: our method achieves 81.78%, compared to 31.43% for PCA and 77.27% for the Gaussian random dictionary. Furthermore, our technique outperforms both baselines in terms of macro-average F1-score and average area under the curve (AUC), while also significantly reducing the number of features required. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
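The dictionary-learning step behind such a sparse coding feature extractor can be sketched generically (random data stands in for the kinematic time-series features; the dictionary size and sparsity penalty are our assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(1)
# Stand-in for windows of time-series kinematic data (e.g. joint angles)
X = rng.normal(size=(60, 20))

# Learn a dictionary of atoms; each sample is then represented by a sparse
# code over those atoms, and the codes serve as features for the classifier.
dico = DictionaryLearning(n_components=10, transform_algorithm="lasso_lars",
                          transform_alpha=1.0, max_iter=20, random_state=0)
codes = dico.fit_transform(X)

print(codes.shape)                 # (60, 10): one sparse code per sample
print(float((codes == 0).mean()))  # fraction of zero coefficients
```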
14 pages, 1059 KiB  
Article
Linear System Identification-Oriented Optimal Tampering Attack Strategy and Implementation Based on Information Entropy with Multiple Binary Observations
by Zhongwei Bai, Peng Yu, Yan Liu and Jin Guo
Algorithms 2024, 17(6), 239; https://doi.org/10.3390/a17060239 - 3 Jun 2024
Viewed by 168
Abstract
With the rapid development of computer, communication, and control technologies, cyber-physical systems (CPSs) have been widely used and developed. However, CPSs involve massive information interactions, which increase the amount of data transmitted over the network. Once this data communication is attacked, the security and stability of the system are seriously affected. In this paper, for a data tampering attack on a linear system with multiple binary observations, and in the case where the defender's estimation algorithm is unknown, an optimization index is constructed from the attacker's point of view based on information entropy, and the problem is modeled accordingly. For this multi-parameter optimization problem with energy constraints, particle swarm optimization (PSO) is used to obtain the optimal set of data tampering attack solutions, and an estimation method is given for the case of unknown parameters. To improve real-time performance in online implementation, a BP neural network is designed. Finally, the validity of the conclusions is verified through numerical simulation. This means that the attacker can construct effective metrics based on information entropy without knowledge of the defender's discrimination algorithm. In addition, the optimal attack strategy implementation based on PSO and the BP network is also effective. Full article
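A minimal gbest PSO with an energy budget handled by a penalty term can be sketched as follows (the objective below is a generic multimodal stand-in for the paper's information-entropy index, and all constants, the energy proxy, and the budget are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def attack_index(theta):
    # Stand-in for the entropy-based optimization index: smooth and multimodal
    return np.sum(theta ** 2 - 3 * np.cos(2 * np.pi * theta) + 3, axis=-1)

def energy(theta):
    return np.sum(np.abs(theta), axis=-1)  # simple attack-energy proxy

budget = 1.5
def penalized(theta):
    # Energy constraint folded into the objective as a large penalty
    return attack_index(theta) + 1e3 * np.maximum(energy(theta) - budget, 0.0)

n, dim, iters = 30, 4, 200
pos = rng.uniform(-2, 2, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = penalized(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = penalized(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(np.round(gbest, 2))
```

The swarm is driven toward low index values while the penalty keeps the best solution inside the energy budget.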
19 pages, 1087 KiB  
Article
Simple Histogram Equalization Technique Improves Performance of VGG Models on Facial Emotion Recognition Datasets
by Jaher Hassan Chowdhury, Qian Liu and Sheela Ramanna
Algorithms 2024, 17(6), 238; https://doi.org/10.3390/a17060238 - 3 Jun 2024
Viewed by 426
Abstract
Facial emotion recognition (FER) is crucial across psychology, neuroscience, computer vision, and machine learning due to the diversified and subjective nature of emotions, which vary considerably across individuals, cultures, and contexts. This study explored FER using convolutional neural networks (CNNs) and histogram equalization techniques. It investigated the impact of histogram equalization, data augmentation, and various model optimization strategies on FER accuracy across datasets including KDEF, CK+, and FER2013. Using pre-trained VGG architectures, namely VGG19 and VGG16, this study also examined the effectiveness of fine-tuning hyperparameters and implementing different learning rate schedulers. The evaluation encompassed diverse metrics, including accuracy, Area Under the Receiver Operating Characteristic Curve (AUC-ROC), Area Under the Precision–Recall Curve (AUC-PRC), and weighted F1 score. Notably, the fine-tuned VGG architectures demonstrated state-of-the-art performance compared to conventional transfer learning models, achieving accuracies of 100%, 95.92%, and 69.65% on the CK+, KDEF, and FER2013 datasets, respectively. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
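Histogram equalization itself is a short, standard transform; a NumPy version applied to a synthetic low-contrast patch looks like this (the VGG training pipeline is not reproduced, and the patch is random filler rather than face data):

```python
import numpy as np

def equalize_hist(img):
    """Classic histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level so the output CDF is approximately uniform
    lut = np.clip(np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min)),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast synthetic patch: values squeezed into [100, 140]
rng = np.random.default_rng(3)
img = rng.integers(100, 141, size=(48, 48), dtype=np.uint8)
eq = equalize_hist(img)

print(img.min(), img.max())   # 100 140 (narrow input range)
print(eq.min(), eq.max())     # 0 255 (stretched to the full range)
```

Stretching the intensity range in this way is what gives the CNN more contrast to work with before augmentation and training.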
14 pages, 341 KiB  
Article
Competitive Analysis of Algorithms for an Online Distribution Problem
by Alessandro Barba, Luca Bertazzi and Bruce L. Golden
Algorithms 2024, 17(6), 237; https://doi.org/10.3390/a17060237 - 3 Jun 2024
Viewed by 117
Abstract
We study an online distribution problem in which a producer has to send a load from an origin to a destination. At each time period before the deadline, they ask for transportation price quotes and must decide whether to accept the minimum offered price. If this price is not accepted, they pay a penalty cost, which may be the cost of asking for new quotes, the penalty cost for a late delivery, or the inventory cost of storing the load for a certain duration. The aim is to minimize the sum of the transportation and penalty costs. This problem has interesting real-world applications, given that transportation quotes can nowadays be obtained from professional websites. We show that the classical online algorithm used to solve the well-known Secretary problem is not able to provide, on average, effective solutions to our problem, given the trade-off between the transportation and penalty costs. Therefore, we design two classes of online algorithms. The first class is based on a given time of acceptance, while the second is based on a given threshold price. We formally prove the competitive ratio of each algorithm, i.e., the worst-case performance of the online algorithm with respect to the optimal solution of the offline problem, in which all transportation prices are known at the beginning rather than being revealed over time. The computational results show the algorithms' performance on average and in the worst case when the transportation prices are generated on the basis of given probability distributions. Full article
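A threshold-price policy of the second class can be simulated against the offline optimum (the uniform price distribution, the per-period penalty, the threshold value, and the offline cost model below are our assumptions for illustration, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(4)

def threshold_policy(prices, tau, penalty):
    """Accept the first quote at or below tau, paying `penalty` per refusal;
    the last quote before the deadline must be accepted."""
    cost = 0.0
    for t, p in enumerate(prices):
        if p <= tau or t == len(prices) - 1:
            return cost + p
        cost += penalty

T, penalty, trials = 10, 1.0, 2000
online, offline = 0.0, 0.0
for _ in range(trials):
    prices = rng.uniform(20, 40, size=T)
    online += threshold_policy(prices, tau=25.0, penalty=penalty)
    # Offline optimum: with all prices known upfront, accept at the period
    # minimizing price plus the waiting penalties incurred until then.
    offline += min(p + t * penalty for t, p in enumerate(prices))

ratio = online / offline
print(round(ratio, 2))
```

Since the offline minimum always includes the period the online policy chose, the empirical ratio is at least 1, and it gives a feel for the average-case gap the abstract refers to.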
24 pages, 1023 KiB  
Article
Hybrid Machine Learning Algorithms to Evaluate Prostate Cancer
by Dimitrios Morakis and Adam Adamopoulos
Algorithms 2024, 17(6), 236; https://doi.org/10.3390/a17060236 - 2 Jun 2024
Viewed by 304
Abstract
The adequacy and efficacy of simple and hybrid machine learning and Computational Intelligence algorithms were evaluated for the classification of potential prostate cancer patients into two distinct categories: the high-risk and the low-risk group for PCa. The evaluation is based on randomly generated surrogate data for the biomarker PSA, considering that reported epidemiological data indicate that PSA values follow a lognormal distribution. In addition, four more biomarkers were considered, namely PSA density (PSAD), PSA velocity (PSAV), PSA ratio, and Digital Rectal Exam evaluation (DRE), as well as patient age. Seven simple classification algorithms, namely Decision Trees, Random Forests, Support Vector Machines, K-Nearest Neighbors, Logistic Regression, Naïve Bayes, and Artificial Neural Networks, were evaluated in terms of classification accuracy. In addition, three hybrid algorithms were developed and introduced in the present work, in which Genetic Algorithms were utilized as a metaheuristic search technique to minimize the size of the training set while retaining optimal classification accuracy for the simple algorithms, combined with K-Nearest Neighbors, a K-means clustering algorithm, and a genetic clustering algorithm. Results indicated that prostate cancer cases can be classified with high accuracy, even with small training sets, whose sizes can be smaller than 30% of the dataset. Numerous computer experiments indicated that the proposed training set minimization does not cause overfitting of the hybrid algorithms. Finally, an easy-to-use Graphical User Interface (GUI) was implemented, incorporating all the evaluated algorithms and the decision-making procedure. Full article
(This article belongs to the Special Issue Hybrid Intelligent Algorithms)
25 pages, 3788 KiB  
Article
A Comprehensive Exploration of Unsupervised Classification in Spike Sorting: A Case Study on Macaque Monkey and Human Pancreatic Signals
by Francisco Javier Iñiguez-Lomeli, Edgar Eliseo Franco-Ortiz, Ana Maria Silvia Gonzalez-Acosta, Andres Amador Garcia-Granada and Horacio Rostro-Gonzalez
Algorithms 2024, 17(6), 235; https://doi.org/10.3390/a17060235 - 30 May 2024
Viewed by 201
Abstract
Spike sorting, an indispensable process in the analysis of neural biosignals, aims to segregate individual action potentials from mixed recordings. This study presents a comprehensive investigation of diverse unsupervised classification algorithms, some of which, to the best of our knowledge, have not previously been used for spike sorting. The methods encompass Principal Component Analysis (PCA), K-means, Self-Organizing Maps (SOMs), and hierarchical clustering. The research draws insights from both macaque monkey and human pancreatic signals, providing an evaluation across species. We applied these methods to sort 327 detected spikes within an in vivo signal of a macaque monkey and 386 detected spikes within an in vitro signal of a human pancreas, classifying statistical features extracted from these spikes. We initiated our analysis with K-means, employing both unmodified and normalized versions of the features. To enhance the performance of this algorithm, we also employed PCA to reduce the dimensionality of the data, leading to more distinct groupings identified by K-means. Furthermore, two additional techniques, hierarchical clustering and Self-Organizing Maps, were also explored and demonstrated favorable outcomes for both signal types. Across all scenarios, a consistent observation emerged: the identification of six distinctive groups of spikes, each characterized by a distinct shape, within both signal sets. We present and thoroughly analyze the experimental outcomes yielded by each of the employed algorithms in order to provide a nuanced understanding of their efficacy and performance in the context of spike sorting. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms (2nd Edition))
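The normalization, PCA, and K-means stage of such a pipeline can be sketched on synthetic spike waveforms (six noisy templates stand in for the recorded spikes; the template shapes, noise level, and component count are assumptions, not the study's data):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

# Synthetic stand-in for detected spike waveforms: six template shapes plus
# noise, mimicking the six spike groups reported for both signal types.
t = np.linspace(0, 1, 32)
templates = [np.sin(2 * np.pi * (k + 1) * t) * np.exp(-4 * t) for k in range(6)]
spikes = np.vstack([tpl + 0.05 * rng.normal(size=32)
                    for tpl in templates for _ in range(50)])

# Normalize the features, reduce dimensionality with PCA, then cluster
X = StandardScaler().fit_transform(spikes)
X_low = PCA(n_components=3).fit_transform(X)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X_low)

print(len(np.unique(labels)))  # number of spike groups found
```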
29 pages, 767 KiB  
Article
Unleashing the Power of Tweets and News in Stock-Price Prediction Using Machine-Learning Techniques
by Hossein Zolfagharinia, Mehdi Najafi, Shamir Rizvi and Aida Haghighi
Algorithms 2024, 17(6), 234; https://doi.org/10.3390/a17060234 - 28 May 2024
Viewed by 380
Abstract
Price prediction tools play a significant role in small investors’ behavior. As such, this study aims to propose a method to more effectively predict stock prices in North America. Chiefly, the study addresses crucial questions related to the relevance of news and tweets in stock-price prediction and highlights the potential value of considering such parameters in algorithmic trading strategies, particularly during times of market panic. To this end, we develop innovative multi-layer perceptron (MLP) and long short-term memory (LSTM) neural networks to investigate the influence of Twitter count (TC) and news count (NC) variables on stock-price prediction under both normal and market-panic conditions. To capture the impact of these variables, we integrate technical variables with TC and NC and evaluate the prediction accuracy across different model types. We use Bloomberg Twitter count and news publication count variables for North American stock-price prediction and integrate them into MLP and LSTM neural networks to evaluate their impact during the market panic. The results show improved prediction accuracy, promising significant benefits for traders and investors. This strategic integration reflects a nuanced understanding of market sentiment derived from public opinion on platforms like Twitter. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Swarm Systems)
16 pages, 4902 KiB  
Article
Data-Driven Load Frequency Control for Multi-Area Power System Based on Switching Method under Cyber Attacks
by Guangqiang Tian and Fuzhong Wang
Algorithms 2024, 17(6), 233; https://doi.org/10.3390/a17060233 - 27 May 2024
Viewed by 376
Abstract
This paper introduces an innovative method for load frequency control (LFC) in multi-area interconnected power systems vulnerable to denial-of-service (DoS) attacks. The system is modeled as a switching system with two subsystems, and an adaptive control algorithm is developed. Initially, a dynamic linear data model is used to model each subsystem. Next, a model-free adaptive control strategy is introduced to maintain frequency stability in the multi-area interconnected power system, even during DoS attacks. A rigorous stability analysis of the power system is performed, and the effectiveness of the proposed approach is demonstrated by applying it to a three-area interconnected power system. Full article
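A compact-form model-free adaptive control loop of the kind used in such data-driven LFC schemes can be sketched on a scalar toy plant (the plant, gains, and pseudo-partial-derivative update constants below are our assumptions; the paper's multi-area switching system under DoS attacks is not reproduced):

```python
import numpy as np

# Model-free adaptive control (MFAC) sketch: the controller uses only I/O
# data, estimating a pseudo-partial derivative (PPD) phi online.
def simulate(T=200, y_ref=1.0):
    eta, mu = 0.5, 1.0    # PPD estimator gains
    rho, lam = 0.6, 1.0   # controller gains
    phi = 1.0             # PPD estimate
    u_prev = y_prev = u = y = 0.0
    traj = []
    for _ in range(T):
        du = u - u_prev
        # PPD update from the latest I/O increment
        if abs(du) > 1e-8:
            phi += eta * du / (mu + du * du) * (y - y_prev - phi * du)
        u_prev, y_prev = u, y
        # MFAC control law driving the output toward the reference
        u = u + rho * phi / (lam + phi * phi) * (y_ref - y)
        # Unknown "plant": here a simple stable first-order system
        y = 0.8 * y + 0.4 * u
        traj.append(y)
    return np.array(traj)

traj = simulate()
print(round(float(traj[-1]), 3))
```

The controller needs no plant model, only the measured input and output increments, which is the property that makes such schemes attractive when DoS attacks disrupt model information.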
24 pages, 3149 KiB  
Article
A Multi-Process System for Investigating Inclusive Design in User Interfaces for Low-Income Countries
by Yann Méhat, Sylvain Sagot, Egon Ostrosi and Dominique Deuff
Algorithms 2024, 17(6), 232; https://doi.org/10.3390/a17060232 - 27 May 2024
Viewed by 400
Abstract
Limited understanding exists regarding the methodologies behind designing interfaces for low-income contexts, despite acknowledgment of their potential value. The ERSA (Engineering design Research meta-model based Systematic Analysis) process, defined as a dynamic interactive multi-process system, proposes a new approach to building the knowledge needed to design interfaces for low-income countries successfully. ERSA is developed by integrating database searches, snowballing, and thematic similarity searches for creating a corpus of literature, together with multilayer networks, clustering algorithms, and data processing. ERSA employs an engineering design meta-model to analyze the corpus of literature, facilitating the identification of diverse methodological approaches. The insights from ERSA empower researchers, designers, and engineers to tailor design methodologies to their specific low-income contexts. Our findings show the importance of adopting more versatile and holistic approaches. They suggest that user-based design methodologies and computational design can be defined and theorized together. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
21 pages, 3686 KiB  
Article
Prediction of Customer Churn Behavior in the Telecommunication Industry Using Machine Learning Models
by Victor Chang, Karl Hall, Qianwen Ariel Xu, Folakemi Ololade Amao, Meghana Ashok Ganatra and Vladlena Benson
Algorithms 2024, 17(6), 231; https://doi.org/10.3390/a17060231 - 27 May 2024
Viewed by 496
Abstract
Customer churn is a significant concern, and the telecommunications industry has the largest annual churn rate of any major industry, at over 30%. This study examines the use of ensemble learning models to analyze and forecast customer churn in the telecommunications business. Accurate churn forecasting is essential for successful client retention initiatives to combat regular customer churn. We used innovative and improved machine learning methods, including Decision Trees, Boosted Trees, and Random Forests, to enhance model interpretability and prediction accuracy. The models were trained and evaluated systematically by using a large dataset. The Random Forest model performed best, with 91.66% predictive accuracy, 82.2% precision, and 81.8% recall. Our results highlight how well the model can identify possible churners, allowing for focused and timely intervention strategies. To improve the transparency of the classifier’s decisions, this study also employs explainable AI (XAI) methods, namely LIME and SHAP, to illustrate the results of the customer churn prediction model. Our results demonstrate how crucial it is for customer relationship managers to implement strong analytical tools to reduce attrition and promote long-term economic viability in fiercely competitive marketplaces. In addition to confirming their performance, this study indicates that ensemble learning models have strategic implications for improving consumer loyalty and organizational profitability. Full article
11 pages, 925 KiB  
Article
Mitigating Co-Activity Conflicts and Resource Overallocation in Construction Projects: A Modular Heuristic Scheduling Approach with Primavera P6 EPPM Integration
by Khwansiri Ninpan, Shuzhang Huang, Francesco Vitillo, Mohamad Ali Assaad, Lies Benmiloud Bechet and Robert Plana
Algorithms 2024, 17(6), 230; https://doi.org/10.3390/a17060230 - 24 May 2024
Viewed by 395
Abstract
This paper proposes a heuristic approach for managing complex construction projects. The tool incorporates Primavera P6 EPPM and Synchro 4D, enabling proactive clash detection and resolution of spatial conflicts during concurrent tasks. Additionally, it performs resource verification for sufficient allocation before task initiation. This integrated approach facilitates the generation of conflict-free and feasible construction schedules. By adhering to project constraints and seamlessly integrating with existing industry tools, the proposed solution offers a comprehensive and robust approach to construction project management. This constitutes, to our knowledge, the first dynamic digital twin for the delivery of a complex project. Full article
(This article belongs to the Special Issue Scheduling Theory and Algorithms for Sustainable Manufacturing)
16 pages, 1924 KiB  
Article
Employing a Convolutional Neural Network to Classify Sleep Stages from EEG Signals Using Feature Reduction Techniques
by Maadh Rajaa Mohammed and Ali Makki Sagheer
Algorithms 2024, 17(6), 229; https://doi.org/10.3390/a17060229 - 24 May 2024
Viewed by 300
Abstract
One of the most essential components of human life is sleep, and classifying sleep stages is one of the first steps in spotting sleep-related abnormalities. Based on the kind and frequency of signals obtained during a polysomnography test, sleep phases can be separated into groups. Accurate classification of sleep stages from electroencephalogram (EEG) signals plays a crucial role in sleep disorder diagnosis and treatment. This study proposes a novel approach that combines feature selection techniques with convolutional neural networks (CNNs) to enhance the classification performance of sleep stages using EEG signals. Firstly, a comprehensive feature selection process based on mutual information (MI) and analysis of variance (ANOVA) was employed to extract discriminative features from the raw EEG data, aiming to reduce dimensionality and enhance the efficiency of subsequent classification; the dataset was split into a training set (70%) and a testing set (30%) and then processed with the standard scaler method. Subsequently, a 1D-CNN architecture was designed to automatically learn hierarchical representations of the selected features, capturing complex patterns indicative of different sleep stages. The proposed method was evaluated on the publicly available EDF-Sleep dataset, demonstrating superior performance compared to traditional approaches and reaching an accuracy of 99.84% with MI-50. These results highlight the effectiveness of integrating feature selection with CNNs in improving the accuracy and reliability of sleep stage classification from EEG signals. This approach not only contributes to advancing the field of sleep disorder diagnosis but also holds promise for developing more efficient and robust clinical decision support systems. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (2nd Edition))
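The MI-based selection with a 70/30 split and standard scaling can be sketched with scikit-learn (synthetic data stands in for the EEG features; "MI-50" corresponds to keeping the top 50 features, and the ANOVA variant is the same call with a different score function):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for extracted EEG features
X, y = make_classification(n_samples=300, n_features=120, n_informative=15,
                           random_state=0)

# 70/30 split, then standard scaling fitted on the training set only
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Keep the top 50 features by mutual information ("MI-50" in the abstract);
# the ANOVA alternative uses score_func=f_classif instead
mi_sel = SelectKBest(mutual_info_classif, k=50).fit(X_tr, y_tr)
X_tr_50, X_te_50 = mi_sel.transform(X_tr), mi_sel.transform(X_te)

print(X_tr_50.shape, X_te_50.shape)   # (210, 50) (90, 50)
```

The reduced feature matrices would then feed the 1D-CNN classifier.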