Algorithms, Volume 17, Issue 6 (June 2024) – 47 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
14 pages, 9631 KiB  
Article
Semi-Self-Supervised Domain Adaptation: Developing Deep Learning Models with Limited Annotated Data for Wheat Head Segmentation
by Alireza Ghanbari, Gholam Hassan Shirdel and Farhad Maleki
Algorithms 2024, 17(6), 267; https://doi.org/10.3390/a17060267 - 17 Jun 2024
Viewed by 148
Abstract
Precision agriculture involves the application of advanced technologies to improve agricultural productivity, efficiency, and profitability while minimizing waste and environmental impacts. Deep learning approaches enable automated decision-making for many visual tasks. However, in the agricultural domain, variability in growth stages and environmental conditions, such as weather and lighting, presents significant challenges to developing deep-learning-based techniques that generalize across different conditions. The resource-intensive nature of creating extensive annotated datasets that capture these variabilities further hinders the widespread adoption of these approaches. To tackle these issues, we introduce a semi-self-supervised domain adaptation technique based on deep convolutional neural networks with a probabilistic diffusion process, requiring minimal manual data annotation. Using only three manually annotated images and a selection of video clips from wheat fields, we generated a large-scale computationally annotated dataset of image–mask pairs and a large dataset of unannotated images extracted from video frames. We developed a two-branch convolutional encoder–decoder model architecture that uses both synthesized image–mask pairs and unannotated images, enabling effective adaptation to real images. The proposed model achieved a Dice score of 80.7% on an internal test dataset and a Dice score of 64.8% on an external test set composed of images from five countries and spanning 18 domains, indicating its potential to develop generalizable solutions that could encourage the wider adoption of advanced technologies in agriculture. Full article
(This article belongs to the Special Issue Efficient Learning Algorithms with Limited Resources)
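
The Dice score used to report the results above has a simple closed form; the sketch below (not the authors' code) computes it for binary segmentation masks with NumPy.

```python
# Minimal Dice coefficient sketch for binary segmentation masks (illustration only).
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two 4x4 masks that overlap on three pixels.
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:2] = 1; b[1, 2] = 1
print(round(dice_score(a, b), 3))  # 0.857
```
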
23 pages, 826 KiB  
Article
Re-Orthogonalized/Affine GMRES and Orthogonalized Maximal Projection Algorithm for Solving Linear Systems
by Chein-Shan Liu, Chih-Wen Chang and Chung-Lun Kuo
Algorithms 2024, 17(6), 266; https://doi.org/10.3390/a17060266 - 15 Jun 2024
Viewed by 210
Abstract
GMRES is one of the most powerful and popular methods for solving linear systems in the Krylov subspace; we examine it from two viewpoints: maximizing the reduction in the length of the residual vector, and maintaining the orthogonality of consecutive residual vectors. A stabilization factor, η, measuring the deviation from orthogonality of the residual vectors is inserted into GMRES to preserve orthogonality automatically. The re-orthogonalized GMRES (ROGMRES) method guarantees absolute convergence even when orthogonality is gradually lost during the GMRES iterations. When η<1/2, the residual lengths of GMRES and GMRES(m) no longer decrease; hence, η<1/2 can be adopted as a stopping criterion for terminating the iterations. We prove that η=1 for the ROGMRES method: it automatically preserves orthogonality and maintains the maximal reduction in the length of the residual vector. We improve GMRES by seeking the descent vector that minimizes the residual in a larger space, the affine Krylov subspace. The resulting orthogonalized maximal projection algorithm (OMPA) is identified as having good performance. We further derive iterative formulas by extending the GMRES method to the affine Krylov subspace; these equations differ slightly from those derived by Saad and Schultz (1986). The affine GMRES method is combined with the orthogonalization technique to generate a powerful affine GMRES (A-GMRES) method with high performance. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
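
As a point of reference for the residual-minimization viewpoint discussed above, here is a minimal textbook GMRES sketch (Arnoldi process plus a small least-squares solve) in NumPy. It is not the paper's ROGMRES, OMPA, or A-GMRES; the stabilization factor η and the re-orthogonalization step are not implemented.

```python
# Textbook GMRES with a zero initial guess: build an Arnoldi basis of the Krylov
# subspace, then minimize the residual over that subspace via least squares.
import numpy as np

def gmres_basic(A, b, m=30):
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)            # r0 = b because x0 = 0
    Q[:, 0] = b / beta
    for j in range(m):
        w = A @ Q[:, j]                 # expand the Krylov subspace
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:         # happy breakdown: exact solution reached
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    # minimize ||beta*e1 - H y|| and map y back to the full space
    e1 = np.zeros(m + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

rng = np.random.default_rng(1)
n = 60
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = gmres_basic(A, b, m=60)
print("residual norm:", np.linalg.norm(b - A @ x))
```
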
4 pages, 157 KiB  
Editorial
Artificial Intelligence in Modeling and Simulation
by Nuno Fachada and Nuno David
Algorithms 2024, 17(6), 265; https://doi.org/10.3390/a17060265 - 15 Jun 2024
Viewed by 221
Abstract
Modeling and simulation (M&S) serve as essential tools in various scientific and engineering domains, enabling the representation of complex systems and processes without the constraints of physical experimentation [1]. [...] Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
23 pages, 2244 KiB  
Article
Optimizing Charging Pad Deployment by Applying a Quad-Tree Scheme
by Rei-Heng Cheng, Chang-Wu Yu and Zuo-Li Zhang
Algorithms 2024, 17(6), 264; https://doi.org/10.3390/a17060264 - 14 Jun 2024
Viewed by 108
Abstract
The recent advancement in wireless power transmission (WPT) has led to the development of wireless rechargeable sensor networks (WRSNs), since this technology provides a means to replenish sensor nodes wirelessly, offering a solution to the energy challenges faced by WSNs. Most recent work has focused on charging sensor nodes using wireless charging vehicles (WCVs) equipped with high-capacity batteries and WPT devices. In these schemes, a vehicle can move close to a sensor node and wirelessly charge it without physical contact. While these schemes can mitigate the energy problem to some extent, they overlook two primary challenges of applying WCVs: off-road navigation and vehicle speed limitations. To overcome these challenges, previous work proposed a new WRSN model equipped with one drone coupled with several pads deployed to charge the drone when it cannot reach the next stop. Wireless charging pad deployment aims to place the minimum number of pads so that at least one feasible routing path from the base station can be established for the drone to reach every sensor node in a given WRSN. The major weakness of previous studies is that they only consider deploying a wireless charging pad at the locations of the wireless sensor nodes. Such schemes are unnecessarily constrained, because in general any point in the deployment area can be considered as a pad location. Moreover, the pad deployments suggested by these schemes may not meet the connectivity requirements in sparse environments. In this work, we introduce a new scheme that utilizes the Quad-Tree concept to address the wireless charging pad deployment problem while reducing the number of deployed pads. Extensive simulations were conducted to illustrate the merits of the proposed schemes by comparing them with previous schemes on maps of varying sizes. On large maps, the proposed schemes surpassed all previous works, indicating that our approach is more suitable for large-scale network environments. Full article
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
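
The entry above does not give the deployment algorithm itself, so the sketch below is only a generic quad-tree subdivision of a square field: cells split until each holds at most a few sensor positions, and the leaf-cell centers illustrate how candidate pad locations can come from arbitrary points of the area rather than only from sensor positions. The capacity and minimum cell size are arbitrary illustrative parameters.

```python
# Generic quad-tree subdivision of a square region (not the paper's algorithm).
from dataclasses import dataclass, field
import random

@dataclass
class QuadNode:
    x: float          # lower-left corner of the square cell
    y: float
    size: float       # side length of the cell
    points: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def insert(self, p, capacity=4, min_size=1.0):
        if self.children:                           # already subdivided: route downward
            self._child_for(p).insert(p, capacity, min_size)
            return
        self.points.append(p)
        if len(self.points) > capacity and self.size > min_size:
            self._subdivide()
            for q in self.points:                   # redistribute stored points
                self._child_for(q).insert(q, capacity, min_size)
            self.points = []

    def _subdivide(self):
        h = self.size / 2
        self.children = [QuadNode(self.x + dx * h, self.y + dy * h, h)
                         for dx in (0, 1) for dy in (0, 1)]

    def _child_for(self, p):
        h = self.size / 2
        ix = 1 if p[0] >= self.x + h else 0
        iy = 1 if p[1] >= self.y + h else 0
        return self.children[ix * 2 + iy]

    def leaf_centers(self):
        if not self.children:
            return [(self.x + self.size / 2, self.y + self.size / 2)]
        return [c for ch in self.children for c in ch.leaf_centers()]

random.seed(0)
root = QuadNode(0.0, 0.0, 100.0)
for _ in range(200):                                # 200 random sensor positions
    root.insert((random.uniform(0, 100), random.uniform(0, 100)))
print(len(root.leaf_centers()), "candidate cells")
```
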
19 pages, 5344 KiB  
Article
3D Reconstruction Based on Iterative Optimization of Moving Least-Squares Function
by Saiya Li, Jinhe Su, Guoqing Jiang, Ziyu Huang and Xiaorong Zhang
Algorithms 2024, 17(6), 263; https://doi.org/10.3390/a17060263 - 14 Jun 2024
Viewed by 178
Abstract
Three-dimensional reconstruction from point clouds is an important research topic in computer vision and computer graphics. However, the discrete nature, sparsity, and noise of the original point cloud often cause surfaces generated from global features to appear jagged and lack detail, making it difficult to describe shape details accurately. We address the challenge of generating smooth and detailed 3D surfaces from point clouds. We propose an adaptive octree partitioning method to divide the global shape into local regions of different scales. An iterative loop method based on a GRU is then used to extract features from local voxels and learn local smoothness and global shape priors. Finally, a moving least-squares approach is employed to generate the 3D surface. Experiments demonstrate that our method outperforms existing methods on benchmark datasets (the ShapeNet, ABC, and Famous datasets). Ablation studies confirm the effectiveness of the adaptive octree partitioning and GRU modules. Full article
15 pages, 1063 KiB  
Article
EAND-LPRM: Enhanced Attention Network and Decoding for Efficient License Plate Recognition under Complex Conditions
by Shijuan Chen, Zongmei Li, Xiaofeng Du and Qin Nie
Algorithms 2024, 17(6), 262; https://doi.org/10.3390/a17060262 - 14 Jun 2024
Viewed by 177
Abstract
With the rapid advancement of urban intelligence, there is an increasingly urgent demand for technological innovation in traffic management. License plate recognition technology can achieve high accuracy under ideal conditions but faces significant challenges in complex traffic environments and adverse weather conditions. To address these challenges, we propose the enhanced attention network and decoding for license plate recognition model (EAND-LPRM). This model leverages an encoder to extract features from image sequences and employs a self-attention mechanism to focus on critical feature information, enhancing its capability to handle complex traffic scenarios such as rainy weather and license plate distortion. We have curated and utilized publicly available datasets that closely reflect real-world scenarios, ensuring transparency and reproducibility. Experimental evaluations conducted on these datasets, which include various complex scenarios, demonstrate that the EAND-LPRM model achieves an accuracy of 94%, representing a 6% improvement over traditional license plate recognition algorithms. The main contributions of this research include the development of a novel attention-mechanism-based architecture, comprehensive evaluation on multiple datasets, and substantial performance improvements under diverse and challenging conditions. This study provides a practical solution for automatic license plate recognition systems in dynamic and unpredictable environments. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
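
The model above relies on a self-attention mechanism over encoder features; the snippet below is the textbook scaled dot-product attention in NumPy, not the EAND-LPRM architecture, and the dimensions are arbitrary.

```python
# Textbook scaled dot-product self-attention over a short feature sequence.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # scaled dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over positions
    return weights @ V                                  # attention-weighted values

rng = np.random.default_rng(0)
X = rng.standard_normal((7, 16))       # 7 sequence positions, 16-dim features
Wq, Wk, Wv = (rng.standard_normal((16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (7, 16)
```
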
32 pages, 2034 KiB  
Systematic Review
Artificial Intelligence-Based Algorithms and Healthcare Applications of Respiratory Inductance Plethysmography: A Systematic Review
by Md. Shahidur Rahman, Sowrav Chowdhury, Mirza Rasheduzzaman and A. B. M. S. U. Doulah
Algorithms 2024, 17(6), 261; https://doi.org/10.3390/a17060261 - 14 Jun 2024
Viewed by 318
Abstract
Respiratory Inductance Plethysmography (RIP) is a non-invasive method for the measurement of respiratory rates and lung volumes. Accurate detection of respiratory rates and volumes is crucial for the diagnosis and monitoring of the prognosis of lung diseases, for which spirometry is classically used in clinical applications. RIP has been studied as an alternative to spirometry and has shown promising results. Moreover, RIP data can be analyzed through machine learning (ML)-based approaches for other purposes, e.g., the detection of apneas, work of breathing (WoB) measurement, and recognition of human activity based on breathing patterns. The goal of this study is to provide an in-depth systematic review of the scope of usage of RIP and current RIP device developments, as well as to evaluate the performance, usability, and reliability of ML-based data analysis techniques within its designated scope, while adhering to the PRISMA guidelines. This work also identifies research gaps in the field and highlights the potential scope for future work. The IEEE Xplore, Springer, PLoS One, ScienceDirect, and Google Scholar databases were examined, and 40 publications were included in this work through a structured screening and quality assessment procedure. Studies with conclusive experimentation on RIP published between 2012 and 2023 were included, while unvalidated studies were excluded. The findings indicate that RIP is, to a certain extent, an effective method for testing and monitoring respiratory functions, though its accuracy is lacking in some settings. However, RIP possesses some advantages over spirometry due to its non-invasive nature and its suitability for both stationary and ambulatory use. RIP also demonstrates its capabilities in ML-based applications, such as the detection of breathing asynchrony, classification of apnea, identification of sleep stages, and human activity recognition (HAR). Our conclusion is that, though RIP is not yet ready to replace spirometry and other established methods, it can provide crucial insights into subjects’ conditions associated with respiratory illnesses. The implementation of artificial intelligence (AI) could play a potential role in improving the overall effectiveness of RIP, as suggested in some of the selected studies. Full article
15 pages, 1574 KiB  
Article
Exploring Data Augmentation Algorithm to Improve Genomic Prediction of Top-Ranking Cultivars
by Osval A. Montesinos-López, Arvinth Sivakumar, Gloria Isabel Huerta Prado, Josafhat Salinas-Ruiz, Afolabi Agbona, Axel Efraín Ortiz Reyes, Khalid Alnowibet, Rodomiro Ortiz, Abelardo Montesinos-López and José Crossa
Algorithms 2024, 17(6), 260; https://doi.org/10.3390/a17060260 - 14 Jun 2024
Viewed by 458
Abstract
Genomic selection (GS) is a groundbreaking statistical machine learning method for advancing plant and animal breeding. Nonetheless, its practical implementation remains challenging due to numerous factors affecting its predictive performance. This research explores the potential of data augmentation to enhance prediction accuracy across entire datasets and specifically within the top 20% of the testing set. Our findings indicate that, overall, the data augmentation method (method A), when compared to the conventional model (method C) and assessed using Mean Arctangent Absolute Prediction Error (MAAPE) and normalized root mean square error (NRMSE), did not improve the prediction accuracy for the unobserved cultivars. However, significant improvements in prediction accuracy (evidenced by reduced prediction error) were observed when data augmentation was applied exclusively to the top 20% of the testing set. Specifically, reductions in MAAPE_20 and NRMSE_20 by 52.86% and 41.05%, respectively, were noted across various datasets. Further investigation is needed to refine data augmentation techniques for effective use in genomic prediction. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (2nd Edition))
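
A hedged sketch of the two error metrics named above, evaluated on the full test set and on the top 20% of observed values. The paper's exact NRMSE normalization is not stated here; dividing by the mean of the observed values is one common convention and is an assumption in this code, as are the simulated data.

```python
# MAAPE and NRMSE on a full simulated test set and on its top-20% slice.
import numpy as np

def maape(y_true, y_pred):
    return np.mean(np.arctan(np.abs((y_true - y_pred) / y_true)))

def nrmse(y_true, y_pred):
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.mean(y_true)          # assumption: normalize by the observed mean

rng = np.random.default_rng(0)
y = rng.normal(10.0, 2.0, size=500)        # simulated observed phenotypes
y_hat = y + rng.normal(0.0, 1.0, size=500) # simulated predictions

top20 = y >= np.quantile(y, 0.8)           # top-ranking 20% of observed values
print("MAAPE (all):    ", round(maape(y, y_hat), 4))
print("MAAPE (top 20%):", round(maape(y[top20], y_hat[top20]), 4))
print("NRMSE (all):    ", round(nrmse(y, y_hat), 4))
print("NRMSE (top 20%):", round(nrmse(y[top20], y_hat[top20]), 4))
```
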
13 pages, 531 KiB  
Article
Univariate Outlier Detection: Precision-Driven Algorithm for Single-Cluster Scenarios
by Mohamed Limam El hairach, Amal Tmiri and Insaf Bellamine
Algorithms 2024, 17(6), 259; https://doi.org/10.3390/a17060259 - 14 Jun 2024
Viewed by 195
Abstract
This study introduces a novel algorithm tailored for the precise detection of lower outliers (i.e., data points at the lower tail) in univariate datasets, which is particularly suited for scenarios with a single cluster and similar data distribution. The approach leverages a combination of transformative techniques and advanced filtration methods to efficiently segregate anomalies from normal values. Notably, the algorithm emphasizes high-precision outlier detection, ensuring minimal false positives, and requires only a few parameters for configuration. Its unsupervised nature enables robust outlier filtering without the need for extensive manual intervention. To validate its efficacy, the algorithm is rigorously tested using real-world data obtained from photovoltaic (PV) module strings with similar DC capacities, containing various outliers. The results demonstrate the algorithm’s capability to accurately identify lower outliers while maintaining computational efficiency and reliability in practical applications. Full article
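
As a baseline contrast for the abstract above, the snippet below flags lower-tail outliers with Tukey's lower fence on the interquartile range; the paper's own transformation and filtration steps are not reproduced, and the PV-string values are simulated.

```python
# Baseline lower-tail outlier filter using Tukey's lower fence (illustration only).
import numpy as np

def lower_outliers(values, k=1.5):
    q1, q3 = np.percentile(values, [25, 75])
    fence = q1 - k * (q3 - q1)              # Tukey's lower fence
    return values[values < fence]

rng = np.random.default_rng(0)
strings = np.concatenate([rng.normal(100.0, 2.0, 95),   # healthy PV strings
                          [70.0, 72.5, 68.0]])          # under-performing strings
print("flagged lower outliers:", np.sort(lower_outliers(strings)))
```
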
14 pages, 625 KiB  
Article
Approximating a Minimum Dominating Set by Purification
by Ernesto Parra Inza, Nodari Vakhania, José María Sigarreta Almira and José Alberto Hernández-Aguilar
Algorithms 2024, 17(6), 258; https://doi.org/10.3390/a17060258 - 12 Jun 2024
Viewed by 274
Abstract
A dominating set of a graph is a subset of vertices such that every vertex not in the subset has at least one neighbor within the subset. The corresponding optimization problem is known to be NP-hard. It has proved beneficial to separate the solution process into two stages: first, a fast greedy algorithm is applied to obtain an initial dominating set, and then an iterative procedure is used to purify (reduce) the size of this dominating set. In this work, we develop the purification stage and propose new purification algorithms. The purification procedures presented here outperform, in practice, the earlier known purification procedure. We have tested our algorithms on over 1300 benchmark problem instances. Compared to the estimations due to known upper bounds, the obtained solutions are about seven times better. Remarkably, for the 500 benchmark instances for which the optimum is known, the optimal solutions are obtained for 46.33% of the tested instances, whereas the average error for the remaining instances is about 1.01. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
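
The two-stage idea described above can be made concrete with a small sketch: a standard greedy dominating set followed by a naive purification pass that drops any vertex whose removal keeps the set dominating. The paper's purification algorithms are more elaborate; this only illustrates the baseline notion.

```python
# Greedy dominating set plus a naive purification pass (baseline illustration).
def greedy_dominating_set(adj):
    """adj: dict mapping each vertex to the set of its neighbors."""
    undominated = set(adj)
    D = set()
    while undominated:
        # pick the vertex covering the most still-undominated vertices
        v = max(adj, key=lambda u: len((adj[u] | {u}) & undominated))
        D.add(v)
        undominated -= adj[v] | {v}
    return D

def is_dominating(adj, D):
    return all(v in D or adj[v] & D for v in adj)

def purify(adj, D):
    D = set(D)
    for v in sorted(D):                     # try to discard each vertex in turn
        if is_dominating(adj, D - {v}):
            D.remove(v)
    return D

# Small example: the path 0-1-2-3-4-5.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
D = greedy_dominating_set(adj)
print("greedy:", sorted(D), "-> purified:", sorted(purify(adj, D)))
```
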
15 pages, 4225 KiB  
Article
NSBR-Net: A Novel Noise Suppression and Boundary Refinement Network for Breast Tumor Segmentation in Ultrasound Images
by Yue Sun, Zhaohong Huang, Guorong Cai, Jinhe Su and Zheng Gong
Algorithms 2024, 17(6), 257; https://doi.org/10.3390/a17060257 - 12 Jun 2024
Viewed by 247
Abstract
Breast tumor segmentation of ultrasound images provides valuable tumor information for early detection and diagnosis. However, speckle noise and blurred boundaries in breast ultrasound images present challenges for tumor segmentation, especially for malignant tumors with irregular shapes. Recent vision transformers have shown promising performance in handling the variation through global context modeling. Nevertheless, they are often dominated by features of large patterns and lack the ability to recognize negative information in ultrasound images, which leads to the loss of breast tumor details (e.g., boundaries and small objects). In this paper, we propose a novel noise suppression and boundary refinement network, NSBR-Net, to simultaneously alleviate speckle noise interference and blurred boundary problems of breast tumor segmentation. Specifically, we propose two innovative designs, namely, the Noise Suppression Module (NSM) and the Boundary Refinement Module (BRM). The NSM filters noise information from the coarse-grained feature maps, while the BRM progressively refines the boundaries of significant lesion objects. Our method demonstrates superior accuracy over state-of-the-art deep learning models, achieving significant improvements of 3.67% on Dataset B and 2.30% on the BUSI dataset in mDice for testing malignant tumors. Full article
19 pages, 2454 KiB  
Article
Synthesis of Circular Antenna Arrays for Achieving Lower Side Lobe Level and Higher Directivity Using Hybrid Optimization Algorithm
by Vikas Mittal, Kanta Prasad Sharma, Narmadha Thangarasu, Udandarao Sarat, Ahmad O. Hourani and Rohit Salgotra
Algorithms 2024, 17(6), 256; https://doi.org/10.3390/a17060256 - 11 Jun 2024
Viewed by 256
Abstract
Circular antenna arrays (CAAs) find extensive utility in a range of cutting-edge communication applications such as 5G networks, the Internet of Things (IoT), and advanced beamforming technologies. In antenna design, the side lobe level (SLL) of the radiation pattern holds significant importance within communication systems, primarily due to its role in mitigating signal interference across the radiation pattern’s side lobes. In this work, an optimization problem is formulated to suppress the side lobes, achieve the required main lobe orientation, and improve directivity. This paper introduces a method aimed at enhancing the radiation pattern of a CAA by minimizing its SLL using a Hybrid Sooty Tern Naked Mole-Rat Algorithm (STNMRA). The simulation results show that the hybrid optimization method significantly reduces side lobes while maintaining reasonable directivity compared to the uniform array and other competitive metaheuristics. Full article
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
4 pages, 226 KiB  
Editorial
Guest Editorial for the Special Issue “New Trends in Algorithms for Intelligent Recommendation Systems”
by Edward Rolando Núñez-Valdez and Vicente García-Díaz
Algorithms 2024, 17(6), 255; https://doi.org/10.3390/a17060255 - 10 Jun 2024
Viewed by 219
Abstract
Currently, the problem of information overload, a term popularized by Alvin Toffler in his book Future Shock [1], is more present than ever due to the rapid development of the Internet [...] Full article
(This article belongs to the Special Issue New Trends in Algorithms for Intelligent Recommendation Systems)
12 pages, 473 KiB  
Review
The Quest for the Application of Artificial Intelligence to Whole Slide Imaging: Unique Prospective from New Advanced Tools
by Gavino Faa, Massimo Castagnola, Luca Didaci, Fernando Coghe, Mario Scartozzi, Luca Saba and Matteo Fraschini
Algorithms 2024, 17(6), 254; https://doi.org/10.3390/a17060254 - 10 Jun 2024
Viewed by 419
Abstract
The introduction of machine learning in digital pathology has deeply impacted the field, especially with the advent of whole slide image (WSI) analysis. In this review, we tried to elucidate the role of machine learning algorithms in diagnostic precision, efficiency, and the reproducibility of the results. First, we discuss some of the most used tools, including QuPath, HistoQC, and HistomicsTK, and provide an updated overview of machine learning approaches and their application in pathology. Later, we report how these tools may simplify the automation of WSI analyses, also reducing manual workload and inter-observer variability. A novel aspect of this review is its focus on open-source tools, presented in a way that may help the adoption process for pathologists. Furthermore, we highlight the major benefits of these technologies, with the aim of making this review a practical guide for clinicians seeking to implement machine learning-based solutions in their specific workflows. Moreover, this review also emphasizes some crucial limitations related to data quality and the interpretability of the models, giving insight into future directions for research. Overall, this work tries to bridge the gap between the more recent technological progress in computer science and traditional clinical practice, supporting a broader, yet smooth, adoption of machine learning approaches in digital pathology. Full article
(This article belongs to the Special Issue AI Algorithms in Medical Imaging)
55 pages, 716 KiB  
Review
Hardware Model Checking Algorithms and Techniques
by Gianpiero Cabodi, Paolo Enrico Camurati, Marco Palena and Paolo Pasini
Algorithms 2024, 17(6), 253; https://doi.org/10.3390/a17060253 - 9 Jun 2024
Viewed by 296
Abstract
Digital systems are nowadays ubiquitous and often comprise an extremely high level of complexity. Guaranteeing the correct behavior of such systems has become an ever more pressing need for manufacturers. The correctness of digital systems can be addressed by resorting to formal verification techniques, such as model checking. Currently, it is usually impossible to determine a priori the best algorithm to use for a given verification task and, thus, portfolio approaches have become the de facto standard in model checking verification suites. This paper describes the most relevant algorithms and techniques at the foundations of bit-level SAT-based model checking. Full article
(This article belongs to the Special Issue Surveys in Algorithm Analysis and Complexity Theory, Part II)
23 pages, 13655 KiB  
Article
Prediction of Hippocampal Signals in Mice Using a Deep Learning Approach for Neurohybrid Technology Applications
by Albina V. Lebedeva, Margarita I. Samburova, Vyacheslav V. Razin, Nikolay V. Gromov, Svetlana A. Gerasimova, Tatiana A. Levanova, Lev A. Smirnov and Alexander N. Pisarchik
Algorithms 2024, 17(6), 252; https://doi.org/10.3390/a17060252 - 7 Jun 2024
Viewed by 341
Abstract
The increasing growth in knowledge about the functioning of the nervous system of mammals and humans, as well as the significant neuromorphic technology developments in recent decades, has led to the emergence of a large number of brain–computer interfaces and neuroprosthetics for regenerative medicine tasks. Neurotechnologies have traditionally been developed for therapeutic purposes to help or replace motor, sensory or cognitive abilities damaged by injury or disease. They also have significant potential for memory enhancement. However, there are still no fully developed neurotechnologies and neural interfaces capable of restoring or expanding cognitive functions, in particular memory, in mammals or humans. In this regard, the search for new technologies in the field of the restoration of cognitive functions is an urgent task of modern neurophysiology, neurotechnology and artificial intelligence. The hippocampus is an important brain structure connected to memory and information processing in the brain. The aim of this paper is to propose an approach based on deep neural networks for the prediction of hippocampal signals in the CA1 region based on received biological input in the CA3 region. We compare the results of prediction for two widely used deep architectures: reservoir computing (RC) and long short-term memory (LSTM) networks. The proposed study can be viewed as a first step in the complex task of the development of a neurohybrid chip, which allows one to restore memory functions in the damaged rodent hippocampus. Full article
16 pages, 778 KiB  
Article
Distributed Control of Hydrogen-Based Microgrids for the Demand Side: A Multiagent Self-Triggered MPC-Based Strategy
by Tingzhe Pan, Jue Hou, Xin Jin, Zhenfan Yu, Wei Zhou and Zhijun Wang
Algorithms 2024, 17(6), 251; https://doi.org/10.3390/a17060251 - 7 Jun 2024
Viewed by 294
Abstract
With the global pursuit of renewable energy and carbon neutrality, hydrogen-based microgrids have become an important area of research, as ensuring proper design and operation is essential to achieving optimal performance from hybrid systems. This paper proposes a distributed control strategy based on multiagent self-triggered model predictive control (ST-MPC), with the aim of achieving demand-side control of hydrogen-based microgrid systems. This architecture considers a hybrid energy storage system with renewable energy as the main power source, supplemented by fuel cells based on electrolytic hydrogen. Its primary objective is to address the supply–demand balance problem of the microgrid while extending the service life of the hydrogen-based energy storage equipment, on the basis of demand-side control of the hydrogen microgrid system. To accomplish this, model predictive controllers are implemented within a self-triggered framework that dynamically adjusts the triggering interval. The simulation results demonstrate that the ST-MPC architecture significantly reduces the frequency of control action changes while maintaining an acceptable level of set-point tracking. These findings highlight the viability of the proposed solution for microgrids equipped with multiple types of electrochemical storage, which contributes to improved sustainability and efficiency in renewable-based microgrid systems. Full article
(This article belongs to the Special Issue Intelligent Algorithms for High-Penetration New Energy)
23 pages, 5573 KiB  
Article
Research on Distributed Fault Diagnosis Model of Elevator Based on PCA-LSTM
by Chengming Chen, Xuejun Ren and Guoqing Cheng
Algorithms 2024, 17(6), 250; https://doi.org/10.3390/a17060250 - 7 Jun 2024
Viewed by 241
Abstract
A Distributed Elevator Fault Diagnosis System (DEFDS) is developed to tackle frequent malfunctions stemming from the widespread distribution and aging of elevator systems. Due to the complexity of elevator fault data and the subtlety of fault characteristics, traditional methods such as visual inspections and basic operational tests fall short in detecting early signs of mechanical wear and electrical issues. These conventional techniques often fail to recognize subtle fault characteristics, necessitating more advanced diagnostic tools. In response, this paper introduces a Principal Component Analysis–Long Short-Term Memory (PCA-LSTM) method for fault diagnosis. The distributed system decentralizes the fault diagnosis process to individual elevator units, utilizing PCA’s feature selection capabilities in high-dimensional spaces to extract and reduce the dimensionality of fault features. Subsequently, the LSTM model is employed for fault prediction. Elevator models within the system exchange data to refine and optimize a global prediction model. The efficacy of this approach is substantiated through empirical validation with actual data, achieving an accuracy rate of 90% and thereby confirming the method’s effectiveness in facilitating distributed elevator fault diagnosis. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
28 pages, 859 KiB  
Article
Simulation of Calibrated Complex Synthetic Population Data with XGBoost
by Johannes Gussenbauer, Matthias Templ, Siro Fritzmann and Alexander Kowarik
Algorithms 2024, 17(6), 249; https://doi.org/10.3390/a17060249 - 6 Jun 2024
Viewed by 316
Abstract
Synthetic data generation methods are used to transform the original data into privacy-compliant synthetic copies (twin data). With our proposed approach, synthetic data can be simulated in the same size as the input data or in any size, and in the case of finite populations, even the entire population can be simulated. The proposed XGBoost-based method is compared with known model-based approaches to generate synthetic data using a complex survey data set. The XGBoost method shows strong performance, especially with synthetic categorical variables, and outperforms other tested methods. Furthermore, the structure and relationship between variables are well preserved. The tuning of the parameters is performed automatically by a modified k-fold cross-validation. If exact population margins are known, e.g., cross-tabulated population counts on age class, gender and region, the synthetic data must be calibrated to those known population margins. For this purpose, we have implemented a simulated annealing algorithm that is able to use multiple population margins simultaneously to post-calibrate a synthetic population. The algorithm is, thus, able to calibrate simulated population data containing cluster and individual information, e.g., about persons in households, at both person and household level. Furthermore, the algorithm is efficiently implemented so that the adjustment of populations with many millions or more persons is possible. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
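
A toy illustration (not the authors' implementation) of the calibration step described above: simulated annealing resamples a synthetic population so that its counts per region match known margins. Household/person structure and multiple simultaneous margins are omitted, and the categories and target counts are invented.

```python
# Toy simulated-annealing calibration of a synthetic sample to one known margin.
import math, random
random.seed(0)

regions = ["A", "B", "C"]
synthetic = [random.choice(regions) for _ in range(3000)]   # synthetic units
target = {"A": 500, "B": 300, "C": 200}                     # known population margins

def loss(sample):
    return sum((sample.count(r) - target[r]) ** 2 for r in regions)

n = sum(target.values())
current = random.sample(synthetic, n)      # initial calibrated sample
cur_loss = loss(current)
temp = 50.0
for step in range(5000):
    candidate = current[:]                 # swap one unit for a random synthetic unit
    candidate[random.randrange(n)] = random.choice(synthetic)
    cand_loss = loss(candidate)
    delta = cand_loss - cur_loss
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        current, cur_loss = candidate, cand_loss
    temp *= 0.999                          # geometric cooling schedule

print("calibrated counts:", {r: current.count(r) for r in regions}, "loss:", cur_loss)
```
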
25 pages, 1790 KiB  
Article
A Non-Gradient and Non-Iterative Method for Mapping 3D Mesh Objects Based on a Summation of Dependent Random Values
by Ihar Volkau, Sergei Krasovskii, Abdul Mujeeb and Helen Balinsky
Algorithms 2024, 17(6), 248; https://doi.org/10.3390/a17060248 - 6 Jun 2024
Viewed by 320
Abstract
The manuscript presents a novel non-gradient and non-iterative method for mapping two 3D objects by matching extrema. This innovative approach utilizes the amplification of extrema through the summation of dependent random values, accompanied by a comprehensive explanation of the statistical background. The method further incorporates structural patterns based on spherical harmonic functions to calculate the rotation matrix, enabling the juxtaposition of the objects. Without utilizing gradients and iterations to improve the solution step by step, the proposed method generates a limited number of candidates, and the mapping (if it exists) is necessarily among the candidates. For instance, this method holds potential for object analysis and identification in additive manufacturing for 3D printing and protein matching. Full article
16 pages, 5093 KiB  
Article
New Multi-View Feature Learning Method for Accurate Antifungal Peptide Detection
by Sayeda Muntaha Ferdous, Shafayat Bin Shabbir Mugdha and Iman Dehzangi
Algorithms 2024, 17(6), 247; https://doi.org/10.3390/a17060247 - 6 Jun 2024
Viewed by 263
Abstract
Antimicrobial resistance, particularly the emergence of resistant strains in fungal pathogens, has become a pressing global health concern. Antifungal peptides (AFPs) have shown great potential as a promising alternative therapeutic strategy due to their inherent antimicrobial properties and potential application in combating fungal infections. However, the identification of antifungal peptides using experimental approaches is time-consuming and costly. Hence, there is a demand for fast and accurate computational approaches to identifying AFPs. This paper introduces a novel multi-view feature learning (MVFL) model, called AFP-MVFL, for accurate AFP identification. By integrating the sequential and physicochemical properties of amino acids and employing a multi-view approach, the AFP-MVFL model significantly enhances prediction accuracy. It achieves 97.9%, 98.4%, 0.98, and 0.96 in terms of accuracy, precision, F1 score, and Matthews correlation coefficient (MCC), respectively, outperforming previous studies found in the literature. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
13 pages, 346 KiB  
Article
Minimizing Query Frequency to Bound Congestion Potential for Moving Entities at a Fixed Target Time
by William Evans and David Kirkpatrick
Algorithms 2024, 17(6), 246; https://doi.org/10.3390/a17060246 - 6 Jun 2024
Viewed by 226
Abstract
Consider a collection of entities moving continuously with bounded speed, but otherwise unpredictably, in some low-dimensional space. Two such entities encroach upon one another at a fixed time if their separation is less than some specified threshold. Encroachment, of concern in many settings such as collision avoidance, may be unavoidable. However, the associated difficulties are compounded if there is uncertainty about the precise location of entities, giving rise to potential encroachment and, more generally, potential congestion within the full collection. We adopt a model in which entities can be queried for their current location (at some cost) and the uncertainty region associated with an entity grows in proportion to the time since that entity was last queried. The goal is to maintain low potential congestion, measured in terms of the (dynamic) intersection graph of uncertainty regions, at specified (possibly all) times, using the lowest possible query cost. Previous work in the same uncertainty model addressed the problem of minimizing the congestion potential of point entities using location queries of some bounded frequency. It was shown that it is possible to design query schemes that are O(1)-competitive, in terms of worst-case congestion potential, with other, even clairvoyant query schemes (that exploit knowledge of the trajectories of all entities), subject to the same bound on query frequency. In this paper, we initiate the treatment of a more general problem with the complementary optimization objective: minimizing the query frequency, measured as the reciprocal of the minimum time between queries (granularity), while guaranteeing a fixed bound on congestion potential of entities with positive extent at one specified target time. This complementary objective necessitates quite different schemes and analyses. Nevertheless, our results parallel those of the earlier papers, specifically tight competitive bounds on required query frequency. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From FCT 2023)
18 pages, 3005 KiB  
Article
A Modified Analytic Hierarchy Process Suitable for Online Survey Preference Elicitation
by Sean Pascoe, Anna Farmery, Rachel Nichols, Sarah Lothian and Kamal Azmi
Algorithms 2024, 17(6), 245; https://doi.org/10.3390/a17060245 - 6 Jun 2024
Viewed by 292
Abstract
A key component of multi-criteria decision analysis is the estimation of criteria weights, reflecting the preference strength of different stakeholder groups related to different objectives. One common method is the Analytic Hierarchy Process (AHP). A key challenge with the AHP is the potential for inconsistency in responses, resulting in potentially unreliable preference weights. In small groups, interactions between analysts and respondents can compensate for this through reassessment of inconsistent responses. In many cases, however, stakeholders may be geographically dispersed, with online surveys being a more cost-effective means to elicit these preferences, making renegotiating with inconsistent respondents impossible. Further, the potentially large number of bivariate comparisons required using the AHP may adversely affect response rates. In this study, we test a new “modified” AHP (MAHP). The MAHP was designed to retain the key desirable features of the AHP but be more amenable to online surveys, reduce the problem of inconsistencies, and require substantially fewer comparisons. The MAHP is tested using three groups of university students through an online survey platform, along with a “traditional” AHP approach. The results indicate that the MAHP can provide statistically equivalent outcomes to the AHP but without problems arising due to inconsistencies. Full article
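
For context on the consistency problem the study targets, the snippet below computes standard AHP weights from a pairwise comparison matrix via the principal eigenvector, together with Saaty's consistency ratio. The modified AHP (MAHP) proposed in the paper is not reproduced here.

```python
# Standard AHP: principal-eigenvector weights and Saaty's consistency ratio.
import numpy as np

def ahp_weights(M):
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)                 # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

# Saaty's random consistency index for n = 1..10.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

# Example 3x3 pairwise comparison matrix (criterion i versus criterion j).
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, lam = ahp_weights(M)
n = M.shape[0]
CI = (lam - n) / (n - 1)                     # consistency index
CR = CI / RI[n]                              # consistency ratio (acceptable if < 0.1)
print("weights:", np.round(w, 3), " consistency ratio:", round(CR, 3))
```
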
18 pages, 3670 KiB  
Article
Automated Recommendation of Aggregate Visualizations for Crowdfunding Data
by Mohamed A. Sharaf, Heba Helal, Nazar Zaki, Wadha Alketbi, Latifa Alkaabi, Sara Alshamsi and Fatmah Alhefeiti
Algorithms 2024, 17(6), 244; https://doi.org/10.3390/a17060244 - 6 Jun 2024
Viewed by 306
Abstract
Analyzing crowdfunding data has been the focus of many research efforts, where analysts typically explore this data to identify the main factors and characteristics of the lending process as well as to discover unique patterns and anomalies in loan distributions. However, the manual exploration and visualization of such data is clearly an ad hoc, time-consuming, and labor-intensive process. Hence, in this work, we propose LoanVis, which is an automated solution for discovering and recommending those valuable and insightful visualizations. LoanVis is a data-driven system that utilizes objective metrics to quantify the “interestingness” of a visualization and employs such metrics in the recommendation process. We demonstrate the effectiveness of LoanVis in analyzing and exploring different aspects of the Kiva crowdfunding dataset. Full article
(This article belongs to the Special Issue Recommendations with Responsibility Constraints)
18 pages, 3521 KiB  
Article
Training of Convolutional Neural Networks for Image Classification with Fully Decoupled Extended Kalman Filter
by Armando Gaytan, Ofelia Begovich-Mendoza and Nancy Arana-Daniel
Algorithms 2024, 17(6), 243; https://doi.org/10.3390/a17060243 - 6 Jun 2024
Viewed by 366
Abstract
First-order algorithms have long dominated the training of deep neural networks, excelling in tasks like image classification and natural language processing. There is now a compelling opportunity to explore alternatives that could outperform current state-of-the-art results. From estimation theory, the Extended Kalman Filter (EKF) arose as a viable alternative and has shown advantages over backpropagation methods. Current computational advances offer the opportunity to revisit algorithms derived from the EKF, which have been almost excluded from the training of convolutional neural networks. This article revisits a decoupled formulation of the EKF and introduces the Fully Decoupled Extended Kalman Filter (FDEKF) for training convolutional neural networks in image classification tasks. The FDEKF is a second-order algorithm with some advantages over first-order algorithms: it can lead to faster convergence and higher accuracy, owing to a higher probability of finding the global optimum. In this research, experiments are conducted on well-known datasets that include Fashion, Sports, and Handwritten Digits images. The FDEKF shows faster convergence compared to other algorithms such as the popular Adam optimizer, the sKAdam algorithm, and the reduced extended Kalman filter. Finally, motivated by the FDEKF achieving the highest accuracy on images of natural scenes, we show its effectiveness in another experiment focused on outdoor terrain recognition. Full article
(This article belongs to the Special Issue Machine Learning in Pattern Recognition)
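
A toy scalar illustration of a decoupled EKF-style weight update, in which each parameter keeps its own scalar error covariance. It fits a single weight of the linear model y = w*x from noisy data; the noise settings are assumed values, and this is far simpler than training a CNN with the FDEKF, though the update equations have the same shape.

```python
# Scalar Kalman-filter update for one model weight (decoupled EKF illustration).
import numpy as np

rng = np.random.default_rng(0)
true_w = 2.5
x = rng.uniform(-1, 1, 200)
y = true_w * x + 0.05 * rng.standard_normal(200)

w, p = 0.0, 1.0          # weight estimate and its scalar error covariance
q, r = 1e-6, 0.05 ** 2   # process and measurement noise (assumed values)
for xi, yi in zip(x, y):
    h = xi                          # Jacobian of y_hat = w*x with respect to w
    e = yi - w * xi                 # innovation
    s = h * p * h + r               # innovation covariance
    k = p * h / s                   # Kalman gain
    w = w + k * e                   # weight update
    p = p - k * h * p + q           # covariance update
print("estimated w:", round(w, 3))  # close to 2.5
```
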
24 pages, 1150 KiB  
Article
A Comparative Study of Machine Learning Methods and Text Features for Text Authorship Recognition in the Example of Azerbaijani Language Texts
by Rustam Azimov and Efthimios Providas
Algorithms 2024, 17(6), 242; https://doi.org/10.3390/a17060242 - 5 Jun 2024
Viewed by 266
Abstract
This paper explores and evaluates various machine learning methods with different text features for determining the authorship of texts, using the Azerbaijani language as an example. We consider techniques such as artificial neural networks, convolutional neural networks, random forests, and support vector machines. These techniques are used with different text features, such as word length, sentence length, combined word and sentence length, n-grams, and word frequencies. The models were trained and tested on the works of many famous Azerbaijani writers. The results of computer experiments comparing the various techniques and text features were analyzed, and the cases where particular text features yielded better results were identified. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
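
One of the feature/classifier pairings mentioned above (character n-grams with a support vector machine) can be sketched in a few lines with scikit-learn; the toy sentences below are English placeholders, not the Azerbaijani corpus used in the paper.

```python
# Character n-gram features fed to a linear SVM for toy authorship attribution.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["the old house stood silent by the river",
         "a silent river passed the old stone house",
         "markets rallied sharply after the early report",
         "the early report sent markets sharply higher"]
authors = ["author_1", "author_1", "author_2", "author_2"]

model = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
                      LinearSVC())
model.fit(texts, authors)
print(model.predict(["the stone house by the silent river"]))
```
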
20 pages, 368 KiB  
Article
Fitness Landscape Analysis of Product Unit Neural Networks
by Andries Engelbrecht and Robert Gouldie 
Algorithms 2024, 17(6), 241; https://doi.org/10.3390/a17060241 - 4 Jun 2024
Viewed by 163
Abstract
A fitness landscape analysis of the loss surfaces produced by product unit neural networks is performed in order to gain a better understanding of the impact of product units on the characteristics of the loss surfaces. The loss surface characteristics of product unit neural networks are then compared to the characteristics of loss surfaces produced by neural networks that make use of summation units. The failure of certain optimization algorithms in training product unit neural networks is explained through trends observed between loss surface characteristics and optimization algorithm performance. The paper shows that the loss surfaces of product unit neural networks have extremely large gradients with many deep ravines and valleys, which explains why gradient-based optimization algorithms fail at training these neural networks. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning (2nd Edition))
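
To make the product unit concrete: it computes a product of inputs raised to learned exponents, so its logarithm is a weighted sum of log-inputs, which is one reason gradients can become extreme. The sketch below assumes positive inputs so the powers stay real-valued; it is an illustration, not the paper's experimental setup.

```python
# Forward pass of a layer of product units: prod_i x_i**w_ij, computed in log space.
import numpy as np

def product_unit_layer(X, W):
    """X: (n_samples, n_inputs) positive inputs; W: (n_inputs, n_units) exponents."""
    return np.exp(np.log(X) @ W)

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 2.0, size=(4, 3))     # positive inputs
W = rng.normal(0.0, 1.0, size=(3, 2))      # learned exponents (random here)
print(product_unit_layer(X, W))
```
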
16 pages, 3410 KiB  
Article
Feature Extraction Based on Sparse Coding Approach for Hand Grasp Type Classification
by Jirayu Samkunta, Patinya Ketthong, Nghia Thi Mai, Md Abdus Samad Kamal, Iwanori Murakami and Kou Yamada
Algorithms 2024, 17(6), 240; https://doi.org/10.3390/a17060240 - 3 Jun 2024
Viewed by 153
Abstract
The kinematics of the human hand exhibit complex and diverse characteristics unique to each individual. Various techniques such as vision-based, ultrasonic-based, and data-glove-based approaches have been employed to analyze human hand movements. However, a critical challenge remains in efficiently analyzing and classifying hand grasp types based on time-series kinematic data. In this paper, we propose a novel sparse coding feature extraction technique based on dictionary learning to address this challenge. Our method enhances model accuracy, reduces training time, and minimizes overfitting risk. We benchmarked our approach against principal component analysis (PCA) and sparse coding based on a Gaussian random dictionary. Our results demonstrate a significant improvement in classification accuracy: achieving 81.78% with our method compared to 31.43% for PCA and 77.27% for the Gaussian random dictionary. Furthermore, our technique outperforms in terms of macro-average F1-score and average area under the curve (AUC) while also significantly reducing the number of features required. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
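
A hedged sketch of the general pipeline named above (not the authors' code): learn a dictionary with scikit-learn, encode each sample as sparse coefficients, and feed the codes to a simple classifier. The random arrays stand in for time-series hand-kinematics features, so the printed accuracy is meaningless beyond showing the mechanics.

```python
# Dictionary learning -> sparse codes -> classifier, on placeholder data.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 60))          # placeholder kinematic feature vectors
y = rng.integers(0, 4, size=300)            # placeholder grasp-type labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

dico = DictionaryLearning(n_components=20, alpha=1.0,
                          transform_algorithm="lasso_lars", random_state=0)
C_tr = dico.fit_transform(X_tr)             # sparse codes for training samples
C_te = dico.transform(X_te)                 # codes for test samples, same dictionary

clf = LogisticRegression(max_iter=1000).fit(C_tr, y_tr)
print("accuracy on placeholder data:", round(clf.score(C_te, y_te), 3))
```
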
14 pages, 1059 KiB  
Article
Linear System Identification-Oriented Optimal Tampering Attack Strategy and Implementation Based on Information Entropy with Multiple Binary Observations
by Zhongwei Bai, Peng Yu, Yan Liu and Jin Guo
Algorithms 2024, 17(6), 239; https://doi.org/10.3390/a17060239 - 3 Jun 2024
Viewed by 190
Abstract
With the rapid development of computer, communication, and control technology, cyber-physical systems (CPSs) have been widely used and developed. However, CPSs involve massive information interactions, which increase the amount of data transmitted over the network, and data communication, once attacked over the network, can seriously affect the security and stability of the system. In this paper, for data tampering attacks on linear systems with multiple binary observations, where the defender’s estimation algorithm is unknown, an optimization index is constructed based on information entropy from the attacker’s point of view, and the problem is modeled. For the resulting multi-parameter optimization problem with energy constraints, particle swarm optimization (PSO) is used to obtain the optimal set of data tampering attacks, and an estimation method is given for the case of unknown parameters. To improve real-time performance for online implementation, a BP neural network is designed. Finally, the validity of the conclusions is verified through numerical simulation. This means that the attacker can construct effective metrics based on information entropy without knowledge of the defender’s discrimination algorithm. In addition, the optimal attack strategy implementation based on PSO and BP is also effective. Full article
19 pages, 1087 KiB  
Article
Simple Histogram Equalization Technique Improves Performance of VGG Models on Facial Emotion Recognition Datasets
by Jaher Hassan Chowdhury, Qian Liu and Sheela Ramanna
Algorithms 2024, 17(6), 238; https://doi.org/10.3390/a17060238 - 3 Jun 2024
Viewed by 499
Abstract
Facial emotion recognition (FER) is crucial across psychology, neuroscience, computer vision, and machine learning due to the diversified and subjective nature of emotions, varying considerably across individuals, cultures, and contexts. This study explored FER through convolutional neural networks (CNNs) and Histogram Equalization techniques. It investigated the impact of histogram equalization, data augmentation, and various model optimization strategies on FER accuracy across different datasets like KDEF, CK+, and FER2013. Using pre-trained VGG architectures, such as VGG19 and VGG16, this study also examined the effectiveness of fine-tuning hyperparameters and implementing different learning rate schedulers. The evaluation encompassed diverse metrics including accuracy, Area Under the Receiver Operating Characteristic Curve (AUC-ROC), Area Under the Precision–Recall Curve (AUC-PRC), and Weighted F1 score. Notably, the fine-tuned VGG architecture demonstrated a state-of-the-art performance compared to conventional transfer learning models and achieved 100%, 95.92%, and 69.65% on the CK+, KDEF, and FER2013 datasets, respectively. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
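
The preprocessing step the study builds on is plain global histogram equalization; the snippet below applies OpenCV's equalizeHist to a synthetic low-contrast grayscale patch standing in for a dataset sample.

```python
# Global histogram equalization of a low-contrast grayscale image.
import numpy as np
import cv2

rng = np.random.default_rng(0)
img = rng.normal(120, 10, size=(48, 48)).clip(0, 255).astype(np.uint8)  # low contrast

equalized = cv2.equalizeHist(img)          # spreads intensities over the full 0-255 range

print("before: min/max =", img.min(), img.max())
print("after:  min/max =", equalized.min(), equalized.max())
```
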