- Anomaly Detection for Skin Lesion Images Using Convolutional Neural Network and Injection of Handcrafted Features: A Method That Bypasses the Preprocessing of Dermoscopic Images
- A Comparative Study of Swarm Intelligence Metaheuristics in UKF-Based Neural Training Applied to the Identification and Control of Robotic Manipulator
- Reddit CrosspostNet—Studying Reddit Communities with Large-Scale Crosspost Graph Networks
- Model Predictive Evolutionary Temperature Control via Neural-Network-Based Digital Twins
- The Electric Vehicle Traveling Salesman Problem on Digital Elevation Models for Traffic-Aware Urban Logistics
Journal Description
Algorithms is a peer-reviewed, open access journal which provides an advanced forum for studies related to algorithms and their applications. Algorithms is published monthly online by MDPI. The European Society for Fuzzy Logic and Technology (EUSFLAT) is affiliated with Algorithms, and its members receive discounts on the article processing charges.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, MathSciNet and other databases.
- Journal Rank: CiteScore - Q2 (Numerical Analysis)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 19.1 days after submission; acceptance to publication is undertaken in 3.4 days (median values for papers published in this journal in the first half of 2023).
- Testimonials: See what our editors and authors say about Algorithms.
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.3 (2022); 5-Year Impact Factor: 2.2 (2022)
Latest Articles
Automatic Segmentation of Histological Images of Mouse Brains
Algorithms 2023, 16(12), 553; https://doi.org/10.3390/a16120553 - 01 Dec 2023
Abstract
Using a high-throughput neuroanatomical screen of histological brain sections developed in collaboration with the International Mouse Phenotyping Consortium, we previously reported a list of 198 genes whose inactivation leads to neuroanatomical phenotypes. To achieve this milestone, tens of thousands of hours of manual image segmentation were necessary. The present work involved developing a full pipeline to automate the application of deep learning methods for the automated segmentation of 24 anatomical regions used in the aforementioned screen. The dataset includes 2000 annotated parasagittal slides (24,000 × 14,000 pixels). Our approach consists of three main parts: the conversion of images (.ROI to .PNG), the training of the deep learning models on compressed images (512 × 256 and 2048 × 1024 pixels) to extract the regions of interest using either the U-Net or Attention U-Net architectures, and finally the transformation of the identified regions (.PNG to .ROI), enabling visualization and editing within the Fiji/ImageJ 1.54 software environment. With an image resolution of 2048 × 1024, the Attention U-Net provided the best results, with an overall Dice Similarity Coefficient (DSC) of 0.90 ± 0.01 across all 24 regions. With a single command line, the end user can now pre-analyze images automatically and then run the existing analytical pipeline of ImageJ macros to validate the automatically generated regions of interest. Even for regions with low DSC, expert neuroanatomists rarely correct the results. We estimate a time saving by a factor of 6 to 10.
Full article
(This article belongs to the Special Issue Artificial Intelligence for Medical Imaging)
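As a concrete illustration of the overlap metric reported above, here is a minimal sketch of the Dice Similarity Coefficient (DSC) on binary segmentation masks; the random masks and mask sizes are purely illustrative and are not taken from the paper's pipeline.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Illustrative example with two random 2048 x 1024 masks (not real annotations).
rng = np.random.default_rng(0)
pred = rng.random((1024, 2048)) > 0.5
target = rng.random((1024, 2048)) > 0.5
print(f"DSC = {dice_coefficient(pred, target):.3f}")
```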
Open Access Article
A Lightweight Graph Neural Network Algorithm for Action Recognition Based on Self-Distillation
Algorithms 2023, 16(12), 552; https://doi.org/10.3390/a16120552 - 01 Dec 2023
Abstract
Recognizing human actions benefits numerous applications, such as health monitoring, intelligent surveillance, virtual reality and human–computer interaction, and daily real-time use requires a fast and accurate detection algorithm. This paper proposes generating a lightweight graph neural network by self-distillation for human action recognition tasks. The lightweight graph neural network was evaluated on the NTU-RGB+D dataset. The results demonstrate that, with competitive accuracy, the heavyweight graph neural network can be compressed by up to . Furthermore, the learned representations form denser clusters, as estimated by the Davies–Bouldin index, the Dunn index and silhouette coefficients. The ideal input data and algorithm capacity are also discussed.
Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Computer Vision Applications)
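The cluster-compactness claim can be checked with standard metrics. The sketch below computes the Davies–Bouldin index and silhouette coefficient with scikit-learn, plus a simple hand-rolled Dunn index, on synthetic embeddings; in the paper, the inputs would be the representations learned by the distilled network.

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, silhouette_score

def dunn_index(X: np.ndarray, labels: np.ndarray) -> float:
    """Dunn index: smallest inter-cluster distance / largest intra-cluster diameter."""
    clusters = [X[labels == c] for c in np.unique(labels)]
    max_diam = max(pdist(c).max() for c in clusters if len(c) > 1)
    min_sep = min(cdist(a, b).min()
                  for i, a in enumerate(clusters)
                  for b in clusters[i + 1:])
    return min_sep / max_diam

# Synthetic stand-in for learned action representations.
X, y = make_blobs(n_samples=300, centers=5, cluster_std=1.0, random_state=0)
print("Davies-Bouldin:", davies_bouldin_score(X, y))   # lower means denser clusters
print("Silhouette:   ", silhouette_score(X, y))        # higher means denser clusters
print("Dunn index:   ", dunn_index(X, y))              # higher means denser clusters
```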
Open Access Article
An Algorithm for Coloring of Picture Fuzzy Graphs Based on Strong and Weak Adjacencies, and Its Application
Algorithms 2023, 16(12), 551; https://doi.org/10.3390/a16120551 - 30 Nov 2023
Abstract
The idea of strong and weak adjacencies between vertices has been generalized to fuzzy graphs and intuitionistic fuzzy graphs (IFGs), and it is an important part of making decisions. However, one or two membership degrees are not always sufficient for making decisions on real-world problems that need an answer of the types “yes, neutral, and no”. Consequently, in previous work, we generalized the concept to picture fuzzy graphs (PFGs), where each element in the PFG has membership, neutral, and non-membership degrees. Moreover, we constructed the notion of the coloring of PFGs based on strong and weak adjacencies between vertices. In this paper, we investigate some properties of the chromatic number of PFGs based on the concept of strong and weak adjacencies between vertices. Building on these properties, we construct an algorithm to find the chromatic number of PFGs. The algorithm is useful when we work with large PFGs. Further, we extend the method to implement PFG coloring for determining traffic signal phasing at an intersection. A case study has also been carried out to evaluate the method.
Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
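The coloring notion sketched in the abstract, where only strongly adjacent vertices must receive distinct colors, can be approximated by the simple greedy sketch below. The adjacency-strength scores and the 0.5 threshold for deciding a "strong" edge are placeholders, not the paper's picture fuzzy definitions.

```python
from typing import Dict, Tuple

def greedy_strong_coloring(vertices, edges: Dict[Tuple[str, str], float],
                           strong_threshold: float = 0.5) -> Dict[str, int]:
    """Greedy coloring where only 'strong' edges (score >= threshold) force distinct colors.

    `edges` maps a vertex pair to an adjacency strength in [0, 1]; how such a score is
    derived from picture fuzzy membership degrees is left open here.
    """
    strong_neighbors = {v: set() for v in vertices}
    for (u, v), score in edges.items():
        if score >= strong_threshold:
            strong_neighbors[u].add(v)
            strong_neighbors[v].add(u)

    coloring: Dict[str, int] = {}
    for v in vertices:
        used = {coloring[n] for n in strong_neighbors[v] if n in coloring}
        color = 0
        while color in used:
            color += 1
        coloring[v] = color
    return coloring

# Toy intersection with four traffic streams; conflict strengths are illustrative.
streams = ["N", "S", "E", "W"]
conflicts = {("N", "E"): 0.9, ("N", "W"): 0.2, ("S", "E"): 0.8,
             ("S", "W"): 0.7, ("E", "W"): 0.1, ("N", "S"): 0.3}
print(greedy_strong_coloring(streams, conflicts))  # color indices ~ signal phases
```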
Open Access Article
OrthoDETR: A Streamlined Transformer-Based Approach for Precision Detection of Orthopedic Medical Devices
Algorithms 2023, 16(12), 550; https://doi.org/10.3390/a16120550 - 29 Nov 2023
Abstract
The rapid and accurate detection of orthopedic medical devices is pivotal in enhancing healthcare delivery, particularly by improving workflow efficiency. Despite advancements in medical imaging technology, current detection models often fail to meet the unique requirements of orthopedic device detection. To address this gap, we introduce OrthoDETR, a Transformer-based object detection model specifically designed and optimized for orthopedic medical devices. OrthoDETR is an evolution of the DETR (Detection Transformer) model, with several key modifications to better serve orthopedic applications. We replace the ResNet backbone with the MLP-Mixer, improve the multi-head self-attention mechanism, and refine the loss function for more accurate detections. In our comparative study, OrthoDETR outperformed other models, achieving an AP50 score of 0.897, an AP50:95 score of 0.864, an AR50:95 score of 0.895, and a frames-per-second (FPS) rate of 26. This represents a significant improvement over the DETR model, which achieved an AP50 score of 0.852, an AP50:95 score of 0.842, an AR50:95 score of 0.862, and an FPS rate of 20. OrthoDETR not only accelerates the detection process but also maintains an acceptable performance trade-off. The real-world impact of this model is substantial: by facilitating the precise and quick detection of orthopedic devices, OrthoDETR can potentially revolutionize the management of orthopedic workflows, improving patient care and enhancing the efficiency of healthcare systems. This paper underlines the significance of specialized object detection models in orthopedics and sets the stage for further research in this direction.
Full article
(This article belongs to the Topic Emerging Trends in Electric Vehicles, Smart Grids and Smart Cities)
Open Access Article
Predicting the Impact of Data Poisoning Attacks in Blockchain-Enabled Supply Chain Networks
Algorithms 2023, 16(12), 549; https://doi.org/10.3390/a16120549 - 29 Nov 2023
Abstract
As computer networks become increasingly important in various domains, the need for secure and reliable networks becomes more pressing, particularly in the context of blockchain-enabled supply chain networks. One way to ensure network security is by using intrusion detection systems (IDSs), which are specialised devices that detect anomalies and attacks in the network. However, these systems are vulnerable to data poisoning attacks, such as label and distance-based flipping, which can undermine their effectiveness within blockchain-enabled supply chain networks. In this research paper, we investigate the effect of these attacks on a network intrusion detection system using several machine learning models, including logistic regression, random forest, SVC, and the XGB classifier, and evaluate each model via its F1 score, confusion matrix, and accuracy. We run each model three times: once without any attack, once under random label flipping affecting 20% of the labels, and once under a distance-based label-flipping attack with a distance threshold of 0.5. Additionally, this research tests an eight-layer neural network using accuracy metrics and a classification report. The primary goal of this research is to provide insights into the effect of data poisoning attacks on machine learning models within the context of blockchain-enabled supply chain networks. By doing so, we aim to contribute to developing more robust intrusion detection systems tailored to the specific challenges of securing blockchain-based supply chain networks.
Full article
(This article belongs to the Special Issue Deep Learning Techniques for Computer Security Problems)
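For concreteness, the sketch below poisons a labeled training set with the two flip strategies named in the abstract. The random flip follows the stated 20% rate exactly, while the distance-based rule (flipping labels of points lying relatively close to the opposite class's centroid) is only one plausible reading of that attack, using 0.5 as the stated threshold; it is not the authors' exact formulation.

```python
import numpy as np

def random_label_flip(y: np.ndarray, rate: float = 0.2, seed: int = 0) -> np.ndarray:
    """Flip the binary labels of a random `rate` fraction of samples."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

def distance_based_flip(X: np.ndarray, y: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Assumed formulation: flip a sample's label when its normalized distance to the
    opposite-class centroid falls below `threshold`."""
    y_poisoned = y.copy()
    centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}
    for i, (x, label) in enumerate(zip(X, y)):
        d_opp = np.linalg.norm(x - centroids[1 - label])
        d_own = np.linalg.norm(x - centroids[label])
        if d_opp / (d_opp + d_own + 1e-12) < threshold:
            y_poisoned[i] = 1 - label
    return y_poisoned

# Illustrative data; in the paper these would be intrusion-detection records.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] > 0).astype(int)
print("flipped (random):  ", (random_label_flip(y) != y).sum())
print("flipped (distance):", (distance_based_flip(X, y) != y).sum())
```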
Open Access Article
An Efficient Optimized DenseNet Model for Aspect-Based Multi-Label Classification
Algorithms 2023, 16(12), 548; https://doi.org/10.3390/a16120548 - 28 Nov 2023
Abstract
Sentiment analysis holds great importance within the domain of natural language processing as it examines both the expressed and underlying emotions conveyed through review content. Furthermore, researchers have discovered that relying solely on the overall sentiment derived from the textual content is inadequate. Consequently, aspect-based sentiment analysis was developed to extract nuanced expressions from textual information. One of the challenges in this field is effectively extracting emotional elements using multi-label data that covers various aspects. This article presents a novel approach called the Ensemble of DenseNet based on Aquila Optimizer (EDAO). EDAO is specifically designed to enhance the precision and diversity of multi-label learners. Unlike traditional multi-label methods, EDAO strongly emphasizes improving model diversity and accuracy in multi-label scenarios. To evaluate the effectiveness of our approach, we conducted experiments on eight distinct datasets covering emotions, hotels, movies, proteins, automobiles, medical records, news, and birds. Our initial strategy involves establishing a preprocessing mechanism to obtain precise and refined data. Subsequently, we used the VADER tool with Bag of Words (BoW) for feature extraction. In the third stage, we created word associations using the word2vec method. The processed data were then used to train and test the DenseNet model, which was fine-tuned using the Aquila Optimizer (AO). Using the aspect-based multi-labeling technique, DenseNet-AO achieved accuracy rates between 95% and 97% on the news, emotion, auto, bird, movie, hotel, protein, and medical datasets. Our proposed model demonstrates that EDAO outperforms other standard methods across various multi-label datasets with different dimensions. The implemented strategy has been rigorously validated through experimental results, showcasing its effectiveness compared to existing benchmark approaches.
Full article
(This article belongs to the Special Issue Machine Learning in Big Data Modeling)
Open Access Article
Optimizing Physics-Informed Neural Network in Dynamic System Simulation and Learning of Parameters
Algorithms 2023, 16(12), 547; https://doi.org/10.3390/a16120547 - 28 Nov 2023
Abstract
Artificial neural networks have changed many fields by giving scientists a powerful way to model complex phenomena, and they are becoming increasingly useful for solving various difficult scientific problems. Still, faster and more accurate ways to simulate dynamic systems are being sought. This research explores the transformative capabilities of physics-informed neural networks, a specialized subset of artificial neural networks, in modeling complex dynamical systems with enhanced speed and accuracy. These networks incorporate known physical laws into the learning process, ensuring predictions remain consistent with fundamental principles, which is crucial when dealing with scientific phenomena. This study focuses on optimizing the application of this specialized network for simultaneous system dynamics simulation and learning of time-varying parameters, particularly when the number of unknowns in the system matches the number of undetermined parameters. Additionally, we explore scenarios with a mismatch between parameters and equations, optimizing the network architecture to enhance convergence speed, computational efficiency, and accuracy in learning the time-varying parameters. Our approach enhances the algorithm's performance and accuracy, ensuring optimal use of computational resources and yielding more precise results. Extensive experiments are conducted on four different dynamical systems: first-order irreversible chain reactions, biomass transfer, the Brusselator model, and the Lotka–Volterra model, using synthetically generated data to validate our approach. Additionally, we apply our method to the susceptible-infected-recovered model, utilizing real-world COVID-19 data to learn the time-varying parameters of the pandemic's spread. A comprehensive comparison between the performance of our approach and fully connected deep neural networks is presented, evaluating both accuracy and computational efficiency in parameter identification and system dynamics capture. The results demonstrate that physics-informed neural networks outperform fully connected deep neural networks, especially with increased network depth, making them ideal for real-time complex system modeling. This underscores the physics-informed neural network's effectiveness in scientific modeling in scenarios with balanced unknowns and parameters. Furthermore, it provides a fast, accurate, and efficient alternative for analyzing dynamic systems.
Full article
(This article belongs to the Special Issue Algorithms for Natural Computing Models)
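To make the physics-informed loss concrete, here is a compact PyTorch sketch for a single first-order decay reaction du/dt = -k·u with u(0) = 1. The network size, collocation points, known rate constant k, and optimizer settings are illustrative and far simpler than the multi-equation systems with time-varying parameters studied in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
k = 1.5  # assumed known rate constant for this toy problem

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

t_col = torch.linspace(0.0, 3.0, 200).reshape(-1, 1).requires_grad_(True)
t0 = torch.zeros(1, 1)

for step in range(3000):
    optimizer.zero_grad()
    u = net(t_col)
    # du/dt at the collocation points via automatic differentiation.
    du_dt = torch.autograd.grad(u, t_col, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    loss_physics = ((du_dt + k * u) ** 2).mean()   # residual of du/dt = -k u
    loss_ic = (net(t0) - 1.0).pow(2).mean()        # initial condition u(0) = 1
    (loss_physics + loss_ic).backward()
    optimizer.step()

with torch.no_grad():
    print("PINN u(1) =", net(torch.tensor([[1.0]])).item(),
          "  exact =", torch.exp(torch.tensor(-k)).item())
```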
Open Access Article
Wind Turbine Predictive Fault Diagnostics Based on a Novel Long Short-Term Memory Model
Algorithms 2023, 16(12), 546; https://doi.org/10.3390/a16120546 - 28 Nov 2023
Abstract
The operation and maintenance (O&M) of offshore wind turbines (WTs) is especially challenging because of the harsh operational environment and difficult accessibility. As sudden component failures within WTs cause long downtimes and significant revenue losses, condition monitoring and predictive fault diagnostic approaches must be developed to detect faults before they occur, thus preventing long downtimes and costly unplanned maintenance. Based primarily on supervisory control and data acquisition (SCADA) data, thirty-three key features are extracted from operational data, and eight specific faults are categorised for fault prediction from status information. By providing a model-agnostic vector representation for time, Time2Vec (T2V), to Long Short-Term Memory (LSTM), this paper develops a novel deep-learning neural network model, T2V-LSTM, that conducts multi-level fault predictions. The classification steps allow fault diagnosis from 10 to 210 min prior to faults. The results show that T2V-LSTM can successfully predict over 84.97% of faults and outperforms LSTM and other counterparts in both overall and individual fault predictions, owing to its highest recall scores in most of the multistep-ahead cases performed. Thus, the proposed T2V-LSTM can correctly diagnose more faults and improves on vanilla LSTM in terms of accuracy, recall, and F-scores.
Full article
(This article belongs to the Special Issue Artificial Intelligence for Fault Detection and Diagnosis)
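A minimal PyTorch sketch of a Time2Vec layer feeding an LSTM classifier is shown below, following the commonly cited Time2Vec formulation (one linear component plus sinusoidal components). This is an assumed generic implementation, not the authors' T2V-LSTM code; the feature count, embedding size, class count, and sequence length are placeholders.

```python
import torch
import torch.nn as nn

class Time2Vec(nn.Module):
    """t2v(tau)[0] = w0 * tau + b0 (linear trend); t2v(tau)[i] = sin(wi * tau + bi)."""
    def __init__(self, out_features: int):
        super().__init__()
        self.linear = nn.Linear(1, 1)
        self.periodic = nn.Linear(1, out_features - 1)

    def forward(self, tau: torch.Tensor) -> torch.Tensor:
        # tau shape: (batch, seq_len, 1)
        return torch.cat([self.linear(tau), torch.sin(self.periodic(tau))], dim=-1)

class T2VLSTM(nn.Module):
    """SCADA features concatenated with a Time2Vec time embedding, fed to an LSTM classifier."""
    def __init__(self, n_features: int = 33, t2v_dim: int = 8,
                 hidden: int = 64, n_classes: int = 9):
        super().__init__()
        self.t2v = Time2Vec(t2v_dim)
        self.lstm = nn.LSTM(n_features + t2v_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # 8 fault classes + normal (assumed)

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        z = torch.cat([x, self.t2v(t)], dim=-1)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])  # classify from the last time step

# Shape check with dummy data: 4 sequences of 21 steps with 33 features each.
x = torch.randn(4, 21, 33)
t = torch.arange(21, dtype=torch.float32).reshape(1, 21, 1).repeat(4, 1, 1)
print(T2VLSTM()(x, t).shape)  # torch.Size([4, 9])
```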
Open Access Article
Measuring the Performance of Ant Colony Optimization Algorithms for the Dynamic Traveling Salesman Problem
Algorithms 2023, 16(12), 545; https://doi.org/10.3390/a16120545 - 28 Nov 2023
Abstract
Ant colony optimization (ACO) has proven its adaptation capabilities on optimization problems with dynamic environments. In this work, the dynamic traveling salesman problem (DTSP) is used as the base problem to generate dynamic test cases. Two types of dynamic changes for the DTSP are considered: (1) node changes and (2) weight changes. In the experiments, ACO algorithms are systematically compared in different DTSP test cases. Statistical tests are performed using the arithmetic mean and standard deviation of the ACO algorithms' results, which is the standard way of comparing ACO algorithms. To complement these comparisons, quantiles of the result distribution are also used to measure the peak-, average-, and bad-case performance of the ACO algorithms. The experimental results demonstrate some advantages of using quantiles for evaluating the performance of ACO algorithms in some DTSP test cases.
Full article
(This article belongs to the Special Issue Peak and Bad-Case Performance of Swarm and Evolutionary Optimization Algorithms)
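The quantile-based comparison described above amounts to a few lines of NumPy; the per-run tour lengths below are random placeholders standing in for ACO results on a DTSP test case (for a minimization problem, the low quantile reflects peak-case and the high quantile bad-case behaviour).

```python
import numpy as np

rng = np.random.default_rng(42)
# Placeholder: best tour lengths from 30 independent runs of two ACO variants.
results = {
    "ACO-A": rng.normal(loc=1050, scale=25, size=30),
    "ACO-B": rng.normal(loc=1040, scale=60, size=30),
}

for name, runs in results.items():
    q05, q50, q95 = np.quantile(runs, [0.05, 0.50, 0.95])
    print(f"{name}: mean={runs.mean():7.1f}  std={runs.std(ddof=1):5.1f}  "
          f"peak(q05)={q05:7.1f}  median={q50:7.1f}  bad-case(q95)={q95:7.1f}")
```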
Open Access Editorial
Special Issue on “Algorithms for Biomedical Image Analysis and Processing”
Algorithms 2023, 16(12), 544; https://doi.org/10.3390/a16120544 - 28 Nov 2023
Abstract
Biomedical imaging is a broad field concerning image capture for diagnostic and therapeutic purposes [...]
Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
Open Access Article
A Multi-Class Deep Learning Approach for Early Detection of Depressive and Anxiety Disorders Using Twitter Data
Algorithms 2023, 16(12), 543; https://doi.org/10.3390/a16120543 - 27 Nov 2023
Abstract
Social media occupies an important place in people's daily lives, where users share various content and topics such as thoughts, experiences, events and feelings. The massive use of social media has led to the generation of huge volumes of data. These data constitute a treasure trove from which large volumes of relevant information can be extracted, particularly with deep learning techniques. In this context, various research studies have been carried out with the aim of detecting mental disorders, notably depression and anxiety, through the analysis of data extracted from the Twitter platform. However, although these studies were able to achieve very satisfactory results, they relied mainly on binary classification models, treating each mental disorder separately. It would therefore be preferable to develop systems capable of dealing with several mental disorders at the same time. To address this point, we propose a well-defined methodology involving the use of deep learning to develop effective multi-class models for detecting both depressive and anxiety disorders through the analysis of tweets. The idea consists of testing a large number of deep learning models, ranging from simple to hybrid variants, to examine their strengths and weaknesses. Moreover, we use the grid search technique to find suitable values for the learning rate hyper-parameter, given its importance in training models. Our work is validated through several experiments and comparisons considering various datasets and other binary classification models. The aim is to show the effectiveness of both the assumptions used to collect the data and the use of multi-class models rather than binary-class models. Overall, the results obtained are satisfactory and very competitive compared to related works.
Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms (2nd Edition))
Open Access Article
Enhancing Cryptocurrency Price Forecasting by Integrating Machine Learning with Social Media and Market Data
Algorithms 2023, 16(12), 542; https://doi.org/10.3390/a16120542 - 27 Nov 2023
Abstract
Since the advent of Bitcoin, the cryptocurrency landscape has seen the emergence of several virtual currencies that have quickly established their presence in the global market. The dynamics of this market, influenced by a multitude of factors that are difficult to predict, pose a challenge to fully comprehend its underlying insights. This paper proposes a methodology for suggesting when it is appropriate to buy or sell cryptocurrencies, in order to maximize profits. Starting from large sets of market and social media data, our methodology combines different statistical, text analytics, and deep learning techniques to support a recommendation trading algorithm. In particular, we exploit additional information such as correlation between social media posts and price fluctuations, causal connection among prices, and the sentiment of social media users regarding cryptocurrencies. Several experiments were carried out on historical data to assess the effectiveness of the trading algorithm, achieving an overall average gain of 194% without transaction fees and 117% when considering fees. In particular, among the different types of cryptocurrencies considered (i.e., high capitalization, solid projects, and meme coins), the trading algorithm has proven to be very effective in predicting the price trends of influential meme coins, yielding considerably higher profits compared to other cryptocurrency types.
Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
Open Access Article
A Novel Deep Reinforcement Learning (DRL) Algorithm to Apply Artificial Intelligence-Based Maintenance in Electrolysers
Algorithms 2023, 16(12), 541; https://doi.org/10.3390/a16120541 - 27 Nov 2023
Abstract
Hydrogen provides a clean source of energy that can be produced with the aid of electrolysers. For electrolysers to operate cost-effectively and safely, it is necessary to define an appropriate maintenance strategy. Predictive maintenance is one such strategy, but it often relies on data from sensors, which can themselves become faulty and provide false information. Consequently, maintenance will not be performed at the right time and failure will occur. To address this problem, artificial intelligence is applied to predict sensor readings based on data obtained from another instrument within the process. In this study, a novel algorithm is developed using Deep Reinforcement Learning (DRL) to select the best feature(s) among the measured data of the electrolyser that can best predict the target sensor data for predictive maintenance. The features are used as input into a type of deep neural network called long short-term memory (LSTM) to make predictions. The developed DRL algorithm has been compared with those found in the literature within the scope of this study. The results have been excellent and, in fact, have produced the best scores. Specifically, its correlation coefficient with the target variable was nearly perfect (0.99). Likewise, the root-mean-square error (RMSE) between the experimental sensor data and the predicted variable was only 0.1351.
Full article
(This article belongs to the Special Issue Reinforcement Learning and Its Applications in Modern Power and Energy Systems)
Open Access Article
Estimating the Frequencies of Maximal Theta-Gamma Coupling in EEG during the N-Back Task: Sensitivity to Methodology and Temporal Instability
Algorithms 2023, 16(12), 540; https://doi.org/10.3390/a16120540 - 27 Nov 2023
Abstract
Phase-amplitude coupling (PAC) of theta and gamma rhythms of the brain has been observed in animals and humans, with evidence of its involvement in cognitive functions and brain disorders. This motivates finding individual frequencies of maximal theta-gamma coupling (TGC) and using them to adjust brain stimulation. This use implies the stability of the frequencies at least during the investigation, which has not been sufficiently studied. Meanwhile, there is a range of available algorithms for PAC estimation in the literature. We explored several options at different steps of the calculation, applying the resulting algorithms to the EEG data of 16 healthy subjects performing the n-back working memory task, as well as a benchmark recording with previously reported strong PAC. By comparing the results for the two halves of each session, we estimated reproducibility at a time scale of a few minutes. For the benchmark data, the results were largely similar between the algorithms and stable over time. However, for the EEG, the results depended substantially on the algorithm, while also showing poor reproducibility, challenging the validity of using them for personalizing brain stimulation. Further research is needed on the PAC estimation algorithms, cognitive tasks, and other aspects to reliably determine and effectively use TGC parameters in neuromodulation.
Full article
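As one of the algorithmic options such studies compare, the sketch below estimates PAC with the classic mean-vector-length measure: band-pass filter the theta and gamma ranges, take the theta phase and gamma amplitude via the Hilbert transform, and average the amplitude-weighted complex phase vectors. The frequency bands, sampling rate, and synthetic signal are illustrative, and this is only one of several estimators examined in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def mean_vector_length_pac(x, fs, theta=(4, 8), gamma=(30, 60)):
    """|mean(A_gamma * exp(i * phi_theta))|; larger values indicate stronger coupling."""
    phase = np.angle(hilbert(bandpass(x, *theta, fs)))
    amp = np.abs(hilbert(bandpass(x, *gamma, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic EEG-like signal with gamma amplitude modulated by the theta cycle.
fs = 500
t = np.arange(0, 20, 1 / fs)
theta_wave = np.sin(2 * np.pi * 6 * t)
gamma_wave = (1 + theta_wave) * np.sin(2 * np.pi * 45 * t)
eeg = theta_wave + 0.5 * gamma_wave + 0.2 * np.random.default_rng(0).normal(size=t.size)
print("PAC (coupled signal):", mean_vector_length_pac(eeg, fs))
```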

Open Access Article
Improved Load Frequency Control in Power Systems Hosting Wind Turbines by an Augmented Fractional Order PID Controller Optimized by the Powerful Owl Search Algorithm
Algorithms 2023, 16(12), 539; https://doi.org/10.3390/a16120539 - 25 Nov 2023
Abstract
The penetration of intermittent wind turbines in power systems imposes challenges to frequency stability. In this light, this paper presents a new control method based on a modified fractional-order proportional integral derivative (FOPID) controller. The method focuses on the coordinated control of load-frequency control (LFC) and superconducting magnetic energy storage (SMES) using a cascaded FOPD–FOPID controller. To improve the performance of the FOPD–FOPID controller, the developed owl search algorithm (DOSA) is used to optimize its parameters. The proposed control method is compared with several other methods, including LFC and SMES based on a robust controller, LFC and SMES based on the moth swarm algorithm (MSA)–PID controller, LFC based on the MSA–PID controller with SMES, and LFC based on the MSA–PID controller without SMES, in four scenarios. The results demonstrate the superior performance of the proposed method compared to the other methods, and show that it is robust against load disturbances, disturbances caused by wind turbines, and system parameter uncertainties.
Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimal Design of Engineering Problems)
Open Access Article
Heart Disease Prediction Using Concatenated Hybrid Ensemble Classifiers
Algorithms 2023, 16(12), 538; https://doi.org/10.3390/a16120538 - 25 Nov 2023
Abstract
Heart disease is a leading global cause of mortality, demanding early detection for effective and timely medical intervention. In this study, we propose a machine learning-based model for early heart disease prediction. This model is trained on a dataset from the UC Irvine Machine Learning Repository (UCI) and employs the Extra Trees Classifier for feature selection. To ensure robust model training, we standardize the dataset using the StandardScaler method, preserving the distribution shape and mitigating the impact of outliers. For the classification task, we introduce a novel approach: concatenated hybrid ensemble voting classification. This method combines two hybrid ensemble classifiers, each utilizing a distinct subset of base classifiers from a set that includes Support Vector Machine, Decision Tree, K-Nearest Neighbor, Logistic Regression, AdaBoost and Naive Bayes. By leveraging the concatenated ensemble classifiers, the proposed model shows promising performance; in particular, it achieves an accuracy of 86.89%. The obtained results highlight the efficacy of combining the strengths of multiple base classifiers for early heart disease prediction, thus aiding and enabling timely medical intervention.
Full article
(This article belongs to the Collection Feature Papers in Algorithms and Mathematical Models for Computer-Assisted Diagnostic Systems)
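One plausible reading of the "concatenated hybrid ensemble" is two soft voting classifiers built over distinct base-learner subsets, combined by an outer voting step. The scikit-learn sketch below follows that reading with placeholder hyper-parameters and a stand-in dataset; it should not be taken as the authors' exact configuration.

```python
from sklearn.datasets import load_breast_cancer  # stand-in for the UCI heart dataset
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Two hybrid ensembles over distinct subsets of base classifiers.
ensemble_a = VotingClassifier(
    [("svm", SVC(probability=True)),
     ("tree", DecisionTreeClassifier(max_depth=5)),
     ("knn", KNeighborsClassifier())],
    voting="soft")
ensemble_b = VotingClassifier(
    [("logreg", LogisticRegression(max_iter=1000)),
     ("ada", AdaBoostClassifier()),
     ("nb", GaussianNB())],
    voting="soft")

# Outer ("concatenated") vote over the two hybrid ensembles, after standardization.
model = make_pipeline(
    StandardScaler(),
    VotingClassifier([("a", ensemble_a), ("b", ensemble_b)], voting="soft"))
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```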
Open Access Article
Comparing Activation Functions in Machine Learning for Finite Element Simulations in Thermomechanical Forming
Algorithms 2023, 16(12), 537; https://doi.org/10.3390/a16120537 - 25 Nov 2023
Abstract
Finite element (FE) simulations have been effective in simulating thermomechanical forming processes, yet challenges arise when applying them to new materials due to nonlinear behaviors. To address this, machine learning techniques and artificial neural networks play an increasingly vital role in developing complex models. This paper presents an innovative approach to parameter identification in flow laws, utilizing an artificial neural network that learns directly from test data and automatically generates a Fortran subroutine for the Abaqus standard or explicit FE codes. We investigate the impact of activation functions on prediction and computational efficiency by comparing Sigmoid, Tanh, ReLU, Swish, Softplus, and the less common Exponential function. Despite its infrequent use, the Exponential function demonstrates noteworthy performance and reduced computation times. Model validation involves comparing predictive capabilities with experimental data from compression tests, and numerical simulations confirm the numerical implementation in the Abaqus explicit FE code.
Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
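The activation functions compared in the paper are easy to place side by side; the NumPy sketch below simply evaluates them on a sample grid. The exponential variant is taken to be plain exp, as the name suggests, though the paper's exact definition may differ.

```python
import numpy as np

activations = {
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "tanh": np.tanh,
    "relu": lambda x: np.maximum(0.0, x),
    "swish": lambda x: x / (1.0 + np.exp(-x)),   # x * sigmoid(x)
    "softplus": lambda x: np.log1p(np.exp(x)),
    "exponential": np.exp,                        # assumed: f(x) = exp(x)
}

x = np.linspace(-3.0, 3.0, 7)
for name, f in activations.items():
    print(f"{name:>12}: {np.round(f(x), 3)}")
```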
Open Access Article
NDARTS: A Differentiable Architecture Search Based on the Neumann Series
Algorithms 2023, 16(12), 536; https://doi.org/10.3390/a16120536 - 25 Nov 2023
Abstract
Neural architecture search (NAS) has shown great potential in discovering powerful and flexible network models, becoming an important branch of automatic machine learning (AutoML). Although search methods based on reinforcement learning and evolutionary algorithms can find high-performance architectures, they typically require hundreds of GPU days. Unlike searching a discrete search space with reinforcement learning or evolutionary algorithms, differentiable neural architecture search (DARTS) continuously relaxes the search space, allowing optimization with gradient-based methods. Building on DARTS, we propose NDARTS in this article. The new algorithm uses the Implicit Function Theorem and the Neumann series to approximate the hyper-gradient, obtaining better results than DARTS. In the simulation experiments, an ablation study was carried out to examine the influence of different parameters on the NDARTS algorithm and to determine the optimal weight; the best performance of NDARTS was then sought in the DARTS search space and the NAS-BENCH-201 search space. Compared with other NAS algorithms, the results showed that NDARTS achieved excellent results on the CIFAR-10, CIFAR-100, and ImageNet datasets and is an effective neural architecture search algorithm.
Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
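The Neumann-series idea behind the hyper-gradient can be illustrated on a small dense problem: approximate the inverse-Hessian-vector product H^(-1)·v by the truncated series alpha * sum_k (I - alpha*H)^k v. The sketch below uses an explicit random symmetric positive-definite matrix as a stand-in for the training-loss Hessian; it shows the approximation mechanics, not NDARTS itself.

```python
import numpy as np

def neumann_inverse_hvp(H: np.ndarray, v: np.ndarray, alpha: float, K: int) -> np.ndarray:
    """Approximate H^{-1} v with alpha * sum_{k=0..K} (I - alpha H)^k v."""
    p = v.copy()   # current term (I - alpha H)^k v
    s = v.copy()   # running sum
    for _ in range(K):
        p = p - alpha * (H @ p)
        s = s + p
    return alpha * s

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
H = A @ A.T + 50 * np.eye(50)        # symmetric positive-definite stand-in Hessian
v = rng.normal(size=50)

alpha = 1.0 / np.linalg.norm(H, 2)   # step size small enough for the series to converge
exact = np.linalg.solve(H, v)
for K in (5, 50, 500):
    approx = neumann_inverse_hvp(H, v, alpha, K)
    err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"K={K:4d}  relative error = {err:.3e}")
```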
Open Access Article
Assessing Algorithms Used for Constructing Confidence Ellipses in Multidimensional Scaling Solutions
Algorithms 2023, 16(12), 535; https://doi.org/10.3390/a16120535 - 24 Nov 2023
Abstract
This paper assesses algorithms proposed for constructing confidence ellipses in multidimensional scaling (MDS) solutions and proposes a new approach to interpreting these confidence ellipses via hierarchical cluster analysis (HCA). It is shown that the most effective algorithm for constructing confidence ellipses involves the generation of simulated distances based on the original multivariate dataset and then the creation of MDS maps that are scaled, reflected, rotated, translated, and finally superimposed. For this algorithm, the stability measure of the average areas tends to zero with increasing sample size n, following the power model A·n^(−B), with positive B values ranging from 0.7 to 2 and high R-squared values of around 0.99. This algorithm was applied to create confidence ellipses in the MDS plots of squared Euclidean and Mahalanobis distances for continuous and binary data. It was found that plotting confidence ellipses in MDS plots offers a better visualization of the distance map of the populations under study compared to plotting single points. However, the confidence ellipses cannot eliminate the subjective selection of clusters in the MDS plot based simply on the proximity of the MDS points. To overcome this subjectivity, the formation of clusters of proximal samples should be quantified. Thus, in addition to the algorithm assessment, we propose a new approach that estimates all possible cluster probabilities associated with the confidence ellipses by applying HCA using distance matrices derived from these ellipses.
Full article
(This article belongs to the Special Issue Algorithms in Data Classification)
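The reported power model for the stability measure, A·n^(−B), can be fitted with SciPy's curve_fit. The (sample size, average area) pairs below are synthetic placeholders generated from such a model, purely to show the fitting mechanics, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_model(n, A, B):
    return A * n ** (-B)

# Synthetic average-ellipse-area data following A * n^(-B) with noise (A=12, B=1.1).
rng = np.random.default_rng(0)
n = np.array([20, 40, 80, 160, 320, 640], dtype=float)
area = power_model(n, 12.0, 1.1) * (1 + 0.05 * rng.normal(size=n.size))

(A_hat, B_hat), _ = curve_fit(power_model, n, area, p0=(1.0, 1.0))
residuals = area - power_model(n, A_hat, B_hat)
r_squared = 1 - residuals.var() / area.var()
print(f"A = {A_hat:.2f}, B = {B_hat:.2f}, R^2 = {r_squared:.3f}")
```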
Open Access Article
Ship Detection Algorithm Based on YOLOv5 Network Improved with Lightweight Convolution and Attention Mechanism
Algorithms 2023, 16(12), 534; https://doi.org/10.3390/a16120534 - 22 Nov 2023
Abstract
To address insufficient feature extraction and low precision and recall in sea-surface ship detection, a YOLOv5 algorithm based on lightweight convolution and an attention mechanism is proposed. We combine the receptive field enhancement module (REF) with the spatial pyramid rapid pooling module to retain richer semantic information and expand the receptive field. A slim-neck module based on lightweight convolution (GSConv) is added to the neck section to achieve greater computational cost-effectiveness of the detector. To improve the model's performance and focus on positional information, we add the coordinate attention mechanism. Finally, the CIoU loss function is replaced by SIoU. Experimental results on the SeaShips dataset show that, compared with the original YOLOv5 algorithm, the improved YOLOv5 algorithm achieves clear improvements in the model evaluation metrics, while the number of parameters in the model does not increase significantly and the detection speed still meets the requirements of sea-surface ship detection.
Full article

Topics
Topic in
Algorithms, Axioms, Information, Mathematics, Symmetry
Fuzzy Number, Fuzzy Difference, Fuzzy Differential: Theory and Applications
Topic Editors: Changyou Wang, Dong Qiu, Yonghong Shen; Deadline: 20 December 2023
Topic in
Algorithms, Entropy, Future Internet, Mathematics, Symmetry
Complex Systems and Network Science
Topic Editors: Massimo Marchiori, Latora Vito; Deadline: 31 December 2023
Topic in
AI, Algorithms, Applied Sciences, Energies, JNE
Intelligent, Explainable and Trustworthy AI for Advanced Nuclear and Sustainable Energy Systems
Topic Editors: Dinesh Kumar, Syed Bahauddin Alam; Deadline: 31 January 2024
Topic in
Entropy, Future Internet, Algorithms, Computation, MAKE, MTI
Interactive Artificial Intelligence and Man-Machine Communication
Topic Editors: Christos Troussas, Cleo Sgouropoulou, Akrivi Krouska, Ioannis Voyiatzis, Athanasios Voulodimos; Deadline: 20 February 2024

Special Issues
Special Issue in
Algorithms
Algorithms for Games AI
Guest Editors: Wenxin Li, Haifeng Zhang; Deadline: 15 December 2023
Special Issue in
Algorithms
String Algorithms and Applications
Guest Editors: Dominik Köppl, Tatsuya Akutsu; Deadline: 30 December 2023
Special Issue in
Algorithms
Bio-Inspired Algorithms
Guest Editors: Sándor Szénási, Gábor Kertész; Deadline: 31 December 2023
Special Issue in
Algorithms
Meta-Heuristics and Machine Learning in Modelling, Developing and Optimising Complex Systems
Guest Editors: Mehdi Neshat, Francisco Cuevas de la Rosa; Deadline: 15 January 2024
Topical Collections
Topical Collection in
Algorithms
Feature Papers in Algorithms for Multidisciplinary Applications
Collection Editor: Francesc Pozo
Topical Collection in
Algorithms
Feature Papers in Randomized, Online and Approximation Algorithms
Collection Editor: Frank Werner
Topical Collection in
Algorithms
Featured Reviews of Algorithms
Collection Editors: Arun Kumar Sangaiah, Xingjuan Cai