Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

30 pages, 3724 KiB  
Review
Defect Detection Methods for Industrial Products Using Deep Learning Techniques: A Review
by Alireza Saberironaghi, Jing Ren and Moustafa El-Gindy
Algorithms 2023, 16(2), 95; https://doi.org/10.3390/a16020095 - 08 Feb 2023
Cited by 20 | Viewed by 11972
Abstract
Over the last few decades, detecting surface defects has attracted significant attention as a challenging task. There are specific classes of problems that can be solved using traditional image processing techniques. However, these techniques struggle with complex textures in backgrounds, noise, and differences in lighting conditions. As a solution to this problem, deep learning has recently emerged, motivated by two main factors: access to computing power and the rapid digitization of society, which enables the creation of large databases of labeled samples. This review paper aims to briefly summarize and analyze the current state of research on detecting defects using machine learning methods. First, deep learning-based detection of surface defects on industrial products is discussed from three perspectives: supervised, semi-supervised, and unsupervised. Secondly, the current research status of deep learning defect detection methods for X-ray images is discussed. Finally, we summarize the most common challenges and their potential solutions in surface defect detection, such as unbalanced sample identification, limited sample size, and real-time processing. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)
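As a pointer for readers new to the area, the unsupervised perspective surveyed above can be illustrated in a few lines: train a model to reconstruct defect-free samples and flag inputs whose reconstruction error is high. The sketch below is a minimal, hypothetical example on synthetic data; the patch size, network, and threshold are assumptions, not taken from the paper.

```python
# Minimal sketch of unsupervised defect detection via reconstruction error.
# Synthetic stand-ins for image patches; all sizes/thresholds are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 64))              # defect-free patches (flattened)
defective = normal[:50] + 3.0 * rng.normal(size=(50, 64))  # corrupted patches

# An MLP trained to reproduce its own input acts as a simple autoencoder.
ae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ae.fit(normal, normal)

def reconstruction_error(x):
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

# Threshold set from the defect-free distribution (illustrative choice).
threshold = np.percentile(reconstruction_error(normal), 99)
print("fraction flagged as defective:",
      np.mean(reconstruction_error(defective) > threshold))
```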

14 pages, 2116 KiB  
Article
Effective Heart Disease Prediction Using Machine Learning Techniques
by Chintan M. Bhatt, Parth Patel, Tarang Ghetia and Pier Luigi Mazzeo
Algorithms 2023, 16(2), 88; https://doi.org/10.3390/a16020088 - 06 Feb 2023
Cited by 33 | Viewed by 32888
Abstract
The diagnosis and prognosis of cardiovascular disease are crucial medical tasks to ensure correct classification, which helps cardiologists provide proper treatment to the patient. Machine learning applications in the medical niche have increased as they can recognize patterns from data. Using machine learning to classify cardiovascular disease occurrence can help diagnosticians reduce misdiagnosis. This research develops a model that can correctly predict cardiovascular diseases to reduce the fatality caused by cardiovascular diseases. This paper proposes a method of k-modes clustering with Huang initialization that can improve classification accuracy. Models such as random forest (RF), decision tree classifier (DT), multilayer perceptron (MP), and XGBoost (XGB) are used. GridSearchCV was used to tune the hyperparameters of the applied models to optimize the results. The proposed model is applied to a real-world dataset of 70,000 instances from Kaggle. Models were trained on data split 80:20 and achieved the following accuracies: decision tree: 86.37% (with cross-validation) and 86.53% (without cross-validation); XGBoost: 86.87% (with cross-validation) and 87.02% (without cross-validation); random forest: 87.05% (with cross-validation) and 86.92% (without cross-validation); multilayer perceptron: 87.28% (with cross-validation) and 86.94% (without cross-validation). The proposed models have the following AUC (area under the curve) values: decision tree: 0.94; XGBoost: 0.95; random forest: 0.95; multilayer perceptron: 0.95. The conclusion drawn from this research is that the multilayer perceptron with cross-validation outperformed all other algorithms in terms of accuracy, achieving the highest accuracy of 87.28%. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Healthcare)
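For orientation, the evaluation pipeline the abstract describes (an 80:20 split, GridSearchCV hyperparameter tuning, and cross-validated accuracy) follows a standard scikit-learn pattern. A minimal sketch on synthetic data, with an illustrative parameter grid rather than the authors' settings:

```python
# Sketch of the tuning/evaluation pipeline described above: 80:20 split,
# GridSearchCV hyperparameter tuning, and cross-validated accuracy.
# The dataset and parameter grid are illustrative, not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split

X, y = make_classification(n_samples=2000, n_features=11, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=5, scoring="accuracy",
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("test accuracy:", grid.best_estimator_.score(X_te, y_te))
print("CV accuracy:", cross_val_score(grid.best_estimator_, X_tr, y_tr, cv=5).mean())
```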

18 pages, 2875 KiB  
Article
Image-to-Image Translation-Based Data Augmentation for Improving Crop/Weed Classification Models for Precision Agriculture Applications
by L. G. Divyanth, D. S. Guru, Peeyush Soni, Rajendra Machavaram, Mohammad Nadimi and Jitendra Paliwal
Algorithms 2022, 15(11), 401; https://doi.org/10.3390/a15110401 - 30 Oct 2022
Cited by 19 | Viewed by 5352
Abstract
Applications of deep-learning models in machine vision for crop/weed identification have markedly improved the reliability of precision weed management. However, substantial data are required to obtain the desired results from this highly data-driven operation. This study aims to curtail the effort needed to prepare very large image datasets by creating artificial images of maize (Zea mays) and four common weeds (i.e., Charlock, Fat Hen, Shepherd’s Purse, and small-flowered Cranesbill) through conditional Generative Adversarial Networks (cGANs). The fidelity of these synthetic images was tested through t-distributed stochastic neighbor embedding (t-SNE) visualization plots of real and artificial images of each class. The reliability of this method as a data augmentation technique was validated through classification results based on the transfer learning of a predefined convolutional neural network (CNN) architecture, AlexNet; the feature extraction method drew on the deepest pooling layer of the same network. Machine learning models based on a support vector machine (SVM) and linear discriminant analysis (LDA) were trained using these feature vectors. The F1 scores of the transfer learning model increased from 0.97 to 0.99 when additionally supported by the artificial dataset. Similarly, in the case of the feature extraction technique, the classification F1-scores increased from 0.93 to 0.96 for SVM and from 0.94 to 0.96 for the LDA model. The results show that image augmentation using generative adversarial networks (GANs) can improve the performance of crop/weed classification models with the added advantage of reduced time and manpower. Furthermore, the study demonstrates that generative networks can be a valuable tool for deep-learning applications in agriculture. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)
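The feature-extraction route described above (deep features from AlexNet's deepest pooling layer feeding an SVM or LDA) can be sketched as follows; this assumes a recent torchvision, and the weights, tensors, and labels are placeholders rather than the study's crop/weed data.

```python
# Sketch of the feature-extraction route: take activations from AlexNet's
# deepest pooling layer and train classical classifiers on them.
# Images and labels below are dummy stand-ins (assumptions).
import torch
from torchvision.models import alexnet, AlexNet_Weights
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1).eval()

def deep_features(images):                      # images: (N, 3, 224, 224)
    with torch.no_grad():
        x = model.features(images)              # ends at the deepest max-pool layer
    return torch.flatten(x, 1).numpy()          # (N, 256*6*6) feature vectors

images = torch.rand(16, 3, 224, 224)            # stand-in for real + cGAN images
labels = [0, 1] * 8
feats = deep_features(images)

SVC(kernel="linear").fit(feats, labels)         # SVM route
LinearDiscriminantAnalysis().fit(feats, labels) # LDA route
```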

24 pages, 2504 KiB  
Review
A Survey on Fault Diagnosis of Rolling Bearings
by Bo Peng, Ying Bi, Bing Xue, Mengjie Zhang and Shuting Wan
Algorithms 2022, 15(10), 347; https://doi.org/10.3390/a15100347 - 26 Sep 2022
Cited by 20 | Viewed by 4161
Abstract
The failure of a rolling bearing may cause the shutdown of mechanical equipment and even induce catastrophic accidents, resulting in tremendous economic losses and a severely negative impact on society. Fault diagnosis of rolling bearings has therefore become an important topic, attracting much attention from researchers and industry, and there are an increasing number of publications on it. However, there is a lack of a comprehensive survey of existing works from the perspectives of fault detection and fault type recognition in rolling bearings using vibration signals. Therefore, this paper reviews recent fault detection and fault type recognition methods using vibration signals. First, it provides an overview of fault diagnosis of rolling bearings and typical fault types. Then, existing fault diagnosis methods are categorized into fault detection methods and fault type recognition methods, which are reviewed and discussed separately. Finally, a summary of existing datasets, limitations/challenges of existing methods, and future directions is presented to provide more guidance for researchers who are interested in this field. Overall, this survey reviews and analyzes the methods used to diagnose rolling bearing faults and provides comprehensive guidance for researchers in this field. Full article
(This article belongs to the Special Issue Artificial Intelligence for Fault Detection and Diagnosis)
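As a taste of the vibration-signal methods the survey covers, the sketch below computes classic statistical indicators (RMS, kurtosis, crest factor) over signal windows; impulsive bearing faults typically raise kurtosis. The signal, sampling rate, and fault pattern are synthetic assumptions.

```python
# Minimal sketch of vibration-based fault detection: simple statistical
# features over windows; elevated kurtosis hints at impulsive bearing faults.
import numpy as np
from scipy.stats import kurtosis

fs = 12_000                                   # sampling rate (Hz), assumed
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
healthy = np.sin(2 * np.pi * 30 * t) + 0.1 * rng.normal(size=t.size)
faulty = healthy.copy()
faulty[::400] += 3.0                          # periodic impacts from a defect

def window_features(sig, win=2048):
    wins = sig[: sig.size // win * win].reshape(-1, win)
    rms = np.sqrt(np.mean(wins ** 2, axis=1))
    crest = np.max(np.abs(wins), axis=1) / rms
    return rms, kurtosis(wins, axis=1), crest

for name, sig in [("healthy", healthy), ("faulty", faulty)]:
    _, kurt, _ = window_features(sig)
    print(name, "mean kurtosis:", kurt.mean().round(2))
```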

19 pages, 2574 KiB  
Article
GA-Reinforced Deep Neural Network for Net Electric Load Forecasting in Microgrids with Renewable Energy Resources for Scheduling Battery Energy Storage Systems
by Chaoran Zheng, Mohsen Eskandari, Ming Li and Zeyue Sun
Algorithms 2022, 15(10), 338; https://doi.org/10.3390/a15100338 - 21 Sep 2022
Cited by 14 | Viewed by 2821
Abstract
The large-scale integration of wind power and PV cells into electric grids alleviates the problem of an energy crisis. However, it is also responsible for technical and management problems in the power grid, such as power fluctuation, scheduling difficulties, and reliability reduction. The microgrid concept has been proposed to locally control and manage a cluster of local distributed energy resources (DERs) and loads. If the net load power can be accurately predicted, it is possible to schedule/optimize the operation of battery energy storage systems (BESSs) through economic dispatch to cover intermittent renewables. However, the load curve of the microgrid is highly affected by various external factors, resulting in large fluctuations, which makes the prediction problematic. This paper predicts the net electric load of the microgrid using a deep neural network to realize a reliable power supply as well as reduce the cost of power generation. Considering that the backpropagation (BP) neural network has a good approximation effect as well as a strong adaptation ability, a load prediction model based on a BP deep neural network is established. However, the BP neural network has some defects, such as imprecise predictions and a tendency to fall into locally optimal solutions. Hence, a genetic algorithm (GA)-reinforced deep neural network is introduced. By optimizing the weights and thresholds of the BP network, these deficiencies are mitigated and the prediction accuracy is improved. The results reveal that the mean square error (MSE) of the GA-BP neural network prediction is 2.0221, significantly smaller than the 30.3493 of the BP neural network prediction, an error reduction of 93.3%. The error reductions in the root mean square error (RMSE) and mean absolute error (MAE) are 74.18% and 51.2%, respectively. Full article
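The GA-BP idea, choosing the weights and thresholds of a backpropagation network by evolutionary search on the forecast error, can be sketched as below. The toy load series, network size, and GA settings are illustrative assumptions, not the paper's configuration.

```python
# Sketch of the GA-BP idea: evolve the flattened weights of a small
# feed-forward net to minimise forecast MSE. All settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
load = np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=t.size)  # toy net-load series
X = np.stack([load[i:i + 24] for i in range(len(load) - 24)])      # 24-step history
y = load[24:]                                                      # next-step target

H = 8                                         # hidden units
n_params = 24 * H + H + H + 1                 # W1, b1, W2, b2 flattened

def forward(p, X):
    W1 = p[:24 * H].reshape(24, H); b1 = p[24 * H:24 * H + H]
    W2 = p[24 * H + H:24 * H + 2 * H]; b2 = p[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(p):
    return np.mean((forward(p, X) - y) ** 2)

pop = rng.normal(0, 0.5, size=(40, n_params))
for gen in range(100):
    fitness = np.array([mse(p) for p in pop])
    parents = pop[np.argsort(fitness)[:10]]              # selection: keep the best
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.1, (30, n_params))
    pop = np.vstack([parents, children])                 # elitism + mutation
print("best MSE:", min(mse(p) for p in pop).round(4))
```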

23 pages, 3231 KiB  
Article
Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI)
by Harshkumar Mehta and Kalpdrum Passi
Algorithms 2022, 15(8), 291; https://doi.org/10.3390/a15080291 - 17 Aug 2022
Cited by 11 | Viewed by 5822
Abstract
Explainable artificial intelligence (XAI) characteristics have flexible and multifaceted potential in hate speech detection by deep learning models. The aims of this research were to interpret and explain the decisions made by complex artificial intelligence (AI) models and to understand the decision-making process of these models. As a part of this research study, two datasets were taken to demonstrate hate speech detection using XAI. Data preprocessing was performed to clean the data of any inconsistencies, clean the text of the tweets, and tokenize and lemmatize the text. Categorical variables were also simplified in order to generate a clean dataset for training purposes. Exploratory data analysis was performed on the datasets to uncover various patterns and insights. Various pre-existing models were applied to the Google Jigsaw dataset, such as decision trees, k-nearest neighbors, multinomial naïve Bayes, random forest, logistic regression, and long short-term memory (LSTM), among which LSTM achieved an accuracy of 97.6%. Explainable methods such as LIME (local interpretable model-agnostic explanations) were applied to the HateXplain dataset. Variants of the BERT (bidirectional encoder representations from transformers) model, such as BERT + ANN (artificial neural network), with an accuracy of 93.55%, and BERT + MLP (multilayer perceptron), with an accuracy of 93.67%, were created to achieve good performance in terms of explainability using the ERASER (evaluating rationales and simple English reasoning) benchmark. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
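The LIME step referenced above follows the lime package's standard pattern; a minimal sketch with a stand-in scikit-learn classifier (not the paper's LSTM or BERT variants) and toy texts:

```python
# Sketch of LIME on a text classifier: explain one prediction by per-word
# contributions. Classifier and texts are toy stand-ins (assumptions).
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you are wonderful", "I hate you", "have a nice day", "you are awful"]
labels = [0, 1, 0, 1]                       # 1 = hateful (toy data)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["normal", "hate"])
exp = explainer.explain_instance("I hate rainy days", clf.predict_proba,
                                 num_features=4)
print(exp.as_list())                        # per-word contributions to the prediction
```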

23 pages, 14110 KiB  
Article
Design of Multi-Objective-Based Artificial Intelligence Controller for Wind/Battery-Connected Shunt Active Power Filter
by Srilakshmi Koganti, Krishna Jyothi Koganti and Surender Reddy Salkuti
Algorithms 2022, 15(8), 256; https://doi.org/10.3390/a15080256 - 25 Jul 2022
Cited by 18 | Viewed by 3439
Abstract
Nowadays, the integration of renewable energy sources such as solar and wind into the grid is recommended to reduce losses and meet demands. The application of power electronics devices (PED) to control non-linear, unbalanced loads leads to power quality (PQ) issues. This work presents a hybrid controller for a self-tuning filter (STF)-based shunt active power filter (SHAPF), integrated with a wind power generation system (WPGS) and a battery storage system (BS). The SHAPF comprises a three-phase voltage source inverter, coupled via a DC-Link. The proposed neuro-fuzzy inference hybrid controller (NFIHC) utilizes the properties of both Fuzzy Logic (FL) and artificial neural network (ANN) controllers and maintains a constant DC-Link voltage. Phase synchronization was generated by the STF for the effective operation of the SHAPF during unbalanced and distorted supply voltages. In addition, the STF performs the roles of low-pass filters (LPFs) and high-pass filters (HPFs) in splitting the current into its fundamental component (FC) and harmonic component (HC). The control of the SHAPF is based on d-q theory, with the advantage of eliminating LPFs and the phase-locked loop (PLL). The prime objective of the proposed work is to regulate the DC-Link voltage during wind uncertainties and load variations, and to minimize the total harmonic distortion (THD) in the current waveforms, thereby improving the power factor (PF). Test studies with various combinations of balanced/unbalanced loads, wind velocity variations, and supply voltages were used to evaluate the suggested method’s performance. In addition, a comparative analysis was carried out against existing controllers such as the conventional proportional-integral (PI), ANN, and FL controllers. Full article
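For reference, the THD figure of merit on which the controller is evaluated can be computed directly from an FFT of the current waveform; a minimal sketch with an assumed 50 Hz fundamental and a synthetic distorted current:

```python
# Sketch of the THD metric: RMS of the harmonics over the fundamental,
# read off an FFT. Fundamental frequency and test signal are assumptions.
import numpy as np

def thd(signal, fs, f0=50.0, n_harmonics=20):
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    amp = lambda f: spec[np.argmin(np.abs(freqs - f))]
    harmonics = [amp(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(np.sum(np.square(harmonics))) / amp(f0)

fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
i_load = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 250 * t)  # 5th harmonic
print(f"THD = {thd(i_load, fs):.1%}")       # ~20% for this distorted current
```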

28 pages, 1344 KiB  
Review
Overview of Distributed Machine Learning Techniques for 6G Networks
by Eugenio Muscinelli, Swapnil Sadashiv Shinde and Daniele Tarchi
Algorithms 2022, 15(6), 210; https://doi.org/10.3390/a15060210 - 15 Jun 2022
Cited by 18 | Viewed by 3651
Abstract
The main goal of this paper is to survey the influential research on distributed learning technologies playing a key role in the 6G world. Upcoming 6G technology is expected to create an intelligent, highly scalable, dynamic, and programmable wireless communication network able to serve many heterogeneous wireless devices. Various machine learning (ML) techniques are expected to be deployed over the intelligent 6G wireless network to provide solutions to highly complex networking problems. To this end, various 6G nodes and devices are expected to generate vast amounts of data through external sensors, and this data will need to be analyzed. With such massive and distributed data, and various innovations in computing hardware, distributed ML techniques are expected to play an important role in 6G. Though they have several advantages over centralized ML techniques, implementing distributed ML algorithms over resource-constrained wireless environments can be challenging. Therefore, it is important to select a proper ML algorithm based upon the characteristics of the wireless environment and the resource requirements of the learning process. In this work, we survey recently introduced distributed ML techniques with their characteristics and possible benefits, focusing our attention on the most influential papers in the area. We finally give our perspective on the main challenges and advantages for telecommunication networks, along with the main scenarios that could arise. Full article
(This article belongs to the Special Issue Algorithms for Communication Networks)
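One representative technique from the distributed-ML family this survey covers is federated averaging (FedAvg): nodes train locally and a server aggregates their parameters weighted by local sample counts. A minimal numpy sketch on a toy linear-regression task (all data and settings assumed):

```python
# Sketch of federated averaging: local SGD updates, then a weighted
# average of the resulting parameter vectors. Toy data throughout.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    w = weights.copy()
    for _ in range(epochs):                       # least-squares gradient steps
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(global_w, clients):
    n_total = sum(len(y) for _, y in clients)
    locals_ = [local_update(global_w, X, y) for X, y in clients]
    return sum(len(y) / n_total * w for w, (_, y) in zip(locals_, clients))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):                                # 5 nodes with private data
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w + 0.05 * rng.normal(size=40)))

w = np.zeros(2)
for round_ in range(20):                          # communication rounds
    w = fedavg(w, clients)
print("recovered weights:", w.round(2))
```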

22 pages, 22712 KiB  
Article
Improved JPS Path Optimization for Mobile Robots Based on Angle-Propagation Theta* Algorithm
by Yuan Luo, Jiakai Lu, Qiong Qin and Yanyu Liu
Algorithms 2022, 15(6), 198; https://doi.org/10.3390/a15060198 - 08 Jun 2022
Cited by 7 | Viewed by 2693
Abstract
The Jump Point Search (JPS) algorithm ignores the possibility of any-angle walking, so the paths found by the JPS algorithm under a discrete grid map still deviate from the true shortest paths. To address this problem, this paper improves the path optimization strategy of the JPS algorithm by combining it with the viewable-angle concept of the Angle-Propagation Theta* (AP Theta*) algorithm, and it proposes the AP-JPS algorithm based on an any-angle pathfinding strategy. First, based on the JPS algorithm, this paper proposes a vision triangle judgment method to optimize the generated path by selecting the successor search point. Secondly, the idea of the node viewable angle in the AP Theta* algorithm is introduced to modify the line-of-sight (LOS) reachability detection between two nodes. Finally, the paths are optimized using a seventh-order polynomial based on minimum snap, so that the AP-JPS algorithm generates paths that better match actual robot motion. The feasibility and effectiveness of this method are demonstrated by simulation experiments and comparison with other algorithms. The results show that the proposed path planning algorithm obtains paths with good smoothness in environments with different obstacle densities and different map sizes. In the algorithm comparison experiments, the AP-JPS algorithm reduces the path length by 1.61–4.68% and the total turning angle of the path by 58.71–84.67% compared with the JPS algorithm, and it reduces the computing time by 98.59–99.22% compared with the AP Theta* algorithm. Full article
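The line-of-sight (LOS) test at the heart of any-angle planners such as Theta* (and of the modified reachability detection above) can be sketched with a Bresenham-style grid walk; the grid below is illustrative:

```python
# Sketch of a grid LOS check: walk the cells crossed by the segment
# between two nodes and fail on any obstacle (1 = blocked).
def line_of_sight(grid, a, b):
    """Bresenham-style LOS between cells a=(x0,y0) and b=(x1,y1); True if clear."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    while True:
        if grid[y0][x0]:
            return False
        if (x0, y0) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 > -dy: err -= dy; x0 += sx
        if e2 < dx:  err += dx; y0 += sy

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(line_of_sight(grid, (0, 0), (3, 2)))   # blocked by the wall -> False
print(line_of_sight(grid, (0, 2), (3, 2)))   # clear bottom row    -> True
```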

22 pages, 848 KiB  
Review
A Survey on Network Optimization Techniques for Blockchain Systems
by Robert Antwi, James Dzisi Gadze, Eric Tutu Tchao, Axel Sikora, Henry Nunoo-Mensah, Andrew Selasi Agbemenu, Kwame Opunie-Boachie Obour Agyekum, Justice Owusu Agyemang, Dominik Welte and Eliel Keelson
Algorithms 2022, 15(6), 193; https://doi.org/10.3390/a15060193 - 04 Jun 2022
Cited by 9 | Viewed by 4490
Abstract
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of the IoT can potentially be improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder its integration with the IoT. Solutions to blockchain’s scalability issues, such as minimizing the computational complexity of consensus algorithms or blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in the IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide on other peers with whom to exchange blockchain data. As a result, the peer-to-peer (P2P) topology formation limits the effective achievable throughput. This paper provides a survey on the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into those layers, and surveys the state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work. Full article
(This article belongs to the Special Issue Advances in Blockchain Architecture and Consensus)

22 pages, 878 KiB  
Article
Efficient Machine Learning Models for Early Stage Detection of Autism Spectrum Disorder
by Mousumi Bala, Mohammad Hanif Ali, Md. Shahriare Satu, Khondokar Fida Hasan and Mohammad Ali Moni
Algorithms 2022, 15(5), 166; https://doi.org/10.3390/a15050166 - 16 May 2022
Cited by 19 | Viewed by 4311
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder that severely impairs an individual’s cognitive, linguistic, object recognition, communication, and social abilities. The condition is not treatable, but early detection of ASD can assist in diagnosis and in taking proper steps to mitigate its effects. Using various artificial intelligence (AI) techniques, ASD can be detected at an earlier stage than with traditional methods. The aim of this study was to propose a machine learning model that investigates ASD data at different age levels and identifies ASD more accurately. In this work, we gathered ASD datasets of toddlers, children, adolescents, and adults and used several feature selection techniques. Then, different classifiers were applied to these datasets, and we assessed their performance with evaluation metrics including predictive accuracy, kappa statistics, the F1-measure, and AUROC. In addition, we analyzed the performance of individual classifiers using a non-parametric statistical significance test. For the toddler, child, adolescent, and adult datasets, we found that the Support Vector Machine (SVM) performed better than other classifiers, achieving 97.82% accuracy for the RIPPER-based toddler subset; 99.61% accuracy for the Correlation-based feature selection (CFS) and Boruta CFS intersect (BIC) method-based child subset; 95.87% accuracy for the Boruta-based adolescent subset; and 96.82% accuracy for the CFS-based adult subset. Finally, we applied the Shapley Additive Explanations (SHAP) method to the feature subsets that achieved the highest accuracy and ranked their features based on this analysis. Full article
(This article belongs to the Special Issue Interpretability, Accountability and Robustness in Machine Learning)
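The evaluation protocol described above, filter-style feature selection followed by an SVM and the reported metrics (accuracy, kappa, F1, AUROC), maps onto a few lines of scikit-learn; toy data stands in for the ASD screening datasets, and mutual information stands in for CFS/Boruta:

```python
# Sketch of the protocol: feature selection -> SVM -> the reported metrics.
# Data and selector are illustrative stand-ins (assumptions).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

selector = SelectKBest(mutual_info_classif, k=8).fit(X_tr, y_tr)   # filter method
svm = SVC(probability=True).fit(selector.transform(X_tr), y_tr)

pred = svm.predict(selector.transform(X_te))
proba = svm.predict_proba(selector.transform(X_te))[:, 1]
print("accuracy:", accuracy_score(y_te, pred).round(3))
print("kappa:   ", cohen_kappa_score(y_te, pred).round(3))
print("F1:      ", f1_score(y_te, pred).round(3))
print("AUROC:   ", roc_auc_score(y_te, proba).round(3))
```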

18 pages, 523 KiB  
Article
Closed-Form Solution of the Bending Two-Phase Integral Model of Euler-Bernoulli Nanobeams
by Efthimios Providas
Algorithms 2022, 15(5), 151; https://doi.org/10.3390/a15050151 - 28 Apr 2022
Cited by 7 | Viewed by 3121
Abstract
Recent developments have shown that the widely used simplified differential model of Eringen’s nonlocal elasticity in nanobeam analysis is not equivalent to the corresponding and initially proposed integral models, the pure integral model and the two-phase integral model, in all cases of loading and boundary conditions. This has resolved a paradox with solutions that are not in line with the expected softening effect of the nonlocal theory that appears in all other cases. In addition, it revived interest in the integral model and the two-phase integral model, which were not used due to their complexity in solving the relevant integral and integro-differential equations, respectively. In this article, we use a direct operator method for solving boundary value problems for nth order linear Volterra–Fredholm integro-differential equations of convolution type to construct closed-form solutions to the two-phase integral model of Euler–Bernoulli nanobeams in bending under transverse distributed load and various types of boundary conditions. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
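For context, the two-phase model mixes a local elastic term with a nonlocal convolution term; a commonly cited general form (notation assumed here, not taken from the paper) is:

```latex
% Two-phase nonlocal constitutive law (general form; notation assumed):
% a local fraction xi_1 plus a nonlocal convolution fraction xi_2.
t(x) \;=\; \xi_1\, E\, \varepsilon(x)
      \;+\; \xi_2\, E \int_0^L K\!\left(|x-s|,\kappa\right) \varepsilon(s)\, \mathrm{d}s,
\qquad \xi_1 + \xi_2 = 1,
```

where K is the nonlocal kernel with length-scale parameter κ; setting ξ₂ = 1 recovers the pure integral model.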

30 pages, 937 KiB  
Review
A Review on the Performance of Linear and Mixed Integer Two-Stage Stochastic Programming Software
by Juan J. Torres, Can Li, Robert M. Apap and Ignacio E. Grossmann
Algorithms 2022, 15(4), 103; https://doi.org/10.3390/a15040103 - 22 Mar 2022
Cited by 10 | Viewed by 4152
Abstract
This paper presents a tutorial on the state-of-the-art software for the solution of two-stage (mixed-integer) linear stochastic programs and provides a list of software designed for this purpose. The methodologies are classified according to the decomposition alternatives and the types of the variables in the problem. We review the fundamentals of Benders decomposition, dual decomposition and progressive hedging, as well as possible improvements and variants. We also present extensive numerical results to underline the properties and performance of each algorithm using software implementations, including DECIS, FORTSP, PySP, and DSP. Finally, we discuss the strengths and weaknesses of each methodology and propose future research directions. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
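All of the reviewed packages target the same canonical problem class; in standard notation, the two-stage stochastic program with recourse is:

```latex
% Two-stage (mixed-integer) linear stochastic program with recourse,
% first-stage decision x and scenario realisation xi (standard notation):
\min_{x \in X} \; c^{\top} x \;+\; \mathbb{E}_{\xi}\big[\, Q(x,\xi) \,\big],
\qquad
Q(x,\xi) \;=\; \min_{y \ge 0} \Big\{\, q(\xi)^{\top} y \;:\;
  W(\xi)\, y = h(\xi) - T(\xi)\, x \,\Big\}.
```

Benders decomposition approximates Q(x, ξ) with cuts in a master problem, whereas dual decomposition and progressive hedging split the problem by scenario.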

18 pages, 576 KiB  
Article
Evolutionary Optimization of Spiking Neural P Systems for Remaining Useful Life Prediction
by Leonardo Lucio Custode, Hyunho Mo, Andrea Ferigo and Giovanni Iacca
Algorithms 2022, 15(3), 98; https://doi.org/10.3390/a15030098 - 19 Mar 2022
Cited by 8 | Viewed by 3597
Abstract
Remaining useful life (RUL) prediction is a key enabler for predictive maintenance. In fact, the possibility of accurately and reliably predicting the RUL of a system, based on a record of its monitoring data, can allow users to schedule maintenance interventions before faults occur. In the recent literature, several data-driven methods for RUL prediction have been proposed. However, most of them are based on traditional (connectionist) neural networks, such as convolutional neural networks, and alternative mechanisms have barely been explored. Here, we tackle the RUL prediction problem for the first time by using a membrane computing paradigm, namely that of Spiking Neural P (in short, SN P) systems. First, we show how SN P systems can be adapted to handle the RUL prediction problem. Then, we propose the use of a neuro-evolutionary algorithm to optimize the structure and parameters of the SN P systems. Our results on two datasets, namely the CMAPSS and new CMAPSS benchmarks from NASA, are fairly comparable with those obtained by much more complex deep networks, showing a reasonable compromise between performance and number of trainable parameters, which in turn correlates with memory consumption and computing time. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)

37 pages, 1000 KiB  
Article
Deterministic Approximate EM Algorithm; Application to the Riemann Approximation EM and the Tempered EM
by Thomas Lartigue, Stanley Durrleman and Stéphanie Allassonnière
Algorithms 2022, 15(3), 78; https://doi.org/10.3390/a15030078 - 25 Feb 2022
Cited by 4 | Viewed by 2812
Abstract
The Expectation Maximisation (EM) algorithm is widely used to optimise non-convex likelihood functions with latent variables. Many authors have modified its simple design to fit more specific situations. For instance, the Expectation (E) step has been replaced by Monte Carlo (MC), Markov Chain Monte Carlo or tempered approximations, among others. Most of the well-studied approximations belong to the stochastic class. By comparison, the literature is lacking when it comes to deterministic approximations. In this paper, we introduce a theoretical framework, with state-of-the-art convergence guarantees, for any deterministic approximation of the E step. We analyse theoretically and empirically several approximations that fit into this framework. First, for intractable E-steps, we introduce a deterministic version of MC-EM using Riemann sums: a straightforward method that requires no hyper-parameter fine-tuning and is useful when the dimensionality is low enough that MC-EM is not warranted. Then, we consider the tempered approximation, borrowed from the Simulated Annealing literature and used to escape local extrema. We prove that the tempered EM verifies the convergence guarantees for a wider range of temperature profiles than previously considered. We showcase empirically how new non-trivial profiles can more successfully escape adversarial initialisations. Finally, we combine the Riemann and tempered approximations into a method that accomplishes both their purposes. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
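The tempered approximation discussed above is easy to prototype: in the E step, the responsibilities are raised to the power 1/T and renormalised, with T annealed towards 1. A minimal sketch for a two-component 1D Gaussian mixture (the temperature profile and data are illustrative, not the paper's):

```python
# Sketch of tempered EM for a 1D two-component Gaussian mixture:
# high T flattens responsibilities, helping escape local optima.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

mu, var, pi = np.array([-0.1, 0.1]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for it in range(50):
    T = 1.0 + 4.0 * 0.9 ** it                 # temperature profile -> 1 (assumed)
    # E-step (tempered): soften the posterior responsibilities
    r = pi * normal_pdf(x[:, None], mu, var)
    r = r ** (1.0 / T)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: standard weighted updates
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    pi = n / len(x)
print("means:", mu.round(2), "weights:", pi.round(2))
```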

19 pages, 5851 KiB  
Review
Machine Learning in Cereal Crops Disease Detection: A Review
by Fraol Gelana Waldamichael, Taye Girma Debelee, Friedhelm Schwenker, Yehualashet Megersa Ayano and Samuel Rahimeto Kebede
Algorithms 2022, 15(3), 75; https://doi.org/10.3390/a15030075 - 24 Feb 2022
Cited by 16 | Viewed by 6507
Abstract
Cereals are a major source of the human diet. They constitute more than two-thirds of the world’s food supply and cover more than 56% of the world’s cultivable land. These important sources of food are affected by a variety of damaging diseases, causing significant losses in annual production. In this regard, detection of diseases at an early stage and quantification of their severity have attracted the urgent attention of researchers worldwide. One emerging and popular approach to this task is the utilization of machine learning techniques. In this work, we identified the most common and damaging diseases affecting cereal crop production, and we reviewed 45 works from the past five years on the detection and classification of diseases occurring in six cereal crops. In addition, we identified and summarised the publicly available datasets for each cereal crop, the scarcity of which we identified as the main challenge in researching the application of machine learning to cereal crop disease detection. In this survey, we identified deep convolutional neural networks trained on hyperspectral data as the most effective approach for early detection of diseases, and transfer learning as the most commonly used training method and the one yielding the best results. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications III)

14 pages, 1123 KiB  
Article
Approximation of the Riesz–Caputo Derivative by Cubic Splines
by Francesca Pitolli, Chiara Sorgentone and Enza Pellegrino
Algorithms 2022, 15(2), 69; https://doi.org/10.3390/a15020069 - 21 Feb 2022
Cited by 15 | Viewed by 4010
Abstract
Differential problems with the Riesz derivative in space are widely used to model anomalous diffusion. Although the Riesz–Caputo derivative is more suitable for modeling real phenomena, there are few examples in the literature where numerical methods are used to solve such differential problems. In this paper, we propose to approximate the Riesz–Caputo derivative of a given function with a cubic spline. As far as we are aware, this is the first time that cubic splines have been used in the context of the Riesz–Caputo derivative. To show the effectiveness of the proposed numerical method, we present numerical tests in which we compare the analytical solutions of several boundary value problems involving the Riesz–Caputo derivative in space with the numerical solutions we obtain by a spline collocation method. The numerical results show that the proposed method is efficient and accurate. Full article
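As background, one common definition of the Riesz–Caputo derivative (assumed here for illustration) averages the left and right Caputo derivatives over the interval [a, b]: for n − 1 < α < n,

```latex
% Riesz-Caputo derivative of order alpha (one common definition;
% notation assumed): the average of the left and right Caputo
% derivatives on [a, b].
{}^{RC}_{\;\,a}D^{\alpha}_{b} f(x)
  \;=\; \frac{1}{2}\Big( {}^{C}_{\;\,a}D^{\alpha}_{x} f(x)
        \;+\; (-1)^{n}\, {}^{C}_{\;\,x}D^{\alpha}_{b} f(x) \Big).
```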

20 pages, 2764 KiB  
Article
A Real-Time Network Traffic Classifier for Online Applications Using Machine Learning
by Ahmed Abdelmoamen Ahmed and Gbenga Agunsoye
Algorithms 2021, 14(8), 250; https://doi.org/10.3390/a14080250 - 21 Aug 2021
Cited by 16 | Viewed by 5443
Abstract
The increasing ubiquity of network traffic and the deployment of new online applications have increased the complexity of traffic analysis. Traditionally, network administrators rely on recognizing well-known static ports to classify the traffic flowing through their networks. However, modern network traffic uses dynamic ports and is transported over secure application-layer protocols (e.g., HTTPS, SSL, and SSH). This makes it a challenging task for network administrators to identify online applications using traditional port-based approaches. One way of classifying modern network traffic is to use machine learning (ML) to distinguish between traffic attributes such as packet count and size, packet inter-arrival time, packet send–receive ratio, etc. This paper presents the design and implementation of NetScrapper, a flow-based network traffic classifier for online applications. NetScrapper uses three ML models, namely K-Nearest Neighbors (KNN), Random Forest (RF), and Artificial Neural Network (ANN), for classifying the 53 most popular online applications, including Amazon, Youtube, Google, Twitter, and many others. We collected a network traffic dataset containing 3,577,296 packet flows with 87 different features for training, validating, and testing the ML models. A user-friendly web-based interface was developed to enable users to either upload a snapshot of their network traffic to NetScrapper or sniff the network traffic directly from the network interface card in real time. Additionally, we created a middleware pipeline for interfacing the three models with the Flask GUI. Finally, we evaluated NetScrapper using various performance metrics such as classification accuracy and prediction time. Most notably, we found that our ANN model achieves an overall classification accuracy of 99.86% in recognizing the online applications in our dataset. Full article

22 pages, 2628 KiB  
Article
COVID-19 Prediction Applying Supervised Machine Learning Algorithms with Comparative Analysis Using WEKA
by Charlyn Nayve Villavicencio, Julio Jerison Escudero Macrohon, Xavier Alphonse Inbaraj, Jyh-Horng Jeng and Jer-Guang Hsieh
Algorithms 2021, 14(7), 201; https://doi.org/10.3390/a14070201 - 30 Jun 2021
Cited by 36 | Viewed by 7327
Abstract
Early diagnosis is crucial to prevent the development of a disease that may cause danger to human lives. COVID-19, which is a contagious disease that has mutated into several variants, has become a global pandemic that demands to be diagnosed as soon as possible. With the use of technology, available information concerning COVID-19 increases each day, and extracting useful information from massive data can be done through data mining. In this study, the authors utilized several supervised machine learning algorithms to build a model to analyze and predict the presence of COVID-19 using the COVID-19 Symptoms and Presence dataset from Kaggle. J48 Decision Tree, Random Forest, Support Vector Machine, K-Nearest Neighbors and Naïve Bayes algorithms were applied through the WEKA machine learning software. Each model’s performance was evaluated using 10-fold cross validation and compared according to major accuracy measures, correctly or incorrectly classified instances, kappa, mean absolute error, and time taken to build the model. The results show that the Support Vector Machine using the Pearson VII universal kernel outperforms the other algorithms, attaining 98.81% accuracy and a mean absolute error of 0.012. Full article

12 pages, 994 KiB  
Article
Digital Twins in Solar Farms: An Approach through Time Series and Deep Learning
by Kamel Arafet and Rafael Berlanga
Algorithms 2021, 14(5), 156; https://doi.org/10.3390/a14050156 - 18 May 2021
Cited by 14 | Viewed by 4153
Abstract
The generation of electricity through renewable energy sources increases every day, with solar energy being one of the fastest-growing. The emergence of information technologies such as Digital Twins (DT) in the fields of the Internet of Things and Industry 4.0 allows substantial development in automatic diagnostic systems. The objective of this work is to obtain the DT of a Photovoltaic Solar Farm (PVSF) with a deep-learning (DL) approach. To build such a DT, sensor-based time series are properly analyzed and processed. The resulting data are used to train a DL model (e.g., autoencoders) in order to detect anomalies of the physical system in its DT. Results show a reconstruction error of around 0.1, a recall score of 0.92 and an Area Under Curve (AUC) of 0.97. Therefore, this paper demonstrates that the DT can reproduce the behavior of the physical system as well as efficiently detect its anomalies. Full article
(This article belongs to the Special Issue Algorithms and Applications of Time Series Analysis)

21 pages, 1182 KiB  
Article
Machine Learning Predicts Outcomes of Phase III Clinical Trials for Prostate Cancer
by Felix D. Beacher, Lilianne R. Mujica-Parodi, Shreyash Gupta and Leonardo A. Ancora
Algorithms 2021, 14(5), 147; https://doi.org/10.3390/a14050147 - 05 May 2021
Cited by 10 | Viewed by 6122
Abstract
The ability to predict the individual outcomes of clinical trials could support the development of tools for precision medicine and improve the efficiency of clinical-stage drug development. However, there are no published attempts to predict individual outcomes of clinical trials for cancer. We used machine learning (ML) to predict individual responses to a two-year course of bicalutamide, a standard treatment for prostate cancer, based on data from three Phase III clinical trials (n = 3653). We developed models that used a merged dataset from all three studies. The best performing models using merged data from all three studies had an accuracy of 76%. The performance of these models was confirmed by further modeling using a merged dataset from two of the three studies, and a separate study for testing. Together, our results indicate the feasibility of ML-based tools for predicting cancer treatment outcomes, with implications for precision oncology and improving the efficiency of clinical-stage drug development. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application)

17 pages, 503 KiB  
Article
Multiple Criteria Decision Making and Prospective Scenarios Model for Selection of Companies to Be Incubated
by Altina S. Oliveira, Carlos F. S. Gomes, Camilla T. Clarkson, Adriana M. Sanseverino, Mara R. S. Barcelos, Igor P. A. Costa and Marcos Santos
Algorithms 2021, 14(4), 111; https://doi.org/10.3390/a14040111 - 30 Mar 2021
Cited by 46 | Viewed by 3260
Abstract
This paper proposes a model to evaluate business projects applying to enter an incubator, allowing them to be ranked in order of selection priority. The model combines the Momentum method to build prospective scenarios and the AHP-TOPSIS-2N Multiple Criteria Decision Making (MCDM) method to rank the alternatives. Six business projects were evaluated for incubation. The Momentum method made it possible to create an initial core of criteria for the evaluation of incubation projects. The AHP-TOPSIS-2N method supported the decision to choose the company to be incubated by ranking the alternatives in order of relevance. Our evaluation model improves on the existing models used by incubators. This model can be used and/or adapted by any incubator to evaluate the business projects to be incubated. The set of criteria for the evaluation of incubation projects is original, and the use of prospective scenarios with an MCDM method to evaluate companies for incubation has no precedent in the literature. Full article
(This article belongs to the Special Issue Algorithms and Models for Dynamic Multiple Criteria Decision Making)
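The TOPSIS ranking stage inside AHP-TOPSIS-2N is compact enough to sketch: normalise the decision matrix, weight it, and rank alternatives by relative closeness to the ideal solution. The decision matrix and weights below are illustrative, not the six projects evaluated in the paper:

```python
# Sketch of TOPSIS ranking; matrix/weights are illustrative assumptions.
import numpy as np

scores = np.array([[7, 9, 9],     # rows: candidate projects
                   [8, 7, 8],     # cols: benefit criteria
                   [9, 6, 8],
                   [6, 7, 8]], dtype=float)
weights = np.array([0.5, 0.3, 0.2])          # e.g. from AHP pairwise comparisons

norm = scores / np.linalg.norm(scores, axis=0)          # vector normalisation
v = norm * weights
ideal, anti = v.max(axis=0), v.min(axis=0)              # all criteria are benefits
d_pos = np.linalg.norm(v - ideal, axis=1)
d_neg = np.linalg.norm(v - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print("ranking (best first):", np.argsort(-closeness))
```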

14 pages, 611 KiB  
Article
An Integrated Neural Network and SEIR Model to Predict COVID-19
by Sharif Noor Zisad, Mohammad Shahadat Hossain, Mohammed Sazzad Hossain and Karl Andersson
Algorithms 2021, 14(3), 94; https://doi.org/10.3390/a14030094 - 19 Mar 2021
Cited by 24 | Viewed by 4863
Abstract
A novel coronavirus (COVID-19), which has become a great concern for the world, was first identified in the city of Wuhan in China. Its rapid spread throughout the world was accompanied by an alarming number of infected patients and a gradually increasing number of deaths. If the number of infected cases can be predicted in advance, it would contribute greatly to controlling the pandemic in any area. Therefore, this study introduces an integrated model for predicting the number of confirmed cases from the perspective of Bangladesh. Moreover, the number of quarantined patients and the change in the basic reproduction rate (the R0-value) can also be evaluated using this model. The integrated model combines the SEIR (Susceptible, Exposed, Infected, Removed) epidemiological model and neural networks, and was trained using available data from 250 days. The accuracy of the prediction of confirmed cases is approximately between 90% and 99%. The performance of the integrated model was evaluated by comparing its accuracy with that of the general SEIR model. The results show that the integrated model is more accurate than the general SEIR model when predicting the number of confirmed cases in Bangladesh. Full article
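The SEIR component of the integrated model can be sketched by integrating the standard compartment equations; in the paper, a neural network complements this mechanistic core. The population size and rates below are illustrative assumptions, not fitted values for Bangladesh:

```python
# Sketch of the SEIR compartment model; all parameters are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

N = 1_000_000
beta, sigma, gamma = 0.4, 1 / 5.2, 1 / 10     # transmission, incubation, recovery

def seir(t, y):
    S, E, I, R = y
    return [-beta * S * I / N,
            beta * S * I / N - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

sol = solve_ivp(seir, (0, 250), [N - 10, 10, 0, 0], t_eval=np.arange(250))
print("peak infections ~", int(sol.y[2].max()))
print("basic reproduction number R0 =", round(beta / gamma, 2))
```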

12 pages, 393 KiB  
Article
UAV Formation Shape Control via Decentralized Markov Decision Processes
by Md Ali Azam, Hans D. Mittelmann and Shankarachary Ragi
Algorithms 2021, 14(3), 91; https://doi.org/10.3390/a14030091 - 17 Mar 2021
Cited by 13 | Viewed by 3415
Abstract
In this paper, we present a decentralized unmanned aerial vehicle (UAV) swarm formation control approach based on decision theory. Specifically, we pose the UAV swarm motion control problem as a decentralized Markov decision process (Dec-MDP). Here, the goal is to drive the UAV swarm from an initial geographical region to another geographical region where the swarm must form a three-dimensional shape (e.g., the surface of a sphere). As most decision-theoretic formulations suffer from the curse of dimensionality, we adapt an existing fast approximate dynamic programming method called nominal belief-state optimization (NBO) to approximately solve the formation control problem. We perform numerical studies in MATLAB to validate the performance of the above control algorithms. Full article
(This article belongs to the Special Issue Algorithms in Stochastic Models)

15 pages, 367 KiB  
Article
An Improved Greedy Heuristic for the Minimum Positive Influence Dominating Set Problem in Social Networks
by Salim Bouamama and Christian Blum
Algorithms 2021, 14(3), 79; https://doi.org/10.3390/a14030079 - 28 Feb 2021
Cited by 15 | Viewed by 4711
Abstract
This paper presents a performance comparison of greedy heuristics for a recent variant of the dominating set problem known as the minimum positive influence dominating set (MPIDS) problem. This APX-hard combinatorial optimization problem has applications in social networks. Its aim is to identify a small subset of key influential individuals in order to facilitate the spread of positive influence in the whole network. In this paper, we focus on the development of a fast and effective greedy heuristic for the MPIDS problem, because greedy heuristics are an essential component of more sophisticated metaheuristics; the development of effective greedy heuristics thus supports the development of efficient metaheuristics. Extensive experiments conducted on a wide range of social and complex networks confirm the overall superiority of our greedy algorithm over its competitors, especially when the problem size becomes large. Moreover, we compare our algorithm with the integer linear programming solver CPLEX. While the performance of CPLEX is very strong for small and medium-sized networks, it reaches its limits when applied to the largest networks. However, even in the context of small and medium-sized networks, our greedy algorithm is only 2.53% worse than CPLEX. Full article
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
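A basic greedy heuristic for the MPIDS constraint, under which every vertex v must have at least ⌈deg(v)/2⌉ neighbours in the chosen set, can be sketched as below; the paper's improved greedy refines the selection rule, so this is only a baseline illustration:

```python
# Baseline greedy for MPIDS: repeatedly add the vertex covering the most
# vertices whose demand (ceil(deg/2) dominators) is still unmet.
import math
import networkx as nx

def greedy_mpids(G):
    need = {v: math.ceil(G.degree(v) / 2) for v in G}    # remaining demand
    S = set()
    while any(n > 0 for n in need.values()):
        u = max((v for v in G if v not in S),
                key=lambda v: sum(need[w] > 0 for w in G[v]))
        S.add(u)
        for w in G[u]:
            need[w] -= 1
    return S

G = nx.karate_club_graph()
S = greedy_mpids(G)
print(len(S), "of", G.number_of_nodes(), "nodes selected")
assert all(sum(w in S for w in G[v]) >= math.ceil(G.degree(v) / 2) for v in G)
```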

31 pages, 2431 KiB  
Article
Solution Merging in Matheuristics for Resource Constrained Job Scheduling
by Dhananjay Thiruvady, Christian Blum and Andreas T. Ernst
Algorithms 2020, 13(10), 256; https://doi.org/10.3390/a13100256 - 09 Oct 2020
Cited by 12 | Viewed by 3176
Abstract
Matheuristics have been gaining in popularity for solving combinatorial optimisation problems in recent years. This new class of hybrid method combines elements of both mathematical programming for intensification and metaheuristic searches for diversification. A recent approach in this direction has been to build a neighbourhood for integer programs by merging information from several heuristic solutions, namely construct, solve, merge and adapt (CMSA). In this study, we investigate this method alongside a closely related novel approach—merge search (MS). Both methods rely on a population of solutions, and for the purposes of this study, we examine two options: (a) a constructive heuristic and (b) ant colony optimisation (ACO); that is, a method based on learning. These methods are also implemented in a parallel framework using multi-core shared memory, which leads to improving the overall efficiency. Using a resource constrained job scheduling problem as a test case, different aspects of the algorithms are investigated. We find that both methods, using ACO, are competitive with current state-of-the-art methods, outperforming them for a range of problems. Regarding MS and CMSA, the former seems more effective on medium-sized problems, whereas the latter performs better on large problems. Full article
(This article belongs to the Special Issue Algorithms for Graphs and Networks)

12 pages, 247 KiB  
Article
The Use of an Exact Algorithm within a Tabu Search Maximum Clique Algorithm
by Derek H. Smith, Roberto Montemanni and Stephanie Perkins
Algorithms 2020, 13(10), 253; https://doi.org/10.3390/a13100253 - 04 Oct 2020
Cited by 7 | Viewed by 2529
Abstract
Let G=(V,E) be an undirected graph with vertex set V and edge set E. A clique C of G is a subset of the vertices of V with every pair of vertices of C adjacent. A maximum clique is a clique with the maximum number of vertices. A tabu search algorithm for the maximum clique problem that uses an exact algorithm on subproblems is presented. The exact algorithm uses a graph coloring upper bound for pruning, and the best such algorithm to use in this context is considered. The final tabu search algorithm successfully finds the optimal or best known solution for all standard benchmarks considered. It is compared with a state-of-the-art algorithm that does not use exact search. It is slower to find the known optimal solution for most instances but is faster for five instances and finds a larger clique for two instances. Full article
(This article belongs to the Special Issue Algorithms for Graphs and Networks)
18 pages, 404 KiB  
Article
A Survey on Shortest Unique Substring Queries
by Paniz Abedin, M. Oğuzhan Külekci and Shama V. Thankachan
Algorithms 2020, 13(9), 224; https://doi.org/10.3390/a13090224 - 06 Sep 2020
Cited by 4 | Viewed by 3073
Abstract
The shortest unique substring (SUS) problem is an active line of research in the field of string algorithms and has several applications in bioinformatics and information retrieval. The initial version of the problem was proposed by Pei et al. [ICDE’13]. Over the years, many variants and extensions have been pursued, which include positional-SUS, interval-SUS, approximate-SUS, palindromic-SUS, range-SUS, etc. In this article, we highlight some of the key results and summarize the recent developments in this area. Full article
(This article belongs to the Special Issue Algorithms in Bioinformatics)
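For readers meeting the problem for the first time, the positional SUS can be stated operationally: for a position p, find the shortest substring occurring exactly once in the text that covers p. A brute-force sketch (far from the optimized algorithms the survey covers) makes the definition concrete:

```python
# Brute-force positional SUS, for illustration only (roughly cubic time).
def occurrences(text, s):                     # overlapping occurrence count
    return sum(text.startswith(s, k) for k in range(len(text) - len(s) + 1))

def shortest_unique_substring(text, p):
    n, best = len(text), None
    for i in range(p + 1):                    # substrings text[i:j] covering p
        for j in range(p + 1, n + 1):
            s = text[i:j]
            if occurrences(text, s) == 1 and (best is None or len(s) < len(best)):
                best = s
    return best

text = "abcbabc"
for p in range(len(text)):
    print(p, shortest_unique_substring(text, p))
```
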
17 pages, 471 KiB  
Article
Exact Method for Generating Strategy-Solvable Sudoku Clues
by Kohei Nishikawa and Takahisa Toda
Algorithms 2020, 13(7), 171; https://doi.org/10.3390/a13070171 - 16 Jul 2020
Viewed by 9596
Abstract
A Sudoku puzzle often has a regular pattern in the arrangement of its initial digits, and it is typically made solvable with known solving techniques called strategies. In this paper, we consider the problem of generating such Sudoku instances. We introduce a rigorous framework for discussing the solvability of Sudoku instances with respect to strategies. This allows us to handle not only known strategies but also general strategies under a few reasonable assumptions. We propose an exact method for determining Sudoku clues for a given set of clue positions such that the resulting puzzle is solvable with a given set of strategies. This is the first exact method apart from a trivial brute-force search. Beyond clue generation, we apply our method to the problem of determining the minimum number of strategy-solvable Sudoku clues. We conduct experiments to evaluate our method, varying the position and the number of clues at random. Our method terminates within 1 min for many grids. However, as the number of clues approaches 20, the running time increases rapidly and exceeds the time limit of 600 s. We also evaluate our method on several instances with 17 clue positions, taken from known minimum Sudokus, to assess its efficiency in deciding unsolvability. Full article
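The exact method itself is beyond a short sketch, but strategy-solvability can be illustrated in the forward direction. Assuming a fully specified clue assignment and a single elementary strategy, the naked single (a cell with exactly one remaining candidate), a hedged solvability check might look like this:

```python
def peers(r, c):
    """All cells sharing a row, column, or 3x3 box with (r, c)."""
    br, bc = 3 * (r // 3), 3 * (c // 3)
    ps = {(r, j) for j in range(9)} | {(i, c) for i in range(9)}
    ps |= {(br + i, bc + j) for i in range(3) for j in range(3)}
    ps.discard((r, c))
    return ps

def naked_single_solvable(grid):
    """grid: 9x9 list of lists, 0 = empty. True iff the puzzle is completed
    by exhaustively applying naked singles, with no search at all."""
    cand = {(r, c): set(range(1, 10)) for r in range(9) for c in range(9)}
    def assign(r, c, d):
        cand[(r, c)] = {d}
        for p in peers(r, c):
            cand[p].discard(d)          # eliminate d from the cell's peers
    for r in range(9):
        for c in range(9):
            if grid[r][c]:
                assign(r, c, grid[r][c])
    progress = True
    while progress:                      # propagate until a fixed point
        progress = False
        for (r, c), s in cand.items():
            if len(s) == 1 and grid[r][c] == 0:
                grid[r][c] = next(iter(s))
                assign(r, c, grid[r][c])
                progress = True
    return all(grid[r][c] for r in range(9) for c in range(9))
```

The paper's harder question runs the other way: whether clue values exist, at fixed positions, that make such a check succeed for a given strategy set.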
24 pages, 1066 KiB  
Article
Binary Time Series Classification with Bayesian Convolutional Neural Networks When Monitoring for Marine Gas Discharges
by Kristian Gundersen, Guttorm Alendal, Anna Oleynik and Nello Blaser
Algorithms 2020, 13(6), 145; https://doi.org/10.3390/a13060145 - 19 Jun 2020
Cited by 12 | Viewed by 4444
Abstract
The world’s oceans are under stress from climate change, acidification and other human activities, and the UN has declared 2021–2030 the decade for marine science. To monitor marine waters with the purpose of detecting discharges of tracers from unknown locations, large areas must be covered with limited resources. To increase the detectability of marine gas seepage, we propose a deep probabilistic learning algorithm, a Bayesian Convolutional Neural Network (BCNN), to classify time series of measurements. The BCNN classifies each time series as belonging to a leak or no-leak situation and quantifies the classification uncertainty. The latter is important for decision makers who must decide whether to initiate costly confirmation surveys and, hence, would like to avoid false positives. Results from a transport model are used to train the BCNN, and the task is to distinguish the signal of a leak hidden within the natural variability. We show that the BCNN classifies time series arising from leaks with high accuracy and estimates the associated uncertainty. We combine the output of the BCNN model, the posterior predictive distribution, with a Bayesian decision rule, showcasing how the framework can be used in practice to make optimal decisions based on a given cost function. Full article
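The authors' architecture is not reproduced here; as a hedged sketch, Monte Carlo dropout (one common approximation to a Bayesian CNN, not necessarily the paper's) already yields the posterior predictive mean and spread that such a decision rule consumes. All layer sizes below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MCDropoutCNN(nn.Module):
    def __init__(self, n_channels=1, p_drop=0.25):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Dropout(p_drop),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):                  # x: (batch, channels, time)
        return torch.sigmoid(self.net(x)).squeeze(-1)

@torch.no_grad()
def predictive_distribution(model, x, n_samples=100):
    """Keep dropout stochastic at test time and sample the predictive
    distribution; its spread is the classification uncertainty."""
    model.train()                          # leaves dropout active
    samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)

model = MCDropoutCNN()
x = torch.randn(8, 1, 256)                 # 8 untrained-demo series, 256 steps
p_leak, sigma = predictive_distribution(model, x)
```

A decision rule can then threshold not on p_leak alone but on expected cost, declining to trigger a survey when sigma is large.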
26 pages, 388 KiB  
Article
Late Acceptance Hill-Climbing Matheuristic for the General Lot Sizing and Scheduling Problem with Rich Constraints
by Andreas Goerler, Eduardo Lalla-Ruiz and Stefan Voß
Algorithms 2020, 13(6), 138; https://doi.org/10.3390/a13060138 - 09 Jun 2020
Cited by 12 | Viewed by 4008
Abstract
This paper considers the general lot sizing and scheduling problem with rich constraints, exemplified by rework and lifetime constraints for defective items (GLSP-RP), which finds numerous applications in industrial settings, for example, the food processing and pharmaceutical industries. To address this problem, we propose the Late Acceptance Hill-Climbing Matheuristic (LAHCM) as a novel solution framework that integrates the late acceptance hill-climbing algorithm with exact approaches to speed up the solution process relative to solving the problem with a general-purpose solver. The computational results show the benefits of incorporating exact approaches within the LAHCM template, leading to high-quality solutions within short computational times. Full article
(This article belongs to the Special Issue Optimization Algorithms for Allocation Problems)
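The late acceptance rule at the heart of LAHCM is compact enough to sketch; how and where the paper plugs in its exact approaches is not reproduced here. The core loop below follows the generic late acceptance hill-climbing scheme, with problem-specific `cost` and `neighbour` callables as assumptions:

```python
import random

def lahc(initial, cost, neighbour, history_len=50, iters=100_000, seed=0):
    rng = random.Random(seed)
    current, f_cur = initial, cost(initial)
    best, f_best = current, f_cur
    history = [f_cur] * history_len        # the last L accepted costs
    for it in range(iters):
        cand = neighbour(current, rng)
        f_cand = cost(cand)
        # late acceptance: also accept if no worse than the cost L steps ago
        if f_cand <= history[it % history_len] or f_cand <= f_cur:
            current, f_cur = cand, f_cand
        history[it % history_len] = f_cur
        if f_cur < f_best:
            best, f_best = current, f_cur
    return best, f_best

# toy usage: minimise a quadratic over the integers
best, f = lahc(50, cost=lambda x: (x - 7) ** 2,
               neighbour=lambda x, rng: x + rng.choice([-1, 1]))
```

The single parameter of interest is the history length L: L = 1 reduces to ordinary hill climbing, while larger L tolerates longer uphill excursions.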
30 pages, 871 KiB  
Article
Short-Term Wind Speed Forecasting Using Statistical and Machine Learning Methods
by Lucky O. Daniel, Caston Sigauke, Colin Chibaya and Rendani Mbuvha
Algorithms 2020, 13(6), 132; https://doi.org/10.3390/a13060132 - 26 May 2020
Cited by 13 | Viewed by 4545
Abstract
Wind offers an environmentally sustainable energy resource that has seen increasing global adoption in recent years. However, its intermittent, unstable and stochastic nature hampers its integration alongside other renewable energy sources. This work addresses the forecasting of wind speed, a primary input needed for wind energy generation, using data obtained from the South African Wind Atlas Project. Forecasting is carried out on a two-days-ahead time horizon. We investigate the predictive performance of artificial neural networks (ANN) trained with Bayesian regularisation, decision-tree-based stochastic gradient boosting (SGB) and generalised additive models (GAMs). The results of the comparative analysis suggest that ANN displays superior predictive performance based on root mean square error (RMSE), whereas SGB outperforms in terms of mean absolute error (MAE) and the related mean absolute percentage error (MAPE). A further comparison of two forecast combination methods, linear and additive quantile regression averaging, shows the latter yielding lower prediction accuracy. The additive quantile regression averaging based prediction intervals, however, perform best in terms of validity, reliability, quality and accuracy. Among interval combination methods, the median method outperforms its pure average counterpart. Point forecast combination and interval forecasting methods are found to improve forecast performance. Full article
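The interval combination step is simple enough to show directly. A hedged sketch with made-up 95% interval bounds for three hypothetical models (say ANN, SGB, GAM), one row per forecast step:

```python
import numpy as np

lower = np.array([[3.1, 2.8, 3.4],      # illustrative lower bounds, m/s
                  [4.0, 3.6, 4.3]])
upper = np.array([[6.2, 6.9, 6.0],      # illustrative upper bounds, m/s
                  [7.1, 7.8, 7.0]])

mean_lower, mean_upper = lower.mean(axis=1), upper.mean(axis=1)
median_lower, median_upper = np.median(lower, axis=1), np.median(upper, axis=1)
# the study reports the median combination as the better of the two rules
print(median_lower, median_upper)
```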
19 pages, 570 KiB  
Article
A Survey of Low-Rank Updates of Preconditioners for Sequences of Symmetric Linear Systems
by Luca Bergamaschi
Algorithms 2020, 13(4), 100; https://doi.org/10.3390/a13040100 - 21 Apr 2020
Cited by 13 | Viewed by 3558
Abstract
The aim of this survey is to review some recent developments in devising efficient preconditioners for sequences of symmetric positive definite (SPD) linear systems A_k x_k = b_k, k = 1, 2, …, arising in many scientific applications, such as the discretization of transient partial differential equations (PDEs), the solution of eigenvalue problems, (inexact) Newton methods applied to nonlinear systems, and rational Krylov methods for computing a function of a matrix. We analyze a number of techniques for updating a given initial preconditioner by a low-rank matrix with the aim of improving the clustering of eigenvalues around 1, in order to speed up the convergence of the Preconditioned Conjugate Gradient (PCG) method. We also review some techniques to efficiently approximate the linearly independent vectors which constitute the low-rank corrections and whose choice is crucial for the effectiveness of the approach. Numerical results on real-life applications show that the performance of a given iterative solver can be greatly enhanced by the use of low-rank updates. Full article
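The core idea can be demonstrated numerically. In this hedged toy (identity base preconditioner and exact eigenpairs, in place of the approximate Ritz-vector corrections the survey actually covers), a rank-k term maps each of the k smallest eigenvalues λ_i of the preconditioned matrix to λ_i + 1, clustering them near 1 and cutting PCG iterations:

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-8, maxit=1000):
    """Standard preconditioned conjugate gradients; returns (x, iterations)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    for k in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return x, k + 1
        z_new = apply_Minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxit

n = 200                                   # 1D Laplacian: SPD, ill-conditioned
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

lam, V = np.linalg.eigh(A)
k = 10
W, D = V[:, :k], lam[:k]                  # k smallest eigenpairs

_, it_plain = pcg(A, b, lambda r: r)
# rank-k update: M^{-1} = I + W diag(1/lambda) W^T shifts each small
# eigenvalue lambda_i of the preconditioned matrix to lambda_i + 1
_, it_lowrank = pcg(A, b, lambda r: r + W @ ((W.T @ r) / D))
print(it_plain, it_lowrank)               # the corrected run needs far fewer iterations
```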
19 pages, 11395 KiB  
Article
How to Identify Varying Lead–Lag Effects in Time Series Data: Implementation, Validation, and Application of the Generalized Causality Algorithm
by Johannes Stübinger and Katharina Adler
Algorithms 2020, 13(4), 95; https://doi.org/10.3390/a13040095 - 16 Apr 2020
Cited by 3 | Viewed by 5181
Abstract
This paper develops the generalized causality algorithm and applies it to a multitude of data from the fields of economics and finance. Specifically, our parameter-free algorithm efficiently determines the optimal non-linear mapping and identifies varying lead–lag effects between two given time series. This procedure allows an elastic adjustment of the time axis to find similar but phase-shifted sequences; structural breaks in their relationship are also captured. A large-scale simulation study validates the outperformance in the vast majority of parameter constellations in terms of efficiency, robustness, and feasibility. Finally, the presented methodology is applied to real data from the areas of macroeconomics, finance, and metals. The highest similarity is shown by the pairs gross domestic product and consumer price index (macroeconomics), S&P 500 index and Deutscher Aktienindex (finance), and gold and silver (metals). In addition, the algorithm makes full use of its flexibility and identifies both various structural breaks and regime patterns over time, which are (partly) well documented in the literature. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications)
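The algorithm itself is parameter-free and non-linear; as a far simpler hedged stand-in, a rolling cross-correlation scan already illustrates what "varying lead–lag effects" means, with a drift in the estimated lag flagging a regime change:

```python
import numpy as np

def rolling_lead_lag(x, y, window=100, max_lag=20):
    """Per window, the shift of y (in steps) maximising correlation with x;
    positive values mean y lags x."""
    lags = []
    for start in range(max_lag, len(x) - window - max_lag, window):
        xs = x[start:start + window]
        best = max(range(-max_lag, max_lag + 1),
                   key=lambda l: np.corrcoef(xs, y[start + l:start + l + window])[0, 1])
        lags.append(best)
    return lags

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=1000))                    # synthetic random walk
y = np.roll(x, 5) + rng.normal(scale=0.1, size=1000)    # y trails x by 5 steps
print(rolling_lead_lag(x, y))                           # mostly 5s
```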
9 pages, 2485 KiB  
Article
Beyond Newton: A New Root-Finding Fixed-Point Iteration for Nonlinear Equations
by Ankush Aggarwal and Sanjay Pant
Algorithms 2020, 13(4), 78; https://doi.org/10.3390/a13040078 - 29 Mar 2020
Cited by 3 | Viewed by 4822
Abstract
Finding roots of equations is at the heart of most computational science. A well-known and widely used iterative algorithm is Newton’s method. However, its convergence depends heavily on the initial guess, with poor choices often leading to slow convergence or even divergence. In this short note, we seek to enlarge the basin of attraction of the classical Newton’s method. The key idea is to develop a relatively simple multiplicative transform of the original equations, which leads to a reduction in nonlinearity, thereby alleviating the limitation of Newton’s method. Based on this idea, we derive a new class of iterative methods and rediscover Halley’s method as the limit case. We present the application of these methods to several mathematical functions (real, complex, and vector equations). Across all examples, our numerical experiments suggest that the new methods converge for a significantly wider range of initial guesses. For scalar equations, the increase in computational cost per iteration is minimal. For vector functions, more extensive analysis is needed to compare the increase in cost per iteration and the improvement in convergence of specific problems. Full article
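Since Halley's method is recovered as the limit case, a hedged scalar comparison of Newton and Halley (not the paper's full family of transformed iterations) is easy to sketch:

```python
import math

def newton(f, df, x, steps=50, tol=1e-12):
    for _ in range(steps):
        x -= f(x) / df(x)                 # classical Newton step
        if abs(f(x)) < tol:
            break
    return x

def halley(f, df, d2f, x, steps=50, tol=1e-12):
    for _ in range(steps):
        fx, dfx = f(x), df(x)
        # Halley: the limit case of the paper's multiplicative transforms
        x -= 2 * fx * dfx / (2 * dfx**2 - fx * d2f(x))
        if abs(f(x)) < tol:
            break
    return x

f   = lambda x: math.exp(x) - 2           # root at ln 2
df  = lambda x: math.exp(x)
d2f = lambda x: math.exp(x)
# from a distant start, Newton crawls in unit steps while Halley converges quickly
print(newton(f, df, 10.0), halley(f, df, d2f, 10.0))
```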
65 pages, 1890 KiB  
Review
Energy Efficient Routing in Wireless Sensor Networks: A Comprehensive Survey
by Christos Nakas, Dionisis Kandris and Georgios Visvardis
Algorithms 2020, 13(3), 72; https://doi.org/10.3390/a13030072 - 24 Mar 2020
Cited by 87 | Viewed by 12950
Abstract
Wireless Sensor Networks (WSNs) are among the most rapidly emerging technologies, thanks to their great capabilities and ever-growing range of applications. However, the lifetime of WSNs is severely restricted by the limited energy capacity of their sensor nodes, which is why energy conservation is considered the most important research concern for WSNs. Radio communication is the most energy-consuming function in a WSN, so energy-efficient routing is needed to save energy and thus prolong the lifetime of WSNs. For this reason, numerous protocols for energy-efficient routing in WSNs have been proposed. This article offers an analytical and up-to-date survey of protocols of this kind. The classic and modern protocols presented are categorized depending on (i) how the network is structured, (ii) how data are exchanged, (iii) whether location information is used, and (iv) whether Quality of Service (QoS) or multiple paths are supported. In each category, protocols are both described and compared in terms of specific performance metrics, while their advantages and disadvantages are discussed. Finally, the study findings are discussed, concluding remarks are drawn, and open research issues are indicated. Full article
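For a flavour of the classic hierarchical protocols such surveys categorise, here is a hedged sketch of LEACH-style randomised cluster-head rotation (node counts and the head fraction P are illustrative assumptions); the threshold rule guarantees every node serves as a head once per epoch, spreading the radio-relay energy cost:

```python
import random

def leach_threshold(P, round_no):
    """Election threshold T for nodes still eligible in the current epoch."""
    return P / (1 - P * (round_no % round(1 / P)))

P, n_nodes = 0.1, 100
epoch = round(1 / P)
eligible = set(range(n_nodes))
for rnd in range(3 * epoch):
    heads = {n for n in eligible if random.random() < leach_threshold(P, rnd)}
    eligible -= heads                     # heads sit out the rest of the epoch
    if rnd % epoch == epoch - 1:
        eligible = set(range(n_nodes))    # new epoch: everyone eligible again
```

In the final round of each epoch the threshold reaches 1, so any node that has not yet served is elected automatically.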
17 pages, 2260 KiB  
Article
Observability of Uncertain Nonlinear Systems Using Interval Analysis
by Thomas Paradowski, Sabine Lerch, Michelle Damaszek, Robert Dehnert and Bernd Tibken
Algorithms 2020, 13(3), 66; https://doi.org/10.3390/a13030066 - 16 Mar 2020
Cited by 9 | Viewed by 3922
Abstract
In the field of control engineering, the observability of uncertain nonlinear systems is often neglected and not examined, owing to the complex analytical calculations required for its verification. The aim of this work is therefore to provide an algorithm that numerically analyzes the observability of nonlinear systems described by finite-dimensional, continuous-time sets of ordinary differential equations. The algorithm is based on definitions of distinguishability and local observability using a rank check, from which conditions are deduced. The only requirements are the uncertain model equations of the system. The methodology verifies observability of nonlinear systems on a given state space; if the state space is not fully observable, the algorithm provides the observable set of states. In addition, the results obtained by the algorithm allow insight into why the remaining states cannot be distinguished. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control)
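The interval-analysis machinery is out of scope for a sketch, but the underlying pointwise rank check is short: stack the Jacobians of the output map and its successive Lie derivatives along the dynamics, then test the rank. A hedged example on a toy pendulum (not from the paper), using sympy:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
state = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])      # pendulum dynamics  x' = f(x)
h = sp.Matrix([x1])                   # measured output    y = h(x)

def observability_matrix(f, h, state, order=None):
    """Rows: Jacobians of h, L_f h, L_f^2 h, ... with respect to the state."""
    order = order or len(state)
    rows, L = [], h
    for _ in range(order):
        rows.append(L.jacobian(state))
        L = L.jacobian(state) * f     # next Lie derivative along f
    return sp.Matrix.vstack(*rows)

O = observability_matrix(f, h, state)
print(O.rank())                       # 2: full rank, so locally observable
```

The symbolic rank here is the generic one; the paper's contribution is precisely to certify such a condition over whole interval boxes of uncertain states and parameters rather than at a point.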
12 pages, 368 KiB  
Article
Optimization of Constrained Stochastic Linear-Quadratic Control on an Infinite Horizon: A Direct-Comparison Based Approach
by Ruobing Xue, Xiangshen Ye and Weiping Wu
Algorithms 2020, 13(2), 49; https://doi.org/10.3390/a13020049 - 24 Feb 2020
Viewed by 3432
Abstract
In this paper, we study the optimization of the discrete-time stochastic linear-quadratic (LQ) control problem with conic control constraints on an infinite horizon, considering multiplicative noises. Stochastic control systems can be formulated as Markov Decision Problems (MDPs) with continuous state spaces, and therefore we can apply the direct-comparison based optimization approach to solve the problem. We first derive the performance difference formula for the LQ problem by utilizing the state separation property of the system structure. Based on this, we derive the optimality conditions and the stationary optimal feedback control. Building on this optimization approach, we establish a general framework for infinite-horizon stochastic control problems. The direct-comparison based approach is applicable to both linear and nonlinear systems. Our work provides a new perspective on LQ control problems: based on this approach, learning-based algorithms can be developed without identifying all of the system parameters. Full article
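The constrained, multiplicative-noise setting is beyond a few lines; as a hedged sketch of the unconstrained, noise-free special case only, value iteration on the discrete-time Riccati map yields the stationary optimal feedback u = -Kx that the paper generalises:

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Iterate P <- Q + A'P(A - BK), K = (R + B'PB)^{-1} B'PA to a fixed point."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

A = np.array([[1.0, 1.0], [0.0, 1.0]])     # double integrator
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K = lqr_gain(A, B, Q, R)
print(np.abs(np.linalg.eigvals(A - B @ K)))  # closed-loop magnitudes < 1: stable
```

Conic constraints on u break this closed form, which is what motivates the performance-difference route taken in the paper.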
13 pages, 6398 KiB  
Article
Optimal Learning and Self-Awareness Versus PDI
by Brendon Smeresky, Alex Rizzo and Timothy Sands
Algorithms 2020, 13(1), 23; https://doi.org/10.3390/a13010023 - 11 Jan 2020
Cited by 30 | Viewed by 6283
Abstract
This manuscript explores and analyzes the effects of different paradigms for the control of rigid body motion mechanics. The experimental setup includes deterministic artificial intelligence composed of optimal self-awareness statements together with a novel, optimal learning algorithm, re-parameterized as ideal nonlinear feedforward and feedback and evaluated within a Simulink simulation. Comparison is made to a custom proportional, derivative, integral (PDI) controller (a modified version of classical proportional-integral-derivative control) implemented as feedback control with a specific term to account for the nonlinear coupled motion. Consistent proportional, derivative, and integral gains were used throughout the experiments. The simulation results show that, akin to feedforward control, deterministic self-awareness statements lack an error-correction mechanism and instead rely on learning (which stands in place of feedback control); the proposed combination of optimal self-awareness statements and the newly demonstrated, analytically optimal learning yielded the highest accuracy with the lowest execution time. This highlights the potential effectiveness of a learning control system. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2019)
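The custom PDI controller's coupling term for the nonlinear motion is not specified here; as a hedged sketch, the generic discrete PID core such a baseline builds on looks as follows (all gains and the toy plant are illustrative assumptions):

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt               # accumulate I term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# usage: regulate a toy first-order plant towards 1.0
pid, state = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01), 0.0
for _ in range(1000):
    u = pid.update(1.0, state)
    state += (u - state) * 0.01                        # plant: x' = u - x
print(round(state, 3))                                 # approaches 1.0
```

The contrast drawn in the paper is that this feedback law corrects errors after the fact, whereas the self-awareness statements act feedforward and delegate correction to learning.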