Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

22 pages, 878 KiB  
Article
Efficient Machine Learning Models for Early Stage Detection of Autism Spectrum Disorder
by Mousumi Bala, Mohammad Hanif Ali, Md. Shahriare Satu, Khondokar Fida Hasan and Mohammad Ali Moni
Algorithms 2022, 15(5), 166; https://doi.org/10.3390/a15050166 - 16 May 2022
Cited by 74 | Viewed by 8909
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder that severely impairs an individual’s cognitive, linguistic, object recognition, communication, and social abilities. The condition is not treatable, although early detection of ASD can assist in diagnosis and in taking proper steps to mitigate its effects. Using various artificial intelligence (AI) techniques, ASD can be detected at an earlier stage than with traditional methods. The aim of this study was to propose a machine learning model that investigates ASD data across different age groups and identifies ASD more accurately. In this work, we gathered ASD datasets of toddlers, children, adolescents, and adults and used several feature selection techniques. Then, different classifiers were applied to these datasets, and we assessed their performance with evaluation metrics including predictive accuracy, kappa statistics, the F1-measure, and AUROC. In addition, we analyzed the performance of individual classifiers using a non-parametric statistical significance test. For the toddler, child, adolescent, and adult datasets, we found that the Support Vector Machine (SVM) performed better than the other classifiers, achieving 97.82% accuracy for the RIPPER-based toddler subset; 99.61% accuracy for the Correlation-based feature selection (CFS) and Boruta CFS intersect (BIC) method-based child subset; 95.87% accuracy for the Boruta-based adolescent subset; and 96.82% accuracy for the CFS-based adult subset. Finally, we applied the Shapley Additive Explanations (SHAP) method to the feature subsets that achieved the highest accuracy and ranked their features based on this analysis. Full article
(This article belongs to the Special Issue Interpretability, Accountability and Robustness in Machine Learning)
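A minimal sketch of the general pipeline described in the abstract, assuming scikit-learn, a synthetic dataset in place of the ASD screening questionnaires, and SelectKBest in place of the CFS/Boruta/RIPPER-based selectors used in the article:

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score, roc_auc_score

# synthetic stand-in for an ASD screening dataset (binary target)
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)        # feature selection step
clf = SVC(kernel="rbf", probability=True).fit(selector.transform(X_tr), y_tr)

pred = clf.predict(selector.transform(X_te))
proba = clf.predict_proba(selector.transform(X_te))[:, 1]
print("accuracy", accuracy_score(y_te, pred),
      "kappa", cohen_kappa_score(y_te, pred),
      "f1", f1_score(y_te, pred),
      "auroc", roc_auc_score(y_te, proba))

The SHAP-based feature ranking reported in the article would then be applied on top of whichever feature subset scores best.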

18 pages, 523 KiB  
Article
Closed-Form Solution of the Bending Two-Phase Integral Model of Euler-Bernoulli Nanobeams
by Efthimios Providas
Algorithms 2022, 15(5), 151; https://doi.org/10.3390/a15050151 - 28 Apr 2022
Cited by 9 | Viewed by 4205
Abstract
Recent developments have shown that the widely used simplified differential model of Eringen’s nonlocal elasticity in nanobeam analysis is not equivalent to the corresponding and initially proposed integral models, the pure integral model and the two-phase integral model, in all cases of loading and boundary conditions. This has resolved a paradox with solutions that are not in line with the expected softening effect of the nonlocal theory that appears in all other cases. In addition, it revived interest in the integral model and the two-phase integral model, which were not used due to their complexity in solving the relevant integral and integro-differential equations, respectively. In this article, we use a direct operator method for solving boundary value problems for nth order linear Volterra–Fredholm integro-differential equations of convolution type to construct closed-form solutions to the two-phase integral model of Euler–Bernoulli nanobeams in bending under transverse distributed load and various types of boundary conditions. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
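For orientation only, the class of equations solved in the article has the generic shape of an nth-order linear Volterra–Fredholm integro-differential equation of convolution type on an interval [0, L]; the coefficients, kernels, and right-hand side below are placeholders, not the specific bending equations of the two-phase nanobeam model:

\[
\sum_{i=0}^{n} a_i\, u^{(i)}(x) \;+\; \lambda_1 \int_{0}^{x} k_V(x-t)\, u(t)\, dt \;+\; \lambda_2 \int_{0}^{L} k_F(x-t)\, u(t)\, dt \;=\; f(x), \qquad 0 \le x \le L,
\]

subject to n boundary conditions; the Volterra term integrates up to x, while the Fredholm term integrates over the whole interval.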

30 pages, 937 KiB  
Review
A Review on the Performance of Linear and Mixed Integer Two-Stage Stochastic Programming Software
by Juan J. Torres, Can Li, Robert M. Apap and Ignacio E. Grossmann
Algorithms 2022, 15(4), 103; https://doi.org/10.3390/a15040103 - 22 Mar 2022
Cited by 22 | Viewed by 6712
Abstract
This paper presents a tutorial on the state-of-the-art software for the solution of two-stage (mixed-integer) linear stochastic programs and provides a list of software designed for this purpose. The methodologies are classified according to the decomposition alternatives and the types of the variables in the problem. We review the fundamentals of Benders decomposition, dual decomposition and progressive hedging, as well as possible improvements and variants. We also present extensive numerical results to underline the properties and performance of each algorithm using software implementations, including DECIS, FORTSP, PySP, and DSP. Finally, we discuss the strengths and weaknesses of each methodology and propose future research directions. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
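As a hedged reminder of the problem class behind all of the reviewed software, a two-stage stochastic linear program over finitely many scenarios s = 1, …, S with probabilities p_s can be written in the standard form

\[
\min_{x \in X} \; c^{\top}x + \sum_{s=1}^{S} p_s\, Q(x,\xi_s),
\qquad
Q(x,\xi_s) = \min_{y \ge 0} \left\{\, q_s^{\top} y \;:\; W_s y = h_s - T_s x \,\right\},
\]

and Benders (L-shaped) decomposition approximates the expected recourse term with optimality cuts of the form \(\theta \ge \sum_s p_s\, \pi_s^{\top}(h_s - T_s x)\), where \(\pi_s\) are dual multipliers of the scenario subproblems.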

18 pages, 576 KiB  
Article
Evolutionary Optimization of Spiking Neural P Systems for Remaining Useful Life Prediction
by Leonardo Lucio Custode, Hyunho Mo, Andrea Ferigo and Giovanni Iacca
Algorithms 2022, 15(3), 98; https://doi.org/10.3390/a15030098 - 19 Mar 2022
Cited by 9 | Viewed by 5197
Abstract
Remaining useful life (RUL) prediction is a key enabler for predictive maintenance. In fact, the possibility of accurately and reliably predicting the RUL of a system, based on a record of its monitoring data, can allow users to schedule maintenance interventions before faults occur. In the recent literature, several data-driven methods for RUL prediction have been proposed. However, most of them are based on traditional (connectionist) neural networks, such as convolutional neural networks, and alternative mechanisms have barely been explored. Here, we tackle the RUL prediction problem for the first time by using a membrane computing paradigm, namely that of Spiking Neural P (in short, SN P) systems. First, we show how SN P systems can be adapted to handle the RUL prediction problem. Then, we propose the use of a neuro-evolutionary algorithm to optimize the structure and parameters of the SN P systems. Our results on two datasets, namely the CMAPSS and new CMAPSS benchmarks from NASA, are fairly comparable with those obtained by much more complex deep networks, showing a reasonable compromise between performance and the number of trainable parameters, which in turn correlates with memory consumption and computing time. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)

37 pages, 1000 KiB  
Article
Deterministic Approximate EM Algorithm; Application to the Riemann Approximation EM and the Tempered EM
by Thomas Lartigue, Stanley Durrleman and Stéphanie Allassonnière
Algorithms 2022, 15(3), 78; https://doi.org/10.3390/a15030078 - 25 Feb 2022
Cited by 6 | Viewed by 3845
Abstract
The Expectation Maximisation (EM) algorithm is widely used to optimise non-convex likelihood functions with latent variables. Many authors have modified its simple design to fit more specific situations. For instance, the Expectation (E) step has been replaced by Monte Carlo (MC), Markov Chain Monte Carlo or tempered approximations, among others. Most of the well-studied approximations belong to the stochastic class. By comparison, the literature is lacking when it comes to deterministic approximations. In this paper, we introduce a theoretical framework, with state-of-the-art convergence guarantees, for any deterministic approximation of the E step. We analyse theoretically and empirically several approximations that fit into this framework. First, for intractable E-steps, we introduce a deterministic version of MC-EM using Riemann sums. This straightforward method requires no hyper-parameter fine-tuning and is useful when the dimensionality is low enough that MC-EM is not warranted. Then, we consider the tempered approximation, borrowed from the Simulated Annealing literature and used to escape local extrema. We prove that the tempered EM verifies the convergence guarantees for a wider range of temperature profiles than previously considered. We showcase empirically how new non-trivial profiles can more successfully escape adversarial initialisations. Finally, we combine the Riemann and tempered approximations into a method that accomplishes both their purposes. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
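As a brief, generic sketch of the objects involved (not the paper’s own notation), the E step computes the expected complete-data log-likelihood, and the tempered variant flattens the latent posterior with a temperature profile T_t:

\[
Q(\theta \mid \theta_t) = \mathbb{E}_{z \sim p(z \mid x, \theta_t)}\big[\log p(x, z; \theta)\big],
\qquad
p_{T_t}(z \mid x, \theta_t) \propto p(z \mid x, \theta_t)^{1/T_t}.
\]

A Riemann-sum E step replaces the integral defining this expectation with a finite sum over a grid of latent values, which is the deterministic counterpart of MC-EM discussed above.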

19 pages, 5851 KiB  
Review
Machine Learning in Cereal Crops Disease Detection: A Review
by Fraol Gelana Waldamichael, Taye Girma Debelee, Friedhelm Schwenker, Yehualashet Megersa Ayano and Samuel Rahimeto Kebede
Algorithms 2022, 15(3), 75; https://doi.org/10.3390/a15030075 - 24 Feb 2022
Cited by 36 | Viewed by 9968
Abstract
Cereals are an important and major source of the human diet. They constitute more than two-thirds of the world’s food source and cover more than 56% of the world’s cultivatable land. These important sources of food are affected by a variety of damaging diseases, causing significant losses in annual production. In this regard, detection of diseases at an early stage and quantification of their severity have acquired the urgent attention of researchers worldwide. One emerging and popular approach for this task is the utilization of machine learning techniques. In this work, we identified the most common and damaging diseases affecting cereal crop production, and we reviewed 45 works from the past five years on the detection and classification of various diseases occurring on six cereal crops. In addition, we identified and summarised numerous publicly available datasets for each cereal crop, the lack of which we identified as the main challenge for research on the application of machine learning to cereal crop disease detection. In this survey, we identified deep convolutional neural networks trained on hyperspectral data as the most effective approach for early detection of diseases, and transfer learning as the most commonly used training method and the one yielding the best results. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications III)

14 pages, 1123 KiB  
Article
Approximation of the Riesz–Caputo Derivative by Cubic Splines
by Francesca Pitolli, Chiara Sorgentone and Enza Pellegrino
Algorithms 2022, 15(2), 69; https://doi.org/10.3390/a15020069 - 21 Feb 2022
Cited by 19 | Viewed by 6004
Abstract
Differential problems with the Riesz derivative in space are widely used to model anomalous diffusion. Although the Riesz–Caputo derivative is more suitable for modeling real phenomena, there are few examples in the literature where numerical methods are used to solve such differential problems. In this paper, we propose to approximate the Riesz–Caputo derivative of a given function with a cubic spline. As far as we are aware, this is the first time that cubic splines have been used in the context of the Riesz–Caputo derivative. To show the effectiveness of the proposed numerical method, we present numerical tests in which we compare the analytical solutions of several boundary differential problems which have the Riesz–Caputo derivative in space with the numerical solutions we obtain by a spline collocation method. The numerical results show that the proposed method is efficient and accurate. Full article

20 pages, 2764 KiB  
Article
A Real-Time Network Traffic Classifier for Online Applications Using Machine Learning
by Ahmed Abdelmoamen Ahmed and Gbenga Agunsoye
Algorithms 2021, 14(8), 250; https://doi.org/10.3390/a14080250 - 21 Aug 2021
Cited by 33 | Viewed by 9549
Abstract
The increasing ubiquity of network traffic and the deployment of new online applications have increased the complexity of traffic analysis. Traditionally, network administrators rely on recognizing well-known static ports to classify the traffic flowing through their networks. However, modern network traffic uses dynamic ports and is transported over secure application-layer protocols (e.g., HTTPS, SSL, and SSH). This makes it a challenging task for network administrators to identify online applications using traditional port-based approaches. One way of classifying modern network traffic is to use machine learning (ML) to distinguish between different traffic attributes such as packet count and size, packet inter-arrival time, packet send–receive ratio, etc. This paper presents the design and implementation of NetScrapper, a flow-based network traffic classifier for online applications. NetScrapper uses three ML models, namely K-Nearest Neighbors (KNN), Random Forest (RF), and Artificial Neural Network (ANN), to classify the 53 most popular online applications, including Amazon, YouTube, Google, Twitter, and many others. We collected a network traffic dataset containing 3,577,296 packet flows with 87 different features for training, validating, and testing the ML models. A user-friendly web-based interface was developed to enable users to either upload a snapshot of their network traffic to NetScrapper or sniff the network traffic directly from the network interface card in real time. Additionally, we created a middleware pipeline for interfacing the three models with the Flask GUI. Finally, we evaluated NetScrapper using various performance metrics such as classification accuracy and prediction time. Most notably, we found that our ANN model achieves an overall classification accuracy of 99.86% in recognizing the online applications in our dataset. Full article
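A hypothetical, minimal sketch of the model-comparison step, using scikit-learn on a synthetic flow-feature matrix; the actual 87-feature, 53-class NetScrapper dataset and its Flask front end are not reproduced here:

import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# synthetic stand-in for flow features (packet counts, sizes, inter-arrival times, ...);
# 5 classes stand in for the 53 applications of the study
X, y = make_classification(n_samples=2000, n_features=87, n_informative=30,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {"KNN": KNeighborsClassifier(),
          "RF": RandomForestClassifier(n_estimators=200, random_state=0),
          "ANN": MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    start = time.perf_counter()
    pred = model.predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred),
          "prediction time (s):", round(time.perf_counter() - start, 4))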

22 pages, 2628 KiB  
Article
COVID-19 Prediction Applying Supervised Machine Learning Algorithms with Comparative Analysis Using WEKA
by Charlyn Nayve Villavicencio, Julio Jerison Escudero Macrohon, Xavier Alphonse Inbaraj, Jyh-Horng Jeng and Jer-Guang Hsieh
Algorithms 2021, 14(7), 201; https://doi.org/10.3390/a14070201 - 30 Jun 2021
Cited by 53 | Viewed by 9581
Abstract
Early diagnosis is crucial to prevent the development of a disease that may endanger human lives. COVID-19, a contagious disease that has mutated into several variants, has become a global pandemic that must be diagnosed as soon as possible. With the use of technology, the available information concerning COVID-19 increases each day, and extracting useful information from massive data can be done through data mining. In this study, the authors utilized several supervised machine learning algorithms to build a model to analyze and predict the presence of COVID-19, using the COVID-19 Symptoms and Presence dataset from Kaggle. The J48 Decision Tree, Random Forest, Support Vector Machine, K-Nearest Neighbors and Naïve Bayes algorithms were applied through the WEKA machine learning software. Each model’s performance was evaluated using 10-fold cross validation and compared according to major accuracy measures, correctly or incorrectly classified instances, kappa, mean absolute error, and the time taken to build the model. The results show that the Support Vector Machine with the Pearson VII universal kernel outperforms the other algorithms, attaining 98.81% accuracy and a mean absolute error of 0.012. Full article
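The study was carried out in WEKA; a rough scikit-learn analogue of the 10-fold cross-validated comparison (a decision tree standing in for J48, an RBF kernel standing in for the Pearson VII universal kernel, and synthetic data in place of the Kaggle dataset) might look like this:

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# synthetic stand-in for the symptoms-and-presence data (binary target)
X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

models = {"Decision tree (cf. J48)": DecisionTreeClassifier(random_state=1),
          "Random Forest": RandomForestClassifier(random_state=1),
          "SVM (RBF, cf. PUK kernel)": SVC(),
          "k-NN": KNeighborsClassifier(),
          "Naive Bayes": GaussianNB()}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)   # 10-fold cross validation
    print(f"{name}: mean accuracy {scores.mean():.4f}")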

12 pages, 994 KiB  
Article
Digital Twins in Solar Farms: An Approach through Time Series and Deep Learning
by Kamel Arafet and Rafael Berlanga
Algorithms 2021, 14(5), 156; https://doi.org/10.3390/a14050156 - 18 May 2021
Cited by 26 | Viewed by 5913
Abstract
The generation of electricity through renewable energy sources increases every day, with solar energy being one of the fastest-growing. The emergence of information technologies such as Digital Twins (DT) in the field of the Internet of Things and Industry 4.0 allows substantial development in automatic diagnostic systems. The objective of this work is to obtain the DT of a Photovoltaic Solar Farm (PVSF) with a deep-learning (DL) approach. To build such a DT, sensor-based time series are properly analyzed and processed. The resulting data are used to train a DL model (e.g., autoencoders) in order to detect anomalies of the physical system in its DT. Results show a reconstruction error of around 0.1, a recall score of 0.92 and an Area Under Curve (AUC) of 0.97. Therefore, this paper demonstrates that the DT can reproduce the behavior of the physical system as well as efficiently detect its anomalies. Full article
(This article belongs to the Special Issue Algorithms and Applications of Time Series Analysis)
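A hedged sketch of the anomaly-detection core of such a digital twin, assuming Keras, a small dense autoencoder, and synthetic sensor windows in place of the solar-farm time series; anomalies are flagged when the reconstruction error exceeds a threshold learned on normal data:

import numpy as np
from tensorflow import keras

# synthetic stand-in for normalised sensor windows from the plant
normal = np.random.normal(0.0, 1.0, size=(1000, 24)).astype("float32")

autoencoder = keras.Sequential([
    keras.Input(shape=(24,)),
    keras.layers.Dense(8, activation="relu"),      # encoder (bottleneck)
    keras.layers.Dense(24, activation="linear"),   # decoder
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=20, batch_size=32, verbose=0)

# threshold on reconstruction error, learned from normal behaviour only
errors = np.mean((autoencoder.predict(normal, verbose=0) - normal) ** 2, axis=1)
threshold = errors.mean() + 3 * errors.std()

new_window = np.random.normal(3.0, 1.0, size=(1, 24)).astype("float32")
score = np.mean((autoencoder.predict(new_window, verbose=0) - new_window) ** 2)
print("anomaly" if score > threshold else "normal", score, threshold)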

21 pages, 1182 KiB  
Article
Machine Learning Predicts Outcomes of Phase III Clinical Trials for Prostate Cancer
by Felix D. Beacher, Lilianne R. Mujica-Parodi, Shreyash Gupta and Leonardo A. Ancora
Algorithms 2021, 14(5), 147; https://doi.org/10.3390/a14050147 - 5 May 2021
Cited by 16 | Viewed by 8846
Abstract
The ability to predict the individual outcomes of clinical trials could support the development of tools for precision medicine and improve the efficiency of clinical-stage drug development. However, there are no published attempts to predict individual outcomes of clinical trials for cancer. We used machine learning (ML) to predict individual responses to a two-year course of bicalutamide, a standard treatment for prostate cancer, based on data from three Phase III clinical trials (n = 3653). We developed models that used a merged dataset from all three studies. The best performing models using merged data from all three studies had an accuracy of 76%. The performance of these models was confirmed by further modeling using a merged dataset from two of the three studies, and a separate study for testing. Together, our results indicate the feasibility of ML-based tools for predicting cancer treatment outcomes, with implications for precision oncology and improving the efficiency of clinical-stage drug development. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application)

17 pages, 503 KiB  
Article
Multiple Criteria Decision Making and Prospective Scenarios Model for Selection of Companies to Be Incubated
by Altina S. Oliveira, Carlos F. S. Gomes, Camilla T. Clarkson, Adriana M. Sanseverino, Mara R. S. Barcelos, Igor P. A. Costa and Marcos Santos
Algorithms 2021, 14(4), 111; https://doi.org/10.3390/a14040111 - 30 Mar 2021
Cited by 49 | Viewed by 4574
Abstract
This paper proposes a model to evaluate business projects applying to an incubator, allowing them to be ranked in order of selection priority. The model combines the Momentum method to build prospective scenarios and the AHP-TOPSIS-2N Multiple Criteria Decision Making (MCDM) method to rank the alternatives. Six business projects were evaluated for incubation. The Momentum method made it possible for us to create an initial core of criteria for the evaluation of incubation projects. The AHP-TOPSIS-2N method supported the decision to choose the company to be incubated by ranking the alternatives in order of relevance. Our evaluation model improves on the existing models used by incubators. This model can be used and/or adapted by any incubator to evaluate the business projects to be incubated. The set of criteria for the evaluation of incubation projects is original, and the use of prospective scenarios with an MCDM method to evaluate companies to be incubated does not exist in the literature. Full article
(This article belongs to the Special Issue Algorithms and Models for Dynamic Multiple Criteria Decision Making)
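For illustration, a plain TOPSIS ranking (NumPy) is sketched below; the article’s AHP-TOPSIS-2N additionally derives criterion weights with AHP and applies two normalisation procedures, which are not reproduced here:

import numpy as np

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria; benefit[j] is True if larger values are better
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))          # vector normalisation
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_pos = np.sqrt(((weighted - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((weighted - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                              # closeness coefficient

# toy example: 3 candidate projects scored on 3 criteria (two benefit, one cost)
scores = topsis(np.array([[7., 9., 3.], [8., 7., 4.], [6., 8., 2.]]),
                weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, True, False]))
print(np.argsort(-scores))   # ranking, best alternative first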

14 pages, 611 KiB  
Article
An Integrated Neural Network and SEIR Model to Predict COVID-19
by Sharif Noor Zisad, Mohammad Shahadat Hossain, Mohammed Sazzad Hossain and Karl Andersson
Algorithms 2021, 14(3), 94; https://doi.org/10.3390/a14030094 - 19 Mar 2021
Cited by 34 | Viewed by 7059
Abstract
A novel coronavirus (COVID-19), which has become a great concern for the world, was first identified in Wuhan city, China. Its rapid spread throughout the world was accompanied by an alarming number of infected patients and a gradually increasing number of deaths. If the number of infected cases can be predicted in advance, it would contribute greatly to controlling the pandemic in any area. Therefore, this study introduces an integrated model for predicting the number of confirmed cases from the perspective of Bangladesh. Moreover, the number of quarantined patients and the change in the basic reproduction rate (the R0-value) can also be evaluated using this model. This integrated model combines the SEIR (Susceptible, Exposed, Infected, Removed) epidemiological model and neural networks. The model was trained using available data from 250 days. The accuracy of the prediction of confirmed cases is approximately between 90% and 99%. The performance of this integrated model was evaluated by showing the difference in accuracy between the integrated model and the general SEIR model. The results show that the integrated model is more accurate than the general SEIR model when predicting the number of confirmed cases in Bangladesh. Full article
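A minimal sketch of the SEIR component (SciPy), with illustrative parameter values rather than estimates for Bangladesh; in the integrated model, the neural network is what adjusts quantities such as the transmission rate and R0 to the observed case counts:

import numpy as np
from scipy.integrate import odeint

# standard SEIR dynamics; beta (transmission), sigma (incubation) and gamma (recovery)
# are placeholder values used purely for illustration
def seir(state, t, beta, sigma, gamma, N):
    S, E, I, R = state
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

N = 1_000_000
t = np.linspace(0, 250, 251)                      # 250 days, as in the training window
sol = odeint(seir, [N - 10, 0, 10, 0], t, args=(0.4, 1 / 5.2, 1 / 10, N))
print("peak infected:", int(sol[:, 2].max()))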

12 pages, 393 KiB  
Article
UAV Formation Shape Control via Decentralized Markov Decision Processes
by Md Ali Azam, Hans D. Mittelmann and Shankarachary Ragi
Algorithms 2021, 14(3), 91; https://doi.org/10.3390/a14030091 - 17 Mar 2021
Cited by 28 | Viewed by 5010
Abstract
In this paper, we present a decentralized unmanned aerial vehicle (UAV) swarm formation control approach based on decision theory. Specifically, we pose the UAV swarm motion control problem as a decentralized Markov decision process (Dec-MDP). Here, the goal is to drive the UAV swarm from an initial geographical region to another geographical region where the swarm must form a three-dimensional shape (e.g., the surface of a sphere). As most decision-theoretic formulations suffer from the curse of dimensionality, we adapt an existing fast approximate dynamic programming method called nominal belief-state optimization (NBO) to approximately solve the formation control problem. We perform numerical studies in MATLAB to validate the performance of the above control algorithms. Full article
(This article belongs to the Special Issue Algorithms in Stochastic Models)

15 pages, 367 KiB  
Article
An Improved Greedy Heuristic for the Minimum Positive Influence Dominating Set Problem in Social Networks
by Salim Bouamama and Christian Blum
Algorithms 2021, 14(3), 79; https://doi.org/10.3390/a14030079 - 28 Feb 2021
Cited by 18 | Viewed by 6505
Abstract
This paper presents a performance comparison of greedy heuristics for a recent variant of the dominating set problem known as the minimum positive influence dominating set (MPIDS) problem. This APX-hard combinatorial optimization problem has applications in social networks. Its aim is to identify a small subset of key influential individuals in order to facilitate the spread of positive influence in the whole network. In this paper, we focus on the development of a fast and effective greedy heuristic for the MPIDS problem, because greedy heuristics are an essential component of more sophisticated metaheuristics. Thus, the development of well-working greedy heuristics supports the development of efficient metaheuristics. Extensive experiments conducted on a wide range of social networks and complex networks confirm the overall superiority of our greedy algorithm over its competitors, especially when the problem size becomes large. Moreover, we compare our algorithm with the integer linear programming solver CPLEX. While the performance of CPLEX is very strong for small and medium-sized networks, it reaches its limits when being applied to the largest networks. However, even in the context of small and medium-sized networks, our greedy algorithm is only 2.53% worse than CPLEX. Full article
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
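A generic greedy sketch of the underlying problem (not the improved heuristic of the article): in a positive influence dominating set, every vertex must have at least ⌈deg(v)/2⌉ of its neighbours inside the set.

import math

def greedy_mpids(adj):                      # adj: dict node -> set of neighbours
    need = {v: math.ceil(len(adj[v]) / 2) for v in adj}   # remaining demand per vertex
    D = set()
    while any(n > 0 for n in need.values()):
        # pick the vertex that satisfies the most remaining demand among its neighbours
        best = max((v for v in adj if v not in D),
                   key=lambda v: sum(1 for u in adj[v] if need[u] > 0))
        D.add(best)
        for u in adj[best]:
            if need[u] > 0:
                need[u] -= 1
    return D

toy = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
print(greedy_mpids(toy))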

31 pages, 2431 KiB  
Article
Solution Merging in Matheuristics for Resource Constrained Job Scheduling
by Dhananjay Thiruvady, Christian Blum and Andreas T. Ernst
Algorithms 2020, 13(10), 256; https://doi.org/10.3390/a13100256 - 9 Oct 2020
Cited by 16 | Viewed by 4182
Abstract
Matheuristics have been gaining in popularity for solving combinatorial optimisation problems in recent years. This new class of hybrid method combines elements of both mathematical programming for intensification and metaheuristic searches for diversification. A recent approach in this direction has been to build a neighbourhood for integer programs by merging information from several heuristic solutions, namely construct, solve, merge and adapt (CMSA). In this study, we investigate this method alongside a closely related novel approach—merge search (MS). Both methods rely on a population of solutions, and for the purposes of this study, we examine two options: (a) a constructive heuristic and (b) ant colony optimisation (ACO); that is, a method based on learning. These methods are also implemented in a parallel framework using multi-core shared memory, which leads to improving the overall efficiency. Using a resource constrained job scheduling problem as a test case, different aspects of the algorithms are investigated. We find that both methods, using ACO, are competitive with current state-of-the-art methods, outperforming them for a range of problems. Regarding MS and CMSA, the former seems more effective on medium-sized problems, whereas the latter performs better on large problems. Full article
(This article belongs to the Special Issue Algorithms for Graphs and Networks)

12 pages, 247 KiB  
Article
The Use of an Exact Algorithm within a Tabu Search Maximum Clique Algorithm
by Derek H. Smith, Roberto Montemanni and Stephanie Perkins
Algorithms 2020, 13(10), 253; https://doi.org/10.3390/a13100253 - 4 Oct 2020
Cited by 8 | Viewed by 3432
Abstract
Let G=(V,E) be an undirected graph with vertex set V and edge set E. A clique C of G is a subset of the vertices of V with every pair of vertices of C adjacent. A maximum clique is a clique with the maximum number of vertices. A tabu search algorithm for the maximum clique problem that uses an exact algorithm on subproblems is presented. The exact algorithm uses a graph coloring upper bound for pruning, and the best such algorithm to use in this context is considered. The final tabu search algorithm successfully finds the optimal or best known solution for all standard benchmarks considered. It is compared with a state-of-the-art algorithm that does not use exact search. It is slower to find the known optimal solution for most instances but is faster for five instances and finds a larger clique for two instances. Full article
(This article belongs to the Special Issue Algorithms for Graphs and Networks)
18 pages, 404 KiB  
Article
A Survey on Shortest Unique Substring Queries
by Paniz Abedin, M. Oğuzhan Külekci and Shama V. Thankachan
Algorithms 2020, 13(9), 224; https://doi.org/10.3390/a13090224 - 6 Sep 2020
Cited by 5 | Viewed by 4396
Abstract
The shortest unique substring (SUS) problem is an active line of research in the field of string algorithms and has several applications in bioinformatics and information retrieval. The initial version of the problem was proposed by Pei et al. [ICDE’13]. Over the years, many variants and extensions have been pursued, which include positional-SUS, interval-SUS, approximate-SUS, palindromic-SUS, range-SUS, etc. In this article, we highlight some of the key results and summarize the recent developments in this area. Full article
(This article belongs to the Special Issue Algorithms in Bioinformatics)
17 pages, 471 KiB  
Article
Exact Method for Generating Strategy-Solvable Sudoku Clues
by Kohei Nishikawa and Takahisa Toda
Algorithms 2020, 13(7), 171; https://doi.org/10.3390/a13070171 - 16 Jul 2020
Cited by 3 | Viewed by 11786
Abstract
A Sudoku puzzle often has a regular pattern in the arrangement of initial digits and it is typically made solvable with known solving techniques called strategies. In this paper, we consider the problem of generating such Sudoku instances. We introduce a rigorous framework to discuss solvability for Sudoku instances with respect to strategies. This allows us to handle not only known strategies but also general strategies under a few reasonable assumptions. We propose an exact method for determining Sudoku clues for a given set of clue positions that is solvable with a given set of strategies. This is the first exact method except for a trivial brute-force search. Besides the clue generation, we present an application of our method to the problem of determining the minimum number of strategy-solvable Sudoku clues. We conduct experiments to evaluate our method, varying the position and the number of clues at random. Our method terminates within 1 min for many grids. However, as the number of clues gets closer to 20, the running time rapidly increases and exceeds the time limit set to 600 s. We also evaluate our method for several instances with 17 clue positions taken from known minimum Sudokus to see the efficiency for deciding unsolvability. Full article

24 pages, 1066 KiB  
Article
Binary Time Series Classification with Bayesian Convolutional Neural Networks When Monitoring for Marine Gas Discharges
by Kristian Gundersen, Guttorm Alendal, Anna Oleynik and Nello Blaser
Algorithms 2020, 13(6), 145; https://doi.org/10.3390/a13060145 - 19 Jun 2020
Cited by 14 | Viewed by 5885
Abstract
The world’s oceans are under stress from climate change, acidification and other human activities, and the UN has declared 2021–2030 as the decade for marine science. To monitor the marine waters, with the purpose of detecting discharges of tracers from unknown locations, large areas will need to be covered with limited resources. To increase the detectability of marine gas seepage we propose a deep probabilistic learning algorithm, a Bayesian Convolutional Neural Network (BCNN), to classify time series of measurements. The BCNN will classify time series to belong to a leak/no-leak situation, including classification uncertainty. The latter is important for decision makers who must decide to initiate costly confirmation surveys and, hence, would like to avoid false positives. Results from a transport model are used for the learning process of the BCNN and the task is to distinguish the signal from a leak hidden within the natural variability. We show that the BCNN classifies time series arising from leaks with high accuracy and estimates its associated uncertainty. We combine the output of the BCNN model, the posterior predictive distribution, with a Bayesian decision rule showcasing how the framework can be used in practice to make optimal decisions based on a given cost function. Full article

26 pages, 388 KiB  
Article
Late Acceptance Hill-Climbing Matheuristic for the General Lot Sizing and Scheduling Problem with Rich Constraints
by Andreas Goerler, Eduardo Lalla-Ruiz and Stefan Voß
Algorithms 2020, 13(6), 138; https://doi.org/10.3390/a13060138 - 9 Jun 2020
Cited by 17 | Viewed by 5876
Abstract
This paper considers the general lot sizing and scheduling problem with rich constraints exemplified by means of rework and lifetime constraints for defective items (GLSP-RP), which finds numerous applications in industrial settings, for example, the food processing industry and the pharmaceutical industry. To address this problem, we propose the Late Acceptance Hill-climbing Matheuristic (LAHCM) as a novel solution framework that exploits and integrates the late acceptance hill climbing algorithm and exact approaches for speeding up the solution process in comparison to solving the problem by means of a general solver. The computational results show the benefits of incorporating exact approaches within the LAHCM template leading to high-quality solutions within short computational times. Full article
(This article belongs to the Special Issue Optimization Algorithms for Allocation Problems)
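A generic sketch of the late acceptance hill-climbing rule at the heart of LAHCM: a candidate is accepted if it is no worse than the solution accepted L iterations earlier (or than the current one). The exact, MIP-based components of the matheuristic are abstracted into a user-supplied neighbour function.

import random

def lahc(init, cost, neighbour, history_len=50, iters=10_000):
    cur, cur_cost = init, cost(init)
    best, best_cost = cur, cur_cost
    hist = [cur_cost] * history_len                    # objective values of past accepts
    for k in range(iters):
        cand = neighbour(cur)
        c = cost(cand)
        if c <= hist[k % history_len] or c <= cur_cost:  # late acceptance rule
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        hist[k % history_len] = cur_cost
    return best, best_cost

# toy usage: minimise a one-dimensional quadratic over the integers
sol, val = lahc(0, lambda x: (x - 7) ** 2, lambda x: x + random.choice([-1, 1]))
print(sol, val)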

30 pages, 871 KiB  
Article
Short-Term Wind Speed Forecasting Using Statistical and Machine Learning Methods
by Lucky O. Daniel, Caston Sigauke, Colin Chibaya and Rendani Mbuvha
Algorithms 2020, 13(6), 132; https://doi.org/10.3390/a13060132 - 26 May 2020
Cited by 20 | Viewed by 6317
Abstract
Wind offers an environmentally sustainable energy resource that has seen increasing global adoption in recent years. However, its intermittent, unstable and stochastic nature hampers its representation among other renewable energy sources. This work addresses the forecasting of wind speed, a primary input needed for wind energy generation, using data obtained from the South African Wind Atlas Project. Forecasting is carried out on a two-days-ahead time horizon. We investigate the predictive performance of artificial neural networks (ANN) trained with Bayesian regularisation, decision-tree-based stochastic gradient boosting (SGB) and generalised additive models (GAMs). The results of the comparative analysis suggest that the ANN displays superior predictive performance based on root mean square error (RMSE). In contrast, SGB shows outperformance in terms of mean average error (MAE) and the related mean average percentage error (MAPE). A further comparison of two forecast combination methods, involving linear and additive quantile regression averaging, shows the latter forecast combination method yielding lower prediction accuracy. The additive quantile regression averaging based prediction intervals also show outperformance in terms of validity, reliability, quality and accuracy. Interval combination methods show the median method to be better than its pure average counterpart. Point forecast combination and interval forecasting methods are found to improve forecast performance. Full article

19 pages, 570 KiB  
Article
A Survey of Low-Rank Updates of Preconditioners for Sequences of Symmetric Linear Systems
by Luca Bergamaschi
Algorithms 2020, 13(4), 100; https://doi.org/10.3390/a13040100 - 21 Apr 2020
Cited by 16 | Viewed by 4892
Abstract
The aim of this survey is to review some recent developments in devising efficient preconditioners for sequences of symmetric positive definite (SPD) linear systems $A_k x_k = b_k$, $k = 1, 2, \dots$, arising in many scientific applications, such as discretization of transient Partial Differential Equations (PDEs), solution of eigenvalue problems, (Inexact) Newton methods applied to nonlinear systems, and rational Krylov methods for computing a function of a matrix. In this paper, we will analyze a number of techniques for updating a given initial preconditioner by a low-rank matrix with the aim of improving the clustering of eigenvalues around 1, in order to speed up the convergence of the Preconditioned Conjugate Gradient (PCG) method. We will also review some techniques to efficiently approximate the linearly independent vectors which constitute the low-rank corrections and whose choice is crucial for the effectiveness of the approach. Numerical results on real-life applications show that the performance of a given iterative solver can be very much enhanced by the use of low-rank updates. Full article

19 pages, 11395 KiB  
Article
How to Identify Varying Lead–Lag Effects in Time Series Data: Implementation, Validation, and Application of the Generalized Causality Algorithm
by Johannes Stübinger and Katharina Adler
Algorithms 2020, 13(4), 95; https://doi.org/10.3390/a13040095 - 16 Apr 2020
Cited by 4 | Viewed by 8239
Abstract
This paper develops the generalized causality algorithm and applies it to a multitude of data from the fields of economics and finance. Specifically, our parameter-free algorithm efficiently determines the optimal non-linear mapping and identifies varying lead–lag effects between two given time series. This procedure allows an elastic adjustment of the time axis to find similar but phase-shifted sequences; structural breaks in their relationship are also captured. A large-scale simulation study validates the outperformance in the vast majority of parameter constellations in terms of efficiency, robustness, and feasibility. Finally, the presented methodology is applied to real data from the areas of macroeconomics, finance, and metals. The highest similarity is shown by the pairs of gross domestic product and consumer price index (macroeconomics), the S&P 500 index and the Deutscher Aktienindex (finance), as well as gold and silver (metals). In addition, the algorithm makes full use of its flexibility and identifies both various structural breaks and regime patterns over time, which are (partly) well documented in the literature. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications)

9 pages, 2485 KiB  
Article
Beyond Newton: A New Root-Finding Fixed-Point Iteration for Nonlinear Equations
by Ankush Aggarwal and Sanjay Pant
Algorithms 2020, 13(4), 78; https://doi.org/10.3390/a13040078 - 29 Mar 2020
Cited by 4 | Viewed by 6387
Abstract
Finding roots of equations is at the heart of most computational science. A well-known and widely used iterative algorithm is Newton’s method. However, its convergence depends heavily on the initial guess, with poor choices often leading to slow convergence or even divergence. In this short note, we seek to enlarge the basin of attraction of the classical Newton’s method. The key idea is to develop a relatively simple multiplicative transform of the original equations, which leads to a reduction in nonlinearity, thereby alleviating the limitation of Newton’s method. Based on this idea, we derive a new class of iterative methods and rediscover Halley’s method as the limit case. We present the application of these methods to several mathematical functions (real, complex, and vector equations). Across all examples, our numerical experiments suggest that the new methods converge for a significantly wider range of initial guesses. For scalar equations, the increase in computational cost per iteration is minimal. For vector functions, more extensive analysis is needed to compare the increase in cost per iteration and the improvement in convergence of specific problems. Full article
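For context, the two classical baselines referenced in the abstract are sketched below (Newton’s method and Halley’s method, which the authors recover as a limit case); the new iteration family itself is not reproduced here.

def newton(f, df, x, iters=20):
    # classical Newton iteration x <- x - f(x)/f'(x)
    for _ in range(iters):
        x = x - f(x) / df(x)
    return x

def halley(f, df, d2f, x, iters=20):
    # Halley's method, using first and second derivatives
    for _ in range(iters):
        fx, dfx = f(x), df(x)
        x = x - 2 * fx * dfx / (2 * dfx**2 - fx * d2f(x))
    return x

f = lambda x: x**3 - 2 * x - 5          # classic test equation
df = lambda x: 3 * x**2 - 2
d2f = lambda x: 6 * x
print(newton(f, df, 3.0), halley(f, df, d2f, 3.0))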

65 pages, 1890 KiB  
Review
Energy Efficient Routing in Wireless Sensor Networks: A Comprehensive Survey
by Christos Nakas, Dionisis Kandris and Georgios Visvardis
Algorithms 2020, 13(3), 72; https://doi.org/10.3390/a13030072 - 24 Mar 2020
Cited by 145 | Viewed by 19229
Abstract
Wireless Sensor Networks (WSNs) are among the most rapidly emerging technologies, thanks to their great capabilities and their ever-growing range of applications. However, the lifetime of WSNs is extremely restricted due to the limited energy capacity of their sensor nodes. This is why energy conservation is considered the most important research concern for WSNs. Radio communication is the most energy-consuming function in a WSN. Thus, energy efficient routing is necessary to save energy and thus prolong the lifetime of WSNs. For this reason, numerous protocols for energy efficient routing in WSNs have been proposed. This article offers an analytical and up-to-date survey on protocols of this kind. The classic and modern protocols presented are categorized depending on (i) how the network is structured, (ii) how data are exchanged, (iii) whether or not location information is used, and (iv) whether or not Quality of Service (QoS) or multiple paths are supported. In each distinct category, protocols are both described and compared in terms of specific performance metrics, while their advantages and disadvantages are discussed. Finally, the study findings are discussed, concluding remarks are drawn, and open research issues are indicated. Full article

17 pages, 2260 KiB  
Article
Observability of Uncertain Nonlinear Systems Using Interval Analysis
by Thomas Paradowski, Sabine Lerch, Michelle Damaszek, Robert Dehnert and Bernd Tibken
Algorithms 2020, 13(3), 66; https://doi.org/10.3390/a13030066 - 16 Mar 2020
Cited by 11 | Viewed by 5031
Abstract
In the field of control engineering, the observability of uncertain nonlinear systems is often neglected and not examined. This is due to the complex analytical calculations required for the verification. Therefore, the aim of this work is to provide an algorithm which numerically analyzes the observability of nonlinear systems described by finite-dimensional, continuous-time sets of ordinary differential equations. The algorithm is based on definitions of distinguishability and local observability using a rank check, from which conditions are deduced. The only requirements are the uncertain model equations of the system. Further, the methodology verifies the observability of nonlinear systems on a given state space. In case the state space is not fully observable, the algorithm provides the observable set of states. In addition, the results obtained by the algorithm allow insight into why the remaining states cannot be distinguished. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control)

12 pages, 368 KiB  
Article
Optimization of Constrained Stochastic Linear-Quadratic Control on an Infinite Horizon: A Direct-Comparison Based Approach
by Ruobing Xue, Xiangshen Ye and Weiping Wu
Algorithms 2020, 13(2), 49; https://doi.org/10.3390/a13020049 - 24 Feb 2020
Viewed by 4188
Abstract
In this paper we study the optimization of the discrete-time stochastic linear-quadratic (LQ) control problem with conic control constraints on an infinite horizon, considering multiplicative noises. Stochastic control systems can be formulated as Markov Decision Problems (MDPs) with continuous state spaces and therefore we can apply the direct-comparison based optimization approach to solve the problem. We first derive the performance difference formula for the LQ problem by utilizing the state separation property of the system structure. Based on this, we successfully derive the optimality conditions and the stationary optimal feedback control. By introducing the optimization, we establish a general framework for infinite horizon stochastic control problems. The direct-comparison based approach is applicable to both linear and nonlinear systems. Our work provides a new perspective in LQ control problems; based on this approach, learning based algorithms can be developed without identifying all of the system parameters. Full article

13 pages, 6398 KiB  
Article
Optimal Learning and Self-Awareness Versus PDI
by Brendon Smeresky, Alex Rizzo and Timothy Sands
Algorithms 2020, 13(1), 23; https://doi.org/10.3390/a13010023 - 11 Jan 2020
Cited by 34 | Viewed by 7646
Abstract
This manuscript will explore and analyze the effects of different paradigms for the control of rigid body motion mechanics. The experimental setup will include deterministic artificial intelligence composed of optimal self-awareness statements together with a novel, optimal learning algorithm, and these will be re-parameterized as ideal nonlinear feedforward and feedback evaluated within a Simulink simulation. Comparison is made to a custom proportional, derivative, integral controller (a modified version of classical proportional-integral-derivative control) implemented as feedback control with a specific term to account for the nonlinear coupled motion. Consistent proportional, derivative, and integral gains were used throughout the duration of the experiments. The simulation results show that, akin to feedforward control, deterministic self-awareness statements lack an error correction mechanism and instead rely on learning (which stands in place of feedback control), and that the proposed combination of optimal self-awareness statements and a newly demonstrated analytically optimal learning yielded the highest accuracy with the lowest execution time. This highlights the potential effectiveness of a learning control system. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2019)
