Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

Article
Improved JPS Path Optimization for Mobile Robots Based on Angle-Propagation Theta* Algorithm
Algorithms 2022, 15(6), 198; https://doi.org/10.3390/a15060198 - 08 Jun 2022
Cited by 4 | Viewed by 1782
Abstract
The Jump Point Search (JPS) algorithm ignores the possibility of any-angle movement, so the paths it finds on a discrete grid map still deviate from the true any-angle paths. To address this problem, this paper improves the path optimization strategy of the JPS algorithm by incorporating the viewable angle of the Angle-Propagation Theta* (AP Theta*) algorithm, and proposes the AP-JPS algorithm based on an any-angle pathfinding strategy. First, building on the JPS algorithm, the paper proposes a vision triangle judgment method that optimizes the generated path by selecting the successor search point. Second, the idea of the node viewable angle from the AP Theta* algorithm is introduced to modify the line-of-sight (LOS) reachability detection between two nodes. Finally, the paths are smoothed using a seventh-order polynomial based on minimum snap, so that the AP-JPS algorithm generates paths that better match the actual robot motion. The feasibility and effectiveness of the method are demonstrated by simulation experiments and comparison with other algorithms. The results show that the proposed path planning algorithm obtains smooth paths in environments with different obstacle densities and map sizes. In the comparison experiments, the AP-JPS algorithm reduces path length by 1.61–4.68% and the total turning angle by 58.71–84.67% compared with the JPS algorithm, and reduces computing time by 98.59–99.22% compared with the AP Theta* algorithm. Full article
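Any-angle planners such as AP Theta* hinge on a line-of-sight (LOS) test between grid nodes. The sketch below shows a naive sampling-based LOS check on an occupancy grid; it only illustrates the idea and is not the paper's vision-triangle or angle-propagation method, and the grid, step size, and function names are assumptions.

```python
# Naive line-of-sight (LOS) check on an occupancy grid (illustrative sketch only;
# not the AP-JPS vision-triangle test). grid[r][c] == 1 marks an obstacle cell.
import math

def line_of_sight(grid, a, b, step=0.1):
    """Return True if the straight segment from cell a to cell b crosses only free cells."""
    (r0, c0), (r1, c1) = a, b
    n = max(1, int(math.hypot(r1 - r0, c1 - c0) / step))
    for i in range(n + 1):
        t = i / n
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        if grid[r][c] == 1:
            return False
    return True

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(line_of_sight(grid, (0, 0), (2, 0)))  # True: straight down the free column
print(line_of_sight(grid, (0, 0), (2, 3)))  # False: the diagonal crosses the wall
```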

Review
A Survey on Network Optimization Techniques for Blockchain Systems
Algorithms 2022, 15(6), 193; https://doi.org/10.3390/a15060193 - 04 Jun 2022
Cited by 3 | Viewed by 2956
Abstract
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can potentially be improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder its integration with IoT. Solutions to blockchain's scalability issues, such as minimizing the computational complexity of consensus algorithms or reducing blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide on other peers with whom to exchange blockchain data. As a result, the peer-to-peer (P2P) topology formation limits the effective achievable throughput. This paper provides a survey of the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into those layers, and surveys state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work. Full article
(This article belongs to the Special Issue Advances in Blockchain Architecture and Consensus)

Article
Deterministic Approximate EM Algorithm; Application to the Riemann Approximation EM and the Tempered EM
Algorithms 2022, 15(3), 78; https://doi.org/10.3390/a15030078 - 25 Feb 2022
Cited by 4 | Viewed by 2208
Abstract
The Expectation Maximisation (EM) algorithm is widely used to optimise non-convex likelihood functions with latent variables. Many authors have modified its simple design to fit more specific situations. For instance, the Expectation (E) step has been replaced by Monte Carlo (MC), Markov Chain Monte Carlo or tempered approximations, among others. Most of the well-studied approximations belong to the stochastic class; by comparison, the literature on deterministic approximations is lacking. In this paper, we introduce a theoretical framework, with state-of-the-art convergence guarantees, for any deterministic approximation of the E step. We analyse theoretically and empirically several approximations that fit into this framework. First, for intractable E-steps, we introduce a deterministic version of MC-EM using Riemann sums: a straightforward method that requires no hyper-parameter fine-tuning and is useful when the dimensionality is low enough that an MC-EM is not warranted. Then, we consider the tempered approximation, borrowed from the Simulated Annealing literature and used to escape local extrema. We prove that the tempered EM verifies the convergence guarantees for a wider range of temperature profiles than previously considered. We showcase empirically how new non-trivial profiles can more successfully escape adversarial initialisations. Finally, we combine the Riemann and tempered approximations into a method that accomplishes both their purposes. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
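For intuition, the deterministic Riemann variant replaces the (possibly intractable) E-step integral with a sum over a fixed grid. A schematic one-dimensional version, with notation assumed here rather than taken from the paper, reads:

```latex
% E-step of EM and its Riemann-sum approximation (schematic, 1-D latent variable z)
Q(\theta \mid \theta^{(t)})
  = \int \log p(x, z \mid \theta)\, p(z \mid x, \theta^{(t)})\, \mathrm{d}z
  \;\approx\;
  \sum_{k=1}^{K} \log p(x, z_k \mid \theta)\, p(z_k \mid x, \theta^{(t)})\,(z_{k+1} - z_k),
```

where z_1 < z_2 < ... < z_{K+1} is a fixed grid; the M-step then maximises the approximate Q over the parameters as usual.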

Article
A Real-Time Network Traffic Classifier for Online Applications Using Machine Learning
Algorithms 2021, 14(8), 250; https://doi.org/10.3390/a14080250 - 21 Aug 2021
Cited by 9 | Viewed by 3798
Abstract
The increasing ubiquity of network traffic and the deployment of new online applications have increased traffic analysis complexity. Traditionally, network administrators rely on recognizing well-known static ports to classify the traffic flowing through their networks. However, modern network traffic uses dynamic ports and is transported over secure application-layer protocols (e.g., HTTPS, SSL, and SSH). This makes it a challenging task for network administrators to identify online applications using traditional port-based approaches. One way to classify modern network traffic is to use machine learning (ML) to distinguish between traffic attributes such as packet count and size, packet inter-arrival time, packet send–receive ratio, etc. This paper presents the design and implementation of NetScrapper, a flow-based network traffic classifier for online applications. NetScrapper uses three ML models, namely K-Nearest Neighbors (KNN), Random Forest (RF), and Artificial Neural Network (ANN), to classify the 53 most popular online applications, including Amazon, YouTube, Google, Twitter, and many others. We collected a network traffic dataset containing 3,577,296 packet flows with 87 different features for training, validating, and testing the ML models. A user-friendly web-based interface was developed to enable users to either upload a snapshot of their network traffic to NetScrapper or sniff the network traffic directly from the network interface card in real time. Additionally, we created a middleware pipeline for interfacing the three models with the Flask GUI. Finally, we evaluated NetScrapper using various performance metrics such as classification accuracy and prediction time. Most notably, we found that our ANN model achieves an overall classification accuracy of 99.86% in recognizing the online applications in our dataset. Full article
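To make the flow-classification pipeline concrete, here is a minimal sketch of training and evaluating one of the three model types (a Random Forest) on per-flow feature vectors. It is not the authors' NetScrapper code: scikit-learn is assumed, and the random matrix merely stands in for the 87 flow features and 53 application labels.

```python
# Minimal flow-feature classifier sketch in the spirit of NetScrapper's RF model
# (not the authors' code). Placeholder data stand in for the real flow dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 87))      # placeholder for the 87 per-flow features
y = rng.integers(0, 53, size=5000)   # placeholder for the 53 application classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```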

Article
COVID-19 Prediction Applying Supervised Machine Learning Algorithms with Comparative Analysis Using WEKA
Algorithms 2021, 14(7), 201; https://doi.org/10.3390/a14070201 - 30 Jun 2021
Cited by 20 | Viewed by 5709
Abstract
Early diagnosis is crucial to prevent the development of a disease that may endanger human lives. COVID-19, a contagious disease that has mutated into several variants, has become a global pandemic that demands to be diagnosed as soon as possible. With the use of technology, available information concerning COVID-19 increases each day, and extracting useful information from massive data can be done through data mining. In this study, the authors utilized several supervised machine learning algorithms to build a model to analyze and predict the presence of COVID-19 using the COVID-19 Symptoms and Presence dataset from Kaggle. J48 Decision Tree, Random Forest, Support Vector Machine, K-Nearest Neighbors and Naïve Bayes algorithms were applied through the WEKA machine learning software. Each model's performance was evaluated using 10-fold cross-validation and compared according to major accuracy measures, correctly or incorrectly classified instances, kappa, mean absolute error, and time taken to build the model. The results show that the Support Vector Machine using the Pearson VII universal kernel outperforms the other algorithms, attaining 98.81% accuracy and a mean absolute error of 0.012. Full article
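The evaluation protocol in the study is a straightforward 10-fold cross-validation over several classifiers. The sketch below reproduces that protocol in scikit-learn rather than WEKA, purely for illustration; the synthetic binary features stand in for the Kaggle symptoms data, and the model list only approximates the WEKA implementations (e.g., J48 is replaced by a generic decision tree).

```python
# Compare several classifiers with 10-fold cross-validation (illustrative sketch;
# scikit-learn stands in for WEKA, random data stand in for the symptoms dataset).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 20))   # placeholder symptom indicators
y = rng.integers(0, 2, size=500)         # placeholder COVID-19 presence label

models = {
    "J48-like tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```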

Article
Digital Twins in Solar Farms: An Approach through Time Series and Deep Learning
Algorithms 2021, 14(5), 156; https://doi.org/10.3390/a14050156 - 18 May 2021
Cited by 10 | Viewed by 3095
Abstract
The generation of electricity from renewable energy sources increases every day, with solar energy being one of the fastest growing. The emergence of information technologies such as Digital Twins (DT) in the field of the Internet of Things and Industry 4.0 allows substantial development in automatic diagnostic systems. The objective of this work is to obtain the DT of a Photovoltaic Solar Farm (PVSF) with a deep-learning (DL) approach. To build such a DT, sensor-based time series are properly analyzed and processed. The resulting data are used to train a DL model (e.g., autoencoders) in order to detect anomalies of the physical system in its DT. Results show a reconstruction error around 0.1, a recall score of 0.92 and an Area Under Curve (AUC) of 0.97. Therefore, this paper demonstrates that the DT can reproduce the behavior of the physical system as well as efficiently detect its anomalies. Full article
(This article belongs to the Special Issue Algorithms and Applications of Time Series Analysis)
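The anomaly-detection idea behind the digital twin is reconstruction error: an autoencoder trained on normal sensor windows reconstructs normal data well and anomalous data poorly. A minimal sketch, assuming scikit-learn and synthetic data in place of the PVSF sensor series, looks like this:

```python
# Reconstruction-error anomaly detection with a tiny autoencoder (an MLPRegressor
# trained to map X -> X). Synthetic data stand in for the solar-farm sensors;
# the 99th-percentile threshold rule is a simple assumption, not the paper's rule.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(2000, 10))   # "healthy" sensor windows
faulty = rng.normal(3.0, 1.0, size=(20, 10))     # shifted, anomalous windows

ae = MLPRegressor(hidden_layer_sizes=(8, 3, 8), max_iter=3000, random_state=0)
ae.fit(normal, normal)                           # learn to reproduce normal data

def reconstruction_error(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

threshold = np.quantile(reconstruction_error(normal), 0.99)
flagged = np.sum(reconstruction_error(faulty) > threshold)
print("flagged anomalies:", flagged, "of", len(faulty))
```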

Article
Machine Learning Predicts Outcomes of Phase III Clinical Trials for Prostate Cancer
Algorithms 2021, 14(5), 147; https://doi.org/10.3390/a14050147 - 05 May 2021
Cited by 6 | Viewed by 4814
Abstract
The ability to predict the individual outcomes of clinical trials could support the development of tools for precision medicine and improve the efficiency of clinical-stage drug development. However, there are no published attempts to predict individual outcomes of clinical trials for cancer. We used machine learning (ML) to predict individual responses to a two-year course of bicalutamide, a standard treatment for prostate cancer, based on data from three Phase III clinical trials (n = 3653). We developed models on a dataset merging all three studies; the best-performing of these models had an accuracy of 76%. Their performance was confirmed by further modeling that used a merged dataset from two of the three studies for training and the remaining study for testing. Together, our results indicate the feasibility of ML-based tools for predicting cancer treatment outcomes, with implications for precision oncology and improving the efficiency of clinical-stage drug development. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application)

Article
Multiple Criteria Decision Making and Prospective Scenarios Model for Selection of Companies to Be Incubated
Algorithms 2021, 14(4), 111; https://doi.org/10.3390/a14040111 - 30 Mar 2021
Cited by 26 | Viewed by 2518
Abstract
This paper proposes a model to evaluate business projects applying to an incubator, allowing them to be ranked in order of selection priority. The model combines the Momentum method, to build prospective scenarios, with the AHP-TOPSIS-2N Multiple Criteria Decision Making (MCDM) method, to rank the alternatives. Six business projects were evaluated for incubation. The Momentum method made it possible to create an initial core of criteria for the evaluation of incubation projects. The AHP-TOPSIS-2N method supported the decision to choose the company to be incubated by ranking the alternatives in order of relevance. Our evaluation model improves the existing models used by incubators and can be used and/or adapted by any incubator to evaluate the business projects to be incubated. The set of criteria for the evaluation of incubation projects is original, and the use of prospective scenarios with an MCDM method to evaluate companies to be incubated does not exist in the literature. Full article
(This article belongs to the Special Issue Algorithms and Models for Dynamic Multiple Criteria Decision Making)
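As a rough illustration of the ranking step, the snippet below implements plain TOPSIS (closeness to the ideal solution). It omits the paper's two normalizations (2N) and the AHP-derived weights; all scores and weights shown are invented placeholders.

```python
# Plain TOPSIS ranking sketch (not the paper's AHP-TOPSIS-2N; weights and
# scores below are made-up placeholders for illustration only).
import numpy as np

def topsis(scores, weights, benefit):
    """scores: alternatives x criteria; benefit[j] is True if higher is better."""
    norm = scores / np.linalg.norm(scores, axis=0)          # vector normalisation
    v = norm * weights                                       # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)                      # closeness: higher is better

scores = np.array([[7, 9, 9], [8, 7, 8], [9, 6, 8], [6, 7, 8]], dtype=float)
closeness = topsis(scores,
                   weights=np.array([0.5, 0.3, 0.2]),
                   benefit=np.array([True, True, True]))
print(np.argsort(-closeness))   # alternatives ranked best to worst
```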

Article
An Integrated Neural Network and SEIR Model to Predict COVID-19
Algorithms 2021, 14(3), 94; https://doi.org/10.3390/a14030094 - 19 Mar 2021
Cited by 15 | Viewed by 3724
Abstract
A novel coronavirus (COVID-19), which has become a great concern for the world, was first identified in the city of Wuhan, China. Its rapid spread throughout the world was accompanied by an alarming number of infected patients and a gradually increasing number of deaths. If the number of infected cases can be predicted in advance, it would contribute greatly to controlling this pandemic in any area. Therefore, this study introduces an integrated model for predicting the number of confirmed cases from the perspective of Bangladesh. Moreover, the number of quarantined patients and the change in the basic reproduction rate (the R0-value) can also be evaluated using this model. This integrated model combines the SEIR (Susceptible, Exposed, Infected, Removed) epidemiological model and neural networks. The model was trained using available data from 250 days. The accuracy of the prediction of confirmed cases lies roughly between 90% and 99%. The performance of this integrated model was evaluated by comparing its accuracy with that of the general SEIR model. The result shows that the integrated model is more accurate than the general SEIR model in predicting the number of confirmed cases in Bangladesh. Full article
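For reference, the classical SEIR compartment dynamics that the hybrid model builds on are shown below; the paper's exact parameterization (and how the neural network adjusts the reproduction rate over time) may differ from this textbook form.

```latex
% Standard SEIR model with transmission rate \beta, incubation rate \sigma,
% recovery rate \gamma, and total population N
\frac{dS}{dt} = -\frac{\beta S I}{N}, \qquad
\frac{dE}{dt} = \frac{\beta S I}{N} - \sigma E, \qquad
\frac{dI}{dt} = \sigma E - \gamma I, \qquad
\frac{dR}{dt} = \gamma I, \qquad
R_0 = \frac{\beta}{\gamma}.
```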

Article
UAV Formation Shape Control via Decentralized Markov Decision Processes
Algorithms 2021, 14(3), 91; https://doi.org/10.3390/a14030091 - 17 Mar 2021
Cited by 12 | Viewed by 2526
Abstract
In this paper, we present a decentralized unmanned aerial vehicle (UAV) swarm formation control approach grounded in decision theory. Specifically, we pose the UAV swarm motion control problem as a decentralized Markov decision process (Dec-MDP). Here, the goal is to drive the UAV swarm from an initial geographical region to another geographical region where the swarm must form a three-dimensional shape (e.g., the surface of a sphere). As most decision-theoretic formulations suffer from the curse of dimensionality, we adapt an existing fast approximate dynamic programming method called nominal belief-state optimization (NBO) to approximately solve the formation control problem. We perform numerical studies in MATLAB to validate the performance of the above control algorithms. Full article
(This article belongs to the Special Issue Algorithms in Stochastic Models)

Article
An Improved Greedy Heuristic for the Minimum Positive Influence Dominating Set Problem in Social Networks
Algorithms 2021, 14(3), 79; https://doi.org/10.3390/a14030079 - 28 Feb 2021
Cited by 11 | Viewed by 3782
Abstract
This paper presents a performance comparison of greedy heuristics for a recent variant of the dominating set problem known as the minimum positive influence dominating set (MPIDS) problem. This APX-hard combinatorial optimization problem has applications in social networks. Its aim is to identify a small subset of key influential individuals in order to facilitate the spread of positive influence in the whole network. In this paper, we focus on the development of a fast and effective greedy heuristic for the MPIDS problem, because greedy heuristics are an essential component of more sophisticated metaheuristics. Thus, the development of well-working greedy heuristics supports the development of efficient metaheuristics. Extensive experiments conducted on a wide range of social networks and complex networks confirm the overall superiority of our greedy algorithm over its competitors, especially when the problem size becomes large. Moreover, we compare our algorithm with the integer linear programming solver CPLEX. While the performance of CPLEX is very strong for small and medium-sized networks, it reaches its limits when being applied to the largest networks. However, even in the context of small and medium-sized networks, our greedy algorithm is only 2.53% worse than CPLEX. Full article
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
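As a baseline for comparison, a plain greedy heuristic for MPIDS repeatedly adds the vertex that currently helps the most deficient neighbors. The sketch below, using networkx and a toy graph, illustrates that baseline only; it is not the improved heuristic proposed in the paper.

```python
# Generic greedy baseline for the MPIDS problem: every vertex v must end up with
# at least ceil(deg(v)/2) neighbours inside the chosen set D.  This is a plain
# illustrative heuristic, not the paper's improved greedy algorithm.
import math
import networkx as nx

def greedy_mpids(G):
    need = {v: math.ceil(G.degree(v) / 2) for v in G}   # remaining demand per vertex
    D = set()
    while any(need[v] > 0 for v in G):
        # pick the vertex outside D whose addition serves the most deficient neighbours
        best = max((v for v in G if v not in D),
                   key=lambda v: sum(1 for u in G.neighbors(v) if need[u] > 0))
        D.add(best)
        for u in G.neighbors(best):
            if need[u] > 0:
                need[u] -= 1
    return D

G = nx.karate_club_graph()
print(len(greedy_mpids(G)), "vertices in the positive influence dominating set")
```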

Article
Solution Merging in Matheuristics for Resource Constrained Job Scheduling
Algorithms 2020, 13(10), 256; https://doi.org/10.3390/a13100256 - 09 Oct 2020
Cited by 10 | Viewed by 2644
Abstract
Matheuristics have been gaining in popularity for solving combinatorial optimisation problems in recent years. This new class of hybrid method combines elements of both mathematical programming for intensification and metaheuristic searches for diversification. A recent approach in this direction has been to build a neighbourhood for integer programs by merging information from several heuristic solutions, namely construct, solve, merge and adapt (CMSA). In this study, we investigate this method alongside a closely related novel approach—merge search (MS). Both methods rely on a population of solutions, and for the purposes of this study, we examine two options: (a) a constructive heuristic and (b) ant colony optimisation (ACO); that is, a method based on learning. These methods are also implemented in a parallel framework using multi-core shared memory, which leads to improving the overall efficiency. Using a resource constrained job scheduling problem as a test case, different aspects of the algorithms are investigated. We find that both methods, using ACO, are competitive with current state-of-the-art methods, outperforming them for a range of problems. Regarding MS and CMSA, the former seems more effective on medium-sized problems, whereas the latter performs better on large problems. Full article
(This article belongs to the Special Issue Algorithms for Graphs and Networks)

Article
The Use of an Exact Algorithm within a Tabu Search Maximum Clique Algorithm
Algorithms 2020, 13(10), 253; https://doi.org/10.3390/a13100253 - 04 Oct 2020
Cited by 5 | Viewed by 2088
Abstract
Let G=(V,E) be an undirected graph with vertex set V and edge set E. A clique C of G is a subset of the vertices of V with every pair of vertices of C adjacent. A maximum clique is a clique with the maximum number of vertices. A tabu search algorithm for the maximum clique problem that uses an exact algorithm on subproblems is presented. The exact algorithm uses a graph coloring upper bound for pruning, and the best such algorithm to use in this context is considered. The final tabu search algorithm successfully finds the optimal or best known solution for all standard benchmarks considered. It is compared with a state-of-the-art algorithm that does not use exact search. It is slower to find the known optimal solution for most instances but is faster for five instances and finds a larger clique for two instances. Full article
(This article belongs to the Special Issue Algorithms for Graphs and Networks)
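The pruning idea mentioned in the abstract rests on a simple fact: any proper coloring of the candidate vertex set with c colors bounds the largest clique inside it by c. A hedged sketch of such a bound, using networkx's greedy coloring as a stand-in for the paper's specific bounding routine, follows.

```python
# Colouring-based upper bound used to prune exact maximum-clique search:
# a proper colouring of the candidate set with c colours caps any clique
# inside it at c vertices.  Illustrative sketch only, not the paper's routine.
import networkx as nx

def colour_bound(G, candidates):
    """Upper bound on the size of any clique contained in `candidates`."""
    sub = G.subgraph(candidates)
    colouring = nx.coloring.greedy_color(sub, strategy="largest_first")
    return 1 + max(colouring.values()) if colouring else 0

G = nx.gnp_random_graph(60, 0.5, seed=1)
print(colour_bound(G, list(G.nodes)))
```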
Article
A Survey on Shortest Unique Substring Queries
Algorithms 2020, 13(9), 224; https://doi.org/10.3390/a13090224 - 06 Sep 2020
Cited by 4 | Viewed by 2462
Abstract
The shortest unique substring (SUS) problem is an active line of research in the field of string algorithms and has several applications in bioinformatics and information retrieval. The initial version of the problem was proposed by Pei et al. [ICDE’13]. Over the years, many variants and extensions have been pursued, which include positional-SUS, interval-SUS, approximate-SUS, palindromic-SUS, range-SUS, etc. In this article, we highlight some of the key results and summarize the recent developments in this area. Full article
(This article belongs to the Special Issue Algorithms in Bioinformatics)
Article
Exact Method for Generating Strategy-Solvable Sudoku Clues
Algorithms 2020, 13(7), 171; https://doi.org/10.3390/a13070171 - 16 Jul 2020
Viewed by 6856
Abstract
A Sudoku puzzle often has a regular pattern in the arrangement of initial digits, and it is typically made solvable with known solving techniques called strategies. In this paper, we consider the problem of generating such Sudoku instances. We introduce a rigorous framework to discuss solvability for Sudoku instances with respect to strategies. This allows us to handle not only known strategies but also general strategies under a few reasonable assumptions. We propose an exact method for determining Sudoku clues for a given set of clue positions such that the resulting instance is solvable with a given set of strategies. This is the first exact method apart from a trivial brute-force search. Besides clue generation, we present an application of our method to the problem of determining the minimum number of strategy-solvable Sudoku clues. We conduct experiments to evaluate our method, varying the position and the number of clues at random. Our method terminates within 1 min for many grids. However, as the number of clues approaches 20, the running time rapidly increases and exceeds the time limit set to 600 s. We also evaluate our method on several instances with 17 clue positions taken from known minimum Sudokus to assess its efficiency in deciding unsolvability. Full article

Article
Binary Time Series Classification with Bayesian Convolutional Neural Networks When Monitoring for Marine Gas Discharges
Algorithms 2020, 13(6), 145; https://doi.org/10.3390/a13060145 - 19 Jun 2020
Cited by 11 | Viewed by 3669
Abstract
The world’s oceans are under stress from climate change, acidification and other human activities, and the UN has declared 2021–2030 as the decade for marine science. To monitor the marine waters, with the purpose of detecting discharges of tracers from unknown locations, large areas will need to be covered with limited resources. To increase the detectability of marine gas seepage we propose a deep probabilistic learning algorithm, a Bayesian Convolutional Neural Network (BCNN), to classify time series of measurements. The BCNN will classify time series to belong to a leak/no-leak situation, including classification uncertainty. The latter is important for decision makers who must decide to initiate costly confirmation surveys and, hence, would like to avoid false positives. Results from a transport model are used for the learning process of the BCNN and the task is to distinguish the signal from a leak hidden within the natural variability. We show that the BCNN classifies time series arising from leaks with high accuracy and estimates its associated uncertainty. We combine the output of the BCNN model, the posterior predictive distribution, with a Bayesian decision rule showcasing how the framework can be used in practice to make optimal decisions based on a given cost function. Full article

Article
Late Acceptance Hill-Climbing Matheuristic for the General Lot Sizing and Scheduling Problem with Rich Constraints
Algorithms 2020, 13(6), 138; https://doi.org/10.3390/a13060138 - 09 Jun 2020
Cited by 9 | Viewed by 3281
Abstract
This paper considers the general lot sizing and scheduling problem with rich constraints exemplified by means of rework and lifetime constraints for defective items (GLSP-RP), which finds numerous applications in industrial settings, for example, the food processing industry and the pharmaceutical industry. To address this problem, we propose the Late Acceptance Hill-climbing Matheuristic (LAHCM) as a novel solution framework that exploits and integrates the late acceptance hill climbing algorithm and exact approaches for speeding up the solution process in comparison to solving the problem by means of a general solver. The computational results show the benefits of incorporating exact approaches within the LAHCM template leading to high-quality solutions within short computational times. Full article
(This article belongs to the Special Issue Optimization Algorithms for Allocation Problems)
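At the heart of the framework is the late acceptance hill-climbing rule: a candidate is accepted if it is no worse than either the current solution or the cost recorded a fixed number of iterations ago. The generic loop below sketches that rule on a toy problem; the `cost` and `neighbour` callables, history length, and iteration budget are placeholders, and the paper's matheuristic additionally embeds exact sub-solves.

```python
# Bare late-acceptance hill-climbing (LAHC) loop: accept a candidate if it is
# no worse than the cost stored `history_len` iterations ago or than the
# current cost.  Generic sketch only; the LAHCM of the paper adds exact MIP steps.
import random

def lahc(initial, cost, neighbour, history_len=50, iterations=10_000):
    current, best = initial, initial
    f_curr = f_best = cost(initial)
    history = [f_curr] * history_len          # late-acceptance list
    for k in range(iterations):
        cand = neighbour(current)
        f_cand = cost(cand)
        v = k % history_len
        if f_cand <= history[v] or f_cand <= f_curr:   # LAHC acceptance rule
            current, f_curr = cand, f_cand
            if f_cand < f_best:
                best, f_best = cand, f_cand
        history[v] = f_curr
    return best, f_best

# toy usage: minimise (x - 17)^2 over the integers with random +/-1 moves
best, val = lahc(0,
                 cost=lambda x: (x - 17) ** 2,
                 neighbour=lambda x: x + random.choice((-1, 1)))
print(best, val)
```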

Article
Short-Term Wind Speed Forecasting Using Statistical and Machine Learning Methods
Algorithms 2020, 13(6), 132; https://doi.org/10.3390/a13060132 - 26 May 2020
Cited by 12 | Viewed by 3551
Abstract
Wind offers an environmentally sustainable energy resource that has seen increasing global adoption in recent years. However, its intermittent, unstable and stochastic nature hampers its representation among other renewable energy sources. This work addresses the forecasting of wind speed, a primary input needed for wind energy generation, using data obtained from the South African Wind Atlas Project. Forecasting is carried out on a two-days-ahead time horizon. We investigate the predictive performance of artificial neural networks (ANN) trained with Bayesian regularisation, decision-tree-based stochastic gradient boosting (SGB) and generalised additive models (GAMs). The results of the comparative analysis suggest that ANN displays superior predictive performance based on root mean square error (RMSE), whereas SGB performs best in terms of mean absolute error (MAE) and the related mean absolute percentage error (MAPE). A further comparison of two forecast combination methods, linear and additive quantile regression averaging, shows the latter yielding lower prediction accuracy. The additive quantile regression averaging based prediction intervals also show outperformance in terms of validity, reliability, quality and accuracy. Among interval combination methods, the median method performs better than its pure average counterpart. Point forecast combination and interval forecasting methods are found to improve forecast performance. Full article

Article
A Survey of Low-Rank Updates of Preconditioners for Sequences of Symmetric Linear Systems
Algorithms 2020, 13(4), 100; https://doi.org/10.3390/a13040100 - 21 Apr 2020
Cited by 11 | Viewed by 2934
Abstract
The aim of this survey is to review some recent developments in devising efficient preconditioners for sequences of symmetric positive definite (SPD) linear systems A_k x_k = b_k, k = 1, 2, ..., arising in many scientific applications, such as discretization of transient Partial Differential Equations (PDEs), solution of eigenvalue problems, (Inexact) Newton methods applied to nonlinear systems, and rational Krylov methods for computing a function of a matrix. In this paper, we will analyze a number of techniques for updating a given initial preconditioner by a low-rank matrix with the aim of improving the clustering of eigenvalues around 1, in order to speed up the convergence of the Preconditioned Conjugate Gradient (PCG) method. We will also review some techniques to efficiently approximate the linearly independent vectors which constitute the low-rank corrections and whose choice is crucial for the effectiveness of the approach. Numerical results on real-life applications show that the performance of a given iterative solver can be very much enhanced by the use of low-rank updates. Full article
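Schematically, the updates surveyed take an initial preconditioner P_0 and correct it with a low-rank term chosen to push the preconditioned spectrum toward 1; the notation below is a generic sketch rather than any one method from the survey.

```latex
% Generic low-rank update of an initial SPD preconditioner P_0 for the k-th system
P_k \;=\; P_0 \;+\; U_k C_k U_k^{\top}, \qquad U_k \in \mathbb{R}^{n \times p},\; p \ll n,
```

with U_k and C_k chosen so that the eigenvalues of P_k A_k cluster around 1 before PCG is applied to the k-th system.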

Article
How to Identify Varying Lead–Lag Effects in Time Series Data: Implementation, Validation, and Application of the Generalized Causality Algorithm
Algorithms 2020, 13(4), 95; https://doi.org/10.3390/a13040095 - 16 Apr 2020
Cited by 3 | Viewed by 3981
Abstract
This paper develops the generalized causality algorithm and applies it to a multitude of data from the fields of economics and finance. Specifically, our parameter-free algorithm efficiently determines the optimal non-linear mapping and identifies varying lead–lag effects between two given time series. This procedure allows an elastic adjustment of the time axis to find similar but phase-shifted sequences; structural breaks in their relationship are also captured. A large-scale simulation study validates the outperformance in the vast majority of parameter constellations in terms of efficiency, robustness, and feasibility. Finally, the presented methodology is applied to real data from the areas of macroeconomics, finance, and metals. The highest similarity is shown by the pairs of gross domestic product and consumer price index (macroeconomics), the S&P 500 index and Deutscher Aktienindex (finance), as well as gold and silver (metals). In addition, the algorithm takes full advantage of its flexibility and identifies both various structural breaks and regime patterns over time, which are (partly) well documented in the literature. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications)

Article
Beyond Newton: A New Root-Finding Fixed-Point Iteration for Nonlinear Equations
Algorithms 2020, 13(4), 78; https://doi.org/10.3390/a13040078 - 29 Mar 2020
Cited by 2 | Viewed by 3920
Abstract
Finding roots of equations is at the heart of most computational science. A well-known and widely used iterative algorithm is Newton’s method. However, its convergence depends heavily on the initial guess, with poor choices often leading to slow convergence or even divergence. In this short note, we seek to enlarge the basin of attraction of the classical Newton’s method. The key idea is to develop a relatively simple multiplicative transform of the original equations, which leads to a reduction in nonlinearity, thereby alleviating the limitation of Newton’s method. Based on this idea, we derive a new class of iterative methods and rediscover Halley’s method as the limit case. We present the application of these methods to several mathematical functions (real, complex, and vector equations). Across all examples, our numerical experiments suggest that the new methods converge for a significantly wider range of initial guesses. For scalar equations, the increase in computational cost per iteration is minimal. For vector functions, more extensive analysis is needed to compare the increase in cost per iteration and the improvement in convergence of specific problems. Full article
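For context, the two classical endpoints referenced in the abstract are Newton's method and Halley's method; the paper's family of iterations sits between them, but only the standard textbook formulas are reproduced here.

```latex
% Classical iterations for solving f(x) = 0
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \quad\text{(Newton)}, \qquad
x_{n+1} = x_n - \frac{2\,f(x_n)\,f'(x_n)}{2\,f'(x_n)^2 - f(x_n)\,f''(x_n)} \quad\text{(Halley)}.
```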

Review
Energy Efficient Routing in Wireless Sensor Networks: A Comprehensive Survey
Algorithms 2020, 13(3), 72; https://doi.org/10.3390/a13030072 - 24 Mar 2020
Cited by 67 | Viewed by 9010
Abstract
Wireless Sensor Networks (WSNs) are among the fastest emerging technologies, thanks to their great capabilities and their ever-growing range of applications. However, the lifetime of WSNs is extremely restricted due to the limited energy capacity of their sensor nodes. This is why energy conservation is considered the most important research concern for WSNs. Radio communication is the most energy-consuming function in a WSN; thus, energy-efficient routing is needed to save energy and prolong the lifetime of WSNs. For this reason, numerous protocols for energy-efficient routing in WSNs have been proposed. This article offers an analytical and up-to-date survey of protocols of this kind. The classic and modern protocols presented are categorized depending on (i) how the network is structured, (ii) how data are exchanged, (iii) whether location information is used or not, and (iv) whether Quality of Service (QoS) or multiple paths are supported or not. In each category, protocols are both described and compared in terms of specific performance metrics, while their advantages and disadvantages are discussed. Finally, the study findings are discussed, concluding remarks are drawn, and open research issues are indicated. Full article

Article
Observability of Uncertain Nonlinear Systems Using Interval Analysis
Algorithms 2020, 13(3), 66; https://doi.org/10.3390/a13030066 - 16 Mar 2020
Cited by 8 | Viewed by 3402
Abstract
In the field of control engineering, the observability of uncertain nonlinear systems is often neglected and not examined, because of the complex analytical calculations required for its verification. Therefore, the aim of this work is to provide an algorithm that numerically analyzes the observability of nonlinear systems described by finite-dimensional, continuous-time sets of ordinary differential equations. The algorithm is based on definitions of distinguishability and local observability using a rank check, from which conditions are deduced. The only requirements are the uncertain model equations of the system. Furthermore, the methodology verifies observability of nonlinear systems on a given state space. In case the state space is not fully observable, the algorithm provides the observable set of states. In addition, the results obtained by the algorithm allow insight into why the remaining states cannot be distinguished. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control)
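The rank check mentioned in the abstract is, in spirit, the classical nonlinear observability rank condition built from Lie derivatives of the output map; the uncertain (interval) version used in the paper extends this idea, but the nominal condition reads:

```latex
% Local observability rank condition for \dot{x} = f(x), y = h(x), x \in \mathbb{R}^n
q(x) = \begin{pmatrix} h(x) \\ L_f h(x) \\ \vdots \\ L_f^{\,n-1} h(x) \end{pmatrix},
\qquad
\operatorname{rank}\,\frac{\partial q}{\partial x}(x_0) = n
\;\Longrightarrow\; \text{the system is locally observable at } x_0.
```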

Article
Optimization of Constrained Stochastic Linear-Quadratic Control on an Infinite Horizon: A Direct-Comparison Based Approach
Algorithms 2020, 13(2), 49; https://doi.org/10.3390/a13020049 - 24 Feb 2020
Viewed by 3019
Abstract
In this paper we study the optimization of the discrete-time stochastic linear-quadratic (LQ) control problem with conic control constraints on an infinite horizon, considering multiplicative noises. Stochastic control systems can be formulated as Markov Decision Problems (MDPs) with continuous state spaces and therefore we can apply the direct-comparison based optimization approach to solve the problem. We first derive the performance difference formula for the LQ problem by utilizing the state separation property of the system structure. Based on this, we successfully derive the optimality conditions and the stationary optimal feedback control. By introducing the optimization, we establish a general framework for infinite horizon stochastic control problems. The direct-comparison based approach is applicable to both linear and nonlinear systems. Our work provides a new perspective in LQ control problems; based on this approach, learning based algorithms can be developed without identifying all of the system parameters. Full article

Article
Optimal Learning and Self-Awareness Versus PDI
Algorithms 2020, 13(1), 23; https://doi.org/10.3390/a13010023 - 11 Jan 2020
Cited by 27 | Viewed by 5128
Abstract
This manuscript explores and analyzes the effects of different paradigms for the control of rigid body motion mechanics. The experimental setup includes deterministic artificial intelligence composed of optimal self-awareness statements together with a novel, optimal learning algorithm, re-parameterized as ideal nonlinear feedforward and feedback and evaluated within a Simulink simulation. Comparison is made to a custom proportional, derivative, integral controller (a modified version of classical proportional-integral-derivative control) implemented as feedback control with a specific term to account for the nonlinear coupled motion. Consistent proportional, derivative, and integral gains were used throughout the experiments. The simulation results show that, akin to feedforward control, deterministic self-awareness statements lack an error correction mechanism and instead rely on learning (which stands in place of feedback control). The proposed combination of optimal self-awareness statements and a newly demonstrated analytically optimal learning yielded the highest accuracy with the lowest execution time, highlighting the potential effectiveness of a learning control system. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2019)
