
Table of Contents

Algorithms, Volume 13, Issue 6 (June 2020) – 24 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Cover Story: Geophysical inversion can estimate the spatial distributions of physical subsurface properties, [...]
Open Access Article
An Algorithm for Fuzzy Negations-Based Intuitionistic Fuzzy Copula Aggregation Operators in Multiple Attribute Decision Making
Algorithms 2020, 13(6), 154; https://doi.org/10.3390/a13060154 - 26 Jun 2020
Abstract
In this paper, we develop a novel computation model of Intuitionistic Fuzzy Values using fuzzy negations and Archimedean copulas. The structure of this computation model is based on extending the existing operations of intuitionistic fuzzy values with some classes of fuzzy negations. Many properties of the proposed operations are investigated and proved. Additionally, we introduce the concepts of intuitionistic fuzzy Archimedean copula weighted arithmetic and geometric aggregation operators based on fuzzy negations, together with a further analysis of their properties. Finally, a case study from a previously published paper illustrates the advantages of our method.

Open Access Article
Novel Graph Model for Solving Collision-Free Multiple-Vehicle Traveling Salesman Problem Using Ant Colony Optimization
Algorithms 2020, 13(6), 153; https://doi.org/10.3390/a13060153 - 26 Jun 2020
Abstract
In this paper, a novel graph model for solving the Collision-Free Multiple Traveling Salesman Problem (CFMTSP) is proposed. In this problem, a group of vehicles start from different nodes of an undirected graph and must visit each node of the graph, in the fashion of the well-known Traveling Salesman Problem (TSP), without any collision. The main objective is to obtain collision-free routes for each vehicle while minimizing the traveling time of the slowest vehicle. This problem can be approached by assigning a speed to each vehicle, which a novel augmented graph model makes possible. The proposed approach accommodates not only the positions of the nodes and the inter-node distances, but also the speeds of the vehicles. The augmented graph can be used to obtain optimal trajectories, i.e., routes and speeds, for all vehicles. An ant colony optimization (ACO) algorithm is applied to the proposed augmented graph. Simulations show that the algorithm satisfies the main objective. Factors affecting mission success, i.e., the inter-vehicle arrival time at a node, the number of vehicles, and the numbers of nodes and edges of the graph, are also discussed.
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
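To make the ACO mechanics concrete, here is a minimal sketch of ant colony optimization on a plain TSP distance matrix. This is not the paper's augmented-graph CFMTSP model; it only illustrates the underlying ACO loop (pheromone-biased tour construction, then evaporation and reinforcement). All parameter values are illustrative assumptions.

```python
import math
import random

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal ACO for TSP on a symmetric distance matrix."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone levels
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                cand = sorted(unvisited)
                # Edge attractiveness: pheromone^alpha * (1/distance)^beta
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate, then reinforce edges used by each ant.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len

# Toy usage: four cities on a unit square; the optimal tour has length 4.
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
best_tour, best_len = aco_tsp(dist)
```

The paper's contribution is the graph the ants run on (augmented with speeds to rule out collisions), not the ACO loop itself.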

Open Access Article
DS Evidence Theory-Based Energy Balanced Routing Algorithm for Network Lifetime Enhancement in WSN-Assisted IoT
Algorithms 2020, 13(6), 152; https://doi.org/10.3390/a13060152 - 24 Jun 2020
Abstract
Wireless sensor networks (WSNs) can provide data acquisition for long-term environmental monitoring and are an important part of the Internet of Things (IoT). In WSN-assisted IoT, energy-efficient routing algorithms are required to maintain a long network lifetime. In this paper, a DS evidence theory-based energy balanced routing algorithm for network lifetime enhancement (EBRA-NLE) in WSN-assisted IoT is proposed. From the perspective of energy balance and minimization of routing path energy consumption, three attribute indexes are established to evaluate the forward neighboring nodes. A route selection method based on DS evidence theory is then developed to comprehensively evaluate the nodes and select the optimal next hop. To avoid missing the ideal solution because of excessive differences between the index values, the sine function is used to adjust these differences. Simulation results show that the proposed EBRA-NLE has clear advantages in prolonging network lifetime and balancing energy between nodes.
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
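The core of Dempster-Shafer (DS) evidence theory used here to fuse per-attribute evaluations of candidate next hops is Dempster's rule of combination. A minimal sketch, with attribute names and mass values that are illustrative assumptions, not taken from the paper:

```python
# Mass functions map subsets of the frame of discernment (frozensets of
# candidate next hops) to belief mass; masses over each function sum to 1.

def combine(m1, m2):
    """Combine two mass functions by Dempster's rule."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb       # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    k = 1.0 - conflict                    # normalization constant
    return {s: m / k for s, m in combined.items()}

# Two pieces of evidence over candidate next hops {A, B}:
A, B = frozenset("A"), frozenset("B")
both = A | B
m_energy = {A: 0.6, B: 0.1, both: 0.3}    # residual-energy attribute
m_dist   = {A: 0.5, B: 0.2, both: 0.3}    # distance-to-sink attribute
fused = combine(m_energy, m_dist)          # fused evidence favors hop A
```

Fusing the three attribute indexes this way yields a single ranking from which the optimal next hop is selected.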

Open Access Article
Compression of Next-Generation Sequencing Data and of DNA Digital Files
Algorithms 2020, 13(6), 151; https://doi.org/10.3390/a13060151 - 24 Jun 2020
Abstract
The memory usage and network traffic caused by newly sequenced biological data have grown dramatically in recent years. Genomic projects such as HapMap and 1000 Genomes have contributed to the very large rise of databases and network traffic related to genomic data and to the development of new efficient technologies. Large-scale DNA sequencing has attracted new attention and produced new research, and the interest of the scientific community in genomic data has thus greatly increased. In a very short time, researchers have developed hardware tools, analysis software, algorithms, private databases, and infrastructures to support research in genomics. In this paper, we analyze different approaches for compressing digital files generated by Next-Generation Sequencing tools containing nucleotide sequences, and we evaluate the compression performance of generic compression algorithms by comparing them with Quip, a system designed by Jones et al. specifically for genomic file compression. Moreover, we present a simple but effective technique for the compression of DNA sequences that considers only the relevant DNA data, and we evaluate its performance experimentally.
(This article belongs to the Special Issue 2020 Selected Papers from Algorithms Editorial Board Members)
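The simplest DNA-specific idea such techniques build on: clean nucleotide data uses only the alphabet {A, C, G, T}, so each base fits in 2 bits instead of 8, a fixed 4:1 reduction before any general-purpose compressor runs. A sketch of the principle (this is an illustration, not the paper's actual codec):

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    """Pack a nucleotide string into bytes, 4 bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        byte = 0
        for ch in chunk:
            byte = (byte << 2) | CODE[ch]
        byte <<= 2 * (4 - len(chunk))   # left-align a final partial group
        out.append(byte)
    return bytes(out)

def unpack(data, n_bases):
    """Inverse of pack(); n_bases tells us where the padding starts."""
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASE[(byte >> shift) & 0b11])
    return "".join(seq[:n_bases])

seq = "ACGTACGTTGCA"
packed = pack(seq)
assert unpack(packed, len(seq)) == seq
assert len(packed) == 3          # 12 bases -> 3 bytes, a 4x reduction
```

Real FASTQ files also carry headers and quality scores, which is why dedicated tools such as Quip treat those streams separately.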

Open Access Article
Fibers of Failure: Classifying Errors in Predictive Processes
Algorithms 2020, 13(6), 150; https://doi.org/10.3390/a13060150 - 23 Jun 2020
Abstract
Predictive models are used in many different fields of science and engineering and are always prone to making faulty predictions. These faulty predictions can be more or less malignant depending on the model application. We describe fibers of failure (FiFa), a method for classifying the failure modes of predictive processes. Our method uses Mapper, an algorithm from topological data analysis (TDA), to build a graphical model of input data stratified by prediction error. We demonstrate two ways to use the failure-mode groupings: either to produce a correction layer that adjusts predictions by similarity to the failure modes, or to inspect members of the failure modes in order to illustrate and investigate what characterizes each of them. We demonstrate FiFa in two scenarios: a convolutional neural network (CNN) predicting MNIST images with added noise, and an artificial neural network (ANN) predicting the electrical energy consumption of an electric arc furnace (EAF). The correction layer on the CNN model improved its prediction accuracy significantly, while the inspection of failure modes for the EAF model provided guiding insights into the domain-specific reasons behind several high-error regions.
(This article belongs to the Special Issue Topological Data Analysis)

Open Access Article
A Distributed Approach to the Evasion Problem
Algorithms 2020, 13(6), 149; https://doi.org/10.3390/a13060149 - 23 Jun 2020
Abstract
The Evasion Problem is the question of whether, given a collection of sensors and a particular movement pattern over time, it is possible to stay undetected within the domain over the same stretch of time. It has been studied using topological techniques since 2006: sufficient conditions for the non-existence of an evasion path were provided by de Silva and Ghrist; necessary and sufficient conditions with extended sensor capabilities were provided by Adams and Carlsson; and necessary and sufficient conditions using sheaf theory were provided by Krishnan and Ghrist. In this paper, we propose three algorithms for the Evasion Problem: a distributed algorithmic extension of Adams' approach for evasion path detection, and two different approaches to evasion path enumeration.
(This article belongs to the Special Issue Topological Data Analysis)

Open Access Article
An Application of a Modified Gappy Proper Orthogonal Decomposition on Complexity Reduction of Allen-Cahn Equation
Algorithms 2020, 13(6), 148; https://doi.org/10.3390/a13060148 - 22 Jun 2020
Abstract
This work considers model reduction techniques that can substantially decrease the computational cost of simulating the parametrized Allen–Cahn equation. We first employ the proper orthogonal decomposition (POD) approach to reduce the number of unknowns in the full-order discretized system. Since POD cannot reduce the computational complexity of the nonlinearity in the Allen–Cahn equation, we also apply the discrete empirical interpolation method (DEIM) to approximate the nonlinear term for a substantial reduction in overall simulation time. However, in general, the POD-DEIM approach is less accurate than the POD approach, since it further approximates the nonlinear term. To increase the accuracy of the POD-DEIM approach, this work introduces an extension of the DEIM approximation based on the concept of Gappy POD (GPOD), which is optimal in the least-squares sense. The POD-GPOD approach is tested and compared with the POD and POD-DEIM approaches on the Allen–Cahn equation for both fixed and varying parameter values. The modified GPOD approximation introduced in this work is demonstrated to improve the accuracy of DEIM without sacrificing too much of the computational speedup; e.g., in one of our numerical tests, the POD-GPOD approach provides an approximate solution to the parametrized Allen–Cahn equation 200 times faster than the full-order system, with an average error of order O(10⁻⁴). The POD-GPOD approach is therefore shown to be a promising technique that compromises between the accuracy of the POD approach and the efficiency of the POD-DEIM approach.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
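The POD step described above can be sketched in a few lines: collect snapshots of the full-order solution, take an SVD, and keep the leading left singular vectors as a reduced basis. The snapshot data below is synthetic (the Allen–Cahn solver itself is not reproduced), so the sizes and modes are illustrative assumptions.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Return the r leading POD modes of an (n_dof x n_snapshots) matrix."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
# Synthetic snapshots: two smooth spatial modes with random amplitudes,
# so the snapshot matrix is (numerically) rank 2.
snaps = np.column_stack([
    rng.standard_normal() * np.sin(np.pi * x)
    + rng.standard_normal() * np.sin(2 * np.pi * x)
    for _ in range(30)
])
V = pod_basis(snaps, r=2)
# Project snapshots onto the reduced basis and reconstruct.
recon = V @ (V.T @ snaps)
err = np.linalg.norm(snaps - recon) / np.linalg.norm(snaps)
```

In a reduced-order model, the governing equations are then Galerkin-projected onto V; DEIM/GPOD additionally approximate the nonlinear term so it is evaluated at only a few interpolation points.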

Open Access Article
Local Comparison between Two Ninth Convergence Order Algorithms for Equations
Algorithms 2020, 13(6), 147; https://doi.org/10.3390/a13060147 - 20 Jun 2020
Abstract
A local convergence comparison is presented between two ninth-order algorithms for solving nonlinear equations. In earlier studies, derivatives up to the tenth order, which do not appear in the algorithms, were utilized to show convergence. Moreover, no error estimates, radius of convergence, or computable results on the uniqueness of the solution were given. The novelty of our study is that we address all these concerns by using only the first derivative, which actually appears in these algorithms; this is how we extend their applicability. Our technique provides a direct comparison between these algorithms under the same set of convergence criteria, and it can be used on other algorithms. Numerical experiments are used to test the convergence criteria.
Open Access Article
A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms
Algorithms 2020, 13(6), 146; https://doi.org/10.3390/a13060146 - 19 Jun 2020
Abstract
Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area from both the algorithmic and hardness perspectives, with emphasis on new techniques and potential future research directions.
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)
Open Access Article
Binary Time Series Classification with Bayesian Convolutional Neural Networks When Monitoring for Marine Gas Discharges
Algorithms 2020, 13(6), 145; https://doi.org/10.3390/a13060145 - 19 Jun 2020
Abstract
The world's oceans are under stress from climate change, acidification and other human activities, and the UN has declared 2021–2030 as the decade for marine science. To monitor marine waters with the purpose of detecting discharges of tracers from unknown locations, large areas will need to be covered with limited resources. To increase the detectability of marine gas seepage, we propose a deep probabilistic learning algorithm, a Bayesian Convolutional Neural Network (BCNN), to classify time series of measurements. The BCNN classifies a time series as belonging to a leak or no-leak situation, including the classification uncertainty. The latter is important for decision makers who must decide whether to initiate costly confirmation surveys and, hence, would like to avoid false positives. Results from a transport model are used for the learning process of the BCNN, and the task is to distinguish the signal of a leak hidden within the natural variability. We show that the BCNN classifies time series arising from leaks with high accuracy and estimates the associated uncertainty. We combine the output of the BCNN model, the posterior predictive distribution, with a Bayesian decision rule, showcasing how the framework can be used in practice to make optimal decisions based on a given cost function.
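The Bayesian decision rule mentioned at the end has a very compact form: given the posterior probability of a leak and a cost for each kind of error, alarm only when the expected cost of alarming is lower than that of staying silent. The cost values below are illustrative assumptions, not the paper's.

```python
def decide(p_leak, c_false_alarm=1.0, c_missed_leak=20.0):
    """Return True (raise alarm) iff the expected cost of alarming is lower."""
    cost_alarm = (1.0 - p_leak) * c_false_alarm   # alarm, but no leak
    cost_silent = p_leak * c_missed_leak          # stay silent, miss a leak
    return cost_alarm < cost_silent

# With these costs the alarm threshold is c_fa / (c_fa + c_ml) = 1/21,
# i.e., even a modest posterior leak probability justifies a survey.
```

The BCNN's contribution is supplying a calibrated `p_leak` (with uncertainty) for this rule to act on.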

Open Access Article
Efficient Probabilistic Joint Inversion of Direct Current Resistivity and Small-Loop Electromagnetic Data
Algorithms 2020, 13(6), 144; https://doi.org/10.3390/a13060144 - 18 Jun 2020
Abstract
Often, multiple geophysical measurements are sensitive to the same subsurface parameters. In this case, joint inversions are mostly preferred over two (or more) separate inversions of the geophysical data sets, due to the expected reduction of non-uniqueness in the joint inverse solution. This reduction can be quantified using Bayesian inversions; however, standard Markov chain Monte Carlo (MCMC) approaches are computationally expensive for most geophysical inverse problems. We present the Kalman ensemble generator (KEG) method as an efficient alternative to standard MCMC inversion approaches. As a proof of concept, we provide two synthetic studies of joint inversion of frequency domain electromagnetic (FDEM) and direct current (DC) resistivity data for a parameter model with vertical variation in electrical conductivity. For both studies, the joint results show a considerable improvement over the separate inversions. This improvement consists of (1) an uncertainty reduction in the posterior probability density function and (2) an ensemble mean that is closer to the synthetic true electrical conductivities. Finally, we apply the KEG joint inversion to FDEM and DC resistivity field data; the joint field data inversions improve in the same way as seen for the synthetic studies.
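The idea behind ensemble Kalman methods such as the KEG can be sketched briefly: instead of MCMC sampling, an ensemble of parameter vectors is shifted toward the data using covariances estimated from the ensemble itself. The linear forward operator and all sizes below are toy assumptions, not the paper's FDEM/DC setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_par, n_obs = 500, 3, 5

G = rng.standard_normal((n_obs, n_par))       # toy linear forward operator
true_m = np.array([1.0, -0.5, 2.0])
sigma = 0.1                                   # observation noise std
d_obs = G @ true_m + rng.normal(0.0, sigma, n_obs)

M = rng.standard_normal((n_par, n_ens))       # prior ensemble (N(0, I))
D = G @ M                                     # predicted data per member

# Ensemble estimates of the parameter-data and data-data covariances.
Mc = M - M.mean(axis=1, keepdims=True)
Dc = D - D.mean(axis=1, keepdims=True)
C_md = Mc @ Dc.T / (n_ens - 1)
C_dd = Dc @ Dc.T / (n_ens - 1)
R = sigma**2 * np.eye(n_obs)

K = C_md @ np.linalg.inv(C_dd + R)            # Kalman gain
# Perturb observations per member, then update the whole ensemble at once.
D_pert = d_obs[:, None] + rng.normal(0.0, sigma, (n_obs, n_ens))
M_post = M + K @ (D_pert - D)

post_mean = M_post.mean(axis=1)               # close to true_m
```

One such update replaces thousands of MCMC forward evaluations, which is where the efficiency gain over standard MCMC comes from; for nonlinear forward models the update is only approximate.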

Open Access Article
Numerically Efficient Fuzzy MPC Algorithm with Advanced Generation of Prediction—Application to a Chemical Reactor
Algorithms 2020, 13(6), 143; https://doi.org/10.3390/a13060143 - 14 Jun 2020
Abstract
In Model Predictive Control (MPC) algorithms, control signals are generated by solving optimization problems. If the model used for prediction is linear, then the optimization problem is a standard, easy-to-solve quadratic programming problem with linear constraints. However, such an algorithm may offer insufficient performance if applied to a nonlinear control plant. On the other hand, if the model used for prediction is nonlinear, then a non-convex optimization problem must be solved at each algorithm iteration; numerical problems may then occur while solving it, and the time needed to calculate the control signals cannot be determined in advance. Therefore, approaches based on linearized models are preferred in practical applications. A fuzzy algorithm with an advanced generation of the prediction is proposed in this article. The prediction is obtained in such a way that the algorithm is formulated as a quadratic optimization problem, yet it offers performance very close to that of the MPC algorithm with nonlinear optimization. The efficiency of the proposed approach is demonstrated in the control system of a nonlinear chemical plant, a CSTR (Continuous Stirred-Tank Reactor) with the van de Vusse reaction.
(This article belongs to the Special Issue Model Predictive Control: Algorithms and Applications)

Open Access Article
Parallelized Swarm Intelligence Approach for Solving TSP and JSSP Problems
Algorithms 2020, 13(6), 142; https://doi.org/10.3390/a13060142 - 12 Jun 2020
Abstract
One of the possible approaches to solving difficult optimization problems is applying population-based metaheuristics. Among such metaheuristics there is a special class in which the search for the best solution is based on the collective behavior of decentralized, self-organized agents. This study proposes an approach in which a swarm of agents tries to improve solutions from a population of solutions, with the process carried out in parallel threads. The proposed algorithm, based on the mushroom-picking metaphor, was implemented in Scala in an Apache Spark environment. An extended computational experiment shows how combining simple optimization agents and increasing the number of threads can improve the results obtained by the model on TSP and JSSP problems.
(This article belongs to the Special Issue Swarm Intelligence Applications for NP Hard Challenges)

Open Access Article
Dynamic Ring Exploration with (H,S) View
Algorithms 2020, 13(6), 141; https://doi.org/10.3390/a13060141 - 12 Jun 2020
Abstract
Research on mobile entities (called agents) on dynamic networks has attracted a lot of attention in recent years. Exploration, which requires an agent to visit all the nodes in the network, is one of the most fundamental problems. While the exploration of dynamic networks with complete information or with no information about network changes has been studied, an agent with partial information about the network changes had not been considered yet, despite its practical importance. In this paper, we consider the exploration of dynamic networks by a single agent with partial information about network changes; to the best of our knowledge, this is the first work to investigate the exploration problem with such partial information. As a first step in this research direction, we focus on 1-interval connected rings as dynamic networks. We assume that the single agent has partial information called the (H, S) view, by which it always knows whether or not each of the links within H hops is available in each of the next S time steps. In this setting, we show that H + S ≥ n and S ≥ n/2 (where n is the size of the network) are necessary and sufficient conditions to explore 1-interval connected rings. Moreover, we investigate upper and lower bounds on the exploration time: it is proven that the exploration time is O(n²) for n/2 ≤ S < 2H′ − 1, O(n²/H′ + nH′) for S ≥ max(n/2, 2H′ − 1), O(n²/H′ + n log H′) for S ≥ n − 1, and Ω(n²/H′) for any S, where H′ = min(H, n/2).
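The feasibility condition stated in the abstract is simple enough to restate as a predicate. This is purely a restatement of the claimed bounds (with n/2 taken as a real-valued threshold), not part of the paper's algorithms:

```python
def explorable(n, H, S):
    """Exploration of a 1-interval connected ring of size n with an
    (H, S) view is possible iff H + S >= n and S >= n/2."""
    return H + S >= n and S >= n / 2

assert explorable(10, 5, 5)       # both conditions met
assert not explorable(10, 6, 4)   # S below n/2
assert not explorable(10, 2, 7)   # H + S below n
```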

Open Access Editorial
Special Issue on Ensemble Learning and Applications
Algorithms 2020, 13(6), 140; https://doi.org/10.3390/a13060140 - 11 Jun 2020
Abstract
During the last decades, in the area of machine learning and data mining, the development of ensemble methods has gained significant attention from the scientific community. Machine learning ensemble methods combine multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Combining multiple learning models has been theoretically and experimentally shown to provide significantly better performance than their single base learners. In the literature, ensemble learning algorithms constitute a dominant, state-of-the-art approach for obtaining maximum performance, and they have therefore been applied to a variety of real-world problems, ranging from face and emotion recognition through text classification and medical diagnosis to financial forecasting.
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
Open Access Article
Optimization Algorithms for Detection of Social Interactions
Algorithms 2020, 13(6), 139; https://doi.org/10.3390/a13060139 - 11 Jun 2020
Abstract
Community detection is one of the most challenging and interesting problems in many research areas. Being able to detect highly linked communities in a network can lead to many benefits, such as understanding relationships between entities or interactions between biological genes, for instance. Two different immunological algorithms have been designed for this problem, called Opt-IA and Hybrid-IA. The main difference between the two algorithms lies in the search strategy and the related immunological operators developed: the former carries out a random search with purely stochastic operators, while the latter is based on a deterministic local search that tries to refine and improve the current solutions discovered. The robustness of Opt-IA and Hybrid-IA has been assessed on several real social networks. The same networks have also been used to compare both algorithms with seven other metaheuristics and with the well-known greedy optimization Louvain algorithm. The experimental analysis conducted proves that Opt-IA and Hybrid-IA are reliable optimization methods for community detection, outperforming all the compared algorithms.
(This article belongs to the Special Issue Algorithms for Graphs and Networks)

Open Access Article
Late Acceptance Hill-Climbing Matheuristic for the General Lot Sizing and Scheduling Problem with Rich Constraints
Algorithms 2020, 13(6), 138; https://doi.org/10.3390/a13060138 - 09 Jun 2020
Abstract
This paper considers the general lot sizing and scheduling problem with rich constraints, exemplified by means of rework and lifetime constraints for defective items (GLSP-RP), which finds numerous applications in industrial settings, for example, the food processing industry and the pharmaceutical industry. To address this problem, we propose the Late Acceptance Hill-Climbing Matheuristic (LAHCM) as a novel solution framework that exploits and integrates the late acceptance hill-climbing algorithm and exact approaches to speed up the solution process compared to solving the problem with a general-purpose solver. The computational results show the benefits of incorporating exact approaches within the LAHCM template, leading to high-quality solutions within short computational times.
(This article belongs to the Special Issue Optimization Algorithms for Allocation Problems)
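Plain late acceptance hill-climbing (LAHC), the metaheuristic at the core of the LAHCM framework, is worth sketching: a candidate is accepted if it beats either the current cost or the cost recorded L steps ago, which lets the search escape shallow local optima. The toy objective below is an illustrative assumption, not the GLSP-RP.

```python
import random

def lahc(init, cost, neighbor, history_len=50, n_iters=5000, seed=0):
    """Minimize cost() starting from init, using LAHC acceptance."""
    rng = random.Random(seed)
    cur = init
    cur_cost = cost(cur)
    hist = [cur_cost] * history_len           # the "late acceptance" list
    best, best_cost = cur, cur_cost
    for it in range(n_iters):
        cand = neighbor(cur, rng)
        cand_cost = cost(cand)
        v = it % history_len
        # Accept if not worse than now, OR not worse than L steps ago.
        if cand_cost <= cur_cost or cand_cost <= hist[v]:
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        hist[v] = cur_cost                    # record current cost
    return best, best_cost

# Toy usage: minimize a bumpy 1-D function over the integers.
f = lambda x: (x - 37) ** 2 + 10 * (x % 5)
step = lambda x, rng: x + rng.choice([-3, -2, -1, 1, 2, 3])
best, best_cost = lahc(0, f, step)
```

In the matheuristic, `neighbor` is replaced by moves whose subproblems are solved exactly by a MIP solver, which is where the "exact approaches" enter.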

Open Access Article
Sparse Logistic Regression: Comparison of Regularization and Bayesian Implementations
Algorithms 2020, 13(6), 137; https://doi.org/10.3390/a13060137 - 08 Jun 2020
Abstract
In knowledge-based systems, besides obtaining good output prediction accuracy, it is crucial to understand the subset of input variables that have the most influence on the output, with the goal of gaining deeper insight into the underlying process. These requirements call for logistic model estimation techniques that provide a sparse solution, i.e., where coefficients associated with non-important variables are set to zero. In this work we compare the performance of two methods: the first is based on the well-known Least Absolute Shrinkage and Selection Operator (LASSO), which involves regularization with an ℓ1 norm; the second is the Relevance Vector Machine (RVM), which is based on a Bayesian implementation of the linear logistic model. The two methods are extensively compared on real and simulated datasets. Results show that, in general, the two approaches are comparable in terms of prediction performance. RVM outperforms the LASSO both in terms of structure recovery (estimation of the correct non-zero model coefficients) and in prediction accuracy as the dimensionality of the data increases. However, LASSO shows performance comparable to RVM when the dimensionality of the data is much higher than the number of samples, that is, p ≫ n.
(This article belongs to the Special Issue Classification and Regression in Machine Learning)
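The LASSO side of the comparison can be sketched directly: ℓ1-regularized logistic regression fitted by proximal gradient descent (ISTA), where the soft-thresholding step is what sets unimportant coefficients exactly to zero. The data is synthetic and the step size and λ are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def l1_logistic(X, y, lam=0.1, lr=0.1, n_iters=2000):
    """l1-penalized logistic regression via proximal gradient (ISTA)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iters):
        prob = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (prob - y) / n            # gradient of mean log-loss
        w = soft_threshold(w - lr * grad, lr * lam)
    return w

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:2] = [2.0, -2.0]                       # only 2 relevant inputs
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(float)

w_hat = l1_logistic(X, y)   # sparse: most irrelevant coefficients are 0
```

The RVM reaches sparsity differently, by learning a per-coefficient prior precision and pruning coefficients whose precision diverges.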

Open Access Article
Improved Convergence Speed of a DCD-Based Algorithm for Sparse Solutions
Algorithms 2020, 13(6), 136; https://doi.org/10.3390/a13060136 - 04 Jun 2020
Abstract
To solve a system of equations that needs few updates, such as a sparse system, the leading dichotomous coordinate descent (DCD) algorithm is better than the cyclic DCD algorithm because of its fast convergence. For sparse systems requiring a large number of updates, the cyclic DCD algorithm converges faster and reaches a lower error level than the leading DCD algorithm; however, the leading DCD algorithm converges faster in the initial updates. In this paper, we propose a combination of leading and cyclic DCD iterations, the leading-cyclic DCD algorithm, to improve the convergence speed of the cyclic DCD algorithm. The proposed algorithm involves two steps. First, by properly selecting the number of updates of the solution vector used in the leading DCD algorithm, a solution is obtained from the leading DCD algorithm. Second, taking the output of the leading DCD algorithm as the initial value, an improved soft output is generated by the cyclic DCD algorithm with a large number of iterations. Numerical results demonstrate that when the solution sparsity γ is in the interval [1/8, 6/8], the proposed leading-cyclic DCD algorithm outperforms both the existing cyclic and leading DCD algorithms for all iterations.
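For readers unfamiliar with the setting, here is a hedged sketch of cyclic coordinate descent for a linear system Rh = b with symmetric positive-definite R, which is the problem family DCD algorithms address. True DCD restricts updates to power-of-two step sizes so hardware needs no multipliers; here each coordinate is minimized exactly instead, which keeps only the cyclic-update structure visible.

```python
import numpy as np

def cyclic_cd(R, b, n_sweeps=500):
    """Solve R h = b (R symmetric positive-definite) by cycling over
    coordinates and minimizing exactly along each one."""
    n = len(b)
    h = np.zeros(n)
    r = b.copy()                     # residual r = b - R h
    for _ in range(n_sweeps):
        for k in range(n):           # cycle over coordinates
            delta = r[k] / R[k, k]   # exact minimizer along coordinate k
            h[k] += delta
            r -= delta * R[:, k]     # keep the residual up to date
    return h

# Toy usage: recover a known solution of a random SPD system.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 8))
R = A.T @ A + np.eye(8)              # SPD system matrix
h_true = rng.standard_normal(8)
b = R @ h_true
h = cyclic_cd(R, b)
```

A "leading" strategy would instead pick, at each step, the coordinate with the largest residual entry, which is exactly why it wins in the first few updates of a sparse solution.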

Open Access Feature Paper Article
A Recursive Least-Squares Algorithm for the Identification of Trilinear Forms
Algorithms 2020, 13(6), 135; https://doi.org/10.3390/a13060135 - 01 Jun 2020
Viewed by 590
Abstract
High-dimensional system identification problems can be efficiently addressed based on tensor decompositions and modelling. In this paper, we design a recursive least-squares (RLS) algorithm tailored for the identification of trilinear forms, namely RLS-TF. In our framework, the trilinear form is related to the decomposition of a third-order tensor (of rank one). The proposed RLS-TF algorithm acts on the individual components of the global impulse response, thus being efficient in terms of both performance and complexity. Simulation results indicate that the proposed solution outperforms not only the conventional RLS algorithm (which handles only the global impulse response), but also the previously developed trilinear counterparts based on the least-mean-squares algorithm. Full article
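The efficiency argument can be made concrete with a small sketch (not from the paper; dimensions and data are illustrative). A trilinear form's global impulse response is a rank-one third-order tensor, so only the three short components need to be identified:

```python
import numpy as np

rng = np.random.default_rng(1)
L1, L2, L3 = 4, 3, 2                       # lengths of the three components
h1 = rng.standard_normal(L1)
h2 = rng.standard_normal(L2)
h3 = rng.standard_normal(L3)

# Global impulse response: rank-one third-order tensor (outer product).
g = np.einsum('i,j,k->ijk', h1, h2, h3)

x = rng.standard_normal((L1, L2, L3))      # one multi-dimensional input sample
y = np.einsum('ijk,ijk->', g, x)           # output of the trilinear form

# Same output computed component-wise, as a component-based algorithm would:
y_parts = h1 @ np.einsum('ijk,j,k->i', x, h2, h3)
print(g.size, L1 + L2 + L3)                # 24 global vs 9 component parameters
```

Estimating L1 + L2 + L3 component coefficients instead of the L1 * L2 * L3 entries of the global response is what makes the component-wise approach attractive as the dimensions grow.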

Open Access Article
Study of Quasi-Static Magnetization with the Random-Field Ising Model
Algorithms 2020, 13(6), 134; https://doi.org/10.3390/a13060134 - 29 May 2020
Viewed by 599
Abstract
The topic of this paper is modeling based on Hamiltonian spin interactions. Preliminary studies on the identification of the quasi-static magnetizing field in a magnetic system were presented. The random-field Ising model was then used to simulate the simplified ferromagnetic structure. The validation of algorithms and simulation tests were carried out for 2D and 3D model spaces containing at least 10^6 unit cells. The research showed that the response of a slowly driven magnetic system did not depend on the external field sweep rate. Changes in the spatial magnetization of the lattice were very similar below a certain rate of the external field change known as the quasi-static boundary. The observed differences in the magnetization curves obtained under quasi-static conditions stemmed from the random nature of the molecular field and the avalanche-like magnetization process. Full article
(This article belongs to the Special Issue Algorithms for Diagnostics and Nondestructive Testing)
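The avalanche dynamics described above can be sketched with a toy zero-temperature random-field Ising model (a much smaller lattice than the paper's 10^6 unit cells; the lattice size, field range, and disorder strength `sigma` are illustrative assumptions):

```python
import numpy as np

def magnetization_sweep(L=32, sigma=1.5, seed=0, n_steps=60):
    """Zero-temperature random-field Ising model on an L x L periodic lattice:
    sweep the external field H upward and, at each fixed H, flip every spin
    whose local field becomes positive, letting avalanches relax fully."""
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, sigma, size=(L, L))    # quenched random fields
    s = -np.ones((L, L))                       # start fully magnetised down
    curve = []
    for H in np.linspace(-4.0, 4.0, n_steps):
        while True:                            # avalanche relaxation at fixed H
            nbr = (np.roll(s, 1, 0) + np.roll(s, -1, 0)
                   + np.roll(s, 1, 1) + np.roll(s, -1, 1))
            flip = (s < 0) & (nbr + h + H > 0)
            if not flip.any():
                break
            s[flip] = 1.0
        curve.append(s.mean())
    return np.array(curve)

m = magnetization_sweep()
print(m[0], m[-1])   # magnetization rises from near -1 to near +1
```

Because the field only sweeps upward, spins never flip back down, so the resulting magnetization curve is monotone; the step-like jumps in it are the avalanches.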

Open Access Article
Metric Embedding Learning on Multi-Directional Projections
Algorithms 2020, 13(6), 133; https://doi.org/10.3390/a13060133 - 29 May 2020
Viewed by 633
Abstract
Image-based instance recognition is a difficult problem, in some cases even for the human eye. While the latest developments in computer vision—mostly driven by deep learning—have shown that high-performance models for classification or categorization can be engineered, the problem of discriminating similar objects with a low number of samples remains challenging. Advances from multi-class classification carry over to object matching problems, as the feature extraction techniques are the same; nature-inspired multi-layered convolutional nets learn the representations, and the output of such a model maps them to a multidimensional encoding space. A metric-based loss brings embeddings of the same instance close to each other. While these solutions achieve high classification performance, their efficiency is limited by the memory cost of the large number of parameters, which grows with the input image size. Shrinking the input reduces the number of trainable parameters, but performance decreases. This drawback can be tackled by using compressed feature extraction, e.g., projections. In this paper, a multi-directional image projection transformation with fixed vector lengths (MDIPFL) is applied to one-shot recognition tasks, trained on Siamese and Triplet architectures. Results show that the MDIPFL-based approach achieves decent performance despite the significantly lower number of parameters. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms for Image Processing)
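The metric-based loss mentioned above can be illustrated with the standard triplet margin loss on which Triplet architectures are trained (a generic sketch; the embeddings and margin below are made-up values, not the paper's MDIPFL features):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss: pull same-instance embeddings together and push
    different-instance embeddings at least `margin` further apart."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0)

a = np.array([[0.0, 0.0]])   # anchor embedding
p = np.array([[0.1, 0.0]])   # same instance: close to the anchor
n = np.array([[1.0, 0.0]])   # different instance: far from the anchor
print(triplet_loss(a, p, n))   # satisfied triplet -> zero loss
```

A Siamese architecture works the same way but with pairs instead of triplets, contrasting one positive or one negative example against the anchor at a time.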

Open Access Article
Short-Term Wind Speed Forecasting Using Statistical and Machine Learning Methods
Algorithms 2020, 13(6), 132; https://doi.org/10.3390/a13060132 - 26 May 2020
Viewed by 710
Abstract
Wind offers an environmentally sustainable energy resource that has seen increasing global adoption in recent years. However, its intermittent, unstable and stochastic nature hampers its share among other renewable energy sources. This work addresses the forecasting of wind speed, a primary input needed for wind energy generation, using data obtained from the South African Wind Atlas Project. Forecasting is carried out on a two-days-ahead time horizon. We investigate the predictive performance of artificial neural networks (ANN) trained with Bayesian regularisation, decision-tree-based stochastic gradient boosting (SGB) and generalised additive models (GAMs). The results of the comparative analysis suggest that ANN displays superior predictive performance based on root mean square error (RMSE), whereas SGB performs best in terms of mean absolute error (MAE) and the related mean absolute percentage error (MAPE). A further comparison of two forecast combination methods, linear and additive quantile regression averaging, shows the latter yielding the lower prediction error. The additive quantile regression averaging based prediction intervals also show outperformance in terms of validity, reliability, quality and accuracy. Among interval combination methods, the median method proves better than its pure average counterpart. Both point forecast combination and interval forecasting methods are found to improve forecast performance. Full article
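The difference between the mean and the median point-forecast combination can be seen in a toy example (all numbers below are invented; they are not the paper's wind-speed data):

```python
import numpy as np

# Hypothetical point forecasts from three models (e.g. ANN, SGB, GAM)
# for the same five future wind-speed values, in m/s.
forecasts = np.array([[5.1, 6.0, 4.8, 7.2, 6.5],
                      [5.4, 5.7, 5.0, 6.9, 6.8],
                      [5.0, 9.5, 4.9, 7.0, 6.4]])   # third model has one outlier

mean_comb = forecasts.mean(axis=0)            # simple average combination
median_comb = np.median(forecasts, axis=0)    # median combination, robust to outliers

actual = np.array([5.2, 5.8, 4.9, 7.1, 6.6])
mae = lambda f: np.mean(np.abs(f - actual))
print(mae(mean_comb), mae(median_comb))       # median combination has lower MAE here
```

The median combination ignores the single outlying forecast entirely, while the simple average is pulled toward it, which is the intuition behind the abstract's finding that the median interval-combination method beats its pure average counterpart.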

Open Access Article
Unsupervised Text Feature Selection Using Memetic Dichotomous Differential Evolution
Algorithms 2020, 13(6), 131; https://doi.org/10.3390/a13060131 - 26 May 2020
Viewed by 650
Abstract
Feature Selection (FS) methods have been studied extensively in the literature, and they are a crucial component of machine learning techniques. However, unsupervised text feature selection has not been well studied in document clustering problems. Feature selection can be modelled as an optimization problem due to the large number of candidate feature subsets. In this paper, a memetic method that combines Differential Evolution (DE) with Simulated Annealing (SA) for unsupervised FS is proposed. Because each feature is represented by only two values, indicating its presence or absence, a binary version of differential evolution is required; a dichotomous DE serves this purpose, and the proposed method is named Dichotomous Differential Evolution Simulated Annealing (DDESA). This method uses dichotomous mutation instead of the standard DE mutation, which makes it more effective for binary representations. The Mean Absolute Distance (MAD) filter is used as the internal evaluation measure for feature subsets. The proposed method is compared with other state-of-the-art methods, including the standard DE combined with SA (named DESA in this paper), on five benchmark datasets. The F-micro and F-macro (F-scores), the Average Distance of Document to Cluster (ADDC) and the Reduction Rate (RR) are used as evaluation measures. Test results show that the proposed DDESA outperforms the other tested methods at unsupervised text feature selection. Full article
(This article belongs to the Special Issue Memetic Algorithms for Solving Very Complex Optimization Problems)
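A dichotomous (binary) DE mutation can be sketched as below. The rule shown (keep the base bit where the two comparison vectors agree, otherwise flip it with probability `f`) is one common binary-DE construction and an assumption here, not necessarily the exact operator of the DDESA paper:

```python
import random

def dichotomous_mutation(x1, x2, x3, f=0.5):
    """Binary DE mutation sketch over 0/1 feature masks: where the two
    comparison vectors agree, keep the base bit; where they disagree,
    flip the base bit with probability f."""
    return [b1 if b2 == b3 else (1 - b1 if random.random() < f else b1)
            for b1, b2, b3 in zip(x1, x2, x3)]

random.seed(0)
x1 = [1, 0, 1, 1, 0]   # base feature mask: 1 = keep the feature
x2 = [1, 1, 0, 1, 0]
x3 = [0, 1, 1, 1, 0]
print(dichotomous_mutation(x1, x2, x3))
```

Operating directly on 0/1 masks avoids the real-valued difference vectors of standard DE, which have no natural meaning for a presence/absence encoding of text features.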
