
Table of Contents

Algorithms, Volume 12, Issue 10 (October 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article
A New Coding Paradigm for the Primitive Relay Channel
Algorithms 2019, 12(10), 218; https://doi.org/10.3390/a12100218 - 18 Oct 2019
Abstract
We consider the primitive relay channel, where the source sends a message to the relay and to the destination, and the relay helps the communication by transmitting an additional message to the destination via a separate channel. Two well-known coding techniques have been introduced for this setting: decode-and-forward and compress-and-forward. In decode-and-forward, the relay completely decodes the message and sends some information to the destination; in compress-and-forward, the relay does not decode, and it sends a compressed version of the received signal to the destination using Wyner–Ziv coding. In this paper, we present a novel coding paradigm that provides an improved achievable rate for the primitive relay channel. The idea is to combine compress-and-forward and decode-and-forward via a chaining construction. We transmit over pairs of blocks: in the first block, we use compress-and-forward; and, in the second block, we use decode-and-forward. More specifically, in the first block, the relay does not decode, it compresses the received signal via Wyner–Ziv, and it sends only part of the compression to the destination. In the second block, the relay completely decodes the message, it sends some information to the destination, and it also sends the remaining part of the compression coming from the first block. By doing so, we are able to strictly outperform both compress-and-forward and decode-and-forward. Note that the proposed coding scheme can be implemented with polar codes. As such, it has the typical attractive properties of polar coding schemes, namely, quasi-linear encoding and decoding complexity, and error probability that decays at super-polynomial speed. As a running example, we take into account the special case of the erasure relay channel, and we provide a comparison between the rates achievable by our proposed scheme and the existing upper and lower bounds. Full article
(This article belongs to the Special Issue Coding Theory and Its Application)

Open Access Article
Can People Really Do Nothing? Handling Annotation Gaps in ADL Sensor Data
Algorithms 2019, 12(10), 217; https://doi.org/10.3390/a12100217 - 17 Oct 2019
Abstract
In supervised Activities of Daily Living (ADL) recognition systems, annotating collected sensor readings is an essential, yet exhausting, task. Readings are collected from activity-monitoring sensors in a 24/7 manner. The produced dataset is so large that it is almost impossible for a human annotator to give a certain label to every single instance in it. This results in annotation gaps in the input data to the learning system, and these gaps negatively affect the performance of the recognition system. In this work, we propose and investigate three different paradigms to handle these gaps. In the first paradigm, the gaps are removed by dropping all unlabeled readings. In the second paradigm, a single "Unknown" or "Do-Nothing" label is given to all unlabeled readings. The last paradigm handles the gaps by giving every set of them a unique label derived from the certain labels that surround it. We also propose a semantic preprocessing method for annotation gaps that combines some of these paradigms into a hybrid for further performance improvement. The performance of the three paradigms and their hybrid combination is evaluated on an ADL benchmark dataset containing more than 2.5 × 10^6 sensor readings collected over more than nine months. The evaluation results highlight the performance contrast between the paradigms and support a specific gap-handling approach for better performance. Full article
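The three gap-handling paradigms read directly as data transformations. A minimal sketch on toy data (the function names, the None-as-gap convention, and the gap-label format are ours, not the paper's):

```python
# Three paradigms for handling annotation gaps in labeled sensor data.
# A reading with label None sits in an annotation gap.

def drop_gaps(readings):
    """Paradigm 1: discard all unlabeled readings."""
    return [(r, y) for r, y in readings if y is not None]

def single_label(readings, gap_label="Do-Nothing"):
    """Paradigm 2: give every unlabeled reading one shared label."""
    return [(r, y if y is not None else gap_label) for r, y in readings]

def contextual_labels(readings):
    """Paradigm 3: label each gap by the certain labels surrounding it."""
    out, n = [], len(readings)
    for i, (r, y) in enumerate(readings):
        if y is not None:
            out.append((r, y))
            continue
        prev = next((readings[j][1] for j in range(i - 1, -1, -1)
                     if readings[j][1] is not None), "Start")
        nxt = next((readings[j][1] for j in range(i + 1, n)
                    if readings[j][1] is not None), "End")
        out.append((r, "Gap(%s->%s)" % (prev, nxt)))
    return out

data = [(0.1, "Sleep"), (0.3, None), (0.2, None), (0.9, "Cook")]
```

Paradigm 3 gives every gap between the same pair of certain labels the same synthetic class, which is one way to read "a unique label identifying the surrounding certain labels".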

Open Access Article
Adaptive Clustering via Symmetric Nonnegative Matrix Factorization of the Similarity Matrix
Algorithms 2019, 12(10), 216; https://doi.org/10.3390/a12100216 - 17 Oct 2019
Abstract
The problem of clustering, that is, the partitioning of data into groups of similar objects, is a key step in many data-mining problems. The algorithm we propose for clustering is based on the symmetric nonnegative matrix factorization (SymNMF) of a similarity matrix. The algorithm is first presented for the case of a prescribed number k of clusters, then extended to the case where k is not given a priori. A heuristic approach improving the standard multistart strategy is proposed and validated experimentally. Full article
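The core factorization step can be sketched in a few lines. This is a generic SymNMF multiplicative update for min ||S - H H^T||_F with H >= 0, not the paper's specific algorithm or its multistart heuristic; the damping constant 0.5 and the toy similarity matrix are illustrative:

```python
import numpy as np

def symnmf(S, k, iters=300, eps=1e-9, seed=0):
    """Cluster via symmetric NMF: find H >= 0 (n x k) with S ~ H H^T.
    Uses a standard damped multiplicative update; the cluster of object i
    is the argmax of row i of H."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    H = rng.random((n, k))
    for _ in range(iters):
        HHtH = H @ (H.T @ H)
        H *= 0.5 + 0.5 * (S @ H) / (HHtH + eps)
    return H.argmax(axis=1)

# Toy similarity matrix with two obvious blocks of similar objects.
S = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.9, 1.0, 0.0, 0.1],
              [0.1, 0.0, 1.0, 0.9],
              [0.0, 0.1, 0.9, 1.0]])
labels = symnmf(S, k=2)
```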

Open Access Article
Backstepping Adaptive Neural Network Control for Electric Braking Systems of Aircrafts
Algorithms 2019, 12(10), 215; https://doi.org/10.3390/a12100215 - 15 Oct 2019
Abstract
This paper proposes an adaptive backstepping control algorithm for electric braking systems with electromechanical actuators (EMAs). First, the ideal mathematical model of the EMA is established, and the nonlinear factors are analyzed, such as the deformation of the reduction gear. Subsequently, the actual mathematical model of the EMA is rebuilt by combining the ideal model and the nonlinear factors. To realize high performance braking pressure control, the backstepping control method is adopted to address the mismatched uncertainties in the electric braking system, and a radial basis function (RBF) neural network is established to estimate the nonlinear functions in the control system. The experimental results indicate that the proposed braking pressure control strategy can improve the servo performance of the electric braking system. In addition, the hardware-in-loop (HIL) experimental results show that the proposed EMA controller can satisfy the requirements of the aircraft antilock braking systems. Full article
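The RBF-network estimator at the heart of such schemes can be sketched in isolation. Below, a toy RBF network learns an unknown scalar nonlinearity from an error signal, the role it plays for the unknown terms in the control loop; the centers, width, adaptation gain, and target function are illustrative choices, not the paper's EMA model:

```python
import math

# Toy RBF-network function estimator.
CENTERS = [-2.0, -1.0, 0.0, 1.0, 2.0]
WIDTH = 1.0
ETA = 0.1  # adaptation gain

def phi(x):
    """Gaussian basis-function activations."""
    return [math.exp(-((x - c) ** 2) / (2 * WIDTH ** 2)) for c in CENTERS]

def rbf_out(w, x):
    """Network output: weighted sum of basis activations."""
    return sum(wi * pi for wi, pi in zip(w, phi(x)))

def adapt(w, x, error):
    """Gradient-style weight update driven by the estimation error."""
    return [wi + ETA * error * pi for wi, pi in zip(w, phi(x))]

# Learn the unknown nonlinearity f(x) = sin(x) on [-2, 2] from samples.
w = [0.0] * len(CENTERS)
for _ in range(200):
    for i in range(41):
        x = -2.0 + i / 10.0
        w = adapt(w, x, math.sin(x) - rbf_out(w, x))
```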
Open Access Article
Exploiting Sparse Statistics for a Sequence-Based Prediction of the Effect of Mutations
Algorithms 2019, 12(10), 214; https://doi.org/10.3390/a12100214 - 14 Oct 2019
Abstract
Recent work showed that there is a significant difference between the statistics of amino acid triplets and quadruplets in sequences of folded proteins and in randomly generated sequences. These statistics were used to assign a score to each sequence and to predict whether a sequence is likely to fold. The present paper extends the statistics to higher multiplets and suggests a way to handle multiplets that were not found in the set of folded proteins. In particular, foldability predictions were made along the lines of the previous work using pentuplet statistics, and a way was found to combine the quadruplet and pentuplet statistics to improve the foldability predictions. A different, simpler score was defined for hextuplets and heptuplets and was used to predict the direction of the stability change of a protein upon mutation. With the best score combination, the accuracy of the prediction was 73.4%. Full article
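The multiplet-statistics scoring idea, including a fallback for multiplets unseen in the folded set, can be sketched as follows. The toy sequences, the k = 3 choice, the log-frequency score, and the pseudocount are illustrative, not the paper's exact treatment:

```python
import math
from collections import Counter

# Score a sequence by the statistics of its contiguous amino-acid
# k-multiplets; multiplets never seen in the reference set get a small
# pseudocount instead of probability zero.

def multiplets(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def train(reference_seqs, k):
    counts = Counter()
    for s in reference_seqs:
        counts.update(multiplets(s, k))
    return counts, sum(counts.values())

def score(seq, counts, total, k, pseudo=0.5):
    """Average log-frequency of the sequence's k-multiplets."""
    terms = [math.log((counts.get(m, 0) + pseudo) / (total + pseudo))
             for m in multiplets(seq, k)]
    return sum(terms) / len(terms)

folded = ["ACDEFGHIK", "CDEFGHIKL", "DEFGHIKLM"]   # toy "folded" set
counts, total = train(folded, k=3)
# A window drawn from the folded set should outscore a shuffled one.
s_real = score("CDEFGHI", counts, total, k=3)
s_rand = score("KAICHGE", counts, total, k=3)
```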

Open Access Article
Multimodal Dynamic Journey-Planning
Algorithms 2019, 12(10), 213; https://doi.org/10.3390/a12100213 - 13 Oct 2019
Abstract
In this paper, a new model, the multimodal dynamic timetable model (multimodal DTM), is presented for computing optimal multimodal journeys in schedule-based public transport systems. The new model constitutes an extension of the dynamic timetable model (DTM), which was developed originally for a different setting (unimodal journey-planning). Multimodal DTM features a very fast query algorithm that meets the requirement for real-time responses to best-journey queries, and an ultra-fast update algorithm for updating the timetable information in case of delays of schedule-based vehicles. An experimental study on real-world metropolitan networks demonstrates that the query and update algorithms of multimodal DTM compare favorably with other state-of-the-art approaches when public transport is considered together with traveling that is unrestricted with respect to departure time (e.g., walking and electric vehicles). Full article
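For flavor, an earliest-arrival timetable query can be sketched with a single scan over connections sorted by departure time. This is a generic schedule-based query (connection-scan style), not the paper's DTM query algorithm; the stops, times, and connection format are toy assumptions:

```python
# Earliest-arrival query on a timetable of elementary connections.
# Connection: (dep_stop, arr_stop, dep_time, arr_time).

def earliest_arrival(connections, source, target, start_time):
    INF = float("inf")
    best = {source: start_time}
    # Scan connections in departure-time order; a connection is usable
    # if we can reach its departure stop by its departure time.
    for dep, arr, t_dep, t_arr in sorted(connections, key=lambda c: c[2]):
        if best.get(dep, INF) <= t_dep and t_arr < best.get(arr, INF):
            best[arr] = t_arr
    return best.get(target, INF)

conns = [
    ("A", "B", 8, 9),
    ("B", "C", 10, 11),
    ("A", "C", 8, 12),   # direct but slower
]
t = earliest_arrival(conns, "A", "C", start_time=8)
```

A real journey planner additionally handles transfers, footpaths, and the multimodal legs the paper targets; those are omitted here.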
Open Access Addendum
Addendum: Mircea-Bogdan Radac and Timotei Lala. Learning Output Reference Model Tracking for Higher-order Nonlinear Systems with Unknown Dynamics. Algorithms 2019, 12, 121
Algorithms 2019, 12(10), 212; https://doi.org/10.3390/a12100212 - 10 Oct 2019
Abstract
The authors would like to mention that their paper is an extended version of the IEEE conference paper [...] Full article
Open Access Article
Approximating the Temporal Neighbourhood Function of Large Temporal Graphs
Algorithms 2019, 12(10), 211; https://doi.org/10.3390/a12100211 - 10 Oct 2019
Abstract
Temporal networks are graphs in which edges have temporal labels, specifying their starting times and their traversal times. Several notions of distances between two nodes in a temporal network can be analyzed, by referring, for example, to the earliest arrival time or to the latest starting time of a temporal path connecting the two nodes. In this paper, we mostly refer to the notion of temporal reachability by using the earliest arrival time. In particular, we first show how the sketch approach, which has already been used in the case of classical graphs, can be applied to the case of temporal networks in order to approximately compute the sizes of the temporal cones of a temporal network. By making use of this approach, we subsequently show how we can approximate the temporal neighborhood function (that is, the number of pairs of nodes reachable from one another in a given time interval) of large temporal networks in a few seconds. Finally, we apply our algorithm in order to analyze and compare the behavior of 25 public transportation temporal networks. Our results can be easily adapted to the case in which we want to refer to the notion of distance based on the latest starting time. Full article
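The temporal neighbourhood function itself can be computed exactly on small inputs, which clarifies what the paper's sketches approximate. A toy version using earliest-arrival scans (the edge format and node names are ours):

```python
# Exact temporal neighbourhood function on a toy temporal graph: the number
# of ordered pairs (u, v), u != v, such that v is reachable from u by a
# time-respecting path within [t_a, t_b]. Edge: (u, v, start, traversal).
# The paper approximates this quantity with sketches on large networks.

def temporal_neighbourhood(nodes, edges, t_a, t_b):
    count = 0
    ordered = sorted(edges, key=lambda e: e[2])  # by start time
    for s in nodes:
        arrival = {s: t_a}
        # One pass suffices: a temporal path uses edges with increasing
        # start times, matching the scan order.
        for u, v, t, tau in ordered:
            if (t_a <= t and t + tau <= t_b
                    and arrival.get(u, float("inf")) <= t):
                arrival[v] = min(arrival.get(v, float("inf")), t + tau)
        count += sum(1 for v in arrival if v != s)
    return count

nodes = ["a", "b", "c"]
edges = [("a", "b", 1, 1), ("b", "c", 3, 1), ("c", "a", 0, 1)]
n = temporal_neighbourhood(nodes, edges, 0, 4)
```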
Open Access Article
Laplacian Eigenmaps Dimensionality Reduction Based on Clustering-Adjusted Similarity
Algorithms 2019, 12(10), 210; https://doi.org/10.3390/a12100210 - 04 Oct 2019
Abstract
Euclidean distance between instances is widely used to capture the manifold structure of data and for graph-based dimensionality reduction. However, in some circumstances, the basic Euclidean distance cannot accurately capture the similarity between instances; some instances from different classes but close to the decision boundary may be close to each other, which may mislead graph-based dimensionality reduction and compromise performance. To mitigate this issue, in this paper we propose an approach called Laplacian Eigenmaps based on Clustering-Adjusted Similarity (LE-CAS). LE-CAS first performs clustering on all instances to explore the global structure and discrimination of instances, and quantifies the similarity between cluster centers. Then, it adjusts the similarity between pairwise instances by multiplying it by the similarity between the centers of the clusters to which the two instances respectively belong. In this way, if two instances are from different clusters, the similarity between them is reduced; otherwise, it is unchanged. Finally, LE-CAS performs graph-based dimensionality reduction (via Laplacian Eigenmaps) based on the adjusted similarity. We conducted comprehensive empirical studies on UCI datasets and show that LE-CAS not only performs better than other relevant competing methods, but is also more robust to input parameters. Full article
(This article belongs to the Special Issue Clustering Algorithms and Their Applications)
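The similarity adjustment can be sketched directly from the description above. This simplified version uses an RBF similarity and scales cross-cluster entries by the similarity of the cluster centers; the kernel choice and parameters are illustrative, not the paper's exact formulation:

```python
import numpy as np

# Clustering-adjusted similarity (simplified): cross-cluster similarities
# shrink by the similarity of the two cluster centers; within-cluster
# similarities are unchanged.

def adjusted_similarity(X, labels, gamma=1.0):
    n = len(X)
    centers = {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}

    def rbf(a, b):
        return np.exp(-gamma * np.sum((a - b) ** 2))

    S = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = rbf(X[i], X[j])
            ci, cj = labels[i], labels[j]
            if ci != cj:
                s *= rbf(centers[ci], centers[cj])  # shrink across clusters
            S[i, j] = s
    return S

X = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 0.0], [2.1, 0.0]])
labels = np.array([0, 0, 1, 1])
S = adjusted_similarity(X, labels)
```

The adjusted matrix S would then feed standard Laplacian Eigenmaps in place of the raw similarity.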
Open Access Article
A Finite Regime Analysis of Information Set Decoding Algorithms
Algorithms 2019, 12(10), 209; https://doi.org/10.3390/a12100209 - 01 Oct 2019
Abstract
Decoding of random linear block codes has been long exploited as a computationally hard problem on which it is possible to build secure asymmetric cryptosystems. In particular, both correcting an error-affected codeword, and deriving the error vector corresponding to a given syndrome were proven to be equally difficult tasks. Since the pioneering work of Eugene Prange in the early 1960s, a significant research effort has been put into finding more efficient methods to solve the random code decoding problem through a family of algorithms known as information set decoding. The obtained improvements effectively reduce the overall complexity, which was shown to decrease asymptotically at each optimization, while remaining substantially exponential in the number of errors to be either found or corrected. In this work, we provide a comprehensive survey of the information set decoding techniques, providing finite regime temporal and spatial complexities for them. We exploit these formulas to assess the effectiveness of the asymptotic speedups obtained by the improved information set decoding techniques when working with code parameters relevant for cryptographic purposes. We also delineate computational complexities taking into account the achievable speedup via quantum computers and similarly assess such speedups in the finite regime. To provide practical grounding to the choice of cryptographically relevant parameters, we employ as our validation suite the ones chosen by cryptosystems admitted to the second round of the ongoing standardization initiative promoted by the US National Institute of Standards and Technology. Full article
(This article belongs to the Special Issue Coding Theory and Its Application)
Open Access Article
A Convex Optimization Algorithm for Electricity Pricing of Charging Stations
Algorithms 2019, 12(10), 208; https://doi.org/10.3390/a12100208 - 01 Oct 2019
Abstract
The problem of electricity pricing for charging stations is a multi-objective mixed integer nonlinear programming problem. Existing algorithms solve this problem inefficiently. In this paper, a convex optimization algorithm is proposed to obtain the optimal solution quickly. Firstly, the model is transformed into a convex optimization problem by second-order conic relaxation and the Karush–Kuhn–Tucker optimality conditions. Secondly, a polyhedral approximation method is applied to construct a mixed integer linear program, which can be solved quickly by the branch and bound method. Finally, the model is solved many times to obtain the Pareto front, according to the basic scalarization theorem. Based on an IEEE 33-bus distribution network model, simulation results show that the proposed algorithm can obtain an exact global optimal solution quickly compared with the heuristic method. Full article
(This article belongs to the Special Issue Algorithms in Convex Optimization and Applications)
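The final scalarization step can be illustrated generically: sweep a weight over a convex combination of the two objectives and solve the single-objective problem each time; the minimizers trace the Pareto front. The toy quadratic objectives and the grid search below stand in for the paper's MILP solves:

```python
# Scalarization sketch for a bi-objective problem:
# for each weight lam, solve min lam*f1(p) + (1-lam)*f2(p).

def f1(p):  # toy stand-in for, e.g., negative operator revenue
    return (p - 3.0) ** 2

def f2(p):  # toy stand-in for, e.g., user charging cost
    return (p - 1.0) ** 2

def solve_scalarized(lam, grid):
    """Single-objective solve (here: brute force over a price grid)."""
    return min(grid, key=lambda p: lam * f1(p) + (1 - lam) * f2(p))

grid = [i / 100 for i in range(401)]            # candidate prices in [0, 4]
front = [solve_scalarized(l / 10, grid) for l in range(11)]
```

For these objectives the minimizer is p = 1 + 2*lam, so the computed front moves monotonically from 1.0 to 3.0 as the weight shifts between the objectives.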

Open Access Article
Recommending Links to Control Elections via Social Influence
Algorithms 2019, 12(10), 207; https://doi.org/10.3390/a12100207 - 01 Oct 2019
Abstract
Political parties have recently learned that they must use social media campaigns along with advertising on traditional media to defeat their opponents. Before the campaign starts, it is important for a political party to establish and ensure its media presence, for example by enlarging its number of connections in the social network in order to reach a larger portion of users. Indeed, adding new connections between users increases a social network's capability to spread information, which in turn can increase the retention rate and the number of new voters. In this work, we address the problem of selecting a fixed-size set of new connections to be added to a subset of voters that, with their influence, will change the opinion of the network's users about a target candidate, maximizing the candidate's chances to win the election. We provide a constant-factor approximation algorithm for this problem and we experimentally show that, with few new links and small computational time, our algorithm is able to maximize the chances of making the target candidate win the election. Full article
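Constant-factor guarantees for problems of this kind typically come from greedy selection under a submodular objective. A generic sketch (the coverage-style objective and link names are toy stand-ins, not the paper's voting model):

```python
# Greedy link selection: repeatedly add the candidate link with the
# largest marginal gain in the influence objective.

def greedy_links(candidates, influence, k):
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: influence(chosen + [c]) - influence(chosen))
        chosen.append(best)
    return chosen

# Toy objective: number of users reached by at least one chosen link.
REACH = {"u-v": {1, 2, 3}, "u-w": {3, 4}, "v-w": {5}, "w-x": {1, 2}}

def influence(links):
    if not links:
        return 0
    return len(set().union(*(REACH[l] for l in links)))

picked = greedy_links(list(REACH), influence, k=2)
```

For monotone submodular objectives this greedy rule achieves the classical (1 - 1/e) approximation, which is the usual route to constant-factor results.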
Open Access Article
Self-Improving Generative Artificial Neural Network for Pseudorehearsal Incremental Class Learning
Algorithms 2019, 12(10), 206; https://doi.org/10.3390/a12100206 - 01 Oct 2019
Abstract
Deep learning models are part of the family of artificial neural networks and, as such, they suffer catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture which prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system which can ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novel detection model that automatically detects samples of new classes, and an adversarial autoencoder is used to produce samples of previous classes. The system consists of three main modules: a classifier module implemented using a deep convolutional neural network, a generator module based on an adversarial autoencoder, and a novelty-detection module implemented using an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN can retain previous knowledge while incurring gradual forgetting of each learning sequence at a rate of about 7% per training step. Moreover, SIGANN can detect new classes that are hidden in the data with a median accuracy of 43% and, therefore, proceed with incremental class learning. Full article

Open Access Article
Real-Time Conveyor Belt Deviation Detection Algorithm Based on Multi-Scale Feature Fusion Network
Algorithms 2019, 12(10), 205; https://doi.org/10.3390/a12100205 - 26 Sep 2019
Abstract
The conveyor belt is an indispensable piece of conveying equipment in a mine, and belt deviation, caused by sticky material on the rollers and uneven load distribution, is the most common failure during operation. In this paper, a real-time conveyor belt deviation detection algorithm based on a multi-scale feature fusion network is proposed, which mainly includes two parts: the feature extraction module and the deviation detection module. The feature extraction module uses a multi-scale feature fusion network structure to fuse low-level features, with rich position and detail information, and high-level features, with stronger semantic information, to improve network detection performance. Depthwise separable convolutions are used to achieve real-time detection. The deviation detection module identifies and monitors deviation faults by calculating the offset of the conveyor belt. In particular, a new weighted loss function is designed to optimize the network and to improve the detection of the conveyor belt edge. In order to evaluate the effectiveness of the proposed method, the Canny algorithm, FCNs, UNet and Deeplab v3 networks are selected for comparison. The experimental results show that the proposed algorithm achieves 78.92% pixel accuracy (PA) and reaches 13.4 FPS (frames per second) with an error of less than 3.2 mm, which outperforms the other four algorithms. Full article

Open Access Article
Simulation on Cooperative Changeover of Production Team Using Hybrid Modeling Method
Algorithms 2019, 12(10), 204; https://doi.org/10.3390/a12100204 - 24 Sep 2019
Abstract
In the multi-variety, small-quantity manufacturing environment, changeover operations occur frequently, and cooperative changeover is often used as a way to shorten the changeover time and balance the workload. However, more workers and tasks are affected by cooperative changeover, so its effectiveness depends on other factors, such as the scope of cooperation and the proportion of newly introduced products. For this reason, this paper proposes a hybrid modeling method to support the simulation study of a production team's cooperative changeover strategies under various environments. Firstly, a hybrid simulation modeling method consisting of multi-agent systems and discrete events is introduced. Secondly, according to the scope of cooperation, this paper puts forward four kinds of cooperative changeover strategies and describes the cooperative changeover behavior of operators. Finally, based on the changeover strategies, the proposed simulation method is applied to a production cell. Four production scenarios are considered according to the proportion of newly introduced parts. The performance of the various cooperative strategies in the different production scenarios is simulated, and the statistical test results show that an optimal or satisfactory strategy can be determined in each production scenario. The effectiveness and practicability of the proposed modeling method are thereby verified. Full article

Open Access Article
Data-Driven Predictive Modeling of Neuronal Dynamics Using Long Short-Term Memory
Algorithms 2019, 12(10), 203; https://doi.org/10.3390/a12100203 - 24 Sep 2019
Abstract
Modeling brain dynamics to better understand and control complex behaviors underlying various cognitive brain functions has been of interest to engineers, mathematicians and physicists over the last several decades. Motivated by the goal of developing computationally efficient models of brain dynamics for use in designing control-theoretic neurostimulation strategies, we have developed a novel data-driven approach based on a long short-term memory (LSTM) neural network architecture to predict the temporal dynamics of complex systems over extended time horizons. In contrast to recent LSTM-based dynamical modeling approaches that make use of multi-layer perceptrons or linear combination layers as output layers, our architecture uses a single fully connected output layer and reversed-order sequence-to-sequence mapping to improve short time-horizon prediction accuracy and to make multi-timestep predictions of dynamical behaviors. We demonstrate the efficacy of our approach in reconstructing the regular spiking to bursting dynamics exhibited by an experimentally-validated 9-dimensional Hodgkin–Huxley model of hippocampal CA1 pyramidal neurons. Through simulations, we show that our LSTM neural network can predict the multi-time scale temporal dynamics underlying various spiking patterns with reasonable accuracy. Moreover, our results show that the predictions improve with increasing predictive time-horizon in the multi-timestep deep LSTM neural network. Full article

Open Access Editorial
Evolutionary Algorithms in Health Technologies
Algorithms 2019, 12(10), 202; https://doi.org/10.3390/a12100202 - 24 Sep 2019
Abstract
Health technology research brings together complementary interdisciplinary research skills in the development of innovative health technology applications. Recent research indicates that artificial intelligence can help achieve outstanding performance for particular types of health technology applications. Evolutionary algorithms form one of the subfields of artificial intelligence and are effective methods for global optimization, inspired by biological evolution. With the rapidly growing complexity of design issues and methodologies, and a higher demand for quality health technology applications, the development of evolutionary computation algorithms for health has become timely and highly relevant. This Special Issue brings together researchers to report recent findings on evolutionary algorithms in health technology. Full article
(This article belongs to the Special Issue Evolutionary Algorithms in Health Technologies)
Open Access Article
GASP: Genetic Algorithms for Service Placement in Fog Computing Systems
Algorithms 2019, 12(10), 201; https://doi.org/10.3390/a12100201 - 21 Sep 2019
Abstract
Fog computing is becoming popular as a solution to support applications based on geographically distributed sensors that produce huge volumes of data to be processed and filtered with response time constraints. In this scenario, typical of a smart city environment, the traditional cloud paradigm with few powerful data centers located far away from the sources of data becomes inadequate. The fog computing paradigm, which provides a distributed infrastructure of nodes placed close to the data sources, represents a better solution to perform filtering, aggregation, and preprocessing of incoming data streams reducing the experienced latency and increasing the overall scalability. However, many issues still exist regarding the efficient management of a fog computing architecture, such as the distribution of data streams coming from sensors over the fog nodes to minimize the experienced latency. The contribution of this paper is two-fold. First, we present an optimization model for the problem of mapping data streams over fog nodes, considering not only the current load of the fog nodes, but also the communication latency between sensors and fog nodes. Second, to address the complexity of the problem, we present a scalable heuristic based on genetic algorithms. We carried out a set of experiments based on a realistic smart city scenario: the results show how the performance of the proposed heuristic is comparable with the one achieved through the solution of the optimization problem. Then, we carried out a comparison among different genetic evolution strategies and operators that identify the uniform crossover as the best option. Finally, we perform a wide sensitivity analysis to show the stability of the heuristic performance with respect to its main parameters. Full article
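The genetic-algorithm skeleton, including the uniform crossover the paper's comparison favors, can be sketched on a toy placement instance. The latencies, capacities, and GA parameters below are illustrative, not the paper's experimental setup:

```python
import random

# Toy GA for stream-to-fog-node placement: a chromosome assigns each sensor
# stream to a node; fitness penalizes latency plus node overload.

LATENCY = [[1, 5], [4, 2], [2, 3]]   # LATENCY[stream][node]
CAPACITY = [2, 2]                    # streams each node can host

def fitness(assign):
    cost = sum(LATENCY[i][n] for i, n in enumerate(assign))
    for node in range(len(CAPACITY)):
        overload = assign.count(node) - CAPACITY[node]
        cost += 10 * max(0, overload)       # soft capacity penalty
    return -cost                            # higher is better

def uniform_crossover(a, b, rng):
    """Each gene is drawn from either parent with probability 1/2."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

def ga(pop_size=20, gens=40, seed=3):
    rng = random.Random(seed)
    n_streams, n_nodes = len(LATENCY), len(CAPACITY)
    pop = [[rng.randrange(n_nodes) for _ in range(n_streams)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = uniform_crossover(a, b, rng)
            if rng.random() < 0.2:          # mutation
                child[rng.randrange(n_streams)] = rng.randrange(n_nodes)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
```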

Open Access Article
A Machine Learning Approach to Algorithm Selection for Exact Computation of Treewidth
Algorithms 2019, 12(10), 200; https://doi.org/10.3390/a12100200 - 20 Sep 2019
Abstract
We present an algorithm selection framework based on machine learning for the exact computation of treewidth, an intensively studied graph parameter that is NP-hard to compute. Specifically, we analyse the comparative performance of three state-of-the-art exact treewidth algorithms on a wide array of graphs and use this information to predict which of the algorithms, on a graph-by-graph basis, will compute the treewidth the quickest. Experimental results show that the proposed meta-algorithm outperforms existing methods on benchmark instances on all three performance metrics we use: in a nutshell, it computes treewidth faster than any single algorithm in isolation. We analyse our results to derive insights about graph feature importance and the strengths and weaknesses of the algorithms we used. Our results are further evidence of the advantages to be gained by strategically blending machine learning and combinatorial optimisation approaches within a hybrid algorithmic framework. The machine learning model we use is intentionally simple, to emphasise that a speedup can already be obtained without engaging in the full complexities of machine learning engineering. We reflect on how future work could extend this simple but effective proof of concept by deploying more sophisticated machine learning models. Full article
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)
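Per-instance algorithm selection of this kind reduces to learning a map from graph features to a solver label. A deliberately simple sketch in the paper's spirit, with a 1-nearest-neighbour rule on toy timing data (the solver names, features, and training pairs are illustrative):

```python
# Algorithm selection sketch: from simple graph features, predict which
# treewidth solver will finish fastest, based on past timing runs.

def features(n_nodes, n_edges):
    density = 2 * n_edges / (n_nodes * (n_nodes - 1))
    return (n_nodes, density)

# (features, fastest solver) pairs, e.g. obtained from timing experiments.
TRAIN = [
    ((20, 0.10), "tamaki"),
    ((25, 0.15), "tamaki"),
    ((20, 0.80), "quickbb"),
    ((30, 0.90), "quickbb"),
]

def predict(f):
    """1-nearest-neighbour rule standing in for the learned model."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAIN, key=lambda t: dist(t[0], f))[1]

# A sparse instance should map to the solver that won on sparse graphs.
choice = predict(features(22, 30))
```

The meta-algorithm then simply runs the predicted solver on the instance; feature scaling and a richer feature set would matter on real data.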
