Algorithms, Volume 13, Issue 9 (September 2020) – 38 articles

Cover Story (view full-size image): Modeling structured objects such as graphs, and computing the similarity between them, is a difficult problem, since one wants to encode both the structure of the object and the features of its individual elements. One recent approach relied on modeling the object as a distribution in a joint structure/feature space. In this work, we propose and study the use of the Fused Gromov–Wasserstein distance to measure the similarity between graphs. We also show several potential applications of the distance, such as graph classification, graph barycenters, graph clustering, and community clustering. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
25 pages, 6889 KiB  
Article
Policy-Based Composition and Embedding of Extended Virtual Networks and SFCs for IIoT
by Waseem Mandarawi, Jürgen Rottmeier, Milad Rezaeighale and Hermann de Meer
Algorithms 2020, 13(9), 240; https://doi.org/10.3390/a13090240 - 22 Sep 2020
Cited by 2 | Viewed by 2133
Abstract
The autonomic composition of Virtual Networks (VNs) and Service Function Chains (SFCs) based on application requirements is significant for complex environments. In this paper, we use graph transformation in order to compose an Extended Virtual Network (EVN) that is based on different requirements, such as locations, low latency, redundancy, and security functions. The EVN can represent physical environment devices and virtual application and network functions. We build a generic Virtual Network Embedding (VNE) framework for transforming an Application Request (AR) to an EVN. Subsequently, we define a set of transformations that reflect preliminary topological, performance, reliability, and security policies. These transformations update the entities and demands of the VN and add SFCs that include the required Virtual Network Functions (VNFs). Additionally, we propose a greedy proactive heuristic for path-independent embedding of the composed SFCs. This heuristic is appropriate for real complex environments, such as industrial networks. Furthermore, we present an Industrial Internet of Things (IIoT) use case that was inspired by Industry 4.0 concepts, in which EVNs for remote asset management are deployed over three levels: manufacturing halls, edge computing, and cloud computing. We also implement the developed methods in Alevin and show exemplary mapping results from our use case. Finally, we evaluate the chain embedding heuristic using a random topology that is typical for such a use case, and show that it can improve the admission ratio and resource utilization with minimal overhead. Full article
(This article belongs to the Special Issue Virtual Network Embedding)
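As a toy illustration of the greedy, path-independent placement idea, the sketch below maps each VNF to the substrate node with the most residual capacity. This is a hypothetical model (one scalar capacity per node, one scalar demand per VNF, no link or latency constraints), not the paper's heuristic:

```python
def greedy_embed(vnf_demands, node_capacities):
    """Greedy path-independent embedding sketch: place each VNF on the
    feasible node with the largest residual capacity."""
    residual = dict(node_capacities)
    mapping = {}
    for vnf, demand in vnf_demands.items():
        # Feasible nodes are those with enough remaining capacity.
        candidates = [n for n, c in residual.items() if c >= demand]
        if not candidates:
            return None  # request rejected; this affects the admission ratio
        best = max(candidates, key=residual.get)
        mapping[vnf] = best
        residual[best] -= demand
    return mapping
```

Rejecting a request when any VNF cannot be placed mirrors how embedding failures reduce the admission ratio in VNE evaluations.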

16 pages, 2109 KiB  
Article
Feasibility Analysis and Application of Reinforcement Learning Algorithm Based on Dynamic Parameter Adjustment
by Menglin Li, Xueqiang Gu, Chengyi Zeng and Yuan Feng
Algorithms 2020, 13(9), 239; https://doi.org/10.3390/a13090239 - 22 Sep 2020
Cited by 2 | Viewed by 2066
Abstract
Reinforcement learning, as a branch of machine learning, has gradually been applied in the control field. However, in practical applications of the algorithm, the hyperparameter settings of deep reinforcement learning networks still follow the empirical practice of traditional machine learning (supervised learning and unsupervised learning). This practice ignores part of the information generated by the agent's exploration of the environment, which is contained in the updates of the reinforcement learning value function and which affects the convergence and cumulative return of reinforcement learning. The reinforcement learning algorithm based on dynamic parameter adjustment is a new method for setting the learning rate of deep reinforcement learning. Starting from the traditional way of setting reinforcement learning parameters, this method analyzes the advantages of different learning rates at different stages of reinforcement learning and dynamically adjusts the learning rate in combination with the temporal-difference (TD) error values, so as to exploit the advantages of different learning rates at different stages and improve the rationality of the algorithm in practical applications. At the same time, by combining the Robbins–Monro approximation algorithm with the deep reinforcement learning algorithm, it is proved that the dynamically regulated learning rate can theoretically meet the convergence requirements of the intelligent control algorithm. In the experiments, the effect of this method is analyzed in the continuous control scenario of the standard "Car-on-the-Hill" reinforcement learning environment, and it is verified that the new method can achieve better results than traditional reinforcement learning in practical applications.
According to the model characteristics of deep reinforcement learning, a more suitable method for setting the learning rate of the deep reinforcement learning network is proposed. At the same time, the feasibility of the method is proved both in theory and in application. Therefore, this method of setting the learning rate parameter is worthy of further development and research. Full article
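A minimal sketch of the dynamic learning-rate idea, assuming a tabular Q-learning update whose rate grows with the magnitude of the TD error. The scaling rule and the bounds `lr_min`/`lr_max` are illustrative assumptions, not the authors' exact scheme:

```python
def td_error_adaptive_lr(q, state, action, reward, next_state,
                         gamma=0.99, lr_min=0.01, lr_max=0.5):
    """One tabular Q-learning update with a TD-error-driven learning rate.

    Large TD errors (poorly learned states, typically early in training)
    get a rate near lr_max; small errors get a rate near lr_min.
    q maps each state to a list of action values.
    """
    td_error = reward + gamma * max(q[next_state]) - q[state][action]
    # Squash |TD error| into [lr_min, lr_max]; the cap at 1.0 is arbitrary.
    lr = lr_min + (lr_max - lr_min) * min(abs(td_error), 1.0)
    q[state][action] += lr * td_error
    return lr, td_error
```

A fixed learning rate would apply the same step to well-learned and badly-learned states alike; here the step size tracks how surprising each transition is.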

26 pages, 7092 KiB  
Article
Study on Multi-Objective Optimization-Based Climate Responsive Design of Residential Building
by Zhixing Li, Paolo Vincenzo Genovese and Yafei Zhao
Algorithms 2020, 13(9), 238; https://doi.org/10.3390/a13090238 - 21 Sep 2020
Cited by 8 | Viewed by 3126
Abstract
This paper proposes an optimization process based on a parametric platform for building climate responsive design. Taking residential buildings in six typical American cities as examples, it proposes thermal environment comfort (Discomfort Hour, DH), building energy demand (BED), and building global cost (GC) as the objective functions for optimization. The design variables concern building orientation, envelope components, window types, etc. The optimal solution is provided from two different perspectives: that of the public sector (energy-saving optimal) and that of private households (cost-optimal). By comparing the optimization results with the performance indicators of the reference buildings in the various cities, the outcome gives valuable indications for rebuilding U.S. residential buildings with a view to energy efficiency and cost optimality depending on the location. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Engineering Optimization Problems)
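The multi-objective selection underlying such optimization can be sketched with a plain Pareto-dominance filter, assuming all objectives (e.g. DH, BED, GC) are minimized:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

Metaheuristic optimizers return such a front; picking the energy-saving-optimal or the cost-optimal point from it corresponds to the two perspectives discussed above.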

22 pages, 750 KiB  
Article
A Simheuristic Algorithm for Solving the Stochastic Omnichannel Vehicle Routing Problem with Pick-up and Delivery
by Leandro do C. Martins, Christopher Bayliss, Pedro J. Copado-Méndez, Javier Panadero and Angel A. Juan
Algorithms 2020, 13(9), 237; https://doi.org/10.3390/a13090237 - 19 Sep 2020
Cited by 5 | Viewed by 3838
Abstract
Advances in information and communication technologies have made possible the emergence of new shopping channels. The so-called ‘omnichannel’ retailing mode allows customers to shop for products online and receive them at home. This paper focuses on the omnichannel delivery concept for the retailing industry, which addresses the replenishment of a set of retail stores and the direct shipment of the products to customers within an integrated vehicle routing formulation. Due to its NP-hardness, a constructive heuristic, which is extended into a biased-randomized heuristic and embedded into a multi-start procedure, is introduced for solving large-sized instances of the problem. Next, the problem is enriched by considering a more realistic scenario in which travel times are modeled as random variables. For dealing with the stochastic version of the problem, a simheuristic algorithm is proposed. A series of computational experiments illustrates how our simheuristic can provide reliable and low-cost solutions under uncertain conditions. Full article
(This article belongs to the Special Issue Simulation-Optimization in Logistics, Transportation, and SCM)

4 pages, 170 KiB  
Editorial
Special Issue “New Frontiers in Parameterized Complexity and Algorithms”: Foreword by the Guest Editors
by Neeldhara Misra, Frances Rosamond and Meirav Zehavi
Algorithms 2020, 13(9), 236; https://doi.org/10.3390/a13090236 - 18 Sep 2020
Viewed by 2277
Abstract
This Special Issue contains eleven articles—surveys and research papers—that represent fresh and ambitious new directions in the area of Parameterized Complexity. They provide ground-breaking research at the frontiers of knowledge, and they contribute to bridging the gap between theory and practice. The scope and impact of the field continues to increase. Promising avenues and new research challenges are highlighted in this Special Issue. Full article
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)
16 pages, 1542 KiB  
Article
A Mixed-Integer and Asynchronous Level Decomposition with Application to the Stochastic Hydrothermal Unit-Commitment Problem
by Bruno Colonetti, Erlon Cristian Finardi and Welington de Oliveira
Algorithms 2020, 13(9), 235; https://doi.org/10.3390/a13090235 - 18 Sep 2020
Cited by 3 | Viewed by 1992
Abstract
Independent System Operators (ISOs) worldwide face the ever-increasing challenge of coping with uncertainties, which requires sophisticated algorithms for solving unit-commitment (UC) problems of increasing complexity in less-and-less time. Hence, decomposition methods are appealing options to produce easier-to-handle problems that can hopefully return good solutions in reasonable time. When applied to two-stage stochastic models, decomposition often yields subproblems that are embarrassingly parallel. Synchronous parallel-computing techniques are applied to the decomposable subproblem and frequently result in considerable time savings. However, due to the inherent run-time differences amongst the subproblem’s optimization models, unequal equipment, and communication overheads, synchronous approaches may underuse the computing resources. Consequently, asynchronous computing constitutes a natural enhancement to existing methods. In this work, we propose a novel extension of the asynchronous level decomposition to solve stochastic hydrothermal UC problems with mixed-integer variables in the first stage. In addition, we combine this novel method with an efficient task allocation to yield an innovative algorithm that far outperforms the current state-of-the-art. We provide convergence analysis of our proposal and assess its computational performance on a testbed consisting of 54 problems from a 46-bus system. Results show that our asynchronous algorithm outperforms its synchronous counterpart in terms of wall-clock computing time in 40% of the problems, providing time savings averaging about 45%, while also reducing the standard deviation of running times over the testbed on the order of 25%. Full article
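The asynchronous processing pattern, in which the master consumes subproblem results as they arrive instead of waiting for the slowest worker of each round, can be sketched with Python's `concurrent.futures`. This is a toy analogue only; `solve` stands in for a hypothetical scenario-subproblem oracle returning a cut or value:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def solve_scenarios_async(subproblems, solve):
    """Process scenario subproblems as they finish rather than in lockstep,
    so fast subproblems do not idle while waiting on slow ones."""
    results = {}
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(solve, s): i for i, s in enumerate(subproblems)}
        for fut in as_completed(futures):
            # In a level method, the master could update its model here
            # immediately, before the remaining subproblems return.
            results[futures[fut]] = fut.result()
    return results
```

A synchronous scheme would instead barrier on all futures per iteration, which is exactly where the run-time imbalance described above wastes resources.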

9 pages, 252 KiB  
Article
More Time-Space Tradeoffs for Finding a Shortest Unique Substring
by Hideo Bannai, Travis Gagie, Gary Hoppenworth, Simon J. Puglisi and Luís M. S. Russo
Algorithms 2020, 13(9), 234; https://doi.org/10.3390/a13090234 - 18 Sep 2020
Cited by 2 | Viewed by 2240
Abstract
We extend recent results regarding finding shortest unique substrings (SUSs) to obtain new time-space tradeoffs for this problem and the generalization of finding k-mismatch SUSs. Our new results include the first algorithm for finding a k-mismatch SUS in sublinear space, which we obtain by extending an algorithm by Senanayaka (2019) and combining it with a result on sketching by Gawrychowski and Starikovskaya (2019). We first describe how, given a text T of length n and m words of workspace, with high probability we can find an SUS of length L in O(n(L/m) log L) time using random access to T, or in O(n(L/m) log² L log log σ) time using O((L/m) log² L) sequential passes over T. We then describe how, for constant k, with high probability, we can find a k-mismatch SUS in O(n^(1+ε) L/m) time using O(n^ε L/m) sequential passes over T, again using only m words of workspace. Finally, we also describe a deterministic algorithm that takes O(nτ log σ log n) time to find an SUS using O(n/τ) words of workspace, where τ is a parameter. Full article
(This article belongs to the Special Issue Algorithms in Bioinformatics)
13 pages, 315 KiB  
Article
A Brain-Inspired Hyperdimensional Computing Approach for Classifying Massive DNA Methylation Data of Cancer
by Fabio Cumbo, Eleonora Cappelli and Emanuel Weitschek
Algorithms 2020, 13(9), 233; https://doi.org/10.3390/a13090233 - 17 Sep 2020
Cited by 7 | Viewed by 3321
Abstract
The recent advancements in cancer genomics have put under the spotlight DNA methylation, a genetic modification that regulates the functioning of the genome and whose alterations play an important role in tumorigenesis and tumor suppression. Because of the high dimensionality and the enormous amount of genomic data produced through the latest advancements in Next Generation Sequencing, it is very challenging to effectively make use of DNA methylation data in diagnostic applications, e.g., in the identification of healthy vs. diseased samples. Additionally, state-of-the-art techniques are neither fast enough to rapidly produce reliable results nor efficient enough to manage those massive amounts of data. For this reason, we propose HD-classifier, an in-memory, cognitive-based, hyperdimensional (HD) supervised machine learning algorithm for the classification of tumor vs. non-tumor samples through the analysis of their DNA methylation data. The approach takes inspiration from how the human brain is able to remember and distinguish simple and complex concepts by adopting hypervectors rather than single numerical values. Exactly as the brain works, this allows for encoding complex patterns, which makes the whole architecture robust to failures and mistakes, even with noisy data. We design and develop an algorithm and a software tool that are able to perform supervised classification with the HD approach. We conduct experiments on three DNA methylation datasets of different types of cancer in order to prove the validity of our algorithm, i.e., Breast Invasive Carcinoma (BRCA), Kidney Renal Papillary Cell Carcinoma (KIRP), and Thyroid Carcinoma (THCA). We obtain outstanding results in terms of accuracy and computational time with a low amount of computational resources.
Furthermore, we validate our approach by comparing it (i) to BIGBIOCL, a software based on Random Forest for classifying big omics datasets in distributed computing environments, (ii) to Support Vector Machine (SVM), and (iii) to Decision Tree state-of-the-art classification methods. Finally, we freely release both the datasets and the software on GitHub. Full article
(This article belongs to the Special Issue Algorithms in Bioinformatics)
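The basic HD encode-and-compare cycle can be sketched as follows. This is a simplified, generic illustration of hyperdimensional classification with bipolar hypervectors, not the HD-classifier tool itself; the random basis, dimension, and encoding rule are assumptions for the example:

```python
import random

def hd_encode(x, basis):
    """Bipolar encoding: sign of the feature-weighted sum of basis vectors."""
    dim = len(basis[0])
    acc = [sum(xi * b[j] for xi, b in zip(x, basis)) for j in range(dim)]
    return [1 if a >= 0 else -1 for a in acc]

def hd_fit(samples, labels, n_features, dim=2000, seed=0):
    """Each feature index gets a random bipolar hypervector; a class
    prototype is the elementwise sum of its training samples' encodings."""
    rng = random.Random(seed)
    basis = [[rng.choice((-1, 1)) for _ in range(dim)] for _ in range(n_features)]
    protos = {}
    for x, y in zip(samples, labels):
        hv = hd_encode(x, basis)
        p = protos.setdefault(y, [0] * dim)
        for j in range(dim):
            p[j] += hv[j]
    return basis, protos

def hd_predict(x, basis, protos):
    """Classify by maximum dot-product similarity to a class prototype."""
    hv = hd_encode(x, basis)
    return max(protos, key=lambda y: sum(h * p for h, p in zip(hv, protos[y])))
```

Because each concept is spread over thousands of ±1 components, flipping a few of them barely moves the similarity score, which is the robustness property described above.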

11 pages, 2891 KiB  
Article
A Comparison of Ensemble and Dimensionality Reduction DEA Models Based on Entropy Criterion
by Parag C. Pendharkar
Algorithms 2020, 13(9), 232; https://doi.org/10.3390/a13090232 - 16 Sep 2020
Viewed by 1934
Abstract
Dimensionality reduction research in data envelopment analysis (DEA) has focused on subjective approaches to reduce dimensionality. Such approaches are less useful or attractive in practice because a subjective selection of variables introduces bias. A competing unbiased approach would be to use ensemble DEA scores. This paper illustrates that in addition to unbiased evaluations, the ensemble DEA scores result in unique rankings that have high entropy. Under restrictive assumptions, it is also shown that the ensemble DEA scores are normally distributed. Ensemble models do not require any new modifications to existing DEA objective functions or constraints, and when ensemble scores are normally distributed, returns-to-scale hypothesis testing can be carried out using traditional parametric statistical techniques. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)
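The ensemble idea and the entropy criterion can be sketched in a few lines, under two illustrative assumptions: scores are simply averaged across DEA model variants, and entropy is computed over the tie structure of the resulting scores (more distinct ranks, higher entropy, better discrimination):

```python
import math
from collections import Counter

def ensemble_scores(score_matrix):
    """Average efficiency scores across DEA model variants.
    Rows are decision-making units; columns are model variants."""
    return [sum(row) / len(row) for row in score_matrix]

def rank_entropy(scores):
    """Shannon entropy of the score multiset: many tied scores give low
    entropy (poor discrimination), all-distinct scores give the maximum."""
    counts = Counter(scores)
    n = len(scores)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Standard DEA often scores several units at exactly 1.0 (all efficient), which this entropy measure penalizes; averaged ensemble scores tend to break such ties.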

11 pages, 309 KiB  
Article
A Class of Spline Functions for Solving 2-Order Linear Differential Equations with Boundary Conditions
by Chengzhi Liu, Xuli Han and Juncheng Li
Algorithms 2020, 13(9), 231; https://doi.org/10.3390/a13090231 - 15 Sep 2020
Viewed by 1875
Abstract
In this paper, we exploit a numerical method for solving second-order differential equations with boundary conditions. Based on the theory of the analytic solution, a series of spline functions is presented to find approximate solutions, and one of them is selected automatically to approximate the solution. Compared with other methods, we only need to solve a tri-diagonal system, which is much easier to implement. The method has the advantages of high precision and low computational cost. The analysis of the local truncation error is also discussed in this paper. Finally, some numerical examples are given to illustrate the effectiveness of the proposed method. Full article
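A tridiagonal system of the kind mentioned above is typically solved in O(n) with the Thomas algorithm; a sketch, assuming a well-conditioned (e.g. diagonally dominant) matrix, as is usual for spline collocation:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal linear system in O(n) by forward elimination
    and back substitution (Thomas algorithm).

    a: sub-diagonal (length n, a[0] unused), b: main diagonal (length n),
    c: super-diagonal (length n, c[-1] unused), d: right-hand side.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]  # pivot after eliminating a[i]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

This linear cost, versus cubic for a dense solve, is what makes the spline approach computationally cheap.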

26 pages, 1922 KiB  
Article
Simulated Annealing with Exploratory Sensing for Global Optimization
by Majid Almarashi, Wael Deabes, Hesham H. Amin and Abdel-Rahman Hedar
Algorithms 2020, 13(9), 230; https://doi.org/10.3390/a13090230 - 12 Sep 2020
Cited by 7 | Viewed by 2424
Abstract
Simulated annealing is a well-known search algorithm with a long history of success in many search problems. However, the random walk of simulated annealing does not benefit from the memory of visited states, causing excessive random search with no diversification history. Unlike memory-based search algorithms such as tabu search, the search in simulated annealing depends on the choice of the initial temperature to explore the search space, which gives little indication of how much exploration has been carried out. This lack of exploration awareness can affect the quality of the solutions found, since the nature of the search in simulated annealing is mainly local. In this work, a two-phase methodology using automatic diversification and intensification based on memory and sensing tools is proposed. The proposed method is called Simulated Annealing with Exploratory Sensing. The computational experiments show the efficiency of the proposed method in ensuring good exploration while finding good solutions within a similar number of iterations. Full article
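For reference, the memoryless baseline loop reads roughly as follows (geometric cooling on a 1-D function; the paper's contribution is to augment such a loop with memory and sensing tools, which is not shown here):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, alpha=0.95, iters=2000, seed=1):
    """Plain simulated annealing: random perturbations with Boltzmann
    acceptance and a geometrically decreasing temperature."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        # Accept improvements always; worse moves with probability exp(-df/t).
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha
    return best, fbest
```

Note that nothing in this loop records where the walk has already been; that is exactly the gap the exploratory-sensing extension targets.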

17 pages, 506 KiB  
Article
A Jacobi–Davidson Method for Large Scale Canonical Correlation Analysis
by Zhongming Teng and Xiaowei Zhang
Algorithms 2020, 13(9), 229; https://doi.org/10.3390/a13090229 - 12 Sep 2020
Cited by 1 | Viewed by 1987
Abstract
In the large-scale canonical correlation analysis arising from multi-view learning applications, one needs to compute canonical weight vectors corresponding to a few of the largest canonical correlations. For such a task, we propose a Jacobi–Davidson type algorithm that calculates canonical weight vectors by transforming the task into the so-called canonical correlation generalized eigenvalue problem. Convergence results are established and reveal the accuracy of the approximate canonical weight vectors. Numerical examples are presented to support the effectiveness of the proposed method. Full article

19 pages, 1369 KiB  
Article
Online Topology Inference from Streaming Stationary Graph Signals with Partial Connectivity Information
by Rasoul Shafipour and Gonzalo Mateos
Algorithms 2020, 13(9), 228; https://doi.org/10.3390/a13090228 - 09 Sep 2020
Cited by 17 | Viewed by 2635
Abstract
We develop online graph learning algorithms from streaming network data. Our goal is to track the (possibly) time-varying network topology, and effect memory and computational savings by processing the data on-the-fly as they are acquired. The setup entails observations modeled as stationary graph signals generated by local diffusion dynamics on the unknown network. Moreover, we may have a priori information on the presence or absence of a few edges as in the link prediction problem. The stationarity assumption implies that the observations’ covariance matrix and the so-called graph shift operator (GSO—a matrix encoding the graph topology) commute under mild requirements. This motivates formulating the topology inference task as an inverse problem, whereby one searches for a sparse GSO that is structurally admissible and approximately commutes with the observations’ empirical covariance matrix. For streaming data, said covariance can be updated recursively, and we show online proximal gradient iterations can be brought to bear to efficiently track the time-varying solution of the inverse problem with quantifiable guarantees. Specifically, we derive conditions under which the GSO recovery cost is strongly convex and use this property to prove that the online algorithm converges to within a neighborhood of the optimal time-varying batch solution. Numerical tests illustrate the effectiveness of the proposed graph learning approach in adapting to streaming information and tracking changes in the sought dynamic network. Full article
(This article belongs to the Special Issue Efficient Graph Algorithms in Machine Learning)
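Two building blocks of such an online scheme can be sketched directly: the recursive empirical-covariance update and the proximal (soft-thresholding) step that promotes a sparse GSO. These are illustrative fragments, not the full algorithm:

```python
def update_covariance(C, x, t):
    """Recursive empirical covariance after t samples:
    C_t = ((t - 1) * C_{t-1} + x x^T) / t, so no data need be stored."""
    n = len(x)
    return [[((t - 1) * C[i][j] + x[i] * x[j]) / t for j in range(n)]
            for i in range(n)]

def soft_threshold(v, tau):
    """Proximal operator of the l1 norm: shrink each entry toward zero.
    Applied entrywise after a gradient step, it yields sparse iterates."""
    return [x - tau if x > tau else x + tau if x < -tau else 0.0 for x in v]
```

In the full method, each new sample updates the covariance, a gradient step reduces the commutation residual with the current covariance estimate, and soft-thresholding keeps the shift-operator estimate sparse.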

17 pages, 780 KiB  
Article
An Image Hashing Algorithm for Authentication with Multi-Attack Reference Generation and Adaptive Thresholding
by Ling Du, Zehong He, Yijing Wang, Xiaochao Wang and Anthony T. S. Ho
Algorithms 2020, 13(9), 227; https://doi.org/10.3390/a13090227 - 08 Sep 2020
Cited by 9 | Viewed by 2766
Abstract
Image hashing-based authentication methods have been widely studied, with continuous advancements owing to their speed and memory efficiency. However, reference hash generation and threshold setting, which are used for similarity measures between original images and their distorted versions, are important but less considered by most existing models. In this paper, we propose an image hashing method based on multi-attack reference generation and adaptive thresholding for image authentication. We build the prior information set with the help of multiple virtual prior attacks, and present a multi-attack reference generation method based on hashing clusters. The perceptual hashing algorithm is applied to the reference/queried image to obtain the hashing codes for authentication. Furthermore, we introduce the concept of adaptive thresholding to account for variations in hashing distance. Extensive experiments on benchmark datasets have validated the effectiveness of our proposed method. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
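The distance test at the heart of hash-based authentication, with a threshold adapted from virtual attacks on the original, can be sketched as follows. The mean-plus-k-standard-deviations rule is an illustrative assumption, not the paper's exact adaptive scheme:

```python
import statistics

def hamming(h1, h2):
    """Number of differing bits between two binary hash sequences."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

def adaptive_threshold(ref_hash, attacked_hashes, k=2.0):
    """Derive an authentication threshold from virtual attacks: hash several
    content-preserving distortions of the original, then accept queries whose
    distance to the reference is at most mean + k standard deviations."""
    dists = [hamming(ref_hash, h) for h in attacked_hashes]
    return statistics.mean(dists) + k * statistics.pstdev(dists)
```

A query image is then authenticated when `hamming(ref_hash, query_hash)` falls below the returned threshold; images whose threshold is derived from more variable attacks tolerate larger distances.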

14 pages, 1064 KiB  
Article
Spatially Adaptive Regularization in Image Segmentation
by Laura Antonelli, Valentina De Simone and Daniela di Serafino
Algorithms 2020, 13(9), 226; https://doi.org/10.3390/a13090226 - 08 Sep 2020
Cited by 8 | Viewed by 2347
Abstract
We present a total-variation-regularized image segmentation model that uses local regularization parameters to take into account spatial image information. We propose some techniques for defining those parameters, based on the cartoon-texture decomposition of the given image, on the mean and median filters, and on a thresholding technique, with the aim of preventing excessive regularization in piecewise-constant or smooth regions and preserving spatial features in nonsmooth regions. Our model is obtained by modifying a well-known image segmentation model that was developed by T. Chan, S. Esedoğlu, and M. Nikolova. We solve the modified model by an alternating minimization method using split Bregman iterations. Numerical experiments show the effectiveness of our approach. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Algorithms Editorial Board Members)

18 pages, 384 KiB  
Article
A Linear-Time Algorithm for the Isometric Reconciliation of Unrooted Trees
by Broňa Brejová and Rastislav Královič
Algorithms 2020, 13(9), 225; https://doi.org/10.3390/a13090225 - 08 Sep 2020
Cited by 1 | Viewed by 2144
Abstract
In the reconciliation problem, we are given two phylogenetic trees. A species tree represents the evolutionary history of a group of species, and a gene tree represents the history of a family of related genes within these species. A reconciliation maps nodes of the gene tree to the corresponding points of the species tree, and thus helps to interpret the gene family history. In this paper, we study the case when both trees are unrooted and their edge lengths are known exactly. The goal is to root them and to find a reconciliation that agrees with the edge lengths. We show a linear-time algorithm for finding the set of all possible root locations, which is a significant improvement compared to the previous O(N³ log N) algorithm. Full article
(This article belongs to the Special Issue Algorithms in Bioinformatics)

18 pages, 404 KiB  
Article
A Survey on Shortest Unique Substring Queries
by Paniz Abedin, M. Oğuzhan Külekci and Shama V. Thankachan
Algorithms 2020, 13(9), 224; https://doi.org/10.3390/a13090224 - 06 Sep 2020
Cited by 4 | Viewed by 3073
Abstract
The shortest unique substring (SUS) problem is an active line of research in the field of string algorithms and has several applications in bioinformatics and information retrieval. The initial version of the problem was proposed by Pei et al. [ICDE’13]. Over the years, many variants and extensions have been pursued, which include positional-SUS, interval-SUS, approximate-SUS, palindromic-SUS, range-SUS, etc. In this article, we highlight some of the key results and summarize the recent developments in this area. Full article
(This article belongs to the Special Issue Algorithms in Bioinformatics)
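For readers new to the problem, the positional variant can be stated operationally with a brute-force baseline that the surveyed algorithms improve upon by orders of magnitude. A minimal, illustrative Python sketch (not from any surveyed paper; O(n³) per query):

```python
def shortest_unique_substring(s, p):
    # Return the shortest substring of s that occurs exactly once in s
    # and covers position p (0-indexed). Brute force: O(n^3) per query.
    n = len(s)
    best = None
    for i in range(p + 1):
        for j in range(p + 1, n + 1):
            sub = s[i:j]
            # count occurrences, including overlapping ones
            occ = sum(1 for k in range(n - len(sub) + 1) if s.startswith(sub, k))
            if occ == 1 and (best is None or len(sub) < len(best)):
                best = sub
    return best

sus = shortest_unique_substring("abcbab", 2)  # "c": the only character occurring once
```

The surveyed data structures answer such queries in constant or near-constant time after linear preprocessing.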
27 pages, 1909 KiB  
Article
Solving the Urban Transit Routing Problem Using a Cat Swarm Optimization-Based Algorithm
by Iosif V. Katsaragakis, Ioannis X. Tassopoulos and Grigorios N. Beligiannis
Algorithms 2020, 13(9), 223; https://doi.org/10.3390/a13090223 - 06 Sep 2020
Cited by 10 | Viewed by 3290
Abstract
Presented in this research paper is an attempt to apply a cat swarm optimization (CSO)-based algorithm to the urban transit routing problem (UTRP). Using the proposed algorithm, we can attain feasible and efficient (near) optimal route sets for public transportation networks. It is, to our knowledge, the first time that a cat swarm optimization (CSO)-based algorithm has been applied to cope with this specific problem. The algorithm's efficiency and excellent performance are demonstrated by conducting experiments with both real-world and artificial data. These data have also been used as test instances by other researchers in their publications. Computational results reveal that the proposed cat swarm optimization (CSO)-based algorithm exhibits better performance, using the same evaluation criteria, compared to most of the other existing approaches applied to the same test instances. The difference of the proposed algorithm in comparison with other published approaches lies in its main process, which is a modification of the classic cat swarm optimization (CSO) algorithm applied to solve the urban transit routing problem. This modification, together with a variation of the initialization process and the enrichment of the algorithm with a process for improving the final solution, constitutes the innovations of this contribution. The UTRP is studied from both the passenger and provider sides of interest, and the algorithm is applied in both cases with the necessary modifications. Full article
(This article belongs to the Special Issue Hybrid Intelligent Algorithms)
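As background, the classic continuous CSO metaheuristic that the paper modifies alternates each cat between a local "seeking" mode and a best-following "tracing" mode. The sketch below is a generic toy version minimizing the sphere function; it is not the authors' discrete UTRP variant, and all parameter names and values are illustrative:

```python
import random

def cso_minimize(f, dim, n_cats=10, iters=100, mixture_ratio=0.2, srd=0.2, seed=1):
    # Cats start at random positions; velocities are kept for tracing mode.
    rng = random.Random(seed)
    cats = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_cats)]
    vel = [[0.0] * dim for _ in range(n_cats)]
    best = list(min(cats, key=f))
    for _ in range(iters):
        for i, cat in enumerate(cats):
            if rng.random() < mixture_ratio:
                # Tracing mode: accelerate toward the best cat seen so far.
                for d in range(dim):
                    vel[i][d] += rng.random() * 2.0 * (best[d] - cat[d])
                    cat[d] += vel[i][d]
            else:
                # Seeking mode: evaluate perturbed copies, keep the best one.
                copies = [cat] + [[x * (1 + rng.uniform(-srd, srd)) for x in cat]
                                  for _ in range(4)]
                cats[i] = min(copies, key=f)
        best = list(min(cats + [best], key=f))  # copy: cats mutate in place
    return best, f(best)

sol, val = cso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

The paper's variant replaces the continuous moves with operators on route sets and adds a modified initialization and a final solution-improvement step.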
19 pages, 2232 KiB  
Article
Detecting Traffic Incidents Using Persistence Diagrams
by Eric S. Weber, Steven N. Harding and Lee Przybylski
Algorithms 2020, 13(9), 222; https://doi.org/10.3390/a13090222 - 05 Sep 2020
Cited by 3 | Viewed by 2501
Abstract
We introduce a novel methodology for anomaly detection in time-series data. The method uses persistence diagrams and bottleneck distances to identify anomalies. Specifically, we generate multiple predictors by randomly bagging the data (reference bags) and then, for each data point, substituting that point for a randomly chosen point in each bag (modified bags). The predictors are then the set of bottleneck distances for the reference/modified bag pairs. We prove the stability of the predictors as the number of bags increases. We apply our methodology to traffic data and measure its performance in identifying known incidents. Full article
(This article belongs to the Special Issue Topological Data Analysis)
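The bagging scheme can be sketched independently of the topological machinery. In the toy version below, a max-gap statistic on sorted bag values stands in for the bottleneck distance between the persistence diagrams of the reference and modified bags (computing real diagrams requires a TDA library); the structure of the predictor is what the sketch illustrates:

```python
import random

def anomaly_scores(series, n_bags=20, bag_size=10, seed=0):
    rng = random.Random(seed)

    def summary_dist(a, b):
        # Placeholder for the bottleneck distance between the persistence
        # diagrams of the two bags: max gap between sorted values.
        sa, sb = sorted(a), sorted(b)
        return max(abs(x - y) for x, y in zip(sa, sb))

    bags = [rng.sample(range(len(series)), bag_size) for _ in range(n_bags)]
    scores = []
    for value in series:
        total = 0.0
        for idx in bags:
            ref = [series[i] for i in idx]        # reference bag
            mod = ref[:]
            mod[rng.randrange(bag_size)] = value  # modified bag
            total += summary_dist(ref, mod)
        scores.append(total / n_bags)             # averaged predictor
    return scores

data = [0.0] * 30
data[15] = 10.0  # one injected spike
scores = anomaly_scores(data)
```

A point that perturbs many bags (like the spike) receives a much larger average distance than ordinary points.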
27 pages, 1156 KiB  
Article
EEG Feature Extraction Using Genetic Programming for the Classification of Mental States
by Emigdio Z-Flores, Leonardo Trujillo, Pierrick Legrand and Frédérique Faïta-Aïnseba
Algorithms 2020, 13(9), 221; https://doi.org/10.3390/a13090221 - 03 Sep 2020
Cited by 4 | Viewed by 2990
Abstract
The design of efficient electroencephalogram (EEG) classification systems for the detection of mental states is still an open problem. Such systems can be used to provide assistance to humans in tasks where a certain level of alertness is required, like in surgery or in the operation of heavy machines, among others. In this work, we extend a previous study where a classification system is proposed using a Common Spatial Pattern (CSP) and Linear Discriminant Analysis (LDA) for the classification of two mental states, namely a relaxed and a normal state. Here, we propose an enhanced feature extraction algorithm (Augmented Feature Extraction with Genetic Programming, or +FEGP) that improves upon previous results by employing a Genetic-Programming-based methodology on top of the CSP. The proposed algorithm searches for non-linear transformations that build new features and simplify the classification task. Although the proposed algorithm can be coupled with any classifier, LDA achieves 78.8% accuracy, the best predictive accuracy among tested classifiers, significantly improving upon previously published results on the same real-world dataset. Full article
(This article belongs to the Special Issue Genetic Programming)
19 pages, 3053 KiB  
Article
Fuzzy Preference Programming Framework for Functional Assessment of Subway Networks
by Mona Abouhamad and Tarek Zayed
Algorithms 2020, 13(9), 220; https://doi.org/10.3390/a13090220 - 03 Sep 2020
Cited by 2 | Viewed by 2218
Abstract
The 2019 Canadian Infrastructure report card identified 60% of the subway system to be in very poor to poor condition. With multiple assets competing for limited funds, new methodologies are required to prioritize assets for rehabilitation. The report suggested that adopting an Asset Management Plan would assist municipalities in maintaining and operating infrastructure effectively. ISO 55000 emphasized the importance of risk assessment in assessing the value of an organization's assets. Subway risk assessment models mainly focus on structural failures, with minimal focus on functional failure impacts and network criticality attributes. This research presents two modules to measure the functional failure impacts of a subway network from financial, social, and operational perspectives, in addition to station criticality. The model uses the Fuzzy Analytical Network Process with application of Fuzzy Preference Programming to calculate the weights for seven failure impact attributes and seven criticality attributes. Data are collected using questionnaires and unstructured/structured interviews with municipality personnel. The analysis identified social impacts as having the highest score of 38%, followed by operational and financial impacts at 34% and 27.65%, respectively. The subway station criticality analysis revealed station location to have the highest impact at 35%, followed by station nature of use and station characteristics at 30.5% and 31.82%, respectively. When integrated with the probability of failure, this model provides a comprehensive risk index to prioritize stations for rehabilitation. Full article
(This article belongs to the Special Issue Fuzzy Hybrid Systems for Construction Engineering and Management)
23 pages, 332 KiB  
Article
Relaxed Rule-Based Learning for Automated Predictive Maintenance: Proof of Concept
by Margarita Razgon and Alireza Mousavi
Algorithms 2020, 13(9), 219; https://doi.org/10.3390/a13090219 - 03 Sep 2020
Cited by 4 | Viewed by 2617 | Correction
Abstract
In this paper, we propose a novel rule learning approach called Relaxed Separate-and-Conquer (RSC): a modification of the standard Separate-and-Conquer (SeCo) methodology that does not require elimination of covered rows. This method can be seen as a generalization of SeCo and weighted covering that does not suffer from fragmentation. We present an empirical investigation of the proposed RSC approach in the area of Predictive Maintenance (PdM) of complex manufacturing machines, to predict forthcoming failures of these machines. In particular, our experiments use a real industrial case study of a machine that manufactures plastic bottles. We compare the RSC approach with Decision Tree (DT)-based and SeCo algorithms and demonstrate that RSC significantly outperforms both the DT-based and SeCo rule learners. We conclude that the proposed RSC approach is promising for PdM guided by rule learning. Full article
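The difference between SeCo and the relaxed variant can be shown with a toy covering loop over single-attribute equality rules; the only change is whether rows covered by an accepted rule are eliminated before the next search. This is an illustrative sketch, not the authors' learner (their rule search and scoring are more elaborate):

```python
def learn_rules(rows, labels, n_rules=3, relaxed=True):
    remaining = list(range(len(rows)))
    rules = []
    for _ in range(n_rules):
        best, best_score = None, -1.0
        for a in range(len(rows[0])):
            for v in {rows[i][a] for i in remaining}:
                if (a, v) in rules:
                    continue  # do not rediscover an accepted rule
                covered = [i for i in remaining if rows[i][a] == v]
                if not covered:
                    continue
                precision = sum(labels[i] for i in covered) / len(covered)
                score = precision * len(covered)  # crude quality heuristic
                if score > best_score:
                    best, best_score = (a, v), score
        if best is None:
            break
        rules.append(best)
        if not relaxed:
            # Classic SeCo: eliminate rows covered by the accepted rule.
            a, v = best
            remaining = [i for i in remaining if rows[i][a] != v]
            if not remaining:
                break
    return rules

rows = [(1, 0), (1, 1), (0, 0), (0, 1)]
labels = [1, 1, 0, 0]
rsc_rules = learn_rules(rows, labels, relaxed=True)    # keeps all rows in play
seco_rules = learn_rules(rows, labels, relaxed=False)  # shrinks the data per rule
```

In the relaxed run every rule is scored against the full dataset, which is the RSC idea of avoiding fragmentation.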
14 pages, 918 KiB  
Article
A Simulated Annealing Algorithm for Solving Two-Echelon Vehicle Routing Problem with Locker Facilities
by A. A. N. Perwira Redi, Parida Jewpanya, Adji Candra Kurniawan, Satria Fadil Persada, Reny Nadlifatin and Oki Anita Candra Dewi
Algorithms 2020, 13(9), 218; https://doi.org/10.3390/a13090218 - 03 Sep 2020
Cited by 18 | Viewed by 4259
Abstract
We consider the problem of utilizing a parcel locker network as a logistics solution in the metropolitan area. Two-echelon distribution systems are attractive from an economic standpoint, whereby products from the depot can be distributed from or to intermediate facilities. In this case, the intermediate facilities are locker facilities placed in accessible locations in the vicinity of the final customers. In addition, the utilization of locker facilities can reduce the cost caused by unattended deliveries. The problem is addressed as an optimization model formulated as an integer linear program, denoted the two-echelon vehicle routing problem with locker facilities (2EVRP-LF). The objective is to minimize the transportation cost with regard to the vehicle travelling cost, the renting cost of the intermediate facilities, and the additional cost that compensates customers who need to travel to access the intermediate facilities. Because of its complexity, a simulated annealing algorithm is proposed to solve the problem. Alternatively, the modelling can be carried out as a two-phase optimization approach consisting of the p-median problem and the capacitated vehicle routing problem. The results of both methods are compared in numerical experiments, which show the effectiveness of 2EVRP-LF compared to the two-phase optimization. Furthermore, the simulated annealing algorithm shows effective performance in solving 2EVRP-LF. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
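A generic simulated annealing skeleton of the kind applied here can be sketched on a plain routing objective; the paper's algorithm applies the same propose/accept/cool loop to the 2EVRP-LF cost (vehicle travel, locker renting, and customer compensation), which is omitted below for brevity. All parameter values are illustrative:

```python
import math
import random

def simulated_annealing(dist, t0=10.0, cooling=0.995, iters=5000, seed=42):
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    cur = length(tour)
    best, best_cost = tour[:], cur
    temp = t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        tour[i], tour[j] = tour[j], tour[i]      # propose a 2-swap
        new = length(tour)
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new                            # accept (possibly worse) move
            if new < best_cost:
                best, best_cost = tour[:], new
        else:
            tour[i], tour[j] = tour[j], tour[i]  # reject: undo the swap
        temp *= cooling
    return best, best_cost

pts = [0, 1, 2, 3, 4]  # five stops on a line; the optimal round trip is 8
dist = [[abs(a - b) for b in pts] for a in pts]
tour, cost = simulated_annealing(dist)
```

Accepting occasional worsening moves at high temperature lets the search escape local optima before the cooling schedule freezes it.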
13 pages, 1310 KiB  
Article
R2D2: A DBpedia Chatbot Using Triple-Pattern Like Queries
by Haridimos Kondylakis, Dimitrios Tsirigotakis, Giorgos Fragkiadakis, Emmanouela Panteri, Alexandros Papadakis, Alexandros Fragkakis, Eleytherios Tzagkarakis, Ioannis Rallis, Zacharias Saridakis, Apostolos Trampas, Giorgos Pirounakis and Nikolaos Papadakis
Algorithms 2020, 13(9), 217; https://doi.org/10.3390/a13090217 - 03 Sep 2020
Cited by 8 | Viewed by 4505
Abstract
Chatbots, also known as conversational agents, are programs able to simulate and reproduce an intelligent conversation with humans. Although this type of program is not new, the explosion of available information and the rapid increase in users seeking this information have renewed interest in their development. In this paper, we present R2D2, an intelligent chatbot that relies on semantic web technologies and offers an intelligent controlled natural language interface for accessing the information available in DBpedia. The chatbot accepts structured input, allowing users to enter triple-pattern like queries, which are answered by the underlying engine. While typing, an auto-complete service guides users in creating the triple patterns, suggesting resources available in DBpedia. Based on the user input (in the form of triple-pattern like queries), the corresponding SPARQL queries are automatically formulated. The queries are submitted to the DBpedia SPARQL endpoint, and the result is received by R2D2, augmented with maps and visuals, and eventually presented to the user. The usability evaluation performed shows the advantages of our solution and its usefulness. Full article
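The core step of mapping triple-pattern input to SPARQL is, at its simplest, string assembly over a basic graph pattern, as sketched below; R2D2's actual engine adds auto-completion, validation, and result augmentation on top. The patterns and prefixes shown are illustrative examples in the usual DBpedia style:

```python
def triple_patterns_to_sparql(patterns, select_var="?x", limit=10):
    # Join the user's triple patterns into a basic graph pattern.
    where = " .\n  ".join(patterns)
    return f"SELECT DISTINCT {select_var} WHERE {{\n  {where} .\n}} LIMIT {limit}"

q = triple_patterns_to_sparql(["?x dbo:birthPlace dbr:Crete",
                               "?x rdf:type dbo:Person"])
```

A real deployment would prepend PREFIX declarations and POST the query to the DBpedia SPARQL endpoint.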
23 pages, 2563 KiB  
Article
Distributed Graph Diameter Approximation
by Matteo Ceccarello, Andrea Pietracaprina, Geppino Pucci and Eli Upfal
Algorithms 2020, 13(9), 216; https://doi.org/10.3390/a13090216 - 01 Sep 2020
Cited by 4 | Viewed by 3046
Abstract
We present an algorithm for approximating the diameter of massive weighted undirected graphs on distributed platforms supporting a MapReduce-like abstraction. In order to be efficient in terms of both time and space, our algorithm is based on a decomposition strategy which partitions the graph into disjoint clusters of bounded radius. Theoretically, our algorithm uses linear space and yields a polylogarithmic approximation guarantee; most importantly, for a large family of graphs, it features a round complexity asymptotically smaller than the one exhibited by a natural approximation algorithm based on the state-of-the-art Δ-stepping SSSP algorithm, which is its only practical, linear-space competitor in the distributed setting. We complement our theoretical findings with a proof-of-concept experimental analysis on large benchmark graphs, which suggests that our algorithm may attain substantial improvements in terms of running time compared to the aforementioned competitor, while featuring, in practice, a similar approximation ratio. Full article
(This article belongs to the Special Issue Algorithmic Aspects of Networks)
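For context, the classic single-source baseline brackets the diameter within a factor of two using one BFS; the paper's contribution is a cluster decomposition that achieves a polylogarithmic guarantee in far fewer distributed rounds on massive weighted graphs. A minimal unweighted sketch of the baseline bound:

```python
from collections import deque

def bfs_ecc(adj, src):
    # Eccentricity of src via BFS on an unweighted graph.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def approx_diameter(adj):
    # For any node v of a connected graph: ecc(v) <= diameter <= 2 * ecc(v).
    ecc = bfs_ecc(adj, next(iter(adj)))
    return ecc, 2 * ecc

# Path graph 0-1-2-3-4-5: the true diameter is 5.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
lo, hi = approx_diameter(path)
```

Sequential BFS/SSSP needs as many rounds as the diameter itself in a MapReduce-like model, which is exactly the cost the paper's decomposition avoids.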
20 pages, 934 KiB  
Article
A Two-Phase Approach for Semi-Supervised Feature Selection
by Amit Saxena, Shreya Pare, Mahendra Singh Meena, Deepak Gupta, Akshansh Gupta, Imran Razzak, Chin-Teng Lin and Mukesh Prasad
Algorithms 2020, 13(9), 215; https://doi.org/10.3390/a13090215 - 31 Aug 2020
Cited by 1 | Viewed by 2590
Abstract
This paper proposes a novel approach for selecting a subset of features in semi-supervised datasets where only some of the patterns are labeled. The whole process is completed in two phases. In the first phase, i.e., Phase-I, the dataset is divided into two parts: the first part, which contains labeled patterns, and the second part, which contains unlabeled patterns. In the first part, a small number of features is identified using well-known feature selection approaches based on maximum relevance (computed on the first part) and minimum redundancy (computed on the whole dataset), using the correlation coefficient. The subset of the identified features that produces a high classification accuracy with any supervised classifier trained on the labeled patterns is selected for later processing. In the second phase, i.e., Phase-II, the patterns of the first and second parts are clustered separately into the available number of classes of the dataset. For each cluster of the first part, the majority class of its patterns, which is already given, is taken as the class of that cluster. Pairs of cluster centroids from the first and second parts are then formed: each centroid of the second part is paired with the nearest centroid of the first part. As the class of the first centroid is known, the same class can be assigned to the paired centroid of the second part, whose class is unknown. If the actual classes of the patterns in the second part are known, they can be used to test the classification accuracy on the second part. The proposed two-phase approach performs well in terms of classification accuracy and the number of selected features on the given benchmark datasets. Full article
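The Phase-I relevance/redundancy idea can be sketched as a greedy criterion on absolute Pearson correlations: pick the feature most correlated with the labels, then penalize candidates by their correlation with already-chosen features. This is an illustrative reading of the approach, not the authors' exact procedure:

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def mrmr_select(features, y, k):
    # Greedy max-relevance / min-redundancy selection.
    chosen = []
    while len(chosen) < k:
        best, best_score = None, float("-inf")
        for f in range(len(features)):
            if f in chosen:
                continue
            relevance = abs(pearson(features[f], y))
            redundancy = max((abs(pearson(features[f], features[g]))
                              for g in chosen), default=0.0)
            if relevance - redundancy > best_score:
                best, best_score = f, relevance - redundancy
        chosen.append(best)
    return chosen

feats = [[0, 0, 1, 1, 0],   # correlated with the labels
         [0, 0, 1, 1, 0],   # exact duplicate: fully redundant
         [1, 0, 1, 0, 1]]   # weakly relevant, non-redundant
y = [0, 0, 1, 1, 1]
chosen = mrmr_select(feats, y, 2)
```

The duplicate feature is skipped in favor of the weaker but non-redundant one, which is the point of the minimum-redundancy term.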
26 pages, 2583 KiB  
Article
Fast Spectral Approximation of Structured Graphs with Applications to Graph Filtering
by Mario Coutino, Sundeep Prabhakar Chepuri, Takanori Maehara and Geert Leus
Algorithms 2020, 13(9), 214; https://doi.org/10.3390/a13090214 - 31 Aug 2020
Viewed by 3000
Abstract
To analyze and synthesize signals on networks or graphs, Fourier theory has been extended to irregular domains, leading to the so-called graph Fourier transform. Unfortunately, unlike the traditional Fourier transform, each graph exhibits a different graph Fourier transform. Therefore, to analyze the graph-frequency domain properties of a graph signal, the graph Fourier modes and graph frequencies must be computed for the graph under study. Finding these graph frequencies and modes requires a computationally expensive, or even prohibitive, eigendecomposition of the graph; however, there exist families of graphs whose properties can be exploited for an approximate fast graph spectrum computation. In this work, we aim to identify these families and to provide a divide-and-conquer approach for computing an approximate spectral decomposition of the graph. Using the same decomposition, results on reducing the complexity of graph filtering are derived. These results represent an attempt to leverage the underlying topological properties of graphs in order to devise general computational models for graph signal processing. Full article
(This article belongs to the Special Issue Efficient Graph Algorithms in Machine Learning)
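The expensive baseline the paper approximates is the full eigendecomposition of a graph matrix; a direct sketch using the combinatorial Laplacian (the dense O(n³) `eigh` call is exactly the bottleneck motivating the divide-and-conquer approach):

```python
import numpy as np

def graph_fourier(A, x):
    # Combinatorial Laplacian L = D - A; its eigenvectors are the graph
    # Fourier modes and its eigenvalues are the graph frequencies.
    L = np.diag(A.sum(axis=1)) - A
    freqs, modes = np.linalg.eigh(L)  # O(n^3) dense eigendecomposition
    return freqs, modes.T @ x         # Fourier coefficients of x

# 4-cycle graph: a constant signal has all of its energy at graph
# frequency 0 (the kernel of the Laplacian).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
freqs, xhat = graph_fourier(A, np.ones(4))
```

For structured families (e.g., graphs assembled from well-understood blocks), the paper computes an approximate version of this decomposition piecewise instead of all at once.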
19 pages, 6802 KiB  
Article
Low-Power FPGA Implementation of Convolution Neural Network Accelerator for Pulse Waveform Classification
by Chuanglu Chen, Zhiqiang Li, Yitao Zhang, Shaolong Zhang, Jiena Hou and Haiying Zhang
Algorithms 2020, 13(9), 213; https://doi.org/10.3390/a13090213 - 31 Aug 2020
Cited by 6 | Viewed by 2869
Abstract
In pulse waveform classification, the convolution neural network (CNN) shows excellent performance. However, due to its numerous parameters and intensive computation, it is challenging to deploy a CNN model to low-power devices. To solve this problem, we implement a CNN accelerator based on a field-programmable gate array (FPGA), which can accurately and quickly infer the waveform category. By designing the structure of CNN, we significantly reduce its parameters on the premise of high accuracy. Then the CNN is realized on FPGA and optimized by a variety of memory access optimization methods. Experimental results show that our customized CNN has high accuracy and fewer parameters, and the accelerator costs only 0.714 W under a working frequency of 100 MHz, which proves that our proposed solution is feasible. Furthermore, the accelerator classifies the pulse waveform in real time, which could help doctors make the diagnosis. Full article
(This article belongs to the Special Issue Algorithms in Bioinformatics)
33 pages, 1731 KiB  
Article
Fused Gromov-Wasserstein Distance for Structured Objects
by Titouan Vayer, Laetitia Chapel, Remi Flamary, Romain Tavenard and Nicolas Courty
Algorithms 2020, 13(9), 212; https://doi.org/10.3390/a13090212 - 31 Aug 2020
Cited by 23 | Viewed by 8605
Abstract
Optimal transport theory has recently found many applications in machine learning thanks to its capacity to meaningfully compare various machine learning objects that are viewed as distributions. The Kantorovitch formulation, leading to the Wasserstein distance, focuses on the features of the elements of the objects, but treats them independently, whereas the Gromov–Wasserstein distance focuses on the relations between the elements, depicting the structure of the object, yet discarding its features. In this paper, we study the Fused Gromov-Wasserstein distance that extends the Wasserstein and Gromov–Wasserstein distances in order to encode simultaneously both the feature and structure information. We provide the mathematical framework for this distance in the continuous setting, prove its metric and interpolation properties, and provide a concentration result for the convergence of finite samples. We also illustrate and interpret its use in various applications, where structured objects are involved. Full article
(This article belongs to the Special Issue Efficient Graph Algorithms in Machine Learning)
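The interplay of the two terms can be made concrete by evaluating the FGW objective of a fixed coupling: a (1 - alpha)-weighted Wasserstein (feature) term plus an alpha-weighted Gromov-Wasserstein (structure) term. The FGW distance itself minimizes this objective over all couplings (e.g., with conditional-gradient solvers); the sketch below only evaluates it for a given plan:

```python
def fgw_cost(feat_dist, C1, C2, pi, alpha=0.5):
    n, m = len(C1), len(C2)
    # Wasserstein (feature) term: cost of matching features under pi.
    w_term = sum(feat_dist[i][j] * pi[i][j] for i in range(n) for j in range(m))
    # Gromov-Wasserstein (structure) term: distortion of pairwise relations.
    gw_term = sum(
        abs(C1[i][k] - C2[j][l]) ** 2 * pi[i][j] * pi[k][l]
        for i in range(n) for k in range(n)
        for j in range(m) for l in range(m)
    )
    return (1 - alpha) * w_term + alpha * gw_term

# Two identical 2-node structures: the identity coupling pays nothing,
# while the swapping coupling pays only the feature term.
C1 = [[0, 1], [1, 0]]
C2 = [[0, 1], [1, 0]]
feat = [[0, 1], [1, 0]]             # cross feature distances
id_plan = [[0.5, 0.0], [0.0, 0.5]]
swap_plan = [[0.0, 0.5], [0.5, 0.0]]
```

Setting alpha to 0 recovers a pure Wasserstein objective and alpha to 1 a pure Gromov-Wasserstein one, which is the interpolation property the paper proves.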
27 pages, 949 KiB  
Article
Finding Top-k Nodes for Temporal Closeness in Large Temporal Graphs
by Pierluigi Crescenzi, Clémence Magnien and Andrea Marino
Algorithms 2020, 13(9), 211; https://doi.org/10.3390/a13090211 - 29 Aug 2020
Cited by 14 | Viewed by 2874
Abstract
The harmonic closeness centrality measure associates, to each node of a graph, the average of the inverses of its distances from all the other nodes (assuming that unreachable nodes are at infinite distance). This notion has been adapted to temporal graphs (that is, graphs in which edges can appear and disappear over time), and in this paper we address the question of finding the top-k nodes for this metric. Computing the temporal closeness of one node can be done in O(m) time, where m is the number of temporal edges. Therefore, computing the closeness of all nodes exactly, in order to find the ones with top closeness, would require O(nm) time, where n is the number of nodes. This time complexity is intractable for large temporal graphs. Instead, we show how this measure can be efficiently approximated by using a "backward" temporal breadth-first search algorithm and a classical sampling technique. Our experimental results show that the approximation is excellent for nodes with high closeness, allowing us to detect them in practice in a fraction of the time needed to compute the exact closeness of all nodes. We validate our approach with an extensive set of experiments. Full article
(This article belongs to the Special Issue Big Data Algorithmics)
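The sampling idea can be sketched on a static graph, where an ordinary BFS stands in for the paper's backward temporal BFS: sample source nodes, accumulate inverse distances at every reached node, and average. Illustrative only; the temporal version must additionally respect edge timestamps:

```python
import random
from collections import deque

def harmonic_closeness_estimates(adj, n_samples, seed=0):
    rng = random.Random(seed)
    nodes = list(adj)
    score = {v: 0.0 for v in nodes}
    for s in rng.sample(nodes, n_samples):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for v, d in dist.items():
            if d > 0:
                score[v] += 1.0 / d  # source s contributes 1/d(s, v) to v
    return {v: score[v] / n_samples for v in nodes}

adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}  # star graph
est = harmonic_closeness_estimates(adj, n_samples=5)
```

With a small number of sampled sources the estimates concentrate fastest for high-closeness nodes, which is exactly what the top-k application needs.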