
Table of Contents

Algorithms, Volume 6, Issue 1 (March 2013), Pages 1-196


Research


Open Access Article Maximum Disjoint Paths on Edge-Colored Graphs: Approximability and Tractability
Algorithms 2013, 6(1), 1-11; doi:10.3390/a6010001
Received: 31 October 2012 / Revised: 13 December 2012 / Accepted: 18 December 2012 / Published: 27 December 2012
Cited by 3 | PDF Full-text (189 KB) | HTML Full-text | XML Full-text
Abstract
The problem of finding the maximum number of vertex-disjoint uni-color paths in an edge-colored graph has recently been introduced in the literature, motivated by applications in social network analysis. In this paper we investigate the approximation and parameterized complexity of the problem. First, we show that, for any constant ε > 0, the problem is not approximable within factor c^(1−ε), where c is the number of colors, and that the corresponding decision problem is W[1]-hard when parameterized by the number of disjoint paths. Then, we present a fixed-parameter algorithm for the problem parameterized by the number and the length of the disjoint paths.
(This article belongs to the Special Issue Graph Algorithms)
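The uni-color restriction above can be made concrete with a small sketch (illustrative only, not the paper's algorithm): restricting an edge-colored graph to one color yields the subgraph in which uni-color paths of that color live.

```python
from collections import defaultdict

def color_subgraph(edges, color):
    """Adjacency sets of the subgraph induced by edges of a single color.

    `edges` is an iterable of (u, v, color) triples for an undirected graph.
    """
    adj = defaultdict(set)
    for u, v, c in edges:
        if c == color:
            adj[u].add(v)
            adj[v].add(u)
    return dict(adj)

# Toy edge-colored graph: any uni-color "red" path must stay inside `red`.
edges = [(1, 2, "red"), (2, 3, "red"), (1, 3, "blue"), (3, 4, "blue")]
red = color_subgraph(edges, "red")
```

Vertex-disjoint uni-color paths would then be sought inside such single-color subgraphs, one per color.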
Open Access Article ℓ1 Major Component Detection and Analysis (ℓ1 MCDA): Foundations in Two Dimensions
Algorithms 2013, 6(1), 12-28; doi:10.3390/a6010012
Received: 5 October 2012 / Revised: 3 January 2013 / Accepted: 7 January 2013 / Published: 17 January 2013
Cited by 1 | PDF Full-text (513 KB) | HTML Full-text | XML Full-text
Abstract
Principal Component Analysis (PCA) is widely used for identifying the major components of statistically distributed point clouds. Robust versions of PCA, often based in part on the ℓ1 norm (rather than the ℓ2 norm), are increasingly used, especially for point clouds with many outliers. Neither standard PCA nor robust PCAs can provide, without additional assumptions, reliable information for outlier-rich point clouds and for distributions with several main directions (spokes). We carry out a fundamental and complete reformulation of the PCA approach in a framework based exclusively on the ℓ1 norm and heavy-tailed distributions. The ℓ1 Major Component Detection and Analysis (ℓ1 MCDA) that we propose can determine the main directions and the radial extent of 2D data from single or multiple superimposed Gaussian or heavy-tailed distributions without and with patterned artificial outliers (clutter). In nearly all cases in the computational results, 2D ℓ1 MCDA has accuracy superior to that of standard PCA and of two robust PCAs, namely, the projection-pursuit method of Croux and Ruiz-Gazen and the ℓ1 factorization method of Ke and Kanade. (Standard PCA is, of course, superior to ℓ1 MCDA for Gaussian-distributed point clouds.) The computing time of ℓ1 MCDA is competitive with the computing times of the two robust PCAs.
Open Access Article Energy Efficient Routing in Wireless Sensor Networks Through Balanced Clustering
Algorithms 2013, 6(1), 29-42; doi:10.3390/a6010029
Received: 14 September 2012 / Revised: 4 January 2013 / Accepted: 14 January 2013 / Published: 18 January 2013
Cited by 40 | PDF Full-text (250 KB) | HTML Full-text | XML Full-text
Abstract
The wide utilization of Wireless Sensor Networks (WSNs) is obstructed by the severely limited energy constraints of the individual sensor nodes. This is the reason why a large part of the research in WSNs focuses on the development of energy-efficient routing protocols. In this paper, a new protocol called Equalized Cluster Head Election Routing Protocol (ECHERP), which pursues energy conservation through balanced clustering, is proposed. ECHERP models the network as a linear system and, using the Gaussian elimination algorithm, calculates the combinations of nodes that can be chosen as cluster heads in order to extend the network lifetime. The performance evaluation of ECHERP is carried out through simulation tests, which demonstrate the effectiveness of this protocol in terms of network energy efficiency when compared against other well-known protocols.
(This article belongs to the Special Issue Sensor Network)
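The abstract above mentions modelling the network as a linear system solved by Gaussian elimination. As background, a minimal textbook Gaussian-elimination solver for Ax = b (not the ECHERP protocol itself) looks like:

```python
def gaussian_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.

    A is a square list-of-lists matrix, b a list; returns x as a list.
    """
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    # Forward elimination with partial pivoting for numerical stability.
    for k in range(n):
        pivot = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[pivot] = M[pivot], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example: solve 2a + b = 3, a + 3b = 5.
x = gaussian_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

In ECHERP the unknowns would encode candidate cluster-head combinations, but the exact system setup is specific to the paper.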

Open Access Article Computational Study on a PTAS for Planar Dominating Set Problem
Algorithms 2013, 6(1), 43-59; doi:10.3390/a6010043
Received: 2 November 2012 / Revised: 10 January 2013 / Accepted: 13 January 2013 / Published: 21 January 2013
Cited by 1 | PDF Full-text (150 KB) | HTML Full-text | XML Full-text
Abstract
The dominating set problem is a core NP-hard problem in combinatorial optimization and graph theory, and has many important applications. Baker [JACM 41, 1994] introduced a k-outer planar graph decomposition-based framework for designing polynomial-time approximation schemes (PTASs) for a class of NP-hard problems in planar graphs. It is mentioned that the framework can be applied to obtain an O(2^(ck) n)-time, where c is a constant, (1 + 1/k)-approximation algorithm for the planar dominating set problem. We show that the approximation ratio achieved by this application of the framework is not bounded by any constant for the planar dominating set problem. We modify the application of the framework to give a PTAS for the planar dominating set problem. With k-outer planar graph decompositions, the modified PTAS has an approximation ratio of (1 + 2/k). Using 2k-outer planar graph decompositions, the modified PTAS achieves the approximation ratio (1 + 1/k) in O(2^(2ck) n) time. We report a computational study on the modified PTAS. Our results show that the modified PTAS is practical.
(This article belongs to the Special Issue Graph Algorithms)
Open Access Article Tractabilities and Intractabilities on Geometric Intersection Graphs
Algorithms 2013, 6(1), 60-83; doi:10.3390/a6010060
Received: 23 October 2012 / Revised: 10 January 2013 / Accepted: 14 January 2013 / Published: 25 January 2013
Cited by 4 | PDF Full-text (193 KB) | HTML Full-text | XML Full-text
Abstract
A graph is said to be an intersection graph if there is a set of objects such that each vertex corresponds to an object and two vertices are adjacent if and only if the corresponding objects have a nonempty intersection. There are several natural graph classes that have geometric intersection representations. The geometric representations sometimes help to prove tractability/intractability of problems on graph classes. In this paper, we show some results proved by using geometric representations.
(This article belongs to the Special Issue Graph Algorithms)
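A classic concrete case of the intersection graphs described above is the interval graph, where the objects are intervals on a line. A toy sketch (illustrative, not code from the paper):

```python
def interval_graph(intervals):
    """Intersection graph of closed 1-D intervals.

    Vertices are indices into `intervals`; i and j are adjacent iff the
    corresponding intervals overlap.
    """
    n = len(intervals)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            (a1, b1), (a2, b2) = intervals[i], intervals[j]
            if a1 <= b2 and a2 <= b1:  # closed intervals intersect
                adj[i].add(j)
                adj[j].add(i)
    return adj

# [0,2] and [1,3] overlap; [4,5] is disjoint from both.
g = interval_graph([(0, 2), (1, 3), (4, 5)])
```

Many problems that are NP-hard in general (e.g., maximum independent set) become polynomial on interval graphs precisely because such a representation is available.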
Open Access Article Dubins Traveling Salesman Problem with Neighborhoods: A Graph-Based Approach
Algorithms 2013, 6(1), 84-99; doi:10.3390/a6010084
Received: 31 October 2012 / Revised: 17 January 2013 / Accepted: 18 January 2013 / Published: 4 February 2013
Cited by 6 | PDF Full-text (3056 KB) | HTML Full-text | XML Full-text
Abstract
We study the problem of finding the minimum-length curvature-constrained closed path through a set of regions in the plane. This problem is referred to as the Dubins Traveling Salesperson Problem with Neighborhoods (DTSPN). An algorithm is presented that uses sampling to cast this infinite-dimensional combinatorial optimization problem as a Generalized Traveling Salesperson Problem (GTSP) with intersecting node sets. The GTSP is then converted to an Asymmetric Traveling Salesperson Problem (ATSP) through a series of graph transformations, thus allowing the use of existing approximation algorithms. This algorithm is shown to perform no worse than the best existing DTSPN algorithm and to perform significantly better when the regions overlap. We report on the application of this algorithm to route an Unmanned Aerial Vehicle (UAV) equipped with a radio to collect data from sparsely deployed ground sensors in a field demonstration of autonomous detection, localization, and verification of multiple acoustic events.
(This article belongs to the Special Issue Graph Algorithms)
Open Access Article Computing the Eccentricity Distribution of Large Graphs
Algorithms 2013, 6(1), 100-118; doi:10.3390/a6010100
Received: 1 November 2012 / Revised: 24 January 2013 / Accepted: 31 January 2013 / Published: 18 February 2013
Cited by 7 | PDF Full-text (450 KB) | HTML Full-text | XML Full-text
Abstract
The eccentricity of a node in a graph is defined as the length of a longest shortest path starting at that node. The eccentricity distribution over all nodes is a relevant descriptive property of the graph, and its extreme values allow the derivation of measures such as the radius, diameter, center and periphery of the graph. This paper describes two new methods for computing the eccentricity distribution of large graphs such as social networks, web graphs, biological networks and routing networks. We first propose an exact algorithm based on eccentricity lower and upper bounds, which achieves significant speedups compared to the straightforward algorithm when computing both the extreme values of the distribution as well as the eccentricity distribution as a whole. The second algorithm that we describe is a hybrid strategy that combines the exact approach with an efficient sampling technique in order to obtain an even larger speedup on the computation of the entire eccentricity distribution. We perform an extensive set of experiments on a number of large graphs in order to measure and compare the performance of our algorithms, and demonstrate how we can efficiently compute the eccentricity distribution of various large real-world graphs.
(This article belongs to the Special Issue Graph Algorithms)
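The straightforward baseline that the paper's bound-based algorithms speed up is one breadth-first search per node. A minimal sketch on a four-node path graph, also deriving the radius, diameter, center and periphery mentioned in the abstract (illustrative, not the authors' code):

```python
from collections import deque

def eccentricities(adj):
    """Exact eccentricity of every node via one BFS per node.

    `adj` maps each node to a list of neighbors; the graph is assumed
    connected and unweighted.
    """
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist.values())  # longest shortest path from s
    return ecc

# Path graph 0 - 1 - 2 - 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
ecc = eccentricities(adj)
radius, diameter = min(ecc.values()), max(ecc.values())
center = sorted(v for v, e in ecc.items() if e == radius)
periphery = sorted(v for v, e in ecc.items() if e == diameter)
```

This baseline costs one BFS per node (O(nm) total), which is exactly what makes the pruning bounds in the paper worthwhile on large graphs.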
Open Access Article A Polynomial-Time Algorithm for Computing the Maximum Common Connected Edge Subgraph of Outerplanar Graphs of Bounded Degree
Algorithms 2013, 6(1), 119-135; doi:10.3390/a6010119
Received: 30 October 2012 / Revised: 27 January 2013 / Accepted: 7 February 2013 / Published: 18 February 2013
Cited by 1 | PDF Full-text (200 KB) | HTML Full-text | XML Full-text
Abstract
The maximum common connected edge subgraph problem is to find a connected graph with the maximum number of edges that is isomorphic to a subgraph of each of the two input graphs; it has applications in pattern recognition and chemistry. This paper presents a dynamic programming algorithm for the problem when the two input graphs are outerplanar graphs of bounded vertex degree, for which the problem is known to be NP-hard, even for outerplanar graphs of unbounded degree. Although the algorithm repeatedly modifies the input graphs, it is shown that the number of relevant subproblems is polynomially bounded, and thus the algorithm works in polynomial time.
(This article belongs to the Special Issue Graph Algorithms)
Open Access Article Stable Multicommodity Flows
Algorithms 2013, 6(1), 161-168; doi:10.3390/a6010161
Received: 31 December 2012 / Revised: 25 January 2013 / Accepted: 8 March 2013 / Published: 18 March 2013
Cited by 1 | PDF Full-text (165 KB) | HTML Full-text | XML Full-text
Abstract
We extend the stable flow model of Fleiner to multicommodity flows. In addition to the preference lists of agents on trading partners for each commodity, every trading pair has a preference list on the commodities that the seller can sell to the buyer. A blocking walk (with respect to a certain commodity) may include saturated arcs, provided that a positive amount of a less preferred commodity is traded along the arc. We prove that a stable multicommodity flow always exists, although it is PPAD-hard to find one.
(This article belongs to the Special Issue on Matching under Preferences)
Open Access Article An Open-Source Implementation of the Critical-Line Algorithm for Portfolio Optimization
Algorithms 2013, 6(1), 169-196; doi:10.3390/a6010169
Received: 29 January 2013 / Revised: 8 March 2013 / Accepted: 18 March 2013 / Published: 22 March 2013
Cited by 1 | PDF Full-text (401 KB) | HTML Full-text | XML Full-text
Abstract
Portfolio optimization is one of the problems most frequently encountered by financial practitioners. The main goal of this paper is to fill a gap in the literature by providing a well-documented, step-by-step open-source implementation of the Critical Line Algorithm (CLA) in a scientific language. The code is implemented as a Python class object, which allows it to be imported like any other Python module and integrated seamlessly with pre-existing code. We discuss the logic behind CLA following the algorithm's decision flow. In addition, we developed several utilities that support finding answers to recurrent practical problems. We believe this publication will offer a better alternative to financial practitioners, many of whom currently rely on general-purpose optimizers, which often deliver suboptimal solutions. The source code discussed in this paper can be downloaded at the authors' websites (see Appendix).
(This article belongs to the Special Issue Algorithms and Financial Optimization)
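For context on the problem CLA solves: in the unconstrained case, the global minimum-variance portfolio (one endpoint of the efficient frontier that CLA traces) has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A short NumPy sketch of that endpoint (not the paper's CLA class; CLA itself is needed once weight bounds and target returns enter):

```python
import numpy as np

def min_variance_weights(cov):
    """Unconstrained global minimum-variance weights w = (S^-1 1) / (1' S^-1 1).

    `cov` is the asset covariance matrix; weights sum to 1 by construction.
    """
    inv = np.linalg.inv(np.asarray(cov, dtype=float))
    ones = np.ones(len(cov))
    w = inv @ ones
    return w / (ones @ w)

# Two uncorrelated assets with variances 0.04 and 0.01: the less volatile
# asset receives the larger weight.
w = min_variance_weights([[0.04, 0.0], [0.0, 0.01]])
```

CLA generalizes this by walking through "turning points" where the set of assets at their bound constraints changes, which is why a careful open-source implementation is valuable.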

Review


Open Access Review Algorithms for Non-Negatively Constrained Maximum Penalized Likelihood Reconstruction in Tomographic Imaging
Algorithms 2013, 6(1), 136-160; doi:10.3390/a6010136
Received: 28 November 2012 / Revised: 18 February 2013 / Accepted: 19 February 2013 / Published: 12 March 2013
PDF Full-text (297 KB) | HTML Full-text | XML Full-text
Abstract
Image reconstruction is a key component in many medical imaging modalities. The problem of image reconstruction can be viewed as a special inverse problem in which the unknown image pixel intensities are estimated from the observed measurements. Since the measurements are usually noise-contaminated, statistical reconstruction methods are preferred. In this paper we review some non-negatively constrained simultaneous iterative algorithms for maximum penalized likelihood reconstruction, where all measurements are used to estimate all pixel intensities in each iteration.
(This article belongs to the Special Issue Machine Learning for Medical Imaging)
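A standard example of such a non-negatively constrained simultaneous iterative scheme is the (unpenalized) MLEM/EM update x ← x · Aᵀ(b / Ax) / Aᵀ1, which updates all pixels from all measurements at once and preserves non-negativity from a positive start. A textbook illustration (assumed, not code taken from the review):

```python
import numpy as np

def mlem(A, b, iters=100):
    """Unregularized MLEM iteration for b ~ Poisson(Ax), x >= 0.

    A is the system matrix (measurements x pixels), b the measured data.
    The multiplicative update keeps x nonnegative for a positive start.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.ones(A.shape[1])            # positive initial image
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image A^T 1
    for _ in range(iters):
        x = x * (A.T @ (b / (A @ x))) / sens
    return x

# Tiny consistent system: two "pixels" observed directly and in sum.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, 3.0, 5.0]   # generated from the true image [2, 3]
x = mlem(A, b)
```

Penalized variants reviewed in the paper modify this update to incorporate a smoothing prior while keeping the non-negativity constraint.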

Journal Contact

MDPI AG
Algorithms Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
algorithms@mdpi.com
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18