Journal Browser

## Algorithms, Volume 6, Issue 4 (December 2013), Pages 591-882

• PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-13

## Research

Open Access Article: Local Search Approaches in Stable Matching Problems
Algorithms 2013, 6(4), 591-617; https://doi.org/10.3390/a6040591
Received: 14 August 2013 / Revised: 4 September 2013 / Accepted: 22 September 2013 / Published: 3 October 2013
Cited by 12 | PDF Full-text (454 KB) | HTML Full-text | XML Full-text
Abstract
The stable marriage (SM) problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools or, more generally, to any two-sided market. In the classical formulation, n men and n women express their preferences (via a strict total order) over the members of the other sex. Solving an SM problem means finding a stable marriage where stability is an envy-free notion: no man and woman who are not married to each other would both prefer each other to their partners or to being single. We consider both the classical stable marriage problem and one of its useful variations (denoted SMTI (Stable Marriage with Ties and Incomplete lists)) where the men and women express their preferences in the form of an incomplete preference list with ties over a subset of the members of the other sex. Matchings are permitted only with people who appear in these preference lists, and we try to find a stable matching that marries as many people as possible. Whilst the SM problem is polynomial to solve, the SMTI problem is NP-hard. We propose to tackle both problems via a local search approach, which exploits properties of the problems to reduce the size of the neighborhood and to make local moves efficiently. We empirically evaluate our algorithm for SM problems by measuring its runtime behavior and its ability to sample the lattice of all possible stable marriages. We evaluate our algorithm for SMTI problems in terms of both its runtime behavior and its ability to find a maximum cardinality stable marriage. Experimental results suggest that for SM problems, the number of steps of our algorithm grows only as O(n log(n)), and that it samples very well the set of all stable marriages. It is thus a fair and efficient approach to generate stable marriages. 
Furthermore, our approach for SMTI problems is able to solve large problems, quickly returning stable matchings of large and often optimal size, despite the NP-hardness of this problem. Full article
(This article belongs to the Special Issue Special Issue on Matching under Preferences)
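The stability notion described in the abstract above can be made concrete with a short sketch: a matching is stable exactly when it admits no blocking pair. The function and data layout below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the blocking-pair check behind stable marriage (SM).
# matching: dict man -> woman (a perfect matching).
# men_prefs[m] / women_prefs[w]: strict preference lists, most preferred first.

def is_stable(matching, men_prefs, women_prefs):
    wife = matching
    husband = {w: m for m, w in matching.items()}
    for m, prefs in men_prefs.items():
        # Women whom m strictly prefers to his current wife.
        better = prefs[:prefs.index(wife[m])]
        for w in better:
            # (m, w) is a blocking pair if w also prefers m to her husband.
            wp = women_prefs[w]
            if wp.index(m) < wp.index(husband[w]):
                return False
    return True
```

A local search for SM, as in the paper, repeatedly repairs such blocking pairs rather than merely detecting them; this checker only illustrates the stability condition itself.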

Open Access Article: Multi-Threading a State-of-the-Art Maximum Clique Algorithm
Algorithms 2013, 6(4), 618-635; https://doi.org/10.3390/a6040618
Received: 15 August 2013 / Revised: 13 September 2013 / Accepted: 18 September 2013 / Published: 3 October 2013
Cited by 19 | PDF Full-text (244 KB) | HTML Full-text | XML Full-text
Abstract
We present a threaded parallel adaptation of a state-of-the-art maximum clique algorithm for dense, computationally challenging graphs. We show that near-linear speedups are achievable in practice and that superlinear speedups are common. We include results for several previously unsolved benchmark problems. Full article
(This article belongs to the Special Issue Algorithms for Multi Core Parallel Computation)

Open Access Article: Sublinear Time Motif Discovery from Multiple Sequences
Algorithms 2013, 6(4), 636-677; https://doi.org/10.3390/a6040636
Received: 11 June 2013 / Revised: 30 September 2013 / Accepted: 1 October 2013 / Published: 14 October 2013
Cited by 1 | PDF Full-text (479 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a natural probabilistic model for motif discovery has been used to experimentally test the quality of motif discovery programs. In this model, there are k background sequences, and each character in a background sequence is a random character from an alphabet, Σ. A motif G = g1g2 ... gm is a string of m characters. In each background sequence is implanted a probabilistically-generated approximate copy of G. For a probabilistically-generated approximate copy b1b2 ... bm of G, every character, bi, is probabilistically generated, such that the probability that bi ≠ gi is at most α. We develop two new randomized algorithms and one new deterministic algorithm. They make advancements in the following aspects: (1) The algorithms are much faster than previous ones. Our algorithms can even run in sublinear time. (2) They can handle any motif pattern. (3) The only restriction on the alphabet size is that it be at least four. This gives them potential applications in practical problems, since gene sequences have an alphabet size of four. (4) All algorithms have rigorous proofs about their performance. The methods developed in this paper have been used in the software implementation. We observed some encouraging results that show improved performance for motif detection compared with other software. Full article
(This article belongs to the Special Issue Algorithms for Sequence Analysis and Storage)
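The probabilistic model in the abstract above can be sketched in a few lines: each background sequence receives an implanted copy of the motif G in which every character mutates independently with probability at most α. Function names and the DNA alphabet default are illustrative assumptions.

```python
import random

# Minimal sketch of the implant model: each character of the motif G is
# replaced by a random different character with probability alpha.
def approximate_copy(G, alpha, alphabet="ACGT", rng=random):
    out = []
    for g in G:
        if rng.random() < alpha:
            out.append(rng.choice([c for c in alphabet if c != g]))
        else:
            out.append(g)
    return "".join(out)

# Implant one approximate copy of G at a random position of a background
# sequence, preserving the sequence length.
def implant(background, G, alpha, rng=random):
    pos = rng.randrange(len(background) - len(G) + 1)
    copy = approximate_copy(G, alpha, rng=rng)
    return background[:pos] + copy + background[pos + len(G):]
```

With α = 0 the implanted copy equals G exactly; larger α produces the noisier instances the paper's algorithms must recover.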

Open Access Article: Pattern-Guided k-Anonymity
Algorithms 2013, 6(4), 678-701; https://doi.org/10.3390/a6040678
Received: 2 August 2013 / Revised: 9 October 2013 / Accepted: 9 October 2013 / Published: 17 October 2013
Cited by 3 | PDF Full-text (288 KB) | HTML Full-text | XML Full-text
Abstract
We suggest a user-oriented approach to combinatorial data anonymization. A data matrix is called k-anonymous if every row appears at least k times—the goal of the NP-hard k-ANONYMITY problem then is to make a given matrix k-anonymous by suppressing (blanking out) as few entries as possible. Building on previous work and coping with corresponding deficiencies, we describe an enhanced k-anonymization problem called PATTERN-GUIDED k-ANONYMITY, where the users specify in which combinations suppressions may occur. In this way, the user of the anonymized data can express the differing importance of various data features. We show that PATTERN-GUIDED k-ANONYMITY is NP-hard. We complement this by a fixed-parameter tractability result based on a “data-driven parameterization” and, based on this, develop an exact integer linear program (ILP)-based solution method, as well as a simple, but very effective, greedy heuristic. Experiments on several real-world datasets show that our heuristic easily matches up to the established “Mondrian” algorithm for k-ANONYMITY in terms of the quality of the anonymization and outperforms it in terms of running time. Full article
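The k-anonymity definition above is easy to state in code: every row of the data matrix must occur at least k times, where suppressed entries are modeled here as a '*' placeholder. This is an illustration of the definition and of a suppression pattern, not the paper's ILP or greedy heuristic.

```python
from collections import Counter

# A matrix is k-anonymous if every (possibly suppressed) row appears
# at least k times.
def is_k_anonymous(matrix, k):
    counts = Counter(tuple(row) for row in matrix)
    return all(c >= k for c in counts.values())

# In the pattern-guided variant, suppressions may only occur in allowed
# column combinations; applying one pattern blanks exactly those columns.
def apply_pattern(row, pattern):
    return tuple('*' if i in pattern else v for i, v in enumerate(row))
```

Suppressing a user-approved column combination can merge otherwise distinct rows into groups of size at least k, which is precisely the trade-off the paper optimizes.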

Open Access Article: New Parallel Sparse Direct Solvers for Multicore Architectures
Algorithms 2013, 6(4), 702-725; https://doi.org/10.3390/a6040702
Received: 9 August 2013 / Revised: 16 October 2013 / Accepted: 21 October 2013 / Published: 1 November 2013
Cited by 10 | PDF Full-text (315 KB) | HTML Full-text | XML Full-text
Abstract
At the heart of many computations in science and engineering lies the need to efficiently and accurately solve large sparse linear systems of equations. Direct methods are frequently the method of choice because of their robustness, accuracy and potential for use as black-box solvers. In the last few years, there have been many new developments, and a number of new modern parallel general-purpose sparse solvers have been written for inclusion within the HSL mathematical software library. In this paper, we introduce and briefly review these solvers for symmetric sparse systems. We describe the algorithms used, highlight key features (including bit-compatibility and out-of-core working) and then, using problems arising from a range of practical applications, we illustrate and compare their performances. We demonstrate that modern direct solvers are able to accurately solve systems of order 10^6 in less than 3 minutes on a 16-core machine. Full article
(This article belongs to the Special Issue Algorithms for Multi Core Parallel Computation)

Open Access Article: An Efficient Local Search for the Feedback Vertex Set Problem
Algorithms 2013, 6(4), 726-746; https://doi.org/10.3390/a6040726
Received: 24 June 2013 / Revised: 7 August 2013 / Accepted: 28 October 2013 / Published: 1 November 2013
Cited by 2 | PDF Full-text (201 KB) | HTML Full-text | XML Full-text
Abstract
Inspired by many deadlock detection applications, the feedback vertex set is defined as a set of vertices in an undirected graph whose removal results in a graph without cycles. The Feedback Vertex Set Problem, known to be NP-complete, is to search for a feedback vertex set of minimal cardinality to aid deadlock recovery. To address this problem, this paper presents NewkLS_FVS (LS: local search; FVS: feedback vertex set), a variable-depth local search algorithm with a randomized scheme to optimize efficiency and performance. Experimental simulations are conducted to compare the algorithm with recent metaheuristics, and the computational results show that the proposed algorithm can outperform the other state-of-the-art algorithms and generate satisfactory solutions for most DIMACS benchmarks. Full article
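The defining property above has a simple verification: deleting the candidate set must leave an acyclic graph. The union-find checker below is an illustrative sketch of that test only, not the NewkLS_FVS local search itself.

```python
# Verify that fvs is a feedback vertex set: after removing its vertices,
# no remaining edge may close a cycle. Cycle detection via union-find.
def is_feedback_vertex_set(n, edges, fvs):
    """n vertices labeled 0..n-1; edges: iterable of (u, v); fvs: set."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in fvs or v in fvs:
            continue  # edge disappears with its deleted endpoint
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # this edge closes a cycle in the residual graph
        parent[ru] = rv
    return True
```

A local search over candidate sets would call a check like this (or an incremental variant) on each move while shrinking the set's cardinality.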

Open Access Article: Multi-Core Parallel Gradual Pattern Mining Based on Multi-Precision Fuzzy Orderings
Algorithms 2013, 6(4), 747-761; https://doi.org/10.3390/a6040747
Received: 17 September 2013 / Revised: 16 October 2013 / Accepted: 21 October 2013 / Published: 1 November 2013
Cited by 1 | PDF Full-text (1431 KB) | HTML Full-text | XML Full-text
Abstract
Gradual patterns aim at describing co-variations of data such as the higher the size, the higher the weight. In recent years, such patterns have been studied more and more from the data mining point of view. The extraction of such patterns relies on efficient and smart orderings that can be built among data: for instance, when the data are ordered with respect to size, they are also ordered with respect to weight. However, in many application domains, it is hardly possible to consider that data values are crisply ordered. When considering gene expression, it is not true from the biological point of view that Gene 1 is more expressed than Gene 2 if their levels of expression differ only in the tenth decimal. We thus consider fuzzy orderings and the fuzzy gamma rank correlation. In this paper, we address two major problems related to this framework: (i) the high memory consumption and (ii) the precision, representation and efficient storage of the fuzzy concordance degrees versus the loss or gain of computing power. For this purpose, we consider multi-precision matrices represented using sparse matrices coupled with parallel algorithms. Experimental results show the interest of our proposal. Full article
(This article belongs to the Special Issue Algorithms for Multi Core Parallel Computation)
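The crisp ordering that gradual patterns generalize can be sketched as a concordance count over record pairs: a pattern such as "the higher the size, the higher the weight" holds to the degree that pairs are ordered the same way by both attributes. The paper replaces the crisp comparison below with fuzzy orderings; this version only illustrates the underlying idea.

```python
# Fraction of record pairs ordered identically by both attributes
# (a crisp stand-in for the fuzzy concordance degrees of the paper).
def concordance(sizes, weights):
    pairs = [(i, j)
             for i in range(len(sizes))
             for j in range(len(sizes)) if i < j]
    ok = sum(1 for i, j in pairs
             if (sizes[i] < sizes[j]) == (weights[i] < weights[j]))
    return ok / len(pairs)
```

A concordance of 1.0 means the pattern holds for every pair; the fuzzy variant instead assigns each pair a degree in [0, 1], which is what drives the memory and precision issues the paper addresses.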

Open Access Article: Very High Resolution Satellite Image Classification Using Fuzzy Rule-Based Systems
Algorithms 2013, 6(4), 762-781; https://doi.org/10.3390/a6040762
Received: 23 August 2013 / Revised: 22 October 2013 / Accepted: 6 November 2013 / Published: 12 November 2013
Cited by 19 | PDF Full-text (678 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this research is to present a detailed step-by-step method for the classification of very high resolution urban satellite images (VHRSI) into specific classes such as road, building, vegetation, etc., using fuzzy logic. In this study, object-based image analysis is used for image classification. The main problems in high resolution image classification are the uncertainty in the position of object borders in satellite images and the resemblance of individual segments to multiple classes. In order to solve this problem, fuzzy logic is used for image classification, since it makes it possible to analyze an image using multiple parameters without requiring fixed thresholds in the class assignment process. In this study, an inclusive semi-automatic method for image classification is offered, which presents the configuration of the related fuzzy functions as well as the fuzzy rules. The produced results are compared to the results of a conventional classification using the same parameters but with crisp rules. The overall accuracies and kappa coefficients of the presented method are higher than those of the reference classification. Full article
(This article belongs to the Special Issue Fuzzy Algorithms for Decision Making and Data Analysis)

Open Access Article: Stability, Optimality and Manipulation in Matching Problems with Weighted Preferences
Algorithms 2013, 6(4), 782-804; https://doi.org/10.3390/a6040782
Received: 9 July 2013 / Revised: 7 November 2013 / Accepted: 8 November 2013 / Published: 18 November 2013
Cited by 1 | PDF Full-text (242 KB) | HTML Full-text | XML Full-text
Abstract
The stable matching problem (also known as the stable marriage problem) is a well-known problem of matching men to women so that no man and woman who are not married to each other both prefer each other. Such a problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools or, more generally, to any two-sided market. In the classical stable marriage problem, both men and women express a strict preference order over the members of the other sex, in a qualitative way. Here, we consider stable marriage problems with weighted preferences: each man (resp., woman) provides a score for each woman (resp., man). Such problems are more expressive than the classical stable marriage problems. Moreover, in some real-life situations, it is more natural to express scores (to model, for example, profits or costs) rather than a qualitative preference ordering. In this context, we define new notions of stability and optimality, and we provide algorithms to find marriages that are stable and/or optimal according to these notions. While expressivity greatly increases by adopting weighted preferences, we show that, in most cases, the desired solutions can be found by adapting existing algorithms for the classical stable marriage problem. We also consider the manipulability properties of the procedures that return such stable marriages. While we know that all procedures are manipulable by modifying the preference lists or by truncating them, here we consider whether manipulation can also occur by modifying only the weights, while preserving the ordering and avoiding truncation. It turns out that, by adding weights, in some cases we may increase the possibility of manipulation, and this cannot be avoided by any reasonable restriction on the weights. Full article
(This article belongs to the Special Issue Special Issue on Matching under Preferences)
Open Access Article: PMS6MC: A Multicore Algorithm for Motif Discovery
Algorithms 2013, 6(4), 805-823; https://doi.org/10.3390/a6040805
Received: 15 September 2013 / Revised: 8 November 2013 / Accepted: 11 November 2013 / Published: 18 November 2013
Cited by 2 | PDF Full-text (260 KB) | HTML Full-text | XML Full-text
Abstract
We develop an efficient multicore algorithm, PMS6MC, for the (l, d)-motif discovery problem, in which we are to find all strings of length l that appear in every string of a given set of strings with at most d mismatches. PMS6MC is based on PMS6, which is currently the fastest single-core algorithm for motif discovery in large instances. The speedup, relative to PMS6, attained by our multicore algorithm ranges from a high of 6.62 for the (17, 6) challenging instances to a low of 2.75 for the (13, 4) challenging instances on an Intel 6-core system. We estimate that PMS6MC is 2 to 4 times faster than other parallel algorithms for motif search on large instances. Full article
(This article belongs to the Special Issue Algorithms for Multi Core Parallel Computation)
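The (l, d)-motif condition that PMS6MC searches for can be stated as a brute-force check: a length-l candidate qualifies if it occurs in every input string with at most d mismatches. This sketch only defines the condition; PMS6 and PMS6MC avoid exactly this kind of exhaustive scan.

```python
# Number of mismatching positions between two equal-length strings.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Does `motif` occur somewhere in s with at most d mismatches?
def occurs_with_mismatches(motif, s, d):
    l = len(motif)
    return any(hamming(motif, s[i:i + l]) <= d
               for i in range(len(s) - l + 1))

# The (l, d)-motif condition: present in every string with <= d mismatches.
def is_ld_motif(motif, strings, d):
    return all(occurs_with_mismatches(motif, s, d) for s in strings)
```

Enumerating all |Σ|^l candidates against this check is exponential in l, which is why specialized algorithms (and their multicore variants) are needed for the challenging instances cited above.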

Open Access Article: Overlays with Preferences: Distributed, Adaptive Approximation Algorithms for Matching with Preference Lists
Algorithms 2013, 6(4), 824-856; https://doi.org/10.3390/a6040824
Received: 21 June 2013 / Revised: 7 November 2013 / Accepted: 8 November 2013 / Published: 19 November 2013
Cited by 3 | PDF Full-text (952 KB) | HTML Full-text | XML Full-text
Abstract
A key property of overlay networks is the overlay nodes’ ability to establish connections (or be matched) to other nodes by preference, based on some suitability metric related to, e.g., the node’s distance, interests, recommendations, transaction history or available resources. When there are no preference cycles among the nodes, a stable matching exists in which nodes have maximized individual satisfaction due to their choices; however, no such guarantees currently exist in the generic case. In this work, we employ the notion of node satisfaction to suggest a novel modeling for matching problems, suitable for overlay networks. We start by presenting a simple, yet powerful, distributed algorithm that solves the many-to-many matching problem with preferences. It achieves that by using local information and aggregate satisfaction as an optimization metric, while providing a guaranteed convergence and approximation ratio. Subsequently, we show how to extend the algorithm in order to support and adapt to changes in the nodes’ connectivity and preferences. In addition, we provide a detailed experimental study that focuses on the levels of achieved satisfaction, as well as convergence and reconvergence speed. Full article
(This article belongs to the Special Issue Special Issue on Matching under Preferences)

Open Access Article: Solving Matrix Equations on Multi-Core and Many-Core Architectures
Algorithms 2013, 6(4), 857-870; https://doi.org/10.3390/a6040857
Received: 27 September 2013 / Revised: 12 November 2013 / Accepted: 18 November 2013 / Published: 25 November 2013
Cited by 6 | PDF Full-text (226 KB) | HTML Full-text | XML Full-text
Abstract
We address the numerical solution of Lyapunov, algebraic and differential Riccati equations, via the matrix sign function, on platforms equipped with general-purpose multicore processors and, optionally, one or more graphics processing units (GPUs). In particular, we review the solvers for these equations, as well as the underlying methods, analyze their concurrency and scalability and provide details on their parallel implementation. Our experimental results show that this class of hardware provides sufficient computational power to tackle large-scale problems, which only a few years ago would have required a cluster of computers. Full article
(This article belongs to the Special Issue Algorithms for Multi Core Parallel Computation)
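The matrix sign function mentioned above is classically computed with the Newton iteration X_{k+1} = (X_k + X_k^{-1}) / 2, whose inversion-dominated cost is what makes it attractive to parallelize on multicores and GPUs. The sketch below is the plain, unscaled iteration for illustration; production solvers add scaling, blocking and parallel kernels.

```python
import numpy as np

# Unscaled Newton iteration for the matrix sign function:
#   X_{k+1} = (X_k + X_k^{-1}) / 2
# Converges quadratically for matrices with no purely imaginary eigenvalues.
def matrix_sign(A, tol=1e-12, max_iter=100):
    X = np.asarray(A, dtype=float).copy()
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X))
        # Stop when the relative change is below the tolerance.
        if np.linalg.norm(X_new - X, 'fro') < tol * np.linalg.norm(X, 'fro'):
            return X_new
        X = X_new
    return X
```

Each iteration is dominated by a dense inversion (or an LU solve), so the concurrency analysis in the paper largely reduces to parallelizing standard dense linear algebra kernels.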

Open Access Article: Sparse Signal Recovery from Fixed Low-Rank Subspace via Compressive Measurement
Algorithms 2013, 6(4), 871-882; https://doi.org/10.3390/a6040871
Received: 16 November 2013 / Revised: 11 December 2013 / Accepted: 11 December 2013 / Published: 17 December 2013
Cited by 2 | PDF Full-text (515 KB) | HTML Full-text | XML Full-text
Abstract
This paper designs and evaluates a variant of the CoSaMP algorithm for recovering the sparse signal s from the compressive measurement v = A(Uw + s), given a fixed low-rank subspace spanned by U. Instead of first recovering the full vector and then separating the sparse part from the structured dense part, the proposed algorithm works directly on the compressive measurement to perform the separation. We investigate the performance of the algorithm on both simulated data and video compressive sensing. The results show that, for a fixed low-rank subspace and a truly sparse signal, the proposed algorithm can successfully recover the signal from only a few compressive sensing (CS) measurements, and it performs better than ordinary CoSaMP when the sparse signal is corrupted by additional Gaussian noise. Full article
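The measurement model v = A(Uw + s) from the abstract can be set up in a few lines: a structured dense component Uw living in a fixed low-rank subspace plus a truly sparse component s, observed through a compressive matrix A. All dimensions and the Gaussian sensing matrix below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 64, 32, 4  # signal length, number of measurements, subspace rank

# Compressive sensing matrix (random Gaussian, column-normalized in expectation).
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Orthonormal basis U of the fixed low-rank subspace, and its coefficients w.
U = np.linalg.qr(rng.standard_normal((n, r)))[0]
w = rng.standard_normal(r)

# Truly sparse signal s: three nonzero entries.
s = np.zeros(n)
s[[3, 17, 40]] = [2.0, -1.5, 3.0]

# The compressive measurement the recovery algorithm works on.
v = A @ (U @ w + s)
```

The recovery task is then to separate s (and w) from v alone, which the paper's CoSaMP variant does directly in the measurement domain rather than after reconstructing the full vector Uw + s.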