
Table of Contents

Algorithms, Volume 5, Issue 4 (December 2012), Pages 398-667


Research

Open Access Article Better Metrics to Automatically Predict the Quality of a Text Summary
Algorithms 2012, 5(4), 398-420; doi:10.3390/a5040398
Received: 2 July 2012 / Revised: 5 September 2012 / Accepted: 7 September 2012 / Published: 26 September 2012
PDF Full-text (401 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods—robust regression, non-negative least squares, or canonical correlation, an eigenvalue method. The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.
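ROUGE, the baseline these metrics are compared against, is at heart n-gram overlap with the human reference summaries. A minimal sketch of ROUGE-1 recall (the baseline only; the authors' improved metrics combine many more content and linguistic features, and the function name here is ours):

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """ROUGE-1 recall: the fraction of reference unigrams (counted
    with multiplicity) that also appear in the candidate summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # multiset intersection size
    return overlap / sum(ref.values()) if ref else 0.0
```

A candidate covering three of the six reference tokens scores 0.5; the full ROUGE toolkit adds bigram, longest-common-subsequence, and stemming variants on the same idea.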
Open Access Article Univariate Lp and ℓp Averaging, 0 < p < 1, in Polynomial Time by Utilization of Statistical Structure
Algorithms 2012, 5(4), 421-432; doi:10.3390/a5040421
Received: 28 July 2012 / Revised: 6 September 2012 / Accepted: 17 September 2012 / Published: 5 October 2012
Cited by 1 | PDF Full-text (532 KB) | HTML Full-text | XML Full-text
Abstract
We present evidence that one can calculate generically combinatorially expensive Lp and lp averages, 0 < p < 1, in polynomial time by restricting the data to come from a wide class of statistical distributions. Our approach differs from the approaches in the previous literature, which are based on a priori sparsity requirements or on accepting a local minimum as a replacement for a global minimum. The functionals by which Lp averages are calculated are not convex but are radially monotonic and the functionals by which lp averages are calculated are nearly so, which are the keys to solvability in polynomial time. Analytical results for symmetric, radially monotonic univariate distributions are presented. An algorithm for univariate lp averaging is presented. Computational results for a Gaussian distribution, a class of symmetric heavy-tailed distributions and a class of asymmetric heavy-tailed distributions are presented. Many phenomena in human-based areas are increasingly known to be represented by data that have large numbers of outliers and belong to very heavy-tailed distributions. When tails of distributions are so heavy that even medians (L1 and l1 averages) do not exist, one needs to consider using lp minimization principles with 0 < p < 1.
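To see what makes p < 1 awkward: each term |x_i − a|^p has a cusp at x_i and is concave away from it, so the functional is non-convex but its univariate global minimum sits at one of the data points. That yields the following brute-force baseline (our illustrative sketch, not the paper's algorithm, which exploits statistical structure to do better):

```python
def lp_average(data, p):
    """Brute-force univariate lp average for 0 < p < 1.

    f(a) = sum_i |x_i - a|^p is concave between consecutive data
    points, so its global minimum is attained at a data point and an
    O(n^2) scan over the data suffices."""
    assert 0 < p < 1
    def f(a):
        return sum(abs(x - a) ** p for x in data)
    return min(data, key=f)
```

Note the outlier robustness the abstract alludes to: a single huge value barely moves the lp average, unlike the mean.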
Open Access Article Interaction Enhanced Imperialist Competitive Algorithms
Algorithms 2012, 5(4), 433-448; doi:10.3390/a5040433
Received: 11 July 2012 / Revised: 14 September 2012 / Accepted: 20 September 2012 / Published: 15 October 2012
Cited by 3 | PDF Full-text (379 KB) | HTML Full-text | XML Full-text
Abstract
Imperialist Competitive Algorithm (ICA) is a new population-based evolutionary algorithm. It divides its population of solutions into several sub-populations, and then searches for the optimal solution through two operations: assimilation and competition. The assimilation operation moves each non-best solution (called colony) in a sub-population toward the best solution (called imperialist) in the same sub-population. The competition operation removes a colony from the weakest sub-population and adds it to another sub-population. Previous work on ICA focuses mostly on improving the assimilation operation or replacing the assimilation operation with more powerful meta-heuristics, but none focuses on the improvement of the competition operation. Since the competition operation simply moves a colony (i.e., an inferior solution) from one sub-population to another sub-population, it incurs weak interaction among these sub-populations. This work proposes Interaction Enhanced ICA that strengthens the interaction among the imperialists of all sub-populations. The performance of Interaction Enhanced ICA is validated on a set of benchmark functions for global optimization. The results indicate that the performance of Interaction Enhanced ICA is superior to that of ICA and its existing variants.
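The vanilla competition operation that the paper criticizes can be sketched in a few lines (representation and the weakest-empire criterion are simplified here; real ICA weighs total empire cost):

```python
import random

def competition(empires, costs, rng=random.Random(0)):
    """One vanilla ICA competition step, the operation this paper strengthens.

    empires: dict empire id -> list of colony solutions
    costs:   dict empire id -> total empire cost (lower is better)

    The weakest empire (highest cost) loses one colony to a randomly
    chosen other empire.  A single inferior solution changing hands is
    the only interaction among sub-populations in vanilla ICA, which is
    exactly the weakness the Interaction Enhanced variant addresses."""
    weakest = max(costs, key=costs.get)
    if not empires[weakest]:
        return
    colony = empires[weakest].pop()
    others = [e for e in empires if e != weakest]
    empires[rng.choice(others)].append(colony)
```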
Open Access Article Forecasting the Unit Cost of a Product with Some Linear Fuzzy Collaborative Forecasting Models
Algorithms 2012, 5(4), 449-468; doi:10.3390/a5040449
Received: 6 June 2012 / Revised: 7 September 2012 / Accepted: 17 September 2012 / Published: 15 October 2012
Cited by 2 | PDF Full-text (283 KB) | HTML Full-text | XML Full-text
Abstract
Forecasting the unit cost of every product type in a factory is an important task. However, it is not easy to deal with the uncertainty of the unit cost. Fuzzy collaborative forecasting is a very effective treatment of the uncertainty in the distributed environment. This paper presents some linear fuzzy collaborative forecasting models to predict the unit cost of a product. In these models, the experts’ forecasts differ and therefore need to be aggregated through collaboration. According to the experimental results, the effectiveness of forecasting the unit cost was considerably improved through collaboration.
Open Access Article Contextual Anomaly Detection in Text Data
Algorithms 2012, 5(4), 469-489; doi:10.3390/a5040469
Received: 20 June 2012 / Revised: 10 October 2012 / Accepted: 11 October 2012 / Published: 19 October 2012
Cited by 2 | PDF Full-text (3733 KB) | HTML Full-text | XML Full-text
Abstract
We propose using side information to further inform anomaly detection algorithms of the semantic context of the text data they are analyzing, thereby considering both divergence from the statistical pattern seen in particular datasets and divergence seen from more general semantic expectations. Computational experiments show that our algorithm performs as expected on data that reflect real-world events with contextual ambiguity, while replicating conventional clustering on data that are either too specialized or generic to result in contextual information being actionable. These results suggest that our algorithm could potentially reduce false positive rates in existing anomaly detection systems.

Open Access Article The Effects of Tabular-Based Content Extraction on Patent Document Clustering
Algorithms 2012, 5(4), 490-505; doi:10.3390/a5040490
Received: 1 July 2012 / Revised: 16 August 2012 / Accepted: 9 October 2012 / Published: 22 October 2012
PDF Full-text (1308 KB) | HTML Full-text | XML Full-text
Abstract
Data can be represented in many different ways within a particular document or set of documents. Hence, attempts to automatically process the relationships between documents or determine the relevance of certain document objects can be problematic. In this study, we have developed software to automatically catalog objects contained in HTML files for patents granted by the United States Patent and Trademark Office (USPTO). Once these objects are recognized, the software creates metadata that assigns a data type to each document object. Such metadata can be easily processed and analyzed for subsequent text mining tasks. Specifically, document similarity and clustering techniques were applied to a subset of the USPTO document collection. Although our preliminary results demonstrate that tables and numerical data do not provide quantifiable value to a document’s content, the stage for future work in measuring the importance of document objects within a large corpus has been set.

Open Access Article Extracting Hierarchies from Data Clusters for Better Classification
Algorithms 2012, 5(4), 506-520; doi:10.3390/a5040506
Received: 2 July 2012 / Revised: 24 September 2012 / Accepted: 17 October 2012 / Published: 23 October 2012
Cited by 1 | PDF Full-text (1199 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we present the PHOCS-2 algorithm, which extracts a “Predicted Hierarchy Of ClassifierS”. The extracted hierarchy helps us to enhance the performance of flat classification. Nodes in the hierarchy contain classifiers. Each intermediate node corresponds to a set of classes and each leaf node corresponds to a single class. In PHOCS-2 we make an estimate for each node, achieving more precise computation of false positives, true positives and false negatives. Stopping criteria are based on the results of the flat classification. The proposed algorithm is validated against nine datasets.
Open Access Article Alpha-Beta Pruning and Althöfer’s Pathology-Free Negamax Algorithm
Algorithms 2012, 5(4), 521-528; doi:10.3390/a5040521
Received: 25 July 2012 / Revised: 6 October 2012 / Accepted: 25 October 2012 / Published: 5 November 2012
PDF Full-text (81 KB) | HTML Full-text | XML Full-text
Abstract
The minimax algorithm, also called the negamax algorithm, remains today the most widely used search technique for two-player perfect-information games. However, minimaxing has been shown to be susceptible to game tree pathology, a paradoxical situation in which the accuracy of the search can decrease as the height of the tree increases. Althöfer’s alternative minimax algorithm has been proven to be invulnerable to pathology. However, it has not been clear whether alpha-beta pruning, a crucial component of practical game programs, could be applied in the context of Althöfer’s algorithm. In this brief paper, we show how alpha-beta pruning can be adapted to Althöfer’s algorithm.
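For reference, standard negamax with alpha-beta pruning looks like this (Althöfer's pathology-free variant, to which the paper adapts the pruning, is not reproduced here; the tree representation is ours):

```python
def negamax(node, alpha=float("-inf"), beta=float("inf")):
    """Negamax with alpha-beta pruning on a game tree.

    A node is either a leaf value (a number, scored from the viewpoint
    of the player to move there) or a list of child nodes."""
    if not isinstance(node, list):
        return node
    best = float("-inf")
    for child in node:
        best = max(best, -negamax(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:  # cutoff: remaining siblings cannot matter
            break
    return best
```

The `alpha >= beta` test is the pruning the paper is about: once a move is provably good enough, sibling subtrees are skipped without affecting the root value.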
Open Access Article Finite Element Quadrature of Regularized Discontinuous and Singular Level Set Functions in 3D Problems
Algorithms 2012, 5(4), 529-544; doi:10.3390/a5040529
Received: 10 September 2012 / Revised: 22 October 2012 / Accepted: 29 October 2012 / Published: 7 November 2012
Cited by 6 | PDF Full-text (2904 KB) | HTML Full-text | XML Full-text
Abstract
Regularized Heaviside and Dirac delta functions are used in several fields of computational physics and mechanics. Hence the issue of the quadrature of integrals of discontinuous and singular functions arises. In order to avoid ad-hoc quadrature procedures, regularization of the discontinuous and the singular fields is often carried out. In particular, weight functions of the signed distance with respect to the discontinuity interface are exploited. Tornberg and Engquist (Journal of Scientific Computing, 2003, 19: 527–552) proved that the use of compact support weight functions is not suitable because it leads to errors that do not vanish for decreasing mesh size. They proposed the adoption of non-compact support weight functions. In the present contribution, the relationship between the Fourier transform of the weight functions and the accuracy of the regularization procedure is exploited. The proposed regularized approach was implemented in the eXtended Finite Element Method. As a three-dimensional example, we study a slender solid characterized by an inclined interface across which the displacement is discontinuous. The accuracy is evaluated for varying positions of the discontinuity interface with respect to the underlying mesh. A procedure for the choice of the regularization parameters is proposed.
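The kind of regularization in question can be illustrated with a Gaussian weight, one of the non-compact-support choices in the Tornberg-Engquist spirit (parameter names are ours; x is the signed distance to the interface):

```python
import math

def heaviside_reg(x, eps):
    """Heaviside step regularized with a Gaussian (non-compact) kernel:
    the jump is smeared over a band of width ~eps around the interface."""
    return 0.5 * (1.0 + math.erf(x / eps))

def dirac_reg(x, eps):
    """Matching regularized Dirac delta: the exact derivative of
    heaviside_reg, a Gaussian of width eps with unit integral."""
    return math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))
```

Because the pair is an exact derivative/antiderivative match and the kernel decays smoothly, standard Gauss quadrature of integrals containing these fields converges as the mesh is refined, which is the property compact-support weights lack.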
Open Access Article Exact Algorithms for Maximum Clique: A Computational Study
Algorithms 2012, 5(4), 545-587; doi:10.3390/a5040545
Received: 11 September 2012 / Revised: 29 October 2012 / Accepted: 29 October 2012 / Published: 19 November 2012
Cited by 24 | PDF Full-text (1432 KB) | HTML Full-text | XML Full-text
Abstract
We investigate a number of recently reported exact algorithms for the maximum clique problem. The program code is presented and analyzed to show how small changes in implementation can have a drastic effect on performance. The computational study demonstrates how problem features and hardware platforms influence algorithm behaviour. The effect of vertex ordering is investigated. One of the algorithms (MCS) is broken into its constituent parts and we discover that one of these parts frequently degrades performance. It is shown that the standard procedure used for rescaling published results (i.e., adjusting run times based on the calibration of a standard program over a set of benchmarks) is unsafe and can lead to incorrect conclusions being drawn from empirical data.
(This article belongs to the Special Issue Graph Algorithms)
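The branch-and-bound skeleton shared by the algorithms in such studies can be sketched in a few lines (a basic Carraghan-Pardalos-style search without the colour bounds that MCS and its relatives add on top; the graph representation is ours):

```python
def max_clique(adj):
    """Exact maximum clique by simple branch and bound.

    adj: dict vertex -> set of neighbours.  Returns one maximum clique.
    The bound prunes any branch where the current clique plus all
    remaining candidates cannot beat the incumbent."""
    best = []
    def expand(clique, candidates):
        nonlocal best
        if not candidates:
            if len(clique) > len(best):
                best = clique[:]
            return
        for v in list(candidates):
            if len(clique) + len(candidates) <= len(best):
                return  # bound: cannot beat the incumbent
            candidates = candidates - {v}
            expand(clique + [v], candidates & adj[v])
    expand([], set(adj))
    return best
```

The "small changes with drastic effect" theme of the paper lives in details invisible here: candidate ordering, the tightness of the bound, and the data structures holding `candidates`.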
Open Access Article An Efficient Algorithm for Automatic Peak Detection in Noisy Periodic and Quasi-Periodic Signals
Algorithms 2012, 5(4), 588-603; doi:10.3390/a5040588
Received: 3 August 2012 / Revised: 29 October 2012 / Accepted: 13 November 2012 / Published: 21 November 2012
Cited by 32 | PDF Full-text (3787 KB) | HTML Full-text | XML Full-text
Abstract
We present a new method for automatic detection of peaks in noisy periodic and quasi-periodic signals. The new method, called automatic multiscale-based peak detection (AMPD), is based on the calculation and analysis of the local maxima scalogram, a matrix comprising the scale-dependent occurrences
[...] Read more.
We present a new method for automatic detection of peaks in noisy periodic and quasi-periodic signals. The new method, called automatic multiscale-based peak detection (AMPD), is based on the calculation and analysis of the local maxima scalogram, a matrix comprising the scale-dependent occurrences of local maxima. The usefulness of the proposed method is shown by applying the AMPD algorithm to simulated and real-world signals. Full article
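The multiscale idea can be conveyed with a simplified sketch: mark sample i at scale k if it exceeds its neighbours k steps away, then keep the samples that are local maxima at every scale. The published AMPD additionally chooses the scale range automatically from the row statistics of the scalogram; here it is just a parameter:

```python
def multiscale_peaks(x, max_scale=None):
    """Simplified local-maxima-scalogram peak detection.

    A sample is kept only if it is a strict local maximum at every
    scale k = 1..max_scale, which suppresses small noise peaks that
    survive only at scale 1."""
    n = len(x)
    if max_scale is None:
        max_scale = max(1, n // 4)
    peaks = []
    for i in range(1, n - 1):
        if all(x[i] > x[max(i - k, 0)] and x[i] > x[min(i + k, n - 1)]
               for k in range(1, max_scale + 1)):
            peaks.append(i)
    return peaks
```

In the example below the minor bump at index 6 is rejected at scale 2 because the larger peak at index 4 falls inside its window, the kind of noise suppression single-scale detectors miss.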
Open Access Article Laplace–Fourier Transform of the Stretched Exponential Function: Analytic Error Bounds, Double Exponential Transform, and Open-Source Implementation “libkww”
Algorithms 2012, 5(4), 604-628; doi:10.3390/a5040604
Received: 12 October 2012 / Revised: 13 November 2012 / Accepted: 14 November 2012 / Published: 22 November 2012
Cited by 10 | PDF Full-text (471 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The C library libkww provides functions to compute the Kohlrausch–Williams– Watts function, i.e., the Laplace–Fourier transform of the stretched (or compressed) exponential function exp(-tβ ) for exponents β between 0.1 and 1.9 with double precision. Analytic error bounds are derived for
[...] Read more.
The C library libkww provides functions to compute the Kohlrausch–Williams– Watts function, i.e., the Laplace–Fourier transform of the stretched (or compressed) exponential function exp(-tβ ) for exponents β between 0.1 and 1.9 with double precision. Analytic error bounds are derived for the low and high frequency series expansions. For intermediate frequencies, the numeric integration is enormously accelerated by using the Ooura–Mori double exponential transformation. The primitive of the cosine transform needed for the convolution integrals is also implemented. The software is hosted at http://apps.jcns.fz-juelich.de/kww; version 3.0 is deposited as supplementary material to this article. Full article
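For orientation, the quantity in question is the cosine transform of exp(−t^β). A deliberately naive trapezoid evaluation (for illustration only; libkww replaces this with series expansions plus the Ooura–Mori double exponential transformation, and this version degrades badly for large ω):

```python
import math

def kww_cos_naive(omega, beta, t_max=50.0, n=50000):
    """Trapezoid approximation of integral_0^inf cos(omega*t)*exp(-t**beta) dt,
    truncated at t_max.  For beta = 1 the exact value is 1/(1+omega**2),
    which makes a handy sanity check."""
    h = t_max / n
    total = 0.5 * (1.0 + math.cos(omega * t_max) * math.exp(-t_max ** beta))
    for i in range(1, n):
        t = i * h
        total += math.cos(omega * t) * math.exp(-t ** beta)
    return total * h
```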
Open Access Article Testing Goodness of Fit of Random Graph Models
Algorithms 2012, 5(4), 629-635; doi:10.3390/a5040629
Received: 7 May 2012 / Revised: 8 November 2012 / Accepted: 30 November 2012 / Published: 6 December 2012
Cited by 2 | PDF Full-text (136 KB) | HTML Full-text | XML Full-text
Abstract
Random graphs are matrices with independent 0–1 elements with probabilities determined by a small number of parameters. One of the oldest models is the Rasch model where the odds are ratios of positive numbers scaling the rows and columns. Later Persi Diaconis with
[...] Read more.
Random graphs are matrices with independent 0–1 elements with probabilities determined by a small number of parameters. One of the oldest models is the Rasch model where the odds are ratios of positive numbers scaling the rows and columns. Later Persi Diaconis with his coworkers rediscovered the model for symmetric matrices and called the model beta. Here we give goodness-of-fit tests for the model and extend the model to a version of the block model introduced by Holland, Laskey and Leinhard. Full article
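Sampling from the symmetric (beta) model described here is straightforward: with a positive scale a_i per vertex, edge {i, j} appears independently with odds a_i·a_j, i.e., probability a_i·a_j / (1 + a_i·a_j). A small sketch (representation ours):

```python
import random

def beta_model_graph(a, rng=random.Random(42)):
    """Sample a symmetric 0-1 adjacency matrix from the beta model:
    P(edge {i,j}) = a[i]*a[j] / (1 + a[i]*a[j]), a[i] > 0."""
    n = len(a)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            p = a[i] * a[j] / (1.0 + a[i] * a[j])
            if rng.random() < p:
                adj[i][j] = adj[j][i] = 1
    return adj
```

Goodness-of-fit testing, the paper's subject, then asks whether an observed graph's degree sequence is plausible under the fitted a values.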
Open Access Article Edge Detection from MRI and DTI Images with an Anisotropic Vector Field Flow Using a Divergence Map
Algorithms 2012, 5(4), 636-653; doi:10.3390/a5040636
Received: 31 July 2012 / Revised: 5 November 2012 / Accepted: 3 December 2012 / Published: 13 December 2012
Cited by 2 | PDF Full-text (1412 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this work is the extraction of edges from Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI) images by a deformable contour procedure, using an external force field derived from an anisotropic flow. Moreover, we introduce a divergence map in
[...] Read more.
The aim of this work is the extraction of edges from Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI) images by a deformable contour procedure, using an external force field derived from an anisotropic flow. Moreover, we introduce a divergence map in order to check the convergence of the process. As we know from vector calculus, divergence is a measure of the magnitude of a vector field convergence at a given point. Thus by means level curves of the divergence map, we have automatically selected an initial contour for the deformation process. If the initial curve includes the areas from which the vector field diverges, it will be able to push the curve towards the edges. Furthermore the divergence map highlights the presence of curves pointing to the most significant geometric parts of boundaries corresponding to high curvature values. In this way, the skeleton of the extracted object will be rather well defined and may subsequently be employed in shape analysis and morphological studies. Full article
(This article belongs to the Special Issue Machine Learning for Medical Imaging)
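The vector-calculus core of the divergence map is just a finite-difference divergence of the external force field on the image grid (a minimal sketch with central differences and unit spacing; the paper's anisotropic field construction is not reproduced):

```python
def divergence(vx, vy):
    """Divergence of a 2D vector field sampled on a grid, by central
    differences with unit spacing.  vx[y][x], vy[y][x] are the field
    components; boundary cells are left at zero."""
    ny, nx = len(vx), len(vx[0])
    div = [[0.0] * nx for _ in range(ny)]
    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            dvx_dx = (vx[y][x + 1] - vx[y][x - 1]) / 2.0
            dvy_dy = (vy[y + 1][x] - vy[y - 1][x]) / 2.0
            div[y][x] = dvx_dx + dvy_dy
    return div
```

Thresholding or taking level curves of this map then yields the source regions from which the deformable contour can be initialized.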
Open Access Article Extracting Co-Occurrence Relations from ZDDs
Algorithms 2012, 5(4), 654-667; doi:10.3390/a5040654
Received: 27 September 2012 / Revised: 4 December 2012 / Accepted: 6 December 2012 / Published: 13 December 2012
Cited by 2 | PDF Full-text (154 KB) | HTML Full-text | XML Full-text
Abstract
A zero-suppressed binary decision diagram (ZDD) is a graph representation suitable for handling sparse set families. Given a ZDD representing a set family, we present an efficient algorithm to discover a hidden structure, called a co-occurrence relation, on the ground set. This computation can be done in time complexity that is related not to the number of sets, but to some feature values of the ZDD. We furthermore introduce a conditional co-occurrence relation and present an extraction algorithm, which enables us to discover further structural information.
(This article belongs to the Special Issue Graph Algorithms)
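The relation being extracted is easy to state on an explicit set family: two ground-set items co-occur when they appear in exactly the same member sets. A naive sketch (the paper's contribution is computing this on the ZDD itself, with cost tied to the ZDD's size rather than to the possibly huge number of sets):

```python
def co_occurrence_classes(family, ground):
    """Group ground-set items by their membership signature: items in
    the same group appear in exactly the same sets of the family.

    family: list of sets; ground: iterable of items."""
    signature = {}
    for item in ground:
        sig = frozenset(i for i, s in enumerate(family) if item in s)
        signature.setdefault(sig, []).append(item)
    return [sorted(group) for group in signature.values()]
```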

Journal Contact

MDPI AG
Algorithms Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
algorithms@mdpi.com
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18