Table of Contents

Algorithms, Volume 8, Issue 4 (December 2015), Pages 799-1218

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-24
Open Access Article
Numerical Properties of Different Root-Finding Algorithms Obtained for Approximating Continuous Newton’s Method
Algorithms 2015, 8(4), 1210-1218; https://doi.org/10.3390/a8041210
Received: 28 October 2015 / Revised: 10 December 2015 / Accepted: 14 December 2015 / Published: 17 December 2015
Cited by 4 | Viewed by 1434 | PDF Full-text (374 KB) | HTML Full-text | XML Full-text
Abstract
This paper is dedicated to the study of continuous Newton’s method, a generic differential equation whose associated flow tends to the zeros of a given polynomial. Firstly, we analyze some numerical features of the root-finding methods obtained by applying different numerical methods for solving initial value problems. The relationship between the step size and the order of convergence is particularly considered. We analyze the cases of both a constant and a non-constant step size in the integration procedure. We show that, when working with a non-constant step size, the well-known Chebyshev–Halley family of iterative methods for solving nonlinear scalar equations is obtained. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)
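The flow described in this abstract can be sketched in a few lines: discretizing dz/dt = −f(z)/f′(z) with an explicit Euler step of size h gives a damped Newton iteration (h = 1 is the classical method). The cubic below is an illustrative example, not one from the paper.

```python
# Continuous Newton's method: dz/dt = -f(z)/f'(z).
# An explicit Euler step of size h gives damped Newton; h = 1 is the
# classical Newton method, and a non-constant h changes the resulting family.
def newton_flow_step(f, df, z, h):
    return z - h * f(z) / df(z)

f = lambda z: z ** 3 - 1.0       # illustrative cubic, not from the paper
df = lambda z: 3.0 * z ** 2

z = 2.0
for _ in range(60):
    z = newton_flow_step(f, df, z, h=0.5)
print(z)  # approaches the root z = 1
```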

Open Access Article
A New Smoothing Conjugate Gradient Method for Solving Nonlinear Nonsmooth Complementarity Problems
Algorithms 2015, 8(4), 1195-1209; https://doi.org/10.3390/a8041195
Received: 13 October 2015 / Revised: 27 November 2015 / Accepted: 11 December 2015 / Published: 17 December 2015
Cited by 3 | Viewed by 1447 | PDF Full-text (227 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, by using the smoothing Fischer–Burmeister function, we present a new smoothing conjugate gradient method for solving nonlinear nonsmooth complementarity problems. The line search we use guarantees the descent of the method. Under suitable conditions, the new smoothing conjugate gradient method is proved to be globally convergent. Finally, preliminary numerical experiments show that the new method is efficient. Full article
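The smoothing Fischer–Burmeister function mentioned above has a standard form that is easy to sketch (the paper may use a different smoothing variant): φ_μ(a, b) = a + b − √(a² + b² + 2μ²), whose zeros as μ → 0 characterize the complementarity conditions a ≥ 0, b ≥ 0, ab = 0.

```python
import math

# A standard smoothed Fischer-Burmeister function (the paper may use a
# variant): phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2).
# As mu -> 0 it tends to the nonsmooth FB function, whose zeros encode
# the complementarity conditions a >= 0, b >= 0, a*b = 0.
def fb_smooth(a, b, mu):
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu * mu)

print(fb_smooth(3.0, 0.0, 0.0))  # 0.0: (a, b) = (3, 0) satisfies complementarity
```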
Open Access Article
A Data Analytic Algorithm for Managing, Querying, and Processing Uncertain Big Data in Cloud Environments
Algorithms 2015, 8(4), 1175-1194; https://doi.org/10.3390/a8041175
Received: 26 September 2015 / Revised: 20 November 2015 / Accepted: 3 December 2015 / Published: 11 December 2015
Cited by 10 | Viewed by 2405 | PDF Full-text (374 KB) | HTML Full-text | XML Full-text
Abstract
Big data are everywhere, as high volumes of varied, valuable, precise and uncertain data can easily be collected or generated at high velocity in various real-life applications. Embedded in these big data are rich sets of useful information and knowledge. To mine these big data and to discover useful information and knowledge, we present a data analytic algorithm in this article. Our algorithm manages, queries, and processes uncertain big data in cloud environments. More specifically, it manages transactions of uncertain big data, allows users to query these big data by specifying constraints expressing their interests, and processes the user-specified constraints to discover useful information and knowledge from the uncertain big data. As each item in every transaction in these uncertain big data is associated with an existential probability value expressing the likelihood that the item is present in that transaction, computation can be intensive. Our algorithm uses the MapReduce model in a cloud environment for effective data analytics on these uncertain big data. Experimental results show the effectiveness of our data analytic algorithm for managing, querying, and processing uncertain big data in cloud environments. Full article
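A core computation behind mining uncertain transaction data of this kind is the expected support of an itemset: summing, over transactions, the product of the existential probabilities of the itemset's members. A minimal sketch (the toy data and the helper name are illustrative, not from the paper):

```python
# Expected support of an itemset in an uncertain database: each item in a
# transaction carries an existential probability, and the expected support
# is the sum over transactions of the product of the member items'
# probabilities.
def expected_support(itemset, transactions):
    total = 0.0
    for t in transactions:
        p = 1.0
        for item in itemset:
            p *= t.get(item, 0.0)   # probability 0 if the item is absent
        total += p
    return total

db = [{"a": 0.9, "b": 0.5}, {"a": 0.6}, {"a": 1.0, "b": 1.0}]
print(expected_support({"a", "b"}, db))  # 0.9*0.5 + 0 + 1.0 = 1.45
```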

Open Access Article
Generating Realistic Labelled, Weighted Random Graphs
Algorithms 2015, 8(4), 1143-1174; https://doi.org/10.3390/a8041143
Received: 1 June 2015 / Revised: 16 November 2015 / Accepted: 20 November 2015 / Published: 8 December 2015
Viewed by 2669 | PDF Full-text (1137 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixtures, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields an accurate estimation while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels and edge weights to graph structure. Full article
(This article belongs to the Special Issue Graph Drawing and Experimental Algorithms)
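Drawing an edge weight from a learned Beta Mixture Model can be sketched as follows; the component parameters here are illustrative rather than learned from any real network:

```python
import random

# Sampling an edge weight from a beta mixture model (BMM): pick a mixture
# component in proportion to its weight, then draw from that component's
# Beta distribution. Component parameters are illustrative, not learned.
def sample_bmm_weight(components, rng=random):
    r, acc = rng.random(), 0.0
    for weight, alpha, beta in components:
        acc += weight
        if r <= acc:
            return rng.betavariate(alpha, beta)
    return rng.betavariate(*components[-1][1:])  # guard against rounding

bmm = [(0.7, 2.0, 5.0), (0.3, 8.0, 2.0)]  # (weight, alpha, beta) per component
w = sample_bmm_weight(bmm)
print(0.0 <= w <= 1.0)  # True: Beta draws lie in [0, 1]
```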

Open Access Article
Efficiency Intra-Cluster Device-to-Device Relay Selection for Multicast Services Based on Combinatorial Auction
Algorithms 2015, 8(4), 1129-1142; https://doi.org/10.3390/a8041129
Received: 16 July 2015 / Revised: 11 October 2015 / Accepted: 23 November 2015 / Published: 2 December 2015
Viewed by 1631 | PDF Full-text (932 KB) | HTML Full-text | XML Full-text
Abstract
In Long Term Evolution-Advanced (LTE-A) networks, device-to-device (D2D) communications can be utilized to enhance the performance of multicast services by leveraging D2D relays to serve nodes with worse channel conditions within a cluster. In traditional D2D relay schemes, D2D links with poor channel conditions may become the bottleneck of the system sum data rate. In this paper, to optimize the throughput of D2D communications, we introduce an iterative combinatorial auction algorithm for efficient D2D relay selection. In the combinatorial auction, the User Equipments (UEs) that fail to correctly receive multicast data from the eNodeB (eNB) are viewed as bidders that compete for D2D relays, while the eNB is treated as the auctioneer. We also establish the convergence and low complexity of the algorithm and present numerical simulations to verify its efficiency. Full article

Open Access Article
On the Local Convergence of a Third Order Family of Iterative Processes
Algorithms 2015, 8(4), 1121-1128; https://doi.org/10.3390/a8041121
Received: 7 September 2015 / Revised: 24 November 2015 / Accepted: 26 November 2015 / Published: 1 December 2015
Cited by 7 | Viewed by 1782 | PDF Full-text (315 KB) | HTML Full-text | XML Full-text
Abstract
Efficiency is generally the most important aspect to take into account when choosing an iterative method to approximate a solution of an equation, but it is not the only aspect to consider. Another important aspect is the accessibility of the iterative process, which describes the domain of starting points from which the iterative process converges to a solution of the equation. So, we consider a family of iterative processes with a higher efficiency index than Newton’s method. However, this family of processes presents problems of accessibility to the solution x*. From a local study of the convergence of this family, we perform an optimization study of the accessibility and obtain iterative processes with better accessibility than Newton’s method. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

Open Access Article
An Optimal Biparametric Multipoint Family and Its Self-Acceleration with Memory for Solving Nonlinear Equations
Algorithms 2015, 8(4), 1111-1120; https://doi.org/10.3390/a8041111
Received: 8 October 2015 / Revised: 22 November 2015 / Accepted: 24 November 2015 / Published: 1 December 2015
Cited by 2 | Viewed by 1432 | PDF Full-text (210 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a family of Steffensen-type methods of optimal order of convergence with two parameters is constructed by direct Newtonian interpolation. It satisfies the conjecture proposed by Kung and Traub (J. Assoc. Comput. Math. 1974, 21, 634–651) that an iterative method based on m evaluations per iteration without memory would arrive at the optimal convergence order 2^(m−1). Furthermore, a family of Steffensen-type methods of super convergence is suggested by using arithmetic expressions for the parameters with memory, but with no additional new evaluation of the function. Their error equations, asymptotic convergence constants and convergence orders are obtained. Finally, they are compared with related root-finding methods in numerical examples. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)
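As context for the optimality claim, the classical Steffensen method is the simplest member of this family: m = 2 function evaluations per iteration, no memory, and convergence order 2 = 2^(m−1), matching the Kung–Traub bound. A minimal sketch:

```python
# Classical Steffensen iteration: derivative-free, m = 2 function
# evaluations per step, convergence order 2 = 2^(m-1), which attains the
# Kung-Traub optimal order for a memoryless two-evaluation method.
def steffensen(f, x, iters=20, tol=1e-12):
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:          # stop before the divided difference degenerates
            break
        x = x - fx * fx / (f(x + fx) - fx)
    return x

root = steffensen(lambda t: t * t - 2.0, 1.5)
print(root)  # approximately sqrt(2)
```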
Open Access Article
Computer Aided Diagnosis System for Early Lung Cancer Detection
Algorithms 2015, 8(4), 1088-1110; https://doi.org/10.3390/a8041088
Received: 2 June 2015 / Accepted: 10 November 2015 / Published: 20 November 2015
Cited by 3 | Viewed by 2720 | PDF Full-text (1633 KB) | HTML Full-text | XML Full-text
Abstract
Lung cancer continues to rank as the leading cause of cancer deaths worldwide. One of the most promising techniques for early detection of cancerous cells relies on sputum cell analysis. This was the motivation behind the design and development of a new computer aided diagnosis (CAD) system for early detection of lung cancer based on the analysis of sputum color images. The proposed CAD system encompasses four main processing steps. First is the preprocessing step, which utilizes a Bayesian classification method using histogram analysis. In the second step, mean shift segmentation is applied to separate the nuclei from the cytoplasm. The third step is feature analysis, in which geometric and chromatic features are extracted from the nucleus region and used in the diagnostic process of the sputum images. Finally, the diagnosis is completed using an artificial neural network and a support vector machine (SVM) to classify the cells as benign or malignant. The performance of the system was analyzed based on different criteria, such as sensitivity, specificity and accuracy, and the evaluation was carried out using the Receiver Operating Characteristic (ROC) curve. The experimental results demonstrate the efficiency of the SVM classifier over the other classifiers, with 97% sensitivity and accuracy as well as a significant reduction in the false positive and false negative rates. Full article

Open Access Article
Local Convergence of an Efficient High Convergence Order Method Using Hypothesis Only on the First Derivative
Algorithms 2015, 8(4), 1076-1087; https://doi.org/10.3390/a8041076
Received: 25 September 2015 / Accepted: 11 November 2015 / Published: 20 November 2015
Cited by 1 | Viewed by 1542 | PDF Full-text (234 KB) | HTML Full-text | XML Full-text
Abstract
We present a local convergence analysis of an eighth order three-step method in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. In an earlier study by Sharma and Arora (2015), the order of convergence was shown using Taylor series expansions and hypotheses on derivatives up to the fourth order, or even higher, of the function involved, which restrict the applicability of the scheme, even though only the first order derivative appears in it. In order to overcome this problem, we propose hypotheses on only the first order derivative. In this way, we not only expand the applicability of the method but also provide a computable convergence domain. Finally, a variety of concrete numerical examples to which earlier studies cannot be applied are presented, and solutions of nonlinear equations are obtained; our study does not exhibit this type of restriction. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)
Open Access Article
A New Approach for Automatic Removal of Movement Artifacts in Near-Infrared Spectroscopy Time Series by Means of Acceleration Data
Algorithms 2015, 8(4), 1052-1075; https://doi.org/10.3390/a8041052
Received: 6 July 2015 / Revised: 14 September 2015 / Accepted: 28 October 2015 / Published: 19 November 2015
Cited by 7 | Viewed by 2696 | PDF Full-text (1543 KB) | HTML Full-text | XML Full-text
Abstract
Near-infrared spectroscopy (NIRS) enables the non-invasive measurement of changes in hemodynamics and oxygenation in tissue. Changes in light-coupling due to movement of the subject can cause movement artifacts (MAs) in the recorded signals. Several methods have been developed so far that facilitate the detection and reduction of MAs in the data. However, due to fixed parameter values (e.g., a global threshold), none of these methods is perfectly suitable for long-term (i.e., hours) recordings, or they are not time-effective when applied to large datasets. We aimed to overcome these limitations by automation, i.e., data-adaptive thresholding specifically designed for long-term measurements, and by introducing a stable long-term signal reconstruction. Our new technique (“acceleration-based movement artifact reduction algorithm”, AMARA) is based on combining two methods: the “movement artifact reduction algorithm” (MARA, Scholkmann et al. Phys. Meas. 2010, 31, 649–662), and the “accelerometer-based motion artifact removal” (ABAMAR, Virtanen et al. J. Biomed. Opt. 2011, 16, 087005). We describe AMARA in detail and report the successful validation of the algorithm using empirical NIRS data, measured over the prefrontal cortex in adolescents during sleep. In addition, we compared the performance of AMARA to that of MARA and ABAMAR based on validation data. Full article

Open Access Article
Natalie 2.0: Sparse Global Network Alignment as a Special Case of Quadratic Assignment
Algorithms 2015, 8(4), 1035-1051; https://doi.org/10.3390/a8041035
Received: 6 July 2015 / Revised: 29 September 2015 / Accepted: 12 November 2015 / Published: 18 November 2015
Cited by 4 | Viewed by 2616 | PDF Full-text (328 KB) | HTML Full-text | XML Full-text
Abstract
Data on molecular interactions are increasing at a tremendous pace, while the development of solid methods for analyzing this network data is still lagging behind. This holds in particular for the field of comparative network analysis, where one wants to identify commonalities between biological networks. Since biological functionality primarily operates at the network level, there is a clear need for topology-aware comparison methods. We present a method for global network alignment that is fast and robust and can flexibly deal with various scoring schemes taking both node-to-node correspondences and network topologies into account. We exploit the fact that network alignment is a special case of the well-studied quadratic assignment problem (QAP). We focus on sparse network alignment, where each node can be mapped only to a typically small subset of nodes in the other network. This corresponds to a QAP instance with a symmetric and sparse weight matrix. We obtain strong upper and lower bounds for the problem by improving a Lagrangian relaxation approach and introduce the open source software tool Natalie 2.0, a publicly available implementation of our method. In an extensive computational study on protein interaction networks for six different species, we find that our new method outperforms alternative established and recent state-of-the-art methods. Full article
(This article belongs to the Special Issue Algorithmic Themes in Bioinformatics)

Open Access Article
Semi-Supervised Classification Based on Mixture Graph
Algorithms 2015, 8(4), 1021-1034; https://doi.org/10.3390/a8041021
Received: 2 August 2015 / Revised: 3 November 2015 / Accepted: 5 November 2015 / Published: 16 November 2015
Cited by 2 | Viewed by 1732 | PDF Full-text (329 KB) | HTML Full-text | XML Full-text
Abstract
Graph-based semi-supervised classification heavily depends on a well-structured graph. In this paper, we investigate a mixture graph and propose a method called semi-supervised classification based on mixture graph (SSCMG). SSCMG first constructs multiple k-nearest-neighbor (kNN) graphs in different random subspaces of the samples. Then, it combines these graphs into a mixture graph and incorporates this graph into a graph-based semi-supervised classifier. SSCMG can preserve the local structure of samples in subspaces and is less affected by noisy and redundant features. An empirical study on facial image classification shows that SSCMG not only has better recognition performance, but is also more robust to input parameters than other related methods. Full article
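The kNN graphs that SSCMG builds over random subspaces can be sketched as follows (plain squared-Euclidean distance; passing the subspace as a list of feature indices is an illustrative detail, not a specification from the paper):

```python
# Build one directed kNN graph over an optional feature subspace, the basic
# ingredient that SSCMG combines into a mixture graph.
def knn_graph(points, k, dims=None):
    dims = dims if dims is not None else list(range(len(points[0])))
    def dist(p, q):  # squared Euclidean distance restricted to the subspace
        return sum((p[d] - q[d]) ** 2 for d in dims)
    edges = {}
    for i, p in enumerate(points):
        others = sorted((j for j in range(len(points)) if j != i),
                        key=lambda j: dist(p, points[j]))
        edges[i] = others[:k]
    return edges

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
g = knn_graph(pts, k=1)
print(g)  # {0: [1], 1: [0], 2: [3], 3: [2]}
```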

Open Access Article
An Integer Linear Programming Formulation for the Minimum Cardinality Segmentation Problem
Algorithms 2015, 8(4), 999-1020; https://doi.org/10.3390/a8040999
Received: 27 June 2015 / Revised: 23 September 2015 / Accepted: 2 November 2015 / Published: 11 November 2015
Cited by 1 | Viewed by 1784 | PDF Full-text (615 KB) | HTML Full-text | XML Full-text
Abstract
In this article, we investigate the Minimum Cardinality Segmentation Problem (MCSP), an NP-hard combinatorial optimization problem arising in intensity-modulated radiation therapy. The problem consists in decomposing a given nonnegative integer matrix into a nonnegative integer linear combination of a minimum cardinality set of binary matrices satisfying the consecutive ones property. We show how to transform the MCSP into a combinatorial optimization problem on a weighted directed network and we exploit this result to develop an integer linear programming formulation to exactly solve it. Computational experiments show that the lower bounds obtained by the linear relaxation of the considered formulation improve upon those currently described in the literature and suggest, at the same time, new directions for the development of future exact solution approaches to the problem. Full article
(This article belongs to the Special Issue Algorithmic Themes in Bioinformatics)
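For intuition about the decomposition, the single-row case with unit coefficients admits a simple greedy sweep: intervals (binary rows with the consecutive ones property) are opened on positive increments and closed on decrements. The minimum-cardinality objective on full matrices is NP-hard, as the abstract notes; this sketch only illustrates the decomposition itself, not the paper's ILP.

```python
# Greedy sweep decomposition of one nonnegative integer intensity row into
# unit-weight binary intervals (consecutive ones). Intervals are half-open
# (start, end) column ranges.
def sweep_decompose(row):
    segments, opened, prev = [], [], 0
    for i, v in enumerate(row + [0]):   # trailing 0 closes remaining intervals
        while v > prev:                 # open a new interval on each increment
            opened.append(i)
            prev += 1
        while v < prev:                 # close an interval on each decrement
            segments.append((opened.pop(), i))
            prev -= 1
    return segments

segs = sweep_decompose([1, 2, 2, 1])
print(segs)  # two intervals whose indicator rows sum back to [1, 2, 2, 1]
```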

Open Access Article
Some Matrix Iterations for Computing Generalized Inverses and Balancing Chemical Equations
Algorithms 2015, 8(4), 982-998; https://doi.org/10.3390/a8040982
Received: 25 June 2015 / Revised: 24 October 2015 / Accepted: 26 October 2015 / Published: 3 November 2015
Cited by 2 | Viewed by 1797 | PDF Full-text (407 KB) | HTML Full-text | XML Full-text
Abstract
An application of iterative methods for computing the Moore–Penrose inverse to balancing chemical equations is considered. To illustrate the proposed algorithms, an improved high order hyper-power matrix iterative method for computing generalized inverses is introduced and applied. The improvements of the hyper-power iterative scheme are based on its proper factorization, as well as on the possibility of accelerating the iterations in the initial phase of the convergence. The effectiveness of our approach is confirmed from a theoretical point of view, and numerical comparisons in balancing chemical equations, as well as on randomly generated matrices, are furnished. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)
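The simplest hyper-power iteration is the second-order Newton–Schulz scheme X_{k+1} = X_k(2I − AX_k); the paper's contribution concerns higher-order, properly factorized variants, but the low-order sketch below shows the basic mechanism (for an invertible matrix, the limit is the ordinary inverse):

```python
# Second-order hyper-power (Newton-Schulz) iteration for generalized
# inverses: X_{k+1} = X_k (2I - A X_k), started from the scaled transpose
# X_0 = A^T / (||A||_1 ||A||_inf), which guarantees convergence.
# Plain-list 2x2 example for illustration only.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def newton_schulz_inverse(A, iters=40):
    n = len(A)
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(v) for v in row) for row in A)
    X = [[A[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        AX = matmul(A, X)
        R = [[(2.0 if i == j else 0.0) - AX[i][j] for j in range(n)]
             for i in range(n)]
        X = matmul(X, R)
    return X

A = [[4.0, 1.0], [2.0, 3.0]]
Ainv = newton_schulz_inverse(A)  # for invertible A, this is the inverse
```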

Open Access Article
A Particle Filter Track-Before-Detect Algorithm Based on Hybrid Differential Evolution
Algorithms 2015, 8(4), 965-981; https://doi.org/10.3390/a8040965
Received: 19 August 2015 / Revised: 24 October 2015 / Accepted: 27 October 2015 / Published: 3 November 2015
Cited by 3 | Viewed by 1712 | PDF Full-text (841 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we address the problem of detecting and tracking targets with a low signal-to-noise ratio (SNR) by exploiting hybrid differential evolution (HDE) in the particle filter track-before-detect (PF-TBD) context. Firstly, we introduce the Bayesian PF-TBD method and its weaknesses. Secondly, the HDE algorithm is used as a novel particle updating strategy, proposed to optimize the performance of the PF-TBD algorithm. Thirdly, we incorporate the systematic resampling approach to enhance the performance of the proposed algorithm. An improved PF-TBD algorithm based on the HDE method is thus obtained. Experimental results indicate that the proposed method performs better in detecting and tracking than previous algorithms when the targets have a low SNR. Full article
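The systematic resampling step mentioned in the abstract can be sketched as follows: a single random offset generates N evenly spaced pointers into the cumulative distribution of the normalized particle weights.

```python
import random

# Systematic resampling: one random offset u0, then N evenly spaced
# pointers (u0 + i)/N are matched against the cumulative weights, so a
# particle with weight w receives at least floor(N*w) copies.
def systematic_resample(weights, rng=random):
    n = len(weights)
    u0 = rng.random()
    positions = [(u0 + i) / n for i in range(n)]
    cumsum, acc = [], 0.0
    for w in weights:
        acc += w
        cumsum.append(acc)
    indices, j = [], 0
    for p in positions:
        while j < n - 1 and cumsum[j] < p:
            j += 1
        indices.append(j)
    return indices

idx = systematic_resample([0.1, 0.1, 0.7, 0.1])
print(idx)  # four indices; the heavy particle 2 appears at least twice
```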

Open Access Article
A New Swarm Intelligence Approach for Clustering Based on Krill Herd with Elitism Strategy
Algorithms 2015, 8(4), 951-964; https://doi.org/10.3390/a8040951
Received: 4 August 2015 / Revised: 20 September 2015 / Accepted: 15 October 2015 / Published: 22 October 2015
Cited by 8 | Viewed by 2170 | PDF Full-text (769 KB) | HTML Full-text | XML Full-text
Abstract
As one of the most popular and well-recognized clustering methods, the fuzzy C-means (FCM) clustering algorithm is the basis of other fuzzy clustering analysis methods in both theory and applications. However, the FCM algorithm is essentially a local search optimization algorithm; therefore, it may sometimes fail to find the global optimum. To overcome this disadvantage of the FCM algorithm, a new version of the krill herd (KH) algorithm with an elitism strategy, called KHE, is proposed to solve the clustering problem. The elitism strategy has a strong ability to prevent the krill population from degrading. In addition, well-selected parameters are used in the KHE method instead of parameters originating from nature. Through an array of simulation experiments, the results show that KHE is indeed a good choice for solving general benchmark problems and fuzzy clustering analyses. Full article
(This article belongs to the Special Issue Clustering Algorithms)
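An elitism strategy of the kind described can be sketched generically: after an update step, the worst individuals of the new population are replaced by the best of the previous one, so the population never degrades. The names and the minimization convention below are illustrative, not taken from the paper:

```python
# Generic elitism for a population-based optimizer (minimization): keep the
# n_elite best individuals from the previous generation and drop the worst
# of the new one, so the best-so-far fitness never worsens.
def apply_elitism(prev_pop, new_pop, fitness, n_elite=2):
    elites = sorted(prev_pop, key=fitness)[:n_elite]
    survivors = sorted(new_pop, key=fitness)[:len(new_pop) - n_elite]
    return survivors + elites

fit = lambda x: x * x
prev = [0.1, 0.5, 2.0, 3.0]
new = [1.0, 4.0, 5.0, 6.0]
print(sorted(apply_elitism(prev, new, fit)))  # [0.1, 0.5, 1.0, 4.0]
```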

Open Access Article
Series Arc Fault Detection Algorithm Based on Autoregressive Bispectrum Analysis
Algorithms 2015, 8(4), 929-950; https://doi.org/10.3390/a8040929
Received: 3 July 2015 / Revised: 24 September 2015 / Accepted: 28 September 2015 / Published: 16 October 2015
Cited by 2 | Viewed by 2301 | PDF Full-text (1930 KB) | HTML Full-text | XML Full-text
Abstract
Arc fault is one of the most critical causes of electrical fires. Due to the diversity, randomness and concealment of arc faults in low-voltage circuits, it is difficult for general methods to protect all loads from series arc faults. From the analysis of many series arc faults, a large number of high frequency signals generated in circuits are found. These signals are easily affected by Gaussian noise, which is difficult to eliminate as a result of frequency aliasing. Thus, a novel detection algorithm is developed in this paper to accurately detect series arc faults. Initially, an autoregressive model of the mixed high frequency signals is built. Then, autoregressive bispectrum analysis is introduced to analyze common series arc fault features. The phase information of the arc fault signal is preserved by this method, and the influence of Gaussian noise is restrained effectively. Afterwards, several features, including characteristic frequency, fluctuation of phase angles, diffused distribution and incremental numbers of bispectrum peaks, are extracted for recognizing arc faults. Finally, a least squares support vector machine is used to accurately identify series arc faults from the load states based on these frequency features of the bispectrum. The validity of the algorithm is experimentally verified, achieving an arc fault detection rate above 97%. Full article

Open Access Article
Effective Data Acquisition Protocol for Multi-Hop Heterogeneous Wireless Sensor Networks Using Compressive Sensing
Algorithms 2015, 8(4), 910-928; https://doi.org/10.3390/a8040910
Received: 1 June 2015 / Revised: 3 September 2015 / Accepted: 7 September 2015 / Published: 16 October 2015
Viewed by 2155 | PDF Full-text (2408 KB) | HTML Full-text | XML Full-text
Abstract
In designing wireless sensor networks (WSNs), it is important to reduce energy dissipation and prolong network lifetime. Clustering of nodes is one of the most effective approaches for conserving energy in WSNs. Cluster formation protocols generally consider the heterogeneity of sensor nodes in terms of their energy differences but ignore their different transmission ranges. In this paper, we propose an effective data acquisition clustered protocol using compressive sensing (EDACP-CS) for heterogeneous WSNs that aims to conserve the energy of sensor nodes in the presence of energy and transmission range heterogeneity. In EDACP-CS, cluster heads are selected based on the distance from the base station and the sensor residual energy. Simulation results show that our protocol offers much better performance than existing protocols in terms of energy consumption, stability, network lifetime, and throughput. Full article
(This article belongs to the Special Issue Algorithms for Sensor Networks)
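A toy version of the cluster-head selection rule (residual energy and distance to the base station) might look like the following; the weights and the exact scoring formula are assumptions for illustration, not the rule defined in the paper:

```python
# Toy cluster-head score in the spirit of EDACP-CS: more residual energy
# and a shorter (normalized) distance to the base station are preferred.
# Weights w_energy/w_dist are illustrative assumptions.
def ch_score(residual_energy, dist_to_bs, d_max, w_energy=0.7, w_dist=0.3):
    return w_energy * residual_energy - w_dist * (dist_to_bs / d_max)

# node -> (residual energy in [0, 1], distance to base station)
nodes = {"n1": (0.9, 40.0), "n2": (0.5, 10.0), "n3": (0.8, 80.0)}
d_max = max(d for _, d in nodes.values())
head = max(nodes, key=lambda n: ch_score(nodes[n][0], nodes[n][1], d_max))
print(head)  # n1: high energy at moderate distance wins
```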

Open Access Article
On Some Improved Harmonic Mean Newton-Like Methods for Solving Systems of Nonlinear Equations
Algorithms 2015, 8(4), 895-909; https://doi.org/10.3390/a8040895
Received: 5 September 2015 / Revised: 23 September 2015 / Accepted: 24 September 2015 / Published: 9 October 2015
Cited by 3 | Viewed by 1840 | PDF Full-text (342 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we have developed a fourth-order Newton-like method based on the harmonic mean, and its multi-step version, for solving systems of nonlinear equations. The new fourth-order method requires the evaluation of one function and two first-order Fréchet derivatives per iteration. The multi-step version requires one more function evaluation per iteration. The proposed scheme does not require the evaluation of second- or higher-order Fréchet derivatives and still reaches fourth-order convergence. The multi-step version converges with order 2r+4, where r is a positive integer with r ≥ 1. We prove that the root α is a point of attraction for a general iterative function, and that the proposed schemes satisfy this result. Numerical experiments, including an application to the 1-D Bratu problem, are given to illustrate the efficiency of the new methods, and the new methods are compared with some existing methods. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)
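For intuition, the harmonic-mean idea can be illustrated in the scalar case, where dividing f(x) by the harmonic mean of f'(x) and f'(y), with y the ordinary Newton point, gives a well-known higher-order variant of Newton's method. This is a sketch of the underlying idea only, not the paper's fourth-order scheme for systems:

```python
def harmonic_mean_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Scalar harmonic-mean Newton iteration:
        y = x - f(x)/f'(x)
        x <- x - f(x) * (f'(x) + f'(y)) / (2 * f'(x) * f'(y))
    i.e. the Newton correction with f'(x) replaced by the harmonic
    mean of f'(x) and f'(y)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx                     # ordinary Newton predictor
        dfy = df(y)
        x = x - fx * (dfx + dfy) / (2.0 * dfx * dfy)
    return x

# Find sqrt(2) as the root of f(x) = x^2 - 2.
root = harmonic_mean_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```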
Open AccessArticle
Code Synchronization Algorithm Based on Segment Correlation in Spread Spectrum Communication
Algorithms 2015, 8(4), 870-894; https://doi.org/10.3390/a8040870
Received: 30 June 2015 / Revised: 20 September 2015 / Accepted: 23 September 2015 / Published: 9 October 2015
Cited by 1 | Viewed by 2322 | PDF Full-text (1695 KB) | HTML Full-text | XML Full-text
Abstract
Spread Spectrum (SPSP) communication is the theoretical basis of Direct Sequence Spread Spectrum (DSSS) transceiver technology. Spreading codes, modulation, demodulation, carrier synchronization and code synchronization are the core parts of DSSS transceivers. This paper focuses on the code synchronization problem in SPSP communications. A novel code synchronization algorithm based on segment correlation is proposed. The proposed algorithm effectively handles the information misjudgment caused by unreasonable data acquisition timing, a misjudgment that can prevent DSSS receivers from recovering the transmitted signals. Simulation results show the feasibility of a DSSS transceiver design based on the proposed code synchronization algorithm. Finally, the communication functions of a DSSS transceiver based on the proposed algorithm are implemented on a Field Programmable Gate Array (FPGA). Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Big Data)
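The segment-correlation idea can be sketched as follows: for each trial code phase, the correlator sums the magnitudes of correlations computed over short segments rather than one long correlation. The toy model below (a binary PN code and a noiseless shifted replica) is an assumption for illustration, not the paper's actual design:

```python
import random

def estimate_code_phase(code, rx, seg_len):
    """Estimate the code phase by combining correlations over short segments
    (segment-correlation idea, sketched). Returns the best-scoring shift."""
    n = len(code)
    best_shift, best_metric = 0, float("-inf")
    for shift in range(n):
        metric = 0.0
        for start in range(0, n, seg_len):
            seg_corr = sum(code[(start + i + shift) % n] * rx[start + i]
                           for i in range(min(seg_len, n - start)))
            metric += abs(seg_corr)          # segment-wise magnitude combining
        if metric > best_metric:
            best_shift, best_metric = shift, metric
    return best_shift

random.seed(7)
pn = [random.choice([-1, 1]) for _ in range(64)]       # toy PN sequence
true_shift = 13
rx = [pn[(i + true_shift) % 64] for i in range(64)]    # received: shifted replica
est = estimate_code_phase(pn, rx, seg_len=16)
```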
Open AccessArticle
Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric
Algorithms 2015, 8(4), 850-869; https://doi.org/10.3390/a8040850
Received: 27 June 2015 / Revised: 31 August 2015 / Accepted: 16 September 2015 / Published: 9 October 2015
Cited by 1 | Viewed by 1973 | PDF Full-text (604 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows one to avoid pairwise comparisons on the entire database and, thus, to significantly accelerate exploration of the protein space compared to non-metric spaces. We show on a gold-standard superfamily classification benchmark set of 6759 proteins that our exact k-nearest neighbor (k-NN) scheme classifies up to 224 out of 236 queries correctly, and on a larger, extended version of the benchmark with 60,850 additional structures, up to 1361 out of 1369 queries. Our k-NN classification thus provides a promising approach for the automatic classification of protein structures based on flexible contact map overlap alignments. Full article
(This article belongs to the Special Issue Algorithmic Themes in Bioinformatics)
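The practical value of having a true metric can be sketched with a toy nearest-neighbor search: the triangle inequality gives a cheap lower bound |d(q,p) − d(x,p)| ≤ d(q,x) with respect to a pivot p, which lets the classifier skip candidates without computing their full distance. The stand-in metric below (absolute difference on the real line) replaces max-CMO purely for illustration:

```python
def nn_classify(query, data, labels, dist, pivot):
    """1-NN search that uses the triangle inequality to skip distance
    computations: |d(q,pivot) - d(x,pivot)| is a lower bound on d(q,x),
    valid only because dist is a true metric (as max-CMO is)."""
    d_qp = dist(query, pivot)
    d_xp = [dist(x, pivot) for x in data]    # precomputed once per pivot
    best_d, best_label = float("inf"), None
    skipped = 0
    for x, dxp, label in zip(data, d_xp, labels):
        if abs(d_qp - dxp) >= best_d:        # lower bound already too large
            skipped += 1
            continue
        d = dist(query, x)
        if d < best_d:
            best_d, best_label = d, label
    return best_label, skipped

# Toy stand-in metric and data set; labels mimic superfamily assignments.
dist = lambda a, b: abs(a - b)
data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
labels = ["fold_A"] * 3 + ["fold_B"] * 3
label, skipped = nn_classify(4.9, data, labels, dist, pivot=0.0)
```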
Open AccessArticle
Newton-Type Methods on Generalized Banach Spaces and Applications in Fractional Calculus
Algorithms 2015, 8(4), 832-849; https://doi.org/10.3390/a8040832
Received: 23 June 2015 / Revised: 13 September 2015 / Accepted: 29 September 2015 / Published: 9 October 2015
Viewed by 1491 | PDF Full-text (247 KB) | HTML Full-text | XML Full-text
Abstract
We present a semilocal convergence study of Newton-type methods in a generalized Banach space setting to approximate a locally unique zero of an operator. Earlier studies require that the operator involved be Fréchet differentiable. In the present study we assume that the operator is only continuous. This way we extend the applicability of Newton-type methods to include fractional calculus and problems from other areas. Moreover, under the same or weaker conditions, we obtain weaker sufficient convergence criteria, tighter error bounds on the distances involved and at least as precise information on the location of the solution. Special cases are provided where the old convergence criteria cannot apply but the new criteria can, to locate zeros of operators. Some applications include fractional calculus involving the Riemann-Liouville fractional integral and the Caputo fractional derivative; fractional calculus is important for its applications in many applied sciences. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)
Open AccessArticle
Finding Supported Paths in Heterogeneous Networks
Algorithms 2015, 8(4), 810-831; https://doi.org/10.3390/a8040810
Received: 17 June 2015 / Revised: 25 September 2015 / Accepted: 29 September 2015 / Published: 9 October 2015
Cited by 1 | Viewed by 1843 | PDF Full-text (498 KB) | HTML Full-text | XML Full-text
Abstract
Subnetwork mining is an essential issue in the analysis of biological, social and communication networks. Recent applications require the simultaneous mining of several networks on the same or a similar vertex set. That is, one searches for subnetworks fulfilling different properties in each input network. We study the case that the input consists of a directed graph D and an undirected graph G on the same vertex set, and the sought pattern is a path P in D whose vertex set induces a connected subgraph of G. In this context, three concrete problems arise, depending on whether the existence of P is questioned or whether the length of P is to be optimized: in that case, one can search for a longest path or (maybe less intuitively) a shortest one. These problems have immediate applications in biological networks and predictable applications in social, information and communication networks. We study the classical and parameterized complexity of the problem, thus identifying polynomial and NP-complete cases, as well as fixed-parameter tractable and W[1]-hard cases. We also propose two enumeration algorithms that we evaluate on synthetic and biological data. Full article
(This article belongs to the Special Issue Algorithmic Themes in Bioinformatics)
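The "supported path" condition — a path P in D whose vertex set induces a connected subgraph of G — can be checked directly with a breadth-first search restricted to the path's vertices. A minimal sketch, with graph names and representation chosen purely for illustration:

```python
from collections import deque

def induces_connected_subgraph(path_vertices, undirected_adj):
    """Check whether the vertex set of a path in D induces a connected
    subgraph of G, via BFS restricted to the path's vertices."""
    vset = set(path_vertices)
    if not vset:
        return True
    start = next(iter(vset))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in undirected_adj.get(u, ()):
            if v in vset and v not in seen:  # stay inside the path's vertex set
                seen.add(v)
                queue.append(v)
    return seen == vset

# G as adjacency lists; the path a->b->c in D is supported, a->b->d is not.
G = {"a": ["b"], "b": ["a", "c"], "c": ["b"], "d": []}
ok = induces_connected_subgraph(["a", "b", "c"], G)
bad = induces_connected_subgraph(["a", "b", "d"], G)
```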
Open AccessArticle
Reweighted Factor Selection for SLMS-RL1 Algorithm under Gaussian Mixture Noise Environments
Algorithms 2015, 8(4), 799-809; https://doi.org/10.3390/a8040799
Received: 6 July 2015 / Revised: 2 September 2015 / Accepted: 11 September 2015 / Published: 25 September 2015
Cited by 1 | Viewed by 1975 | PDF Full-text (721 KB) | HTML Full-text | XML Full-text
Abstract
The sign least mean square with reweighted L1-norm constraint (SLMS-RL1) algorithm is an attractive sparse channel estimation method among Gaussian mixture model (GMM) based algorithms for use in impulsive noise environments. The SLMS-RL1 algorithm exploits channel sparsity through an appropriate reweighted factor, which is one of the key parameters adjusting its sparse constraint. However, to the best of the authors’ knowledge, a reweighted factor selection scheme has not been developed. This paper proposes a Monte-Carlo (MC) based reweighted factor selection method to further strengthen the performance of the SLMS-RL1 algorithm. To validate the performance of SLMS-RL1 with the proposed reweighted factor, simulation results demonstrate that the convergence time can be reduced by increasing the channel sparsity, while the steady-state MSE performance changes only slightly with different GMM impulsive-noise strengths. Full article
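A minimal sketch of a sign-LMS update with a reweighted-L1 zero-attracting term is shown below. The step size, the reweighting constants and the noise model (plain Gaussian rather than the paper's GMM impulsive noise) are all assumptions for illustration, not the paper's parameters:

```python
import random

def slms_rl1(xs, ds, n_taps, mu=0.01, rho=1e-4, eps=10.0):
    """Sign-LMS with a reweighted-L1 zero-attracting term, sketched:
        w <- w + mu * sign(e) * x  -  rho * sign(w) / (1 + eps*|w|)
    The second term pushes small taps toward zero, exploiting sparsity."""
    w = [0.0] * n_taps
    for x, d in zip(xs, ds):
        e = d - sum(wi * xi for wi, xi in zip(w, x))
        s = 1.0 if e > 0 else -1.0
        for i in range(n_taps):
            sw = 1.0 if w[i] > 0 else -1.0 if w[i] < 0 else 0.0
            w[i] += mu * s * x[i] - rho * sw / (1.0 + eps * abs(w[i]))
    return w

# Sparse illustrative channel, Gaussian input, small observation noise.
random.seed(1)
h = [0.8, 0.0, 0.0, -0.5, 0.0]
xs = [[random.gauss(0, 1) for _ in range(5)] for _ in range(20000)]
ds = [sum(hi * xi for hi, xi in zip(h, x)) + random.gauss(0, 0.01) for x in xs]
w = slms_rl1(xs, ds, n_taps=5)
```

After enough iterations the estimate settles near the sparse channel, with the zero-attractor holding the inactive taps close to zero.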
Algorithms EISSN 1999-4893 Published by MDPI AG, Basel, Switzerland