Algorithms
http://www.mdpi.com/journal/algorithms
Latest open access articles published in Algorithms at http://www.mdpi.com/journal/algorithms

<![CDATA[Algorithms, Vol. 8, Pages 723-742: A Comparative Study of Modern Heuristics on the School Timetabling Problem]]>
http://www.mdpi.com/1999-4893/8/3/723
In this contribution, a comparative study of modern heuristics on the school timetabling problem is presented. More precisely, we investigate the application of two population-based algorithms, namely Particle Swarm Optimization (PSO) and Artificial Fish Swarm (AFS), to the high school timetabling problem. To demonstrate their efficiency and performance, experiments with real-world input data have been performed. Both proposed algorithms manage to create feasible and efficient high school timetables, adequately fulfilling the timetabling needs of the respective high schools. Computational results demonstrate that both algorithms reach efficient solutions, in most cases better than those of existing approaches applied to the same school timetabling instances using the same evaluation criteria.
Algorithms 2015, 8(3), 723-742; Article; doi: 10.3390/a8030723; published 2015-08-28; ISSN 1999-4893. Authors: Iosif Katsaragakis, Ioannis Tassopoulos and Grigorios Beligiannis.

<![CDATA[Algorithms, Vol. 8, Pages 712-722: Gradient-Based Iterative Identification for Wiener Nonlinear Dynamic Systems with Moving Average Noises]]>
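For orientation, the PSO applied in the timetabling study above builds on the standard velocity/position update. A minimal generic sketch follows; the inertia weight `w` and acceleration coefficients `c1`, `c2` are illustrative defaults, not the authors' settings, and the problem-specific timetable encoding is omitted:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One generic PSO update (global-best topology): each particle is pulled
    toward its personal best (pbest) and the swarm's global best (gbest)."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

Timetabling applications layer discrete encodings and feasibility-repair steps on top of this continuous core.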
http://www.mdpi.com/1999-4893/8/3/712
This paper focuses on the parameter identification problem for Wiener nonlinear dynamic systems with moving average noises. To improve the convergence rate, a gradient-based iterative algorithm is presented that replaces the unmeasurable variables with their corresponding iterative estimates and iteratively computes the noise estimates from the obtained parameter estimates. The simulation results show that the proposed algorithm can effectively estimate the parameters of Wiener systems with moving average noises.
Algorithms 2015, 8(3), 712-722; Article; doi: 10.3390/a8030712; published 2015-08-26. Authors: Lincheng Zhou, Xiangli Li, Huigang Xu and Peiyi Zhu.

<![CDATA[Algorithms, Vol. 8, Pages 697-711: Comparative Study of DE, PSO and GA for Position Domain PID Controller Tuning]]>
http://www.mdpi.com/1999-4893/8/3/697
Gain tuning is very important for obtaining good performance from a given controller. Contour tracking performance is mainly determined by the selected control gains of a position domain PID controller. In this paper, three popular evolutionary algorithms are utilized to optimize the gains of a position domain PID controller in order to improve the contour tracking performance of robotic manipulators. Differential Evolution (DE), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO) are used to determine the optimal gains of the position domain PID controller, and three distinct fitness functions are used to quantify the contour tracking performance of each solution set. Simulation results show that DE achieves the highest performance indexes for both linear and nonlinear contour tracking, while PSO is quite efficient for linear contour tracking. Both algorithms performed consistently better than GA, which exhibited premature convergence in all cases.
Algorithms 2015, 8(3), 697-711; Article; doi: 10.3390/a8030697; published 2015-08-21. Authors: Puren Ouyang and Vangjel Pano.

<![CDATA[Algorithms, Vol. 8, Pages 680-696: Network Community Detection on Metric Space]]>
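The DE used in gain-tuning comparisons like the PID study above is commonly the DE/rand/1/bin scheme. A minimal sketch of one generation follows; the scale factor `f` and crossover rate `cr` are illustrative values, and the paper's exact variant and fitness functions are not reproduced:

```python
import random

def de_step(pop, fitness, f=0.8, cr=0.9):
    """One DE/rand/1/bin generation: rand/1 mutation, binomial crossover,
    then greedy one-to-one selection (minimization)."""
    n, dim = len(pop), len(pop[0])
    new_pop = []
    for i in range(n):
        # pick three distinct individuals, none equal to the target i
        a, b, c = random.sample([j for j in range(n) if j != i], 3)
        jrand = random.randrange(dim)  # at least one trial gene from mutant
        trial = [pop[a][d] + f * (pop[b][d] - pop[c][d])
                 if (random.random() < cr or d == jrand) else pop[i][d]
                 for d in range(dim)]
        new_pop.append(trial if fitness(trial) <= fitness(pop[i]) else pop[i])
    return new_pop
```

Greedy selection guarantees the best objective value in the population never worsens from one generation to the next.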
http://www.mdpi.com/1999-4893/8/3/680
Community detection in complex networks has attracted much interest in recent years. In general, a community detection algorithm chooses an objective function, captures the communities of the network by optimizing that objective, and then uses various heuristics to solve the optimization problem and extract the communities of interest for the user. In this article, we demonstrate a procedure for transforming a graph into points of a metric space and develop community detection methods based on a metric defined for pairs of points. We also study and analyze the resulting community structure of the network. The results obtained with our approach are very competitive with most well-known algorithms in the literature, as demonstrated on a large collection of datasets. Moreover, our algorithm takes considerably less time than other methods, which corroborates the theoretical findings.
Algorithms 2015, 8(3), 680-696; Article; doi: 10.3390/a8030680; published 2015-08-21. Authors: Suman Saha and Satya Ghrera.

<![CDATA[Algorithms, Vol. 8, Pages 669-679: Expanding the Applicability of a Third Order Newton-Type Method Free of Bilinear Operators]]>
http://www.mdpi.com/1999-4893/8/3/669
This paper is devoted to the semilocal convergence, using centered hypotheses, of a third order Newton-type method in a Banach space setting. The method is free of bilinear operators and therefore attractive for solving systems of equations. Without imposing any type of Fréchet differentiability on the operator, a variant using only divided differences is also presented and analyzed.
Algorithms 2015, 8(3), 669-679; Article; doi: 10.3390/a8030669; published 2015-08-21. Authors: Sergio Amat, Sonia Busquier, Concepción Bermúdez and Ángel Magreñán.

<![CDATA[Algorithms, Vol. 8, Pages 656-668: Fifth-Order Iterative Method for Solving Multiple Roots of the Highest Multiplicity of Nonlinear Equation]]>
http://www.mdpi.com/1999-4893/8/3/656
A three-step iterative method with fifth-order convergence is presented as a new modification of Newton's method. The method is for finding multiple roots of a nonlinear equation with unknown multiplicity m, where m is the highest multiplicity. Its order of convergence is analyzed and proved. Results for some numerical examples show the efficiency of the new method.
Algorithms 2015, 8(3), 656-668; Article; doi: 10.3390/a8030656; published 2015-08-20. Authors: Juan Liang, Xiaowu Li, Zhinan Wu, Mingsheng Zhang, Lin Wang and Feng Pan.

<![CDATA[Algorithms, Vol. 8, Pages 645-655: Local Convergence of an Optimal Eighth Order Method under Weak Conditions]]>
http://www.mdpi.com/1999-4893/8/3/645
We study the local convergence of an eighth order Newton-like method for approximating a locally-unique solution of a nonlinear equation. Earlier studies, such as Chen et al. (2015), show convergence under hypotheses on the seventh derivative or even higher, although only the first derivative and the divided difference appear in these methods. In this study, convergence is shown under hypotheses on the first derivative only. Hence, the applicability of the method is expanded. Finally, numerical examples are provided to show that our results apply to solve equations in cases where earlier results do not apply.
Algorithms 2015, 8(3), 645-655; Article; doi: 10.3390/a8030645; published 2015-08-19. Authors: Ioannis Argyros, Ramandeep Behl and S.S. Motsa.

<![CDATA[Algorithms, Vol. 8, Pages 632-644: Data Fusion Modeling for an RT3102 and Dewetron System Application in Hybrid Vehicle Stability Testing]]>
http://www.mdpi.com/1999-4893/8/3/632
More and more hybrid electric vehicles are on the road, since they offer advantages such as energy savings and better active safety performance. Hybrid vehicles have two or more power drive systems and frequently switch operating conditions, so stability control is very important. In this work, a two-stage Kalman filtering method is used to fuse data in hybrid vehicle stability testing. First, the RT3102 navigation system and the Dewetron system are introduced. Second, a data fusion model based on the Kalman filter is proposed. Then, this model is simulated and tested on a sample vehicle using CarSim and Simulink software. The results demonstrate the merits of this model.
Algorithms 2015, 8(3), 632-644; Article; doi: 10.3390/a8030632; published 2015-08-12. Authors: Zhibin Miao and Hongtian Zhang.

<![CDATA[Algorithms, Vol. 8, Pages 621-631: One-Bit Quantization and Distributed Detection with an Unknown Scale Parameter]]>
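The data fusion model in the vehicle-stability study above builds on the Kalman filter's predict/update cycle. A minimal scalar sketch under a random-walk state model follows; the noise variances `q` and `r` are illustrative, not the paper's two-stage design:

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: predict (random-walk state, so the
    prediction is the previous estimate with grown uncertainty), then
    update with the measurement innovation."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows by process noise
        k = p / (p + r)          # Kalman gain balances prior vs. measurement
        x = x + k * (z - x)      # update state with innovation
        p = (1 - k) * p          # posterior covariance shrinks
        estimates.append(x)
    return estimates
```

Fusing two sensor streams, as in the RT3102/Dewetron setup, generalizes this recursion to vector states with per-sensor measurement models.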
http://www.mdpi.com/1999-4893/8/3/621
We examine a distributed detection problem in a wireless sensor network, where sensor nodes collaborate to detect a Gaussian signal with an unknown change of power, i.e., a scale parameter. Due to power/bandwidth constraints, we consider the case where each sensor quantizes its observation into a single binary digit. The binary data are then transmitted through error-prone wireless links to a fusion center, where a generalized likelihood ratio test (GLRT) detector is employed to make a global decision. We study the design of a binary quantizer based on an asymptotic analysis of the GLRT. Interestingly, the quantization threshold of the quantizer is independent of the unknown scale parameter. Numerical results are included to illustrate the performance of the proposed quantizer and GLRT in binary symmetric channels (BSCs).
Algorithms 2015, 8(3), 621-631; Article; doi: 10.3390/a8030621; published 2015-08-11. Authors: Fei Gao, Lili Guo, Hongbin Li and Jun Fang.

<![CDATA[Algorithms, Vol. 8, Pages 590-620: An Overview of a Class of Clock Synchronization Algorithms for Wireless Sensor Networks: A Statistical Signal Processing Perspective]]>
http://www.mdpi.com/1999-4893/8/3/590
Recently, wireless sensor networks (WSNs) have drawn great interest due to their outstanding monitoring and management potential in medical, environmental and industrial applications. Most applications that employ WSNs demand that all of the sensor nodes run on a common time scale, a requirement that highlights the importance of clock synchronization. The clock synchronization problem in WSNs is inherently related to parameter estimation, and the accuracy of clock synchronization algorithms depends essentially on the statistical properties of the parameter estimation algorithms. Recently, studies dedicated to the estimation of synchronization parameters, such as clock offset and skew, have begun to emerge in the literature. The aim of this article is to provide an overview of the state-of-the-art clock synchronization algorithms for WSNs from a statistical signal processing point of view. The article focuses on the key features of the class of clock synchronization algorithms that exploit the traditional two-way message (signal) exchange mechanism. Upon introducing the two-way message exchange mechanism, the main clock offset estimation algorithms for pairwise synchronization of sensor nodes are first reviewed, and their performance is compared. The class of fully-distributed clock offset estimation algorithms for network-wide synchronization is then surveyed. The paper concludes with a list of open research problems pertaining to clock synchronization of WSNs.
Algorithms 2015, 8(3), 590-620; Review; doi: 10.3390/a8030590; published 2015-08-06. Authors: Xu Wang, Daniel Jeske and Erchin Serpedin.

<![CDATA[Algorithms, Vol. 8, Pages 573-589: Robust Rank Reduction Algorithm with Iterative Parameter Optimization and Vector Perturbation]]>
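The two-way message exchange mechanism at the core of the algorithms surveyed above yields a classic clock offset estimator (the NTP-style form, assuming symmetric link delays; the surveyed estimators refine this under richer delay and skew models):

```python
def two_way_offset_estimate(t1, t2, t3, t4):
    """Classic two-way exchange offset estimator.
    t1: client send time, t2: server receive time,
    t3: server send time,  t4: client receive time.
    Assuming equal forward and reverse propagation delays, the delay
    cancels and the remaining term is the clock offset."""
    return ((t2 - t1) - (t4 - t3)) / 2.0
```

For example, a server clock 5 units ahead with a symmetric 2-unit link delay produces timestamps (0, 7, 8, 5), from which the estimator recovers the offset exactly.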
http://www.mdpi.com/1999-4893/8/3/573
In dynamic propagation environments, beamforming algorithms may suffer from strong interference, steering vector mismatches, low convergence speed and high computational complexity. Reduced-rank signal processing techniques provide a way to address these problems. This paper presents a low-complexity robust data-dependent dimensionality reduction algorithm based on iterative optimization with steering vector perturbation (IOVP) for reduced-rank beamforming and steering vector estimation. The proposed robust optimization procedure jointly adjusts the parameters of a rank reduction matrix and an adaptive beamformer. The optimized rank reduction matrix projects the received signal vector onto a subspace of lower dimension, and the beamformer/steering vector optimization is then performed in this reduced-dimension subspace. We devise efficient stochastic gradient and recursive least-squares algorithms for implementing the proposed robust IOVP design. The proposed robust IOVP beamforming algorithms achieve faster convergence and improved performance. Simulation results show that the proposed IOVP algorithms outperform some existing full-rank and reduced-rank algorithms of comparable complexity.
Algorithms 2015, 8(3), 573-589; Article; doi: 10.3390/a8030573; published 2015-08-05. Authors: Peng Li, Jiao Feng and Rodrigo de Lamare.

<![CDATA[Algorithms, Vol. 8, Pages 562-572: Modeling Documents with Event Model]]>
http://www.mdpi.com/1999-4893/8/3/562
Deep learning has recently made great breakthroughs in visual and speech processing, mainly because it mimics the hierarchical mode in which the brain processes images and speech. In the field of NLP, topic models are one of the important ways of modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose the Event Model, which is unsupervised and based on the language processing mechanisms of neurolinguistics, to model documents. In the Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in those events. The Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words based on deep learning. Dimensionality reduction represents a document as a low-dimensional vector via a linear model that is completely different from topic models. The Event Model achieves state-of-the-art results on document retrieval tasks.
Algorithms 2015, 8(3), 562-572; Article; doi: 10.3390/a8030562; published 2015-08-04. Authors: Longhui Wang, Guoguang Zhao and Donghong Sun.

<![CDATA[Algorithms, Vol. 8, Pages 552-561: Some Improvements to a Third Order Variant of Newton’s Method from Simpson’s Rule]]>
http://www.mdpi.com/1999-4893/8/3/552
In this paper, we present three improvements to a three-point third order variant of Newton's method derived from the Simpson rule. The first is a fifth order method using the same number of functional evaluations as the third order method, the second is a four-point 10th order method and the last is a five-point 20th order method. From a computational point of view, our methods require four evaluations (one function and three first derivatives) to reach fifth order, five evaluations (two functions and three derivatives) to reach 10th order and six evaluations (three functions and three derivatives) to reach 20th order. Hence, these methods have efficiency indexes of 1.495, 1.585 and 1.648, respectively, which are better than the efficiency index of 1.316 of the third order method. We test the methods through some numerical experiments, which show that the 20th order method is very efficient.
Algorithms 2015, 8(3), 552-561; Article; doi: 10.3390/a8030552; published 2015-07-29. Author: Diyashvir Babajee.

<![CDATA[Algorithms, Vol. 8, Pages 541-551: Target Detection Algorithm Based on Two Layers Human Visual System]]>
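The efficiency indexes quoted in the Simpson-rule paper above follow Ostrowski's definition p^(1/d), where p is the convergence order and d is the number of functional evaluations per iteration; the quoted figures can be checked directly:

```python
def efficiency_index(order, evals):
    """Ostrowski efficiency index p**(1/d) for a method of order p
    that uses d functional evaluations per iteration."""
    return order ** (1.0 / evals)

# Figures quoted in the abstract:
# third order with 4 evaluations -> 1.316
# fifth order with 4 evaluations -> 1.495
# 10th order with 5 evaluations  -> 1.585
# 20th order with 6 evaluations  -> 1.648
```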
http://www.mdpi.com/1999-4893/8/3/541
Robust detection of small targets at low signal-to-noise ratio (SNR) is very important in infrared search and track applications for self-defense or attack. Due to complex backgrounds, current algorithms have unsolved issues with the false alarm rate. In order to reduce the false alarm rate, an infrared small target detection algorithm based on saliency detection and a support vector machine is proposed. First, we detect salient regions that may contain targets with a phase spectrum of Fourier transform (PFT) approach. Then, target recognition is performed in the salient regions. Experimental results show that the proposed algorithm has the robustness and efficiency needed for real infrared small target detection applications.
Algorithms 2015, 8(3), 541-551; Article; doi: 10.3390/a8030541; published 2015-07-29. Authors: Zheng Cui, Jingli Yang, Shouda Jiang and Changan Wei.

<![CDATA[Algorithms, Vol. 8, Pages 529-540: A Parallel Search Strategy Based on Sparse Representation for Infrared Target Tracking]]>
http://www.mdpi.com/1999-4893/8/3/529
A parallel search strategy based on sparse representation (PS-L1 tracker) is proposed in the particle filter framework. To obtain the weights of the state particles, target templates are represented linearly with a dictionary of target candidates. Sparse constraints on the coefficients guarantee that only true target candidates are selected, and the nonnegative entries denote the associated weights of the efficient target states. The optimal target state can then be estimated as the linear combination of these weighted states. In this way, efficient target states are selected simultaneously from all the particles, which we call a parallel search strategy. Experimental results demonstrate the excellent performance of the proposed method on challenging infrared images.
Algorithms 2015, 8(3), 529-540; Article; doi: 10.3390/a8030529; published 2015-07-27. Authors: Zhen Shi, Chang'an Wei, Ping Fu and Shouda Jiang.

<![CDATA[Algorithms, Vol. 8, Pages 514-528: On the Accessibility of Newton’s Method under a Hölder Condition on the First Derivative]]>
http://www.mdpi.com/1999-4893/8/3/514
We show how the accessibility of Newton's method for approximating a solution of a nonlinear equation in Banach spaces can be improved when a center Hölder condition on the first derivative is used to prove its semilocal convergence.
Algorithms 2015, 8(3), 514-528; Article; doi: 10.3390/a8030514; published 2015-07-23. Authors: José Ezquerro and Miguel Hernández-Verón.

<![CDATA[Algorithms, Vol. 8, Pages 484-513: Multi-Feedback Interference Cancellation Algorithms for OFDM Systems over Doubly-Selective Channels]]>
http://www.mdpi.com/1999-4893/8/3/484
Orthogonal frequency-division multiplexing (OFDM) systems over rapidly time-varying channels may suffer from significant inter-carrier interference (ICI), which destroys the orthogonality between subcarriers and degrades detection performance. Without sufficient ICI suppression, OFDM systems usually experience an error floor. According to the approximate matched filter bound (AMFB), the error floor in a coded OFDM system is not irreducible. In this work, we introduce novel multiple-feedback matched filter (MBMF)-based ICI cancellation receivers. Based on the output of a novel MBMF scheme, the approach employs a multiple ICI cancellation strategy with or without signal-to-interference-plus-noise ratio (SINR) ordering. The developed schemes can significantly improve performance and remove the error floor with a negligible increase in complexity. Given the multiple cancellation approach, we compare the SINR performance of the MBMF outputs with that of single feedback and show that the SINR performance with multiple cancellation candidates is improved over that with a single one at practical SNR values. Additionally, for time-varying channels, we exploit the partial fast Fourier transform (PFFT) by splitting one OFDM symbol into multiple segments; the channel state is separately estimated by least-squares (LS) methods without inserting more pilots. Simulation results demonstrate the superiority of the proposed methods over serial and block equalizers and their robustness to Doppler effects compared to the conventional single-segment method.
Algorithms 2015, 8(3), 484-513; Article; doi: 10.3390/a8030484; published 2015-07-14. Authors: Peng Li, Min Chen, Li Li and Jiao Feng.

<![CDATA[Algorithms, Vol. 8, Pages 466-483: Conditional Random Fields for Pattern Recognition Applied to Structured Data]]>
http://www.mdpi.com/1999-4893/8/3/466
Pattern recognition uses measurements from an input domain, X, to predict their labels from an output domain, Y. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Therefore, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. This paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.
Algorithms 2015, 8(3), 466-483; Review; doi: 10.3390/a8030466; published 2015-07-14. Authors: Tom Burr and Alexei Skurikhin.

<![CDATA[Algorithms, Vol. 8, Pages 459-465: Solving the (n2 − 1)-Puzzle with 8/3 n3 Expected Moves]]>
http://www.mdpi.com/1999-4893/8/3/459
It is shown that the greedy algorithm for the \((n^2-1)\)-puzzle makes \(\tfrac{8}{3}n^3 + O(n^2)\) expected moves. This analysis is verified experimentally on 10,000 random instances each of the \((n^2-1)\)-puzzle for \(4 \leq n \leq 200\).
Algorithms 2015, 8(3), 459-465; Article; doi: 10.3390/a8030459; published 2015-07-10. Author: Ian Parberry.

<![CDATA[Algorithms, Vol. 8, Pages 435-458: A Benchmarking Algorithm to Determine Minimum Aggregation Delay for Data Gathering Trees and an Analysis of the Diameter-Aggregation Delay Tradeoff]]>
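The expected move count from the greedy-puzzle paper above is (8/3)n³ + O(n²); its leading term is easy to evaluate (the lower-order O(n²) term is omitted here):

```python
def expected_greedy_moves(n):
    """Leading-order expected move count (8/3) * n**3 for the greedy
    (n^2 - 1)-puzzle solver analyzed in the paper; the O(n^2) correction
    is omitted, so this is only the asymptotic leading term."""
    return 8.0 * n ** 3 / 3.0
```

For the familiar 15-puzzle (n = 4) the leading term alone already predicts roughly 171 expected moves, and at n = 200 (the largest size tested in the paper) it exceeds 21 million.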
http://www.mdpi.com/1999-4893/8/3/435
Aggregation delay is the minimum number of time slots required to aggregate data along the edges of a data gathering tree (DG tree) spanning all the nodes in a wireless sensor network (WSN). We propose a benchmarking algorithm to determine the minimum possible aggregation delay for DG trees in a WSN. We assume the availability of a sufficient number of unique CDMA (Code Division Multiple Access) codes so that intermediate nodes can simultaneously aggregate data from their child nodes if the latter are ready with the data. An intermediate node must still schedule non-overlapping time slots to sequentially aggregate data from its own child nodes (one time slot per child node). We show that the minimum aggregation delay for a DG tree depends on the underlying design choice (bottleneck node-weight based or bottleneck link-weight based) behind its construction. We observe that bottleneck node-weight based DG trees have a smaller diameter and a larger number of child nodes per intermediate node, whereas bottleneck link-weight based DG trees have a larger diameter and a much smaller number of child nodes per intermediate node. As a result, we observe a complex diameter-aggregation delay tradeoff for data gathering trees in WSNs.
Algorithms 2015, 8(3), 435-458; Article; doi: 10.3390/a8030435; published 2015-07-10. Author: Natarajan Meghanathan.

<![CDATA[Algorithms, Vol. 8, Pages 424-434: Multi-Objective Design Optimization of an Over-Constrained Flexure-Based Amplifier]]>
http://www.mdpi.com/1999-4893/8/3/424
The design optimization of a micro-manipulator based on analytical models, aimed at enhancing its micro-scale performance, is investigated in this paper. Using the established uncanonical linear homogeneous equations, a quasi-static analytical model of the micro-manipulator is built, and the theoretical results are verified by FEA simulations. To provide a theoretical basis for using the micro-manipulator in high-precision engineering applications, this paper also investigates its modal properties based on the analytical model. A finite element model with multipoint constraint equations is built, and its results match the simulation well. A parametric study shows that the mutual influences among the design objectives are complicated. Consequently, a multi-objective optimization based on the derived analytical models is carried out to find the optimal solutions for the manipulator. The inner relationships among the design objectives during the optimization process are also discussed.
Algorithms 2015, 8(3), 424-434; Article; doi: 10.3390/a8030424; published 2015-07-08. Authors: Yuan Ni, Zongquan Deng, Junbao Li, Xiang Wu and Long Li.

<![CDATA[Algorithms, Vol. 8, Pages 415-423: A Quartically Convergent Jarratt-Type Method for Nonlinear System of Equations]]>
http://www.mdpi.com/1999-4893/8/3/415
In this work, we propose a new fourth-order Jarratt-type method for solving systems of nonlinear equations. The local convergence order of the method is proven analytically. Finally, we validate our results via some numerical experiments, including an application to the Chandrasekhar integral equations.
Algorithms 2015, 8(3), 415-423; Article; doi: 10.3390/a8030415; published 2015-07-06. Authors: Mohammad Ghorbanzadeh and Fazlollah Soleymani.

<![CDATA[Algorithms, Vol. 8, Pages 407-414: Implementation of a Parallel Algorithm Based on a Spark Cloud Computing Platform]]>
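For context on the Jarratt-type paper above: the classical scalar Jarratt scheme, on which fourth-order Jarratt-type methods build, can be sketched as follows (this is the textbook scalar method, not the paper's new variant for systems):

```python
def jarratt(f, fprime, x, tol=1e-12, max_iter=20):
    """Classical fourth-order Jarratt iteration for a scalar equation f(x)=0.
    Each step uses one f evaluation and two f' evaluations:
        y     = x - (2/3) f(x)/f'(x)
        x_new = x - (1/2) * (3 f'(y) + f'(x)) / (3 f'(y) - f'(x)) * f(x)/f'(x)
    """
    for _ in range(max_iter):
        u = f(x) / fprime(x)
        y = x - (2.0 / 3.0) * u
        fy, fx = fprime(y), fprime(x)
        x_new = x - 0.5 * ((3.0 * fy + fx) / (3.0 * fy - fx)) * u
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Starting from x = 1.5 on f(x) = x² − 2, a single Jarratt step already agrees with √2 to about six digits, consistent with fourth-order convergence.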
http://www.mdpi.com/1999-4893/8/3/407
Population-based algorithms such as the ant colony algorithm take a long time when solving large-scale problems. In this paper, the MAX-MIN Ant System (MMAS) algorithm is parallelized to solve the Traveling Salesman Problem (TSP) on a Spark cloud computing platform. We combine MMAS with Spark MapReduce to execute path construction and the pheromone operation in a distributed computer cluster. To improve the precision of the solution, the local optimization strategy 2-opt is adopted in MMAS. The experimental results show that Spark has a significant accelerating effect on the ant colony algorithm when the TSP city scale or the number of ants is relatively large.
Algorithms 2015, 8(3), 407-414; Article; doi: 10.3390/a8030407; published 2015-07-03. Authors: Longhui Wang, Yong Wang and Yudong Xie.

<![CDATA[Algorithms, Vol. 8, Pages 395-406: Algorithms for Computerized Fetal Heart Rate Diagnosis with Direct Reporting]]>
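The MMAS parallelized in the Spark paper above differs from the basic ant system chiefly in its bounded pheromone trails: only the best tour deposits pheromone, and every trail is clamped into [tau_min, tau_max]. A minimal sketch of that update (evaporation rate and trail bounds are illustrative defaults, not the paper's settings):

```python
def mmas_pheromone_update(tau, best_tour, best_len,
                          rho=0.98, tau_min=0.01, tau_max=1.0):
    """MAX-MIN Ant System pheromone update: evaporate all trails, deposit
    1/best_len on the edges of the best tour only, then clamp every trail
    into [tau_min, tau_max]. tau is a dict-of-dicts of edge trails."""
    for i in tau:
        for j in tau[i]:
            tau[i][j] *= rho                      # evaporation on every edge
    deposit = 1.0 / best_len
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += deposit                      # reinforce best-tour edges
    for i in tau:
        for j in tau[i]:
            tau[i][j] = min(tau_max, max(tau_min, tau[i][j]))
    return tau
```

In the Spark setting, path construction is the map phase over ants and this update is applied after collecting the best tour of the generation.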
http://www.mdpi.com/1999-4893/8/3/395
Aims: Since visual pattern classification of fetal heart rate (FHR) was subjective and increased interobserver differences, objective FHR analysis was achieved with computerized FHR diagnosis. Methods: The computer algorithm was composed of an experts' knowledge system, including FHR analysis and FHR score calculation, and an objective artificial neural network system with software. In addition, the FHR frequency spectrum was studied to detect ominous sinusoidal FHR and the loss of baseline variability related to fetal brain damage. The algorithms were installed in a centralized, computerized, automatic FHR monitoring system, which reported the diagnosis rapidly and directly to the attending doctor. Results: Clinically, perinatal mortality decreased significantly and no cerebral palsy developed after introduction of the centralized system. Conclusion: The automatic multichannel FHR monitoring system improved the monitoring, increased the objectivity of FHR diagnosis and improved clinical results.
Algorithms 2015, 8(3), 395-406; Review; doi: 10.3390/a8030395; published 2015-06-29. Authors: Kazuo Maeda, Yasuaki Noguchi, Masaji Utsu and Takashi Nagassawa.

<![CDATA[Algorithms, Vol. 8, Pages 380-394: Improving CLOPE’s Profit Value and Stability with an Optimized Agglomerative Approach]]>
http://www.mdpi.com/1999-4893/8/3/380
CLOPE (Clustering with sLOPE) is a simple and fast histogram-based clustering algorithm for categorical data. However, given the same data set and the same input parameter, the algorithm may produce different clustering results if the transactions are input in a different order. In this paper, a hierarchical clustering framework is proposed as an extension of CLOPE to generate stable and satisfactory clustering results based on an optimized agglomerative merge process. A new clustering profit is defined as the merge criterion, and a cluster graph structure is proposed to optimize the merge iteration process. Experiments conducted on two datasets both demonstrate that the agglomerative approach achieves stable clustering results with a better profit value, but takes much more time due to its higher complexity.
Algorithms 2015, 8(3), 380-394; Article; doi: 10.3390/a8030380; published 2015-06-26. Authors: Yefeng Li, Jiajin Le and Mei Wang.

<![CDATA[Algorithms, Vol. 8, Pages 366-379: Identification of Dual-Rate Sampled Hammerstein Systems with a Piecewise-Linear Nonlinearity Using the Key Variable Separation Technique]]>
http://www.mdpi.com/1999-4893/8/3/366
The identification difficulties for a dual-rate Hammerstein system lie in two aspects. First, the identification model of the system contains products of the parameters of the nonlinear block and the linear block, so a standard least squares method cannot be applied to the model directly; second, the traditional single-rate discrete-time Hammerstein model cannot be used as the identification model for the dual-rate sampled system. To solve these problems, by combining the polynomial transformation technique with the key variable separation technique, this paper converts the Hammerstein system into a dual-rate linear regression model in all parameters (a linear-in-parameter model) and proposes a recursive least squares algorithm to estimate the parameters of the dual-rate system. The simulation results verify the effectiveness of the proposed algorithm.
Algorithms 2015, 8(3), 366-379; Article; doi: 10.3390/a8030366; published 2015-06-24. Authors: Ying-Ying Wang, Xiang-Dong Wang and Dong-Qing Wang.

<![CDATA[Algorithms, Vol. 8, Pages 336-365: MAKHA—A New Hybrid Swarm Intelligence Global Optimization Algorithm]]>
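Once a system is in linear-in-parameter form y_t = phi_t · theta + noise, as the Hammerstein paper above derives via key variable separation, the recursive least squares recursion applies directly. A generic sketch (standard RLS with a forgetting factor; the paper's exact dual-rate regressor construction is not reproduced):

```python
def rls_identify(data, dim, lam=1.0, p0=1e6):
    """Standard recursive least squares for y_t = phi_t . theta + noise.
    data: iterable of (phi, y) pairs; lam: forgetting factor;
    p0: large initial covariance (weak prior on theta)."""
    theta = [0.0] * dim
    P = [[p0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for phi, y in data:
        Pphi = [sum(P[i][j] * phi[j] for j in range(dim)) for i in range(dim)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(dim))
        K = [v / denom for v in Pphi]                        # gain vector
        err = y - sum(phi[i] * theta[i] for i in range(dim)) # prediction error
        theta = [theta[i] + K[i] * err for i in range(dim)]
        # P stays symmetric, so phi^T P equals (P phi)^T
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(dim)]
             for i in range(dim)]
    return theta
```

On noise-free data the recursion recovers the true parameter vector up to a tiny regularization bias of order 1/p0.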
http://www.mdpi.com/1999-4893/8/2/336
The search for efficient and reliable bio-inspired optimization methods continues to be an active topic of research due to the wide applicability of the developed methods. In this study, we developed a reliable and efficient optimization method via the hybridization of two bio-inspired swarm intelligence optimization algorithms, namely the Monkey Algorithm (MA) and the Krill Herd Algorithm (KHA). The hybridization makes use of the efficient steps of each of the two original algorithms and provides a better balance between the exploration/diversification steps and the exploitation/intensification steps. The new hybrid algorithm, MAKHA, was rigorously tested on 27 benchmark problems and its results were compared with those of the two original algorithms. MAKHA proved to be considerably more reliable and more efficient on the tested problems.
Algorithms 2015, 8(2), 336-365; Article; doi: 10.3390/a8020336; published 2015-06-19. Authors: Ahmed Khalil, Seif-Eddeen Fateen and Adrián Bonilla-Petriciolet.

<![CDATA[Algorithms, Vol. 8, Pages 321-335: Time Domain Simulation of Sound Waves Using Smoothed Particle Hydrodynamics Algorithm with Artificial Viscosity]]>
http://www.mdpi.com/1999-4893/8/2/321
Smoothed particle hydrodynamics (SPH), as a Lagrangian, meshfree method, is expected to be useful for solving acoustic problems, such as combustion noise and bubble acoustics, and has gradually been used in sound wave computation. However, unphysical oscillations in sound wave simulations cannot be ignored. In this paper, an artificial viscosity term is added to the standard SPH algorithm used for solving linearized acoustic wave equations. SPH algorithms with and without artificial viscosity are both built to compute sound propagation and interference in the time domain. The effects of the smoothing kernel function, particle spacing and Courant number on the SPH algorithms for sound waves are then discussed. Comparing SPH simulation results with theoretical solutions shows that the SPH algorithm with the added artificial viscosity term attains good agreement with the theoretical solution by effectively reducing unphysical oscillations. In addition, suitable computational parameters for SPH algorithms are proposed through an analysis of the sound pressure errors in simulating sound waves.
Algorithms 2015, 8(2), 321-335; Article; doi: 10.3390/a8020321; published 2015-06-17. Authors: Xu Li, Tao Zhang and Yong Zhang.

<![CDATA[Algorithms, Vol. 8, Pages 309-320: An Optimal Eighth-Order Derivative-Free Family of Potra-Pták’s Method]]>
http://www.mdpi.com/1999-4893/8/2/309
In this paper, we present a new three-step derivative-free family based on Potra-Pták’s method for solving nonlinear equations numerically. In terms of computational cost, each member of the proposed family requires only four functional evaluations per full iteration to achieve optimal eighth-order convergence. Further, computational results demonstrate that the proposed methods are highly efficient compared with many well-known methods. Algorithms 2015, 8(2), 309-320 (Article); doi: 10.3390/a8020309; published 15 June 2015. Authors: Munish Kansal, Vinay Kanwar, Saurabh Bhatia.<![CDATA[Algorithms, Vol. 8, Pages 292-308: Training Artificial Neural Networks by a Hybrid PSO-CS Algorithm]]>
http://www.mdpi.com/1999-4893/8/2/292
Presenting a satisfactory and efficient training algorithm for artificial neural networks (ANN) has been a challenging task in the supervised learning area. Particle swarm optimization (PSO) is one of the most widely used algorithms due to its simplicity of implementation and fast convergence speed. On the other hand, the Cuckoo Search (CS) algorithm has been proven to have a good ability for finding the global optimum; however, it has a slow convergence rate. In this study, a hybrid algorithm based on PSO and CS is proposed to make use of the advantages of both algorithms. The proposed hybrid algorithm is employed as a new training method for feedforward neural networks (FNNs). To investigate the performance of the proposed algorithm, two benchmark problems are used and the results are compared with those obtained from FNNs trained by the original PSO and CS algorithms. The experimental results show that the proposed hybrid algorithm outperforms both PSO and CS in training FNNs. Algorithms 2015, 8(2), 292-308 (Article); doi: 10.3390/a8020292; published 11 June 2015. Authors: Jeng-Fung Chen, Quang Do, Ho-Nien Hsieh.<![CDATA[Algorithms, Vol. 8, Pages 280-291: Model Equivalence-Based Identification Algorithm for Equation-Error Systems with Colored Noise]]>
http://www.mdpi.com/1999-4893/8/2/280
For equation-error autoregressive (EEAR) systems, this paper proposes an identification algorithm by means of a model equivalence transformation. The basic idea is to eliminate the autoregressive term in the model using the model transformation, to estimate the parameters of the converted system, and then to compute the parameter estimates of the original system by comparing coefficients and applying the model equivalence principle. For comparison, the recursive generalized least squares algorithm is also given briefly. The simulation results verify that the proposed algorithm is effective and can produce more accurate parameter estimates. Algorithms 2015, 8(2), 280-291 (Article); doi: 10.3390/a8020280; published 2 June 2015. Authors: Dandan Meng, Feng Ding.<![CDATA[Algorithms, Vol. 8, Pages 271-279: Dynamics and Fractal Dimension of Steffensen-Type Methods]]>
http://www.mdpi.com/1999-4893/8/2/271
In this paper, the dynamical behavior of different optimal iterative schemes of increasing order for solving nonlinear equations is studied. The tendency of the complexity of the Julia set is analyzed and related to the fractal dimension. In fact, this fractal dimension can be shown to be a powerful tool to compare iterative schemes that estimate the solution of a nonlinear equation. Based on the box-counting algorithm, several iterative derivative-free methods of different convergence orders are compared. Algorithms 2015, 8(2), 271-279 (Article); doi: 10.3390/a8020271; published 1 June 2015. Authors: Francisco Chicharro, Alicia Cordero, Juan Torregrosa.<![CDATA[Algorithms, Vol. 8, Pages 248-270: On String Matching with Mismatches]]>
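The box-counting idea the abstract relies on can be illustrated with a minimal sketch (our own illustration, not the authors' code): count the grid boxes occupied by a 2D point set at several resolutions and fit the slope of log N(s) against log s.

```python
import math

def box_counting_dimension(points, scales=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a 2D point set in [0,1)^2:
    count occupied boxes N(s) at grid resolution s, then take the
    least-squares slope of log N(s) versus log s."""
    logs, logn = [], []
    for s in scales:
        boxes = {(int(x * s), int(y * s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    k = len(scales)
    mx, my = sum(logs) / k, sum(logn) / k
    return (sum((a - mx) * (b - my) for a, b in zip(logs, logn))
            / sum((a - mx) ** 2 for a in logs))
```

For a straight line segment the estimate is 1, and for a space-filling set it approaches 2; Julia-set boundaries fall in between.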
http://www.mdpi.com/1999-4893/8/2/248
In this paper, we consider several variants of the pattern matching with mismatches problem. In particular, given a text \(T=t_1 t_2\cdots t_n\) and a pattern \(P=p_1p_2\cdots p_m\), we investigate the following problems: (1) pattern matching with mismatches: for every \(i, 1\leq i \leq n-m+1\), output the distance between \(P\) and \(t_i t_{i+1}\cdots t_{i+m-1}\); and (2) pattern matching with \(k\) mismatches: output those positions \(i\) where the distance between \(P\) and \(t_i t_{i+1}\cdots t_{i+m-1}\) is less than a given threshold \(k\). The distance metric used is the Hamming distance. We present some novel algorithms and techniques for solving these problems. We offer deterministic, randomized and approximation algorithms. We consider variants of these problems where there could be wildcards in either the text or the pattern or both. We also present an experimental evaluation of these algorithms. The source code is available at http://www.engr.uconn.edu/\(\sim\)man09004/kmis.zip. Algorithms 2015, 8(2), 248-270 (Article); doi: 10.3390/a8020248; published 26 May 2015. Authors: Marius Nicolae, Sanguthevar Rajasekaran.<![CDATA[Algorithms, Vol. 8, Pages 234-247: An Optimization Clustering Algorithm Based on Texture Feature Fusion for Color Image Segmentation]]>
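The two problems can be stated concretely with a naive O(nm) reference implementation (a sketch for illustration only; the paper's contribution is algorithms faster than this baseline):

```python
def hamming(p, s):
    """Hamming distance between two equal-length strings."""
    return sum(a != b for a, b in zip(p, s))

def matches_with_k_mismatches(text, pattern, k):
    """Problem (2): positions i (1-based, following the paper's
    notation) where the window of `text` starting at i is within
    strictly fewer than k mismatches of `pattern`."""
    n, m = len(text), len(pattern)
    return [i + 1 for i in range(n - m + 1)
            if hamming(pattern, text[i:i + m]) < k]
```

Problem (1) is the same scan but reporting `hamming` for every window instead of filtering by the threshold.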
http://www.mdpi.com/1999-4893/8/2/234
We introduce a multi-feature optimization clustering algorithm for color image segmentation. The local binary pattern, the mean of the min-max difference, and the color components are combined as feature vectors to describe the magnitude change of grey value and the contrastive information of neighbor pixels. In the clustering stage, the algorithm obtains the initial cluster centers and avoids becoming trapped in local optima by adding the mutation operator of the genetic algorithm to particle swarm optimization. Compared with well-known methods, the proposed method has an overall better segmentation performance and segments images more accurately, as evaluated by the misclassification ratio. Algorithms 2015, 8(2), 234-247 (Article); doi: 10.3390/a8020234; published 22 May 2015. Authors: Gaihua Wang, Yang Liu, Caiquan Xiong.<![CDATA[Algorithms, Vol. 8, Pages 224-233: Numerical Solution of Turbulence Problems by Solving Burgers’ Equation]]>
http://www.mdpi.com/1999-4893/8/2/224
In this work, we generate the numerical solutions of Burgers’ equation by applying the Crank–Nicolson method and different schemes for solving nonlinear systems, instead of using the Hopf-Cole transformation to reduce Burgers’ equation into the linear heat equation. The method is analyzed on two test problems in order to check its efficiency on different kinds of initial conditions. Numerical solutions as well as exact solutions for different values of viscosity are calculated, concluding that the numerical results are very close to the exact solution. Algorithms 2015, 8(2), 224-233 (Article); doi: 10.3390/a8020224; published 8 May 2015. Authors: Alicia Cordero, Antonio Franques, Juan Torregrosa.<![CDATA[Algorithms, Vol. 8, Pages 209-223: Pulmonary Nodule Detection from X-ray CT Images Based on Region Shape Analysis and Appearance-based Clustering]]>
http://www.mdpi.com/1999-4893/8/2/209
In this paper, we propose a detection method for pulmonary nodules in X-ray computed tomography (CT) scans based on three image filters and appearance-based k-means clustering. First, voxel values are suppressed in radial directions so as to eliminate extra regions in the volumes of interest (VOIs). Globular regions are enhanced by moment-of-inertia tensors where the voxel values in the VOIs are regarded as mass. Excessively enhanced voxels are reduced based on displacement between the VOI centers and the gravity points of the voxel values in the VOIs. Initial nodule candidates are determined by these filtering processes. False positives are reduced by, first, normalizing the directions of intensity distributions in the VOIs by rotating the VOIs based on the eigenvectors of the moment-of-inertia tensors, and then applying an appearance-based two-step k-means clustering technique to the rotated VOIs. The proposed method is applied to actual CT scans and experimental results are shown. Algorithms 2015, 8(2), 209-223 (Article); doi: 10.3390/a8020209; published 8 May 2015. Authors: Takanobu Yanagihara, Hotaka Takizawa.<![CDATA[Algorithms, Vol. 8, Pages 190-208: From Enumerating to Generating: A Linear Time Algorithm for Generating 2D Lattice Paths with a Given Number of Turns]]>
http://www.mdpi.com/1999-4893/8/2/190
We propose a linear time algorithm, called G2DLP, for generating 2D lattice L(n1, n2) paths, equivalent to two-item multiset permutations, with a given number of turns. The notion of a turn has three meanings: in the context of multiset permutations, it means that two consecutive elements of a permutation belong to two different items; in lattice path enumerations, it means that the path changes its direction, either from eastward to northward or from northward to eastward; in open shop scheduling, it means that we transfer a job from one type of machine to another. The strategy of G2DLP is divide-and-combine; the division is based on the enumeration results of a previous study and is achieved with the aid of an integer partition algorithm and a multiset permutation algorithm; the combination is accomplished by a concatenation algorithm that constructs the paths we require. The advantage of G2DLP is twofold. First, it is optimal in the sense that it directly generates all feasible paths without visiting an infeasible one. Second, it can generate all paths in any specified order of turns, for example, a decreasing order or an increasing order. In practice, two applications, scheduling and cryptography, are discussed. Algorithms 2015, 8(2), 190-208 (Article); doi: 10.3390/a8020190; published 8 May 2015. Author: Ting Kuo.<![CDATA[Algorithms, Vol. 8, Pages 177-189: An Adaptive Spectral Clustering Algorithm Based on the Importance of Shared Nearest Neighbors]]>
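The notion of a lattice path with a given number of turns can be made concrete by a brute-force reference enumerator (exhaustive and exponential, unlike the linear-time G2DLP; our own illustration, not the paper's algorithm):

```python
from itertools import combinations

def paths_with_turns(n1, n2, turns):
    """All lattice paths with n1 east ('E') and n2 north ('N') steps
    that change direction exactly `turns` times, found by checking
    every placement of the E steps."""
    total = n1 + n2
    result = []
    for east_pos in combinations(range(total), n1):
        path = ['N'] * total
        for i in east_pos:
            path[i] = 'E'
        # a turn is any position where the step direction changes
        if sum(path[i] != path[i + 1] for i in range(total - 1)) == turns:
            result.append(''.join(path))
    return result
```

For example, with two east steps and one north step, only ENE has two turns, while EEN and NEE each have one.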
http://www.mdpi.com/1999-4893/8/2/177
The construction of a similarity matrix is one significant step for the spectral clustering algorithm, and the Gaussian kernel function is one of the most common measures for constructing the similarity matrix. However, with a fixed scaling parameter, the similarity between two data points is not adaptive and appropriate for multi-scale datasets. In this paper, through quantitating the value of the importance for each vertex of the similarity graph, the Gaussian kernel function is scaled, and an adaptive Gaussian kernel similarity measure is proposed. Then, an adaptive spectral clustering algorithm is obtained based on the importance of shared nearest neighbors. The idea is that the greater the importance of the shared neighbors between two vertices, the more likely it is that these two vertices belong to the same cluster; and the importance value of the shared neighbors is obtained with an iterative method, which considers both the local structural information and the distance similarity information, so as to improve the algorithm’s performance. Experimental results on different datasets show that our spectral clustering algorithm outperforms the other spectral clustering algorithms, such as the self-tuning spectral clustering and the adaptive spectral clustering based on shared nearest neighbors, in clustering accuracy on most datasets. Algorithms 2015, 8(2), 177-189 (Article); doi: 10.3390/a8020177; published 7 May 2015. Authors: Xiaoqi He, Sheng Zhang, Yangguang Liu.<![CDATA[Algorithms, Vol. 8, Pages 157-176: Multiobjective Cloud Particle Optimization Algorithm Based on Decomposition]]>
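The locally scaled Gaussian kernel that this line of work refines can be sketched as follows. This shows the self-tuning baseline, where each point's scale is its distance to the k-th nearest neighbour, not the paper's importance-based scaling:

```python
import math

def local_scales(points, k=7):
    """sigma_i = distance from point i to its k-th nearest neighbour
    (k=7 is the value commonly used in self-tuning spectral clustering)."""
    sigmas = []
    for i, p in enumerate(points):
        ds = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        sigmas.append(ds[min(k, len(ds)) - 1])
    return sigmas

def similarity_matrix(points, k=7):
    """W[i][j] = exp(-d(i,j)^2 / (sigma_i * sigma_j)), zero diagonal.
    With local scales, similarities adapt to the density around each
    point instead of using one fixed scaling parameter."""
    s = local_scales(points, k)
    n = len(points)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d = math.dist(points[i], points[j])
                W[i][j] = math.exp(-d * d / (s[i] * s[j]))
    return W
```

The spectral clustering step itself (eigenvectors of the graph Laplacian built from W) is unchanged; only the similarity measure is adapted.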
http://www.mdpi.com/1999-4893/8/2/157
The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has received attention from researchers in recent years. This paper presents a new multiobjective algorithm based on decomposition and the cloud model, called multiobjective decomposition evolutionary algorithm based on Cloud Particle Differential Evolution (MOEA/D-CPDE). In the proposed method, the best solution found so far acts as a seed in each generation and evolves two individuals by a cloud generator. A new individual is produced by updating the current individual with the position vector difference of these two individuals. The performance of the proposed algorithm is evaluated on 16 well-known multi-objective problems. The experimental results indicate that MOEA/D-CPDE is competitive. Algorithms 2015, 8(2), 157-176 (Article); doi: 10.3390/a8020157; published 23 April 2015. Authors: Wei Li, Lei Wang, Qiaoyong Jiang, Xinhong Hei, Bin Wang.<![CDATA[Algorithms, Vol. 8, Pages 144-156: The Auxiliary Problem Principle with Self-Adaptive Penalty Parameter for Multi-Area Economic Dispatch Problem]]>
http://www.mdpi.com/1999-4893/8/2/144
The auxiliary problem principle is a powerful tool for solving the multi-area economic dispatch problem. One of the main drawbacks of the auxiliary problem principle method is that its convergence performance depends on the selection of the penalty parameter. In this paper, we propose a self-adaptive strategy to adjust the penalty parameter based on the iterative information; the proposed approach is verified on two given test systems. The corresponding simulation results demonstrate that the proposed self-adaptive auxiliary problem principle iterative scheme is robust in terms of the selection of the penalty parameter and has a better convergence rate compared with the traditional auxiliary problem principle method. Algorithms 2015, 8(2), 144-156 (Article); doi: 10.3390/a8020144; published 22 April 2015. Authors: Yaming Ren, Shumin Fei.<![CDATA[Algorithms, Vol. 8, Pages 128-143: A Clustering Algorithm based on Feature Weighting Fuzzy Compactness and Separation]]>
http://www.mdpi.com/1999-4893/8/2/128
Aiming at improving the well-known fuzzy compactness and separation algorithm (FCS), this paper proposes a new clustering algorithm based on feature weighting fuzzy compactness and separation (WFCS). In view of the contribution of features to clustering, the proposed algorithm introduces feature weighting into the objective function. We first formulate the membership and feature weighting, and analyze the membership of data points falling on the crisp boundary, then give the adjustment strategy. The proposed WFCS is validated on both simulated and real datasets. The experimental results demonstrate that the proposed WFCS has the characteristics of both hard clustering and fuzzy clustering, and outperforms many existing clustering algorithms with respect to three metrics: Rand Index, Xie-Beni Index and Within-Between (WB) Index. Algorithms 2015, 8(2), 128-143 (Article); doi: 10.3390/a8020128; published 13 April 2015. Authors: Yuan Zhou, Hong-fu Zuo, Jiao Feng.<![CDATA[Algorithms, Vol. 8, Pages 100-127: A Study on the Fuzzy-Logic-Based Solar Power MPPT Algorithms Using Different Fuzzy Input Variables]]>
http://www.mdpi.com/1999-4893/8/2/100
Maximum power point tracking (MPPT) is one of the key functions of the solar power management system in solar energy deployment. This paper investigates the design of fuzzy-logic-based solar power MPPT algorithms using different fuzzy input variables. Six fuzzy MPPT algorithms, based on different input variables, were considered in this study, namely (i) slope (of solar power-versus-solar voltage) and changes of the slope; (ii) slope and variation of the power; (iii) variation of power and variation of voltage; (iv) variation of power and variation of current; (v) sum of conductance and increment of the conductance; and (vi) sum of angles of arctangent of the conductance and arctangent of increment of the conductance. Algorithms (i)–(iv) have two input variables each while algorithms (v) and (vi) use a single input variable. The fuzzy logic MPPT function is deployed using a buck-boost power converter. This paper presents the design details and considerations of the fuzzy rules, as well as the advantages and disadvantages of each MPPT algorithm based upon photovoltaic (PV) cell properties. The range of the input variable of Algorithm (vi) is finite and the maximum power point condition is well defined in steady-state conditions; therefore, it can be used for multipurpose controller design. Computer simulations are conducted to verify the design. Algorithms 2015, 8(2), 100-127 (Article); doi: 10.3390/a8020100; published 8 April 2015. Authors: Jaw-Kuen Shiau, Yu-Chen Wei, Bo-Chih Chen.<![CDATA[Algorithms, Vol. 8, Pages 92-99: Statistical Properties of Protein-Protein Interfaces]]>
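A crisp (non-fuzzy) analogue of algorithm (i) is the classic perturb-and-observe step driven by the sign of the P-V slope; the paper replaces this hard decision with fuzzy rules over the slope and its change. The sketch below is a hypothetical illustration of the underlying idea, not the authors' controller:

```python
def mppt_step(v_prev, p_prev, v_now, p_now, dv=0.1):
    """One perturb-and-observe step. Positive dP/dV means the maximum
    power point lies at a higher voltage, so perturb upward; negative
    slope means perturb downward. `dv` is the (assumed) fixed step."""
    slope = (p_now - p_prev) / (v_now - v_prev) if v_now != v_prev else 0.0
    return v_now + dv if slope > 0 else v_now - dv
```

A fuzzy controller improves on this by mapping the slope magnitude (not just its sign) to a variable step size, reducing oscillation around the maximum power point.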
http://www.mdpi.com/1999-4893/8/2/92
The properties of 1172 protein complexes (downloaded from the Protein Data Bank (PDB)) have been studied based on the concept of circular variance as a buriedness indicator and the concept of mutual proximity as a parameter-free definition of contact. The propensities of residues to be inside the protein, on the surface, or to form contacts, as well as of residue pairs to form contacts, were calculated. In addition, the concept of circular variance has been used to compare the ruggedness and shape of the contact surface with the overall surface. Algorithms 2015, 8(2), 92-99 (Article); doi: 10.3390/a8020092; published 2 April 2015. Author: Mihaly Mezei.<![CDATA[Algorithms, Vol. 8, Pages 82-91: A Stable Gaussian Fitting Procedure for the Parameterization of Remote Sensed Thermal Images]]>
http://www.mdpi.com/1999-4893/8/2/82
An image analysis procedure based on a two dimensional Gaussian fitting is presented and applied to satellite maps describing the surface urban heat island (SUHI). The application of this fitting technique allows us to parameterize the SUHI pattern in order to better understand its intensity trend and also to perform quantitative comparisons among different images in time and space. The proposed procedure is computationally rapid and stable, executing an initial guess parameter estimation by a multiple regression before the iterative nonlinear fitting. The Gaussian fit was applied to both low and high resolution images (1 km and 30 m pixel size) and the results of the SUHI parameterization shown. As expected, a reduction of the correlation coefficient between the map values and the Gaussian surface was observed for the image with the higher spatial resolution due to the greater variability of the SUHI values. Since the fitting procedure provides a smoothed Gaussian surface, it has better performance when applied to low resolution images, even if the reliability of the SUHI pattern representation can be preserved also for high resolution images. Algorithms 2015, 8(2), 82-91 (Article); doi: 10.3390/a8020082; published 27 March 2015. Authors: Roberta Anniballe, Stefania Bonafoni.<![CDATA[Algorithms, Vol. 8, Pages 60-81: The Parameterized Complexity of the Rainbow Subgraph Problem]]>
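The initial-guess idea, taking logarithms so the Gaussian becomes a quadratic that ordinary least squares can fit, can be sketched in one dimension. The paper works with a two-dimensional Gaussian surface; this 1D version is our own illustration of the same trick:

```python
import math

def gaussian_initial_guess(xs, ys):
    """Fit ln(y) = a + b*x + c*x^2 by least squares (all ys assumed
    positive), then recover the Gaussian A*exp(-(x-mu)^2/(2*sigma^2)):
        mu = -b/(2c),  sigma^2 = -1/(2c),  A = exp(a - b^2/(4c))."""
    # build the 3x3 normal equations for columns (1, x, x^2)
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for x, y in zip(xs, ys):
        row = [1.0, x, x * x]
        ly = math.log(y)
        for i in range(3):
            t[i] += row[i] * ly
            for j in range(3):
                S[i][j] += row[i] * row[j]
    # solve via Gaussian elimination with partial pivoting
    M = [S[i] + [t[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    beta = [0.0] * 3
    for r in (2, 1, 0):
        beta[r] = (M[r][3] - sum(M[r][c] * beta[c]
                                 for c in range(r + 1, 3))) / M[r][r]
    a, b, c = beta
    mu = -b / (2 * c)
    sigma = math.sqrt(-1 / (2 * c))
    amp = math.exp(a - b * b / (4 * c))
    return amp, mu, sigma
```

On clean Gaussian data this recovers the parameters exactly; on noisy maps it only provides the starting point for the iterative nonlinear fit, which is precisely the role it plays in the procedure described above.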
http://www.mdpi.com/1999-4893/8/1/60
The NP-hard RAINBOW SUBGRAPH problem, motivated from bioinformatics, is to find in an edge-colored graph a subgraph that contains each edge color exactly once and has at most \(k\) vertices. We examine the parameterized complexity of RAINBOW SUBGRAPH for paths, trees, and general graphs. We show that RAINBOW SUBGRAPH is W[1]-hard with respect to the parameter \(k\) and also with respect to the dual parameter \(\ell:=n-k\) where \(n\) is the number of vertices. Hence, we examine parameter combinations and show, for example, a polynomial-size problem kernel for the combined parameter \(\ell\) and "maximum number of colors incident with any vertex". Additionally, we show APX-hardness even if the input graph is a properly edge-colored path in which every color occurs at most twice. Algorithms 2015, 8(1), 60-81 (Article); doi: 10.3390/a8010060; published 27 February 2015. Authors: Falk Hüffner, Christian Komusiewicz, Rolf Niedermeier, Martin Rötzschke.<![CDATA[Algorithms, Vol. 8, Pages 46-59: Recognition of Unipolar and Generalised Split Graphs]]>
http://www.mdpi.com/1999-4893/8/1/46
A graph is unipolar if it can be partitioned into a clique and a disjoint union of cliques, and a graph is a generalised split graph if it or its complement is unipolar. A unipolar partition of a graph can be used to find efficiently the clique number, the stability number, the chromatic number, and to solve other problems that are hard for general graphs. We present an O(n²)-time algorithm for recognition of n-vertex generalised split graphs, improving on previous O(n³)-time algorithms. Algorithms 2015, 8(1), 46-59 (Article); doi: 10.3390/a8010046; published 13 February 2015. Authors: Colin McDiarmid, Nikola Yolov.<![CDATA[Algorithms, Vol. 8, Pages 32-45: An Efficient SAR Image Segmentation Framework Using Transformed Nonlocal Mean and Multi-Objective Clustering in Kernel Space]]>
http://www.mdpi.com/1999-4893/8/1/32
Synthetic aperture radar (SAR) image segmentation usually involves two crucial issues: a suitable speckle noise removal technique and an effective image segmentation methodology. Here, an efficient SAR image segmentation method considering both of these aspects is presented. As for the first issue, the famous nonlocal mean (NLM) filter is introduced in this study to suppress the multiplicative speckle noise in SAR images. Furthermore, to achieve a higher denoising accuracy, the local neighboring pixels in the searching window are projected into a lower dimensional subspace by principal component analysis (PCA). Thus, the nonlocal mean filter is implemented in the subspace. Afterwards, a multi-objective clustering algorithm is proposed using the principles of artificial immune systems (AIS) and kernel-induced distance measures. Multi-objective clustering has been shown to discover data distributions with different characteristics, and the kernel methods improve its robustness to noise and outliers. Experiments demonstrate that the proposed method is able to partition the SAR image more robustly and accurately than the conventional approaches. Algorithms 2015, 8(1), 32-45 (Article); doi: 10.3390/a8010032; published 9 February 2015. Authors: Dongdong Yang, Hui Yang, Rong Fei.<![CDATA[Algorithms, Vol. 8, Pages 19-31: An Improved Shuffled Frog-Leaping Algorithm for Flexible Job Shop Scheduling Problem]]>
http://www.mdpi.com/1999-4893/8/1/19
The flexible job shop scheduling problem is a well-known combinatorial optimization problem. This paper proposes an improved shuffled frog-leaping algorithm to solve the flexible job shop scheduling problem. The algorithm possesses an adjustment sequence to design the strategy of local searching and an extremal optimization in information exchange. The computational results show that the proposed algorithm has a powerful search capability in solving the flexible job shop scheduling problem compared with other heuristic algorithms, such as the genetic algorithm, tabu search and ant colony optimization. Moreover, the results also show that the improved strategies could improve the performance of the algorithm effectively. Algorithms 2015, 8(1), 19-31 (Article); doi: 10.3390/a8010019; published 4 February 2015. Authors: Kong Lu, Li Ting, Wang Keming, Zhu Hanbing, Takano Makoto, Yu Bin.<![CDATA[Algorithms, Vol. 8, Pages 3-18: Networked Control System Time-Delay Compensation Based on Time-Delay Prediction and Improved Implicit GPC]]>
http://www.mdpi.com/1999-4893/8/1/3
The random time delay in a networked control system can deteriorate the control performance and stability of the system. In order to solve this problem, this paper puts forward a random time-delay compensation method for networked control systems based on time-delay prediction and improved implicit generalized predictive control (GPC). The least squares support vector machine is used to predict the future time delay of the network. Because the parameters of the least squares support vector machine time-delay prediction model are difficult to determine, the genetic algorithm is used to optimize the prediction parameters. Then, an improved implicit generalized predictive control method is adopted to compensate for the time delay. The simulation results show that the method in this paper has high prediction accuracy, a good compensation effect for the random time delay of the networked control system, and a small amount of on-line calculation, and that the output response and control stability of the system are improved. Algorithms 2015, 8(1), 3-18 (Article); doi: 10.3390/a8010003; published 19 January 2015. Authors: Zhong-Da Tian, Shu-Jiang Li, Yan-Hong Wang, Hong-Xia Yu.<![CDATA[Algorithms, Vol. 8, Pages 1-2: Acknowledgement to Reviewers of Algorithms in 2014]]>
http://www.mdpi.com/1999-4893/8/1/1
The editors of Algorithms would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2014: [...] Algorithms 2015, 8(1), 1-2 (Editorial); doi: 10.3390/a8010001; published 8 January 2015. Algorithms Editorial Office.<![CDATA[Algorithms, Vol. 7, Pages 685-702: Fusion of Multiple Pyroelectric Characteristics for Human Body Identification]]>
http://www.mdpi.com/1999-4893/7/4/685
Due to the instability and poor identification ability of a single pyroelectric infrared (PIR) detector for human target identification, this paper proposes a new approach that fuses the information collected from multiple PIR sensors for human identification. Firstly, the Fast Fourier Transform (FFT), Short Time Fourier Transform (STFT), Wavelet Transform (WT) and Wavelet Packet Transform (WPT) are adopted to extract features of the human body, which can be achieved by a single PIR sensor. Then, we apply Principal Component Analysis (PCA) and Support Vector Machine (SVM) to reduce the characteristic dimensions and to classify the human targets, respectively. Finally, Fuzzy Comprehensive Evaluation (FCE) is utilized to fuse recognition results from multiple PIR sensors to finalize human identification. The pyroelectric characteristics under scenarios with different people and/or different paths are analyzed through various experiments, and the recognition results with and without the fusion procedure are shown and compared. The experimental results demonstrate that our scheme has improved efficiency for human identification. Algorithms 2014, 7(4), 685-702 (Article); doi: 10.3390/a7040685; published 18 December 2014. Authors: Wanchun Zhou, Ji Xiong, Fangmin Li, Na Jiang, Ning Zhao.<![CDATA[Algorithms, Vol. 7, Pages 663-684: COOBBO: A Novel Opposition-Based Soft Computing Algorithm for TSP Problems]]>
http://www.mdpi.com/1999-4893/7/4/663
In this paper, we propose a novel definition of the opposite path. Its core feature is that the sequence of candidate paths and the distances between adjacent nodes in the tour are considered simultaneously. In a sense, the candidate path and its corresponding opposite path have the same (or at least similar) distance to the optimal path in the current population. Based on an accepted framework for employing opposition-based learning, Oppositional Biogeography-Based Optimization using the Current Optimum, called the COOBBO algorithm, is introduced to solve traveling salesman problems. We demonstrate its performance on eight benchmark problems and compare it with other optimization algorithms. Simulation results illustrate that the excellent performance of our proposed algorithm is attributed to the distinct definition of the opposite path. In addition, its great strength lies in exploitation for enhancing the solution accuracy, not exploration for improving the population diversity. Finally, by comparing different versions of COOBBO, another conclusion is that each successful opposition-based soft computing algorithm needs to adjust and maintain a good balance between the backward adjacent node and the forward adjacent node. Algorithms 2014, 7(4), 663-684 (Article); doi: 10.3390/a7040663; published 12 December 2014. Authors: Qingzheng Xu, Lemeng Guo, Na Wang, Yongjian He.<![CDATA[Algorithms, Vol. 7, Pages 650-662: Time Series Prediction Method of Bank Cash Flow and Simulation Comparison]]>
http://www.mdpi.com/1999-4893/7/4/650
In order to improve the accuracy of all kinds of information in the cash business and enhance the linkage between cash inventory forecasting and cash management information in commercial banks, the first moving average prediction method, the second moving average prediction method, the first exponential smoothing prediction method and the second exponential smoothing prediction method are adopted to realize time series prediction of bank cash flow. The prediction accuracy of the cash flow time series is improved by optimizing the algorithm parameters. The simulation experiments are carried out on real commercial bank cash flow data, and the predictive performance comparison results show the effectiveness of the proposed methods. Algorithms 2014, 7(4), 650-662 (Article); doi: 10.3390/a7040650; published 26 November 2014. Authors: Wen-Hua Cui, Jie-Sheng Wang, Chen-Xu Ning.<![CDATA[Algorithms, Vol. 7, Pages 635-649: The Lobe Fissure Tracking by the Modified Ant Colony Optimization Framework in CT Images]]>
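The first and second exponential smoothing predictors mentioned above can be sketched as follows. This is the generic textbook formulation (Brown's double smoothing for the second-order case); initialisation and parameter-tuning details may differ from the paper:

```python
def exponential_smoothing(series, alpha):
    """First-order (simple) exponential smoothing: each smoothed value
    is alpha * observation + (1 - alpha) * previous smoothed value.
    The last entry serves as the one-step-ahead forecast."""
    s = [series[0]]
    for x in series[1:]:
        s.append(alpha * x + (1 - alpha) * s[-1])
    return s

def double_exponential_forecast(series, alpha):
    """Second-order (Brown's double) exponential smoothing forecast:
    smooth the series twice, estimate level a and trend b from the two
    smoothed values, and extrapolate one step ahead."""
    s1 = exponential_smoothing(series, alpha)
    s2 = exponential_smoothing(s1, alpha)
    a = 2 * s1[-1] - s2[-1]
    b = alpha / (1 - alpha) * (s1[-1] - s2[-1])
    return a + b
```

Simple smoothing lags behind a trending series, which is why the second-order variant, which models the trend explicitly, is included among the compared predictors.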
http://www.mdpi.com/1999-4893/7/4/635
Chest computed tomography (CT) is the most commonly used technique for the inspection of lung lesions. However, the lobe fissures in lung CT are still difficult to observe owing to the imaging structure. Therefore, in this paper, we aimed to develop an efficient tracking framework to extract the lobe fissures using the proposed modified ant colony optimization (ACO) algorithm. We used the method of increasing the consistency of pheromone on the lobe fissure to improve the accuracy of path tracking. In order to validate the proposed system, we tested our method on a database of 15 lung patients. In the experiment, the quantitative assessment shows that the proposed ACO method achieved average F-measures of 80.9% and 82.84% in the left and right lungs, respectively. The experiments indicate that our method yields more satisfactory performance and can help investigators detect lung lesions for further examination. Algorithms 2014, 7(4), 635-649 (Article); doi: 10.3390/a7040635; published 24 November 2014. Authors: Chii-Jen Chen, You-Wei Wang, Wei-Chih Shen, Chih-Yi Chen, Wen-Pinn Fang.<![CDATA[Algorithms, Vol. 7, Pages 621-634: Neural Networks for Muscle Forces Prediction in Cycling]]>
http://www.mdpi.com/1999-4893/7/4/621
This paper documents the research towards the development of a system based on Artificial Neural Networks to predict muscle force patterns of an athlete during cycling. Two independent inverse problems must be solved for the force estimation: evaluation of the kinematic model and evaluation of the forces distribution along the limb. By solving the two inverse problems repeatedly for different subjects and conditions, a training pattern for an Artificial Neural Network was created. Then, the trained network was validated against an independent validation set, and the Bland–Altman method was used to evaluate agreement between the two alternative approaches. The obtained neural network yields a normalized error well below 1% for the different test patterns, and the Bland–Altman plot shows a considerable correlation between the two methods. The new approach proposed herein allows a direct and fast computation of the inverse dynamics of a cyclist, opening the possibility of integrating such an algorithm in a real time environment such as an embedded application. Algorithms 2014, 7(4), 621-634 (Article); doi: 10.3390/a7040621; published 13 November 2014. Authors: Giulio Cecchini, Gabriele Lozito, Maurizio Schmid, Silvia Conforto, Francesco Fulginei, Daniele Bibbo.<![CDATA[Algorithms, Vol. 7, Pages 608-620: High-Order Entropy Compressed Bit Vectors with Rank/Select]]>
http://www.mdpi.com/1999-4893/7/4/608
We design practical implementations of data structures for compressing bit-vectors to support efficient rank-queries (counting the number of ones up to a given point). Unlike previous approaches, which either store the bit vectors plainly, or focus on compressing bit-vectors with low densities of ones or zeros, we aim at low entropies of higher order, for example 101010...10. Our implementations achieve very good compression ratios, while showing only a modest increase in query time. Algorithms 2014, 7(4), 608-620 (Article); doi: 10.3390/a7040608; published 3 November 2014. Authors: Kai Beskers, Johannes Fischer.<![CDATA[Algorithms, Vol. 7, Pages 597-607: Eight-Scale Image Contrast Enhancement Based on Adaptive Inverse Hyperbolic Tangent Algorithm]]>
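The rank query being supported can be illustrated with a plain, uncompressed baseline that precomputes popcounts at block boundaries (the paper's contribution is answering the same query on compressed bit vectors; this sketch stores the bits verbatim):

```python
class RankBitVector:
    """rank1(i) = number of ones in bits[0:i], answered by adding a
    precomputed block-boundary prefix count to a short in-block scan."""

    def __init__(self, bits, block=64):
        self.bits = bits
        self.block = block
        # prefix popcounts at multiples of `block`
        self.super = [0]
        for start in range(0, len(bits), block):
            self.super.append(self.super[-1] + sum(bits[start:start + block]))

    def rank1(self, i):
        b, r = divmod(i, self.block)
        return self.super[b] + sum(self.bits[b * self.block:
                                            b * self.block + r])
```

Real implementations add a second level of counters and word-level popcounts to make the in-block scan constant time; the periodic pattern 101010...10 from the abstract is exactly the kind of input where plain storage wastes space that higher-order entropy coding can reclaim.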
http://www.mdpi.com/1999-4893/7/4/597
The Eight-Scale parameter adjustment is a natural extension of the Adaptive Inverse Hyperbolic Tangent (AIHT) algorithm. It has long been known that the Human Vision System (HVS) relies heavily on detail and edges in the understanding and perception of scenes. The main goal of this study is to produce a contrast enhancement technique that recovers an image from blurring and darkness while improving visual quality. Eight-scale coefficient adjustments can provide a further local refinement in detail under the AIHT algorithm. The proposed Eight-Scale Adaptive Inverse Hyperbolic Tangent (8SAIHT) method uses the sub-band to calculate the local mean and local variance before the AIHT algorithm is applied. This study also shows that this approach is convenient and effective in the enhancement process for various types of images. The 8SAIHT is also capable of adaptively enhancing the local contrast of the original image while simultaneously bringing out more object detail.Algorithms2014-10-2874Article10.3390/a70405975976071999-48932014-10-28doi: 10.3390/a7040597Cheng-Yi YuChi-Yuan LinSheng-Chih YangHsueh-Yi Lin<![CDATA[Algorithms, Vol. 7, Pages 582-596: Processing KNN Queries in Grid-Based Sensor Networks]]>
http://www.mdpi.com/1999-4893/7/4/582
Recently, developing efficient processing techniques in spatio-temporal databases has been a much discussed topic. Many applications, such as mobile information systems, traffic control systems, and geographical information systems, can benefit from efficient processing of spatio-temporal queries. In this paper, we focus on processing an important type of spatio-temporal query, the K-nearest neighbor (KNN) query. Unlike previous research, the locations of objects are determined by sensors deployed in a grid-based manner. Because positioning relies on the sensor network technique rather than GPS, there is greater uncertainty regarding object locations. With the uncertain location information of objects, we develop an efficient algorithm to process KNN queries. Moreover, we design a probability model to quantify the possibility of each object being a query result. Finally, extensive experiments are conducted to demonstrate the efficiency of the proposed algorithms.Algorithms2014-10-2374Article10.3390/a70405825825961999-48932014-10-23doi: 10.3390/a7040582Yuan-Ko Huang<![CDATA[Algorithms, Vol. 7, Pages 554-581: Parallelizing Particle Swarm Optimization in a Functional Programming Environment]]>
http://www.mdpi.com/1999-4893/7/4/554
Many bioinspired methods are based on using several simple entities which search for a reasonable solution (somehow) independently. This is the case of Particle Swarm Optimization (PSO), where many simple particles search for the optimum solution by using both their local information and the information of the best solution found so far by any of the other particles. Particles are partially independent, and we can take advantage of this fact to parallelize PSO programs. Unfortunately, providing good parallel implementations for each specific PSO program can be tricky and time-consuming for the programmer. In this paper we introduce several parallel functional skeletons which, given a sequential PSO implementation, automatically provide the corresponding parallel implementations of it. We use these skeletons and report some experimental results. We observe that, despite the low effort required by programmers to use these skeletons, empirical results show that skeletons reach reasonable speedups.Algorithms2014-10-2374Article10.3390/a70405545545811999-48932014-10-23doi: 10.3390/a7040554Pablo RabanalIsmael RodríguezFernando Rubio<![CDATA[Algorithms, Vol. 7, Pages 538-553: Predicting Student Academic Performance: A Comparison of Two Meta-Heuristic Algorithms Inspired by Cuckoo Birds for Training Neural Networks]]>
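As a rough illustration of the particle update described above (each particle blends inertia, its own best position, and the best solution found by the swarm), here is a minimal sequential PSO sketch in Python. The parameter values, the search box, and the function names are illustrative assumptions, not the paper's skeletons:

```python
import random

def pso(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over the box [-5, 5]^dim with a basic particle swarm."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # best position found by any particle
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity mixes inertia, the particle's own memory, and the swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Note that within one iteration the per-particle updates are independent apart from reads of `gbest`, which is exactly the partial independence the paper's parallel skeletons exploit.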
http://www.mdpi.com/1999-4893/7/4/538
Predicting student academic performance with high accuracy facilitates admission decisions and enhances educational services at educational institutions. This raises the need to propose a model that predicts student performance, based on the results of standardized exams, including university entrance exams, high school graduation exams, and other influential factors. In this study, an approach to the problem based on the artificial neural network (ANN) with the two meta-heuristic algorithms inspired by cuckoo birds and their lifestyle, namely, Cuckoo Search (CS) and Cuckoo Optimization Algorithm (COA) is proposed. In particular, we used previous exam results and other factors, such as the location of the student’s high school and the student’s gender as input variables, and predicted the student academic performance. The standard CS and standard COA were separately utilized to train the feed-forward network for prediction. The algorithms optimized the weights between layers and biases of the neural network. The simulation results were then discussed and analyzed to investigate the prediction ability of the neural network trained by these two algorithms. The findings demonstrated that both CS and COA have potential in training ANNs, and ANN-COA obtained slightly better results for predicting student academic performance in this case. It is expected that this work may be used to support student admission procedures and strengthen the service system in educational institutions.Algorithms2014-10-1674Article10.3390/a70405385385531999-48932014-10-16doi: 10.3390/a7040538Jeng-Fung ChenHo-Nien HsiehQuang Do<![CDATA[Algorithms, Vol. 7, Pages 523-537: Multi-Sensor Building Fire Alarm System with Information Fusion Technology Based on D-S Evidence Theory]]>
http://www.mdpi.com/1999-4893/7/4/523
Multi-sensor and information fusion technology based on Dempster-Shafer evidence theory is applied in a building fire alarm system to realize early detection and warning. By using multiple sensors to monitor the parameters of the fire process, such as light, smoke, temperature, gas and moisture, the range of fire monitoring in space and time is expanded compared with a single-sensor system. The D-S evidence theory is then applied to fuse the information from the multiple sensors with the specific fire model, making the fire alarm more accurate and timely. The proposed method can effectively avoid failures of the monitoring data, robustly handle conflicting evidence from the multiple sensors and significantly improve the reliability of fire warning.Algorithms2014-10-1474Article10.3390/a70405235235371999-48932014-10-14doi: 10.3390/a7040523Qian DingZhenghong PengTianzhen LiuQiaohui Tong<![CDATA[Algorithms, Vol. 7, Pages 510-522: A CR Spectrum Allocation Algorithm in Smart Grid Wireless Sensor Network]]>
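The fusion step can be illustrated with Dempster's rule of combination. The sketch below is the generic textbook rule over mass functions (dicts mapping frozenset hypotheses to mass), not the paper's specific fire model:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule: multiply the masses of
    every pair of focal elements, keep the mass on non-empty intersections,
    and renormalize by 1 - K, where K is the mass lost to empty intersections
    (the 'conflict' between the two sources)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}
```

For example, a smoke sensor assigning mass 0.6 to "fire" and a temperature sensor assigning 0.7 to "fire" combine to a belief in "fire" above either individual mass, which is how fusion sharpens the alarm decision.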
http://www.mdpi.com/1999-4893/7/4/510
The cognitive radio (CR) method was introduced into smart grid communication systems to resolve potential maladies such as the coexistence of heterogeneous networks, overloaded data flow, diversity in data structures, and unstable quality of service (QoS). In this paper, a cognitive spectrum allocation algorithm based on non-cooperative game theory is proposed. The CR spectrum allocation model was developed by modifying the traditional game model via the insertion of a time variable and a critical function. The simulation results show that the improved spectrum allocation algorithm can achieve stable spectrum allocation strategies and avoid the appearance of multiple Nash equilibria at the expense of some system utility. It is suitable for application in distributed cognitive networks in power grids, thus contributing to the improvement of the isomerism and data capacity of power communication systems.Algorithms2014-10-1374Article10.3390/a70405105105221999-48932014-10-13doi: 10.3390/a7040510Wei HeKe LiQiang ZhouSongnong Li<![CDATA[Algorithms, Vol. 7, Pages 492-509: Power Cable Fault Recognition Based on an Annealed Chaotic Competitive Learning Network]]>
http://www.mdpi.com/1999-4893/7/4/492
In electric power systems, power cable operation under normal conditions is very important. Various cable faults occur in practical applications, so recognizing cable faults correctly and in a timely manner is crucial. In this paper, we propose a method in which an annealed chaotic competitive learning network recognizes power cable fault types. The method shows good performance compared with the support vector machine (SVM) and the improved Particle Swarm Optimization (IPSO)-SVM methods. The experimental results show that the fault recognition accuracy reached 96.2%, using 54 data samples, and the network training time is about 0.032 seconds. The method can achieve cable fault classification effectively.Algorithms2014-09-2674Article10.3390/a70404924925091999-48932014-09-26doi: 10.3390/a7040492Xuebin QinMei WangJzau-Sheng LinXiaowei Li<![CDATA[Algorithms, Vol. 7, Pages 471-491: Applying a Dynamic Resource Supply Model in a Smart Grid]]>
http://www.mdpi.com/1999-4893/7/3/471
Dynamic resource supply is a complex issue to resolve in a cyber-physical system (CPS). In our previous work, a resource model called the dynamic resource supply model (DRSM) was proposed to handle resource specification, management and allocation in CPS. In this paper, we integrate the DRSM with service-oriented architecture and apply it to a smart grid (SG), one of the most complex CPS examples. We give the detailed design of the SG for electricity charging requests and electricity allocation between plug-in hybrid electric vehicles (PHEVs) and the DRSM through the Android system. In the design, we explain a mechanism for electricity consumption with data collection and re-allocation through a ZigBee network. With this design, we verify the correctness of the resource model for the expected electricity allocation.Algorithms2014-09-2273Article10.3390/a70304714714911999-48932014-09-22doi: 10.3390/a7030471Kaiyu WanYuji DongQian ChangTengfei Qian<![CDATA[Algorithms, Vol. 7, Pages 456-470: A Fovea Localization Scheme Using Vessel Origin-Based Parabolic Model]]>
http://www.mdpi.com/1999-4893/7/3/456
At the center of the macula, the fovea plays an important role in computer-aided diagnosis. To locate the fovea, this paper proposes a vessel origin (VO)-based parabolic model, which takes the VO as the vertex of the parabola-like vasculature. Image processing steps are applied to accurately locate the fovea on retinal images. Firstly, morphological gradient and the circular Hough transform are used to find the optic disc. The structure of the vessel is then segmented with a line detector. Based on the characteristics of the VO, four features of the VO are extracted, followed by a Bayesian classification procedure. Once the VO is identified, the VO-based parabolic model locates the fovea. To find the best-fitting parabola and the symmetry axis of the retinal vessel, a Shift and Rotation (SR)-Hough transform that combines the Hough transform with the shift and rotation of coordinates is presented. Two public databases of retinal images, DRIVE and STARE, are used to evaluate the proposed method. The experimental results show that the average Euclidean distances between the located fovea and the fovea marked by experts in the two databases are 9.8 pixels and 30.7 pixels, respectively. These results are better than those of other methods and thus provide better macular detection for further disease discovery.Algorithms2014-09-1073Article10.3390/a70304564564701999-48932014-09-10doi: 10.3390/a7030456Chun-Yuan YuChen-Chung LiuShyr-Shen Yu<![CDATA[Algorithms, Vol. 7, Pages 444-455: A Novel Contrast Enhancement Technique on Palm Bone Images]]>
http://www.mdpi.com/1999-4893/7/3/444
Contrast enhancement plays a fundamental role in image processing. Many histogram-based techniques are widely used for contrast enhancement of given images, due to their simplicity and effectiveness. However, the conventional histogram equalization (HE) methods result in excessive contrast enhancement, which causes unnatural-looking and unsatisfactory results for a variety of low contrast images. To solve such problems, a novel multi-histogram equalization technique is proposed in this paper to enhance the contrast of palm bone X-ray radiographs. The mean-variance analysis method is employed to partition the histogram of the original grey scale image into multiple sub-histograms, and these sub-histograms are independently equalized. By using this mean-variance partition method, the proposed multi-histogram equalization technique achieves contrast enhancement of the palm bone X-ray radiographs. Experimental results show that the multi-histogram equalization technique achieves a lower average absolute mean brightness error (AMBE) value. The multi-histogram equalization technique simultaneously preserves the mean brightness and enhances the local contrast of the original image.Algorithms2014-09-0573Article10.3390/a70304444444551999-48932014-09-05doi: 10.3390/a7030444Yung-Tsang ChangJen-Tse WangWang-Hsai Yang<![CDATA[Algorithms, Vol. 7, Pages 429-443: ℓ1 Major Component Detection and Analysis (ℓ1 MCDA) in Three and Higher Dimensional Spaces]]>
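The partition-then-equalize idea can be sketched in its simplest two-sub-histogram form: split at the image mean and equalize each half into its own output range, which limits the brightness shift of plain HE. The paper's method partitions by mean-variance analysis into more sub-histograms, so this is only an illustrative simplification:

```python
def sub_equalize(pixels, lo, hi):
    """Map the grey levels occurring in pixels onto [lo, hi] through their
    cumulative distribution (classical histogram equalization on a sub-range)."""
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    total = len(pixels)
    cdf, mapping = 0, {}
    for level in sorted(hist):
        cdf += hist[level]
        mapping[level] = lo + round((hi - lo) * cdf / total)
    return mapping

def bi_histogram_equalize(pixels, levels=256):
    """Split the histogram at the image mean and equalize each part into its
    own sub-range, so the overall mean brightness is roughly preserved."""
    mean = sum(pixels) / len(pixels)
    low = [p for p in pixels if p <= mean]
    high = [p for p in pixels if p > mean]
    m = int(mean)
    mapping = sub_equalize(low, 0, m)
    mapping.update(sub_equalize(high, m + 1, levels - 1))
    return [mapping[p] for p in pixels]
```

Dark pixels stay in the dark half of the output range and bright pixels in the bright half, yet each half's contrast is stretched over its full sub-range.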
http://www.mdpi.com/1999-4893/7/3/429
Based on the recent development of two dimensional ℓ1 major component detection and analysis (ℓ1 MCDA), we develop a scalable ℓ1 MCDA in the n-dimensional space to identify the major directions of star-shaped heavy-tailed statistical distributions with irregularly positioned “spokes” and “clutters”. In order to achieve robustness and efficiency, the proposed ℓ1 MCDA in n-dimensional space adopts a two-level median fit process in a local neighbor of a given direction in each iteration. Computational results indicate that in terms of accuracy ℓ1 MCDA is competitive with two well-known PCAs when there is only one major direction in the data, and ℓ1 MCDA can further determine multiple major directions of the n-dimensional data from superimposed Gaussians or heavy-tailed distributions without and with patterned artificial outliers. With the ability to recover complex spoke structures with heavy-tailed noise and clutter in the data, ℓ1 MCDA has potential to generate better semantics than other methods.Algorithms2014-08-1973Article10.3390/a70304294294431999-48932014-08-19doi: 10.3390/a7030429Zhibin DengJohn LaveryShu-Cherng FangJian Luo<![CDATA[Algorithms, Vol. 7, Pages 418-428: Target Channel Visiting Order Design Using Particle Swarm Optimization for Spectrum Handoff in Cognitive Radio Networks]]>
http://www.mdpi.com/1999-4893/7/3/418
In a dynamic spectrum access network, when a primary user (licensed user) reappears on the current channel, cognitive radios (CRs) need to vacate the channel and reestablish a communications link on some other channel to avoid interference to primary users, resulting in spectrum handoff. This paper studies the problem of designing the target channel visiting order for spectrum handoff to minimize the expected spectrum handoff delay. A particle swarm optimization (PSO) based algorithm is proposed to solve the problem. Simulation results show that the proposed algorithm performs far better than the random target channel visiting scheme. The solutions obtained by PSO are very close to the optimal solution, which further validates the effectiveness of the proposed method.Algorithms2014-08-1873Article10.3390/a70304184184281999-48932014-08-18doi: 10.3390/a7030418Shilian ZhengZhijin ZhaoChanglin LuoXiaoniu Yang<![CDATA[Algorithms, Vol. 7, Pages 405-417: Seminal Quality Prediction Using Clustering-Based Decision Forests]]>
http://www.mdpi.com/1999-4893/7/3/405
Prediction of seminal quality with statistical learning tools is an emerging methodology in decision support systems in biomedical engineering and is very useful in the early diagnosis of seminal disorders and the selection of semen donor candidates. However, as is common in medical diagnosis, seminal quality prediction faces the class imbalance problem. In this paper, we propose a novel supervised ensemble learning approach, namely Clustering-Based Decision Forests, to tackle the unbalanced class learning problem in seminal quality prediction. Experimental results on a real fertility diagnosis dataset have shown that Clustering-Based Decision Forests outperforms decision trees, Support Vector Machines, random forests, multilayer perceptron neural networks and logistic regression by a noticeable margin. Clustering-Based Decision Forests can also be used to evaluate variables’ importance, and the top five important factors that may affect semen concentration obtained in this study are age, serious trauma, sitting time, the season when the semen sample is produced, and high fevers in the last year. The findings could be helpful in explaining seminal concentration problems in infertile males or pre-screening semen donor candidates.Algorithms2014-08-1173Article10.3390/a70304054054171999-48932014-08-11doi: 10.3390/a7030405Hong WangQingsong XuLifeng Zhou<![CDATA[Algorithms, Vol. 7, Pages 397-404: Algorithm Based on Heuristic Strategy to Infer Lossy Links in Wireless Sensor Networks]]>
http://www.mdpi.com/1999-4893/7/3/397
As the practical application of wireless sensor networks matures, network fault management is in strong demand. Severe link packet loss affects the performance of wireless sensor networks, so it must be found and repaired. Subject to the constraints of limited resources, lossy links are inferred using end-to-end measurement and network tomography. An algorithm based on a heuristic strategy is proposed, which maps the problem of lossy link inference to a minimal set-cover problem. The performance of the inference algorithm is evaluated by simulation, and the simulation results indicate the feasibility and efficiency of the method.Algorithms2014-07-2973Article10.3390/a70303973974041999-48932014-07-29doi: 10.3390/a7030397Wen-Qing MaJing Zhang<![CDATA[Algorithms, Vol. 7, Pages 376-396: A Hybrid Metaheuristic Approach for Minimizing the Total Flow Time in A Flow Shop Sequence Dependent Group Scheduling Problem]]>
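Minimal set cover is NP-hard, and a standard way to attack it is the greedy heuristic: repeatedly pick the set covering the most still-uncovered elements. The sketch below is this generic greedy approximation, not necessarily the paper's exact heuristic strategy:

```python
def greedy_set_cover(universe, subsets):
    """Return a small list of subsets whose union covers the universe,
    picking at each step the subset covering the most uncovered elements
    (the classic ln(n)-approximation of minimal set cover)."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("the universe cannot be covered")
        cover.append(best)
        uncovered -= set(best)
    return cover
```

In the lossy-link setting, the "elements" would be the observed lossy end-to-end paths and each "subset" the paths traversing a candidate link, so a small cover names few links that explain all observations.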
http://www.mdpi.com/1999-4893/7/3/376
Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure integrating features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of completion times of all jobs. A well-known benchmark of test cases, entailing problems with two, three, and six machines, is employed for both tuning the relevant parameters of the developed procedure and assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.Algorithms2014-07-1473Article10.3390/a70303763763961999-48932014-07-14doi: 10.3390/a7030376Antonio CostaFulvio CappadonnaSergio Fichera<![CDATA[Algorithms, Vol. 7, Pages 363-375: Pressure Model of Control Valve Based on LS-SVM with the Fruit Fly Algorithm]]>
http://www.mdpi.com/1999-4893/7/3/363
The control valve is an essential terminal control component that is hard to model with traditional methodologies because of its complexity and nonlinearity. This paper proposes a new modeling method for the upstream pressure of a control valve using the least squares support vector machine (LS-SVM), which has been successfully used to identify nonlinear systems. In order to improve the modeling performance, the fruit fly optimization algorithm (FOA) is used to optimize two critical parameters of LS-SVM. As an example, a set of actual production data from a chlorine control system in the salt chemical industry is applied. The validity of the LS-SVM modeling method using FOA is verified by comparing the predicted results with the actual data, with an MSE of 2.474 × 10−3. Moreover, it is demonstrated that the initial position of FOA does not affect its optimal ability. For comparison, simulation experiments based on the PSO algorithm and the grid search method are also carried out. The results show that LS-SVM based on FOA has equal performance in prediction accuracy. However, in terms of calculation time, FOA has a significant advantage and is more suitable for online prediction.Algorithms2014-07-1173Article10.3390/a70303633633751999-48932014-07-11doi: 10.3390/a7030363Huang AiqinWang Yong<![CDATA[Algorithms, Vol. 7, Pages 339-362: Model Checking Properties on Reduced Trace Systems]]>
http://www.mdpi.com/1999-4893/7/3/339
Temporal logic has become a well-established method for specifying the behavior of distributed systems. In this paper, we interpret a temporal logic over a partial order model that is a trace system. The satisfaction of the formulae is directly defined on traces on the basis of rewriting rules, so the graph representation of the system can be completely avoided; moreover, a method is presented that keeps the trace system finite, even in the presence of infinite computations. To further reduce the complexity of model checking temporal logic formulae, an abstraction technique is applied to trace systems.Algorithms2014-07-0873Article10.3390/a70303393393621999-48932014-07-08doi: 10.3390/a7030339Antonella SantoneGigliola Vaglini<![CDATA[Algorithms, Vol. 7, Pages 328-338: Economic Dispatch Using Modified Bat Algorithm]]>
http://www.mdpi.com/1999-4893/7/3/328
Economic dispatch is an important non-linear optimization task in power systems. In this process, the total power demand is distributed amongst the generating units such that each unit satisfies its generation limit constraints and the cost of power production is minimized. This paper presents an overview of three optimization algorithms, namely the real-coded genetic algorithm, particle swarm optimization and a relatively new optimization technique called the bat algorithm. This study further proposes modifications to the original bat algorithm. Simulations are carried out for two test cases. The first is a six-generator power system with a simplified convex objective function. The second test case is a five-generator system with a non-convex objective function. Finally, the results of the modified algorithm are compared with the results of the genetic algorithm, particle swarm optimization and the original bat algorithm. The results demonstrate the improvement achieved by the modified bat algorithm.Algorithms2014-07-0373Article10.3390/a70303283283381999-48932014-07-03doi: 10.3390/a7030328Aadil LatifPeter Palensky<![CDATA[Algorithms, Vol. 7, Pages 295-327: Solving the Examination Timetabling Problem in GPUs]]>
http://www.mdpi.com/1999-4893/7/3/295
The examination timetabling problem belongs to the class of combinatorial optimization problems and is of great importance for every university. In this paper, a hybrid evolutionary algorithm running on a GPU is employed to solve the examination timetabling problem. The hybrid evolutionary algorithm proposed has a genetic algorithm component and a greedy steepest descent component. The GPU computational capabilities allow the use of very large population sizes, leading to a more thorough exploration of the problem solution space. The GPU implementation, depending on the size of the problem, is up to twenty-six times faster than the identical single-threaded CPU implementation of the algorithm. The algorithm is evaluated with the well-known Toronto datasets and compares well with the best results found in the literature. Moreover, the selection of the encoding of the chromosomes and the tournament selection size as the population grows are examined and optimized. The compressed sparse row format is used for the conflict matrix and was proven essential to the process, since most of the datasets have a small conflict density, which translates into an extremely sparse matrix.Algorithms2014-07-0373Article10.3390/a70302952953271999-48932014-07-03doi: 10.3390/a7030295Vasileios KoloniasGeorge GoulasChristos GogosPanayiotis AlefragisEfthymios Housos<![CDATA[Algorithms, Vol. 7, Pages 276-294: Group Sparse Reconstruction of Multi-Dimensional Spectroscopic Imaging in Human Brain in vivo]]>
http://www.mdpi.com/1999-4893/7/3/276
Four-dimensional (4D) Magnetic Resonance Spectroscopic Imaging (MRSI) data combining 2 spatial and 2 spectral dimensions provides valuable biochemical information in vivo; however, its 20–40 min acquisition time is too long to be used for a clinical protocol. Data acquisition can be accelerated by non-uniformly under-sampling (NUS) the ky−t1 plane, but this causes artifacts in the spatial-spectral domain that must be removed by non-linear, iterative reconstruction. Previous work has demonstrated the feasibility of accelerating 4D MRSI data acquisition through NUS and iterative reconstruction using Compressed Sensing (CS), Total Variation (TV), and Maximum Entropy (MaxEnt) reconstruction. Group Sparse (GS) reconstruction is a variant of CS that exploits the structural sparsity of transform coefficients to achieve higher acceleration factors than traditional CS. In this article, we derive a solution to the GS reconstruction problem within the Split Bregman iterative framework that uses arbitrary transform grouping patterns of overlapping or non-overlapping groups. The 4D Echo-Planar Correlated Spectroscopic Imaging (EP-COSI) gray matter brain phantom and in vivo brain data are retrospectively under-sampled 2×, 4×, 6×, 8×, and 10× and reconstructed using CS, TV, MaxEnt, and GS with overlapping or non-overlapping groups. Results show that GS reconstruction with overlapping groups outperformed the other reconstruction methods at each NUS rate for both phantom and in vivo data. These results can potentially reduce the scan time of a 4D EP-COSI brain scan from 40 min to under 5 min in vivo.Algorithms2014-06-2673Article10.3390/a70302762762941999-48932014-06-26doi: 10.3390/a7030276Brian BurnsNeil WilsonM. Thomas<![CDATA[Algorithms, Vol. 7, Pages 253-275: A Faster Quick Search Algorithm]]>
http://www.mdpi.com/1999-4893/7/2/253
We present the FQS (faster quick search) algorithm, an improved variation of the quick search algorithm. The quick search (QS) exact pattern matching algorithm and its variants are among the fastest practical matching algorithms today. The FQS algorithm computes a statistically expected shift value, which allows maximal shifts and a smaller number of comparisons between the pattern and the text. Compared to the state-of-the-art QS variants of exact pattern matching algorithms, the proposed FQS algorithm is the fastest when |Σ| ≤ 128, where |Σ| is the alphabet size. FQS also has a competitive running time when |Σ| > 128. Running on three practical text files, E. coli (|Σ| = 4), Bible (|Σ| = 63) and World192 (|Σ| = 94), FQS resulted in the best performance in practice. Our FQS algorithm will have important applications in the domain of genomic database searching, involving DNA or RNA sequence databases with four symbols Σ = {A, C, G, T(/U)} or protein databases with |Σ| = 20.Algorithms2014-06-2372Article10.3390/a70202532532751999-48932014-06-23doi: 10.3390/a7020253Jie LinDonald AdjerohYue Jiang<![CDATA[Algorithms, Vol. 7, Pages 243-252: Efficient Algorithms for Subgraph Listing]]>
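For context, baseline Quick Search shifts by the bad-character rule applied to the text character just past the current window; FQS replaces this with a statistically expected shift. A minimal sketch of standard QS (not FQS) in Python:

```python
def quick_search(text, pattern):
    """Quick Search exact matching: on each attempt, compare the window to the
    pattern, then shift by the bad-character rule applied to the text
    character immediately PAST the window (shift m+1 if it never occurs)."""
    n, m = len(text), len(pattern)
    # shift[c] = m - (last index of c in pattern); absent characters get m + 1
    shift = {c: m - i for i, c in enumerate(pattern)}
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        if i + m >= n:
            break
        i += shift.get(text[i + m], m + 1)
    return hits
```

Because the probe character lies beyond the window, the shift is always at least 1 and can be as large as m + 1, which is what makes QS fast on large alphabets.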
http://www.mdpi.com/1999-4893/7/2/243
Subgraph isomorphism is a fundamental problem in graph theory. In this paper we focus on listing subgraphs isomorphic to a given pattern graph. First, we look at the algorithm due to Chiba and Nishizeki for listing complete subgraphs of fixed size, and show that it cannot be extended to general subgraphs of fixed size. Then, we consider the algorithm due to Ga̧sieniec et al. for finding multiple witnesses of a Boolean matrix product, and use it to design a new output-sensitive algorithm for listing all triangles in a graph. As a corollary, we obtain an output-sensitive algorithm for listing subgraphs and induced subgraphs isomorphic to an arbitrary fixed pattern graph.Algorithms2014-05-1572Article10.3390/a70202432432521999-48932014-05-15doi: 10.3390/a7020243Niklas ZechnerAndrzej Lingas<![CDATA[Algorithms, Vol. 7, Pages 229-242: Application of Imperialist Competitive Algorithm on Solving the Traveling Salesman Problem]]>
http://www.mdpi.com/1999-4893/7/2/229
The imperialist competitive algorithm (ICA) is a new heuristic algorithm proposed for continuous optimization problems. Research on its application to the traveling salesman problem (TSP) is still very limited. Aiming to explore its ability to solve the TSP, we present a discrete imperialist competitive algorithm in this paper. The proposed algorithm modifies the original rules of the assimilation and introduces the 2-opt algorithm into the revolution process. To examine its performance, we tested the proposed algorithm on 10 small-scale and 2 large-scale standard benchmark instances from the TSPLIB and compared the experimental results with those obtained by two other ICA-based algorithms and six other existing algorithms. The proposed algorithm shows excellent performance in the experiments and comparisons.Algorithms2014-05-1372Article10.3390/a70202292292421999-48932014-05-13doi: 10.3390/a7020229Shuhui XuYong WangAiqin Huang<![CDATA[Algorithms, Vol. 7, Pages 206-228: Stochastic Diffusion Search: A Comparison of Swarm Intelligence Parameter Estimation Algorithms with RANSAC]]>
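The 2-opt move introduced into the revolution process removes a crossing by reversing a tour segment whenever the two replacement edges are shorter than the two they remove. The following is a generic 2-opt local search sketch, independent of the ICA machinery:

```python
def tour_length(tour, dist):
    """Total length of a closed tour under a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly apply improving 2-opt moves until the tour is 2-opt optimal."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # this pair would reverse the whole tour
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # replace edges (a,b) and (c,d) by (a,c) and (b,d) if shorter
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

On four corners of a unit square, the crossing tour 0→2→1→3 is repaired to the perimeter tour of length 4.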
http://www.mdpi.com/1999-4893/7/2/206
Stochastic diffusion search (SDS) is a multi-agent global optimisation technique based on the behaviour of ants, rooted in the partial evaluation of an objective function and direct communication between agents. Standard SDS, the fundamental algorithm at work in all SDS processes, is presented here. Parameter estimation is the task of suitably fitting a model to given data; some form of parameter estimation is a key element of many computer vision processes. Here, the task of hyperplane estimation in many dimensions is investigated. Following RANSAC (random sample consensus), a widely used optimisation technique and a standard technique for many parameter estimation problems, increasingly sophisticated data-driven forms of SDS are developed. The performance of these SDS algorithms and RANSAC is analysed and compared for a hyperplane estimation task. SDS is shown to perform similarly to RANSAC, with potential for tuning to particular search problems for improved results.Algorithms2014-05-0572Article10.3390/a70202062062281999-48932014-05-05doi: 10.3390/a7020206Howard WilliamsMark Bishop<![CDATA[Algorithms, Vol. 7, Pages 203-205: Editorial: Special Issue on Matching under Preferences]]>
http://www.mdpi.com/1999-4893/7/2/203
This special issue of Algorithms is devoted to the study of matching problems involving ordinal preferences from the standpoint of algorithms and complexity.Algorithms2014-04-0872Editorial10.3390/a70202032032051999-48932014-04-08doi: 10.3390/a7020203Péter BiróDavid Manlove<![CDATA[Algorithms, Vol. 7, Pages 189-202: Faster and Simpler Approximation of Stable Matchings]]>
http://www.mdpi.com/1999-4893/7/2/189
We give a 3/2-approximation algorithm for finding stable matchings that runs in O(m) time. The previous most well-known algorithm, by McDermid, has the same approximation ratio but runs in O(n^(3/2)m) time, where n denotes the number of people and m is the total length of the preference lists in a given instance. In addition, the algorithm and the analysis are much simpler. We also give the extension of the algorithm for computing stable many-to-many matchings.Algorithms2014-04-0472Article10.3390/a70201891892021999-48932014-04-04doi: 10.3390/a7020189Katarzyna Paluch<![CDATA[Algorithms, Vol. 7, Pages 188: Correction: Pareto Optimization or Cascaded Weighted Sum: A Comparison of Concepts. Algorithms 2014, 7, 166–185]]>
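For background, the classic deferred-acceptance (Gale-Shapley) algorithm computes an exact stable matching when preference lists are complete and strict; the approximation results above concern the harder variant with ties and incomplete lists, where maximizing the matching size is NP-hard. A minimal sketch of the classic algorithm:

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance for complete, strict preference lists:
    free men propose down their lists; each woman holds her best proposer
    so far and rejects the rest. Returns a stable man -> woman matching."""
    rank = {w: {m: r for r, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_prop = {m: 0 for m in men_prefs}  # index of each man's next proposal
    engaged = {}                           # woman -> currently held man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_prop[m]]
        next_prop[m] += 1
        cur = engaged.get(w)
        if cur is None:
            engaged[w] = m
        elif rank[w][m] < rank[w][cur]:    # w prefers the new proposer
            engaged[w] = m
            free.append(cur)
        else:
            free.append(m)
    return {m: w for w, m in engaged.items()}
```

With ties and incomplete lists, stable matchings can differ in size, which is why the paper measures quality by approximation ratio rather than exactness.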
http://www.mdpi.com/1999-4893/7/2/188
It has come to our attention that due to an error in producing the PDF version of the paper [1], doi:10.3390/a7010166, website: http://www.mdpi.com/1999-4893/7/1/166, Figures 1 and 9 are displayed incorrectly. [...]Algorithms2014-04-0272Correction10.3390/a70201881881881999-48932014-04-02doi: 10.3390/a7020188Kazuo Iwama<![CDATA[Algorithms, Vol. 7, Pages 186-187: Editorial: Special Issue on Algorithms for Sequence Analysis and Storage]]>
http://www.mdpi.com/1999-4893/7/1/186
This special issue of Algorithms is dedicated to approaches to biological sequence analysis that have algorithmic novelty and potential for fundamental impact in methods used for genome research.Algorithms2014-03-2571Editorial10.3390/a70101861861871999-48932014-03-25doi: 10.3390/a7010186Veli Mäkinen<![CDATA[Algorithms, Vol. 7, Pages 166-185: Pareto Optimization or Cascaded Weighted Sum: A Comparison of Concepts]]>
http://www.mdpi.com/1999-4893/7/1/166
Judging by the articles and conference papers published since the turn of the century, Pareto optimization is the dominant assessment method for multi-objective nonlinear optimization problems. However, is it always the method of choice for real-world applications, where either more than four objectives have to be considered, or the same type of task is repeated again and again with only minor modifications, in an automated optimization or planning process? This paper presents a classification of application scenarios and compares the Pareto approach with an extended version of the weighted sum, called the cascaded weighted sum, for the different scenarios. Its range of application within the field of multi-objective optimization is discussed, as well as its strengths and weaknesses.Algorithms2014-03-2171Article10.3390/a70101661661851999-48932014-03-21doi: 10.3390/a7010166Wilfried JakobChristian Blume<![CDATA[Algorithms, Vol. 7, Pages 145-165: The Minimum Scheduling Time for Convergecast in Wireless Sensor Networks]]>
http://www.mdpi.com/1999-4893/7/1/145
We study the scheduling problem for data collection from sensor nodes to the sink node in wireless sensor networks, also referred to as the convergecast problem. The convergecast problem in general network topology has been proven to be NP-hard. In this paper, we propose our heuristic algorithm (finding the minimum scheduling time for convergecast (FMSTC)) for general network topology and evaluate the performance by simulation. The results of the simulation showed that the number of time slots to reach the sink node decreased with an increase in the power. We compared the performance of the proposed algorithm to the optimal time slots in a linear network topology. The proposed algorithm for convergecast in a general network topology has 2.27 times more time slots than that of a linear network topology. To the best of our knowledge, the proposed method is the first attempt to apply the optimal algorithm in a linear network topology to a general network topology.Algorithms2014-03-1771Article10.3390/a70101451451651999-48932014-03-17doi: 10.3390/a7010145Changyong JungSuk LeeVijay Bhuse<![CDATA[Algorithms, Vol. 7, Pages 62-144: Modeling Dynamic Programming Problems over Sequences and Trees with Inverse Coupled Rewrite Systems]]>
http://www.mdpi.com/1999-4893/7/1/62
Dynamic programming is a classical algorithmic paradigm, which often allows the evaluation of a search space of exponential size in polynomial time. Recursive problem decomposition, tabulation of intermediate results for re-use, and Bellman’s Principle of Optimality are its well-understood ingredients. However, algorithms often lack abstraction and are difficult to implement, tedious to debug, and delicate to modify. The present article proposes a generic framework for specifying dynamic programming problems. This framework can handle all kinds of sequential inputs, as well as tree-structured data. Biosequence analysis, document processing, molecular structure analysis, comparison of objects assembled in a hierarchic fashion, and generally, all domains come under consideration where strings and ordered, rooted trees serve as natural data representations. The new approach introduces inverse coupled rewrite systems. They describe the solutions of combinatorial optimization problems as the inverse image of a term rewrite relation that reduces problem solutions to problem inputs. This specification leads to concise yet translucent specifications of dynamic programming algorithms. Their actual implementation may be challenging, but eventually, as we hope, it can be produced automatically. The present article demonstrates the scope of this new approach by describing a diverse set of dynamic programming problems which arise in the domain of computational biology, with examples in biosequence and molecular structure analysis.Algorithms2014-03-0771Article10.3390/a7010062621441999-48932014-03-07doi: 10.3390/a7010062Robert GiegerichHélène Touzet<![CDATA[Algorithms, Vol. 7, Pages 60-61: Acknowledgement to Reviewers of Algorithms in 2013]]>
http://www.mdpi.com/1999-4893/7/1/60
The editors of Algorithms would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2013. [...]Algorithms2014-02-2571Editorial10.3390/a701006060611999-48932014-02-25doi: 10.3390/a7010060 Algorithms Editorial Office<![CDATA[Algorithms, Vol. 7, Pages 32-59: Choice Function-Based Two-Sided Markets: Stability, Lattice Property, Path Independence and Algorithms]]>
http://www.mdpi.com/1999-4893/7/1/32
We build an abstract model, closely related to the stable marriage problem and motivated by Hungarian college admissions. We study different stability notions and show that an extension of the lattice property of stable marriages holds in these more general settings, even if the choice function on one side is not path independent. We lean on Tarski’s fixed point theorem and the substitutability property of choice functions. The main virtue of the work is that it exhibits practical, interesting examples, where non-path independent choice functions play a role, and proves various stability-related results.Algorithms2014-02-1471Article10.3390/a701003232591999-48932014-02-14doi: 10.3390/a7010032Tamás FleinerZsuzsanna Jankó<![CDATA[Algorithms, Vol. 7, Pages 15-31: Bio-Inspired Meta-Heuristics for Emergency Transportation Problems]]>
http://www.mdpi.com/1999-4893/7/1/15
Emergency transportation plays a vital role in the success of disaster rescue and relief operations, but its planning and scheduling often involve complex objectives and search spaces. In this paper, we conduct a survey of recent advances in bio-inspired meta-heuristics, including genetic algorithms (GA), particle swarm optimization (PSO), ant colony optimization (ACO), etc., for solving emergency transportation problems. We then propose a new hybrid biogeography-based optimization (BBO) algorithm, which outperforms some state-of-the-art heuristics on a typical transportation planning problem.Algorithms2014-02-1171Article10.3390/a701001515311999-48932014-02-11doi: 10.3390/a7010015Min-Xia ZhangBei ZhangYu-Jun Zheng<![CDATA[Algorithms, Vol. 7, Pages 1-14: On Stable Matchings and Flows]]>
http://www.mdpi.com/1999-4893/7/1/1
We describe a flow model related to ordinary network flows the same way as stable matchings are related to maximum matchings in bipartite graphs. We prove that there always exists a stable flow and generalize the lattice structure of stable marriages to stable flows. Our main tool is a straightforward reduction of the stable flow problem to stable allocations. For the sake of completeness, we prove the results we need on stable allocations as an application of Tarski’s fixed point theorem.Algorithms2014-01-2271Article10.3390/a70100011141999-48932014-01-22doi: 10.3390/a7010001Tamás Fleiner<![CDATA[Algorithms, Vol. 6, Pages 871-882: Sparse Signal Recovery from Fixed Low-Rank Subspace via Compressive Measurement]]>
http://www.mdpi.com/1999-4893/6/4/871
This paper designs and evaluates a variant of the CoSaMP algorithm for recovering the sparse signal s from the compressive measurement v = A(Uw+s), given a fixed low-rank subspace spanned by U. Instead of first recovering the full vector and then separating the sparse part from the structured dense part, the proposed algorithm works directly on the compressive measurement to perform the separation. We investigate the performance of the algorithm on both simulated data and video compressive sensing. The results show that, for a fixed low-rank subspace and a truly sparse signal, the proposed algorithm can successfully recover the signal from only a few compressive sensing (CS) measurements, and it performs better than ordinary CoSaMP when the sparse signal is corrupted by additional Gaussian noise.Algorithms2013-12-1764Article10.3390/a60408718718821999-48932013-12-17doi: 10.3390/a6040871Jun HeMing-Wei GaoLei ZhangHao Wu<![CDATA[Algorithms, Vol. 6, Pages 857-870: Solving Matrix Equations on Multi-Core and Many-Core Architectures]]>
http://www.mdpi.com/1999-4893/6/4/857
We address the numerical solution of Lyapunov, algebraic and differential Riccati equations, via the matrix sign function, on platforms equipped with general-purpose multicore processors and, optionally, one or more graphics processing units (GPUs). In particular, we review the solvers for these equations, as well as the underlying methods, analyze their concurrency and scalability and provide details on their parallel implementation. Our experimental results show that this class of hardware provides sufficient computational power to tackle large-scale problems, which only a few years ago would have required a cluster of computers.Algorithms2013-11-2564Article10.3390/a60408578578701999-48932013-11-25doi: 10.3390/a6040857Peter BennerPablo EzzattiHermann MenaEnrique Quintana-OrtíAlfredo Remón<![CDATA[Algorithms, Vol. 6, Pages 824-856: Overlays with Preferences: Distributed, Adaptive Approximation Algorithms for Matching with Preference Lists]]>
http://www.mdpi.com/1999-4893/6/4/824
A key property of overlay networks is the overlay nodes’ ability to establish connections (or be matched) to other nodes by preference, based on some suitability metric related to, e.g., the node’s distance, interests, recommendations, transaction history or available resources. When there are no preference cycles among the nodes, a stable matching exists in which nodes have maximized individual satisfaction, due to their choices, however no such guarantees are currently being given in the generic case. In this work, we employ the notion of node satisfaction to suggest a novel modeling for matching problems, suitable for overlay networks. We start by presenting a simple, yet powerful, distributed algorithm that solves the many-to-many matching problem with preferences. It achieves that by using local information and aggregate satisfaction as an optimization metric, while providing a guaranteed convergence and approximation ratio. Subsequently, we show how to extend the algorithm in order to support and adapt to changes in the nodes’ connectivity and preferences. In addition, we provide a detailed experimental study that focuses on the levels of achieved satisfaction, as well as convergence and reconvergence speed.Algorithms2013-11-1964Article10.3390/a60408248248561999-48932013-11-19doi: 10.3390/a6040824Giorgos GeorgiadisMarina Papatriantafilou<![CDATA[Algorithms, Vol. 6, Pages 805-823: PMS6MC: A Multicore Algorithm for Motif Discovery]]>
http://www.mdpi.com/1999-4893/6/4/805
We develop an efficient multicore algorithm, PMS6MC, for the (l, d)-motif discovery problem, in which we are to find all strings of length l that appear in every string of a given set of strings with at most d mismatches. PMS6MC is based on PMS6, which is currently the fastest single-core algorithm for motif discovery in large instances. The speedup, relative to PMS6, attained by our multicore algorithm ranges from a high of 6.62 for the (17,6) challenging instances to a low of 2.75 for the (13,4) challenging instances on an Intel 6-core system. We estimate that PMS6MC is 2 to 4 times faster than other parallel algorithms for motif search on large instances.Algorithms2013-11-1864Article10.3390/a60408058058231999-48932013-11-18doi: 10.3390/a6040805Shibdas BandyopadhyaySartaj SahniSanguthevar Rajasekaran<![CDATA[Algorithms, Vol. 6, Pages 782-804: Stability, Optimality and Manipulation in Matching Problems with Weighted Preferences]]>
http://www.mdpi.com/1999-4893/6/4/782
The stable matching problem (also known as the stable marriage problem) is a well-known problem of matching men to women, such that no man and woman who are not married to each other both prefer each other to their partners. Such a problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools or, more generally, to any two-sided market. In the classical stable marriage problem, both men and women express a strict preference order over the members of the other sex, in a qualitative way. Here, we consider stable marriage problems with weighted preferences: each man (resp., woman) provides a score for each woman (resp., man). Such problems are more expressive than the classical stable marriage problems. Moreover, in some real-life situations, it is more natural to express scores (to model, for example, profits or costs) rather than a qualitative preference ordering. In this context, we define new notions of stability and optimality, and we provide algorithms to find marriages that are stable and/or optimal according to these notions. While expressivity greatly increases by adopting weighted preferences, we show that, in most cases, the desired solutions can be found by adapting existing algorithms for the classical stable marriage problem. We also consider the manipulability properties of the procedures that return such stable marriages. While we know that all procedures are manipulable by modifying the preference lists or by truncating them, here, we consider whether manipulation can also occur by just modifying the weights while preserving the ordering and avoiding truncation. It turns out that, by adding weights, in some cases, we may increase the possibility of manipulating, and this cannot be avoided by any reasonable restriction on the weights.Algorithms2013-11-1864Article10.3390/a60407827828041999-48932013-11-18doi: 10.3390/a6040782Maria PiniFrancesca RossiK. VenableToby Walsh<![CDATA[Algorithms, Vol. 6, Pages 762-781: Very High Resolution Satellite Image Classification Using Fuzzy Rule-Based Systems]]>
http://www.mdpi.com/1999-4893/6/4/762
The aim of this research is to present a detailed, step-by-step method for the classification of very high resolution urban satellite images (VHRSI) into specific classes, such as road, building, vegetation, etc., using fuzzy logic. In this study, object-based image analysis is used for image classification. The main problems in high resolution image classification are the uncertainties in the position of object borders in satellite images and also the multiplex resemblance of the segments to different classes. In order to solve this problem, fuzzy logic is used for image classification, since it provides the possibility of image analysis using multiple parameters without requiring the inclusion of certain thresholds in the class assignment process. In this study, an inclusive semi-automatic method for image classification is offered, which presents the configuration of the related fuzzy functions as well as fuzzy rules. The produced results are compared to the results of a normal classification using the same parameters, but with crisp rules. The overall accuracies and kappa coefficients of the presented method are higher than those of the comparison classifications.Algorithms2013-11-1264Article10.3390/a60407627627811999-48932013-11-12doi: 10.3390/a6040762Shabnam JabariYun Zhang<![CDATA[Algorithms, Vol. 6, Pages 747-761: Multi-Core Parallel Gradual Pattern Mining Based on Multi-Precision Fuzzy Orderings]]>
http://www.mdpi.com/1999-4893/6/4/747
Gradual patterns aim at describing co-variations of data, such as the higher the size, the higher the weight. In recent years, such patterns have been studied more and more from the data mining point of view. The extraction of such patterns relies on efficient and smart orderings that can be built among data: for instance, when ordering the data with respect to the size, the data are also ordered with respect to the weight. However, in many application domains, it is hardly possible to consider that data values are crisply ordered. When considering gene expression, it is not true from the biological point of view that Gene 1 is more expressed than Gene 2 if their levels of expression differ only in the tenth decimal place. We thus consider fuzzy orderings and fuzzy gamma rank correlation. In this paper, we address two major problems related to this framework: (i) the high memory consumption and (ii) the precision, representation and efficient storage of the fuzzy concordance degrees versus the loss or gain of computing power. For this purpose, we consider multi-precision matrices represented using sparse matrices coupled with parallel algorithms. Experimental results show the interest of our proposal.Algorithms2013-11-0164Article10.3390/a60407477477611999-48932013-11-01doi: 10.3390/a6040747Nicolas SicardYogi AryadinataFederico Del Razo LopezAnne LaurentPerfecto Flores<![CDATA[Algorithms, Vol. 6, Pages 726-746: An Efficient Local Search for the Feedback Vertex Set Problem]]>
http://www.mdpi.com/1999-4893/6/4/726
Inspired by many deadlock detection applications, a feedback vertex set is defined as a set of vertices in an undirected graph whose removal would result in a graph without cycles. The Feedback Vertex Set Problem, known to be NP-complete, is to search for a feedback vertex set of minimal cardinality to aid deadlock recovery. To address the issue, this paper presents NewkLS FVS (LS, local search; FVS, feedback vertex set), a variable depth-based local search algorithm with a randomized scheme to optimize efficiency and performance. Experimental simulations are conducted to compare the algorithm with recent metaheuristics, and the computational results show that the proposed algorithm can outperform the other state-of-the-art algorithms and generate satisfactory solutions for most DIMACS benchmarks.Algorithms2013-11-0164Article10.3390/a60407267267461999-48932013-11-01doi: 10.3390/a6040726Zhiqiang ZhangAnsheng YeXiaoqing ZhouZehui Shao<![CDATA[Algorithms, Vol. 6, Pages 702-725: New Parallel Sparse Direct Solvers for Multicore Architectures]]>
http://www.mdpi.com/1999-4893/6/4/702
At the heart of many computations in science and engineering lies the need to efficiently and accurately solve large sparse linear systems of equations. Direct methods are frequently the method of choice because of their robustness, accuracy and potential for use as black-box solvers. In the last few years, there have been many new developments, and a number of new modern parallel general-purpose sparse solvers have been written for inclusion within the HSL mathematical software library. In this paper, we introduce and briefly review these solvers for symmetric sparse systems. We describe the algorithms used, highlight key features (including bit-compatibility and out-of-core working) and then, using problems arising from a range of practical applications, we illustrate and compare their performances. We demonstrate that modern direct solvers are able to accurately solve systems of order 10^6 in less than 3 minutes on a 16-core machine.Algorithms2013-11-0164Article10.3390/a60407027027251999-48932013-11-01doi: 10.3390/a6040702Jonathan HoggJennifer Scott