Algorithms
http://www.mdpi.com/journal/algorithms
Latest open access articles published in Algorithms at http://www.mdpi.com/journal/algorithms

Algorithms, Vol. 9, Pages 71: A Variable Block Insertion Heuristic for the Blocking Flowshop Scheduling Problem with Total Flowtime Criterion
http://www.mdpi.com/1999-4893/9/4/71
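The abstract below describes a block insertion move: a block is removed from the current sequence and reinserted into the partial sequence. A minimal sketch of one such move, assuming a generic `cost` callable in place of the paper's blocking-flowshop total flowtime (function and parameter names here are hypothetical, not from the paper):

```python
import random

def block_insertion_move(seq, block_size, cost):
    """One VBIH-style move: remove a random block of `block_size` jobs and
    reinsert it at the best position found by scanning all insertions."""
    start = random.randrange(len(seq) - block_size + 1)
    block = seq[start:start + block_size]
    partial = seq[:start] + seq[start + block_size:]
    best = None
    for pos in range(len(partial) + 1):          # try every insertion point
        cand = partial[:pos] + block + partial[pos:]
        if best is None or cost(cand) < cost(best):
            best = cand
    return best
```

Since reinserting the block at its original position reproduces the input sequence, the move never worsens the cost.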
In this paper, we present a variable block insertion heuristic (VBIH) algorithm to solve the blocking flowshop scheduling problem with the total flowtime criterion. In the VBIH algorithm, we define a minimum and a maximum block size. After constructing the initial sequence, the VBIH algorithm starts with the minimum block size of one. It removes a block from the current sequence and inserts it into the partial sequence sequentially with a predetermined move size. The sequence obtained after several block moves undergoes a variable local search (VLS) based on traditional insertion and swap neighborhood structures. If the new sequence obtained after the VLS is better than the current sequence, it replaces the current sequence. As long as it improves, the block size is kept the same. If it does not improve, the block size is incremented by one, and a simulated annealing-type acceptance criterion is used to accept the current sequence. This process is repeated until the block size reaches the maximum block size. Furthermore, we present a novel constructive heuristic based on the profile fitting heuristic from the literature. The proposed constructive heuristic is able to further improve the best known solutions for some larger instances in a few seconds. Parameters of the constructive heuristic and the VBIH algorithm are determined through a design-of-experiments approach. Extensive computational results on Taillard's well-known benchmark suite show that the proposed VBIH algorithm outperforms the discrete artificial bee colony algorithm, one of the most efficient recent algorithms in the literature. Ultimately, 52 out of the 150 best known solutions are further improved by substantial margins.
Algorithms 2016, 9(4), 71; Article; doi: 10.3390/a9040071; published 2016-10-20; ISSN 1999-4893
Authors: Mehmet Tasgetiren, Quan-Ke Pan, Damla Kizilay, Kaizhou Gao

Algorithms, Vol. 9, Pages 70: Plant Electrical Signal Classification Based on Waveform Similarity
http://www.mdpi.com/1999-4893/9/4/70
(1) Background: Plant electrical signals are important physiological traits that reflect the plant's physiological state. As a kind of phenotypic data, the plant action potential (AP) evoked by external stimuli—e.g., electrical stimulation or environmental stress—may be associated with the inhibition of gene expression related to stress tolerance. However, a plant AP is a response to environmental changes and is full of variability. It is an aperiodic signal with a refractory period, discontinuity, noise, and artifacts. Consequently, automatically recognizing and classifying plant APs remains challenging. (2) Methods: We therefore proposed an AP recognition algorithm based on a dynamic difference threshold to extract all waveforms similar to APs. Next, an incremental template matching algorithm was used to classify the AP and non-AP waveforms. (3) Results: Experimental results indicated that the template matching algorithm achieved a classification rate of 96.0% and was superior to backpropagation artificial neural networks (BP-ANNs), support vector machines (SVMs), and a deep learning method. (4) Conclusion: These findings imply that the proposed methods are likely to expand the possibilities for rapidly recognizing and classifying plant action potentials in databases in the future.
Algorithms 2016, 9(4), 70; Article; doi: 10.3390/a9040070; published 2016-10-15
Authors: Yang Chen, Dong-Jie Zhao, Zi-Yang Wang, Zhong-Yi Wang, Guiliang Tang, Lan Huang

Algorithms, Vol. 9, Pages 69: A New Fuzzy Harmony Search Algorithm Using Fuzzy Logic for Dynamic Parameter Adaptation
http://www.mdpi.com/1999-4893/9/4/69
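The abstract below adjusts the HMR and PArate parameters dynamically in the range [0.7, 1]. A sketch of one harmony-search improvisation step with externally supplied parameters; a simple linear ramp stands in for the paper's fuzzy controller (all names and values here are illustrative, not the paper's):

```python
import random

def improvise(memory, bounds, hmr, pa_rate, bw=0.1):
    """One harmony-search improvisation: per dimension, either draw from
    harmony memory (possibly pitch-adjusted) or re-initialize at random."""
    new = []
    for d, (lo, hi) in enumerate(bounds):
        if random.random() < hmr:            # harmony memory considering
            x = random.choice(memory)[d]
            if random.random() < pa_rate:    # pitch adjustment
                x += random.uniform(-bw, bw)
        else:
            x = random.uniform(lo, hi)       # random re-initialization
        new.append(min(max(x, lo), hi))      # clamp to the search bounds
    return new

def ramp(it, max_it, lo=0.7, hi=1.0):
    """Stand-in parameter controller: move a parameter linearly lo -> hi."""
    return lo + (hi - lo) * it / max_it
```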
In this paper, a new fuzzy harmony search algorithm (FHS) for solving optimization problems is presented. FHS is based on a recent method that uses fuzzy logic for dynamic adaptation of the harmony memory accepting (HMR) and pitch adjustment (PArate) parameters, which improves the convergence rate of the traditional harmony search algorithm (HS). The objective of the method is to dynamically adjust the parameters in the range from 0.7 to 1. The impact of using fixed parameters in the harmony search algorithm is discussed, and a strategy for efficiently tuning these parameters using fuzzy logic is presented. The FHS algorithm was successfully applied to different benchmark optimization problems. The results of simulation and comparison studies demonstrate the effectiveness and efficiency of the proposed approach.
Algorithms 2016, 9(4), 69; Article; doi: 10.3390/a9040069; published 2016-10-14
Authors: Cinthia Peraza, Fevrier Valdez, Mario Garcia, Patricia Melin, Oscar Castillo

Algorithms, Vol. 9, Pages 68: An Overview on the Applications of Matrix Theory in Wireless Communications and Signal Processing
http://www.mdpi.com/1999-4893/9/4/68
This paper overviews the key applications enabled by matrix theory in two major fields of interest in electrical engineering, namely wireless communications and signal processing. The paper focuses on the fundamental role played by matrices in modeling and optimization of wireless communication systems, and in detection, extraction and processing of the information embedded in signals. Among the major applications in wireless communications, the role of matrix representations and decompositions in characterizing multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) communication systems is described. In addition, this paper points out the important contribution made by matrices in solving signal estimation and detection problems. Special attention is given to the implementation of matrices in sensor array signal processing and the design of adaptive filters. Furthermore, the crucial role played by matrices in representing and processing digital images is depicted by several illustrative applications. This paper concludes with some applications of matrix theory in the area of compressive sensing of signals and by outlining a few open research problems for future study.
Algorithms 2016, 9(4), 68; Review; doi: 10.3390/a9040068; published 2016-10-14
Authors: Xu Wang, Erchin Serpedin

Algorithms, Vol. 9, Pages 67: Comparison of Different Algorithms to Orthorectify WorldView-2 Satellite Imagery
http://www.mdpi.com/1999-4893/9/4/67
Due to their level of spatial detail (pixel dimensions equal to or less than 1 m), very high-resolution satellite images (VHRSIs) need particular georeferencing and geometric corrections, which require careful orthorectification. Although there are several dedicated algorithms, mainly commercial and free software for geographic information system (GIS) and remote sensing applications, the quality of the results may be inadequate in terms of the representation scale for which these images are intended. This paper compares the most common orthorectification algorithms in order to define the best approach for VHRSIs. Both empirical models (such as 2D polynomial functions, PFs; or 3D rational polynomial functions, RPFs) and rigorous physical and deterministic models (such as Toutin) are considered. Ground control points (GCPs) and check points (CPs)—whose positions in the image, as well as in the real world, are known—support algorithm applications. Tests were executed on a WorldView-2 (WV-2) panchromatic image of an area near the Gulf of Naples in Campania (Italy) to establish the best-performing algorithm. Combining 3D RPFs with 2D PFs produced the best results.
Algorithms 2016, 9(4), 67; Article; doi: 10.3390/a9040067; published 2016-10-11
Authors: Oscar Belfiore, Claudio Parente

Algorithms, Vol. 9, Pages 66: Fault Sensing Using Fractal Dimension and Wavelet
http://www.mdpi.com/1999-4893/9/4/66
A new fusion sensing (FS) method is proposed that uses the improved fractal box dimension (IFBD) and a developed maximum wavelet coefficient (DMWC) for fault sensing of an online power cable. Four strategies were used. Firstly, the traditional fractal box dimension was improved to enlarge the feature distances between the different fault classes. Secondly, the IFBD recognition algorithm was proposed, using the improved fractal dimension feature extracted from the three-phase currents for the first stage of fault recognition. Thirdly, the DMWC recognition algorithm was developed based on the K-transform and wavelet analysis to establish the relationship between the maximum wavelet coefficient and the fault class. Fourthly, the FS method was formed by combining the IFBD algorithm and the DMWC algorithm in order to recognize the 10 types of short-circuit faults of online power cables. The designed test system showed that the FS method noticeably increased the fault recognition accuracy. In addition, the initial angle, transient resistance, and fault distance parameters had no influence on the FS method.
Algorithms 2016, 9(4), 66; Article; doi: 10.3390/a9040066; published 2016-10-11
Authors: Mei Wang, Liang Zhu, Yanan Guo

Algorithms, Vol. 9, Pages 65: Local Convergence Analysis of an Eighth Order Scheme Using Hypothesis Only on the First Derivative
http://www.mdpi.com/1999-4893/9/4/65
In this paper, we propose a local convergence analysis of an eighth-order three-step method for approximating a locally unique solution of a nonlinear equation in a Banach space setting. Further, we also study the dynamic behaviour of the scheme. In an earlier study, Sharma and Arora (2015) did not discuss these properties. Furthermore, the order of convergence was shown using Taylor series expansions and hypotheses on derivatives up to the fourth order or even higher of the function involved, which restricts the applicability of the scheme. However, only first-order derivatives appear in the proposed scheme. To overcome this problem, we present hypotheses for the proposed scheme involving only the first-order derivative. In this way, we not only expand the applicability of the methods but also suggest a convergence domain. Finally, a variety of concrete numerical examples are presented in which earlier studies cannot be applied to obtain the solutions of nonlinear equations, whereas our study does not exhibit this type of restriction.
Algorithms 2016, 9(4), 65; Article; doi: 10.3390/a9040065; published 2016-09-29
Authors: Ioannis Argyros, Ramandeep Behl, Sandile Motsa

Algorithms, Vol. 9, Pages 64: Theorietage der Gesellschaft für Informatik in Speyer 2015—Special Issue
http://www.mdpi.com/1999-4893/9/4/64
We briefly report on the national workshops on Formal Languages and Automata Theory, as well as on Algorithms and Complexity Theory, held in early autumn 2015.
Algorithms 2016, 9(4), 64; Meeting Report; doi: 10.3390/a9040064; published 2016-09-26
Author: Henning Fernau

Algorithms, Vol. 9, Pages 62: Noise Reduction of Steel Cord Conveyor Belt Defect Electromagnetic Signal by Combined Use of Improved Wavelet and EMD
http://www.mdpi.com/1999-4893/9/4/62
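The improved-threshold wavelet step in the entry below shrinks noisy detail coefficients. For orientation, here is the classic soft-threshold rule that such improved thresholds build on (the paper's improved threshold differs in its details):

```python
def soft_threshold(coeffs, t):
    """Classic soft-thresholding of wavelet detail coefficients: values
    with magnitude <= t are zeroed; larger ones are shrunk toward zero by t."""
    out = []
    for c in coeffs:
        if abs(c) <= t:
            out.append(0.0)
        else:
            out.append(c - t if c > 0 else c + t)
    return out
```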
In order to reduce the noise of the defect electromagnetic signal of steel cord conveyor belts used in coal mines, a new signal noise reduction method combining the improved-threshold wavelet and Empirical Mode Decomposition (EMD) is proposed. Firstly, the denoising method based on the improved-threshold wavelet is applied to reduce the noise of a defect electromagnetic signal obtained by an electromagnetic testing system. Then, EMD is used to decompose the denoised signal, and the effective Intrinsic Mode Functions (IMFs) are extracted by a dominant-eigenvalue strategy. Finally, signal reconstruction is carried out using the obtained IMFs. To verify the proposed noise reduction method, experiments were carried out for two cases: a defective joint and a steel wire rope break. The experimental results show that the proposed method achieves a higher signal-to-noise ratio (SNR) for defect electromagnetic signal noise reduction in steel cord conveyor belts.
Algorithms 2016, 9(4), 62; Article; doi: 10.3390/a9040062; published 2016-09-26
Authors: Hong-Wei Ma, Hong-Wei Fan, Qing-Hua Mao, Xu-Hui Zhang, Wang Xing

Algorithms, Vol. 9, Pages 63: A Practical and Robust Execution Time-Frame Procedure for the Multi-Mode Resource-Constrained Project Scheduling Problem with Minimal and Maximal Time Lags
http://www.mdpi.com/1999-4893/9/4/63
Modeling and optimizing organizational processes, such as the one represented by the Resource-Constrained Project Scheduling Problem (RCPSP), improves outcomes. Based on assumptions and simplifications, this model tackles the allocation of resources so that organizations can continue to generate profits and reinvest in future growth. Nonetheless, despite all of the research dedicated to solving the RCPSP and its multi-mode variations, there is no standardized procedure that can guide project management practitioners in their scheduling tasks. This is mainly because many of the proposed approaches are either based on unrealistic/oversimplified scenarios or they propose solution procedures not easily applicable or even feasible in real-life situations. In this study, we solve a more true-to-life and complex model, the multi-mode RCPSP with minimal and maximal time lags (MRCPSP/max). The complexity of the model is presented, and the practicality of the proposed approach is justified, as it depends only on information that is available for every project regardless of its industrial context. The results confirm that it is possible to determine a robust makespan and to calculate an execution time-frame with gaps lower than 11% between their lower and upper bounds. In addition, in many instances, the lower bound obtained was equal to the best-known optimum.
Algorithms 2016, 9(4), 63; Article; doi: 10.3390/a9040063; published 2016-09-24
Authors: Angela Chen, Yun-Chia Liang, Jose Padilla

Algorithms, Vol. 9, Pages 61: Noncircular Sources-Based Sparse Representation Algorithm for Direction of Arrival Estimation in MIMO Radar with Mutual Coupling
http://www.mdpi.com/1999-4893/9/3/61
In this paper, a reweighted sparse representation algorithm based on noncircular sources is proposed to address the direction of arrival (DOA) estimation problem for multiple-input multiple-output (MIMO) radar with mutual coupling. Making full use of the special structure of banded symmetric Toeplitz mutual coupling matrices (MCMs), the proposed algorithm first eliminates the effect of mutual coupling by a linear transformation. Then, a reduced-dimensional transformation is exploited to reduce the computational complexity of the proposed algorithm. Furthermore, by utilizing the noncircular feature of the signals, a new extended received data matrix is formulated to enlarge the array aperture. Finally, based on the new received data, a reweighted matrix is constructed, and the proposed method designs a joint reweighted sparse representation scheme to achieve DOA estimation by solving an l1-norm-constrained minimization problem. The proposed method enlarges the array aperture owing to the use of signal noncircularity and, in the presence of mutual coupling, provides higher resolution and better angle estimation performance than the ESPRIT-like, l1-SVD, and l1-SRDML (sparse representation deterministic maximum likelihood) algorithms. Numerical experiments verify the effectiveness and advantages of the proposed method.
Algorithms 2016, 9(3), 61; Article; doi: 10.3390/a9030061; published 2016-09-08
Authors: Weidong Zhou, Jing Liu, Pengxiang Zhu, Wenhe Gong, Jiaxin Hou

Algorithms, Vol. 9, Pages 60: HMM Adaptation for Improving a Human Activity Recognition System
http://www.mdpi.com/1999-4893/9/3/60
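The MAP adaptation in the entry below interpolates user-independent HMM parameters with data from a specific user. A sketch of the mean update for a single Gaussian, with a hypothetical prior-weight parameter `tau` (names and values here are illustrative, not the paper's):

```python
def map_adapt_mean(prior_mean, user_samples, tau=10.0):
    """MAP adaptation of one state's Gaussian mean: interpolate the
    user-independent prior mean with the user's sample mean; tau
    controls how strongly the prior is trusted."""
    n = len(user_samples)
    xbar = sum(user_samples) / n
    return (tau * prior_mean + n * xbar) / (tau + n)
```

With few user samples the adapted mean stays near the prior; as more samples arrive it moves toward the user's own statistics.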
When developing a fully automatic system for evaluating motor activities performed by a person, it is necessary to segment and recognize the different activities in order to focus the analysis. This process must be carried out by a Human Activity Recognition (HAR) system. This paper proposes a user adaptation technique for improving a HAR system based on Hidden Markov Models (HMMs). This system segments and recognizes six different physical activities (walking, walking upstairs, walking downstairs, sitting, standing and lying down) using inertial signals from a smartphone. The system is composed of a feature extractor for obtaining the most relevant characteristics from the inertial signals, a module for training the six HMMs (one per activity), and a final module for segmenting new activity sequences using these models. The user adaptation technique consists of a Maximum A Posteriori (MAP) approach that adapts the activity HMMs to the user, using some activity examples from that specific user. Results on a public dataset show a significant relative error rate reduction of more than 30%. In conclusion, adapting a HAR system to the user who is performing the physical activities provides a significant improvement in the system's performance.
Algorithms 2016, 9(3), 60; Article; doi: 10.3390/a9030060; published 2016-09-02
Authors: Rubén San-Segundo, Juan Montero, José Moreno-Pimentel, José Pardo

Algorithms, Vol. 9, Pages 59: Binary Cockroach Swarm Optimization for Combinatorial Optimization Problem
http://www.mdpi.com/1999-4893/9/3/59
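The entry below maps continuous CSO positions to bits through a transfer function. One common scheme is a sigmoid probability followed by a random draw (a sketch of the general idea; the paper's exact transfer function may differ):

```python
import math

def transfer(x):
    """Sigmoid transfer function mapping a continuous component to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rnd):
    """Map a continuous position to a bit vector: each bit is 1 with
    probability given by the transfer function; `rnd` supplies uniform
    draws in [0, 1) so the mapping can be made deterministic for tests."""
    return [1 if rnd() < transfer(x) else 0 for x in position]
```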
The Cockroach Swarm Optimization (CSO) algorithm is inspired by cockroach social behavior. It is a simple and efficient meta-heuristic algorithm and has been applied to solve global optimization problems successfully. The original CSO algorithm and its variants operate mainly in continuous search space and cannot solve binary-coded optimization problems directly. However, many optimization problems have binary decision variables. Binary Cockroach Swarm Optimization (BCSO) is proposed in this paper to tackle such problems and was evaluated on the popular Traveling Salesman Problem (TSP), which is considered to be an NP-hard Combinatorial Optimization Problem (COP). A transfer function was employed to map the continuous search space of CSO to a binary search space. The performance of the proposed algorithm was first tested on benchmark functions through simulation studies and compared with the performance of existing binary particle swarm optimization and the continuous-space versions of CSO. The proposed BCSO was then adapted to the TSP and applied to a set of benchmark instances of symmetric TSP from the TSP library. The results of the proposed BCSO algorithm on the TSP were compared to those of other meta-heuristic algorithms.
Algorithms 2016, 9(3), 59; Article; doi: 10.3390/a9030059; published 2016-09-02
Authors: Ibidun Obagbuwa, Ademola Abidoye

Algorithms, Vol. 9, Pages 58: LR Parsing for LCFRS
http://www.mdpi.com/1999-4893/9/3/58
LR parsing is a popular parsing strategy for variants of Context-Free Grammar (CFG). It has also been used for mildly context-sensitive formalisms, such as Tree-Adjoining Grammar. In this paper, we present the first LR-style parsing algorithm for Linear Context-Free Rewriting Systems (LCFRS), a mildly context-sensitive extension of CFG which has received considerable attention in recent years in the context of natural language processing.
Algorithms 2016, 9(3), 58; Article; doi: 10.3390/a9030058; published 2016-08-27
Authors: Laura Kallmeyer, Wolfgang Maier

Algorithms, Vol. 9, Pages 57: Uniform Page Migration Problem in Euclidean Space
http://www.mdpi.com/1999-4893/9/3/57
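The cost model in the entry below charges a service cost per request plus a migration cost scaled by the page size. A 1-D sketch with a pluggable online migration rule (names are illustrative; the paper works in Euclidean space with page size D = 1):

```python
def serve_requests(server, requests, migrate, page_size=1.0):
    """Total cost of serving a request sequence in the page migration
    model: each request costs |server - request|; moving the page to m
    costs page_size * |server - m|. `migrate` is the online rule that
    picks the new server location after each request."""
    cost = 0.0
    for r in requests:
        cost += abs(server - r)                  # service cost
        m = migrate(server, r)
        cost += page_size * abs(server - m)      # migration cost
        server = m
    return cost
```

For example, never migrating pays the full distance on every request, while greedily jumping to the requester pays one migration and then serves repeats for free; a competitive algorithm must balance the two online.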
The page migration problem in Euclidean space is revisited. In this problem, online requests occur at any location to access a single page located at a server. Every request must be served, and the server has the choice to migrate from its current location to a new location in space. Each service costs the Euclidean distance between the server and the request. A migration costs the distance between the former and the new server location, multiplied by the page size. We study the problem in the uniform model, in which the page has size D = 1. Request locations are not known in advance; they are presented sequentially in an online fashion. We design a 2.75-competitive online algorithm that improves the current best upper bound for the problem with unit page size. We also provide a lower bound of 2.732 for our algorithm. It was already known that 2.5 is a lower bound for this problem.
Algorithms 2016, 9(3), 57; Article; doi: 10.3390/a9030057; published 2016-08-23
Authors: Amanj Khorramian, Akira Matsubayashi

Algorithms, Vol. 9, Pages 56: Multiple Artificial Neural Networks with Interaction Noise for Estimation of Spatial Categorical Variables
http://www.mdpi.com/1999-4893/9/3/56
This paper presents a multiple artificial neural networks (MANN) method with interaction noise for estimating the occurrence probabilities of different classes at any site in space. The MANN consists of several independent artificial neural networks, the number of which is determined by the neighbors around the target location. In the proposed algorithm, the conditional or pre-posterior (multi-point) probabilities are viewed as output nodes, which can be estimated by weighted combinations of input nodes: two-point transition probabilities. The occurrence probability of a certain class at a certain location can be easily computed by the product of output probabilities using Bayes' theorem. Spatial interaction or redundancy information can be measured in the form of interaction noises. Prediction results show that the method of MANN with interaction noise has a higher classification accuracy than the traditional Markov chain random fields (MCRF) model and can successfully preserve small-scale features.
Algorithms 2016, 9(3), 56; Article; doi: 10.3390/a9030056; published 2016-08-20
Authors: Xiang Huang, Zhizhong Wang

Algorithms, Vol. 9, Pages 55: A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis
http://www.mdpi.com/1999-4893/9/3/55
With the rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost-effective and adaptable measurement device for monitoring wheelchair kinematics; however, the outcomes are dependent on the reliability of the processing algorithms. Though a variety of algorithms have been proposed to monitor wheelchair propulsion in court sports, they all have limitations. Through experimental testing, we have shown the Attitude and Heading Reference System (AHRS)-based algorithm to be a suitable and reliable candidate algorithm for estimating velocity, distance, and approximating trajectory. The proposed algorithm is computationally inexpensive, agnostic of wheel camber, not sensitive to sensor placement, and can be embedded for real-time implementations. The research is conducted under Griffith University Ethics (GU Ref No: 2016/294).
Algorithms 2016, 9(3), 55; Article; doi: 10.3390/a9030055; published 2016-08-17
Authors: Jonathan Shepherd, Tomohito Wada, David Rowlands, Daniel James

Algorithms, Vol. 9, Pages 54: Sign Function Based Sparse Adaptive Filtering Algorithms for Robust Channel Estimation under Non-Gaussian Noise Environments
http://www.mdpi.com/1999-4893/9/3/54
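The sign-function update in the entry below replaces the raw error by its sign, which bounds the step taken under impulsive noise, and the sparse variants add a sparsity-promoting term. A sketch of one such update with a zero-attracting (l1-style) penalty; `mu` and `rho` values are illustrative and the paper's exact penalty functions may differ:

```python
def sign_lms_step(w, x, d, mu=0.01, rho=1e-4):
    """One sign-function LMS update: the sign of the error keeps the
    step bounded under impulsive noise; the rho term gently pulls small
    taps toward zero to exploit channel sparsity."""
    y = sum(wi * xi for wi, xi in zip(w, x))   # filter output
    e = d - y                                  # estimation error
    sgn = (e > 0) - (e < 0)                    # sign of the error
    return [wi + mu * sgn * xi - rho * ((wi > 0) - (wi < 0))
            for wi, xi in zip(w, x)]
```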
Robust channel estimation is required for coherent demodulation in multipath fading wireless communication systems, which are often degraded by non-Gaussian noise. Our research is motivated by the fact that classical sparse least mean square (LMS) algorithms are very sensitive to impulsive noise, while the standard SLMS algorithm does not take into account the inherent sparsity of wireless channels. This paper proposes sign-function-based sparse adaptive filtering algorithms for developing robust channel estimation techniques. Specifically, sign-function-based least mean square (SLMS) algorithms are adopted to mitigate the non-Gaussian noise, which is described by a symmetric α-stable noise model. By exploiting channel sparsity, sparse SLMS algorithms are proposed by introducing several effective sparsity-promoting functions into the standard SLMS algorithm. The convergence analysis of the proposed sparse SLMS algorithms indicates that they outperform the standard SLMS algorithm for robust sparse channel estimation, which is also verified by simulation results.
Algorithms 2016, 9(3), 54; Article; doi: 10.3390/a9030054; published 2016-08-12
Authors: Tingping Zhang, Guan Gui

Algorithms, Vol. 9, Pages 52: Control for Ship Course-Keeping Using Optimized Support Vector Machines
http://www.mdpi.com/1999-4893/9/3/52
Support vector machines (SVMs) are proposed in order to obtain a robust controller for ship course-keeping. A cascaded system is constructed by combining the dynamics of the rudder actuator with the dynamics of ship motion. Modeling errors and disturbances are taken into account in the plant. A controller with a simple structure is produced by applying an SVM and L2-gain design. The SVM is used to identify the complicated nonlinear functions and the modeling errors in the plant. The Lagrangian factors in the SVM are obtained using online tuning algorithms. L2-gain design is applied to suppress the disturbances. To obtain the optimal parameters of the SVM, the particle swarm optimization (PSO) method is incorporated. The stability and robustness of the closed-loop system are confirmed by Lyapunov stability analysis. Numerical simulation is performed to demonstrate the validity of the proposed hybrid controller and its superior performance over a conventional PD controller.
Algorithms 2016, 9(3), 52; Article; doi: 10.3390/a9030052; published 2016-08-10
Authors: Weilin Luo, Hongchao Cong

Algorithms, Vol. 9, Pages 53: Faster Force-Directed Graph Drawing with the Well-Separated Pair Decomposition
http://www.mdpi.com/1999-4893/9/3/53
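The entry below replaces the quadratic force loop by a well-separated pair decomposition. For contrast, here is the naive O(n²) iteration being approximated, in the usual spring-electrical style (constants and names are illustrative, not the paper's):

```python
def force_step(pos, edges, c_rep=1.0, c_spring=0.1, step=0.01):
    """One naive force-directed iteration in 2-D: repulsion between every
    vertex pair (the quadratic loop that the WSPD approximation replaces)
    plus spring attraction along edges."""
    n = len(pos)
    disp = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):                       # O(n^2) pairwise repulsion
        for j in range(i + 1, n):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            d2 = dx * dx + dy * dy or 1e-9   # avoid division by zero
            f = c_rep / d2
            disp[i][0] += f * dx; disp[i][1] += f * dy
            disp[j][0] -= f * dx; disp[j][1] -= f * dy
    for i, j in edges:                       # spring attraction on edges
        dx = pos[j][0] - pos[i][0]
        dy = pos[j][1] - pos[i][1]
        disp[i][0] += c_spring * dx; disp[i][1] += c_spring * dy
        disp[j][0] -= c_spring * dx; disp[j][1] -= c_spring * dy
    return [(x + step * fx, y + step * fy)
            for (x, y), (fx, fy) in zip(pos, disp)]
```

The WSPD idea is to replace the inner pairwise loop by one interaction per well-separated cluster pair, reducing the per-iteration work to O(m + n log n).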
The force-directed paradigm is one of the few generic approaches to drawing graphs. Since force-directed algorithms can be extended easily, they are used frequently. Most of these algorithms are, however, quite slow on large graphs, as they compute a quadratic number of forces in each iteration. We give a new algorithm that takes only O(m + n log n) time per iteration when laying out a graph with n vertices and m edges. Our algorithm approximates the true forces using the so-called well-separated pair decomposition. We perform experiments on a large number of graphs and show that we can strongly reduce the runtime, even on graphs with less than a hundred vertices, without a significant influence on the quality of the drawings (in terms of the number of crossings and deviation in edge lengths).
Algorithms 2016, 9(3), 53; Article; doi: 10.3390/a9030053; published 2016-08-04
Authors: Fabian Lipp, Alexander Wolff, Johannes Zink

Algorithms, Vol. 9, Pages 51: A Multi-Objective Harmony Search Algorithm for Sustainable Design of Floating Settlements
http://www.mdpi.com/1999-4893/9/3/51
This paper is concerned with the application of computational intelligence techniques to the conceptual design and development of a large-scale floating settlement. The settlement in question is a design for the area of Urla, a rural touristic region located on the west coast of Turkey, near the metropolis of Izmir. The problem at hand includes both engineering and architectural aspects that need to be addressed in a comprehensive manner. We thus treat it as a multi-objective constrained real-parameter optimization problem. Specifically, we consider three conflicting objectives. The first aims at maximizing accessibility of urban functions such as housing and public spaces, as well as special functions, such as a marina for yachts and a yacht club. The second aims at ensuring wind protection of the general areas of the settlement, by adequately placing them between neighboring land masses. The third aims at maximizing visibility of the settlement from external observation points, so as to maximize the exposure of the settlement. To address this complex multi-objective optimization problem and identify lucrative alternative design solutions, a multi-objective harmony search algorithm (MOHS) is developed and applied in this paper. When compared to the Differential Evolution algorithm developed for the problem in the literature, we demonstrate that MOHS achieves competitive or slightly better performance in terms of hypervolume, and gives promising results when the Pareto front approximation is examined.
Algorithms 2016, 9(3), 51; Article; doi: 10.3390/a9030051; published 2016-07-30
Authors: Cemre Cubukcuoglu, Ioannis Chatzikonstantinou, Mehmet Tasgetiren, I. Sariyildiz, Quan-Ke Pan

Algorithms, Vol. 9, Pages 50: Utilizing Network Structure to Accelerate Markov Chain Monte Carlo Algorithms
http://www.mdpi.com/1999-4893/9/3/50
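Aggregating states into clusters, as described in the entry below, can be sketched on a transition matrix; for a quasi-lumpable chain the aggregated matrix is only an approximation (this simple averaging rule is an illustrative choice, not necessarily the paper's construction):

```python
def lump(P, clusters):
    """Aggregate a transition matrix over state clusters: entry (I, J) is
    the average, over states of cluster I, of their total probability of
    moving into cluster J. Exact only for lumpable chains; for
    quasi-lumpable chains this is an approximation."""
    Q = []
    for ci in clusters:
        row = []
        for cj in clusters:
            mass = sum(P[s][t] for s in ci for t in cj)
            row.append(mass / len(ci))
        Q.append(row)
    return Q
```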
We consider the problem of estimating the measure of subsets in very large networks. A prime tool for this purpose is the Markov Chain Monte Carlo (MCMC) algorithm. This algorithm, while extremely useful in many cases, still often suffers from the drawback of very slow convergence. We show that in a special, but important case, it is possible to obtain significantly better bounds on the convergence rate. This special case is when the huge state space can be aggregated into a smaller number of clusters, in which the states behave approximately the same way (but their behavior still may not be identical). A Markov chain with this structure is called quasi-lumpable. This property allows the aggregation of states (nodes) into clusters. Our main contribution is a rigorously proved bound on the rate at which the aggregated state distribution approaches its limit in quasi-lumpable Markov chains. We also demonstrate numerically that in certain cases this can indeed lead to a significantly accelerated way of estimating the measure of subsets. The result can be a useful tool in the analysis of complex networks, whenever they have a clustering that aggregates nodes with similar (but not necessarily identical) behavior.
Algorithms 2016, 9(3), 50; Article; doi: 10.3390/a9030050; published 2016-07-29
Authors: Ahmad Askarian, Rupei Xu, András Faragó

Algorithms, Vol. 9, Pages 49: Data Filtering Based Recursive and Iterative Least Squares Algorithms for Parameter Estimation of Multi-Input Output Systems
http://www.mdpi.com/1999-4893/9/3/49
This paper discusses the parameter estimation problems of multi-input output-error autoregressive (OEAR) systems. By combining the auxiliary model identification idea and the data filtering technique, a data filtering based recursive generalized least squares (F-RGLS) identification algorithm and a data filtering based iterative least squares (F-LSI) identification algorithm are derived. Compared with the F-RGLS algorithm, the proposed F-LSI algorithm is more effective and can generate more accurate parameter estimates. The simulation results confirm this conclusion.
Algorithms 2016, 9(3), 49; Article; doi: 10.3390/a9030049; published 2016-07-26
Author: Jiling Ding

Algorithms, Vol. 9, Pages 48: Semi-Supervised Classification Based on Low Rank Representation
http://www.mdpi.com/1999-4893/9/3/48
Graph-based semi-supervised classification uses a graph to capture the relationship between samples and exploits label propagation techniques on the graph to predict the labels of unlabeled samples. However, it is difficult to construct a graph that faithfully describes the relationship between high-dimensional samples. Recently, low-rank representation has been introduced to construct a graph, which can preserve the global structure of high-dimensional samples and help to train accurate transductive classifiers. In this paper, we take advantage of low-rank representation for graph construction and propose an inductive semi-supervised classifier called Semi-Supervised Classification based on Low-Rank Representation (SSC-LRR). SSC-LRR first utilizes a linearized alternating direction method with adaptive penalty to compute the coefficient matrix of the low-rank representation of samples. Then, the coefficient matrix is adopted to define a graph. Finally, SSC-LRR incorporates this graph into a graph-based semi-supervised linear classifier to classify unlabeled samples. Experiments are conducted on four widely used facial datasets to validate the effectiveness of the proposed SSC-LRR, and the results demonstrate that SSC-LRR achieves higher accuracy than other related methods.
Algorithms 2016, 9(3), 48; Article; doi: 10.3390/a9030048; published 2016-07-22
Authors: Xuan Hou, Guangjun Yao, Jun Wang

Algorithms, Vol. 9, Pages 47: A Hybrid Course Recommendation System by Integrating Collaborative Filtering and Artificial Immune Systems
http://www.mdpi.com/1999-4893/9/3/47
This research proposes a two-stage user-based collaborative filtering process using an artificial immune system for the prediction of student grades, along with a filter for professor ratings in the course recommendation for college students. We test cosine similarity and Karl Pearson (KP) correlation in affinity calculations for clustering and prediction. This research uses student information and professor information datasets of Yuan Ze University from the years 2005–2009 for the purpose of testing and training. The mean average error and confusion matrix analysis form the testing parameters. A minimum professor rating was tested to check the results, and it was observed that the recommendation systems herein provide highly accurate results for students with higher mean grades.Algorithms2016-07-2293Article10.3390/a9030047471999-48932016-07-22doi: 10.3390/a9030047Pei-Chann ChangCheng-Hui LinMeng-Hui Chen<![CDATA[Algorithms, Vol. 9, Pages 46: Affinity Propagation Clustering Using Path Based Similarity]]>
http://www.mdpi.com/1999-4893/9/3/46
Clustering is a fundamental task in data mining. Affinity propagation clustering (APC) is an effective and efficient clustering technique that has been applied in various domains. APC iteratively propagates information between affinity samples, updates the responsibility matrix and availability matrix, and employs these matrices to choose cluster centers (or exemplars) of respective clusters. However, since it mainly uses negative Euclidean distance between exemplars and samples as the similarity between them, it is difficult to identify clusters with complex structure. Therefore, the performance of APC deteriorates on samples distributed with complex structure. To mitigate this problem, we propose an improved APC based on a path-based similarity (APC-PS). APC-PS firstly utilizes negative Euclidean distance to find exemplars of clusters. Then, it employs the path-based similarity to measure the similarity between exemplars and samples, and to explore the underlying structure of clusters. Next, it assigns non-exemplar samples to their respective clusters via that similarity. Our empirical study on synthetic and UCI datasets shows that the proposed APC-PS significantly outperforms original APC and other related approaches.Algorithms2016-07-2193Article10.3390/a9030046461999-48932016-07-21doi: 10.3390/a9030046Yuan JiangYuliang LiaoGuoxian Yu<![CDATA[Algorithms, Vol. 9, Pages 45: Designing a Framework to Improve Time Series Data of Construction Projects: Application of a Simulation Model and Singular Spectrum Analysis]]>
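As a rough illustration of the path-based similarity idea behind APC-PS (a sketch of the general measure, not necessarily the paper's exact formulation), the similarity between two samples can be taken as the maximum over all connecting paths of the minimum edge similarity along the path, computable with a Floyd–Warshall-style update:

```python
def path_based_similarity(sim):
    """Given a symmetric pairwise similarity matrix `sim`, return the
    path-based similarity: for each pair (i, j), the maximum over all
    paths from i to j of the minimum edge similarity along the path.
    Computed with a Floyd-Warshall-style max-min closure."""
    n = len(sim)
    s = [row[:] for row in sim]  # copy so the input is not modified
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # A path through k is as strong as its weakest link.
                s[i][j] = max(s[i][j], min(s[i][k], s[k][j]))
    return s
```

Under this measure, two samples joined by a chain of highly similar intermediate samples become highly similar themselves, which is what lets the method follow elongated or otherwise complex cluster structures.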
http://www.mdpi.com/1999-4893/9/3/45
During a construction project life cycle, project cost and time estimations contribute greatly to baseline scheduling. Schedule risk analysis and project control are also influenced by these factors. Although many papers have offered estimation techniques, little attempt has been made to generate project time series data as daily progressive estimations in different project environments, data that could help researchers derive general and customized formulae in further studies. This paper introduces a new simulation approach that reflects the time series progress of a project, considering the specifications and complexity of the project and the environment in which it is performed. Moreover, this simulator can equip project managers with estimated information about the execution stages of the project even when historical data are lacking. A case study is presented to show the usefulness of the model and its applicability in practice. In this study, singular spectrum analysis has been employed to analyze the simulated outputs, and the results are separated into their signal and noise trends. The signal trend is used as a point of reference to compare the simulation outputs with S-curve technique results and the formulae corresponding to earned value management, as well as the life of a given project.Algorithms2016-07-1893Article10.3390/a9030045451999-48932016-07-18doi: 10.3390/a9030045Zahra Hojjati TavassoliSeyed IranmaneshAhmad Tavassoli Hojjati<![CDATA[Algorithms, Vol. 9, Pages 44: A Gentle Introduction to Applications of Algorithmic Metatheorems for Space and Circuit Classes]]>
http://www.mdpi.com/1999-4893/9/3/44
Algorithmic metatheorems state that if a problem can be described in a certain logic and the inputs are structured in a certain way, then the problem can be solved with a certain amount of resources. As an example, by Courcelle’s Theorem, all monadic second-order (“in a certain logic”) properties of graphs of bounded tree width (“structured in a certain way”) can be solved in linear time (“with a certain amount of resources”). Such theorems have become valuable tools in algorithmics: if a problem happens to have the right structure and can be described in the right logic, they immediately yield a (typically tight) upper bound on the time complexity of the problem. Perhaps even more importantly, several complex algorithms rely on algorithmic metatheorems internally to solve subproblems, which considerably broadens the range of applications of these theorems. This paper is intended as a gentle introduction to the ideas behind algorithmic metatheorems, especially behind some recent results concerning space and circuit classes, and tries to give a flavor of the range of their applications.Algorithms2016-07-0993Article10.3390/a9030044441999-48932016-07-09doi: 10.3390/a9030044Till Tantau<![CDATA[Algorithms, Vol. 9, Pages 43: Opposition-Based Adaptive Fireworks Algorithm]]>
http://www.mdpi.com/1999-4893/9/3/43
A fireworks algorithm (FWA) is a recent swarm intelligence algorithm inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) introduces adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using an opposition-based adaptive fireworks algorithm (OAFWA). The results show that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.Algorithms2016-07-0893Article10.3390/a9030043431999-48932016-07-08doi: 10.3390/a9030043Chibing Gong<![CDATA[Algorithms, Vol. 9, Pages 42: Joint Antenna Selection and Beamforming Algorithms for Physical Layer Multicasting with Massive Antennas]]>
http://www.mdpi.com/1999-4893/9/2/42
We investigate the problem of minimizing the total power consumption under the constraint of the signal-to-noise ratio (SNR) requirement for the physical layer multicasting system with large-scale antenna arrays. In contrast with existing work, we explicitly consider both the transmit power and the circuit power scaling with the number of antennas. The joint antenna selection and beamforming technique is proposed to minimize the total power consumption. The problem is a challenging one, which aims to minimize a linear combination of the ℓ0-norm and the ℓ2-norm. To the best of our knowledge, this minimization problem has not yet been well solved. A random decremental antenna selection algorithm is designed, which is further modified by an approximation of the minimal transmit power based on the asymptotic orthogonality of the channels. Then, a more efficient decremental antenna selection algorithm is proposed based on minimizing the ℓ0-norm. Performance results show that the ℓ0-norm minimization algorithm greatly outperforms the random selection algorithm in terms of total power consumption and average run time.Algorithms2016-06-2292Article10.3390/a9020042421999-48932016-06-22doi: 10.3390/a9020042Xinhua WangJinlu Sheng<![CDATA[Algorithms, Vol. 9, Pages 41: Visual and Textual Sentiment Analysis of a Microblog Using Deep Convolutional Neural Networks]]>
http://www.mdpi.com/1999-4893/9/2/41
Sentiment analysis of online social media has attracted significant interest recently. Many studies have been performed, but most existing methods focus on either only textual content or only visual content. In this paper, we utilize deep learning models in a convolutional neural network (CNN) to analyze the sentiment in Chinese microblogs from both textual and visual content. We first train a CNN on top of pre-trained word vectors for textual sentiment analysis and employ a deep convolutional neural network (DNN) with generalized dropout for visual sentiment analysis. We then evaluate our sentiment prediction framework on a dataset collected from a famous Chinese social media network (Sina Weibo) that includes text and related images and demonstrate state-of-the-art results on this Chinese sentiment analysis benchmark.Algorithms2016-06-2192Article10.3390/a9020041411999-48932016-06-21doi: 10.3390/a9020041Yuhai YuHongfei LinJiana MengZhehuan Zhao<![CDATA[Algorithms, Vol. 9, Pages 40: A Direct Search Algorithm for Global Optimization]]>
http://www.mdpi.com/1999-4893/9/2/40
A direct search algorithm is proposed for minimizing an arbitrary real valued function. The algorithm uses a new function transformation and three simplex-based operations. The function transformation provides global exploration features, while the simplex-based operations guarantee the termination of the algorithm and provide global convergence to a stationary point if the cost function is differentiable and its gradient is Lipschitz continuous. The algorithm’s performance has been extensively tested using benchmark functions and compared to some well-known global optimization algorithms. The results of the computational study show that the algorithm combines both simplicity and efficiency and is competitive with the heuristics-based strategies presently used for global optimization.Algorithms2016-06-1392Article10.3390/a9020040401999-48932016-06-13doi: 10.3390/a9020040Enrique BaeyensAlberto HerrerosJosé Perán<![CDATA[Algorithms, Vol. 9, Pages 39: Review of Recent Type-2 Fuzzy Controller Applications]]>
http://www.mdpi.com/1999-4893/9/2/39
Type-2 fuzzy logic controllers (T2 FLC) can be viewed as an emerging class of intelligent controllers because of their abilities in handling uncertainties; in many cases, they have been shown to outperform their Type-1 counterparts. This paper presents a literature review on recent applications of T2 FLCs. To follow the developments in this field, we first review general T2 FLCs and the most well-known interval T2 FLS algorithms that have been used for control design. Certain applications of these controllers include robotic control, bandwidth control, industrial systems control, electrical control and aircraft control. The most promising applications are found in the robotics and automotive areas, where T2 FLCs have been demonstrated and proven to perform better than traditional controllers. With the development of enhanced algorithms, along with the advancement in both hardware and software, we shall witness increasing applications of these frontier controllers.Algorithms2016-06-0992Review10.3390/a9020039391999-48932016-06-09doi: 10.3390/a9020039Kevin TaiAbdul-Rahman El-SayedMohammad BiglarbegianClaudia GonzalezOscar CastilloShohel Mahmud<![CDATA[Algorithms, Vol. 9, Pages 38: A 3/2-Approximation Algorithm for the Graph Balancing Problem with Two Weights]]>
http://www.mdpi.com/1999-4893/9/2/38
In the pursuit of finding subclasses of the makespan minimization problem on unrelated parallel machines that have approximation algorithms with approximation ratio better than 2, the graph balancing problem has been of current interest. In the graph balancing problem each job can be non-preemptively scheduled on one of at most two machines with the same processing time on either machine. Recently, Ebenlendr, Krčál, and Sgall (Algorithmica 2014, 68, 62–80) presented a 7/4-approximation algorithm for the graph balancing problem. Let r, s ∈ Z+. In this paper we consider the graph balancing problem with two weights, where a job takes either r time units or s time units. We present a 3/2-approximation algorithm for this problem. This is an improvement over the previously best-known approximation algorithm for the problem, which has approximation ratio 1.652, and it matches the best known inapproximability bound for it.Algorithms2016-06-0892Article10.3390/a9020038381999-48932016-06-08doi: 10.3390/a9020038Daniel PageRoberto Solis-Oba<![CDATA[Algorithms, Vol. 9, Pages 37: A New Multi-Step Iterative Algorithm for Approximating Common Fixed Points of a Finite Family of Multi-Valued Bregman Relatively Nonexpansive Mappings]]>
http://www.mdpi.com/1999-4893/9/2/37
In this article, we introduce a new multi-step iteration for approximating a common fixed point of a finite class of multi-valued Bregman relatively nonexpansive mappings in the setting of reflexive Banach spaces. We prove a strong convergence theorem for the proposed iterative algorithm under certain hypotheses. Additionally, we also use our results for the solution of variational inequality problems and to find the zero points of maximal monotone operators. The theorems furnished in this work are new and well-established and generalize many well-known recent research works in this field.Algorithms2016-05-3092Article10.3390/a9020037371999-48932016-05-30doi: 10.3390/a9020037Wiyada KumamPongsakorn SunthrayuthPhond PhunchongharnKhajonpong AkkarajitsakulParinya NgiamsunthornPoom Kumam<![CDATA[Algorithms, Vol. 9, Pages 36: Robust Hessian Locally Linear Embedding Techniques for High-Dimensional Data]]>
http://www.mdpi.com/1999-4893/9/2/36
Recently manifold learning has received extensive interest in the community of pattern recognition. Despite their appealing properties, most manifold learning algorithms are not robust in practical applications. In this paper, we address this problem in the context of the Hessian locally linear embedding (HLLE) algorithm and propose a more robust method, called RHLLE, which aims to be robust against both outliers and noise in the data. Specifically, we first propose a fast outlier detection method for high-dimensional datasets. Then, we employ a local smoothing method to reduce noise. Furthermore, we reformulate the original HLLE algorithm by using the truncation function from differentiable manifolds. In the reformulated framework, we explicitly introduce a weighted global functional to further reduce the undesirable effect of outliers and noise on the embedding result. Experiments on synthetic as well as real datasets demonstrate the effectiveness of our proposed algorithm.Algorithms2016-05-2692Article10.3390/a9020036361999-48932016-05-26doi: 10.3390/a9020036Xianglei XingSidan DuKejun Wang<![CDATA[Algorithms, Vol. 9, Pages 35: Application of the Energy-Conserving Integration Method to Hybrid Simulation of a Full-Scale Steel Frame]]>
http://www.mdpi.com/1999-4893/9/2/35
The nonlinear unconditionally stable energy-conserving integration method (ECM) is a new method for solving a continuous equation of motion. To our knowledge, there is still no report on its application to hybrid tests. To explore its effect on hybrid tests, a nonlinear beam-column element program is developed for computation. The program contains both the ECM and the average acceleration method (AAM). The comparison of the hybrid test results with these two methods validates the effectiveness of the ECM in hybrid simulation. We found that the energy error of the hybrid test using the ECM is less than that of the AAM. In addition, a new iteration strategy with a reduction factor is presented to avoid overshooting phenomena during the iteration process with the finite element program.Algorithms2016-05-2192Article10.3390/a9020035351999-48932016-05-21doi: 10.3390/a9020035Tianlin PanBin WuYongsheng ChenGuoshan Xu<![CDATA[Algorithms, Vol. 9, Pages 34: A State Recognition Approach for Complex Equipment Based on a Fuzzy Probabilistic Neural Network]]>
http://www.mdpi.com/1999-4893/9/2/34
Because traditional state recognition approaches for complex electromechanical equipment rely excessively on complete expert knowledge and suffer from insufficient training sets, real-time state identification systems have been difficult to establish. Running efficiency cannot be guaranteed and the fault rate cannot be fundamentally reduced, especially in some extreme working conditions. To solve these problems, an online state recognition method for complex equipment based on a fuzzy probabilistic neural network (FPNN) is proposed in this paper. A fuzzy rule base for complex equipment is established and a multi-level state space model is constructed. Moreover, a probabilistic neural network (PNN) is applied to state recognition, and the fuzzy functions and quantification matrix are presented. The flowchart of the proposed approach is designed. Finally, a simulation example of shearer state recognition and an industrial application with an accuracy of 90.91% are provided, showing that the proposed approach is feasible and efficient.Algorithms2016-05-2092Article10.3390/a9020034341999-48932016-05-20doi: 10.3390/a9020034Jing XuZhongbin WangChao TanXinhua Liu<![CDATA[Algorithms, Vol. 9, Pages 33: Mining Branching Rules from Past Survey Data with an Illustration Using a Geriatric Assessment Survey for Older Adults with Cancer]]>
http://www.mdpi.com/1999-4893/9/2/33
We construct a fast data mining algorithm that can be used to identify high-frequency response patterns in historical surveys. Identification of these patterns leads to the derivation of question branching rules that shorten the time required to complete a survey. The data mining algorithm allows the user to control the error rate that is incurred through the use of implied answers that go along with each branching rule. The context considered is binary response questions, which can be obtained from multi-level response questions through dichotomization. The algorithm is illustrated by the analysis of four sections of a geriatric assessment survey used by oncologists. Reductions in the number of questions that need to be asked in these four sections range from 33% to 54%.Algorithms2016-05-1392Article10.3390/a9020033331999-48932016-05-13doi: 10.3390/a9020033Daniel JeskeJeffrey LongmateVani KatheriaArti Hurria<![CDATA[Algorithms, Vol. 9, Pages 32: Uniform vs. Nonuniform Membership for Mildly Context-Sensitive Languages: A Brief Survey]]>
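A minimal sketch of the branching-rule idea (the paper's algorithm additionally lets the user control the error rate incurred by implied answers, which this fragment does not): a rule can be derived whenever the answer to one binary question is nearly determined by an earlier answer in the historical data. The function name and threshold below are illustrative, not from the paper.

```python
from collections import Counter

def implied_answer(responses, given_q, given_a, target_q, min_conf):
    """Among historical binary responses where question `given_q` was
    answered `given_a`, check whether question `target_q` has a dominant
    answer. If its relative frequency reaches `min_conf`, the survey can
    branch past `target_q` and imply that answer; otherwise return None."""
    subset = [r[target_q] for r in responses if r[given_q] == given_a]
    if not subset:
        return None
    answer, count = Counter(subset).most_common(1)[0]
    return answer if count / len(subset) >= min_conf else None
```

Setting `min_conf` below 1.0 trades survey length against the error rate of the implied answers, which mirrors the user-controlled error rate described in the abstract.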
http://www.mdpi.com/1999-4893/9/2/32
Parsing for mildly context-sensitive language formalisms is an important area within natural language processing. While the complexity of the parsing problem for some such formalisms is known to be polynomial, this is not the case for all of them. This article presents a series of results regarding the complexity of parsing for linear context-free rewriting systems and deterministic tree-walking transducers. We discuss the difference between uniform and nonuniform complexity measures and how parameterized complexity theory can be used to investigate how different aspects of the formalisms influence how hard the parsing problem is. The main results we survey are all hardness results and indicate that parsing is hard even for relatively small values of parameters such as rank and fan-out in a rewriting system.Algorithms2016-05-1192Article10.3390/a9020032321999-48932016-05-11doi: 10.3390/a9020032Henrik BjörklundMartin BerglundPetter Ericson<![CDATA[Algorithms, Vol. 9, Pages 31: Improved Direct Linear Transformation for Parameter Decoupling in Camera Calibration]]>
http://www.mdpi.com/1999-4893/9/2/31
For camera calibration based on direct linear transformation (DLT), the camera’s intrinsic and extrinsic parameters are simultaneously calibrated, which may cause coupling errors in the parameters and affect the calibration parameter accuracy. In this paper, we propose an improved direct linear transformation (IDLT) algorithm for calibration parameter decoupling. This algorithm uses a linear relationship of calibration parameter errors and obtains calibration parameters by moving a three-dimensional template. Simulation experiments were conducted to compare the calibration accuracy of DLT and IDLT algorithms with image noise and distortion. The results show that the IDLT algorithm calibration parameters achieve higher accuracy because the algorithm removes the coupling errors.Algorithms2016-04-2992Article10.3390/a9020031311999-48932016-04-29doi: 10.3390/a9020031Zhenqing ZhaoDong YeXin ZhangGang ChenBin Zhang<![CDATA[Algorithms, Vol. 9, Pages 30: Comment on: On the Kung-Traub Conjecture for Iterative Methods for Solving Quadratic Equations. Algorithms 2016, 9, 1]]>
http://www.mdpi.com/1999-4893/9/2/30
The Kung–Traub conjecture states that an iterative method without memory for finding the simple zero of a scalar equation can achieve convergence order at most 2^(d−1), where d is the total number of function evaluations. In the article “Babajee, D.K.R. On the Kung-Traub Conjecture for Iterative Methods for Solving Quadratic Equations, Algorithms 2016, 9, 1, doi:10.3390/a9010001”, the author has shown that the Kung–Traub conjecture is not valid for the quadratic equation and proposed an iterative method for the scalar and vector quadratic equations. In this comment, we show that we first reported the aforementioned iterative method.Algorithms2016-04-2692Comment10.3390/a9020030301999-48932016-04-26doi: 10.3390/a9020030Fayyaz Ahmad<![CDATA[Algorithms, Vol. 9, Pages 29: An Improved Dynamic Joint Resource Allocation Algorithm Based on SFR]]>
http://www.mdpi.com/1999-4893/9/2/29
Inter-cell interference (ICI) is the main factor affecting system capacity and spectral efficiency. Effective spectrum resource management is an important and challenging issue in the design of wireless communication systems. Soft frequency reuse (SFR) is regarded as an interesting approach to significantly eliminate ICI. However, in static SFR the allocation of resources is fixed prior to system deployment. To overcome this drawback, this paper adopts a distributed method and proposes an improved dynamic joint resource allocation algorithm (DJRA). The improved scheme adaptively adjusts resource allocation based on the real-time user distribution. DJRA first detects the edge-user distribution vector to determine the optimal scheme, which guarantees that all users have available resources and that the number of iterations is reduced. Then, DJRA maximizes the throughput for each cell via optimized resource and power allocation. To further eliminate interference, a sector partition method is used in the center region, and to ensure fairness among users, the proportional fair algorithm is added at the end of DJRA. Simulation results show that the proposed algorithm outperforms previous approaches in improving system capacity and cell-edge user performance.Algorithms2016-04-2292Article10.3390/a9020029291999-48932016-04-22doi: 10.3390/a9020029Yibing LiXueying DiaoGe DongFang Ye<![CDATA[Algorithms, Vol. 9, Pages 28: Alternating Direction Method of Multipliers for Generalized Low-Rank Tensor Recovery]]>
http://www.mdpi.com/1999-4893/9/2/28
Low-Rank Tensor Recovery (LRTR), the higher order generalization of Low-Rank Matrix Recovery (LRMR), is especially suitable for analyzing multi-linear data with gross corruptions, outliers and missing values, and it attracts broad attention in the fields of computer vision, machine learning and data mining. This paper considers a generalized model of LRTR and attempts to recover simultaneously the low-rank, the sparse, and the small disturbance components from partial entries of a given data tensor. Specifically, we first describe generalized LRTR as a tensor nuclear norm optimization problem that minimizes a weighted combination of the tensor nuclear norm, the l1-norm and the Frobenius norm under linear constraints. Then, the technique of Alternating Direction Method of Multipliers (ADMM) is employed to solve the proposed minimization problem. Next, we discuss the weak convergence of the proposed iterative algorithm. Finally, experimental results on synthetic and real-world datasets validate the efficiency and effectiveness of the proposed method.Algorithms2016-04-1992Article10.3390/a9020028281999-48932016-04-19doi: 10.3390/a9020028Jiarong ShiQingyan YinXiuyun ZhengWei Yang<![CDATA[Algorithms, Vol. 9, Pages 27: The Effect of Preprocessing on Arabic Document Categorization]]>
http://www.mdpi.com/1999-4893/9/2/27
Preprocessing is one of the main components in a conventional document categorization (DC) framework. This paper aims to highlight the effect of preprocessing tasks on the efficiency of the Arabic DC system. In this study, three classification techniques are used, namely, naive Bayes (NB), k-nearest neighbor (KNN), and support vector machine (SVM). Experimental analysis on Arabic datasets reveals that preprocessing techniques have a significant impact on the classification accuracy, especially with complicated morphological structure of the Arabic language. Choosing appropriate combinations of preprocessing tasks provides significant improvement on the accuracy of document categorization depending on the feature size and classification techniques. Findings of this study show that the SVM technique has outperformed the KNN and NB techniques. The SVM technique achieved 96.74% micro-F1 value by using the combination of normalization and stemming as preprocessing tasks.Algorithms2016-04-1892Article10.3390/a9020027271999-48932016-04-18doi: 10.3390/a9020027Abdullah AyedhGuanzheng TANKhaled AlwesabiHamdi Rajeh<![CDATA[Algorithms, Vol. 9, Pages 26: siEDM: An Efficient String Index and Search Algorithm for Edit Distance with Moves]]>
http://www.mdpi.com/1999-4893/9/2/26
Although several self-indexes for highly repetitive text collections exist, developing an index and search algorithm with editing operations remains a challenge. Edit distance with moves (EDM) is a string-to-string distance measure that includes substring moves in addition to ordinal editing operations to turn one string into another. Although the problem of computing EDM is intractable, it has a wide range of potential applications, especially in approximate string retrieval. Despite the importance of computing EDM, there has been no efficient method for indexing and searching large text collections based on the EDM measure. We propose the first algorithm, named string index for edit distance with moves (siEDM), for indexing and searching strings with EDM. The siEDM algorithm builds an index structure by leveraging the idea behind the edit sensitive parsing (ESP), an efficient algorithm enabling approximately computing EDM with guarantees of upper and lower bounds for the exact EDM. siEDM efficiently prunes the space for searching query strings by the proposed method, which enables fast query searches with the same guarantee as ESP. We experimentally tested the ability of siEDM to index and search strings on benchmark datasets, and we showed siEDM’s efficiency.Algorithms2016-04-1592Article10.3390/a9020026261999-48932016-04-15doi: 10.3390/a9020026Yoshimasa TakabatakeKenta NakashimaTetsuji KuboyamaYasuo TabeiHiroshi Sakamoto<![CDATA[Algorithms, Vol. 9, Pages 25: Primary User Localization Algorithm Based on Compressive Sensing in Cognitive Radio Networks]]>
http://www.mdpi.com/1999-4893/9/2/25
In order to locate the source signal more accurately in authorized frequency bands, a novel primary user localization algorithm based on compressive sensing (PU-CSL) in cognitive radio networks (CRNs) is proposed in this paper. In comparison to existing centroid locating algorithms, PU-CSL shows higher locating accuracy because it integrally explores the correlation between the source signal and the secondary users (SUs). Energy detection is first adopted to collect the energy fingerprint of the source signal at each SU; the degree of correlation between the source signal and the SUs is then reconstructed based on compressive sensing (CS), which determines the weights of the centroid coordinates. A weighted centroid scheme is finally utilized to estimate the source position. Simulation results show that PU-CSL has a smaller maximum positioning error and root-mean-square error. Moreover, the proposed PU-CSL algorithm possesses excellent location accuracy and strong anti-noise performance.Algorithms2016-04-1492Article10.3390/a9020025251999-48932016-04-14doi: 10.3390/a9020025Fang YeXun ZhangYibing LiHui Huang<![CDATA[Algorithms, Vol. 9, Pages 24: Structural Damage Localization by the Principal Eigenvector of Modal Flexibility Change]]>
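The final weighted-centroid step of such a scheme can be sketched in a few lines. In PU-CSL the weights would come from the CS-reconstructed correlation degrees; here the positions and weights are hypothetical inputs used only to show the computation:

```python
def weighted_centroid(su_positions, weights):
    """Estimate the source position as the weighted average of
    secondary-user (SU) coordinates. `su_positions` is a list of (x, y)
    tuples and `weights` the per-SU weights (e.g. correlation degrees)."""
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, su_positions)) / total
    y = sum(w * p[1] for w, p in zip(weights, su_positions)) / total
    return (x, y)
```

With uniform weights this reduces to the plain centroid of the SU positions; non-uniform, correlation-derived weights pull the estimate toward the SUs that observe the source most strongly, which is the source of the accuracy gain the abstract reports.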
http://www.mdpi.com/1999-4893/9/2/24
Using the principal eigenvector (PE) of modal flexibility change, a new vibration-based algorithm for structural defect localization was presented in this paper. From theoretical investigations, it was proven that the PE of modal flexibility variation has a turning point, with a sharp peak in its curvature, at the damage location. A three-span continuous beam was used as an example to illustrate the feasibility and superiority of the proposed PE algorithm for damage localization. Furthermore, defect localization was also performed using the well-known uniform load surface (ULS) approach for comparison. Numerical results demonstrated that the PE algorithm can locate structural defects with good accuracy, whereas the ULS approach occasionally missed one or two defect locations. It was found that the PE algorithm may be promising for structural defect assessment.Algorithms2016-04-1392Article10.3390/a9020024241999-48932016-04-13doi: 10.3390/a9020024Cui-Hong LiQiu-Wei YangBing-Xiang Sun<![CDATA[Algorithms, Vol. 9, Pages 23: An Improved Fireworks Algorithm Based on Grouping Strategy of the Shuffled Frog Leaping Algorithm to Solve Function Optimization Problems]]>
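A sketch of the peak-detection step only: the turning point shows up as a sharp peak in the discrete curvature of the principal eigenvector, which can be approximated by a central second difference. The vector below is synthetic; computing the PE itself from the modal flexibility change is outside this fragment.

```python
def curvature_peaks(values):
    """Absolute central second difference of a mode-shape-like vector.
    A sharp peak in |curvature| flags the turning point used for damage
    localization. Returns one value per interior point of `values`."""
    return [abs(values[i - 1] - 2 * values[i] + values[i + 1])
            for i in range(1, len(values) - 1)]
```

For a piecewise-linear vector the second difference vanishes everywhere except at the slope change, so the index of the maximum curvature marks the candidate damage location.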
http://www.mdpi.com/1999-4893/9/2/23
The fireworks algorithm (FA) is a new parallel diffuse optimization algorithm to simulate the fireworks explosion phenomenon, which realizes the balance between global exploration and local searching by means of adjusting the explosion mode of fireworks bombs. By introducing the grouping strategy of the shuffled frog leaping algorithm (SFLA), an improved FA-SFLA hybrid algorithm is put forward, which can effectively make the FA jump out of the local optimum and accelerate the global search ability. The simulation results show that the hybrid algorithm greatly improves the accuracy and convergence velocity for solving the function optimization problems.Algorithms2016-04-0192Article10.3390/a9020023231999-48932016-04-01doi: 10.3390/a9020023Yu-Feng SunJie-Sheng WangJiang-Di Song<![CDATA[Algorithms, Vol. 9, Pages 22: Modifying Orthogonal Drawings for Label Placement]]>
http://www.mdpi.com/1999-4893/9/2/22
In this paper, we investigate how one can modify an orthogonal graph drawing to accommodate the placement of overlap-free labels with the minimum cost (i.e., minimum increase of the area and preservation of the quality of the drawing). We investigate computational complexity issues of variations of that problem, and we present polynomial time algorithms that find the minimum increase of space in one direction, needed to resolve overlaps, while preserving the orthogonal representation of the orthogonal drawing when objects have a predefined partial order.Algorithms2016-03-2892Article10.3390/a9020022221999-48932016-03-28doi: 10.3390/a9020022Konstantinos KakoulisIoannis Tollis<![CDATA[Algorithms, Vol. 9, Pages 21: Multivariate Algorithmics for Finding Cohesive Subnetworks]]>
http://www.mdpi.com/1999-4893/9/1/21
Community detection is an important task in the analysis of biological, social or technical networks. We survey different models of cohesive graphs, commonly referred to as clique relaxations, that are used in the detection of network communities. For each clique relaxation, we give an overview of basic model properties and of the complexity of the problem of finding large cohesive subgraphs under this model. Since this problem is usually NP-hard, we focus on combinatorial fixed-parameter algorithms exploiting typical structural properties of input networks.Algorithms2016-03-1691Article10.3390/a9010021211999-48932016-03-16doi: 10.3390/a9010021Christian Komusiewicz<![CDATA[Algorithms, Vol. 9, Pages 20: The Iterative Solution to Discrete-Time H∞ Control Problems for Periodic Systems]]>
http://www.mdpi.com/1999-4893/9/1/20
This paper addresses the problem of solving discrete-time H∞ control problems for periodic systems. The approach for solving this type of problem is well known in the literature. However, our research focuses on the numerical computation of the stabilizing solution. In particular, two effective methods for the practical realization of the known iterative processes are described. Furthermore, a new iterative approach is investigated and applied. On the basis of numerical experiments, we compare the presented methods. A major conclusion is that the new iterative approach is faster than the rest of the methods and uses less memory.Algorithms2016-03-1491Article10.3390/a9010020201999-48932016-03-14doi: 10.3390/a9010020Ivan IvanovBoryana Bogdanova<![CDATA[Algorithms, Vol. 9, Pages 19: Review of Recent Advances in the Application of the Wavelet Transform to Diagnose Cracked Rotors]]>
http://www.mdpi.com/1999-4893/9/1/19
Wavelet transform (WT) has been used in the diagnosis of cracked rotors since the 1990s. At present, WT is one of the most commonly used tools to treat signals in several fields. Understandably, this has been an area of extensive scientific research, which is why this paper aims to summarize briefly the major advances in the field since 2008. The present review considers advances in the use and application of WT, the selection of the parameters used, and the key achievements in using WT for crack diagnosis.Algorithms2016-03-1491Review10.3390/a9010019191999-48932016-03-14doi: 10.3390/a9010019María GómezCristina CastejónJuan García-Prada<![CDATA[Algorithms, Vol. 9, Pages 18: Constructing Frozen Jacobian Iterative Methods for Solving Systems of Nonlinear Equations, Associated with ODEs and PDEs Using the Homotopy Method]]>
http://www.mdpi.com/1999-4893/9/1/18
A homotopy method is presented for the construction of frozen Jacobian iterative methods. The frozen Jacobian iterative methods are attractive because the inversion of the Jacobian is performed in terms of LU factorization only once, for a single instance of the iterative method. We embed parameters in the iterative methods with the help of the homotopy method: the values of the parameters are determined in such a way that a better convergence rate is achieved. The proposed homotopy technique is general and has the ability to construct different families of iterative methods for solving weakly nonlinear systems of equations. Further iterative methods are also proposed for solving general systems of nonlinear equations.Algorithms2016-03-1191Article10.3390/a9010018181999-48932016-03-11doi: 10.3390/a9010018Uswah QasimZulifqar AliFayyaz AhmadStefano Serra-CapizzanoMalik Zaka UllahMir Asma<![CDATA[Algorithms, Vol. 9, Pages 17: Co-Clustering under the Maximum Norm]]>
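The economics of a frozen Jacobian can be illustrated in one dimension, where the "Jacobian" is just the derivative: it is evaluated (the analogue of factorizing) once per outer iteration and reused for several cheap inner corrections. A hedged sketch of the general idea, not the paper's homotopy-constructed scheme:

```python
def frozen_newton(f, df, x, inner_steps=3, outer_iters=20, tol=1e-12):
    # Multi-step Newton: the derivative (scalar analogue of the Jacobian)
    # is evaluated once per outer iteration and reused for several
    # inexpensive inner corrections, trading some accuracy per step
    # for far fewer derivative evaluations/factorizations.
    for _ in range(outer_iters):
        d = df(x)            # "factorize" once
        for _ in range(inner_steps):
            x -= f(x) / d    # reuse the frozen derivative
        if abs(f(x)) < tol:
            break
    return x

root = frozen_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
assert abs(root - 2.0 ** 0.5) < 1e-9
```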
http://www.mdpi.com/1999-4893/9/1/17
Co-clustering, that is, partitioning a numerical matrix into “homogeneous” submatrices, has many applications ranging from bioinformatics to election analysis. Many interesting variants of co-clustering are NP-hard. We focus on the basic variant of co-clustering where the homogeneity of a submatrix is defined in terms of minimizing the maximum distance between two entries. In this context, we identify several NP-hard special cases, as well as a number of relevant polynomial-time solvable ones, thus charting the border of tractability for this challenging data clustering problem. For instance, we provide polynomial-time solvability when having to partition the rows and columns into two subsets each (meaning that one obtains four submatrices). When partitioning rows and columns into three subsets each, however, we encounter NP-hardness, even for input matrices containing only values from {0, 1, 2}.Algorithms2016-02-2591Article10.3390/a9010017171999-48932016-02-25doi: 10.3390/a9010017Laurent BulteauVincent FroeseSepp HartungRolf Niedermeier<![CDATA[Algorithms, Vol. 9, Pages 16: Multiband and Lossless Compression of Hyperspectral Images]]>
http://www.mdpi.com/1999-4893/9/1/16
Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of the computational complexity in order to allow implementations on limited hardware (e.g., hyperspectral sensors). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable with, and often better than, those of other state-of-the-art lossless compression techniques for hyperspectral images.Algorithms2016-02-1891Article10.3390/a9010016161999-48932016-02-18doi: 10.3390/a9010016Raffaele PizzolanteBruno Carpentieri<![CDATA[Algorithms, Vol. 9, Pages 15: A Geometric Orthogonal Projection Strategy for Computing the Minimum Distance Between a Point and a Spatial Parametric Curve]]>
http://www.mdpi.com/1999-4893/9/1/15
A new orthogonal projection method for computing the minimum distance between a point and a spatial parametric curve is presented. It consists of a geometric iteration which converges faster than the existing Newton’s method, and it is insensitive to the choice of initial values. We prove that the method is globally second-order convergent when projecting a point onto a spatial parametric curve.Algorithms2016-02-0691Article10.3390/a9010015151999-48932016-02-06doi: 10.3390/a9010015Xiaowu LiZhinan WuLinke HouLin WangChunguang YueQiao Xin<![CDATA[Algorithms, Vol. 9, Pages 14: Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems]]>
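For context, the classical baseline this kind of work improves on is a Newton iteration for the footpoint problem: find t with (p − c(t)) · c′(t) = 0, i.e., the error vector orthogonal to the tangent. A planar sketch under that assumption (this is the standard Newton approach, not the paper's geometric iteration):

```python
import math

def project_point(c, dc, ddc, p, t, iters=100):
    # Newton iteration on g(t) = (p - c(t)) . c'(t) = 0: at the closest
    # point, the error vector is orthogonal to the curve tangent.
    for _ in range(iters):
        cx, cy = c(t)
        dx, dy = dc(t)
        d2x, d2y = ddc(t)
        ex, ey = p[0] - cx, p[1] - cy
        g = ex * dx + ey * dy
        dg = -(dx * dx + dy * dy) + ex * d2x + ey * d2y
        if dg == 0.0:
            break
        step = g / dg
        t -= step
        if abs(step) < 1e-14:
            break
    return t

circle = lambda t: (math.cos(t), math.sin(t))
d_circle = lambda t: (-math.sin(t), math.cos(t))
dd_circle = lambda t: (-math.cos(t), -math.sin(t))
t = project_point(circle, d_circle, dd_circle, (2.0, 0.0), 0.5)
assert abs(t) < 1e-8   # closest point on the unit circle to (2, 0) is at t = 0
```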
http://www.mdpi.com/1999-4893/9/1/14
In this work, two multi-step derivative-free iterative methods are presented for solving systems of nonlinear equations. The new methods have high computational efficiency and low computational cost. The order of convergence of the new methods is proved by developing an inverse first-order divided difference operator. The computational efficiency is compared with that of existing methods. Numerical experiments support the theoretical results. Experimental results show that the new methods remarkably reduce the computing time in the process of high-precision computing.Algorithms2016-02-0191Article10.3390/a9010014141999-48932016-02-01doi: 10.3390/a9010014Xiaofeng WangXiaodong Fan<![CDATA[Algorithms, Vol. 9, Pages 13: Algorithms for Managing, Querying and Processing Big Data in Cloud Environments]]>
http://www.mdpi.com/1999-4893/9/1/13
Big data (e.g., [1–3]) has become one of the most challenging research topics in current years. Big data is everywhere, from social networks to web advertisements, from sensor and stream systems to bio-informatics, from graph management tools to smart cities, and so forth. [...]Algorithms2016-02-0191Editorial10.3390/a9010013131999-48932016-02-01doi: 10.3390/a9010013Alfredo Cuzzocrea<![CDATA[Algorithms, Vol. 9, Pages 12: Integrating Pareto Optimization into Dynamic Programming]]>
http://www.mdpi.com/1999-4893/9/1/12
Pareto optimization combines independent objectives by computing the Pareto front of the search space, yielding a set of optima where none scores better on all objectives than any other. Recently, it was shown that Pareto optimization seamlessly integrates with algebraic dynamic programming: when scoring schemes A and B can correctly evaluate the search space via dynamic programming, then so can Pareto optimization with respect to A and B. However, the integration of Pareto optimization into dynamic programming opens a wide range of algorithmic alternatives, which we study in substantial detail in this article, using real-world applications in biosequence analysis, a field where dynamic programming is ubiquitous. Our results are two-fold: (1) We introduce the operation of a “Pareto algebra product” in the dynamic programming framework of Bellman’s GAP. Users of this framework can now ask for Pareto optimization with a single keystroke. Careful evaluation of the implementation alternatives by means of an extended Bellman’s GAP compiler demonstrates the dependence of the best implementation choice on the application at hand. (2) We extract from our experiments several pieces of advice to programmers who do not use a system such as Bellman’s GAP, but who choose to hand-craft their dynamic programming recurrences, incorporating Pareto optimization from scratch.Algorithms2016-01-2791Article10.3390/a9010012121999-48932016-01-27doi: 10.3390/a9010012Thomas GatterRobert GiegerichCédric Saule<![CDATA[Algorithms, Vol. 9, Pages 11: Acknowledgement to Reviewers of Algorithms in 2015]]>
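The underlying notion of a Pareto front is easy to state in code: keep exactly the points not dominated in all objectives. A minimal two-objective (minimisation) sketch, independent of Bellman's GAP and its Pareto algebra product:

```python
def pareto_front(points):
    # Keep the points not dominated by any other: q dominates p when q is
    # <= p in both objectives (minimisation) and strictly < in at least one.
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

pts = [(1, 5), (2, 3), (4, 4), (3, 2), (5, 1)]
assert pareto_front(pts) == [(1, 5), (2, 3), (3, 2), (5, 1)]   # (4, 4) is dominated by (2, 3)
```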
http://www.mdpi.com/1999-4893/9/1/11
The editors of Algorithms would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2015. [...]Algorithms2016-01-2291Editorial10.3390/a9010011111999-48932016-01-22doi: 10.3390/a9010011 Algorithms Editorial Office<![CDATA[Algorithms, Vol. 9, Pages 10: An Optimal Order Method for Multiple Roots in Case of Unknown Multiplicity]]>
http://www.mdpi.com/1999-4893/9/1/10
Recently, several three-step schemes involving four function evaluations have been proposed in the literature for the solution of multiple roots of nonlinear equations whose multiplicity is not known in advance, but they do not agree with Kung–Traub’s conjecture. The present article is devoted to the study of an iterative scheme for approximating multiple roots with eighth-order convergence when the multiplicity is hidden, which agrees with Kung–Traub’s conjecture. The convergence order is investigated and demonstrated theoretically. A few nonlinear problems are presented to justify the theoretical study.Algorithms2016-01-2291Article10.3390/a9010010101999-48932016-01-22doi: 10.3390/a9010010Jai Jaiswal<![CDATA[Algorithms, Vol. 9, Pages 9: NBTI-Aware Transient Fault Rate Analysis Method for Logic Circuit Based on Probability Voltage Transfer Characteristics]]>
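A classical low-order baseline for the unknown-multiplicity setting (not the eighth-order scheme of the article) is Newton's method applied to u(x) = f(x)/f′(x), whose zeros are simple even when f has a multiple root, so the multiplicity never needs to be known:

```python
def newton_unknown_multiplicity(f, df, ddf, x, iters=50, tol=1e-12):
    # Newton applied to u(x) = f/f': u'(x) = 1 - f f'' / (f')^2, and the
    # zeros of u are simple even at multiple roots of f.
    for _ in range(iters):
        fx, dfx = f(x), df(x)
        if dfx == 0.0 or abs(fx) < tol:
            break
        u = fx / dfx
        du = 1.0 - fx * ddf(x) / (dfx * dfx)
        x -= u / du
    return x

# f(x) = (x - 1)^3 has a triple root at 1; plain Newton is only linear here.
root = newton_unknown_multiplicity(
    lambda x: (x - 1.0) ** 3,
    lambda x: 3.0 * (x - 1.0) ** 2,
    lambda x: 6.0 * (x - 1.0),
    x=2.0,
)
assert abs(root - 1.0) < 1e-8
```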
http://www.mdpi.com/1999-4893/9/1/9
With the scaling of technology, the reliability of Very Large Scale Integration (VLSI) circuits has become increasingly susceptible to transient faults induced by environmental noise. Some commonly used fault tolerance strategies require statistical methods to accurately estimate the fault rate in different parts of the logic circuit, and Monte Carlo (MC) simulation is often applied to complete this task. However, the MC method suffers from impractical computation costs due to the size of the circuits. Furthermore, circuit aging effects, such as negative bias temperature instability (NBTI), will change the characteristics of the circuit during its lifetime, leading to a change in the circuit’s noise margin. This change will increase the complexity of transient fault rate estimation tasks. In this paper, an NBTI-aware statistical analysis method based on probability voltage transfer characteristics is proposed for combinational logic circuits. This method can acquire accurate fault rates using a discrete probability density function approximation process, thus resolving the computation cost problem of the MC method. The proposed method can also consider aging effects and analyze statistical changes in the fault rates. Experimental results demonstrate that, compared to the MC simulation, our method can achieve computation times that are two orders of magnitude shorter while maintaining an error rate of less than 9%.Algorithms2016-01-1891Article10.3390/a901000991999-48932016-01-18doi: 10.3390/a9010009Zhiming YangJunbao LiYang YuXiyuan Peng<![CDATA[Algorithms, Vol. 9, Pages 8: A Greedy Algorithm for Neighborhood Overlap-Based Community Detection]]>
http://www.mdpi.com/1999-4893/9/1/8
The neighborhood overlap (NOVER) of an edge u-v is defined as the ratio of the number of nodes that are neighbors of both u and v to the number of nodes that are neighbors of at least one of u and v. In this paper, we hypothesize that an edge u-v with a lower NOVER score bridges two or more sets of vertices, with very few edges (other than u-v) connecting vertices from one set to another set. Accordingly, we propose a greedy algorithm of iteratively removing the edges of a network in the increasing order of their neighborhood overlap and calculating the modularity score of the resulting network component(s) after the removal of each edge. The network component(s) that have the largest cumulative modularity score are identified as the different communities of the network. We evaluate the performance of the proposed NOVER-based community detection algorithm on nine real-world network graphs and compare the performance against the multi-level aggregation-based Louvain algorithm, as well as the original and time-efficient versions of the edge betweenness-based Girvan-Newman (GN) community detection algorithm.Algorithms2016-01-1191Article10.3390/a901000881999-48932016-01-11doi: 10.3390/a9010008Natarajan Meghanathan<![CDATA[Algorithms, Vol. 9, Pages 7: An Effective and Efficient MapReduce Algorithm for Computing BFS-Based Traversals of Large-Scale RDF Graphs]]>
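The NOVER score itself is straightforward to compute from adjacency sets; a minimal sketch (excluding u and v themselves from the neighbor sets, which is one common convention and an assumption on my part):

```python
def neighborhood_overlap(adj, u, v):
    # NOVER(u, v) = |N(u) & N(v)| / |N(u) | N(v)|, with u and v excluded.
    nu = adj[u] - {v}
    nv = adj[v] - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

# Two triangles bridged by the edge 2-3: the bridge edge has zero overlap.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

assert neighborhood_overlap(adj, 0, 1) == 1.0   # node 2 is their only (shared) neighbor
assert neighborhood_overlap(adj, 2, 3) == 0.0   # bridge edge: no shared neighbors
```

The greedy algorithm of the abstract would then remove edges in increasing order of this score and track modularity after each removal.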
http://www.mdpi.com/1999-4893/9/1/7
Nowadays, a leading instance of big data is represented by Web data that lead to the definition of so-called big Web data. Indeed, extending to a large number of critical applications (e.g., Web advertisement), these data expose several characteristics that clearly adhere to the well-known 3V properties (i.e., volume, velocity, variety). Resource Description Framework (RDF) is a significant formalism and language for the so-called Semantic Web, due to the fact that a very wide family of Web entities can be naturally modeled in a graph-shaped manner. In this context, RDF graphs play a first-class role, because they are widely used in the context of modern Web applications and systems, including the emerging context of social networks. When RDF graphs are defined on top of big (Web) data, they lead to the so-called large-scale RDF graphs, which reasonably populate the next-generation Semantic Web. In order to process such big data, MapReduce, an open source computational framework specifically tailored to big data processing, has emerged during recent years as the reference implementation for this critical setting. In line with this trend, in this paper, we present an approach for efficiently implementing traversals of large-scale RDF graphs over MapReduce that is based on the Breadth First Search (BFS) strategy for visiting (RDF) graphs to be decomposed and processed according to the MapReduce framework. We demonstrate how such an implementation speeds up the analysis of RDF graphs with respect to competitor approaches. Experimental results clearly support our contributions.Algorithms2016-01-1191Article10.3390/a901000771999-48932016-01-11doi: 10.3390/a9010007Alfredo CuzzocreaMirel CosulschiRoberto de Virgilio<![CDATA[Algorithms, Vol. 9, Pages 6: Efficient Metaheuristics for the Mixed Team Orienteering Problem with Time Windows]]>
http://www.mdpi.com/1999-4893/9/1/6
Given a graph whose nodes and edges are associated with a profit, a visiting (or traversing) time and an admittance time window, the Mixed Team Orienteering Problem with Time Windows (MTOPTW) seeks a specific number of walks spanning a subset of nodes and edges of the graph so as to maximize the overall collected profit. The visit of the included nodes and edges should take place within their respective time windows, and the overall duration of each walk should be below a certain threshold. In this paper, we introduce the MTOPTW, which can be used for modeling a realistic variant of the Tourist Trip Design Problem where the objective is the derivation of near-optimal multiple-day itineraries for tourists visiting a destination which features several points of interest (POIs) and scenic routes. Since the MTOPTW is an NP-hard problem, we propose the first metaheuristic approaches to tackle it. The effectiveness of our algorithms is validated through a number of experiments on POI and scenic route sets compiled from the city of Athens (Greece).Algorithms2016-01-0591Article10.3390/a901000661999-48932016-01-05doi: 10.3390/a9010006Damianos GavalasCharalampos KonstantopoulosKonstantinos MastakasGrammati PantziouNikolaos Vathis<![CDATA[Algorithms, Vol. 9, Pages 5: A Family of Iterative Methods for Solving Systems of Nonlinear Equations Having Unknown Multiplicity]]>
http://www.mdpi.com/1999-4893/9/1/5
The Jacobian becomes singular when we are looking for a root, with multiplicity greater than one, of a system of nonlinear equations. The purpose of this article is two-fold. Firstly, we will present a modification of an existing method that computes roots with known multiplicities. Secondly, we will propose a generalization of a family of methods for solving nonlinear equations with unknown multiplicities to systems of nonlinear equations. The inclusion of a nonzero multi-variable auxiliary function is the key idea. Different choices of the auxiliary function give different families of iterative methods for finding roots with unknown multiplicities. A few illustrative numerical experiments and a critical discussion end the paper.Algorithms2015-12-3191Article10.3390/a901000551999-48932015-12-31doi: 10.3390/a9010005Fayyaz AhmadS. Serra-CapizzanoMalik UllahA. Al-Fhaid<![CDATA[Algorithms, Vol. 9, Pages 4: A Novel Complex-Valued Encoding Grey Wolf Optimization Algorithm]]>
http://www.mdpi.com/1999-4893/9/1/4
Grey wolf optimization (GWO) is one of the recently proposed heuristic algorithms imitating the leadership hierarchy and hunting mechanism of grey wolves in nature. The aim of these algorithms is to perform global optimization. This paper presents a modified GWO algorithm based on complex-valued encoding, namely the complex-valued encoding grey wolf optimization (CGWO). We use CGWO to test 16 unconstrained benchmark functions with seven different scales and infinite impulse response (IIR) model identification. Compared to the real-valued GWO algorithm and other optimization algorithms, the CGWO performs significantly better in terms of accuracy, robustness, and convergence speed.Algorithms2015-12-3091Article10.3390/a901000441999-48932015-12-30doi: 10.3390/a9010004Qifang LuoSen ZhangZhiming LiYongquan Zhou<![CDATA[Algorithms, Vol. 9, Pages 3: Function Optimization and Parameter Performance Analysis Based on Gravitation Search Algorithm]]>
http://www.mdpi.com/1999-4893/9/1/3
The gravitational search algorithm (GSA) is a kind of swarm intelligence optimization algorithm based on the law of gravitation. The parameter initialization of all swarm intelligence optimization algorithms has an important influence on the global optimization ability. From the basic principle of the GSA, its convergence rate is determined by the gravitational constant and the acceleration of the particles. The optimization performance on six typical test functions is verified by simulation experiments. The simulation results show that the convergence speed of the GSA algorithm is relatively sensitive to the setting of the algorithm parameters, and the GSA parameters can be used flexibly to improve the algorithm’s convergence speed and the accuracy of the solutions.Algorithms2015-12-2491Article10.3390/a901000331999-48932015-12-24doi: 10.3390/a9010003Jie-Sheng WangJiang-Di Song<![CDATA[Algorithms, Vol. 9, Pages 1: On the Kung-Traub Conjecture for Iterative Methods for Solving Quadratic Equations]]>
http://www.mdpi.com/1999-4893/9/1/1
Kung-Traub’s conjecture states that an optimal iterative method based on d function evaluations for finding a simple zero of a nonlinear function could achieve a maximum convergence order of 2^(d−1). In recent years, many attempts have been made to prove this conjecture or develop optimal methods which satisfy it. We understand from the conjecture that the maximum order reached by a method with three function evaluations is four, even for quadratic functions. In this paper, we show that the conjecture fails for quadratic functions. In fact, we can find a 2-point method with three function evaluations reaching fifth order convergence. We also develop 2-point 3rd to 8th order methods with one function and two first derivative evaluations using weight functions. Furthermore, we show that with the same number of function evaluations we can develop higher order 2-point methods of order r + 2, where r ≥ 1 is a positive integer. We also show that we can develop a higher order method with the same number of function evaluations if we know the asymptotic error constant of the previous method. We prove the local convergence of these methods, which we term Babajee’s Quadratic Iterative Methods, and we extend these methods to systems involving quadratic equations. We test our methods with some numerical experiments including an application to Chandrasekhar’s integral equation arising in radiative heat transfer theory.Algorithms2015-12-2491Article10.3390/a901000111999-48932015-12-24doi: 10.3390/a9010001Diyashvir Babajee<![CDATA[Algorithms, Vol. 9, Pages 2: Offset-Assisted Factored Solution of Nonlinear Systems]]>
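For context, a classical method that is optimal in the sense of the conjecture is Ostrowski's two-point scheme: three evaluations per iteration (f(x), f′(x), f(y)) and fourth-order convergence, i.e., order 2^(3−1). This is the general-purpose baseline, not the paper's quadratic-specific fifth-order method:

```python
def ostrowski(f, df, x, iters=10, tol=1e-14):
    # Ostrowski's two-point method: a Newton predictor y, then a corrector
    # using f(y); three evaluations per iteration, fourth-order convergence.
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx                               # Newton predictor
        fy = f(y)
        x = y - fy * (fx / (fx - 2.0 * fy)) / dfx      # Ostrowski corrector
    return x

root = ostrowski(lambda x: x ** 3 - x - 2.0, lambda x: 3.0 * x ** 2 - 1.0, 1.5)
assert abs(root ** 3 - root - 2.0) < 1e-10
```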
http://www.mdpi.com/1999-4893/9/1/2
This paper presents an improvement to the recently-introduced factored method for the solution of nonlinear equations. The basic idea consists of transforming the original system by adding an offset to all unknowns. When searching for real solutions, a real offset prevents the intermediate values of unknowns from becoming complex. Reciprocally, when searching for complex solutions, a complex offset is advisable to allow the iterative process to quickly abandon the real domain. Several examples are used to illustrate the performance of the proposed algorithm, when compared to Newton’s method.Algorithms2015-12-2391Article10.3390/a901000221999-48932015-12-23doi: 10.3390/a9010002José Ruiz-OltraCatalina Gómez-QuilesAntonio Gómez-Expósito<![CDATA[Algorithms, Vol. 8, Pages 1210-1218: Numerical Properties of Different Root-Finding Algorithms Obtained for Approximating Continuous Newton’s Method]]>
http://www.mdpi.com/1999-4893/8/4/1210
This paper is dedicated to the study of continuous Newton’s method, which is a generic differential equation whose associated flow tends to the zeros of a given polynomial. Firstly, we analyze some numerical features related to the root-finding methods obtained after applying different numerical methods for solving initial value problems. The relationship between the step size and the order of convergence is particularly considered. We have analyzed both the cases of a constant and non-constant step size in the procedure of integration. We show that working with a non-constant step, the well-known Chebyshev-Halley family of iterative methods for solving nonlinear scalar equations is obtained.Algorithms2015-12-1784Article10.3390/a8041210121012181999-48932015-12-17doi: 10.3390/a8041210José Gutiérrez<![CDATA[Algorithms, Vol. 8, Pages 1195-1209: A New Smoothing Conjugate Gradient Method for Solving Nonlinear Nonsmooth Complementarity Problems]]>
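The connection between the Newton flow and the classical iteration is easy to demonstrate: explicit Euler on x′(t) = −f(x)/f′(x) with step size h = 1 recovers Newton's method exactly, while smaller steps trace the same flow more cautiously (a damped variant). A minimal scalar sketch:

```python
def damped_newton(f, df, x, h, steps):
    # Explicit Euler on the Newton flow x'(t) = -f(x)/f'(x).
    # Step size h = 1 is exactly the classical Newton iteration.
    for _ in range(steps):
        x -= h * f(x) / df(x)
    return x

f, df = lambda x: x * x - 2.0, lambda x: 2.0 * x
# h = 1: plain Newton, quadratic convergence.
assert abs(damped_newton(f, df, 1.0, 1.0, 8) - 2.0 ** 0.5) < 1e-12
# h = 0.5: damped variant following the same flow, linear convergence.
assert abs(damped_newton(f, df, 1.0, 0.5, 60) - 2.0 ** 0.5) < 1e-9
```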
http://www.mdpi.com/1999-4893/8/4/1195
In this paper, by using the smoothing Fischer-Burmeister function, we present a new smoothing conjugate gradient method for solving nonlinear nonsmooth complementarity problems. The line search we use guarantees the descent of the method. Under suitable conditions, the new smoothing conjugate gradient method is proved to be globally convergent. Finally, preliminary numerical experiments show that the new method is efficient.Algorithms2015-12-1784Article10.3390/a8041195119512091999-48932015-12-17doi: 10.3390/a8041195Ajie ChuShouqiang DuYixiao Su<![CDATA[Algorithms, Vol. 8, Pages 1175-1194: A Data Analytic Algorithm for Managing, Querying, and Processing Uncertain Big Data in Cloud Environments]]>
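One widely used smoothing of the Fischer-Burmeister function (up to sign conventions) is φ_μ(a, b) = a + b − sqrt(a² + b² + 2μ²); whether this exact variant is the one used in the paper is an assumption, but it shows the idea: the μ = 0 limit vanishes precisely on complementary pairs, and μ > 0 makes the function differentiable everywhere at the cost of a small bias:

```python
import math

def fb_smooth(a, b, mu):
    # Smoothed Fischer-Burmeister function. At mu = 0 it is zero exactly
    # when a >= 0, b >= 0 and a * b = 0 (the complementarity conditions);
    # mu > 0 removes the kink at the origin.
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu * mu)

# mu = 0: vanishes only on complementary pairs.
assert abs(fb_smooth(0.0, 3.0, 0.0)) < 1e-15
assert abs(fb_smooth(2.0, 0.0, 0.0)) < 1e-15
assert fb_smooth(1.0, 1.0, 0.0) > 0.0       # a * b > 0 violates complementarity
# Small mu introduces only a small perturbation.
assert abs(fb_smooth(0.0, 3.0, 0.1) - fb_smooth(0.0, 3.0, 0.0)) < 0.01
```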
http://www.mdpi.com/1999-4893/8/4/1175
Big data are everywhere, as high volumes of a wide variety of valuable precise and uncertain data can easily be collected or generated at high velocity in various real-life applications. Embedded in these big data are rich sets of useful information and knowledge. To mine these big data and to discover useful information and knowledge, we present a data analytic algorithm in this article. Our algorithm manages, queries, and processes uncertain big data in cloud environments. More specifically, it manages transactions of uncertain big data, allows users to query these big data by specifying constraints expressing their interests, and processes the user-specified constraints to discover useful information and knowledge from the uncertain big data. As each item in every transaction in these uncertain big data is associated with an existential probability value expressing the likelihood of that item to be present in a particular transaction, computation could be intensive. Our algorithm uses the MapReduce model on a cloud environment for effective data analytics on these uncertain big data. Experimental results show the effectiveness of our data analytic algorithm for managing, querying, and processing uncertain big data in cloud environments.Algorithms2015-12-1184Article10.3390/a8041175117511941999-48932015-12-11doi: 10.3390/a8041175Fan JiangCarson Leung<![CDATA[Algorithms, Vol. 8, Pages 1143-1174: Generating Realistic Labelled, Weighted Random Graphs]]>
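As a small illustration of the probabilistic computation involved (not the paper's constraint-processing MapReduce algorithm): under the usual item-independence assumption, the expected support of an itemset over uncertain transactions is the sum, across transactions containing it, of the product of its items' existential probabilities:

```python
from functools import reduce

def expected_support(itemset, transactions):
    # Each transaction maps item -> existential probability. Assuming item
    # independence, a transaction contributes the product of the
    # probabilities of the itemset's items.
    total = 0.0
    for t in transactions:
        if all(item in t for item in itemset):
            total += reduce(lambda acc, item: acc * t[item], itemset, 1.0)
    return total

db = [
    {"a": 0.9, "b": 0.5},
    {"a": 0.6},
    {"a": 0.8, "b": 1.0},
]
assert abs(expected_support({"a"}, db) - 2.3) < 1e-12          # 0.9 + 0.6 + 0.8
assert abs(expected_support({"a", "b"}, db) - 1.25) < 1e-12    # 0.9*0.5 + 0.8*1.0
```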
http://www.mdpi.com/1999-4893/8/4/1143
Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixtures, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields an accurate estimation while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels and edge weights to graph structure.Algorithms2015-12-0884Article10.3390/a8041143114311741999-48932015-12-08doi: 10.3390/a8041143Michael DavisZhanyu MaWeiru LiuPaul MillerRuth HunterFrank Kee<![CDATA[Algorithms, Vol. 8, Pages 1129-1142: Efficiency Intra-Cluster Device-to-Device Relay Selection for Multicast Services Based on Combinatorial Auction]]>
http://www.mdpi.com/1999-4893/8/4/1129
In Long Term Evolution-Advanced (LTE-A) networks, Device-to-device (D2D) communications can be utilized to enhance the performance of multicast services by leveraging D2D relays to serve nodes with worse channel conditions within a cluster. For traditional D2D relay schemes, D2D links with poor channel conditions may become the bottleneck of the system sum data rate. In this paper, to optimize the throughput of D2D communications, we introduce an iterative combinatorial auction algorithm for efficient D2D relay selection. In the combinatorial auction, the User Equipments (UEs) that fail to correctly receive multicast data from the eNodeB (eNB) are viewed as bidders that compete for D2D relays, while the eNB is treated as the auctioneer. We also prove the convergence and low complexity of the proposed algorithm and present numerical simulations to verify its efficiency.Algorithms2015-12-0284Article10.3390/a8041129112911421999-48932015-12-02doi: 10.3390/a8041129Yong ZhangFangmin Li<![CDATA[Algorithms, Vol. 8, Pages 1121-1128: On the Local Convergence of a Third Order Family of Iterative Processes]]>
http://www.mdpi.com/1999-4893/8/4/1121
Efficiency is generally the most important aspect to take into account when choosing an iterative method to approximate a solution of an equation, but it is not the only aspect to consider in the iterative process. Another important aspect to consider is the accessibility of the iterative process, which shows the domain of starting points from which the iterative process converges to a solution of the equation. So, we consider a family of iterative processes with a higher efficiency index than Newton’s method. However, this family of processes presents problems of accessibility to the solution x * . From a local study of the convergence of this family, we perform an optimization study of the accessibility and obtain iterative processes with better accessibility than Newton’s method.Algorithms2015-12-0184Article10.3390/a8041121112111281999-48932015-12-01doi: 10.3390/a8041121M. Hernández-VerónN. Romero<![CDATA[Algorithms, Vol. 8, Pages 1111-1120: An Optimal Biparametric Multipoint Family and Its Self-Acceleration with Memory for Solving Nonlinear Equations]]>
http://www.mdpi.com/1999-4893/8/4/1111
In this paper, a family of Steffensen-type methods of optimal order of convergence with two parameters is constructed by direct Newtonian interpolation. It satisfies the conjecture proposed by Kung and Traub (J. Assoc. Comput. Math. 1974, 21, 634–651) that an iterative method based on m evaluations per iteration without memory would arrive at the optimal convergence order of 2^(m−1). Furthermore, a family of Steffensen-type methods of super convergence is suggested by using arithmetic expressions for the parameters with memory but no additional new evaluation of the function. Their error equations, asymptotic convergence constants and convergence orders are obtained. Finally, they are compared with related root-finding methods in the numerical examples.Algorithms2015-12-0184Article10.3390/a8041111111111201999-48932015-12-01doi: 10.3390/a8041111Quan ZhengXin ZhaoYufeng Liu<![CDATA[Algorithms, Vol. 8, Pages 1088-1110: Computer Aided Diagnosis System for Early Lung Cancer Detection]]>
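The base Steffensen iteration underlying such families replaces the derivative with the divided difference (f(x + f(x)) − f(x)) / f(x), staying derivative-free while keeping quadratic convergence. A minimal sketch without the paper's two parameters or memory acceleration:

```python
def steffensen(f, x, iters=30, tol=1e-12):
    # Steffensen's method: derivative-free; the divided difference
    # f[x, x + f(x)] stands in for f'(x), preserving quadratic convergence.
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx
        if denom == 0.0:
            break
        x -= fx * fx / denom
    return x

root = steffensen(lambda x: x * x - 2.0, 1.5)
assert abs(root - 2.0 ** 0.5) < 1e-10
```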
http://www.mdpi.com/1999-4893/8/4/1088
Lung cancer continues to rank as the leading cause of cancer deaths worldwide. One of the most promising techniques for early detection of cancerous cells relies on sputum cell analysis. This was the motivation behind the design and the development of a new computer aided diagnosis (CAD) system for early detection of lung cancer based on the analysis of sputum color images. The proposed CAD system encompasses four main processing steps. First is the preprocessing step, which utilizes a Bayesian classification method using histogram analysis. Then, in the second step, mean shift segmentation is applied to segment the nuclei from the cytoplasm. The third step is the feature analysis. In this step, geometric and chromatic features are extracted from the nucleus region. These features are used in the diagnostic process of the sputum images. Finally, the diagnosis is completed using an artificial neural network and support vector machine (SVM) for classifying the cells into benign or malignant. The performance of the system was analyzed based on different criteria such as sensitivity, specificity and accuracy. The evaluation was carried out using a Receiver Operating Characteristic (ROC) curve. The experimental results demonstrate the efficiency of the SVM classifier over other classifiers, with 97% sensitivity and accuracy as well as a significant reduction in the false positive and false negative rates.Algorithms2015-11-2084Article10.3390/a8041088108811101999-48932015-11-20doi: 10.3390/a8041088Fatma TaherNaoufel WerghiHussain Al-Ahmad<![CDATA[Algorithms, Vol. 8, Pages 1076-1087: Local Convergence of an Efficient High Convergence Order Method Using Hypothesis Only on the First Derivative]]>
http://www.mdpi.com/1999-4893/8/4/1076
We present a local convergence analysis of an eighth order three step method in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. In an earlier study by Sharma and Arora (2015), the order of convergence was shown using Taylor series expansions and hypotheses up to the fourth order derivative, or even higher, of the function involved, which restrict the applicability of the scheme, even though only the first order derivative appears in the scheme itself. To overcome this problem, we use hypotheses up to only the first order derivative. In this way, we not only expand the applicability of the method but also provide its convergence domain. Finally, a variety of concrete numerical examples to which the earlier studies cannot be applied are solved; our analysis does not exhibit this type of restriction.Algorithms2015-11-2084Article10.3390/a8041076107610871999-48932015-11-20doi: 10.3390/a8041076Ioannis ArgyrosRamandeep BehlS.S. Motsa<![CDATA[Algorithms, Vol. 8, Pages 1052-1075: A New Approach for Automatic Removal of Movement Artifacts in Near-Infrared Spectroscopy Time Series by Means of Acceleration Data]]>
http://www.mdpi.com/1999-4893/8/4/1052
Near-infrared spectroscopy (NIRS) enables the non-invasive measurement of changes in hemodynamics and oxygenation in tissue. Changes in light-coupling due to movement of the subject can cause movement artifacts (MAs) in the recorded signals. Several methods have been developed so far that facilitate the detection and reduction of MAs in the data. However, due to fixed parameter values (e.g., a global threshold), none of these methods is well suited to long-term (i.e., hours-long) recordings, and they are not time-effective when applied to large datasets. We aimed to overcome these limitations by automation, i.e., data-adaptive thresholding specifically designed for long-term measurements, and by introducing a stable long-term signal reconstruction. Our new technique (“acceleration-based movement artifact reduction algorithm”, AMARA) combines two methods: the “movement artifact reduction algorithm” (MARA, Scholkmann et al. Phys. Meas. 2010, 31, 649–662) and the “accelerometer-based motion artifact removal” (ABAMAR, Virtanen et al. J. Biomed. Opt. 2011, 16, 087005). We describe AMARA in detail and report the successful validation of the algorithm using empirical NIRS data, measured over the prefrontal cortex in adolescents during sleep. In addition, we compared the performance of AMARA to that of MARA and ABAMAR based on validation data.Algorithms2015-11-1984Article10.3390/a8041052105210751999-48932015-11-19doi: 10.3390/a8041052Andreas MetzMartin WolfPeter AchermannFelix Scholkmann<![CDATA[Algorithms, Vol. 8, Pages 1035-1051: Natalie 2.0: Sparse Global Network Alignment as a Special Case of Quadratic Assignment]]>
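The data-adaptive thresholding idea behind such long-term artifact detection can be illustrated with a minimal sketch; the window size and MAD multiplier here are hypothetical parameters, and this is only the thresholding stage, not AMARA itself:

```python
import statistics

def adaptive_artifact_mask(acc, window=5, k=3.0):
    """Flag samples whose local acceleration variability exceeds a
    data-adaptive threshold (median + k * MAD over all window scores).

    Illustration of data-adaptive thresholding only; AMARA additionally
    performs accelerometer-triggered segmentation and MARA-style
    signal reconstruction.
    """
    n = len(acc)
    half = window // 2
    # Local variability score: standard deviation over a centred window.
    scores = []
    for i in range(n):
        seg = acc[max(0, i - half):min(n, i + half + 1)]
        scores.append(statistics.pstdev(seg))
    med = statistics.median(scores)
    mad = statistics.median(abs(s - med) for s in scores)
    thr = med + k * (mad if mad > 0 else 1e-12)
    return [s > thr for s in scores]

# Quiet accelerometer trace with a movement burst in the middle.
acc = [0.0] * 20 + [5.0, -5.0, 4.0, -4.0] + [0.0] * 20
mask = adaptive_artifact_mask(acc)
```

Because the threshold is derived from the data itself, the same code works on recordings with very different baseline noise levels, which is the point of replacing a fixed global threshold.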
http://www.mdpi.com/1999-4893/8/4/1035
Data on molecular interactions is increasing at a tremendous pace, while the development of solid methods for analyzing this network data is still lagging behind. This holds in particular for the field of comparative network analysis, where one wants to identify commonalities between biological networks. Since biological functionality primarily operates at the network level, there is a clear need for topology-aware comparison methods. We present a method for global network alignment that is fast and robust and can flexibly deal with various scoring schemes taking both node-to-node correspondences as well as network topologies into account. We exploit that network alignment is a special case of the well-studied quadratic assignment problem (QAP). We focus on sparse network alignment, where each node can be mapped only to a typically small subset of nodes in the other network. This corresponds to a QAP instance with a symmetric and sparse weight matrix. We obtain strong upper and lower bounds for the problem by improving a Lagrangian relaxation approach and introduce the open source software tool Natalie 2.0, a publicly available implementation of our method. In an extensive computational study on protein interaction networks for six different species, we find that our new method outperforms alternative established and recent state-of-the-art methods.Algorithms2015-11-1884Article10.3390/a8041035103510511999-48932015-11-18doi: 10.3390/a8041035Mohammed El-KebirJaap HeringaGunnar Klau<![CDATA[Algorithms, Vol. 8, Pages 1021-1034: Semi-Supervised Classification Based on Mixture Graph]]>
http://www.mdpi.com/1999-4893/8/4/1021
Graph-based semi-supervised classification heavily depends on a well-structured graph. In this paper, we investigate a mixture graph and propose a method called semi-supervised classification based on mixture graph (SSCMG). SSCMG first constructs multiple k nearest neighbor (kNN) graphs in different random subspaces of the samples. Then, it combines these graphs into a mixture graph and incorporates this graph into a graph-based semi-supervised classifier. SSCMG can preserve the local structure of samples in subspaces and is less affected by noisy and redundant features. An empirical study on facial image classification shows that SSCMG not only has better recognition performance, but is also more robust to input parameters than other related methods.Algorithms2015-11-1684Article10.3390/a8041021102110341999-48932015-11-16doi: 10.3390/a8041021Lei FengGuoxian Yu<![CDATA[Algorithms, Vol. 8, Pages 999-1020: An Integer Linear Programming Formulation for the Minimum Cardinality Segmentation Problem]]>
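The graph-construction stage of such a method can be sketched as follows; the subspace size, number of graphs and uniform averaging weights are illustrative assumptions, and the semi-supervised classifier that consumes the mixture graph is omitted:

```python
import random

def knn_graph(X, feats, k):
    """Symmetric kNN adjacency using only the feature subset `feats`."""
    n = len(X)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d = sorted(
            (sum((X[i][f] - X[j][f]) ** 2 for f in feats), j)
            for j in range(n) if j != i
        )
        for _, j in d[:k]:
            A[i][j] = A[j][i] = 1.0
    return A

def mixture_graph(X, k=2, n_graphs=3, subspace=2, seed=0):
    """Average the kNN adjacencies built in random feature subspaces
    (a sketch of the mixture-graph construction stage)."""
    rng = random.Random(seed)
    n, dim = len(X), len(X[0])
    M = [[0.0] * n for _ in range(n)]
    for _ in range(n_graphs):
        feats = rng.sample(range(dim), subspace)
        A = knn_graph(X, feats, k)
        for i in range(n):
            for j in range(n):
                M[i][j] += A[i][j] / n_graphs
    return M

# Two tight clusters in 3-D; mixture edges should stay within clusters.
X = [[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [5, 5, 5], [5.1, 5, 5], [5, 5.1, 5]]
M = mixture_graph(X)
```

Averaging over random subspaces means a spurious edge caused by one noisy feature only contributes 1/n_graphs to the final weight, which is where the robustness to noisy and redundant features comes from.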
http://www.mdpi.com/1999-4893/8/4/999
In this article, we investigate the Minimum Cardinality Segmentation Problem (MCSP), an NP-hard combinatorial optimization problem arising in intensity-modulated radiation therapy. The problem consists in decomposing a given nonnegative integer matrix into a nonnegative integer linear combination of a minimum cardinality set of binary matrices satisfying the consecutive ones property. We show how to transform the MCSP into a combinatorial optimization problem on a weighted directed network and we exploit this result to develop an integer linear programming formulation to exactly solve it. Computational experiments show that the lower bounds obtained by the linear relaxation of the considered formulation improve upon those currently described in the literature and suggest, at the same time, new directions for the development of future exact solution approaches to the problem.Algorithms2015-11-1184Article10.3390/a804099999910201999-48932015-11-11doi: 10.3390/a8040999Daniele CatanzaroCéline Engelbeen<![CDATA[Algorithms, Vol. 8, Pages 982-998: Some Matrix Iterations for Computing Generalized Inverses and Balancing Chemical Equations]]>
http://www.mdpi.com/1999-4893/8/4/982
An application of iterative methods for computing the Moore–Penrose inverse to the balancing of chemical equations is considered. To illustrate the proposed algorithms, an improved high order hyper-power matrix iterative method for computing generalized inverses is introduced and applied. The improvements of the hyper-power iterative scheme are based on its proper factorization, as well as on the possibility of accelerating the iterations in the initial phase of convergence. The effectiveness of our approach is confirmed from a theoretical point of view, and numerical comparisons on balancing chemical equations, as well as on randomly generated matrices, are furnished.Algorithms2015-11-0384Article10.3390/a80409829829981999-48932015-11-03doi: 10.3390/a8040982Farahnaz SoleimaniPredrag StanimirovićFazlollah Soleymani<![CDATA[Algorithms, Vol. 8, Pages 965-981: A Particle Filter Track-Before-Detect Algorithm Based on Hybrid Differential Evolution]]>
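For intuition, the simplest (second-order) member of the hyper-power family is the Newton-Schulz iteration. The sketch below applies it to a small nonsingular matrix, assuming alpha is chosen small enough for convergence; it is not the paper's accelerated, factorized high-order variant:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def schulz_inverse(A, alpha=0.05, iters=12):
    """Second-order hyper-power (Newton-Schulz) iteration
    X_{k+1} = X_k (2I - A X_k), started from X_0 = alpha * A^T.

    Converges quadratically when alpha is below 2 / sigma_max(A)^2;
    the residual obeys I - A X_{k+1} = (I - A X_k)^2.
    """
    n = len(A)
    X = [[alpha * A[j][i] for j in range(n)] for i in range(n)]  # alpha * A^T
    for _ in range(iters):
        AX = matmul(A, X)
        R = [[(2.0 if i == j else 0.0) - AX[i][j] for j in range(n)]
             for i in range(n)]  # 2I - A X
        X = matmul(X, R)
    return X

A = [[4.0, 1.0], [2.0, 3.0]]
Ainv = schulz_inverse(A)  # exact inverse is [[0.3, -0.1], [-0.2, 0.4]]
```

The paper's improvements amount to rewriting a higher-order version of this recursion so that fewer matrix multiplications are needed per step, plus an accelerated start-up phase.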
http://www.mdpi.com/1999-4893/8/4/965
In this paper, we address the problem of detecting and tracking targets with a low signal-to-noise ratio (SNR) by exploiting hybrid differential evolution (HDE) in the particle filter track-before-detect (PF-TBD) context. Firstly, we introduce the Bayesian PF-TBD method and its weaknesses. Secondly, the HDE algorithm is employed as a novel particle updating strategy to optimize the performance of the PF-TBD algorithm. Thirdly, we incorporate a systematic resampling approach to further enhance performance. An improved PF-TBD algorithm based on the HDE method is thereby obtained. Experimental results indicate that the proposed method detects and tracks low-SNR targets better than previous algorithms.Algorithms2015-11-0384Article10.3390/a80409659659811999-48932015-11-03doi: 10.3390/a8040965Chaozhu ZhangLin LiYu Wang<![CDATA[Algorithms, Vol. 8, Pages 951-964: A New Swarm Intelligence Approach for Clustering Based on Krill Herd with Elitism Strategy]]>
http://www.mdpi.com/1999-4893/8/4/951
As one of the most popular and well-recognized clustering methods, the fuzzy C-means (FCM) algorithm is the basis of other fuzzy clustering analysis methods in both theory and applications. However, FCM is essentially a local search optimization algorithm and may therefore fail to find the global optimum. To overcome this disadvantage, a new version of the krill herd (KH) algorithm with an elitism strategy, called KHE, is proposed to solve the clustering problem. The elitism strategy has a strong ability to prevent the krill population from degrading. In addition, well-selected parameters are used in the KHE method instead of values originating from nature. An array of simulation experiments shows that KHE is indeed a good choice for solving general benchmark problems and fuzzy clustering analyses.Algorithms2015-10-2284Article10.3390/a80409519519641999-48932015-10-22doi: 10.3390/a8040951Zhi-Yong LiJiao-Hong YiGai-Ge Wang<![CDATA[Algorithms, Vol. 8, Pages 929-950: Series Arc Fault Detection Algorithm Based on Autoregressive Bispectrum Analysis]]>
http://www.mdpi.com/1999-4893/8/4/929
Arc fault is one of the most critical causes of electrical fires. Due to the diversity, randomness and concealment of arc faults in low-voltage circuits, it is difficult for general methods to protect all loads from series arc faults. Analysis of many series arc faults shows that circuits generate a large number of high frequency signals, which are easily corrupted by Gaussian noise that is difficult to eliminate because of frequency aliasing. Thus, a novel detection algorithm is developed in this paper to accurately detect series arc faults. Initially, an autoregressive model of the mixed high frequency signals is built. Then, autoregressive bispectrum analysis is introduced to analyze common series arc fault features; the phase information of the arc fault signal is preserved and the influence of Gaussian noise is effectively restrained. Afterwards, several features, including the characteristic frequency, fluctuation of phase angles, diffused distribution and incremental numbers of bispectrum peaks, are extracted for recognizing arc faults. Finally, a least squares support vector machine is used to accurately identify series arc faults from load states based on these bispectrum frequency features. The validity of the algorithm is experimentally verified, achieving an arc fault detection rate above 97%.Algorithms2015-10-1684Article10.3390/a80409299299501999-48932015-10-16doi: 10.3390/a8040929Kai YangRencheng ZhangShouhong ChenFujiang ZhangJianhong YangXingbin Zhang<![CDATA[Algorithms, Vol. 8, Pages 910-928: Effective Data Acquisition Protocol for Multi-Hop Heterogeneous Wireless Sensor Networks Using Compressive Sensing]]>
http://www.mdpi.com/1999-4893/8/4/910
In designing wireless sensor networks (WSNs), it is important to reduce energy dissipation and prolong network lifetime. Clustering of nodes is one of the most effective approaches for conserving energy in WSNs. Cluster formation protocols generally account for the heterogeneity of sensor nodes in terms of their energy levels but ignore their different transmission ranges. In this paper, we propose an effective data acquisition clustered protocol using compressive sensing (EDACP-CS) for heterogeneous WSNs that aims to conserve the energy of sensor nodes in the presence of both energy and transmission range heterogeneity. In EDACP-CS, cluster heads are selected based on the distance from the base station and on sensor residual energy. Simulation results show that our protocol performs much better than existing protocols in terms of energy consumption, stability, network lifetime, and throughput.Algorithms2015-10-1684Article10.3390/a80409109109281999-48932015-10-16doi: 10.3390/a8040910Ahmed Khedr<![CDATA[Algorithms, Vol. 8, Pages 895-909: On Some Improved Harmonic Mean Newton-Like Methods for Solving Systems of Nonlinear Equations]]>
http://www.mdpi.com/1999-4893/8/4/895
In this work, we have developed a fourth order Newton-like method based on the harmonic mean, and its multi-step version, for solving systems of nonlinear equations. The new fourth order method requires the evaluation of one function and two first order Fréchet derivatives per iteration; the multi-step version requires one more function evaluation per iteration. The proposed scheme does not require the evaluation of second or higher order Fréchet derivatives and still reaches fourth order convergence. The multi-step version converges with order 2r+4, where r ≥ 1 is a positive integer. We have proved that the root α is a point of attraction for a general iterative function, and that the proposed schemes satisfy this result. Numerical experiments, including an application to the 1-D Bratu problem, are given to illustrate the efficiency of the new methods, which are also compared with some existing methods.Algorithms2015-10-0984Article10.3390/a80408958959091999-48932015-10-09doi: 10.3390/a8040895Diyashvir BabajeeKalyanasundaram MadhuJayakumar Jayaraman<![CDATA[Algorithms, Vol. 8, Pages 870-894: Code Synchronization Algorithm Based on Segment Correlation in Spread Spectrum Communication]]>
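The scalar analogue of the harmonic-mean idea replaces f'(x) in Newton's step by the harmonic mean of two derivative evaluations. The sketch below shows that classical scalar scheme; the paper's fourth-order method applies the same idea to systems via Fréchet derivatives:

```python
def harmonic_newton(f, fp, x0, tol=1e-12, max_iter=30):
    """Harmonic-mean Newton step: divide f(x) by the harmonic mean of
    f'(x) and f'(y), where y is the ordinary Newton predictor.

    Scalar illustration only; the system version replaces the
    derivatives with Frechet derivatives (Jacobians).
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dx = fp(x)
        y = x - fx / dx                    # Newton predictor
        dy = fp(y)
        x = x - fx * (dx + dy) / (2.0 * dx * dy)  # harmonic-mean corrector
    return x

# Root of f(x) = x**3 - x - 2 near x = 1.5 (approximately 1.52138).
r = harmonic_newton(lambda x: x ** 3 - x - 2, lambda x: 3 * x ** 2 - 1, 1.5)
```

Note that one extra derivative evaluation per step lifts the order above Newton's two, which is the same trade-off the paper quantifies for systems.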
http://www.mdpi.com/1999-4893/8/4/870
Spread Spectrum (SPSP) Communication is the theoretical basis of Direct Sequence Spread Spectrum (DSSS) transceiver technology. Spreading code, modulation, demodulation, carrier synchronization and code synchronization in SPSP communications are the core parts of DSSS transceivers. This paper focuses on the code synchronization problem in SPSP communications. A novel code synchronization algorithm based on segment correlation is proposed. The proposed algorithm can effectively deal with the informational misjudgment caused by the unreasonable data acquisition times. This misjudgment may lead to an inability of DSSS receivers to restore transmitted signals. Simulation results show the feasibility of a DSSS transceiver design based on the proposed code synchronization algorithm. Finally, the communication functions of the DSSS transceiver based on the proposed code synchronization algorithm are implemented on Field Programmable Gate Array (FPGA).Algorithms2015-10-0984Article10.3390/a80408708708941999-48932015-10-09doi: 10.3390/a8040870Aohan LiZiheng YangRenji QiFeng ZhouGuangjie Han<![CDATA[Algorithms, Vol. 8, Pages 850-869: Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric]]>
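The segment-correlation idea can be illustrated by splitting the spreading code into segments and summing per-segment correlation magnitudes at each candidate shift. The code, segment count and delay below are illustrative assumptions; the paper's acquisition logic and misjudgment handling are more involved:

```python
def segment_correlate(rx, code, n_segments=4):
    """Estimate the code phase by summing per-segment correlation
    magnitudes over every candidate shift (a sketch of the
    segment-correlation idea).
    """
    L = len(code)
    seg = L // n_segments
    best_shift, best_score = 0, float("-inf")
    for shift in range(L):
        score = 0.0
        for s in range(n_segments):
            c = sum(rx[(shift + s * seg + i) % len(rx)] * code[s * seg + i]
                    for i in range(seg))
            score += abs(c)  # magnitudes tolerate per-segment sign flips
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# +/-1 spreading code, received with a cyclic delay of 3 chips.
code = [1, -1, 1, 1, -1, -1, 1, -1]
delay = 3
rx = code[-delay:] + code[:-delay]  # rx[i] = code[(i - delay) % 8]
phase = segment_correlate(rx, code)
```

Summing segment magnitudes rather than one full-length correlation is what lets this style of synchronizer tolerate data-bit sign flips inside the correlation window.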
http://www.mdpi.com/1999-4893/8/4/850
In this work, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows one to avoid pairwise comparisons on the entire database and, thus, to significantly accelerate exploring the protein space compared to no-metric spaces. We show on a gold standard superfamily classification benchmark set of 6759 proteins that our exact k-nearest neighbor (k-NN) scheme classifies up to 224 out of 236 queries correctly and, on a larger, extended version of the benchmark with 60,850 additional structures, up to 1361 out of 1369 queries. Our k-NN classification thus provides a promising approach for the automatic classification of protein structures based on flexible contact map overlap alignments.Algorithms2015-10-0984Article10.3390/a80408508508691999-48932015-10-09doi: 10.3390/a8040850Rumen AndonovHristo DjidjevGunnar KlauMathilde Boudic-JaminInken Wohlers<![CDATA[Algorithms, Vol. 8, Pages 832-849: Newton-Type Methods on Generalized Banach Spaces and Applications in Fractional Calculus]]>
http://www.mdpi.com/1999-4893/8/4/832
We present a semilocal convergence study of Newton-type methods on a generalized Banach space setting to approximate a locally unique zero of an operator. Earlier studies require that the operator involved is Fréchet differentiable. In the present study we assume that the operator is only continuous. This way we extend the applicability of Newton-type methods to include fractional calculus and problems from other areas. Moreover, under the same or weaker conditions, we obtain weaker sufficient convergence criteria, tighter error bounds on the distances involved and at least as precise information on the location of the solution. Special cases are provided where the old convergence criteria cannot apply but the new criteria can apply to locate zeros of operators. Some applications include fractional calculus involving the Riemann-Liouville fractional integral and the Caputo fractional derivative. Fractional calculus is very important for its applications in many applied sciences.Algorithms2015-10-0984Article10.3390/a80408328328491999-48932015-10-09doi: 10.3390/a8040832George AnastassiouIoannis Argyros<![CDATA[Algorithms, Vol. 8, Pages 810-831: Finding Supported Paths in Heterogeneous Networks]]>
http://www.mdpi.com/1999-4893/8/4/810
Subnetwork mining is an essential issue in the analysis of biological, social and communication networks. Recent applications require the simultaneous mining of several networks on the same or a similar vertex set. That is, one searches for subnetworks fulfilling different properties in each input network. We study the case that the input consists of a directed graph D and an undirected graph G on the same vertex set, and the sought pattern is a path P in D whose vertex set induces a connected subgraph of G. In this context, three concrete problems arise, depending on whether the existence of P is questioned or whether the length of P is to be optimized: in that case, one can search for a longest path or (maybe less intuitively) a shortest one. These problems have immediate applications in biological networks and predictable applications in social, information and communication networks. We study the classic and parameterized complexity of the problem, thus identifying polynomial and NP-complete cases, as well as fixed-parameter tractable and W[1]-hard cases. We also propose two enumeration algorithms that we evaluate on synthetic and biological data.Algorithms2015-10-0984Article10.3390/a80408108108311999-48932015-10-09doi: 10.3390/a8040810Guillaume FertinChristian KomusiewiczHafedh Mohamed-BabouIrena Rusu<![CDATA[Algorithms, Vol. 8, Pages 799-809: Reweighted Factor Selection for SLMS-RL1 Algorithm under Gaussian Mixture Noise Environments]]>
http://www.mdpi.com/1999-4893/8/4/799
The sign least mean square with reweighted L1-norm constraint (SLMS-RL1) algorithm is an attractive sparse channel estimation method among Gaussian mixture model (GMM) based algorithms for use in impulsive noise environments. The channel sparsity can be exploited by the SLMS-RL1 algorithm based on an appropriate reweighted factor, which is one of the key parameters adjusting the sparse constraint in SLMS-RL1. However, to the best of the authors’ knowledge, a reweighted factor selection scheme has not been developed. This paper proposes a Monte-Carlo (MC) based reweighted factor selection method to further strengthen the performance of the SLMS-RL1 algorithm. To validate the performance of SLMS-RL1 using the proposed reweighted factor, simulation results are provided to demonstrate that the convergence speed can be reduced by increasing the channel sparsity, while the steady-state MSE performance only slightly changes with different GMM impulsive-noise strengths.Algorithms2015-09-2584Article10.3390/a80407997998091999-48932015-09-25doi: 10.3390/a8040799Tingping ZhangGuan Gui<![CDATA[Algorithms, Vol. 8, Pages 786-798: A Family of Newton Type Iterative Methods for Solving Nonlinear Equations]]>
http://www.mdpi.com/1999-4893/8/3/786
In this paper, a general family of n-point Newton type iterative methods for solving nonlinear equations is constructed by using direct Hermite interpolation. The order of convergence of the new n-point iterative methods without memory is 2^n, requiring the evaluation of n functions and one first-order derivative per full iteration, which implies that this family is optimal according to Kung and Traub’s conjecture (1974). Its error equations and asymptotic convergence constants are obtained. The n-point iterative methods with memory are obtained by using a self-accelerating parameter, and achieve much faster convergence than the corresponding n-point methods without memory. The increase in convergence order is attained without any additional calculations, so the n-point Newton type iterative methods with memory possess a very high computational efficiency. Numerical examples are given to confirm the theoretical results.Algorithms2015-09-2283Article10.3390/a80307867867981999-48932015-09-22doi: 10.3390/a8030786Xiaofeng WangYuping QinWeiyi QianSheng ZhangXiaodong Fan<![CDATA[Algorithms, Vol. 8, Pages 774-785: Parallel Variants of Broyden’s Method]]>
http://www.mdpi.com/1999-4893/8/3/774
In this paper we investigate some parallel variants of Broyden’s method and, for the basic variant, present its convergence properties. The main result is that the behavior of the considered parallel Broyden variants is comparable with that of the classical parallel Newton method, and significantly better than that of the parallel Cimmino method, in both linear and nonlinear cases. The considered variants are also compared with two more recently proposed parallel Broyden’s methods. Numerical experiments are presented to illustrate the advantages and limits of the proposed algorithms.Algorithms2015-09-1583Article10.3390/a80307747747851999-48932015-09-15doi: 10.3390/a8030774Ioan BistranStefan MarusterLiviu Mafteiu-Scai<![CDATA[Algorithms, Vol. 8, Pages 754-773: Modified Classical Graph Algorithms for the DNA Fragment Assembly Problem]]>
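The basic serial Broyden iteration that these variants parallelize can be sketched for a 2x2 system as follows; this is the textbook "good" Broyden update, not the parallel algorithms studied in the paper, and the starting Jacobian here is an illustrative choice:

```python
def broyden(F, x0, B0, tol=1e-10, max_iter=50):
    """Broyden's (good) method for F(x) = 0 in R^2, with the rank-one
    Jacobian update B += ((dF - B s) s^T) / (s^T s)."""
    x, B = list(x0), [row[:] for row in B0]
    fx = F(x)
    for _ in range(max_iter):
        if max(abs(v) for v in fx) < tol:
            return x
        # Solve B s = -fx by Cramer's rule (2x2 case).
        det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
        s = [(-fx[0] * B[1][1] + fx[1] * B[0][1]) / det,
             (fx[0] * B[1][0] - fx[1] * B[0][0]) / det]
        x = [x[0] + s[0], x[1] + s[1]]
        fnew = F(x)
        dF = [fnew[0] - fx[0], fnew[1] - fx[1]]
        ss = s[0] * s[0] + s[1] * s[1]
        Bs = [B[0][0] * s[0] + B[0][1] * s[1],
              B[1][0] * s[0] + B[1][1] * s[1]]
        for i in range(2):
            for j in range(2):
                B[i][j] += (dF[i] - Bs[i]) * s[j] / ss
        fx = fnew
    return x

# Solve x^2 + y^2 = 4, x*y = 1, starting near (2, 0.5) with the
# Jacobian at the start point as B0.
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
sol = broyden(F, [2.0, 0.5], [[4.0, 1.0], [0.5, 2.0]])
```

The rank-one update is what the parallel variants distribute: each processor can hold a block of B and apply its share of the update independently.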
http://www.mdpi.com/1999-4893/8/3/754
DNA fragment assembly represents an important challenge to the development of efficient and practical algorithms due to the large number of elements to be assembled. In this study, we present some graph theoretical linear time algorithms to solve the problem. To achieve linear time complexity, a heap with constant time operations was developed, for the special case where the edge weights are integers and do not depend on the problem size. The experiments presented show that modified classical graph theoretical algorithms can solve the DNA fragment assembly problem efficiently.Algorithms2015-09-1083Article10.3390/a80307547547731999-48932015-09-10doi: 10.3390/a8030754Guillermo Mallén-FullertonJ. Quiroz-IbarraAntonio MirandaGuillermo Fernández-Anaya<![CDATA[Algorithms, Vol. 8, Pages 743-753: A CS Recovery Algorithm for Model and Time Delay Identification of MISO-FIR Systems]]>
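A heap with constant-time operations for bounded integer keys is typically realized as a bucket queue; the minimal sketch below shows the idea (the paper's assembler adds overlap-graph specifics, which are omitted):

```python
class BucketQueue:
    """Monotone priority queue with O(1) push and amortized O(1) pop
    when keys are integers in a fixed range [0, max_key].

    One list per key value; pop_min scans forward from the smallest
    possibly non-empty bucket, so the total scanning cost over a whole
    run is bounded by max_key, independent of the number of items.
    """
    def __init__(self, max_key):
        self.buckets = [[] for _ in range(max_key + 1)]
        self.cur = 0          # smallest possibly non-empty bucket
        self.size = 0

    def push(self, key, item):
        self.buckets[key].append(item)
        self.size += 1
        if key < self.cur:
            self.cur = key

    def pop_min(self):
        while not self.buckets[self.cur]:
            self.cur += 1
        self.size -= 1
        return self.cur, self.buckets[self.cur].pop()

q = BucketQueue(10)
for key, name in [(3, "a"), (1, "b"), (3, "c"), (7, "d")]:
    q.push(key, name)
order = [q.pop_min() for _ in range(4)]
```

Because overlap scores between fragments are small integers that do not grow with the input, replacing a comparison heap (O(log n) per operation) with this structure is what brings the classical greedy/MST-style assembly steps down to linear time.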
http://www.mdpi.com/1999-4893/8/3/743
This paper considers identifying multiple input single output finite impulse response (MISO-FIR) systems with unknown time delays and orders. Generally, the parameters, orders and time delays of an MISO system are identified separately by different algorithms. In this paper, we aim to perform the model identification and time delay estimation simultaneously from a limited number of observations. For an MISO-FIR system with many inputs and unknown input time delays, the corresponding identification model contains a large number of parameters, requiring a great number of observations for identification and leading to a heavy computational burden. Inspired by compressed sensing (CS) recovery theory, a threshold orthogonal matching pursuit algorithm (TH-OMP) is presented to simultaneously identify the parameters, orders and time delays of MISO-FIR systems. The proposed algorithm requires only a small number of sampled data compared to conventional identification methods, such as the least squares method. The effectiveness of the proposed algorithm is verified by simulation results.Algorithms2015-09-1083Article10.3390/a80307437437531999-48932015-09-10doi: 10.3390/a8030743Yanjun LiuTaiyang Tao<![CDATA[Algorithms, Vol. 8, Pages 723-742: A Comparative Study of Modern Heuristics on the School Timetabling Problem]]>
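A generic threshold-stopped orthogonal matching pursuit loop can be sketched as follows. The dictionary, signal and stopping threshold are illustrative, and the paper's coupling of the threshold to order and time-delay detection is omitted:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small square systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def omp_threshold(Phi, y, threshold):
    """Orthogonal matching pursuit that stops once the residual norm
    falls below `threshold` (a generic sketch of the TH-OMP idea)."""
    m, n = len(Phi), len(Phi[0])
    support, coef, r = [], [], y[:]
    while sum(v * v for v in r) ** 0.5 > threshold and len(support) < m:
        # Pick the column most correlated with the residual.
        j = max((c for c in range(n) if c not in support),
                key=lambda c: abs(sum(Phi[i][c] * r[i] for i in range(m))))
        support.append(j)
        # Least squares on the current support via normal equations.
        G = [[sum(Phi[i][a] * Phi[i][b] for i in range(m)) for b in support]
             for a in support]
        rhs = [sum(Phi[i][a] * y[i] for i in range(m)) for a in support]
        coef = solve(G, rhs)
        est = [sum(Phi[i][support[k]] * coef[k] for k in range(len(support)))
               for i in range(m)]
        r = [y[i] - est[i] for i in range(m)]
    x = [0.0] * n
    for k, j in enumerate(support):
        x[j] = coef[k]
    return x

# Recover a 2-sparse vector measured through a 4x6 dictionary.
Phi = [[1, 0, 0, 1, 1, 0],
       [0, 1, 0, 1, 0, 1],
       [0, 0, 1, 0, 1, 1],
       [1, 1, 1, 0, 0, 0]]
x_true = [0, 0, 0, 2.0, 0, -1.0]
y = [sum(Phi[i][j] * x_true[j] for j in range(6)) for i in range(4)]
x_hat = omp_threshold(Phi, y, 1e-8)
```

In the identification setting, the recovered support positions encode the active taps (and hence the time delays and orders), while the coefficients give the parameter estimates, which is why one greedy loop can deliver all three at once.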
http://www.mdpi.com/1999-4893/8/3/723
In this contribution, a comparative study of modern heuristics on the school timetabling problem is presented. More precisely, we investigate the application of two population-based algorithms, namely Particle Swarm Optimization (PSO) and Artificial Fish Swarm (AFS), to the high school timetabling problem. In order to demonstrate their efficiency and performance, experiments with real-world input data have been performed. Both proposed algorithms manage to create feasible and efficient high school timetables, thus adequately fulfilling the timetabling needs of the respective high schools. Computational results demonstrate that both algorithms reach efficient solutions, most of the time better than existing approaches applied to the same school timetabling input instances using the same evaluation criteria.Algorithms2015-08-2883Article10.3390/a80307237237421999-48932015-08-28doi: 10.3390/a8030723Iosif KatsaragakisIoannis TassopoulosGrigorios Beligiannis
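A minimal global-best PSO for a continuous toy objective can be sketched as follows. This is a generic textbook version with assumed inertia and acceleration coefficients; the paper's PSO encodes lessons-to-timeslot assignments and hard/soft constraint costs, which are omitted here:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Global-best particle swarm minimization over a box domain.

    Each particle blends its previous velocity (inertia w) with pulls
    toward its personal best (c1) and the swarm's global best (c2).
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimise the sphere function on [-5, 5]^2; the optimum is (0, 0).
best, best_val = pso(lambda p: p[0] ** 2 + p[1] ** 2, [(-5, 5), (-5, 5)])
```

For timetabling, the continuous positions are replaced by discrete assignments and f by a weighted count of violated hard and soft constraints, but the inertia/personal-best/global-best structure of the update is unchanged.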