Algorithms
http://www.mdpi.com/journal/algorithms
Latest open access articles published in Algorithms at http://www.mdpi.com/journal/algorithms

Algorithms, Vol. 10, Pages 57: A Prediction of Precipitation Data Based on Support Vector Machine and Particle Swarm Optimization (PSO-SVM) Algorithms
http://www.mdpi.com/1999-4893/10/2/57
Precipitation is a very important topic in weather forecasting. Weather forecasting, especially precipitation prediction, is a complex task, because it depends on various parameters, such as temperature, humidity, and wind speed and direction, which change from time to time, and because the calculation varies with geographical location and its atmospheric variables. To improve the prediction accuracy of precipitation, this paper proposes a rainfall forecasting model based on a Support Vector Machine with Particle Swarm Optimization (PSO-SVM) to replace the linear threshold used in traditional precipitation prediction. Parameter selection has a critical impact on the predictive accuracy of SVM, and PSO is used to find the optimal parameters for SVM. The PSO-SVM algorithm was used to train a model on historical data for precipitation prediction, providing information that people from all walks of life can use to make informed decisions. Simulations on a set of experimental data demonstrate that the proposed algorithm achieves much better accuracy than a direct prediction model, other things being equal. Moreover, the simulation results demonstrate the effectiveness and advantages of the PSO-SVM model in machine learning and promise further improvement as more relevant attributes become available for predicting the dependent variables.
Algorithms 2017, 10(2), 57 (Article); doi: 10.3390/a10020057; published 2017-05-17. Authors: Jinglin Du, Yayun Liu, Yanan Yu, Weilan Yan.

Algorithms, Vol. 10, Pages 56: Clustering Using an Improved Krill Herd Algorithm
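As a rough illustration of the PSO half of the PSO-SVM abstract above, here is a minimal particle swarm optimizer in plain Python. The paper's actual objective (SVM cross-validation error), search bounds, and swarm settings are not given, so a smooth surrogate function over illustrative (C, gamma) bounds stands in for them:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a box-constrained search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in for the SVM cross-validation error as a function of (C, gamma);
# a real implementation would train and validate an SVM here instead.
def surrogate_cv_error(p):
    C, gamma = p
    return (C - 10.0) ** 2 + (gamma - 0.5) ** 2

best, err = pso_minimize(surrogate_cv_error, bounds=[(0.1, 100.0), (0.001, 1.0)])
```

In the actual PSO-SVM setting, `surrogate_cv_error` would be replaced by a function that trains an SVM with the candidate parameters and returns its validation error.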
http://www.mdpi.com/1999-4893/10/2/56
In recent years, metaheuristic algorithms have been widely used to solve clustering problems because of their good performance and application effects. The krill herd algorithm (KHA) is a new and effective algorithm for solving optimization problems, based on imitating the behavior of individual krill, and it has been shown to perform better than other swarm intelligence algorithms. However, it still has some weaknesses. In this paper, an improved krill herd algorithm (IKHA) is studied. Modified mutation operators and updated mechanisms are applied to improve global optimization; the proposed IKHA overcomes the weaknesses of KHA and performs better than KHA on optimization problems. Then, KHA and IKHA are applied to the clustering problem: in our proposed clustering algorithm, KHA and IKHA are used to find appropriate cluster centers. Experiments were conducted on University of California Irvine (UCI) standard datasets, and the results showed that the IKHA clustering algorithm is the most effective.
Algorithms 2017, 10(2), 56 (Article); doi: 10.3390/a10020056; published 2017-05-17. Authors: Qin Li, Bo Liu.

Algorithms, Vol. 10, Pages 54: Extending the Applicability of the MMN-HSS Method for Solving Systems of Nonlinear Equations under Generalized Conditions
http://www.mdpi.com/1999-4893/10/2/54
We present the semilocal convergence of a multi-step modified Newton-Hermitian and Skew-Hermitian Splitting method (MMN-HSS method) to approximate a solution of a nonlinear equation. Earlier studies show convergence only under Lipschitz conditions, limiting the applicability of this method. The convergence in this study is shown under generalized Lipschitz-type conditions and restricted convergence domains; hence, the applicability of the method is extended. Moreover, numerical examples are provided to show that our results can be applied to solve equations in cases where the earlier studies cannot be applied. Furthermore, in the cases where both the old and new results are applicable, the latter provide a larger domain of convergence and tighter error bounds on the distances involved.
Algorithms 2017, 10(2), 54 (Article); doi: 10.3390/a10020054; published 2017-05-12. Authors: Ioannis Argyros, Janak Sharma, Deepak Kumar.

Algorithms, Vol. 10, Pages 55: Erratum: Ahmad, F., et al. A Preconditioned Iterative Method for Solving Systems of Nonlinear Equations Having Unknown Multiplicity. Algorithms 2017, 10, 17
http://www.mdpi.com/1999-4893/10/2/55
n/a
Algorithms 2017, 10(2), 55 (Erratum); doi: 10.3390/a10020055; published 2017-05-12. Authors: Fayyaz Ahmad, Toseef Bhutta, Umar Shoaib, Malik Ullah, Ali Alshomrani, Shamshad Ahmad, Shahid Ahmad.

Algorithms, Vol. 10, Pages 53: Application of Gradient Descent Continuous Actor-Critic Algorithm for Bilateral Spot Electricity Market Modeling Considering Renewable Power Penetration
http://www.mdpi.com/1999-4893/10/2/53
The bilateral spot electricity market is very complicated because all generation units and demands must bid strategically in this market. Considering renewable resource penetration, the high variability and non-dispatchable nature of these intermittent resources make it even more difficult to model and simulate the dynamic bidding process and the equilibrium in the bilateral spot electricity market, which makes developing fast and reliable market modeling approaches a matter of urgency. In this paper, a Gradient Descent Continuous Actor-Critic algorithm is proposed for hour-ahead bilateral electricity market modeling in the presence of renewable resources, because this algorithm can solve electricity market modeling problems with continuous state and action spaces without incurring the “curse of dimensionality”, and it has low time complexity. In our simulation, the proposed approach is implemented on an IEEE 30-bus test system. The adequacy of the proposed approach, such as reaching Nash equilibrium results after enough training iterations, is tested and verified, and some conclusions about the relationship between increasing renewable power output and participants’ bidding strategies, locational marginal prices, and social welfare are also drawn. Moreover, a comparison of our proposed approach with the fuzzy Q-learning-based electricity market approach, also implemented in this paper, confirms the superiority of our proposed approach in terms of participants’ profits, social welfare, average locational marginal prices, etc.
Algorithms 2017, 10(2), 53 (Article); doi: 10.3390/a10020053; published 2017-05-10. Authors: Huiru Zhao, Yuwei Wang, Mingrui Zhao, Chuyu Sun, Qingkun Tan.

Algorithms, Vol. 10, Pages 51: Adaptive Vector Quantization for Lossy Compression of Image Sequences
http://www.mdpi.com/1999-4893/10/2/51
In this work, we present a scheme for the lossy compression of image sequences, based on the Adaptive Vector Quantization (AVQ) algorithm. The AVQ algorithm is a lossy compression algorithm for grayscale images, which processes the input data in a single pass, using the properties of vector quantization to approximate the data. First, we review the key aspects of the AVQ algorithm and, subsequently, we outline the basic concepts and the design choices behind the proposed scheme. Finally, we report the experimental results, which highlight an improvement in compression performance when our scheme is compared with the AVQ algorithm.
Algorithms 2017, 10(2), 51 (Article); doi: 10.3390/a10020051; published 2017-05-09. Authors: Raffaele Pizzolante, Bruno Carpentieri, Sergio De Agostino.

Algorithms, Vol. 10, Pages 52: Searchable Data Vault: Encrypted Queries in Secure Distributed Cloud Storage
http://www.mdpi.com/1999-4893/10/2/52
Cloud storage services allow users to efficiently outsource their documents anytime and anywhere. Such convenience, however, leads to privacy concerns. While storage providers may not read users’ documents, attackers may possibly gain access by exploiting vulnerabilities in the storage system. Documents may also be leaked by curious administrators. A simple solution is for the user to encrypt all documents before submitting them. This method, however, makes it impossible to efficiently search for documents as they are all encrypted. To resolve this problem, we propose a multi-server searchable symmetric encryption (SSE) scheme and construct a system called the searchable data vault (SDV). A unique feature of the scheme is that it allows an encrypted document to be divided into blocks and distributed to different storage servers so that no single storage provider has a complete document. By incorporating the scheme, the SDV protects the privacy of documents while allowing for efficient private queries. It utilizes a web interface and a controller that manages user credentials, query indexes and submission of encrypted documents to cloud storage services. It is also the first system that enables a user to simultaneously outsource and privately query documents from a few cloud storage services. Our preliminary performance evaluation shows that this feature introduces acceptable computation overheads when compared to submitting documents directly to a cloud storage service.
Algorithms 2017, 10(2), 52 (Article); doi: 10.3390/a10020052; published 2017-05-09. Authors: Geong Poh, Vishnu Baskaran, Ji-Jian Chin, Moesfa Mohamad, Kay Win Lee, Dharmadharshni Maniam, Muhammad Z’aba.

Algorithms, Vol. 10, Pages 50: Hierarchical Parallel Evaluation of a Hamming Code
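A minimal sketch of the block-distribution idea from the searchable data vault abstract above: a document is split into blocks that are spread across storage servers so that no single server holds the complete document. The round-robin assignment and block size are illustrative assumptions only; the real SDV also encrypts every block and maintains searchable indexes, which are omitted here:

```python
def split_into_blocks(document, block_size, n_servers):
    """Split a document into fixed-size blocks and assign them round-robin
    to storage servers, so no single server holds the complete document."""
    blocks = [document[i:i + block_size] for i in range(0, len(document), block_size)]
    servers = [[] for _ in range(n_servers)]
    for idx, blk in enumerate(blocks):
        servers[idx % n_servers].append((idx, blk))  # keep the index for reassembly
    return servers

def reassemble(servers):
    """Collect the (index, block) pairs from all servers and restore order."""
    indexed = sorted(blk for srv in servers for blk in srv)
    return b"".join(blk for _, blk in indexed)

doc = b"attack at dawn; retreat at dusk"
servers = split_into_blocks(doc, block_size=4, n_servers=3)
restored = reassemble(servers)
```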
http://www.mdpi.com/1999-4893/10/2/50
The Hamming code is a well-known error correction code and can correct a single error in an input vector of size n bits by adding log n parity checks. A new parallel implementation of the code is presented, using a hierarchical structure of n processors in log n layers. All the processors perform similar simple tasks and need only a few bytes of internal memory.
Algorithms 2017, 10(2), 50 (Article); doi: 10.3390/a10020050; published 2017-04-30. Authors: Shmuel Klein, Dana Shapira.

Algorithms, Vol. 10, Pages 49: Multivariate Statistical Process Control Using Enhanced Bottleneck Neural Network
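The parity-check mechanism in the Hamming code abstract above can be made concrete with the classic Hamming(7,4) instance, where recomputing the parity checks yields a syndrome that, read as a binary number, is the position of a single flipped bit (the paper's hierarchical n-processor layout is not reproduced here):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7,
    parity bits at the power-of-two positions 1, 2 and 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; their value, read as a binary number,
    is the 1-based position of a single flipped bit (0 means no error)."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1        # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]  # extract the data bits

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1  # flip one bit in transit
decoded = hamming74_correct(code)
```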
http://www.mdpi.com/1999-4893/10/2/49
Monitoring process upsets and malfunctions as early as possible, and then finding and removing the factors causing them, is of great importance for safe operation and improved productivity. Conventional process monitoring using principal component analysis (PCA) often assumes that process data follow a Gaussian distribution. However, this constraint cannot be satisfied in practice because many industrial processes frequently span multiple operating states. To overcome this difficulty, PCA can be combined with nonparametric control charts, for which no distributional assumption is needed. However, this approach still uses a constant confidence limit, so a relatively high rate of false alarms is generated. Although nonlinear PCA (NLPCA) using autoassociative bottleneck neural networks plays an important role in the monitoring of industrial processes, it is difficult to design appropriate monitoring statistics and confidence limits for checking new performance. In this work, a new monitoring strategy using an enhanced bottleneck neural network (EBNN) with an adaptive confidence limit for non-Gaussian data is proposed. The basic idea is to extract internally homogeneous segments from the historical normal data sets by fitting a Gaussian mixture model (GMM). Based on the assumption that process data follow a Gaussian distribution within an operating mode, a local confidence limit can be established. The EBNN is used to reconstruct the input data and to estimate the probabilities of belonging to the various local operating regimes, as modelled by the GMM. An abnormal event for an input measurement vector is detected if the squared prediction error (SPE) is too large, i.e., above a threshold that is made adaptive. Moreover, the sensor validity index (SVI) is employed successfully to identify the detected faulty variable. The results demonstrate that, compared with NLPCA, the proposed approach can effectively reduce the number of false alarms, and it is hence expected to better monitor many practical processes.
Algorithms 2017, 10(2), 49 (Article); doi: 10.3390/a10020049; published 2017-04-29. Authors: Khaled Bouzenad, Messaoud Ramdani.

Algorithms, Vol. 10, Pages 48: Adaptive Mutation Dynamic Search Fireworks Algorithm
http://www.mdpi.com/1999-4893/10/2/48
The Dynamic Search Fireworks Algorithm (dynFWA) is an effective algorithm for solving optimization problems. However, dynFWA easily falls into local optima prematurely, and it also has a slow convergence rate. To address these problems, an adaptive mutation dynamic search fireworks algorithm (AMdynFWA) is introduced in this paper. The proposed algorithm applies either Gaussian mutation or Levy mutation to the core firework (CF), selected with a mutation probability. Our simulations compare the proposed algorithm with FWA-based algorithms and other swarm intelligence algorithms. The results show that the proposed algorithm achieves better overall performance on the standard test functions.
Algorithms 2017, 10(2), 48 (Article); doi: 10.3390/a10020048; published 2017-04-28. Authors: Xi-Guang Li, Shou-Fei Han, Liang Zhao, Chang-Qing Gong, Xiao-Jing Liu.

Algorithms, Vol. 10, Pages 47: Trust in the Balance: Data Protection Laws as Tools for Privacy and Security in the Cloud
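The core-firework mutation step from the AMdynFWA abstract above can be sketched as follows. The Levy step uses Mantegna's algorithm; the mutation probability, step scales, and the 50/50 choice between the two operators are illustrative assumptions, not the paper's exact settings:

```python
import math
import random

rng = random.Random(42)

def gaussian_mutation(x, scale=1.0):
    """Perturb every coordinate with zero-mean Gaussian noise."""
    return [xi + rng.gauss(0.0, scale) for xi in x]

def levy_step(alpha=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy-stable step length."""
    sigma_u = (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
               / (math.gamma((1 + alpha) / 2) * alpha
                  * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / alpha)

def levy_mutation(x, scale=1.0):
    """Perturb every coordinate with a heavy-tailed Levy step."""
    return [xi + scale * levy_step() for xi in x]

def mutate_core_firework(cf, p_mutation=0.5):
    """With probability p_mutation, apply Gaussian or Levy mutation to the
    core firework; otherwise leave it unchanged."""
    if rng.random() >= p_mutation:
        return cf[:]
    op = gaussian_mutation if rng.random() < 0.5 else levy_mutation
    return op(cf)

cf = [0.0, 0.0, 0.0]
mutant = mutate_core_firework(cf, p_mutation=1.0)
```

The Gaussian operator favors local refinement around the core firework, while the heavy-tailed Levy operator occasionally takes long jumps that help escape local optima.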
http://www.mdpi.com/1999-4893/10/2/47
A popular bumper sticker states: “There is no cloud. It’s just someone else’s computer.” Despite the loss of control that comes with its use, critical records are increasingly being entrusted to the cloud, generating ever-growing concern about the privacy and security of those records. Ultimately, privacy and security constitute an attempt to balance competing needs: privacy balances the need to use information against the need to protect personal data, while security balances the need to provide access to records against the need to stop unauthorized access. The importance of these issues has led to a multitude of legal and regulatory efforts to find a balance and, ultimately, to ensure trust in both digital records and their storage in the cloud. Adding a particular challenge is the fact that distinct jurisdictions approach privacy differently, and an in-depth understanding of what a jurisdiction’s laws may be, or even under what jurisdiction particular data might fall, requires a Herculean effort. And yet, in order to protect privacy and enhance security, this effort is required. This article examines two legal tools for ensuring the privacy and security of records in the cloud, data protection laws and data localization laws, through the framework of “trust” as understood in archival science. This framework of trust provides new directions for algorithmic research, identifying those areas of digital record creation and preservation most in need of novel solutions.
Algorithms 2017, 10(2), 47 (Article); doi: 10.3390/a10020047; published 2017-04-27. Authors: Darra Hofman, Luciana Duranti, Elissa How.

Algorithms, Vol. 10, Pages 46: An Improved Multiobjective Particle Swarm Optimization Based on Culture Algorithms
http://www.mdpi.com/1999-4893/10/2/46
In this paper, we propose a new approach to raise the performance of multiobjective particle swarm optimization. The personal guide and global guide are updated using three kinds of knowledge extracted from the population based on cultural algorithms. An epsilon domination criterion is employed to enhance the convergence and diversity of the approximate Pareto front. Moreover, a simple polynomial mutation operator is applied to both the population and the non-dominated archive. Experiments on two series of benchmark test suites have shown the effectiveness of the proposed approach. A comparison with several other algorithms that are considered good representatives of particle swarm optimization solutions has also been conducted, in order to verify the competitive performance of the proposed algorithm in solving multiobjective optimization problems.
Algorithms 2017, 10(2), 46 (Article); doi: 10.3390/a10020046; published 2017-04-25. Authors: Chunhua Jia, Hong Zhu.

Algorithms, Vol. 10, Pages 45: An Efficient Sixth-Order Newton-Type Method for Solving Nonlinear Systems
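One common additive form of the epsilon-domination criterion mentioned in the multiobjective PSO abstract above can be written down directly (for minimization; the paper's exact variant may differ):

```python
def epsilon_dominates(a, b, eps):
    """True if solution a epsilon-dominates solution b (minimization):
    after relaxing every objective of a by eps, a is component-wise no
    worse than b and strictly better in at least one objective."""
    shifted = [ai - eps for ai in a]
    return (all(s <= bi for s, bi in zip(shifted, b))
            and any(s < bi for s, bi in zip(shifted, b)))
```

Compared with plain Pareto dominance (the eps = 0 case), the relaxation lets a solution dominate near-equal neighbors, which thins the archive and is what improves the diversity/convergence trade-off of the approximate Pareto front.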
http://www.mdpi.com/1999-4893/10/2/45
In this paper, we present a new sixth-order iterative method for solving nonlinear systems and prove a local convergence result. The new method requires solving five linear systems per iteration. An important feature of the new method is that the LU (lower-upper) factorization of the Jacobian matrix is computed only once in each iteration. The computational efficiency index of the new method is compared to that of some known methods. Numerical results are given to show that the convergence behavior of the new method is similar to that of the existing methods. The new method can be applied to small- and medium-sized nonlinear systems.
Algorithms 2017, 10(2), 45 (Article); doi: 10.3390/a10020045; published 2017-04-25. Authors: Xiaofeng Wang, Yang Li.

Algorithms, Vol. 10, Pages 44: Revised Gravitational Search Algorithms Based on Evolutionary-Fuzzy Systems
http://www.mdpi.com/1999-4893/10/2/44
The choice of the best optimization algorithm is a hard issue, and it sometimes depends on the specific problem. The Gravitational Search Algorithm (GSA) is a search algorithm based on the law of gravity, which states that each particle attracts every other particle with a force called the gravitational force. Some revised versions of GSA have been proposed by using intelligent techniques. This work proposes some GSA versions based on fuzzy techniques powered by evolutionary methods, such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE), to improve GSA. The designed algorithms tune a suitable parameter of GSA through a fuzzy controller whose membership functions are optimized by GA, PSO and DE. The results show that the Fuzzy Gravitational Search Algorithm (FGSA) optimized by DE is optimal for unimodal functions, whereas FGSA optimized through GA is good for multimodal functions.
Algorithms 2017, 10(2), 44 (Article); doi: 10.3390/a10020044; published 2017-04-21. Authors: Danilo Pelusi, Raffaele Mascella, Luca Tallini.

Algorithms, Vol. 10, Pages 43: Reliable Portfolio Selection Problem in Fuzzy Environment: An mλ Measure Based Approach
http://www.mdpi.com/1999-4893/10/2/43
This paper investigates a fuzzy portfolio selection problem with guaranteed reliability, in which fuzzy variables are used to capture the uncertain returns of different securities. To handle the fuzziness effectively in a mathematical way, a new expected value operator and variance of fuzzy variables are defined based on the mλ measure, which is a linear combination of the possibility measure and the necessity measure, balancing pessimism and optimism in the decision-making process. To formulate the reliable portfolio selection problem, we adopt the expected total return and the standard variance of the total return to evaluate the reliability of the investment strategies, producing three risk-guaranteed reliable portfolio selection models. To solve the proposed models, an effective genetic algorithm is designed to generate an approximate optimal solution to the considered problem. Finally, numerical examples are given to show the performance of the proposed models and algorithm.
Algorithms 2017, 10(2), 43 (Article); doi: 10.3390/a10020043; published 2017-04-18. Authors: Yuan Feng, Li Wang, Xinhong Liu.

Algorithms, Vol. 10, Pages 42: RGloVe: An Improved Approach of Global Vectors for Distributional Entity Relation Representation
http://www.mdpi.com/1999-4893/10/2/42
Most of the previous works on relation extraction between named entities are limited to extracting pre-defined types, which is inefficient for massive unlabeled text data. Recently, with the appearance of various distributional word representations, unsupervised methods for many natural language processing (NLP) tasks have been widely researched. In this paper, we focus on a new kind of unsupervised relation extraction, called distributional relation representation. Without requiring pre-defined types, distributional relation representation aims to automatically learn entity vectors and further estimate the semantic similarity between these entities. We choose global vectors (GloVe) as our original model to train entity vectors because of its excellent balance between local context and global statistics in the whole corpus. In order to train the model more efficiently, we improve the traditional GloVe model by using the cosine similarity between entity vectors, instead of the dot product, to approximate the entity co-occurrences. Because cosine similarity normalizes vectors to unit length, it is intuitively more reasonable and converges more easily to a local optimum. We call the improved model RGloVe. Experimental results on a massive corpus of Sina News show that our proposed model outperforms the traditional global vectors. Finally, the graph database Neo4j is introduced to store these relationships between named entities. The most competitive advantage of Neo4j is that it provides a highly accessible way to query the direct and indirect relationships between entities.
Algorithms 2017, 10(2), 42 (Article); doi: 10.3390/a10020042; published 2017-04-17. Authors: Ziyan Chen, Yu Huang, Yuexian Liang, Yang Wang, Xingyu Fu, Kun Fu.

Algorithms, Vol. 10, Pages 41: RST Resilient Watermarking Scheme Based on DWT-SVD and Scale-Invariant Feature Transform
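The modeling change behind RGloVe in the abstract above, replacing the dot product with cosine similarity, amounts to comparing unit-normalized vectors; a small self-contained illustration:

```python
import math

def dot(u, v):
    """Plain dot product, which grows with vector magnitude."""
    return sum(a * b for a, b in zip(u, v))

def cosine_similarity(u, v):
    """Dot product of the unit-normalized vectors, bounded in [-1, 1]."""
    nu = math.sqrt(dot(u, u))
    nv = math.sqrt(dot(v, v))
    return dot(u, v) / (nu * nv)

# Two entity vectors pointing in the same direction but with different norms:
u, v = [3.0, 4.0], [6.0, 8.0]
d = dot(u, v)                  # scales with magnitude
c = cosine_similarity(u, v)    # depends only on direction
```

The dot product depends on both direction and magnitude, while cosine similarity depends only on direction, which is the bounded, normalized quantity RGloVe fits against entity co-occurrence statistics.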
http://www.mdpi.com/1999-4893/10/2/41
Currently, most digital image watermarking schemes are affected by geometric attacks such as rotation, scaling, and translation (RST). In this paper, a watermarking scheme that is robust against RST attacks is proposed. In the watermark embedding process, a three-level discrete wavelet transform (DWT) is applied to the original image. The three-level low-frequency sub-band is decomposed by singular value decomposition (SVD), and its singular value matrix is extracted for watermark embedding. Before watermark extraction, keypoints are selected by the scale-invariant feature transform (SIFT) in the original image and the attacked image. By matching the keypoints in the two images, the RST attacks can be precisely corrected and better performance can be obtained. The experimental results show that the proposed scheme achieves good imperceptibility and robustness against common image processing and malicious attacks, especially geometric attacks.
Algorithms 2017, 10(2), 41 (Article); doi: 10.3390/a10020041; published 2017-03-30. Authors: Yunpeng Zhang, Chengyou Wang, Xiao Zhou.

Algorithms, Vol. 10, Pages 40: Fuzzy Random Walkers with Second Order Bounds: An Asymmetric Analysis
http://www.mdpi.com/1999-4893/10/2/40
Edge-fuzzy graphs constitute an essential modeling paradigm across a broad spectrum of domains ranging from artificial intelligence to computational neuroscience and social network analysis. Under this model, fundamental graph properties such as edge length and graph diameter become stochastic, and as such they are consequently expressed in probabilistic terms. Thus, algorithms for fuzzy graph analysis must rely on non-deterministic design principles. One such principle is the random walker, a virtual entity that selects either edges or, as in this case, vertices of a fuzzy graph to visit. This allows the estimation of global graph properties through a long sequence of local decisions, making it a viable strategy candidate for graph processing software relying on native graph databases such as Neo4j. As a concrete example, Chebyshev Walktrap, a heuristic fuzzy community discovery algorithm relying on second-order statistics and on the teleportation of the random walker, is proposed, and its performance, expressed in terms of community coherence and number of vertex visits, is compared to the previously proposed algorithms of Markov Walktrap, Fuzzy Walktrap, and Fuzzy Newman–Girvan. In order to facilitate this comparison, a metric based on the asymmetric Tversky index and the Kullback–Leibler divergence is used.
Algorithms 2017, 10(2), 40 (Article); doi: 10.3390/a10020040; published 2017-03-30. Authors: Georgios Drakopoulos, Andreas Kanavos, Konstantinos Tsakalidis.

Algorithms, Vol. 10, Pages 39: From Intrusion Detection to an Intrusion Response System: Fundamentals, Requirements, and Future Directions
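A minimal vertex-visiting random walker with teleportation over an edge-weighted (fuzzy) graph, in the spirit of the Walktrap-style algorithms in the abstract above; the toy graph, teleport rate, and weight semantics below are illustrative assumptions, not the paper's construction:

```python
import random

def fuzzy_random_walk(weights, start, steps, teleport=0.15, seed=1):
    """Vertex-visiting random walker on an edge-weighted (fuzzy) graph:
    with probability `teleport` jump to a uniformly random vertex,
    otherwise move to a neighbor with probability proportional to the
    fuzzy edge weight. Returns visit counts per vertex."""
    rng = random.Random(seed)
    vertices = list(weights)
    counts = {v: 0 for v in vertices}
    v = start
    for _ in range(steps):
        counts[v] += 1
        if rng.random() < teleport or not weights[v]:
            v = rng.choice(vertices)  # teleportation step
        else:
            nbrs, w = zip(*weights[v].items())
            v = rng.choices(nbrs, weights=w)[0]
    return counts

# Toy fuzzy graph: membership degrees in (0, 1] serve as edge weights;
# "a" and "b" are strongly connected, "c" only weakly.
g = {
    "a": {"b": 0.9, "c": 0.1},
    "b": {"a": 0.9, "c": 0.2},
    "c": {"a": 0.1, "b": 0.2},
}
visits = fuzzy_random_walk(g, "a", steps=10_000)
```

Over a long walk, the visit counts approximate the walker's stationary distribution, which is the kind of global property the local decisions are meant to estimate; the teleportation term keeps the walk from being trapped in one community.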
http://www.mdpi.com/1999-4893/10/2/39
In the past few decades, the rise in attacks on communication devices in networks has resulted in a reduction of network functionality, throughput, and performance. To detect and mitigate these network attacks, researchers, academicians, and practitioners have developed Intrusion Detection Systems (IDSs) with automatic response systems. The response system is considered an important component of an IDS, since without a timely response an IDS may not function properly in countering various attacks, especially on a real-time basis. To respond appropriately, an IDS should select the optimal response option according to the type of network attack. This research study provides a complete survey of IDSs and Intrusion Response Systems (IRSs) on the basis of our in-depth understanding of the response options for different types of network attacks. Knowledge of the path from IDS to IRS can assist network administrators and network staff in understanding how to tackle different attacks with state-of-the-art technologies.
Algorithms 2017, 10(2), 39 (Review); doi: 10.3390/a10020039; published 2017-03-27. Authors: Shahid Anwar, Jasni Mohamad Zain, Mohamad Fadli Zolkipli, Zakira Inayat, Suleman Khan, Bokolo Anthony, Victor Chang.

Algorithms, Vol. 10, Pages 37: A Spatial-Temporal-Semantic Neural Network Algorithm for Location Prediction on Moving Objects
http://www.mdpi.com/1999-4893/10/2/37
Location prediction has attracted much attention due to its important role in many location-based services, such as food delivery, taxi services, real-time bus systems, and advertisement posting. Traditional prediction methods often cluster track points into regions and mine movement patterns within the regions. Such methods lose the information of points along the road and cannot meet the demands of specific services. Moreover, traditional methods utilizing classic models may not perform well with long location sequences. In this paper, a spatial-temporal-semantic neural network algorithm (STS-LSTM) is proposed, which includes two steps. First, the spatial-temporal-semantic feature extraction algorithm (STS) is used to convert the trajectory into location sequences with fixed and discrete points in the road networks. The method can take advantage of points along the road and can transform a trajectory into model-friendly sequences. Then, a long short-term memory (LSTM)-based model is constructed to make further predictions, which can better deal with long location sequences. Experimental results on two real-world datasets show that STS-LSTM has stable and higher prediction accuracy than traditional feature extraction and model building methods, and the application scenarios of the algorithm are illustrated.
Algorithms 2017, 10(2), 37 (Article); doi: 10.3390/a10020037; published 2017-03-24. Authors: Fan Wu, Kun Fu, Yang Wang, Zhibin Xiao, Xingyu Fu.

Algorithms, Vol. 10, Pages 38: An Asynchronous Message-Passing Distributed Algorithm for the Generalized Local Critical Section Problem
http://www.mdpi.com/1999-4893/10/2/38
This paper discusses the generalized local version of critical section problems, including mutual exclusion, mutual inclusion, k-mutual exclusion and l-mutual inclusion. When a pair of numbers (li, ki) is given for each process Pi, the problem is to control the system in such a way that the number of processes that can execute their critical sections at a time is at least li and at most ki among the neighboring processes of Pi and Pi itself. We propose the first solution for the generalized local (li, |Ni| + 1)-critical section problem (i.e., the generalized local li-mutual inclusion problem). Additionally, we show the relationship between the generalized local (li, ki)-critical section problem and the generalized local (|Ni| + 1 − ki, |Ni| + 1 − li)-critical section problem. Finally, we propose the first solution for the generalized local (li, ki)-critical section problem for arbitrary (li, ki), where 0 ≤ li ≤ ki ≤ |Ni| + 1 for each process Pi.
Algorithms 2017, 10(2), 38 (Article); doi: 10.3390/a10020038; published 2017-03-24. Authors: Sayaka Kamei, Hirotsugu Kakugawa.

Algorithms, Vol. 10, Pages 36: DNA Paired Fragment Assembly Using Graph Theory
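The complement relationship stated in the critical section abstract above, between the local (li, ki) problem and the (|Ni| + 1 − ki, |Ni| + 1 − li) problem, can be checked numerically: bounding how many of the |Ni| + 1 processes in a closed neighborhood are inside their critical sections is equivalent to bounding how many are outside.

```python
def complement_problem(l, k, n_i):
    """Map a local (l, k)-critical section problem for a process with n_i
    neighbors to its complement (|Ni| + 1 - k, |Ni| + 1 - l)."""
    return (n_i + 1 - k, n_i + 1 - l)

# A process with 4 neighbors (|Ni| = 4): requiring between 1 and 3 of the
# 5 processes in its closed neighborhood to be in the critical section is
# the same as requiring between 2 and 4 of them to be outside it.
lk = complement_problem(1, 3, 4)
# Applying the complement twice returns the original problem.
roundtrip = complement_problem(*lk, 4)
```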
http://www.mdpi.com/1999-4893/10/2/36
DNA fragment assembly requirements have generated an important computational problem, created by the structure and the volume of the data. It is therefore important to develop algorithms able to produce high-quality information while using computer resources efficiently. Such an algorithm, using graph theory, is introduced in the present article. We first determine the overlaps between DNA fragments, obtaining the edges of a directed graph; with this information, the next step is to construct an adjacency list with some particularities. Using the adjacency list, it is possible to obtain the DNA contigs (groups of assembled fragments building a contiguous element) using graph theory. We performed a set of experiments on real DNA data and compared our results to those obtained with common assemblers (Edena and Velvet). Finally, we searched for the contigs in the original genome, in our results and in those of Edena and Velvet.
Algorithms 2017, 10(2), 36 (Article); doi: 10.3390/a10020036; published 2017-03-24. Authors: J. Quiroz-Ibarra, Guillermo Mallén-Fullerton, Guillermo Fernández-Anaya.

Algorithms, Vol. 10, Pages 35: A Geo-Clustering Approach for the Detection of Areas-of-Interest and Their Underlying Semantics
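The first step of the assembly algorithm in the abstract above, computing suffix-prefix overlaps between fragments to obtain the edges of a directed graph, can be sketched as follows. The minimum overlap length and the toy fragments are illustrative, and the paper's adjacency-list particularities and contig extraction are not reproduced:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is a prefix of b
    (0 if shorter than min_len)."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a.endswith(b[:k]):
            best = k
    return best

def build_overlap_edges(fragments, min_len=3):
    """Directed edges (i, j, overlap_length) between distinct fragments."""
    edges = []
    for i, a in enumerate(fragments):
        for j, b in enumerate(fragments):
            if i != j:
                olen = overlap(a, b, min_len)
                if olen:
                    edges.append((i, j, olen))
    return edges

frags = ["ATGGCC", "GCCTAA", "TAAGGT"]
edges = build_overlap_edges(frags)
# Following the chain of overlap edges and merging the overlapping parts
# yields a contig:
contig = frags[0] + frags[1][3:] + frags[2][3:]
```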
http://www.mdpi.com/1999-4893/10/1/35
Living in the “era of social networking”, we are experiencing a data revolution, generating an astonishing amount of digital information every single day. Due to this proliferation of data volume, there has been an explosion of new application domains for information mined from social networks. In this paper, we leverage this “socially-generated knowledge” (i.e., user-generated content derived from social networks) towards the detection of areas-of-interest within an urban region. These large and homogeneous areas contain multiple points-of-interest which are of special interest to particular groups of people (e.g., tourists and/or consumers). In order to identify them, we exploit two types of metadata, namely location-based information included within geo-tagged photos that we collect from Flickr, along with plain simple textual information from user-generated tags. We propose an algorithm that divides a predefined geographical area (i.e., the center of Athens, Greece) into “tile”-shaped sub-regions and based on an iterative merging procedure, it aims to detect larger, cohesive areas. We examine the performance of the algorithm both in a qualitative and quantitative manner. Our experiments demonstrate that the proposed geo-clustering algorithm is able to correctly detect regions that contain popular tourist attractions within them with very promising results.
Algorithms 2017, 10(1), 35 (Article); doi: 10.3390/a10010035; published 2017-03-18. Authors: Evaggelos Spyrou, Michalis Korakakis, Vasileios Charalampidis, Apostolos Psallas, Phivos Mylonas.

Algorithms, Vol. 10, Pages 34: A Novel, Gradient Boosting Framework for Sentiment Analysis in Languages where NLP Resources Are Not Plentiful: A Case Study for Modern Greek
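The tiling step of the geo-clustering abstract above can be illustrated by bucketing geo-tagged photo coordinates into fixed-size square sub-regions; the origin, tile size, and coordinates below are invented for illustration and are not the paper's values:

```python
def tile_of(lat, lon, origin=(37.95, 23.70), tile_deg=0.005):
    """Map a point to the (row, col) index of a fixed-size square tile,
    measured from an origin corner of the study area."""
    lat0, lon0 = origin
    return (int((lat - lat0) / tile_deg), int((lon - lon0) / tile_deg))

def bucket_photos(points, **kw):
    """Group geo-tagged photo coordinates by the tile that contains them."""
    tiles = {}
    for lat, lon in points:
        tiles.setdefault(tile_of(lat, lon, **kw), []).append((lat, lon))
    return tiles

# Three hypothetical photo locations in central Athens; the first two fall
# into the same tile, the third into a different one.
photos = [(37.9715, 23.7257), (37.9716, 23.7259), (37.9838, 23.7275)]
tiles = bucket_photos(photos)
```

The iterative merging procedure described in the abstract would then repeatedly join adjacent tiles whose photo contents are sufficiently similar, growing larger cohesive areas-of-interest.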
http://www.mdpi.com/1999-4893/10/1/34
Sentiment analysis has played a primary role in text classification. It is an undoubted fact that some years ago, textual information was spreading at manageable rates; however, nowadays, such information has exceeded even the most ambitious expectations and constantly grows within seconds. It is therefore quite complex to cope with the vast amount of textual data, particularly if we also take the incremental production speed into account. Social media, e-commerce, news articles, comments and opinions are broadcasted on a daily basis. A rational solution for handling this abundance of data is to build automated information processing systems for analyzing and extracting meaningful patterns from text. The present paper focuses on sentiment analysis applied to Greek texts. Thus far, there is no wide availability of natural language processing tools for Modern Greek. Hence, a thorough analysis of Greek, from the lexical to the syntactical level, is difficult to perform. This paper attempts a different approach, based on the proven capabilities of gradient boosting, a well-known technique for dealing with high-dimensional data. The main rationale is that since English dominates the area of preprocessing tools and quite reliable translation services are available, we can exploit them to transform Greek tokens into English; translating at the token level helps assure precision, since the translation of large texts is not always reliable and meaningful. The new feature set of English tokens is augmented with the original set of Greek tokens, consequently producing a high-dimensional dataset that poses certain difficulties for any traditional classifier. Accordingly, we apply gradient boosting machines, an ensemble algorithm that can learn with different loss functions, providing the ability to work efficiently with high-dimensional data. 
Moreover, for the task at hand, we deal with class imbalance issues, since the distribution of sentiments in real-world applications is often unequal. For example, in political forums or electronic discussions about immigration or religion, negative comments overwhelm the positive ones. The class imbalance problem was confronted using a hybrid technique that performs a variation of under-sampling the majority class and over-sampling the minority class, respectively. Experimental results, considering different settings, such as translation of tokens against translation of sentences, consideration of limited Greek text preprocessing and omission of the translation phase, demonstrated that the proposed gradient boosting framework can effectively cope with both high-dimensional and imbalanced datasets and performs significantly better than a plethora of traditional machine learning classification approaches in terms of precision and recall measures.Algorithms2017-03-06101Article10.3390/a10010034341999-48932017-03-06doi: 10.3390/a10010034Vasileios AthanasiouManolis Maragoudakis<![CDATA[Algorithms, Vol. 10, Pages 33: Large Scale Implementations for Twitter Sentiment Classification]]>
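A hybrid under-/over-sampling step of the kind described above can be sketched as follows. The meeting-point heuristic (classes meet at the geometric mean of the two class sizes) and the helper name are illustrative assumptions, not the authors' exact variation.

```python
import random

def hybrid_resample(X, y, seed=0):
    """Under-sample the majority class and over-sample (with replacement)
    the minority class so both end up at roughly the geometric mean of
    the two original class sizes. Labels are assumed binary (0/1)."""
    rng = random.Random(seed)
    pos = [i for i, c in enumerate(y) if c == 1]
    neg = [i for i, c in enumerate(y) if c == 0]
    maj, mino = (neg, pos) if len(neg) >= len(pos) else (pos, neg)
    target = int((len(maj) * len(mino)) ** 0.5)          # meet in the middle
    maj_s = rng.sample(maj, target)                      # under-sampling
    min_s = mino + [rng.choice(mino) for _ in range(target - len(mino))]  # over-sampling
    idx = maj_s + min_s
    rng.shuffle(idx)
    return [X[i] for i in idx], [y[i] for i in idx]
```

With 9 negative and 4 positive examples, both classes are resampled to 6.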
http://www.mdpi.com/1999-4893/10/1/33
Sentiment Analysis on Twitter Data is indeed a challenging problem due to the nature, diversity and volume of the data. People tend to express their feelings freely, which makes Twitter an ideal source for accumulating a vast amount of opinions towards a wide spectrum of topics. This amount of information offers huge potential and can be harnessed to extract the sentiment tendency towards these topics. However, since no one can invest an infinite amount of time to read through these tweets, an automated decision making approach is necessary. Nevertheless, most existing solutions are limited to centralized environments. Thus, they can only process at most a few thousand tweets. Such a sample is not representative enough to define the sentiment polarity towards a topic, due to the massive number of tweets published daily. In this work, we develop two systems: the first in the MapReduce framework and the second in Apache Spark, both frameworks for programming with Big Data. The algorithm exploits all hashtags and emoticons inside a tweet as sentiment labels, and proceeds to a classification of diverse sentiment types in a parallel and distributed manner. Moreover, the sentiment analysis tool is based on Machine Learning methodologies alongside Natural Language Processing techniques and utilizes Apache Spark’s Machine learning library, MLlib. In order to address the nature of Big Data, we introduce some pre-processing steps for achieving better results in Sentiment Analysis, as well as Bloom filters to compact the storage size of intermediate data and boost the performance of our algorithm. 
Finally, the proposed system was trained and validated with real data crawled by Twitter, and, through an extensive experimental evaluation, we prove that our solution is efficient, robust and scalable while confirming the quality of our sentiment identification.Algorithms2017-03-04101Article10.3390/a10010033331999-48932017-03-04doi: 10.3390/a10010033Andreas KanavosNikolaos NodarakisSpyros SioutasAthanasios TsakalidisDimitrios TsolisGiannis Tzimas<![CDATA[Algorithms, Vol. 10, Pages 32: A New Quintic Spline Method for Integro Interpolation and Its Error Analysis]]>
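The Bloom filters mentioned above trade a small false-positive rate for a very compact membership structure. A minimal sketch (the parameters m and k and the hashing scheme are illustrative, not the paper's configuration):

```python
import hashlib

class BloomFilter:
    """Compact set-membership filter: k hash probes into an m-bit array.
    False positives are possible; false negatives are not."""
    def __init__(self, m=1024, k=5):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _probes(self, item):
        # derive k independent probe positions by salting one hash function
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._probes(item))
```

Intermediate keys (e.g., hashtags) can be tested against the filter instead of a full hash set, shrinking the data shuffled between stages.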
http://www.mdpi.com/1999-4893/10/1/32
In this paper, to overcome the innate drawbacks of some old methods, we present a new quintic spline method for integro interpolation. The method is free of any exact end conditions, and it can reconstruct a function and its first order to fifth order derivatives with high accuracy by only using the given integral values of the original function. The approximation properties of the obtained integro quintic spline are well studied and examined. The theoretical analysis and the numerical tests show that the new method is very effective for integro interpolation.Algorithms2017-03-03101Article10.3390/a10010032321999-48932017-03-03doi: 10.3390/a10010032Feng-Gong Lang<![CDATA[Algorithms, Vol. 10, Pages 31: Optimization-Based Approaches to Control of Probabilistic Boolean Networks]]>
http://www.mdpi.com/1999-4893/10/1/31
Control of gene regulatory networks is one of the fundamental topics in systems biology. In the last decade, control theory of Boolean networks (BNs), which is well known as a model of gene regulatory networks, has been widely studied. In this review paper, our previously proposed methods on optimal control of probabilistic Boolean networks (PBNs) are introduced. First, the outline of PBNs is explained. Next, an optimal control method using polynomial optimization is explained. The finite-time optimal control problem is reduced to a polynomial optimization problem. Furthermore, another finite-time optimal control problem, which can be reduced to an integer programming problem, is also explained.Algorithms2017-02-22101Review10.3390/a10010031311999-48932017-02-22doi: 10.3390/a10010031Koichi KobayashiKunihiko Hiraishi<![CDATA[Algorithms, Vol. 10, Pages 30: Towards Efficient Positional Inverted Index †]]>
http://www.mdpi.com/1999-4893/10/1/30
We address the problem of positional indexing in the natural language domain. The positional inverted index contains the information of the word positions. Thus, it is able to recover the original text file, which implies that it is not necessary to store the original file. Our Positional Inverted Self-Index (PISI) stores the word position gaps encoded by variable byte code. Inverted lists of single terms are combined into one inverted list that represents the backbone of the text file, since it stores the sequence of the indexed words of the original file. The inverted list is synchronized with a presentation layer that stores separators, stop words, as well as variants of the indexed words. Huffman coding is used to encode the presentation layer. The space complexity of the PISI inverted list is O((N − n)⌈log_{2^b} N⌉ + (⌊(N − n)/α⌋ + n) × (⌈log_{2^b} n⌉ + 1)), where N is the number of stems, n is the number of unique stems, α is the step/period of the back pointers in the inverted list and b is the size of a computer memory word in bits. The space complexity of the presentation layer is O(−∑_{i=1}^{N} ⌈log₂ p_{in(i)}⌉ − ∑_{j=1}^{N′} ⌈log₂ p′_j⌉ + N), where p_{in(i)} is the probability of a stem variant at position i, p′_j is the probability of a separator or stop word at position j and N′ is the number of separators and stop words.Algorithms2017-02-22101Article10.3390/a10010030301999-48932017-02-22doi: 10.3390/a10010030Petr ProcházkaJan Holub<![CDATA[Algorithms, Vol. 10, Pages 29: Stable Analysis of Compressive Principal Component Pursuit]]>
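The variable byte code used for the position gaps works as sketched below: each integer is split into 7-bit chunks, least significant first, with the high bit marking the chunk that terminates the integer. This is the standard VByte scheme; PISI's exact byte layout may differ.

```python
def vbyte_encode(gaps):
    """Variable byte code: 7 payload bits per byte; the high bit flags
    the final byte of each integer."""
    out = bytearray()
    for n in gaps:
        while n >= 128:
            out.append(n & 0x7F)    # continuation byte
            n >>= 7
        out.append(n | 0x80)        # terminator byte carries the high bit
    return bytes(out)

def vbyte_decode(data):
    gaps, n, shift = [], 0, 0
    for b in data:
        if b & 0x80:                # terminator: emit the accumulated value
            gaps.append(n | ((b & 0x7F) << shift))
            n, shift = 0, 0
        else:
            n |= b << shift
            shift += 7
    return gaps

def positions_to_gaps(positions):
    """Gap-encode a sorted position list so small numbers dominate."""
    return [p - q for p, q in zip(positions, [0] + positions[:-1])]
```

Gap encoding plus VByte keeps frequent short gaps to one byte each while still supporting arbitrarily large positions.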
http://www.mdpi.com/1999-4893/10/1/29
Compressive principal component pursuit (CPCP) recovers a target matrix that is a superposition of low-complexity structures from a small set of linear measurements. Previous works mainly focus on the analysis of existence and uniqueness. In this paper, we address its stability. We prove that the solution to the related convex programming of CPCP gives an estimate that is stable to small entry-wise noise. We also provide numerical simulation results to support our analysis; they show that the solution to the related convex program is stable to small entry-wise noise under broad conditions.Algorithms2017-02-21101Article10.3390/a10010029291999-48932017-02-21doi: 10.3390/a10010029Qingshan YouQun Wan<![CDATA[Algorithms, Vol. 10, Pages 28: Mining Domain-Specific Design Patterns: A Case Study †]]>
http://www.mdpi.com/1999-4893/10/1/28
Domain-specific design patterns provide developers with proven solutions to common design problems that arise in a target application domain, enabling them to produce quality designs in domain contexts. However, research in this area is not mature and there are no techniques to support their detection. Towards this end, we propose a methodology which, when applied on a collection of websites in a specific domain, facilitates the automated identification of domain-specific design patterns. The methodology automatically extracts the conceptual models of the websites, which are subsequently analyzed in terms of all of the reusable design fragments used in them for supporting common domain functionalities. At the conceptual level, we consider these fragments as recurrent patterns consisting of a configuration of front-end interface components that interrelate with each other and interact with end-users to support certain functionality. By performing a pattern-based analysis of the models, we locate the occurrences of all the recurrent patterns in the various website designs, which are then evaluated for their consistent use. The detected patterns can be used as building blocks in future designs, assisting developers to produce consistent and quality designs in the target domain. Finally, we present a case study for the educational domain.Algorithms2017-02-21101Article10.3390/a10010028281999-48932017-02-21doi: 10.3390/a10010028Vassiliki GkantounaGiannis Tzimas<![CDATA[Algorithms, Vol. 10, Pages 27: Fragile Watermarking for Image Authentication Using the Characteristic of SVD]]>
http://www.mdpi.com/1999-4893/10/1/27
Digital image authentication has become a hot topic in the last few years. In this paper, a pixel-based fragile watermarking method is presented for image tamper identification and localization. By analyzing the left and right singular matrices of SVD, it is found that the matrix product between the first column of the left singular matrix and the transposition of the first column in the right singular matrix is closely related to the image texture features. Based on this characteristic, a binary watermark consisting of image texture information is generated and inserted into the least significant bit (LSB) of the original host image. To improve the security of the presented algorithm, the Arnold transform is applied twice in the watermark embedding process. Experimental results indicate that the proposed watermarking algorithm has high security and perceptual invisibility. Moreover, it can detect and locate the tampered region effectively for various malicious attacks.Algorithms2017-02-17101Article10.3390/a10010027271999-48932017-02-17doi: 10.3390/a10010027Heng ZhangChengyou WangXiao Zhou<![CDATA[Algorithms, Vol. 10, Pages 26: Analysis and Improvement of Fireworks Algorithm]]>
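The texture feature described above (the product of the first left singular column and the transposed first right singular column) can be sketched per block as follows. Block size, the mean-thresholding rule, and the omission of the Arnold scrambling step are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def texture_watermark(block):
    """Binary texture map of an image block: s1 * u1 * v1^T is the best
    rank-1 approximation of the block, so thresholding it at its mean
    captures the dominant texture structure."""
    u, s, vt = np.linalg.svd(block.astype(float))
    rank1 = s[0] * np.outer(u[:, 0], vt[0, :])   # first-column product
    return (rank1 > rank1.mean()).astype(np.uint8)

def embed_lsb(block, bits):
    """Insert the watermark bits into the least significant bit plane."""
    return (block & 0xFE) | bits
```

Embedding only touches the LSB plane, so pixel values change by at most 1, which is what gives the scheme its perceptual invisibility.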
http://www.mdpi.com/1999-4893/10/1/26
The Fireworks Algorithm is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each operator of the Fireworks Algorithm (FWA), this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm aims to further boost performance and achieve global optimization, mainly through the following strategies. First, the population is initialized using opposition-based learning. Second, a new explosion amplitude mechanism for the optimal firework is proposed. In addition, adaptive t-distribution mutation for non-optimal individuals and elite opposition-based learning for the optimal individual are used. Finally, a new selection strategy, namely Disruptive Selection, is proposed to reduce the running time of the algorithm compared with FWA. In our simulation, we apply the CEC2013 standard functions and compare the proposed algorithm (IFWA) with SPSO2011, FWA, EFWA and dynFWA. The results show that the proposed algorithm has better overall performance on the test functions.Algorithms2017-02-17101Article10.3390/a10010026261999-48932017-02-17doi: 10.3390/a10010026Xi-Guang LiShou-Fei HanChang-Qing Gong<![CDATA[Algorithms, Vol. 10, Pages 25: An On-Line Tracker for a Stochastic Chaotic System Using Observer/Kalman Filter Identification Combined with Digital Redesign Method]]>
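Opposition-based learning initialization, the first strategy listed, can be sketched as follows (minimization assumed; the bounds, selection rule and helper name are generic illustrations, not IFWA specifics):

```python
import random

def obl_init(pop_size, dim, lo, hi, fitness, seed=0):
    """Opposition-based learning initialization: draw a random population,
    form each individual's opposite lo + hi - x, then keep the pop_size
    fittest of the combined set (minimization)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    opposites = [[lo + hi - x for x in ind] for ind in pop]
    return sorted(pop + opposites, key=fitness)[:pop_size]
```

The idea is that a candidate and its opposite are unlikely to both be far from the optimum, so evaluating both doubles coverage of the search space for the same population size.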
http://www.mdpi.com/1999-4893/10/1/25
This paper is the first to present a digital redesign method for the (conventional) OKID system and to apply this novel technique to nonlinear system identification. First, the Observer/Kalman filter Identification (OKID) method is used to obtain the lower-order state-space model for a stochastic chaos system. Then, a digital redesign approach with the high-gain property is applied to improve and replace the observer identified by OKID. Therefore, the proposed OKID combined with an observer-based digital redesign tracker not only suppresses the uncertainties and the nonlinear perturbations, but also yields more accurate observation parameters of OKID for complex Multi-Input Multi-Output systems. In this research, Chen’s stochastic chaotic system is used as an illustrative example to demonstrate the effectiveness and excellence of the proposed methodology.Algorithms2017-02-15101Article10.3390/a10010025251999-48932017-02-15doi: 10.3390/a10010025Tseng-Hsu ChienYeong-Chin Chen<![CDATA[Algorithms, Vol. 10, Pages 24: Problems on Finite Automata and the Exponential Time Hypothesis]]>
http://www.mdpi.com/1999-4893/10/1/24
We study several classical decision problems on finite automata under the (Strong) Exponential Time Hypothesis. We focus on three types of problems: universality, equivalence, and emptiness of intersection. All these problems are known to be CoNP-hard for nondeterministic finite automata, even when restricted to unary input alphabets. A different type of problems on finite automata relates to aperiodicity and to synchronizing words. We also consider finite automata that work on commutative alphabets and those working on two-dimensional words.Algorithms2017-02-05101Article10.3390/a10010024241999-48932017-02-05doi: 10.3390/a10010024Henning FernauAndreas Krebs<![CDATA[Algorithms, Vol. 10, Pages 23: An Architectural Based Framework for the Distributed Collection, Analysis and Query from Inhomogeneous Time Series Data Sets and Wearables for Biofeedback Applications]]>
http://www.mdpi.com/1999-4893/10/1/23
The increasing professionalism of sports persons and desire of consumers to imitate this has led to an increased metrification of sport. This has been driven in no small part by the widespread availability of comparatively cheap assessment technologies and, more recently, wearable technologies. Historically, whilst these have produced large data sets, often only the most rudimentary analysis has taken place (Wisbey et al. in: “Quantifying movement demands of AFL football using GPS tracking”). This paucity of analysis is due in no small part to the challenges of analysing large sets of data that are often from disparate data sources to glean useful key performance indicators, which has largely been a labour-intensive process. This paper presents a framework, which can be cloud based, for the gathering, storing and algorithmic interpretation of large and inhomogeneous time series data sets. The framework is architecture based and technology agnostic in the data sources it can gather, and presents a model for multi-set analysis across and within devices and individual subjects. A sample implementation demonstrates the utility of the framework for sports performance data collected from distributed inertial sensors in the sport of swimming.Algorithms2017-02-01101Article10.3390/a10010023231999-48932017-02-01doi: 10.3390/a10010023James LeeDavid RowlandsNicholas JacksonRaymond LeadbetterTomohito WadaDaniel James<![CDATA[Algorithms, Vol. 10, Pages 22: Evaluation of Diversification Techniques for Legal Information Retrieval]]>
http://www.mdpi.com/1999-4893/10/1/22
“Public legal information from all countries and international institutions is part of the common heritage of humanity. Maximizing access to this information promotes justice and the rule of law”. In accordance with the aforementioned declaration on free access to law by legal information institutes of the world, a plethora of legal information is available through the Internet, while the provision of legal information has never before been easier. Given that law is accessed by a much wider group of people, the majority of whom are not legally trained or qualified, diversification techniques should be employed in the context of legal information retrieval, as to increase user satisfaction. We address the diversification of results in legal search by adopting several state of the art methods from the web search, network analysis and text summarization domains. We provide an exhaustive evaluation of the methods, using a standard dataset from the common law domain that we objectively annotated with relevance judgments for this purpose. Our results: (i) reveal that users receive broader insights across the results they get from a legal information retrieval system; (ii) demonstrate that web search diversification techniques outperform other approaches (e.g., summarization-based, graph-based methods) in the context of legal diversification; and (iii) offer balance boundaries between reinforcing relevant documents or sampling the information space around the legal query.Algorithms2017-01-29101Article10.3390/a10010022221999-48932017-01-29doi: 10.3390/a10010022Marios KoniarisIoannis AnagnostopoulosYannis Vassiliou<![CDATA[Algorithms, Vol. 10, Pages 21: Concurrent vs. Exclusive Reading in Parallel Decoding of LZ-Compressed Files]]>
http://www.mdpi.com/1999-4893/10/1/21
Broadcasting a message from one to many processors in a network corresponds to concurrent reading on a random access shared memory parallel machine. Computing the trees of a forest, the level of each node in its tree and the path between two nodes are problems that can easily be solved with concurrent reading in a time logarithmic in the maximum height of a tree. Solving such problems with exclusive reading requires a time logarithmic in the number of nodes, implying message passing between disjoint pairs of processors on a distributed system. Allowing concurrent reading in parallel algorithm design for distributed computing might be advantageous in practice if these problems are faced on shallow trees with some specific constraints. We show an application to LZC (Lempel-Ziv-Compress)-compressed file decoding, whose parallelization employs these computations on such trees for realistic data. On the other hand, zipped files do not have this advantage, since they are compressed by the Lempel–Ziv sliding window technique.Algorithms2017-01-28101Article10.3390/a10010021211999-48932017-01-28doi: 10.3390/a10010021Sergio AgostinoBruno CarpentieriRaffaele Pizzolante<![CDATA[Algorithms, Vol. 10, Pages 20: Computing a Clique Tree with the Algorithm Maximal Label Search]]>
http://www.mdpi.com/1999-4893/10/1/20
The algorithm MLS (Maximal Label Search) is a graph search algorithm that generalizes the algorithms Maximum Cardinality Search (MCS), Lexicographic Breadth-First Search (LexBFS), Lexicographic Depth-First Search (LexDFS) and Maximal Neighborhood Search (MNS). On a chordal graph, MLS computes a PEO (perfect elimination ordering) of the graph. We show how the algorithm MLS can be modified to compute a PMO (perfect moplex ordering), as well as a clique tree and the minimal separators of a chordal graph. We give a necessary and sufficient condition on the labeling structure of MLS for the beginning of a new clique in the clique tree to be detected by a condition on labels. MLS is also used to compute a clique tree of the complement graph, and new cliques in the complement graph can be detected by a condition on labels for any labeling structure. We provide a linear time algorithm computing a PMO and the corresponding generators of the maximal cliques and minimal separators of the complement graph. On a non-chordal graph, the algorithm MLSM, a graph search algorithm computing an MEO and a minimal triangulation of the graph, is used to compute an atom tree of the clique minimal separator decomposition of any graph.Algorithms2017-01-25101Article10.3390/a10010020201999-48932017-01-25doi: 10.3390/a10010020Anne BerryGeneviève Simonet<![CDATA[Algorithms, Vol. 10, Pages 19: Pressure Control for a Hydraulic Cylinder Based on a Self-Tuning PID Controller Optimized by a Hybrid Optimization Algorithm]]>
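As a concrete instance of the MLS family, plain Maximum Cardinality Search computes a PEO on a chordal graph; its labeling structure is a simple integer counter of already-numbered neighbours. A sketch (the tie-breaking rule is an assumption for determinism):

```python
def mcs_peo(adj):
    """Maximum Cardinality Search: repeatedly visit the unnumbered vertex
    with the most numbered neighbours. On a chordal graph, the reverse of
    the visit order is a perfect elimination ordering (PEO)."""
    label = {v: 0 for v in adj}
    unnumbered = set(adj)
    order = []
    while unnumbered:
        # pick max label; break ties by name for deterministic output
        v = max(unnumbered, key=lambda u: (label[u], str(u)))
        unnumbered.remove(v)
        order.append(v)
        for w in adj[v]:
            if w in unnumbered:
                label[w] += 1
    return order[::-1]   # reversed visit order is the PEO
```

The defining PEO property, that each vertex's later neighbours in the ordering form a clique, can be checked directly on a small chordal graph.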
http://www.mdpi.com/1999-4893/10/1/19
In order to improve the performance of the hydraulic support electro-hydraulic control system test platform, a self-tuning proportional-integral-derivative (PID) controller is proposed to emulate the actual pressure of the hydraulic support. To avoid premature convergence and to improve the convergence velocity when tuning the PID parameters, the PID controller is optimized with a hybrid optimization algorithm that integrates particle swarm optimization (PSO) and a genetic algorithm (GA). A selection probability and an adaptive crossover probability are introduced into the PSO to enhance the diversity of particles. The proportional overflow valve is installed to control the pressure of the pillar cylinder. The control voltage of the proportional relief valve amplifier and the pillar pressure are collected to acquire the system transfer function. Several simulations with different methods are performed on the hydraulic cylinder pressure system. The results demonstrate that the hybrid algorithm for a PID controller has comparatively better global search ability and faster convergence velocity on the pressure control of the hydraulic cylinder. Finally, an experiment is conducted to verify the validity of the proposed method.Algorithms2017-01-23101Article10.3390/a10010019191999-48932017-01-23doi: 10.3390/a10010019Ru WangChao TanJing XuZhongbin WangJingfei JinYiqiao Man<![CDATA[Algorithms, Vol. 10, Pages 18: Imperialist Competitive Algorithm with Dynamic Parameter Adaptation Using Fuzzy Logic Applied to the Optimization of Mathematical Functions]]>
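A bare-bones version of swarm-based controller tuning can be sketched as follows. Everything here is illustrative: a toy first-order plant, a PI loop (the derivative term is omitted to keep the explicit Euler simulation well behaved), and plain PSO rather than the paper's PSO-GA hybrid with selection and adaptive crossover probabilities.

```python
import random

def step_cost(gains, dt=0.01, t_end=2.0):
    """Integral of squared error for a unit step reference on a toy
    first-order plant tau*y' = -y + u under PI control."""
    kp, ki = gains
    tau, y, i_err, cost = 0.2, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        i_err += err * dt
        u = kp * err + ki * i_err
        y += dt * (-y + u) / tau          # explicit Euler plant step
        cost += err * err * dt
    return cost

def pso_tune(n=20, iters=60, lo=0.0, hi=10.0, seed=1):
    """Plain global-best PSO over (Kp, Ki), minimizing step_cost."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [step_cost(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = step_cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost
```

The tuned gains should comfortably beat a hand-picked proportional-only controller on the same cost.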
http://www.mdpi.com/1999-4893/10/1/18
In this paper we are presenting a method using fuzzy logic for dynamic parameter adaptation in the imperialist competitive algorithm, which is usually known by its acronym ICA. The ICA algorithm was initially studied in its original form to find out how it works and what parameters have more effect upon its results. Based on this study, several designs of fuzzy systems for dynamic adjustment of the ICA parameters are proposed. The experiments were performed on the basis of solving complex optimization problems, particularly applied to benchmark mathematical functions. A comparison of the original imperialist competitive algorithm and our proposed fuzzy imperialist competitive algorithm was performed. In addition, the fuzzy ICA was compared with another metaheuristic using a statistical test to measure the advantage of the proposed fuzzy approach for dynamic parameter adaptation.Algorithms2017-01-23101Article10.3390/a10010018181999-48932017-01-23doi: 10.3390/a10010018Emer BernalOscar CastilloJosé SoriaFevrier Valdez<![CDATA[Algorithms, Vol. 10, Pages 16: Length-Bounded Hybrid CPU/GPU Pattern Matching Algorithm for Deep Packet Inspection]]>
http://www.mdpi.com/1999-4893/10/1/16
Since frequent communication between applications takes place in high speed networks, deep packet inspection (DPI) plays an important role in the network application awareness. The signature-based network intrusion detection system (NIDS) contains a DPI technique that examines the incoming packet payloads by employing a pattern matching algorithm that dominates the overall inspection performance. Existing studies focused on implementing efficient pattern matching algorithms by parallel programming on software platforms because of the advantages of lower cost and higher scalability. Either the central processing unit (CPU) or the graphic processing unit (GPU) were involved. Our studies focused on designing a pattern matching algorithm based on the cooperation between both CPU and GPU. In this paper, we present an enhanced design for our previous work, a length-bounded hybrid CPU/GPU pattern matching algorithm (LHPMA). In the preliminary experiment, the performance and comparison with the previous work are displayed, and the experimental results show that the LHPMA can achieve not only effective CPU/GPU cooperation but also higher throughput than the previous method.Algorithms2017-01-18101Article10.3390/a10010016161999-48932017-01-18doi: 10.3390/a10010016Yi-Shan LinChun-Liang LeeYaw-Chung Chen<![CDATA[Algorithms, Vol. 10, Pages 17: A Preconditioned Iterative Method for Solving Systems of Nonlinear Equations Having Unknown Multiplicity]]>
http://www.mdpi.com/1999-4893/10/1/17
A modification to an existing iterative method for computing zeros with unknown multiplicities of nonlinear equations or a system of nonlinear equations is presented. We introduce preconditioners to nonlinear equations or a system of nonlinear equations and their corresponding Jacobians. The inclusion of preconditioners provides numerical stability and accuracy. Different selections of preconditioners offer a family of iterative methods. We modify the existing method in such a way that its inherited quadratic convergence is not altered. Numerical simulations confirm the quadratic convergence of the preconditioned iterative method. The influence of preconditioners is clearly reflected in the numerically achieved accuracy of the computed solutions.Algorithms2017-01-18101Article10.3390/a10010017171999-48932017-01-18doi: 10.3390/a10010017Fayyaz AhmadToseef BhuttaUmar ShoaibMalik Zaka UllahAli AlshomraniShamshad AhmadShahid Ahmad<![CDATA[Algorithms, Vol. 10, Pages 15: Toward Personalized Vibrotactile Support When Learning Motor Skills]]>
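For context, the classical way to keep quadratic convergence at a root of unknown multiplicity is to apply Newton's method to u(x) = f(x)/f'(x), which has a simple zero wherever f has a zero of any multiplicity. A one-dimensional sketch of that baseline (not the authors' preconditioned method, which generalizes this kind of scheme to systems):

```python
def newton_unknown_multiplicity(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Newton iteration on u(x) = f(x)/f'(x), using
    u'(x) = 1 - f(x) f''(x) / f'(x)^2."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0.0 or dfx == 0.0:       # landed exactly on the root
            return x
        step = (fx / dfx) / (1.0 - fx * d2f(x) / (dfx * dfx))
        x -= step
        if abs(step) < tol:
            return x
    return x
```

For f(x) = (x − 2)²(x + 1), plain Newton converges only linearly near the double root at x = 2, while the iteration on f/f' regains quadratic convergence.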
http://www.mdpi.com/1999-4893/10/1/15
Personal tracking technologies allow sensing of the physical activity carried out by people. Data flows collected with these sensors are calling for big data techniques to support data collection, integration and analysis, aimed to provide personalized support when learning motor skills through varied multisensorial feedback. In particular, this paper focuses on vibrotactile feedback as it can take advantage of the haptic sense when supporting the physical interaction to be learnt. Despite each user having different needs, when providing this vibrotactile support, personalization issues are hardly taken into account, but the same response is delivered to each and every user of the system. The challenge here is how to design vibrotactile user interfaces for adaptive learning of motor skills. TORMES methodology is proposed to facilitate the elicitation of this personalized support. The resulting systems are expected to dynamically adapt to each individual user’s needs by monitoring, comparing and, when appropriate, correcting in a personalized way how the user should move when practicing a predefined movement, for instance, when performing a sport technique or playing a musical instrument.Algorithms2017-01-16101Article10.3390/a10010015151999-48932017-01-16doi: 10.3390/a10010015Olga Santos<![CDATA[Algorithms, Vol. 10, Pages 13: A Fault Detection and Data Reconciliation Algorithm in Technical Processes with the Help of Haar Wavelets Packets]]>
http://www.mdpi.com/1999-4893/10/1/13
This article focuses on the detection of errors using a signal-based approach. The proposed algorithm considers several criteria: soft, hard and very hard error recognition. Once an error is recognized, the erroneous value is replaced. In this sense, different data reconciliation strategies are associated with the proposed error detection criteria. Algorithms in several industrial software platforms are used for detecting sensor errors. Computer simulations confirm the validity of the presented applications. Results with actual sensor measurements in industrial processes are presented.Algorithms2017-01-14101Article10.3390/a10010013131999-48932017-01-14doi: 10.3390/a10010013Paolo Mercorelli<![CDATA[Algorithms, Vol. 10, Pages 14: Kernel Clustering with a Differential Harmony Search Algorithm for Scheme Classification]]>
http://www.mdpi.com/1999-4893/10/1/14
This paper presents a kernel fuzzy clustering with a novel differential harmony search algorithm to coordinate with the diversion scheduling scheme classification. First, we employed a self-adaptive solution generation strategy and differential evolution-based population update strategy to improve the classical harmony search. Second, we applied the differential harmony search algorithm to the kernel fuzzy clustering to help the clustering method obtain better solutions. Finally, the combination of the kernel fuzzy clustering and the differential harmony search is applied for water diversion scheduling in East Lake. A comparison of the proposed method with other methods has been carried out. The results show that the kernel clustering with the differential harmony search algorithm has good performance to cooperate with the water diversion scheduling problems.Algorithms2017-01-14101Article10.3390/a10010014141999-48932017-01-14doi: 10.3390/a10010014Yu FengJianzhong ZhouMuhammad Tayyab<![CDATA[Algorithms, Vol. 10, Pages 12: Coupled Least Squares Identification Algorithms for Multivariate Output-Error Systems]]>
http://www.mdpi.com/1999-4893/10/1/12
This paper focuses on the recursive identification problems for a multivariate output-error system. By decomposing the system into several subsystems and by forming a coupled relationship between the parameter estimation vectors of the subsystems, two coupled auxiliary model based recursive least squares (RLS) algorithms are presented. Moreover, in contrast to the auxiliary model based recursive least squares algorithm, the proposed algorithms provide a reference to improve the identification accuracy of the multivariate output-error system. The simulation results confirm the effectiveness of the proposed algorithms.Algorithms2017-01-12101Article10.3390/a10010012121999-48932017-01-12doi: 10.3390/a10010012Wu HuangFeng Ding<![CDATA[Algorithms, Vol. 10, Pages 11: Acknowledgement to Reviewers of Algorithms in 2016]]>
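The auxiliary-model and coupling machinery aside, the core recursive least squares update behind such identification algorithms has the following shape. This is a generic single-output RLS sketch, not the paper's coupled auxiliary-model algorithm.

```python
import numpy as np

def rls_identify(phi, y, lam=1.0, p0=1e6):
    """Recursive least squares: for each regressor phi_k and output y_k,
    update theta <- theta + L * (y_k - phi_k . theta), where the gain L
    and covariance P are updated recursively (lam is a forgetting factor)."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = p0 * np.eye(n)                     # large P0 = weak prior on theta
    for p, yk in zip(phi, y):
        L = P @ p / (lam + p @ P @ p)      # gain vector
        theta = theta + L * (yk - p @ theta)
        P = (P - np.outer(L, p @ P)) / lam
    return theta
```

On noise-free data the recursion converges to the true parameter vector; the paper's contribution is coupling several such recursions, one per subsystem, so they share parameter information.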
http://www.mdpi.com/1999-4893/10/1/11
The editors of Algorithms would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016.[...]Algorithms2017-01-10101Editorial10.3390/a10010011111999-48932017-01-10doi: 10.3390/a10010011 Algorithms Editorial Office<![CDATA[Algorithms, Vol. 10, Pages 8: Modeling Delayed Dynamics in Biological Regulatory Networks from Time Series Data]]>
http://www.mdpi.com/1999-4893/10/1/8
Background: The modeling of Biological Regulatory Networks (BRNs) relies on background knowledge, derived either from the literature or from the analysis of biological observations. However, with the development of high-throughput data, there is a growing need for methods that automatically generate admissible models. Methods: Our research aim is to provide a logical approach to infer BRNs based on given time series data and known influences among genes. Results: We propose a new methodology for models expressed through a timed extension of automata networks (well suited for biological systems). The main purpose is to obtain a resulting network as consistent as possible with the observed datasets. Conclusion: The originality of our work is three-fold: (i) the identification of the sign of each interaction; (ii) the direct integration of quantitative time delays in the learning approach; and (iii) the identification of the qualitative discrete levels that lead to the systems’ dynamics. We show the benefits of such an automatic approach on dynamical biological models, namely the DREAM4 (in silico) and DREAM8 (breast cancer) datasets, popular reverse-engineering challenges, in order to discuss the precision and the computational performance of our modeling method.Algorithms2017-01-09101Article10.3390/a1001000881999-48932017-01-09doi: 10.3390/a10010008Emna Ben AbdallahTony RibeiroMorgan MagninOlivier RouxKatsumi Inoue<![CDATA[Algorithms, Vol. 10, Pages 10: Estimating the Local Radius of Convergence for Picard Iteration]]>
http://www.mdpi.com/1999-4893/10/1/10
In this paper, we propose an algorithm to estimate the radius of convergence for the Picard iteration in the setting of a real Hilbert space. Numerical experiments show that the proposed algorithm provides convergence balls close to or even identical to the best ones. As the algorithm does not require evaluating the norm of derivatives, the computing effort is relatively low.Algorithms2017-01-09101Article10.3390/a10010010101999-48932017-01-09doi: 10.3390/a10010010Ştefan Măruşter<![CDATA[Algorithms, Vol. 10, Pages 9: Elite Opposition-Based Social Spider Optimization Algorithm for Global Function Optimization]]>
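Picard iteration itself is simply repeated application of the mapping, converging whenever the mapping is a contraction on a ball around the fixed point (a textbook sketch; the paper's contribution, estimating the radius of that ball, is not reproduced here):

```python
import math

def picard(g, x0, tol=1e-10, max_iter=1000):
    """Picard (fixed-point) iteration x_{k+1} = g(x_k); stops when two
    successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# usage: the fixed point of cos, the classic contraction example
root = picard(math.cos, 1.0)
```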
http://www.mdpi.com/1999-4893/10/1/9
The Social Spider Optimization algorithm (SSO) is a novel metaheuristic optimization algorithm. To enhance its convergence speed and computational accuracy, this paper proposes an elite opposition-based Social Spider Optimization algorithm (EOSSO), which augments SSO with an elite opposition-based learning strategy. Twenty-three benchmark functions are tested, and the results show that the proposed EOSSO algorithm is able to obtain accurate solutions, converges quickly, and exhibits a high degree of stability.Algorithms2017-01-08101Article10.3390/a1001000991999-48932017-01-08doi: 10.3390/a10010009Ruxin ZhaoQifang LuoYongquan Zhou<![CDATA[Algorithms, Vol. 10, Pages 7: Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery]]>
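The elite opposition-based learning step can be illustrated on a generic real-valued population (a sketch of the general idea only; the spider-specific update rules of EOSSO are omitted):

```python
def elite_opposition(population, fitness, elite_frac=0.2):
    """Elite opposition-based learning: reflect each solution through the
    interval spanned by the elite (best) individuals in every dimension,
    and keep the better of each solution and its opposite."""
    pop = sorted(population, key=fitness)
    n_elite = max(1, int(elite_frac * len(pop)))
    elite = pop[:n_elite]
    dim = len(pop[0])
    lo = [min(e[d] for e in elite) for d in range(dim)]
    hi = [max(e[d] for e in elite) for d in range(dim)]
    return [min((x, [lo[d] + hi[d] - x[d] for d in range(dim)]), key=fitness)
            for x in pop]

# usage: one EOBL pass on a small population under the sphere function
sphere = lambda v: sum(x * x for x in v)
pop = [[4.0, 4.0], [3.0, -3.0], [1.0, 2.0], [-2.0, 1.0], [0.5, 0.5]]
new_pop = elite_opposition(pop, sphere)
```

Because each individual is replaced only when its opposite is better, the pass can never worsen the population, which is why it accelerates convergence when grafted onto SSO.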
http://www.mdpi.com/1999-4893/10/1/7
This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. As iterations increase, IST usually over-smooths the solution and converges prematurely. To restore more detail, the BAIST method backtracks to the previous noisy image using L2-norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous one. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. Also, BAIST adopts a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.Algorithms2017-01-06101Article10.3390/a1001000771999-48932017-01-06doi: 10.3390/a10010007Lingjun LiuZhonghua XieJiuchao Feng<![CDATA[Algorithms, Vol. 10, Pages 5: Efficient Algorithms for the Maximum Sum Problems]]>
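The shrinkage step shared by all IST-type methods is the soft-thresholding (L1 proximal) operator, applied after a gradient step on the data-fidelity term (a generic sketch; BAIST's backtracking and adaptive regularizer are beyond this illustration):

```python
def soft_threshold(x, lam):
    """Soft-thresholding: the proximal operator of lam * ||x||_1, applied
    elementwise; values smaller than lam in magnitude are zeroed out."""
    return [max(abs(v) - lam, 0.0) * (1.0 if v > 0 else -1.0 if v < 0 else 0.0)
            for v in x]

def ist_step(x, grad, step, lam):
    """One IST iteration: a gradient step on the data-fidelity term,
    followed by soft thresholding to promote sparsity."""
    return soft_threshold([xi - step * gi for xi, gi in zip(x, grad)], step * lam)
```

Iterating `ist_step` is what progressively smooths the reconstruction; BAIST's backtracking counteracts exactly that over-smoothing.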
http://www.mdpi.com/1999-4893/10/1/5
We present efficient sequential and parallel algorithms for the maximum sum (MS) problem, which is to maximize the sum of some shape in the data array. We deal with two MS problems: the maximum subarray (MSA) problem and the maximum convex sum (MCS) problem. In the MSA problem, we find a rectangular part within the given data array that maximizes the sum in it. The MCS problem is to find a convex shape rather than a rectangular shape that maximizes the sum. Thus, MCS is a generalization of MSA. For the MSA problem, O(n) time parallel algorithms are already known on an (n, n) 2D array of processors. We improve the communication steps from 2n − 1 to n, which is optimal. For the MCS problem, we achieve the asymptotic time bound of O(n) on an (n, n) 2D array of processors. We provide rigorous proofs for the correctness of our parallel algorithm based on Hoare logic and also provide some experimental results of our algorithm that are gathered from the Blue Gene/P supercomputer. Furthermore, we briefly describe how to compute the actual shape of the maximum convex sum.Algorithms2017-01-04101Article10.3390/a1001000551999-48932017-01-04doi: 10.3390/a10010005Sung BaeTong-Wook ShinnTadao Takaoka<![CDATA[Algorithms, Vol. 10, Pages 6: Using Force-Field Grids for Sampling Translation/Rotation of Partially Rigid Macromolecules]]>
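For reference, the sequential MSA problem is solved in O(n²m) time by running Kadane's 1D algorithm over every pair of row boundaries (a textbook sketch; the paper's parallel algorithms and the convex-sum generalization are not shown):

```python
def max_subarray_2d(a):
    """Maximum-sum rectangular subarray of a 2D array: for each pair of row
    boundaries, collapse the rows into column sums and run Kadane's 1D
    maximum-subarray scan over them."""
    n, m = len(a), len(a[0])
    best = a[0][0]
    for top in range(n):
        col = [0] * m                      # column sums for rows top..bottom
        for bottom in range(top, n):
            for j in range(m):
                col[j] += a[bottom][j]
            run = cur = col[0]             # Kadane's scan over column sums
            for j in range(1, m):
                run = max(col[j], run + col[j])
                cur = max(cur, run)
            best = max(best, cur)
    return best
```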
http://www.mdpi.com/1999-4893/10/1/6
An algorithm is presented for the simulation of two partially flexible macromolecules where the interaction between the flexible parts and rigid parts is represented by energy grids associated with the rigid part of each macromolecule. The proposed algorithm avoids transforming the grid upon molecular movement, at the significantly lower cost of transforming the flexible part instead.Algorithms2017-01-04101Communication10.3390/a1001000661999-48932017-01-04doi: 10.3390/a10010006Mihaly Mezei<![CDATA[Algorithms, Vol. 10, Pages 4: Dependent Shrink of Transitions for Calculating Firing Frequencies in Signaling Pathway Petri Net Model]]>
http://www.mdpi.com/1999-4893/10/1/4
Despite the recent rapid progress in high-throughput measurements of biological data, it is still difficult to gather all of the reaction speed data in biological pathways. This paper presents a Petri net-based algorithm that can derive estimated values for reaction speeds lacking valid measurements in a signaling pathway from biologically valid data. These reaction speeds are reflected in the delay times of the timed Petri net model of the signaling pathway. We introduce the concept of a “dependency relation” over a transition set of a Petri net and derive the properties of the dependency relation through a structural analysis. Based on the theoretical results, the proposed algorithm repeatedly shrinks transitions with two elementary structures into a single transition to reduce the Petri net size and eventually discover all transition sets with a dependency relation. Finally, to show the usefulness of our algorithm, we apply it to the IL-3 Petri net model.Algorithms2016-12-31101Article10.3390/a1001000441999-48932016-12-31doi: 10.3390/a10010004Atsushi MizutaQi-Wei GeHiroshi Matsuno<![CDATA[Algorithms, Vol. 10, Pages 3: A Pilot-Pattern Based Algorithm for MIMO-OFDM Channel Estimation]]>
http://www.mdpi.com/1999-4893/10/1/3
An improved pilot pattern algorithm for facilitating channel estimation in multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is proposed in this paper. The presented algorithm reconfigures the parameters in the least squares (LS) algorithm, which belongs to the space-time block-coded (STBC) category for channel estimation in pilot-based MIMO-OFDM systems. Simulation results show that the algorithm performs better than the classical single-symbol scheme. Compared with the double-symbol scheme, the proposed algorithm achieves nearly the same performance with only half of the complexity.Algorithms2016-12-28101Article10.3390/a1001000331999-48932016-12-28doi: 10.3390/a10010003Guomin LiGuisheng Liao<![CDATA[Algorithms, Vol. 10, Pages 1: MultiAspect Graphs: Algebraic Representation and Algorithms]]>
http://www.mdpi.com/1999-4893/10/1/1
We present the algebraic representation and basic algorithms for MultiAspect Graphs (MAGs). A MAG is a structure capable of representing multilayer and time-varying networks, as well as higher-order networks, while also having the property of being isomorphic to a directed graph. In particular, we show that, as a consequence of the properties associated with the MAG structure, a MAG can be represented in matrix form. Moreover, we also show that any possible MAG function (algorithm) can be obtained from this matrix-based representation. This is an important theoretical result since it paves the way for adapting well-known graph algorithms for application in MAGs. We present a set of basic MAG algorithms, constructed from well-known graph algorithms, such as degree computing, Breadth First Search (BFS), and Depth First Search (DFS). These algorithms adapted to the MAG context can be used as primitives for building other more sophisticated MAG algorithms. Therefore, such examples can be seen as guidelines on how to properly derive MAG algorithms from basic algorithms on directed graphs. We also make available Python implementations of all the algorithms presented in this paper.Algorithms2016-12-25101Article10.3390/a1001000111999-48932016-12-25doi: 10.3390/a10010001Klaus WehmuthÉric FleuryArtur Ziviani<![CDATA[Algorithms, Vol. 10, Pages 2: A Tensor Decomposition Based Multiway Structured Sparse SAR Imaging Algorithm with Kronecker Constraint]]>
http://www.mdpi.com/1999-4893/10/1/2
This paper investigates a structured sparse SAR imaging algorithm for the point scattering model based on tensor decomposition. Several SAR imaging schemes have been developed by researchers to improve imaging quality. For a typical SAR target scenario, the scatterer distribution usually has the feature of structured sparsity. Without considering this feature thoroughly, the existing schemes still have certain drawbacks. The classic matching pursuit algorithms can obtain clearer imaging results, but at the cost of extreme complexity and huge computational resource consumption. Therefore, this paper puts forward a tensor-based SAR imaging algorithm by means of multiway structured sparsity, which makes full use of the above geometrical feature of the scatterer distribution. The spotlight SAR observation signal is formulated as a Tucker model considering the Kronecker constraint, and then a sparse reconstruction algorithm is introduced by utilizing the structured sparsity of the scene. The proposed tensor-based SAR imaging model is able to take advantage of the Kronecker information in each mode, which ensures the robustness of the signal reconstruction. Both the algorithm complexity analysis and numerical simulations show that the proposed method requires less computation than the existing sparsity-driven SAR imaging algorithms. The imaging realizations based on practical measured data also indicate that the proposed algorithm is superior to the reference methods even in severely noisy environments, under the condition of multiway structured sparsity.Algorithms2016-12-25101Article10.3390/a1001000221999-48932016-12-25doi: 10.3390/a10010002Yu-Fei GaoXun-Chao CongYue YangQun WanGuan Gui<![CDATA[Algorithms, Vol. 9, Pages 88: Which, When, and How: Hierarchical Clustering with Human–Machine Cooperation]]>
http://www.mdpi.com/1999-4893/9/4/88
Human–Machine Cooperations (HMCs) can balance the advantages and disadvantages of human computation (accurate but costly) and machine computation (cheap but inaccurate). This paper studies HMCs in agglomerative hierarchical clusterings, where the machine can ask the human some questions. The human will return the answers to the machine, and the machine will use these answers to correct errors in its current clustering results. We are interested in the machine’s strategy on handling the question operations, in terms of three problems: (1) Which question should the machine ask? (2) When should the machine ask the question (early or late)? (3) How does the machine adjust the clustering result, if the machine’s mistake is found by the human? Based on the insights of these problems, an efficient algorithm is proposed with five implementation variations. Experiments on image clusterings show that the proposed algorithm can improve the clustering accuracy with few question operations.Algorithms2016-12-2194Article10.3390/a9040088881999-48932016-12-21doi: 10.3390/a9040088Huanyang ZhengJie Wu<![CDATA[Algorithms, Vol. 9, Pages 87: A No Reference Image Quality Assessment Metric Based on Visual Perception]]>
http://www.mdpi.com/1999-4893/9/4/87
Nowadays, how to evaluate image quality reasonably is a basic and challenging problem. Existing no-reference evaluation methods cannot accurately reflect the human visual perception of image quality. In this paper, we propose an efficient general-purpose no-reference image quality assessment (NRIQA) method based on visual perception, which effectively integrates human visual characteristics into the NRIQA field. First, a novel algorithm for salient region extraction is presented: two characteristic graphs of texture and edges of the original image are added to the Itti model. Because the normalized luminance coefficients of natural images obey a generalized Gaussian probability distribution, we utilize this characteristic to extract statistical features in the regions of interest (ROI) and regions of non-interest, respectively. Then, the extracted features are fused to form the input of a support vector regression (SVR) model. Finally, the IQA model obtained by training is used to predict the quality of the image. Experimental results show that this method has good predictive ability, and the evaluation effect is better than that of existing classical algorithms. Moreover, the predicted results are more consistent with human subjective perception, accurately reflecting human visual perception of image quality.Algorithms2016-12-1694Article10.3390/a9040087871999-48932016-12-16doi: 10.3390/a9040087Yan FuShengchun Wang<![CDATA[Algorithms, Vol. 9, Pages 84: Evaluation of Cloud Services: A Fuzzy Multi-Criteria Group Decision Making Method]]>
http://www.mdpi.com/1999-4893/9/4/84
This paper presents a fuzzy multi-criteria group decision making method for evaluating the performance of Cloud services in an uncertain environment. Intuitionistic fuzzy numbers are used to better model the subjectivity and imprecision in the performance evaluation process. An effective algorithm is developed based on the technique for order preference by similarity to the ideal solution and the Choquet integral operator for adequately solving the performance evaluation problem. An example is presented for demonstrating the applicability of the proposed method for solving the multi-criteria group decision making problem in real situations.Algorithms2016-12-1694Article10.3390/a9040084841999-48932016-12-16doi: 10.3390/a9040084Santoso WibowoHepu DengWei Xu<![CDATA[Algorithms, Vol. 9, Pages 86: Moving Mesh Strategies of Adaptive Methods for Solving Nonlinear Partial Differential Equations]]>
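The underlying technique for order preference by similarity to the ideal solution (TOPSIS) can be sketched in its classical crisp form (an illustration only; the paper's intuitionistic fuzzy numbers and Choquet integral aggregation are not modeled):

```python
def topsis(matrix, weights, benefit):
    """Classical TOPSIS: rank alternatives by relative closeness to the
    ideal solution. matrix[i][j] scores alternative i on criterion j;
    benefit[j] is True when criterion j should be maximized."""
    n, k = len(matrix), len(matrix[0])
    norms = [sum(matrix[i][j] ** 2 for i in range(n)) ** 0.5 for j in range(k)]
    r = [[weights[j] * matrix[i][j] / norms[j] for j in range(k)]
         for i in range(n)]                                    # weighted, normalized
    ideal = [(max if benefit[j] else min)(r[i][j] for i in range(n)) for j in range(k)]
    worst = [(min if benefit[j] else max)(r[i][j] for i in range(n)) for j in range(k)]
    scores = []
    for i in range(n):
        d_pos = sum((r[i][j] - ideal[j]) ** 2 for j in range(k)) ** 0.5
        d_neg = sum((r[i][j] - worst[j]) ** 2 for j in range(k)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg) if d_pos + d_neg else 1.0)
    return scores

# usage: alternative 1 dominates alternative 0 on both benefit criteria
scores = topsis([[1.0, 1.0], [2.0, 2.0]], [0.5, 0.5], [True, True])
```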
http://www.mdpi.com/1999-4893/9/4/86
This paper proposes moving mesh strategies for moving mesh methods when solving nonlinear time-dependent partial differential equations (PDEs). First, we analyse Huang’s moving mesh PDEs (MMPDEs) and observe that, after Euler discretization, they can be taken as one step of a root-searching iteration method. We improve Huang’s MMPDE by adding a Lagrange speed term. The proposed moving mesh PDE can draw the mesh to equidistribution quickly and stably. The numerical algorithm for the coupled system of the original PDE and the moving mesh equation is proposed, and computational experiments are given to illustrate the validity of the new method.Algorithms2016-12-1594Article10.3390/a9040086861999-48932016-12-15doi: 10.3390/a9040086Qinjiao GaoShenggang Zhang<![CDATA[Algorithms, Vol. 9, Pages 85: A Differentiated Anonymity Algorithm for Social Network Privacy Preservation]]>
http://www.mdpi.com/1999-4893/9/4/85
Devising methods to publish social network data in a form that affords utility without compromising privacy remains a longstanding challenge, and many existing methods based on k-anonymity algorithms may result in nontrivial utility loss because they neither analyze the social network topological structure nor consider the attributes of sparse distribution. Toward this objective, we explore the impact of the attributes of sparse distribution on data utility. First, we propose a new utility metric that emphasizes network structure distortion and attribute value loss. Furthermore, we design and implement a differentiated k-anonymity l-diversity social network anonymity algorithm, which seeks to protect users’ privacy in social networks and increase the usability of the published anonymized data. Its key idea is to divide a node into two child nodes and anonymize only sensitive values to satisfy anonymity requirements. The evaluation results show that our method can effectively improve data utility compared with generalized anonymizing algorithms.Algorithms2016-12-1494Article10.3390/a9040085851999-48932016-12-14doi: 10.3390/a9040085Yuqin XieMingchun Zheng<![CDATA[Algorithms, Vol. 9, Pages 83: Nonsmooth Levenberg-Marquardt Type Method for Solving a Class of Stochastic Linear Complementarity Problems with Finitely Many Elements]]>
http://www.mdpi.com/1999-4893/9/4/83
The purpose of this paper is to solve a class of stochastic linear complementarity problems (SLCP) with finitely many elements. Based on a new stochastic linear complementarity problem function, a new semi-smooth least squares reformulation of the stochastic linear complementarity problem is introduced. For solving the semi-smooth least squares reformulation, we propose a feasible nonsmooth Levenberg–Marquardt-type method. The global convergence properties of the nonsmooth Levenberg–Marquardt-type method are also presented. Finally, the related numerical results illustrate that the proposed method is efficient for the related refinery production problem and for large-scale stochastic linear complementarity problems.Algorithms2016-12-0694Article10.3390/a9040083831999-48932016-12-06doi: 10.3390/a9040083Zhimin LiuShouqiang DuRuiying Wang<![CDATA[Algorithms, Vol. 9, Pages 82: Linear Algorithms for Radioelectric Spectrum Forecast]]>
http://www.mdpi.com/1999-4893/9/4/82
This paper presents the development and evaluation of two linear algorithms for forecasting reception power for different channels in an assigned spectrum band of the Global System for Mobile communications (GSM), in order to analyze the spatial opportunity for reuse of frequencies by secondary users (SUs) in a cognitive radio (CR) network. The algorithms employed correspond to the seasonal autoregressive integrated moving average (SARIMA) and generalized autoregressive conditional heteroskedasticity (GARCH) models, which allow for a forecast of channel occupancy status. Results are evaluated using the following criteria: availability and occupancy time for channels, different types of mean absolute error, and observation time. The contributions of this work include a more comprehensive forecast, as the algorithm not only forecasts reception power but also the occupancy and availability time of a channel to determine its precision percentage during use by primary users (PUs) and SUs within a CR system. The analyses demonstrate better performance for the SARIMA algorithm than for the GARCH algorithm in most of the evaluated variables.Algorithms2016-12-0294Article10.3390/a9040082821999-48932016-12-02doi: 10.3390/a9040082Luis PedrazaCesar HernandezIngrid PaezJorge OrtizE. Rodriguez-Colina<![CDATA[Algorithms, Vol. 9, Pages 81: Cross-Coupled Contouring Control of Multi-DOF Robotic Manipulator]]>
http://www.mdpi.com/1999-4893/9/4/81
Reduction of contour error is a very important issue for high-precision contour tracking applications, and many control systems have been proposed to deal with contour tracking problems for two-/three-axis translational motion systems. However, there is no research on cross-coupled contour tracking control for serial multi-DOF robot manipulators. In this paper, the contouring control of multi-DOF serial manipulators is developed for the first time and a new cross-coupled PD (CC-PD) control law is proposed, based on contour errors of the end-effector and tracking errors of the joints. It is a combination of PD control for trajectory tracking at the joint level and PD control for contour tracking at the end-effector level. The contour error of the end-effector is transformed to the equivalent tracking errors of the joints using the Jacobian regulation, and the CC-PD control law is implemented at the joint level. Stability analysis of the proposed CC-PD control system is conducted using the Lyapunov method, followed by some simulation studies for linear and nonlinear contour tracking to verify the effectiveness of the proposed CC-PD control system.Algorithms2016-11-2494Article10.3390/a9040081811999-48932016-11-24doi: 10.3390/a9040081Puren OuyangYuqi HuWenhui YueDeshun Liu<![CDATA[Algorithms, Vol. 9, Pages 80: Short-Term Load Forecasting Based on the Analysis of User Electricity Behavior]]>
http://www.mdpi.com/1999-4893/9/4/80
The smart meter is an important part of the smart grid, and in order to take full advantage of smart meter data, this paper mines the electricity behaviors of smart meter users to improve the accuracy of load forecasting. First, the typical day loads of users are calculated separately according to different date types (ordinary workdays, days before holidays, holidays). Second, the similarity between user electricity behaviors is mined and the user electricity loads are clustered to place users with similar behaviors in the same cluster. Finally, a load forecasting model based on the Online Sequential Extreme Learning Machine (OS-ELM) is applied to each cluster to conduct load forecasting, and the load forecasts are summed to obtain the system load. In order to prove the validity of the proposed method, we performed simulation experiments on the MATLAB platform using smart meter data from the Irish electric power corporation. The experimental results show that the proposed method is able to mine user electricity behaviors deeply, improve the accuracy of load forecasting by the reasonable clustering of users, and reveal the relationship between forecasting accuracy and cluster numbers.Algorithms2016-11-2394Article10.3390/a9040080801999-48932016-11-23doi: 10.3390/a9040080Yuancheng LiPanpan GuoXiang Li<![CDATA[Algorithms, Vol. 9, Pages 79: A Two-Stage Method to Test the Robustness of the Generalized Approximate Message Passing Algorithm]]>
http://www.mdpi.com/1999-4893/9/4/79
We propose a two-stage method to test the robustness of the generalized approximate message passing algorithm (GAMP). A pursuit process based on the marginal posterior probability is inserted in the standard GAMP algorithm to find the support of a sparse vector, and a revised GAMP process is used to estimate the amplitudes of the support. The numerical experiments with simulation and real world data confirm the robustness and performance of our proposed algorithm.Algorithms2016-11-2294Article10.3390/a9040079791999-48932016-11-22doi: 10.3390/a9040079Qingshan YouYongjie LuoQun Wan<![CDATA[Algorithms, Vol. 9, Pages 78: A Modified Cloud Particles Differential Evolution Algorithm for Real-Parameter Optimization]]>
http://www.mdpi.com/1999-4893/9/4/78
The issue of exploration-exploitation remains one of the most challenging tasks within the framework of evolutionary algorithms. To effectively balance the exploration and exploitation in the search space, this paper proposes a modified cloud particles differential evolution algorithm (MCPDE) for real-parameter optimization. In contrast to the original Cloud Particles Differential Evolution (CPDE) algorithm, firstly, control parameters adaptation strategies are designed according to the quality of the control parameters. Secondly, the inertia factor is introduced to effectively keep a better balance between exploration and exploitation. Accordingly, this is helpful for maintaining the diversity of the population and discouraging premature convergence. In addition, the opposition mechanism and the orthogonal crossover are used to increase the search ability during the evolutionary process. Finally, CEC2013 contest benchmark functions are selected to verify the feasibility and effectiveness of the proposed algorithm. The experimental results show that the proposed MCPDE is an effective method for global optimization problems.Algorithms2016-11-1894Article10.3390/a9040078781999-48932016-11-18doi: 10.3390/a9040078Wei Li<![CDATA[Algorithms, Vol. 9, Pages 77: Algorithms for Drug Sensitivity Prediction]]>
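The DE backbone that CPDE and MCPDE modify is the classic DE/rand/1/bin generation step (a generic sketch; the cloud-particle generation, parameter adaptation, inertia factor, and orthogonal crossover of MCPDE are not modeled):

```python
import random

def de_step(pop, f, F=0.5, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin: mutate with a scaled
    difference of two random individuals, apply binomial crossover, and
    keep the trial only if it is no worse (greedy selection)."""
    rng = rng or random.Random(0)
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
        jrand = rng.randrange(dim)       # guarantee at least one mutant gene
        trial = [mutant[d] if (rng.random() < CR or d == jrand) else x[d]
                 for d in range(dim)]
        new_pop.append(trial if f(trial) <= f(x) else x)
    return new_pop

# usage: 50 generations on the 2-D sphere function
rng = random.Random(1)
sphere = lambda v: sum(x * x for x in v)
pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(8)]
start = min(map(sphere, pop))
for _ in range(50):
    pop = de_step(pop, sphere, rng=rng)
```

Greedy selection makes the best fitness monotonically non-increasing, which is the baseline behavior the inertia factor and adaptation strategies of MCPDE aim to balance against exploration.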
http://www.mdpi.com/1999-4893/9/4/77
Precision medicine entails the design of therapies that are matched for each individual patient. Thus, predictive modeling of drug responses for specific patients constitutes a significant challenge for personalized therapy. In this article, we consider a review of approaches that have been proposed to tackle the drug sensitivity prediction problem especially with respect to personalized cancer therapy. We first discuss modeling approaches that are based on genomic characterizations alone and further the discussion by including modeling techniques that integrate both genomic and functional information. A comparative analysis of the prediction performance of four representative algorithms, elastic net, random forest, kernelized Bayesian multi-task learning and deep learning, reflecting the broad classes of regularized linear, ensemble, kernelized and neural network-based models, respectively, has been included in the paper. The review also considers the challenges that need to be addressed for successful implementation of the algorithms in clinical practice.Algorithms2016-11-1794Review10.3390/a9040077771999-48932016-11-17doi: 10.3390/a9040077Carlos De NizRaziur RahmanXiangyuan ZhaoRanadip Pal<![CDATA[Algorithms, Vol. 9, Pages 75: A Modified Iterative Algorithm for Split Feasibility Problems of Right Bregman Strongly Quasi-Nonexpansive Mappings in Banach Spaces with Applications]]>
http://www.mdpi.com/1999-4893/9/4/75
In this paper, we present a new iterative scheme for finding a common element of the solution set F of the split feasibility problem and the fixed point set F ( T ) of a right Bregman strongly quasi-nonexpansive mapping T in p-uniformly convex Banach spaces which are also uniformly smooth. We prove strong convergence theorem of the sequences generated by our scheme under some appropriate conditions in real p-uniformly convex and uniformly smooth Banach spaces. Furthermore, we give some examples and applications to illustrate our main results in this paper. Our results extend and improve the recent ones of some others in the literature.Algorithms2016-11-1094Article10.3390/a9040075751999-48932016-11-10doi: 10.3390/a9040075Anantachai PadcharoenPoom KumamYeol ChoPhatiphat Thounthong<![CDATA[Algorithms, Vol. 9, Pages 76: A Procedure for Identification of Appropriate State Space and ARIMA Models Based on Time-Series Cross-Validation]]>
http://www.mdpi.com/1999-4893/9/4/76
In this work, a cross-validation procedure is used to identify an appropriate Autoregressive Integrated Moving Average model and an appropriate state space model for a time series. A minimum size for the training set is specified. The procedure is based on one-step forecasts and uses different training sets, each containing one more observation than the previous one. All possible state space models and all ARIMA models where the orders are allowed to range reasonably are fitted considering raw data and log-transformed data with regular differencing (up to second order differences) and, if the time series is seasonal, seasonal differencing (up to first order differences). The value of root mean squared error for each model is calculated averaging the one-step forecasts obtained. The model which has the lowest root mean squared error value and passes the Ljung–Box test using all of the available data with a reasonable significance level is selected among all the ARIMA and state space models considered. The procedure is exemplified in this paper with a case study of retail sales of different categories of women’s footwear from a Portuguese retailer, and its accuracy is compared with three reliable forecasting approaches. The results show that our procedure consistently forecasts more accurately than the other approaches and the improvements in the accuracy are significant.Algorithms2016-11-0994Article10.3390/a9040076761999-48932016-11-09doi: 10.3390/a9040076Patrícia RamosJosé Oliveira<![CDATA[Algorithms, Vol. 9, Pages 74: An Optimization Algorithm for the Design of an Irregularly-Shaped Bridge Based on the Orthogonal Test and Analytic Hierarchy Process]]>
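The evaluation loop described here, one-step forecasts over expanding training sets that each contain one more observation than the previous one, can be sketched generically (a naive last-value forecaster stands in for the ARIMA and state space models, which are not reproduced):

```python
def rolling_one_step_rmse(series, fit_forecast, min_train=5):
    """Time-series cross-validation with an expanding window: for each t,
    fit on series[:t] and score the one-step forecast against series[t],
    then average the squared errors into a single RMSE."""
    errors = []
    for t in range(min_train, len(series)):
        pred = fit_forecast(series[:t])       # train set grows by one each step
        errors.append((series[t] - pred) ** 2)
    return (sum(errors) / len(errors)) ** 0.5

# usage: naive forecaster (predict the last observed value) on a unit-slope
# series, where every one-step error is exactly 1
rmse = rolling_one_step_rmse([float(i) for i in range(20)], lambda h: h[-1])
```

Model selection then amounts to running this loop for every candidate ARIMA/state space specification and keeping the one with the lowest RMSE that also passes the Ljung–Box test.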
http://www.mdpi.com/1999-4893/9/4/74
Irregularly-shaped bridges are usually adopted to connect the main bridge and ramps in urban overpasses, which are under significant flexion-torsion coupling effects and in complicated stress states. In irregular-shaped bridge design, the parameters such as ramp radius, bifurcation diaphragm stiffness, box girder height, and supporting condition could affect structural performance in different manners. In this paper, the influence of various parameters on three indices, including maximum stress, the stress variation coefficient, and the fundamental frequency of torsional vibration, is investigated and analyzed based on orthogonal test method. Through orthogonal analysis, the major influence parameters and corresponding optimal values for these indices are achieved. Combining with the analytic hierarchy process (AHP), the hierarchical structure model of the multi-indices orthogonal test is established and a comprehensive weight analysis method is proposed to reflect the parameter influence on overall mechanical properties of an irregularly-shaped bridge. Influence order and optimal values of parameters for overall mechanical properties are determined based on the weight of factors and levels calculated by the comprehensive weight analysis method. The results indicate that the comprehensive weight analysis method is superior to the overall balance method, which verifies the effectiveness and accuracy of the comprehensive weight analysis in the parameter optimization of the multi-indices orthogonal test for an irregularly-shaped bridge. Optimal parameters obtained in this paper can provide reference and guidance for parameter control in irregularly-shaped bridge design.Algorithms2016-11-0594Article10.3390/a9040074741999-48932016-11-05doi: 10.3390/a9040074Hanbing LiuXin HeXianqiang WangYubo JiaoGang Song<![CDATA[Algorithms, Vol. 9, Pages 73: Community Structure Detection for Directed Networks through Modularity Optimisation]]>
http://www.mdpi.com/1999-4893/9/4/73
Networks constitute powerful means of representing various types of complex systems, where nodes denote the system entities and edges express the interactions between the entities. An important topological property in complex networks is community structure, where the density of edges within subgraphs is much higher than across different subgraphs. Each of these subgraphs forms a community (or module). In the literature, a metric called modularity is defined that measures the quality of a partition of nodes into different mutually exclusive communities. One means of deriving community structure is modularity maximisation. In this paper, a novel mathematical programming-based model, DiMod, is proposed that tackles the problem of maximising modularity for directed networks.Algorithms2016-11-0194Article10.3390/a9040073731999-48932016-11-01doi: 10.3390/a9040073Lingjian YangJonathan SilvaLazaros PapageorgiouSophia Tsoka<![CDATA[Algorithms, Vol. 9, Pages 72: Engineering a Combinatorial Laplacian Solver: Lessons Learned]]>
http://www.mdpi.com/1999-4893/9/4/72
Linear system solving is a main workhorse in applied mathematics. Recently, theoretical computer scientists contributed sophisticated algorithms for solving linear systems with symmetric diagonally-dominant (SDD) matrices in provably nearly-linear time. These algorithms are very interesting from a theoretical perspective, but their practical performance was unclear. Here, we address this gap. We provide the first implementation of the combinatorial solver by Kelner et al. (STOC 2013), which is appealing for implementation due to its conceptual simplicity. The algorithm exploits that a Laplacian matrix (which is SDD) corresponds to a graph; solving symmetric Laplacian linear systems amounts to finding an electrical flow in this graph with the help of cycles induced by a spanning tree with the low-stretch property. The results of our experiments are ambivalent. While they confirm the predicted nearly-linear running time, the constant factors make the solver much slower for reasonable inputs than basic methods with higher asymptotic complexity. We were also not able to use the solver effectively as a smoother or preconditioner. Moreover, while spanning trees with lower stretch indeed reduce the solver’s running time, we experience again a discrepancy in practice: in our experiments, simple spanning tree algorithms perform better than those with a guaranteed low stretch. We expect that our results provide insights for future improvements of combinatorial linear solvers.Algorithms2016-10-3194Article10.3390/a9040072721999-48932016-10-31doi: 10.3390/a9040072Daniel HoskeDimitar LukarskiHenning MeyerhenkeMichael Wegner<![CDATA[Algorithms, Vol. 9, Pages 71: A Variable Block Insertion Heuristic for the Blocking Flowshop Scheduling Problem with Total Flowtime Criterion]]>
http://www.mdpi.com/1999-4893/9/4/71
In this paper, we present a variable block insertion heuristic (VBIH) algorithm to solve the blocking flowshop scheduling problem with the total flowtime criterion. In the VBIH algorithm, we define a minimum and a maximum block size. After constructing the initial sequence, the VBIH algorithm starts with the minimum block size, equal to one. It removes the block from the current sequence and inserts it into the partial sequence sequentially with a predetermined move size. The sequence, which is obtained after several block moves, undergoes a variable local search (VLS), which is based on traditional insertion and swap neighborhood structures. If the new sequence obtained after the VLS local search is better than the current sequence, it replaces the current sequence. As long as it improves, it keeps the same block size. However, if it does not improve, the block size is incremented by one and a simulated annealing-type acceptance criterion is used to accept the current sequence. This process is repeated until the block size reaches the maximum block size. Furthermore, we present a novel constructive heuristic, which is based on the profile fitting heuristic from the literature. The proposed constructive heuristic is able to further improve the best known solutions for some larger instances in a few seconds. Parameters of the constructive heuristic and the VBIH algorithm are determined through a design of experiment approach. Extensive computational results on Taillard’s well-known benchmark suite show that the proposed VBIH algorithm outperforms the discrete artificial bee colony algorithm, which is one of the most efficient algorithms in the recent literature. Ultimately, 52 out of the 150 best known solutions are further improved with substantial margins.Algorithms2016-10-2094Article10.3390/a9040071711999-48932016-10-20doi: 10.3390/a9040071Mehmet TasgetirenQuan-Ke PanDamla KizilayKaizhou Gao<![CDATA[Algorithms, Vol. 9, Pages 70: Plant Electrical Signal Classification Based on Waveform Similarity]]>
http://www.mdpi.com/1999-4893/9/4/70
(1) Background: Plant electrical signals are important physiological traits which reflect plant physiological state. As a kind of phenotypic data, plant action potential (AP) evoked by external stimuli—e.g., electrical stimulation, environmental stress—may be associated with inhibition of gene expression related to stress tolerance. However, plant AP is a response to environmental changes and is full of variability. It is an aperiodic signal with a refractory period, discontinuity, noise, and artifacts. In consequence, there are still challenges in automatically recognizing and classifying plant AP; (2) Methods: Therefore, we proposed an AP recognition algorithm based on a dynamic difference threshold to extract all waveforms similar to AP. Next, an incremental template matching algorithm was used to classify the AP and non-AP waveforms; (3) Results: Experimental results indicated that the template matching algorithm achieved a classification rate of 96.0%, and it was superior to backpropagation artificial neural networks (BP-ANNs), support vector machine (SVM) and a deep learning method; (4) Conclusion: These findings imply that the proposed methods are likely to expand possibilities for rapidly recognizing and classifying plant action potentials in the database in the future.Algorithms2016-10-1594Article10.3390/a9040070701999-48932016-10-15doi: 10.3390/a9040070Yang ChenDong-Jie ZhaoZi-Yang WangZhong-Yi WangGuiliang TangLan Huang<![CDATA[Algorithms, Vol. 9, Pages 69: A New Fuzzy Harmony Search Algorithm Using Fuzzy Logic for Dynamic Parameter Adaptation]]>
http://www.mdpi.com/1999-4893/9/4/69
In this paper, a new fuzzy harmony search algorithm (FHS) for solving optimization problems is presented. FHS is based on a recent method using fuzzy logic for dynamic adaptation of the harmony memory accepting (HMR) and pitch adjustment (PArate) parameters, which improves the convergence rate of the traditional harmony search algorithm (HS). The objective of the method is to dynamically adjust the parameters in the range from 0.7 to 1. The impact of using fixed parameters in the harmony search algorithm is discussed and a strategy for efficiently tuning these parameters using fuzzy logic is presented. The FHS algorithm was successfully applied to different benchmark optimization problems. The results of simulation and comparison studies demonstrate the effectiveness and efficiency of the proposed approach.Algorithms2016-10-1494Article10.3390/a9040069691999-48932016-10-14doi: 10.3390/a9040069Cinthia PerazaFevrier ValdezMario GarciaPatricia MelinOscar Castillo<![CDATA[Algorithms, Vol. 9, Pages 68: An Overview on the Applications of Matrix Theory in Wireless Communications and Signal Processing]]>
http://www.mdpi.com/1999-4893/9/4/68
This paper overviews the key applications enabled by matrix theory in two major fields of interest in electrical engineering, namely wireless communications and signal processing. The paper focuses on the fundamental role played by matrices in modeling and optimization of wireless communication systems, and in detection, extraction and processing of the information embedded in signals. Among the major applications in wireless communications, the role of matrix representations and decompositions in characterizing multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) communication systems is described. In addition, this paper points out the important contribution made by matrices in solving signal estimation and detection problems. Special attention is given to the implementation of matrices in sensor array signal processing and the design of adaptive filters. Furthermore, the crucial role played by matrices in representing and processing digital images is depicted by several illustrative applications. This paper concludes with some applications of matrix theory in the area of compressive sensing of signals and by outlining a few open research problems for future study.Algorithms2016-10-1494Review10.3390/a9040068681999-48932016-10-14doi: 10.3390/a9040068Xu WangErchin Serpedin<![CDATA[Algorithms, Vol. 9, Pages 67: Comparison of Different Algorithms to Orthorectify WorldView-2 Satellite Imagery]]>
http://www.mdpi.com/1999-4893/9/4/67
Due to their level of spatial detail (pixel dimensions equal to or less than 1 m), very high-resolution satellite images (VHRSIs) need particular georeferencing and geometric corrections which require careful orthorectification. Although there are several dedicated algorithms, mainly commercial and free software for geographic information system (GIS) and remote sensing applications, the quality of the results may be inadequate in terms of the representation scale for which these images are intended. This paper compares the most common orthorectification algorithms in order to define the best approach for VHRSIs. Both empirical models (such as 2D polynomial functions, PFs; or 3D rational polynomial functions, RPFs) and rigorous physical and deterministic models (such as Toutin) are considered. Ground control points (GCPs) and check points (CPs)—whose positions in the image, as well as in the real world, are known—support algorithm applications. Tests were executed on a WorldView-2 (WV-2) panchromatic image of an area near the Gulf of Naples in Campania (Italy) to establish the best-performing algorithm. Combining 3D RPFs with 2D PFs produced the best results.Algorithms2016-10-1194Article10.3390/a9040067671999-48932016-10-11doi: 10.3390/a9040067Oscar BelfioreClaudio Parente<![CDATA[Algorithms, Vol. 9, Pages 66: Fault Sensing Using Fractal Dimension and Wavelet]]>
http://www.mdpi.com/1999-4893/9/4/66
A new fusion sensing (FS) method was proposed by using the improved fractal box dimension (IFBD) and a developed maximum wavelet coefficient (DMWC) for fault sensing of an online power cable. Four strategies were used. Firstly, the traditional fractal box dimension was improved to enlarge the feature distances between the different fault classes. Secondly, the IFBD recognition algorithm was proposed by using the improved fractal dimension feature extracted from the three-phase currents for the first stage of fault recognition. Thirdly, the DMWC recognition algorithm was developed based on the K-transform and wavelet analysis to establish the relationship between the maximum wavelet coefficient and the fault class. Fourthly, the FS method was formed by combining the IFBD algorithm and the DMWC algorithm in order to recognize the 10 types of short circuit faults of online power cables. The designed test system proved that the FS method markedly increased the fault recognition accuracy. In addition, the parameters of the initial angle, transient resistance, and fault distance had no influence on the FS method.Algorithms2016-10-1194Article10.3390/a9040066661999-48932016-10-11doi: 10.3390/a9040066Mei WangLiang ZhuYanan Guo<![CDATA[Algorithms, Vol. 9, Pages 65: Local Convergence Analysis of an Eighth Order Scheme Using Hypothesis Only on the First Derivative]]>
http://www.mdpi.com/1999-4893/9/4/65
In this paper, we propose a local convergence analysis of an eighth order three-step method to approximate a locally unique solution of a nonlinear equation in a Banach space setting. Further, we also study the dynamic behaviour of that scheme. In an earlier study, Sharma and Arora (2015) did not discuss these properties. Furthermore, the order of convergence was shown using Taylor series expansions and hypotheses involving derivatives of the function up to the fourth order or even higher, which restricts the applicability of the proposed scheme. However, only the first-order derivative appears in the proposed scheme. To overcome this problem, we present hypotheses for the proposed scheme involving at most the first-order derivative. In this way, we not only expand the applicability of the method but also suggest the convergence domain. Finally, a variety of concrete numerical examples are presented where earlier studies cannot be applied to obtain the solutions of nonlinear equations, whereas our study does not exhibit this type of problem/restriction.Algorithms2016-09-2994Article10.3390/a9040065651999-48932016-09-29doi: 10.3390/a9040065Ioannis ArgyrosRamandeep BehlSandile Motsa<![CDATA[Algorithms, Vol. 9, Pages 64: Theorietage der Gesellschaft für Informatik in Speyer 2015—Special Issue]]>
http://www.mdpi.com/1999-4893/9/4/64
We briefly report on the national workshops on Formal Languages and Automata Theory as well as on Algorithms and Complexity Theory held in early Autumn, 2015.Algorithms2016-09-2694Meeting Report10.3390/a9040064641999-48932016-09-26doi: 10.3390/a9040064Henning Fernau<![CDATA[Algorithms, Vol. 9, Pages 62: Noise Reduction of Steel Cord Conveyor Belt Defect Electromagnetic Signal by Combined Use of Improved Wavelet and EMD]]>
http://www.mdpi.com/1999-4893/9/4/62
In order to reduce the noise of a defect electromagnetic signal of the steel cord conveyor belt used in coal mines, a new signal noise reduction method by combined use of the improved threshold wavelet and Empirical Mode Decomposition (EMD) is proposed. Firstly, the denoising method based on the improved threshold wavelet is applied to reduce the noise of a defect electromagnetic signal obtained by an electromagnetic testing system. Then, the EMD is used to decompose the denoised signal and the effective Intrinsic Mode Function (IMF) is extracted by the dominant eigenvalue strategy. Finally, the signal reconstruction is carried out by utilizing the obtained IMF. In order to verify the proposed noise reduction method, experiments are carried out in two cases, including a defective joint and a steel wire rope break. The experimental results show that the proposed method achieves a higher Signal to Noise Ratio (SNR) for the defect electromagnetic signal noise reduction of steel cord conveyor belts.Algorithms2016-09-2694Article10.3390/a9040062621999-48932016-09-26doi: 10.3390/a9040062Hong-Wei MaHong-Wei FanQing-Hua MaoXu-Hui ZhangWang Xing<![CDATA[Algorithms, Vol. 9, Pages 63: A Practical and Robust Execution Time-Frame Procedure for the Multi-Mode Resource-Constrained Project Scheduling Problem with Minimal and Maximal Time Lags]]>
http://www.mdpi.com/1999-4893/9/4/63
Modeling and optimizing organizational processes, such as the one represented by the Resource-Constrained Project Scheduling Problem (RCPSP), improve outcomes. Based on assumptions and simplifications, this model tackles the allocation of resources so that organizations can continue to generate profits and reinvest in future growth. Nonetheless, despite all of the research dedicated to solving the RCPSP and its multi-mode variations, there is no standardized procedure that can guide project management practitioners in their scheduling tasks. This is mainly because many of the proposed approaches are either based on unrealistic/oversimplified scenarios or they propose solution procedures not easily applicable or even feasible in real-life situations. In this study, we solve a more true-to-life and complex model, the Multimode RCPSP with minimal and maximal time lags (MRCPSP/max). The complexity of the model solved is presented, and the practicality of the proposed approach is justified by relying only on information that is available for every project regardless of its industrial context. The results confirm that it is possible to determine a robust makespan and to calculate an execution time-frame with gaps lower than 11% between their lower and upper bounds. In addition, in many instances, the solved lower bound obtained was equal to the best-known optimum.Algorithms2016-09-2494Article10.3390/a9040063631999-48932016-09-24doi: 10.3390/a9040063Angela ChenYun-Chia LiangJose Padilla<![CDATA[Algorithms, Vol. 9, Pages 61: Noncircular Sources-Based Sparse Representation Algorithm for Direction of Arrival Estimation in MIMO Radar with Mutual Coupling]]>
http://www.mdpi.com/1999-4893/9/3/61
In this paper, a reweighted sparse representation algorithm based on noncircular sources is proposed, and the problem of the direction of arrival (DOA) estimation for multiple-input multiple-output (MIMO) radar with mutual coupling is addressed. Making full use of the special structure of banded symmetric Toeplitz mutual coupling matrices (MCM), the proposed algorithm firstly eliminates the effect of mutual coupling by linear transformation. Then, a reduced dimensional transformation is exploited to reduce the computational complexity of the proposed algorithm. Furthermore, by utilizing the noncircular feature of signals, the new extended received data matrix is formulated to enlarge the array aperture. Finally, based on the new received data, a reweighted matrix is constructed, and the proposed method further designs the joint reweighted sparse representation scheme to achieve the DOA estimation by solving the l1-norm constraint minimization problem. The proposed method enlarges the array aperture due to the application of signal noncircularity, and in the presence of mutual coupling, the proposed algorithm provides higher resolution and better angle estimation performance than ESPRIT-like, l1-SVD and l1-SRDML (sparse representation deterministic maximum likelihood) algorithms. Numerical experiment results verify the effectiveness and advantages of the proposed method.Algorithms2016-09-0893Article10.3390/a9030061611999-48932016-09-08doi: 10.3390/a9030061Weidong ZhouJing LiuPengxiang ZhuWenhe GongJiaxin Hou<![CDATA[Algorithms, Vol. 9, Pages 60: HMM Adaptation for Improving a Human Activity Recognition System]]>
http://www.mdpi.com/1999-4893/9/3/60
When developing a fully automatic system for evaluating motor activities performed by a person, it is necessary to segment and recognize the different activities in order to focus the analysis. This process must be carried out by a Human Activity Recognition (HAR) system. This paper proposes a user adaptation technique for improving a HAR system based on Hidden Markov Models (HMMs). This system segments and recognizes six different physical activities (walking, walking upstairs, walking downstairs, sitting, standing and lying down) using inertial signals from a smartphone. The system is composed of a feature extractor for obtaining the most relevant characteristics from the inertial signals, a module for training the six HMMs (one per activity), and the last module for segmenting new activity sequences using these models. The user adaptation technique consists of a Maximum A Posteriori (MAP) approach that adapts the activity HMMs to the user, using some activity examples from this specific user. The main results on a public dataset have reported a significant relative error rate reduction of more than 30%. In conclusion, adapting a HAR system to the user who is performing the physical activities provides significant improvement in the system’s performance.Algorithms2016-09-0293Article10.3390/a9030060601999-48932016-09-02doi: 10.3390/a9030060Rubén San-SegundoJuan MonteroJosé Moreno-PimentelJosé Pardo<![CDATA[Algorithms, Vol. 9, Pages 59: Binary Cockroach Swarm Optimization for Combinatorial Optimization Problem]]>
http://www.mdpi.com/1999-4893/9/3/59
The Cockroach Swarm Optimization (CSO) algorithm is inspired by cockroach social behavior. It is a simple and efficient meta-heuristic algorithm and has been applied to solve global optimization problems successfully. The original CSO algorithm and its variants operate mainly in continuous search space and cannot solve binary-coded optimization problems directly. Many optimization problems have their decision variables in binary. Binary Cockroach Swarm Optimization (BCSO) is proposed in this paper to tackle such problems and was evaluated on the popular Traveling Salesman Problem (TSP), which is considered to be an NP-hard Combinatorial Optimization Problem (COP). A transfer function was employed to map the continuous search space of CSO to a binary search space. The performance of the proposed algorithm was tested firstly on benchmark functions through simulation studies and compared with the performance of existing binary particle swarm optimization and continuous space versions of CSO. The proposed BCSO was adapted to TSP and applied to a set of benchmark instances of symmetric TSP from the TSP library. The results of the proposed Binary Cockroach Swarm Optimization (BCSO) algorithm on TSP were compared to other meta-heuristic algorithms.Algorithms2016-09-0293Article10.3390/a9030059591999-48932016-09-02doi: 10.3390/a9030059Ibidun ObagbuwaAdemola Abidoye<![CDATA[Algorithms, Vol. 9, Pages 58: LR Parsing for LCFRS]]>
http://www.mdpi.com/1999-4893/9/3/58
LR parsing is a popular parsing strategy for variants of Context-Free Grammar (CFG). It has also been used for mildly context-sensitive formalisms, such as Tree-Adjoining Grammar. In this paper, we present the first LR-style parsing algorithm for Linear Context-Free Rewriting Systems (LCFRS), a mildly context-sensitive extension of CFG which has received considerable attention in recent years in the context of natural language processing.Algorithms2016-08-2793Article10.3390/a9030058581999-48932016-08-27doi: 10.3390/a9030058Laura KallmeyerWolfgang Maier<![CDATA[Algorithms, Vol. 9, Pages 57: Uniform Page Migration Problem in Euclidean Space]]>
http://www.mdpi.com/1999-4893/9/3/57
The page migration problem in Euclidean space is revisited. In this problem, online requests occur at any location to access a single page located at a server. Every request must be served, and the server has the choice to migrate from its current location to a new location in space. Each service costs the Euclidean distance between the server and request. A migration costs the distance between the former and the new server location, multiplied by the page size. We study the problem in the uniform model, in which the page has size D = 1. All request locations are not known in advance; however, they are sequentially presented in an online fashion. We design a 2.75-competitive online algorithm that improves the current best upper bound for the problem with the unit page size. We also provide a lower bound of 2.732 for our algorithm. It was already known that 2.5 is a lower bound for this problem.Algorithms2016-08-2393Article10.3390/a9030057571999-48932016-08-23doi: 10.3390/a9030057Amanj KhorramianAkira Matsubayashi<![CDATA[Algorithms, Vol. 9, Pages 56: Multiple Artificial Neural Networks with Interaction Noise for Estimation of Spatial Categorical Variables]]>
http://www.mdpi.com/1999-4893/9/3/56
This paper presents a multiple artificial neural networks (MANN) method with interaction noise for estimating the occurrence probabilities of different classes at any site in space. The MANN consists of several independent artificial neural networks, the number of which is determined by the neighbors around the target location. In the proposed algorithm, the conditional or pre-posterior (multi-point) probabilities are viewed as output nodes, which can be estimated by weighted combinations of input nodes: two-point transition probabilities. The occurrence probability of a certain class at a certain location can be easily computed by the product of output probabilities using Bayes’ theorem. Spatial interaction or redundancy information can be measured in the form of interaction noises. Prediction results show that the method of MANN with interaction noise has a higher classification accuracy than the traditional Markov chain random fields (MCRF) model and can successfully preserve small-scale features.Algorithms2016-08-2093Article10.3390/a9030056561999-48932016-08-20doi: 10.3390/a9030056Xiang HuangZhizhong Wang<![CDATA[Algorithms, Vol. 9, Pages 55: A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis]]>
http://www.mdpi.com/1999-4893/9/3/55
With the increasing rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost effective and adaptable measurement device for monitoring wheelchair kinematics; however, the outcomes are dependent on the reliability of the processing algorithms. Though there are a variety of algorithms that have been proposed to monitor wheelchair propulsion in court sports, they all have limitations. Through experimental testing, we have shown the Attitude and Heading Reference System (AHRS)-based algorithm to be a suitable and reliable candidate algorithm for estimating velocity, distance, and approximating trajectory. The proposed algorithm is computationally inexpensive, agnostic of wheel camber, not sensitive to sensor placement, and can be embedded for real-time implementations. The research is conducted under Griffith University Ethics (GU Ref No: 2016/294).Algorithms2016-08-1793Article10.3390/a9030055551999-48932016-08-17doi: 10.3390/a9030055Jonathan ShepherdTomohito WadaDavid RowlandsDaniel James<![CDATA[Algorithms, Vol. 9, Pages 54: Sign Function Based Sparse Adaptive Filtering Algorithms for Robust Channel Estimation under Non-Gaussian Noise Environments]]>
http://www.mdpi.com/1999-4893/9/3/54
Robust channel estimation is required for coherent demodulation in multipath fading wireless communication systems, which are often deteriorated by non-Gaussian noises. Our research is motivated by the fact that classical sparse least mean square error (LMS) algorithms are very sensitive to impulsive noise, while the standard SLMS algorithm does not take into account the inherent sparsity information of wireless channels. This paper proposes sign function based sparse adaptive filtering algorithms for developing robust channel estimation techniques. Specifically, sign function based least mean square error (SLMS) algorithms are adopted to remove the non-Gaussian noise, which is described by a symmetric α-stable noise model. By exploiting channel sparsity, sparse SLMS algorithms are proposed by introducing several effective sparsity-promoting functions into the standard SLMS algorithm. The convergence analysis of the proposed sparse SLMS algorithms indicates that they outperform the standard SLMS algorithm for robust sparse channel estimation, which is also verified by simulation results.Algorithms2016-08-1293Article10.3390/a9030054541999-48932016-08-12doi: 10.3390/a9030054Tingping ZhangGuan Gui<![CDATA[Algorithms, Vol. 9, Pages 52: Control for Ship Course-Keeping Using Optimized Support Vector Machines]]>
http://www.mdpi.com/1999-4893/9/3/52
Support vector machines (SVM) are proposed in order to obtain a robust controller for ship course-keeping. A cascaded system is constructed by combining the dynamics of the rudder actuator with the dynamics of ship motion. Modeling errors and disturbances are taken into account in the plant. A controller with a simple structure is produced by applying an SVM and L2-gain design. The SVM is used to identify the complicated nonlinear functions and the modeling errors in the plant. The Lagrangian factors in the SVM are obtained using on-line tuning algorithms. L2-gain design is applied to suppress the disturbances. To obtain the optimal parameters in the SVM, the particle swarm optimization (PSO) method is incorporated. The stability and robustness of the closed-loop system are confirmed by Lyapunov stability analysis. Numerical simulation is performed to demonstrate the validity of the proposed hybrid controller and its superior performance over a conventional PD controller.Algorithms2016-08-1093Article10.3390/a9030052521999-48932016-08-10doi: 10.3390/a9030052Weilin LuoHongchao Cong<![CDATA[Algorithms, Vol. 9, Pages 53: Faster Force-Directed Graph Drawing with the Well-Separated Pair Decomposition]]>
http://www.mdpi.com/1999-4893/9/3/53
The force-directed paradigm is one of the few generic approaches to drawing graphs. Since force-directed algorithms can be extended easily, they are used frequently. Most of these algorithms are, however, quite slow on large graphs, as they compute a quadratic number of forces in each iteration. We give a new algorithm that takes only O(m + n log n) time per iteration when laying out a graph with n vertices and m edges. Our algorithm approximates the true forces using the so-called well-separated pair decomposition. We perform experiments on a large number of graphs and show that we can strongly reduce the runtime, even on graphs with less than a hundred vertices, without a significant influence on the quality of the drawings (in terms of the number of crossings and deviation in edge lengths).Algorithms2016-08-0493Article10.3390/a9030053531999-48932016-08-04doi: 10.3390/a9030053Fabian LippAlexander WolffJohannes Zink<![CDATA[Algorithms, Vol. 9, Pages 51: A Multi-Objective Harmony Search Algorithm for Sustainable Design of Floating Settlements]]>
http://www.mdpi.com/1999-4893/9/3/51
This paper is concerned with the application of computational intelligence techniques to the conceptual design and development of a large-scale floating settlement. The settlement in question is a design for the area of Urla, which is a rural touristic region located on the west coast of Turkey, near the metropolis of Izmir. The problem at hand includes both engineering and architectural aspects that need to be addressed in a comprehensive manner. We thus adapt the view as a multi-objective constrained real-parameter optimization problem. Specifically, we consider three objectives, which are conflicting. The first one aims at maximizing accessibility of urban functions such as housing and public spaces, as well as special functions, such as a marina for yachts and a yacht club. The second one aims at ensuring the wind protection of the general areas of the settlement, by adequately placing them in between neighboring land masses. The third one aims at maximizing visibility of the settlement from external observation points, so as to maximize the exposure of the settlement. To address this complex multi-objective optimization problem and identify lucrative alternative design solutions, a multi-objective harmony search algorithm (MOHS) is developed and applied in this paper. When compared to the Differential Evolution algorithm developed for the problem in the literature, we demonstrate that MOHS achieves competitive or slightly better performance in terms of hyper volume calculation, and gives promising results when the Pareto front approximation is examined.Algorithms2016-07-3093Article10.3390/a9030051511999-48932016-07-30doi: 10.3390/a9030051Cemre CubukcuogluIoannis ChatzikonstantinouMehmet TasgetirenI. SariyildizQuan-Ke Pan<![CDATA[Algorithms, Vol. 9, Pages 50: Utilizing Network Structure to Accelerate Markov Chain Monte Carlo Algorithms]]>
http://www.mdpi.com/1999-4893/9/3/50
We consider the problem of estimating the measure of subsets in very large networks. A prime tool for this purpose is the Markov Chain Monte Carlo (MCMC) algorithm. This algorithm, while extremely useful in many cases, still often suffers from the drawback of very slow convergence. We show that in a special, but important case, it is possible to obtain significantly better bounds on the convergence rate. This special case is when the huge state space can be aggregated into a smaller number of clusters, in which the states behave approximately the same way (but their behavior still may not be identical). A Markov chain with this structure is called quasi-lumpable. This property allows the aggregation of states (nodes) into clusters. Our main contribution is a rigorously proved bound on the rate at which the aggregated state distribution approaches its limit in quasi-lumpable Markov chains. We also demonstrate numerically that in certain cases this can indeed lead to a significantly accelerated way of estimating the measure of subsets. The result can be a useful tool in the analysis of complex networks, whenever they have a clustering that aggregates nodes with similar (but not necessarily identical) behavior.Algorithms2016-07-2993Article10.3390/a9030050501999-48932016-07-29doi: 10.3390/a9030050Ahmad AskarianRupei XuAndrás Faragó<![CDATA[Algorithms, Vol. 9, Pages 49: Data Filtering Based Recursive and Iterative Least Squares Algorithms for Parameter Estimation of Multi-Input Output Systems]]>
http://www.mdpi.com/1999-4893/9/3/49
This paper discusses the parameter estimation problems of multi-input output-error autoregressive (OEAR) systems. By combining the auxiliary model identification idea and the data filtering technique, a data-filtering-based recursive generalized least squares (F-RGLS) identification algorithm and a data-filtering-based iterative least squares (F-LSI) identification algorithm are derived. Compared with the F-RGLS algorithm, the proposed F-LSI algorithm is more effective and can generate more accurate parameter estimates. The simulation results confirm this conclusion.
Algorithms 2016, 9(3), 49; doi: 10.3390/a9030049. Published 2016-07-26. Author: Jiling Ding
<![CDATA[Algorithms, Vol. 9, Pages 48: Semi-Supervised Classification Based on Low Rank Representation]]>
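The recursive estimators above build on the standard recursive least squares (RLS) update. A minimal sketch, assuming a hypothetical noise-free scalar-output system and omitting the paper's data filtering and auxiliary-model machinery:

```python
def rls(data, n, lam=1.0):
    """Recursive least squares for y = phi . theta with forgetting factor lam."""
    theta = [0.0] * n
    # Large initial covariance encodes an uninformative prior on theta.
    P = [[1e6 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for phi, y in data:
        Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
        K = [v / denom for v in Pphi]                       # gain vector
        err = y - sum(phi[i] * theta[i] for i in range(n))  # prediction error
        theta = [theta[i] + K[i] * err for i in range(n)]
        # Covariance update P <- (P - K phi^T P) / lam; P stays symmetric.
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)]
             for i in range(n)]
    return theta

# Hypothetical noise-free system y = 2.0*t - 1.0 with regressor phi = (t, 1).
data = [((float(t), 1.0), 2.0 * t - 1.0) for t in range(1, 20)]
print(rls(data, 2))  # approaches [2.0, -1.0]
```

The data-filtering idea in the paper amounts to passing both the regressor and the output through the same filter before running updates of this form, which whitens the colored noise of the OEAR model.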
http://www.mdpi.com/1999-4893/9/3/48
Graph-based semi-supervised classification uses a graph to capture the relationship between samples and exploits label propagation techniques on the graph to predict the labels of unlabeled samples. However, it is difficult to construct a graph that faithfully describes the relationship between high-dimensional samples. Recently, low-rank representation has been introduced to construct such a graph, which can preserve the global structure of high-dimensional samples and help to train accurate transductive classifiers. In this paper, we take advantage of low-rank representation for graph construction and propose an inductive semi-supervised classifier called Semi-Supervised Classification based on Low-Rank Representation (SSC-LRR). SSC-LRR first utilizes a linearized alternating direction method with adaptive penalty to compute the coefficient matrix of the low-rank representation of the samples. Then, the coefficient matrix is adopted to define a graph. Finally, SSC-LRR incorporates this graph into a graph-based semi-supervised linear classifier to classify unlabeled samples. Experiments are conducted on four widely used facial datasets to validate the effectiveness of the proposed SSC-LRR, and the results demonstrate that SSC-LRR achieves higher accuracy than other related methods.
Algorithms 2016, 9(3), 48; doi: 10.3390/a9030048. Published 2016-07-22. Authors: Xuan Hou, Guangjun Yao, Jun Wang
<![CDATA[Algorithms, Vol. 9, Pages 47: A Hybrid Course Recommendation System by Integrating Collaborative Filtering and Artificial Immune Systems]]>
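The middle steps described above, defining a graph from the coefficient matrix and propagating labels over it, can be sketched as follows. The LADMAP solver that produces the coefficient matrix Z is not shown; the symmetrization W = (|Z| + |Z^T|)/2 and the normalized propagation scheme below are common choices, not necessarily the paper's exact formulation:

```python
def lrr_graph_propagate(Z, labels, classes=(0, 1), alpha=0.9, n_iter=100):
    """Build W = (|Z| + |Z^T|)/2 from a coefficient matrix Z, then run
    simple normalized label propagation; labels[i] == -1 means unlabeled."""
    n = len(Z)
    W = [[(abs(Z[i][j]) + abs(Z[j][i])) / 2.0 for j in range(n)]
         for i in range(n)]
    for i in range(n):
        W[i][i] = 0.0                   # no self-loops
    d = [sum(row) or 1.0 for row in W]  # row degrees (guard against zero)
    Y = [[1.0 if labels[i] == c else 0.0 for c in classes] for i in range(n)]
    F = [row[:] for row in Y]
    for _ in range(n_iter):
        # Blend neighbor scores with the original labels at each step.
        F = [[alpha * sum(W[i][j] / d[i] * F[j][k] for j in range(n))
              + (1 - alpha) * Y[i][k] for k in range(len(classes))]
             for i in range(n)]
    return [classes[max(range(len(classes)), key=lambda k: F[i][k])]
            for i in range(n)]

# Hypothetical block-diagonal coefficient matrix: samples {0,1} and {2,3}
# represent each other, so labels spread within each block.
Z = [[0.0, 0.9, 0.0, 0.0],
     [0.9, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.9],
     [0.0, 0.0, 0.9, 0.0]]
print(lrr_graph_propagate(Z, [0, -1, 1, -1]))  # -> [0, 0, 1, 1]
```

The block structure that low-rank representation tends to recover (samples of one class represent each other) is exactly what makes the resulting graph a good substrate for propagation.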
http://www.mdpi.com/1999-4893/9/3/47
This research proposes a two-stage user-based collaborative filtering process using an artificial immune system for the prediction of student grades, along with a filter for professor ratings, in course recommendation for college students. We test cosine similarity and Karl Pearson (KP) correlation in the affinity calculations for clustering and prediction. This research uses student and professor information datasets of Yuan Ze University from the years 2005–2009 for testing and training. Mean absolute error and confusion matrix analysis form the testing parameters. A minimum professor rating was tested to check the results, and we observed that the recommendation systems herein provide highly accurate results for students with higher mean grades.
Algorithms 2016, 9(3), 47; doi: 10.3390/a9030047. Published 2016-07-22. Authors: Pei-Chann Chang, Cheng-Hui Lin, Meng-Hui Chen
<![CDATA[Algorithms, Vol. 9, Pages 46: Affinity Propagation Clustering Using Path Based Similarity]]>
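The two affinity measures mentioned above can be sketched for a pair of hypothetical grade vectors; note that the Pearson correlation is simply the cosine similarity of mean-centered vectors:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    return num / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def pearson(u, v):
    """Pearson correlation = cosine similarity of mean-centered vectors."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return cosine([a - mu for a in u], [b - mv for b in v])

# Hypothetical grade vectors of two students over the same three courses.
u = [90.0, 80.0, 70.0]
v = [85.0, 75.0, 65.0]
print(cosine(u, v))   # high, but below 1.0
print(pearson(u, v))  # 1.0 up to rounding: the grades move in lockstep
```

The contrast illustrates why the choice matters for grade data: Pearson discounts a student's overall grading level and measures only co-movement, while cosine is affected by it.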
http://www.mdpi.com/1999-4893/9/3/46
Clustering is a fundamental task in data mining. Affinity propagation clustering (APC) is an effective and efficient clustering technique that has been applied in various domains. APC iteratively propagates information between affinity samples, updates the responsibility and availability matrices, and employs these matrices to choose the cluster centers (or exemplars) of the respective clusters. However, since it mainly uses the negative Euclidean distance between exemplars and samples as the similarity between them, APC has difficulty identifying clusters with complex structure, and its performance deteriorates on samples distributed with such structure. To mitigate this problem, we propose an improved APC based on a path-based similarity (APC-PS). APC-PS first utilizes the negative Euclidean distance to find exemplars of clusters. Then, it employs the path-based similarity to measure the similarity between exemplars and samples, and to explore the underlying structure of clusters. Next, it assigns non-exemplar samples to their respective clusters via that similarity. Our empirical study on synthetic and UCI datasets shows that the proposed APC-PS significantly outperforms the original APC and other related approaches.
Algorithms 2016, 9(3), 46; doi: 10.3390/a9030046. Published 2016-07-21. Authors: Yuan Jiang, Yuliang Liao, Guoxian Yu
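The path-based similarity above can be sketched via its distance counterpart: the effective distance between two samples is the minimum, over all connecting paths, of the largest single edge on the path (a minimax distance), so samples linked by a chain of close neighbors become similar even when they are far apart directly. A minimal sketch, not necessarily the paper's exact formulation:

```python
from math import dist

def path_based_distances(points):
    """Minimax path distances between all pairs of points: a Floyd-Warshall
    style relaxation that shrinks the largest edge along each path."""
    n = len(points)
    D = [[dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # A detour through k helps if its worst edge is smaller.
                D[i][j] = min(D[i][j], max(D[i][k], D[k][j]))
    return D

# Three collinear points: the two ends are 2.0 apart directly, but only
# 1.0 apart along the chain through the middle point.
D = path_based_distances([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)])
print(D[0][2])  # -> 1.0
```

Turning such distances into similarities (e.g. by negation, matching APC's negative-Euclidean convention) lets elongated or curved clusters score as internally similar, which is the effect the abstract attributes to APC-PS.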