Algorithms
http://www.mdpi.com/journal/algorithms
Latest open access articles published in Algorithms at http://www.mdpi.com/journal/algorithms

Algorithms, Vol. 7, Pages 650-662: Time Series Prediction Method of Bank Cash Flow and Simulation Comparison
http://www.mdpi.com/1999-4893/7/4/650
In order to improve the accuracy of cash business information and enhance the linkage between cash inventory forecasting and cash management information in commercial banks, four methods are adopted to realize time series prediction of bank cash flow: the first moving average method, the second moving average method, the first exponential smoothing method and the second exponential smoothing method. The prediction accuracy of the cash flow time series is improved by optimizing the algorithm parameters. Simulation experiments are carried out on real commercial bank cash flow data, and the comparison of predictive performance shows the effectiveness of the proposed methods.
Algorithms 2014, 7(4), 650-662; Article; ISSN 1999-4893; doi: 10.3390/a7040650; published 2014-11-26. Authors: Wen-Hua Cui, Jie-Sheng Wang, Chen-Xu Ning

Algorithms, Vol. 7, Pages 635-649: The Lobe Fissure Tracking by the Modified Ant Colony Optimization Framework in CT Images
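The first and second exponential smoothing predictors mentioned in the cash-flow abstract above can be sketched in a few lines of Python (Brown's formulation; the smoothing constant and the test data below are illustrative choices, not the paper's data):

```python
def single_exponential_smoothing(series, alpha):
    """First (single) exponential smoothing: each smoothed value is a
    weighted blend of the new observation and the previous smoothed value."""
    s = [series[0]]
    for x in series[1:]:
        s.append(alpha * x + (1 - alpha) * s[-1])
    return s

def double_exponential_smoothing_forecast(series, alpha, horizon=1):
    """Second (double) exponential smoothing: smooth the smoothed series
    again, then combine level and trend estimates into a forecast."""
    s1 = single_exponential_smoothing(series, alpha)
    s2 = single_exponential_smoothing(s1, alpha)
    a = 2 * s1[-1] - s2[-1]                      # level estimate
    b = alpha / (1 - alpha) * (s1[-1] - s2[-1])  # trend estimate
    return a + b * horizon
```

On a linear trend, the double method forecasts the continuation of the trend, which a single moving average would lag behind.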
http://www.mdpi.com/1999-4893/7/4/635
Chest computed tomography (CT) is the most commonly used technique for the inspection of lung lesions. However, lobe fissures in lung CT images are still difficult to observe owing to their imaging structure. Therefore, in this paper, we aimed to develop an efficient tracking framework that extracts the lobe fissures with the proposed modified ant colony optimization (ACO) algorithm. We increase the consistency of the pheromone on the lobe fissure to improve the accuracy of path tracking. To validate the proposed system, we tested our method on a database of 15 lung patients. In the experiments, the quantitative assessment shows that the proposed ACO method achieved average F-measures of 80.9% and 82.84% in the left and right lungs, respectively. The experiments indicate that our method yields more satisfactory performance and can help investigators detect lung lesions for further examination.
Algorithms 2014, 7(4), 635-649; doi: 10.3390/a7040635; published 2014-11-24. Authors: Chii-Jen Chen, You-Wei Wang, Wei-Chih Shen, Chih-Yi Chen, Wen-Pinn Fang

Algorithms, Vol. 7, Pages 621-634: Neural Networks for Muscle Forces Prediction in Cycling
http://www.mdpi.com/1999-4893/7/4/621
This paper documents research towards the development of a system based on Artificial Neural Networks to predict the muscle force patterns of an athlete during cycling. Two independent inverse problems must be solved for the force estimation: evaluation of the kinematic model and evaluation of the force distribution along the limb. By repeatedly solving the two inverse problems for different subjects and conditions, a training pattern for an Artificial Neural Network was created. The trained network was then validated against an independent validation set, and agreement between the two alternative approaches was evaluated using the Bland-Altman method. The obtained neural network yields a normalized error well below 1% for the different test patterns, and the Bland-Altman plot shows considerable correlation between the two methods. The new approach proposed herein allows a direct and fast computation of the inverse dynamics of a cyclist, opening the possibility of integrating such an algorithm into a real-time environment such as an embedded application.
Algorithms 2014, 7(4), 621-634; doi: 10.3390/a7040621; published 2014-11-13. Authors: Giulio Cecchini, Gabriele Lozito, Maurizio Schmid, Silvia Conforto, Francesco Fulginei, Daniele Bibbo

Algorithms, Vol. 7, Pages 608-620: High-Order Entropy Compressed Bit Vectors with Rank/Select
http://www.mdpi.com/1999-4893/7/4/608
We design practical implementations of data structures for compressing bit vectors to support efficient rank queries (counting the number of ones up to a given point). Unlike previous approaches, which either store the bit vectors plainly or focus on compressing bit vectors with low densities of ones or zeros, we aim at low entropies of higher order, for example 101010...10. Our implementations achieve very good compression ratios, while showing only a modest increase in query time.
Algorithms 2014, 7(4), 608-620; doi: 10.3390/a7040608; published 2014-11-03. Authors: Kai Beskers, Johannes Fischer

Algorithms, Vol. 7, Pages 597-607: Eight-Scale Image Contrast Enhancement Based on Adaptive Inverse Hyperbolic Tangent Algorithm
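The rank query that these bit-vector structures support can be illustrated with an uncompressed, block-based Python sketch (the block size and test bits are illustrative; the paper's structures additionally compress the underlying bits):

```python
class RankBitVector:
    """Answer rank1(i) = number of ones in bits[0:i] using precomputed
    per-block cumulative counts, so only one partial block is scanned."""
    BLOCK = 64

    def __init__(self, bits):
        self.bits = bits
        self.cum = [0]  # cumulative number of ones before each block
        for start in range(0, len(bits), self.BLOCK):
            block = bits[start:start + self.BLOCK]
            self.cum.append(self.cum[-1] + sum(block))

    def rank1(self, i):
        b, off = divmod(i, self.BLOCK)
        return self.cum[b] + sum(self.bits[b * self.BLOCK : b * self.BLOCK + off])
```

The periodic pattern 1010...10 used in the test is exactly the kind of high-order-entropy input the abstract targets.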
http://www.mdpi.com/1999-4893/7/4/597
Eight-scale parameter adjustment is a natural extension of the Adaptive Inverse Hyperbolic Tangent (AIHT) algorithm. It has long been known that the Human Vision System (HVS) depends heavily on detail and edges in the understanding and perception of scenes. The main goal of this study is to produce a contrast enhancement technique that recovers an image from blurring and darkness while improving visual quality. Eight-scale coefficient adjustment provides further local refinement in detail under the AIHT algorithm. The proposed Eight-Scale Adaptive Inverse Hyperbolic Tangent (8SAIHT) method uses sub-bands to calculate the local mean and local variance before the AIHT algorithm is applied. This study also shows that this approach is convenient and effective in the enhancement process for various types of images. The 8SAIHT method is also capable of adaptively enhancing the local contrast of the original image while simultaneously bringing out more object detail.
Algorithms 2014, 7(4), 597-607; doi: 10.3390/a7040597; published 2014-10-28. Authors: Cheng-Yi Yu, Chi-Yuan Lin, Sheng-Chih Yang, Hsueh-Yi Lin

Algorithms, Vol. 7, Pages 582-596: Processing KNN Queries in Grid-Based Sensor Networks
http://www.mdpi.com/1999-4893/7/4/582
Recently, developing efficient processing techniques in spatio-temporal databases has been a much discussed topic. Many applications, such as mobile information systems, traffic control systems, and geographical information systems, can benefit from efficient processing of spatio-temporal queries. In this paper, we focus on processing an important type of spatio-temporal query, the K-nearest-neighbor (KNN) query. Unlike previous research, the locations of objects are determined by sensors deployed in a grid-based manner. Because positioning relies on the sensor network rather than GPS, object locations carry greater uncertainty. With this uncertain location information, we develop an efficient algorithm to process KNN queries. Moreover, we design a probability model to quantify the possibility of each object being a query result. Finally, extensive experiments are conducted to demonstrate the efficiency of the proposed algorithms.
Algorithms 2014, 7(4), 582-596; doi: 10.3390/a7040582; published 2014-10-23. Author: Yuan-Ko Huang

Algorithms, Vol. 7, Pages 554-581: Parallelizing Particle Swarm Optimization in a Functional Programming Environment
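Setting location uncertainty aside, the core of a KNN query is ranking objects by distance to the query point; a brute-force Python sketch over 2-D points (the object ids and coordinates below are invented for illustration):

```python
import heapq

def knn(query, objects, k):
    """Brute-force K-nearest-neighbor query: objects is a list of
    (object_id, (x, y)) pairs; return the k closest to the query point."""
    def dist2(point):
        return (point[0] - query[0]) ** 2 + (point[1] - query[1]) ** 2
    # nsmallest sorts by squared distance, avoiding an unneeded sqrt
    return heapq.nsmallest(k, objects, key=lambda item: dist2(item[1]))
```

The paper's contribution lies in doing this ranking probabilistically when each location is uncertain, which this exact-location sketch does not attempt.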
http://www.mdpi.com/1999-4893/7/4/554
Many bioinspired methods are based on using several simple entities which search for a reasonable solution (somehow) independently. This is the case of Particle Swarm Optimization (PSO), where many simple particles search for the optimal solution by using both their local information and the information of the best solution found so far by any of the other particles. Particles are partially independent, and we can take advantage of this fact to parallelize PSO programs. Unfortunately, providing good parallel implementations for each specific PSO program can be tricky and time-consuming for the programmer. In this paper we introduce several parallel functional skeletons which, given a sequential PSO implementation, automatically provide corresponding parallel implementations. We use these skeletons and report experimental results. Despite the low effort required of programmers to use these skeletons, empirical results show that the skeletons reach reasonable speedups.
Algorithms 2014, 7(4), 554-581; doi: 10.3390/a7040554; published 2014-10-23. Authors: Pablo Rabanal, Ismael Rodríguez, Fernando Rubio

Algorithms, Vol. 7, Pages 538-553: Predicting Student Academic Performance: A Comparison of Two Meta-Heuristic Algorithms Inspired by Cuckoo Birds for Training Neural Networks
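The sequential PSO that such skeletons would parallelize can be sketched minimally as follows (in Python rather than the paper's functional setting; the inertia and acceleration parameters are conventional textbook choices, not taken from the paper):

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: each particle's velocity blends inertia, attraction to
    its own best position, and attraction to the swarm's best position."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The per-particle update loop is the part that is "partially independent" and hence the natural unit of work for a parallel skeleton.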
http://www.mdpi.com/1999-4893/7/4/538
Predicting student academic performance with high accuracy facilitates admission decisions and enhances educational services at educational institutions. This raises the need for a model that predicts student performance based on the results of standardized exams, including university entrance exams, high school graduation exams, and other influential factors. In this study, an approach based on an artificial neural network (ANN) trained with two meta-heuristic algorithms inspired by cuckoo birds and their lifestyle, namely Cuckoo Search (CS) and the Cuckoo Optimization Algorithm (COA), is proposed. In particular, we used previous exam results and other factors, such as the location of the student’s high school and the student’s gender, as input variables to predict student academic performance. The standard CS and standard COA were separately utilized to train the feed-forward network for prediction, optimizing the weights between layers and the biases of the neural network. The simulation results were then discussed and analyzed to investigate the prediction ability of the neural network trained by these two algorithms. The findings demonstrate that both CS and COA have potential for training ANNs, with ANN-COA obtaining slightly better results for predicting student academic performance in this case. It is expected that this work may be used to support student admission procedures and strengthen the service system in educational institutions.
Algorithms 2014, 7(4), 538-553; doi: 10.3390/a7040538; published 2014-10-16. Authors: Jeng-Fung Chen, Ho-Nien Hsieh, Quang Do

Algorithms, Vol. 7, Pages 523-537: Multi-Sensor Building Fire Alarm System with Information Fusion Technology Based on D-S Evidence Theory
http://www.mdpi.com/1999-4893/7/4/523
Multi-sensor information fusion technology based on Dempster-Shafer evidence theory is applied in a building fire alarm system to realize early detection and alarming. By using multiple sensors to monitor parameters of the fire process, such as light, smoke, temperature, gas and moisture, the range of fire monitoring in space and time is expanded compared with a single-sensor system. The D-S evidence theory is then applied to fuse the information from the multiple sensors with the specific fire model, making the fire alarm more accurate and timely. The proposed method can effectively handle failures of the monitoring data, deal robustly with conflicting evidence from the multiple sensors, and significantly improve the reliability of fire warning.
Algorithms 2014, 7(4), 523-537; doi: 10.3390/a7040523; published 2014-10-14. Authors: Qian Ding, Zhenghong Peng, Tianzhen Liu, Qiaohui Tong

Algorithms, Vol. 7, Pages 510-522: A CR Spectrum Allocation Algorithm in Smart Grid Wireless Sensor Network
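Dempster's rule of combination, the fusion step at the core of the approach above, can be sketched directly; the two-sensor fire/no-fire mass assignments in the test are invented for illustration, not the paper's fire model:

```python
def dempster_combine(m1, m2):
    """Combine two basic mass assignments (dicts mapping frozenset focal
    elements to masses) with Dempster's rule: multiply masses of every
    pair, keep the intersection, and renormalize away the conflict mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # contradictory evidence pair
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}
```

When two sensors both lean towards "fire", the combined belief in "fire" exceeds either sensor's individual belief, which is what makes the fused alarm both earlier and more reliable than a single sensor.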
http://www.mdpi.com/1999-4893/7/4/510
The cognitive radio (CR) method was introduced into smart grid communication systems to resolve potential maladies such as the coexistence of heterogeneous networks, overloaded data flow, diversity in data structures, and unstable quality of service (QoS). In this paper, a cognitive spectrum allocation algorithm based on non-cooperative game theory is proposed. The CR spectrum allocation model was developed by modifying the traditional game model via the insertion of a time variable and a critical function. Simulation results show that the improved spectrum allocation algorithm can achieve stable spectrum allocation strategies and avoid the appearance of multiple Nash equilibria, at the expense of a certain sacrifice in system utility. It is suitable for application in distributed cognitive networks in power grids, thus contributing to the improvement of the heterogeneity and data capacity of power communication systems.
Algorithms 2014, 7(4), 510-522; doi: 10.3390/a7040510; published 2014-10-13. Authors: Wei He, Ke Li, Qiang Zhou, Songnong Li

Algorithms, Vol. 7, Pages 492-509: Power Cable Fault Recognition Based on an Annealed Chaotic Competitive Learning Network
http://www.mdpi.com/1999-4893/7/4/492
In electric power systems, power cable operation under normal conditions is very important, yet various cable faults occur in practical applications, so recognizing cable faults correctly and in a timely manner is crucial. In this paper we propose a method in which an annealed chaotic competitive learning network recognizes power cable fault types. The method shows good performance compared with the support vector machine (SVM) and the improved Particle Swarm Optimization (IPSO)-SVM method. The experimental results show that the fault recognition accuracy reached 96.2% using 54 data samples, with a network training time of about 0.032 seconds. The method can achieve cable fault classification effectively.
Algorithms 2014, 7(4), 492-509; doi: 10.3390/a7040492; published 2014-09-26. Authors: Xuebin Qin, Mei Wang, Jzau-Sheng Lin, Xiaowei Li

Algorithms, Vol. 7, Pages 471-491: Applying a Dynamic Resource Supply Model in a Smart Grid
http://www.mdpi.com/1999-4893/7/3/471
Dynamic resource supply is a complex issue to resolve in a cyber-physical system (CPS). In our previous work, a resource model called the dynamic resource supply model (DRSM) was proposed to handle resource specification, management and allocation in CPS. In this paper, we integrate the DRSM with a service-oriented architecture and apply it to a smart grid (SG), one of the most complex CPS examples. We give a detailed design of the SG for electricity charging requests and electricity allocation between plug-in hybrid electric vehicles (PHEV) and the DRSM through the Android system. In the design, we explain a mechanism for electricity consumption with data collection and re-allocation through a ZigBee network, and we verify the correctness of the resource model for the expected electricity allocation.
Algorithms 2014, 7(3), 471-491; doi: 10.3390/a7030471; published 2014-09-22. Authors: Kaiyu Wan, Yuji Dong, Qian Chang, Tengfei Qian

Algorithms, Vol. 7, Pages 456-470: A Fovea Localization Scheme Using Vessel Origin-Based Parabolic Model
http://www.mdpi.com/1999-4893/7/3/456
Located at the center of the macula, the fovea plays an important role in computer-aided diagnosis. To locate the fovea, this paper proposes a vessel origin (VO)-based parabolic model, which takes the VO as the vertex of the parabola-like vasculature. Image processing steps are applied to accurately locate the fovea on retinal images. First, the morphological gradient and the circular Hough transform are used to find the optic disc. The vessel structure is then segmented with a line detector. Based on the characteristics of the VO, four features of the VO are extracted, followed by a Bayesian classification procedure. Once the VO is identified, the VO-based parabolic model locates the fovea. To find the best-fitting parabola and the symmetry axis of the retinal vessels, a Shift and Rotation (SR)-Hough transform that combines the Hough transform with a shift and rotation of coordinates is presented. Two public databases of retinal images, DRIVE and STARE, are used to evaluate the proposed method. The experimental results show that the average Euclidean distances between the located fovea and the fovea marked by experts in the two databases are 9.8 pixels and 30.7 pixels, respectively. These results are stronger than those of other methods and thus provide better macular detection for further disease discovery.
Algorithms 2014, 7(3), 456-470; doi: 10.3390/a7030456; published 2014-09-10. Authors: Chun-Yuan Yu, Chen-Chung Liu, Shyr-Shen Yu

Algorithms, Vol. 7, Pages 444-455: A Novel Contrast Enhancement Technique on Palm Bone Images
http://www.mdpi.com/1999-4893/7/3/444
Contrast enhancement plays a fundamental role in image processing. Many histogram-based techniques are widely used for contrast enhancement of given images, due to their simplicity and effectiveness. However, conventional histogram equalization (HE) methods result in excessive contrast enhancement, which causes unnatural-looking and unsatisfactory results for a variety of low-contrast images. To solve such problems, a novel multi-histogram equalization technique is proposed in this paper to enhance the contrast of palm bone X-ray radiographs. A mean-variance analysis method is employed to partition the histogram of the original grayscale image into multiple sub-histograms, which are then independently equalized. Experimental results show that the multi-histogram equalization technique achieves a lower average absolute mean brightness error (AMBE) value, simultaneously preserving the mean brightness and enhancing the local contrast of the original image.
Algorithms 2014, 7(3), 444-455; doi: 10.3390/a7030444; published 2014-09-05. Authors: Yung-Tsang Chang, Jen-Tse Wang, Wang-Hsai Yang

Algorithms, Vol. 7, Pages 429-443: ℓ1 Major Component Detection and Analysis (ℓ1 MCDA) in Three and Higher Dimensional Spaces
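For context, the conventional global histogram equalization that multi-histogram methods refine can be sketched as follows (grayscale pixels as a flat list; the 4-level test image is illustrative, and this is the HE baseline, not the paper's multi-histogram technique):

```python
def histogram_equalization(pixels, levels=256):
    """Conventional global histogram equalization: remap each gray level
    through the normalized cumulative histogram of the image."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    cdf_min = next((c for c in cdf if c > 0), 0)
    if n == cdf_min:  # constant image: nothing to equalize
        return pixels[:]
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]
```

A multi-histogram variant would first split `pixels` into sub-ranges (here by mean-variance analysis) and run this same equalization on each sub-histogram independently, which is what preserves the mean brightness.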
http://www.mdpi.com/1999-4893/7/3/429
Based on the recent development of two-dimensional ℓ1 major component detection and analysis (ℓ1 MCDA), we develop a scalable ℓ1 MCDA in n-dimensional space to identify the major directions of star-shaped heavy-tailed statistical distributions with irregularly positioned “spokes” and “clutters”. To achieve robustness and efficiency, the proposed ℓ1 MCDA in n-dimensional space adopts a two-level median fit process in a local neighborhood of a given direction in each iteration. Computational results indicate that, in terms of accuracy, ℓ1 MCDA is competitive with two well-known PCAs when there is only one major direction in the data, and that ℓ1 MCDA can further determine multiple major directions of n-dimensional data from superimposed Gaussians or heavy-tailed distributions, both without and with patterned artificial outliers. With the ability to recover complex spoke structures with heavy-tailed noise and clutter in the data, ℓ1 MCDA has the potential to generate better semantics than other methods.
Algorithms 2014, 7(3), 429-443; doi: 10.3390/a7030429; published 2014-08-19. Authors: Zhibin Deng, John Lavery, Shu-Cherng Fang, Jian Luo

Algorithms, Vol. 7, Pages 418-428: Target Channel Visiting Order Design Using Particle Swarm Optimization for Spectrum Handoff in Cognitive Radio Networks
http://www.mdpi.com/1999-4893/7/3/418
In a dynamic spectrum access network, when a primary user (licensed user) reappears on the current channel, cognitive radios (CRs) need to vacate the channel and re-establish a communication link on some other channel to avoid interference with primary users, resulting in spectrum handoff. This paper studies the problem of designing the target channel visiting order for spectrum handoff so as to minimize the expected spectrum handoff delay. A particle swarm optimization (PSO) based algorithm is proposed to solve the problem. Simulation results show that the proposed algorithm performs far better than a random target channel visiting scheme, and that the solutions obtained by PSO are very close to the optimal solution, which further validates the effectiveness of the proposed method.
Algorithms 2014, 7(3), 418-428; doi: 10.3390/a7030418; published 2014-08-18. Authors: Shilian Zheng, Zhijin Zhao, Changlin Luo, Xiaoniu Yang

Algorithms, Vol. 7, Pages 405-417: Seminal Quality Prediction Using Clustering-Based Decision Forests
http://www.mdpi.com/1999-4893/7/3/405
Prediction of seminal quality with statistical learning tools is an emerging methodology in decision support systems in biomedical engineering, and is very useful in the early diagnosis of seminal patients and the selection of semen donor candidates. However, as is common in medical diagnosis, seminal quality prediction faces the class imbalance problem. In this paper, we propose a novel supervised ensemble learning approach, namely Clustering-Based Decision Forests, to tackle the unbalanced class learning problem in seminal quality prediction. Experimental results on a real fertility diagnosis dataset show that Clustering-Based Decision Forests outperforms decision trees, Support Vector Machines, random forests, multilayer perceptron neural networks and logistic regression by a noticeable margin. Clustering-Based Decision Forests can also be used to evaluate variable importance; the top five factors that may affect semen concentration obtained in this study are age, serious trauma, sitting time, the season when the semen sample is produced, and high fevers in the last year. The findings could be helpful in explaining seminal concentration problems in infertile males or in pre-screening semen donor candidates.
Algorithms 2014, 7(3), 405-417; doi: 10.3390/a7030405; published 2014-08-11. Authors: Hong Wang, Qingsong Xu, Lifeng Zhou

Algorithms, Vol. 7, Pages 397-404: Algorithm Based on Heuristic Strategy to Infer Lossy Links in Wireless Sensor Networks
http://www.mdpi.com/1999-4893/7/3/397
As practical applications of wireless sensor networks mature, network fault management is in great demand. Severe link packet loss degrades the performance of wireless sensor networks, so lossy links must be found and repaired. Subject to the constraints of limited resources, lossy links are inferred using end-to-end measurement and network tomography. An algorithm based on a heuristic strategy is proposed, which maps the lossy-link inference problem to a minimal set-cover problem. The performance of the inference algorithm is evaluated by simulation, and the simulation results indicate the feasibility and efficiency of the method.
Algorithms 2014, 7(3), 397-404; doi: 10.3390/a7030397; published 2014-07-29. Authors: Wen-Qing Ma, Jing Zhang

Algorithms, Vol. 7, Pages 376-396: A Hybrid Metaheuristic Approach for Minimizing the Total Flow Time in A Flow Shop Sequence Dependent Group Scheduling Problem
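Minimal set cover is NP-hard, which is why a heuristic is needed for the lossy-link inference above; the textbook greedy approximation (not necessarily the paper's exact heuristic) can be sketched as:

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation for minimal set cover: repeatedly pick the
    subset that covers the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe cannot be covered by the given subsets")
        cover.append(best)
        uncovered -= best
    return cover
```

In the inference setting, the universe would be the set of lossy end-to-end paths and each subset the paths traversing one candidate link, so a small cover names a small set of suspect links.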
http://www.mdpi.com/1999-4893/7/3/376
Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure that integrates features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of the completion times of all jobs. A well-known benchmark of test cases, comprising problems with two, three, and six machines, is employed both for tuning the relevant parameters of the developed procedure and for assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.
Algorithms 2014, 7(3), 376-396; doi: 10.3390/a7030376; published 2014-07-14. Authors: Antonio Costa, Fulvio Cappadonna, Sergio Fichera

Algorithms, Vol. 7, Pages 363-375: Pressure Model of Control Valve Based on LS-SVM with the Fruit Fly Algorithm
http://www.mdpi.com/1999-4893/7/3/363
The control valve is an essential terminal control component that is hard to model by traditional methodologies because of its complexity and nonlinearity. This paper proposes a new method for modeling the upstream pressure of a control valve using the least squares support vector machine (LS-SVM), which has been used successfully to identify nonlinear systems. To improve the modeling performance, the fruit fly optimization algorithm (FOA) is used to optimize two critical parameters of the LS-SVM. As an example, a set of actual production data from a chlorine control system in the salt chemical industry is applied. The validity of the FOA-based LS-SVM modeling method is verified by comparing the predicted results with the actual data, with an MSE of 2.474 × 10^-3. Moreover, it is demonstrated that the initial position of the FOA does not affect its optimization ability. For comparison, simulation experiments based on the PSO algorithm and the grid search method are also carried out. The results show that the FOA-based LS-SVM has equal prediction accuracy but a significant advantage in calculation time, making it more suitable for online prediction.
Algorithms 2014, 7(3), 363-375; doi: 10.3390/a7030363; published 2014-07-11. Authors: Huang Aiqin, Wang Yong

Algorithms, Vol. 7, Pages 339-362: Model Checking Properties on Reduced Trace Systems
http://www.mdpi.com/1999-4893/7/3/339
Temporal logic has become a well-established method for specifying the behavior of distributed systems. In this paper, we interpret a temporal logic over a partial-order model, namely a trace system. The satisfaction of formulae is defined directly on traces on the basis of rewriting rules, so the graph representation of the system can be completely avoided; moreover, a method is presented that keeps the trace system finite, even in the presence of infinite computations. To further reduce the complexity of model checking temporal logic formulae, an abstraction technique is applied to trace systems.
Algorithms 2014, 7(3), 339-362; doi: 10.3390/a7030339; published 2014-07-08. Authors: Antonella Santone, Gigliola Vaglini

Algorithms, Vol. 7, Pages 328-338: Economic Dispatch Using Modified Bat Algorithm
http://www.mdpi.com/1999-4893/7/3/328
Economic dispatch is an important non-linear optimization task in power systems, in which the total power demand is distributed amongst the generating units such that each unit satisfies its generation limit constraints and the cost of power production is minimized. This paper presents an overview of three optimization algorithms, namely the real-coded genetic algorithm, particle swarm optimization, and a relatively new optimization technique called the bat algorithm, and further proposes modifications to the original bat algorithm. Simulations are carried out for two test cases: a six-generator power system with a simplified convex objective function, and a five-generator system with a non-convex objective function. Finally, the results of the modified algorithm are compared with the results of the genetic algorithm, particle swarm optimization and the original bat algorithm, demonstrating the improvement achieved by the modified bat algorithm.
Algorithms 2014, 7(3), 328-338; doi: 10.3390/a7030328; published 2014-07-03. Authors: Aadil Latif, Peter Palensky

Algorithms, Vol. 7, Pages 295-327: Solving the Examination Timetabling Problem in GPUs
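For the convex case described above, economic dispatch can also be solved directly by classic lambda iteration on the equal-incremental-cost condition, a useful baseline against which metaheuristics are judged; this two-generator sketch uses invented quadratic cost coefficients, not the paper's test systems:

```python
def economic_dispatch(cost_coeffs, demand, limits, tol=1e-6):
    """Lambda iteration for convex dispatch with quadratic costs
    C_i(P) = a_i + b_i*P + c_i*P^2 (c_i > 0 assumed): bisect on the
    incremental cost lambda until the clamped outputs meet demand."""
    def outputs(lam):
        ps = []
        for (a, b, c), (lo_p, hi_p) in zip(cost_coeffs, limits):
            p = (lam - b) / (2 * c)  # dC/dP = b + 2cP = lambda
            ps.append(min(max(p, lo_p), hi_p))
        return ps
    lo, hi = 0.0, 1000.0  # bracketing range for lambda (assumed wide enough)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(outputs(lam)) < demand:
            lo = lam
        else:
            hi = lam
    return outputs((lo + hi) / 2)
```

The bat algorithm and its relatives earn their keep on the non-convex case (valve-point effects, prohibited zones), where this equal-incremental-cost argument no longer applies.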
http://www.mdpi.com/1999-4893/7/3/295
The examination timetabling problem belongs to the class of combinatorial optimization problems and is of great importance for every university. In this paper, a hybrid evolutionary algorithm running on a GPU is employed to solve the examination timetabling problem. The proposed hybrid evolutionary algorithm has a genetic algorithm component and a greedy steepest descent component. The GPU's computational capabilities allow the use of very large population sizes, leading to a more thorough exploration of the problem solution space. Depending on the size of the problem, the GPU implementation is up to twenty-six times faster than the identical single-threaded CPU implementation of the algorithm. The algorithm is evaluated with the well-known Toronto datasets and compares well with the best results found in the literature. Moreover, the choice of chromosome encoding and the tournament selection size as the population grows are examined and optimized. The compressed sparse row format is used for the conflict matrix and proved essential to the process, since most of the datasets have a small conflict density, which translates into an extremely sparse matrix.
Algorithms 2014, 7(3), 295-327; doi: 10.3390/a7030295; published 2014-07-03. Authors: Vasileios Kolonias, George Goulas, Christos Gogos, Panayiotis Alefragis, Efthymios Housos

Algorithms, Vol. 7, Pages 276-294: Group Sparse Reconstruction of Multi-Dimensional Spectroscopic Imaging in Human Brain in vivo
http://www.mdpi.com/1999-4893/7/3/276
Four-dimensional (4D) Magnetic Resonance Spectroscopic Imaging (MRSI) data combining 2 spatial and 2 spectral dimensions provide valuable biochemical information in vivo; however, the 20-40 min acquisition time is too long for a clinical protocol. Data acquisition can be accelerated by non-uniformly under-sampling (NUS) the ky−t1 plane, but this causes artifacts in the spatial-spectral domain that must be removed by non-linear, iterative reconstruction. Previous work has demonstrated the feasibility of accelerating 4D MRSI data acquisition through NUS and iterative reconstruction using Compressed Sensing (CS), Total Variation (TV), and Maximum Entropy (MaxEnt) reconstruction. Group Sparse (GS) reconstruction is a variant of CS that exploits the structural sparsity of transform coefficients to achieve higher acceleration factors than traditional CS. In this article, we derive a solution to the GS reconstruction problem within the Split Bregman iterative framework that uses arbitrary transform grouping patterns of overlapping or non-overlapping groups. The 4D Echo-Planar Correlated Spectroscopic Imaging (EP-COSI) gray matter brain phantom and in vivo brain data are retrospectively under-sampled 2×, 4×, 6×, 8×, and 10× and reconstructed using CS, TV, MaxEnt, and GS with overlapping or non-overlapping groups. Results show that GS reconstruction with overlapping groups outperformed the other reconstruction methods at each NUS rate for both phantom and in vivo data. These results could potentially reduce the scan time of a 4D EP-COSI brain scan from 40 min to under 5 min in vivo.
Algorithms 2014, 7(3), 276-294; doi: 10.3390/a7030276; published 2014-06-26. Authors: Brian Burns, Neil Wilson, M. Thomas

Algorithms, Vol. 7, Pages 253-275: A Faster Quick Search Algorithm
http://www.mdpi.com/1999-4893/7/2/253
We present the FQS (faster quick search) algorithm, an improved variation of the quick search algorithm. The quick search (QS) exact pattern matching algorithm and its variants are among the fastest practical matching algorithms today. The FQS algorithm computes a statistically expected shift value, which allows maximal shifts and a smaller number of comparisons between the pattern and the text. Compared to the state-of-the-art QS variants of exact pattern matching algorithms, the proposed FQS algorithm is the fastest when |Σ| ≤ 128, where |Σ| is the alphabet size, and has a competitive running time when |Σ| > 128. Running on three practical text files, E. coli (|Σ| = 4), Bible (|Σ| = 63) and World192 (|Σ| = 94), FQS achieved the best performance in practice. Our FQS algorithm will have important applications in the domain of genomic database searching, involving DNA or RNA sequence databases with the four symbols Σ = {A, C, G, T(/U)} or protein databases with |Σ| = 20.
Algorithms 2014, 7(2), 253-275; doi: 10.3390/a7020253; published 2014-06-23. Authors: Jie Lin, Donald Adjeroh, Yue Jiang

Algorithms, Vol. 7, Pages 243-252: Efficient Algorithms for Subgraph Listing
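The baseline quick search algorithm that FQS improves upon (Sunday's bad-character rule; the FQS contribution of a statistically expected shift is not reproduced here) can be sketched as:

```python
def quick_search(text, pattern):
    """Sunday's Quick Search: after each window comparison, shift by the
    bad-character rule applied to the text character just PAST the window,
    which permits shifts of up to m + 1."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # shift[c] = m - (last index of c in pattern); characters absent
    # from the pattern get the maximal shift m + 1
    shift = {c: m - i for i, c in enumerate(pattern)}
    occurrences, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            occurrences.append(i)
        if i + m >= n:
            break
        i += shift.get(text[i + m], m + 1)
    return occurrences
```

On a small alphabet such as DNA, almost every text character occurs in the pattern, so these shifts shrink, which is exactly the regime where a better expected shift pays off.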
http://www.mdpi.com/1999-4893/7/2/243
Subgraph isomorphism is a fundamental problem in graph theory. In this paper we focus on listing subgraphs isomorphic to a given pattern graph. First, we look at the algorithm due to Chiba and Nishizeki for listing complete subgraphs of fixed size, and show that it cannot be extended to general subgraphs of fixed size. Then, we consider the algorithm due to Gąsieniec et al. for finding multiple witnesses of a Boolean matrix product, and use it to design a new output-sensitive algorithm for listing all triangles in a graph. As a corollary, we obtain an output-sensitive algorithm for listing subgraphs and induced subgraphs isomorphic to an arbitrary fixed pattern graph.Algorithms2014-05-1572Article10.3390/a70202432432521999-48932014-05-15doi: 10.3390/a7020243Niklas ZechnerAndrzej Lingas<![CDATA[Algorithms, Vol. 7, Pages 229-242: Application of Imperialist Competitive Algorithm on Solving the Traveling Salesman Problem]]>
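The paper's triangle-listing algorithm is output-sensitive and relies on multiple witnesses of a Boolean matrix product; as a point of reference, a naive baseline that lists every triangle exactly once can be sketched as follows (the function name and adjacency-set representation are assumptions, not the authors' method):

```python
def list_triangles(adj):
    """List each triangle {u, v, w} exactly once by enumerating
    vertex triples u < v < w whose three edges are all present.
    adj: dict mapping vertex -> set of neighbours (undirected graph)."""
    triangles = []
    for u in sorted(adj):
        for v in sorted(adj[u]):
            if v <= u:
                continue
            # any common neighbour of u and v larger than v closes a triangle
            for w in sorted(adj[u] & adj[v]):
                if w > v:
                    triangles.append((u, v, w))
    return triangles
```

Unlike this baseline, whose cost is governed by the input size, an output-sensitive algorithm's running time scales with the number of triangles actually reported.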
http://www.mdpi.com/1999-4893/7/2/229
The imperialist competitive algorithm (ICA) is a new heuristic algorithm proposed for continuous optimization problems. Research on its application to the traveling salesman problem (TSP) is still very limited. To explore its ability to solve the TSP, we present a discrete imperialist competitive algorithm in this paper. The proposed algorithm modifies the original assimilation rules and introduces the 2-opt algorithm into the revolution process. To examine its performance, we tested the proposed algorithm on 10 small-scale and 2 large-scale standard benchmark instances from the TSPLIB and compared the experimental results with those obtained by two other ICA-based algorithms and six other existing algorithms. The proposed algorithm shows excellent performance in the experiments and comparisons.Algorithms2014-05-1372Article10.3390/a70202292292421999-48932014-05-13doi: 10.3390/a7020229Shuhui XuYong WangAiqin Huang<![CDATA[Algorithms, Vol. 7, Pages 206-228: Stochastic Diffusion Search: A Comparison of Swarm Intelligence Parameter Estimation Algorithms with RANSAC]]>
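The 2-opt move named above removes two tour edges and reconnects the tour by reversing the segment between them. A minimal sketch of plain 2-opt local search, assuming a symmetric distance matrix (this is only the local-improvement step, not the full discrete ICA, whose details the abstract does not give):

```python
def tour_length(tour, dist):
    """Total length of a closed tour under distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse a tour segment whenever doing so shortens the tour."""
    tour = tour[:]
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # reversing the entire tour changes nothing
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # replace edges (a,b) and (c,d) by (a,c) and (b,d) if shorter
                if dist[a][b] + dist[c][d] > dist[a][c] + dist[b][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

On a unit square with a self-crossing tour, one 2-opt move uncrosses the edges and recovers the optimal perimeter tour.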
http://www.mdpi.com/1999-4893/7/2/206
Stochastic diffusion search (SDS) is a multi-agent global optimisation technique based on the behaviour of ants, rooted in the partial evaluation of an objective function and direct communication between agents. Standard SDS, the fundamental algorithm at work in all SDS processes, is presented here. Parameter estimation is the task of suitably fitting a model to given data; some form of parameter estimation is a key element of many computer vision processes. Here, the task of hyperplane estimation in many dimensions is investigated. Following RANSAC (random sample consensus), a widely used optimisation technique and a standard technique for many parameter estimation problems, increasingly sophisticated data-driven forms of SDS are developed. The performance of these SDS algorithms and RANSAC is analysed and compared for a hyperplane estimation task. SDS is shown to perform similarly to RANSAC, with potential for tuning to particular search problems for improved results.Algorithms2014-05-0572Article10.3390/a70202062062281999-48932014-05-05doi: 10.3390/a7020206Howard WilliamsMark Bishop<![CDATA[Algorithms, Vol. 7, Pages 203-205: Editorial: Special Issue on Matching under Preferences]]>
http://www.mdpi.com/1999-4893/7/2/203
This special issue of Algorithms is devoted to the study of matching problems involving ordinal preferences from the standpoint of algorithms and complexity.Algorithms2014-04-0872Editorial10.3390/a70202032032051999-48932014-04-08doi: 10.3390/a7020203Péter BiróDavid Manlove<![CDATA[Algorithms, Vol. 7, Pages 189-202: Faster and Simpler Approximation of Stable Matchings]]>
http://www.mdpi.com/1999-4893/7/2/189
We give a 3/2-approximation algorithm for finding stable matchings that runs in O(m) time. The best previously known algorithm, by McDermid, has the same approximation ratio but runs in O(n^(3/2) m) time, where n denotes the number of people and m is the total length of the preference lists in a given instance. In addition, our algorithm and its analysis are much simpler. We also give an extension of the algorithm for computing stable many-to-many matchings.Algorithms2014-04-0472Article10.3390/a70201891892021999-48932014-04-04doi: 10.3390/a7020189Katarzyna Paluch<![CDATA[Algorithms, Vol. 7, Pages 188: Correction: Pareto Optimization or Cascaded Weighted Sum: A Comparison of Concepts. Algorithms 2014, 7, 166–185]]>
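For context, the approximation results above concern instances with ties and incomplete lists; in the classical setting with complete strict lists, the textbook Gale-Shapley algorithm finds an exactly stable matching. A sketch (this is the classical algorithm, not Paluch's 3/2-approximation):

```python
def gale_shapley(men_prefs, women_prefs):
    """Classical Gale-Shapley: men propose in preference order, each woman
    keeps the best proposal seen so far. Returns a stable matching
    as a dict man -> woman. Preferences are complete strict lists."""
    free = list(men_prefs)                  # men not yet engaged
    next_choice = {m: 0 for m in men_prefs} # index of m's next proposal
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    fiance = {}                             # woman -> current man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in fiance:
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])          # w trades up; her old partner is free again
            fiance[w] = m
        else:
            free.append(m)                  # w rejects m
    return {m: w for w, m in fiance.items()}
```

Each man proposes to each woman at most once, so the algorithm runs in O(m) proposals over the total preference-list length m.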
http://www.mdpi.com/1999-4893/7/2/188
It has come to our attention that due to an error in producing the PDF version of the paper [1], doi:10.3390/a7010166, website: http://www.mdpi.com/1999-4893/7/1/166, Figures 1 and 9 are displayed incorrectly. [...]Algorithms2014-04-0272Correction10.3390/a70201881881881999-48932014-04-02doi: 10.3390/a7020188Kazuo Iwama<![CDATA[Algorithms, Vol. 7, Pages 186-187: Editorial: Special Issue on Algorithms for Sequence Analysis and Storage]]>
http://www.mdpi.com/1999-4893/7/1/186
This special issue of Algorithms is dedicated to approaches to biological sequence analysis that have algorithmic novelty and potential for fundamental impact in methods used for genome research.Algorithms2014-03-2571Editorial10.3390/a70101861861871999-48932014-03-25doi: 10.3390/a7010186Veli Mäkinen<![CDATA[Algorithms, Vol. 7, Pages 166-185: Pareto Optimization or Cascaded Weighted Sum: A Comparison of Concepts]]>
http://www.mdpi.com/1999-4893/7/1/166
Looking at articles or conference papers published since the turn of the century, Pareto optimization is the dominating assessment method for multi-objective nonlinear optimization problems. However, is it always the method of choice for real-world applications, where either more than four objectives have to be considered, or the same type of task is repeated again and again with only minor modifications, in an automated optimization or planning process? This paper presents a classification of application scenarios and compares the Pareto approach with an extended version of the weighted sum, called cascaded weighted sum, for the different scenarios. Its range of application within the field of multi-objective optimization is discussed as well as its strengths and weaknesses.Algorithms2014-03-2171Article10.3390/a70101661661851999-48932014-03-21doi: 10.3390/a7010166Wilfried JakobChristian Blume<![CDATA[Algorithms, Vol. 7, Pages 145-165: The Minimum Scheduling Time for Convergecast in Wireless Sensor Networks]]>
http://www.mdpi.com/1999-4893/7/1/145
We study the scheduling problem for data collection from sensor nodes to the sink node in wireless sensor networks, also referred to as the convergecast problem. The convergecast problem in general network topology has been proven to be NP-hard. In this paper, we propose our heuristic algorithm (finding the minimum scheduling time for convergecast (FMSTC)) for general network topology and evaluate the performance by simulation. The results of the simulation showed that the number of time slots to reach the sink node decreased with an increase in the power. We compared the performance of the proposed algorithm to the optimal time slots in a linear network topology. The proposed algorithm for convergecast in a general network topology has 2.27 times more time slots than that of a linear network topology. To the best of our knowledge, the proposed method is the first attempt to apply the optimal algorithm in a linear network topology to a general network topology.Algorithms2014-03-1771Article10.3390/a70101451451651999-48932014-03-17doi: 10.3390/a7010145Changyong JungSuk LeeVijay Bhuse<![CDATA[Algorithms, Vol. 7, Pages 62-144: Modeling Dynamic Programming Problems over Sequences and Trees with Inverse Coupled Rewrite Systems]]>
http://www.mdpi.com/1999-4893/7/1/62
Dynamic programming is a classical algorithmic paradigm, which often allows the evaluation of a search space of exponential size in polynomial time. Recursive problem decomposition, tabulation of intermediate results for re-use, and Bellman’s Principle of Optimality are its well-understood ingredients. However, algorithms often lack abstraction and are difficult to implement, tedious to debug, and delicate to modify. The present article proposes a generic framework for specifying dynamic programming problems. This framework can handle all kinds of sequential inputs, as well as tree-structured data. Biosequence analysis, document processing, molecular structure analysis, comparison of objects assembled in a hierarchic fashion and, generally, all domains where strings and ordered, rooted trees serve as natural data representations come under consideration. The new approach introduces inverse coupled rewrite systems. They describe the solutions of combinatorial optimization problems as the inverse image of a term rewrite relation that reduces problem solutions to problem inputs. This specification leads to concise yet translucent specifications of dynamic programming algorithms. Their actual implementation may be challenging, but eventually, as we hope, it can be produced automatically. The present article demonstrates the scope of this new approach by describing a diverse set of dynamic programming problems which arise in the domain of computational biology, with examples in biosequence and molecular structure analysis.Algorithms2014-03-0771Article10.3390/a7010062621441999-48932014-03-07doi: 10.3390/a7010062Robert GiegerichHélène Touzet<![CDATA[Algorithms, Vol. 7, Pages 60-61: Acknowledgement to Reviewers of Algorithms in 2013]]>
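As a minimal concrete instance of the ingredients named above (recursive decomposition, tabulation, Bellman's principle) on sequential inputs, edit distance between two strings can be sketched as follows. This is a textbook example for orientation, not the rewrite-system framework itself:

```python
def edit_distance(s, t):
    """Tabulated recurrence: d[i][j] is the minimum number of insertions,
    deletions and substitutions turning s[:i] into t[:j]."""
    d = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        d[i][0] = i          # delete all of s[:i]
    for j in range(len(t) + 1):
        d[0][j] = j          # insert all of t[:j]
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            # Bellman's principle: the best edit script for (i, j) extends
            # an optimal script for one of the three smaller subproblems
            substitute = d[i - 1][j - 1] + (s[i - 1] != t[j - 1])
            d[i][j] = min(d[i - 1][j] + 1,   # delete s[i-1]
                          d[i][j - 1] + 1,   # insert t[j-1]
                          substitute)
    return d[len(s)][len(t)]
```

The table explores an exponential space of edit scripts in O(|s|·|t|) time, which is exactly the re-use of intermediate results the paragraph describes.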
http://www.mdpi.com/1999-4893/7/1/60
The editors of Algorithms would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2013. [...]Algorithms2014-02-2571Editorial10.3390/a701006060611999-48932014-02-25doi: 10.3390/a7010060 Algorithms Editorial Office<![CDATA[Algorithms, Vol. 7, Pages 32-59: Choice Function-Based Two-Sided Markets: Stability, Lattice Property, Path Independence and Algorithms]]>
http://www.mdpi.com/1999-4893/7/1/32
We build an abstract model, closely related to the stable marriage problem and motivated by Hungarian college admissions. We study different stability notions and show that an extension of the lattice property of stable marriages holds in these more general settings, even if the choice function on one side is not path independent. We lean on Tarski’s fixed point theorem and the substitutability property of choice functions. The main virtue of the work is that it exhibits practical, interesting examples, where non-path independent choice functions play a role, and proves various stability-related results.Algorithms2014-02-1471Article10.3390/a701003232591999-48932014-02-14doi: 10.3390/a7010032Tamás FleinerZsuzsanna Jankó<![CDATA[Algorithms, Vol. 7, Pages 15-31: Bio-Inspired Meta-Heuristics for Emergency Transportation Problems]]>
http://www.mdpi.com/1999-4893/7/1/15
Emergency transportation plays a vital role in the success of disaster rescue and relief operations, but its planning and scheduling often involve complex objectives and search spaces. In this paper, we conduct a survey of recent advances in bio-inspired meta-heuristics, including genetic algorithms (GA), particle swarm optimization (PSO), ant colony optimization (ACO), etc., for solving emergency transportation problems. We then propose a new hybrid biogeography-based optimization (BBO) algorithm, which outperforms some state-of-the-art heuristics on a typical transportation planning problem.Algorithms2014-02-1171Article10.3390/a701001515311999-48932014-02-11doi: 10.3390/a7010015Min-Xia ZhangBei ZhangYu-Jun Zheng<![CDATA[Algorithms, Vol. 7, Pages 1-14: On Stable Matchings and Flows]]>
http://www.mdpi.com/1999-4893/7/1/1
We describe a flow model related to ordinary network flows the same way as stable matchings are related to maximum matchings in bipartite graphs. We prove that there always exists a stable flow and generalize the lattice structure of stable marriages to stable flows. Our main tool is a straightforward reduction of the stable flow problem to stable allocations. For the sake of completeness, we prove the results we need on stable allocations as an application of Tarski’s fixed point theorem.Algorithms2014-01-2271Article10.3390/a70100011141999-48932014-01-22doi: 10.3390/a7010001Tamás Fleiner<![CDATA[Algorithms, Vol. 6, Pages 871-882: Sparse Signal Recovery from Fixed Low-Rank Subspace via Compressive Measurement]]>
http://www.mdpi.com/1999-4893/6/4/871
This paper designs and evaluates a variant of CoSaMP algorithm, for recovering the sparse signal s from the compressive measurement v = A(Uw+s) given a fixed low-rank subspace spanned by U. Instead of firstly recovering the full vector then separating the sparse part from the structured dense part, the proposed algorithm directly works on the compressive measurement to do the separation. We investigate the performance of the algorithm on both simulated data and video compressive sensing. The results show that for a fixed low-rank subspace and truly sparse signal the proposed algorithm could successfully recover the signal only from a few compressive sensing (CS) measurements, and it performs better than ordinary CoSaMP when the sparse signal is corrupted by additional Gaussian noise.Algorithms2013-12-1764Article10.3390/a60408718718821999-48932013-12-17doi: 10.3390/a6040871Jun HeMing-Wei GaoLei ZhangHao Wu<![CDATA[Algorithms, Vol. 6, Pages 857-870: Solving Matrix Equations on Multi-Core and Many-Core Architectures]]>
http://www.mdpi.com/1999-4893/6/4/857
We address the numerical solution of Lyapunov, algebraic and differential Riccati equations, via the matrix sign function, on platforms equipped with general-purpose multicore processors and, optionally, one or more graphics processing units (GPUs). In particular, we review the solvers for these equations, as well as the underlying methods, analyze their concurrency and scalability and provide details on their parallel implementation. Our experimental results show that this class of hardware provides sufficient computational power to tackle large-scale problems, which only a few years ago would have required a cluster of computers.Algorithms2013-11-2564Article10.3390/a60408578578701999-48932013-11-25doi: 10.3390/a6040857Peter BennerPablo EzzattiHermann MenaEnrique Quintana-OrtíAlfredo Remón<![CDATA[Algorithms, Vol. 6, Pages 824-856: Overlays with Preferences: Distributed, Adaptive Approximation Algorithms for Matching with Preference Lists]]>
http://www.mdpi.com/1999-4893/6/4/824
A key property of overlay networks is the overlay nodes’ ability to establish connections (or be matched) to other nodes by preference, based on some suitability metric related to, e.g., the node’s distance, interests, recommendations, transaction history or available resources. When there are no preference cycles among the nodes, a stable matching exists in which nodes have maximized individual satisfaction, due to their choices, however no such guarantees are currently being given in the generic case. In this work, we employ the notion of node satisfaction to suggest a novel modeling for matching problems, suitable for overlay networks. We start by presenting a simple, yet powerful, distributed algorithm that solves the many-to-many matching problem with preferences. It achieves that by using local information and aggregate satisfaction as an optimization metric, while providing a guaranteed convergence and approximation ratio. Subsequently, we show how to extend the algorithm in order to support and adapt to changes in the nodes’ connectivity and preferences. In addition, we provide a detailed experimental study that focuses on the levels of achieved satisfaction, as well as convergence and reconvergence speed.Algorithms2013-11-1964Article10.3390/a60408248248561999-48932013-11-19doi: 10.3390/a6040824Giorgos GeorgiadisMarina Papatriantafilou<![CDATA[Algorithms, Vol. 6, Pages 805-823: PMS6MC: A Multicore Algorithm for Motif Discovery]]>
http://www.mdpi.com/1999-4893/6/4/805
We develop an efficient multicore algorithm, PMS6MC, for the (l, d)-motif discovery problem in which we are to find all strings of length l that appear in every string of a given set of strings with at most d mismatches. PMS6MC is based on PMS6, which is currently the fastest single-core algorithm for motif discovery in large instances. The speedup, relative to PMS6, attained by our multicore algorithm ranges from a high of 6.62 for the (17,6) challenging instances to a low of 2.75 for the (13,4) challenging instances on an Intel 6-core system. We estimate that PMS6MC is 2 to 4 times faster than other parallel algorithms for motif search on large instances.Algorithms2013-11-1864Article10.3390/a60408058058231999-48932013-11-18doi: 10.3390/a6040805Shibdas BandyopadhyaySartaj SahniSanguthevar Rajasekaran<![CDATA[Algorithms, Vol. 6, Pages 782-804: Stability, Optimality and Manipulation in Matching Problems with Weighted Preferences]]>
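The (l, d) condition defined above can be verified by brute force; a sketch of such a verifier (illustrative helper names; this is only the check, not PMS6MC's search, which is far more sophisticated):

```python
def hamming(a, b):
    """Number of mismatching positions between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def occurs_with_mismatches(motif, s, d):
    """True if some length-|motif| window of s is within Hamming distance d."""
    l = len(motif)
    return any(hamming(motif, s[i:i + l]) <= d for i in range(len(s) - l + 1))

def is_ld_motif(motif, strings, d):
    """The (l, d)-motif condition: the candidate occurs in every input
    string with at most d mismatches."""
    return all(occurs_with_mismatches(motif, s, d) for s in strings)
```

The search problem is hard because the candidate space grows as |Σ|^l; checking one candidate, as above, is cheap, and algorithms such as PMS6 work by pruning that candidate space.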
http://www.mdpi.com/1999-4893/6/4/782
The stable matching problem (also known as the stable marriage problem) is a well-known problem of matching men to women, so that no man and woman, who are not married to each other, both prefer each other. Such a problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools or, more generally, to any two-sided market. In the classical stable marriage problem, both men and women express a strict preference order over the members of the other sex, in a qualitative way. Here, we consider stable marriage problems with weighted preferences: each man (resp., woman) provides a score for each woman (resp., man). Such problems are more expressive than the classical stable marriage problems. Moreover, in some real-life situations, it is more natural to express scores (to model, for example, profits or costs) rather than a qualitative preference ordering. In this context, we define new notions of stability and optimality, and we provide algorithms to find marriages that are stable and/or optimal according to these notions. While expressivity greatly increases by adopting weighted preferences, we show that, in most cases, the desired solutions can be found by adapting existing algorithms for the classical stable marriage problem. We also consider the manipulability properties of the procedures that return such stable marriages. While we know that all procedures are manipulable by modifying the preference lists or by truncating them, here, we consider if manipulation can occur also by just modifying the weights while preserving the ordering and avoiding truncation. It turns out that, by adding weights, in some cases, we may increase the possibility of manipulating, and this cannot be avoided by any reasonable restriction on the weights.Algorithms2013-11-1864Article10.3390/a60407827828041999-48932013-11-18doi: 10.3390/a6040782Maria PiniFrancesca RossiK. VenableToby Walsh<![CDATA[Algorithms, Vol. 
6, Pages 762-781: Very High Resolution Satellite Image Classification Using Fuzzy Rule-Based Systems]]>
http://www.mdpi.com/1999-4893/6/4/762
The aim of this research is to present a detailed step-by-step method for classification of very high resolution urban satellite images (VHRSI) into specific classes such as road, building, vegetation, etc., using fuzzy logic. In this study, object-based image analysis is used for image classification. The main problems in high resolution image classification are the uncertainties in the position of object borders in satellite images and also the multiplex resemblance of the segments to different classes. In order to solve this problem, fuzzy logic is used for image classification, since it provides the possibility of image analysis using multiple parameters without requiring inclusion of certain thresholds in the class assignment process. In this study, a comprehensive semi-automatic method for image classification is offered, which presents the configuration of the related fuzzy functions as well as fuzzy rules. The produced results are compared to the results of a normal classification using the same parameters, but with crisp rules. The overall accuracies and kappa coefficients of the presented method are higher than those of the comparison classification.Algorithms2013-11-1264Article10.3390/a60407627627811999-48932013-11-12doi: 10.3390/a6040762Shabnam JabariYun Zhang<![CDATA[Algorithms, Vol. 6, Pages 747-761: Multi-Core Parallel Gradual Pattern Mining Based on Multi-Precision Fuzzy Orderings]]>
http://www.mdpi.com/1999-4893/6/4/747
Gradual patterns aim at describing co-variations of data such as the higher the size, the higher the weight. In recent years, such patterns have been studied more and more from the data mining point of view. The extraction of such patterns relies on efficient and smart orderings that can be built among data, for instance, when ordering the data with respect to the size, then the data are also ordered with respect to the weight. However, in many application domains, it is hardly possible to consider that data values are crisply ordered. When considering gene expression, it is not biologically meaningful to say that Gene 1 is more expressed than Gene 2 if their levels of expression differ only in the tenth decimal place. We thus consider fuzzy orderings and fuzzy gamma rank correlation. In this paper, we address two major problems related to this framework: (i) the high memory consumption and (ii) the precision, representation and efficient storage of the fuzzy concordance degrees versus the loss or gain of computing power. For this purpose, we consider multi-precision matrices represented using sparse matrices coupled with parallel algorithms. Experimental results show the interest of our proposal.Algorithms2013-11-0164Article10.3390/a60407477477611999-48932013-11-01doi: 10.3390/a6040747Nicolas SicardYogi AryadinataFederico Del Razo LopezAnne LaurentPerfecto Flores<![CDATA[Algorithms, Vol. 6, Pages 726-746: An Efficient Local Search for the Feedback Vertex Set Problem]]>
http://www.mdpi.com/1999-4893/6/4/726
Motivated by many deadlock detection applications, a feedback vertex set is defined as a set of vertices in an undirected graph whose removal results in a graph without cycles. The Feedback Vertex Set Problem, known to be NP-complete, is to search for a feedback vertex set with minimal cardinality, to aid deadlock recovery. To address the issue, this paper presents NewkLS FVS (LS: local search; FVS: feedback vertex set), a variable depth-based local search algorithm with a randomized scheme to optimize the efficiency and performance. Experimental simulations are conducted to compare the algorithm with recent metaheuristics, and the computational results show that the proposed algorithm can outperform the other state-of-the-art algorithms and generate satisfactory solutions for most DIMACS benchmarks.Algorithms2013-11-0164Article10.3390/a60407267267461999-48932013-11-01doi: 10.3390/a6040726Zhiqiang ZhangAnsheng YeXiaoqing ZhouZehui Shao<![CDATA[Algorithms, Vol. 6, Pages 702-725: New Parallel Sparse Direct Solvers for Multicore Architectures]]>
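The defining property, that deleting the chosen vertices leaves the graph acyclic, is easy to verify even though finding a minimum set is NP-complete. A sketch of the check using union-find (the function name and edge-list representation are assumptions):

```python
def is_feedback_vertex_set(n, edges, fvs):
    """True if deleting the vertices in fvs leaves the undirected graph
    on vertices 0..n-1 acyclic. Acyclicity is checked with union-find:
    an edge whose endpoints are already in one component closes a cycle."""
    fvs = set(fvs)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in fvs or v in fvs:
            continue  # this edge disappears with the deleted vertex
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # cycle survives the deletion
        parent[ru] = rv
    return True
```

A local search such as the one described above explores candidate sets and uses exactly this kind of feasibility test while trying to shrink the set's cardinality.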
http://www.mdpi.com/1999-4893/6/4/702
At the heart of many computations in science and engineering lies the need to efficiently and accurately solve large sparse linear systems of equations. Direct methods are frequently the method of choice because of their robustness, accuracy and potential for use as black-box solvers. In the last few years, there have been many new developments, and a number of new modern parallel general-purpose sparse solvers have been written for inclusion within the HSL mathematical software library. In this paper, we introduce and briefly review these solvers for symmetric sparse systems. We describe the algorithms used, highlight key features (including bit-compatibility and out-of-core working) and then, using problems arising from a range of practical applications, we illustrate and compare their performances. We demonstrate that modern direct solvers are able to accurately solve systems of order 10^6 in less than 3 minutes on a 16-core machine.Algorithms2013-11-0164Article10.3390/a60407027027251999-48932013-11-01doi: 10.3390/a6040702Jonathan HoggJennifer Scott<![CDATA[Algorithms, Vol. 6, Pages 678-701: Pattern-Guided k-Anonymity]]>
http://www.mdpi.com/1999-4893/6/4/678
We suggest a user-oriented approach to combinatorial data anonymization. A data matrix is called k-anonymous if every row appears at least k times—the goal of the NP-hard k-ANONYMITY problem then is to make a given matrix k-anonymous by suppressing (blanking out) as few entries as possible. Building on previous work and coping with corresponding deficiencies, we describe an enhanced k-anonymization problem called PATTERN-GUIDED k-ANONYMITY, where the users specify in which combinations suppressions may occur. In this way, the user of the anonymized data can express the differing importance of various data features. We show that PATTERN-GUIDED k-ANONYMITY is NP-hard. We complement this by a fixed-parameter tractability result based on a “data-driven parameterization” and, based on this, develop an exact integer linear program (ILP)-based solution method, as well as a simple, but very effective, greedy heuristic. Experiments on several real-world datasets show that our heuristic easily matches up to the established “Mondrian” algorithm for k-ANONYMITY in terms of the quality of the anonymization and outperforms it in terms of running time.Algorithms2013-10-1764Article10.3390/a60406786787011999-48932013-10-17doi: 10.3390/a6040678Robert BredereckAndré NichterleinRolf Niedermeier<![CDATA[Algorithms, Vol. 6, Pages 636-677: Sublinear Time Motif Discovery from Multiple Sequences]]>
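The k-anonymity condition and entry suppression described above can be sketched in a few lines (illustrative helper names; this is only the definition, not the paper's pattern-guided ILP or greedy heuristic):

```python
from collections import Counter

def is_k_anonymous(rows, k):
    """A data matrix, given as a list of row tuples, is k-anonymous
    if every row value occurs at least k times."""
    counts = Counter(rows)
    return all(c >= k for c in counts.values())

def suppress(rows, positions):
    """Blank out the given (row, column) entries with '*' (suppression)."""
    rows = [list(r) for r in rows]
    for i, j in positions:
        rows[i][j] = '*'
    return [tuple(r) for r in rows]
```

The optimization problem is to pick as few suppression positions as possible so that the suppressed matrix passes `is_k_anonymous`; pattern-guided k-anonymity additionally restricts which combinations of columns may be suppressed together.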
http://www.mdpi.com/1999-4893/6/4/636
In this paper, a natural probabilistic model for motif discovery has been used to experimentally test the quality of motif discovery programs. In this model, there are k background sequences, and each character in a background sequence is a random character from an alphabet, Σ. A motif G = g1g2 ... gm is a string of m characters. In each background sequence is implanted a probabilistically-generated approximate copy of G. For a probabilistically-generated approximate copy b1b2 ... bm of G, every character, bi, is probabilistically generated, such that the probability for bi ≠ gi is at most α. We develop two new randomized algorithms and one new deterministic algorithm. They make advancements in the following aspects: (1) The algorithms are much faster than those before. Our algorithms can even run in sublinear time. (2) They can handle any motif pattern. (3) The restriction for the alphabet size is a lower bound of four. This gives them potential applications in practical problems, since gene sequences have an alphabet size of four. (4) All algorithms have rigorous proofs about their performances. The methods developed in this paper have been used in the software implementation. We observed some encouraging results that show improved performance for motif detection compared with other software.Algorithms2013-10-1464Article10.3390/a60406366366771999-48932013-10-14doi: 10.3390/a6040636Bin FuYunhui FuYuan Xue<![CDATA[Algorithms, Vol. 6, Pages 618-635: Multi-Threading a State-of-the-Art Maximum Clique Algorithm]]>
http://www.mdpi.com/1999-4893/6/4/618
We present a threaded parallel adaptation of a state-of-the-art maximum clique algorithm for dense, computationally challenging graphs. We show that near-linear speedups are achievable in practice and that superlinear speedups are common. We include results for several previously unsolved benchmark problems.Algorithms2013-10-0364Article10.3390/a60406186186351999-48932013-10-03doi: 10.3390/a6040618Ciaran McCreeshPatrick Prosser<![CDATA[Algorithms, Vol. 6, Pages 591-617: Local Search Approaches in Stable Matching Problems]]>
http://www.mdpi.com/1999-4893/6/4/591
The stable marriage (SM) problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools or, more generally, to any two-sided market. In the classical formulation, n men and n women express their preferences (via a strict total order) over the members of the other sex. Solving an SM problem means finding a stable marriage where stability is an envy-free notion: no man and woman who are not married to each other would both prefer each other to their partners or to being single. We consider both the classical stable marriage problem and one of its useful variations (denoted SMTI (Stable Marriage with Ties and Incomplete lists)) where the men and women express their preferences in the form of an incomplete preference list with ties over a subset of the members of the other sex. Matchings are permitted only with people who appear in these preference lists, and we try to find a stable matching that marries as many people as possible. Whilst the SM problem is polynomial to solve, the SMTI problem is NP-hard. We propose to tackle both problems via a local search approach, which exploits properties of the problems to reduce the size of the neighborhood and to make local moves efficiently. We empirically evaluate our algorithm for SM problems by measuring its runtime behavior and its ability to sample the lattice of all possible stable marriages. We evaluate our algorithm for SMTI problems in terms of both its runtime behavior and its ability to find a maximum cardinality stable marriage. Experimental results suggest that for SM problems, the number of steps of our algorithm grows only as O(n log(n)), and that it samples very well the set of all stable marriages. It is thus a fair and efficient approach to generate stable marriages. 
Furthermore, our approach for SMTI problems is able to solve large problems, quickly returning stable matchings of large and often optimal size, despite the NP-hardness of this problem.Algorithms2013-10-0364Article10.3390/a60405915916171999-48932013-10-03doi: 10.3390/a6040591Mirco GelainMaria PiniFrancesca RossiK. VenableToby Walsh<![CDATA[Algorithms, Vol. 6, Pages 565-590: An Emergent Approach to Text Analysis Based on a Connectionist Model and the Web]]>
http://www.mdpi.com/1999-4893/6/3/565
In this paper, we present a method to provide proactive assistance in text checking, based on usage relationships between words structuralized on the Web. For a given sentence, the method builds a connectionist structure of relationships between word n-grams. Such structure is then parameterized by means of an unsupervised and language-agnostic optimization process. Finally, the method provides a representation of the sentence that allows the least prominent usage-based relational patterns to emerge, helping to easily find badly written and unpopular text. The study includes the problem statement and its characterization in the literature, as well as the proposed solving approach and some experimental uses.Algorithms2013-09-1763Article10.3390/a60305655655901999-48932013-09-17doi: 10.3390/a6030565Mario CiminoGigliola Vaglini<![CDATA[Algorithms, Vol. 6, Pages 546-564: Quantitative Trait Loci Mapping Problem: An Extinction-Based Multi-Objective Evolutionary Algorithm Approach]]>
http://www.mdpi.com/1999-4893/6/3/546
The Quantitative Trait Loci (QTL) mapping problem aims to identify regions in the genome that are linked to phenotypic features of the developed organism that vary in degree. It is a principle step in determining targets for further genetic analysis and is key in decoding the role of specific genes that control quantitative traits within species. Applications include identifying genetic causes of disease, optimization of cross-breeding for desired traits and understanding trait diversity in populations. In this paper a new multi-objective evolutionary algorithm (MOEA) method is introduced and is shown to increase the accuracy of QTL mapping identification for both independent and epistatic loci interactions. The MOEA method optimizes over the space of possible partial least squares (PLS) regression QTL models and considers the conflicting objectives of model simplicity versus model accuracy. By optimizing for minimal model complexity, MOEA has the advantage of solving the over-fitting problem of conventional PLS models. The effectiveness of the method is confirmed by comparing the new method with Bayesian Interval Mapping approaches over a series of test cases where the optimal solutions are known. This approach can be applied to many problems that arise in analysis of genomic data sets where the number of features far exceeds the number of observations and where features can be highly correlated.Algorithms2013-09-0263Article10.3390/a60305465465641999-48932013-09-02doi: 10.3390/a6030546Ahmadreza GhaffarizadehMehdi EftekhariAli EsmailizadehNicholas Flann<![CDATA[Algorithms, Vol. 6, Pages 532-545: Stable Flows over Time]]>
http://www.mdpi.com/1999-4893/6/3/532
In this paper, the notion of stability is extended to network flows over time. As a useful device in our proofs, we present an elegant preflow-push variant of the Gale-Shapley algorithm that operates directly on the given network and computes stable flows in pseudo-polynomial time, both in the static flow and the flow over time case. We show periodical properties of stable flows over time on networks with an infinite time horizon. Finally, we discuss the influence of storage at vertices, with different results depending on the priority of the corresponding holdover edges.Algorithms2013-08-2163Article10.3390/a60305325325451999-48932013-08-21doi: 10.3390/a6030532Ágnes CsehJannik MatuschkeMartin Skutella<![CDATA[Algorithms, Vol. 6, Pages 512-531: Extraction and Segmentation of Sputum Cells for Lung Cancer Early Diagnosis]]>
http://www.mdpi.com/1999-4893/6/3/512
Lung cancer is the leading cause of cancer deaths worldwide, with an overall 5-year survival rate of only 15%. Its symptoms typically appear only in advanced stages, where the chances for patients to survive are very low, making its mortality rate the highest among all types of cancer. The present work deals with the design of computer-aided detection or diagnosis (CAD) systems for early detection of lung cancer based on the analysis of sputum color images. The aim is to reduce the false negative rate and to increase the true positive rate as much as possible. The early detection of lung cancer from sputum images is a challenging problem, due to both the structure of the cancer cells and the staining method employed in the preparation of the sputum cells. We present here a framework for the extraction and segmentation of sputum cells in sputum images using, respectively, a threshold classifier, a Bayesian classification and mean shift segmentation. Our methods are validated and compared with other competitive techniques via a series of experiments conducted on a data set of 100 images. The extraction and segmentation results will be used as the basis of a CAD system for early detection of lung cancer, which will improve the chances of survival for the patient.Algorithms2013-08-2163Article10.3390/a60305125125311999-48932013-08-21doi: 10.3390/a6030512Fatma TaherNaoufel WerghiHussain Al-AhmadChristian Donner<![CDATA[Algorithms, Vol. 6, Pages 494-511: An Algorithm for Managing Aircraft Movement on an Airport Surface]]>
http://www.mdpi.com/1999-4893/6/3/494
The present paper focuses on the development of an algorithm for safely and optimally managing the routing of aircraft on an airport surface in future airport operations. This tool is intended to support air traffic controllers’ decision-making in selecting the paths of all aircraft and the engine startup approval time for departing ones. Optimal routes are sought for minimizing the time both arriving and departing aircraft spend on the airport surface with engines on, with benefits in terms of safety, efficiency and costs. The proposed algorithm first computes a standalone, shortest-path solution from runway to apron or vice versa, depending on whether the aircraft is inbound or outbound, respectively. To take into account the constraints due to other traffic on the airport surface, this solution is amended by a conflict detection and resolution task that attempts to reduce, and possibly nullify, the number of conflicts generated in the first phase. An example application on a simple Italian airport illustrates how the algorithm can be applied to real-world problems. Emphasis is given to how to model an airport surface as a weighted and directed graph with non-negative weights, as required for the input to the algorithm.Algorithms2013-08-1663Article10.3390/a60304944945111999-48932013-08-16doi: 10.3390/a6030494Urbano TancrediDomenico AccardoGiancarmine FasanoAlfredo RengaGiancarlo RufinoGiuseppe Maresca<![CDATA[Algorithms, Vol. 6, Pages 485-493: A Simple Algorithm for Solving for the Generalized Longest Common Subsequence (LCS) Problem with a Substring Exclusion Constraint]]>
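The abstract's requirement of a weighted, directed graph with non-negative edge weights is exactly the precondition for Dijkstra's algorithm, the standard choice for such a standalone shortest-path phase. A minimal sketch of that computation follows; the taxiway graph, node names and weights below are hypothetical, not taken from the paper:

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path in a directed graph with non-negative edge weights.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns (total_weight, path) or (float('inf'), []) if unreachable.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return float('inf'), []
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

# Hypothetical taxiway fragment: runway -> intermediate taxiways -> apron.
taxiways = {
    'runway': [('A', 40.0), ('B', 65.0)],
    'A': [('C', 30.0)],
    'B': [('apron', 20.0)],
    'C': [('apron', 10.0)],
}
cost, route = dijkstra(taxiways, 'runway', 'apron')
```

The conflict detection and resolution phase described in the abstract would then post-process routes produced this way; it is not modeled in this sketch.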
http://www.mdpi.com/1999-4893/6/3/485
This paper studies the string-excluding (STR-EC)-constrained longest common subsequence (LCS) problem, a generalized LCS problem. For the two input sequences, X and Y, of lengths n and m, and a constraint string, P, of length r, the goal is to find the longest common subsequence, Z, of X and Y that excludes P as a substring. The problem and its solution were first proposed by Chen and Chao, but we found that their algorithm cannot solve the problem correctly. A new dynamic programming solution for the STR-EC-LCS problem is then presented in this paper, and the correctness of the new algorithm is proven. The time complexity of the new algorithm is O(nmr).Algorithms2013-08-1563Article10.3390/a60304854854931999-48932013-08-15doi: 10.3390/a6030485Daxin ZhuXiaodong Wang<![CDATA[Algorithms, Vol. 6, Pages 471-484: Linear Time Local Approximation Algorithm for Maximum Stable Marriage]]>
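An O(nmr) dynamic program of the kind described above can be illustrated by augmenting the usual LCS table with a third index k recording how long a prefix of the forbidden string P the chosen subsequence currently matches, advanced via a KMP automaton and never allowed to reach r. This is a hedged sketch of the idea, not necessarily the authors' exact recurrence:

```python
def kmp_failure(p):
    """Classic KMP failure function for pattern p."""
    fail = [0] * len(p)
    k = 0
    for i in range(1, len(p)):
        while k and p[i] != p[k]:
            k = fail[k - 1]
        if p[i] == p[k]:
            k += 1
        fail[i] = k
    return fail

def str_ec_lcs_length(x, y, p):
    """Length of the longest common subsequence of x and y
    that does not contain the non-empty string p as a substring."""
    n, m, r = len(x), len(y), len(p)
    fail = kmp_failure(p)

    def step(k, c):  # automaton: longest prefix of p matched after reading c
        while k and p[k] != c:
            k = fail[k - 1]
        return k + 1 if p[k] == c else 0

    NEG = float('-inf')
    # f[j][k]: best length using prefixes x[:i], y[:j] with match state k < r
    f = [[NEG] * r for _ in range(m + 1)]
    for j in range(m + 1):
        f[j][0] = 0
    for i in range(1, n + 1):
        g = [row[:] for row in f]          # inherit: skip x[i-1]
        for j in range(1, m + 1):
            for k in range(r):
                g[j][k] = max(g[j][k], g[j - 1][k])  # skip y[j-1]
            if x[i - 1] == y[j - 1]:
                c = x[i - 1]
                for k in range(r):
                    if f[j - 1][k] == NEG:
                        continue
                    k2 = step(k, c)
                    if k2 < r:             # never complete an occurrence of p
                        g[j][k2] = max(g[j][k2], f[j - 1][k] + 1)
        f = g
    return max(max(row) for row in f)
```

For example, str_ec_lcs_length("ab", "ab", "ab") returns 1, since the ordinary LCS "ab" contains P and must be avoided.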
http://www.mdpi.com/1999-4893/6/3/471
We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum-size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear-time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm, every person makes decisions using only their own preference list and some information obtained from the members of that list (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.Algorithms2013-08-1563Article10.3390/a60304714714841999-48932013-08-15doi: 10.3390/a6030471Zoltán Király<![CDATA[Algorithms, Vol. 6, Pages 459-470: Ubiquitous Integrity via Network Integration and Parallelism—Sustaining Pedestrian/Bike Urbanism]]>
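The local decision making referred to above is modeled on the classical Gale-Shapley deferred-acceptance procedure, in which each proposer consults only their own list. A minimal sketch of that baseline for complete, strict preference lists (the ties and incomplete lists handled by the paper's algorithm are out of scope here, and the names are hypothetical):

```python
def gale_shapley(men_prefs, women_prefs):
    """Man-proposing deferred acceptance, complete strict preference lists.

    men_prefs[m] / women_prefs[w]: lists ranking the other side, best first.
    Returns a stable matching as a dict woman -> man.
    """
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}   # index of next woman to propose to
    engaged = {}                              # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])           # w trades up; old partner is free
            engaged[w] = m
        else:
            free.append(m)                    # w rejects m
    return engaged

men = {'m1': ['w1', 'w2'], 'm2': ['w1', 'w2']}
women = {'w1': ['m2', 'm1'], 'w2': ['m1', 'm2']}
matching = gale_shapley(men, women)
```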
http://www.mdpi.com/1999-4893/6/3/459
Nowadays, due to concern regarding environmental issues, establishing pedestrian/bike-friendly urbanism is widely encouraged. To promote safety-assured mobile communication environments, efficient and reliable maintenance and information integrity need to be designed in, especially in places that are highly prone to interference. For busy traffic areas, regular degree-3 dedicated short range communication (DSRC) networks provide safety and information features, with availability, reliability, and maintainability along multi-lane paths. For sparsely populated areas, wireless sensor probes are a rational choice, especially if sensor nodes can be organized to enhance security, reliability, and flexibility. Alternative network topologies, such as spider-webs, generalized honeycomb tori, and cube-connected cycles, are proposed for comparison and analysis in DSRC and cellular communications to enhance communication integrity.Algorithms2013-08-1263Article10.3390/a60304594594701999-48932013-08-12doi: 10.3390/a6030459Li-Yen Hsu<![CDATA[Algorithms, Vol. 6, Pages 457-458: Special Issue on Graph Algorithms]]>
http://www.mdpi.com/1999-4893/6/3/457
This special issue of Algorithms is devoted to the design and analysis of algorithms for solving combinatorial problems of a theoretical or practical nature involving graphs, with a focus on computational complexity.Algorithms2013-08-1263Editorial10.3390/a60304574574581999-48932013-08-12doi: 10.3390/a6030457Jesper Jansson<![CDATA[Algorithms, Vol. 6, Pages 442-456: A Review of Routing Protocols Based on Ant-Like Mobile Agents]]>
http://www.mdpi.com/1999-4893/6/3/442
A survey of routing protocols based on ant-like mobile agents is given. These protocols are often employed in Mobile Ad Hoc Networks (MANET). Mobile Ad Hoc Networks are collections of wireless mobile nodes, such as PDAs, laptop computers, and cellular phones with wireless communication capability, that dynamically form a temporary network without using any existing network infrastructure such as wireless access points. The only infrastructure in a MANET is the wireless communication interfaces on the devices. In such a circumstance, where some of the wireless devices are not within wireless range of each other, multi-hop routing is required to transmit messages to the destination. A node that wants to start communication with other nodes that are not within its one-hop wireless transmission range has to request intermediate nodes to forward its communication packets to the destination. In this paper, we survey a variety of network protocols proposed to accommodate this situation. We focus especially on biologically inspired routing algorithms that are based on the ant colony optimization algorithm.Algorithms2013-08-0663Review10.3390/a60304424424561999-48932013-08-06doi: 10.3390/a6030442Yasushi Kambayashi<![CDATA[Algorithms, Vol. 6, Pages 430-441: Efficient in silico Chromosomal Representation of Populations via Indexing Ancestral Genomes]]>
http://www.mdpi.com/1999-4893/6/3/430
One of the major challenges in handling realistic forward simulations for plant and animal breeding is the sheer number of markers. Due to advancing technologies, the requirement has quickly grown from hundreds of markers to millions. Most simulators are lagging behind in handling these sizes, since they do not scale well. We present a scheme for representing and manipulating such realistic size genomes, without any loss of information. Usually, the simulation is forward and over tens to hundreds of generations with hundreds of thousands of individuals at each generation. We demonstrate through simulations that our representation can be two orders of magnitude faster and handle at least two orders of magnitude more markers than existing software on realistic breeding scenarios.Algorithms2013-07-3063Article10.3390/a60304304304411999-48932013-07-30doi: 10.3390/a6030430Niina HaiminenFilippo UtroClaude LebretonPascal FlamentZivan KaramanLaxmi Parida<![CDATA[Algorithms, Vol. 6, Pages 407-429: Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function]]>
http://www.mdpi.com/1999-4893/6/3/407
A novel noise filtering algorithm based on the averaging Intrinsic Mode Function (aIMF), a derivation of Empirical Mode Decomposition (EMD), is proposed to remove white Gaussian noise from foreign currency exchange rates, which are nonlinear nonstationary time series signals. Noise patterns with different amplitudes and frequencies were randomly mixed into the five exchange rates. A number of filters, namely the Extended Kalman Filter (EKF), the Wavelet Transform (WT), the Particle Filter (PF) and the averaging Intrinsic Mode Function (aIMF) algorithm, were used to compare filtering and smoothing performance. The aIMF algorithm demonstrated the highest noise reduction among these filters.Algorithms2013-07-1963Article10.3390/a60304074074291999-48932013-07-19doi: 10.3390/a6030407Bhusana PremanodeJumlong VongprasertChristofer Toumazou<![CDATA[Algorithms, Vol. 6, Pages 396-406: New Heuristics for Rooted Triplet Consistency]]>
http://www.mdpi.com/1999-4893/6/3/396
Rooted triplets are becoming one of the most important types of input for reconstructing rooted phylogenies. A rooted triplet is a phylogenetic tree on three leaves and shows the evolutionary relationship of the corresponding three species. In this paper, we investigate the problem of inferring the maximum consensus evolutionary tree from a set of rooted triplets. This problem is known to be APX-hard. We present two new heuristic algorithms. For a given set of m triplets on n species, the FastTree algorithm runs in O(m + α(n)n^2) time, where α(n) is the functional inverse of Ackermann’s function. This is faster than any other previously known algorithm, although the outcome is less satisfactory. The Best Pair Merge with Total Reconstruction (BPMTR) algorithm runs in O(mn^3) time and, on average, performs better than any other previously known algorithm for this problem.Algorithms2013-07-1163Article10.3390/a60303963964061999-48932013-07-11doi: 10.3390/a6030396Soheil JahangiriSeyed HashemiHadi Poormohammadi<![CDATA[Algorithms, Vol. 6, Pages 383-395: Maximum Locally Stable Matchings]]>
http://www.mdpi.com/1999-4893/6/3/383
Motivated by the observation that most companies are more likely to consider job applicants referred by their employees than those who applied on their own, Arcaute and Vassilvitskii modeled a job market that integrates social networks into stable matchings in an interesting way. We call their model HR+SN because an instance of their model is an ordered pair (I, G), where I is a typical instance of the Hospitals/Residents problem (HR) and G is a graph that describes the social network (SN) of the residents in I. A matching μ of hospitals and residents has a local blocking pair (h, r) if (h, r) is a blocking pair of μ and there is a resident r' such that r' is simultaneously an employee of h in the matching and a neighbor of r in G. Such a pair is likely to compromise the matching because the participants have access to each other through r': r can give her resume to r', who can then forward it to h. A locally stable matching is a matching with no local blocking pairs. The cardinality of the locally stable matchings of (I, G) can vary. This paper presents a variety of results on computing a locally stable matching with maximum cardinality.Algorithms2013-06-2463Article10.3390/a60303833833951999-48932013-06-24doi: 10.3390/a6030383Christine ChengEric McDermid<![CDATA[Algorithms, Vol. 6, Pages 371-382: Improving Man-Optimal Stable Matchings by Minimum Change of Preference Lists]]>
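The local-blocking-pair definition above can be checked directly from a matching and the social network G: a pair blocks locally only if it blocks in the ordinary Hospitals/Residents sense and some neighbor of the resident already works at the hospital. A hedged sketch of that check (the data layout and all names are hypothetical):

```python
def local_blocking_pairs(matching, h_prefs, r_prefs, neighbors, capacity):
    """Find the local blocking pairs of a hospitals/residents matching.

    matching: dict resident -> hospital (unmatched residents are absent).
    h_prefs[h], r_prefs[r]: preference lists over acceptables, best first.
    neighbors[r]: residents adjacent to r in the social network G.
    capacity[h]: number of posts at hospital h.
    """
    h_rank = {h: {r: i for i, r in enumerate(p)} for h, p in h_prefs.items()}
    r_rank = {r: {h: i for i, h in enumerate(p)} for r, p in r_prefs.items()}
    assigned = {h: [r for r, x in matching.items() if x == h] for h in h_prefs}
    blocking = []
    for r, prefs in r_prefs.items():
        for h in prefs:
            cur = matching.get(r)
            # r must strictly prefer h to its current assignment (or be unmatched)
            if cur is not None and r_rank[r][h] >= r_rank[r][cur]:
                continue
            # h must have a free post or prefer r to some current employee
            free = len(assigned[h]) < capacity[h]
            worse = any(h_rank[h][e] > h_rank[h][r] for e in assigned[h])
            if not (free or worse):
                continue
            # locality: some neighbor of r must already be an employee of h
            if any(e in neighbors.get(r, ()) for e in assigned[h]):
                blocking.append((h, r))
    return blocking

# Hypothetical instance: h1 employs r2; unmatched r1 is preferred by h1.
hospitals = {'h1': ['r1', 'r2']}
residents = {'r1': ['h1'], 'r2': ['h1']}
with_link = local_blocking_pairs({'r2': 'h1'}, hospitals, residents,
                                 {'r1': {'r2'}}, {'h1': 1})
no_link = local_blocking_pairs({'r2': 'h1'}, hospitals, residents,
                               {}, {'h1': 1})
```

With the social edge (r1, r2) present, (h1, r1) is a local blocking pair; without it, the same matching is locally stable even though it is not stable in the classical sense.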
http://www.mdpi.com/1999-4893/6/2/371
In the stable marriage problem, any instance admits the so-called man-optimal stable matching, in which every man is assigned the best possible partner. However, there are instances for which all men receive low-ranked partners even in the man-optimal stable matching. In this paper we consider the problem of improving the man-optimal stable matching by changing only one man’s preference list. We show that the optimization variant and the decision variant of this problem can be solved in time O(n^3) and O(n^2), respectively, where n is the number of men (women) in an input. We further extend the problem so that we are allowed to change k men’s preference lists. We show that the problem is W[1]-hard with respect to the parameter k and give O(n^(2k+1))-time and O(n^(k+1))-time exact algorithms for the optimization and decision variants, respectively. Finally, we show that the problems become easy when k = n; we give O(n^2.5 log n)-time and O(n^2)-time algorithms for the optimization and decision variants, respectively.Algorithms2013-05-2862Article10.3390/a60203713713821999-48932013-05-28doi: 10.3390/a6020371Takao InoshitaRobert IrvingKazuo IwamaShuichi MiyazakiTakashi Nagase<![CDATA[Algorithms, Vol. 6, Pages 352-370: Filtering Degenerate Patterns with Application to Protein Sequence Analysis]]>
http://www.mdpi.com/1999-4893/6/2/352
In biology, the notion of degenerate pattern plays a central role in describing various phenomena. For example, protein active site patterns, like those contained in the PROSITE database, e.g., [FY ]DPC[LIM][ASG]C[ASG], are, in general, represented by degenerate patterns with character classes. Researchers have developed several approaches over the years to discover degenerate patterns. Although these methods have been exhaustively and successfully tested on genomes and proteins, their outcomes often far exceed the size of the original input, making the output hard to manage and to interpret in refined analyses requiring manual inspection. In this paper, we discuss a characterization of degenerate patterns with character classes, without gaps, and we introduce the concept of pattern priority for comparing and ranking different patterns. We define the class of underlying patterns for filtering any set of degenerate patterns into a new set that is linear in the size of the input sequence. We present some preliminary results on the detection of subtle signals in protein families. Results show that our approach drastically reduces the number of patterns in output for a tool for protein analysis, while retaining the representative patterns.Algorithms2013-05-2262Article10.3390/a60203523523701999-48932013-05-22doi: 10.3390/a6020352Matteo CominDavide Verzotto<![CDATA[Algorithms, Vol. 6, Pages 319-351: Practical Compressed Suffix Trees]]>
http://www.mdpi.com/1999-4893/6/2/319
The suffix tree is an extremely important data structure in bioinformatics. Classical implementations require much space, which renders them useless to handle large sequence collections. Recent research has obtained various compressed representations for suffix trees, with widely different space-time tradeoffs. In this paper we show how the use of range min-max trees yields novel representations achieving practical space/time tradeoffs. In addition, we show how those trees can be modified to index highly repetitive collections, obtaining the first compressed suffix tree representation that effectively adapts to that scenario.Algorithms2013-05-2162Article10.3390/a60203193193511999-48932013-05-21doi: 10.3390/a6020319Andrés AbeliukRodrigo CánovasGonzalo Navarro<![CDATA[Algorithms, Vol. 6, Pages 309-318: Multi-Sided Compression Performance Assessment of ABI SOLiD WES Data]]>
http://www.mdpi.com/1999-4893/6/2/309
Data storage has been a major and growing part of IT budgets for research for many years. Especially in biology, the amount of raw data products is growing continuously, and the advent of the so-called "next-generation" sequencers has made things worse. Affordable prices have pushed scientists to massively sequence whole genomes and to screen large cohorts of patients, thereby producing tons of data as a side effect. The need for maximally fitting data into the available storage volumes has encouraged and welcomed new compression algorithms and tools. We focus here on state-of-the-art compression tools and measure their compression performance on ABI SOLiD data.Algorithms2013-05-2162Article10.3390/a60203093093181999-48932013-05-21doi: 10.3390/a6020309Tommaso MazzaStefano Castellana<![CDATA[Algorithms, Vol. 6, Pages 278-308: A Generic Two-Phase Stochastic Variable Neighborhood Approach for Effectively Solving the Nurse Rostering Problem]]>
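The kind of side-by-side measurement described above, running several general-purpose compressors over the same payload and comparing ratios, can be sketched with Python's standard library alone. The payload is a hypothetical stand-in for real SOLiD color-space reads, and these are generic stdlib codecs, not the specialized tools benchmarked in the paper:

```python
import bz2
import gzip
import lzma

def compression_report(payload: bytes):
    """Compression ratio (compressed size / original size) per codec."""
    codecs = {
        'gzip': gzip.compress,
        'bz2': bz2.compress,
        'lzma': lzma.compress,
    }
    return {name: len(fn(payload)) / len(payload) for name, fn in codecs.items()}

# Hypothetical, highly repetitive color-space-like payload (compresses well).
payload = b'T3.002010020..0200' * 2000
report = compression_report(payload)
```

Real read data is far less repetitive than this toy payload, which is why domain-specific compressors that exploit the structure of sequencing output can outperform general-purpose codecs.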
http://www.mdpi.com/1999-4893/6/2/278
In this contribution, a generic two-phase stochastic variable neighborhood approach is applied to nurse rostering problems. The proposed algorithm is used for creating feasible and efficient nurse rosters for many different nurse rostering cases. In order to demonstrate the efficiency and generic applicability of the proposed approach, experiments with real-world input data coming from many different nurse rostering cases have been conducted. The nurse rostering instances used have significant differences in nature, structure, philosophy and the type of hard and soft constraints. Computational results show that the proposed algorithm performs better than six different existing approaches applied to the same nurse rostering input instances using the same evaluation criteria. In addition, in all cases, it manages to reach the best-known fitness achieved in the literature, and in one case, it manages to beat the best-known fitness achieved till now.Algorithms2013-05-2162Article10.3390/a60202782783081999-48932013-05-21doi: 10.3390/a6020278Ioannis SolosIoannis TassopoulosGrigorios Beligiannis<![CDATA[Algorithms, Vol. 6, Pages 245-277: Fast Rescheduling of Multiple Workflows to Constrained Heterogeneous Resources Using Multi-Criteria Memetic Computing]]>
http://www.mdpi.com/1999-4893/6/2/245
This paper is motivated by, but not limited to, the task of scheduling jobs organized in workflows to a computational grid. Due to the dynamic nature of grid computing, more or less permanent replanning is required so that only very limited time is available to come up with a revised plan. To meet the requirements of both users and resource owners, a multi-objective optimization comprising execution time and costs is needed. This paper summarizes our work over the last six years in this field, and reports new results obtained by the combination of heuristics and evolutionary search in an adaptive Memetic Algorithm. We will show how different heuristics contribute to solving varying replanning scenarios and investigate the question of the maximum manageable work load for a grid of growing size starting with a load of 200 jobs and 20 resources up to 7000 jobs and 700 resources. Furthermore, the effect of four different local searchers incorporated into the evolutionary search is studied. We will also report briefly on approaches that failed within the short time frame given for planning.Algorithms2013-04-2262Article10.3390/a60202452452771999-48932013-04-22doi: 10.3390/a6020245Wilfried JakobSylvia StrackAlexander QuinteGünther BengelKarl-Uwe StuckyWolfgang Süß<![CDATA[Algorithms, Vol. 6, Pages 227-244: Solving University Course Timetabling Problems Using Constriction Particle Swarm Optimization with Local Search]]>
http://www.mdpi.com/1999-4893/6/2/227
Course timetabling is a combinatorial optimization problem and has been confirmed to be an NP-complete problem. Course timetabling problems are different for different universities. The studied university course timetabling problem involves hard constraints such as classroom, class curriculum, and other variables. Concurrently, some soft constraints also need to be considered, including teachers’ preferred times, favorite class times, etc. These preferences correspond to satisfaction values obtained via questionnaires. Particle swarm optimization (PSO) is a promising scheme for solving NP-complete problems, due to its fast convergence, few parameter settings and ability to adapt to dynamic environments. Therefore, PSO was applied to solving course timetabling problems in this work. To reduce the computational complexity, a timeslot was designated in a particle’s encoding as the scheduling unit. Two types of PSO, the inertia weight version and the constriction version, were evaluated. Moreover, an interchange heuristic was utilized to explore the neighboring solution space to improve solution quality. Additionally, schedule conflicts are handled after a solution has been generated. Experimental results demonstrate that the proposed scheme of constriction PSO with the interchange heuristic is able to generate satisfactory course timetables that meet the requirements of teachers and classes under the various applied constraints.Algorithms2013-04-1962Article10.3390/a60202272272441999-48932013-04-19doi: 10.3390/a6020227Ruey-Maw ChenHsiao-Fang Shih<![CDATA[Algorithms, Vol. 6, Pages 197-226: Enforcing Security Mechanisms in the IP-Based Internet of Things: An Algorithmic Overview]]>
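The constriction version mentioned above replaces the inertia weight with Clerc's constriction coefficient chi = 2/|2 - phi - sqrt(phi^2 - 4*phi)| for phi = c1 + c2 > 4. A generic sketch of the velocity/position update on a toy continuous objective follows; the timetable encoding, interchange heuristic and conflict handling from the paper are omitted, and the sphere function stands in as a hypothetical fitness:

```python
import math
import random

def constriction_pso(fitness, dim, bounds, particles=20, iters=200, seed=1):
    """Minimize `fitness` with constriction-type PSO (Clerc-Kennedy update)."""
    rng = random.Random(seed)
    c1 = c2 = 2.05
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))  # ~0.7298
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = chi * (vel[i][d]
                                   + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                   + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
best, best_val = constriction_pso(sphere, dim=3, bounds=(-5.0, 5.0))
```

A timetabling application would replace the real-valued position with a timeslot encoding and the sphere function with a constraint-violation/satisfaction score, as the paper describes.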
http://www.mdpi.com/1999-4893/6/2/197
The Internet of Things (IoT) refers to the Internet-like structure of billions of interconnected constrained devices, denoted as “smart objects”. Smart objects have limited capabilities, in terms of computational power and memory, and might be battery-powered devices, thus raising the need to adopt particularly energy efficient technologies. Among the most notable challenges that building interconnected smart objects brings about, there are standardization and interoperability. The use of IP has been foreseen as the standard for interoperability for smart objects. As billions of smart objects are expected to come to life and IPv4 addresses have eventually reached depletion, IPv6 has been identified as a candidate for smart-object communication. The deployment of the IoT raises many security issues coming from (i) the very nature of smart objects, e.g., the adoption of lightweight cryptographic algorithms, in terms of processing and memory requirements; and (ii) the use of standard protocols, e.g., the need to minimize the amount of data exchanged between nodes. This paper provides a detailed overview of the security challenges related to the deployment of smart objects. Security protocols at network, transport, and application layers are discussed, together with lightweight cryptographic algorithms proposed to be used instead of conventional and demanding ones, in terms of computational resources. Security aspects, such as key distribution and security bootstrapping, and application scenarios, such as secure data aggregation and service authorization, are also discussed.Algorithms2013-04-0262Article10.3390/a60201971972261999-48932013-04-02doi: 10.3390/a6020197Simone CiraniGianluigi FerrariLuca Veltri<![CDATA[Algorithms, Vol. 6, Pages 169-196: An Open-Source Implementation of the Critical-Line Algorithm for Portfolio Optimization]]>
http://www.mdpi.com/1999-4893/6/1/169
Portfolio optimization is one of the problems most frequently encountered by financial practitioners. The main goal of this paper is to fill a gap in the literature by providing a well-documented, step-by-step open-source implementation of Critical Line Algorithm (CLA) in scientific language. The code is implemented as a Python class object, which allows it to be imported like any other Python module, and integrated seamlessly with pre-existing code. We discuss the logic behind CLA following the algorithm’s decision flow. In addition, we developed several utilities that support finding answers to recurrent practical problems. We believe this publication will offer a better alternative to financial practitioners, many of whom are currently relying on generic-purpose optimizers which often deliver suboptimal solutions. The source code discussed in this paper can be downloaded at the authors’ websites (see Appendix).Algorithms2013-03-2261Article10.3390/a60101691691961999-48932013-03-22doi: 10.3390/a6010169David BaileyMarcos López de Prado<![CDATA[Algorithms, Vol. 6, Pages 161-168: Stable Multicommodity Flows]]>
http://www.mdpi.com/1999-4893/6/1/161
We extend the stable flow model of Fleiner to multicommodity flows. In addition to the preference lists of agents on trading partners for each commodity, every trading pair has a preference list on the commodities that the seller can sell to the buyer. A blocking walk (with respect to a certain commodity) may include saturated arcs, provided that a positive amount of less preferred commodity is traded along the arc. We prove that a stable multicommodity flow always exists, although it is PPAD-hard to find one.Algorithms2013-03-1861Article10.3390/a60101611611681999-48932013-03-18doi: 10.3390/a6010161Tamás KirályJúlia Pap<![CDATA[Algorithms, Vol. 6, Pages 136-160: Algorithms for Non-Negatively Constrained Maximum Penalized Likelihood Reconstruction in Tomographic Imaging]]>
http://www.mdpi.com/1999-4893/6/1/136
Image reconstruction is a key component in many medical imaging modalities. The problem of image reconstruction can be viewed as a special inverse problem where the unknown image pixel intensities are estimated from the observed measurements. Since the measurements are usually noise contaminated, statistical reconstruction methods are preferred. In this paper we review some non-negatively constrained simultaneous iterative algorithms for maximum penalized likelihood reconstructions, where all measurements are used to estimate all pixel intensities in each iteration.Algorithms2013-03-1261Review10.3390/a60101361361601999-48932013-03-12doi: 10.3390/a6010136Jun Ma<![CDATA[Algorithms, Vol. 6, Pages 119-135: A Polynomial-Time Algorithm for Computing the Maximum Common Connected Edge Subgraph of Outerplanar Graphs of Bounded Degree]]>
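The simplest member of the simultaneous, non-negativity-preserving family reviewed here is the unpenalized EM/ML (MLEM) multiplicative update for Poisson data, x_j <- x_j * [sum_i A_ij * y_i/(Ax)_i] / [sum_i A_ij], which the penalized algorithms generalize. A small sketch on a hypothetical noiseless 2x2 system (this is the classic baseline, not any specific penalized method from the review):

```python
def mlem(A, y, iters=2000):
    """Multiplicative EM update for Poisson maximum-likelihood reconstruction.

    A: system matrix as a list of rows (non-negative entries);
    y: observed measurements. Starting from a positive x, every iterate
    stays non-negative because the update only multiplies by ratios.
    """
    m, n = len(A), len(A[0])
    col_sum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    x = [1.0] * n
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]  # A x
        ratio = [y[i] / proj[i] for i in range(m)]                        # y/(Ax)
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / col_sum[j] for j in range(n)]
    return x

# Hypothetical noiseless system with true intensities (2, 3): y = A @ (2, 3).
A = [[1.0, 0.5], [0.25, 1.0]]
y = [3.5, 3.5]
x = mlem(A, y)
```

With consistent noiseless data the iterates approach the exact solution (2, 3); the penalized algorithms surveyed in the paper modify this update to trade data fit against a smoothness penalty.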
http://www.mdpi.com/1999-4893/6/1/119
The maximum common connected edge subgraph problem is to find a connected graph with the maximum number of edges that is isomorphic to a subgraph of each of the two input graphs; it has applications in pattern recognition and chemistry. This paper presents a dynamic programming algorithm for the problem when the two input graphs are outerplanar graphs of bounded vertex degree, a case for which the problem is known to be NP-hard even for outerplanar graphs of unbounded degree. Although the algorithm repeatedly modifies the input graphs, it is shown that the number of relevant subproblems is polynomially bounded and, thus, the algorithm works in polynomial time.Algorithms2013-02-1861Article10.3390/a60101191191351999-48932013-02-18doi: 10.3390/a6010119Tatsuya AkutsuTakeyuki Tamura<![CDATA[Algorithms, Vol. 6, Pages 100-118: Computing the Eccentricity Distribution of Large Graphs]]>
http://www.mdpi.com/1999-4893/6/1/100
The eccentricity of a node in a graph is defined as the length of a longest shortest path starting at that node. The eccentricity distribution over all nodes is a relevant descriptive property of the graph, and its extreme values allow the derivation of measures such as the radius, diameter, center and periphery of the graph. This paper describes two new methods for computing the eccentricity distribution of large graphs such as social networks, web graphs, biological networks and routing networks. We first propose an exact algorithm based on eccentricity lower and upper bounds, which achieves significant speedups compared to the straightforward algorithm when computing both the extreme values of the distribution as well as the eccentricity distribution as a whole. The second algorithm that we describe is a hybrid strategy that combines the exact approach with an efficient sampling technique in order to obtain an even larger speedup on the computation of the entire eccentricity distribution. We perform an extensive set of experiments on a number of large graphs in order to measure and compare the performance of our algorithms, and demonstrate how we can efficiently compute the eccentricity distribution of various large real-world graphs.Algorithms2013-02-1861Article10.3390/a60101001001181999-48932013-02-18doi: 10.3390/a6010100Frank TakesWalter Kosters<![CDATA[Algorithms, Vol. 6, Pages 84-99: Dubins Traveling Salesman Problem with Neighborhoods: A Graph-Based Approach]]>
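Bound-based pruning of the kind described above works because each exact BFS from a node v yields e(v) and, via the triangle inequality, tightens every other node's interval to [max(e(v) - d(v,w), d(v,w)), e(v) + d(v,w)]; nodes whose bounds meet need no BFS of their own. A minimal illustration (the node-selection strategy here is a naive stand-in for the strategies evaluated in the paper):

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph (dict node -> neighbors)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def eccentricities(adj):
    """Exact eccentricity of every node in a connected undirected graph,
    skipping BFS for nodes whose lower and upper bounds coincide."""
    nodes = list(adj)
    lower = {v: 0 for v in nodes}
    upper = {v: len(nodes) for v in nodes}
    ecc = {}
    candidates = set(nodes)
    while candidates:
        # naive selection: widest remaining bound interval first
        v = max(candidates, key=lambda u: upper[u] - lower[u])
        dist = bfs_dist(adj, v)
        ecc[v] = max(dist.values())
        candidates.discard(v)
        for w in list(candidates):
            d = dist[w]
            lower[w] = max(lower[w], ecc[v] - d, d)
            upper[w] = min(upper[w], ecc[v] + d)
            if lower[w] == upper[w]:        # bounds met: eccentricity is exact
                ecc[w] = lower[w]
                candidates.discard(w)
    return ecc

# Hypothetical path graph 0-1-2-3: eccentricities are 3, 2, 2, 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
ecc = eccentricities(path)
```

On graphs with strong structure (small effective diameter, many peripheral nodes), most nodes are resolved by bounds alone, which is the source of the speedups reported in the abstract.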
http://www.mdpi.com/1999-4893/6/1/84
We study the problem of finding the minimum-length curvature constrained closed path through a set of regions in the plane. This problem is referred to as the Dubins Traveling Salesperson Problem with Neighborhoods (DTSPN). An algorithm is presented that uses sampling to cast this infinite dimensional combinatorial optimization problem as a Generalized Traveling Salesperson Problem (GTSP) with intersecting node sets. The GTSP is then converted to an Asymmetric Traveling Salesperson Problem (ATSP) through a series of graph transformations, thus allowing the use of existing approximation algorithms. This algorithm is shown to perform no worse than the best existing DTSPN algorithm and is shown to perform significantly better when the regions overlap. We report on the application of this algorithm to route an Unmanned Aerial Vehicle (UAV) equipped with a radio to collect data from sparsely deployed ground sensors in a field demonstration of autonomous detection, localization, and verification of multiple acoustic events.Algorithms2013-02-0461Article10.3390/a601008484991999-48932013-02-04doi: 10.3390/a6010084Jason IsaacsJoão Hespanha<![CDATA[Algorithms, Vol. 6, Pages 60-83: Tractabilities and Intractabilities on Geometric Intersection Graphs]]>
http://www.mdpi.com/1999-4893/6/1/60
A graph is said to be an intersection graph if there is a set of objects such that each vertex corresponds to an object and two vertices are adjacent if and only if the corresponding objects have a nonempty intersection. There are several natural graph classes that have geometric intersection representations. The geometric representations sometimes help to prove tractability/intractability of problems on graph classes. In this paper, we show some results proved by using geometric representations.Algorithms2013-01-2561Article10.3390/a601006060831999-48932013-01-25doi: 10.3390/a6010060Ryuhei Uehara<![CDATA[Algorithms, Vol. 6, Pages 43-59: Computational Study on a PTAS for Planar Dominating Set Problem]]>
http://www.mdpi.com/1999-4893/6/1/43
The dominating set problem is a core NP-hard problem in combinatorial optimization and graph theory, and has many important applications. Baker [JACM 41, 1994] introduced a k-outer planar graph decomposition-based framework for designing polynomial time approximation schemes (PTAS) for a class of NP-hard problems in planar graphs. It is mentioned that the framework can be applied to obtain an O(2^(ck) n)-time (1+1/k)-approximation algorithm, where c is a constant, for the planar dominating set problem. We show that the approximation ratio achieved by the mentioned application of the framework is not bounded by any constant for the planar dominating set problem. We modify the application of the framework to give a PTAS for the planar dominating set problem. With k-outer planar graph decompositions, the modified PTAS has an approximation ratio of (1 + 2/k). Using 2k-outer planar graph decompositions, the modified PTAS achieves the approximation ratio (1+1/k) in O(2^(2ck) n) time. We report a computational study on the modified PTAS. Our results show that the modified PTAS is practical.Algorithms2013-01-2161Article10.3390/a601004343591999-48932013-01-21doi: 10.3390/a6010043Marjan MarzbanQian-Ping Gu<![CDATA[Algorithms, Vol. 6, Pages 29-42: Energy Efficient Routing in Wireless Sensor Networks Through Balanced Clustering]]>
http://www.mdpi.com/1999-4893/6/1/29
The wide utilization of Wireless Sensor Networks (WSNs) is obstructed by the severely limited energy constraints of the individual sensor nodes. This is the reason why a large part of the research in WSNs focuses on the development of energy efficient routing protocols. In this paper, a new protocol called Equalized Cluster Head Election Routing Protocol (ECHERP), which pursues energy conservation through balanced clustering, is proposed. ECHERP models the network as a linear system and, using the Gaussian elimination algorithm, calculates the combinations of nodes that can be chosen as cluster heads in order to extend the network lifetime. The performance evaluation of ECHERP is carried out through simulation tests, which evince the effectiveness of this protocol in terms of network energy efficiency when compared against other well-known protocols.Algorithms2013-01-1861Article10.3390/a601002929421999-48932013-01-18doi: 10.3390/a6010029Stefanos NikolidakisDionisis KandrisDimitrios VergadosChristos Douligeris<![CDATA[Algorithms, Vol. 6, Pages 12-28: ℓ1 Major Component Detection and Analysis (ℓ1 MCDA): Foundations in Two Dimensions]]>
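ECHERP's core numerical step is ordinary Gaussian elimination applied to its linear network model. A textbook elimination routine (a generic sketch of that workhorse, not ECHERP itself) looks like:

```python
# Generic Gaussian elimination with partial pivoting: the numerical
# workhorse that ECHERP applies to its linear network model. This is
# a textbook routine, not code from the ECHERP paper.

def gaussian_solve(A, b):
    """Solve A x = b for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        # partial pivoting: swap in the row with the largest pivot
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# 2x + y = 3 and x + 3y = 5 have the solution x = 0.8, y = 1.4
x = gaussian_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```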
http://www.mdpi.com/1999-4893/6/1/12
Principal Component Analysis (PCA) is widely used for identifying the major components of statistically distributed point clouds. Robust versions of PCA, often based in part on the ℓ1 norm (rather than the ℓ2 norm), are increasingly used, especially for point clouds with many outliers. Neither standard PCA nor robust PCAs can provide, without additional assumptions, reliable information for outlier-rich point clouds and for distributions with several main directions (spokes). We carry out a fundamental and complete reformulation of the PCA approach in a framework based exclusively on the ℓ1 norm and heavy-tailed distributions. The ℓ1 Major Component Detection and Analysis (ℓ1 MCDA) that we propose can determine the main directions and the radial extent of 2D data from single or multiple superimposed Gaussian or heavy-tailed distributions without and with patterned artificial outliers (clutter). In nearly all cases in the computational results, 2D ℓ1 MCDA has accuracy superior to that of standard PCA and of two robust PCAs, namely, the projection-pursuit method of Croux and Ruiz-Gazen and the ℓ1 factorization method of Ke and Kanade. (Standard PCA is, of course, superior to ℓ1 MCDA for Gaussian-distributed point clouds.) The computing time of ℓ1 MCDA is competitive with the computing times of the two robust PCAs.Algorithms2013-01-1761Article10.3390/a601001212281999-48932013-01-17doi: 10.3390/a6010012Ye TianQingwei JinJohn LaveryShu-Cherng Fang<![CDATA[Algorithms, Vol. 6, Pages 1-11: Maximum Disjoint Paths on Edge-Colored Graphs: Approximability and Tractability]]>
http://www.mdpi.com/1999-4893/6/1/1
The problem of finding the maximum number of vertex-disjoint uni-color paths in an edge-colored graph has been recently introduced in the literature, motivated by applications in social network analysis. In this paper we investigate the approximation and parameterized complexity of the problem. First, we show that, for any constant ε > 0, the problem is not approximable within factor c^(1-ε), where c is the number of colors, and that the corresponding decision problem is W[1]-hard when parameterized by the number of disjoint paths. Then, we present a fixed-parameter algorithm for the problem parameterized by the number and the length of the disjoint paths.Algorithms2012-12-2761Article10.3390/a60100011111999-48932012-12-27doi: 10.3390/a6010001Paola BonizzoniRiccardo DondiYuri Pirola<![CDATA[Algorithms, Vol. 5, Pages 654-667: Extracting Co-Occurrence Relations from ZDDs]]>
http://www.mdpi.com/1999-4893/5/4/654
A zero-suppressed binary decision diagram (ZDD) is a graph representation suitable for handling sparse set families. Given a ZDD representing a set family, we present an efficient algorithm to discover a hidden structure, called a co-occurrence relation, on the ground set. This computation can be done in time complexity that is related not to the number of sets, but to some feature values of the ZDD. We furthermore introduce a conditional co-occurrence relation and present an extraction algorithm, which enables us to discover further structural information.Algorithms2012-12-1354Article10.3390/a50406546546671999-48932012-12-13doi: 10.3390/a5040654Takahisa Toda<![CDATA[Algorithms, Vol. 5, Pages 636-653: Edge Detection from MRI and DTI Images with an Anisotropic Vector Field Flow Using a Divergence Map]]>
http://www.mdpi.com/1999-4893/5/4/636
The aim of this work is the extraction of edges from Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI) images by a deformable contour procedure, using an external force field derived from an anisotropic flow. Moreover, we introduce a divergence map in order to check the convergence of the process. As we know from vector calculus, divergence is a measure of the magnitude of a vector field convergence at a given point. Thus, by means of level curves of the divergence map, we have automatically selected an initial contour for the deformation process. If the initial curve includes the areas from which the vector field diverges, the field will be able to push the curve towards the edges. Furthermore, the divergence map highlights the presence of curves pointing to the most significant geometric parts of boundaries, corresponding to high curvature values. In this way, the skeleton of the extracted object will be rather well defined and may subsequently be employed in shape analysis and morphological studies.Algorithms2012-12-1354Article10.3390/a50406366366531999-48932012-12-13doi: 10.3390/a5040636Donatella Giuliani<![CDATA[Algorithms, Vol. 5, Pages 629-635: Testing Goodness of Fit of Random Graph Models]]>
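The divergence map above can be approximated on a discrete grid with central differences, div F = ∂Fx/∂x + ∂Fy/∂y. A minimal sketch (grid spacing and boundary handling are our assumptions, not the paper's):

```python
# Divergence of a 2D vector field on a grid by central differences:
#   div F = dFx/dx + dFy/dy.
# A minimal sketch of the divergence-map idea; the grid spacing h
# and the zeroed boundary are assumptions for illustration.

def divergence(Fx, Fy, h=1.0):
    """Fx, Fy: 2D lists indexed [y][x]. Returns interior divergence."""
    ny, nx = len(Fx), len(Fx[0])
    div = [[0.0] * nx for _ in range(ny)]
    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            dfx_dx = (Fx[y][x + 1] - Fx[y][x - 1]) / (2 * h)
            dfy_dy = (Fy[y + 1][x] - Fy[y - 1][x]) / (2 * h)
            div[y][x] = dfx_dx + dfy_dy
    return div

# The radial field F = (x, y) diverges everywhere: div F = 2
Fx = [[float(x) for x in range(5)] for _ in range(5)]
Fy = [[float(y) for _ in range(5)] for y in range(5)]
d = divergence(Fx, Fy)
```

Regions where the computed divergence is large and positive are sources of the field, which is what makes its level curves usable for placing the initial contour.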
http://www.mdpi.com/1999-4893/5/4/629
Random graphs are represented by matrices with independent 0–1 elements with probabilities determined by a small number of parameters. One of the oldest models is the Rasch model, where the odds are ratios of positive numbers scaling the rows and columns. Later, Persi Diaconis and his coworkers rediscovered the model for symmetric matrices and called it the beta model. Here we give goodness-of-fit tests for the model and extend the model to a version of the block model introduced by Holland, Laskey and Leinhardt.Algorithms2012-12-0654Article10.3390/a50406296296351999-48932012-12-06doi: 10.3390/a5040629Villő CsiszárPéter HussamiJános KomlósTamás MóriLídia RejtőGábor Tusnády<![CDATA[Algorithms, Vol. 5, Pages 604-628: Laplace–Fourier Transform of the Stretched Exponential Function: Analytic Error Bounds, Double Exponential Transform, and Open-Source Implementation “libkww”]]>
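In the symmetric beta model, the odds of edge {i, j} are commonly written as the product of per-vertex parameters, so the edge probability is β_i β_j / (1 + β_i β_j). A sketch that samples such a graph (the parameterization and names are our assumptions, not taken from the paper):

```python
import random

# Sample a symmetric random graph from the beta model: the odds of
# edge {i, j} are beta[i] * beta[j], so the edge probability is
# beta[i] * beta[j] / (1 + beta[i] * beta[j]).
# The parameterization here is the common convention, stated as an
# assumption; it is not code from the paper.

def sample_beta_graph(beta, seed=0):
    rng = random.Random(seed)
    n = len(beta)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            odds = beta[i] * beta[j]
            p = odds / (1.0 + odds)
            A[i][j] = A[j][i] = int(rng.random() < p)
    return A

A = sample_beta_graph([1.0, 2.0, 0.5])
```

Goodness-of-fit testing then asks whether an observed adjacency matrix is plausible under the maximum-likelihood fit of these per-vertex parameters.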
http://www.mdpi.com/1999-4893/5/4/604
The C library libkww provides functions to compute the Kohlrausch–Williams–Watts function, i.e., the Laplace–Fourier transform of the stretched (or compressed) exponential function exp(−t^β) for exponents β between 0.1 and 1.9 with double precision. Analytic error bounds are derived for the low and high frequency series expansions. For intermediate frequencies, the numeric integration is enormously accelerated by using the Ooura–Mori double exponential transformation. The primitive of the cosine transform needed for the convolution integrals is also implemented. The software is hosted at http://apps.jcns.fz-juelich.de/kww; version 3.0 is deposited as supplementary material to this article.Algorithms2012-11-2254Article10.3390/a50406046046281999-48932012-11-22doi: 10.3390/a5040604Joachim Wuttke<![CDATA[Algorithms, Vol. 5, Pages 588-603: An Efficient Algorithm for Automatic Peak Detection in Noisy Periodic and Quasi-Periodic Signals]]>
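The quantity being computed is the cosine transform ∫₀^∞ exp(−t^β) cos(ωt) dt. A crude truncated trapezoidal rule already illustrates it (libkww itself uses series expansions plus the far more efficient Ooura–Mori double exponential transform; this sketch is only for intuition):

```python
import math

# Crude trapezoidal approximation of the cosine transform
#   I(omega) = integral_0^T exp(-t**beta) * cos(omega*t) dt.
# libkww uses series expansions and the Ooura-Mori double
# exponential transform instead; this truncated-range sketch only
# illustrates what is being computed. T and n are assumptions.

def kww_cos(omega, beta, T=60.0, n=20000):
    h = T / n
    f = lambda t: math.exp(-t ** beta) * math.cos(omega * t)
    s = 0.5 * (f(0.0) + f(T)) + sum(f(k * h) for k in range(1, n))
    return h * s

# Sanity check: for beta = 1 the transform is the Lorentzian
# 1 / (1 + omega**2), so kww_cos(1.0, 1.0) should be close to 0.5.
```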
http://www.mdpi.com/1999-4893/5/4/588
We present a new method for automatic detection of peaks in noisy periodic and quasi-periodic signals. The new method, called automatic multiscale-based peak detection (AMPD), is based on the calculation and analysis of the local maxima scalogram, a matrix comprising the scale-dependent occurrences of local maxima. The usefulness of the proposed method is shown by applying the AMPD algorithm to simulated and real-world signals.Algorithms2012-11-2154Article10.3390/a50405885886031999-48932012-11-21doi: 10.3390/a5040588Felix ScholkmannJens BossMartin Wolf<![CDATA[Algorithms, Vol. 5, Pages 545-587: Exact Algorithms for Maximum Clique: A Computational Study ]]>
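A simplified, deterministic sketch of the AMPD idea follows. The published algorithm detrends the signal first and fills the non-maximum entries of the local maxima scalogram with random values; those refinements are omitted here, so this is an illustration of the multiscale principle rather than the authors' exact method:

```python
# Simplified, deterministic sketch of AMPD (automatic multiscale-
# based peak detection). The published algorithm detrends the signal
# and uses randomized scalogram entries; both are omitted here.

def ampd(x):
    n = len(x)
    L = n // 2 - 1
    # scalogram: mark 0 where x[i] beats both neighbours at scale k
    lms = [[1] * n for _ in range(L)]
    for k in range(1, L + 1):
        for i in range(k, n - k):
            if x[i] > x[i - k] and x[i] > x[i + k]:
                lms[k - 1][i] = 0
    # the scale with the most local maxima has the smallest row sum
    lam = min(range(L), key=lambda k: sum(lms[k]))
    # peaks are local maxima at every scale up to that one
    return [i for i in range(n)
            if all(lms[k][i] == 0 for k in range(lam + 1))]

# Three sharp peaks on a flat baseline are recovered exactly
signal = [0.0] * 31
signal[5] = signal[15] = signal[25] = 1.0
peaks = ampd(signal)
```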
http://www.mdpi.com/1999-4893/5/4/545
We investigate a number of recently reported exact algorithms for the maximum clique problem. The program code is presented and analyzed to show how small changes in implementation can have a drastic effect on performance. The computational study demonstrates how problem features and hardware platforms influence algorithm behaviour. The effect of vertex ordering is investigated. One of the algorithms (MCS) is broken into its constituent parts and we discover that one of these parts frequently degrades performance. It is shown that the standard procedure used for rescaling published results (i.e., adjusting run times based on the calibration of a standard program over a set of benchmarks) is unsafe and can lead to incorrect conclusions being drawn from empirical data.Algorithms2012-11-1954Article10.3390/a50405455455871999-48932012-11-19doi: 10.3390/a5040545Patrick Prosser<![CDATA[Algorithms, Vol. 5, Pages 529-544: Finite Element Quadrature of Regularized Discontinuous and Singular Level Set Functions in 3D Problems]]>
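The algorithms surveyed are all branch-and-bound searches over cliques; a minimal sketch in that spirit follows. It uses only the trivial size bound, whereas the programs in the study rely on much stronger colouring-based bounds, so treat this as a baseline illustration rather than any of the compared algorithms:

```python
# Minimal exact branch-and-bound for maximum clique, in the spirit
# of the algorithms compared in the study. Only the trivial size
# bound is used; the surveyed programs use colouring bounds.

def max_clique(adj):
    """adj: dict vertex -> set of neighbours. Returns a largest clique."""
    best = []

    def expand(clique, cand):
        nonlocal best
        if len(clique) > len(best):
            best = clique
        if len(clique) + len(cand) <= len(best):
            return  # prune: even taking every candidate cannot win
        for v in sorted(cand):
            expand(clique + [v], cand & adj[v])
            cand = cand - {v}

    expand([], set(adj))
    return best

# Triangle {0,1,2} with a pendant vertex 3 attached to 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
clique = max_clique(adj)
```

As the paper stresses, seemingly small implementation choices in exactly this kind of loop (vertex ordering, how `cand` is represented) can change running times drastically.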
http://www.mdpi.com/1999-4893/5/4/529
Regularized Heaviside and Dirac delta functions are used in several fields of computational physics and mechanics. Hence the issue of the quadrature of integrals of discontinuous and singular functions arises. In order to avoid ad hoc quadrature procedures, regularization of the discontinuous and the singular fields is often carried out. In particular, weight functions of the signed distance with respect to the discontinuity interface are exploited. Tornberg and Engquist (Journal of Scientific Computing, 2003, 19: 527–552) proved that the use of compact support weight functions is not suitable because it leads to errors that do not vanish for decreasing mesh size. They proposed the adoption of non-compact support weight functions. In the present contribution, the relationship between the Fourier transform of the weight functions and the accuracy of the regularization procedure is exploited. The proposed regularized approach was implemented in the eXtended Finite Element Method. As a three-dimensional example, we study a slender solid characterized by an inclined interface across which the displacement is discontinuous. The accuracy is evaluated for varying position of the discontinuity interfaces with respect to the underlying mesh. A procedure for the choice of the regularization parameters is proposed.Algorithms2012-11-0754Article10.3390/a50405295295441999-48932012-11-07doi: 10.3390/a5040529Elena BenvenutiGiulio VenturaNicola Ponara<![CDATA[Algorithms, Vol. 5, Pages 521-528: Alpha-Beta Pruning and Althöfer’s Pathology-Free Negamax Algorithm]]>
http://www.mdpi.com/1999-4893/5/4/521
The minimax algorithm, also called the negamax algorithm, remains today the most widely used search technique for two-player perfect-information games. However, minimaxing has been shown to be susceptible to game tree pathology, a paradoxical situation in which the accuracy of the search can decrease as the height of the tree increases. Althöfer’s alternative minimax algorithm has been proven to be invulnerable to pathology. However, it has not been clear whether alpha-beta pruning, a crucial component of practical game programs, could be applied in the context of Althöfer’s algorithm. In this brief paper, we show how alpha-beta pruning can be adapted to Althöfer’s algorithm.Algorithms2012-11-0554Article10.3390/a50405215215281999-48932012-11-05doi: 10.3390/a5040521Ashraf M. Abdelbar<![CDATA[Algorithms, Vol. 5, Pages 506-520: Extracting Hierarchies from Data Clusters for Better Classification]]>
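For reference, the standard negamax formulation with alpha-beta pruning that the paper adapts looks as follows. The toy game tree (leaves as integers, internal nodes as lists) is our own illustration, not an example from the paper:

```python
# Standard negamax with alpha-beta pruning: the baseline that the
# paper adapts to Althofer's pathology-free variant. The toy tree
# encoding (leaves = ints, internal nodes = lists) is ours.

def negamax(node, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, int):
        return node  # leaf value from the side to move's viewpoint
    value = float("-inf")
    for child in node:
        # scores are negated: the child's best is the opponent's best
        value = max(value, -negamax(child, -beta, -alpha))
        alpha = max(alpha, value)
        if alpha >= beta:
            break  # cutoff: the opponent will avoid this line anyway
    return value

# depth-2 tree: max over min(3, 12) = 3 and min(2, 4) = 2 gives 3
root_value = negamax([[3, 12], [2, 4]])
```

The cutoff `alpha >= beta` is exactly the step whose compatibility with Althöfer's algorithm the paper establishes.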
http://www.mdpi.com/1999-4893/5/4/506
In this paper we present the PHOCS-2 algorithm, which extracts a “Predicted Hierarchy Of ClassifierS”. The extracted hierarchy helps us to enhance the performance of flat classification. Nodes in the hierarchy contain classifiers. Each intermediate node corresponds to a set of classes and each leaf node corresponds to a single class. In PHOCS-2 we make an estimate for each node, achieving more precise computation of false positives, true positives and false negatives. Stopping criteria are based on the results of the flat classification. The proposed algorithm is validated against nine datasets.Algorithms2012-10-2354Article10.3390/a50405065065201999-48932012-10-23doi: 10.3390/a5040506German SapozhnikovAlexander Ulanov<![CDATA[Algorithms, Vol. 5, Pages 490-505: The Effects of Tabular-Based Content Extraction on Patent Document Clustering]]>
http://www.mdpi.com/1999-4893/5/4/490
Data can be represented in many different ways within a particular document or set of documents. Hence, attempts to automatically process the relationships between documents or determine the relevance of certain document objects can be problematic. In this study, we have developed software to automatically catalog objects contained in HTML files for patents granted by the United States Patent and Trademark Office (USPTO). Once these objects are recognized, the software creates metadata that assigns a data type to each document object. Such metadata can be easily processed and analyzed for subsequent text mining tasks. Specifically, document similarity and clustering techniques were applied to a subset of the USPTO document collection. Although our preliminary results demonstrate that tables and numerical data do not provide quantifiable value to a document’s content, the stage for future work in measuring the importance of document objects within a large corpus has been set.Algorithms2012-10-2254Article10.3390/a50404904905051999-48932012-10-22doi: 10.3390/a5040490Denise R. KoesslerBenjamin W. MartinBruce E. KieferMichael W. Berry<![CDATA[Algorithms, Vol. 5, Pages 469-489: Contextual Anomaly Detection in Text Data]]>
http://www.mdpi.com/1999-4893/5/4/469
We propose using side information to further inform anomaly detection algorithms of the semantic context of the text data they are analyzing, thereby considering both divergence from the statistical pattern seen in particular datasets and divergence seen from more general semantic expectations. Computational experiments show that our algorithm performs as expected on data that reflect real-world events with contextual ambiguity, while replicating conventional clustering on data that are either too specialized or generic to result in contextual information being actionable. These results suggest that our algorithm could potentially reduce false positive rates in existing anomaly detection systems.Algorithms2012-10-1954Article10.3390/a50404694694891999-48932012-10-19doi: 10.3390/a5040469Amogh MahapatraNisheeth SrivastavaJaideep Srivastava<![CDATA[Algorithms, Vol. 5, Pages 449-468: Forecasting the Unit Cost of a Product with Some Linear Fuzzy Collaborative Forecasting Models]]>
http://www.mdpi.com/1999-4893/5/4/449
Forecasting the unit cost of every product type in a factory is an important task. However, it is not easy to deal with the uncertainty of the unit cost. Fuzzy collaborative forecasting is a very effective treatment of the uncertainty in the distributed environment. This paper presents some linear fuzzy collaborative forecasting models to predict the unit cost of a product. In these models, the experts’ forecasts differ and therefore need to be aggregated through collaboration. According to the experimental results, the effectiveness of forecasting the unit cost was considerably improved through collaboration.Algorithms2012-10-1554Article10.3390/a50404494494681999-48932012-10-15doi: 10.3390/a5040449Toly Chen<![CDATA[Algorithms, Vol. 5, Pages 433-448: Interaction Enhanced Imperialist Competitive Algorithms]]>
http://www.mdpi.com/1999-4893/5/4/433
Imperialist Competitive Algorithm (ICA) is a new population-based evolutionary algorithm. It divides its population of solutions into several sub-populations, and then searches for the optimal solution through two operations: assimilation and competition. The assimilation operation moves each non-best solution (called colony) in a sub-population toward the best solution (called imperialist) in the same sub-population. The competition operation removes a colony from the weakest sub-population and adds it to another sub-population. Previous work on ICA focuses mostly on improving the assimilation operation or replacing the assimilation operation with more powerful meta-heuristics, but none focuses on the improvement of the competition operation. Since the competition operation simply moves a colony (i.e., an inferior solution) from one sub-population to another sub-population, it incurs weak interaction among these sub-populations. This work proposes Interaction Enhanced ICA that strengthens the interaction among the imperialists of all sub-populations. The performance of Interaction Enhanced ICA is validated on a set of benchmark functions for global optimization. The results indicate that the performance of Interaction Enhanced ICA is superior to that of ICA and its existing variants.Algorithms2012-10-1554Article10.3390/a50404334334481999-48932012-10-15doi: 10.3390/a5040433Jun-Lin LinYu-Hsiang TsaiChun-Ying YuMeng-Shiou Li<![CDATA[Algorithms, Vol. 5, Pages 421-432: Univariate Lp and ℓp Averaging, 0 < p < 1, in Polynomial Time by Utilization of Statistical Structure]]>
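The assimilation operation described above moves each colony a random fraction of the way toward its imperialist. A sketch of one such step, with the common convention of an assimilation coefficient around 2 (the coefficient and names are assumptions, not taken from this paper):

```python
import random

# One assimilation step of ICA: a colony moves a random fraction of
# the way toward its imperialist. The coefficient beta = 2 follows a
# common convention in the ICA literature; treat it as an assumption.

def assimilate(colony, imperialist, beta=2.0, seed=1):
    rng = random.Random(seed)
    return [c + rng.uniform(0, beta) * (i - c)
            for c, i in zip(colony, imperialist)]

moved = assimilate([0.0, 0.0], [1.0, 1.0])
```

With beta > 1 a colony can overshoot its imperialist, which keeps the search from collapsing straight onto the current best solution.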
http://www.mdpi.com/1999-4893/5/4/421
We present evidence that one can calculate generically combinatorially expensive Lp and ℓp averages, 0 < p < 1, in polynomial time by restricting the data to come from a wide class of statistical distributions. Our approach differs from the approaches in the previous literature, which are based on a priori sparsity requirements or on accepting a local minimum as a replacement for a global minimum. The functionals by which Lp averages are calculated are not convex but are radially monotonic and the functionals by which ℓp averages are calculated are nearly so, which are the keys to solvability in polynomial time. Analytical results for symmetric, radially monotonic univariate distributions are presented. An algorithm for univariate ℓp averaging is presented. Computational results for a Gaussian distribution, a class of symmetric heavy-tailed distributions and a class of asymmetric heavy-tailed distributions are presented. Many phenomena in human-based areas are increasingly known to be represented by data that have large numbers of outliers and belong to very heavy-tailed distributions. When tails of distributions are so heavy that even medians (L1 and ℓ1 averages) do not exist, one needs to consider using ℓp minimization principles with 0 < p < 1.Algorithms2012-10-0554Article10.3390/a50404214214321999-48932012-10-05doi: 10.3390/a5040421John E. Lavery<![CDATA[Algorithms, Vol. 5, Pages 398-420: Better Metrics to Automatically Predict the Quality of a Text Summary]]>
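In the univariate case, the ℓp average minimizes Σᵢ |xᵢ − a|^p. For 0 < p < 1 this objective is concave between consecutive data points, so its global minimizer must be one of the data points themselves. That observation gives a brute-force O(n²) exact method, sketched below for intuition (this is not the paper's algorithm, which exploits statistical structure for efficiency):

```python
# Exact univariate lp averaging for 0 < p < 1 by exploiting the fact
# that sum_i |x_i - a|**p is concave between consecutive data points,
# so its global minimizer lies at a data point. This brute-force
# O(n^2) method is for intuition only; it is not the paper's algorithm.

def lp_average(xs, p):
    assert 0 < p < 1
    cost = lambda a: sum(abs(x - a) ** p for x in xs)
    return min(xs, key=cost)

# With one far outlier, the lp average stays with the bulk of the
# data, illustrating the robustness claimed above:
center = lp_average([1.0, 1.1, 0.9, 100.0], 0.5)  # -> 1.0
```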
http://www.mdpi.com/1999-4893/5/4/398
In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods—robust regression, non-negative least squares, or canonical correlation, an eigenvalue method. The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.Algorithms2012-09-2654Article10.3390/a50403983984201999-48932012-09-26doi: 10.3390/a5040398Peter A. RankelJohn M. ConroyJudith D. Schlesinger