
Table of Contents

Algorithms, Volume 12, Issue 11 (November 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Feature Paper Article
Fingerprints Classification through Image Analysis and Machine Learning Method
Algorithms 2019, 12(11), 241; https://doi.org/10.3390/a12110241 - 11 Nov 2019
Abstract
The system that automatically identifies anthropometric fingerprints interacts directly with users and is supplied with an increasingly diverse database every day. The system must therefore be optimized to meet user needs such as fast processing time, near-absolute accuracy, and error-free operation in practice. In this paper, we propose the application of machine learning methods to develop fingerprint classification algorithms based on the singularity feature. The goal of the paper is to reduce the number of comparisons in automatic fingerprint recognition systems with large databases. Using computer vision algorithms in the image pre-processing stage adds some computation time but improves the quality of the input images, making feature extraction highly effective and the classification process fast and accurate. The classification results on three datasets, evaluated with the Precision, Recall, and Accuracy criteria and ROC analysis, show that the Random Forest (RF) algorithm has the best accuracy (≥96.75%) on all three databases, while the Support Vector Machine (SVM) gives the best results (≥95.5%) on two of the three databases. Full article
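As a side note on the evaluation criteria named in the abstract, Precision, Recall, and Accuracy for a multi-class classifier can be computed as below; a minimal stand-alone sketch with made-up labels, not the paper's data or classifiers:

```python
# Minimal sketch of the evaluation criteria (Precision, Recall, Accuracy)
# for a multi-class fingerprint classifier. Labels are made-up placeholders.

def evaluate(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    pairs = list(zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in pairs) / len(pairs)
    precisions, recalls = [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in pairs)
        fp = sum(t != c and p == c for t, p in pairs)
        fn = sum(t == c and p != c for t, p in pairs)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    # macro-average over the fingerprint classes
    return accuracy, sum(precisions) / len(classes), sum(recalls) / len(classes)

y_true = ["whorl", "arch", "loop", "whorl", "loop", "arch"]
y_pred = ["whorl", "arch", "loop", "loop", "loop", "arch"]
acc, prec, rec = evaluate(y_true, y_pred)
```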
(This article belongs to the Special Issue Algorithms for Content Based Image Retrieval)
Open Access Article
Tensor-Based Algorithms for Image Classification
Algorithms 2019, 12(11), 240; https://doi.org/10.3390/a12110240 - 09 Nov 2019
Abstract
Interest in machine learning with tensor networks has been growing rapidly in recent years. We show that tensor-based methods developed for learning the governing equations of dynamical systems from data can, in the same way, be used for supervised learning problems, and we propose two novel approaches for image classification. One is a kernel-based reformulation of the previously introduced multidimensional approximation of nonlinear dynamics (MANDy); the other is an alternating ridge regression in the tensor-train format. We apply both methods to the MNIST and Fashion-MNIST data sets and show that the approaches are competitive with state-of-the-art neural network-based classifiers. Full article
Open Access Article
A Hybrid Ontology-Based Recommendation System in e-Commerce
Algorithms 2019, 12(11), 239; https://doi.org/10.3390/a12110239 - 08 Nov 2019
Abstract
The growth of the Internet has increased the amount of data and information available to any person at any time. Recommendation systems help users find, among the large number of items available, the items that meet their preferences. Techniques such as collaborative filtering and content-based recommenders have played an important role in the implementation of recommendation systems. In the last few years, other techniques, such as ontology-based recommenders, have gained significance for delivering better recommendations to the active user; however, building an ontology-based recommender is an expensive process that requires considerable skills in knowledge engineering. This paper presents a new hybrid approach that combines the simplicity of collaborative filtering with the efficiency of ontology-based recommenders. The experimental evaluation demonstrates that the proposed approach produces higher-quality recommendations than collaborative filtering. The main improvement is observed for products that, despite belonging to categories unknown to the user, still match the user's preferences and are recommended. Full article
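To make the hybrid idea concrete, here is a hedged sketch in which a collaborative-filtering score is blended with an ontology-derived category affinity, so that items from categories the user has never rated can still surface; the weighting scheme, scores, and data are invented for illustration and are not the paper's model:

```python
# Hypothetical blend of a collaborative-filtering (CF) score with an
# ontology-derived category affinity. All numbers are illustrative.

def cf_score(item, ratings_by_others):
    """Mean rating given to `item` by other users (0 if unrated)."""
    ratings = ratings_by_others.get(item, [])
    return sum(ratings) / len(ratings) if ratings else 0.0

def hybrid_score(item, ratings_by_others, category_of, affinity, alpha=0.7):
    """alpha weights CF; (1 - alpha) weights the ontology side (0-5 scale)."""
    onto = affinity.get(category_of[item], 0.0)  # 0..1 from the ontology
    return alpha * cf_score(item, ratings_by_others) + (1 - alpha) * 5.0 * onto

ratings_by_others = {"camera": [4, 5], "novel": [5, 5]}
category_of = {"camera": "electronics", "novel": "books"}
affinity = {"electronics": 0.9, "books": 0.2}  # from the user's ontology profile

s_camera = hybrid_score("camera", ratings_by_others, category_of, affinity)
s_novel = hybrid_score("novel", ratings_by_others, category_of, affinity)
```

Note how the camera outranks the novel despite a lower CF score, because the ontology side rewards the category the user cares about.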
(This article belongs to the Special Issue Algorithms for Personalization Techniques and Recommender Systems)
Open Access Article
Microscopic Object Recognition and Localization Based on Multi-Feature Fusion for In-Situ Measurement In Vivo
Algorithms 2019, 12(11), 238; https://doi.org/10.3390/a12110238 - 07 Nov 2019
Abstract
Microscopic object recognition and analysis is very important in micromanipulation, which has been extensively used in many fields, e.g., micro-assembly operation, microsurgery, agriculture, and biological research. Conducting micro-object recognition during the in-situ measurement of tissue, e.g., in ion flux measurement by moving an ion-selective microelectrode (ISME), is a complex problem. For living, growing tissues, it remains a challenge to accurately recognize the tissue and locate the ISME so as to protect the tissue and prevent the ISME from being damaged. Thus, we propose a robust and fast recognition method based on the fusion of local binary pattern (LBP) and Haar-like features, training a cascade of classifiers with the gentle AdaBoost algorithm to recognize microscopic objects. We then locate the electrode tip against a strongly noisy background by using the Hough transform and edge extraction with an improved contour detection method. Finally, the method can automatically and accurately calculate the relative distance between two micro-objects in the microscopic image. The results show that the proposed method achieves good performance in micro-object recognition, with a recognition rate of up to 99.14% and a tip recognition speed of up to 14 frames/s at a resolution of 1360 × 1024. The maximum tip-positioning error is 6.10 μm, which meets the design requirements of the ISME system. Furthermore, this study provides an effective visual guidance method for micromanipulation, which can facilitate automated micromanipulation research. Full article
Open Access Article
What Do a Longest Increasing Subsequence and a Longest Decreasing Subsequence Know about Each Other?
Algorithms 2019, 12(11), 237; https://doi.org/10.3390/a12110237 - 07 Nov 2019
Abstract
As a kind of converse of the celebrated Erdős–Szekeres theorem, we present a necessary and sufficient condition for a sequence of length n to contain a longest increasing subsequence and a longest decreasing subsequence of given lengths x and y, respectively. Full article
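For readers wanting to experiment with the two statistics in the title, a quadratic dynamic-programming sketch follows; for sequences of distinct terms, the Erdős–Szekeres bound gives n ≤ LIS · LDS:

```python
# Lengths of a longest strictly increasing and strictly decreasing
# subsequence, by O(n^2) dynamic programming.

def lis_length(seq):
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if seq[j] < seq[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

def lds_length(seq):
    return lis_length([-v for v in seq])  # decreasing = increasing of negation

seq = [3, 1, 4, 15, 5, 9, 2, 6]  # distinct terms
x, y = lis_length(seq), lds_length(seq)
```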
Open Access Article
Stability Analysis of Jacobian-Free Newton’s Iterative Method
Algorithms 2019, 12(11), 236; https://doi.org/10.3390/a12110236 - 06 Nov 2019
Abstract
It is well known that scalar iterative methods with derivatives are considerably more stable than their derivative-free counterparts, understanding the term stability as a measure of the size of the set of converging initial estimations. In the multivariate case, multidimensional dynamical analysis allows us to undertake this task; here it is performed on different Jacobian-free variants of Newton's method, whose estimations of the Jacobian matrix have increasing order. The respective basins of attraction and the number of fixed and critical points give us valuable information in this sense. Full article
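As an illustration of the idea (not one of the paper's higher-order variants), a Newton iteration can be made Jacobian-free by estimating the Jacobian with first-order finite differences:

```python
# Minimal Jacobian-free Newton iteration on a toy 2x2 nonlinear system:
# the Jacobian is replaced by a first-order finite-difference estimate.

def fd_jacobian(F, x, h=1e-7):
    n = len(x)
    Fx = F(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xh = list(x)
        xh[j] += h
        Fxh = F(xh)
        for i in range(n):
            J[i][j] = (Fxh[i] - Fx[i]) / h
    return J

def solve2(J, b):
    """Solve a 2x2 linear system J d = b by Cramer's rule."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(b[0] * J[1][1] - b[1] * J[0][1]) / det,
            (J[0][0] * b[1] - J[1][0] * b[0]) / det]

def newton_jacobian_free(F, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            break
        d = solve2(fd_jacobian(F, x), [-v for v in Fx])
        x = [xi + di for xi, di in zip(x, d)]
    return x

# Intersect the circle x0^2 + x1^2 = 4 with the line x0 = x1.
F = lambda x: [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]]
root = newton_jacobian_free(F, [1.0, 2.0])  # converges to (sqrt(2), sqrt(2))
```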
Open Access Article
Exploring an Ensemble of Methods that Combines Fuzzy Cognitive Maps and Neural Networks in Solving the Time Series Prediction Problem of Gas Consumption in Greece
Algorithms 2019, 12(11), 235; https://doi.org/10.3390/a12110235 - 06 Nov 2019
Abstract
This paper introduces a new ensemble learning approach, based on evolutionary fuzzy cognitive maps (FCMs), artificial neural networks (ANNs), and their hybrid structure (FCM-ANN), for time series prediction. The main aim of time series forecasting is to obtain reasonably accurate forecasts of future data by analyzing records of past data. We propose an ensemble-based forecast combination methodology as an alternative to single forecasting methods for time series prediction. The ensemble learning technique combines various learning algorithms, including SOGA (structure optimization genetic algorithm)-based FCMs, RCGA (real coded genetic algorithm)-based FCMs, efficient and adaptive ANN architectures, and a hybrid FCM-ANN structure recently proposed for time series forecasting. All ensemble algorithms execute according to the one-step prediction regime. This particular forecast combination approach was selected due to the advanced features of each ensemble component, and the findings of this work evince the effectiveness of the approach, in terms of prediction accuracy, when compared against well-known, independent forecasting approaches such as ANNs or FCMs, as well as the long short-term memory (LSTM) algorithm. The suggested ensemble learning approach was applied to three distribution points that compose the natural gas grid of a Greek region. For the evaluation of the proposed approach, a real time-series dataset for natural gas prediction was used. We also provide a detailed discussion of the performance of the individual predictors, the ensemble predictors, and their combination through two well-known ensemble methods (the average and the error-based) that are characterized in the literature as particularly accurate and effective.
The prediction results showed the efficacy of the proposed ensemble learning approach, and the comparative analysis demonstrated enough evidence that the approach could be used effectively to conduct forecasting based on multivariate time series. Full article
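The two combination rules mentioned (the average and the error-based) reduce to a few lines; a schematic sketch with invented numbers, not the Greek gas-consumption data:

```python
# Forecast combination: the simple average and an error-based weighting in
# which each model's weight is inversely proportional to its past error.
# Values are illustrative placeholders.

def average_combination(forecasts):
    return sum(forecasts) / len(forecasts)

def error_based_combination(forecasts, past_errors):
    # Weight each model by the inverse of its past mean absolute error.
    inv = [1.0 / e for e in past_errors]
    weights = [w / sum(inv) for w in inv]
    return sum(w * f for w, f in zip(weights, forecasts))

forecasts = [100.0, 110.0, 130.0]  # e.g., three ensemble components
past_errors = [2.0, 4.0, 8.0]      # lower past error -> larger weight

avg = average_combination(forecasts)
wtd = error_based_combination(forecasts, past_errors)
```

The error-based combination pulls the result toward the historically better models, which is exactly why it often beats the plain average.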
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
Open Access Article
Complex Neutrosophic Hypergraphs: New Social Network Models
Algorithms 2019, 12(11), 234; https://doi.org/10.3390/a12110234 - 06 Nov 2019
Abstract
A complex neutrosophic set is a useful model for handling indeterminate situations with a periodic nature. It is characterized by truth, indeterminacy, and falsity degrees, each a combination of a real-valued amplitude term and a complex-valued phase term. Hypergraphs are objects that enable us to dig out invisible connections between the underlying structures of complex systems, such as those leading to sustainable development. In this paper, we apply the fruitful concept of complex neutrosophic sets to the theory of hypergraphs. We define complex neutrosophic hypergraphs and discuss certain of their properties, including lower truncation, upper truncation, and transition levels. Furthermore, we define T-related complex neutrosophic hypergraphs and properties of minimal transversals of complex neutrosophic hypergraphs. Finally, we model certain social networks with intersecting communities through the score functions and choice values of complex neutrosophic hypergraphs. We also give a brief comparison of our proposed model with other existing models. Full article
(This article belongs to the Special Issue Graph-Theoretical Algorithms and Hybrid/Collaborative Technologies)
Open Access Article
Comparison of Satellite Repeat Shift Time for GPS, BDS, and Galileo Navigation Systems by Three Methods
Algorithms 2019, 12(11), 233; https://doi.org/10.3390/a12110233 - 05 Nov 2019
Abstract
The characteristic of the satellite repeat shift time can reflect the status of satellite operation and is also one of the key factors in sidereal filtering multipath correction. Although some methods have been developed to calculate the repeat shift time, few efforts have been made to analyze and compare the performance of this feature in depth for GPS (Global Positioning System), BDS (BeiDou System), and Galileo. Hence, three methods for calculating the repeat shift time are presented, namely the broadcast ephemeris method (BEM), the correlation coefficient method (CCM), and the aspect repeat time method (ARTM), and are used to compare and analyze the three global systems in depth. The experimental results show that the repeat shift time of each satellite is different, and the difference between the maximum and minimum varies across systems. The maximum difference is about 25 s for the BDS IGSO (Inclined Geosynchronous Orbit) satellites and the minimum is merely 10 s for the GPS system. Furthermore, for the same satellite, the shift times calculated by the three methods are almost identical; the maximum difference is only about 7 s, between the CCM and the ARTM for a BDS MEO (Medium Earth Orbit) satellite. Although the repeat shift time differs daily for the same satellite and method, the changes are very small. Moreover, in terms of the STD (standard deviation) of the BS (between satellites) and MS (mean shift for the same satellite), the GPS system is the best, the performance of the BDS system is medium, and Galileo performs slightly worse than GPS and BDS. Full article
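Of the three methods, the correlation coefficient method is the easiest to sketch: slide one day's series against the next and take the lag with the highest Pearson correlation as the repeat shift. The signal below is a synthetic placeholder, not real GNSS data:

```python
# Sketch of the correlation-coefficient idea: find the sample lag that
# maximizes Pearson correlation between two "days" of a repeating series.
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def best_shift(day1, day2, max_lag):
    """Lag (in samples) of day2 relative to day1 with maximal correlation."""
    scores = {}
    for lag in range(max_lag + 1):
        n = len(day1) - lag
        scores[lag] = pearson(day1[lag:lag + n], day2[:n])
    return max(scores, key=scores.get)

# Synthetic repeating pattern, shifted by 240 samples between the "days".
signal = [math.sin(2 * math.pi * t / 600.0) for t in range(4000)]
day1, day2 = signal[:2000], signal[240:2240]
shift = best_shift(day1, day2, max_lag=300)
```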
Open Access Article
Comparison and Interpretation Methods for Predictive Control of Mechanics
Algorithms 2019, 12(11), 232; https://doi.org/10.3390/a12110232 - 04 Nov 2019
Abstract
Objects that possess mass (e.g., automobiles, manufactured items, etc.) translationally accelerate in direct proportion to the applied force scaled by the object's mass, in accordance with Newton's law, while the rotational companion is Euler's moment equations, which relate the angular acceleration of objects that possess mass moments of inertia. Michel Chasles's theorem allows us to simply invoke Newton's and Euler's equations to fully describe the six degrees of freedom of mechanical motion. Many options are available to control the motion of objects by controlling the applied force and moment. A long, distinguished list of references has matured the field of mechanical motion control, culminating in the burgeoning field of deterministic artificial intelligence as a natural progression of the laudable goal of adaptive and/or model predictive controllers that can be proven optimal subsequent to their development. Deterministic A.I. uses Chasles's claim to assert Newton's and Euler's relations as deterministic self-awareness statements that are optimal with respect to state errors. Predictive controllers (both continuous and sampled-data) derived from the outset to be optimal, by first solving an optimization problem with the governing dynamic equations of motion, lead to several controllers (including a controller that twice invokes optimization to formulate robust, predictive control). These controllers are compared to each other under noise and modeling errors, and many figures of merit are used: tracking error and rate error deviations and means, in addition to total mean cost. Robustness is evaluated using Monte Carlo analysis, where plant parameters are randomly assumed to be incorrectly modeled. Six instances of controllers are compared against these methods and interpretations, which allows engineers to select a control tailored to their given circumstances.
Novel versions of the ubiquitous classical proportional-derivative ("PD") controller are developed from the optimization statement at the outset by using a novel re-parameterization of the optimal results from time-to-state parameterization. Time-optimal controllers, continuous predictive controllers, sampled-data predictive controllers, combined feedforward plus feedback controllers, and two-degree-of-freedom controllers (i.e., 2DOF) are developed as well. The term "feedforward" is used in this study in the context of deterministic artificial intelligence, where analytic self-awareness statements are strictly determined by the governing physics (of mechanics in this case, i.e., Chasles, Newton, and Euler). When feedforward is combined with feedback by the previously mentioned method (with provenance foremost in optimization), the combination is referred to as "2DOF", or two degrees of freedom, to indicate the double invocation of optimization at the genesis of the feedforward and the feedback, respectively. The feedforward plus feedback case is augmented by an online (real-time) comparison to the optimal case. This manuscript compares these many optional control strategies against each other. Nominal plants are used, but the addition of plant noise reveals the robustness of each controller, even without optimally rejecting assumed-Gaussian noise (e.g., via the Kalman filter). In other words, noise terms are intentionally left unaddressed in the problem formulation to evaluate the robustness of the proposed methods when real-world noise is added. Lastly, mismodeled plants controlled by each strategy reveal relative performance. Well-anticipated results include the lowest cost, achieved by the optimal controller (with very poor robustness), while low mean errors and deviations are achieved by the classical controllers (at the highest cost).
Both continuous predictive control and sampled-data predictive control perform well in terms of both cost and errors and deviations, while the 2DOF controller performed best overall. Full article
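As a baseline reference for the comparison described above, the classical PD controller on a double integrator (Newton's law, m·x'' = u) takes only a few lines; the gains here are illustrative, not the paper's optimization-derived re-parameterization:

```python
# Classical PD control of a double integrator m*x'' = u tracking a step
# setpoint. Gains chosen for critical damping (illustrative values only).

def simulate_pd(kp, kd, m=1.0, x_ref=1.0, dt=0.001, steps=20000):
    x, v = 0.0, 0.0
    for _ in range(steps):
        u = kp * (x_ref - x) + kd * (0.0 - v)  # PD force
        a = u / m                              # Newton: a = F / m
        v += a * dt                            # Euler integration
        x += v * dt
    return x, v

# kp = 25, kd = 10 gives natural frequency 5 rad/s with damping ratio 1.
x_final, v_final = simulate_pd(kp=25.0, kd=10.0)
```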
(This article belongs to the Special Issue Model Predictive Control: Algorithms and Applications)
Open Access Article
A GA-SA Hybrid Planning Algorithm Combined with Improved Clustering for LEO Observation Satellite Missions
Algorithms 2019, 12(11), 231; https://doi.org/10.3390/a12110231 - 04 Nov 2019
Abstract
This paper presents a space mission planning tool developed for LEO (Low Earth Orbit) observation satellites. The tool is based on a two-phase planning strategy of clustering preprocessing and mission planning, in which an improved clustering algorithm is applied, and a hybrid algorithm that combines the genetic algorithm with the simulated annealing algorithm (GA–SA) is given and discussed. Experimental simulation studies demonstrate that the GA–SA algorithm with the improved clique partition algorithm, based on the graph theory model, exhibits higher fitness values and better optimization performance and reliability than the GA or SA algorithms alone. Full article
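The simulated-annealing half of such a hybrid can be sketched generically; the objective below is a toy function standing in for the scheduling fitness, and the starting value pretends to come from a GA phase:

```python
# Bare-bones simulated annealing: accept worse neighbours with probability
# exp(-delta / T) under a geometric cooling schedule. Toy objective, not
# the satellite-scheduling fitness function.
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)  # neighbour move
        delta = f(cand) - f(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
        t *= cooling                       # geometric cooling
    return best

f = lambda x: (x - 3.0) ** 2 + 1.0         # minimum at x = 3
best = simulated_annealing(f, x0=-5.0)     # x0 as if handed over by a GA
```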
Open Access Article
The Inapproximability of k-DominatingSet for Parameterized AC^0 Circuits
Algorithms 2019, 12(11), 230; https://doi.org/10.3390/a12110230 - 04 Nov 2019
Abstract
Chen and Flum showed that any FPT-approximation of the k-Clique problem is not in para-AC^0, and that the k-DominatingSet (k-DomSet) problem cannot be computed by para-AC^0 circuits. It is natural to ask whether the f(k)-approximation of the k-DomSet problem is in para-AC^0 for some computable function f. Very recently, it was proved, separately by Karthik C. S., Laekhanukit, and Manurangsi and by Lin, that assuming W[1] ≠ FPT, the k-DomSet problem cannot be f(k)-approximated by FPT algorithms for any computable function f. We observe that the constructions used in Lin's work can be carried out using constant-depth circuits, and thus we prove that para-AC^0 circuits cannot approximate this problem with ratio f(k) for any computable function f. Moreover, under the hypothesis that the 3-CNF-SAT problem cannot be computed by constant-depth circuits of size 2^{εn} for some ε > 0, we show that constant-depth circuits of size n^{o(k)} cannot distinguish graphs whose dominating numbers are either ≤ k or > (log n / (3 log log n))^{1/k}. However, we find that this hypothesis may be hard to settle, by showing that it implies NP ⊄ NC^1. Full article
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)
Open Access Editorial
Special Issue on “Algorithm Engineering: Towards Practically Efficient Solutions to Combinatorial Problems”
Algorithms 2019, 12(11), 229; https://doi.org/10.3390/a12110229 - 03 Nov 2019
Abstract
The purpose of this special issue of Algorithms was to attract papers presenting original research in the area of algorithm engineering. In particular, submissions concerning the design, analysis, implementation, tuning, and experimental evaluation of discrete algorithms and data structures, and/or addressing methodological issues and standards in algorithmic experimentation, were encouraged. Papers dealing with advanced models of computing, including memory hierarchies, cloud architectures, and parallel processing, were also welcome. In this regard, we solicited contributions from the most prominent areas of applied algorithmic research, which include, but are not limited to, graphs, databases, computational geometry, big data, networking, combinatorial aspects of scientific computing, and computational problems in the natural sciences or engineering. Full article
Open Access Article
Blended Filter-Based Detection for Thruster Valve Failure and Control Recovery Evaluation for RLV
Algorithms 2019, 12(11), 228; https://doi.org/10.3390/a12110228 - 01 Nov 2019
Abstract
Security enhancement and cost reduction have become crucial goals for second-generation reusable launch vehicles (RLV). The thruster is an important actuator for an RLV, and its control normally requires a valve capable of high-frequency operation, which may lead to excessive wear or failure of the thruster valve. This paper aims at developing a thruster fault detection method that can deal with the thruster fault caused by the failure of the thruster valve and play an emergency role in the cases of hardware sensor failure. Firstly, the failure mechanism of the thruster was analyzed and modeled. Then, thruster fault detection was employed by introducing an angular velocity signal, using a blended filter, and determining an isolation threshold. In addition, to support the redundancy management of the thruster, an evaluation method of the nonlinear model-based numerical control prediction was proposed to evaluate whether the remaining fault-free thruster can track the attitude control response performance under the failure of the thruster valve. The simulation results showed that the method is stable and allowed for the effective detection of thruster faults and timely evaluation of recovery performance. Full article
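The detection step (a residual compared against an isolation threshold) can be caricatured as follows; the numbers and the moving-average filter are invented for illustration and are not the paper's blended filter:

```python
# Toy threshold-based fault isolation: a residual between measured and
# model-predicted angular velocity is smoothed and compared against an
# isolation threshold. Values are illustrative, not the RLV thruster model.

def detect_fault(measured, predicted, threshold, window=3):
    """Flag a fault when the moving-average residual exceeds the threshold."""
    residuals = [abs(m - p) for m, p in zip(measured, predicted)]
    for k in range(window, len(residuals) + 1):
        avg = sum(residuals[k - window:k]) / window  # smoothed residual
        if avg > threshold:
            return k - 1  # sample index at which the fault is isolated
    return None

predicted = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07]
measured  = [0.00, 0.01, 0.02, 0.03, 0.24, 0.35, 0.46, 0.57]  # fault from sample 4
fault_at = detect_fault(measured, predicted, threshold=0.15)
```

The smoothing window trades detection delay against false alarms from sensor noise, which is the design tension behind the isolation threshold in the abstract.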
(This article belongs to the Special Issue Algorithms for Fault Detection and Diagnosis)
Open Access Article
Facial Expression Recognition Based on Auxiliary Models
Algorithms 2019, 12(11), 227; https://doi.org/10.3390/a12110227 - 31 Oct 2019
Abstract
In recent years, with the development of artificial intelligence and human–computer interaction, more attention has been paid to the recognition and analysis of facial expressions. Despite great progress, many problems remain unsolved, because facial expressions are subtle and complex; facial expression recognition is thus still a challenging problem. In most papers, the entire face image is chosen as the input. In daily life, however, people can perceive others' current emotions from only a few facial components (such as the eyes, mouth, and nose), while other areas of the face (such as hair, skin tone, ears, etc.) play a smaller role in determining emotion. If the entire face image is used as the only input, the system will produce unnecessary information and miss important information during feature extraction. To solve this problem, this paper proposes a method that combines multiple sub-regions and the entire face image by weighting, which can capture more of the important feature information that is conducive to improving recognition accuracy. Our proposed method was evaluated on four well-known, publicly available facial expression databases: JAFFE, CK+, FER2013, and SFEW. The new method showed better performance than most state-of-the-art methods. Full article
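The weighting idea can be caricatured in a few lines; the region scores and weights below are invented for illustration, whereas the paper learns its combination from data:

```python
# Weighted fusion of per-region expression scores (whole face plus
# sub-regions such as eyes and mouth). All values are invented.

def fuse_scores(region_scores, weights):
    """Weighted sum of per-class score vectors from each facial region."""
    classes = region_scores["face"].keys()
    return {c: sum(weights[r] * region_scores[r][c] for r in weights)
            for c in classes}

region_scores = {
    "face":  {"happy": 0.5, "sad": 0.5},
    "eyes":  {"happy": 0.3, "sad": 0.7},
    "mouth": {"happy": 0.9, "sad": 0.1},
}
weights = {"face": 0.4, "eyes": 0.25, "mouth": 0.35}

fused = fuse_scores(region_scores, weights)
prediction = max(fused, key=fused.get)
```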
(This article belongs to the Special Issue Algorithms for Human-Computer Interaction)
Open Access Article
Multiple-Attribute Decision Making ELECTRE II Method under Bipolar Fuzzy Model
Algorithms 2019, 12(11), 226; https://doi.org/10.3390/a12110226 - 29 Oct 2019
Abstract
The core aim of this paper is to provide a new multiple-criteria decision making (MCDM) model, namely the bipolar fuzzy ELimination and Choice Translating REality (ELECTRE) II method, which combines the bipolar fuzzy set with the ELECTRE II technique. It can be used to solve problems having bipolar uncertainty. The proposed method is established by defining the concepts of bipolar fuzzy strong, median, and weak concordance sets, the corresponding discordance sets, and the indifferent set, in order to define two types of outranking relations: a strong outranking relation and a weak outranking relation. The normalized weights of the criteria, which may be partly or completely unknown to the decision makers, are calculated by an optimization technique based on the maximizing deviation method. A systematic iterative procedure is applied to the strong and weak outranking graphs to determine the ranking of the favorable actions or alternatives, or to choose the best possible solution. The implementation of the proposed method is presented through numerical examples, such as the selection of a business location and the choice of the best supplier. A comparative analysis of the proposed ELECTRE II method with already existing multiple-attribute decision making methods, including the Technique for Order of Preference by Similarity to an Ideal Solution (TOPSIS) and ELECTRE I under a bipolar fuzzy environment, is also presented by solving the business location problem. Full article
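For orientation, the crisp concordance and discordance indices that ELECTRE methods build on look as follows; the paper's contribution layers bipolar fuzzy memberships on top of this machinery, and the scores below are invented:

```python
# Crisp ELECTRE-style concordance and discordance indices for comparing
# two alternatives across weighted criteria. Values are illustrative.

def concordance(a, b, weights):
    """Total weight of criteria on which alternative a is at least as good as b."""
    return sum(w for x, y, w in zip(a, b, weights) if x >= y)

def discordance(a, b, scale):
    """Largest shortfall of a versus b, normalized by the criterion scale."""
    return max(max(y - x, 0.0) / scale for x, y in zip(a, b))

# Two candidate business locations scored on three criteria (weights sum to 1).
weights = [0.5, 0.3, 0.2]
loc1 = [8.0, 6.0, 9.0]
loc2 = [7.0, 9.0, 5.0]

c12 = concordance(loc1, loc2, weights)    # support for "loc1 outranks loc2"
d12 = discordance(loc1, loc2, scale=10.0) # strongest objection to it
```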
(This article belongs to the Special Issue Algorithms for Multi-Criteria Decision-Making)
Open Access Article
Enhancing Backtracking Search Algorithm using Reflection Mutation Strategy Based on Sine Cosine
Algorithms 2019, 12(11), 225; https://doi.org/10.3390/a12110225 - 28 Oct 2019
Abstract
The Backtracking Search Algorithm (BSA) is a relatively young population-based evolutionary algorithm that has been widely researched. Due to the introduction of a historical population and the lack of guidance toward the best individual, BSA does not adequately use the information in the current population, which leads to a slow convergence speed and poor exploitation ability. To address these drawbacks, a novel backtracking search algorithm with reflection mutation based on sine cosine, named RSCBSA, is proposed. The best individual found so far is employed to improve convergence speed, while sine and cosine math models are introduced to enhance population diversity. To make full use of the information in the historical and current populations, four individuals are selected randomly from the historical or current population to construct a unit simplex, whose center enhances the exploitation ability of RSCBSA. Comprehensive experimental results and analyses show that RSCBSA is competitive with other state-of-the-art meta-heuristic algorithms. Full article
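The centroid-based move at the heart of this construction can be sketched as a Nelder–Mead-style reflection; in the algorithm the four individuals are drawn randomly from the historical or current population, while here they are fixed for a reproducible illustration:

```python
# Reflect the fourth individual through the centroid of the other three,
# a mutation move in the spirit of RSCBSA's simplex construction.

def reflect_through_centroid(a, b, c, d):
    centroid = [(x + y + z) / 3.0 for x, y, z in zip(a, b, c)]
    # reflection of d through the centroid: centroid + (centroid - d)
    return [2.0 * g - x for g, x in zip(centroid, d)]

mutant = reflect_through_centroid([1.0, 2.0], [3.0, 0.0], [5.0, 4.0], [0.0, 1.0])
```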
(This article belongs to the Special Issue Hybrid Intelligent Algorithms)
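The abstract's ingredients (four randomly selected individuals forming a simplex, a sine/cosine perturbation around the best solution) can be sketched as follows. This is an illustrative reconstruction, not the paper's exact update rule; the function name, the scale factor `F`, and the parameter ranges are assumptions:

```python
import random
import math

def rscbsa_mutation(pop, hist_pop, best, F=1.0):
    """Illustrative reflection mutation in the spirit of RSCBSA.

    pop, hist_pop: lists of real-valued vectors (current / historical
    populations); best: best individual found so far.
    """
    # Pick four individuals from the current or historical population
    # and take their centroid (center of the resulting simplex).
    pool = pop + hist_pop
    verts = random.sample(pool, 4)
    dim = len(best)
    center = [sum(v[d] for v in verts) / 4.0 for d in range(dim)]
    r1 = random.uniform(0.0, 2.0 * math.pi)
    r2 = random.uniform(0.0, 2.0)
    trial = []
    for d in range(dim):
        # Sine/cosine step around the best individual keeps diversity,
        # while the pull toward `best` speeds up convergence.
        if random.random() < 0.5:
            step = r2 * math.sin(r1) * abs(best[d] - center[d])
        else:
            step = r2 * math.cos(r1) * abs(best[d] - center[d])
        trial.append(center[d] + F * step)
    return trial
```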

Open Access Article
A QUBO Model for the Traveling Salesman Problem with Time Windows
Algorithms 2019, 12(11), 224; https://doi.org/10.3390/a12110224 - 25 Oct 2019
Abstract
This work focuses on expressing the Traveling Salesman Problem with Time Windows (TSPTW) as a quadratic unconstrained binary optimization (QUBO) problem. The time windows impose time constraints that a feasible solution must satisfy. These take the form of inequality constraints, which are known to be particularly difficult to articulate within the QUBO framework. This is, we believe, the first time this major obstacle has been overcome and the TSPTW cast in a QUBO formulation. We anticipate that this development will lead to the execution of small-scale TSPTW instances on the D-Wave platform. Full article
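For readers unfamiliar with why inequalities are awkward in QUBO: the standard workaround is to introduce binary slack bits that turn an inequality a·x ≤ b into an equality, then penalize its squared violation. The sketch below shows this generic device; it is not the paper's specific TSPTW encoding, and the names and penalty weight are illustrative:

```python
import itertools

def inequality_penalty(a, b, x, slack_bits, P=10.0):
    """Penalty term P * (a.x + slack - b)^2 for the constraint a.x <= b.

    a: integer coefficients; x: binary decision bits;
    slack_bits: binary slack variables encoding slack = sum 2^k * s_k.
    """
    lhs = sum(ai * xi for ai, xi in zip(a, x))
    slack = sum((2 ** k) * s for k, s in enumerate(slack_bits))
    return P * (lhs + slack - b) ** 2

def min_penalty(a, b, x, n_slack=2):
    """Minimum penalty over all slack settings: zero iff a.x <= b is
    satisfiable with the available slack range."""
    return min(inequality_penalty(a, b, x, s)
               for s in itertools.product([0, 1], repeat=n_slack))
```

Feasible assignments incur zero penalty for some slack setting, while infeasible ones are penalized for every slack setting, so a QUBO minimizer is steered toward feasibility.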

Open Access Article
(Hyper)Graph Embedding and Classification via Simplicial Complexes
Algorithms 2019, 12(11), 223; https://doi.org/10.3390/a12110223 - 25 Oct 2019
Abstract
This paper investigates a novel graph embedding procedure based on simplicial complexes. Inherited from algebraic topology, simplicial complexes are collections of increasing-order simplices (e.g., points, lines, triangles, tetrahedra) that can be interpreted as possibly meaningful substructures (i.e., information granules) on top of which an embedding space can be built by means of symbolic histograms. In the embedding space, any Euclidean pattern recognition system can be used, possibly equipped with feature selection capabilities to select the most informative symbols. The selected symbols can be analysed by field experts to extract further knowledge about the process modelled by the learning system, so the proposed modelling strategy can be considered a grey-box. The proposed embedding has been tested on thirty benchmark datasets for graph classification; furthermore, we present two real-world applications, predicting proteins' enzymatic function and solubility propensity from their 3D structure, as examples of the knowledge discovery phase that the proposed embedding strategy enables. Full article
(This article belongs to the Special Issue Granular Computing: From Foundations to Applications)
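The symbolic-histogram idea mentioned in the abstract can be sketched very simply: given a pre-built alphabet of recurring substructures (information granules), each graph is embedded as the vector of counts of its own substructures against that alphabet. A minimal sketch, representing simplices as frozensets of vertex labels (the representation and function name are assumptions, not the paper's implementation):

```python
def symbolic_histogram(substructures, alphabet):
    """Embed a (hyper)graph as a count vector over an alphabet of
    information granules.

    substructures: simplices extracted from one graph, each a frozenset
    of vertex labels; alphabet: ordered list of reference simplices.
    """
    return [sum(1 for s in substructures if s == symbol)
            for symbol in alphabet]
```

The resulting fixed-length vectors live in a Euclidean space, which is what lets any standard classifier (and feature selection over the alphabet symbols) operate downstream.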

Open Access Article
A Reinforcement Learning Method for a Hybrid Flow-Shop Scheduling Problem
Algorithms 2019, 12(11), 222; https://doi.org/10.3390/a12110222 - 23 Oct 2019
Abstract
Scheduling problems in mass production, manufacturing, assembly, synthesis, and transportation, as well as in internet services, can often be reduced in part to a hybrid flow-shop scheduling problem (HFSP). To solve it, a reinforcement learning (RL) method for the HFSP is studied for the first time in this paper. The HFSP is formulated as a Markov Decision Process (MDP), for which special states, actions, and a reward function are designed; on this basis, the MDP framework is established. The Boltzmann exploration policy is adopted to trade off exploration and exploitation when choosing actions. Compared with the first-come-first-serve strategy frequently adopted in most traditional intelligent algorithms, the rule in the RL method is first-come-first-choice, which is more conducive to reaching the global optimum. For validation, the RL method is applied to scheduling in a metal processing workshop of an automobile engine factory, and then to the sortie scheduling of carrier aircraft in continuous dispatch. The results demonstrate that the machining and support schedules obtained by this RL method are reasonable in quality, real-time performance, and complexity, indicating that the method is practical for the HFSP. Full article
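The Boltzmann exploration policy named in the abstract is the standard softmax action-selection rule: actions with higher Q-values are sampled more often, but every action keeps nonzero probability. A minimal self-contained sketch (the function name and temperature default are illustrative):

```python
import math
import random

def boltzmann_action(q_values, temperature=1.0):
    """Sample an action index with probability proportional to
    exp(Q(a) / temperature)."""
    # Subtract the max Q-value for numerical stability before exp().
    m = max(q_values)
    exps = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Roulette-wheel sampling over the softmax probabilities.
    r, acc = random.random(), 0.0
    for a, p in enumerate(probs):
        acc += p
        if r < acc:
            return a
    return len(q_values) - 1  # guard against floating-point rounding
```

A high temperature makes choices nearly uniform (exploration); as the temperature is lowered the policy concentrates on the greedy action (exploitation).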
