Special Issue "Evolutionary Computation"

A special issue of Mathematics (ISSN 2227-7390).

Deadline for manuscript submissions: closed (31 March 2019)

Special Issue Editors

Guest Editor
Dr. Gai-Ge Wang

Department of Computer Science and Technology, Ocean University of China, 266100 Qingdao, China
Guest Editor
Dr. Amir H. Alavi

College of Engineering, University of Missouri, Columbia, MO 65211, USA
Interests: structural health monitoring; smart cities; cyber-physical systems; smart energy harvesting; machine learning; big data analytics

Special Issue Information

Dear Colleagues,

Evolutionary computation (EC) is a family of algorithms for global optimization inspired by biological evolution. It includes various population-based trial-and-error problem solvers with a metaheuristic or stochastic optimization character. In EC, each individual has a simple structure and function; an EC system is composed of many such individuals and can address difficult real-world problems that cannot be solved by single individuals. In recent decades, EC methods have been successfully applied to complex and time-consuming problems, and EC is a topic of interest amongst researchers in various fields of science and engineering. Some of the most popular EC paradigms are the genetic algorithm, genetic programming, and evolution strategies. Many theoretical and experimental studies have demonstrated significant properties of EC, such as reasoning with vague and/or ambiguous data, adaptation to dynamic and uncertain environments, and learning from noisy and/or incomplete information.

The aim of this special issue is to compile the latest theory and applications in the field of EC. Submissions should be original and unpublished, and present novel in-depth fundamental research contributions either from a methodological perspective or from an application point of view. In general, we are soliciting contributions on (but not limited to) the following topics:
  • Improvements of traditional EC methods (e.g., genetic algorithm, differential evolution, ant colony optimization and particle swarm optimization)
  • Recent development of EC methods (e.g., biogeography-based optimization, krill herd (KH) algorithm, monarch butterfly optimization (MBO), earthworm optimization algorithm (EWA), elephant herding optimization (EHO), moth search (MS) algorithm, rhino herd (RH) algorithm)
  • Theoretical study on EC algorithms using various techniques (e.g., Markov chain, dynamic system, complex system/networks, and Martingale)
  • Application of EC methods (e.g., scheduling, data mining, machine learning, reliability, planning, task assignment problem, IIR filter design, traveling salesman problem, optimization under dynamic and uncertain environments)

Dr. Gai-Ge Wang
Dr. Amir H. Alavi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 850 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (18 papers)


Research

Open Access Article
An Efficient Memetic Algorithm for the Minimum Load Coloring Problem
Mathematics 2019, 7(5), 475; https://doi.org/10.3390/math7050475
Received: 29 March 2019 / Revised: 20 May 2019 / Accepted: 21 May 2019 / Published: 25 May 2019
Abstract
Given a graph G with n vertices and l edges, the load distribution of a coloring q: V → {red, blue} is defined as dq = (rq, bq), in which rq is the number of edges with at least one end-vertex colored red and bq is the number of edges with at least one end-vertex colored blue. The minimum load coloring problem (MLCP) is to find a coloring q such that the maximum load, lq = 1/l × max{rq, bq}, is minimized. This problem has been proved to be NP-complete. This paper proposes a memetic algorithm for the MLCP based on an improved K-OPT local search and an evolutionary operation. Furthermore, a data splitting operation is executed to expand the data amount of the global search, and a disturbance operation is employed to improve the search ability of the algorithm. Experiments are carried out on the DIMACS benchmark to compare the results of the proposed memetic algorithm with those of existing algorithms. The experimental results show that the memetic algorithm finds a greater number of best results for the graphs and improves the best known results of the MLCP.
(This article belongs to the Special Issue Evolutionary Computation)
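The load definitions in the abstract map directly to code. Below is a minimal sketch, assuming an edge-list graph representation and a dict-based coloring (the names are illustrative, not taken from the paper), that evaluates the maximum load lq of a candidate coloring:

```python
def max_load(edges, coloring):
    """Maximum load l_q = (1/l) * max{r_q, b_q} of a red/blue coloring.

    edges: list of (u, v) vertex pairs; coloring: dict vertex -> 'red' | 'blue'.
    """
    l = len(edges)
    r_q = sum(1 for u, v in edges if 'red' in (coloring[u], coloring[v]))
    b_q = sum(1 for u, v in edges if 'blue' in (coloring[u], coloring[v]))
    return max(r_q, b_q) / l

# Path 0-1-2-3 colored red, red, blue, blue: r_q = 2, b_q = 2, load = 2/3.
print(max_load([(0, 1), (1, 2), (2, 3)], {0: 'red', 1: 'red', 2: 'blue', 3: 'blue'}))
```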
Open Access Article
An Entropy-Assisted Particle Swarm Optimizer for Large-Scale Optimization Problem
Mathematics 2019, 7(5), 414; https://doi.org/10.3390/math7050414
Received: 7 April 2019 / Revised: 1 May 2019 / Accepted: 5 May 2019 / Published: 9 May 2019
Abstract
Diversity maintenance is crucial for the performance of a particle swarm optimizer (PSO). However, the update mechanism for particles in the conventional PSO performs poorly at maintaining diversity, which usually results in premature convergence or stagnation of exploration in the search space. To help particle swarm optimization enhance its ability to maintain diversity, many works have proposed adjusting the distances among particles. However, such operators lead to a situation where diversity maintenance and fitness evaluation are conducted in the same distance-based space, which brings a new challenge in the trade-off between convergence speed and diversity preservation. In this paper, a novel PSO is proposed that employs a competitive strategy and an entropy measure to manage the convergence operator and diversity maintenance, respectively. The proposed algorithm was applied to the CEC 2013 large-scale optimization benchmark suite, and the results demonstrate that it is feasible and competitive for addressing large-scale optimization problems.
(This article belongs to the Special Issue Evolutionary Computation)
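The abstract does not give the paper's exact entropy formula, so the following is only an illustrative sketch of how a Shannon-entropy diversity measure over a swarm could look; per-dimension histogram binning is an assumption made here for the example:

```python
import numpy as np

def swarm_entropy(positions, lower, upper, n_bins=10):
    """Illustrative Shannon-entropy diversity measure for a swarm.

    positions: (n_particles, n_dims) array; lower/upper: per-dimension bounds.
    Returns the mean per-dimension entropy of the particle distribution.
    """
    positions = np.asarray(positions, dtype=float)
    entropies = []
    for d in range(positions.shape[1]):
        hist, _ = np.histogram(positions[:, d], bins=n_bins, range=(lower[d], upper[d]))
        p = hist / hist.sum()
        p = p[p > 0]                                   # ignore empty bins
        entropies.append(float(-(p * np.log(p)).sum()))
    return float(np.mean(entropies))
```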

Open Access Article
Enhancing Elephant Herding Optimization with Novel Individual Updating Strategies for Large-Scale Optimization Problems
Mathematics 2019, 7(5), 395; https://doi.org/10.3390/math7050395
Received: 16 February 2019 / Revised: 26 April 2019 / Accepted: 27 April 2019 / Published: 30 April 2019
Abstract
Inspired by the behavior of elephants in nature, elephant herding optimization (EHO) was proposed recently for global optimization. Like most other metaheuristic algorithms, EHO does not use the previous individuals in the later updating process. If the useful information in the previous individuals were fully exploited and used in the later optimization process, the quality of solutions might be improved significantly. In this paper, we propose several new updating strategies for EHO, in which one, two, or three individuals are selected from the previous iterations and their useful information is incorporated into the updating process. Accordingly, the final individual at the current iteration is generated from the elephant produced by the basic EHO and the selected previous elephants through a weighted sum, where the weights are determined by a random number and the fitness of the elephant individuals at the previous iteration. We incorporated each of the six individual updating strategies into the basic EHO, creating six improved variants of EHO, and benchmarked the proposed methods on sixteen test functions. Our experimental results demonstrate that the proposed improved methods significantly outperform the basic EHO.
(This article belongs to the Special Issue Evolutionary Computation)
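The paper defines six concrete updating strategies; the sketch below shows just one plausible form of such a weighted-sum update, with fitness-proportional weights and a minimization objective assumed purely for illustration:

```python
import numpy as np

def weighted_update(x_eho, previous, previous_fitness, rng=np.random.default_rng()):
    """Sketch of a weighted-sum individual update (one plausible variant).

    x_eho: position produced by the basic EHO at this iteration.
    previous: list of positions selected from earlier iterations.
    previous_fitness: their fitness values (lower is better, minimization assumed).
    """
    r = rng.random()
    inv = 1.0 / (np.asarray(previous_fitness, float) + 1e-12)   # better -> heavier weight
    w = inv / inv.sum()
    x_prev = sum(wi * np.asarray(pi, float) for wi, pi in zip(w, previous))
    return r * np.asarray(x_eho, float) + (1.0 - r) * x_prev
```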

Open Access Article
Improved Whale Algorithm for Solving the Flexible Job Shop Scheduling Problem
Mathematics 2019, 7(5), 384; https://doi.org/10.3390/math7050384
Received: 6 March 2019 / Revised: 24 April 2019 / Accepted: 24 April 2019 / Published: 28 April 2019
Abstract
In this paper, a novel improved whale optimization algorithm (IWOA), based on an integrated approach, is presented for solving the flexible job shop scheduling problem (FJSP) with the objective of minimizing makespan. First, to adapt the whale optimization algorithm (WOA) to the FJSP, a conversion method between the whale individual position vector and the scheduling solution is proposed. Second, an effective initialization scheme of a certain quality is obtained using chaotic reverse learning (CRL) strategies. Third, a nonlinear convergence factor (NCF) and an adaptive weight (AW) are introduced to balance the exploitation and exploration abilities of the algorithm. Furthermore, a variable neighborhood search (VNS) operation is performed on the current optimal individual to enhance the accuracy and effectiveness of the local exploration. Experimental results on various benchmark instances show that the proposed IWOA can obtain competitive results compared to existing algorithms in a short time.
(This article belongs to the Special Issue Evolutionary Computation)

Open Access Article
Topology Structure Implied in β-Hilbert Space, Heisenberg Uncertainty Quantum Characteristics and Numerical Simulation of the DE Algorithm
Mathematics 2019, 7(4), 330; https://doi.org/10.3390/math7040330
Received: 9 March 2019 / Revised: 30 March 2019 / Accepted: 1 April 2019 / Published: 4 April 2019
Abstract
The differential evolution (DE) algorithm is a global optimization algorithm. To explore the convergence implied in the Hilbert space with the parameter β of the DE algorithm and the quantum properties of the optimal point in the space, we establish a control convergent iterative form of a higher-order differential equation under the conditions of P_ε and analyze the control convergent properties of its iterative sequence; we analyze the three topological structures implied in the Hilbert space, namely the single-point topological structure, the branch topological structure, and the discrete topological structure; and we establish and analyze the association between the Heisenberg uncertainty quantum characteristics known from quantum physics and the topological structure implied in the β-Hilbert space of the DE algorithm, as follows: the speed resolution Δv² of the convergence speed of the iterative sequence and the position resolution Δx_βε of the swinging range of the global optimal point are a pair of conjugate variables of the quantum states in the β-Hilbert space with respect to the eigenvalues λ_iR, corresponding to the uncertainty characteristics of quantum states, and they cannot simultaneously achieve bidirectional efficiency between convergence speed and best-point precision with any procedural improvements, where λ_iR is a constant in the β-Hilbert space. Finally, the conclusion is verified by quantum numerical simulation of high-dimensional data. We obtain the following quantitative conclusions from the numerical simulation: except for several dead points and invalid points, under the given spatial dimension, the number of the population, the mutation operator, the crossover operator, and the selection operator generally decrease or increase with a variance deviation rate of +0.50 and an error of less than ±0.5; correspondingly, the speed changing rate of the individual iterative points and the position changing rate of the global optimal point β exhibit an inverse correlation in the β-Hilbert space from a statistical perspective, which illustrates the association between the Heisenberg uncertainty quantum characteristics and the topological structure implied in the β-Hilbert space of the DE algorithm.
(This article belongs to the Special Issue Evolutionary Computation)
Open Access Article
An Improved Artificial Bee Colony Algorithm Based on Elite Strategy and Dimension Learning
Mathematics 2019, 7(3), 289; https://doi.org/10.3390/math7030289
Received: 19 February 2019 / Revised: 12 March 2019 / Accepted: 13 March 2019 / Published: 21 March 2019
Abstract
The artificial bee colony (ABC) algorithm is a powerful optimization method with strong search abilities that can solve many optimization problems. However, some studies have shown that ABC has poor exploitation abilities on complex optimization problems. To overcome this issue, an improved ABC variant based on elite strategy and dimension learning (called ABC-ESDL) is proposed in this paper. The elite strategy selects better solutions to accelerate the search of ABC. The dimension learning uses the differences between two random dimensions to generate a large jump. In the experiments, a classical benchmark set and the 2013 IEEE Congress on Evolutionary Computation (CEC 2013) benchmark set are tested. Computational results show that the proposed ABC-ESDL achieves more accurate solutions than ABC and five other improved ABC variants.
(This article belongs to the Special Issue Evolutionary Computation)
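The exact dimension-learning equation is given in the paper; the fragment below is only a rough sketch of the underlying idea, perturbing one dimension by the difference between two randomly chosen dimensions of a solution to produce a larger jump than the usual one-dimension ABC move:

```python
import numpy as np

def dimension_learning_step(x, rng=np.random.default_rng()):
    """Sketch of the dimension-learning idea (illustrative, not ABC-ESDL's exact rule).

    Perturbs one dimension using the difference between two randomly chosen
    dimensions of the same solution, which can yield a much larger jump than
    the standard single-dimension ABC update.
    """
    x = np.asarray(x, dtype=float).copy()
    j, k = rng.choice(len(x), size=2, replace=False)   # two random dimensions
    phi = rng.uniform(-1.0, 1.0)
    x[j] += phi * (x[j] - x[k])                        # difference-driven jump
    return x
```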

Open Access Article
SRIFA: Stochastic Ranking with Improved-Firefly-Algorithm for Constrained Optimization Engineering Design Problems
Mathematics 2019, 7(3), 250; https://doi.org/10.3390/math7030250
Received: 2 February 2019 / Revised: 1 March 2019 / Accepted: 5 March 2019 / Published: 11 March 2019
Abstract
The firefly algorithm (FA) is an eminent nature-inspired swarm-based technique for solving numerous real-world global optimization problems. This paper presents an overview of constraint handling techniques and proposes a hybrid algorithm, the Stochastic Ranking with Improved Firefly Algorithm (SRIFA), for solving constrained real-world engineering optimization problems. The stochastic ranking approach is broadly used to maintain a balance between penalty and fitness functions, while FA is extensively used due to its faster convergence compared with other metaheuristic algorithms. The basic FA is modified by incorporating opposition-based learning and a random scale factor to improve diversity and performance. Furthermore, SRIFA uses feasibility-based rules to maintain a balance between penalty and objective functions. SRIFA is applied to 24 CEC 2006 standard functions and five well-known constrained engineering design problems from the literature to evaluate and analyze its effectiveness. The overall computational results of SRIFA are better than those of the basic FA, and its statistical outcomes are significantly superior to those of other evolutionary algorithms on both the benchmark functions and the engineering design problems in terms of performance, quality, and efficiency.
(This article belongs to the Special Issue Evolutionary Computation)
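Opposition-based learning is a standard diversity mechanism; the sketch below shows its common form (opposite point a + b − x and keep-the-best selection), assumed here for illustration rather than copied from SRIFA:

```python
import numpy as np

def opposition_based_init(n, lower, upper, fitness, rng=np.random.default_rng()):
    """Common opposition-based population initialization (illustrative sketch).

    Generates a random population and its opposite population, then keeps the
    n best individuals from the union (minimization assumed).
    """
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = rng.uniform(lower, upper, size=(n, lower.size))
    opposite = lower + upper - pop                       # opposite point per dimension
    union = np.vstack([pop, opposite])
    scores = np.apply_along_axis(fitness, 1, union)
    return union[np.argsort(scores)[:n]]                 # keep the n best
```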

Open Access Article
A Novel Hybrid Algorithm for Minimum Total Dominating Set Problem
Mathematics 2019, 7(3), 222; https://doi.org/10.3390/math7030222
Received: 14 January 2019 / Revised: 20 February 2019 / Accepted: 24 February 2019 / Published: 27 February 2019
Abstract
The minimum total dominating set (MTDS) problem is a variant of the classical dominating set problem. In this paper, we propose a hybrid evolutionary algorithm that combines local search and a genetic algorithm to solve the MTDS problem. First, a novel scoring heuristic is implemented to increase the search effectiveness and thus obtain better solutions. Specifically, a population of several initial solutions is created to make the algorithm search more regions, and the local search phase further improves the initial solutions by swapping vertices effectively. Second, a repair-based crossover operation creates new solutions to make the algorithm search more feasible regions. Experiments on the classical DIMACS benchmark are carried out to test the performance of the proposed algorithm, and the experimental results show that our algorithm performs much better than its competitor on all instances.
(This article belongs to the Special Issue Evolutionary Computation)

Open Access Article
First-Arrival Travel Times Picking through Sliding Windows and Fuzzy C-Means
Mathematics 2019, 7(3), 221; https://doi.org/10.3390/math7030221
Received: 27 January 2019 / Revised: 21 February 2019 / Accepted: 25 February 2019 / Published: 27 February 2019
Abstract
First-arrival picking is a critical step in seismic data processing. This paper proposes the first-arrival picking through sliding windows and fuzzy c-means (FPSF) algorithm, which has two stages. The first stage detects a range using sliding windows in the vertical and horizontal directions. The second stage obtains the first-arrival travel times from the range using fuzzy c-means coupled with particle swarm optimization. Results on both noisy and preprocessed field data show that the FPSF algorithm is more accurate than classical methods.
(This article belongs to the Special Issue Evolutionary Computation)

Open Access Article
A Multi-Objective DV-Hop Localization Algorithm Based on NSGA-II in Internet of Things
Mathematics 2019, 7(2), 184; https://doi.org/10.3390/math7020184
Received: 17 December 2018 / Revised: 6 February 2019 / Accepted: 7 February 2019 / Published: 15 February 2019
Abstract
Node localization, as the most fundamental component of wireless sensor networks (WSNs) and the Internet of Things (IoT), is a pivotal problem. The distance vector-hop (DV-Hop) technique is frequently used for node location estimation in WSNs, but it has poor estimation precision. In this paper, a multi-objective DV-Hop localization algorithm based on NSGA-II, called NSGA-II-DV-Hop, is designed. In NSGA-II-DV-Hop, a new multi-objective model is constructed, an enhanced constraint strategy based on all beacon nodes is adopted to improve the DV-Hop positioning estimation precision, and four new complex network topologies are tested. Simulation results demonstrate that the precision of NSGA-II-DV-Hop significantly outperforms that of other algorithms, such as the CS-DV-Hop, OCS-LC-DV-Hop, and MODE-DV-Hop algorithms.
(This article belongs to the Special Issue Evolutionary Computation)
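For background, classical DV-Hop estimates node-to-beacon distances from hop counts and an average hop size; the sketch below implements that textbook step (the multi-objective model and constraint strategy of NSGA-II-DV-Hop are not reproduced here):

```python
import numpy as np

def dv_hop_distances(beacon_xy, hops_between_beacons, hops_to_beacons):
    """Classical DV-Hop distance estimation for one unknown node.

    beacon_xy: (m, 2) beacon coordinates.
    hops_between_beacons: (m, m) minimum hop counts between beacons.
    hops_to_beacons: (m,) minimum hop counts from the unknown node to each beacon.
    Returns the estimated distance from the unknown node to each beacon.
    """
    beacon_xy = np.asarray(beacon_xy, float)
    hops = np.asarray(hops_between_beacons, float)
    m = len(beacon_xy)
    hop_size = np.empty(m)
    for i in range(m):
        dists = np.linalg.norm(beacon_xy - beacon_xy[i], axis=1)
        others = np.arange(m) != i
        hop_size[i] = dists[others].sum() / hops[i, others].sum()   # avg distance per hop
    return hop_size * np.asarray(hops_to_beacons, float)
```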

Open Access Article
Monarch Butterfly Optimization for Facility Layout Design Based on a Single Loop Material Handling Path
Mathematics 2019, 7(2), 154; https://doi.org/10.3390/math7020154
Received: 26 December 2018 / Revised: 30 January 2019 / Accepted: 31 January 2019 / Published: 6 February 2019
Abstract
Facility layout problems (FLPs) are concerned with the non-overlapping arrangement of facilities. The objective of many FLP-based studies is to minimize the total material handling cost between facilities, which are considered as rectangular blocks of given space. However, it is important to integrate the layout design with continual material flow when the system uses circulating material handling equipment. The present study proposes approaches to solve the layout design and the shortest single-loop material handling path. Monarch butterfly optimization (MBO), a recently introduced metaheuristic algorithm, is applied to determine the layout configuration, and a loop construction method is proposed to construct a single-loop material handling path for the given layout in every MBO iteration. A slicing tree structure (STS) is used to represent the layout configuration in solution form. A total of 11 instances are tested to evaluate the algorithm's performance, and the proposed approach generates solutions as intended within a reasonable amount of time.
(This article belongs to the Special Issue Evolutionary Computation)

Open Access Feature Paper Article
A Novel Bat Algorithm with Multiple Strategies Coupling for Numerical Optimization
Mathematics 2019, 7(2), 135; https://doi.org/10.3390/math7020135
Received: 10 December 2018 / Revised: 19 January 2019 / Accepted: 21 January 2019 / Published: 1 February 2019
Abstract
The bat algorithm (BA) is a heuristic algorithm that performs global optimization by imitating the echolocation behavior of bats. The BA is widely used in various optimization problems because of its excellent performance. In the bat algorithm, the global search capability is determined by the loudness and frequency parameters. However, experiments show that each operator in the algorithm can only improve the performance of the algorithm at certain times. In this paper, a novel bat algorithm with multiple strategies coupling (mixBA) is proposed to solve this problem. To prove the effectiveness of the algorithm, we compared it with other algorithms on the CEC 2013 benchmark test suite. Furthermore, the Wilcoxon and Friedman tests were conducted to distinguish the differences between it and the other algorithms. The results show that the proposed algorithm is significantly superior to the others on the majority of benchmark functions.
(This article belongs to the Special Issue Evolutionary Computation)
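For readers unfamiliar with the baseline, the standard bat-algorithm position/velocity update (a frequency-driven pull toward the best solution) looks roughly as follows; mixBA's coupled strategies themselves are defined in the paper:

```python
import numpy as np

def bat_step(x, v, x_best, f_min=0.0, f_max=2.0, rng=np.random.default_rng()):
    """One standard bat-algorithm update for a single bat (illustrative baseline).

    x, v, x_best: numpy arrays for the bat's position, velocity, and the best bat.
    """
    freq = f_min + (f_max - f_min) * rng.random()   # random pulse frequency
    v_new = v + (x - x_best) * freq                 # velocity pulled toward the best bat
    return x + v_new, v_new
```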

Open Access Article
Search Acceleration of Evolutionary Multi-Objective Optimization Using an Estimated Convergence Point
Mathematics 2019, 7(2), 129; https://doi.org/10.3390/math7020129
Received: 21 November 2018 / Revised: 23 January 2019 / Accepted: 23 January 2019 / Published: 28 January 2019
Abstract
We propose a method to accelerate evolutionary multi-objective optimization (EMO) search using an estimated convergence point. The Pareto improvement from the last generation to the current generation provides information on promising Pareto solution areas in both the objective space and the parameter space. We use this information to construct a set of moving vectors and estimate a non-dominated Pareto point from these moving vectors. In this work, we try different methods for constructing the moving vectors and use the convergence point estimated from them to accelerate EMO search. From our evaluation results, we found that the landscape of Pareto improvement has a uni-modal distribution characteristic in the objective space and a multi-modal distribution characteristic in the parameter space. Our proposed method can enhance EMO search when the landscape of Pareto improvement has a uni-modal distribution characteristic in the parameter space, and by chance it also does so when the landscape has a multi-modal distribution characteristic in the parameter space. The proposed method can not only obtain more Pareto solutions than the conventional non-dominated sorting genetic algorithm (NSGA-II) but also increase the diversity of the Pareto solutions. This indicates that our proposed method enhances the search capability of EMO in both Pareto dominance and solution diversity. We also found that the method of constructing the moving vectors is a primary factor in the success of our proposed method. We analyze and discuss this method with several evaluation metrics and statistical tests. The proposed method has the potential to enhance EMO by embedding deterministic learning methods in stochastic optimization algorithms.
(This article belongs to the Special Issue Evolutionary Computation)
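Geometrically, a convergence point can be estimated as the point closest, in the least-squares sense, to all parent-to-child moving vectors; the sketch below shows that generic construction (it is our reading of the idea, not the authors' code):

```python
import numpy as np

def estimate_convergence_point(parents, children):
    """Least-squares point closest to all lines through parent -> child moves.

    parents, children: (n, dim) arrays; assumes the moving directions are not
    all parallel, otherwise the normal matrix is singular.
    """
    parents = np.asarray(parents, float)
    children = np.asarray(children, float)
    dim = parents.shape[1]
    A = np.zeros((dim, dim))
    b = np.zeros(dim)
    for a, c in zip(parents, children):
        d = (c - a) / np.linalg.norm(c - a)   # unit moving direction
        P = np.eye(dim) - np.outer(d, d)      # projector orthogonal to the line
        A += P
        b += P @ a
    return np.linalg.solve(A, b)              # point minimizing distance to all lines
```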

Open Access Article
The Importance of Transfer Function in Solving Set-Union Knapsack Problem Based on Discrete Moth Search Algorithm
Mathematics 2019, 7(1), 17; https://doi.org/10.3390/math7010017
Received: 1 September 2018 / Revised: 10 December 2018 / Accepted: 10 December 2018 / Published: 24 December 2018
Abstract
The moth search (MS) algorithm, originally proposed to solve continuous optimization problems, is a novel bio-inspired metaheuristic algorithm. At present, there has been little work on using MS to solve discrete optimization problems. One of the most common and efficient ways to discretize MS is to use a transfer function, which maps a continuous search space to a discrete search space. In this paper, twelve transfer functions divided into three families, S-shaped (named S1, S2, S3, and S4), V-shaped (named V1, V2, V3, and V4), and other shapes (named O1, O2, O3, and O4), are combined with MS, and twelve discrete versions of the MS algorithm are proposed for solving the set-union knapsack problem (SUKP). Three groups of fifteen SUKP instances are employed to evaluate the importance of these transfer functions. The results show that O4 is the best transfer function when combined with MS to solve the SUKP. Meanwhile, the importance of the transfer function in terms of improving the quality of solutions and the convergence rate is demonstrated as well.
(This article belongs to the Special Issue Evolutionary Computation)
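As a concrete illustration of how a transfer function discretizes a continuous position, here are representative S-shaped and V-shaped functions together with the usual probability-threshold binarization; the specific S1–S4, V1–V4, and O1–O4 definitions used in the paper are given there:

```python
import numpy as np

def s_shaped(x):
    """A representative S-shaped transfer function (logistic sigmoid)."""
    return 1.0 / (1.0 + np.exp(-x))

def v_shaped(x):
    """A representative V-shaped transfer function."""
    return np.abs(np.tanh(x))

def discretize(x, transfer, rng=np.random.default_rng()):
    """Map a continuous MS position to a 0/1 item-selection vector by comparing
    the transfer probability with a uniform random number (the common scheme).
    """
    prob = transfer(np.asarray(x, float))
    return (rng.random(prob.shape) < prob).astype(int)
```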

Open Access Article
A Novel Simple Particle Swarm Optimization Algorithm for Global Optimization
Mathematics 2018, 6(12), 287; https://doi.org/10.3390/math6120287
Received: 28 October 2018 / Revised: 17 November 2018 / Accepted: 19 November 2018 / Published: 27 November 2018
Abstract
In order to overcome several shortcomings of particle swarm optimization (PSO), e.g., premature convergence, low accuracy, and poor global search ability, a novel Simple Particle Swarm Optimization based on Random weight and Confidence term (SPSORC) is proposed in this paper. The first two improvements of the algorithm are called Simple Particle Swarm Optimization (SPSO) and Simple Particle Swarm Optimization with Confidence term (SPSOC), respectively. The former has a simpler structure and faster convergence speed, and the latter increases particle diversity. SPSORC takes into account the advantages of both and enhances the exploitation capability of the algorithm. Twenty-two benchmark functions and four state-of-the-art improvement strategies are introduced to facilitate a fairer comparison. In addition, a t-test is used to analyze the differences in large amounts of data. The stability and search efficiency of the algorithms are evaluated by comparing the success rates and average iteration times obtained on 50-dimensional benchmark functions. The results show that SPSO and its improved variants perform well compared with several kinds of improved PSO algorithms in terms of both search time and computing accuracy. SPSORC, in particular, is more competent for the optimization of complex problems: it has more desirable convergence, stronger stability, and higher accuracy.
(This article belongs to the Special Issue Evolutionary Computation)
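As background only, the sketch below shows the standard PSO velocity update with a random inertia weight drawn each step; the simplified SPSO/SPSOC/SPSORC update rules themselves are specified in the paper:

```python
import numpy as np

def pso_step_random_weight(x, v, p_best, g_best, c1=2.0, c2=2.0,
                           rng=np.random.default_rng()):
    """Standard PSO update with a random inertia weight (illustrative baseline).

    x, v, p_best, g_best: numpy arrays for position, velocity, personal best,
    and global best; the weight range 0.4-0.9 is an assumption for the example.
    """
    w = rng.uniform(0.4, 0.9)                       # random inertia weight
    r1, r2 = rng.random(len(x)), rng.random(len(x))
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new
```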

Open Access Article
Energy-Efficient Scheduling for a Job Shop Using an Improved Whale Optimization Algorithm
Mathematics 2018, 6(11), 220; https://doi.org/10.3390/math6110220
Received: 28 September 2018 / Revised: 20 October 2018 / Accepted: 26 October 2018 / Published: 28 October 2018
Abstract
Under the current environmental pressure, many manufacturing enterprises are urged or forced to adopt effective energy-saving measures. However, environmental metrics, such as energy consumption and CO2 emissions, are seldom considered in traditional production scheduling problems. Recently, the energy-related scheduling problem has received increasing attention from researchers. In this paper, an energy-efficient job shop scheduling problem (EJSP) is investigated with the objective of minimizing the sum of the energy consumption cost and the completion-time cost. As the classical JSP is well known to be a non-deterministic polynomial-time hard (NP-hard) problem, an improved whale optimization algorithm (IWOA) is presented to solve the energy-efficient scheduling problem. The improvement is performed using dispatching rules (DR), a nonlinear convergence factor (NCF), and a mutation operation (MO). The DR are used to enhance the initial solution quality and overcome the drawbacks of a random population. The NCF is adopted to balance the exploration and exploitation abilities of the algorithm. The MO is employed to reduce the possibility of falling into a local optimum and to avoid premature convergence. To validate the effectiveness of the proposed algorithm, extensive simulations have been performed in the experiment section. The computational data demonstrate the promising advantages of the proposed IWOA for the energy-efficient job shop scheduling problem.
(This article belongs to the Special Issue Evolutionary Computation)
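The whale optimization algorithm's convergence factor a normally decreases linearly from 2 to 0, and a nonlinear schedule changes how long the algorithm keeps exploring. The snippet below contrasts the linear baseline with one common nonlinear alternative; the paper's exact NCF formula may differ:

```python
def convergence_factor(t, t_max, linear=False):
    """WOA convergence factor a over iterations t = 0..t_max (illustrative).

    linear=True gives the standard linear decrease from 2 to 0; otherwise a
    quadratic schedule that decays slowly early and quickly late is used.
    """
    if linear:
        return 2.0 * (1.0 - t / t_max)
    return 2.0 * (1.0 - (t / t_max) ** 2)
```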

Open Access Article
Urban-Tissue Optimization through Evolutionary Computation
Mathematics 2018, 6(10), 189; https://doi.org/10.3390/math6100189
Received: 1 August 2018 / Revised: 27 September 2018 / Accepted: 29 September 2018 / Published: 2 October 2018
Abstract
The experiments analyzed in this paper focus on the use of Evolutionary Computation (EC) applied to a parametrized urban tissue. Through the application of EC, it is possible to develop a design under a single model that addresses multiple conflicting objectives. The experiments presented are based on Cerdà's master plan in Barcelona, specifically on the iconic Eixample block, which is grouped into a 4 × 4 urban superblock. The proposal aims to reach the existing high density of the city while reclaiming the block relations proposed by Cerdà's original plan. Generating and ranking multiple individuals in a population through several generations ensures a flexible solution rather than a single "optimal" one. The final results in the Pareto front show a successful and diverse set of solutions that approximate Cerdà's plan and the existing state of Barcelona's Eixample. Further analysis proposes different methodologies and considerations for choosing appropriate individuals within the front depending on design requirements.
(This article belongs to the Special Issue Evolutionary Computation)

Open Access Article
A Developed Artificial Bee Colony Algorithm Based on Cloud Model
Mathematics 2018, 6(4), 61; https://doi.org/10.3390/math6040061
Received: 11 March 2018 / Revised: 7 April 2018 / Accepted: 10 April 2018 / Published: 18 April 2018
Abstract
The artificial bee colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is a kind of uncertainty conversion model between a qualitative concept T̃, expressed in natural language, and its quantitative expression, which integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and avoid getting trapped in local optima by introducing a new selection mechanism, replacing the onlooker bees' search formula, and changing the scout bees' updating formula. Experiments on CEC 2015 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.
(This article belongs to the Special Issue Evolutionary Computation)
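For context, the forward normal cloud generator that underlies cloud-model methods produces cloud drops from the three numerical characteristics (Ex, En, He); a minimal sketch is given below as background rather than as the paper's exact operators:

```python
import numpy as np

def normal_cloud(ex, en, he, n, rng=np.random.default_rng()):
    """Forward normal cloud generator: returns n cloud drops and their
    membership degrees for the concept described by (Ex, En, He)."""
    en_prime = np.abs(rng.normal(en, he, n)) + 1e-12   # randomized entropy per drop
    x = rng.normal(ex, en_prime)                       # cloud drops
    mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))  # membership degrees
    return x, mu
```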
