Special Issue "Evolutionary Computation and Mathematical Programming"

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (31 October 2019).

Special Issue Editors

Prof. Dr. Marjan Mernik
Website
Guest Editor
Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, 2000 Maribor, Slovenia
Interests: concepts and implementation of programming languages; formal language definition; attribute grammars; compiler generators; domain-specific languages; grammar-based systems; grammatical inference; meta-heuristics; single- and multi-objective optimization
Dr. Foad Nazari
Website
Guest Editor
Villanova Center for Analytics of Dynamic Systems (VCADS), Faculty of Engineering, Villanova University, Villanova, PA 19085, USA
Interests: artificial intelligence; machine learning; meta-heuristics; single- and multi-objective optimization; condition monitoring; vibration analysis; bio-medical diagnosis

Special Issue Information

Dear Colleagues,

Evolutionary computation abstracts the theory of biological evolution into techniques and methodologies for obtaining highly optimized solutions to a wide range of complex optimization problems. Nowadays, many areas of human activity require computer programs based on probabilistic and mathematical models to choose the best alternative from a set of available options and to determine the most efficient way to allocate scarce resources. This is where the concept of mathematical programming is employed.

This Special Issue on “Evolutionary Computation and Mathematical Programming” aims to collect recent advances and studies associated with the theoretical development of this field, as well as its new applications in engineering and science. All researchers are invited to contribute to this Special Issue with their original research articles, review papers, and short communications. Submitted manuscripts should clearly indicate the contribution of the work and meet high standards of exposition.

Contributions related to the advancement of metaheuristic optimization algorithms will be particularly appreciated.

Prof. Marjan Mernik
Dr. Foad Nazari
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • linear, nonlinear, stochastic, integer, and combinatorial optimization
  • mathematical programming
  • evolutionary algorithms
  • swarm intelligence
  • memetic algorithms

Published Papers (9 papers)


Research

Open Access Article
Determination of a Hysteresis Model Parameters with the Use of Different Evolutionary Methods for an Innovative Hysteresis Model
Mathematics 2020, 8(2), 201; https://doi.org/10.3390/math8020201 - 06 Feb 2020
Cited by 2
Abstract
For precise modeling of electromagnetic devices, we have to model material hysteresis. A Genetic Algorithm, Differential Evolution with three different strategies, Teaching–Learning-Based Optimization, and Artificial Bee Colony were used to test seven different modified mathematical expressions, and the best combination of mathematical expression and solving method was used for hysteresis modeling. The parameters of the hysteresis model were determined based on the measured major hysteresis loop and first-order reversal curves. The model offers a simple determination of the magnetization procedure in the areas between measured curves, with only a correction of two parameters based on only two known points in the magnetization process. It was tested on two very different magnetic materials, and the results show good agreement between the measured and calculated curves. The calculated curves between the measured curves have correct shapes. The main difference between our model and other models is that, in our model, each measured curve, major and reversal, is described with different parameters. The magnetization process between measured curves is described according to the nearest measured curve, which ensures the best fit for each measured curve. In other models, mainly only one curve, a major hysteresis or magnetization curve, is used for the determination of the parameters, and all other curves then depend on this curve. The results confirm that evolutionary optimization methods offer a reliable procedure for the precise determination of the parameters.
(This article belongs to the Special Issue Evolutionary Computation and Mathematical Programming)
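The parameter-identification step described in the abstract above can be sketched in a few lines. The paper's seven modified hysteresis expressions are not reproduced here, so a simple tanh-based magnetization curve B(H) = p0·tanh(p1·H) + p2·H stands in as a hypothetical model; a minimal Differential Evolution (DE/rand/1/bin) then fits its parameters to synthetic "measured" points, mirroring the curve-fitting role the evolutionary methods play in the paper.

```python
import math
import random

def model(p, h):
    # Hypothetical stand-in for one of the paper's hysteresis expressions.
    return p[0] * math.tanh(p[1] * h) + p[2] * h

def sse(p, data):
    # Sum of squared errors against measured (H, B) pairs.
    return sum((model(p, h) - b) ** 2 for h, b in data)

def differential_evolution(data, bounds, pop_size=30, f=0.7, cr=0.9, gens=300, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [sse(p, data) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1 mutation from three distinct other individuals.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)  # index with guaranteed crossover
            trial = [pop[a][d] + f * (pop[b][d] - pop[c][d])
                     if (rng.random() < cr or d == jr) else pop[i][d]
                     for d in range(dim)]
            # Clamp to bounds, then greedy selection.
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            tf = sse(trial, data)
            if tf <= fit[i]:
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

On synthetic data generated from known parameters, the recovered parameter vector reproduces the measured points with a near-zero residual, which is the same success criterion the paper applies to its measured hysteresis curves.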

Open Access Article
Characterizations of the Beta Kumaraswamy Exponential Distribution
Mathematics 2020, 8(1), 23; https://doi.org/10.3390/math8010023 - 20 Dec 2019
Abstract
In this article, the five-parameter beta Kumaraswamy exponential distribution (BKw-E) is introduced, and some characterizations of this distribution are obtained. The shape of the hazard function and some other important properties, such as the median, mode, quantile function, and mean, are studied. In addition, the moments, skewness, and kurtosis are found. Furthermore, important measures such as Rényi entropy and order statistics are obtained; these have applications in many fields. An example of a real data set is discussed.
(This article belongs to the Special Issue Evolutionary Computation and Mathematical Programming)

Open Access Article
Discrete Mutation Hopfield Neural Network in Propositional Satisfiability
Mathematics 2019, 7(11), 1133; https://doi.org/10.3390/math7111133 - 19 Nov 2019
Cited by 3
Abstract
The dynamic behaviour of an artificial neural network (ANN) system is strongly dependent on its network structure. Thus, the output of ANNs has long suffered from a lack of interpretability and variation. This has severely limited the practical usability of the logical rule in the ANN. This work presents an integrated representation of k-satisfiability (kSAT) in a mutation Hopfield neural network (MHNN). Neuron states of the Hopfield neural network converge to a minimum energy, but the solutions produced are confined to a limited number of solution spaces. The MHNN is incorporated with the global search capability of estimation of distribution algorithms (EDAs), which typically explore various solution spaces. The main purpose is to estimate other possible neuron states that lead to global minimum energy through available output measurements. Furthermore, it is shown that the MHNN can retrieve various neuron states with the lowest minimum energy. Subsequent simulations performed on the MHNN reveal that the approach yields results that surpass the conventional hybrid HNN. Furthermore, this study provides a new paradigm in the field of neural networks by overcoming the overfitting issue.
(This article belongs to the Special Issue Evolutionary Computation and Mathematical Programming)

Open Access Article
Long Term Memory Assistance for Evolutionary Algorithms
Mathematics 2019, 7(11), 1129; https://doi.org/10.3390/math7111129 - 18 Nov 2019
Cited by 3
Abstract
Short term memory that records the current population has been an inherent component of Evolutionary Algorithms (EAs). As hardware technologies advance, inexpensive memory with massive capacity could become a performance boost to EAs. This paper introduces Long Term Memory Assistance (LTMA), which records the entire search history of an evolutionary process. With LTMA, individuals already visited (i.e., duplicate solutions) do not need to be re-evaluated, and thus resources originally designated for fitness evaluations can be reallocated to continue search space exploration or exploitation. Three sets of experiments were conducted to demonstrate the superiority of LTMA. In the first experiment, it was shown that LTMA recorded at least 50% more duplicate individuals than a short term memory. In the second experiment, ABC and jDElscop were applied to the CEC-2015 benchmark functions. By avoiding fitness re-evaluation, LTMA improved the execution time of the most time-consuming problems, F03 and F05, by between 7% and 28% and between 7% and 16%, respectively. In the third experiment, on a hard real-world problem of determining soil model parameters, LTMA improved execution time by between 26% and 69%. Finally, LTMA was implemented in a generalized and extendable open source system called EARS, so any EA researcher can apply LTMA to a variety of optimization problems and evolutionary algorithms, existing or new, in a uniform way.
(This article belongs to the Special Issue Evolutionary Computation and Mathematical Programming)
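The duplicate-avoidance idea behind LTMA can be illustrated with a small sketch. The paper's actual implementation lives in the Java-based EARS framework; the class below is only a hypothetical Python stand-in showing the bookkeeping: every evaluated genotype is archived for the entire run, and revisited individuals reuse the stored fitness instead of triggering a costly re-evaluation.

```python
class LongTermMemory:
    """Hypothetical sketch of LTMA-style fitness caching (not the EARS code)."""

    def __init__(self, fitness_fn):
        self.fitness_fn = fitness_fn
        self.archive = {}      # genotype -> fitness; the entire search history
        self.evaluations = 0   # true (non-duplicate) fitness calls
        self.duplicates = 0    # evaluations saved by the archive

    def evaluate(self, individual):
        key = tuple(individual)  # hashable snapshot of the genotype
        if key in self.archive:
            self.duplicates += 1  # duplicate solution: skip re-evaluation
        else:
            self.archive[key] = self.fitness_fn(individual)
            self.evaluations += 1
        return self.archive[key]
```

Any EA can route its fitness calls through `evaluate`; the saved calls are exactly the budget LTMA redirects toward further exploration or exploitation.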

Open Access Article
Memory-Based Evolutionary Algorithms for Nonlinear and Stochastic Programming Problems
Mathematics 2019, 7(11), 1126; https://doi.org/10.3390/math7111126 - 17 Nov 2019
Cited by 2
Abstract
In this paper, we target the problem of finding a global minimum of nonlinear and stochastic programming problems. To solve this type of problem, we propose new approaches based on combining direct search methods with Evolution Strategies (ESs) and Scatter Search (SS) metaheuristics. First, we suggest new designs of ESs and SS with a memory-based element called the Gene Matrix (GM) to deal with these types of problems. These methods, called Directed Evolution Strategies (DES) and Directed Scatter Search (DSS), respectively, are able to search for a global minimum. Moreover, faster convergence can be achieved by accelerating the evolutionary search process using the GM, and in the final stage we apply the Nelder–Mead algorithm to find the global minimum from the solutions found so far. Then, the variable-sample method is invoked in the DES and DSS to compose new stochastic programming techniques. Extensive numerical experiments on some well-known functions were conducted to test the performance of the proposed methods.
(This article belongs to the Special Issue Evolutionary Computation and Mathematical Programming)

Open Access Article
Tuning Multi-Objective Evolutionary Algorithms on Different Sized Problem Sets
Mathematics 2019, 7(9), 824; https://doi.org/10.3390/math7090824 - 06 Sep 2019
Cited by 2
Abstract
Multi-Objective Evolutionary Algorithms (MOEAs) have been applied successfully to solving real-world multi-objective problems. Their success can depend highly on the configuration of their control parameters. Different tuning methods have been proposed to solve this problem. Tuning can be performed on a set of problem instances in order to obtain robust control parameters. However, for real-world problems, the set of problem instances at our disposal is usually not very plentiful. This raises the question: what is a sufficient number of problems to use in the tuning process to obtain robust enough parameters? To answer this question, a novel method called MOCRS-Tuning was applied to different sized problem sets for the real-world integration and test order problem. The configurations obtained by the tuning process were compared on all the problem instances used. The results show that tuning greatly improves the algorithms’ performance and that a bigger subset used for tuning does not guarantee better results. This indicates that it is possible to obtain robust control parameters with a small subset of problem instances, which also substantially reduces the time required for tuning.
(This article belongs to the Special Issue Evolutionary Computation and Mathematical Programming)

Open Access Article
New Analytical Solutions for Time-Fractional Kolmogorov-Petrovsky-Piskunov Equation with Variety of Initial Boundary Conditions
Mathematics 2019, 7(9), 813; https://doi.org/10.3390/math7090813 - 03 Sep 2019
Abstract
The generalized time-fractional Kolmogorov–Petrovsky–Piskunov (FKPP) equation, D_t^α ω(x,t) = a(x,t) D_{xx} ω(x,t) + F(ω(x,t)), which plays an important role in engineering and chemical reaction problems, is considered in the sense of the Caputo fractional-order derivative. In this paper, we develop a wavelet framework, using shifted Chebyshev polynomials of the first kind as the mother wavelet, and construct operational matrices representing the Caputo fractional derivative to obtain analytical solutions of the FKPP equation with three different types of initial-boundary conditions (Dirichlet, Dirichlet–Neumann, and Neumann–Robin). Our results show that the Chebyshev wavelet is a powerful method, due to its simplicity, efficiency in analytical approximations, and fast convergence. The comparison of the Chebyshev wavelet results indicates that the proposed method not only gives satisfactory results but also does not require a large amount of CPU time.
(This article belongs to the Special Issue Evolutionary Computation and Mathematical Programming)

Open Access Feature Paper Article
On a New Formula for Fibonacci’s Family m-step Numbers and Some Applications
Mathematics 2019, 7(9), 805; https://doi.org/10.3390/math7090805 - 01 Sep 2019
Cited by 3
Abstract
In this work, we obtain a new formula for Fibonacci’s family of m-step sequences. We use our formula to find the nth term with lower time complexity than the matrix multiplication method. Then, we extend our results to all linear homogeneous m-step recurrence relations with constant coefficients by using the last few terms of the corresponding Fibonacci family m-step sequence. As a computational number theory application, we develop a method to estimate square roots.
(This article belongs to the Special Issue Evolutionary Computation and Mathematical Programming)
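For context, the m-step sequence the abstract refers to can be pinned down with a short sketch. The paper's new closed-form formula is not reproduced here; the function below is only the textbook sliding-window computation of the sequence, in which each term is the sum of the preceding m terms (m = 2 gives the classic Fibonacci numbers, m = 3 the tribonacci numbers, and so on).

```python
def fibonacci_m_step(m, n):
    """Return the nth term (0-indexed) of the m-step Fibonacci sequence,
    using the common seeding of m-1 zeros followed by a one."""
    window = [0] * (m - 1) + [1]  # the m most recent terms
    if n < m:
        return window[n]
    for _ in range(n - m + 1):
        window.append(sum(window))  # next term = sum of previous m terms
        window.pop(0)               # slide the window forward
    return window[-1]
```

This runs in O(n·m) additions; the matrix multiplication method mentioned in the abstract instead exponentiates the m×m companion matrix, which is where the paper claims its formula saves time.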
Open Access Article
Optimizing the Low-Carbon Flexible Job Shop Scheduling Problem with Discrete Whale Optimization Algorithm
Mathematics 2019, 7(8), 688; https://doi.org/10.3390/math7080688 - 01 Aug 2019
Cited by 3
Abstract
The flexible job shop scheduling problem (FJSP) is a difficult discrete combinatorial optimization problem that has been widely studied due to its theoretical and practical significance. However, previous researchers mostly emphasized production-efficiency criteria such as completion time, workload, and flow time. Recently, with considerations of sustainable development, low-carbon scheduling problems have received more and more attention. In this paper, a low-carbon FJSP model is proposed to minimize the sum of the completion time cost and the energy consumption cost in the workshop. A new bio-inspired metaheuristic algorithm called the discrete whale optimization algorithm (DWOA) is developed to solve the problem efficiently. In the proposed DWOA, an innovative encoding mechanism is employed to represent the two sub-problems: machine assignment and job sequencing. Then, a hybrid variable neighborhood search method is adapted to generate a high-quality and diverse population. According to the discrete characteristics of the problem, modified updating approaches based on the crossover operator replace the original updating method in the exploration and exploitation phases. Meanwhile, in order to balance exploration and exploitation during evolution, six adjustment curves of the parameter a are used to adjust the algorithm’s transition between the two. Finally, some well-known benchmark instances are tested to verify the effectiveness of the proposed algorithm for the low-carbon FJSP.
(This article belongs to the Special Issue Evolutionary Computation and Mathematical Programming)
