Article

Comparison of Circular Symmetric Low-Pass Digital IIR Filter Design Using Evolutionary Computation Techniques

1 Departamento de Electrónica, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, Guadalajara, Mexico
2 Department of Computer Science, Faculty of Computers & Information, Minia University, Minia 61519, Egypt
3 Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(8), 1226; https://doi.org/10.3390/math8081226
Submission received: 10 July 2020 / Revised: 25 July 2020 / Accepted: 25 July 2020 / Published: 26 July 2020
(This article belongs to the Special Issue Evolutionary Image Processing)

Abstract

The design of two-dimensional Infinite Impulse Response (2D-IIR) filters has recently attracted attention in several areas of engineering because of their wide range of applications. Synthesizing a user-defined filter in a 2D-IIR structure can be interpreted as an optimization problem. However, since 2D-IIR filters can easily produce unstable transfer functions, they tend to generate multimodal error surfaces that are computationally difficult to optimize. On the other hand, Evolutionary Computation (EC) algorithms are well-known global optimization methods with the capacity to explore complex search spaces for a suitable solution. Each EC technique holds distinctive attributes that satisfy the particular requirements of specific problems; hence, no single EC algorithm is able to solve all problems adequately. To determine the advantages and flaws of EC techniques, their correct evaluation is a critical task in the computational intelligence community. Furthermore, EC algorithms are stochastic processes with random operations; under such conditions, appropriate statistical methods must be applied to obtain significant conclusions. Although several comparisons among EC methods have been reported in the literature, their conclusions are based on sets of synthetic functions, without considering the context of the problem or an appropriate statistical treatment. This paper presents a comparative study of the EC techniques most commonly used for designing 2D-IIR digital filters. The results of several experiments are presented and statistically analyzed.

1. Introduction

Two-dimensional (2D) digital filters have been widely used in many areas of engineering, such as signal processing and computer vision [1,2,3]. Based on their impulse response, 2D filters are categorized into two main groups: Infinite Impulse Response (IIR) filters and Finite Impulse Response (FIR) models. In general, IIR filters achieve higher performance than comparable FIR models while requiring fewer parameters in their configuration [4,5]. However, since 2D-IIR filters easily produce unstable transfer functions, they are considerably more difficult to design than their FIR counterparts [6].
The objective of filter design is to approximate the response of a given filter structure to the desired behavior specified by the user requirements. The design methodologies for 2D-IIR filters can be grouped into two categories [7,8,9]: transformation methods and optimization-based techniques. In a transformation method, the 2D filter is constructed from a one-dimensional prototype that fulfills the required response. Under this scheme, filters are designed quickly but with low accuracy. In optimization-based techniques, on the other hand, the design of a 2D-IIR filter is interpreted as an optimization problem, and optimization algorithms are used to find the most suitable filter over a set of candidate solutions. Due to their synthesis difficulties, 2D-IIR filters are normally designed through optimization tools instead of analytical methods [10]. Under the optimization-based methodology, filters attain better accuracy than with transformation methods. However, the design quality is highly dependent on the optimization method used, and generating a filter through an optimization-based approach is computationally expensive and takes more time than transformation methods.
In optimization-based methods, the design process consists of finding the optimal parameters of a 2D-IIR filter that present the best possible resemblance with the desired response. To guide the optimization process, an objective function J that evaluates the similarity between the candidate filter and the desired response must be defined. However, since 2D-IIR filters can easily produce unstable transfer functions, they tend to generate multimodal error surfaces, which are computationally difficult to optimize [11]. This kind of objective function makes the operation of derivative-based techniques difficult since they can be easily trapped in a suboptimal solution [12]. Under such conditions, approaches that use the gradient descent method as basis algorithms are scarcely considered for the design of 2D-IIR filters [13].
On the other hand, EC techniques are optimization methods motivated by our scientific understanding of natural or social behaviors, which at some level of abstraction can be regarded as search procedures [14]. In the literature, EC approaches have shown better performance than gradient-based methods in terms of robustness and accuracy [12]. As a result, several design methods for 2D-IIR filters have been proposed based on different EC techniques, such as Particle Swarm Optimization (PSO) [15], Genetic Algorithms (GA) [16], and the Artificial Bee Colony (ABC) algorithm [17].
The capability of an EC method to determine an optimal solution is directly related to its ability to correctly balance exploration and exploitation of the search space [18]. In the exploration stage, the EC technique examines new solutions throughout the search space, while in the exploitation stage, a local search around previously visited locations is carried out to improve their quality. If exploration dominates, the efficiency of the optimization process is degraded, but the capability of discovering new potential solutions increases [19]. Conversely, if exploitation is privileged, the current solutions are highly refined, but the technique tends to reach sub-optimal solutions as a result of scarce exploration [20]. Furthermore, each specific problem usually requires a different trade-off between both stages [21]. Since the exploration–exploitation balance is an unsolved issue within the EC community, each EC technique introduces an experimental solution through a combination of deterministic models and stochastic principles. Under such conditions, each EC technique holds distinctive characteristics that properly satisfy the requirements of particular problems [22,23,24]. Therefore, no single EC algorithm is able to solve all problems adequately.
Recently, it has also been identified that each EC scheme presents a different tendency to repeatedly analyze certain areas of the search space, or even areas outside it, independently of the objective function values obtained. This effect arises from the operation of the specific operators of each EC method or during the early stages of the evolution process. Since it adversely affects the results obtained by each EC method, some strategies [25] have been suggested to handle this problem. Among these, the appropriate configuration of parameters represents one of the most common strategies for reducing this effect and other undesired behaviors.
In order to determine the advantages and limitations of EC methods, their correct evaluation is an important task in the computational intelligence community. In general, EC algorithms are complex pieces of software with several operators and stochastic branches. Under such circumstances, appropriate statistical methods must be considered for obtaining significant conclusions. Although several comparisons among EC methods have been reported in the literature [26,27,28], their conclusions are based on a set of synthetic functions, without considering the context of the problem or appropriate statistical treatment.
This paper reports a comparative study of various EC techniques applied to the complex problem of 2D-IIR filter design. In the comparison, special attention is paid to the most popular EC algorithms currently in use: PSO, ABC, Differential Evolution (DE), Harmony Search (HS), the Gravitational Search Algorithm (GSA) and the Flower Pollination Algorithm (FPA). In the experiments, Wilcoxon and Bonferroni statistical analyses have been conducted to validate the results obtained by these techniques.
The rest of this manuscript is organized as follows: In Section 2, a brief description of each algorithm used in the comparison is presented. In Section 3 the 2D-IIR filter design problem is formulated from the optimization point of view. The experimental results and the statistical analysis are presented in Section 4. In Section 5, a discussion is conducted. Finally, in Section 6 the conclusions are reported.

2. Evolutionary Computation Techniques

Evolutionary computation (EC) techniques are powerful tools for solving complex optimization problems. Unlike classical methods, EC algorithms are particularly well suited to multimodal objective functions.
Most EC techniques are inspired by natural or social phenomena, which at a certain level of abstraction can be considered as optimization processes. In general, EC techniques are formed of two principal components: deterministic rules and stochastic operations. By combining both elements, each EC method proposes a particular solution to the exploration–exploitation balance. Under such conditions, each EC technique holds distinctive characteristics that appropriately fulfill the requirements of particular problems.
Several EC methods have been proposed in the literature. According to their applications [14], the algorithms Particle Swarm Optimization (PSO) [29], the Artificial Bee Colony (ABC) algorithm [30], the Differential Evolution (DE) method [31], the Harmony Search (HS) strategy [32], the Gravitational Search Algorithm (GSA) [33] and the Flower Pollination Algorithm (FPA) [34] are considered the most popular currently in use. Therefore, they will be used in our comparison study. In this Section, a description of each algorithm used in the comparison is presented.

2.1. Particle Swarm Optimization (PSO)

PSO is probably the most popular optimization technique; it was introduced by Kennedy in 1995 [29] and is motivated by the behavior of bird flocking and fish schooling. The algorithm uses a group or swarm of $N$ particles $\mathbf{p}^k = \{p_1^k, p_2^k, \ldots, p_N^k\}$ as candidate solutions, which are evolved from an initial point ($k = 0$) to a maximum number of generations ($k = \mathrm{max\_gen}$). Each particle $p_i^k$ ($i \in [1, \ldots, N]$) symbolizes a $d$-dimensional vector $\{p_{i,1}^k, p_{i,2}^k, \ldots, p_{i,d}^k\}$, where each dimension is a decision variable of the treated problem. After initializing the particles randomly throughout the search space, each particle $p_i^k$ is evaluated by the fitness function $f(p_i^k)$; the best fitness value found during the evolution process is stored in the global best position $\mathbf{g} = (g_1, g_2, \ldots, g_d)$, while each particle also keeps its personal best position $p_i^* = (p_{i,1}^*, p_{i,2}^*, \ldots, p_{i,d}^*)$.
For practical purposes, the PSO implementation proposed by Lin in [35] is adopted, which computes the new position $p_i^{k+1}$ of each particle $p_i^k$ as follows:
$$v_{i,j}^{k+1} = \omega v_{i,j}^k + c_1 r_1 (p_{i,j}^* - p_{i,j}^k) + c_2 r_2 (g_j - p_{i,j}^k), \qquad p_{i,j}^{k+1} = p_{i,j}^k + v_{i,j}^{k+1}$$
where $\omega$ is the inertia weight that balances the contribution of the current velocity to the updated one, $c_1$ and $c_2$ are positive numbers that control the acceleration of each individual toward the positions $p_i^*$ and $\mathbf{g}$, respectively, and $r_1$ and $r_2$ are random numbers in the interval $[0, 1]$.
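As an illustration (not the implementation used in the paper), the update rules above can be sketched in a few lines of NumPy. The linearly decreasing inertia weight between $w_{\max}$ and $w_{\min}$ and the bound handling by clipping are assumptions of this sketch:

```python
import numpy as np

def pso_minimize(f, lo, hi, n=25, max_gen=200, c1=2.0, c2=2.0,
                 w_max=0.9, w_min=0.4, seed=0):
    """Minimal PSO sketch implementing the velocity/position update above."""
    rng = np.random.default_rng(seed)
    d = lo.size
    p = rng.uniform(lo, hi, (n, d))            # particle positions p_i^k
    v = np.zeros((n, d))                       # velocities v_i^k
    pbest = p.copy()                           # personal bests p_i*
    pbest_f = np.array([f(x) for x in p])
    g = pbest[pbest_f.argmin()].copy()         # global best g
    for k in range(max_gen):
        w = w_max - (w_max - w_min) * k / max_gen   # decreasing inertia weight
        r1 = rng.random((n, d))
        r2 = rng.random((n, d))
        v = w * v + c1 * r1 * (pbest - p) + c2 * r2 * (g - p)
        p = np.clip(p + v, lo, hi)             # keep particles inside the box
        fp = np.array([f(x) for x in p])
        better = fp < pbest_f
        pbest[better], pbest_f[better] = p[better], fp[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

On a simple quadratic function the swarm quickly collapses toward the optimum; the function names here are placeholders, not taken from the paper's code.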

2.2. Artificial Bee Colony (ABC)

The ABC was introduced by Karaboga [30]. It is based on the intelligent foraging behavior of honeybee swarms. In the ABC implementation, the algorithm generates a population $\mathbf{X}^k = \{x_1^k, x_2^k, \ldots, x_N^k\}$ of $N$ possible food locations, which are evolved from the initial point ($k = 0$) to the maximum number of generations ($k = \mathrm{max\_gen}$). Each food position $x_i^k$ ($i \in [1, \ldots, N]$) symbolizes a $d$-dimensional vector $\{x_{i,1}^k, x_{i,2}^k, \ldots, x_{i,d}^k\}$, where each dimension is a decision variable of the treated problem. The initial feasible food locations (solutions) are evaluated by an objective function to determine whether each food location represents a satisfactory solution (nectar amount) or not. Once the values of the cost function are computed, the candidate solutions evolve through different ABC operators (honeybee types), where each food location $x_i^k$ generates a new food source $t_i$ computed as follows:
$$t_i = x_i^k + \phi (x_i^k - x_r^k), \quad i, r \in (1, 2, \ldots, N)$$
where $x_r^k$ is a randomly selected food location that satisfies the condition $r \neq i$, and $\phi$ is a random number in the interval $[-1, 1]$. When a new solution $t_i$ is produced, a probability associated with its fitness value $\mathrm{fit}(t_i)$ is calculated. The quality of a solution $x_i^k$ determined by the fitness function is assigned using the following expression:
$$\mathrm{fit}(x_i^k) = \begin{cases} \dfrac{1}{1 + f(x_i^k)} & \text{if } f(x_i^k) \geq 0 \\[1ex] 1 + |f(x_i^k)| & \text{if } f(x_i^k) < 0 \end{cases}$$
where $f(\cdot)$ describes the cost function to be optimized. Once the fitness values of the complete population are computed, a greedy selection between $t_i$ and $x_i^k$ is employed: if the quality of $t_i$ is better than that of $x_i^k$, the solution $x_i^k$ is replaced by $t_i$; otherwise, the original solution remains.
If a candidate solution $i$ (food source) cannot be improved within a fixed number of trials named the "limit", the candidate solution is discarded and the related bee becomes a scout, which explores the search space without using any information. To verify whether a candidate solution has reached the fixed "limit", a counter is assigned to every food source $i$; the counter is incremented each time a bee operator fails to improve the food source's quality.
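The employed-bee update, greedy selection and scout mechanism can be sketched as follows. This is an illustrative simplification (one perturbed dimension per update, uniform scout restarts), not the reference ABC code:

```python
import numpy as np

def abc_minimize(f, lo, hi, n_food=10, max_gen=300, limit=20, seed=0):
    """Sketch of ABC: employed-bee search, greedy selection and scout restarts."""
    rng = np.random.default_rng(seed)
    d = lo.size
    X = rng.uniform(lo, hi, (n_food, d))
    fX = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)        # failure counters per food source
    best_x, best_f = X[fX.argmin()].copy(), fX.min()
    for _ in range(max_gen):
        for i in range(n_food):
            r = rng.integers(n_food)
            while r == i:                       # partner must differ from i
                r = rng.integers(n_food)
            t = X[i].copy()
            j = rng.integers(d)                 # perturb a single dimension
            phi = rng.uniform(-1.0, 1.0)
            t[j] = np.clip(X[i, j] + phi * (X[i, j] - X[r, j]), lo[j], hi[j])
            ft = f(t)
            if ft < fX[i]:                      # greedy selection
                X[i], fX[i], trials[i] = t, ft, 0
                if ft < best_f:
                    best_x, best_f = t.copy(), ft
            else:
                trials[i] += 1
                if trials[i] > limit:           # scout: abandon and restart
                    X[i] = rng.uniform(lo, hi)
                    fX[i] = f(X[i])
                    trials[i] = 0
    return best_x, best_f
```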

2.3. Differential Evolution (DE)

The DE was developed by Storn and Price [31], where each feasible solution is defined as a vector represented as follows:
$$x_i^t = (x_{1,i}^t, x_{2,i}^t, \ldots, x_{d,i}^t), \quad i = 1, 2, \ldots, n$$
where $x_i^t$ is the $i$-th vector at iteration $t$. The standard DE procedure has three main stages: (A) mutation, (B) crossover and (C) selection, which are described below:
(A) Mutation. This process is applied to a given vector $x_i^t$ at generation $t$. First, three different vectors $x_p$, $x_q$ and $x_r$ are randomly selected from the population; then, a new vector (candidate solution) is generated as follows:
$$v_i^{t+1} = x_p^t + F (x_q^t - x_r^t)$$
where $F \in [0, 1]$ is a parameter known as the differential weight, which is adjusted depending on the problem to be optimized.
(B) Crossover. This operator is controlled by the crossover rate $C_r \in [0, 1]$, which ensures that only some elements of each vector are modified; it is applied to each dimension (element) of a solution. The crossover begins by generating a random number $r_i \in [0, 1]$; then, the $j$-th component of $v_i$ is handled as follows:
$$u_{j,i}^{t+1} = \begin{cases} v_{j,i}^{t+1} & \text{if } r_i \leq C_r \\ x_{j,i}^t & \text{otherwise} \end{cases}, \quad j = 1, 2, \ldots, d; \; i = 1, 2, \ldots, n$$
(C) Selection. The selection process is essentially the same as that used in Genetic Algorithms (GA) [16]. The new solution is evaluated by the fitness function; if the new candidate solution is better than the current one, it replaces it. This can be expressed by the following criterion:
$$x_i^{t+1} = \begin{cases} u_i^{t+1} & \text{if } f(u_i^{t+1}) \leq f(x_i^t) \\ x_i^t & \text{otherwise} \end{cases}$$
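The three stages above can be condensed into a short DE/rand/1/bin sketch; the parameter values are common defaults, not prescriptive, and the boundary clipping is an assumption of this sketch:

```python
import numpy as np

def de_minimize(f, lo, hi, n=20, max_gen=200, F=0.5, Cr=0.9, seed=0):
    """Sketch of DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    d = lo.size
    X = rng.uniform(lo, hi, (n, d))
    fX = np.array([f(x) for x in X])
    for _ in range(max_gen):
        for i in range(n):
            # (A) mutation: three distinct vectors, none equal to i
            p, q, r = rng.choice([k for k in range(n) if k != i], 3, replace=False)
            v = X[p] + F * (X[q] - X[r])
            # (B) crossover: keep mutant genes with probability Cr
            mask = rng.random(d) <= Cr
            mask[rng.integers(d)] = True       # guarantee at least one mutant gene
            u = np.clip(np.where(mask, v, X[i]), lo, hi)
            # (C) selection: greedy replacement
            fu = f(u)
            if fu <= fX[i]:
                X[i], fX[i] = u, fu
    k = fX.argmin()
    return X[k], fX[k]
```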

2.4. Harmony Search (HS)

The HS was proposed by Geem [32] and is based on music: it was inspired by the observation that musicians search for a perfect state of harmony through improvisation, analogous to the search for the optimal solution of an optimization problem. When improvising, a musician follows three rules to obtain a good melody: play a well-known passage precisely from memory, play something similar to a known piece, or compose new music by playing random notes. In the HS operation, these rules become: (a) Harmony Memory (HM) consideration, (b) pitch adjustment and (c) random solution initialization.
(a) Harmony Memory (HM) Consideration. This stage guarantees that the best harmonies are transferred to the new harmony memory. To apply the HM more efficiently, a parameter is assigned that helps to select its elements: the Harmony Memory Consideration Rate (HMCR), defined in $[0, 1]$. If the HMCR value is low, only a few good harmonies are chosen and convergence can be slow. If it is notably high, most of the new harmonies are taken from the HM, and additional harmonies are not used in the exploration stage.
(b) Pitch Adjustment. Each element obtained by memory consideration is additionally analyzed to decide whether it should be pitch-modified. For this action, the Pitch-Adjusting Rate (PAR) determines the frequency of the adjustment, while the Bandwidth value (BW) regulates the local search around the chosen elements of the HM. The pitch adjustment is therefore computed as:
$$x_{new} = \begin{cases} x_{new} \pm \mathrm{rand}(0,1) \cdot BW & \text{with probability } PAR \\ x_{new} & \text{with probability } (1 - PAR) \end{cases}$$
Pitch modification is responsible for producing new promising harmonies by slightly adjusting the original positions. This action can be considered the counterpart of the mutation rule in evolutionary techniques. Consequently, the decision variable is either modified by a random amount in the range $[0, BW]$ or remains unchanged.
(c) Random solution initialization. This operation tries to increase the diversity of the solutions. As mentioned in (a), if a new solution is not taken from the harmony memory, it must be created randomly within the search space. This helps to explore new areas and to avoid local optima. The process is described as follows:
$$x_{new} = \begin{cases} x_i \in \{x_1, x_2, \ldots, x_{HMS}\} & \text{with probability } HMCR \\ l + (u - l) \cdot \mathrm{rand}(0,1) & \text{with probability } (1 - HMCR) \end{cases}$$
where $HMS$ is the size of the HM, $x_i$ is an element chosen from the HM, $u$ and $l$ are the upper and lower bounds, and $\mathrm{rand}(0,1)$ is a random value uniformly distributed between 0 and 1.
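A compact sketch of one HS improvisation loop follows; the HMCR, PAR and BW values are illustrative, and the replace-the-worst update is a common convention assumed here:

```python
import numpy as np

def harmony_search(f, lo, hi, hms=10, max_iter=2000,
                   hmcr=0.9, par=0.3, bw=0.1, seed=0):
    """Sketch of HS: memory consideration, pitch adjustment, random improvisation."""
    rng = np.random.default_rng(seed)
    d = lo.size
    HM = rng.uniform(lo, hi, (hms, d))          # harmony memory
    fHM = np.array([f(x) for x in HM])
    for _ in range(max_iter):
        new = np.empty(d)
        for j in range(d):
            if rng.random() < hmcr:             # (a) take the value from the HM
                new[j] = HM[rng.integers(hms), j]
                if rng.random() < par:          # (b) pitch-adjust within +/- BW
                    new[j] += rng.uniform(-1.0, 1.0) * bw
            else:                               # (c) random improvisation
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        fn = f(new)
        worst = fHM.argmax()
        if fn < fHM[worst]:                     # replace the worst stored harmony
            HM[worst], fHM[worst] = new, fn
    k = fHM.argmin()
    return HM[k], fHM[k]
```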

2.5. Gravitational Search Algorithm (GSA)

The GSA is an evolutionary technique inspired by the Newtonian laws of gravity [33]. The GSA has been widely used in different engineering fields due to its capability of determining competitive solutions [36]. In GSA, each mass (solution) represents a $d$-dimensional vector $x_i(t) = (x_i^1, \ldots, x_i^d)$, $i \in (1, \ldots, N)$. At time $t$, the mass $j$ acts on the mass $i$ in the dimension $h$ ($h \in 1, \ldots, d$) with a force defined as:
$$F_{ij}^h(t) = G(t) \frac{M_{pi}(t) \, M_{aj}(t)}{R_{ij}(t) + \varepsilon} \left( x_j^h(t) - x_i^h(t) \right)$$
where $M_{aj}$ is the active gravitational mass associated with the solution $j$, $M_{pi}$ represents the passive gravitational mass of the element $i$, $G(t)$ is the gravitational constant at time $t$, $\varepsilon$ is a small constant, and $R_{ij}$ is the distance between the elements $i$ and $j$. The function $G(t)$ is adjusted throughout the evolution process to balance exploration and exploitation by modifying the attraction forces between solutions. The total force operating over the solution $i$ is represented as follows:
$$F_i^h(t) = \sum_{j=1, j \neq i}^{N} F_{ij}^h(t)$$
The acceleration at time t of the candidate solution is estimated as below:
$$a_i^h(t) = \frac{F_i^h(t)}{M_{ni}(t)}$$
where M n i describes the inertial mass of the solution i. The new location of each individual i is calculated as:
$$v_i^h(t+1) = \mathrm{rand}() \cdot v_i^h(t) + a_i^h(t), \qquad x_i^h(t+1) = x_i^h(t) + v_i^h(t+1)$$
The inertial and gravitational masses of each individual are estimated directly by its fitness function value. Consequently, the inertial and gravitational masses are updated by the following expressions:
$$M_{ai} = M_{pi} = M_{ii} = M_i$$
$$m_i(t) = \frac{f(x_i(t)) - \mathrm{worst}(t)}{\mathrm{best}(t) - \mathrm{worst}(t)}$$
$$M_i(t) = \frac{m_i(t)}{\sum_{j=1}^{N} m_j(t)}$$
where f ( ) describes the cost function that determines the quality of the solutions.
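A simplified GSA sketch is given below. The exponentially decaying $G(t) = G_0 e^{-\alpha t / T}$ is an assumption borrowed from common GSA implementations (it is not stated explicitly above), and the per-pair random weighting is illustrative:

```python
import numpy as np

def gsa_minimize(f, lo, hi, n=20, max_gen=200, G0=100.0, alpha=20.0, seed=0):
    """Simplified GSA sketch: mass-weighted attraction with a decaying G(t)."""
    rng = np.random.default_rng(seed)
    d = lo.size
    X = rng.uniform(lo, hi, (n, d))
    V = np.zeros((n, d))
    eps = 1e-12
    best_x, best_f = None, np.inf
    for t in range(max_gen):
        fX = np.array([f(x) for x in X])
        k = fX.argmin()
        if fX[k] < best_f:                          # remember the best mass seen
            best_x, best_f = X[k].copy(), fX[k]
        best, worst = fX.min(), fX.max()
        m = (fX - worst) / (best - worst - eps)     # 1 for best, ~0 for worst
        M = m / (m.sum() + eps)                     # normalized masses M_i(t)
        G = G0 * np.exp(-alpha * t / max_gen)       # assumed decaying G(t)
        A = np.zeros((n, d))
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                R = np.linalg.norm(X[i] - X[j])
                # acceleration contribution; the inertial mass cancels M_pi
                A[i] += rng.random() * G * M[j] * (X[j] - X[i]) / (R + eps)
        V = rng.random((n, d)) * V + A              # velocity update with rand()
        X = np.clip(X + V, lo, hi)
    return best_x, best_f
```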

2.6. Flower Pollination Algorithm (FPA)

The FPA was introduced by Xin-She Yang in 2012 [34]. The FPA simulates the flower pollination process of plants. In biology, this process is key for the reproduction of a plant and is associated with the transfer of pollen, performed by pollinator agents (insects, birds, the wind, etc.). Pollination can be classified into two forms: biotic and abiotic. Most plants use biotic pollination, which requires external pollinators; this process is known as cross-pollination. On the other hand, abiotic pollination, known as self-pollination, does not require the intervention of any external pollinator. In contrast to the biological process, the FPA for simplicity contemplates just one flower that provides one pollen gamete. In the FPA, a solution $f_i$ corresponds to a flower or pollen gamete. In the implementation, the algorithm generates a population $\mathbf{F}^k = \{f_1^k, f_2^k, \ldots, f_m^k\}$ of $m$ flowers, where each flower is represented by a $d$-dimensional vector $\{f_{i,1}^k, f_{i,2}^k, \ldots, f_{i,d}^k\}$; the dimension $d$ corresponds to the number of decision variables of the problem at hand.
(A) Lévy flights. One of the most powerful operators in the flower pollination algorithm is the Lévy flight [37], which generates a random walk whose step length follows a heavy-tailed probability distribution. The random step is generated as follows:
$$s_i = \frac{u}{|v|^{1/\beta}}$$
where $u = \{u_1, \ldots, u_d\}$ and $v = \{v_1, \ldots, v_d\}$ are vectors of dimensionality $d$, and $\beta = 3/2$. The elements of the vectors $u$ and $v$ are drawn from normal distributions as below:
$$u \sim N(0, \sigma_u^2), \quad v \sim N(0, \sigma_v^2), \quad \sigma_u = \left( \frac{\Gamma(1+\beta) \sin(\pi\beta/2)}{\Gamma((1+\beta)/2) \, \beta \, 2^{(\beta-1)/2}} \right)^{1/\beta}, \quad \sigma_v = 1$$
where $\Gamma(\cdot)$ represents the Gamma function.
(B) Local pollination. The local pollination is described in Equation (19), where $f_j^k$ and $f_h^k$ are pollens from two different flowers of the same plant species:
$$f_i^{k+1} = f_i^k + \epsilon (f_j^k - f_h^k)$$
where $\epsilon$ is a scalar random value within $[0, 1]$.
(C) Global pollination. The original position $f_i^k$ is replaced by a new position $f_i^{k+1}$ according to the following equation:
$$f_i^{k+1} = f_i^k + s_i (g^* - f_i^k)$$
where $s_i$ is the step size generated by the Lévy flight of operator (A), while $g^*$ is the best candidate solution seen so far.
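Both pollination rules, together with Mantegna's procedure for the Lévy step, can be sketched as follows. The switching probability $p = 0.8$ follows [34]; the greedy acceptance and clipping are assumptions of this sketch:

```python
from math import gamma, pi, sin

import numpy as np

def levy_step(d, beta=1.5, rng=None):
    """Mantegna's algorithm for a d-dimensional Lévy-distributed step."""
    if rng is None:
        rng = np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, d)
    v = rng.normal(0.0, 1.0, d)
    return u / np.abs(v) ** (1 / beta)

def fpa_minimize(f, lo, hi, n=20, max_gen=300, p=0.8, seed=0):
    """Sketch of FPA: global (Lévy) and local pollination with greedy acceptance."""
    rng = np.random.default_rng(seed)
    d = lo.size
    F = rng.uniform(lo, hi, (n, d))
    fF = np.array([f(x) for x in F])
    g, gf = F[fF.argmin()].copy(), fF.min()
    for _ in range(max_gen):
        for i in range(n):
            if rng.random() < p:                # global pollination via Lévy flight
                cand = F[i] + levy_step(d, rng=rng) * (g - F[i])
            else:                               # local pollination between two flowers
                j, h = rng.choice(n, 2, replace=False)
                cand = F[i] + rng.random() * (F[j] - F[h])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fF[i]:
                F[i], fF[i] = cand, fc
                if fc < gf:
                    g, gf = cand.copy(), fc
    return g, gf
```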

3. 2D-IIR Filter Design Process as an Optimization Problem

This section formulates the 2D-IIR filter design process as an optimization problem. In this paper, the zero-phase IIR filter design is considered. A zero-phase filter is a particular case of linear-phase filters in which the phase slope is zero, which implies that the impulse response is always even ($h(n) = h(-n)$) [8]. The structure of a 2D-IIR filter is presented in Equation (21) [15,16,38,39,40]:
$$H(z_1, z_2) = H_0 \frac{\sum_{i=0}^{K} \sum_{j=0}^{K} a_{ij} z_1^i z_2^j}{\prod_{k=1}^{K} (1 + b_k z_1 + c_k z_2 + d_k z_1 z_2)}$$
where $a_{ij}$, $b_k$, $c_k$, $d_k$ are the filter coefficients, $H_0$ is an intensity constant and $K$ symbolizes the filter order. In the case $K = 2$, the model of Equation (21) corresponds to a 2nd-order filter, whereas a higher value ($K \geq 3$) specifies a high-order filter. In Equation (21), $a_{00} = 1$ is usually adopted (by normalizing $a_{ij}$ to the value of $a_{00}$). $z_1$ and $z_2$ represent complex variables in the z-transform domain. However, for convenience, the Fourier frequency domain is used, where $z_1$ and $z_2$ are defined as follows:
$$z_1 = e^{-j\omega_1}, \quad z_2 = e^{-j\omega_2}$$
where $j$ is the imaginary unit and $\omega_1, \omega_2 \in [0, \pi]$ are frequency variables. With the substitution of Equation (22) into Equation (21), $H(z_1, z_2)$ becomes $M(\omega_1, \omega_2)$.
The objective of filter design is to approximate the response of a determined 2D-IIR filter structure to the desired behavior $M_d(\omega_1, \omega_2)$ provided by the user requirements. Therefore, the design process consists of finding the optimal parameters of $M(\omega_1, \omega_2)$ that present the best possible resemblance with the desired response $M_d(\omega_1, \omega_2)$. Both responses $M(\omega_1, \omega_2)$ and $M_d(\omega_1, \omega_2)$ are discretely computed at $N_1 \times N_2$ points.
To guide the search process, an objective function $J$ that evaluates the similarity between the candidate filter response $M(\omega_1, \omega_2)$ and the desired response $M_d(\omega_1, \omega_2)$ must be defined. This similarity can be measured as the difference between both responses, as follows:
$$J(a_{ij}, b_k, c_k, d_k, H_0) = \sum_{n_1=0}^{N_1} \sum_{n_2=0}^{N_2} \left[ \left| M(\omega_1, \omega_2) \right| - M_d(\omega_1, \omega_2) \right]^2$$
where $\omega_1 = (\pi / N_1) n_1$ and $\omega_2 = (\pi / N_2) n_2$. For the sake of clarity, Equation (23) can be rewritten as:
$$J = \sum_{n_1=0}^{N_1} \sum_{n_2=0}^{N_2} \left[ \left| M\!\left( \frac{\pi n_1}{N_1}, \frac{\pi n_2}{N_2} \right) \right| - M_d\!\left( \frac{\pi n_1}{N_1}, \frac{\pi n_2}{N_2} \right) \right]^2$$
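The double summation above maps directly to a grid evaluation; a sketch follows, where the function names are placeholders and the magnitude responses are passed in as callables:

```python
import numpy as np

def objective_J(candidate_mag, desired_mag, N1=50, N2=50):
    """J: sum of squared differences over the (N1+1) x (N2+1) frequency grid,
    with w1 = pi*n1/N1 and w2 = pi*n2/N2 as in the equation above."""
    w1, w2 = np.meshgrid(np.pi * np.arange(N1 + 1) / N1,
                         np.pi * np.arange(N2 + 1) / N2, indexing="ij")
    return float(np.sum((candidate_mag(w1, w2) - desired_mag(w1, w2)) ** 2))
```

For a candidate identical to the target, J is exactly zero; any mismatch accumulates over the 51 x 51 frequency samples.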
Figure 1 illustrates the filter design process. In the learning schema, the difference between the computed (y(t)) and desired (d(t)) filter response is used to guide the evolution process.
One main difficulty of 2D-IIR filters is their tendency to produce unstable transfer functions. Therefore, in the evaluation of Equation (24), it is necessary to verify that the poles of the transfer function are located within the unit circle of the z-plane. To guarantee this condition, several constraints must be fulfilled, which can be expressed by Equation (25) as follows [15,16,38]:
$$|b_k + c_k| - 1 < d_k < 1 - |b_k - c_k|, \quad k = 1, 2, \ldots, K$$
Under such circumstances, the filter design process can be translated into an optimization problem with constraints.
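The stability test of Equation (25) reduces to a per-factor interval check over the denominator coefficients; a minimal sketch (function name is a placeholder):

```python
def is_stable(b, c, d):
    """Check |b_k + c_k| - 1 < d_k < 1 - |b_k - c_k| for every
    first-order denominator factor of the 2D-IIR transfer function."""
    return all(abs(bk + ck) - 1 < dk < 1 - abs(bk - ck)
               for bk, ck, dk in zip(b, c, d))
```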

Comparison Setting

In order to conduct the comparison, it is necessary to define the experiment and its main characteristics. In the test, the filter design problem has been selected according to [16,38]; the idea behind this selection is to preserve comparability with the results produced by such studies. In the experiment, the 2D-IIR filter structure is formulated as follows:
$$H(z_1, z_2) = H_0 \frac{a_{00} + a_{01} z_2 + a_{02} z_2^2 + a_{10} z_1 + a_{20} z_1^2 + a_{11} z_1 z_2 + a_{12} z_1 z_2^2 + a_{21} z_1^2 z_2 + a_{22} z_1^2 z_2^2}{(1 + b_1 z_1 + c_1 z_2 + d_1 z_1 z_2)(1 + b_2 z_1 + c_2 z_2 + d_2 z_1 z_2)}$$
For the sake of computational ease, the variables $z_1$ and $z_2$ are replaced by $f_{xy} = \cos(x\omega_1 + y\omega_2)$ and $g_{xy} = \sin(x\omega_1 + y\omega_2)$, with $x, y \in \{0, 1, 2\}$. This simplification has been proposed by Das and Konar in [15]; its objective is to ease the implementation of digital filters. Therefore, Equation (26) is rewritten as:
$$M(\omega_1, \omega_2) = H_0 \left[ \frac{a_{00} + a_{01} f_{01} + a_{02} f_{02} + a_{10} f_{10} + a_{20} f_{20} + a_{11} f_{11} + a_{12} f_{12} + a_{21} f_{21} + a_{22} f_{22}}{V} - j \, \frac{a_{00} + a_{01} g_{01} + a_{02} g_{02} + a_{10} g_{10} + a_{20} g_{20} + a_{11} g_{11} + a_{12} g_{12} + a_{21} g_{21} + a_{22} g_{22}}{V} \right]$$
where $V$ is defined by:
$$V = \left[ (1 + b_1 f_{10} + c_1 f_{01} + d_1 f_{11}) - j (b_1 g_{10} + c_1 g_{01} + d_1 g_{11}) \right] \left[ (1 + b_2 f_{10} + c_2 f_{01} + d_2 f_{11}) - j (b_2 g_{10} + c_2 g_{01} + d_2 g_{11}) \right]$$
Under such conditions, Equation (27) can be compacted as follows:
$$M(\omega_1, \omega_2) = H_0 \frac{N_R - j N_I}{(D_{1R} - j D_{1I})(D_{2R} - j D_{2I})}$$
where the elements of Equation (29) are defined as:
$$\begin{aligned} N_R &= a_{00} + a_{01} f_{01} + a_{02} f_{02} + a_{10} f_{10} + a_{20} f_{20} + a_{11} f_{11} + a_{12} f_{12} + a_{21} f_{21} + a_{22} f_{22} \\ N_I &= a_{00} + a_{01} g_{01} + a_{02} g_{02} + a_{10} g_{10} + a_{20} g_{20} + a_{11} g_{11} + a_{12} g_{12} + a_{21} g_{21} + a_{22} g_{22} \\ D_{1R} &= 1 + b_1 f_{10} + c_1 f_{01} + d_1 f_{11} \\ D_{1I} &= b_1 g_{10} + c_1 g_{01} + d_1 g_{11} \\ D_{2R} &= 1 + b_2 f_{10} + c_2 f_{01} + d_2 f_{11} \\ D_{2I} &= b_2 g_{10} + c_2 g_{01} + d_2 g_{11} \end{aligned}$$
Considering Equations (29) and (30), the magnitude $|M(\omega_1, \omega_2)|$ of the filter is computed as:
$$|M(\omega_1, \omega_2)| = H_0 \sqrt{ \frac{N_R^2 + N_I^2}{(D_{1R}^2 + D_{1I}^2)(D_{2R}^2 + D_{2I}^2)} }$$
In the test, the desired response M d ( ω 1 , ω 2 ) is defined as a circular symmetric low-pass response [10,15,16,38,39,40], which is given by:
$$M_d(\omega_1, \omega_2) = \begin{cases} 1 & \text{if } \sqrt{\omega_1^2 + \omega_2^2} \leq 0.08\pi \\ 0.5 & \text{if } 0.08\pi \leq \sqrt{\omega_1^2 + \omega_2^2} \leq 0.12\pi \\ 0 & \text{otherwise} \end{cases}$$
To compare both responses $M(\omega_1, \omega_2)$ and $M_d(\omega_1, \omega_2)$, a grid of $N_1 \times N_2$ points is used, where $N_1$ and $N_2$ are set to 50.
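The circular symmetric low-pass target of Equation (32) can be evaluated on such a frequency grid as follows (a vectorized NumPy sketch; the function name is a placeholder):

```python
import numpy as np

def desired_response(w1, w2):
    """Circular symmetric low-pass target Md(w1, w2) of Equation (32):
    1 inside the passband radius 0.08*pi, 0.5 in the transition band
    up to 0.12*pi, and 0 in the stopband."""
    r = np.sqrt(np.asarray(w1) ** 2 + np.asarray(w2) ** 2)
    return np.where(r <= 0.08 * np.pi, 1.0,
                    np.where(r <= 0.12 * np.pi, 0.5, 0.0))
```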
In order to ensure the stability of the filter of Equation (26), each candidate solution must fulfill the following four constraints:
$$\begin{aligned} -(1 + d_k) &< (b_k + c_k) < (1 + d_k) \\ -(1 - d_k) &< (b_k - c_k) < (1 - d_k) \\ (1 + d_k) &> 0 \\ (1 - d_k) &> 0 \end{aligned}$$
Under these conditions, the filter design process can be interpreted as a constrained optimization problem, which can be formulated as follows:
$$\begin{aligned} \min \; & J(a_{ij}, b_k, c_k, d_k, H_0) = \sum_{n_1=0}^{N_1} \sum_{n_2=0}^{N_2} \left[ \left| M\!\left( \frac{\pi n_1}{50}, \frac{\pi n_2}{50} \right) \right| - M_d\!\left( \frac{\pi n_1}{50}, \frac{\pi n_2}{50} \right) \right]^2 \\ \text{subject to} \; & -(1 + d_k) < (b_k + c_k) < (1 + d_k) \\ & -(1 - d_k) < (b_k - c_k) < (1 - d_k) \\ & (1 + d_k) > 0, \quad (1 - d_k) > 0, \quad k = 1, 2, \ldots, K \end{aligned}$$
In order to use an EC algorithm for the problem formulated in Equation (34), it is necessary to represent each candidate solution as one location in the multi-dimensional search space. A candidate solution to Equation (34), considering a 2nd-order filter ($K = 2$), is represented by the filter parameters concatenated in the 15-dimensional vector $x_i$:
$$x_i = [a_{01}, a_{02}, a_{10}, a_{11}, a_{12}, a_{20}, a_{21}, a_{22}, b_1, c_1, d_1, b_2, c_2, d_2, H_0]^T$$
Under such conditions, the optimal solution is the vector $x_i$ whose parameters minimize Equation (34) while simultaneously satisfying the requirements imposed by its constraints. This solution corresponds to the 2D-IIR filter whose response $M(\omega_1, \omega_2)$ is the most similar to $M_d(\omega_1, \omega_2)$.
For constraint handling, all EC methods incorporate a penalty factor [41] whose value depends on the number of violated constraints. The main purpose is to add a penalty term to the original cost function so that constraint violations are discouraged and only feasible solutions are determined.
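Such a scheme can be sketched as a static penalty added once per violated stability constraint. The penalty magnitude used here is an assumed value, since [41] admits several variants:

```python
def penalized_J(J_value, b, c, d, penalty=1e4):
    """Static-penalty sketch: add `penalty` once per violated stability
    constraint. The magnitude 1e4 is an assumed value, not from the paper."""
    violations = 0
    for bk, ck, dk in zip(b, c, d):
        violations += not (-(1 + dk) < bk + ck < 1 + dk)
        violations += not (-(1 - dk) < bk - ck < 1 - dk)
        violations += not (1 + dk > 0)
        violations += not (1 - dk > 0)
    return J_value + penalty * violations
```

A feasible candidate keeps its raw J value, while every violated constraint pushes the cost up so that EC search is steered back into the feasible region.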

4. Experimental Results

In this section, the comparison results are presented. In the experiments, we have used the 2D-IIR filter design problem formulated in Section 3. In the tests, the following algorithms are compared: Particle Swarm Optimization (PSO) [29], the Artificial Bee Colony (ABC) algorithm [30], the Differential Evolution (DE) method [31], the Harmony Search (HS) strategy [32], the Gravitational Search Algorithm (GSA) [33] and the Flower Pollination Algorithm (FPA) [34]. These methods are considered the most popular EC algorithms currently in use [11]. In the comparison, the parameters of all algorithms are set as follows:
  • PSO: parameters $c_1 = 2$, $c_2 = 2$, and the inertia weight factors were set to $w_{\max} = 0.9$ and $w_{\min} = 0.4$ [42].
  • ABC: the parameters are given in [43], with limit = 90.
  • HS: according to [44], the parameters were set to HMCR = 0.9 and PAR = 0.3.
  • DE: as reported in [31], the parameters were set to CR = 0.9 and F = 0.5.
  • GSA: $G_0 = 100$ and $\alpha = 20$, according to [33].
  • FPA: the probabilistic global pollination factor was set to $p = 0.8$ [34].
All selected techniques have been configured with a population of 25 individuals, while the stopping criterion is set to 3000 iterations. In contrast to most related works, which usually employ fewer than 500 iterations [16,43,45], in this manuscript the maximum number of iterations is set to 3000 in order to perform the convergence analysis described in Section 4.2. In this work, the produced filters have not been implemented in fixed-point arithmetic hardware; for this reason, an analysis of the quantization error has not been considered. All conclusions have been obtained through simulated experiments.

4.1. Accuracy Comparison

In the first experiment, the design of a 2nd-order filter from Equations (32) and (34) is considered. Table 1 reports the J values (Equation (34)) generated by the experiments. They evaluate the differences between the ideal filter response and those produced by the EC techniques; a lower value of J indicates a better filter. Table 1 reports the minimal (J_min), maximal (J_max) and mean (J̄) values of J, along with their standard deviations (σ), obtained by each algorithm over 250 executions. The first three indexes measure the accuracy of each solution, whereas the last one assesses its robustness. The results show that the ABC and FPA algorithms present the best performance in terms of accuracy and robustness. Table 1 also indicates that the ABC algorithm maintains the lowest standard deviation, which means that it finds similar solutions with very small variation. The rest of the methods obtain different performance levels. In Table 1, the highlighted values represent the best values.
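The four indexes of Table 1 can be computed directly from the final J values of the repeated runs. The short sketch below uses hypothetical placeholder values, not the paper's data; in the experiments each algorithm is executed 250 times.

```python
# Sketch of the statistics reported in Table 1, computed from the final
# J values of repeated executions. The five J values below are
# hypothetical placeholders for demonstration only.
import statistics

def run_statistics(j_values):
    """Summarize repeated runs: accuracy (min/max/mean) and robustness (std)."""
    return {
        "J_min": min(j_values),               # best run
        "J_max": max(j_values),               # worst run
        "J_mean": statistics.mean(j_values),  # average accuracy
        "sigma": statistics.stdev(j_values),  # robustness (sample std. dev.)
    }

stats = run_statistics([2.6, 2.8, 3.1, 2.7, 2.9])
```

A low J_mean with a low sigma, as ABC and FPA exhibit in Table 1, indicates an algorithm that is both accurate and consistent across runs.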
Table 2 reports the filter parameters of the best filter obtained by each algorithm PSO, ABC, DE, HS, GSA and FPA after 250 individual runs, where each execution contemplates 3000 iterations. They correspond to the parameters that produce minimal J value ( J min ). In Table 2, the highlighted values represent the best-found parameters among all techniques.
Figure 2 graphically shows the filter responses produced by each algorithm. Each graph (b–g) presents the filter response produced by the parameters exhibited in Table 2. From an analysis of Figure 2, it is clear that the algorithms PSO and HS produce the worst performance, since prominent ripples are visible in their responses. GSA and DE present lower ripples than PSO and HS and, under such conditions, maintain better performance. Finally, ABC and FPA obtain the best filter responses: ABC exhibits very small ripples, whereas FPA practically eliminates them.

4.2. Convergence Analysis

The comparative study of the final J̄ values alone cannot effectively represent the search capabilities of the EC techniques. Hence, in Figure 3, a convergence experiment on the six methods is carried out. The purpose of this test is to evaluate how quickly each technique approaches the optimal solution. In the test, the performance of each technique is examined over the minimization of the filter design problem. The graphs of Figure 3 represent the median result over 250 executions, where each execution contemplates 3000 iterations. From Figure 3, it is clear that PSO and GSA converge faster than the other algorithms, but prematurely.
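A median convergence curve of the kind plotted in Figure 3 can be sketched as follows; the per-run histories below are hypothetical and only illustrate the aggregation step.

```python
# Sketch of how a convergence curve like Figure 3 is summarized: for each
# iteration, take the median of the best-so-far J value across the
# independent runs. The 3-run, 4-iteration histories are made-up examples.

def median_convergence(histories):
    """histories: list of per-run lists of best-so-far J per iteration."""
    n_iters = len(histories[0])
    curve = []
    for t in range(n_iters):
        column = sorted(run[t] for run in histories)
        m = len(column)
        mid = m // 2
        median = column[mid] if m % 2 else 0.5 * (column[mid - 1] + column[mid])
        curve.append(median)
    return curve

curve = median_convergence([
    [9.0, 6.0, 4.0, 3.0],
    [8.0, 7.0, 5.0, 2.5],
    [9.5, 6.5, 4.5, 2.8],
])
# since each history tracks the best-so-far value, the median curve
# is non-increasing
```

The median (rather than the mean) is a common choice here because a few diverging runs would otherwise distort the curve.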

4.3. Computational Cost

In this experiment, the objective is to measure the number of function evaluations (NFE) and the computing time (CT) required by each method to calculate the parameters of a given filter. All experiments consider the filter design formulated by Equations (32) and (34), which corresponds to the parameters exhibited in Table 2.
The NFE index refers to the number of function evaluations after which no further improvement is detected in the objective function J (Equation (34)). CT represents the elapsed time in seconds consumed by each algorithm over the NFE. The tests have been executed in Matlab on a Nitro computer with a Core i7 processor. Table 3 shows the averaged measurements obtained from 20 independent executions. A close inspection of Table 3 shows that PSO and HS are the fastest methods. Nevertheless, this apparent speed is a consequence of premature convergence. ABC, DE and GSA are the slowest algorithms. On the other hand, FPA represents the scheme that obtains the best trade-off between speed and accuracy.
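The NFE index can be sketched as follows. Since the exact improvement threshold is not stated, a small tolerance is assumed, along with 25 evaluations per iteration (the population size used in the experiments).

```python
# Sketch of the NFE index of Table 3: the number of objective-function
# evaluations consumed up to the last iteration that still improved the
# best-so-far J value. The tolerance is an assumed detail; the paper does
# not state the exact improvement threshold.

def nfe_until_stagnation(best_so_far, evals_per_iteration, tol=1e-6):
    """Return evaluations spent up to the last improving iteration."""
    last_improvement = 0
    for t in range(1, len(best_so_far)):
        if best_so_far[t] < best_so_far[t - 1] - tol:
            last_improvement = t
    return (last_improvement + 1) * evals_per_iteration

# 25 candidates evaluated per iteration (the population size used here);
# the best-so-far trace is a made-up example that stagnates after 4.0
nfe = nfe_until_stagnation([10.0, 6.0, 4.0, 4.0, 4.0], evals_per_iteration=25)
```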

4.4. Comparison with Different Bandwidth Sizes

In order to extend the comparison analysis, the performance of each method in the design of a low-pass filter with different bandwidth sizes is evaluated. For this experiment, two particular cases are considered. In the first case, the objective is to design a filter with an ideal response defined by:
M_d1(ω₁, ω₂) = { 1     if √(ω₁² + ω₂²) ≤ 0.16π
              { 0.5   if 0.16π < √(ω₁² + ω₂²) ≤ 0.24π
              { 0     otherwise
The second case considers the design of a filter with an ideal response defined as follows:
M_d2(ω₁, ω₂) = { 1     if √(ω₁² + ω₂²) ≤ 0.32π
              { 0.5   if 0.32π < √(ω₁² + ω₂²) ≤ 0.48π
              { 0     otherwise
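Both ideal responses are piecewise functions of the radial frequency √(ω₁² + ω₂²), which can be sketched directly:

```python
# Sketch of the two ideal low-pass responses M_d1 and M_d2 as piecewise
# functions of the radial frequency sqrt(w1^2 + w2^2).
import math

def ideal_response(w1, w2, wp, ws):
    """1 in the passband, 0.5 in the transition band, 0 in the stop-band."""
    r = math.hypot(w1, w2)
    if r <= wp:
        return 1.0
    if r <= ws:
        return 0.5
    return 0.0

def Md1(w1, w2):  # first case: passband edge 0.16*pi, stop-band edge 0.24*pi
    return ideal_response(w1, w2, 0.16 * math.pi, 0.24 * math.pi)

def Md2(w1, w2):  # second case: wider passband, edges 0.32*pi and 0.48*pi
    return ideal_response(w1, w2, 0.32 * math.pi, 0.48 * math.pi)
```

Because the response depends only on the radial frequency, the resulting specification is circularly symmetric, matching the filter class studied in this paper.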
Assuming J1 and J2 as the objective functions corresponding to the ideal responses M_d1 and M_d2, Table 4 presents the comparison results. They evaluate the differences between the ideal filter response and those produced by the filters designed through the EC techniques for different bandwidth sizes. Table 4 reports the minimal (J_min^p), maximal (J_max^p) and mean (J̄^p) values of J, along with their standard deviations (σ^p), obtained by each algorithm for each case p = 1, 2. The reported values are calculated from 250 independent executions of each method. A close inspection of Table 4 shows that the FPA scheme produces the best performance for both cases J1 and J2.

4.5. Filter Performance Characteristics

In this subsection, the performance of the filter characteristics is analyzed. The objective is to evaluate the responses of each of the best filters produced by every algorithm PSO, ABC, DE, HS, GSA and FPA (reported in Table 2).
The responses are assessed considering the following indexes: (a) error in the passband, (b) magnitude ripple, (c) stop-band attenuation and (d) the magnitude of poles.
(a) Pass-band error. It evaluates the error in frequency within the passband according to the following formulation:
E_RP(ω) = |M(e^jω) − M_d(e^jω)|   for ω ∈ [0, ω_p],
where ω p corresponds to the passband frequency.
(b) Ripple magnitude in the passband. It measures the size of the ripple according to the following model:
E_MR(ω) = |M(e^jω)| − 1   for ω ∈ [0, ω_p].
(c) The attenuation in the stop-band. It describes the weakening of the response within the stop-band according to:
AT(ω) = −20 log₁₀(|M(e^jω)|)   for ω ∈ [ω_s, π],
where ω s represents the stop-band frequency.
(d) The maximal pole size (MPs). This index evaluates the maximal magnitude of poles present in M ( e j ω ) .
The error produced by the frequency response in the passband, E_RP(ω), corresponds to the mismatch between the transfer function of the synthesized filter M(e^jω) and the desired transfer function M_d(e^jω), which models the ideal filter response. Under these conditions, a passband error as small as possible characterizes a better filter response. On the other hand, the ripple magnitude in the passband E_MR(ω) has to remain small enough to guarantee low distortion of the passband behavior. Conversely, the attenuation AT(ω) should be large in order to eliminate undesired high-frequency components. Table 5 presents the performance comparison of the filter characteristics. The results correspond to the responses of the best filters produced by each algorithm PSO, ABC, DE, HS, GSA and FPA (reported in Table 2). On close inspection of Table 5, it is evident that the FPA algorithm achieves the best performance in terms of passband error, magnitude ripple, stop-band attenuation and pole magnitude.
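As a sketch, the first three indexes can be evaluated on sampled magnitude responses; the sample values below are hypothetical and stand in for a dense frequency grid.

```python
# Sketch of the performance indexes of Section 4.5, evaluated on sampled
# magnitude responses. The sampled |M| values are hypothetical examples.
import math

def passband_error(M, Md):
    """Worst-case |M - Md| over the sampled passband frequencies (E_RP)."""
    return max(abs(m - md) for m, md in zip(M, Md))

def magnitude_ripple(M):
    """Worst-case deviation of |M| from unity in the passband (E_MR)."""
    return max(abs(m - 1.0) for m in M)

def stopband_attenuation(M_stop):
    """Worst-case attenuation in dB over the sampled stop-band (AT)."""
    return min(-20.0 * math.log10(m) for m in M_stop)

M_pass = [0.99, 1.01, 0.98]     # sampled |M| in the passband
Md_pass = [1.0, 1.0, 1.0]       # ideal passband response
M_stop = [0.01, 0.02, 0.005]    # sampled |M| in the stop-band

erp = passband_error(M_pass, Md_pass)
emr = magnitude_ripple(M_pass)
at = stopband_attenuation(M_stop)
```

Worst-case (rather than average) measures are used in this sketch because a single large ripple or a weakly attenuated frequency is enough to degrade the filtered output.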

4.6. Statistical Non-Parametrical Analysis

To statistically verify the results provided by the six techniques, a non-parametric statistical significance analysis known as Wilcoxon's rank-sum test [26,28,46,47] is implemented. For this analysis, 35 different executions of each algorithm were used to test whether there is a significant statistical difference between the J̄ values produced by each technique. Wilcoxon's rank-sum analysis estimates the result variations between two related techniques. The study is conducted considering a 5% significance level (0.05) over the J̄ data. Since ABC and FPA are the best-performing techniques (according to Table 1), they are compared against the rest of the methods. Table 6 summarizes the p-values of Wilcoxon's analysis for the pair-wise comparisons FPA vs. PSO, FPA vs. ABC, FPA vs. DE, FPA vs. HS and FPA vs. GSA. Table 7 reports the results for ABC vs. PSO, ABC vs. DE, ABC vs. HS, ABC vs. GSA and ABC vs. FPA. Pairwise comparisons among PSO, DE, HS and GSA were not conducted, since the results obtained by these techniques are worse than those of ABC and FPA. In the Wilcoxon study, the null hypothesis is that there is no significant difference between the two techniques, while the alternative hypothesis is that the difference between the two approaches is significant. To simplify the study of Table 6 and Table 7, the symbols ▲, ▼ and ► are used: ▲ indicates that the proposed approach achieves significantly better results than the compared algorithm, ▼ indicates that it achieves worse results, and ► indicates that the Wilcoxon analysis is not able to distinguish between the results of the two approaches. From Table 6 and Table 7, it is visible that all p-values are less than 0.05, which means that the FPA performs better than the rest of the algorithms.
Since several Wilcoxon rank-sum tests have been run, the probability of committing a type 1 error (false positive) increases, and the null hypothesis could be rejected even when it is true. To avoid this problem, the p-values of the Wilcoxon tests are adjusted using the Bonferroni correction method [48,49]. Combining the Wilcoxon test with the Bonferroni method, it can be shown that there exists a significant statistical difference between FPA and each of the selected algorithms. The results of the Bonferroni correction for FPA and ABC are shown in Table 8 and Table 9, respectively. The Bonferroni-corrected significance value was set to p = 0.001428 for the FPA comparison and to p = 0.000084034 for the ABC comparison. It is important to point out that the absolute values of the Bonferroni correction factors are not used for comparative purposes. In the Bonferroni test, an entry marked with 1 means that this element confirms the result of the Wilcoxon analysis for that particular case.
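The Bonferroni decision rule can be sketched as follows. The textbook correction compares each p-value against α/m, where m is the number of pairwise tests; note that the paper reports its own corrected significance values, so the threshold below (0.05/5 = 0.01) is only the standard version applied to the Table 6 p-values.

```python
# Sketch of the Bonferroni-corrected decision rule: with m pairwise
# Wilcoxon tests, each p-value is compared against alpha/m instead of
# alpha, which bounds the family-wise type 1 error. The p-values below
# are those of Table 6 (FPA vs. PSO, ABC, DE, HS, GSA); the alpha/m
# threshold is the textbook form, not the paper's reported constant.

ALPHA = 0.05

def bonferroni_decisions(p_values, alpha=ALPHA):
    """Return per-test rejection flags and the corrected threshold."""
    corrected_alpha = alpha / len(p_values)
    return [p < corrected_alpha for p in p_values], corrected_alpha

p_values = [1.4118e-12, 1.8196e-12, 9.2235e-13, 6.5455e-13, 7.1329e-13]
decisions, corrected = bonferroni_decisions(p_values)
# all five null hypotheses are rejected even at the stricter threshold
```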

4.7. Comparison Study in Images

Figure 4 and Figure 5 present the visual performance of the designed filters when they are applied to eliminate Gaussian noise for two different images.
Figure 4 shows the noise-removal results of the filters whose parameters are exhibited in Table 2. They have been designed through the use of PSO, ABC, DE, HS, GSA and FPA, considering the approximation to an ideal low-pass behavior. The visual results reflect the capacity of each filter to remove noise. Figure 4a presents the original image “Lenna”. Figure 4b shows the image corrupted with Gaussian noise of mean = 0 and variance of 0.005. The set of images in Figure 4c–h exhibits the performance of each method PSO, ABC, DE, HS, GSA and FPA in the noise-removal process. According to Figure 4, it is clear that the filters produced by the FPA and ABC algorithms present the best performance. On the other hand, the filters generated by the algorithms PSO, DE, HS and GSA obtain the worst filtering effect. They are not able to remove the noise appropriately, and they also produce distortions in the image due to the frequency ripples.
Figure 5 shows the noise-removal results of the filters with a wider bandwidth. They correspond to the filters produced by the methods PSO, ABC, DE, HS, GSA and FPA, considering the ideal response defined by M_d1. Figure 5a presents the original image “Cameraman”. Figure 5b shows the image corrupted with Gaussian noise of mean = 0 and variance of 0.005. The set of images in Figure 5c–h exhibits the performance of each method PSO, ABC, DE, HS, GSA and FPA in the noise-removal process. In this case as well, the filters produced by the FPA and ABC algorithms present the best noise-removal results, whereas the filters obtained by the algorithms PSO, DE, HS and GSA produce a poor filtering effect.
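The denoising experiment can be reproduced in outline: corrupt an image with Gaussian noise (mean 0, variance 0.005, as in the paper) and attenuate it with a circular low-pass filter in the frequency domain. An ideal circular frequency mask is used here as a stand-in for the designed 2D-IIR transfer functions of Table 2; the synthetic test image, cutoff and image size are assumptions for the demo.

```python
# Sketch of the image-denoising experiment: Gaussian noise is added and
# removed with an ideal circular low-pass mask in the frequency domain.
# The mask is a simplified stand-in for the designed 2D-IIR filters;
# the test image, cutoff and size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(image, mean=0.0, variance=0.005):
    """Corrupt an image in [0, 1] with additive Gaussian noise."""
    noisy = image + rng.normal(mean, np.sqrt(variance), image.shape)
    return np.clip(noisy, 0.0, 1.0)

def circular_lowpass(image, cutoff=0.25):
    """Zero out frequency components beyond `cutoff` (fraction of Nyquist)."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    y, x = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    mask = np.sqrt((y / (h / 2)) ** 2 + (x / (w / 2)) ** 2) <= cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

# smooth, periodic low-frequency test image with values in [0.25, 0.75]
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
image = 0.5 + 0.25 * np.outer(np.sin(t), np.cos(t))

noisy = add_gaussian_noise(image)
clean = circular_lowpass(noisy)

# low-pass filtering should reduce the mean squared error vs. the original
mse_noisy = np.mean((noisy - image) ** 2)
mse_clean = np.mean((clean - image) ** 2)
```

Because Gaussian noise spreads its energy over the whole spectrum while the image content is concentrated at low frequencies, the mask removes most of the noise energy at little cost to the signal.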

5. Discussion

The performance of EC methods is based on the balance achieved between two search procedures, namely exploration and exploitation. In the exploration procedure, the search mechanism of EC algorithms produces large movements within the search space, increasing the possibility of finding promising zones. As a counterpart, the exploitation procedure refines the solutions within the zones discovered during the exploration stage, producing better and more accurate solutions. Additionally, to know the advantages and limitations of EC methods, their correct evaluation is an important task in the computational intelligence community. In general, EC algorithms are complex pieces of software with several operators and stochastic branches. Under such circumstances, appropriate statistical methods must be considered to obtain significant conclusions.
In this paper, a comparative study among several EC methods is conducted for solving the 2D-IIR filter design optimization problem. The experimental study compares commonly used EC methods for solving complex optimization problems. From the results, it is quite evident that the GSA algorithm is capable of producing better results than PSO, DE and HS. However, the FPA overcomes the GSA method due to the valuable combination of Lévy flights in its global pollination structure and the local pollination procedure. For most of the experiments, the results demonstrate that FPA can be used as an alternative method for designing 2D-IIR filters.
The performance results obtained by FPA outperform those of PSO, DE, HS and GSA in terms of the average and standard deviation of the J values after 250 individual executions. On the other hand, ABC overcomes FPA in terms of standard deviation, which means that ABC can converge to almost the same solution with low variation; this indicates that FPA and ABC are good alternatives for this specific application. In the case of PSO, DE, HS and GSA, the fitness values obtained were oscillatory, showing that these techniques were not able to perform the 2D-IIR filter design well.
To validate the performance results based on the minimization process, a non-parametric test known as Wilcoxon's rank-sum was conducted. In this study, a significance level of 0.05 was chosen to test whether there is a statistical difference among the fitness values achieved by FPA and its competitors. Additionally, the Bonferroni correction was computed to avoid type 1 errors (false-positive results). According to the statistical results, the significant performance of FPA is achieved by the balance between its global and local pollination operators.

6. Conclusions

Recently, the design of two-dimensional Infinite Impulse Response (2D-IIR) filters has attracted attention in several areas of engineering due to their wide range of applications. Most of the related literature concerning the optimal design of 2D-IIR structures is based on traditional optimization methods that are prone to fail in the presence of multiple optima. Under such circumstances, the optimal design of 2D-IIR filters is considered a complex optimization problem involving multimodal surfaces.
On the other hand, Evolutionary Computation (EC) algorithms are considered stochastic global search methods that can operate over complex multimodal error surfaces. This paper reports a comparative study of several EC methodologies applied to the complex task of 2D-IIR filter design. In the comparison, special attention is paid to the most popular EC algorithms currently in use, such as Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Differential Evolution (DE), Harmony Search (HS), Gravitational Search Algorithm (GSA) and Flower Pollination Algorithm (FPA).
The experimental results show that the FPA method outperforms the rest of the EC techniques in terms of accuracy, consistency and robustness. Additionally, the remarkable performance of the FPA is statistically compared against the performance of its competitors using a non-parametric test and a Bonferroni correction for type 1 errors to avoid false-positive results. From the statistical tests, it can be corroborated that the FPA can find the optimal structure for the 2D-IIR filter design optimization problem.

Author Contributions

Formal analysis, J.G.; Investigation, O.A.; Methodology, E.H.H.; Software, K.H.; Writing—original draft, E.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this paper.

References

  1. Srivastava, V.K.; Ray, G.C. Design of 2D-multiple notch filter and its application in reducing blocking artifact from DCT coded image. In Proceedings of the 22nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Chicago, IL, USA, 23–28 July 2000; pp. 2829–2833. [Google Scholar]
  2. Rashedi, E.; Zarezadeh, A. Noise filtering in ultrasound images using gravitational search algorithm. In Proceedings of the 2014 Iranian Conference on Intelligent Systems (ICIS), Bam, Iran, 4–6 February 2014; pp. 1–4. [Google Scholar]
  3. Sgro, J.A.; Emerson, R.G.; Pedley, T.A. Real-time reconstruction of evoked potentials using a new two-dimensional filter method. Electroencephalogr. Clin. Neurophysiol. Potentials Sect. 1985, 62, 372–380. [Google Scholar] [CrossRef]
  4. Razee, A.M.; Dziyauddin, R.A.; Azmi, M.H.; Sadon, S.K. Comparative Performance Analysis of IIR and FIR Filters for 5G Networks. In Proceedings of the 2018 2nd International Conference on Telematics and Future Generation Networks (TAFGEN), Kuching, Malaysia, 24–26 July 2018; pp. 143–148. [Google Scholar]
  5. Pal, R. Comparison of the design of FIR and IIR filters for a given specification and removal of phase distortion from IIR filters. In Proceedings of the 2017 International Conference on Advances in Computing, Communication and Control (ICAC3), Mumbai, India, 1–2 December 2017; pp. 1–3. [Google Scholar]
  6. Dumitrescu, B. Optimization of two-dimensional IIR filters with nonseparable and separable denominator. IEEE Trans. Signal. Process. 2005, 53, 1768–1777. [Google Scholar] [CrossRef]
  7. Das, S.; Konar, A. Two-dimensional IIR filter design with modern search heuristics: A comparative study. Int. J. Comput. Intell. Appl. 2006, 06, 329–355. [Google Scholar] [CrossRef]
  8. Whalley, R. Two-dimensional digital filters. Appl. Math. Model. 1990, 14, 304–311. [Google Scholar] [CrossRef]
  9. Lu, W.-S.; Andreas, A. Two-Dimensional Digital Filters; CRC Press: New York, NY, USA, 1992. [Google Scholar]
  10. Dhabal, S.; Venkateswaran, P. Two-Dimensional IIR filter design using simulated annealing based particle swarm optimization. J. Optim. 2014, 2014, 239721. [Google Scholar] [CrossRef] [Green Version]
  11. Ampazis, N.; Perantonis, S.J. An efficient constrained learning algorithm for stable 2D IIR filter factorization. Adv. Artif. Neural Syst. 2013, 2013, 1–7. [Google Scholar] [CrossRef] [Green Version]
  12. Jin, Y.; Branke, J. Evolutionary optimization in uncertain environments—A survey. IEEE Trans. Evol. Comput. 2005, 9, 303–317. [Google Scholar] [CrossRef] [Green Version]
  13. Kukrer, O. Analysis of the dynamics of a memoryless nonlinear gradient IIR adaptive notch filter. Signal. Process. 2011, 91, 2379–2394. [Google Scholar] [CrossRef]
  14. Nanda, S.J.; Panda, G. A survey on nature inspired metaheuristic algorithms for partitional clustering. Swarm Evol. Comput. 2014, 16, 1–18. [Google Scholar] [CrossRef]
  15. Das, S.; Konar, A. A swarm intelligence approach to the synthesis of two-dimensional IIR filters. Eng. Appl. Artif. Intell. 2007, 20, 1086–1096. [Google Scholar] [CrossRef]
  16. Mastorakis, N.E.; Gonos, I.F.; Swamy, M.N.S. Design of two-dimensional recursive filters using genetic algorithms. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 2003, 50, 634–639. [Google Scholar] [CrossRef]
  17. Karaboga, N. A new design method based on artificial bee colony algorithm for digital IIR filters. J. Franklin Inst. 2009, 346, 328–348. [Google Scholar] [CrossRef]
  18. Tan, K.C.; Chiam, S.C.; Mamun, A.A.; Goh, C.K. Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization. Eur. J. Oper. Res. 2009, 197, 701–713. [Google Scholar] [CrossRef]
  19. Alba, E.; Dorronsoro, B. The exploration/exploitation tradeoff in dynamic cellular genetic algorithms. IEEE Trans. Evol. Comput. 2005, 9, 126–142. [Google Scholar] [CrossRef] [Green Version]
  20. Paenke, I.; Jin, Y.; Branke, J. Balancing population and individual-level adaptation in changing environments. Adapt. Behav. 2009, 17, 153–174. [Google Scholar] [CrossRef]
  21. Cuevas, E.; Echavarría, A.; Ramírez-Ortegón, M.A. An optimization algorithm inspired by the States of matter that improves the balance between exploration and exploitation. Appl. Intell. 2014, 40, 256–272. [Google Scholar] [CrossRef] [Green Version]
  22. Reddy, S.S.; Rathnam, C.S. Optimal power flow using glowworm swarm optimization. Int. J. Electr. Power Energy Syst. 2016, 80, 128–139. [Google Scholar] [CrossRef]
  23. Reddy, S.S. Optimal power flow using hybrid differential evolution and harmony search algorithm. Int. J. Mach. Learn. Cybern. 2019, 10, 1077–1091. [Google Scholar]
  24. Reddy, S.S.; Bijwe, P.R.; Abhyankar, A.R. Faster evolutionary algorithm based optimal power flow using incremental variables. Int. J. Electr. Power Energy Syst. 2014, 54, 198–210. [Google Scholar] [CrossRef]
  25. Caraffini, F.; Kononova, A.V.; Corne, D. Infeasibility and structural bias in differential evolution. Inf. Sci. 2019, 496, 161–179. [Google Scholar] [CrossRef] [Green Version]
  26. Garcia, S.; Molina, D.; Lozano, M.; Herrera, F. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC’2005 special session on real parameter optimization. J. Heuristics 2009, 15, 617–644. [Google Scholar] [CrossRef]
  27. Elbeltagi, E.; Hegazy, T.; Grierson, D. Comparison among five evolutionary-based optimization algorithms. Adv. Eng. Inform. 2005, 19, 43–53. [Google Scholar] [CrossRef]
  28. Shilane, D.; Martikainen, J.; Dudoit, S.; Ovaska, S.J. A general framework for statistical performance comparison of evolutionary computation algorithms. Inf. Sci. 2008, 178, 2870–2879. [Google Scholar] [CrossRef] [Green Version]
  29. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
  30. Karaboga, D. An idea based on honey bee swarm for numerical optimization. Tech. Rep. TR06 Erciyes Univ. 2005, TR06, 10. [Google Scholar]
  31. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  32. Geem, Z.W. A new heuristic optimization algorithm: Harmony search. Simulation 2001. [Google Scholar] [CrossRef]
  33. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. (Ny) 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  34. Yang, X.S. Flower pollination algorithm for global optimization. Lect. Notes Comput. Sci. (Incl. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinform.) 2012, 7445, 240–249. [Google Scholar]
  35. Lin, Y.-L.; Chang, W.-D.; Hsieh, J.-G. A particle swarm optimization approach to nonlinear rational filter modeling. Expert Syst. Appl. 2008, 34, 1194–1199. [Google Scholar] [CrossRef]
  36. Rashedi, E.; Rashedi, E.; Nezamabadi-pour, H. A comprehensive survey on gravitational search algorithm. Swarm Evol. Comput. 2018, 41, 141–158. [Google Scholar] [CrossRef]
  37. Barthelemy, P.; Bertolotti, J.; Wiersma, D.S. A Lévy flight for light. Nature 2008, 453, 495–498. [Google Scholar] [CrossRef] [PubMed]
  38. Mladenov, V.M.; Mastorakis, N.E. Design of two-dimensional recursive filters by using neural networks. IEEE Trans. Neural Netw. 2001, 12, 585–590. [Google Scholar] [CrossRef] [PubMed]
  39. Tsai, J.-T.; Ho, W.-H.; Chou, J.-H. Design of two-dimensional IIR digital structure-specified filters by using an improved genetic algorithm. Expert Syst. Appl. 2009, 36, 6928–6934. [Google Scholar] [CrossRef]
  40. Sarangi, S.K.; Panda, R.; Dash, M. Design of 1-D and 2-D recursive filters using crossover bacterial foraging and cuckoo search techniques. Eng. Appl. Artif. Intell. 2014, 34, 109–121. [Google Scholar] [CrossRef]
  41. Cuevas, E.; Cienfuegos, M. A new algorithm inspired in the behavior of the social-spider for constrained optimization. Expert Syst. Appl. 2014, 41, 412–425. [Google Scholar] [CrossRef]
  42. Sun, J.; Fang, W.; Xu, W. A quantum-behaved particle swarm optimization with diversity-guided mutation for the design of two-dimensional IIR digital filters. IEEE Trans. Circuits Syst. II Express Briefs 2010, 57, 141–145. [Google Scholar] [CrossRef]
  43. Kockanat, S.; Karaboga, N.; Koza, T. Image denoising with 2-D FIR filter by using artificial bee colony algorithm. In Proceedings of the 2012 International Symposium on INnovations in Intelligent SysTems and Applications, Trabzon, Turkey, 2–4 July 2012. [Google Scholar]
  44. Geem, Z.W. Global optimization using harmony search: Theoretical foundations and applications. Stud. Comput. Intell. 2009, 203, 57–73. [Google Scholar]
  45. Das, S.; Dey, D. Design of two-dimensional IIR filters using an improved DE algorithm. In Proceedings of the First international conference on Pattern Recognition and Machine Intelligence, Kolkata, India, 20–22 December 2005; pp. 369–375. [Google Scholar]
  46. Wilcoxon, F. Individual comparisons by ranking methods. Biom. Bull. 1945, 1, 80–83. [Google Scholar] [CrossRef]
  47. Cuevas, E.; Gálvez, J.; Hinojosa, S.; Avalos, O.; Zaldívar, D.; Pérez-Cisneros, M. A comparison of evolutionary computation techniques for IIR model identification. J. Appl. Math. 2014, 2014. [Google Scholar] [CrossRef]
  48. Hochberg, Y. A sharper Bonferroni procedure for multiple tests of significance. Biometrika 1988, 75, 800–802. [Google Scholar] [CrossRef]
  49. Armstrong, R.A. When to use the Bonferroni correction. Ophthalmic Physiol. Opt. 2014, 34, 502–508. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Scheme of two-dimensional Infinite Impulse Response (2D-IIR) filter design based on the desired amplitude response.
Figure 2. (a) Desired amplitude response, (b) Amplitude response calculated by PSO, (c) Amplitude response calculated by ABC, (d) Amplitude response calculated by DE, (e) Amplitude response calculated by HS, (f) Amplitude response calculated by GSA, (g) Amplitude response calculated by FPA.
Figure 3. Evolution of the objective function values for FPA, GSA, DE, HS, ABC, and PSO. (a) Graphs over 3000 iterations and (b) zoom over 400 iterations.
Figure 4. Visual comparison of the different designed filters. (a) Original image, (b) corrupted image, (c) filtering with PSO, (d) filtering with ABC, (e) filtering with DE, (f) filtering with HS, (g) filtering with GSA and (h) filtering with FPA.
Figure 5. Visual comparison of the different designed filters with a wider bandwidth. (a) Original image, (b) corrupted image, (c) filtering with PSO, (d) filtering with ABC, (e) filtering with DE, (f) filtering with HS, (g) filtering with GSA and (h) filtering with FPA.
Table 1. Minimum ( J min ) and maximum ( J max ) values of the objective function, mean ( J ¯ ) and standard deviation ( σ ) for the results obtained by each algorithm over 250 executions.
        PSO      ABC     DE       HS      GSA      FPA
J_min   2.6971   3.0655  2.6550   3.4276  3.2668   2.3030
J_max   16.1331  4.0599  15.1968  8.5494  16.9647  3.3276
J̄       6.5976   3.5173  5.2582   4.5836  6.3707   2.5838
σ       3.4537   0.1736  2.0377   0.5925  4.1108   0.2759
Table 2. Coefficients for 2D-IIR filter design obtained by Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Differential Evolution (DE), Harmony Search (HS), Gravitational Search Algorithm (GSA) and Flower Pollination Algorithm (FPA) algorithms.
2D-IIR Filter Coefficients  PSO      ABC      DE       HS       GSA      FPA
a01                         0.1182   0.3217   −0.4785  −0.7521  0.2978   −0.0388
a02                         0.0149   0.3109   −0.9983  −0.2398  −0.0654  −0.6939
a10                         0.6393   −0.5132  −1.0000  −0.3936  −0.6860  −0.5417
a11                         0.2938   −0.9392  −0.8839  0.1290   0.6559   0.0738
a12                         0.3514   −0.1289  0.9422   −0.3705  −0.8044  −0.4146
a20                         0.3459   0.7601   0.7540   −0.9324  −0.1292  −0.2994
a21                         0.9404   −0.0376  −0.8596  0.1805   0.3098   −0.8339
a22                         −0.9137  0.8856   −0.4131  0.2330   0.2307   0.6927
b1                          −0.2290  0.3588   −0.9353  −0.3858  −0.2743  −0.2251
b2                          −0.9081  −0.9238  0.5207   −0.6663  −0.8617  −0.9241
c1                          −0.8491  −1.0000  −0.9046  −0.3996  −0.8175  −0.1543
c2                          −0.7313  −0.6050  −0.8622  −0.5995  −0.5501  −0.9438
d1                          0.1360   −0.4827  0.8458   −0.2974  0.1367   −0.8157
d2                          0.6545   0.5521   −0.9204  0.2159   0.4369   0.8906
H0                          0.0004   0.0019   0.0009   0.0046   0.0013   0.0038
Table 3. Averaged measurements corresponding to the number of function evaluations (NFE) and the computing time (CT) spent by each method obtained from 20 independent executions.
Index  PSO        ABC        DE         HS         GSA        FPA
NFE    48,543.12  58,321.09  57,322.24  49,322.74  56,654.57  51,342.18
CT     8.25 s     12.54 s    13.02 s    9.10 s     11.82 s    10.12 s
Table 4. Minimum ( J min p ) and maximum ( J max p ) values of the objective function, mean ( J ¯ p ) and standard deviation ( σ p ) for the results obtained by every case p = 1,2.
          PSO     ABC    DE     HS     GSA    FPA
J_min^1   15.94   13.01  20.93  23.12  22.67  12.68
J_min^2   36.54   35.72  18.90  24.33  58.46  17.52
J_max^1   73.68   16.85  35.84  47.58  87.91  15.24
J_max^2   155.63  51.91  31.39  41.48  45.15  26.35
J̄^1       25.40   14.94  25.21  31.19  75.87  13.95
J̄^2       68.62   44.45  24.65  28.94  49.94  22.75
σ^1       14.17   0.99   3.14   6.90   17.60  1.54
σ^2       28.81   4.85   3.44   3.75   3.13   3.86
Table 5. Performance comparison of the filter characteristics. The results correspond to the response of the best filter produced by each algorithm (PSO, ABC, DE, HS, GSA and FPA; coefficients reported in Table 2).
Index      PSO      ABC      DE       HS       GSA      FPA
E_RP(ω)    0.0881   0.0097   0.0631   0.0287   0.0714   0.0051
E_MR(ω)    0.1421   0.0918   0.1977   0.1021   0.1391   0.0272
E_MR(ω)    25.87    34.21    26.42    35.21    29.28    42.18
MPs        0.8721   0.9180   0.9182   0.8916   0.9019   0.8460
Table 6. p-values of the Wilcoxon test comparing FPA vs. PSO, ABC, DE, HS and GSA.
FPA vs.   PSO               ABC               DE                HS                GSA
p-value   1.4118 × 10^−12   1.8196 × 10^−12   9.2235 × 10^−13   6.5455 × 10^−13   7.1329 × 10^−13
Table 7. p-values of the Wilcoxon test comparing ABC vs. PSO, DE, HS, GSA and FPA.
ABC vs.   PSO               DE               HS                GSA              FPA
p-value   4.0756 × 10^−11   3.4569 × 10^−9   1.5366 × 10^−12   2.6216 × 10^−6   1.8196 × 10^−12
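The p-values in Tables 6 and 7 come from pairwise Wilcoxon rank-sum tests over the per-run objective values. A self-contained sketch of this test, using the normal approximation without tie correction, is shown below on synthetic samples drawn to mimic the FPA and PSO statistics of Table 4 (illustrative only, NOT the paper's actual runs):

```python
import math
import numpy as np

def ranksum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value (normal approximation, no tie correction)."""
    data = np.concatenate([x, y])
    order = data.argsort()
    ranks = np.empty(len(data))
    ranks[order] = np.arange(1, len(data) + 1)   # ranks[i] = rank of data[i]
    n1, n2 = len(x), len(y)
    t = ranks[:n1].sum()                          # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2                   # mean of T under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (t - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))       # 2 * (1 - Phi(|z|))

# Synthetic 20-run samples shaped like Table 4's FPA and PSO statistics:
rng = np.random.default_rng(0)
fpa_runs = rng.normal(13.95, 1.54, 20)
pso_runs = rng.normal(25.40, 14.17, 20)
p_value = ranksum_p(fpa_runs, pso_runs)
```

In practice `scipy.stats.ranksums` performs the same computation; the manual version is shown only to make the test's mechanics explicit.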
Table 8. Bonferroni correction for the case of FPA comparison.
FPA vs.                    PSO   ABC   DE   HS   GSA
p ≤ Bonferroni threshold   1     1     1    1    1
Table 9. Bonferroni correction for the case of ABC comparison.
ABC vs.                    PSO   DE   HS   GSA   FPA
p ≤ Bonferroni threshold   1     1    1    1     1
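The "1" entries in Tables 8 and 9 indicate that each pairwise p-value falls below the Bonferroni-corrected significance level α/k. With α = 0.05 and k = 5 comparisons per table, the check is direct; the sketch below uses the Table 6 p-values, reading their exponents as negative (p-values cannot exceed 1):

```python
alpha, k = 0.05, 5
threshold = alpha / k   # Bonferroni-corrected significance level (0.01)

# p-values from Table 6 (FPA vs. PSO, ABC, DE, HS, GSA):
p_values = [1.4118e-12, 1.8196e-12, 9.2235e-13, 6.5455e-13, 7.1329e-13]
significant = [int(p <= threshold) for p in p_values]   # the "1" entries of Table 8
```

Since every p-value is many orders of magnitude below 0.01, all five comparisons remain significant after correction.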

Avalos, O.; Cuevas, E.; Gálvez, J.; Houssein, E.H.; Hussain, K. Comparison of Circular Symmetric Low-Pass Digital IIR Filter Design Using Evolutionary Computation Techniques. Mathematics 2020, 8, 1226. https://doi.org/10.3390/math8081226