Development and Applications of Augmented Whale Optimization Algorithm

Abstract: Metaheuristics are proven solutions for complex optimization problems. Recently, bioinspired metaheuristics have shown their capabilities for solving complex engineering problems. The Whale Optimization Algorithm (WOA) is a popular metaheuristic based on the hunting behavior of whales. For some problems, this algorithm suffers from entrapment in local minima. To make WOA compatible with a wider range of challenging problems, two major modifications are proposed in this paper: the first is opposition-based learning in the initialization phase, while the second is the inculcation of a Cauchy mutation operator in the position-updating phase. The proposed variant, named the Augmented Whale Optimization Algorithm (AWOA), is tested over two benchmark suites, i.e., classical benchmark functions and the latest CEC-2017 benchmark functions, for 10-dimensional and 30-dimensional problems. Various analyses, including convergence property analysis, boxplot analysis and the Wilcoxon rank sum test, show that the proposed variant possesses better exploration and exploitation capabilities. In addition, the application of AWOA is reported for three real-world problems from various disciplines. The results reveal that the proposed variant exhibits better optimization performance.


Introduction and Literature Review
Optimization is the process of fetching the best solution from a given set of alternatives. Optimization processes are evident everywhere around us. For example, to run a generating company, the operator has to take care of the operating cost and deal with various types of markets to execute financial transactions. The operator has to optimize the fuel purchase cost, sell power at the maximum rate and purchase carbon credits at minimum cost to earn a profit. Sometimes, optimization processes involve various stochastic variables to model the uncertainty in the process. Such processes are quite difficult to handle and often pose a severe challenge to the optimizer or solution-providing algorithm. The evolution of modern optimizers is the outcome of these complex combinatorial, multimodal, nonlinear optimization problems. Unlike classical optimizers, where the search starts with an initial guess, these modern optimizers are based on stochastic variables, and hence, they are less vulnerable to local minima entrapment. These problems became the main source of the emergence of metaheuristic algorithms, which are capable of finding a near-optimal solution in less computation time. The popularity of metaheuristic algorithms [1] has increased exponentially in the last two decades due to their simplicity, derivation-free mechanism, flexibility and capacity to provide better results in comparison with conventional methods. The main inspiration of these algorithms is nature, and hence, they are also called nature-inspired algorithms [2].
Social mimicry of nature and living processes, behavior analysis of animals and cognitive viability are some of the attributes of nature-inspired algorithms. Darwin's theory of evolution has inspired several nature-inspired algorithms, based on the properties of "inheritance of good traits" and "competition, i.e., survival of the fittest". Examples are the Genetic Algorithm [3], Differential Evolution and Evolutionary Strategies [4].
The other popular philosophy is to mimic the behavior of animals searching for food. In these approaches, food or prey is used as a metaphor for the global minimum in mathematical terms. Exploration, exploitation and convergence towards the global minimum are mapped onto animal behavior. Most nature-inspired algorithms, also known as population-based algorithms, can further be classified as:
• Bio-inspired Swarm Intelligence (SI)-based algorithms: this category includes all algorithms inspired by the behavior of swarms or herds of animals or birds. Since most birds and animals live in flocks or groups, many algorithms fall under this category, such as Ant Colony Optimization (ACO) [5], Artificial Bee Colony [6], Bat Algorithm [7], Cuckoo Search Algorithm [8], Krill Herd Algorithm [9], Firefly Algorithm [10], Grey Wolf Optimizer [11], Bacterial Foraging Algorithm [12], Social Spider Algorithm [13], Cat Swarm Optimization [14], Moth Flame Optimization [15], Ant Lion Optimizer [16], Crow Search Algorithm [17] and Grasshopper Optimization Algorithm [18]. A social interaction-based algorithm named gaining and sharing knowledge was proposed in reference [19]. References pertaining to the applications of bio-inspired algorithms affirm the suitability of these algorithms for real-world problems [20][21][22][23]. A timeline of some famous bio-inspired algorithms is presented in Figure 1.
• Physics- or chemistry-based algorithms: algorithms developed by mimicking a physical or chemical law fall under this category. Some of them are Big Bang-Big Crunch Optimization [24], Black Hole [25], Gravitational Search Algorithm [26], Central Force [27] and Charged System Search [28].
Other than these population-based algorithms, a few different algorithms have also been proposed to solve specific mathematical problems. In [29,30], the authors proposed the concept of construction, solution and merging. Another Greedy randomised adaptive search-based algorithm using the improved version of integer linear programming was proposed in [31].
The No Free Lunch Theorem proposed by Wolpert et al. [32] states that there is no single metaheuristic algorithm that can solve all optimization problems. It is possible that one algorithm may be very effective for certain problems but ineffective for others. Due to the popularity of nature-inspired algorithms in providing reasonable solutions to complex real-life problems, many new nature-inspired optimization techniques have been proposed in the literature. It is interesting to note that all bio-inspired algorithms are a subset of nature-inspired algorithms. Among all of these, the popularity of bio-inspired algorithms has increased exponentially in recent years. Despite this popularity, these algorithms have also been critically reviewed [33]. In 2016, Mirjalili et al. [34] proposed a new nature-inspired algorithm called the Whale Optimization Algorithm (WOA), inspired by the bubble-net hunting behavior of humpback whales. The humpback whale belongs to the rorqual family of whales, known for their huge size; an adult can be 12-16 m long and weigh 25-30 metric tons. They have a distinctive body shape and are known for breaching the water with astonishing gymnastic skill and for the haunting songs sung by males during the migration period. Humpback whales eat schools of small fish and krill. To hunt their prey, they follow a unique strategy of encircling the prey spirally while gradually shrinking the size of the circles of this spiral. By modeling this strategy, WOA achieves performance superior to many other nature-inspired algorithms. Recently, in [35], WOA was used to solve a truss structure optimization problem. WOA has also been used to solve the well-known economic dispatch problem in [36].
The problem of unit commitment in electric power generation was solved through WOA in [37]. In [38], the authors applied WOA to the long-term optimal operation of a single reservoir and cascade reservoirs. The following are the main reasons to select WOA:
• There are few parameters to control, so it is easy to implement and very flexible.
• The algorithm has a specific mechanism to transition between the exploration and exploitation phases, as both of these involve only one parameter.
Sometimes, WOA also suffers from a slow convergence speed and local minima entrapment due to the random initialization of the population. To overcome these shortcomings, in this paper, we propose two major modifications to the existing WOA:
• The first modification is the inculcation of the opposition-based learning (OBL) concept in the initialization phase of the search process, in other words, the exploratory stage. OBL is a proven tool for enhancing the exploration capabilities of metaheuristic algorithms.
• The second modification is to the position-updating phase, where the position vector is updated with the help of Cauchy random numbers.
The remainder of this paper is organized as follows: Section 2 describes the mathematical details of WOA. Section 3 presents the proposed variant; an analogy based on the modified position update is also established within the proposed mathematical framework. Section 4 gives the details of the benchmark functions. Sections 5 and 6 show the results on the benchmark functions and on some real-life problems, along with different statistical analyses. Last but not least, the paper concludes in Section 7 with a decisive evaluation of the results, and some future directions are indicated.

Mathematical Framework of WOA
The mathematical model of WOA can be presented in three steps: prey encircling, exploitation phase through bubble-net and exploration phase, i.e., prey search.

1. Prey encircling: Humpback whales choose their target prey through their capacity to find the location of prey. The best search agent is followed by the other search agents, which update their positions as follows:

Q = |R · Y*(s) − Y(s)|
Y(s + 1) = Y*(s) − P · Q

where Y* denotes the position vector of the best solution obtained so far, Y is the position vector, s is the current iteration, | | denotes the absolute value and · denotes element-to-element multiplication. The coefficient vectors P and R can be calculated as follows:

P = 2p · r − p
R = 2 · r

where p linearly decreases with every iteration from 2 to 0 and r ∈ [0, 1]. By adjusting the values of the vectors P and R, the current position of a search agent is shifted towards the best position. This position-updating process in the neighborhood of the best solution also helps in encircling the prey in n dimensions.

2. Exploitation phase through bubble-net: The vector P takes values in the interval [−p, p]; as p decreases, this interval shrinks, which represents the shrinking-encircling behavior of the search agents. By choosing random values of P in the interval [−1, 1], the humpback whale updates its position between its current location and the best agent. In this process, the whale also swims towards the prey spirally, and the circles of the spiral slowly shrink in size. This helix-shaped movement can be mathematically modeled as:

Y(s + 1) = Q' · e^(al) · cos(2πl) + Y*(s), with Q' = |Y*(s) − Y(s)|

where a is the constant factor responsible for the shape of the spiral and l is a random number in the interval [−1, 1].
In the position-updating phase, whales can choose either model, i.e., the shrinking mechanism or the spiral mechanism. The probability of each behavior is assumed to be 50% during the optimization process. The combined equation of both behaviors can be represented as:

Y(s + 1) = Y*(s) − P · Q, if q < 0.5
Y(s + 1) = Q' · e^(al) · cos(2πl) + Y*(s), if q ≥ 0.5

where q ∈ [0, 1] is a random number.

3. Exploration phase (prey search): In this phase, P is chosen opposite to the exploitation phase, i.e., the value of P must be > 1 or < −1, so that the humpback whales move away from each other, which increases the exploration rate. This phenomenon can be represented mathematically as:

Q = |R · Y_rand − Y|
Y(s + 1) = Y_rand − P · Q

where Y_rand represents the position of a randomly chosen whale. The optimization process finishes when the termination criterion is met. The main features of WOA are the dual mechanisms of circular shrinking and the spiral path, which strengthen the exploitation process of finding the best position around the prey, while the exploration phase covers a larger area through the random selection of values of P.
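The three phases above can be summarized in a single position-update step. The following Python sketch is illustrative only: the spiral constant `a_spiral`, the whole-vector |P| < 1 test and the list-based representation are simplifying assumptions, not the original implementation.

```python
import math
import random

def woa_update(positions, best, p):
    """One WOA iteration sketch: shrinking encircling, spiral movement
    and exploration via a random whale, per the equations above."""
    a_spiral = 1.0  # assumed spiral-shape constant
    new_positions = []
    for Y in positions:
        P = [2 * p * random.random() - p for _ in Y]   # P = 2p·r − p
        R = [2 * random.random() for _ in Y]           # R = 2r
        if random.random() < 0.5:                      # shrinking-encircling branch
            if all(abs(v) < 1 for v in P):             # exploitation: follow the best whale
                target = best
            else:                                      # exploration: follow a random whale
                target = random.choice(positions)
            # Q = |R · target − Y|;  Y(s+1) = target − P · Q
            Q = [abs(rv * t - y) for rv, t, y in zip(R, target, Y)]
            Y_new = [t - pv * q for pv, q, t in zip(P, Q, target)]
        else:                                          # spiral branch
            l = random.uniform(-1, 1)
            # Y(s+1) = |Y* − Y| · e^(al) · cos(2πl) + Y*
            Y_new = [abs(b - y) * math.exp(a_spiral * l) * math.cos(2 * math.pi * l) + b
                     for b, y in zip(best, Y)]
        new_positions.append(Y_new)
    return new_positions
```

In a full optimizer, this update would be called once per iteration while p is decreased linearly from 2 to 0.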

Motivation and Development of the Augmented Whale Optimization Algorithm
It has been observed in previously reported applications that inserting mutation into population-based schemes can enhance optimization performance. Some noteworthy applications are reported in [39].

Augmented Whale Optimization Algorithm (AWOA)
Taking motivation from the modified position update, we present the development of AWOA and the mathematical steps we have incorporated. To simulate whale behavior through the modified position update, together with the song signal whales produce and its connection to the position update mechanism, we require two mechanisms:

1. A mechanism that places the whales in diverse directions.
2. A mechanism that updates the positions of the whales by using a mathematical signal.

The Opposition-Based Position Update Method
For simulating the first mechanism, we choose the opposition number generation theory proposed by H. R. Tizhoosh. Opposition-based learning is a concept that places the search agents in diverse (rather, opposite) directions so that the search for optima can also be initiated from opposite directions. This theory has been applied in many metaheuristic algorithms, and it is now a proven fact that the search capabilities of an optimizer can be substantially enhanced by the application of this opposition number generation technique. Some recent papers provide evidence of this [40,41], and with these approaches, the impact of opposition-based learning can easily be seen. Furthermore, a rich review of opposition-related techniques, application areas and performance-wise comparisons can be found in [42,43].
The following points can be taken as some positive arguments in favor of the application of the oppositional number generation theory (ONGT) concept:

1. While solving multimodal optimization problems, it is desirable that an optimizer starts the process from a point near the global optimum; in some cases, a loose position update mechanism becomes a potential cause of local minima entrapment. The ONGT lends a helping hand in such situations, as it places search agents in diverse directions, and hence, the probability of local minima entrapment is substantially decreased.

2. In real applications, where the shape and nature of the objective function are unknown, the ONGT can be a beneficial tool: if the function is unimodal in nature, as per the research, the exploration capabilities of an optimizer can be substantially enhanced by applying ONGT; if the function is multimodal, ONGT helps search agents acquire opposite positions and helps the optimizer's mechanism converge to the global optimum.
For the reader's benefit, we incorporate some definitions of opposite points in a one-dimensional and a multidimensional search space.
Definition 1. Let x ∈ [a, b] be a real number; its opposite point is defined as x̆ = a + b − x. The same holds for Q-dimensional space.
Definition 2. Let A = (x_1, x_2, . . . , x_Q) be a point in Q-dimensional space, where x_1, x_2, . . . , x_Q ∈ R and x_i ∈ [a_i, b_i], i = 1, 2, . . . , Q; the opposite point is given by

x̆_i = a_i + b_i − x_i, i = 1, 2, . . . , Q

where a_i and b_i are the lower limit and upper limit, respectively. Furthermore, Figure 2 illustrates the search process of ONGT, where A1 and B1 are the search boundaries, which shrink as the iterative process progresses.
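A minimal sketch of opposition-based initialization under Definition 2. The function name `opposition_init` and the choice to return both populations (so the fittest NP individuals can then be selected) are illustrative assumptions, not the exact procedure of the paper.

```python
import random

def opposition_init(pop_size, lower, upper):
    """Generate a random population within [lower, upper] and its
    opposite population via x̆_i = a_i + b_i − x_i (Definition 2)."""
    pop = [[random.uniform(a, b) for a, b in zip(lower, upper)]
           for _ in range(pop_size)]
    opp = [[a + b - x for x, a, b in zip(ind, lower, upper)]
           for ind in pop]
    return pop, opp

# In AWOA, the fittest pop_size individuals from pop ∪ opp would seed the search.
```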

Position Updating Mechanism Based on the Cauchy Mutation Operator
For simulating the second mechanism, we require a signal that is a close replica of a whale song. In the literature, a significant amount of work has been done on the application of the Cauchy mutation operator for the following reasons:

1. The expectation of the Cauchy distribution is not defined and its variance is infinite; due to this, Cauchy operators sometimes generate very long jumps compared to normally distributed random numbers [44,45]. This phenomenon can be observed in Figure 3.

2. It is also shown in [44] that the Cauchy distribution generates offspring far from their parents; hence, the avoidance of local minima can be achieved.
In the proposed AWOA, the position update mechanism is derived from the Cauchy distribution operator. The Cauchy density function is given by:

f(x) = (1/π) · t / (t² + x²)

where t is the scaling parameter, and the corresponding distribution function can be given as:

F(x) = 1/2 + (1/π) · arctan(x/t)

First, a random number y ∈ (0, 1) is generated, after which a random number α is generated by inverting the distribution function:

α = t · tan(π(y − 1/2))

We assume that α is a whale position update signal generated by the search agents, and on the basis of this signal, the position of the whale is updated. Furthermore, we define a position-based weight W(j) for the jth position vector of the ith whale, where W(j) is a weight vector and NP is the population size of the whales, and the position update equation is modified accordingly. Summarizing the points discussed in this section, we propose two mechanisms for improving the performance of WOA. The first is the opposition-based learning concept, which places whales in diverse directions to explore the search space effectively; based on the whale behavior (the modified position update), the position update mechanism is proposed. To simulate the whale song, we employ Cauchy random numbers. Both of these mechanisms can be beneficial for enhancing the exploration and exploitation capabilities of WOA. In the next section, we evaluate the performance of the proposed variant on some conventional and CEC-2017 benchmark functions.
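Sampling α by inverting the Cauchy distribution function can be sketched as follows; the mutation function is a simplified illustration that omits the position-based weight matrix W described above.

```python
import math
import random

def cauchy_sample(t=1.0):
    """Draw α = t·tan(π(y − 1/2)) with y ∈ (0, 1), i.e. invert the
    Cauchy distribution function F(x) = 1/2 + (1/π)·arctan(x/t)."""
    y = random.random()
    return t * math.tan(math.pi * (y - 0.5))

def cauchy_mutate(position, t=1.0):
    """Perturb each coordinate with a Cauchy step (sketch only; the
    paper's weight vector W(j) would scale each step)."""
    return [x + cauchy_sample(t) for x in position]
```

Because the Cauchy distribution is heavy-tailed, occasional very large steps occur, which is exactly the long-jump behavior used to escape local minima.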

Benchmark Test Functions
Benchmark functions are sets of functions with different known characteristics (separability, modality and dimensionality) that are often used to evaluate the performance of optimization algorithms. In the present paper, we measure the performance of the proposed variant AWOA through two benchmark suites. • Benchmark Suite 1: a set of classical benchmark functions, detailed in Table 1. For further details, one can refer to [46][47][48].
The shapes of the used benchmark functions are given in Figure 4. • Benchmark Suite 2: For further benchmarking of the proposed variant, we also choose a set of 29 functions of diverse nature from CEC 2017. Table 2 showcases the main details of these functions. For other details, such as optima and mathematical definitions, one can follow [49].


Result Analysis
In this section, various analyses that check the efficacy of the proposed modifications are exhibited. For judging the optimization performance of the proposed AWOA, we have chosen some recently developed variants of WOA for comparison. These variants are:
• Lévy flight trajectory-based whale optimization algorithm (LWOA) [50].
• An improved version of the whale optimization algorithm that uses opposition-based learning, termed OWOA [41].
• Chaotic Whale Optimization Algorithm (CWOA) [51].
For the rest of the functions, the indicated mean values are competitive, and the best results are indicated in bold face. From this statistical analysis, we can conclude that the proposed modifications in AWOA are meaningful and have positive implications for the optimization performance of AWOA, especially on unimodal functions. Similarly, among the multimodal functions, BF-7, BF-9 to BF-11, BF-15 to BF-19 and BF-22 have optimal values of the mean parameter. We observed that the mean values are competitive for the rest of the functions and the performance of the proposed AWOA has not deteriorated.

Convergence Property Analysis
The convergence plots for functions BF1 to BF4 are shown in Figure 5 for the sake of clarity. From these convergence curves, it is observed that the proposed variant shows better convergence characteristics and the proposed modifications are fruitful in enhancing the convergence and exploration properties of WOA. It can be seen that the convergence of AWOA is very swift compared to the other competitors. It is to be noted here that BF1-BF4 are unimodal functions, and the performance of AWOA on unimodal functions indicates enhanced exploitation properties. Furthermore, to showcase the optimization capabilities of AWOA on multimodal functions, convergence plots for BF9 to BF12 are exhibited in Figure 6. From these results, it can easily be concluded that the results of the proposed AWOA are also competitive.

Wilcoxon Rank Sum Test
A rank sum test analysis has been conducted, and the p-values of the test are indicated in Table 4. We show the values of the Wilcoxon rank sum test at a 5% level of significance [52]. Values indicated in boldface are less than 0.05, which indicates that there is a significant difference between the AWOA results and those of the other opponents.

Boxplot Analysis
To present a fair comparison among the opponents, we have plotted boxplots and convergence curves for some selected functions. Figure 7 shows the boxplots of functions BF1-BF12. From the boxplots, it is observed that the widths of the boxplots of AWOA are optimal in these cases; hence, it can be concluded that the optimization performance of AWOA is competitive with the other variants of WOA. The mean values shown in the boxplots are also optimal for these functions. The performance of AWOA on the remaining functions of this suite is depicted through the boxplots shown in Figure 8. From these, it can be concluded that the performance of the proposed AWOA is competitive, as the mean values depicted in the plots are optimal for most of the functions.

Benchmark Suite 2
In this section, we report the results of the proposed variant on the CEC 2017 functions. The details of the CEC 2017 functions are exhibited in Table 2.

Results of the Analysis of 10D Problems
For the 10D problems, the results are depicted in terms of the mean and standard deviation values obtained from 51 independent runs for each opponent of AWOA. The following are the noteworthy observations from this study:
• From the table, it is observed that the values obtained from the optimization process and their statistical analysis indicate a substantial enhancement in terms of mean and standard deviation values. These values are shown in bold face. Out of 29 functions, the proposed variant provides optimal mean values for 23 functions; except for CECF16, 17, 18, 23, 24, 26 and CECF29, the mean values of the optimization runs are optimal for AWOA. This supports the fact that the proposed modifications are helpful in enhancing the optimization performance of the original WOA. Inspecting the other statistical parameter, namely the standard deviation, also gives a clear insight into the enhanced performance.
• We observe that for the unimodal functions, the values obtained by AWOA are optimal compared to the other versions of WOA; hence, it can be said that AWOA outperforms on unimodal functions. Unimodal functions are useful for testing the exploitation capability of any optimizer.
• Inspecting the performance of the proposed version of WOA on the multimodal functions CECF4-F10 gives a clear insight into the fact that the proposed modifications are meaningful in terms of enhanced exploration capabilities. Naturally, multimodal functions have more than one minimum, and converging the optimization process to the global minimum can be a troublesome task.
• The results of the optimization runs indicated in bold face depict the performance of AWOA.

Statistical Significance Test by the Wilcoxon Rank Sum Test
The results of the rank sum test are depicted in Table 7. It is always important to judge the statistical significance of an optimization run in terms of the calculated p-values. For this reason, the proposed AWOA has been compared with all opponents, and the results are depicted in terms of p-values. Bold face entries show that there is a significant difference between the optimization runs of AWOA and those of the other opponents. This fact demonstrates the superior performance of AWOA.

Boxplot Analysis
The boxplot analysis for the 10D functions is performed over 20 independent runs of objective function values and is depicted in Figures 9 and 10. From these boxplots, it is easy to see that the results obtained from the optimization process achieve an optimal interquartile range and low mean values. To showcase the efficacy of the proposed AWOA, all the optimal entries of mean values are marked with an oval shape in the boxplots.

Results of the Analysis of 30D Problems
The results of the proposed AWOA, along with the other variants of WOA, are depicted in terms of the statistical attributes of 51 independent runs in Table 6. From the results, it is clearly evident that, except for F24, the proposed AWOA provides optimal results compared to the other opponents. The mean and standard deviation values of the objective functions obtained from the independent runs are shown in bold face. The results of the rank sum test are depicted in Table 8. As before, the proposed AWOA was compared with all opponents and the results are depicted in terms of p-values. Bold face entries show that there is a significant difference between the optimization runs of AWOA and those of the other opponents, as the obtained p-values are less than 0.05. We observe that for the majority of the functions, the calculated p-values are less than 0.05. Along with the optimal mean and standard deviation values, the p-values indicate that the proposed AWOA outperforms its opponents. In addition to these analyses, a boxplot analysis of the proposed AWOA and the other opponents was performed, as depicted in Figures 11 and 12. From these figures, it is easy to see that the IQR and mean values are very competitive and optimal in almost all cases for the 30-dimensional problems. Convergence curves for some of the functions, such as the unimodal functions F1 and F3 and some multimodal and hybrid functions, are depicted in Figure 13.

Comparison with Other Algorithms
To validate the efficacy of the proposed variant, a fair comparison under the CEC 2017 criteria is executed. The optimization results of the proposed variant, along with some contemporary and classical optimizers, are reported in Table 9. The competing algorithms are Moth Flame Optimization (MFO) [15], the Sine Cosine Algorithm [53], PSO [54] and the Flower Pollination Algorithm [55]. It can easily be observed that the results of the proposed variant are competitive for almost all the functions.

Model Order Reduction
In control system engineering, most linear time-invariant systems are of a higher order and are thus difficult to analyze. This problem is addressed using model order reduction, which is easy to use and less complex in comparison to earlier control paradigm techniques. Nature-inspired optimization algorithms have proved to be an efficient tool in this field, as they help to minimize the integral square error of the lower-order system. This approach was first introduced in [56], followed by [39,57,58] and many more. These works advocate the efficacy of optimization algorithms in model order reduction, as they reduce the complexity, computation time and cost of the reduction process. To test the applicability of AWOA to real-world problems, we consider the Model Order Reduction (MOR) problem in this section. In MOR, a large complex system with a known transfer function is converted, with the help of an optimization algorithm, into a reduced-order system. The following are the steps of the conversion:

1. Consider a large complex system of a higher order and obtain the step response of the system. Stack the response in the form of a numerical array.
2. Construct a second-order system with the help of some unknown variables, as depicted in the following equation. Furthermore, obtain the step response of this system and stack those numbers in a numerical array.
3. Construct a transfer function that minimizes an error function, preferably the Integral Square Error (ISE) criterion.
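The ISE criterion of step 3 can be illustrated with a toy example. The second-order system G(s) = 2/((s + 1)(s + 10)) and the two first-order candidates below are hypothetical stand-ins, not the test systems used in the paper; closed-form step responses are used so the sketch stays self-contained.

```python
import math

def step_first_order(k, a, ts):
    """Analytic step response of k/(s + a): y(t) = (k/a)(1 − e^{−a t})."""
    return [(k / a) * (1 - math.exp(-a * t)) for t in ts]

def ise(y_full, y_red, dt):
    """Integral Square Error between the full and reduced step responses,
    approximated by a Riemann sum over the sampled arrays (step 1 and 2)."""
    return sum((v - vr) ** 2 for v, vr in zip(y_full, y_red)) * dt

dt = 0.01
ts = [i * dt for i in range(1000)]
# Step response of G(s) = 2/((s+1)(s+10)), by partial fractions:
# y(t) = 0.2 − (2/9) e^{−t} + (1/45) e^{−10t}
y_full = [0.2 * (1 - (10 * math.exp(-t) - math.exp(-10 * t)) / 9) for t in ts]
y_good = step_first_order(0.2, 1.0, ts)  # keeps dominant pole and DC gain
y_bad = step_first_order(1.0, 5.0, ts)   # same DC gain, wrong pole
```

An optimizer such as AWOA would search over the reduced-model coefficients (here k and a) to minimize `ise`; the "good" candidate matching the dominant pole yields a much smaller ISE than the mismatched one.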

Problem Formulation
In this technique, a higher-order transfer function X(s) : u → v is reduced to a lower-order transfer function X̃(s) : u → ṽ without affecting the input u, such that the output ṽ(t) ≈ v(t). The integral error defined by the following equation is minimized in the process using the optimization algorithm:

E = ∫ [v(t) − ṽ(t)]² dt

where X(s) is the transfer function of a Single Input Single Output system defined by:

X(s) = (b_0 + b_1 s + · · · + b_m s^m) / (a_0 + a_1 s + · · · + a_n s^n)

For a reduced-order system, X̃(s) can be given by:

X̃(s) = (d_0 + d_1 s + · · · + d_{m_r} s^{m_r}) / (c_0 + c_1 s + · · · + c_{n_r} s^{n_r})

where (n_r ≥ m_r, m_r, n_r ∈ I). In this study, we calculate the values of the coefficients of the numerator and denominator of the reduced-order system defined in Equation (21) for two test systems (Function 1 and Function 2). The results of the optimization process, in terms of the time domain specifications, namely the rise time and settling time for both functions, are exhibited in Table 10. Furthermore, the convergence of the algorithm on both functions is depicted in Figures 14 and 15. The errors in the time domain specifications compared to the original system are depicted in Table 11. From these analyses, it is quite evident that MOR performed by AWOA leads to a system configuration that follows the time domain specifications of the original system quite closely. In addition, the errors in objective function values are also optimal in the case of AWOA.

Frequency-Modulated Sound Wave Parameter Estimation Problem
This problem has been used in many studies to benchmark the applicability of different optimizers. It was included in the 2011 Congress on Evolutionary Computation competition for testing evolutionary optimization algorithms on real-world problems [59]. It is a six-dimensional problem in which the parameters of a sound wave are estimated such that the wave matches a target wave.
The mathematical representation of this problem can be given as follows. The estimated sound wave and the target sound wave are, respectively:

J(t) = a_1 · sin(ω_1 · t · θ + a_2 · sin(ω_2 · t · θ + a_3 · sin(ω_3 · t · θ)))
J_0(t) = (1.0) · sin((5.0) · t · θ − (1.5) · sin((4.8) · t · θ + (2.0) · sin((4.9) · t · θ))) (24)

and the objective is to minimize the sum of squared errors between J(t) and J_0(t) over the sampling instants. The results of this design problem are shown in terms of different analyses, including the boxplot and convergence properties obtained from 20 independent runs; Figure 16 shows this analysis. A comparison of the performance on the basis of the error in the objective function values is depicted in Figure 17. Here, boxplot axis entries 1, 2, 3, 4 and 5 show LWOA, CWOA, the proposed AWOA, OWOA and WOA, respectively.
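The fitness function above can be sketched in a few lines; θ = 2π/100 and the 101 sampling instants t = 0, 1, …, 100 are assumed from the CEC 2011 problem definition rather than stated explicitly in the text.

```python
import math

THETA = 2 * math.pi / 100  # assumed sampling angle from the CEC 2011 spec

def fm_wave(a1, w1, a2, w2, a3, w3, t):
    """Three-level nested FM wave with parameter vector (a1, w1, a2, w2, a3, w3)."""
    return a1 * math.sin(w1 * t * THETA
                         + a2 * math.sin(w2 * t * THETA
                                         + a3 * math.sin(w3 * t * THETA)))

# Target wave J_0(t) from Equation (24): parameters (1.0, 5.0, −1.5, 4.8, 2.0, 4.9)
TARGET = [fm_wave(1.0, 5.0, -1.5, 4.8, 2.0, 4.9, t) for t in range(101)]

def fm_fitness(params):
    """Sum of squared errors between the estimated and target waves."""
    return sum((fm_wave(*params, t) - y0) ** 2 for t, y0 in enumerate(TARGET))
```

The global minimum of `fm_fitness` is 0, attained at the target parameter vector, which is what the optimizer is asked to recover.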

PID Control of DC Motors
In today's era of machinery, DC motors are used in various fields such as the textile industry, rolling mills, electric vehicles and robotics. Among the various controllers available for DC motors, the Proportional Integral Derivative (PID) controller is the most widely used and has proved its efficiency in providing accurate results while keeping the steady-state error and overshoot small [60]. Alongside this controller, an efficient tuning method is needed to control the speed and other parameters of DC motors. In recent years, researchers have explored metaheuristic algorithms for tuning different types of PID controllers. In [61], the authors presented a comparative study of simulated annealing, particle swarm optimization and the genetic algorithm. Stochastic fractal search was applied to the DC motor problem in [62]. The sine cosine algorithm has also been used to determine the optimal parameters of the PID controller of DC motors in [20]. In [63], the authors proposed chaotic atom search optimization for the optimal tuning of a fractional-order PID controller for DC motors. A hybridized version of foraging optimization and simulated annealing was reported for the same problem in [64]. The DC motor considered here is a type whose speed is controlled through the input voltage or a change in current.
In DC motors, the back EMF f_b(t) is directly proportional to the angular speed β(t) = dα(t)/dt while the flux is constant, i.e.:

f_b(t) = H_b · β(t)

The armature voltage f_a(t) satisfies the following differential equation:

f_a(t) = P_a · dr_a(t)/dt + K_a · r_a(t) + f_b(t)

The motor torque developed in the process (accounting for friction and neglecting the disturbance torque) satisfies:

L · dβ(t)/dt + T · β(t) = H_m · r_a(t)

Taking the Laplace transform of these equations and assuming all initial conditions to be zero, we get:

F_a(s) = (P_a s + K_a)R_a(s) + F_b(s)
F_b(s) = H_b Ω(s)
(Ls + T)Ω(s) = H_m R_a(s)

On simplifying these equations, the open-loop transfer function of the DC motor, with X(s) = Ω(s) denoting the speed, can be given as:

X(s)/F_a(s) = H_m / [(P_a s + K_a)(Ls + T) + H_b H_m] (32)
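The closed-loop tuning objective can be sketched with a forward-Euler simulation of the motor equations above under a PID speed controller. The motor constants and PID gains below are illustrative placeholders (not the values of Tables 12 and 13), and the derivative term is initialized so that no derivative kick occurs at the step instant.

```python
def simulate_pid(kp, ki, kd, ref=1.0, dt=1e-3, t_end=3.0):
    """Simulate the DC motor of Equations (above) under PID speed control
    and return (final speed, ISE). All constants are assumed examples."""
    Pa, Ka = 0.5, 1.0    # armature inductance / resistance
    L, T = 0.01, 0.1     # inertia / friction coefficient
    Hm, Hb = 0.01, 0.01  # torque / back-EMF constants
    i = w = integ = 0.0  # armature current, speed, error integral
    e_prev = ref         # avoids a derivative kick on the first step
    ise = 0.0
    for _ in range(int(t_end / dt)):
        e = ref - w
        integ += e * dt
        deriv = (e - e_prev) / dt
        e_prev = e
        v = kp * e + ki * integ + kd * deriv   # controller output voltage
        i += dt * (v - Ka * i - Hb * w) / Pa   # armature circuit equation
        w += dt * (Hm * i - T * w) / L         # mechanical equation
        ise += e * e * dt
    return w, ise
```

An optimizer would minimize the returned ISE (or a mix of settling and rise time) over (kp, ki, kd); with these placeholder constants, gains around (100, 200, 10) drive the speed to the reference.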

Results and Discussion
All the parameter and constant values considered in this experiment are given in Table 12. The simulation results for tuning the PID controller for the DC motor plant are depicted in Table 13. The first column shows the combined plant and controller realization as a closed-loop system, and the other two entries show the time domain specifications obtained when the system is subjected to a step input.
After careful observation, it is concluded that the closed-loop system realized with the proposed AWOA possesses optimal settling and rise times, which themselves indicate a fast transient response of the system. Although the comparative analysis of the other algorithms also shows very competitive values of these times, the response and convergence of AWOA are swift compared to the other opponents. The boxplot analysis and convergence property analysis are shown in Figure 18. The boxplot compares the optimization results over 20 independent runs; the x axis shows the AWOA, CWOA, LWOA, OWOA and WOA algorithms. The optimal entries of settling time and rise time are in bold face to showcase the efficacy of AWOA. The step responses of these controllers are shown in Figure 19.

Conclusions
This paper proposes a new variant of WOA. The singing behavior of whales is mimicked with the help of opposition-based learning in the initialization phase and Cauchy mutation in the position update phase. The following are the major conclusions drawn from this study:

• The proposed AWOA was validated on two benchmark suites (conventional and CEC 2017 functions). These benchmark suites comprise mathematical functions of distinct natures (unimodal, multimodal, hybrid and composite). We have observed that for the majority of the functions, AWOA shows promising results, and its performance is competitive with other algorithms.
• The statistical significance of the obtained results was verified with the help of a boxplot analysis and the Wilcoxon rank sum test. It is observed that the boxplots are narrow for the proposed AWOA and the p-values are less than 0.05. These results show that the proposed variant exhibits better exploration and exploitation capabilities, and one can easily see the positive implications of the proposed modifications.

• The proposed variant was also tested on challenging engineering design problems. The first problem is the model order reduction of a complex control system into reduced-order realizations; for this problem, AWOA shows promising results compared to WOA. As a second problem, the frequency-modulated sound wave parameter estimation problem was addressed; the performance of the proposed AWOA is competitive with contemporary variants of WOA. In addition, the application of AWOA was reported for tuning the PID controller of a DC motor control system. All these applications indicate that the modifications suggested in AWOA are quite meaningful and help the algorithm find the global optimum in an effective way.
The proposed AWOA can be applied to various other engineering design problems, such as network reconfiguration, solar cell parameter extraction and regulator design. These problems will be the focus of future research.