A New Self-Adaptive Teaching–Learning-Based Optimization with Different Distributions for Optimal Reactive Power Control in Power Networks

Abstract: Teaching–learning-based optimization (TLBO) has the disadvantages of weak population diversity and a tendency to fall into local optima, especially for multimodal and high-dimensional problems such as the optimal reactive power dispatch problem. To overcome these shortcomings, this study proposes a new enhanced TLBO based on novel and effective θ-self-adaptive teaching and learning to optimize voltage and active loss management in power networks, known as the optimal reactive power control problem with continuous and discontinuous control variables. Voltage and active loss management in any energy network can be optimized by finding the optimal control parameters, including the generator voltages, shunt power compensators, and the tap positions of tap changers.


Introduction
An important part of efficient, affordable, and reliable power system operation, or in other words, optimal operation, is the optimal reactive power (Volt-VAR) control problem, whose control variables include the generator voltages, shunt compensator powers, and the tap positions of tap changers [1]. In a competitive and deregulated environment, optimal Volt-VAR control is an important and efficient tool in electrical energy transmission networks. The main objective of solving the Volt-VAR control problem is to reduce network losses and, as a result, the final cost of energy transmission in energy systems while satisfying a set of operational and physical constraints imposed by network and equipment limitations. The basic goal is to minimize key functions, such as the sum of bus voltage deviations and the active power losses, while also addressing several practical limitations [2]. Because generator voltages are intrinsically continuous but shunt reactive power compensations and tap-changer ratios are discrete variables, the optimal VAR control problem is viewed as a complex multimodal nonlinear optimization problem involving discrete variables. It is also a high-dimensional problem with nonlinear objective functions that have multiple local minima and several nonlinear and discontinuous constraints [1][2][3][4].
Numerous methods, ranging from standard mathematical techniques to artificial intelligence approaches, have been proposed over the past years for the optimal VAR control problem. In general, the authors of these works attempted to create better optimization algorithms for discovering better optimal solutions than earlier published methods. Almost all demonstrations are based on the quality of the obtained solutions and the convergence characteristics of the best run out of many runs.
The rest of this paper is organized as follows. Section 2 presents the Volt-VAR control formulation for optimization. Section 3 presents the new proposed algorithms for the optimal VAR control problem. Section 4 shows the obtained optimal numerical results of the optimal VAR control problem. Finally, Section 5 presents the conclusions.

Volt-VAR Control Formulation
For the most part, a solution to the optimal VAR control (Volt-VAR control) problem aims at minimizing the sum of bus voltage deviations (SVD) while minimizing the active losses (P loss ) in the power grid, subject to a set of important operating criteria [1][2][3][4].
The mathematical formulation of the optimal VAR control problem is as follows [21]:

Minimize f AVR (x AVR , u AVR )
Subject to: g AVR (x AVR , u AVR ) = 0, h AVR (x AVR , u AVR ) ≤ 0

The objective function to be minimized is f AVR (x AVR , u AVR ), and x AVR represents the dependent variables, including:
1. Load-bus voltages V L ;
2. Unit reactive powers Q G ;
3. Network line loadings S l .

As a result, the x AVR vector can be defined as follows:

x AVR = [V L1 , . . ., V LNPQ , Q G1 , . . ., Q GNG , S l1 , . . ., S lNL ]^T

where NPQ is the number of load buses, NG signifies the number of units, and NL is the number of lines. u AVR is the vector of independent continuous and discontinuous decision parameters, including [21]:
1. Unit voltages V G ;
2. Tap-changer settings T;
3. Shunt reactive sources Q C .

Optimal VAR Control Functions

Network Active Loss Reduction
The first objective of the optimal VAR control problem is to minimize the real power transmission losses (P Loss ) in the transmission network. Total active losses are significant in power systems because the magnitude of current flowing through conductors is high and transmission lines can be hundreds of kilometers long, so reducing the network active losses is an important goal. The network active loss function is described as follows:

P Loss = ∑_{k=1}^{NTL} g k [V i ^2 + V j ^2 − 2 V i V j cos(δ ij )]

All of the parameters in the above equation, including NPQ, PQ, δ ij , NTL, and g k , are defined in [21].

Minimization of SVD
Bus voltage is regarded as a critical security and service-quality indicator. In this case, the goal function for the optimal VAR control problem is the minimization of SVD:

SVD = ∑_{i=1}^{NPQ} |V i − 1|

Minimization of Both Objective Functions (Optimal VAR Control Problem)

In an optimal VAR control problem, using only a true power loss objective results in workable control parameters but an unsatisfactory voltage profile. The combined optimal VAR control function is as follows:

Cost = P Loss + λ · SVD

Here, λ is the weighting (penalty) factor, which is set to 0.7 and 0.8 for the two standard IEEE test systems, respectively.
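As a quick illustration, the combined objective above can be sketched in a few lines of Python. This is a minimal sketch: the loss value, bus voltages, and the 1.0 p.u. reference used below are illustrative, and in practice P loss and the load-bus voltages would come from a power-flow solution.

```python
def voltage_deviation(v_load_buses, v_ref=1.0):
    """Sum of absolute load-bus voltage deviations from the reference (p.u.)."""
    return sum(abs(v - v_ref) for v in v_load_buses)

def combined_cost(p_loss, v_load_buses, lam=0.7):
    """Weighted single objective for the Volt-VAR problem: P_loss + lambda * SVD."""
    return p_loss + lam * voltage_deviation(v_load_buses)
```

With lam = 0.7 (the weight used for the IEEE 30-bus system), an illustrative loss of 0.0486 p.u. and voltages [0.98, 1.01, 0.97] give a combined cost of roughly 0.0906 p.u.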

Equality Constraints
The typical load flow equations, with g AVR (x AVR , u AVR ) as the equality constraints, are as follows [21]:

P Gi − P Di − V i ∑ j V j (G ij cos δ ij + B ij sin δ ij ) = 0, i = 1, . . ., NB
Q Gi − Q Di − V i ∑ j V j (G ij sin δ ij − B ij cos δ ij ) = 0, i = 1, . . ., NB

All of the parameters in the above equations, including P Gi , NB, Q Gi , Q Di , P Di , B ij , and G ij , are defined in [21].

Inequality Constraints
The inequality constraints of the problem, h AVR (x AVR , u AVR ), are as follows:

1.

Generation units' constraints: The unit power at the slack bus (P min Gi and P max Gi ), the generation units' bus voltages (V min Gi and V max Gi ), and the generation units' reactive powers (Q min Gi and Q max Gi ) are all constrained by the following limits (for i = 1, 2, . . ., NG):

P min G1 ≤ P G1 ≤ P max G1 , V min Gi ≤ V Gi ≤ V max Gi , Q min Gi ≤ Q Gi ≤ Q max Gi

Energies 2022, 15, 2759 5 of 24

2.
Tap-changer transformer limitations: The tap-changer transformer settings (T) are constrained by their minimum and maximum limits:

T min i ≤ T i ≤ T max i

where T min i and T max i define the lower and upper tap limits of the ith tap changer.

3.
Network parallel compensators' reactive constraints: Parallel VAR compensations are constrained by the following limits:

Q min Ci ≤ Q Ci ≤ Q max Ci

4.

Constraints on security: These include limits on the transmission line loadings (S li ≤ S max li ) and on the load-bus voltages (V min Li ≤ V Li ≤ V max Li ).

The objective function (Cost) imposes penalty terms on the dependent variables. Thus, (1) is modified as follows [5]:

Cost = f AVR + λ V ∑_{i=1}^{N lim V} (V i − V lim i )^2 + λ Q ∑_{i=1}^{N lim Q} (Q Gi − Q lim Gi )^2

where λ V and λ Q are penalty terms, and N lim V and N lim Q are the numbers of load buses and generator buses in which the voltage and the injected reactive power are outside their limits. V lim i and Q lim Gi are defined as:

V lim i = V max i if V i > V max i ; V lim i = V min i if V i < V min i
Q lim Gi = Q max Gi if Q Gi > Q max Gi ; Q lim Gi = Q min Gi if Q Gi < Q min Gi
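The penalty-augmented cost can be sketched as follows. This is a minimal sketch under stated assumptions: the voltage and reactive-power limits shown are illustrative placeholders, and only the λ value of 500 (fixed later in the paper for the test systems) is taken from the text.

```python
def clamp_limit(value, lo, hi):
    """Return the violated limit for a dependent variable, or the value itself if feasible."""
    if value < lo:
        return lo
    if value > hi:
        return hi
    return value

def penalized_cost(base_cost, v_loads, q_gens, v_lim=(0.95, 1.05), q_lim=(-0.2, 0.5),
                   lam_v=500.0, lam_q=500.0):
    """Augment the objective with quadratic penalties on out-of-limit load-bus
    voltages and generator reactive powers; feasible variables add no penalty."""
    pen_v = sum((v - clamp_limit(v, *v_lim)) ** 2 for v in v_loads)
    pen_q = sum((q - clamp_limit(q, *q_lim)) ** 2 for q in q_gens)
    return base_cost + lam_v * pen_v + lam_q * pen_q
```

A fully feasible point returns the base cost unchanged, so the penalties only shape the search near the boundary of the feasible region.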

TLBO
The TLBO optimizer, introduced in [15], is a well-known optimizer. Because the TLBO algorithm is founded on simulating a classical classroom learning process, it is easy to understand how it works. There are two stages of learning in this process: learning from a teacher (the teacher phase) and learning by engaging with other students or learners (the learner phase). The decision variables serve as knowledge topics for a group of learners in this optimization technique. The result of a learner corresponds to the "fitness" value, and the teacher is regarded as the best solution for the entire population. Since the problem parameters are the essential variables in the optimization problem's objective function, a good solution is one that drives this objective function to its optimal value. Both portions of the TLBO algorithm are executed sequentially: the "teacher phase" comes first, followed by the "learner phase".

Teacher Phase
During this phase, the best individual (the teacher) strives to improve the class's average outcome in the subject that he or she teaches, within the limits of his or her competence. At this phase, the teaching position is allocated to the most qualified member (the teacher). Thus, enhancing the mean outcome of the class in the TLBO method is analogous to improving the other individuals (Learner i ) by repositioning them closer to the teacher's position while taking the existing mean value of the population (Learner mean ) into account. An average value is computed for each parameter of the problem dimension, and this reflects the quality of all present learners in the population. Equation (19) shows how the gap between the teacher's knowledge and the quality of the learners affects the students' improvement:

Learner new,i = Learner i + r (Teacher − T F · Learner mean )    (19)
In Equation (19), T F denotes a teaching factor that determines how much of the mean value is to be changed, and r is a random value in the range [0, 1].
The value of T F is either 1 or 2, selected heuristically and randomly with equal probability using T F = round[1 + rand(0, 1)].
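For concreteness, the teacher phase can be sketched in Python as follows. The greedy acceptance of improving candidates is the standard TLBO convention; the plain-list representation and the minimization assumption are implementation choices, not details taken from the paper.

```python
import random

def teacher_phase(population, fitness):
    """One TLBO teacher phase: move each learner toward the best learner (teacher),
    weighted by the gap between the teacher and the class mean (minimization)."""
    dim = len(population[0])
    teacher = min(population, key=fitness)
    mean = [sum(x[d] for x in population) / len(population) for d in range(dim)]
    new_pop = []
    for learner in population:
        t_f = random.randint(1, 2)   # teaching factor, 1 or 2 with equal probability
        r = random.random()          # r in [0, 1]
        candidate = [learner[d] + r * (teacher[d] - t_f * mean[d]) for d in range(dim)]
        # greedy selection: keep the candidate only if it improves on the learner
        new_pop.append(candidate if fitness(candidate) < fitness(learner) else learner)
    return new_pop
```

Applied with, e.g., a sphere test function, each learner's fitness is guaranteed not to deteriorate within a phase because of the greedy acceptance step.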

Learner Phase
Learners can enhance their knowledge in two ways: through teacher input or through interactions among themselves, which is termed the learner phase. During this phase, Learner i attempts to improve his or her understanding via peer learning from a randomly chosen learner Learner ii , where Learner i is not equal to Learner ii . Two possibilities can occur, subject to the fitness values of Learner i and Learner ii : if Learner ii has more knowledge than Learner i , then Learner i is moved toward Learner ii (Equation (20)); otherwise, it is moved away from Learner ii (Equation (21)). If Learner new has better fitness according to Equation (20) or (21), it is accepted into the population. The TLBO optimizer continues producing generations until it reaches the final iteration. Additionally, it is vital to handle infeasible learners successfully when evaluating whether one learner is superior to another on constrained engineering optimization functions.
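The learner phase can be sketched in the same style. As before, this is a minimal sketch: the greedy acceptance and minimization convention are assumptions matching the teacher-phase sketch, not code from the paper.

```python
import random

def learner_phase(population, fitness):
    """One TLBO learner phase: each learner interacts with a random peer and moves
    toward it if the peer is better, away from it otherwise (Equations (20)-(21))."""
    n, dim = len(population), len(population[0])
    new_pop = []
    for i, learner in enumerate(population):
        j = random.choice([k for k in range(n) if k != i])   # a different peer
        peer = population[j]
        r = random.random()
        if fitness(peer) < fitness(learner):   # peer is better: move toward it
            candidate = [learner[d] + r * (peer[d] - learner[d]) for d in range(dim)]
        else:                                  # peer is worse: move away from it
            candidate = [learner[d] + r * (learner[d] - peer[d]) for d in range(dim)]
        new_pop.append(candidate if fitness(candidate) < fitness(learner) else learner)
    return new_pop
```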

Mutation Operators for Improved Algorithms
When using the TLBO method, one should be aware of potential problems such as slow convergence, premature convergence, poor accuracy, and a lack of diversity. The performance of the TLBO algorithm is improved by using three well-known mutation techniques as upgraded phases. Mutation operators introduce new learners by perturbing existing ones, hence increasing diversity in the classroom and decreasing the likelihood of the search becoming trapped in local optima. Random mutation sequences can follow Gaussian, Lévy, Cauchy, or Beta distributions, as well as chaotic distributions based on logistic maps or mixed types such as a cloud distribution. Personal and globally optimal position vectors, as well as randomly picked current positions and velocities, can be mutated. In this section, we describe the three types of mutations used throughout the paper.

Cauchy Mutation Using Cauchy Distribution
The Cauchy distribution [52] has the following probability density function:

f(x) = (1/π) · t / (t^2 + x^2)

where t > 0 is a scale parameter and x ∈ R. A real-valued random parameter Y that is Cauchy-distributed with scale t > 0 is denoted Y ~ C(0, t). The Cauchy mutation using the Cauchy distribution with t = 1 is as follows:

Learner new,i (d) = Learner i (d) + δ d (1)

where Learner i (d) is the dth control parameter of the ith learner, and δ d (1) indicates that a Cauchy(0, 1) random value is newly generated for each dimension d.
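A minimal sketch of this mutation in Python follows; the inverse-transform sampler (tangent of a uniform draw) is a standard way to generate Cauchy variates and is an implementation choice, not taken from the paper.

```python
import math
import random

def cauchy_sample(t=1.0):
    """Draw from a Cauchy(0, t) distribution via inverse-transform sampling."""
    return t * math.tan(math.pi * (random.random() - 0.5))

def cauchy_mutation(learner):
    """Perturb every dimension of a learner with a fresh Cauchy(0, 1) step."""
    return [x + cauchy_sample(1.0) for x in learner]
```

The heavy tails of the Cauchy distribution occasionally produce very long jumps, which is what helps mutated learners escape local optima.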

Gaussian Mutation Using Gaussian Distribution
The Gaussian distribution is one of the most widely used distributions [52]. It is a simple distribution described by its mean µ and variance σ^2. Its probability density function is:

f(x) = (1/(σ√(2π))) · exp(−(x − µ)^2 / (2σ^2))

A normally distributed parameter Y is denoted Y ~ N(µ, σ^2). The Gaussian mutation using the Gaussian distribution with σ = 1 and µ = 0 is as follows:

Learner new,i (d) = Learner i (d) + N d (0, 1)

where N d (0, 1) indicates that a new standard normal random number is generated for each dimension d.
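The corresponding sketch is a one-liner around the standard library's normal sampler:

```python
import random

def gaussian_mutation(learner, mu=0.0, sigma=1.0):
    """Perturb every dimension of a learner with a fresh N(mu, sigma^2) sample."""
    return [x + random.gauss(mu, sigma) for x in learner]
```

Compared with the Cauchy mutation, the Gaussian steps are concentrated near zero, which favors fine-grained local search over long exploratory jumps.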

Lévy Mutation Using Lévy Distribution
The probability density of the Lévy distribution [53] is usually expressed through the integral of its characteristic function, parameterized by a scaling factor γ and a stability index α. The Lévy mutation [54] using the Lévy distribution with γ = 1 and α = 1 is as follows:

Learner new,i (d) = Learner i (d) + L d (1)

where L d (1) indicates that a new Lévy-distributed random number is generated for each dimension d.
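Since the Lévy density has no simple closed form, a common way to generate approximately Lévy-distributed steps is Mantegna's algorithm, sketched below. This is an assumed substitute for the paper's generator, and the stability index beta = 1.5 used here is a conventional default rather than the paper's setting.

```python
import math
import random

def levy_step(beta=1.5):
    """Approximate Levy-stable step via Mantegna's algorithm (assumed substitute
    for the paper's Levy generator)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def levy_mutation(learner):
    """Perturb every dimension of a learner with a fresh Levy-distributed step."""
    return [x + levy_step() for x in learner]
```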

Improved TLBO Algorithms with Mutation Strategy
The updated and improved algorithm produces a better simulated response and performs better in both global and local search. Figure 1 illustrates the hybrid method. In the hybrid phase of the new algorithm, learners that do not perform well in the teacher-phase tests may be subjected to self-adaptive mutations.


Self-Adaptive Strategy in TLBO
In the enhanced algorithms, students search their space using learning and teaching activities, shifting by a random percentage of their distance from the teacher and the other students in each iteration. By selecting large initial values for the auxiliary mutation parameters, students can move quickly toward global optima and cross local optimum points. However, when students approach the global optimum, the algorithm is unable to perform an effective local search, since the parameters are large relative to the remaining search space.
When the initial values of the auxiliary mutation parameters are kept small, the algorithm can perform a local search with a high degree of stability and strength. However, in this case, students progress slowly toward the goal, and their ability to cross local optima diminishes. As a result, we must control the auxiliary mutation parameters to boost the performance of the improved methods in both global and local searches, decreasing the value of the auxiliary mutation parameters as the iteration count increases. To increase the performance of the mutations, we employ a comprehensive technique based on sigma self-adaptation of several parameters.
The self-adaptive variance approach selects a β parameter vector whose length equals the problem dimension. Here, the β parameter mutates first, and then the other members mutate using this new parameter. In this relationship, τ is the "global learning rate", τ' is the "learning rate of the different characteristics of the vector", and Iter max indicates the maximum number of authorized algorithm iterations or generations.
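The printed learning-rate formulas are incomplete in the extracted text, so the sketch below assumes the standard evolution-strategy choices, τ' = 1/√(2n) globally and τ = 1/√(2√n) per dimension, which are common defaults for log-normal self-adaptation; the paper's own formulas also involve Iter max and may differ.

```python
import math
import random

def self_adapt_beta(beta):
    """Log-normal self-adaptation of the per-dimension mutation parameters beta.
    Learning rates follow the standard evolution-strategy convention (assumed)."""
    n = len(beta)
    tau_global = 1.0 / math.sqrt(2.0 * n)            # one draw shared by all dimensions
    tau_local = 1.0 / math.sqrt(2.0 * math.sqrt(n))  # a fresh draw per dimension
    shared = random.gauss(0.0, 1.0)
    return [b * math.exp(tau_global * shared + tau_local * random.gauss(0.0, 1.0))
            for b in beta]
```

Because the update is multiplicative with a log-normal factor, the adapted parameters stay positive while being free to grow or shrink over the generations.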
The remaining requirements are identical to those of the previous method. As a result, Equations (19)–(21), (24), (27), and (29) self-adapt and are used. Combining SATLBO with the mutation operators leads to the following new advanced and powerful optimizers:

1. SACTLBO: the SATLBO algorithm improved through the application of Cauchy mutation.
2. SAGTLBO: the SATLBO algorithm improved through the application of Gaussian mutation.
3. SALTLBO: the SATLBO algorithm improved through the application of Lévy mutation.

θ-SATLBO Optimizers
Rather than optimizing in the real space of the control variables, phase-angle optimization is at the heart of our approach. The approach ultimately generates a solution in terms of phase angles, from which the final values of the decision parameters are deduced. As a result, formulations (31) and (36) are altered accordingly. The decision parameters are constrained to (−π/2, π/2) in this approach. Phase angles are included in the problem-solving process implemented by the hybrid θ-SATLBO algorithm: the phase angles are updated in each iteration through formulations (37) to (42), and at the end, the control variables are determined by a one-to-one mapping. The intended mapping maps the real limits of the optimization variables to the θ space, and the inverse per-unit representation simplifies the calculation and understanding of the problem. Because the θ space is compressed relative to the actual problem space, the θ values are small, and the effectiveness of the search rises significantly. As a result, in a compact space such as the θ space, the algorithm's local optima may be extremely close to the global optima.
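One common one-to-one mapping of this kind, used in θ-PSO-style methods, is sine-based. The exact mapping equation is not reproduced in the text above, so the form below is an assumption; the voltage bounds in the usage note are illustrative.

```python
import math

def theta_to_variable(theta, x_min, x_max):
    """Map a phase angle in (-pi/2, pi/2) to [x_min, x_max] via a monotone
    sine mapping (assumed form of the paper's one-to-one mapping)."""
    return x_min + (x_max - x_min) * (1.0 + math.sin(theta)) / 2.0
```

For a generator voltage bounded in [0.95, 1.1] p.u., θ = −π/2 maps to 0.95, θ = 0 to the midpoint 1.025, and θ = π/2 to 1.1, so the angle space covers the whole feasible interval exactly once.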

Numerical Results of Optimal VAR Control Problem
The suggested procedures for the optimal VAR control problem were evaluated on two standard power networks to verify their efficiency. The TLBO optimizers were implemented in MATLAB 7.6, and the simulations were performed on a Pentium IV E5200 PC with 2 GB of RAM. The chosen final iteration counts (Iter max ) for the two power systems, the IEEE 30- and 57-bus standard networks, were 100 and 150, with population sizes of 45 and 60, respectively.
The reactive powers of the shunt compensators and the transformer taps are discontinuous parameters with a step value of 0.01 p.u., and the penalty values in (16) are fixed at 500 [12]. The following algorithm results represent the best solutions over 50 independent trials.
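Handling these discrete variables typically amounts to snapping each continuous candidate to the nearest 0.01 p.u. step before evaluating it; a minimal sketch follows, with illustrative bounds (only the 0.01 step size is taken from the text).

```python
def snap_to_step(value, step=0.01, lo=0.9, hi=1.1):
    """Round a continuous candidate to the nearest discrete step and clip it to
    its bounds (bounds here are illustrative)."""
    snapped = round(value / step) * step
    return min(max(snapped, lo), hi)
```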

The First Test Network: IEEE 30-Bus Power Network (System 1)
In this part, the simulation outcomes derived from solving the optimal VAR control problem using the provided techniques are discussed. The proposed new TLBO optimizers' performance was evaluated on the IEEE 30-bus standard system depicted in Figure 2. Reference [7] describes the IEEE 30-bus network and its primary operating limits and conditions. Table 1 shows the allowed ranges of the decision variables. Six generators are situated on buses 1, 2, 5, 8, 11, and 13 in the IEEE 30-bus test system, and buses 3, 10, and 24 are designated as active shunt compensation buses [8].
The network loads were specified as: Q load = 1.262 p.u., P load = 2.834 p.u. The total primary unit outputs and network losses were: ∑Q G = 0.980199 p.u., ∑P G = 2.893857 p.u., Q loss = −0.064327 p.u., P loss = 0.059879 p.u. The proposed algorithms' viability was evaluated using various goal functions on this test network, as explained below.

Minimization of Network Active Losses
The goal is to reduce the total transmission losses to a minimum. Table 2 summarizes the best optimal VAR control solutions over 50 trials for minimizing the total active transmission power losses using θ-SAGTLBO. The results indicate that θ-SAGTLBO yields active power losses of 0.0486217 p.u., which is smaller than the amounts achieved using the other methods. Regarding convergence characteristics, Figure 3 demonstrates that the θ-SATLBO optimizers reach a better set of control parameters more quickly than the other TLBO optimizers.
Table 2. Best optimal parameter settings in p.u. for case 1 of system 1.

Table 3 compares the specifications of the optimal solutions obtained by the suggested algorithms after 50 runs with those obtained in the references. A summary of the performance indicators, including the mean execution times, the best (Best) and worst (Worst) real losses, the standard deviation (Std.), the average real losses (Mean), and the loss-saving percentage (P save ) over 50 independent runs, is shown in the table. Table 3 shows that the θ-SAGTLBO strategy reduces the active power losses by 18.81%, the largest reduction compared with the other alternatives. According to these outcomes, the θ-SAGTLBO algorithm outperforms the other algorithms in terms of robustness.
Additionally, Figures 4-6 illustrate the convergence graphs of control variable optimization generated by the θ-SAGTLBO algorithm in terms of the number of generations required to achieve the best solution.

Figure 6. Convergence of Q C for case 1 using θ-SAGTLBO of system 1.

Improvement of the Voltage Profile

In this case, the goal function for the optimal VAR control problem is the minimization of the voltage deviation (SVD). The optimal control variable settings found using the various methods for case 2 are summarized in Table 4. Each algorithm's final solution and CPU time were monitored, and detailed statistical data are provided in Table 5. As shown in Table 4, the suggested θ-SACTLBO and θ-SAGTLBO algorithms produce an SVD of 0.1233 p.u. In terms of solution quality, the results clearly show that the presented θ-SAGTLBO algorithm surpasses the other state-of-the-art methods. The convergence characteristics of the voltage deviation minimization using the TLBO algorithms are plotted in Figure 7.
Table 4. Best optimal parameter settings for case 2 of system 1.

Improvement of the Network Voltage Profile with the Minimization of Active Losses
Instead of optimizing the SVD and the losses separately, the algorithms here optimize both together. Table 6 summarizes the optimal control variables, SVD, and power losses associated with the methods. As can be seen from the data, the updated algorithms discovered the optimal tradeoff between active power losses and SVD. The convergence of the combined SVD and loss minimization is presented in Figure 8 for all TLBO optimizers. The active power losses in this scenario are greater than those in case 1 and smaller than those in case 2, while the SVD is better than in case 1 and worse than in case 2.

The Second Test Network: IEEE 57-Bus Power Network (System 2)
This system, shown in Figure 9, is used as a larger-scale network in the second stage of the optimal VAR control study to show the usefulness of the proposed algorithms on larger systems. The test system comprises eighty transmission lines; parallel reactive power generators on buses 18, 25, and 53; seven generators on buses 1, 2, 3, 6, 8, 9, and 12; and fifteen load-tap-setting transformer branches. The bus data, the line data, and the allowed range of real power generation were obtained from Reference [12], and the parameter limits are shown in Table 7.
The network loads are [58]: Q load = 3.364 p.u., P load = 12.508 p.u. The total primary unit outputs and network losses obtained are [58]: ∑Q G = 3.4545 p.u., ∑P G = 12.7926 p.u., Q loss = −1.2427 p.u., P loss = 0.28462 p.u.

Minimization of Network Active Losses
Table 8 presents the statistical information and CPU time of the optimal settings found using the various methods. The θ-SAGTLBO algorithm determined the optimum solution after 50 trial runs, with active power losses of 0.2372619 p.u. The table shows that the θ-SAGTLBO method achieves a 16.64% reduction in power losses, which is greater than the other alternatives. The assessment of the robustness of the suggested methodology is based on data from 50 separate runs with diverse initial populations; θ-SAGTLBO clearly shows a more robust and effective performance than the other methods. To ensure a near-optimal response in any randomized attempt, the Std. index across the trials must also be extremely low. Figure 10 depicts the convergence of the network losses as a function of the iteration number.

Improvement of the Voltage Profile
This experiment assessed the objective function of SVD reduction for this network. Table 9 illustrates the statistical information and CPU time for the various algorithms. The SVD obtained by the θ-SAGTLBO method is the best result for this case, as depicted in Table 9. The convergence of the voltage deviation minimization is illustrated in Figure 11.

Improvement of the Network Voltage Profile with the Minimization of Active Losses
Rather than optimizing the SVD and the active power losses separately, both objective functions are optimized simultaneously using the updated methods for this standard network. Table 10 summarizes the optimal control variables, SVD, and network losses obtained with the previous and the TLBO optimizers. The presented optimizers identified the optimal tradeoff solutions between active power losses and SVD. The results for scenario 3 on this network reveal that neither the SVD nor the power losses can be further decreased without the other deteriorating. The convergence characteristics for network loss minimization and SVD minimization are presented in Figure 12 for all TLBO optimizers. In short, the θ-SATLBO algorithms, as novel efficient optimization algorithms, confirmed their superior efficiency and reliability in finding optimal solutions to several optimal Volt-VAR control problems compared with other well-known search approaches. Therefore, we can conclude that the θ-SATLBO algorithms are suitable and powerful optimizers for real-world contemporary problems, and researchers in other fields can effectively apply this method to their own work.

Conclusions
In this study, we enhanced the original TLBO method and presented a θ-self-adaptive TLBO (θ-SATLBO) approach that incorporates mutation operators into the update process. Additionally, new and effective mutation operators (i.e., Cauchy, Gaussian, and Lévy mutations) that are frequently employed in evolutionary algorithms were selected to increase the exploration potential and the diversity of the population in the improved θ-SATLBO technique. The proposed algorithms were run on the IEEE 30- and 57-bus networks, and the resulting data were compared with those from the references. The simulation findings demonstrate that the θ-SAGTLBO algorithm is more efficient than the other algorithms tested in this study at balancing global search capabilities to solve optimal Volt-VAR control problems. This study demonstrates that the presented algorithms can solve optimal Volt-VAR control problems owing to their superior performance with various goal functions.
θ-SATLBO's best optimized outcomes are better than those obtained by the other studied optimization algorithms, and the efficient convergence of the real-parameter θ-SATLBO algorithms is demonstrated by their rapid convergence speed. It would be interesting to apply real-parameter θ-SATLBO optimizers to other engineering and science optimization problems in the future. It would also be beneficial to study the influence of different distributions on the real-parameter θ-SATLBO algorithms' efficiency. In addition, statistical results from the perspective of standard deviations showed that the proposed algorithm is reasonably reliable compared with the other comparative algorithms. However, for practical and real-time applications, further improvements in convergence speed and CPU time may be required.

Figure 1. Flow chart showing the operation of the proposed optimizers.

Figure 3. Performance characteristics of TLBO optimizers for case 1 of system 1.

Figure 4. Convergence of V G for case 1 using θ-SAGTLBO of system 1.

Figure 5. Convergence of T for case 1 using θ-SAGTLBO of system 1.

Figure 11. Performance characteristics of TLBO optimizers for case 2 of system 2.

Figure 12. Performance characteristics of TLBO optimizers for case 3 of system 2.

Table 3. Statistical details for case 1 of system 1.

Table 4. Best optimal parameter settings for case 2 of system 1.

Table 5. Statistical details for case 2 of system 1.

Table 6. Best optimal parameter settings for case 3 of system 1.

Table 8. Statistical details for case 1 of system 2.

Table 10. Statistical details for case 3 of system 2.