Meta-Heuristic Optimization Techniques Used for Maximum Power Point Tracking in Solar PV System

Critical advancements in solar photovoltaic (PV) installations have strongly accelerated the development of new MPPT techniques. A solar PV system has non-linear characteristics under varying climatic conditions, which considerably impact its output. Furthermore, the partial shading condition (PSC) causes major problems, such as a drop in output power and multiple peaks in the P-V characteristic. Hence, tracking the global maximum power point (GMPP) under PSC is a demanding problem. Consequently, different maximum power point tracking (MPPT) strategies have been utilized to improve the yield of a PV system. However, the difficulty lies in choosing the best MPPT technique for a particular purpose from the wide range of available algorithms, since each algorithm has its benefits and drawbacks. Hence, there is a fundamental need for a periodic review of MPPT strategies. This article presents recent work in the field of global maximum power point tracking (GMPPT) algorithms under PSCs. It summarizes different MPPT strategies along with their working principles, mathematical representations, and flow charts. Moreover, the tables in this study concisely organize the significant attributes of the algorithms. This work will serve as a reference for selecting an MPPT technique while designing PV systems.


Introduction
Reserves of natural fossil fuels are being depleted at a rapid pace. Therefore, the growing electricity demand can be met by employing renewable energy sources, which offer promising avenues for electricity generation. Among the various sustainable energy sources, solar energy proves to be a viable substitute, since it is an ample, inexhaustible, and non-polluting source of energy. Solar energy production is booming, driven by recent advancements in the accuracy and convergence speed of techniques for harvesting maximum energy [1,2]. As suggested by the International Energy Agency report 'Global Energy Review 2021', global electricity demand was set to increase by 4.5% in 2021, or more than 1000 TWh. In 2020, renewable energy grew by 3%, and demand for renewable energy was expected to increase in 2021 in every sector, such as heating, power, etc. Solar photovoltaic (PV) and wind were estimated to account for two-thirds of renewable growth, with the contribution of renewable energy sources to electricity production growing by practically 30% in 2021.

Solar Cell Characteristic
The current produced by the solar cell is given by Equation (1):

I_pv = I_ph − I_D − I_sh (1)

where I_pv represents the output current; I_ph indicates the photoelectric current; I_D denotes the diode current; and I_sh signifies the shunt current.
To compute the current flowing in the diode, the Shockley equation is utilized, as per Equation (2):

I_D = I_0 [exp(q(V_pv + I_pv R_s)/(nKT)) − 1] (2)

By Ohm's law, the current flowing in the shunt resistor, I_sh, is calculated using Equation (3):

I_sh = (V_pv + I_pv R_s)/R_sh (3)

Substituting Equations (2) and (3) into Equation (1) yields the characteristic equation of the solar cell, as specified in Equation (4):

I_pv = I_ph − I_0 [exp(q(V_pv + I_pv R_s)/(nKT)) − 1] − (V_pv + I_pv R_s)/R_sh (4)

where q denotes the electron charge; I_0 represents the reverse saturation current of the diode; R_s and R_sh represent the series and shunt resistances of the solar cell in (Ω), respectively; T denotes the junction temperature; K denotes Boltzmann's constant [4]; and n signifies the ideality factor of the diode.
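The characteristic equation is implicit in I_pv, so it is usually solved numerically. The sketch below solves Equation (4) for the terminal current by damped fixed-point iteration; the module parameters (8 A photocurrent, diode and resistance values) are hypothetical illustration values, not figures from this article.

```python
import math

Q = 1.602e-19   # electron charge (C)
K = 1.381e-23   # Boltzmann constant (J/K)

def cell_current(v, i_ph=8.0, i_0=1e-9, n=1.3, t=298.0, r_s=0.01, r_sh=100.0):
    """Solve Equation (4) for I_pv at terminal voltage v (volts) by damped
    fixed-point iteration, since I_pv appears on both sides."""
    vt = n * K * t / Q            # nKT/q, the modified thermal voltage
    i = i_ph                      # initial guess: the photocurrent
    for _ in range(200):
        vd = v + i * r_s          # voltage across the diode/shunt branch
        i_new = i_ph - i_0 * (math.exp(vd / vt) - 1.0) - vd / r_sh
        if abs(i_new - i) < 1e-9:
            break
        i = 0.5 * (i + i_new)     # damping keeps the iteration stable
    return i

i_sc = cell_current(0.0)          # short-circuit current, close to I_ph
```

At v = 0 the result sits just below the 8 A photocurrent, as Equation (4) predicts; sweeping v from zero to the open-circuit voltage traces out the I-V curve.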

Effect of Varying Insolation on I-V and P-V Characteristics of the Solar Cell
The effect of changing insolation on the I-V and P-V curves is observed by varying the insolation from 200 W/m² to 1000 W/m² in increments of 200 W/m². As the solar irradiance increases, the PV module generates more output power due to the rise in current. The increase in current yields higher peaks on the I-V and P-V curves, as depicted in Figures 5 and 6, respectively.


Partial Shading Condition
Photovoltaic systems are highly prone to partial shading. A shadow is cast upon a surface (such as a solar panel) by any obstruction intercepting the solar rays, resulting in the partial shading condition (PSC). Uniform irradiance is thus not continually possible because of changing environmental conditions such as rain, clouds, and storms. Furthermore, building shade and tree shade also contribute to shading. Hence, this effect prevents series-connected solar panels from receiving the same incident irradiance level [8].
Due to shading on the PV array, the output power of the PV module decreases. The non-linearity in the PV module's output I-V characteristics leads to multiple local maxima on the P-V curve. Moreover, shading produces hot spots that can severely damage the affected cells. Additionally, current mismatch within a PV string and voltage mismatch between parallel modules are significant drawbacks of shadowing. The severity of the shading impact depends on the configuration of the PV string, the type of module used, the placement of the bypass diodes, the partial shading patterns, and the shading intensity [9].
If partial shading of one cell occurs, less current is generated in the shaded cell than in the other cells of the string. Consequently, the higher current set by the un-shaded cells is forced through the shaded cell, which then acts as a reverse-biased diode. Furthermore, the shaded cell limits the current flow in the string, so the output power of the PV string decreases. Moreover, as the number of shaded cells increases, the decrease in the output power of the PV string becomes more prominent.
The number of peaks in the P-V curve increases with the number of shaded modules. Therefore, to mitigate the shading effect, a bypass diode is introduced across each sub-string of series-connected cells. A bypass diode allows current flow in only one direction. Bypass diodes, connected in anti-parallel, offer a low-impedance path for the string current when a shaded sub-string would otherwise act as a sink [10].
Under ordinary conditions of uniform irradiance, the P-V curve presents a unique MPP, as illustrated in Figure 2. However, during partial shading, the I-V curve takes on a staircase shape, while the corresponding P-V curve shows multiple peaks, as depicted in Figures 7 and 8, respectively.
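The multiple-peak behaviour can be reproduced with a deliberately simplified string model: two ideal series modules with hypothetical photocurrents of 8 A and 3 A (no series or shunt resistance), each clamped at −0.5 V by an ideal bypass diode once the string current exceeds its photocurrent. Sweeping the string current then reveals two local maxima in the power curve.

```python
import math

K, Q = 1.381e-23, 1.602e-19   # Boltzmann constant, electron charge

def module_voltage(i, i_ph, i_0=1e-9, n=1.3, t=298.0, cells=36):
    """Ideal module voltage at string current i; a bypass diode clamps the
    module at -0.5 V once i exceeds the module's photocurrent."""
    if i < i_ph:
        v_cell = (n * K * t / Q) * math.log((i_ph - i) / i_0 + 1.0)
        return cells * v_cell
    return -0.5   # the bypass diode conducts, carrying the excess current

def string_powers(i_ph_full=8.0, i_ph_shaded=3.0, steps=800):
    """Power samples along a current sweep of a two-module string, one shaded."""
    powers = []
    for s in range(1, steps):
        i = i_ph_full * s / steps
        v = module_voltage(i, i_ph_full) + module_voltage(i, i_ph_shaded)
        powers.append(max(v * i, 0.0))
    return powers

def count_local_maxima(p):
    return sum(1 for a, b, c in zip(p, p[1:], p[2:]) if b > a and b > c)

# One local peak appears below the shaded module's 3 A limit, and a second,
# global peak appears once the bypass diode conducts:
n_peaks = count_local_maxima(string_powers())
```

This is only a sketch of the mechanism: real strings add resistive effects and partial conduction around the bypass threshold, but the staircase I-V curve and the multiple P-V peaks arise exactly as described above.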


Maximum Power Point Tracking Algorithms
A PV array has non-linear characteristics but exhibits a distinct maximum power point (MPP). Therefore, to extract the optimum power from PV panels, MPPT techniques are employed. The MPPT algorithm is enforced through a power electronic converter. The MPPT ensures that the PV array operates at the reference voltage (V_ref) at all times, improving PV panel efficiency under varying atmospheric conditions [11]. The typical block diagram of an MPPT framework is illustrated in Figure 9.


Conventional MPPT Strategies
Recent developments in conventional algorithms are summarized in concise form in Table 1 at the end of this section, while each traditional method is explained comprehensively in the following sub-sections.

Perturb and Observe (P&O) MPPT Technique
The P&O strategy is widely utilized owing to its simplicity and ease of implementation [12]. Furthermore, fewer sensors are required, resulting in lower implementation costs [13]. The P&O MPPT algorithm operates on rules similar to the 'Hill Climb Search' technique, although the latter is less efficient than P&O [14].
The P&O strategy is an iterative technique used to track the maximum power point (MPP). Its operating principle is to introduce a slight disturbance in the voltage of the PV array and measure the corresponding impact on the power. Accordingly, the PV module voltage is incremented or decremented by varying the duty cycle of the dc-dc converter. These perturbations confirm whether the power has increased or decreased. If an increment in the voltage increases the power, the operating point of the PV module is on the left side of the P-V plot, signaling that the perturbation should continue in the positive direction [15]. However, if an increment in the voltage prompts a decrease in the power, the operating point is on the right side of the P-V plot, and the perturbation direction must be reversed. The iteration process is continued until the MPP is attained.
Even though the P&O strategy works well under settled insolation, it has a few disadvantages: oscillation near the MPP, a slow MPP tracking speed, and failure to locate the true MPP under partial shading conditions [13,16]. Therefore, modified P&O techniques have been proposed to overcome these drawbacks [17,18]. The standard flowchart to implement the P&O algorithm is shown in Figure 10.
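The perturb-then-compare loop described above can be sketched in a few lines. The P-V curve below is a hypothetical unimodal stand-in with its maximum at 17.5 V; in a real system `measure_power` would come from voltage and current sensors, and the voltage step would act through the converter duty cycle.

```python
def perturb_and_observe(measure_power, v_start=15.0, dv=0.1, steps=200):
    """Minimal P&O sketch: perturb the operating voltage and keep the
    direction that increased power (assumes a unimodal P-V curve)."""
    v, direction = v_start, 1.0
    p_prev = measure_power(v)
    for _ in range(steps):
        v += direction * dv            # perturb the voltage reference
        p = measure_power(v)
        if p < p_prev:                 # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v

# Hypothetical unimodal P-V curve with its maximum at 17.5 V:
pv_curve = lambda v: max(0.0, 60.0 - 0.8 * (v - 17.5) ** 2)
v_op = perturb_and_observe(pv_curve)
```

Note how the loop never settles: once at the peak, the operating point oscillates within one perturbation step of the MPP, which is exactly the steady-state oscillation drawback mentioned above.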


Incremental Conductance MPPT Algorithm
The incremental conductance (INC) technique is an enhanced version of the P&O strategy. It is utilized to follow the MPP under fast-changing atmospheric conditions [19,20].
The INC technique is principally based on the fact that the slope of the power (P-V) curve of the PV array is zero (∂P/∂V = 0) at the MPP, positive (∂P/∂V > 0) to the left of the MPP, and negative (∂P/∂V < 0) to the right of the MPP.
The instantaneous power (P) is defined as the product of current and voltage, as given in Equation (5):

P = V × I (5)

Differentiating Equation (5) with respect to V, the slope of the P-V curve can be computed as in Equation (6):

∂P/∂V = I + V(∂I/∂V) (6)

Therefore, the following expressions can be composed:

∂I/∂V = −I/V at the MPP, i.e., (∂P/∂V = 0) (7)
∂I/∂V > −I/V to the left of the MPP, i.e., (∂P/∂V > 0) (8)
∂I/∂V < −I/V to the right of the MPP, i.e., (∂P/∂V < 0) (9)

Thus, the MPP can be followed by comparing the incremental conductance (∂I/∂V) to the instantaneous conductance (I/V), as illustrated in the flowchart to implement the INC technique given in Figure 11. The INC technique avoids oscillations in a steady state until the conditions (irradiance and temperature) change [21]. However, during transients, the INC strategy acts analogously to the P&O technique; hence, the theoretical preference of Incremental Conductance over Perturb and Observe is lost [21,22].
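The conditions in Equations (7)-(9) translate directly into a voltage-adjustment loop: the sign of ∂I/∂V + I/V matches the sign of ∂P/∂V. The sketch below uses a hypothetical current-voltage curve (current collapsing steeply near a 21 V open circuit) and a finite-difference estimate of ∂I/∂V.

```python
def incremental_conductance(i_of_v, v=15.0, dv=0.05, tol=0.02, steps=400):
    """Minimal INC sketch: per Equations (7)-(9), adjust the voltage until the
    incremental conductance dI/dV matches -I/V (i.e., dP/dV = 0)."""
    for _ in range(steps):
        i = i_of_v(v)
        di_dv = (i_of_v(v + dv) - i) / dv   # finite-difference estimate of dI/dV
        c = di_dv + i / v                   # c has the same sign as dP/dV
        if c > tol:                         # left of the MPP: step right
            v += dv
        elif c < -tol:                      # right of the MPP: step left
            v -= dv
        else:                               # at the MPP within tolerance
            break
    return v

# Hypothetical module curve: current collapses steeply near 21 V open circuit.
i_of_v = lambda v: 8.0 * (1.0 - (v / 21.0) ** 9)
v_mpp = incremental_conductance(i_of_v)
```

Unlike P&O, the loop stops once the conductance condition is met within the tolerance, which is the steady-state advantage noted above.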

Electronics 2021, 10, x FOR PEER REVIEW 10 of 60

Fractional Open Circuit Voltage Technique
The FOCV technique employs the theory that there exists an approximately linear relationship between the maximum power point voltage (V_MPP) and the open-circuit voltage (V_OC) of the PV module, as given in Equation (10):

V_MPP = C_V × V_OC (10)

where C_V signifies the proportionality constant, which depends on the PV module's characteristics and climatic circumstances (i.e., temperature and solar insolation) [23]. The C_V value varies in the range [0.71, 0.78] [24]. The FOCV technique has some drawbacks: the load has to be isolated during the measurement interval, the linear relationship between V_MPP and V_OC degrades with time, and additional equipment is needed to measure V_OC at specific intervals [25].

Fractional Short Circuit Current Technique
The Fractional Short Circuit Current (FSCC) technique is likewise an indirect technique, practically identical to the FOCV method. The FSCC technique builds on the fact that there exists a straight-line relationship between the current at the maximum power point (I_MPP) and the short-circuit current (I_SC) of the PV module, as demonstrated in Equation (11) [26]:

I_MPP = C_I × I_SC (11)

where C_I denotes the invariant current factor, which generally varies between [0.78, 0.92] [27]. The block diagram of the FSCC strategy is represented in Figure 13. During the short-circuit condition, V_out is zero, resulting in zero output power; hence, the periodic measurement wastes energy.
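Both fractional methods reduce to a single multiplication once V_OC or I_SC has been measured. The sketch below uses mid-range constants (C_V = 0.76, C_I = 0.85) and hypothetical module values of 21 V and 8 A purely for illustration.

```python
def focv_estimate(v_oc, c_v=0.76):
    """Fractional open-circuit voltage, Equation (10): V_MPP = C_V * V_OC,
    with C_V typically in [0.71, 0.78]."""
    return c_v * v_oc

def fscc_estimate(i_sc, c_i=0.85):
    """Fractional short-circuit current, Equation (11): I_MPP = C_I * I_SC,
    with C_I typically in [0.78, 0.92]."""
    return c_i * i_sc

v_mpp = focv_estimate(21.0)   # hypothetical 21 V open-circuit module
i_mpp = fscc_estimate(8.0)    # hypothetical 8 A short-circuit current
```

The simplicity is the appeal: no search loop is needed, at the cost of the periodic open-circuit or short-circuit measurements described above.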


Note: Du = duty cycle; V_OC = open-circuit voltage; I_PV = PV array current; I_SC = short-circuit current.

Meta-Heuristic Techniques
The classification of the meta-heuristic algorithm reviewed in this article is illustrated in Figure 14.

Swarm Intelligence Methods
The following sub-sections explain each swarm intelligence technique in detail. Recent work related to these algorithms is tabulated in Table 2.

(a) Particle Swarm Optimization Method
The Particle Swarm Optimization (PSO) technique is among the most widely used random search methods and can maximize non-linear continuous functions. The PSO strategy was suggested by Eberhart and Kennedy in 1995 [41].

The functioning of the PSO algorithm is modeled after the natural behavior of fish schooling and bird flocking [42]. In this strategy, numerous collaborative birds are employed, and each bird signifies a particle.
Each particle has its own fitness value in the search space and is described by a position vector and a velocity vector. Each particle utilizes its fitness value to choose the direction and distance of its next step, and proposes a solution by exchanging the information obtained in its own search process to find the best overall solution. The primary flowchart of the PSO methodology is depicted in Figure 15. The PSO method is initialized with a group of random solutions (a position and velocity for each particle) in the search arena. After each iteration, the particles update their fitness values through an intellectual and social trade-off, which leads to changes in the individual best position (P_p,best) and the neighborhood's best position (P_g,best), where the neighborhood is characterized as the entire population or a subset of it.
Every particle remembers its individual best position in addition to the global best position. Thus, the swarm attempts to find the best solution by updating the positions and velocities after every cycle, so that the particles converge to the global maximum. The update equations [43] for the position (X) and velocity (V) of the nth particle at the kth cycle are given in Equations (12) and (13).
V_n(k + 1) = ωV_n(k) + s_1ρ_1(P_p,best,k − X_n(k)) + s_2ρ_2(P_g,best − X_n(k)) (12)
X_n(k + 1) = X_n(k) + V_n(k + 1), n = 1, 2, 3, . . . , N (13)

where k signifies the iteration count; X_n corresponds to the position of the nth particle; V_n indicates the velocity of the nth particle; ω represents the inertia weight; s_1, s_2 denote the cognitive and social acceleration coefficients, respectively; ρ_1, ρ_2 indicate random variables uniformly distributed between zero and one; P_p,best,k signifies the individual optimal position of the nth particle at the kth iteration; and P_g,best implies the swarm-optimum position. If the condition in Equation (14) is satisfied, then the personal best is updated in accordance with Equation (15):

Ft(X_n(k)) > Ft(P_p,best,k) (14)
P_p,best,k = X_n(k) (15)

where Ft denotes the target function to be maximized. Although the traditional PSO strategy can track the global maximum power point (GMPP) in all cases, its overall tracking speed is slower than the common INC method in certain cases [44]. Various variants of the PSO procedure can be obtained by combining it with other evolutionary techniques, and there is a research trend toward hybrid varieties of the PSO algorithm to improve its overall performance. Some widely used variants of the PSO algorithm are introduced in [44][45][46].
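Equations (12)-(15) can be exercised directly on a one-dimensional search space such as a converter duty-cycle range. The two-peak curve below is a hypothetical stand-in for a partially shaded P-V characteristic; the parameter values (inertia, acceleration coefficients, swarm size) are illustrative choices, not values prescribed by this article.

```python
import math
import random

def pso_maximize(ft, lo, hi, n=12, iters=80, w=0.6, s1=1.5, s2=1.5, seed=1):
    """Minimal PSO sketch following Equations (12)-(15): particles search the
    interval [lo, hi] (e.g., a converter duty-cycle range) to maximize Ft."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]        # positions X_n
    v = [0.0] * n                                      # velocities V_n
    p_best = x[:]                                      # personal bests P_p,best
    g_best = max(x, key=ft)                            # global best P_g,best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()        # rho_1, rho_2
            v[i] = (w * v[i] + s1 * r1 * (p_best[i] - x[i])
                    + s2 * r2 * (g_best - x[i]))       # Equation (12)
            x[i] = min(hi, max(lo, x[i] + v[i]))       # Equation (13), clamped
            if ft(x[i]) > ft(p_best[i]):               # Equations (14)-(15)
                p_best[i] = x[i]
                if ft(x[i]) > ft(g_best):
                    g_best = x[i]
    return g_best

# Hypothetical two-peak P-V curve under partial shading (duty cycle d):
curve = lambda d: 40 * math.exp(-300 * (d - 0.25) ** 2) + 55 * math.exp(-20 * (d - 0.65) ** 2)
d_mpp = pso_maximize(curve, 0.1, 0.9)
```

Because particles are scattered across the whole interval, the swarm can escape the local peak near d = 0.25, which is the behaviour that distinguishes PSO from the hill-climbing conventional methods above.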
Applications: The PSO strategy found its first application in the field of neural network training. Since then, it has been utilized in a wide assortment of fields such as power systems, telecommunications, system design, and control, and has been employed for min-max problems and various other optimization problems.

(b) Ant Colony Optimization Strategy
The Ant Colony Optimization (ACO) strategy is the most distinguished and effective successor of ant systems. Marco Dorigo first proposed the ant system in 1992 [47]. Later, further enhancements to ant systems were made by Gambardella in 1997 [48].
The ACO technique is inspired by the cooperative searching conduct of ants seeking the shortest route between their colony and a food source. The trail-laying and trail-following conduct of ants is the foundation of the ACO strategy. The flowchart of the ACO algorithm is depicted in Figure 16. At first, ants wander in random directions. When one or more ants come across the food source, they return to their colony (with food) while leaving behind pheromone trails. Ants use pheromone as a means of communication; it consists of specific chemical substances produced by living beings to convey messages or signals to other individuals of the same species. If other ants discover such a trail, they follow the route to the food source instead of wandering arbitrarily. When they return to the colony, they also deposit pheromone, reinforcing the current pheromone intensity. Pheromone evaporates with time, thereby reducing its strength. Eventually, the ants adapt and locate the shortest course to the food source.
The process begins by considering a single colony of (artificial) ants placed arbitrarily. Let there be N parameters indicating ants. Each ant in the population attracts other ants with its magnetic force, and depending on this attractive force, the ants relocate from the lower-strength zone to the higher-strength zone. After every iteration cycle, the attracting force is determined and, as per the results, the ants move towards the optimum solution.
Consider an optimization problem with N parameters (artificial ants) to be optimized, such that Z ≥ N, where Z symbolizes the number of initially generated random solutions stored in the solution archive. The solutions are then ranked according to their fitness values Ft(s_i), as demonstrated in Equation (16).
Likewise, new solutions are formed by sampling the Gaussian kernel function to ascertain the ants' positions, following Equation (17),
where Ĝ^i(x) denotes the Gaussian kernel for the ith dimension of the solution; w_k indicates the weight factor for the kth solution; g_k^i represents the kth sub-Gaussian function for the ith dimension; σ̃_k^i symbolizes the ith-dimensional standard deviation for the kth solution; and μ_k^i signifies the ith mean value for the kth solution.
Utilizing the Z initial solutions, the standard deviation, mean value, and weight factor can be computed following Equations (18)-(20), respectively:

Standard deviation: σ̃_k^i = ε Σ_{j=1}^{Z} |s_j^i − s_k^i| / (Z − 1) (18)

where ε symbolizes the convergence rate.

Mean value: μ_k^i = s_k^i (19)

Weight: each solution receives a rank-based weight w_k, computed as per Equation (20), where ϕ represents the best optimal operating solution.
The probability of selecting the kth Gaussian function can be evaluated using Equation (21):

p_k = w_k / Σ_{j=1}^{Z} w_j (21)
The sampling cycle is continued according to the number of parameters to be optimized. Y new solutions are created and added to the Z initial solutions. The Z + Y solutions are then ranked in the search area, and the best Z solutions are retained. In this way, the whole cycle is repeated for the required number of iterations [49].
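The archive-based cycle just described (rank the archive, sample Gaussians centred on archive members, keep the best Z) can be sketched for a one-dimensional problem as follows. The rank-weighting and spread parameters are illustrative assumptions for a continuous-domain ACO variant, not values fixed by this article.

```python
import math
import random

def aco_continuous(ft, lo, hi, z=10, ants=5, iters=60, q=0.2, eps=0.8, seed=3):
    """Sketch of continuous-domain ACO: keep an archive of Z solutions ranked
    by fitness, sample new solutions from Gaussians centred on archive
    members, and retain the best Z (maximization)."""
    rng = random.Random(seed)
    archive = sorted((rng.uniform(lo, hi) for _ in range(z)), key=ft, reverse=True)
    # Rank-based weights: better-ranked solutions are sampled more often.
    w = [math.exp(-(k ** 2) / (2 * (q * z) ** 2)) for k in range(z)]
    total = sum(w)
    probs = [wk / total for wk in w]
    for _ in range(iters):
        new = []
        for _ in range(ants):
            k = rng.choices(range(z), weights=probs)[0]   # pick a Gaussian kernel
            mu = archive[k]                                # mean = archive solution
            # Spread proportional to the archive's dispersion around mu:
            sigma = eps * sum(abs(s - mu) for s in archive) / (z - 1)
            new.append(min(hi, max(lo, rng.gauss(mu, sigma))))
        archive = sorted(archive + new, key=ft, reverse=True)[:z]
    return archive[0]

peak = lambda x: -(x - 2.0) ** 2        # hypothetical objective, maximum at x = 2
best = aco_continuous(peak, -5.0, 5.0)
```

The shrinking per-kernel standard deviation plays the role of pheromone evaporation: exploration narrows as the archive concentrates around the optimum.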
The ACO method effectively tracks the global MPP, unlike the traditional optimization techniques. The ACO algorithm has a higher convergence rate and requires fewer iterations to obtain the result. Hence, the ACO method is more advantageous than many other algorithms.
Applications: The ACO method is naturally appropriate for discrete-value optimization problems [50]. Furthermore, ACO can handle continuous-value optimization, although the design vector in a continuous-value problem must be transformed into small discrete steps. Moreover, like the other algorithms discussed, ACO relies on input-output mapping only; therefore, derivative information about the objective function is not essential.

(c) Artificial Bee Colony Technique
The Artificial Bee Colony (ABC) method is centered on the honey bees' intelligent foraging conduct. The ABC algorithm was proposed by Dervis Karaboga in 2005 to solve polynomial mathematical problems [51]. The ABC strategy is essentially a modern stochastic algorithm for global optimization. The fundamental flowchart of the ABC strategy is depicted in Figure 17. The honey bees live within the frame of a colony (i.e., in hives). The honey bees communicate with one another utilizing pheromones (chemical exchange) and the waggle dance. If a bee discovers a food source and brings some food back to the colony, it conveys the food source location through a waggle dance. The strength and duration of the waggle dance demonstrate the richness of the food source found. The waggling moves vary from one group of species to another.
The ABC strategy divides the artificial bees into the following three classes: employed bees, onlooker bees, and scouts. Half of the honey bee colony consists of employed bees and the other half of onlooker bees. The objective of the entire bee colony is to locate the optimum source of nectar, signifying the food. Initially, employed honey bees search for a food source, return to the hive, and share their data through the waggle dance moves. The onlooker honey bees attempt to discover a food source by watching the employed honey bees' waggle dance, whereas scout honey bees look for new food sources haphazardly. The communication of honey bees depends on the quality of the food source. The likelihood of employed bees sharing the data is directly proportional to the productivity of the food sources. In this way, artificial honey bees communicate and coordinate among themselves to obtain optimal solutions in a brief time [52,53].
To simplify the ABC solution process, mathematically and logically, the following assumptions have to be considered [54]: i. In the solution course, the sources reached by the bees in the food search correspond to the possible optimum values. In the ABC strategy, the nectar amount is computed; the nectar idea is utilized in light of the quality of the solution values gained from the sources. ii. The nectar (food) in each source must be taken by only one employed bee. In this situation, the total numbers of food sources and employed bees are considered equal.
The ABC algorithm employs the following steps to track the GMPP: Step-1 (Initialization Phase): Randomly build N s food sources in the search space; the larger the group is, the better the performance of the algorithm. All the employed bees are distributed, one to each unique food source, as per Equation (22), where each solution X i is an n-dimensional vector.
where n specifies the number of optimization parameters; X max,i and X min,i indicate the maximum and minimum values of the ith dimension, respectively.
Step-2 (Employed Bee Phase): The aim is to follow the food source position with the maximum nectar available (i.e., the GMPP) in the search area. Each employed bee advances to a new position (V i,j ) in the proximity space using the old position value (X i ), which is kept safely in memory, as per Equation (23).
where X k indicates a randomly selected food source other than X i , i.e., k should be different from 'i'; α i,j indicates a random number in the interval [−1, 1]. When the employed honey bee investigates another food source location, it uses the greedy selection strategy. This strategy involves a comparison of the amount of nectar present at the old and new positions; thus, the better solution is preserved.
Step-3 (Onlooker Bee Phase): According to the information (i.e., the nectar in the food sources) conveyed by the employed bees to the onlooker bees with the assistance of the waggle dance, the onlooker bees perform a probabilistic selection of the food sources (solutions). The probability of selection of each food source is computed using Equation (24), where f(x) represents the fitness value of the corresponding food source.
Step-4 (Scout Bee Phase): As per Equation (24), scout bees can discover new promising solutions around the selected food source. In any case, if the fitness value of a food source remains unimproved for a given number of steps, even after inspection of the whole search area by the employed and onlooker bees, then the corresponding employed bees become scout bees, and the scout bees look for new possible solutions utilizing Equation (22).
Step-5 (Conclusion Phase): The entire procedure ceases when there is no further improvement in the output power. However, when there is a fluctuation in the output power, the process reinitiates. The fluctuation can result from solar insolation changes; such changes in insolation are represented by the inequality condition in Equation (25).
The GMPP search will again start if the above-mentioned criterion in (25) is satisfied. Hence, the ABC strategy can indeed distinguish the genuine MPP regardless of partial shading conditions.
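Steps 1-4 above can be sketched compactly. The following minimal Python example is an illustrative one-dimensional sketch (the function names and the toy power curve are assumptions, not from the source), with an employed-bee local search in the spirit of Equation (23), onlooker probabilistic selection in the spirit of Equation (24), and scout re-initialization in the spirit of Equation (22):

```python
import random

def abc_mppt(fitness, lo, hi, n_sources=6, limit=10, iters=60, seed=2):
    """Minimal ABC sketch of Steps 1-4, in one dimension, maximizing."""
    rng = random.Random(seed)
    X = [rng.uniform(lo, hi) for _ in range(n_sources)]   # Step 1: food sources
    trials = [0]*n_sources

    def explore(i):
        # local move around source i relative to a random other source
        k = rng.choice([j for j in range(n_sources) if j != i])
        v = min(hi, max(lo, X[i] + rng.uniform(-1, 1)*(X[i] - X[k])))
        if fitness(v) > fitness(X[i]):                    # greedy selection
            X[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                        # Step 2: employed bees
            explore(i)
        f = [fitness(x) for x in X]
        w = [fi - min(f) + 1e-12 for fi in f]             # selection weights
        for _ in range(n_sources):                        # Step 3: onlooker bees
            explore(rng.choices(range(n_sources), weights=w)[0])
        for i in range(n_sources):                        # Step 4: scout bees
            if trials[i] > limit:
                X[i], trials[i] = rng.uniform(lo, hi), 0
    return max(X, key=fitness)

# toy unimodal power curve with its MPP at v = 0.7
best = abc_mppt(lambda v: -(v - 0.7)**2, 0.0, 1.0)
```

The Step-5 re-start condition of Equation (25) would simply wrap this routine in a loop that re-runs it whenever the measured power deviates beyond the insolation-change threshold.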

(d) Grey Wolf Optimization Technique
The Grey Wolf Optimization (GWO) technique was proposed by Mirjalili et al. in the year 2014. The GWO strategy is inspired by grey wolves' social hierarchy and hunting conduct in nature [55]. Generally, grey wolves prefer to live in a pack. The average grey wolf pack size is in the range of 5 to 12. Based on the social dominance attribute, the grey wolves are categorized into four types, as per the hierarchical sequence shown in Figure 18. At the top, alpha (α) wolves are the pioneers and are hence considered the fittest solution for a given optimization issue. Beta (β) wolves come after the α wolves and help the α wolves in their duties; therefore, β wolves can substitute α wolves if they die. The second-last category consists of the delta (δ) wolves, which constitute the hunters, keepers, and explorers of the pack. Hence, β and δ wolves stand for the second- and third-best solutions, respectively. The last category is the omega (ω) wolves. ω wolves are the young members of the pack, and hence represent the remaining solutions [56]. The dominance of the wolves decreases with rank from top to bottom in the hierarchical sequence. The primary flowchart of the GWO strategy is depicted in Figure 19. Other than the social order of wolves, collective hunting is also a significant part of the social conduct of grey wolves. Based on this, the mathematical model of the GWO algorithm considers the following measures [56]: social hierarchy, tracking and encircling, hunting, searching, and attacking the prey, as follows: i.
Social Hierarchy: To model the hierarchical system of wolves in the GWO technique, assume the fittest solution is the alpha (α), followed by the beta (β) and delta (δ) as the second- and third-best solutions, respectively. The remaining candidate solutions are assumed to be omega (ω). In this way, the hunting process is guided by the alpha, beta, and delta wolves, while the omega wolves follow them.
ii. Tracking and Encircling the Prey: During the hunting process, grey wolves usually encircle the prey. The mathematical functions stated in Equations (26) and (27) indicate the encircling process. Equation (26) estimates the distance vector →D of a wolf from the prey, where 'i' indicates the current iteration; →X P signifies the prey position vector; →X GW denotes the grey wolf (GW) position vector; and →A and →C are the coefficient vectors computed utilizing Equations (28) and (29), respectively.
iii. Hunting: Based on the random vectors ( → r 1 and → r 2 ), a wolf can reach any position between its current location and the prey.
Initially, the three finest solutions (i.e., the locations of the alpha, beta, and delta wolves) are stored. Based on this best-solution knowledge, the other searching wolves update their positions. Therefore, a grey wolf can upgrade its location in any random direction by employing Equation (30).
iv. Attack the Prey: The parameter 'a' linearly decreases from 2 to 0 over the cycles. Therefore, when the |A| < 1 condition is satisfied, the prey is assumed to halt at a fixed position, following which the grey wolves attack the prey. v.
Search for the Prey: When the |A| > 1 condition is reached, the grey wolves are forced to search for the target. This process depicts the exploration method, where the wolves move away from each other in search of prey, and later move towards each other to attack the prey.
The implementation of the GWO strategy for MPPT starts by assuming the initial positions of the wolves (X) and their best location (P best ) [57]. In this optimization process, 'i' iterations are employed to determine the best position of the wolf. Hence, in the ith iteration, there are N position values for the wolf pack, i.e., X 1,i , X 2,i , . . . , X k,i , . . . , X N,i . Therefore, this technique focuses on determining the next iteration value of the location for the wolves (particles). In this way, the wolves get closer to their objective, i.e., the maximum power.
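The encircling, hunting, and attacking measures above can be sketched as follows. This minimal Python example (names and the toy one-dimensional fitness are illustrative assumptions) lets the alpha, beta, and delta wolves guide the pack in the spirit of Equations (26)-(30), with 'a' decreasing linearly from 2 to 0:

```python
import random

def gwo(fitness, lo, hi, n_wolves=8, iters=40, seed=3):
    """Minimal 1-D GWO sketch: the three fittest wolves guide the pack;
    'a' decays from 2 to 0, moving the pack from searching (|A| > 1)
    to attacking (|A| < 1). Maximizing."""
    rng = random.Random(seed)
    X = [rng.uniform(lo, hi) for _ in range(n_wolves)]
    for i in range(iters):
        a = 2 - 2*i/iters                                 # a: 2 -> 0
        ranked = sorted(X, key=fitness, reverse=True)
        leaders = ranked[:3]                              # alpha, beta, delta
        for n in range(n_wolves):
            cand = []
            for leader in leaders:
                r1, r2 = rng.random(), rng.random()
                A, C = 2*a*r1 - a, 2*r2                   # coefficient vectors
                D = abs(C*leader - X[n])                  # distance to leader
                cand.append(leader - A*D)                 # encircling move
            X[n] = min(hi, max(lo, sum(cand)/3))          # average of the three
    return max(X, key=fitness)

# toy unimodal power curve with its MPP at v = 0.4
best = gwo(lambda v: -(v - 0.4)**2, 0.0, 1.0)
```

As 'a' shrinks, every wolf's update collapses onto the average of the three leaders, which is what drives the convergence toward the maximum-power position.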

(e) Emperor Penguin Optimization MPPT Technique
Emperor penguins gather during the Antarctic winter for their survival. Thousands of emperor penguins gather in huge colonies during breeding and spend their lives on open ice throughout the winter season. Each emperor penguin has neighbors who are selected arbitrarily in the herd [58]. The Emperor Penguin Optimization (EPO) technique is motivated by the social gathering conduct of emperor penguins. They position themselves on a polygon-shaped grid periphery during the gathering. The wind that flows around the huddle is used to determine the huddle boundary line around the polygon. Since the emperor penguins' habitat is on the Antarctic continent, the low temperature throughout the winter makes their survival difficult. They flock in a huddle to maintain their body temperature at the appropriate limit necessary for their survival. Gathering behavior is shown by emperor penguins only. It depends on many attributes, for example, the distance, the temperature, and the effective movers throughout the herd. Maximizing the ambient temperature in the huddle is the crucial motive of the emperor penguins' gathering [59]. The basic flowchart of the EPO algorithm is exhibited in Figure 20.
The following steps describe the huddling conduct of emperor penguins: i. Identify and set the huddle boundary of the emperor penguins; ii. Measurement of the temperature profile of the herd; iii. Calculation of the distance between emperor penguins; the distance is responsible for exploration and exploitation; iv. The effective mover (the best optimal solution) is procured; v. Reposition of the effective mover.
A temperature profile with different locations guides exploration and exploitation for the emperor penguins. The temperature (TM) relies on the radius of the herd polygon p as follows. Moreover, TM 0 , a temperature parameter, identifies the exploration and the exploitation, as stated in Equation (32).
where TM 0 indicates the temperature profile of the herd; MX symbolizes the maximum total count of iterations; PX denotes the present iteration. Subsequently, for the herd boundary identification, the distance d between each emperor penguin and the best optimal solution is calculated as follows, where SF(C) describes the social forces of the emperor penguins; B(x) represents the current position vector of the emperor penguin; C and E are anti-collision factors between neighbors; and B ep (x) corresponds to the position vector of the best optimal solution discovered.
C and E are accountable for tuning the distance (d) and can be computed using Equations (34) and (35).
where m stands for the movement parameter that upholds a gap between search agents for collision evasion, whose value is taken as 2; PG(te) indicates the polygon-grid accuracy, evaluated as the difference between emperor penguins; rand2 and rand3 are random numbers in the interval (0, 1). SF(C) directs the way towards the best optimal hunt agent, computed using Equation (36), and the position is updated utilizing Equation (37).
where T and w indicate the restraint parameters for better exploration and exploitation; T lies between 2 and 3, and w varies from 1.5 to 2; B(x + 1) signifies the newly modified location of the emperor penguin. The steps involved in the EPO execution are as follows: Step 1: Initialize rand2, rand3, p, TM, TM 0 , C, E, SF(C), m, T, and w.
Step 2: Develop initial values for essential parameters like B(x). Then, estimate their equivalent fitness values.
Step 3: Set the initial best optimal solution among the initially calculated fitness values.
Step 4: Begin the first iteration by computing the new values of TM 0 , SF(C), PG(te), and C.
Step 5: Compute the value of d. Then, operate it in the best-solution function B ep (x) to evaluate the newly updated solution B(x + 1).
Step 6: Evaluate the new best optimal solution and save it in B ep (x). Furthermore, save the corresponding best fitness.
Step 7: Check whether the maximum iteration count has been reached; if not, repeat Steps 4-6 until the maximum number of iterations is achieved.
Step 8: Inspect the fitness array to establish the optimum fitness in it. Later, present it as the corresponding result.
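Steps 1-8 can be sketched as below. This is a deliberately simplified Python sketch: the temperature and social-force profiles of Equations (31)-(37) are reduced here to simple decaying terms (an assumption for illustration, not the exact formulas), and the toy one-dimensional fitness stands in for the PV power measurement:

```python
import random

def epo(fitness, lo, hi, n=8, max_iter=40, seed=4):
    """Simplified EPO sketch of Steps 1-8: each penguin repositions toward
    the best optimal solution while a temperature term cools, shifting the
    huddle from exploration to exploitation. Maximizing."""
    rng = random.Random(seed)
    B = [rng.uniform(lo, hi) for _ in range(n)]      # Step 2: initial B(x)
    best = max(B, key=fitness)                       # Step 3: initial best
    for px in range(max_iter):                       # Steps 4-7
        tm = 1.0 - px/max_iter                       # cooling temperature term
        for i in range(n):
            C = 2*tm*rng.random() - tm               # anti-collision factor C
            E = rng.random()                         # anti-collision factor E
            sf = tm*rng.random()                     # social force (simplified)
            d = abs(sf*best - E*B[i])                # distance to the best mover
            B[i] = min(hi, max(lo, best - C*d))      # Step 5: new position B(x+1)
        cand = max(B, key=fitness)
        if fitness(cand) > fitness(best):            # Step 6: keep the best found
            best = cand
    return best                                      # Step 8: report the optimum

# toy unimodal power curve with its MPP at v = 0.55
best = epo(lambda v: -(v - 0.55)**2, 0.0, 1.0)
```

The huddle contracts around the effective mover as the temperature term decays, mirroring how the penguins' repositioning concentrates the search near the GMPP.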

(f) Salp Swarm Algorithm
The salp swarm algorithm (SSA) is a recent bio-inspired meta-heuristic optimization algorithm. The SSA strategy was suggested by Mirjalili et al. in 2017. The SSA imitates the swarm behavior of salps, as depicted in Figure 21. Salps are gelatinous zooplankton with barrel-shaped bodies. The salps' habitat is the deep warm ocean. They move by contracting and thereby pumping water through their jelly-like bodies. Salps move by forming chains, in which the leaders show the way to the whole population while the followers follow them. The forefront salps in the chain are known as the leaders, and the remaining salps constitute the followers [60]. The flowchart of the SSA technique is illustrated in Figure 22.
Candidate Solutions for the Leaders: The candidate solutions for the leaders are updated by employing Equation (38).
where L m,t new is the updated candidate solution for L m,t ; P t is the food source position; I indicates the maximum count of iterations; and 'i' denotes the current iteration. Candidate Solutions for the Followers: The candidate solutions of the leaders assist in updating the candidate solutions for the followers, as given in Equation (40), where L m,t new denotes the updated candidate solution for the follower L m,t . In case the candidate solutions of the whole chain violate the minimum and maximum values of the decision variables even after modification as suggested in Equations (38) and (40), the candidate solutions need to be reinitialized at the respective minimum and maximum values of the decision variables [61].
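The leader and follower updates can be sketched as follows. This minimal Python example (function names and the toy one-dimensional fitness are illustrative assumptions) lets the leader explore around the food source in the spirit of Equation (38), while each follower averages its own and its predecessor's position in the spirit of Equation (40):

```python
import math
import random

def ssa(fitness, lo, hi, n_salps=8, iters=50, seed=5):
    """Minimal 1-D SSA sketch: leader explores around the best solution
    found so far (the food source); followers chain behind it. Maximizing."""
    rng = random.Random(seed)
    L = [rng.uniform(lo, hi) for _ in range(n_salps)]
    food = max(L, key=fitness)                   # food source = best so far
    for i in range(1, iters + 1):
        c1 = 2*math.exp(-(4*i/iters)**2)         # exploration coefficient
        c2, c3 = rng.random(), rng.random()
        step = c1*((hi - lo)*c2 + lo)
        L[0] = food + step if c3 >= 0.5 else food - step   # leader update
        L[0] = min(hi, max(lo, L[0]))
        for m in range(1, n_salps):              # follower chain update
            L[m] = (L[m] + L[m-1])/2
        cand = max(L, key=fitness)
        if fitness(cand) > fitness(food):
            food = cand
    return food

# toy unimodal power curve with its MPP at v = 0.3
best = ssa(lambda v: -(v - 0.3)**2, 0.0, 1.0)
```

The exponentially decaying coefficient c1 is what balances exploration against exploitation: the leader roams widely in early iterations and hugs the food source later, dragging the follower chain with it.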
(g) Jaya Algorithm
The Jaya Algorithm (JA) was proposed by R. Venkata Rao in 2015 [62]. The JA strategy is a simple global optimization technique that assists in solving both constrained and unconstrained problems. As there is no learning phase involved in the JA technique, it can be implemented as a parameter-free system [63].
The JA helps in solving a specific problem by chasing the best solution while discarding the worst solution. Furthermore, this algorithm requires few constraints like population size, number of design variables, and the total count of generations. The flowchart of JA is demonstrated in Figure 23.
The working rule of the JA strategy involves the following steps: Step_1: Initialization of the population size, the total count of design variables, and the termination condition.
Step_2: Repeat Step_3 to Step_5 until the termination condition is fulfilled.
Step_3: Evaluate the solutions for the objective function.
The prime aim of the optimization problem is to minimize or maximize the objective function (here obj_f(y)). At the ith iteration, N indicates the total count of candidate solutions (i.e., u = 1, 2, 3, . . . , N) and M represents the total count of design variables (i.e., v = 1, 2, 3, . . . , M). Moreover, for the ith iteration count, obj_f(y) best and obj_f(y) worst indicate the best and worst solutions among the individuals. Let Y v,best,i and Y v,worst,i stand for the best and worst solutions for the vth design variable at the ith iteration, respectively. The random numbers r 1,v,i and r 2,v,i lie in the range [0, 1]. The random numbers aid in the movement of the candidates, directing them toward the best solution through the term r 1,v,i (Y v,best,i − |Y u,v,i |) and away from the worst solution through the term −r 2,v,i (Y v,worst,i − |Y u,v,i |), respectively.
Step_4: Compute the modified solution utilizing Equation (41).
Step_5: Update the previous solution if the modified solution gives a better objective value than Y u,v,i ; otherwise, the previous solution is retained.
Step_6: Display the final best solution.
The Jaya algorithm is better than the conventional techniques regarding the efficiency and tracking-time parameters during PSC conditions.
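The JA working rule above can be sketched in a few lines. The following minimal Python example (names and the toy duty-ratio objective are illustrative assumptions) applies the toward-best/away-from-worst modification rule in the spirit of Equation (41) and keeps a move only when it improves the objective, as in Step_5:

```python
import random

def jaya(obj, lo, hi, pop=8, iters=60, seed=6):
    """Minimal 1-D Jaya sketch: parameter-free apart from population size
    and iteration count. Maximizing."""
    rng = random.Random(seed)
    Y = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(iters):
        best, worst = max(Y, key=obj), min(Y, key=obj)
        for u in range(pop):
            r1, r2 = rng.random(), rng.random()
            # move toward the best solution and away from the worst
            y_new = Y[u] + r1*(best - abs(Y[u])) - r2*(worst - abs(Y[u]))
            y_new = min(hi, max(lo, y_new))
            if obj(y_new) > obj(Y[u]):           # Step_5: keep only if better
                Y[u] = y_new
    return max(Y, key=obj)

# toy duty-ratio search with the optimum at d = 0.45
best = jaya(lambda d: -(d - 0.45)**2, 0.0, 1.0)
```

Because the only tuned quantities are the population size and the iteration budget, this sketch also illustrates why the JA is described as parameter-free.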
In the initial few tracking stages, the population size of the ants is kept constant to promote an effective food-searching process. Hence, it prevents the possibility of trapping in local minima.
• Locations far from the immediate optimal solution (i.e., inefficient ones) are removed in the following search process. Such a reduction process reduces the computational time.
• The enhanced scheme provides a low tracking time and small power ripples. The highest achievable efficiency of the algorithm is 99%.

Bio-Inspired Algorithm
The following sub-section thoroughly discusses the bio-inspired MPPT techniques. Furthermore, the recent research work related to these methods in the MPPT domain is encapsulated in Table 3.

(a) Cuckoo Search MPPT Technique
The Cuckoo Search (CS) technique was suggested by Xin-She Yang and Suash Deb in 2009 [74]. The cuckoo species' parasitic impersonation strategy (brood parasitism) [75] is the inspiration behind the CS strategy. Brood parasitism is the conduct of a few cuckoo birds like Tapera. Generally, brood parasitism is classified as intra-specific, cooperative, and nest takeover [76]. Tapera is a clever bird that mimics the eggs of the host birds; the shape and shading tricks are part of this host-deception strategy, hence prompting next-generation survival. The CS technique is an efficient meta-heuristic tool for optimization purposes. The flowchart of the CS strategy is illustrated in Figure 24.
The cuckoo bird comes under the category of parasitic living beings. The cuckoo lays its eggs inside other birds' nests as opposed to constructing its own nest. Initially, the female cuckoo birds fly haphazardly to look for a host species' nest with egg attributes comparable to their own. Afterward, the cuckoo picks the best nest so that its eggs have the best opportunity to hatch and hence produce another generation of cuckoos. The cuckoo bird makes a few attempts to improve the hatching chance by deliberately laying its eggs in a good position. Sometimes, the cuckoo may throw the host species' eggs from the nest. Host birds can easily be tricked into accepting the unfamiliar eggs. However, if the host bird finds out about the alien eggs, then the unloading of the eggs outside the nest is certain. In the worst scenario, the host bird may abandon the nest, destroying the alien eggs.
For implementing the CS strategy, the following three idealized standards are utilized.
i. Each cuckoo bird lays a single egg at a time and puts it in a haphazardly selected host nest; ii. The nest with the best high-quality eggs (i.e., the optimal solutions) will carry forward the next generation of cuckoos; iii. The total count of the accessible host nests is fixed in the search space. The likelihood that the host bird will find the foreign egg is denoted by P f , which lies in the range (0 ≤ P f ≤ 1).
In the CS strategy implementation, cuckoo birds symbolize the particles relegated to discover the solution. The cuckoo bird's eggs represent the solution for the present iteration concerning an optimization issue.
The search for the nest is equivalent to the search for food. The Lévy flight is possibly the most widely recognized model for choosing the step lengths and directions of such walks [77]. The Lévy flight is like a random walk wherein the step lengths follow a heavy-tailed probability distribution. The CS algorithm utilizes the power law to draw the step length from the Lévy distribution, as per [78].
where L denotes the length of the step, and γ represents the power-law index. Hence, the Lévy(γ) distribution has an infinite variance.

The Lévy flight is characterized by utilizing Equation (43) to generate new solutions,
where n stands for the nth particle; 'i' designates the iteration cycle; a (a > 0) signifies the step size related to the optimization problem; and the ⊕ operator represents entry-wise multiplication for the multidimensional problem. For MPPT, the Lévy flight can be modified as in Equation (44), where V i n indicates the voltage of the nth particle at the ith iteration; K signifies the Lévy multiplication coefficient; and a and b are calculated from the standard distribution curve, as depicted in Equation (45).
where Γ stands for the gamma function. Lévy flights are performed by all the particles in every iteration cycle until they discover the GMPP. If all particles converge to a specific solution, then the tracking process halts, as the best solution has been attained.
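The Lévy-flight search can be sketched as below. This minimal Python example (function names and the toy power curve are illustrative assumptions) draws Mantegna-style Lévy steps from two normal samples in the spirit of Equation (45), applies the voltage update in the spirit of Equation (44), and abandons a fraction P f of the worst nests to mimic discovery by the host bird:

```python
import math
import random

def levy_step(rng, gamma=1.5):
    """Mantegna-style Levy step of index gamma, built from two normally
    distributed samples a and b."""
    sigma = (math.gamma(1 + gamma)*math.sin(math.pi*gamma/2) /
             (math.gamma((1 + gamma)/2)*gamma*2**((gamma - 1)/2)))**(1/gamma)
    a, b = rng.gauss(0, sigma), rng.gauss(0, 1)
    return a/abs(b)**(1/gamma)

def cuckoo_mppt(power, lo, hi, n=5, iters=60, pf=0.25, K=0.1, seed=7):
    """Minimal 1-D CS sketch: nest voltages take Levy flights scaled by K;
    the worst fraction pf of nests is rebuilt at random. Maximizing."""
    rng = random.Random(seed)
    V = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        best = max(V, key=power)
        for i in range(n):
            v_new = V[i] + K*levy_step(rng)*(best - V[i])  # Levy flight
            v_new = min(hi, max(lo, v_new))
            if power(v_new) > power(V[i]):                 # keep the better egg
                V[i] = v_new
        V.sort(key=power)                                  # worst nests first
        for i in range(int(pf*n)):                         # abandon fraction pf
            V[i] = rng.uniform(lo, hi)
    return max(V, key=power)

# toy unimodal power curve with its MPP at v = 0.65
best = cuckoo_mppt(lambda v: -(v - 0.65)**2, 0.0, 1.0)
```

The heavy-tailed Lévy steps give occasional long jumps that help the nests escape local peaks, while the abandonment step keeps injecting fresh random candidates.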

(b) Flying Squirrel Search Optimization Strategy
The flying squirrel search optimization (FSSO) was proposed by Nagendra Singh, Krishna Kumar Gupta, and their colleagues in 2020 [79]. The FSSO strategy imitates the powerful search strategy of southern flying squirrels. Additionally, the FSSO emulates the squirrels' way of gliding through the air, exhibited in Figure 25. The probable result vector and the equivalent fitness are referred to as the stance of a flying squirrel (FS) and are characteristic of the food origin, respectively. Based on the fitness worth, the stances are grouped into three districts addressing sets of the best solution (BS) (hickory nut tree), close-to-best solution (CBS) (acorn nut tree), and unplanned solution (US) (ordinary tree).
The FSSO strategy exploits the collaboration feature of the flying squirrels. Moreover, the stances of the FSs are updated irrespective of the hunter's presence. The aforementioned cooperative feature among the FSs is the reason for the convergence attribute. This strategy is guided by the following steps: Step 1: The CBS moves towards the course chosen by the globally leading solution; Step 2: Part of the USs progress towards the Optimum Solution (OS); Step 3: The surplus USs progress towards the CBS.
The following assumptions are taken into consideration while implementing the FSSO strategy for MPPT: The aim (food point of supply) resembles the PV power yield (P PV ). The choice variable, i.e., the stance, is considered the duty ratio (D) of the converter employed in the MPPT technique. The FSSO strategy is appropriately custom-fitted by wiping out the presence of hunters to lessen the time to reach the GMPP.
The FSSO strategy flowchart is exhibited in Figure 26. The FSSO strategy implementation considers the following step measures: Starting: At first, the N FSs are situated at various positions. These positions in the solution arena are the precise estimations of the converter's duty ratio, per Equation (47).
Here, 'i' indicates the iteration count; D max and D min depict the maximum and minimum values of the converter's duty ratio, usually taken as 10% and 90% of the allowable duty ratio for the boost application. The duty ratio (D i ) can vary between (0, 0.5). Fitness Evaluation: In this step, the converter is progressively operated at every duty ratio (i.e., the stance of every FS). For every duty ratio (D), the characteristic of a food source represents the immediate PV power yield P PV (D). This step is repeated for all duty cycles, while the goal fitness function (F) for the MPPT is characterized as follows. Declaration and Categorization: The duty ratio corresponding to the maximum PV power yield is pronounced the hickory tree. Acorn trees are viewed as the next-best stances of the FSs. The left-behind FSs reside on the ordinary trees.
Here, X at and X ht address the squirrel stances at the acorn and hickory trees, respectively; 'i' symbolizes the current cycle count, and i m denotes the maximum number of cycles permitted.
The Lévy distribution is employed for a better investigation of the hunt arena; thus, the duty ratios of the FSs on the ordinary trees (OTFS) are moved accordingly.
Here, X ot addresses the squirrel stance at an ordinary tree, and d indicates the step distance obtained utilizing the Lévy distribution. The step coefficients take the values 1.5 and 1.25, respectively, while y and z are decided from the standard distribution curve, as per Equations (59) and (60).
ii. Groove Contemporized: The hickory tree squirrels abide in their stance, while the acorn tree squirrels navigate to approach the hickory tree. The erratically chosen squirrels (ETFS) from the ordinary trees navigate toward the hickory tree, while the leftover (NTFS − ETFS) squirrels are pushed toward the acorn tree. The corresponding duty ratios are refreshed as per the following conditions. Here, H c and h d address the hovering constant and the hovering distance, respectively. The value of H c is held at 1.90 after thorough simulations. The hovering distance h d is estimated as follows: where z h signifies the zenith loss after hovering, whose value is estimated to be 8 m; c m stands for the measuring factor, whose value is chosen as 18 to keep h d somewhere in the range of 0.5 to 1.11, so as to restrict the perturbation in Equations (52)-(54). F h and F r represent the forces of the haul (drag) and the rise (lift), respectively, which are determined as follows: where ρ a is the air density, whose value is taken as 1.204 kg/m 3 ; v represents the velocity of a squirrel, taken as 5.25 m/s; A is the surface area of the body, chosen as 154 cm 2 ; C h is the haul coefficient, chosen as 0.6; and C r indicates the rise coefficient, whose value is chosen arbitrarily between [0.675, 1.5] [80].
iii. Convergence Resolution: If the adjustment in the stance of every FS becomes smaller than a threshold, or if the maximum iteration count has been reached, then the algorithm is ended and yields the duty cycle at which the converter works while following the GMPP. iv. Re-Initialization: As the MPPT process is a time-varying problem, frequently changing climate conditions affect the fitness value. In the circumstances mentioned above, the FS stances (i.e., the duty ratios) will be reinitialized to look for the new GMPP once more. The duty ratio is reinitialized according to the limiting condition in Equation (57), i.e., after detecting a change in insolation.
Owls are nocturnal, yet they are skilled predators. They have an auditory system with distinct anatomical features that lets them hear a sound in one ear before the other. Hence, they can easily detect a quarry's location in the search arena. Furthermore, time and intensity differences in sound-wave arrival play a crucial role in estimating the distance to their prey, as illustrated in Figure 27 [81]. The owl search algorithm (OSA) simulates the owl's hunting method, i.e., relying on hearing in the dark rather than sight to locate the prey. The OSA technique starts with a random arrangement of owls in a search space. This arrangement represents a random set of solutions in a p-dimensional search arena, where p indicates the number of variables to be resolved. A matrix (d × p) stores the computed results, as shown in Equation (58), where W n,j signifies the j th initial position (variable) of the n th owl, which is determined using a uniform distribution, as given in Equation (59); W U and W L symbolize the upper and lower bounds for the n th owl (W n ), respectively, and R P (0, 1) signifies a random number in [0, 1]. After the application of the random solution, parameter evaluation helps in tracking the optimum solution. Thus, the evaluated parameter aids in enhancing the result in the next cycle. The parameter updates the position of a particular owl for a specific fitness function T, as represented in Equation (60).
Here k, f stand for the maximum and minimum values of the fitness function output saved up to the current iteration. The distance between the current solution and the optimum solution for each owl is computed using Equation (61).
Here M signifies the best location of the prey (i.e., the optimum solution). Therefore, only the fittest owl can reach it. Another parameter is calculated by utilizing Equation (62).
Lastly, for the next iteration, the position update of the owls is done by employing Equation (63).
W n i+1 = W n i + µ × D n × |βM − W n i |; p oa < 0.5
W n i+1 = W n i − µ × D n × |βM − W n i |; p oa ≥ 0.5 (63)
where p oa stands for the probability of quarry movement (optimum solution); β indicates an arbitrary number uniformly distributed in the range [0, 0.5]; µ symbolizes a function decreasing linearly from 1.9 to 0; p oa is set to zero since the MPP does not change each cycle (∆n = 0.0001 s). The constants β and µ are chosen to be 0.5 and 1.5, respectively [82].
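The owl position update of Equation (63) can be sketched as a single step. This is a minimal sketch: the absolute-value form of the step term is assumed from the standard OSA formulation.

```python
def osa_update(w, d_n, prey, mu, p_oa, beta):
    """Owl position update per Eq. (63) (sketch).

    w    : current owl position (duty ratio)
    d_n  : distance to the optimum solution, Eq. (61)
    prey : best prey location M found so far
    mu   : factor decreasing linearly from 1.9 to 0
    p_oa : probability of quarry movement
    beta : uniform random number in [0, 0.5]
    """
    step = mu * d_n * abs(beta * prey - w)
    # move toward the prey when p_oa < 0.5, away otherwise
    return w + step if p_oa < 0.5 else w - step
```

In the MPPT setting, w is a duty cycle, so successive calls steer the converter toward the GMPP.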

(d) Firefly Algorithm (FFA)
Fireflies are also nocturnal and use a specific light pattern to communicate with each other. Each species produces light of a characteristic color. Attraction among the fireflies governs the search pattern of the FFA. The FFA was developed by Xin-She Yang while working in Cambridge in 2008. Attractiveness is equivalent to brightness: a dim firefly moves toward a brighter firefly, whereas if two fireflies are equally bright, a firefly moves randomly [83]. The typical flowchart of the firefly algorithm is demonstrated in Figure 28.

The FFA strategy uses flickering primarily for two purposes:
• To entice other fireflies.
• To lure prey.
The shine of the fireflies, together with the value of the objective function, governs the attractiveness of fireflies. The attraction value µ depends on the estimation of other fireflies and differs in accordance with the distance (d ij ) between firefly i and firefly j. The attraction µ can be obtained by employing Equation (64), where d indicates the distance between the two fireflies; µ 0 symbolizes the attraction when d = 0 (the initial appeal); β lies in the range [0.1, 10]; the distance between the two fireflies i and j at the positions r i and r j can be computed by utilizing Equation (65).
where r i,m and r j,m denote the m-components of the spatial coordinates of the i th firefly and the j th firefly; n denotes the dimension number. Since the MPPT problem is a one-dimensional case, n = 1 is utilized. Brighter fireflies entice the dull fireflies, which governs the movement of the dull fireflies as per Equation (66).
Here £ indicates a random parameter in the range [0, 1]; rand signifies a random disturbance value between 0 and 1. Generally, a large £ leads to global search, whereas a small £ leads to local search [84].
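The attraction and movement rules of Equations (64)-(66) can be sketched in one dimension as follows. This is a sketch under the standard FFA assumptions: the exponential decay of attractiveness and the additive random disturbance term are taken from the usual firefly formulation.

```python
import math
import random

def attractiveness(d, mu0=1.0, beta=1.0):
    """Eq. (64): appeal decays with the squared distance."""
    return mu0 * math.exp(-beta * d * d)

def move_firefly(x_dim, x_bright, mu0=1.0, beta=1.0, eps=0.2):
    """Move a dim firefly toward a brighter one (1-D sketch)."""
    d = abs(x_bright - x_dim)               # Eq. (65) with n = 1
    att = attractiveness(d, mu0, beta)
    jitter = eps * (random.random() - 0.5)  # random disturbance term
    return x_dim + att * (x_bright - x_dim) + jitter
```

For MPPT, the positions are duty cycles and brightness is the measured PV power at each duty cycle.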

Artificial Intelligence (AI) Methods
The AI techniques reviewed are grouped as depicted in Figure 29. A comprehensive review of the AI algorithms is addressed in the following sub-sections, while the latest work related to these strategies is summarized in tabular form in Table 4.


(a) Fuzzy Logic Controller Strategy
Fuzzy Logic Controller (FLC) is a control system based on fuzzy logic, which converts analog inputs into continuous values between 0 and 1. For each sample, the FLC strategy analyzes the PV output power. If the power change ratio is greater than zero, the algorithm adjusts the duty cycle of the Pulse Width Modulation (PWM) to increase the voltage. This enhancement in voltage drives the power ratio toward zero (∆P/∆V = 0). Whereas, when the change is less than zero, the algorithm modifies the duty cycle of the PWM to reduce the voltage until the power reaches the pinnacle. The error and the change in error are the two inputs of the FLC algorithm. The PWM signal controls the boost converter and serves as the output of the strategy. The two input variables, the FLC error (E) and the error change (∂E), during time samples (k i ), can be computed using Equations (67) and (68), respectively.

Here, P pv (k) and V pv (k) symbolize the power and the voltage of the PV panel, respectively.
The FLC strategy consists of three steps: fuzzification, fuzzy rules, and de-fuzzification. In the first step, the input variables transform into linguistic variables by implementing various defined membership functions. In the next step, these variables are manipulated based on the rules "if-then" by applying the desired behavior of the system. Lastly, these variables are renewed to numerical variables. The membership functions are significant in affecting the speed and accuracy of FLC [90].
FLC effectively tracks the maximum power point under different ambient conditions. The FLC strategy shows less oscillation around the MPP. Moreover, its response is faster in comparison with the conventional methods [91]. Furthermore, it has a higher tracking efficiency in contrast to the traditional MPPT methods [92]. The block diagram implementation of the FLC strategy is depicted in Figure 30.
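The two controller inputs can be sketched as below. This is a minimal sketch assuming the usual FLC-MPPT form of Equations (67) and (68): the error is the slope of the P-V curve and the error change is its difference between two samples.

```python
def flc_inputs(p_k, v_k, p_prev, v_prev, e_prev):
    """Compute the FLC error E and error change dE for one sample."""
    dv = v_k - v_prev
    # Eq. (67): slope of the P-V curve approximates dP/dV
    e = (p_k - p_prev) / dv if dv != 0 else 0.0
    # Eq. (68): change in error between two consecutive samples
    de = e - e_prev
    return e, de
```

At the MPP the slope ∆P/∆V is zero, so the fuzzy rules drive E toward zero by adjusting the PWM duty cycle.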

Demerits:
Deriving the fuzzy rules is difficult and time-consuming; the strategy cannot automatically learn from the environment; the calculations are complex; performance under PSC is undesirable; and the fuzzy rules directly affect system performance.

(b) Artificial Neural Network Strategy
The artificial neural network (ANN) is a collection of statistical learning models. The ANN technique emulates the biological neural network for predicting an accurate output per input. Neurons are the basic units of the network which are interlinked. Consequently, neurons process the inflowing data.
A neural network has three layers: an input layer, a hidden layer, and an output layer, as depicted in Figure 31. The total count of neurons in each layer is variable and problem-dependent. ANNs are operated as maximum power point trackers to foretell the optimum power or voltage produced at a distinct instance. The predicted value acts as a reference that aids in determining the duty cycle. The input variables take into account the PV module parameters and the atmospheric parameters. The hidden layers in the network then process these input variables.
The ANN algorithm provides an enhanced method to diminish the total error (É), as demonstrated in Equation (69).
where i indexes the network outputs; O denotes the actual output; t symbolizes the estimated outcome.
The back-propagation algorithm is retrospective in nature: it derives the error at the output and feeds it back toward the input neurons through the hidden-layer neurons. The total number of hidden neurons present is computed by employing Equation (70).
where N h stands for the count of hidden neurons; N I symbolizes the total count of input neurons injected into the system; N O denotes the total count of output neurons; N t indicates the total count of training samples. The hardware and simulation setup helps in collecting essential data. Subsequently, the dataset is acquired by inputting solar irradiances, temperatures, and PV voltage or current to the ANN to find the corresponding P max or V max output.
These data are converted into training data and then passed into the designed ANN to teach it how to perform. After the training phase, the test datasets evaluate the performance of the designed ANN, and the errors are fed back to the ANN until the weights of all the neurons are adjusted accordingly.
For a particular application, the network needs to be trained by training algorithms. Hence, the system's overall performance relies on factors such as the training process, the activation function, and the number of neurons in the hidden layer. Moreover, the quality of the training datasets defines the accuracy of the network.
The feed-forward topology-based ANN consists of three network layers, which is discussed in [93]. As per the simulation results, the ANN-based MPPT algorithm is more accurate than the MPPT algorithm without ANN during solar irradiation and temperature variation. It is proven to have a better response time and less oscillation around MPP [94]. Artificial Neural Network performance improves with an increase in the number of training samples.
However, an accurate, standardized, and proper training set is the main limitation for the ANN to perform optimally without a high training error [95]. ANN requires periodic tuning to cope with the aging and degradation problem of the solar cells [96].
ANN strategy shows a fast response, fast-tracking speed, small steady-state oscillations, and there is even no need to re-program it. However, it requires a massive dataset, which makes its implementation complex and time-consuming.
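The forward pass of such a network can be sketched as below. This is purely illustrative: the single hidden layer, the tanh activation, and the idea of predicting a reference voltage V_ref from irradiance G and temperature T are assumptions; in practice the weights come from the offline training described above.

```python
import math

def ann_forward(g, t, w_hid, b_hid, w_out, b_out):
    """One hidden layer with tanh activation mapping (G, T) -> V_ref.

    w_hid : list of (w_g, w_t) weight pairs, one per hidden neuron
    b_hid : hidden-neuron biases; w_out, b_out : output-layer weights/bias
    """
    hidden = [math.tanh(wg * g + wt * t + b)
              for (wg, wt), b in zip(w_hid, b_hid)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out
```

The predicted V_ref then serves as the reference from which the duty cycle is derived.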

(a) Genetic Algorithm
Genetic algorithms (GA) are computational models motivated by evolution. A GA comprises chromosomes, which encode possible solutions to a problem. Each chromosome carries a distinct set of attributes, i.e., a solution, to which recombination operators are applied while conserving vital information. GA operates as a function optimizer. To date, GA has been implemented in a broad range of applications. The main reasons for the popularity of GAs in search and optimization problems are their widespread applicability, their global perspective, and their inherent parallelism [91]. GA helps in enhancing the PV voltage and hence generates the maximum power transfer (P MPP ). The simulation result creates an array of data containing voltage (V pv ), power (P pv ), and current (I PV ). In GA, V pv searches for optimized solutions represented by chromosomes, while P pv represents the fitness value of a particular chromosome.
The principal concept is to perform genetic alterations (selection, crossover, mutation, and insertion) on a population of individuals. Eventually, an ideal individual is obtained, corresponding to the maximum of the function (i.e., the fitness function). The usual flowchart of GA is depicted in Figure 32.
The steps of GA strategy execution are as follows:
i. Initialization: Initially, an arbitrary population with Ń binary individuals is generated with a chosen length Ṡ (the number of bits, which sets the exactness). The population consists of a binary matrix (71) in which the number of rows addresses the number of individuals, whereas the number of columns symbolizes the length of the individuals.
• Assessment: In this appraisal process, the possibility of an individual to be picked is decided by the fitness function (f t) value, so it is a crucial step. In the case of MPPT, the fitness function is the power of the PV module (i.e., P pv ). For each individual, the fitness function is computed, and then its value is utilized to produce a new generation by taking the current value as the parent population as per the fitness function.
• Genetic Functioning: The operations employed in this step are the foundation of the GA strategy. These do not reject the probability hypotheses, yet they give fascinating tasks; these tasks are: 1. Selection: The selection method employed is known as the roulette wheel selection. The probability (p k ) of the kth individual to be picked is computed by utilizing Equation (72).
2. Crossover: In this operation, reproduction is performed by crossing the pairs of individuals to produce the novel ones (i.e., children).

3. Mutation: In this process, a mutation analogous to the biological one is applied. The alteration of one or more genes occurs in a chromosome, with a random bit having a given likelihood of changing from its original form.

4. Insertion: It is a replacement process in which the new population is integrated with the previous group of individuals. Later, the individuals with poor fitness function values are replaced.

• Program End: Eventually, the algorithm produces a new population consisting of the best individuals. The program terminates after reaching the desired output as per the system.
GA has relatively small oscillations and rapid convergence speed, and unlike conventional MPPT, GA-based MPPT is capable of searching GMPP instead of being trapped in the local MPP [97,98].
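The genetic operations above can be sketched as follows. The roulette-wheel selection implements Equation (72); the single-point crossover position, the per-bit mutation rate, and the binary-string encoding are illustrative assumptions.

```python
import random

def roulette_select(population, fitness):
    """Pick one individual with probability p_k = f_k / sum(f), Eq. (72)."""
    total = sum(fitness)
    r = random.uniform(0.0, total)
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if r <= acc:
            return individual
    return population[-1]

def crossover(parent_a, parent_b, point):
    """Single-point crossover producing two children."""
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate(bits, rate):
    """Flip each gene with a small probability `rate`."""
    return ''.join(('1' if c == '0' else '0') if random.random() < rate else c
                   for c in bits)
```

In the MPPT setting, each binary string encodes a candidate V pv and the fitness is the measured P pv.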

(b) Differential Evolution
A Differential Evolution (DE) strategy was suggested by Storn and Price in 1996, specifically for global optimization problems [99]. DE execution is simple, as it requires only two parameters: a population of particles and the maximum number of iterations needed to yield the optimal result. Besides, DE searches the global space; therefore, it is employed to follow the GMPP in the case of partial shading conditions. Moreover, the mutation stage in each cycle utilizes the distinct attributes of the particles. The flowchart of the DE algorithm is illustrated in Figure 33. The DC-DC converter's duty cycle (D) is required to be regulated efficiently to operate the PV system at the GMPP. Hence, the DE strategy utilizes the duty cycle as the target vector, D n .
At first, a two-dimensional target vector is initialized in the DE implementation, with D n as the population for each iteration and generation, as shown in Equation (73): D g,n ; n = 1, 2, 3 . . . N. After one generation, three particles are selected randomly to decrease the DE strategy execution time. Subsequently, the chosen duty cycles compute the corresponding power (P n ) of the PV array. Afterwards, the maximum power in the set of P n is chosen as P best , and the corresponding D n is selected as D best . Next, a mutation factor (M) weights the difference between the two chosen target vectors, and the mutated particle, known as the donor vector (DV n ), is formed by adding this weighted deviation to the third target vector. This interaction mimics the competition between individuals in nature and promotes local learning from the distinct attributes of one another in the group, which leads to the generation of better individuals and guarantees the advancement of the population.
The direction of mutation should guarantee convergence towards P best , which is achieved by employing the comparison depicted in Equation (74).
where M lies in the range [0, 1]. After mutation, a process known as crossover is employed to produce the trial vectors (TV n ) by mixing the donor vectors and the target vectors, as described in Equation (75). In this process, an arbitrary number (rand), which lies in the range [0, 1], is compared with the hybrid rate HR, which also lies in the range [0, 1].
TV n = DV n ; if rand ≥ HR
TV n = D n ; otherwise. (75)
Later, the powers of the PV array corresponding to the trial vectors, P n,TV , are estimated. Notably, after the crossover process, the value of TV n may remain the same as D n , in which case P n,TV is likewise equal to P n and need not be measured again. Only when TV n differs from D n is the corresponding power P n,TV estimated again by employing the DC-DC converter. This interaction assists with lessening the search time.
As for the comparison, the duty cycle yielding the optimum power is retained as the new target vector as per Equation (76).
D n+1 = TV n ; if P n,TV ≥ P n
D n+1 = D n ; otherwise. (76)
Therefore, the course of action is continued from the mutation step until a convergence condition is fulfilled.
• Direct detection of the I SC is possible by a basic reading of the I PV through decrements and increments in solar irradiance.
• During changing atmospheric conditions, the proposed algorithm shows a short following time and an increase in power production.
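One DE generation over a population of duty cycles can be sketched as follows. The helper `pv_power` (a stand-in for measuring the PV power at a given duty cycle) and the duty-cycle limits are hypothetical; the mutation step, the crossover rule of Equation (75), and the greedy selection of Equation (76) follow the description above.

```python
import random

def de_step(duties, pv_power, M=0.8, HR=0.5, lo=0.1, hi=0.9):
    """Advance one DE generation; a member never loses power."""
    powers = [pv_power(d) for d in duties]
    next_gen = []
    for i, d in enumerate(duties):
        # mutation: weighted difference of two targets added to a third
        r1, r2, r3 = random.sample(range(len(duties)), 3)
        donor = duties[r3] + M * (duties[r1] - duties[r2])
        donor = min(max(donor, lo), hi)          # keep the duty feasible
        # crossover, Eq. (75): mix donor and target vectors
        trial = donor if random.random() >= HR else d
        # greedy selection, Eq. (76): keep whichever duty yields more power
        next_gen.append(trial if pv_power(trial) >= powers[i] else d)
    return next_gen
```

Because selection is greedy, the best power in the population is non-decreasing across generations, which is what drives convergence to the GMPP.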

Comparison and Analysis
During the uniform irradiance condition, conventional MPPT techniques (i.e., P&O, INC, FOCV, FSCC, etc.) show an efficiency of 99%. However, the competence of the traditional strategies decreases under PSC circumstances because of the multiple peaks in the P-V curve. Hence, conventional tracking strategies get stuck at the local maxima. Therefore, to overcome this problem, advanced techniques such as the meta-heuristic approaches, hybrids, etc., have been employed. The fundamental principle of the advanced strategies is to yield the maximum power irrespective of the change in irradiation. In this review article, a comparison between the conventional, meta-heuristic, and artificial intelligence algorithms has been made. The comparison of algorithms acknowledges different parameters, such as strategy complexity, convergence speed, tracking accuracy, PV array dependency, etc. The pros and cons, applications, and commercial products of the MPPT techniques are illustrated in Table 5. The comparison of some well-known MPPT strategies based on various factors is depicted in Table 6. The parameters considered for evaluation are as follows:

i. Algorithm Complexity
MPPT technique efficiency relies immensely upon its implementation complexity. It decides the accuracy of the computations carried out by the strategy to follow the MPPT. Consequently, the algorithms consisting of simple calculations, such as FSCC, FOCV, etc., have good competence under uniform irradiation conditions, and their execution rate is fast. However, advanced techniques, such as meta-heuristic and artificial intelligence, have complex algorithm designs.

ii. Implementation
While designing the PV framework, choosing the MPPT technique is a crucial decision. This decision also takes into account the ease of executing the algorithms. Nonetheless, it profoundly depends on the user's interest and knowledge of MPPT handling, since some may prefer to manage analog circuits while others may be more skillful in handling digital systems. Moreover, the MPPT algorithm implementation relies on the count of sensors utilized, the cost, the design complexity, and other equipment necessities. Furthermore, some MPPT algorithms require a particular topology for their implementation. Hence, the user should be familiar with the various implementation topologies of the algorithms.

iii. Cost
The necessity of sensors is more pronounced when climatic conditions change suddenly, as the requirement for sensors and hardware increases in such cases. Additionally, some MPPT algorithms require current measurements. Hence, such strategies demand a high investment for their hardware execution, as current sensors are costly. Furthermore, the expense of MPPT technique implementation relies upon the selection of circuits, as digital circuits are expensive compared to analog ones.

iv. Tracking and Convergence Speed
Solar irradiance does not remain constant throughout the day, as it varies with rapid climate changes. Hence, in such a case, partial shading phenomena may be more pronounced. The PSC leads to multiple peaks in the solar P-V characteristics, which decreases the tracking speed and convergence speed of the PV framework. The conventional techniques cannot track the true maxima in a single step because these strategies need iterative advances that sense, compute, and perturb to follow the GMPP. Therefore, traditional algorithms have a slow tracking speed, and advanced techniques such as PSO, FLC, etc., are much more efficient for PSC cases. Furthermore, it is essential to take the tracking and convergence speeds into account when designing PV systems.

Simulation Results
The Cuckoo Search (CS) and Jaya algorithm (JA) are implemented in MATLAB Simulink software, version 2021a. Eventually, the output of the system was measured for varying solar irradiance. The characteristics of the PV module employed in the simulation are listed in Table 7. The first simulation was performed by taking the solar irradiances of the four panels as 1000, 1000, 300, and 300 W/m² at a temperature of 25 °C. The results of the simulation for the CS and JA strategies are illustrated in Figures 34 and 35, respectively. The oscillations from the start of the simulation settle down when the optimum power point gets tracked.
The simulation results indicate that JA tracks the maximum power point at 0.7 s, whereas the CS strategy tracks it at 1.2 s. The difference in tracking time signifies that the JA algorithm tracking speed is faster than the CS strategy. Furthermore, the JA strategy tracked down the global optimum power point as 67.57 W. However, the CS technique tracked the MPP as 45 W, which indicates that it is stuck in the local optimum power point. Thus, the JA strategy is more accurate than the CS technique.

For the second simulation, insolation levels of 1000, 800, 600, and 400 W/m² were adjusted to the PV panels. Each panel was set to a temperature of 25 °C. The curve for the variation of the power, voltage, current, and duty cycle with respect to time for the CS algorithm is depicted in Figure 36. The graph for the power indicates that the maximum power tracked was 30.36 W, which is less than the one followed in the first case at 1.2 s. Moreover, the time taken to locate the MPP by the CS strategy is approximately the same in both cases.
For the second simulation, insolation levels of 1000, 800, 600, and 400 W/m² were applied to the PV panels, each set to a temperature of 25 °C. The variation of power, voltage, current, and duty cycle with respect to time for the CS algorithm is depicted in Figure 36. The power curve indicates that the maximum power tracked was 30.36 W, which is less than that tracked in the first case, at 1.2 s. Moreover, the time taken by the CS strategy to locate the MPP is approximately the same in both cases.
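For comparison with the JA sketch, the CS strategy searches with Lévy flights (heavy-tailed random steps) and periodically abandons the worst nests. The following Python sketch uses the same hypothetical two-peak P–V curve; the Mantegna approximation for Lévy steps is standard, but the step scale, abandonment fraction, and nest count here are illustrative assumptions rather than the paper's settings.

```python
import math
import random

def pv_power(d):
    # Same toy two-peak curve as before: local peak ~45 W near d = 0.25,
    # global peak ~67.5 W near d = 0.70 (illustrative, not the paper's model).
    return (45.0 * max(0.0, 1 - ((d - 0.25) / 0.15) ** 2)
            + 67.5 * max(0.0, 1 - ((d - 0.70) / 0.12) ** 2))

def levy_step(rng, beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_mppt(n=6, iters=80, pa=0.25, alpha=0.05, seed=3):
    rng = random.Random(seed)
    clamp = lambda d: min(max(d, 0.05), 0.95)
    nests = [rng.uniform(0.05, 0.95) for _ in range(n)]
    for _ in range(iters):
        best = max(nests, key=pv_power)
        for i, d in enumerate(nests):
            # Levy flight around the current nest, scaled by distance to the best.
            cand = clamp(d + alpha * levy_step(rng) * (d - best))
            if pv_power(cand) > pv_power(d):  # greedy replacement
                nests[i] = cand
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        nests.sort(key=pv_power)
        for i in range(int(pa * n)):
            nests[i] = rng.uniform(0.05, 0.95)
    d_best = max(nests, key=pv_power)
    return d_best, pv_power(d_best)

d, p = cuckoo_mppt()
```

The random rebuilding of abandoned nests is what lets CS eventually explore beyond a local peak; with too few iterations or nests, however, it can settle on the local optimum, consistent with the slower, less accurate CS behavior reported above.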

Future Research Work Recommendations
In this article, nineteen cutting-edge MPPT strategies have been reviewed and compared based on different parameters. Moreover, the advantages and disadvantages of each algorithm have been comprehensively discussed and arranged in tabular form. Currently, the number of PV MPPT algorithms exceeds one hundred, and more than five new optimization algorithms are published each year. Such extensive research opens a wide area for further review of MPPT algorithms. Hence, this article updates the existing review literature by analyzing and assessing the well-known methods along with some recent strategies, and presents the latest findings in each algorithm field in tabulated format.
In the future, this research study can be extended by including: i. An analysis of the accuracy of MPPT techniques during the hot-spot process, since, over the last decade, hot spots have emerged as one of the prominent problems affecting the efficiency of MPPT algorithms. ii. Real-time assessment of the MPPT algorithms over a longer period, on the order of one week or one month. iii. An examination of hybrid algorithms, which are currently booming; these methods offset the disadvantages of the individual algorithms they combine, so that the constituent algorithms complement each other.

Conclusions
This analysis gives a thorough review of the various optimization techniques used in MPPT, which can be classified into three groups: conventional, meta-heuristic, and artificial intelligence. This article comprises a comprehensive description of the operating principles, along with a flowchart depiction, of each MPPT strategy. Later, based on the reviewed articles, different MPPT strategies, together with some more recent ones, are assessed against specific factors: algorithm complexity, tracking speed, cost of implementation, sensed parameters, tracking accuracy, PV array dependency, steady-state oscillation, and analog/digital implementation. Furthermore, the pros and cons of the MPPT methods are discussed and then assembled in tabulated form. In addition, this study suggests that employing MPP controllers is the best way to tackle PSC problems, which has paved the way for a vast area of research.
The conventional techniques discussed work well under uniform irradiance conditions and have low algorithm complexity; however, their tracking speeds are slow compared to the advanced techniques. Nowadays, intelligent strategies are more prominent, as they give the best results during PSCs, provide ease of data storage, and are independent of mathematical computations. On the other hand, meta-heuristic techniques, which are bio-inspired, are compatible with any solar PV system and require no prior knowledge of the PV panel parameters. To encapsulate, this review article will be a valuable reference for an analyst or a researcher selecting an MPPT strategy for a specific purpose.