A Hybrid Golden Jackal Optimization and Golden Sine Algorithm with Dynamic Lens-Imaging Learning for Global Optimization Problems

Abstract: Golden jackal optimization (GJO) is an effective metaheuristic algorithm that imitates the cooperative hunting behavior of the golden jackal. However, since the update of the prey's position often depends on the male golden jackal, and the diversity of golden jackals is insufficient in some cases, GJO is prone to falling into a local optimum. To address these drawbacks, this paper proposes an improved algorithm, a hybrid GJO and golden sine (S) algorithm (Gold-SA) with dynamic lens-imaging (L) learning (LSGJO). First, this paper proposes novel dual golden spiral update rules inspired by Gold-SA. These rules give GJO the structured search behavior of Gold-SA, making the golden jackal more intelligent during predation and improving the ability and efficiency of optimization. Second, a novel nonlinear dynamic decreasing scaling factor is introduced into the lens-imaging learning operator to maintain population diversity. The performance of LSGJO is verified on 23 classical benchmark functions and 3 complex design problems from real scenarios. The experimental results show that LSGJO converges faster and more accurately than 11 state-of-the-art optimization algorithms, that its global and local search abilities are significantly improved, and that it shows superior performance in solving constrained problems.


Introduction
Natural science and social economy optimization problems are a research hotspot in computer science, management and decision-making, artificial intelligence, and other fields. The search for high-precision solutions to such optimization problems has attracted many researchers. However, the traditional optimization methods based on mathematical theory, such as Newton's downhill method and the gradient descent method, have been unable to solve these problems effectively [1,2], so many scholars favor metaheuristic algorithms.
Metaheuristic algorithms are used to find the optimal or a satisfactory solution to complex optimization problems [3][4][5], and they are inspired by biological populations, physical phenomena, evolutionary laws, etc. For example, the whale optimization algorithm (WOA) is inspired by the foraging behavior of humpback whales in nature [6]. The salp swarm algorithm (SSA) is inspired by the swarming behavior of salps when navigating and foraging in oceans [7]. Harris's hawk optimization (HHO) is inspired by the different mechanisms of the Harris's hawk's strategy for capturing prey [8]. These algorithms are inspired by biological populations; others draw on physical phenomena. For example, the equilibrium optimizer (EO) is inspired by control volume mass balance models that estimate both dynamic and equilibrium states [9]. The lightning attachment procedure optimization (LAPO) is inspired by the natural process by which the upward and downward leaders of lightning connect.
The remaining sections of this paper are as follows: Section 2 briefly summarizes the conventional golden jackal algorithm. Section 3 proposes LSGJO and analyzes its time complexity. The benchmark functions are tested, and the results are analyzed in Section 4. LSGJO is used to solve three constrained optimization problems in mechanical fields in Section 5. Section 6 discusses the challenges, recommendations, and limitations related to the proposed algorithm. Finally, Section 7 concludes the paper and proposes future studies.

Golden Jackal Algorithm
The golden jackal algorithm is a swarm intelligence algorithm proposed by Nitish Chopra and Muhammad Mohsin Ansari; it mimics the hunting behavior of golden jackals in nature. Golden jackals usually hunt in male-female pairs. The hunting behavior of the golden jackal is divided into three steps: (1) searching for and moving towards the prey; (2) enclosing and irritating the prey until it stops moving; and (3) pouncing on the prey.
The prey positions are initialized as an N × n matrix:

Prey = [Y1,1 ... Y1,n; Y2,1 ... Y2,n; ...; YN,1 ... YN,n]    (1)

where N denotes the number of prey populations and n denotes dimensions. The mathematical model of the golden jackal's hunt is as follows (|E| > 1):

Y1(t) = YM(t) − E·|YM(t) − rl·Prey(t)|    (2)
Y2(t) = YFM(t) − E·|YFM(t) − rl·Prey(t)|    (3)

where t is the current iteration, YM(t) indicates the position of the male golden jackal, YFM(t) indicates the position of the female, and Prey(t) is the position vector of the prey. Y1(t) and Y2(t) are the updated positions of the male and female golden jackals. E is the evading energy of prey and is calculated as follows:

E = E1·E0    (4)
E1 = c1·(1 − t/T)    (5)

where E0 is a random number in the range [−1, 1], indicating the prey's initial energy; T represents the maximum number of iterations; c1 is the default constant set to 1.5; and E1 denotes the prey's decreasing energy.
In Equations (2) and (3), |YM(t) − rl·Prey(t)| denotes the distance between the golden jackal and the prey, and "rl" is a vector of random numbers calculated by the Levy flight function:

rl = 0.05·LF(y)    (6)
LF(y) = 0.01 × (u·σ)/|v|^(1/β),  σ = [Γ(1 + β)·sin(πβ/2) / (Γ((1 + β)/2)·β·2^((β−1)/2))]^(1/β)    (7)

where u and v are random values in (0, 1) and β is the default constant set to 1.5.
The position of the prey is updated by averaging the two jackal positions:

Y(t + 1) = (Y1(t) + Y2(t))/2    (8)

where Y(t + 1) is the updated position of the prey based on the male and the female golden jackals. When the prey is harassed by the golden jackals, its evading energy decreases. The mathematical model of the golden jackals surrounding the prey and devouring it is as follows (|E| ≤ 1):

Y1(t) = YM(t) − E·|rl·YM(t) − Prey(t)|    (9)
Y2(t) = YFM(t) − E·|rl·YFM(t) − Prey(t)|    (10)

The pseudo-code of the above GJO is shown in Algorithm 1.
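The update rules above can be sketched as a small Python function. This is a hedged illustration of Equations (2)-(10), not the paper's Matlab implementation; the Levy step uses Mantegna's algorithm, a common implementation choice (the helper names `levy_flight` and `gjo_update` are ours).

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight(dim, beta=1.5):
    """LF(y): Levy-flight step vector via Mantegna's algorithm (Equation (7))."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.randn(dim) * sigma
    v = np.random.randn(dim)
    return 0.01 * u / np.abs(v) ** (1 / beta)

def gjo_update(prey, y_male, y_female, t, T, c1=1.5):
    """One GJO position update for a single prey individual."""
    dim = prey.size
    e0 = 2 * np.random.rand() - 1          # initial evading energy in [-1, 1]
    e1 = c1 * (1 - t / T)                  # linearly decreasing energy, Equation (5)
    E = e1 * e0                            # Equation (4)
    rl = 0.05 * levy_flight(dim)           # Equation (6)
    if abs(E) > 1:                         # exploration: search for prey
        y1 = y_male - E * np.abs(y_male - rl * prey)
        y2 = y_female - E * np.abs(y_female - rl * prey)
    else:                                  # exploitation: enclose and pounce
        y1 = y_male - E * np.abs(rl * y_male - prey)
        y2 = y_female - E * np.abs(rl * y_female - prey)
    return (y1 + y2) / 2                   # Equation (8): average of both jackals
```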

Proposed LSGJO
When solving some optimization problems, GJO is prone to iterative stagnation, slow convergence in later stages, and insufficient exploration and exploitation capacity; these shortcomings are more apparent when solving complex problems. In this section, we propose two improvement strategies, described in detail below.

Dynamic Lens-Imaging Learning Strategy
The lens-imaging learning strategy is a recently proposed opposition-based learning method [36], derived from the convex lens-imaging law in optics. The principle of the strategy is to refract an entity on one side of a convex lens to the other side to form an inverted image. Figure 1 outlines this principle: on the left of the coordinate axis y, there is an individual G (the male golden jackal); its projection on the coordinate axis x is X, and its distance from the coordinate axis x is h. The coordinate axis y denotes a convex lens of focal length f, and the point O is the center of the lens. G passes through the lens to produce an opposite individual G', whose projection on the coordinate axis x is X' and whose distance from the coordinate axis x is h'. The individual X and its opposite individual X' are thus obtained. According to Figure 1, the convex lens-imaging principle gives:

((ub + lb)/2 − X) / (X' − (ub + lb)/2) = h/h'    (11)

where ub and lb are the upper and lower bounds. Let h/h' = α, where α is called the scaling factor; then, Equation (11) is transformed to obtain the formula for the opposite point X':

X' = (ub + lb)/2 + (ub + lb)/(2α) − X/α    (12)

The scaling factor α can increase the local exploitation ability of the LSGJO.
In the original lens-imaging learning strategy, the scaling factor is generally treated as a constant, which limits the convergence performance of the algorithm. Therefore, this paper proposes a new scaling factor based on nonlinear dynamic decreasing, which takes larger values in the early iterations so that the algorithm can search a broader range of different dimensional regions and improve the diversity of the population. Toward the end of the iterations, it takes smaller values, so a fine search near the optimal individual can be carried out to improve the local optimization ability. The nonlinear dynamic scaling factor α is calculated by Equation (13), where ζmax is the maximum scaling factor, ζmin is the minimum scaling factor, and T is the maximum number of iterations; the value of ζmax is 100, and the value of ζmin is 10. Equation (12) can be generalized to the n-dimensional search space:

X'j = (ubj + lbj)/2 + (ubj + lbj)/(2α) − Xj/α    (14)

where Xj and X'j are the components of X and X' in dimension j, respectively, and lbj and ubj are the lower and upper bounds of dimension j, respectively. The dynamic lens-imaging strategy considers both the candidate and opposite solutions and selects the better one according to the calculated fitness. In this paper, the dynamic lens-imaging learning strategy is applied to the current global optima of the swarm in GJO, which helps the population avoid stagnation in local optima.
Opposition-based learning strategies mainly include the original opposition-based learning (OBL) strategy, the quasi-opposition-based learning strategy, and the dynamic lens-imaging-based learning strategy. The original OBL is the special case α = 1 of Equation (12). Quasi-oppositional learning, proposed by Tizhoosh et al. [37], is utilized to improve the overall exploration of the initial and execution stages, and it is an excellent improvement on the original opposition-based learning method. It can increase the diversity of the population but ignores the fact that, as the number of iterations increases, the algorithm shifts from global optimization to local optimization. The dynamic lens-imaging learning proposed in this paper takes this into account.
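The lens-imaging operator above can be sketched in a few lines of Python. Equations (12) and (14) are reproduced directly; the exact decay curve of Equation (13) is not restated in this excerpt, so `alpha_dynamic` below is an illustrative nonlinear decrease between the stated bounds ζmax = 100 and ζmin = 10 (both helper names are ours).

```python
import numpy as np

def alpha_dynamic(t, T, zeta_max=100.0, zeta_min=10.0):
    """Nonlinearly decreasing scaling factor: large early (broad search),
    small late (fine search near the current optimum)."""
    return zeta_min + (zeta_max - zeta_min) * (1.0 - (t / T) ** 2)

def lens_opposite(x, lb, ub, alpha):
    """Opposite point X'_j = (ub_j + lb_j)/2 + (ub_j + lb_j)/(2*alpha) - X_j/alpha."""
    mid = (np.asarray(ub) + np.asarray(lb)) / 2.0
    return mid + mid / alpha - np.asarray(x) / alpha
```

Note that with alpha = 1 the operator reduces to classical OBL, X' = ub + lb − X, matching the special case noted above; the greedy step then keeps whichever of X and X' has the better fitness.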

Novel Update Rules
The golden sine algorithm was proposed by Tanyildizi et al. [2]; it is inspired by the relationship between the sine function and the unit circle in mathematics. The golden section coefficient is introduced in the position update of the golden sine algorithm, and the special relationship between the sine function and the unit circle is combined with the golden section. The golden sine algorithm finds the global optimal solution by continuously reducing the search scope: first, the global search locates the promising solution space, then the local search is carried out, and finally the global optimal solution is sought. The golden sine algorithm has better local search ability, and its mathematical model is as follows:

V(t + 1) = V(t)·|sin(R1)| − R2·sin(R1)·|x1·D(t) − x2·V(t)|    (15)

where V(t) is the current solution, D(t) is the current best (destination) solution, and t denotes the current iteration number; R1 is a random value inside [0, 2π]; R2 is a random value inside [0, π]; R1 and R2 indicate the direction and distance of the next movement, respectively; and x1 and x2 are the golden section coefficients, which are used to narrow the search space and guide the individual toward the optimal solution.
x1 = a·τ + b·(1 − τ),  x2 = a·(1 − τ) + b·τ    (16)

where a and b are the initial values −π and π, and τ represents the golden number. When the golden sine algorithm and the golden jackal algorithm are combined, the position update rules of the male and female jackals in the exploitation stage are replaced by the dual golden spiral update rules of Equations (19) and (20). These rules mimic the golden jackal circling the prey along a curve, consuming the prey's physical strength, gradually narrowing the encircling circle, and then capturing the prey. This position update rule is closer to how golden jackals surround and capture prey in nature, and the principle of the dual golden spiral update rules is shown in Figure 2.
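The golden sine move can be sketched as follows. This is a hedged illustration: the sign convention follows common Gold-SA formulations, and the paper's exact Equations (15)-(20) may differ in detail; the helper names are ours.

```python
import numpy as np
from math import pi, sqrt

TAU = (sqrt(5) - 1) / 2  # golden number, ~0.618

def golden_section_coeffs(a=-pi, b=pi, tau=TAU):
    """Golden section coefficients x1, x2 that progressively narrow the interval."""
    return a * tau + b * (1 - tau), a * (1 - tau) + b * tau

def golden_sine_step(x, dest, x1, x2, rng=np.random):
    """Move solution x toward the destination (best) solution along a sine trajectory."""
    r1 = 2 * pi * rng.rand()  # direction of the next move
    r2 = pi * rng.rand()      # distance of the next move
    return x * abs(np.sin(r1)) - r2 * np.sin(r1) * np.abs(x1 * dest - x2 * x)
```

With the symmetric interval a = −π, b = π, the two coefficients satisfy x1 = −x2, which is what makes the spiral contract around the destination point.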
In summary, combining the lens-imaging strategy with a nonlinear dynamic decreasing factor and the new update rules enables GJO to jump out of local optima, accelerate convergence, and improve convergence accuracy. In the exploitation phase of the golden jackal algorithm, adding a Levy flight function can avoid falling into local optima to a certain extent. However, since the Levy flight is characterized by short-distance steps and occasional long-distance jumps, GJO still falls into local optima in some numerical optimizations; especially on high-dimensional functions, its effect is significantly reduced. In this regard, the dynamic lens-imaging learning strategy is used to find the opposite of the current global optimal solution, increase the population's diversity, and retain the better of the two by comparing their fitness values. In the exploitation phase, the positions of male and female jackals are updated by the new update rules. The pseudo-code of LSGJO is shown in Algorithm 2 and Figure 3.
Algorithm 2 (excerpt):
  while (t < T)
    Obtain Y*1 by Equation (14)
    Calculate the fitness values of Y1 and Y*1; set the better one as Y1
    for (each prey individual)
      Update the evading energy E using Equations (4) and (5)
      Update rl using Equations (6) and (7)
      if (|E| ≥ 1) (Exploration phase)
        Update the prey position using Equations (8), (19), and (20)
      if (|E| < 1) (Exploitation phase)
        Update the prey position using Equations (8), (9), and (10)
    end for
    t = t + 1
  end while
  return Y1
Figure 3. Flowchart of LSGJO.
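The pseudo-code above can be exercised end-to-end with a compact, self-contained sketch on the sphere function. It mirrors the loop in outline only: a golden-sine move stands in for Equations (19) and (20), a lens-imaging step for Equation (14), and the alpha decay and coefficients are our assumptions, not the paper's exact implementation.

```python
import numpy as np
from math import pi, sqrt

def sphere(x):
    return float(np.sum(x ** 2))

def lsgjo_sketch(dim=10, n=20, T=200, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    tau = (sqrt(5) - 1) / 2
    x1 = -pi * tau + pi * (1 - tau)        # golden section coefficients
    x2 = -pi * (1 - tau) + pi * tau
    prey = rng.uniform(lb, ub, (n, dim))
    fit = np.array([sphere(p) for p in prey])
    for t in range(T):
        order = np.argsort(fit)
        male, female = prey[order[0]].copy(), prey[order[1]].copy()
        # dynamic lens-imaging on the current best, kept only if it improves
        alpha = 10 + 90 * (1 - (t / T) ** 2)   # illustrative decay
        mid = (ub + lb) / 2
        opp = mid + mid / alpha - male / alpha
        if sphere(opp) < fit[order[0]]:
            prey[order[0]], fit[order[0]], male = opp, sphere(opp), opp
        e1 = 1.5 * (1 - t / T)                 # decreasing evading energy
        for i in range(n):
            E = e1 * (2 * rng.random() - 1)
            if abs(E) > 1:                     # exploration: golden-sine spiral
                r1, r2 = 2 * pi * rng.random(), pi * rng.random()
                y1 = male * abs(np.sin(r1)) - r2 * np.sin(r1) * np.abs(x1 * male - x2 * prey[i])
                y2 = female * abs(np.sin(r1)) - r2 * np.sin(r1) * np.abs(x1 * female - x2 * prey[i])
            else:                              # exploitation: enclose the prey
                rl = 0.05 * rng.standard_normal(dim)
                y1 = male - E * np.abs(rl * male - prey[i])
                y2 = female - E * np.abs(rl * female - prey[i])
            cand = np.clip((y1 + y2) / 2, lb, ub)   # Equation (8)
            if sphere(cand) < fit[i]:          # greedy selection
                prey[i], fit[i] = cand, sphere(cand)
    return fit.min()
```

Even this rough sketch converges to near zero on the sphere function, which is all it is meant to show; the paper's reported results come from its own Matlab implementation.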


The Computational Complexity of LSGJO
The time complexity indirectly reflects the convergence rate of the algorithm. Suppose the time required to initialize the parameters (population size N, dimension d, coefficients E, rl, etc.) is γ1. According to Equation (7), the time needed for each dimension to update the position of the prey and the position of the golden jackal is γ2, and the time for solving the fitness value of the objective function is f(n); then, the time complexity of GJO is:

O(GJO) = O(γ1 + T·N·(d·γ2 + f(n))) = O(T·N·d + T·N·f(n))

In the LSGJO algorithm, it is assumed that the time to initialize the parameters (population size N, dimension d, τ, x1, x2, coefficients E, rl, etc.) is γ3, the time required to perform the lens-imaging learning strategy is γ4, and the time required to execute the greedy mechanism is γ5. According to Equation (7), the time needed for each dimension to update the position of the prey and the position of the golden jackal is γ6; then, the time complexity of LSGJO is:

O(LSGJO) = O(γ3 + T·(γ4 + γ5) + T·N·(d·γ6 + f(n))) = O(T·N·d + T·N·f(n))

The LSGJO proposed in this paper therefore has the same time complexity as GJO:

O(LSGJO) = O(GJO)

In summary, the LSGJO does not increase the time complexity.

Simulation and Result Analysis
To verify the performance of the LSGJO, this study uses 23 benchmark functions commonly used in the literature [2,35], which are listed in Table 1. The functions F1~F7 are high-dimensional unimodal functions with a single global optimal solution; they are used to test the convergence rate of search algorithms. The functions F8~F13 are high-dimensional multimodal functions with a single global optimum and multiple locally optimal solutions; these functions are designed to test the search capacities of optimization algorithms. The functions F14~F23 are low-dimensional multimodal functions with a small number of local minima. The range indicates the solution space, and Fmin denotes the optimal value. In order to verify the robustness of LSGJO, the 13 functions F1~F13 were tested with 100 and 500 dimensions.
All experiments were conducted with the same environment configuration, and all algorithms were implemented in Matlab 2016b installed on Windows 10 (64 bit), with an Intel(R) i5-9400F CPU at 2.9 GHz and 16 GB of RAM.
The parameters of all the comparison algorithms are shown in Table 2. In order to ensure the fairness of the experimental results, the population size of each algorithm was set to 30, and the maximum number of iterations was set to 500. Each algorithm ran 30 times independently, and its average and standard deviation were recorded.

Comparison and Analysis with Metaheuristic Algorithms
The experimental results of the 11 algorithms on the 23 benchmark functions are shown in Table 3. As can be seen from the mean and standard deviation, LSGJO performs better than GJO on almost every function. Compared with the other algorithms, LSGJO ranks first on all test functions except F6, F12, F14, and F20 in the average ranking. In all benchmark function tests, the mean and standard deviation of the LSGJO results are small, indicating that its performance is the best. The convergence curves in Figure 4 show that LSGJO converges to the optimal solution much faster than the other algorithms.


Experimental Analysis of the Algorithm in Different Dimensions of Function
As the dimension of the function increases, the computational cost of the function increases exponentially. The dimensions are set to 100 and 500; the other experimental settings are as described above. As can be seen from Tables 4 and 5, LSGJO can obtain the optimal solutions in both 100 and 500 dimensions. To further observe the performance of LSGJO, the 100-dimensional and 500-dimensional convergence curves are shown in Figures 5 and 6, respectively. In both figures, the convergence speed of LSGJO on functions F1-F13 is faster than that of the other algorithms, and its convergence accuracy is higher. The results show that LSGJO has better robustness than the comparison algorithms.
Multidimensional testing not only reflects the robustness of the algorithm but also has practical significance. The traveling salesman problem (TSP) is a typical NP-complete problem that aims to minimize the length of a path traversing all cities; when there are many cities, the algorithm needs to be able to solve multidimensional problems. When swarm intelligence algorithms are used to optimize the weights and thresholds of multilayer neural networks and the number of layers is large, the number of variables can exceed 500, so the algorithm needs to be able to solve 500-dimensional problems. Likewise, when solving large-scale job-shop scheduling problems, the large number of jobs and machines requires the algorithm to handle multidimensional problems.
When a swarm intelligence algorithm is used for wireless sensor coverage optimization, if the coverage area is large, the algorithm needs to have the ability to solve multidimensional problems. In addition, swarm intelligence algorithms are also used in assembly sequence and process planning, and under certain conditions, the ability of algorithms to deal with multidimensional variables is also required.

Convergence Behavior Analysis
This experiment is used to observe the convergence behavior of LSGJO with 30 dimensions and 500 iterations. The convergence process of the LSGJO is shown in Figure 7. The diagram in the first column is a three-dimensional plot of the benchmark function. The diagram in the second column is the convergence curve of the LSGJO, which is the optimal value of the current iteration. It can be seen that LSGJO converges quickly on the unimodal function, and the ladder shape appears on the multimodal function, which shows that the improved algorithm has better exploration ability and exploitation ability. The diagram in the third column is the trajectory of the first golden jackal in the first dimension. The significant fluctuation at the beginning is due to the global optimization in the early iteration stage. The trajectory fluctuates significantly in the later stage of the iteration because of the dynamic lens-imaging learning strategy added, which can avoid falling into iterative stagnation. The fourth column diagram is the average fitness of the overall solution, which is used to evaluate the overall performance of the population. The curve will be relatively high in the initial iteration, and the average fitness will likely be stable as the number of iterations increases. The fifth column diagram shows the historical position of the search agent in the iterative process. In the search history of functions F1-F4 and functions F9-F11, the point positions are more clustered, indicating that the fitness of the search agent is small, and the next iteration will be a local search in this area. In the search history of functions F5, F6, and F16, many points are scattered, indicating that if the optimal value is not found quickly, other search agents will continue to search for the optimal solution.

Statistical Analysis
In the statistical processing of experimental data, each experiment's average value and standard deviation have been calculated and can be used to judge the algorithms' quality. In order to further verify the significant differences between the proposed algorithm and other algorithms, the Wilcoxon rank-sum test is performed at a significance level of 0.05 [42]. If the p-value is > 0.05, we should consider the performance of these two algorithms to be similar, and the values are underlined. The performance of all the algorithms is ranked by the Friedman rank test [43]. The results of the Wilcoxon rank sum test are shown in Table 6. NaN indicates that significance cannot be determined. The total number of significant differences is shown in the last column. In F9 and F11, some algorithms do not have significant differences in 30 and 100 dimensions. However, they have significant differences in 500 dimensions, indicating that other algorithms have poor performance in high dimensions, while LSGJO has good performance. The sorting results by Friedman show that LSGJO ranks first in both low-dimensional and high-dimensional functions.
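The rank-sum comparison behind Table 6 can be illustrated with a short, self-contained implementation. This is a sketch using the standard normal approximation for the two-sided p-value (adequate for 30 independent runs per algorithm); a library routine such as `scipy.stats.ranksums` computes the same statistic, and the function name `rank_sum_p` is ours.

```python
import math

def rank_sum_p(a, b):
    """Two-sided p-value for the hypothesis that samples a and b share one distribution."""
    n1, n2 = len(a), len(b)
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):                       # assign average ranks to ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1             # ranks are 1-based
        i = j + 1
    r1 = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2                    # mean rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (r1 - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

A returned p-value above 0.05 corresponds to the "similar performance" cases underlined in Table 6.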


Real-World Engineering Design Problems
In order to verify the optimization performance of LSGJO in real-world engineering design problems, this paper introduces three constraint problems, namely the speed reducer design problem [44], the gear train design problem [45], and the multiple-disk clutch design problem [46]. The number of iterations for all algorithms is set to 500, and the population size is set to 30.

Speed Reducer Design Problem
The speed reducer is widely used in mechanical products. Depending on the application, its specific functions mainly include reducing the speed, increasing the torque, reducing the inertia of the movement mechanism, and so on. The main goal of this design problem is to minimize the weight of the speed reducer. The variables include the face width y1, the module of teeth y2, the number of teeth in the pinion y3, the length of the first shaft y4, the length of the second shaft y5, the diameter of the first shaft y6, and the diameter of the second shaft y7. The speed reducer is shown in Figure 8. The mathematical model of the speed reducer design is stated in Appendix A, Equation (A1).

Table 7 shows the optimal solutions to the speed reducer design problem obtained by 10 popular intelligent algorithms: the proposed LSGJO, GWO, HHO, ChoA, GJO, EO, WOA, SO, MPSO, and SOGWO. The experimental results show that LSGJO is better than the other algorithms. The minimum weight of the speed reducer is f(y⃗) = 2994.4711, with the optimal solution y⃗ = {3.5000, 0.7000, 17.0000, 7.3000, 7.7153, 3.3502, 5.2867}.

Gear Train Design Problem
The gear train plays an essential role in watches, clutches, differentials, machine tools, fans, mixers, and many other products. It is one of the most common mechanisms in the mechanical field. The main objective of the gear train design problem is to minimize the deviation of the obtained gear ratio from the required ratio. The variables are the numbers of teeth of the four gears. The gear train is shown in Figure 9. The mathematical model of the gear train design is stated in Appendix A, Equation (A2).
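The objective is simple enough to state directly. This sketch follows the standard formulation of the benchmark, in which the required ratio is 1/6.931 and the four integer tooth counts lie in [12, 60]; the variable names `t_a`, `t_b`, `t_d`, `t_f` are illustrative, not necessarily the paper's.

```python
def gear_ratio_error(t_a, t_b, t_d, t_f):
    """Squared deviation of the obtained gear ratio from the required 1/6.931."""
    return (1.0 / 6.931 - (t_b * t_d) / (t_a * t_f)) ** 2
```

For example, the well-known near-optimal tooth combination (43, 19, 16, 49) gives an error on the order of 1e-12, which is the scale of the values an optimizer competes over on this problem.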

Multiple-Disk Clutch Design Problem
The multiple-disk clutch is widely used in mechanical transmission systems in machine tools, steel rolling, metallurgical mining, handling, ship fishery equipment, etc. The main objective of the multiple-disk clutch design problem is to minimize the weight of the clutch. The variables include the internal surface radius y1, the external surface radius y2, the disc thickness y3, the driving force y4, and the number of friction surfaces y5. The multiple-disk clutch is shown in Figure 10. The mathematical model of the multiple-disk clutch design is stated in Appendix A, Equation (A3). Table 9 lists the optimal solutions for the multiple-disk clutch design obtained by 11 advanced intelligent algorithms: LSGJO, GWO, GJO, ChoA, the ant lion optimizer (ALO) [55], the multi-verse optimizer (MVO) [56], ACO, the sine cosine algorithm (SCA) [57], EO, SOGWO, and MPSO. The experimental results show that LSGJO is better than the other 10 algorithms; the minimum weight of the multiple-disk clutch is f(y⃗) = 0.2352425, with the optimal solution y⃗ = {69.9999928, 90.0000000, 1.0000000, 945.1761801, 2.0000000}. Table 9. Comparison results of the multiple-disk clutch design problem. The best results of the experiments are shown in bold.


Discussion
Every metaheuristic should be critically evaluated [58]. Metaheuristic algorithms are created to solve practical problems, but the algorithm proposed in this paper has only been shown to be effective on numerical optimization problems; its universality on other problems remains unproven. Moreover, it obtains an approximate solution to the optimization problem rather than an exact one, which is worthy of further improvement and is also our future work. The improved algorithm is closer to the hunting behavior of the real golden jackal than the original algorithm, but a gap still remains between the model and real hunting; how to establish a mathematical model consistent with the actual hunting state is worth studying. Determining which components of the algorithm affect performance on a given optimization problem is also an important issue that will help us further improve the algorithm.

Conclusions
In order to improve the efficiency of GJO in global numerical optimization and practical design problems, a hybrid GJO and golden sine algorithm with dynamic lens-imaging learning is proposed in this paper. LSGJO makes two effective improvements over GJO. Firstly, a candidate for the optimal solution is generated by the dynamic lens-imaging learning strategy, which increases the possibility of finding the optimal value quickly. Secondly, novel dual golden spiral update rules are introduced in the exploitation stage to accelerate convergence and avoid falling into local optima. Combining the two improvements enhances and balances the algorithm's global and local search abilities. Twenty-three benchmark functions in three dimensions (30, 100, 500) were tested to evaluate the performance of LSGJO. Experimental results and statistical data show that the proposed algorithm has a fast convergence speed, high convergence precision, strong robustness, and stable searching performance. Compared to 11 state-of-the-art optimization algorithms, LSGJO is highly competitive. In addition, LSGJO was successfully applied to three real-world engineering problems in the mechanical field (speed reducer design, gear train design, and multiple-disk clutch design), and its optimization effect was better than that of the other algorithms.
In the future, the potential of LSGJO will be explored and focused on applications, and research in other directions, such as (1) path planning for unmanned aerial vehicles (UAVs), (2) the use of the oppositional learning method in the initialization stage, and (3) a multiobjective optimization algorithm based on LSGJO, will be studied and applied to feature selection and process parameter optimization.