An Enhanced Neural Network Algorithm with Quasi-Oppositional-Based and Chaotic Sine-Cosine Learning Strategies

Global optimization problems are of great interest across engineering applications, and the neural network algorithm (NNA) is one of the most widely used methods for solving them. However, when tackling complex optimization problems, the NNA is prone to poor local optima and slow convergence. To overcome these shortcomings, an improved neural network algorithm with quasi-oppositional-based and chaotic sine-cosine learning strategies is proposed that speeds up convergence and avoids becoming trapped in local optima. First, quasi-oppositional-based learning facilitates the exploration and exploitation of the search space. Second, a new logistic chaotic sine-cosine learning strategy, obtained by integrating logistic chaotic mapping with the sine-cosine strategy, enhances the ability to escape local optima. Moreover, a dynamic tuning factor based on piecewise linear chaotic mapping adjusts the exploration space to improve convergence. Finally, the validity and applicability of the proposed algorithm are evaluated on the challenging CEC 2017 functions and three engineering optimization problems. Comparative results in terms of averages, standard deviations, and Wilcoxon rank-sum tests reveal that the presented algorithm attains excellent global optimality and convergence speed on most functions and engineering problems.


Introduction
In contemporary practical applications, it is imperative to tackle a wide variety of optimization problems. These encompass the optimization of route planning [1,2], production scheduling [3,4], energy systems [5], nonlinear programming [6], supply chains [7], facility layout [8], medical registration [9], and unmanned systems [10], among others. These projects typically involve an enormous amount of information and constraints, for which conventional algorithms struggle to find an optimal solution within a reasonable timeframe. Consequently, investigating efficient approaches to these intricate optimization processes has become an extremely challenging research domain. After relentless efforts, researchers have developed numerous optimization methods, commonly classified into deterministic and meta-heuristic approaches to intricate optimization issues.
Deterministic methods are problem-solving approaches that rely on rigorous logic and mathematical models, effectively utilizing extensive gradient information to search for optimal or near-optimal solutions [11]. However, their strong dependence on the initial starting point makes it easy to produce identical results. In the real world, optimization problems are often highly intricate and exhibit nonlinear characteristics [12], frequently involving multiple local optima within the objective function. Consequently, deterministic methods often have difficulty escaping local minima when dealing with complex optimization problems [13,14]. Metaheuristics, in contrast, are inspired by phenomena observed in nature and simulate these phenomena to efficiently optimize and solve problems without relying on complex gradient information and mathematical principles, thereby better exploring optimal solutions [15][16][17]. For instance, the grey wolf optimization (GWO) [18] replicates the social hunting behavior of grey wolves; the artificial immune algorithm (AIA) [19] mimics the evolutionary process of the human immune system to adaptively adjust solution quality; and the ant colony optimization (ACO) [20] emulates the pheromone-based foraging behavior of ants. It is noteworthy that the parameters of metaheuristic algorithms can be classified into two categories [21]: common parameters and special parameters. Common parameters encompass the foundational settings that govern the behavior of an algorithm, such as population size and termination criteria. Specific parameters, on the other hand, are tailored to the unique characteristics of individual algorithms; for instance, in simulated annealing (SA) [22], configuring the initial temperature and cooling rate is crucial for achieving optimal outcomes. Given the sensitivity of these algorithms to their input data, any improper tuning of specific parameters may lead to increased computational effort or the conundrum of local optimality when treating various sorts of problems.
Heuristic algorithms featuring no special parameters have therefore gained immense relevance. The neural network algorithm (NNA) [23], which draws inspiration from artificial neural networks and biological nervous systems, emerged in 2018 as a promising method for achieving globally optimal solutions. A distinguishing trait of the NNA among well-known heuristic algorithms is that it relies only on common parameters; hence, no extra parameters are required. This universality greatly enhances its adaptability across a range of engineering applications. Nevertheless, the NNA is confronted with two notable constraints: susceptibility to local optima and sluggish convergence speed. Therefore, many improved optimization algorithms have been offered to ameliorate the defects of the NNA. For example, the competitive learning chaos neural network algorithm (CCLNNA) [24] integrates the NNA with competitive mechanisms and chaotic mapping; the effective hybrid algorithm TLNNA combines the TLBO algorithm with the NNA [25]; the gray wolf optimization neural network algorithm (GNNA) was created by combining GWO with the NNA [26]; and a neural network algorithm with dropout using elite selection [27] introduced the dropout strategy of neural networks together with an elite selection strategy. Moreover, by the no-free-lunch theorem [28], no single algorithm can be applied to all optimization problems. It is therefore essential to keep refining existing algorithms, developing novel ones, and integrating multiple algorithms for better results in practical applications. In this paper, a quasi-oppositional and chaotic sine-cosine neural network algorithm (QOCSCNNA) is proposed to boost the global search capability and refine the convergence performance of the NNA. The main contributions of this work are listed below:

•
To maintain the diversity of QOCSCNNA populations, quasi-oppositional-based learning (QOBL) [29] is introduced, in which quasi-opposite populations are randomly generated between the center of the solution space and the opposite space. This contributes to a better balance of exploration and exploitation and makes these populations more likely to lie close to the optimum.

•
By integrating logistic chaotic mapping [30] and the sine-cosine strategy [31], a new logistic chaotic sine-cosine learning strategy (LCSC) is proposed that helps the algorithm escape local optima in the bias strategy phase.

•
To improve the convergence performance of QOCSCNNA, a dynamic tuning factor based on piecewise linear chaotic mapping [32] is employed to adjust the probabilities of applying the bias and transfer operators.

•
The optimization performance of QOCSCNNA was verified on 29 numerical optimization problems from the CEC 2017 test suite [33], as well as three real-world constrained engineering problems.
The remainder of this paper is organized as follows: a brief introduction to the original NNA is given in Section 2. Section 3 describes the proposed QOCSCNNA in detail. Section 4 validates the performance of QOCSCNNA on the CEC 2017 test suite and explores its application to real-world engineering design problems. Finally, the main conclusions of this paper are summarized in Section 5, and further research directions are proposed.

NNA
Artificial neural networks (ANNs) are mathematical models based on the principles of biological neural networks, aiming to simulate the information-processing mechanisms of the human brain. ANNs are used for prediction primarily by receiving input and output data and inferring the relationship between them. The input data for an ANN are typically obtained through experiments, computations, and other means, and the weights are iteratively adjusted to minimize the error between the predicted solution and the target solution, as shown in Figure 1. However, the target solution is sometimes unknown. To address this, the authors of the NNA treat the current best solution as the target solution and keep adjusting the weights of each neuron to approach it. The NNA is a population-based evolutionary algorithm involving population initialization, weight matrix updates, a bias operator, and a transfer operator.

Initial Population
In the NNA, the population is updated using a neural-network-model-like approach. In the search space, the initial population X^r = (x_1^r, x_2^r, ..., x_N^r) is updated through the weight matrix W^r = (w_1^r, w_2^r, ..., w_N^r) for any generation r. Here, x_i^r represents the i-th individual vector with D dimensions, and w_i^r represents the i-th weight vector with N_p components; thus x_i^r = (x_{i,1}^r, x_{i,2}^r, ..., x_{i,D}^r) and w_i^r = (w_{i,1}^r, w_{i,2}^r, ..., w_{i,N_p}^r), where i = 1, 2, ..., N_p. It is desirable to impose constraints on the weights associated with new model solutions so that significant biases are prevented in the generation and transmission of these solutions; in this way, the NNA regulates its behavior through subtle deviations. After initializing the weights, the weight corresponding to the desired solution (X_target), i.e., the target weight (W_target), is chosen from the weight matrix W. Therefore, the summation of the weight matrix must adhere to the following conditions:

    sum_{j=1}^{N_p} w_{i,j}^r = 1,  i = 1, 2, ..., N_p,    (1)

where

    w_{i,j}^r ~ U(0, 1).    (2)

In addition, the formula generating a new population at the (r + 1)-th iteration can be expressed by:

    x_{j,new}^{r+1} = sum_{i=1}^{N_p} w_{i,j}^r × x_i^r,  j = 1, 2, ..., N_p,    (3)

    x_i^{r+1} = x_i^r + x_{i,new}^{r+1},    (4)

where N_p is the population size, r is the current number of iterations, and x_{i,new}^r is the weighted solution of the i-th individual at iteration r.
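The weight-matrix population update described above can be sketched in a few lines of NumPy; the population size, dimension, and search bounds below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
Np, D = 5, 3                       # illustrative population size and dimension

# Initial population X (Np x D) in an assumed search range [-10, 10].
X = rng.uniform(-10.0, 10.0, size=(Np, D))

# Weight matrix W (Np x Np): weights drawn from U(0, 1), with each column
# normalised to sum to 1 so the constraint on the weights holds.
W = rng.uniform(0.0, 1.0, size=(Np, Np))
W /= W.sum(axis=0, keepdims=True)

# New pattern solutions: solution j is the weight-blended combination of the
# whole population, x_new_j = sum_i w_ij * x_i, then added to the old position.
X_new = W.T @ X
X_next = X + X_new
```

Because each column of W sums to 1, every row of X_new is a convex combination of the current population, so the new pattern solutions stay inside the population's bounding box before the position shift is applied.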

Update Weight Matrix
The weight matrix is then adjusted toward the desired target weight W_target using the following formula:

    w_i^{r+1} = w_i^r + 2α × (w_target^r − w_i^r),  i = 1, 2, ..., N_p,    (5)

where α is a uniformly distributed random number in (0, 1) and w_target^r is the vector of optimal target weights obtained in each iteration.

Bias Operator
To enhance the global search capability of the NNA, a bias operator is incorporated to fine-tune the probability that pattern solutions are adjusted after being generated from the new population and the updated weight matrices. A modification factor β precisely defines the probability of adjusting a pattern solution. Initially, β is set to 1 and is progressively decreased in each iteration:

    β^{r+1} = 0.99 × β^r.    (6)

The bias operator encompasses two components: the bias population and the bias weight matrix. To begin, a number P_n = round(D × β^r) and a set P are generated. Let L = (l_1, l_2, ..., l_D) and U = (u_1, u_2, ..., u_D) be the lower and upper limits of the variables, and let P denote a set of P_n integers randomly selected from {1, ..., D}. The bias population can then be formulated as follows:

    x_{i,P(s)}^r = l_{P(s)} + (u_{P(s)} − l_{P(s)}) × α_1,  s = 1, 2, ..., P_n,    (7)

where α_1 is a random number between 0 and 1 that obeys a uniform distribution. The bias weight matrix likewise involves two variables: P_w = round(N_p × β^r), and Q, a set of P_w integers randomly chosen from {1, ..., N_p}. The bias weight matrix can then be formulated as follows:

    w_{Q(s),i}^r = α_2,  s = 1, 2, ..., P_w,    (8)

where α_2 is a random number between 0 and 1, following a uniform distribution.
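The bias step can be sketched as follows, under the assumption that biased weight columns are re-normalised so that they still sum to 1 (a detail the text does not spell out); sizes, bounds, and β are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
Np, D = 5, 3
beta = 0.8                                     # modification factor at iteration r
L = np.full(D, -10.0)                          # lower variable bounds l
U = np.full(D, 10.0)                           # upper variable bounds u
X = rng.uniform(L, U, size=(Np, D))
W = rng.uniform(0.0, 1.0, size=(Np, Np))
W /= W.sum(axis=0, keepdims=True)

i = 0                                          # individual being biased

# Bias population: re-sample round(D * beta) randomly chosen coordinates
# of solution i uniformly inside [l, u].
n_bias = max(1, round(D * beta))
P = rng.choice(D, size=n_bias, replace=False)
X[i, P] = L[P] + (U[P] - L[P]) * rng.uniform(size=n_bias)

# Bias weight matrix: overwrite round(Np * beta) randomly chosen weights of
# neuron i with fresh U(0, 1) draws, then restore the sum-to-1 constraint
# (the renormalisation is our assumption).
P_w = max(1, round(Np * beta))
Q = rng.choice(Np, size=P_w, replace=False)
W[Q, i] = rng.uniform(size=P_w)
W[:, i] /= W[:, i].sum()
```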

Transfer Operator
A transfer function operator (TF) is introduced that moves the new pattern solution from its current position toward a new position in the search space closer to the target solution x_target^r. This operator can be denoted as:

    x_i^{r+1} = x_i^{r+1} + 2α_3 × (x_target^r − x_i^{r+1}),    (9)

where α_3 is a random number between 0 and 1 that follows a uniform distribution. Based on the above statements, the overall NNA framework is given as pseudocode in Algorithm 1.
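The transfer step is a single line: a random pull toward the target with step length 2α₃, sketched here with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 3
x_i = rng.uniform(-10.0, 10.0, size=D)      # current pattern solution
x_target = np.zeros(D)                       # best solution found so far (assumed)

# Transfer operator: move toward the target with random step length 2 * alpha3.
alpha3 = rng.uniform()
x_new = x_i + 2.0 * alpha3 * (x_target - x_i)
```

Since the step is 2α₃ with α₃ ~ U(0, 1), the move can overshoot the target: the new distance to the target is |1 − 2α₃| times the old one, which retains some exploration even late in the run.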

Algorithm 1:
The pseudocode of the NNA
1. Initialize the population X^r and the weight matrix W^r
2. Calculate the fitness value of each solution, then set X_target and W_target
3. for i = 1 : N_p
4.   Generate the new solution x_i^r by Equation (3) and the new weight matrix w_i^r by Equation (5)
5.   if rand ≤ β^r
6.     Perform the bias operator for x_i^{r+1} by Equation (7) and for the weight matrix w_i^{r+1} by Equation (8)
7.   else
8.     Perform the transfer function operator for x_i^r via Equation (9)
9.   end if
10. end for
11. Generate the new modification factor β^{r+1} by Equation (6)
12. Calculate the fitness value of each solution and find the optimal solution and the optimal weight
13. Until (stop condition = false)
14. Post-process results and visualization

Quasi-Oppositional-Based Learning Strategy
Opposition-based learning (OBL) theory [34] was proposed by Tizhoosh to combine the evaluation of existing solutions and their opposites to improve the quality of candidate solutions. The OBL strategy can provide more accurate candidate solutions. Moreover, OBL theory evolved into quasi-oppositional-based learning (QOBL), whose candidates show a higher probability of approaching the unknown optimal solution than those generated by OBL [29]. To enhance the quality of solutions and the convergence speed, researchers have integrated QOBL into metaheuristic methods.
The opposite point is the symmetric point of a given point with respect to the center of the solution space. Figure 2 shows the positions of the current point X, the opposite point ∼X, and the quasi-opposite point QH within the one-dimensional space [A, B]. If X = (x_1, x_2, ..., x_n) represents a point in an n-dimensional space with each coordinate x_j ∈ [A_j, B_j], the opposite point ∼X = (∼x_1, ∼x_2, ..., ∼x_n) corresponding to X is given by ∼x_j = A_j + B_j − x_j.

Furthermore, the quasi-opposite point QH = (qx_1, qx_2, ..., qx_n) is randomly generated between the opposite point and the center point M = (A + B)/2 of the solution space. The quasi-opposite point QH of ∼X can be generated as follows [29]:

    qx_j = M_j + k × (∼x_j − M_j),  j = 1, 2, ..., n,

where k is a uniformly distributed random number between 0 and 1.
In this study, QOBL performs the initialization and generation jumping of QOCSCNNA. In the initialization phase, quasi-opposite populations are created from the randomly generated initial population, and the combined set is used to define the optimal solution of the inception phase. The generation jumping phase drives the algorithm, during the selection process, to jump to the solution with the better fitness value; a greedy strategy decides whether to keep the current solution or leap to its quasi-opposite counterpart. The pseudocode for the QOBL strategy is presented in Algorithm 2.
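In one dimension, the quasi-opposite point is a uniform draw between the interval centre and the opposite point; a minimal sketch (the interval and seed are illustrative):

```python
import random

random.seed(3)

def quasi_opposite(x, a, b):
    """Quasi-opposite point of x in [a, b]: a uniform draw between the
    interval centre m = (a + b) / 2 and the opposite point a + b - x."""
    m = (a + b) / 2.0
    x_opp = a + b - x
    k = random.random()                # k ~ U(0, 1)
    return m + k * (x_opp - m)

x = 2.0
qh = quasi_opposite(x, 0.0, 10.0)      # lies between the centre 5 and opposite 8
```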

Sine-Cosine Learning Strategy
To improve the performance of meta-heuristic algorithms, Mirjalili introduced the sine-cosine learning strategy (SCLS) [31]. The core idea of this strategy is that the current solution is updated using the sine and cosine functions, which effectively prevents the algorithm from falling into a local optimum. The update is defined as follows [31]:

    x_{i,j}^{r+1} = x_{i,j}^r + u_1 × sin(u_2) × |u_3 × x_{target,j}^r − x_{i,j}^r|,  if u_4 < 0.5,
    x_{i,j}^{r+1} = x_{i,j}^r + u_1 × cos(u_2) × |u_3 × x_{target,j}^r − x_{i,j}^r|,  if u_4 ≥ 0.5,

where r is the current iteration number, x_{i,j}^r is the position of the i-th individual in the j-th dimension at iteration r, and x_target^r is the optimal solution of the previous generation. u_2 is greater than 0 and less than √2, acting as the radius of the circles; u_3 is a random number between 0 and 2 that controls the distance to the optimal solution and maintains the diversity of the population; and u_4 is a random number between 0 and 1. u_1 is the cosine amplitude adjustment factor, set as follows:

    u_1 = a − r × (a/R),    (12)

where a is a constant amplitude and R is the maximum number of iterations.
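One SCLS position update can be sketched as follows; the initial amplitude a = 2 is an assumption carried over from the original sine-cosine algorithm [31], and the function name is ours:

```python
import math
import random

random.seed(4)

def sine_cosine_step(x, x_target, r, R, a=2.0):
    """One sine-cosine position update in one dimension.

    u1 decays linearly over the run (a is an assumed initial amplitude),
    u3 rescales the attraction to the target, and u4 picks the branch.
    """
    u1 = a * (1.0 - r / R)
    u2 = random.uniform(0.0, math.sqrt(2.0))
    u3 = random.uniform(0.0, 2.0)
    u4 = random.random()
    pull = abs(u3 * x_target - x)
    if u4 < 0.5:
        return x + u1 * math.sin(u2) * pull
    return x + u1 * math.cos(u2) * pull

x_new = sine_cosine_step(1.0, 0.0, r=100, R=1000)
```

As r approaches R the amplitude u1 shrinks to zero, so the strategy transitions from exploration toward pure exploitation.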

Logistic Chaos Mapping
The exploratory potential of chaos optimization algorithms can be further enhanced by leveraging the traversal and stochastic attributes of chaotic variables to improve the diversity of the population [35]. In this work, the well-known logistic chaos mapping (LCM) is used to generate chaotic candidate solutions. The logistic map is formulated as follows [30]:

    c_{z+1} = p × c_z × (1 − c_z),

where c_z ∈ (0, 1) for all z ∈ {0, 1, ..., N_P} and p = 3.8. Based on the chaotic sequence generated by the logistic map, candidate solutions are obtained by mapping each chaotic value into the search range:

    δ_z = l + c_z × (u − l).

In the bias operator, a novel strategy, the logistic chaotic sine-cosine learning strategy (LCSC), is obtained by integrating the LCM with the SCLS so that the candidate solutions become more chaotic and explore the design space more thoroughly. This mechanism serves as a preventive measure against premature convergence in later iterations. The new solution of Equation (16) is generated by applying the sine-cosine update to the chaotic candidate δ_z, where r is the current iteration number; x_{i,j}^r is the position of the i-th individual in the r-th iteration; u_1 is the sine-cosine amplitude adjustment factor defined in Equation (12); u_2 is greater than 0 and smaller than √2 (the radius of the circle); u_3 is a random number between 0 and 2; and u_4 is a random number between 0 and 1. The pseudocode of the bias operator modified by the LCSC is given in Algorithm 3.
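A sketch of the chaotic candidate generation: iterating the logistic map with p = 3.8 and then mapping each chaotic value into an assumed search range [l, u] (the seed and range are illustrative):

```python
def logistic_sequence(c0, n, p=3.8):
    """n values of the logistic chaotic map c_{z+1} = p * c_z * (1 - c_z)."""
    seq = [c0]
    for _ in range(n - 1):
        seq.append(p * seq[-1] * (1.0 - seq[-1]))
    return seq

chaos = logistic_sequence(0.7, 100)        # c_0 = 0.7 is an arbitrary seed

# Map each chaotic value into the search range to obtain chaotic candidates.
l, u = -10.0, 10.0
candidates = [l + c * (u - l) for c in chaos]
```

With p = 3.8 the iterates stay inside (0, p/4] ⊂ (0, 1) and behave chaotically, so every candidate falls inside [l, u] while still traversing the range irregularly.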

Algorithm 3:
The Bias Operator
P_n signifies the number of biased variables in the new pattern solution
P_w signifies the number of biased variables in the updated weight matrix
for i = 1 : N_P
  if rand ≤ β
    %Bias for new pattern solution%
    P_n = round(D × β)
    Update the chaotic sequence δ_z using Equation (14)
    for j = 1 : P_n
      Update the new pattern solution x^{r+1}_{i, Integer rand[0, D]} by Equation (16)
    end for
    %Bias for updated weight%
    P_w = round(N_P × β)
    for j = 1 : P_w
      Update the weight w^r_{j, Integer rand[0, N_P]} by Equation (8)
    end for
  end if
end for

The Dynamic Tuning Factor
Since the bias operator decreases as the number of iterations increases, a piecewise linear chaotic map (PWLCM) [32] is introduced to dynamically tune the chances of running the different learning strategies, helping QOCSCNNA converge faster as iterations accumulate. The PWLCM is denoted in Equation (17):

    β^{r+1} = β^r / k,               if 0 ≤ β^r < k,
    β^{r+1} = (1 − β^r) / (1 − k),   if k ≤ β^r < 1,    (17)

where β^r represents the function mapping value at the r-th iteration and k is a control parameter between 0 and 1. In this study, the improved algorithm fusing the original NNA with the QOBL strategy, the LCSC bias operator, and the PWLCM factor is called QOCSCNNA. The detailed flowchart is shown in Figure 3, and the pseudocode for QOCSCNNA can be found in Algorithm 4.

Algorithm 4: The pseudocode of QOCSCNNA
1. Initialize the number of iterations r (r = 1) and the dynamic tuning factor β^r (β^1 = 1)
2. Randomly generate an initial population X
3. Generate quasi-opposite solutions QH_i using Algorithm 2
4. Calculate the fitness value of the combined set {X, QH}, then make the greedy selection to obtain QH
5. Randomly generate the weight matrix considering the constraints imposed in Equations (1) and (2)
6. Set the optimal solution x_target^r and the optimal weight w_target^r
7. While r < r_max
8.   Generate the new pattern solution x_i^r by Equations (3)-(4) and the new weight matrix w_i^r by Equation (5)
9.   Calculate the fitness value of X
10.  if fit(QH_i) < fit(X_i)
11.    X_i = QH_i
12.  end if
13.  if rand ≤ β^r
14.    Perform the bias operator using Algorithm 3
15.  else
16.    Perform the transfer function operator for x_i^r via Equation (9)
17.  end if
18.  Calculate the fitness value of each solution and find the optimal solution x_target^{r+1} and the optimal weight w_target^{r+1}
19.  Update the current number of iterations by r = r + 1
20.  Update the dynamic tuning factor β^{r+1} by Equation (17)
21. End while
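The dynamic tuning factor can be iterated with a few lines; the control parameter k = 0.7 and the seed are illustrative assumptions:

```python
def pwlcm(beta, k=0.7):
    """One step of the piecewise linear chaotic map used as the tuning factor."""
    if beta < k:
        return beta / k
    return (1.0 - beta) / (1.0 - k)

# Trace of the factor over 50 iterations from an arbitrary seed in (0, 1).
beta = 0.31
trace = []
for _ in range(50):
    beta = pwlcm(beta)
    trace.append(beta)
```

Unlike a monotonically decaying β, the PWLCM keeps the factor oscillating over (0, 1), so both the bias and transfer operators remain reachable throughout the run.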

Numerical Experiments and Result Analysis
This section examines the properties of the proposed QOCSCNNA on numerical optimization problems and is divided into three subsections. Section 4.1 details the CEC 2017 test functions and the experimental environment that ensures the reliability of the results. Section 4.2 provides a comparative analysis between QOCSCNNA and eight other metaheuristics on the CEC 2017 functions, validating the effectiveness of the improved algorithm. Finally, Section 4.3 compares the performance of the algorithm with other algorithms on three engineering problems of practical significance.

Experiment Setup
The broadly used CEC 2017 test suite [33] is specifically dedicated to evaluating the performance of complex optimization algorithms. The suite consists of 30 test functions covering a wide range of test requirements, giving a comprehensive view of the performance characteristics of optimization algorithms. For unavoidable reasons, the F2 test function could not be tested, so only 29 functions were evaluated. These functions can be categorized into four types, each with different levels of complexity and characteristics. First, the unimodal functions (F1, F3) have a clear optimal solution and are suitable for assessing the behavior of an algorithm on simple problems. Second, the simple multimodal functions (F4-F9) have multiple local optima and can be used to test the robustness and convergence of an algorithm during local search. The third category, the hybrid functions (F11-F20), combines unimodal and multimodal characteristics and is closer to complex real-world problems, enabling a comprehensive assessment of an algorithm's global and local search capability. Finally, the composition functions (F21-F30) are combinations of other functions. The specific functions are shown in Table 1. Furthermore, all algorithms were placed under the same test conditions to ensure fairness, and experiments were conducted using MATLAB R2022a under macOS 12.3 on an M1 machine. In the CEC 2017 suite, the population size was set to 50 and the dimensionality to 10.
To fully evaluate the performance of the algorithms, the maximum number of function evaluations was set to 20,000 times the population size.This setup ensures a thorough exploration of the search space, thus improving the optimization results.It is noted that the other parameters required to compare the algorithms were extracted directly from the original references to keep the consistency of the results.Moreover, there were 30 independent runs of each algorithm execution to get reliable results, and the average value (AVG) and standard deviation (STD) of the obtained results were logged.

QOCSCNNA for Unconstrained Benchmark Functions
To evaluate the performance of the improved algorithm, QOCSCNNA was compared with eight other well-known optimization algorithms: NNA, CSO [36], SA [22], HHO [37], WOA [38], SCA [31], WDE [39], and RSA [40]. Based on the experimental settings outlined in Section 4.1, the average (AVG) and standard deviation (STD) of the minimum fitness values obtained on the CEC 2017 benchmark functions are presented in Table A1, with the smallest average and standard deviation highlighted in bold. Compared with the other algorithms, QOCSCNNA demonstrated significant superiority in both AVG and STD on the CEC 2017 functions. Moreover, given the limited evaluation budget, QOCSCNNA attained relatively small means and standard deviations on a range of functions including F1, F4, F5, F7, F8, F10-F17, F19-F21, F27, F29, and F30. These results highlight QOCSCNNA's superior ability to tackle optimization problems characterized by complexity and hybridity.
The results of the Wilcoxon rank-sum test ("+", "=", and "−" indicate that QOCSCNNA performs better than, the same as, or worse than the other algorithms, respectively) are shown in Table A2 to better compare the performance of the different algorithms. As can be seen in the last row of Table A2, QOCSCNNA achieved significantly superior results to SA, SCA, RSA, and CSO on more than 28 test functions, beats HHO and WOA on more than 26 functions, and exceeds WDE and NNA on 23 functions. In other words, the average superiority rate of QOCSCNNA over the 29 functions is 92.24% (computed as sum_{i=1}^{8} (+)_i / (29 × 8) × 100%). These results indicate that adopting the LCSC can effectively improve the optimization capability of the NNA.
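The comparison protocol (30 independent runs summarised by a rank-sum test) can be reproduced with a short script; the routine below is a generic normal-approximation rank-sum in plain Python, not the authors' exact test code, and the run data are toy values:

```python
from math import sqrt
from statistics import NormalDist

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.

    Adequate for samples of ~30 runs; tied values receive mid-ranks
    (no tie correction is applied to the variance).
    """
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    # Assign mid-ranks: tied values share the average of their positions.
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2.0
        i = j
    r1 = sum(rank_of[v] for v in a)              # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0                # expected rank sum
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mu) / sigma
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# Fitness values from 30 independent runs of two algorithms (toy data).
runs_a = [1.0 + 0.01 * i for i in range(30)]     # consistently smaller
runs_b = [2.0 + 0.01 * i for i in range(30)]
p = ranksum_p(runs_a, runs_b)                    # p << 0.05: a "+" result
```

A "+" is recorded when the p-value falls below the chosen significance level and the first algorithm's results are the smaller ones.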
Figure 4 gives nine convergence plots of QOCSCNNA and the comparison algorithms on the CEC 2017 test set, covering F1, F8, F10, F12, F16, F21, F24, F29, and F30, where the vertical axis is the logarithm of the function's minimum value and the horizontal axis denotes the number of function evaluations. Although QOCSCNNA sometimes does not perform best in the initial phase, as the number of iterations increases it finds smaller fitness values by repeatedly escaping local optima. This good performance arises because the exploration of QOBL enhances the global search capability.

Real-World Engineering Design Problems
Furthermore, to validate the feasibility of QOCSCNNA for actual engineering applications, multiple algorithms were applied to three critical engineering design problems: cantilever beam structures (CB) [41], car side impact (CSI) [41], and tension spring (TS) [41]. For all three problems, the population size was set to 50 with an iteration count of 2000 times the population size. Moreover, each algorithm was independently run 30 times to obtain reliable results. These settings ensured a thorough exploration of the search space, leading to improved optimization results. Additionally, the solutions provided by QOCSCNNA were compared with those of well-known algorithms to better evaluate its performance.

CB Engineering Design Problem
The CB structural engineering design problem involves the weight optimization of a cantilever beam with square cross-sections. The beam is rigidly supported at one end, while vertical forces act on the free node of the cantilever. A model of the CB design problem is illustrated in Figure 5. The beam consists of five hollow squares of equal thickness, with the height (or width) of each square being a decision variable; the thickness of the squares is held constant at 2/3. The objective function of this design problem can be represented by Equation (18).
Minimize:

    f(x) = 0.0624 × (x_1 + x_2 + x_3 + x_4 + x_5).    (18)

Subject to:

    g(x) = 61/x_1^3 + 37/x_2^3 + 19/x_3^3 + 7/x_4^3 + 1/x_5^3 − 1 ≤ 0.    (19)

Variable range: 0.01 ≤ x_i ≤ 100, i = 1, ..., 5.
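The CB objective and constraint of Equations (18) and (19) are cheap to evaluate, which makes it easy to check any reported design for feasibility; the design vector below is an illustrative near-optimal solution commonly reported for this benchmark, not the values from Table 2:

```python
def cb_objective(x):
    """Cantilever beam weight, Equation (18): 0.0624 * sum of section heights."""
    return 0.0624 * sum(x)

def cb_constraint(x):
    """Equation (19); the design is feasible when g(x) <= 0."""
    coeffs = (61.0, 37.0, 19.0, 7.0, 1.0)
    return sum(c / xi**3 for c, xi in zip(coeffs, x)) - 1.0

x_best = (6.016, 5.309, 4.494, 3.502, 2.153)   # illustrative near-optimal design
weight = cb_objective(x_best)                  # ~1.34, comparable to Table 2
feasible = cb_constraint(x_best) <= 1e-6       # small tolerance for rounding
```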

Real-World Engineering Design Problems
Furthermore, to validate the feasibility of the QOCSCNNA for actual engineering applications, multiple algorithms were utilized to address the critical engineering design problems of cantilever beam structures (CB) [41], car side impact (CSI) [41], and tension spring (TS) [41].For three problems, a population size of 50 was set with an iteration count of 2000 times the population size.Moreover, each algorithm was independently run 30 times to obtain reliable results.Such settings ensured thorough exploration of the search space, leading to improved optimization results.Additionally, the solution provided by QOCSCNNA was compared to well-known algorithms to better evaluate its performance.

CB Engineering Design Problem
The weight optimization of a square cross-section cantilever beam is involved in the CB structural engineering design. The beam has a rigid support at one extremity, while vertical forces act on the free nodes of the cantilever. A model of the CB design problem is illustrated in Figure 5. The beam consists of five hollow squares of equal thickness, with the height (or width) of each square being the decision variable; the thickness of these squares remains constant at 2/3. The objective function of this design problem can be represented by Equation (18), subject to the constraint in Equation (19) and the variable range 0.01 ≤ xi ≤ 100, i = 1, …, 5.

This problem has been solved by several researchers using different meta-heuristic methods, such as the NNA, WOA, SCA, SA, and PSO used in [42]. Table 2 reveals that the optimal result of QOCSCNNA is 1.3548, and the optimal constraints obtained from QOCSCNNA satisfy Equation (19), which proves the validity of the optimal solution obtained by QOCSCNNA. In addition, the optimal solutions of WOA and SA are 1.3567 and 1.3569, respectively, which are very close to the best result of QOCSCNNA. In contrast, NNA, PSO, and SCA produce poorer optimal solutions, which indicates that these three algorithms are less suitable for this problem. Furthermore, the Wilcoxon rank-sum test results (+, =, and − indicating better, equal, or worse performance of QOCSCNNA compared with the other algorithms) show that QOCSCNNA outperforms NNA, PSO, and SCA. Hence, it can be concluded that the proposed QOCSCNNA demonstrates superior feasibility compared with the other algorithms.
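For reference, the widely used cantilever-beam formulation from the literature can be coded as below. The coefficient 0.0624 and the constraint constants (61, 37, 19, 7, 1) are the commonly published values and are assumed here to correspond to Equations (18) and (19); the sample design point is illustrative, not the paper's reported optimum.

```python
def cb_weight(x):
    """CB objective (assumed to match Eq. (18)): beam weight,
    proportional to the sum of the five section heights x1..x5."""
    return 0.0624 * sum(x)

def cb_constraint(x):
    """Single behaviour constraint in g(x) <= 0 form (assumed Eq. (19))."""
    coeffs = (61.0, 37.0, 19.0, 7.0, 1.0)
    return sum(c / xi ** 3 for c, xi in zip(coeffs, x)) - 1.0

def cb_feasible(x):
    """Feasibility: behaviour constraint plus the range 0.01 <= xi <= 100."""
    return cb_constraint(x) <= 0.0 and all(0.01 <= xi <= 100.0 for xi in x)

# An illustrative feasible design; its weight is close to the ~1.35
# optima reported in Table 2.
x = (6.1, 5.4, 4.6, 3.6, 2.2)
weight = cb_weight(x)
```

Penalizing `cb_constraint(x)` when it is positive is the usual way such a constrained benchmark is handed to a population-based optimizer.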

CSI Engineering Design Problem
As shown in Table 3, 11 parameters should be considered when minimizing the impact of a side collision on a vehicle. Figure 6 illustrates the model of the CSI crash design problem. The objective function of this design problem can be expressed as Equation (21), subject to the constraints in Equations (22)-(32) and the variable ranges; representative constraints include:

G4(x) = 28.98 + 3.818x3 − 4.2x1x2 + 0.0207x5x10 + 6.63x6x9 − 7.7x7x8 + 0.32x9x10 − 32 ≤ 0
G5(x) = 0.261 − 0.0159x1x2 − 0.188x1x8 − 0.019x2x7 + 0.0144x3x5 + 0.0008757x5x10 + 0.08045x6x9 + 0.00139x8x11 + 0.00001575x10x11 − 0.32 ≤ 0
G6(x) = 0.214 + 0.00817x5 − 0.131x1x8 − 0.0704x1x9 + 0.03099x2x6 − 0.018x2x7 + 0.0208x3x8 − 0.02x2² + 0.121x3x9 − 0.00364x5x6 + 0.0007715x5x10 − 0.0005354x6x10 + 0.00121x8x11 + 0.00184x9x10 − 0.32 ≤ 0 (27)
G10(x) = 16.45 − 0.489x3x7 − 0.843x5x6 + 0.0432x9x10 − 0.0556x9x11 − 0.000786x11² − 15.7 ≤ 0

The CSI problem is a widely studied classical engineering design problem, and many heuristics have been proposed to solve it over the years, including NNA, SA, WOA, PSO, and SCA. According to the comparative experimental results (Table 4), the presented QOCSCNNA achieves the optimal fitness value of 23.4538, and the corresponding optimal constraints satisfy Equations (22)-(32). This validates the efficacy of the optimal results obtained by QOCSCNNA. In addition, the optimal results of NNA, SA, WOA, PSO, and SCA are significantly higher than those of QOCSCNNA, which indicates that QOCSCNNA holds a clear advantage among the five algorithms for solving this problem. The Wilcoxon rank-sum test likewise shows that QOCSCNNA is superior to the other algorithms, further confirming its feasibility.

TS Engineering Design Problem
The goal of the TS problem is to reduce the weight of the spring, as illustrated in Figure 7. Minimum deflection, shear stress, surge frequency, outer diameter limits, and limitations on the design variables need to be considered in the design process. The parameters are the average coil diameter D (denoted as x1), the wire diameter d (denoted as x2), and the effective number of coils N (denoted as x3). The problem is formulated with its objective function subject to the constraints in Equations (34)-(38) and the variable ranges.

Several researchers have tried various meta-heuristics to solve this problem, including NNA, SA, WOA, PSO, and HHO. Table 5 presents the optimal solutions obtained by QOCSCNNA and the comparative algorithms; the proposed QOCSCNNA obtains the best solution, i.e., 0.0127. The constraints at the optimal cost achieved by QOCSCNNA meet Equations (34)-(38), which implies that the best solution provided by QOCSCNNA is valid. In addition, HHO attains an optimal fitness value of 0.0129, which is nearly the same as the optimal result of QOCSCNNA. On the contrary, the optimal solutions of NNA, SA, WOA, and PSO are inferior, which means QOCSCNNA and HHO have significant advantages. Moreover, a comparison of the Wilcoxon rank-sum test results shows that QOCSCNNA outperforms NNA, SA, WOA, and PSO. Therefore, it can be concluded that QOCSCNNA is a more efficient and feasible method than the other algorithms.
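As a concrete reference, the standard tension/compression-spring formulation from the literature can be coded as below, using the paper's variable order (x1 = D, x2 = d, x3 = N). The constraint constants are the commonly published ones and are assumed to correspond to Equations (34)-(38); the sample design point is illustrative, not the paper's reported optimum.

```python
def ts_weight(x):
    """TS objective: spring weight f = (N + 2) * D * d^2,
    with x = (D, d, N) following the paper's variable order."""
    D, d, N = x
    return (N + 2.0) * D * d ** 2

def ts_constraints(x):
    """Standard g(x) <= 0 constraints (assumed to match Eqs. (34)-(38))."""
    D, d, N = x
    g1 = 1.0 - D ** 3 * N / (71785.0 * d ** 4)                     # deflection
    g2 = ((4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
          + 1.0 / (5108.0 * d ** 2) - 1.0)                         # shear stress
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)                           # surge frequency
    g4 = (D + d) / 1.5 - 1.0                                       # outer diameter
    return (g1, g2, g3, g4)

# An illustrative feasible design near the ~0.0127 best weight in Table 5.
x = (0.36, 0.052, 11.6)
weight = ts_weight(x)
```

Evaluating `ts_constraints` at a candidate and penalizing positive components is the usual way this benchmark is handed to a population-based optimizer.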

Conclusions and Future Works
This paper proposes an improved NNA based on a quasi-oppositional-based strategy, a piecewise linear chaotic mapping operator, and a logistic chaotic sine-cosine learning strategy, to enhance global search capability and convergence. More specifically, QOBL generates quasi-opposite solutions between the opposite solutions and the center of the solution space during the initialization phase, helping to balance exploration and exploitation through generation jumping. A new LCSC strategy, integrating LCM and SCLS, is proposed to help the algorithm escape local optima at the bias strategy stage. Moreover, a dynamic adjustment factor that varies with the number of evaluations is presented, which facilitates tuning of the search space and accelerates convergence. To demonstrate the validity of QOCSCNNA, its performance on numerical optimization problems is investigated by solving the challenging CEC 2017 functions. The averages and standard deviations of the comparison experiments on 29 test functions show that QOCSCNNA outperforms NNA on 23 functions and beats the other 7 algorithms on more than half of the test functions. Meanwhile, the Wilcoxon rank-sum test and convergence analysis indicate that QOCSCNNA significantly outperforms the other algorithms. Furthermore, QOCSCNNA and the comparative algorithms were applied to three real-world engineering design problems, and the results further evidence the applicability of the algorithm to practical projects.
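The interplay of the chaotic map and the sine-cosine move summarized above can be sketched as follows. This is a minimal illustration, not the paper's exact LCSC operator: the logistic map with control parameter 4 is standard, while the specific way the chaotic value enters the sine-cosine update (here, as the angle) and the decay schedule of the amplitude are assumptions.

```python
import math
import random

def logistic_map(z):
    """Logistic chaotic map z' = 4 z (1 - z) in the fully chaotic regime."""
    return 4.0 * z * (1.0 - z)

def sine_cosine_step(x, best, z, t, t_max, rng):
    """One chaotic sine-cosine move toward the current best solution:
    the step amplitude r1 shrinks with the evaluation counter t, and the
    chaotic value z drives the angle r2 instead of a uniform random draw.
    Sketch only -- the paper's LCSC operator may differ in detail."""
    r1 = 2.0 * (1.0 - t / t_max)          # shrinking exploration radius
    r2 = 2.0 * math.pi * z                # chaotic angle
    out = []
    for xi, bi in zip(x, best):
        r3 = 2.0 * rng.random()
        trig = math.sin(r2) if rng.random() < 0.5 else math.cos(r2)
        out.append(xi + r1 * trig * abs(r3 * bi - xi))
    return out

# Iterate the chaotic sequence, then take one update step toward the best point.
rng = random.Random(1)
z = 0.7
for _ in range(100):
    z = logistic_map(z)
new_pos = sine_cosine_step([1.0, 2.0], [0.0, 0.0], z, 10, 100, rng)
```

Because the chaotic sequence is ergodic over (0, 1) rather than repeating, successive angles cover the search directions more evenly than independent uniform draws, which is the usual motivation for chaotic learning strategies.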
For future research, we will concentrate on two areas. First, QOCSCNNA will continue to be improved to address more complex real-world engineering optimization problems, including intelligent traffic management, supply chain optimization, and large-scale unmanned aircraft systems. Second, even though QOCSCNNA can greatly enhance the

Figure 1. Structure of an artificial neural network.


Algorithm 1: The pseudocode of the NNA.
1. Initialize the population and the weight matrix.
2. Calculate the fitness value of each solution, then set the current best solution and best weight.
3. for i = 1 : maximum number of iterations
4. …
Figure 2 shows the positions of the current point X, the opposite point X̄, and the quasi-opposite point Xq within the one-dimensional space [a, b]. X = (x1, x2, …, xn) represents a point in an n-dimensional space, where each coordinate satisfies xi ∈ [ai, bi] for i = 1, 2, …, n. The opposite point X̄ = (x̄1, x̄2, …, x̄n) corresponding to a generated X is defined as: x̄i = ai + bi − xi.
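The opposite and quasi-opposite points can be computed as below. The opposite point follows the formula given above; the quasi-opposite point is drawn uniformly between the interval centre and the opposite coordinate, which matches the standard QOBL definition in the literature and is assumed to match the paper's.

```python
import random

def opposite(x, lo, hi):
    """Opposite point of x in the box [lo, hi]: x_bar_i = a_i + b_i - x_i."""
    return [a + b - xi for xi, a, b in zip(x, lo, hi)]

def quasi_opposite(x, lo, hi, rng):
    """Quasi-opposite point: a uniform draw between the interval centre
    c_i = (a_i + b_i) / 2 and the opposite coordinate x_bar_i."""
    out = []
    for xi, a, b in zip(x, lo, hi):
        c = (a + b) / 2.0
        xb = a + b - xi
        out.append(rng.uniform(min(c, xb), max(c, xb)))
    return out

rng = random.Random(0)
xb = opposite([2.0], [0.0], [10.0])              # -> [8.0]
xq = quasi_opposite([2.0], [0.0], [10.0], rng)   # somewhere in [5.0, 8.0]
```

Sampling between the centre and the opposite point, rather than taking the opposite point itself, keeps the generated candidates biased toward the middle of the search space, which is where optima are more likely a priori.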

Figure 2. One-dimensional space opposite points and quasi-opposite points.


Figure 4. Convergence graph of QOCSCNNA and its competitors.


Figure 5. The model for the CB design problem [41].


Table 1. The definition of the CEC 2017 test suite.


Table 2. Comparison results between QOCSCNNA and its competitors on the CB design problem.

Table 3. Influence parameters of the weight of the door.

Table 4. Comparison results between QOCSCNNA and its competitors on the CSI design problem.


Table 5. Comparison results between QOCSCNNA and its competitors on the TS design problem.