Abstract
The trust region method is an effective method for solving unconstrained optimization problems. An ill-chosen rule for updating the trust region radius increases the number of iterations and degrades computational efficiency. To obtain an effective radius, an adaptive radius updating criterion is proposed based on the gradient at the current iterate and the eigenvalues of the Hessian matrix, which avoids computing the inverse of the Hessian during the radius update. This approach reduces the computation time and enhances the algorithm's performance. On this basis, we apply the adaptive radius and a non-monotone technique to the trust region framework and propose an improved non-monotone adaptive trust region algorithm. Under suitable assumptions, the convergence of the algorithm is analyzed. Numerical experiments confirm that the proposed algorithm is effective.
MSC:
49M37; 65K05; 90C30
1. Introduction
Consider the following unconstrained optimization problem:
$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$
where $f: \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable. There are two types of optimization problems: constrained optimization and unconstrained optimization. Constrained optimization problems are typically addressed by being converted into unconstrained optimization problems. Unconstrained optimization problems can be solved using various techniques, including the conjugate gradient approach [1,2], the trust region method [3,4], and the Newton method [5,6]. The conjugate gradient method requires that the coefficient matrix be not only symmetric but also positive definite, which may not be applicable for matrices with non-positive definite coefficients [7]. The Newton method needs to calculate the inverse of the Hessian matrix, and the selection of initial points is difficult in practice [8]. The trust region algorithm has strong convergence and robustness, and it has become one of the effective methods for solving unconstrained optimization problems [9,10].
Many practical application problems which arise in applied mathematics, economics, and engineering can be translated into Equation (1) to be solved [11]. As one of the effective algorithms for solving unconstrained optimization problems, the trust region has many applications in real life. For example, it can be extended to deal with constrained optimization problems, variational inequality problems, and nonlinear complementary problems [12]. Of course, the trust region algorithm can also be extended to reinforcement learning [13] and support vector machines [14] in machine learning, the electrical impedance tomography problem [15], and solving the inverse problem related to seismic spectrum analysis of Pn waves [16] as well as combined with the epitaxial implicit method [17]. Therefore, it is necessary to propose a trust region algorithm which can effectively solve unconstrained optimization problems.
The underlying principle of trust region (TR) methods is that a trial step $d_k$ is obtained at each iterate $x_k$ by solving the following subproblem:
$$\min_{d \in \mathbb{R}^n} \; m_k(d) = f(x_k) + g_k^T d + \frac{1}{2} d^T B_k d \quad \text{s.t.} \quad \|d\| \le \Delta_k, \qquad (2)$$
where $\|\cdot\|$ is the Euclidean norm, $g_k = \nabla f(x_k)$, $B_k$ is a symmetric approximation of the Hessian $\nabla^2 f(x_k)$, and $\Delta_k > 0$ is the TR radius [18]. Next, the TR ratio $\rho_k$ is used to compute the agreement between the predicted and actual reductions. It has the definition
$$\rho_k = \frac{\mathrm{Ared}_k}{\mathrm{Pred}_k},$$
where the predicted reduction $\mathrm{Pred}_k$ and the actual reduction $\mathrm{Ared}_k$ are given by
$$\mathrm{Pred}_k = m_k(0) - m_k(d_k), \qquad \mathrm{Ared}_k = f(x_k) - f(x_k + d_k).$$
The iteration form of the TR method is described below:
$$x_{k+1} = \begin{cases} x_k + d_k, & \rho_k \ge \mu, \\ x_k, & \rho_k < \mu. \end{cases}$$
If $\rho_k \ge \mu$ for a specified $\mu \in (0, 1)$, then the trial step $d_k$ is accepted and $x_{k+1} = x_k + d_k$; in this case, the TR radius is kept or suitably enlarged. In contrast, if $\rho_k < \mu$, the step $d_k$ is rejected, the current point is retained for the next iteration, and the TR radius is suitably decreased. This procedure is continued until the convergence requirements hold.
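The acceptance test and the classical (non-adaptive) radius update described above can be sketched as follows. This is a minimal illustration in the standard TR framework, not the paper's adaptive criterion of Equation (7); the function names and the default values of `mu`, `shrink`, and `grow` are our own illustrative choices:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(B, d):
    return [dot(row, d) for row in B]

def tr_ratio(f, x, g, B, d):
    """rho_k = Ared_k / Pred_k for the quadratic model
    m_k(d) = f(x_k) + g_k^T d + 0.5 * d^T B_k d."""
    pred = -(dot(g, d) + 0.5 * dot(d, matvec(B, d)))   # m_k(0) - m_k(d_k)
    trial = [xi + di for xi, di in zip(x, d)]
    ared = f(x) - f(trial)                             # f(x_k) - f(x_k + d_k)
    return ared / pred

def accept_step(rho, delta, mu=0.25, shrink=0.5, grow=2.0):
    """Accept the trial step iff rho >= mu; resize the radius accordingly."""
    if rho >= mu:
        return True, grow * delta     # successful step: enlarge the radius
    return False, shrink * delta      # rejected step: shrink the radius
```

For a purely quadratic objective with $B_k$ equal to its Hessian, the model is exact, so the ratio equals one and every step is accepted.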
In the traditional TR algorithm, it is essential to select an appropriate initial value and updating criteria for the TR radius. If the initial radius or the radius updating criteria are chosen poorly, the number of iterations of the trust region algorithm increases. Based on this, Sartenaer [19] proposed a criterion that automatically determines the initial radius from gradient information. Later, Lin et al. [20] proposed an improved criterion for selecting the initial radius that depends on an adjustable constant.
It can be noted that when the sequence generated by the traditional TR algorithm converges to an approximate optimal point, the TR ratio $\rho_k$ may converge to one. Therefore, according to the iterative steps of the algorithm, once k is large enough, the radius $\Delta_k$ may become much larger than necessary. Moreover, the information of $g_k$ and $B_k$ generated at the current iterate is not used to revise the radius $\Delta_k$, which greatly increases the number of subproblems (Equation (2)) to be solved and thus reduces the computational efficiency of the algorithm. To avoid these problems, Zhang [21] proposed some variants of the adaptive TR method based on the following updating formula:
in which , is a constant and . Subsequently, Zhang et al. [22] proposed an adaptive radius formula containing and as follows:
with ℓ and q as defined above and for some nonnegative integer i and the Hessian approximation as a positive definite matrix. To avoid calculating at each iteration point , Shi and Wang [23] proposed
with ℓ, q, and as defined above. To avoid counting , Sheng et al. [24] proposed a variant:
where , , and . Recently, Wang et al. [25] employed a new radius updating criterion in which the updating depends on the value of the TR ratio. Since the TR algorithm relies heavily on the choice of the TR radius, constructing an adaptive TR radius that speeds up the solution of the subproblems is vital.
Each iteration of the monotone trust region method requires the objective function value to decrease. However, numerical experiments show that this forced monotonicity may significantly reduce the convergence rate, especially for objective functions with highly eccentric level curves [26,27,28]. Therefore, allowing the function value to increase to some extent while maintaining global convergence is advantageous. Non-monotone methods are characterized by the fact that they do not enforce strict monotonicity of the objective function value over successive iterations. It has been shown that non-monotone techniques can improve both the likelihood of finding a global optimum and the convergence rate of an algorithm [26,29]. Therefore, many researchers have combined non-monotone techniques with trust regions to form new methods for solving optimization problems [28,30,31,32,33]. Chamberlain et al. [34] proposed the watchdog technique, which belongs to the class of non-monotone line search techniques. Next, Grippo et al. [26] extended Newton's method with another non-monotone line search methodology and used it to solve unconstrained optimization problems. The idea of this non-monotone line search technique is that, for a given $\gamma \in (0, 1)$, one should select the step length $\lambda_k$ such that the following condition holds:
$$f(x_k + \lambda_k d_k) \le f_{l(k)} + \gamma \lambda_k g_k^T d_k,$$
where the non-monotone term is defined as
$$f_{l(k)} = \max_{0 \le j \le m(k)} \{f(x_{k-j})\}, \qquad (4)$$
where $m(0) = 0$, $0 \le m(k) \le \min\{m(k-1) + 1, M\}$, and M is a positive integer. However, some numerical experiments indicate that there are still certain issues with Grippo's non-monotone method [35]. For instance, the current function value cannot be fully exploited. Moreover, the traditional non-monotone technique is highly dependent on the choice of the parameter M, and thus the numerical results of the algorithm may vary greatly for different values of M. To address these shortcomings, Zhang and Hager [29] proposed another non-monotone technique as follows:
$$C_{k+1} = \frac{\eta_k Q_k C_k + f(x_{k+1})}{Q_{k+1}}$$
and
$$Q_{k+1} = \eta_k Q_k + 1,$$
where $C_0 = f(x_0)$, $Q_0 = 1$, and $\eta_k \in [\eta_{\min}, \eta_{\max}] \subseteq [0, 1]$. Because this non-monotone technique needs to update $C_k$ and $Q_k$ at each iteration, which affects the computation rate, Ahookhosh and Amini [35] combined $f_{l(k)}$ and $f_k$ in a convex manner to form another efficient non-monotone term $R_k$:
$$R_k = \eta_k f_{l(k)} + (1 - \eta_k) f_k,$$
where $\eta_k \in [\eta_{\min}, \eta_{\max}] \subseteq [0, 1]$. Compared with $f_{l(k)}$, $R_k$ has the following advantages: it can not only make full use of the current objective function value but also retain the current best value to a certain extent, and better convergence can be achieved by selecting different parameters. Since a non-monotone algorithm does not require the objective function to decrease strictly over successive iterations, applying the non-monotone technique to the trust region algorithm can increase both the likelihood of finding the global optimal solution and the algorithm's computation rate [30,36,37].
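The difference between Grippo's reference value $f_{l(k)}$ and the convex combination $R_k$ of Ahookhosh and Amini can be made concrete with a short sketch. The helper names and the sliding-window handling of $m(k)$ are our own illustrative choices:

```python
def grippo_fl(f_hist, M):
    """f_{l(k)} = max_{0 <= j <= m(k)} f_{k-j}, with m(k) capped at M:
    the largest objective value among the most recent m(k)+1 iterates."""
    m = min(len(f_hist) - 1, M)
    return max(f_hist[-(m + 1):])

def nonmonotone_R(f_hist, M, eta):
    """R_k = eta * f_{l(k)} + (1 - eta) * f_k: eta = 0 recovers the
    monotone value f_k, while eta = 1 recovers Grippo's f_{l(k)}."""
    return eta * grippo_fl(f_hist, M) + (1.0 - eta) * f_hist[-1]
```

Because $R_k$ interpolates between $f_k$ and $f_{l(k)}$, choosing $\eta_k$ tunes how much non-monotonicity the acceptance test tolerates.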
Because the convergence rate of the trust region algorithm depends on the updating of the trust region radius, this paper aims to construct a new formula for the trust region radius. Combined with the non-monotone technique, a new non-monotone adaptive trust region algorithm is proposed to solve unconstrained optimization problems.
In this paper, in order to fully utilize the gradient and Hessian matrix at the current iterate, we propose an improved adaptive radius updating criterion. In the improved radius updating formula, an upper bound on the eigenvalues of the matrix $B_k$ is computed instead of the inverse $B_k^{-1}$.
The rest of this article is organized as follows. In Section 2, we present a new adaptive trust region radius and propose a non-monotonic adaptive trust region algorithm. In Section 3, we demonstrate this algorithm’s global convergence. In Section 4, numerical experiments are carried out, and the new method’s effectiveness is demonstrated by the numerical results. Finally, we conclude the paper in Section 5 with a few closing thoughts.
2. The New Algorithm
In this section, an adaptive radius-updating formula is proposed. The improved adaptive radius updating criterion is described in detail below.
At the iterate $x_k$, we set $\lambda_i^{(k)}$ to be the ith eigenvalue of the matrix $B_k$. According to the Geršgorin circle theorem [12,38], we obtain
$$\lambda_i^{(k)} \le b_{ii}^{(k)} + \sum_{j \ne i} \left|b_{ij}^{(k)}\right|,$$
where the right-hand side can be thought of as an upper bound on the ith eigenvalue of $B_k$ and the element $b_{ij}^{(k)}$ is the one placed at the ith row and jth column of the matrix $B_k$.
Let
where and .
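The Geršgorin bound above is cheap to evaluate: for each row it needs only the diagonal entry plus the absolute off-diagonal row sum, with no factorization or inversion of $B_k$. A minimal sketch (the function name is ours):

```python
def gershgorin_upper_bound(B):
    """Every eigenvalue of B lies in the union of discs centered at B[i][i]
    with radius sum_{j != i} |B[i][j]|, so the largest right endpoint over
    all rows bounds every eigenvalue from above (O(n^2) work)."""
    return max(
        row[i] + sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(B)
    )
```

This $O(n^2)$ cost is the reason the criterion avoids the $O(n^3)$ expense of forming $B_k^{-1}$.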
Based on this, the new adaptive trust region radius proposed in this paper is as follows:
where
and . Obviously, we adjust the size of $\Delta_{k+1}$ using the trust region ratio, which makes better use of the information of the objective function to adjust the trust region radius, so as to effectively decrease the number of subproblems (Equation (2)) to be solved.
The adaptive radius updating criterion proposed in this paper not only makes full use of the first- and second-order information of the objective function but also avoids solving the inverse matrix $B_k^{-1}$. Based on the non-monotone technique and the new adaptive trust region radius (Equation (7)), we propose an improved non-monotone adaptive trust region algorithm (NATR). In our algorithm, the non-monotone term is $R_k$, where $f_{l(k)}$ is given by Equation (4), and the trust region ratio is determined by
$$\hat{\rho}_k = \frac{R_k - f(x_k + d_k)}{m_k(0) - m_k(d_k)},$$
where $d_k$ is the trial step calculated from Equation (2).
The procedure of NATR is described by Algorithm 1.
| Algorithm 1 An improved non-monotonic adaptive trust region algorithm (NATR). |
| Step 0. (Initialization) Start with and the symmetric matrix . The constants |
| , , , , , , , |
| , and are also given. Set . |
| Step 1. If $\|g_k\| \le \varepsilon$, then stop. |
| Step 2. Solve the subproblem (Equation (2)) to find the trial step $d_k$. |
| Step 3. Compute $R_k$ and the trust region ratio $\hat{\rho}_k$. If $\hat{\rho}_k < \mu$, then go to Step 2. If $\hat{\rho}_k \ge \mu$, then set $x_{k+1} = x_k + d_k$, |
| and go to Step 4. |
| Step 4. Compute . If or , then set ; update $\Delta_{k+1}$, |
| where $\Delta_{k+1}$ is given by Equation (8), and set . |
| Step 5. Update $B_{k+1}$ by using a modified quasi-Newton formula. Set $k := k + 1$, and go to Step 1. |
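To make the overall flow of a non-monotone adaptive TR iteration concrete, here is a self-contained toy sketch. It substitutes the Cauchy point for the exact subproblem solution and a simple grow/shrink rule for the paper's Equations (7) and (8), and it holds the model Hessian `B` fixed instead of performing the quasi-Newton update of Step 5, so it illustrates the structure of Algorithm 1 rather than reproducing it:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(B, d):
    return [dot(row, d) for row in B]

def cauchy_point(g, B, delta):
    """Minimize the quadratic model along -g inside the ball ||d|| <= delta."""
    gnorm = dot(g, g) ** 0.5
    gBg = dot(g, matvec(B, g))
    tau = 1.0 if gBg <= 0 else min(1.0, gnorm ** 3 / (delta * gBg))
    return [-tau * delta * gi / gnorm for gi in g]

def nonmonotone_tr(f, grad, B, x, M=5, eta=0.5, mu=0.1,
                   delta=1.0, tol=1e-8, max_iter=200):
    f_hist = [f(x)]
    for _ in range(max_iter):
        g = grad(x)
        if dot(g, g) ** 0.5 <= tol:           # Step 1: stopping test
            break
        d = cauchy_point(g, B, delta)          # Step 2: approximate trial step
        pred = -(dot(g, d) + 0.5 * dot(d, matvec(B, d)))
        R = eta * max(f_hist[-M:]) + (1 - eta) * f_hist[-1]
        trial = [xi + di for xi, di in zip(x, d)]
        rho = (R - f(trial)) / pred            # Step 3: non-monotone ratio
        if rho >= mu:                          # successful iteration
            x, f_hist = trial, f_hist + [f(trial)]
            delta *= 2.0                       # Step 4: enlarge the radius
        else:
            delta *= 0.25                      # shrink and re-solve
    return x
```

On a convex quadratic with `B` set to the true Hessian, the loop reaches the minimizer in a handful of iterations, since the Cauchy point reduces to the exact Newton step once the radius is large enough.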
From Step 4 of the NATR algorithm, it can be seen that the sequence $\{\Delta_k\}$ is bounded; in other words, we have
for any .
Remark 1.
It can be noted that in this paper, the non-monotone adaptive trust region algorithm uses the gradient and the eigenvalues of the matrix $B_k$ when the radius is updated. Compared with [30,39] and [22], the radius in this paper avoids solving $B_k^{-1}$ and makes full use of the trust region ratio at each iteration to automatically adjust the radius $\Delta_k$, which makes better use of the information of the objective function at the current iterate.
Throughout this paper, we consider the following assumptions in order to analyze the convergence of the new algorithm:
H1: The level set $L(x_0) = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is a bounded closed set.
H2: There exist constants , , and such that , , and .
H3: Suppose that there is a constant such that , and is true.
Remark 2.
Similar to [10], for a solution $d_k$ to the subproblem in Equation (2), we have
$$\mathrm{Pred}_k = m_k(0) - m_k(d_k) \ge \frac{1}{2} \|g_k\| \min\left\{\Delta_k, \frac{\|g_k\|}{\|B_k\|}\right\}.$$
3. Convergence Analysis
In order to analyze the convergence properties of Algorithm 1, we also need to present the following lemmas:
Lemma 1.
Proof.
See [31] for the proof of Lemma 1. □
Lemma 2.
Assume that the NATR algorithm produced the sequence . Then, the sequence is a decreasing sequence.
Proof.
See the proof of Lemma 4 in [35]. □
Lemma 3.
Suppose that the sequence is generated by the NATR algorithm. Then, we have
and
for each .
Proof.
Lemma 4.
Assume that the NATR algorithm produces the sequence and . Then, there exists an index such that the corresponding iteration is successful (i.e., the trial step is accepted).
Proof.
Assume that there exists an integer constant k such that the point is not a successful point for any . Hence, we have that for any constant . From H2 and Remark 1, we obtain
Thus, from Lemma 1 and Equation (13), we have
When , we have , and subsequently
From Lemma 3 and Equation (15), we obtain
Therefore, when k is sufficiently large, . This contradicts the assumption that
and thus the hypothesis is not valid. □
Lemma 5.
Assume that H3 is true. Then, there exists a constant such that the following is satisfied:
Proof.
Set . Then, we have
□
Lemma 6.
Assume that H1 and H2 are true and the sequence is generated by the NATR algorithm. Then, we have
Proof.
See Lemma 7 in [35]. □
Lemma 7.
Assume that the sequence is generated by the NATR algorithm. Then, we have
Theorem 1.
(Global Convergence) Assume that H1, H2, and H3 are true, and the sequence is generated by the NATR algorithm. Then, we have
Proof.
By contradiction, we assume that there exists a constant $\varepsilon > 0$ such that $\|g_k\| \ge \varepsilon$ for any integer k.
From H2, Equation (23), Lemma 3, and the definition of , it follows that
This implies that
If the iteration is successful, that is, the trial step is accepted, then there exists a constant such that
for sufficiently large k values. This information and Equation (10) suggest that
Hence, the proof is completed. □
Remark 3.
It can be seen in Theorem 1 that the NATR algorithm has global convergence; that is, the algorithm is feasible theoretically.
4. Preliminary Numerical Experiments
In this section, we test and compare the NATR algorithm with the traditional trust region algorithm in [8] (TTR) and the non-monotone trust region algorithm proposed by Ahookhosh et al. in [35] (NTR). All unconstrained optimization test problems were selected from [40], and all of the numerical experiments were carried out in the MATLAB 2018 environment.
In all algorithms, we set , , , , , , , , , and . If , then we set , or if , then we set . Moreover, we set as in [35]. The matrix $B_{k+1}$ is updated by the BFGS formula in [41], as follows:
$$B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k},$$
where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$. Furthermore, we declare that an algorithm fails if it requires more than 10,000 iterations. All of the algorithms are stopped when the criterion in Step 1 is satisfied.
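The BFGS update used in the experiments can be sketched in a few lines. The curvature safeguard (skipping the update when $y_k^T s_k$ is not sufficiently positive) is a common practical addition on our part rather than something the paper specifies:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(B, s):
    return [dot(row, s) for row in B]

def bfgs_update(B, s, y):
    """B_{k+1} = B_k - (B_k s s^T B_k)/(s^T B_k s) + (y y^T)/(y^T s),
    with s = x_{k+1} - x_k and y = g_{k+1} - g_k."""
    Bs = matvec(B, s)
    sBs = dot(s, Bs)
    ys = dot(y, s)
    if ys <= 1e-12 or sBs <= 1e-12:
        return B  # skip the update to preserve positive definiteness
    n = len(s)
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys
             for j in range(n)] for i in range(n)]
```

The updated matrix satisfies the secant condition $B_{k+1} s_k = y_k$, so the model curvature matches the most recent gradient change.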
In Table 1, the name of the test function is denoted as “function”, the dimension of the test function is denoted as “n”, the number of iterations of the algorithm is denoted as “k”, and the time used by the algorithm to solve the test function is denoted as “Cpu” (in seconds). If the algorithm is iterated more than 10,000 times, then it fails and is marked with “F”.
Table 1.
Numerical results.
According to the numerical results in Table 1, we find that the NATR algorithm required fewer iterations than the TTR and NTR methods when solving these unconstrained optimization problems. In particular, for the perturbed quadratic diagonal function and the diagonal 2 function, the NATR algorithm solved the problem in few iterations, whereas both the NTR and TTR algorithms exceeded 10,000 iterations, which means that they could not effectively solve these problems. For the POWER function (CUTE), the CPU time required by the NATR algorithm was only 1/30 and 1/3 of that needed by the NTR and TTR methods, respectively. This is because neither the NTR nor the TTR method adjusts the trust region radius appropriately, while the radius of the NATR algorithm is automatically adjusted according to the gradient at the current iterate and the eigenvalues of the Hessian matrix, thus reducing both the number of iterations and the computational cost. For most test functions, the three algorithms terminated with the same function value. For some functions, such as the POWER function (CUTE), the NATR algorithm not only required fewer iterations and less CPU time, but the final function value was also smaller.
For higher-dimensional problems, such as the 3000-dimensional HIMMELBG and QUARTC (CUTE) test functions and the 5000-dimensional perturbed quadratic diagonal function, the NATR algorithm solved them effectively, whereas the TTR algorithm failed. This shows that the NATR algorithm is also effective for higher-dimensional problems.
Overall, the numerical results for both high- and low-dimensional test problems demonstrate that our new algorithm was effective for most test problems and required fewer iterations. In general, we can infer that the NATR algorithm is more efficient than the traditional trust region (TTR) algorithm and the non-monotone trust region (NTR) algorithm in terms of both the number of iterations and the running time. Therefore, the NATR algorithm can effectively solve unconstrained optimization problems.
5. Conclusions
In this paper, based on the gradient at the current iterate and the eigenvalues of the matrix $B_k$, we proposed a new improved adaptive radius updating criterion which reduces the computational effort and improves the performance of the algorithm. We then proposed an improved non-monotone adaptive trust region (NATR) algorithm by combining the adaptive radius updating criterion with the non-monotone technique. Under certain common assumptions, the global convergence of the NATR algorithm was demonstrated. Finally, numerical experiments were conducted on the three algorithms in the same environment using a set of test functions. The experiments show that the NATR algorithm can effectively solve unconstrained problems, and for some test functions, both the number of iterations and the CPU time of the NATR algorithm were lower than those of the NTR and TTR algorithms, demonstrating that it is an effective algorithm.
In the NATR algorithm, when the trust region ratio does not meet the requirement, the subproblem is solved again to find a trial step that satisfies the condition. In the future, we will consider combining the trust region algorithm with a line search method to avoid repeatedly solving the subproblem. In addition, further applications of the non-monotone trust region method will be studied, such as nonlinear fractional-order multi-agent systems [42] and cooperative control for linear delta operator systems [43].
Author Contributions
Writing—original draft, M.X.; Writing—review & editing, Q.Z.; Visualization, H.X.; Supervision, H.X. All authors contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the specialized research fund of YiBin University (Grant No. 412-2021QH027).
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Acknowledgments
The authors would like to thank the Editor and the anonymous referees for their helpful comments and valuable suggestions regarding this article.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Andrei, N. Another hybrid conjugate gradient algorithm for unconstrained optimization. Numer. Algorithms 2008, 47, 143–156. [Google Scholar] [CrossRef]
- Fatemi, M. A new efficient conjugate gradient method for unconstrained optimization. J. Comput. Appl. Math. 2016, 300, 207–216. [Google Scholar] [CrossRef]
- Kimiaei, M.; Ghaderi, S. A new restarting adaptive trust-region method for unconstrained optimization. J. Oper. Res. Soc. China 2017, 5, 487–507. [Google Scholar] [CrossRef]
- Xue, Y.; Liu, H.; Liu, Z. An improved nonmonotone adaptive trust region method. Appl. Math. 2019, 64, 1–16. [Google Scholar] [CrossRef]
- Gao, H.; Zhang, H.B.; Li, Z.B. A nonmonotone inexact Newton method for unconstrained optimization. Optim. Lett. 2017, 11, 947–965. [Google Scholar] [CrossRef]
- Wei, Z.; Li, G.; Qi, L. New quasi-Newton methods for unconstrained optimization problems. Appl. Math. Comput. 2006, 175, 1156–1188. [Google Scholar] [CrossRef]
- Fletcher, R. Conjugate gradient methods for indefinite systems. In Numerical Analysis: Proceedings of the Dundee Conference on Numerical Analysis, 1975; Springer: Berlin/Heidelberg, Germany, 2006; pp. 73–89. [Google Scholar]
- Ma, C.F. Optimization Method and Matlab Program Design; Science Press: Beijing, China, 2010. [Google Scholar]
- Mo, J.; Zhang, K.; Wei, Z. A nonmonotone trust region method for unconstrained optimization. Appl. Math. Comput. 2005, 171, 371–384. [Google Scholar] [CrossRef]
- Nocedal, J.; Yuan, Y. Combining trust region and line search techniques. In Advances in Nonlinear Programming: Proceedings of the 96 International Conference on Nonlinear Programming; Springer: New York, NY, USA, 1998; pp. 153–175. [Google Scholar]
- Jiang, X.; Huang, Z. An accelerated relaxed-inertial strategy based CGP algorithm with restart technique for constrained nonlinear pseudo-monotone equations to image de-blurring problems. J. Comput. Appl. Math. 2024, 447, 115887. [Google Scholar] [CrossRef]
- Liu, J.; Xu, X.; Cui, X. An accelerated nonmonotone trust region method with adaptive trust region for unconstrained optimization. Comput. Optim. Appl. 2018, 69, 77–97. [Google Scholar] [CrossRef]
- Xu, H.; Xuan, J.; Zhang, G.; Lu, J. Trust region policy optimization via entropy regularization for Kullback–Leibler divergence constraint. Neurocomputing 2024, 589, 127716. [Google Scholar] [CrossRef]
- Yu, Z.; Yuan, Y.; Tian, P. An efficient trust region algorithm with bounded iteration sequence for unconstrained optimization and its application in support vector machine. J. Comput. Appl. Math. 2024, 449, 115956. [Google Scholar] [CrossRef]
- Goharian, M.; Soleimani, M.; Jegatheesan, A.; Moran, G.R. Regularization of EIT Problem Using Trust Region SubProblem Method; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
- Weiping, W.; Hongchun, W.; Haofeng, Z.; Xiong, X. A Pn-wave spectral inversion technique based on trust region reflective algorithm. J. Appl. Geophys. 2024, 230, 105525. [Google Scholar] [CrossRef]
- Ceng, L.; Yuan, Q. On a General Extragradient Implicit Method and Its Applications to Optimization. Symmetry 2020, 12, 124. [Google Scholar] [CrossRef]
- Sun, W.; Yuan, Y. Optimization Theory and Methods: Nonlinear Programming; Springer: New York, NY, USA, 2006. [Google Scholar]
- Sartenaer, A. Automatic determination of an initial trust region in nonlinear programming. SIAM J. Sci. Comput. 1997, 18, 1788–1803. [Google Scholar] [CrossRef]
- Lin, C.; Moré, J.J. Newton's method for large bound-constrained optimization problems. SIAM J. Optim. 1999, 9, 1100–1127. [Google Scholar] [CrossRef]
- Zhang, X. NN models for general nonlinear programming. In Neural Networks in Optimization; Springer: Boston, MA, USA, 2000; pp. 273–288. [Google Scholar]
- Zhang, X.; Zhang, J.; Liao, L. An adaptive trust region method and its convergence. Sci. China Ser. Math. 2002, 45, 620–631. [Google Scholar] [CrossRef]
- Shi, Z.; Wang, S. Nonmonotone adaptive trust region method. Eur. J. Oper. Res. 2011, 208, 28–36. [Google Scholar] [CrossRef]
- Sheng, Z.; Gonglin, Y.; Zengru, C. A new adaptive trust region algorithm for optimization problems. Acta Math. Sci. 2018, 38, 479–496. [Google Scholar] [CrossRef]
- Wang, X.; Ding, X.; Qu, Q. A new filter nonmonotone adaptive trust region method for unconstrained optimization. Symmetry 2020, 12, 208. [Google Scholar] [CrossRef]
- Grippo, L.; Lampariello, F.; Lucidi, S. A nonmonotone line search technique for Newton’s method. SIAM J. Numer. Anal. 1986, 23, 707–716. [Google Scholar] [CrossRef]
- Grippo, L.; Lampariello, F.; Lucidi, S. A truncated Newton method with nonmonotone line search for unconstrained optimization. J. Optim. Theory Appl. 1989, 60, 401–419. [Google Scholar] [CrossRef]
- Toint, P.L. Non-monotone trust-region algorithms for nonlinear optimization subject to convex constraints. Math. Program. 1997, 77, 69–94. [Google Scholar] [CrossRef]
- Zhang, H.; Hager, W. A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 2004, 14, 1043–1056. [Google Scholar] [CrossRef]
- Cui, Z.; Wu, B. A new modified nonmonotone adaptive trust region method for unconstrained optimization. Comput. Optim. Appl. 2012, 53, 795–806. [Google Scholar] [CrossRef]
- Reza Peyghami, M.; Ataee Tarzanagh, D. A relaxed nonmonotone adaptive trust region method for solving unconstrained optimization problems. Comput. Optim. Appl. 2015, 61, 321–341. [Google Scholar] [CrossRef]
- Rezaee, S.; Babaie-Kafaki, S. An adaptive nonmonotone trust region algorithm. Optim. Methods Softw. 2019, 34, 264–277. [Google Scholar] [CrossRef]
- Zhang, J.L.; Zhang, X.S. A nonmonotone adaptive trust region method and its convergence. Comput. Math. Appl. 2003, 45, 1469–1477. [Google Scholar] [CrossRef]
- Chamberlain, R.M.; Powell, M.J.D.; Lemarechal, C.; Pedersen, H.C. The watchdog technique for forcing convergence in algorithms for constrained optimization. In Algorithms for Constrained Minimization of Smooth Nonlinear Functions; Springer: Berlin/Heidelberg, Germany, 1982; pp. 1–17. [Google Scholar]
- Ahookhosh, M.; Amini, K. An efficient nonmonotone trust-region method for unconstrained optimization. Numer. Algorithms 2012, 59, 523–540. [Google Scholar] [CrossRef]
- Ahookhosh, M.; Amini, K. A nonmonotone trust region method with adaptive radius for unconstrained optimization problems. Comput. Math. Appl. 2010, 60, 411–422. [Google Scholar] [CrossRef]
- Fu, J.; Sun, W. Nonmonotone adaptive trust-region method for unconstrained optimization problems. Appl. Math. Comput. 2005, 163, 489–504. [Google Scholar] [CrossRef]
- Varga, R.S. Geršgorin and His Circles; Springer: Berlin/Heidelberg, Germany, 2010; Volume 36. [Google Scholar]
- Sang, Z.; Sun, Q. A self-adaptive trust region method with line search based on a simple subproblem model. J. Comput. Appl. Math. 2009, 232, 514–522. [Google Scholar] [CrossRef]
- Andrei, N. An unconstrained optimization test functions collection. Adv. Model. Optim. 2008, 10, 147–161. [Google Scholar]
- Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 2006. [Google Scholar]
- Yang, Y.; Qi, Q.; Hu, J.; Dai, J.; Yang, C. Adaptive Fault-Tolerant Control for Consensus of Nonlinear Fractional-Order Multi-Agent Systems with Diffusion. Fractal Fract. 2023, 7, 760. [Google Scholar] [CrossRef]
- Xue, Y.; Han, J.; Tu, Z.; Chen, X. Stability analysis and design of cooperative control for linear delta operator system. Aims Math. 2023, 8, 12671–12693. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).