Abstract
The implementation of Newton’s method for solving nonlinear equations in abstract domains requires the inversion of a linear operator at each step. Such an inversion may be computationally very expensive or even impossible to obtain. For this reason, alternative iterative methods are developed in this article that require no inversion, or only one inversion, of a linear operator at each step. The inverse of the operator is replaced by a frozen sum of linear operators depending on the Fréchet derivative of an operator. The numerical examples illustrate that, for all practical purposes, the new methods are as effective as Newton’s but much cheaper to implement. The same methodology can be used to create similar alternatives to other methods that rely on the inversion of linear operators, such as those based on divided differences.
Keywords:
Newton-type method; Banach space; frozen sum of operators; Fréchet derivative; convergence; inverse of a linear operator MSC:
65G99; 65H10; 49H17; 49M15
1. Introduction
A plethora of problems from mathematics, computational science, and other applied disciplines can be reduced using mathematical modeling to an equation of the form
where is a Fréchet-differentiable operator [,,]. The symbols stand for Banach spaces, and D denotes an open and convex subset of X. An analytical form of a solution of Equation (1) can be found only in rare cases. Therefore, researchers and practitioners develop iterative methods that approximate the solution, provided that certain (usually sufficient) convergence conditions are imposed on the initial approximation and the operator F [,,]. There are differences among certain types of convergence. In the local convergence, conditions are imposed on the solution, and a convergence ball is determined, which indicates the degree of difficulty in the selection of the initial approximation . The main drawback of the local convergence is that the solution is usually unknown. In the semi-local convergence, however, the conditions involve the initial approximation , and the solution is then found inside a ball centered at . There is extensive literature on both types of convergence based on Lipschitz, Hölder, or generalized continuity conditions imposed on the derivative in order to control it. The best-known iterative method is, without a doubt, Newton’s, which is defined as
where denotes the inverse of the Fréchet derivative of the operator The study of Newton’s method is carried out by means of local and semi-local convergence. The former uses information about the solution and provides estimates of a possible radius of convergence, error estimates on the norms , and results on the isolation of the solution. The latter type is based on information about the initial approximation and convergence conditions, which guarantee that [,,,].
In this article, we address an important issue with the implementation of Newton’s method. The order of convergence is two, which is considered fast for a single-step method. However, there are concerns with the implementation of Newton’s method: the inversion of the linear operator is required at every step in Euclidean, Hilbert, or Banach space. This operator may be a large matrix, or an operator whose analytical form is computationally expensive or impossible to find, or it may be very small (the case of small divisors) [,,,,,,,,].
The motivation of this article is to address this issue. This is achieved as follows.
First, let k denote a natural number and be the space of all linear operators that are bounded. Suppose that there exists such that the operators M and are invertible. Then, Newton’s method (2) can be rewritten as
Notice that clearly, the iterates of the two preceding versions of Newton’s method coincide, since
But even if the linear operator is fixed, we still need to invert , which is not a fixed linear operator in general. However, we can avoid this inversion, if we develop the operators and for k a fixed natural number, and consider the method for
This method requires the inversion of the fixed linear operator M only once. Therefore, method (4) is a useful alternative to Newton’s method using only one inverse.
If and , the method (4) reduces to the modified Newton’s method
We can also set . Many other choices of M and k are possible (see the numerical section).
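The construction above can be sketched in code. The following is a minimal illustration of a frozen-sum iteration in the spirit of method (4), assuming the choice M = F′(x₀), the truncation level k = 4, and a small polynomial test system; none of these specifics are taken from the article.

```python
import numpy as np

def F(x):
    # illustrative 2x2 nonlinear system (not from the article)
    return np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])

def dF(x):
    # Frechet derivative (Jacobian) of F
    return np.array([[2.0 * x[0], 1.0], [1.0, -2.0 * x[1]]])

def frozen_sum_newton(F, dF, x0, k=4, tol=1e-12, max_iter=50):
    """Frozen-sum iteration: Newton's inverse F'(x_n)^{-1} is replaced by the
    truncated sum (I + E + ... + E^k) M^{-1}, where E = I - M^{-1} F'(x_n)
    and the fixed operator M is inverted only once."""
    M_inv = np.linalg.inv(dF(x0))      # single inversion of the frozen operator
    x = np.asarray(x0, dtype=float)
    I = np.eye(len(x))
    for n in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        E = I - M_inv @ dF(x)
        S, P = I.copy(), I.copy()
        for _ in range(k):             # S = I + E + E^2 + ... + E^k
            P = P @ E
            S = S + P
        x = x - S @ (M_inv @ Fx)
    return x, n

x_star, iters = frozen_sum_newton(F, dF, np.array([1.0, 1.5]))
print(x_star, iters, np.linalg.norm(F(x_star)))
```

Only M is inverted, and only once; each subsequent step costs a few matrix products instead of a fresh linear solve.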
By letting , we see that , provided that this limit exists. The condition for each guarantees the existence of this limit. It is worth noting that if the linear operator B is invertible and the sequence converges to some , then (4) gives
so
Consequently, we deduce that , i.e., the limit point solves the equation . The same argument leads to the development of the method defined for
and each by
Clearly, the study of this method is analogous to the method (4). Based on the preceding reasoning it is worth studying the convergence of the method (4).
In particular, for the semi-local analysis, we develop the application of majorizing sequence defined for , some and each by
The scalar sequence as well as its convergence order are well studied [,]. The parameter depends on the initial approximation and the nonnegative parameters and are related to M and (see Section 2 and Section 3).
The study of the local convergence reduces to solving the inequality
to determine the convergence radius
for some and depending on and (see Section 4). The uniqueness of the solution is studied in Section 5. In this article, we also study in Section 6 the convergence of the Picard-type or Krasnoselskij method, given as
where
for some . Notice that if , the method (5) reduces to the method (4). The convergence of the method (5) is presented using the celebrated contraction mapping principle [,,]. In Section 7, an error analysis is provided comparing the sequence to the sequence generated by (4). Section 8 is developed to further validate the theory of the previous sections and compare the results to Newton’s method (2). The concluding remarks of Section 9 contain information about the future direction of our research.
The methodology of this article is very general. That is why it can be extended to other single-step methods using inverses of linear operators. Let us consider the Newton-like method defined by
where the linear operator and is considered to be an approximation to the actual derivative of the operator F [,,,]. Then both the local and semi-local results obtained for the method (4) are extended to the method (6) if we simply replace by in the definition of the operators A and B, since (6) reduces to (4) if . The same is true for the Picard method (5) but with operators A and B using instead of F. The same idea can be used on two-step, multi-step or multi-point methods using inverses of linear operators in an analogous way [,,,,,,,,,,,,,].
- Novelty of the Article
The article addresses the concerns with the implementation of Newton’s method, Picard-type methods, or any other method using inverses of linear operators, as described in the motivation for writing this article. This is achieved by simply replacing the inverse by a finite sum of linear operators. The semi-local and local analysis developed in the next sections shows the convergence of these hybrid methods to the solution of the equation. Moreover, the numerical experimentation further demonstrates that the number of iterations required by the hybrid methods is essentially the same as for the original methods, but the iterates are cheaper to find.
2. Convergence of Majorizing Sequence
Let and be given parameters. It is convenient for the semi-local convergence to be defined: the parameters
provided that , and the scalar sequence for , some , and each by
Notice that the parameters depend on k, which is fixed. This sequence is important in the study of the semi-local convergence of the method (4). In particular, it is shown to be majorizing for the sequence given by Formula (7) in the theorems that follow. However, first, some conditions are developed that establish the convergence of the sequence . The proof technique of recurrent polynomials has been used in [,,,] to weaken the sufficient semi-local convergence conditions for Newton’s method.
Under the preceding notation, the following result can be proven.
Lemma 1.
Suppose that the following conditions hold:
and
Then, the scalar sequence is non-negative, non-decreasing, and bounded from above by μ. Moreover, the following assertions hold:
and there exists such that the sequence converges to and
Proof.
The conditions of the Lemma and the definitions of the parameters imply and . By the definition of the parameters and , the assertion (11) holds if . Next, mathematical induction is used to establish the assertions
and
These assertions hold for by the definition of and . Then it follows by the definition of the sequence that , and
Thus, the assertions (13) and (14) hold if . Suppose that these assertions hold for all natural numbers up to m. Then, we have to show
and
The first assertion holds if instead
But this assertion motivates the introduction of consecutive polynomials defined on the interval by
A relation between two consecutive polynomials is useful:
- Define the function
This function is given explicitly, since by (21) at
Thus, the following holds for all m
Therefore, (14) holds. The induction for the assertions (13) and (14) is complete. Hence, the sequence is non-negative, non-decreasing, and bounded from above by , and as such it is convergent to some unique .
Then, for j a natural number, we get
By letting , we show (12). □
It is worth noting that the conditions of Lemma 1 are weaker than those given for the method (7) in [Lemma 4.2, p. 57] of [], where, however, and have a different meaning.
3. Semi-Local Analysis for Method (4)
Throughout this paper, the notation denotes the open ball centered at with radius , and denotes its closure. The following Banach Lemma is used to prove our results.
Theorem 1
(Banach Lemma on Invertible Operators [,,]). If T is a bounded linear operator in X, exists if and only if there is a bounded linear operator in X such that exists and
If exists, then
and
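In the special case where the candidate inverse is the identity, the lemma reduces to the familiar Neumann-series statement: if ‖I − T‖ < 1, then T is invertible and ‖T⁻¹‖ ≤ 1/(1 − ‖I − T‖). A quick numerical check of this bound, with an arbitrary random test matrix as an illustrative assumption:

```python
import numpy as np

# Perturbation of the identity: T = I + N with ||N|| small, so ||I - T|| < 1.
rng = np.random.default_rng(0)
n = 5
T = np.eye(n) + 0.1 * rng.standard_normal((n, n))

defect = np.linalg.norm(np.eye(n) - T, 2)      # spectral norm of I - T
assert defect < 1.0                            # hypothesis of the lemma holds
T_inv_norm = np.linalg.norm(np.linalg.inv(T), 2)
bound = 1.0 / (1.0 - defect)                   # Banach lemma bound on ||T^{-1}||
print(defect, T_inv_norm, bound)
assert T_inv_norm <= bound + 1e-12
```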
Further, we use majorizing sequences to prove the semi-local convergence. Recall the definition of a majorizing sequence.
Definition 1
([,,]). Let be a sequence in a normed space X. Then a nonnegative scalar sequence for which
holds, is a majorizing sequence for . Note that any majorizing sequence is necessarily nondecreasing. Moreover, if the sequence converges, then converges too, and for ,
Hence, the study of the convergence of the sequence reduces to that of
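Definition 1 can be illustrated with made-up data: a geometrically convergent vector sequence together with a scalar sequence chosen so that the majorization condition holds exactly, and hence the scalar tail bounds the vector error.

```python
import numpy as np

# Toy vector sequence converging geometrically to x* = (1, 1).
x_star = np.array([1.0, 1.0])
v = np.array([1.0, -2.0])
xs = [x_star + 0.5**n * v for n in range(20)]

# A matching scalar majorizing sequence: t_n = t* - C * 0.5^n with C = ||v||,
# since then ||x_{n+1} - x_n|| = C * 0.5^{n+1} = t_{n+1} - t_n.
C = np.linalg.norm(v)
t_star = 10.0
ts = [t_star - C * 0.5**n for n in range(20)]

for n in range(19):
    assert np.linalg.norm(xs[n + 1] - xs[n]) <= ts[n + 1] - ts[n] + 1e-12
    # telescoping consequence: the scalar tail bounds the vector error
    assert np.linalg.norm(x_star - xs[n]) <= t_star - ts[n] + 1e-12
print("majorization and error bound verified")
```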
The following conditions are needed:
- Suppose:
- ()
- There exist parameters , a point , and an invertible operator M such that
- ()
- There exists a parameter such that for each
- ()
- The conditions of the Lemma 1 hold and
- ()
Theorem 2.
Suppose that the conditions – hold. Then, the following items hold for method (4):
and there exists a solution to the equation such that
Proof.
Notice that, by the invertibility of the linear operator M assumed in condition and by the definition of method (4), all iterates are well defined. Mathematical induction is first used to establish the validity of items (25) and (26). Item (25) trivially holds for . By the definition of method (4), the scalar sequence (7), and the condition , we see that
So, the iterate , and the item (26) holds if
The definition of the operator B and the condition imply that by choosing a,
Thus, by Theorem 1, the operator B is invertible and
By the definition of method (4), we can write in turn that
But
So,
where we also used the definition of the operator A to get . Thus, the preceding Ostrowski representation (29) for can be simplified as
By taking norms on both sides of the identity (30), using the triangle inequality and the conditions , we get in turn
Then, method (4) gives
Consequently, we obtain
Hence, the induction for the items (25) and (26) is complete. Then, it follows by (25) and (26) that the sequence is Cauchy in the Banach space X (since the majorizing sequence is also Cauchy as convergent to by the condition ). Therefore, there exists such that . By letting in (31), and the continuity of the operator F, we deduce that . That is, the limit point is a solution of the equation . Moreover, for i denoting a natural number, we have by the triangle inequality and (26) that
Finally, letting we obtain the item (27). □
Remark 1.
- (1)
- The condition on the parameter a can be replaced as follows. Let be fixed. Replace the condition with . Then there exists a parameter such that for all , . Let . There exists a parameter (depending on ) such that for each , , where . Notice that . So, and . Moreover, replace with , , where . Then, the conclusions of Theorem 2 hold with and replacing and r, respectively.
- (2)
- A choice for the linear operator can be or or any other operator that satisfies the conditions and .
- (3)
4. Local Analysis of Convergence for the Method (4)
As in the semi-local analysis, it is convenient to define some parameters and functions. Let and . Define the parameters
and the polynomial . These parameters are connected below to the operators on method (4).
- Suppose:
- (H1)
- There exists a solution of the equation and an invertible operator such that for all ,
- (H2)
- There exists such that . It follows that . Set and
- (H3)
- .
Next, the local analysis of convergence follows using the conditions ()–() and the developed terminology.
Theorem 3.
Suppose that the conditions ()–() hold and the starting point . Then, the following items hold for the method (4)
and
Proof.
The items (33) and (34) are shown using induction. By the hypothesis , the item (33) holds if . Notice that all iterates of method (4) are well defined by the condition () since the linear operator M is invertible. Then, using method (4), we can write in turn
But, by definition as in the semi-local analysis,
and
By combining (36), using the conditions ()–() and the triangle inequality, we get in turn that
Remark 2.
- (i)
- A possible choice for the operator M is or However, other choices are possible as long as the conditions ()–() hold.
- (ii)
- The parameter ρ is the radius of convergence for method (4).
- (iii)
- As in the semi-local case, the corresponding local results for method (3) are obtained from Theorem 3 by letting .
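Although the radius ρ is obtained analytically, its meaning can be probed empirically. The sketch below samples starting points at increasing distance from a known solution of the scalar test equation exp(x) − 1 = 0, using the choices M = f′(x*) = 1 and k = 3; the equation and these parameter choices are illustrative assumptions, not the article’s example.

```python
import math

def step(x, k=3, M=1.0):
    # one frozen-sum step for f(x) = exp(x) - 1, with S = 1 + E + ... + E^k
    E = 1.0 - math.exp(x) / M          # scalar analogue of I - M^{-1} f'(x)
    S = sum(E**i for i in range(k + 1))
    return x - S * (math.exp(x) - 1.0) / M

def converges(x0, tol=1e-10, max_iter=200):
    x = x0
    for _ in range(max_iter):
        if abs(x) < tol:
            return True
        if abs(x) > 10.0:              # escaped the region of interest
            return False
        x = step(x)
    return False

# largest sampled radius r for which both x0 = r and x0 = -r converge
r, radius = 0.05, 0.0
while r < 3.0:
    if converges(r) and converges(-r):
        radius = r
    r += 0.05
print("empirical convergence radius ~", round(radius, 2))
```

The radius reported this way is only an empirical estimate; the guaranteed analytical radius of Theorem 3 is typically smaller.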
5. Uniqueness of the Solution
In this section, the uniqueness of the solution is established in certain regions. In the case of the semi-local convergence of method (4), we can demonstrate the following:
Proposition 1.
Suppose that there exists a solution for some of the equation , and that there exist an invertible operator M and a parameter such that
for each and there exists such that
Define . Then, the only solution of the equation in the region is z.
Proof.
Suppose that there exists solving the equation . Define the linear operator
Hence, the linear operator Q is invertible. Then, by using the identity
we conclude that . □
Remark 3.
Notice that the hypotheses of Theorem 2 are used in Proposition 1. Otherwise, we can set and .
Next, the corresponding result follows for the local convergence case of method (4).
Proposition 2.
Suppose that there exist parameters and an invertible operator M such that for each
and there exists such that
Define the region . Then, the only solution of the equation in the region is .
Proof.
Suppose that there exists solving the equation . Define the linear operator . Then, using the conditions (40), (41), and the definition of the operator , we obtain
So, the operator is invertible. Then, from the identity
we deduce that . □
Remark 4.
We can certainly set in Proposition 2 provided that the conditions ()–() hold.
6. The Picard Iteration
The contraction mapping principle attributed to Banach is a useful tool in establishing the convergence of iterative methods in abstract spaces [,,,,,].
Theorem 4
([,,]). Let be a contraction operator with Lipschitz constant . Then, G has a fixed point , i.e., Moreover, for each starting point , the method of successive approximations (the Picard method) converges to
If the operator G has more than one fixed point, Theorem 4 is not applicable. In this case, the fixed points must be separated. Let be a non-empty closed subset of . We then have the following result:
Theorem 5
([,]). Suppose that the operator is a contraction with constant Then, the operator G has a unique fixed point . Moreover, for each , the Picard method, converges to .
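A minimal scalar illustration of Theorems 4 and 5: the map G below is a contraction on the real line with constant q = 1/2 (an illustrative choice, not from the text), so the Picard iteration converges to its unique fixed point from any starting point.

```python
import math

def G(x):
    # |G'(x)| = 0.5*|sin x| <= 0.5, so G is a contraction on R with q = 0.5
    return 0.5 * math.cos(x)

x = 10.0                       # arbitrary starting point
for _ in range(60):            # Picard iteration x_{n+1} = G(x_n)
    x = G(x)

print(x)                       # the unique fixed point of G
assert abs(x - G(x)) < 1e-12
```

Since the error contracts at least by q at each step, 60 iterations reduce the initial error by a factor of about 2⁻⁶⁰.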
Next, we present the convergence of method (5) based on Theorem 5.
Theorem 6.
Suppose the following: for a fixed natural number k, the following conditions hold for . There exists an invertible operator such that
and , where .
Then, the operator has a unique fixed point . Moreover, the Picard iteration is convergent to for each .
Proof.
The Picard iteration (5) is well defined by the existence of Moreover, it follows by (44) that the operator P is a contraction on D with Lipschitz constant q. Furthermore, by the definition of r, condition (43), (45) and the estimate
Hence, is a contraction with constant . The result now follows from Theorem 5 applied with . □
7. Error Analysis for the Method (4)
We need a version of the Newton–Kantorovich Theorem [,,,] for Newton’s method (2).
Theorem 7.
Let be a Fréchet-differentiable operator. Suppose the following:
There exist and such that the linear operator is invertible. Set
There exists such that for each
and
where Then, the Newton method (2) converges to a unique solution
Set
Next, we relate the sequence to the sequence
Lemma 2.
Suppose that the conditions ()–() and those of Theorem 7 hold. Then, the following error estimate holds for each
where
Proof.
The iterates and are well defined by Theorems 2 and 7. By subtracting (2) from (4) and taking out , we have in turn that
Note that we have the estimates
Using these estimates, and summing up, we obtain from (46) in turn that
□
Proposition 3.
Let all the conditions of the Lemma 2 hold. Define the polynomial ψ for and by
Suppose in addition that
Then, the polynomial ψ has two positive roots. Denote the smallest by Moreover, for each
8. Numerical Examples
In the following example, method (4) is considered for the cases , which remain independent of both . Additionally, these are compared with method (4) with .
Example 1.
The solution sought for the nonlinear system
Let Then, the system becomes
Then
Method (2)
Method (4), ,
Method (4), ,
Method (4), ,
Method (4), ,
Method (4), ,
Method (4), ,
Thus, the comparison shows that the behavior of method (4) is essentially the same as Newton’s method (2). However, the iterates of method (4) are cheaper to obtain than Newton’s. As observed in Table 1, Table 2, Table 3 and Table 4, the number of iterations required for the proposed methods with k ranging from 3 to 5 closely aligns with those of Newton’s method.
Table 1.
Number of iterations to achieve tolerance with initial guess and .
Table 2.
Number of iterations to achieve tolerance with initial guess and .
Table 3.
Number of iterations to achieve tolerance , where and .
Table 4.
Number of iterations to achieve tolerance , where and .
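The iteration counts in Tables 1–4 can be reproduced in spirit with a short script; the 2×2 test system and the choice M = F′(x₀) below are illustrative assumptions, not the article’s example.

```python
import numpy as np

def F(x):
    # illustrative 2x2 test system (not the article's example)
    return np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])

def dF(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]], [np.exp(x[0]), 1.0]])

def iterations(x0, k=None, tol=1e-10, max_iter=100):
    """k=None: Newton's method (2); k>=0: frozen-sum iteration in the spirit
    of method (4), with M = F'(x0) inverted only once."""
    x = np.asarray(x0, dtype=float)
    M_inv = np.linalg.inv(dF(x)) if k is not None else None
    for n in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return n
        if k is None:
            x = x - np.linalg.solve(dF(x), Fx)   # full Newton step
        else:
            E = np.eye(2) - M_inv @ dF(x)
            S, P = np.eye(2), np.eye(2)
            for _ in range(k):                   # S = I + E + ... + E^k
                P = P @ E
                S = S + P
            x = x - S @ (M_inv @ Fx)
    return max_iter

x0 = np.array([-1.5, 1.0])
print("Newton:", iterations(x0))
for k in range(6):
    print(f"k={k}:", iterations(x0, k=k))
```

In runs of this sketch, the frozen-sum variants need iteration counts close to Newton’s once k is moderate, which mirrors the pattern the tables report.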
Table 5 shows the results of calculations to determine the Computational Order of Convergence (COC) and the Approximated Computational Order of Convergence (ACOC) aiming to compare the convergence order of method (4) with the convergence order of Newton’s method (2).
Table 5.
Computational Order of Convergence and the Approximated Computational Order of Convergence, where ,
Definition 2.
Computational order of convergence of a sequence is defined by
where are three consecutive iterations near the root α and [].
Definition 3.
The approximated computational order of convergence of a sequence is defined by
where are three consecutive iterates [].
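Definitions 2 and 3 translate directly into code; the quadratic test problem below (Newton’s method applied to x² − 2 = 0) is an illustrative assumption, and the ACOC variant here uses four iterates to form three consecutive differences.

```python
import math

def coc(x0, x1, x2, alpha):
    # Computational Order of Convergence from three consecutive iterates
    # and the known root alpha (Definition 2)
    e0, e1, e2 = abs(x0 - alpha), abs(x1 - alpha), abs(x2 - alpha)
    return math.log(e2 / e1) / math.log(e1 / e0)

def acoc(x0, x1, x2, x3):
    # Approximated COC: uses successive differences only (Definition 3)
    s1, s2, s3 = abs(x1 - x0), abs(x2 - x1), abs(x3 - x2)
    return math.log(s3 / s2) / math.log(s2 / s1)

# Newton's method for f(x) = x^2 - 2 (root alpha = sqrt(2)), quadratic order
xs = [1.2]
for _ in range(6):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

alpha = math.sqrt(2.0)
print("COC :", coc(xs[1], xs[2], xs[3], alpha))   # ~2
print("ACOC:", acoc(xs[0], xs[1], xs[2], xs[3]))  # ~2
```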
Table 5 demonstrates that the convergence of the proposed methods closely corresponds with the convergence of Newton’s method, particularly for values of k ranging from 4 to 5 with the convergence order closely approximating 2.
Example 2.
Let and . The mapping F is defined on D for as
Then, the definition of the derivative according to Fréchet [,,,,] is given for the mapping F
The point solves the equation . Moreover, the conditions of Theorem 3 hold provided that . Then, we can have
Example 3.
Let stand for the space of continuous functions mapping the interval into the real number system. Let and with . The operator E is defined on as
Then, the definition of the derivative according to Fréchet [,,,,] is given below for the operator E
for each Therefore, the conditions are validated, since for provided that and . Then, we can get
9. Concluding Remarks
The difficulty of implementing Newton’s method is addressed in this article. In particular, the computation of , required at each step of Newton’s method, is avoided with the introduction of method (4) (or method (5)), where only one inversion of a fixed linear operator is required. The inverse of the linear operator is exchanged with a finite sum of linear operators depending on . Both the semi-local and the local convergence analyses show that these methods are asymptotically comparable to Newton’s. This is guaranteed by Theorem 2 and Remark 1 for the semi-local analysis, and by Theorem 3 and Remark 2 for the local analysis, respectively. Moreover, the numerical examples demonstrate that method (4) or method (5) is a reliable replacement for Newton’s method for all practical purposes. In our future research, we shall consider further extensions of the form
Here, Q is a suitably chosen approximation to the inverse of a linear operator (such as B), which may be a divided difference or some other operator; it can also be given by , where L is a linear operator [,,,,,,,,,,,,,].
Author Contributions
Conceptualization, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; Algorithm, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; methodology, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; software, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; validation, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; formal analysis, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; investigation, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; resources, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; data curation, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; writing—original draft preparation, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; writing—review and editing, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; visualization, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; supervision, I.K.A., S.G., S.S., S.R., M.H. and M.I.A.; project administration, I.K.A., S.G., S.S., S.R., M.H. and M.I.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Data are contained within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
References
- Deuflhard, P.; Heindl, G. Affine invariant convergence theorems for Newton’s method and extensions to related methods. SIAM J. Numer. Anal. 1979, 16, 1–10.
- Häußler, W.M. A Kantorovich-type convergence analysis for the Gauss–Newton-method. Numer. Math. 1986, 48, 119–125.
- Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton–Kantorovich type theorems. J. Complex. 2010, 26, 3–42.
- Catinas, E. The inexact, inexact perturbed, and quasi-Newton methods are equivalent models. Math. Comp. 2005, 74, 291–301.
- Nashed, M.Z. Generalized Inverses and Applications; Academic Press: New York, NY, USA, 1976.
- Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press: New York, NY, USA, 1973.
- Proinov, P.D.; Petkova, M.D. Local and semilocal convergence of a family of multi-point Weierstrass-type root-finding methods. Mediterr. J. Math. 2020, 17, 107.
- Argyros, I.K.; George, S. On a unified convergence analysis for Newton-type methods solving generalized equations with the Aubin property. J. Complex. 2024, 81, 101817.
- Candelario, G.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Generalized conformable fractional Newton-type method for solving nonlinear systems. Numer. Algor. 2023, 93, 1171–1208.
- Dennis, J.E., Jr. On Newton-like methods. Numer. Math. 1968, 11, 324–330.
- Deuflhard, P. Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms; Springer Series in Computational Mathematics; Springer: Berlin/Heidelberg, Germany, 2004; Volume 35.
- Ezquerro, J.A.; Gutiérrez, J.M.; Hernández, M.A.; Romero, N.; Rubio, M.J. The Newton method: From Newton to Kantorovich (Spanish). Gac. R. Soc. Mat. Esp. 2010, 13, 53–76.
- Krasnoselskij, M.A. Two remarks on the method of successive approximations (Russian). Uspehi Mat. Nauk 1955, 10, 123–127.
- Kantorovich, L.V.; Akilov, G.P. Functional Analysis in Normed Spaces; Fizmatgiz: Moscow, Russia, 1959; German translation: Akademie-Verlag: Berlin, Germany, 1964; English translation, 2nd ed.; Pergamon Press: London, UK, 1981.
- Regmi, S.; Argyros, I.K.; George, S.; Argyros, C.I. Extended convergence of three step iterative methods for solving equations in Banach space with applications. Symmetry 2022, 14, 1484.
- Regmi, S. Optimized Iterative Methods with Applications in Diverse Disciplines; Nova Science Publishers: New York, NY, USA, 2021.
- Berinde, V. Iterative Approximation of Fixed Points; Springer: New York, NY, USA, 2007.
- Kelley, C.T. Solving Nonlinear Equations with Iterative Methods: Solvers and Examples in Julia; Fundamentals of Algorithms; SIAM: Philadelphia, PA, USA, 2023.
- Moore, R.H.; Nashed, M.Z. Approximations to generalized inverses of linear operators. SIAM J. Appl. Math. 1974, 27, 1–16.
- Potra, F.A. Sharp error bounds for a class of Newton-like methods. Lib. Math. 1985, 5, 71–84.
- Padcharoen, A.; Kumam, P.; Chaipunya, P.; Shehu, Y. Convergence of inertial modified Krasnoselskii–Mann iteration with application to image recovery. Thai J. Math. 2020, 18, 126–142.
- Rall, L.B. Computational Solution of Nonlinear Operator Equations; Wiley: New York, NY, USA, 1969.
- Rheinboldt, W.C. A unified convergence theory for a class of iterative processes. SIAM J. Numer. Anal. 1968, 5, 42–63.
- Argyros, I.K. The Theory and Applications of Iteration Methods; Engineering Series; CRC Press, Taylor and Francis: Boca Raton, FL, USA, 2022; Volume 2.
- Argyros, C.I.; Regmi, S.; Argyros, I.K.; George, S. Contemporary Algorithms: Theory and Applications; NOVA Publishers: New York, NY, USA, 2023; Volume III.
- Allgower, E.L.; Georg, K. Introduction to Numerical Continuation Methods; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 1989.
- Erfanifar, R.; Hajarian, M. A new multi-step method for solving nonlinear systems with high efficiency indices. Numer. Algor. 2024, 1–26.
- Ezquerro, J.A.; Hernández-Verón, M.A. Domains of global convergence for Newton’s method from auxiliary points. Appl. Math. Lett. 2018, 85, 48–56.
- Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J.M. On some computational orders of convergence. Appl. Math. Lett. 2010, 23, 472–478.
- Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications; John Wiley and Sons: New York, NY, USA, 1974.
- Traub, J.F.; Woźniakowski, H. Convergence and complexity of Newton iteration for operator equations. J. Assoc. Comput. Mach. 1979, 26, 250–258.
- Yamamoto, T. A convergence theorem for Newton-like methods in Banach spaces. Numer. Math. 1987, 51, 545–557.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).