Abstract
In this work, we study a class of variational inequality problems defined over the intersection of sub-level sets of a countable family of convex functions. We propose a new iterative method for approximating the solution within the framework of Hilbert spaces. The method incorporates several strategies, including inertial effects, a self-adaptive step size, and a relaxation technique, to enhance convergence properties. Notably, it requires computing only a single projection onto a half space. Under some mild conditions, we prove that the sequence generated by our proposed method converges strongly to the minimum-norm solution of the problem. Finally, we present some numerical results that validate the applicability of our proposed method.
Keywords:
variational inequalities; inertial technique; self-adaptive step size; relaxation technique; sub-level sets; convex functions; minimum-norm solution
MSC:
47H09; 49J25; 65K10; 90C25
1. Introduction
Ever since the independent introduction of the classical variational inequality problem (VIP) by Fichera [1] and Stampacchia [2], this field has received great attention from numerous researchers. Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$, and let $\langle \cdot, \cdot \rangle$ and $\| \cdot \|$ be the inner product and induced norm on $H$, respectively. The VIP with an operator $F : H \to H$ is primarily to find a point $a^* \in C$ such that
$$\langle F a^*, a - a^* \rangle \geq 0 \quad \text{for all } a \in C. \tag{1}$$
We denote the solution set of the VIP (1) by $VI(C, F)$. The great attention received by the VIP is due to its applications in so many areas of study, such as optimization, economics, structural analysis, engineering, physics, and operations research (see [3,4,5,6,7,8] and the references therein).
From the fixed-point formulation (find $a^* \in C$ such that $a^* = P_C(a^* - \lambda F a^*)$ for $\lambda > 0$), many projection algorithms for approximating the solution of VIPs have been proposed (for more on fixed-point articles, see [9,10,11]). Some of these can be found in [7,12,13,14,15]. The simplest known projection method is the gradient projection method (GPM), which is formulated as
$$a_{n+1} = P_C(a_n - \lambda F a_n),$$
where $\lambda$ is a positive real number. This iteration is obtained directly from the fixed-point formulation above by successive approximation.
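For concreteness, here is a minimal Python sketch of the GPM on a toy instance; the affine operator and the nonnegative orthant as feasible set (whose projection is a componentwise maximum) are illustrative assumptions of ours, not data from this paper.

```python
import numpy as np

def gradient_projection(F, proj_C, a0, lam=0.1, tol=1e-8, max_iter=10_000):
    """GPM iteration: a_{n+1} = P_C(a_n - lam * F(a_n))."""
    a = a0
    for _ in range(max_iter):
        a_next = proj_C(a - lam * F(a))
        if np.linalg.norm(a_next - a) < tol:  # fixed-point residual
            break
        a = a_next
    return a

# Toy monotone instance (illustrative): F(a) = M a + q with M positive definite;
# C is the nonnegative orthant, so P_C is a componentwise maximum with zero.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
sol = gradient_projection(lambda a: M @ a + q, lambda a: np.maximum(a, 0.0),
                          a0=np.zeros(2))
```

For merely monotone $F$, a fixed step of this kind may fail to converge; classically, the GPM requires, for instance, strong monotonicity together with Lipschitz continuity of $F$ and a suitably small $\lambda$.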
This projection method is therefore characterized by some stringent assumptions for its convergence. In a bid to weaken some of these assumptions, Korpelevich [7] proposed the following extragradient method (EgM):
$$\begin{cases} w_n = P_C(a_n - \lambda F a_n), \\ a_{n+1} = P_C(a_n - \lambda F w_n), \end{cases} \tag{EgM}$$
where $\lambda \in \left( 0, \frac{1}{L} \right)$, $L$ is the Lipschitz constant of the operator $F$, and $P_C$ is the metric projection of $H$ onto $C$.
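A minimal sketch of a single EgM iteration (assuming a projection oracle `proj_C` onto $C$ is available) makes the per-iteration cost explicit:

```python
def extragradient_step(F, proj_C, a, lam):
    """One EgM iteration."""
    w = proj_C(a - lam * F(a))      # predictor: w_n = P_C(a_n - lam F(a_n))
    return proj_C(a - lam * F(w))   # corrector: a_{n+1} = P_C(a_n - lam F(w_n))
```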
Two projections onto the feasible set $C$ and two evaluations of the cost operator $F$ must be performed at each iteration, which makes the EgM computationally demanding. This significantly reduces the method's efficiency and applicability, especially when $C$ and $F$ are complex in structure. The authors of [16,17,18,19,20] came up with different modifications to address the computational inefficiency of the EgM. One such notable modification, called the Tseng extragradient method (TEgM), was proposed by Tseng [19]. This method, which is presented as follows, reduces the two projections of the EgM to one:
$$\begin{cases} w_n = P_C(a_n - \lambda F a_n), \\ a_{n+1} = w_n - \lambda (F w_n - F a_n), \end{cases} \tag{TEgM}$$
where $\lambda \in \left( 0, \frac{1}{L} \right)$.
Another modification was given by Censor et al. [21] with the introduction of the subgradient extragradient method (SEgM), in which the second projection onto $C$ was replaced with a projection onto a constructible subgradient half space, which is known to have a simple structure. The following is the method proposed by Censor et al. in [21]:
$$\begin{cases} w_n = P_C(a_n - \lambda F a_n), \\ T_n = \left\{ w \in H : \langle a_n - \lambda F a_n - w_n, w - w_n \rangle \leq 0 \right\}, \\ a_{n+1} = P_{T_n}(a_n - \lambda F w_n), \end{cases} \tag{SEgM}$$
where $\lambda \in \left( 0, \frac{1}{L} \right)$.
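The appeal of $T_n$ is that projecting onto a half space admits a closed form. A minimal sketch for a generic half space $\{ w : \langle u, w \rangle \leq b \}$ with $u \neq 0$ (our generic notation):

```python
import numpy as np

def proj_halfspace(x, u, b):
    """Projection of x onto {w : <u, w> <= b}, assuming u != 0.
    A feasible x is its own projection; otherwise, step back along u
    by the violation normalized by ||u||^2."""
    violation = float(np.dot(u, x)) - b
    if violation <= 0.0:
        return x
    return x - (violation / float(np.dot(u, u))) * u
```

In the SEgM above, $u = a_n - \lambda F a_n - w_n$ and $b = \langle u, w_n \rangle$, so the second projection costs only a couple of inner products.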
It is clear that the TEgM and SEgM algorithms still have the drawback of calculating a projection onto the feasible set $C$. In order to weaken this requirement, Censor et al. [3] proposed the following two-subgradient extragradient method (TSEgM):
$$\begin{cases} T_n = \left\{ w \in H : c(a_n) + \langle \xi_n, w - a_n \rangle \leq 0 \right\}, \quad \xi_n \in \partial c(a_n), \\ w_n = P_{T_n}(a_n - \lambda F a_n), \\ a_{n+1} = P_{T_n}(a_n - \lambda F w_n), \end{cases} \tag{TSEgM}$$
where $\lambda > 0$ and $\partial c(a)$ is the sub-differential of the convex function $c$ at the point $a$, as defined in (12).
In the TSEgM, the main idea is that any closed and convex set $C$ can be expressed as
$$C = \left\{ a \in H : c(a) \leq 0 \right\},$$
where $c : H \to \mathbb{R}$ is a convex function. For instance, we can take $c(a) = \operatorname{dist}(a, C)$, where “dist” is the distance function. In this method, we observe that both projections are made onto a half space, an advantage that enhances the computational efficiency of the algorithm.
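As a concrete illustration (ours, not from the original sources): if $C$ is the closed unit ball of $H$, then $C = \{ a \in H : c(a) \leq 0 \}$ with $c(a) = \|a\| - 1$, and for $a_n \neq 0$ this $c$ is differentiable with $\nabla c(a_n) = a_n / \|a_n\|$, so the subgradient half space takes the explicit form
$$T_n = \left\{ w \in H : \|a_n\| - 1 + \left\langle \frac{a_n}{\|a_n\|},\, w - a_n \right\rangle \leq 0 \right\} \supseteq C,$$
onto which the projection is available in closed form.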
Another method for solving the VIP is the projection and contraction method (PCM), which has been developed by many researchers in the literature (see [22,23,24,25] and the references therein). These PCM algorithms have been shown through numerical experiments to often outperform the EgMs. He et al. [14] recently proposed another variant of the projection and contraction algorithm for solving the VIP. The algorithm is as follows:
PCM
where the two sets involved are half spaces, used together with a relaxation factor, a prediction step size, and an optimal correction step length.
Iterative methods with an improved rate of convergence for solving optimization problems have recently received great attention from many researchers. The inertial and relaxation techniques are the two techniques most commonly employed to speed up the rate of convergence of algorithms. The inertial technique, which was introduced by Polyak [26], has been used by many authors, a few of whom are cited in [12,13,16,27]. The relaxation method, on the other hand, has also been adopted by many researchers; see [28,29,30]. The authors in [30] investigated the effect of these two techniques on the convergence properties of iterative schemes.
Cao and Guo [27] modified the work of Censor et al. [3] by adding an inertial technique and proposed the following inertial two-subgradient extragradient algorithm (ITSEgM) for solving the VIP:
ITSEgM
where and
In this research, our interest is in studying VIPs in which the feasible set $C$ is given as a finite intersection of sub-level sets of convex functions, defined as follows:
$$C = \bigcap_{i=1}^{k} \left\{ a \in H : c_i(a) \leq 0 \right\}, \tag{10}$$
where $k$ is a positive integer and $c_i : H \to \mathbb{R}$, for all $i \in \{1, \dots, k\}$, are convex functions.
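For example (our illustration), with $k = 2$ and a fixed $u \in H$, the choices $c_1(a) = \|a\|^2 - 4$ and $c_2(a) = \langle u, a \rangle - 1$ give
$$C = \left\{ a \in H : \|a\|^2 \leq 4 \right\} \cap \left\{ a \in H : \langle u, a \rangle \leq 1 \right\},$$
the intersection of a closed ball with a half space.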
Recently, He et al. [31] proposed a new iterative algorithm, called the totally relaxed and self-adaptive subgradient extragradient method (TRSSEM), for solving the VIP. The algorithm is as follows:
TRSSEM
where and
A weak convergence result for the proposed method in the framework of Hilbert spaces was obtained by the authors in [31].
Motivated by the above studies, we propose a study of inertial and relaxation methods for solving the VIP in the framework of Hilbert spaces. The following are the important features of our algorithm:
- The combination of the inertial and relaxation techniques for speeding up the convergence rate of the iterative scheme.
- The presence of a simple self-adaptive stepsize, which is generated at each iteration by some simple computations.
- The algorithm does not rely on knowledge of the Lipschitz constant, an assumption commonly employed by authors when solving the monotone variational inequality problem (MVIP).
- Strong convergence of the generated sequence to a minimum-norm solution of the problem.
- Computation of only one projection onto a half space.
The organization of the paper is as follows: Section 2 contains some recalled definitions and well-known lemmas needed for our analysis. Section 3 presents our proposed algorithm, Section 4 contains the strong convergence analysis of the algorithm, Section 5 reports numerical experiments used to validate our results, and concluding remarks are given in Section 6.
2. Preliminaries
Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. The weak convergence and strong convergence of a sequence $\{a_n\}$ to a point $a$ are represented by $a_n \rightharpoonup a$ and $a_n \to a$, respectively, and $\omega_w(a_n)$ denotes the set of weak limits of $\{a_n\}$; that is,
$$\omega_w(a_n) = \left\{ a \in H : a_{n_j} \rightharpoonup a \text{ for some subsequence } \{a_{n_j}\} \text{ of } \{a_n\} \right\}.$$
Definition 1.
Let $H$ be a real Hilbert space. A mapping $F : H \to H$ is said to be
(1) $L$-Lipschitz-continuous, where $L > 0$, if
$$\| F a - F b \| \leq L \| a - b \| \quad \text{for all } a, b \in H.$$
If $L \in [0, 1)$, then $F$ is a contraction;
(2) nonexpansive, if $F$ is 1-Lipschitz-continuous.
Definition 2.
Given a mapping $F : H \to H$, $F$ is called monotone if
$$\langle F a - F b, a - b \rangle \geq 0 \quad \text{for all } a, b \in H.$$
Definition 3.
([32]). A function $c : H \to \mathbb{R}$ is said to be Gâteaux-differentiable at $a \in H$ if there exists an element, denoted by $c'(a)$, such that
$$\lim_{t \to 0} \frac{c(a + t b) - c(a)}{t} = \langle b, c'(a) \rangle \quad \text{for all } b \in H,$$
where $c'(a)$ is called the Gâteaux differential of $c$ at $a$. Recall that if $c$ is Gâteaux-differentiable at each $a \in H$, then $c$ is Gâteaux-differentiable on $H$.
Definition 4.
([32]). Let $c : H \to \mathbb{R}$ be a convex function. $c$ is said to be subdifferentiable at a point $a \in H$ if the set
$$\partial c(a) = \left\{ z \in H : c(b) \geq c(a) + \langle z, b - a \rangle \ \text{for all } b \in H \right\}$$
is nonempty. Each element in $\partial c(a)$ is called a subgradient of $c$ at $a$. We note that if $c$ is subdifferentiable at each $a \in H$, then $c$ is subdifferentiable on $H$. It is also known that if $c$ is Gâteaux-differentiable at $a$, then $c$ is subdifferentiable at $a$ and $\partial c(a) = \{ c'(a) \}$.
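As a simple illustration (ours): for the convex function $c(a) = \|a\|$, we have $\partial c(0) = \{ z \in H : \|z\| \leq 1 \}$, the closed unit ball, whereas for $a \neq 0$ the function is Gâteaux-differentiable and $\partial c(a) = \{ a / \|a\| \}$.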
Definition 5.
Let $H$ be a real Hilbert space. A function $c : H \to \mathbb{R}$ is said to be weakly lower semi-continuous (w-lsc) at $a \in H$ if
$$c(a) \leq \liminf_{n \to \infty} c(a_n)$$
holds for every sequence $\{a_n\}$ in $H$ satisfying $a_n \rightharpoonup a$.
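A standard example: every continuous convex function on $H$ is w-lsc; in particular, the norm $c(a) = \|a\|$ satisfies $\|a\| \leq \liminf_{n \to \infty} \|a_n\|$ whenever $a_n \rightharpoonup a$, even though the norm is not weakly continuous in infinite dimensions.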
Lemma 1.
Let $H$ be a real Hilbert space. Then, for all $a, b \in H$, the following results hold:
- (i) $\|a + b\|^2 = \|a\|^2 + 2 \langle a, b \rangle + \|b\|^2$;
- (ii) $\|a + b\|^2 \leq \|a\|^2 + 2 \langle b, a + b \rangle$;
- (iii) $\|\alpha a + (1 - \alpha) b\|^2 = \alpha \|a\|^2 + (1 - \alpha) \|b\|^2 - \alpha (1 - \alpha) \|a - b\|^2$ for all $\alpha \in [0, 1]$.
- If with we have
Lemma 2.
Let $C$ be a nonempty, closed, and convex subset of $H$. Suppose $F : C \to H$ is a continuous monotone mapping and $a^* \in C$; then,
$$a^* \in VI(C, F) \iff \langle F a, a - a^* \rangle \geq 0 \quad \text{for all } a \in C.$$
Lemma 3.
([33]). Let $C$ be a set defined as in (10), and let $F$ be an operator. Suppose the solution set $VI(C, F)$ is nonempty. Then, the following alternative theorem holds for solutions of the VIP; that is, given $a^* \in C$, we have $a^* \in VI(C, F)$ if and only if one of the following holds.
- (i)
- or
- (ii)
- and there exist (depending on the point ) and such that where denotes the boundary of the set and is the convex hull of the set
Lemma 4.
([34]). Let $\{s_n\}$ be a sequence of non-negative real numbers, $\{\alpha_n\}$ be a sequence in $(0, 1)$ with $\sum_{n=1}^{\infty} \alpha_n = \infty$, and $\{t_n\}$ be a sequence of real numbers. Assume that
$$s_{n+1} \leq (1 - \alpha_n) s_n + \alpha_n t_n, \quad n \geq 1,$$
and that $\limsup_{j \to \infty} t_{n_j} \leq 0$ for every subsequence $\{s_{n_j}\}$ of $\{s_n\}$ satisfying $\liminf_{j \to \infty} \left( s_{n_j + 1} - s_{n_j} \right) \geq 0$. Then $\lim_{n \to \infty} s_n = 0$.
Lemma 5.
([35]). Suppose $\{s_n\}$ and $\{\delta_n\}$ are two nonnegative real sequences such that
$$s_{n+1} \leq s_n + \delta_n, \quad n \geq 1.$$
If $\sum_{n=1}^{\infty} \delta_n < \infty$, then $\lim_{n \to \infty} s_n$ exists.
3. Proposed Algorithm
We present our proposed algorithm in this section and give the conditions for its convergence.
Assumption 1.
- (1)
- The solution set $VI(C, F)$ is nonempty.
- (2)
- The mapping $F : H \to H$ is monotone and $L$-Lipschitz-continuous on $H$.
- (3)
- For all $i \in \{1, \dots, k\}$, the functions $c_i$ satisfy the following conditions.
- (i)
- Each $c_i$ is convex on $H$.
- (ii)
- Each $c_i$ is weakly lower semi-continuous on $H$.
- (iii)
- Each $c_i$ is Gâteaux-differentiable, and its differential $c_i'$ is $L_i$-Lipschitz-continuous on $H$.
- (iv)
- There exists a positive constant M such that for all the following holds, where is defined as in Lemma 3.
- (4)
- and are non-negative sequences satisfying the following conditions:
- (i)
- (ii)
- such that
- (iii)
- (iv)
- Let be a nonnegative sequence such that
The following is the proposed algorithm (Algorithm 1):
Algorithm 1: TRSTEM
Initialization: Given the control parameters, let two initial points be chosen and set the iteration counter. Given the current and preceding iterates, choose the inertial parameter as prescribed, with the step size defined by (13). Iterative steps: Calculate the next iterate as follows:
Remark 1.
- We do not require knowledge of the Lipschitz constant of the cost operator or of the Lipschitz constant of each Gâteaux differential of $c_i$ to implement our proposed algorithm, as is often required by some researchers (for instance, see [27]).
- The computation of only one projection onto a half space is another feature of our algorithm that makes it computationally efficient to implement.
Remark 2.
Observe that, by Assumption 1(4)(i), it can easily be verified from (13) that
4. Convergence Analysis
Here, we start with some relevant lemmas needed to establish the strong convergence theorem of our proposed algorithm.
Lemma 6.
Suppose and are the sets defined by (10) and (14), respectively. Then, we have that
Proof.
For all , let . Thus, we see that . Then, for each and any by the subdifferential inequality, it follows that
By definition of the sets (14), we see that It then follows that Therefore, as required. □
Lemma 7.
Let be a sequence generated by Algorithm 1. Then, is well defined and where for some positive constant K and
Proof.
By the Lipschitz continuity of and and each , for all , we have
where Clearly, from the definition of the sequence has lower bound and upper bound By Lemma 5, we have that exists and we denote It is clear that □
Lemma 8.
Let and be a sequence generated by Algorithm 1 under Assumption 1. Then, we have the following inequality:
Proof.
From (15), we have
which implies that
Assume Then, Since and then by the characterization of we obtain
which is equivalent to
Using Lemma 1(i), we have
By the monotonicity of , we obtain
Substituting (21) and (22) into (20), we have
Now using the definition of together with Lemma 1, we obtain
Applying (23) in (24) gives
Now, we consider the following two cases:
Case 1: If then from (25) and by applying (19), we have
which is the required inequality.
Case 2: By Lemma 3, we have that and
where is some positive constant, , and are nonnegative constants satisfying Then, by the subdifferential inequality, we obtain
Since we have that for each and then
From (26) and (28), we obtain
Since it follows that
Then, by the subdifferential inequality, we obtain
Adding (30) and (31) gives
Now, by applying (19) and (32), we obtain
By the condition (3)(iv) of Assumption 1, we have
So, from (29) and by applying (33), (34) and the definition of we obtain
Applying (19) and (35) in (25), we obtain
which is the required inequality. Thus, we have obtained the desired result. □
Lemma 9.
Let be a sequence generated by Algorithm 1. Then, under Assumption 1, is bounded.
Proof.
Let First, since the limit of exists with then by Assumption 1 (4)(ii)–(iii), we have
Hence, there exists such that for all , we have
Consequently, from (17), we have that for all ,
Using the definition of we have
Hence, by Remark 2, there exists such that
It then follows from (38) that
From the definition of , we have
On the other hand, by applying Lemma 1(i) and (37) we obtain
which implies that
Next, by applying (39) and (41) in (40), we have, for all ,
This implies that the sequence is bounded. Consequently, and are all bounded. □
Lemma 10.
Let and be sequences generated by Algorithm 1 such that Suppose is a subsequence of that converges weakly to some and then
Proof.
Suppose and are two sequences generated by Algorithm 1 with subsequences and respectively, such that ; then, by the hypothesis of the lemma, it follows that as Since then, by the definition of we obtain
Using the Cauchy–Schwarz inequality, we have
Since is Lipschitz-continuous and is bounded, then is bounded. Thus, there exists a constant such that for all Hence, from (42), we obtain
Since the function is weakly lower semi-continuous, it then follows from (43) and the definition of weak lower semi-continuity that
which implies that By the characterization of we obtain
It follows from the monotonicity of $F$ that
Letting in the last inequality, applying and we have
Applying Lemma 2, we have □
Lemma 11.
Let be a sequence generated by Algorithm 1. Then, under Assumption 1, we have the following inequality for all and
Proof.
Let Then, from the definition of , the use of the Cauchy–Schwarz inequality and the application of Lemma 1, we obtain
where
Next, from the definition of and by applying (17), (46) together with Lemma 1, we obtain
which is the required inequality. □
In the following theorem, we state and prove the strong convergence theorem for our proposed algorithm.
Theorem 1.
Let be a sequence generated by Algorithm 1 under Assumption 1. Then, the sequence converges strongly to a point where
Proof.
Since we have From Lemma 11, we obtain
where Next, we claim that the sequence converges to zero. To show this, by Lemma 4, it suffices to establish that for every subsequence of satisfying
Suppose is a subsequence of such that (48) holds. Again, from Lemma 11, we have
Applying (48), Remark 2, together with the fact that we obtain
By the conditions on the control parameters together with (36), we obtain
From the definition of and applying (19), we have
By applying (49) together with the conditions on the control parameters, it follows from (50) that
By Remark 2, we obtain
Moreover, by applying (51) and (52), we obtain
Next, by using (52), (53), together with the fact that , we obtain
Since is bounded, then is nonempty. Suppose is an arbitrary element. Then, there exists a subsequence of such that as It follows from (52) that as Moreover, by Lemma 10 and (49) we obtain Since was chosen arbitrarily, it follows that
Next, since is bounded, there exists a subsequence of such that and
Since it follows that
Therefore, it follows from the last inequality and (54) that
Now, by Remark 2 and (55), we have Thus, by invoking Lemma 4, it follows from (47) that converges to zero as desired. □
5. Numerical Example
In this section, we present two numerical examples to illustrate the behavior of the sequences generated by Algorithm 1 and make comparisons with the algorithms presented in [6,36,37]. All the programs are implemented in MATLAB R2023b on an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz computer with 8.00 GB of RAM.
Example 1.
Consider the variational inequality (1) with the feasible set where
and
and is defined by where
We observe from Lemma 3 that the solution set of the variational inequality problem is nonempty, and we identify its solution. The following constants, which can be obtained through simple calculations, are used: the Lipschitz constants of the functions for are and thus Furthermore, based on Assumption 1 (3)(iv), and As in the TRSSEM, and To implement the SEM, we first need to estimate the Lipschitz constant of , which is , and also calculate the projection operator The lack of an explicit expression for this operator is a weakness of the SEM. Furthermore, the control parameter conditions are taken as follows: The numerical and graphical results of the methods are shown in Table 1 and Figure 1. The different choices of the initial values are given in the cases below; a sketch of a generic experiment driver is provided after the case list. The process is terminated by using the stopping criterion where
- (Case 1):
- and
- (Case 2):
- and
- (Case 3):
- and
- (Case 4):
- and
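For reproducibility, the following is a minimal Python sketch (ours; the experiments themselves were run in MATLAB) of the kind of generic driver described above. The `step` callback, the tolerance, and the iteration cap are illustrative assumptions; the two initial points reflect the inertial structure of the compared methods.

```python
import numpy as np

def run_method(step, a0, a1, tol=1e-6, max_iter=1000):
    """Generic driver for an inertial iterative method.
    step(a_prev, a_curr, n) returns the next iterate; iteration stops
    when the successive-iterate residual ||a_{n+1} - a_n|| drops below tol."""
    a_prev, a_curr = a0, a1
    residuals = []
    for n in range(1, max_iter + 1):
        a_next = step(a_prev, a_curr, n)
        res = np.linalg.norm(a_next - a_curr)
        residuals.append(res)
        if res < tol:
            break
        a_prev, a_curr = a_curr, a_next
    return a_curr, residuals
```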
Table 1.
Numerical results for Example 1.
Figure 1.
Top left: Case 1; top right: Case 2; bottom left: Case 3; bottom right: Case 4 [6,36,37].
Example 2.
Suppose and let the closed and convex feasible set be defined as follows:
for each where and We choose the following as our control parameters: The process is terminated by using the stopping criterion where We consider the following cases for the initial points and
- (Case i)
- and
- (Case ii)
- and
- (Case iii)
- and
- (Case iv)
- and
The results of this example are reported in Figure 2.
Figure 2.
Top left: Case i; top right: Case ii; bottom left: Case iii; bottom right: Case iv [6,36,37].
6. Conclusions
A novel iterative method was proposed for approximating the solution of a class of variational inequality problems defined over the intersection of sub-level sets of a countable family of convex functions. The method requires the computation of only a single projection onto a half space. It was shown that the sequence generated by the proposed method converges strongly to the minimum-norm solution of the problem. The efficiency and validity of the method were demonstrated through two numerical examples. In the future, we intend to study the convergence behavior of our proposed algorithm in Banach spaces under weaker conditions, as well as to develop accelerated variants that further reduce computational complexity.
Author Contributions
Conceptualization, O.J.O.; Formal analysis, O.J.O. and O.K.O.; Methodology, S.P.M.; Software, O.K.O.; Validation, O.K.O. and S.P.M.; Writing—original draft, O.J.O.; Writing—review and editing, O.K.O. and H.A.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Data are contained within the article.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Fichera, G. Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Nat. 1963, 34, 138–142. [Google Scholar]
- Stampacchia, G. Variational Inequalities. In Theory and Applications of Monotone Operators, Proceedings of the NATO Advanced Study Institute, Venice, Italy, 17–30 June 1968; Edizioni Odersi: Gubbio, Italy, 1968; pp. 102–192. [Google Scholar]
- Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132. [Google Scholar] [CrossRef]
- Fukushima, M. A relaxed projection method for variational inequalities. Math. Program. 1986, 35, 58–70. [Google Scholar] [CrossRef]
- Gu, Z.; Mani, G.; Gnanaprakasam, A.J.; Li, Y. Solving a System of Nonlinear Integral Equations via Common Fixed Point Theorems on Bicomplex Partial Metric Space. Mathematics 2021, 9, 1584. [Google Scholar] [CrossRef]
- He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 2013, 942315. [Google Scholar] [CrossRef]
- Korpelevich, G.M. An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metody 1976, 12, 747–756. [Google Scholar]
- Nallaselli, G.; Baazeem, A.S.; Gnanaprakasam, A.J.; Mani, G.; Javed, K.; Ameer, E.; Mlaiki, N. Fixed Point Theorems via Orthogonal Convex Contraction in Orthogonal b-Metric Spaces and Applications. Axioms 2023, 12, 143. [Google Scholar] [CrossRef]
- Beg, I.; Mani, G.; Gnanaprakasam, A.J. Best proximity point of generalized F-proximal non-self contractions. J. Fixed Point Theory Appl. 2021, 23, 49. [Google Scholar] [CrossRef]
- Gnanaprakasam, A.J.; Nallaselli, G.; Haq, A.U.; Mani, G.; Baloch, I.A.; Nonlaopon, K. Common Fixed-Points Technique for the Existence of a Solution to Fractional Integro-Differential Equations via Orthogonal Branciari Metric Spaces. Symmetry 2022, 14, 1859. [Google Scholar] [CrossRef]
- Ramaswamy, R.; Mani, G.; Gnanaprakasam, A.J.; Abdelnaby, O.A.A.; Stojiljković, V.; Radojevic, S.; Radenović, S. Fixed Points on Covariant and Contravariant Maps with an Application. Mathematics 2022, 10, 4385. [Google Scholar] [CrossRef]
- Alakoya, T.O.; Taiwo, A.; Mewomo, O.T.; Cho, Y.J. An iterative algorithm for solving variational inequality generalized mixed equilibrium, convex minimization and zeros problems for a class of nonexpansive-type mappings. Ann. Univ. Ferrara Sez. VII Sci. Mat. 2021, 67, 1–31. [Google Scholar] [CrossRef]
- Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2017, 70, 687–704. [Google Scholar] [CrossRef]
- He, S.; Dong, Q.-L.; Tian, H. Relaxed projection and contraction methods for solving Lipschitz-continuous monotone variational inequalities. Rev. De La Real Acad. De Cienc. Exactas Fis. Y Nat. Ser. A Mat. 2019, 113, 2773–2791. [Google Scholar] [CrossRef]
- Ogwo, G.N.; Izuchukwu, C.; Mewomo, O.T. A modified extragradient algorithm for a certain class of split pseudo-monotone variational inequality problem. Numer. Algebra Control Optim. 2022, 12, 373–393. [Google Scholar] [CrossRef]
- Ceng, L.; Petrușel, A.; Qin, X.; Yao, J. A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 2020, 21, 93–108. [Google Scholar] [CrossRef]
- Iusem, A.N.; Nasri, M. Korpelevich’s method for variational inequality problems in Banach spaces. J. Glob. Optim. 2011, 50, 59–76. [Google Scholar] [CrossRef]
- Ogwo, G.N.; Izuchukwu, C.; Mewomo, O.T. Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms 2021, 88, 1419–1456. [Google Scholar] [CrossRef]
- Tseng, P. A Modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
- Yang, J.; Liu, H. Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algorithms 2019, 80, 741–752. [Google Scholar] [CrossRef]
- Censor, Y.; Gibali, A.; Reich, S. The Split Variational Inequality Problem; The Technion-Israel Institute of Technology: Haifa, Israel, 2010. [Google Scholar]
- He, S. A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 1997, 35, 69–76. [Google Scholar] [CrossRef]
- He, B.; Yuan, X.; Zhang, J.J. Comparison of two kinds of prediction-correction methods for monotone variational inequalities. Comput. Optim. Appl. 2004, 27, 247–267. [Google Scholar] [CrossRef]
- Solodov, M.V.; Tseng, P. Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34, 1814–1830. [Google Scholar] [CrossRef]
- Sun, D. A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory Appl. 1996, 91, 123–140. [Google Scholar] [CrossRef]
- Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
- Cao, Y.; Guo, K. On the convergence of inertial two-subgradient extragradient method for solving variational inequality problems. Optimization 2020, 69, 1237–1253. [Google Scholar] [CrossRef]
- Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 2004, 14, 773–782. [Google Scholar] [CrossRef]
- Attouch, H.; Cabot, A. Convergence of a relaxed inertial forward–backward algorithm for structured monotone inclusions. Appl. Math. Optim. 2019, 80, 547–598. [Google Scholar] [CrossRef]
- Iutzeler, F.; Hendrickx, J.M. A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optim. Methods Softw. 2019, 34, 383–405. [Google Scholar] [CrossRef]
- He, S.; Wu, T.; Gibali, A.; Dong, Q.-L. Totally relaxed, self-adaptive algorithm for solving variational inequalities over the intersection of sub-level sets. Optimization 2018, 67, 1487–1504. [Google Scholar] [CrossRef]
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: New York, NY, USA, 2017. [Google Scholar]
- Nguyen, H.Q.; Xu, H.K. The supporting hyperplane and an alternative to solutions of variational inequalities. J. Nonlinear Convex Anal. 2015, 16, 2323–2331. [Google Scholar]
- Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2012, 75, 742–750. [Google Scholar] [CrossRef]
- Tan, K.K.; Xu, H.K. Approximating Fixed Points of Nonexpansive Mappings by the Ishikawa Iteration Process. J. Math. Anal. Appl. 1993, 178, 301–308. [Google Scholar] [CrossRef]
- Thong, D.V.; Gibali, A. Two strong convergence subgradient extragradient methods for solving variational inequalities in Hilbert spaces. Jpn. J. Ind. Appl. Math. 2019, 36, 299–321. [Google Scholar] [CrossRef]
- Uzor, V.A.; Mewomo, O.T.; Alakoya, T.O.; Gibali, A. Outer approximated projection and contraction method for solving variational inequalities. J. Inequal. Appl. 2023, 2023, 141. [Google Scholar] [CrossRef]