Abstract
The implicit midpoint rule is a powerful numerical technique, and in this article we study a class of viscosity iterative approximations for hierarchical problems based on the implicit double midpoint rule. We establish a strong convergence theorem for this method to the unique solution of the hierarchical problem under suitable conditions imposed on the control parameters in Hilbert spaces. Furthermore, we present applications to the constrained convex minimization problem, nonlinear Fredholm integral equations and variational inequalities over fixed point sets. Moreover, numerical examples are presented to illustrate the proposed methods and the convergence results. Our results combine the implicit double midpoint rule with the hierarchical problem.
Keywords:
nonexpansive mapping; strongly positive bounded linear operator; Lipschitz continuous; variational inequality; hierarchical problem; viscosity; implicit double midpoint rule
MSC:
4G20; 46C05; 47H06; 47H09; 47H10; 47J20; 47J25; 47N20; 65J15
1. Introduction
To begin with, we give some necessary notation used throughout the paper. Let H be a real Hilbert space with inner product ⟨·,·⟩ and induced norm ‖·‖, and let C be a closed and convex subset of H. The notations ⇀ and → refer to weak convergence and strong convergence, respectively.
Next, we recall some definitions that will be used in the sequel. We start with the well-known variational inequality problem [], which, for a mapping A : C → H, is to find x* ∈ C such that
⟨Ax*, x − x*⟩ ≥ 0 for all x ∈ C,
where C is nonempty. Its solution set is denoted by VI(C, A), that is,
VI(C, A) = {x* ∈ C : ⟨Ax*, x − x*⟩ ≥ 0 for all x ∈ C}.
A mapping f : C → C is called a contraction with constant ρ ∈ [0, 1) if ‖f(x) − f(y)‖ ≤ ρ‖x − y‖ for all x, y ∈ C.
A self-mapping A on H is said to be η-strongly monotone if there exists a positive real number η such that ⟨Ax − Ay, x − y⟩ ≥ η‖x − y‖² for all x, y ∈ H.
A self-mapping A on H is called L-Lipschitz continuous if there exists a real number L > 0 such that ‖Ax − Ay‖ ≤ L‖x − y‖ for all x, y ∈ H.
A bounded linear operator A is said to be strongly positive on H if there exists a positive constant γ such that ⟨Ax, x⟩ ≥ γ‖x‖² for all x ∈ H.
A mapping T : C → C is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C.
A point x ∈ C is called a fixed point of a mapping T if Tx = x. For a given mapping T, there may be one, several, or no fixed points. Whenever it exists, we denote the set of fixed points of T by Fix(T), i.e., Fix(T) = {x ∈ C : Tx = x}.
For a nonexpansive mapping T : C → C, where C is bounded, closed and convex, Fix(T) is nonempty [].
Since the variational inequality problem has attracted many mathematicians seeking efficient solution methods, a further problem of interest has arisen, known as the hierarchical problem, which extends the classical variational inequality. Instead of considering the variational inequality over a closed convex set C, one considers it over the fixed point set of a nonexpansive mapping. This problem can be stated as follows:
Let a monotone continuous mapping and a nonexpansive mapping be given. The hierarchical problem is to find a point of the fixed point set of the nonexpansive mapping which satisfies the corresponding variational inequality, where this fixed point set is assumed nonempty, and we denote the solution set of the problem accordingly. There are many works on this problem in the literature [,,,,,,,,,,,,,,].
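In the form most commonly studied in these references (writing, in our notation, S for the monotone continuous mapping and T for the nonexpansive mapping above), the problem reads: find x* ∈ Fix(T) such that
⟨Sx*, x − x*⟩ ≥ 0 for all x ∈ Fix(T),
and the solution set consists of all such points x*.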
In 2011, Yao et al. [] proposed an iterative algorithm that converges strongly to the unique solution of a variational inequality arising from a hierarchical problem. Their iterative algorithm generates the sequence as follows:
where the initial point is chosen arbitrarily and both control sequences are taken from a suitable interval. Under appropriate assumptions, they guarantee that the generated sequence converges to the unique solution of the following variational inequality:
Here the underlying operator is a strongly positive bounded linear operator, the viscosity mapping is a contraction, and the governing mapping is nonexpansive with a nonempty fixed point set. The solution set of (1) is denoted accordingly.
Later, in 2011, Ceng et al. [] studied strong convergence to the unique solution of the variational inequality associated with a modified hierarchical problem. For an arbitrarily chosen initial point, the sequence is defined by
where the control sequences are chosen appropriately. Then the sequence converges strongly to the unique solution of the variational inequality, which is to find a point satisfying
In algorithm (2), it is assumed that one operator is Lipschitzian and strongly monotone, another is a contraction mapping, two further mappings are nonexpansive with the relevant fixed point set nonempty, and the remaining parameters satisfy certain conditions. The solution set of (3) is denoted accordingly.
Next, in 2014, Kumam and Jitpeera [] considered strong convergence to the unique solution of a hybrid hierarchical problem. They generated the sequence iteratively as follows:
where the initial point can be chosen arbitrarily and both control sequences are chosen appropriately. They found that the generated sequence converges strongly to the unique solution of the following variational inequality:
where one operator is Lipschitzian and strongly monotone, another is a contraction mapping, and the remaining mappings are nonexpansive with a nonempty fixed point set. The solution set of (5) is denoted accordingly.
In recent years, the implicit midpoint rule has been studied in many papers [,]. The implicit midpoint rule is one of the powerful methods for solving ordinary differential equations. In 2019, Dhakal and Sintunavarat [] studied the viscosity method for the implicit double midpoint rule for nonexpansive mappings. For an arbitrarily chosen initial point, the sequence is generated by the following algorithm:
where the control sequences satisfy some mild conditions. Under these conditions, they showed that the generated sequence converges strongly to the unique solution of the following variational inequality:
where the viscosity mapping is a contraction and T is a nonexpansive mapping with a nonempty fixed point set. They denoted the solution set of (6) accordingly.
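For background, the classical implicit midpoint rule for an initial value problem y′ = g(y) with step size h > 0 generates approximations via
y_{n+1} = y_n + h g((y_n + y_{n+1})/2),
and, in the fixed point setting, the implicit midpoint rule for a nonexpansive mapping T and its viscosity variant take the forms
x_{n+1} = (1 − α_n) x_n + α_n T((x_n + x_{n+1})/2) and x_{n+1} = α_n f(x_n) + (1 − α_n) T((x_n + x_{n+1})/2),
respectively, where f is a contraction and {α_n} ⊂ (0, 1). We recall these standard forms (in our notation) only for orientation.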
Motivated by the research mentioned above, we consider a hybrid viscosity method using the implicit double midpoint rule to solve a hybrid hierarchical problem, stated as follows:
where the mappings involved are nonexpansive with a nonempty fixed point set, one operator is Lipschitzian and strongly monotone, another is a contraction mapping, and the control sequences satisfy some mild conditions. The problem under consideration is stated as follows:
We denote its solution set accordingly, that is,
Under appropriate assumptions, we establish the strong convergence of the sequence generated by our proposed algorithm. The results improve the main theorems of Dhakal and Sintunavarat [] and Kumam and Jitpeera []. In particular, our solution set is more general than those considered previously. Furthermore, our new algorithm (7) is more general than (4), since it employs the double midpoint rule.
The remainder of this paper is organized as follows. In Section 1, we recall some definitions and properties to be used in the sequel. In Section 2, we provide the lemmas needed in the proofs. In Section 3, we prove the strong convergence theorem for the hybrid hierarchical problem with the double midpoint rule in Hilbert spaces. Some deduced results are provided in Section 4. In Section 5, we present some applications and numerical examples. The conclusion is given in the final section.
2. Preliminaries
In this section, we collect some definitions, properties and lemmas that will be needed in this paper. We start with the following inequality:
An operator P_C : H → C that projects every point of H to its unique nearest point in C is called the metric projection of H onto C, that is, ‖x − P_C x‖ ≤ ‖x − y‖ for all y ∈ C. From its definition, it follows readily that the following properties hold.
Furthermore, for a monotone mapping A, property (8) implies that u ∈ VI(C, A) if and only if u = P_C(u − λAu) for any λ > 0.
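For completeness, the projection characterization behind this implication can be stated as follows: for x ∈ H and z ∈ C,
z = P_C x if and only if ⟨x − z, y − z⟩ ≤ 0 for all y ∈ C.
Applying this with z = u and x = u − λAu immediately gives the equivalence u ∈ VI(C, A) ⇔ u = P_C(u − λAu) recorded above.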
Next, we recall some lemmas that will be used in the proof.
Lemma 1
([]). Let {a_n} be a sequence of nonnegative real numbers such that
a_{n+1} ≤ (1 − γ_n) a_n + δ_n, n ≥ 0,
where {γ_n} is a sequence in (0, 1) and {δ_n} is a sequence of real numbers such that
(i) Σ_{n=0}^{∞} γ_n = ∞;
(ii) lim sup_{n→∞} δ_n/γ_n ≤ 0 or Σ_{n=0}^{∞} |δ_n| < ∞.
Then lim_{n→∞} a_n = 0.
Lemma 2
([]). Let C be a nonempty closed and convex subset of a real Hilbert space H, and let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. If {x_n} is a sequence in C such that {x_n} converges weakly to x and {(I − T)x_n} converges strongly to 0, where I is the identity mapping, then x ∈ Fix(T).
3. Main Results
In this section, we propose our algorithm for solving the hierarchical problem by combining the viscosity method with a generalized implicit double midpoint rule. We also verify that the generated sequence converges strongly to a fixed point of the nonexpansive mapping which is also the unique solution of the associated variational inequality.
Theorem 1.
Let C be a nonempty closed and convex subset of a real Hilbert space H. Let the governing operator be κ-Lipschitzian and η-strongly monotone with constants κ > 0 and η > 0, let the viscosity mapping be a ρ-contraction with coefficient ρ, and let the two mappings involved be nonexpansive, one of them having a nonempty fixed point set. Assume the remaining constants satisfy the stated compatibility conditions. Suppose the sequence is generated by the following algorithm, with the initial point chosen arbitrarily:
where the control sequences satisfy the following conditions:
(C1): ;
(C2): , , ;
(C3): .
Then, the generated sequence converges strongly to a point which is the unique solution of the following variational inequality:
In other words, it is the unique fixed point of the corresponding mapping.
Proof.
First, we show that the sequence defined by (9) exists. For each fixed index, consider the mapping obtained from the right-hand side of (9) by regarding the implicit term as the variable. We will show that this mapping is a contraction for every index. Indeed, for each index and any two points, we have
where the resulting contraction constant is strictly less than one for every index. This shows that the mapping is a contraction for every index. By the Banach contraction principle, it has a unique fixed point for every index. Thus, the sequence defined by (9) is well defined.
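For a prototypical illustration (this is the generic viscosity implicit midpoint map, not necessarily the exact map induced by (9); the symbols x, y, α, f and T below are ours), fix x and α ∈ (0, 1) and consider Φ(y) := α f(x) + (1 − α) T((x + y)/2). For any y₁, y₂,
‖Φ(y₁) − Φ(y₂)‖ = (1 − α) ‖T((x + y₁)/2) − T((x + y₂)/2)‖ ≤ ((1 − α)/2) ‖y₁ − y₂‖,
so Φ is a contraction with constant (1 − α)/2 < 1 and the Banach principle yields its unique fixed point.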
We divide the remainder of the proof into six steps.
Step 1. First, we claim that the generated sequence is bounded. Indeed, for any point of the solution set, we can see that
By induction, it follows that
Therefore, the sequence is bounded.
Step 2. We verify that . For each with , we obtain
So that
which, together with the preceding estimates, yields the claimed inequality for all sufficiently large indices. We can also write
for all with , where
and
Using the conditions (C1), (C2) and comparing (10) with Lemma 1, we obtain
Step 3. We want to show that . For each , we have
From the conditions (C1), (C2) and using (11), we obtain
Step 4. We next prove the following claim, where
Consider this limit superior; then there exists a subsequence along which it is attained. From (12), we obtain
This implies strong convergence to 0. Using Lemma 2, we obtain the corresponding fixed point inclusions. Thus, we can conclude that
Step 5. We want to show that
where the point in question is the unique fixed point of the associated mapping. Since the sequence is bounded, it has a weakly convergent subsequence; without loss of generality, we may assume that this subsequence converges weakly to some point, and
From Step 4, we obtain the required estimate. Using (8), we obtain
This completes the proof. □
4. Some Deduced Results
Corollary 1.
Let C be a nonempty closed and convex subset of a real Hilbert space H. Let the governing operator be κ-Lipschitzian and η-strongly monotone with constants κ and η, and let the two mappings involved be nonexpansive, one of them having a nonempty fixed point set. Assume the remaining constants satisfy the stated compatibility conditions. Suppose the sequence is generated by the following algorithm, where the initial point can be chosen arbitrarily:
where the control sequences satisfy conditions (C1)–(C3). Then, the sequence converges strongly to the unique solution of the variational inequality:
In other words, it is the unique fixed point of the corresponding mapping.
Proof.
By making the corresponding choice in Theorem 1, we immediately obtain the desired result. □
Corollary 2.
Let C be a nonempty closed and convex subset of a real Hilbert space H. Let the viscosity mapping be a ρ-contraction with coefficient ρ, and let the two mappings involved be nonexpansive, one of them having a nonempty fixed point set. Suppose the sequence is generated by the following algorithm, with the initial point chosen arbitrarily:
where the control sequences satisfy conditions (C1)–(C3). Then, the sequence converges strongly to the unique solution of the variational inequality:
In other words, it is the unique fixed point of the corresponding mapping.
Proof.
By making the corresponding choices in Theorem 1, we immediately obtain the desired result. □
Corollary 3.
Let C be a nonempty closed and convex subset of a real Hilbert space H, and let the two mappings involved be nonexpansive, one of them having a nonempty fixed point set. Suppose the sequence is generated by the following algorithm, with the initial point chosen arbitrarily:
where the control sequences satisfy conditions (C1)–(C3). Then the sequence converges strongly to the unique solution of the variational inequality:
In other words, it is the unique fixed point of the corresponding mapping.
Proof.
By making the corresponding choices in Corollary 2, we immediately obtain the desired result. □
Corollary 4.
Let C be a nonempty closed and convex subset of a real Hilbert space H, and let the two mappings involved be nonexpansive, one of them having a nonempty fixed point set. Suppose the sequence is generated by the following algorithm, with the initial point chosen arbitrarily:
where the control sequences satisfy conditions (C1)–(C3). Then the sequence converges strongly to the unique solution of the variational inequality:
In other words, it is the unique fixed point of the corresponding mapping.
Proof.
By making the corresponding choice in Corollary 2, we immediately obtain the desired result. □
5. Applications and Numerical Examples
5.1. Nonlinear Fredholm Integral Equation
In this part, we consider the following nonlinear Fredholm integral equation:
where h is a continuous function on the underlying interval and Q is a continuous function. In this case, if we assume that Q satisfies the Lipschitz continuity condition, i.e.,
then we can verify that Equation (13) has at least one solution in the underlying space (see [], Theorem 3.3). Define the mappings S and T by:
and
Then, for any we have:
which implies that S and T are nonexpansive mappings on the underlying space. We can therefore say that finding a solution of Equation (13) is equivalent to finding a common fixed point of S and T.
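A representative instance of this construction (the kernel, the interval and the function space below are illustrative choices of ours, not necessarily those of (13)–(15)) is the equation
x(t) = h(t) + ∫₀¹ Q(t, s, x(s)) ds, t ∈ [0, 1],
with |Q(t, s, u) − Q(t, s, v)| ≤ |u − v|, together with the operator
(Tx)(t) = h(t) + ∫₀¹ Q(t, s, x(s)) ds
on L²[0, 1]. By the Lipschitz condition and the Cauchy–Schwarz inequality, |(Tx)(t) − (Ty)(t)| ≤ ∫₀¹ |x(s) − y(s)| ds ≤ ‖x − y‖, hence ‖Tx − Ty‖ ≤ ‖x − y‖ and T is nonexpansive; a second mapping S can be built in the same way.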
Theorem 2.
Let Q be a mapping satisfying the Lipschitz continuity condition and let h be a continuous function on the closed interval. Let S and T be the mappings defined by (14) and (15). Let the governing operator be κ-Lipschitzian and η-strongly monotone with constants κ and η, respectively, and let the viscosity mapping be a ρ-contraction with coefficient ρ. Suppose that the control sequences satisfy the conditions (C1)–(C3) of Theorem 1. For an arbitrarily chosen initial point, let the sequence be generated by:
where the parameters are as above. Then, the sequence converges strongly in the underlying space to a solution of the integral Equation (13).
5.2. Application to Convex Minimization Problem
In this part, we consider the well-known optimization problem
where Ψ is a convex and differentiable function. Assume that (17) is consistent, and let its nonempty solution set be denoted accordingly. We generate the sequence iteratively by using the gradient projection method as follows:
where the step size is positive and Ψ is (Gâteaux) differentiable. If the gradient ∇Ψ is L-Lipschitzian, then ∇Ψ is (1/L)-inverse strongly monotone, that is, ⟨∇Ψ(x) − ∇Ψ(y), x − y⟩ ≥ (1/L)‖∇Ψ(x) − ∇Ψ(y)‖² for all x, y.
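For reference (the step size symbol λ below is ours), the gradient projection step referred to above is typically of the form
x_{n+1} = P_C(x_n − λ ∇Ψ(x_n)), n ≥ 0,
where a choice 0 < λ < 2/L guarantees that the mapping P_C(I − λ∇Ψ) is nonexpansive, so that fixed point methods for nonexpansive mappings apply.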
Theorem 3.
Let C be a nonempty closed convex subset of a real Hilbert space H. For the minimization problem (17), assume that Ψ is (Gâteaux) differentiable and that the gradient ∇Ψ is a (1/L)-inverse strongly monotone mapping with L > 0. Let the viscosity mapping be a ρ-contraction with coefficient ρ. Suppose that the control sequences satisfy the conditions (C1)–(C3) of Theorem 1. For a given initial point, let the sequence be generated by:
Then the sequence converges strongly to a solution of the minimization problem (17), which is also the unique solution of the variational inequality
where .
5.3. Application to Hierarchical Minimization
The following hierarchical minimization problem is considered in this subsection (see [] and references therein).
Let two lower semi-continuous convex functions be given. The hierarchical minimization problem is stated as follows:
Assume that the solution set of the inner minimization problem is nonempty. Define the solution set of (19) accordingly and assume
Assume both functions are differentiable and their gradients satisfy the Lipschitz continuity conditions:
Note that condition (18) implies that each gradient is inverse strongly monotone. Now let
where the step sizes are positive. Note that the resulting mapping is nonexpansive provided the step sizes are suitably restricted. Furthermore, it is easily seen that
The optimality condition for a point to be a solution of the hierarchical minimization problem (19) is the VI:
Theorem 4.
Assume the hierarchical minimization problem (19) is solvable. Let the viscosity mapping be a ρ-contraction with coefficient ρ. Suppose that the control sequences satisfy the conditions (C1)–(C3) of Theorem 1. Let the sequence be generated by:
If condition (18) is satisfied and the parameters are suitably restricted, then the sequence converges in norm to a solution of the hierarchical minimization problem (19), which also solves the VI
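As a brief illustration of the nonexpansiveness assertion used in this subsection (stated here for a generic convex differentiable function φ with L-Lipschitz gradient; the symbols φ, λ and L are ours), inverse strong monotonicity gives, for any x, y and any 0 < λ ≤ 2/L,
‖(I − λ∇φ)x − (I − λ∇φ)y‖² = ‖x − y‖² − 2λ⟨x − y, ∇φ(x) − ∇φ(y)⟩ + λ²‖∇φ(x) − ∇φ(y)‖²
≤ ‖x − y‖² − λ(2/L − λ)‖∇φ(x) − ∇φ(y)‖² ≤ ‖x − y‖²,
so I − λ∇φ is indeed nonexpansive for such step sizes.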
5.4. Numerical Experiments
Example 1.
Let the underlying set be a subset of a real Hilbert space with the usual inner product, and define the mappings by
Let the sequence be generated by algorithm (9) with the stated parameter choices. Then, the sequence converges strongly to the indicated limit.
Under different settings of the initial point, the computational results of algorithm (9) are given in Table 1 and Figure 1.
Table 1.
The approximate values obtained via algorithm (9) for the given initial points.
Figure 1.
Values of .
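For readers who wish to experiment numerically, the following Python sketch illustrates how an iteration of viscosity implicit midpoint type can be carried out in practice. It is a minimal sketch only: the mappings T and f, every parameter value, and the stopping rules below are hypothetical placeholders, and the scheme shown is a generic viscosity implicit midpoint step rather than the exact algorithm (9); the implicit equation at each step is resolved by an inner fixed point loop.

import numpy as np

# Minimal sketch of a viscosity-type implicit midpoint iteration.
# T, f and all parameter values are hypothetical placeholders;
# this is not the exact algorithm (9) of the paper.

def T(x):
    # a nonexpansive mapping (here simply a linear contraction toward 0)
    return 0.5 * x

def f(x):
    # a rho-contraction with rho = 0.3, used as the viscosity term
    return 0.3 * x

def implicit_step(x, alpha):
    # Solve y = alpha*f(x) + (1 - alpha)*T((x + y)/2) for y.
    # The right-hand side is a contraction in y with constant
    # (1 - alpha)/2 < 1, so a fixed point loop converges.
    y = x.copy()
    for _ in range(10_000):
        y_new = alpha * f(x) + (1.0 - alpha) * T(0.5 * (x + y))
        if np.linalg.norm(y_new - y) < 1e-12:
            break
        y = y_new
    return y_new

x = np.array([1.0, -2.0, 0.5])      # arbitrary starting point
for n in range(1, 200):
    alpha_n = 1.0 / (n + 1)         # a diminishing control sequence
    x_next = implicit_step(x, alpha_n)
    if np.linalg.norm(x_next - x) < 1e-10:
        x = x_next
        break
    x = x_next

print("approximate limit:", x)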
6. Conclusions
According to the importance and attractiveness of hierarchical problems, in this work we applied the viscosity technique together with a generalized implicit double midpoint rule to find a fixed point of a nonexpansive mapping in the framework of real Hilbert spaces. We obtained a strong convergence theorem for the proposed algorithm, whose limit solves the fixed point problem and is also the solution of the hierarchical problem under consideration. We also proposed several deduced corollaries and showed how to apply the algorithm to other problems, including the nonlinear Fredholm integral equation, the convex minimization problem and hierarchical minimization. Moreover, we conducted numerical experiments with different initial points to illustrate the effectiveness of the algorithm.
Author Contributions
Conceptualization, T.J., W.K. and A.P.; methodology, T.J.; writing—original draft preparation, T.J.; writing—review and editing, T.J. and W.K. All authors have read and agreed to the published version of the manuscript.
Funding
This research received funding from the Rajamangala University of Technology Thanyaburi and Rajamangala University of Technology Lanna.
Acknowledgments
This research was supported by The Science, Research and Innovation Promotion Funding (TSRI) (Grant no. FRB650070/0168). This research block grant was managed under Rajamangala University of Technology Thanyaburi (FRB65E0633M.2).
Conflicts of Interest
The authors declare no conflict of interest.
References
- Hartman, P.; Stampacchia, G. On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115, 271–310. [Google Scholar] [CrossRef]
- Kirk, W.A. A fixed point theorem for mappings which do not increase distances. Am. Math. Mon. 1965, 72, 1004–1006. [Google Scholar] [CrossRef]
- Combettes, P.L. A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Signal Process. 2003, 51, 1771–1782. [Google Scholar] [CrossRef]
- Gu, G.; Wang, S.; Cho, Y.J. Strong convergence algorithms for hierarchical fixed points problems and variational inequalities. J. Appl. Math. 2011, 2011, 164978. [Google Scholar] [CrossRef]
- Hirstoaga, S.A. Iterative selection method for common fixed point problems. J. Math. Anal. Appl. 2006, 324, 1020–1035. [Google Scholar] [CrossRef]
- Iiduka, H.; Yamada, I. A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 2009, 58, 251–261. [Google Scholar] [CrossRef]
- Marino, G.; Xu, H.K. Explicit hierarchical fixed point approach to variational inequalities. J. Optim. Theory Appl. 2011, 149, 61–78. [Google Scholar] [CrossRef]
- Pakdeerat, N.; Sitthithakerngkiet, K. Approximating methods for monotone inclusion and two variational inequality. Bangmod Int. J. Math. Comp. Sci. 2020, 6, 71–89. [Google Scholar]
- Slavakis, K.; Yamada, I. Robust wideband beamforming by the hybrid steepest descent method. IEEE Trans. Signal Process. 2007, 55, 4511–4522. [Google Scholar] [CrossRef]
- Slavakis, K.; Yamada, I.; Sakaniwa, K. Computation of symmetric positive definite Toeplitz matrices by the hybrid steepest descent method. Signal Process 2003, 83, 1135–1140. [Google Scholar] [CrossRef]
- Yamada, I. The hybrid steepest descent method for the variational inequality problems over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; 2001; Volume 8, pp. 473–504. [Google Scholar]
- Yao, Y.; Cho, J.; Liou, Y.C. Iterative algorithms for hierarchical fixed points problems and variational inequalities. Math. Comput. Model. 2010, 52, 1697–1705. [Google Scholar] [CrossRef]
- Yao, Y.; Cho, J.; Liou, Y.C. Hierarchical convergence of an implicit double net algorithm for nonexpansive semigroups and variational inequality problems. Fixed Point Theory Appl. 2011, 2011, 101. [Google Scholar] [CrossRef]
- Yao, Y.; Cho, Y.J.; Yang, P.X. An iterative algorithm for a hierarchical problem. J. Appl. Math. 2012, 2012, 320421. [Google Scholar] [CrossRef]
- Yao, Y.; Liou, Y.C.; Chen, C.P. Hierarchical convergence of a double-net algorithm for equilibrium problems and variational inequality problems. Fixed Point Theory Appl. 2010, 2010, 642584. [Google Scholar] [CrossRef]
- Yamada, I.; Ogura, N. Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mapping. Numer. Funct. Anal. Optim. 2004, 25, 619–655. [Google Scholar] [CrossRef]
- Yamada, I.; Ogura, N.; Shirakawa, N. A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems. Contemp. Math. 2002, 313, 269–305. [Google Scholar]
- Yao, Y.; Liou, Y.C.; Kang, S.M. Algorithms construction for variational inequalities. Fixed Point Theory Appl. 2011, 2011, 794203. [Google Scholar] [CrossRef]
- Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Iterative methods for triple hierarchical variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2011, 151, 489–512. [Google Scholar] [CrossRef]
- Kumam, P.; Jitpeera, T. Strong convergence of an iterative algorithm for hierarchical problems. Abstract Appl. Anal. 2014, 2014, 678147. [Google Scholar] [CrossRef]
- Alghamdi, M.A.; Alghamdi, M.A.; Shahzad, N.; Xu, H.K. The implicit midpoint rule for nonexpansive mappings. Fixed Point Theory Appl. 2014, 2014, 96. [Google Scholar] [CrossRef]
- Xu, H.K.; Alghamdi, M.A.; Shahzad, N. The viscosity technique for the implicit midpoint rule of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 2015, 41. [Google Scholar] [CrossRef]
- Dhakal, S.; Sintunavarat, W. The viscosity method for the implicit double midpoint rule with numerical results and its applications. Comput. Appl. Math. 2019, 38, 40. [Google Scholar] [CrossRef]
- Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
- Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990; Volume 28. [Google Scholar]
- Nieto, J.J.; Xu, H.K. Solvability of nonlinear Volterra and Fredholm equations in weighted spaces. Nonlinear Anal. 1995, 24, 1289–1297. [Google Scholar] [CrossRef]
- Cabot, A. Proximal point algorithm controlled by a slowly vanishing term: Applications to hierarchical minimization. SIAM J. Optim. 2005, 15, 555–572. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).