A Modified Viscosity-Type Self-Adaptive Iterative Algorithm for Common Solution of Split Problems with Multiple Output Sets in Hilbert Spaces

Abstract: A modified viscosity-type self-adaptive iterative algorithm is presented in this study, together with a strong convergence theorem for estimating the common solution of the split generalized equilibrium problem and the split common null point problem with multiple output sets, subject to some reasonable restrictions on the control sequences. The suggested algorithm and its immediate consequences are also discussed. The effectiveness of the proposed algorithm is finally demonstrated through analytical examples. The findings presented in this paper consolidate, extend, and improve upon a number of recent findings in the literature.

In 2020, the following generalization of the split feasibility problem with multiple output sets (SFPMOS) was proposed and investigated in real Hilbert spaces by Reich and Tuyen [23]. Let H, H_i (i = 1, 2, ..., N) be N + 1 real Hilbert spaces and let A_i : H → H_i (i = 1, 2, ..., N) be N bounded linear operators. Assume also that K ⊂ H and D_i ⊂ H_i (i = 1, 2, ..., N) are non-empty, closed, and convex sets with K ∩ (∩_{i=1}^{N} A_i^{-1}(D_i)) ≠ ∅. They considered the following problem: find x* ∈ K such that A_i x* ∈ D_i for all i = 1, 2, ..., N. (2)
Reich and Tuyen [23] proposed the following two iterative techniques to solve the SFPMOS (2): for any two elements x_0, y_0 ∈ K, the sequences {x_k} and {y_k} are generated by the schemes (3) and (4), where h : K → K denotes a strict contraction mapping. By employing Algorithms (3) and (4), weak and strong convergence were analyzed. Further, Reich and Tuyen [24] investigated the following split common null point problem with multiple output sets (SCNPPMOS) in real Hilbert spaces: find x* ∈ H such that 0 ∈ M(x*) and 0 ∈ M_i(A_i x*) for all i = 1, 2, ..., N, (5) where M : H → 2^H and M_i : H_i → 2^{H_i} (i = 1, 2, ..., N) are N + 1 multi-valued monotone operators and the A_i are the same as in (2). The authors estimated the solution of (5) by employing the scheme (6): for any x_0 ∈ K, the sequence {x_k} is generated accordingly. Under certain assumptions on the control parameters, they established strong convergence results. On the other hand, the theory of equilibrium problems has expanded tremendously across the pure and applied sciences and has been the subject of extensive research. It offers a framework that applies to a variety of problems in finance, economics, network analysis, optimization, and other areas; see, for example, [25-29].
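For intuition, a feasibility problem of SFPMOS type can be attacked numerically by a simultaneous projection-gradient iteration. The following is a minimal sketch on a toy instance; the sets, operators, and step size here are illustrative assumptions and are not the schemes (3) and (4) of Reich and Tuyen:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SFPMOS instance (illustrative assumptions, not from the paper):
# H = R^3, H_i = R^2; K = non-negative orthant; D_i = closed balls.
N = 3
A = [rng.standard_normal((2, 3)) for _ in range(N)]
radius = 20.0

def proj_K(x):
    """Metric projection onto the non-negative orthant."""
    return np.maximum(x, 0.0)

def proj_Di(y):
    """Metric projection onto the closed ball B(0, radius)."""
    d = np.linalg.norm(y)
    return y if d <= radius else radius * y / d

# Simultaneous projection-gradient iteration for the feasibility problem
# "find x in K with A_i x in D_i for all i": descend on
# f(x) = (1/2) * sum_i dist(A_i x, D_i)^2 and project back onto K.
L = sum(np.linalg.norm(Ai, 2) ** 2 for Ai in A)  # Lipschitz bound for grad f
x = rng.standard_normal(3)
for _ in range(500):
    grad = sum(Ai.T @ (Ai @ x - proj_Di(Ai @ x)) for Ai in A)
    x = proj_K(x - grad / L)

residual = max(np.linalg.norm(Ai @ x - proj_Di(Ai @ x)) for Ai in A)
print(residual)  # near zero: x approximately solves the feasibility problem
```

The constant step 1/L above still requires the operator norms; the self-adaptive step sizes discussed later in the paper are designed precisely to remove that requirement.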
The following split generalized equilibrium problem (SGEP) was introduced and investigated by Kazmi and Rizvi [30], in response to a wide range of works in this area: find z* ∈ K satisfying (7) and such that (8) holds, where ψ_1, ϕ_1 : K × K → R and ψ_2, ϕ_2 : D × D → R are real-valued nonlinear bi-functions.
If ψ_2 = ϕ_2 = 0, then the SGEP (7) and (8) reduces to the following generalized equilibrium problem (GEP), suggested and investigated by Cianciaruso and Marino [31]: find z* ∈ K such that ψ(z*, t) + ϕ(z*, t) ≥ 0 for all t ∈ K, (9) where ψ, ϕ : K × K → R are real-valued nonlinear bi-functions. The GEP (9) is generic in the sense that it encompasses minimization problems, Nash equilibrium problems in non-cooperative games, variational inequality problems, fixed point problems, etc.; see [32]. When ϕ = 0, the GEP (9) turns into the classical equilibrium problem (EP): find z* ∈ K such that ψ(z*, t) ≥ 0 for all t ∈ K. (10) The EP (10) was initially suggested and investigated by Blum and Oettli [33] in 1994.
Recently, Mewomo et al. [34] introduced the split generalized equilibrium problem with multiple output sets (SGEPMOS) as follows: find z* ∈ K such that (11) holds, where GEP(ψ, ϕ) denotes the solution set of the GEP (9). A large number of iterative techniques exist for examining null point problems and equilibrium problems independently; examples of such algorithms can be found in many published works. Many researchers have recently focused their efforts on developing common solutions to the aforementioned problems; see, for example, [3,32]. Motivated by the work of [24,34] and the continuing study in this area, the following problem is considered in this article: find z* satisfying (12), where B_j : H → H_j, j = 1, 2, ..., M, are bounded linear operators. In other words, find z* that is a common solution of the SCNPPMOS (5) and the SGEPMOS (11).
To solve problem (12), a modified viscosity-type self-adaptive algorithm is proposed and studied. The significance of the recommended approach is that it does not require any prior knowledge of the norms of the bounded linear operators. This attribute is essential because, for algorithms whose implementation relies on the operator norm, computing ‖A‖ is challenging. The results of this study are more general than previous ones, since they include a number of additional optimization problems as special cases. Stated plainly, the proposed method has the following characteristics:
1. It extends the works of [24,34] in the current literature.
2. It employs a simple self-adaptive step size determined at each iteration by a straightforward calculation. As a result, the method does not require prior estimation of the norms of the bounded linear operators. This characteristic is crucial because computing the norm of a bounded linear operator, which is necessary for algorithms whose implementation relies on the operator norm, is typically exceedingly challenging.
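To illustrate the idea of a step size that avoids ‖A‖, the sketch below uses a Polyak-type rule built only from computable residuals. This mirrors the idea behind the paper's τ_{i,k} and λ_{j,k}, not their exact formulas; the operator and target set are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))   # toy bounded linear operator (assumption)

def proj_D(y):
    """Projection onto a toy target set D: the closed unit ball."""
    n = np.linalg.norm(y)
    return y if n <= 1.0 else y / n

def self_adaptive_step(x, rho=1.0, eps=1e-12):
    """Polyak-type step size built only from computable residuals.

    No estimate of ||A|| is ever needed; this is an illustrative analogue of
    the paper's tau_{i,k} and lambda_{j,k}, not their exact formulas.
    """
    r = A @ x - proj_D(A @ x)      # residual in the image space
    g = A.T @ r                    # gradient of (1/2)*dist(Ax, D)^2
    gn = np.linalg.norm(g)
    return 0.0 if gn < eps else rho * np.linalg.norm(r) ** 2 / gn ** 2

x = 10.0 * rng.standard_normal(6)
for _ in range(5000):
    tau = self_adaptive_step(x)
    x = x - tau * (A.T @ (A @ x - proj_D(A @ x)))

residual = np.linalg.norm(A @ x - proj_D(A @ x))
print(residual)  # driven toward zero without ever computing ||A||
```

The step is recomputed at every iteration from quantities the algorithm already evaluates, which is exactly the practical appeal of self-adaptive schemes.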

Preliminaries
This section recalls the definitions and results that are used in the convergence analysis of the suggested scheme.
Throughout, → and ⇀ stand for strong and weak convergence, respectively; ω_w(x_k) denotes the set of all weak cluster points of {x_k}; and N denotes the set of natural numbers.
The mapping P_K : H → K is referred to as the metric projection if it assigns to each z ∈ H the unique element P_K z ∈ K satisfying ‖z − P_K z‖ = inf{‖z − t‖ : t ∈ K}. Evidently, P_K is nonexpansive. Moreover, P_K z possesses the following characterization: ⟨z − P_K z, t − P_K z⟩ ≤ 0 for all t ∈ K.
Definition 1. A mapping U : H → H is referred to as:
(i) A contraction, if ∃ L ∈ (0, 1) satisfying ‖U z − U t‖ ≤ L‖z − t‖ for all z, t ∈ H; (14)
(ii) Nonexpansive, if inequality (14) holds with L = 1.
(iii) γ-cocoercive or γ-inverse strongly monotone (γ-ism), if ∃ γ > 0 satisfying ⟨U z − U t, z − t⟩ ≥ γ‖U z − U t‖^2 for all z, t ∈ H;
(iv) Firmly nonexpansive, if ‖U z − U t‖^2 ≤ ⟨U z − U t, z − t⟩ for any z, t ∈ H.
Moreover, Fix(U) represents the collection of all fixed points of U, i.e., Fix(U) = {z ∈ H : U z = z}.
Lemma 1 ([35]). Assume that H is a real Hilbert space. A mapping U : H → H is firmly nonexpansive iff its complement I − U is firmly nonexpansive.
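The metric projection admits simple closed forms for boxes and balls. The following sketch (toy sets, with H = R^n assumed finite-dimensional) numerically checks nonexpansiveness and the standard variational characterization ⟨z − P_K z, y − P_K z⟩ ≤ 0:

```python
import numpy as np

rng = np.random.default_rng(2)

def proj_box(z, lo=-1.0, hi=1.0):
    """Metric projection onto the box [lo, hi]^n (componentwise clipping)."""
    return np.clip(z, lo, hi)

def proj_ball(z, c, r):
    """Metric projection onto the closed ball B(c, r)."""
    d = np.linalg.norm(z - c)
    return z if d <= r else c + r * (z - c) / d

# Nonexpansiveness: ||P z - P t|| <= ||z - t||
z, t = rng.standard_normal(5), rng.standard_normal(5)
assert np.linalg.norm(proj_box(z) - proj_box(t)) <= np.linalg.norm(z - t) + 1e-12

# Variational characterization: <z - P_K z, y - P_K z> <= 0 for every y in K
p = proj_box(z)
for _ in range(100):
    y = rng.uniform(-1.0, 1.0, size=5)   # random point of the box K
    assert np.dot(z - p, y - p) <= 1e-12
print("projection checks passed")
```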
The domain and the range of a multi-valued operator M : H → 2^H are defined by D(M) = {z ∈ H : M(z) ≠ ∅} and R(M) = ∪{M(z) : z ∈ D(M)}, respectively.

Definition 2 ([36]). Suppose that M : H → 2^H is a multi-valued mapping. Then:
(i) The graph of M, denoted G(M), is defined by G(M) = {(z, u) ∈ H × H : u ∈ M(z)};
(ii) M is said to be monotone if ⟨z − t, u − v⟩ ≥ 0 for all (z, u), (t, v) ∈ G(M);
(iii) A monotone mapping M is maximal monotone if the graph of no other monotone operator properly contains G(M). Evidently, a monotone mapping M is maximal iff, for any pair (z, u) ∈ H × H, ⟨z − t, u − v⟩ ≥ 0 for all (t, v) ∈ G(M) implies u ∈ M(z);
(iv) The operator R_r^M = (I_H + rM)^{-1}, defined for all z ∈ H, is said to be the resolvent operator of M, where r > 0 and I_H is the identity operator. Note that R_r^M is nonexpansive. It is trivial that M^{-1}(0) = Fix(R_r^M) for all r > 0.
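As a concrete finite-dimensional instance of the resolvent in Definition 2 (iv), take M to be the subdifferential of the l1-norm on R^n, a maximal monotone operator; its resolvent is the well-known soft-thresholding map. The sketch below is a toy check, not tied to the operators M_i of the paper:

```python
import numpy as np

def resolvent_l1(z, r):
    """Resolvent R_r^M = (I + r M)^{-1} for M = the subdifferential of the
    l1-norm (a maximal monotone operator); it equals soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - r, 0.0)

rng = np.random.default_rng(3)
z, t, r = rng.standard_normal(6), rng.standard_normal(6), 0.7

# Resolvents of monotone operators are (firmly) nonexpansive:
assert (np.linalg.norm(resolvent_l1(z, r) - resolvent_l1(t, r))
        <= np.linalg.norm(t - z) + 1e-12)

# M^{-1}(0) = Fix(R_r^M): for M = d||.||_1 the only zero is the origin,
# and it is indeed the unique fixed point of soft-thresholding.
assert np.allclose(resolvent_l1(np.zeros(6), r), np.zeros(6))
print("resolvent checks passed")
```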
To accomplish our main results, we set out the following significant lemmas.

Lemma 2 ([37]). Assume that M : D(M) ⊂ H → 2^H is a multi-valued monotone mapping. Then the subsequent assertions hold:
(i) The resolvent R_r^M is single-valued on R(I_H + rM) for every r > 0;
(ii) For every number r > 0 and all points z, t ∈ R(I_H + rM), ‖R_r^M z − R_r^M t‖ ≤ ‖z − t‖.
Lemma 3 ([38] (Demiclosedness principle)). Let K (≠ ∅) ⊆ H be a closed convex set and let U : H → H be a nonexpansive mapping with Fix(U) ≠ ∅. Then I − U is demiclosed at zero; that is, if x_k ⇀ x and x_k − U x_k → 0, then x ∈ Fix(U).
Lemma 6. Let {s_k} be a sequence of non-negative real numbers satisfying s_{k+1} ≤ (1 − ζ_k)s_k + ζ_k c_k for all k, where {ζ_k} ⊂ (0, 1) with Σ ζ_k = ∞ and {c_k} is a sequence of real numbers. If lim sup_{s→∞} c_{k_s} ≤ 0 for every subsequence {s_{k_s}} of {s_k} satisfying lim inf_{s→∞}(s_{k_s+1} − s_{k_s}) ≥ 0, then lim_{k→∞} s_k = 0.
To deal with the split generalized equilibrium problem, it is assumed that the real-valued bi-functions ψ, ϕ : K × K → R satisfy the subsequent assumptions:
Assumption 1 ([41]). Let ψ : K × K → R be a real-valued bi-function satisfying the following:
(i) ψ(z, z) = 0, for all z ∈ K;
(ii) ψ is monotone, i.e., ψ(z, t) + ψ(t, z) ≤ 0, for all z, t ∈ K;
(iii) For any triplet z, t, s ∈ K, lim sup_{ε→0+} ψ(εs + (1 − ε)z, t) ≤ ψ(z, t);
(iv) For any fixed point z ∈ K, the map t ↦ ψ(z, t) is convex and lower semi-continuous.
Let ϕ : K × K → R be such that:
(a) ϕ(z, z) ≥ 0, for all z ∈ K;
(b) For any fixed point t ∈ K, the map z ↦ ϕ(z, t) is upper semi-continuous;
(c) For any fixed point z ∈ K, the map t ↦ ϕ(z, t) is convex and lower semi-continuous;
(d) For any fixed s > 0 and any z ∈ K, there exists a non-empty closed, convex, and bounded subset Q of H_1 and a point z̄ ∈ K ∩ Q satisfying the associated coercivity condition.
The subsequent assertions are true given these presumptions:
Lemma 7 ([41]). Assume that the real-valued bi-functions ψ_1, ϕ_1 : K × K → R satisfy the conditions of Assumption 1. Then, for any s > 0 and any point z ∈ H_1, there exists z̄ ∈ K such that ψ_1(z̄, t) + ϕ_1(z̄, t) + (1/s)⟨t − z̄, z̄ − z⟩ ≥ 0 for all t ∈ K.
Lemma 8 ([41]). Assume that the real-valued bi-functions ψ_1, ϕ_1 : K × K → R satisfy the conditions of Assumption 1. For any s > 0 and any point x ∈ H_1, define Q_s^{(ψ_1, ϕ_1)}(x) = {z ∈ K : ψ_1(z, t) + ϕ_1(z, t) + (1/s)⟨t − z, z − x⟩ ≥ 0 for all t ∈ K}. Then, the subsequent assertions hold:
(i) Q_s^{(ψ_1, ϕ_1)} is non-empty as a set and single-valued as a map;
(ii) Q_s^{(ψ_1, ϕ_1)} is firmly nonexpansive;
(iii) Fix(Q_s^{(ψ_1, ϕ_1)}) is closed and convex.

Main Result
This section presents the suggested algorithm and provides an analysis of its convergence.
Step 0. Take any x_0 ∈ H and choose the initial parameters. Step 1. Compute y_k. Step 2. Compute z_k. Step 3. Compute x_{k+1}, and update the step sizes τ_{i,k} and λ_{j,k} as specified. The following hypotheses are necessary tools for analyzing the convergence.
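The viscosity structure at the heart of such schemes, x_{k+1} = ζ_k h(x_k) + (1 − ζ_k) T x_k with a nonexpansive T and a strict contraction h, can be illustrated in a stripped-down one-operator setting. The choices of T, h, and ζ_k below are toy assumptions, not the paper's operators:

```python
import numpy as np

# One-operator viscosity iteration x_{k+1} = zeta_k*h(x_k) + (1-zeta_k)*T(x_k):
# T nonexpansive, h a strict contraction, zeta_k -> 0 with sum zeta_k = inf.
def T(x):
    """Nonexpansive map: metric projection onto the closed unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def h(x):
    """Strict contraction with constant 1/2 (toy choice)."""
    return 0.5 * x + np.array([0.2, 0.0])

x = np.array([5.0, 4.0])
for k in range(1, 2001):
    zeta = 1.0 / np.sqrt(k + 1)       # zeta_k -> 0, sum zeta_k diverges
    x = zeta * h(x) + (1.0 - zeta) * T(x)

# The viscosity limit x* satisfies x* = P_{Fix(T)}(h(x*)); here Fix(T) is the
# unit ball and x* = (0.4, 0), since h((0.4, 0)) = (0.4, 0) lies in the ball.
print(x)
```

The contraction h thus selects a particular fixed point of T, which is why viscosity methods achieve strong rather than merely weak convergence.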
From Lemma 2 (ii), we obtain the first estimate. From (18) and (19), we attain the next inequality. Since z ∈ Ω, we have the corresponding identity. Taking (20) into consideration, we acquire a further bound. It follows from Assumption 2 (ii)-(v) that the required estimate holds. Further, by applying (23) and continuing the process, we acquire the desired bound. As a result, the sequences {x_k}, {y_k}, and {z_k} are all bounded.
The operator P_Ω ∘ h is easily seen to be a contraction. Consequently, by the Banach contraction theorem, there exists a unique point z* ∈ Ω such that z* = P_Ω ∘ h(z*). The characterization of the projection then implies the following.
Lemma 10. Suppose that {x_k} is a sequence induced by Algorithm 1, and let z ∈ Ω. Then, under Assumption 1 and Assumption 2 (i)-(v), the subsequent inequality holds for all k ≥ 1.
Proof. Let z ∈ Ω. Applying Lemma 4 (ii) and (22), we achieve the required estimate, and the stated inequality follows. Hence, the proof is complete.
The strong convergence of the suggested scheme is presented as follows:
Theorem 1. Assume that Assumption 1 and Assumption 2 (i)-(v) hold and that the sequence {x_k} is induced by Algorithm 1. Then x_k → x̄ ∈ Ω, where x̄ = P_Ω ∘ h(x̄).
Proof. Let x̄ = P_Ω ∘ h(x̄). Thanks to Lemma 10, we acquire the basic inequality. Next, we prove that lim_{k→∞} ‖x_k − x̄‖ = 0. By invoking Lemma 6, it remains to verify the subsequential condition; let {x_{k_s}} be an arbitrary subsequence of {x_k} satisfying it. Again, from Lemma 10, we have (27). By using (27) along with Assumption 2 (i), and then Assumption 2 (iv), we obtain the corresponding limits for each i = 0, 1, ..., N. Given that the operators A_i and the sequence {x_{k_s}} are bounded and the resolvent operators R_{r_{i,k_s}}^{M_i} are nonexpansive, it follows that (28) holds. Thus, from Assumption 2 (ii), (29) follows. Combining (28) and (29), we deduce that the limit as s → ∞ in (30) vanishes. By similar arguments, from Lemma 10, Assumption 2 (i),(iv), and (27), we obtain the analogous limit for all j = 0, 1, ..., M. As a result of the boundedness of the operators B_j, the nonexpansivity of the resolvent operators Q_{s_j}^{(ψ_j, ϕ_j)}, and the boundedness of the sequence {y_{k_s}}, it follows that (31) holds. Thus, from Assumption 2 (iii), (32) follows. Combining (31) and (32), we deduce that the corresponding limit in (33) vanishes. Further, from the definition of the sequence {y_k}, applying (30) together with Assumption 2 (iv), it follows from the last inequality that the limit in (34) is zero. Furthermore, from the definition of the sequence {z_k} and (33), together with Assumption 2 (v), we obtain the limit in (35). It follows from (34) and (35) that the combined limit vanishes. Consequently, by applying Assumption 2 (i), we obtain (36). To conclude the proof, we must demonstrate that ω_w(x_k) ⊂ Ω. Since the sequence {x_k} is bounded, ω_w(x_k) is non-empty. Take an arbitrary element x̂ ∈ ω_w(x_k). Then there exists a subsequence {x_{k_s}} of {x_k} satisfying x_{k_s} ⇀ x̂ as s → ∞. From (34), y_{k_s} ⇀ x̂. Since the operators A_i, i = 0, 1, 2, ..., N, are linear and bounded, it follows that A_i x_{k_s} ⇀ A_i x̂. Thus, with the help of Lemma 3 and (30), we conclude that A_i x̂ ∈ M_i^{-1}(0) for each i. Furthermore, from (34), y_{k_s} ⇀ x̂.
Since B_j, j = 0, 1, ..., M, are bounded linear operators, we have B_j y_{k_s} ⇀ B_j x̂. Invoking Lemma 3 and (33), we acquire B_j x̂ ∈ Fix(Q_{s_j}^{(ψ_j, ϕ_j)}) for all j = 0, 1, ..., M. In light of this, we obtain x̂ ∈ Ω, which shows that ω_w(x_k) ⊂ Ω.
Because {x_{k_s}} is bounded, there is a subsequence {x_{k_{s_l}}} of {x_{k_s}} satisfying x_{k_{s_l}} ⇀ x̂ and attaining the relevant lim sup. In light of x̄ = P_Ω ∘ h(x̄), inequalities (24) and (36) yield (37). Applying Lemma 6 to (25) and using (37), along with the fact that lim_{k→∞} ζ_k = 0, we conclude that lim_{k→∞} ‖x_k − x̄‖ = 0, as required.

Consequences
Herein, some direct consequences of the proposed algorithm are listed. If we set R_{r_{i,k}}^{M_i} = I_{H_i} for i = 0, 1, ..., N, then the following scheme is obtained, and the corresponding corollary can be derived by implementing Algorithm 2.
Algorithm 2: Modified viscosity-type self-adaptive iterative algorithm for the SGEPMOS.
Step 0. Take any x_0 ∈ H. Step 1. Compute y_k. Step 2. Compute x_{k+1}, and update the step size λ_{j,k} as specified.
Corollary 1. Suppose that Assumption 1 and Assumption 2 hold. Then the sequence {x_k} generated by Algorithm 2 converges strongly to a solution of the SGEPMOS (11).
If we set Q_{s_j}^{(ψ_j, ϕ_j)} = I_{H_j} for j = 0, 1, ..., M, then we obtain the succeeding algorithm.
Algorithm 3: Modified viscosity-type self-adaptive iterative algorithm for the SCNPPMOS.
Step 1. Compute the iterate y_k. Step 2. Compute x_{k+1}, and update the step size τ_{i,k} as specified.

Analytical Discussion
To better understand how our suggested approaches can be put into practice, we provide some examples in this section.
All numerical calculations were carried out in Matlab R2021a on an Asus Core i5 8th Gen laptop with an NVIDIA GeForce GTX 1650 graphics card. We plot error versus iteration graphs using several randomly selected initial points. The computation was terminated when ‖x_{k+1} − x_k‖ ≤ 10^{-6}.
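The stopping rule ‖x_{k+1} − x_k‖ ≤ 10^{-6} used in the experiments can be wrapped generically. The helper below is a sketch in Python (the Matlab code itself is not reproduced here), with a toy contraction as the usage example:

```python
import numpy as np

def iterate_until(step, x0, tol=1e-6, max_iter=10_000):
    """Run x_{k+1} = step(x_k) until ||x_{k+1} - x_k|| <= tol, the stopping
    rule used in the experiments, or until max_iter sweeps. Returns the final
    iterate and the list of successive errors (for error-vs-iteration plots)."""
    x, errors = x0, []
    for _ in range(max_iter):
        x_next = step(x)
        err = float(np.linalg.norm(x_next - x))
        errors.append(err)
        x = x_next
        if err <= tol:
            break
    return x, errors

# Toy usage: a contraction with fixed point 2.0 (illustrative, not Example 2)
x, errs = iterate_until(lambda v: 0.5 * v + 1.0, np.array([10.0]))
print(x, len(errs))
```

The recorded error list is exactly what is plotted in the error-versus-iteration figures.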

Conclusions
This paper introduced a novel modified viscosity-type self-adaptive scheme to address the SCNPPMOS and the SGEPMOS. We rigorously proved strong convergence theorems, discussed their practical implications, and provided analytical examples that highlight the algorithm's effectiveness. Our work contributes to the theoretical foundations of split problems and offers useful tools for practitioners in fields such as optimization, signal processing, and machine learning. By consolidating and extending recent findings, this research advances the state of the art in solving complex split problems. Future research may explore further enhancements and applications of this algorithm.

Figure 2. Error analysis of Algorithm 1 for Example 2.

Table 2 presents the iterations and execution time of Algorithm 1 for Example 2, with randomly chosen initial points and the stated termination criterion.

Table 2.
Numerical results of Algorithm 1 for Example 2. In Figure 2, errors are plotted against the number of iterations for randomly chosen initial points in Example 2.