Article

Flexible and Efficient Iterative Solutions for General Variational Inequalities in Real Hilbert Spaces

by Emirhan Hacıoğlu 1, Müzeyyen Ertürk 2, Faik Gürsoy 2 and Gradimir V. Milovanović 3,4,*
1 Department of Mathematics, Trakya University, 22030 Edirne, Türkiye
2 Department of Mathematics, Adiyaman University, 02040 Adiyaman, Türkiye
3 Serbian Academy of Sciences and Arts, 11000 Belgrade, Serbia
4 Faculty of Sciences and Mathematics, University of Niš, 18000 Niš, Serbia
* Author to whom correspondence should be addressed.
Axioms 2025, 14(4), 288; https://doi.org/10.3390/axioms14040288
Submission received: 15 February 2025 / Revised: 31 March 2025 / Accepted: 7 April 2025 / Published: 11 April 2025
(This article belongs to the Special Issue Numerical Methods and Approximation Theory)

Abstract

This paper introduces a novel Picard-type iterative algorithm for solving general variational inequalities in real Hilbert spaces. The proposed algorithm enhances both the theoretical framework and practical applicability of iterative algorithms by relaxing restrictive conditions on parametric sequences, thereby expanding their scope of use. We establish convergence results, including a convergence equivalence with a previous algorithm, highlighting the theoretical relationship while demonstrating the increased flexibility and efficiency of the new approach. The paper also addresses gaps in the existing literature by offering new theoretical insights into the transformations associated with variational inequalities and the continuity of their solutions, thus paving the way for future research. The theoretical advancements are complemented by practical applications, such as the adaptation of the algorithm to convex optimization problems and its use in real-world contexts like machine learning. Numerical experiments confirm the proposed algorithm’s versatility and efficiency, showing superior performance and faster convergence compared to an existing method.

1. Introduction

In this paper, we adopt the standard notation for a real Hilbert space $\mathcal{H}$. The inner product on $\mathcal{H}$ is denoted by $\langle\cdot,\cdot\rangle$, and the associated norm is represented by $\|\cdot\|$. Let $H$ denote a nonempty, closed, and convex subset of $\mathcal{H}$, and let $T,g:\mathcal{H}\to\mathcal{H}$ be two nonlinear operators. The operator $T:\mathcal{H}\to\mathcal{H}$ is called
(i) $\lambda$-Lipschitzian if there exists a constant $\lambda>0$ such that
$$\|Tx-Ty\|\le\lambda\|x-y\|\quad(\forall x,y\in\mathcal{H});$$
(ii) nonexpansive if
$$\|Tx-Ty\|\le\|x-y\|\quad(\forall x,y\in\mathcal{H});$$
(iii) $\alpha$-inverse strongly monotonic if there exists a constant $\alpha>0$ such that
$$\langle Tx-Ty,\,x-y\rangle\ge\alpha\|Tx-Ty\|^{2}\quad(\forall x,y\in\mathcal{H});\qquad(3)$$
(iv) $r$-strongly monotonic if there exists a constant $r>0$ such that
$$\langle Tx-Ty,\,x-y\rangle\ge r\|x-y\|^{2}\quad(\forall x,y\in\mathcal{H});$$
(v) relaxed $(\gamma,r)$-cocoercive if there exist constants $\gamma>0$ and $r>0$ such that
$$\langle Tx-Ty,\,x-y\rangle\ge-\gamma\|Tx-Ty\|^{2}+r\|x-y\|^{2}\quad(\forall x,y\in\mathcal{H}).$$
It is evident that the classes of $\alpha$-inverse strongly monotonic and $r$-strongly monotonic mappings are subsets of the class of relaxed $(\gamma,r)$-cocoercive mappings; however, the reverse implication does not hold.
Example 1.
Let $\mathcal{H}=\mathbb{R}$ and $H=[1/\sqrt{3},\infty)$. Clearly, $\mathcal{H}=\mathbb{R}$ is a Hilbert space with the norm $\|x\|=|x|$ induced by the inner product $\langle x,y\rangle=x\cdot y$. Define the operator $T:[1/\sqrt{3},\infty)\to\mathbb{R}$ by $Tx=-x^{3}+10$.
We demonstrate that T is relaxed $(\gamma,r)$-cocoercive with $\gamma=2$ and $r=1$. Specifically, we aim to verify that for all $x,y\in[1/\sqrt{3},\infty)$,
$$\langle Tx-Ty,\,x-y\rangle+2\|Tx-Ty\|^{2}-\|x-y\|^{2}\ge0.$$
First, note that $\langle Tx-Ty,\,x-y\rangle=\langle -x^{3}+y^{3},\,x-y\rangle=-(x-y)^{2}\bigl(x^{2}+xy+y^{2}\bigr)$ and
$$\gamma\|Tx-Ty\|^{2}-r\|x-y\|^{2}=2(x-y)^{2}\bigl(x^{2}+xy+y^{2}\bigr)^{2}-(x-y)^{2}.$$
Combining the terms, for all $x,y\in[1/\sqrt{3},\infty)$, we see that
$$\langle Tx-Ty,\,x-y\rangle+2\|Tx-Ty\|^{2}-\|x-y\|^{2}=(x-y)^{2}\Bigl[2\bigl(x^{2}+xy+y^{2}\bigr)^{2}-\bigl(x^{2}+xy+y^{2}\bigr)-1\Bigr]\ge0.$$
Indeed, putting $X=x^{2}+xy+y^{2}$, we conclude that
$$2\bigl(x^{2}+xy+y^{2}\bigr)^{2}-\bigl(x^{2}+xy+y^{2}\bigr)-1=2X^{2}-X-1=(2X+1)(X-1)\ge0,$$
because $X\ge1$ for all $x,y\in[1/\sqrt{3},\infty)$. Thus, T is relaxed $(2,1)$-cocoercive.
Since $\langle Tx-Ty,\,x-y\rangle=-(x-y)^{2}\bigl(x^{2}+xy+y^{2}\bigr)\le0$ for all $x,y\in[1/\sqrt{3},\infty)$, there is no positive constant $\alpha$ such that (3) holds. Thus, the operator T is not $\alpha$-inverse strongly monotonic, nor is it $r$-strongly monotonic.
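The cocoercivity inequality above is easy to sanity-check numerically. The short Python sketch below is only an illustration (it is not part of the paper's computations): it samples random points of $[1/\sqrt{3},\infty)$ and verifies both the relaxed $(2,1)$-cocoercivity and the fact that $\langle Tx-Ty,\,x-y\rangle\le0$.

```python
import random
import math

def T(x):
    # Operator from Example 1: T x = -x^3 + 10 on [1/sqrt(3), infinity)
    return -x**3 + 10.0

random.seed(0)
lo = 1.0 / math.sqrt(3.0)
for _ in range(100_000):
    x = lo + random.random() * 10.0
    y = lo + random.random() * 10.0
    inner = (T(x) - T(y)) * (x - y)            # <Tx - Ty, x - y> in R
    lhs = inner + 2.0 * (T(x) - T(y))**2 - (x - y)**2
    assert lhs >= -1e-9                        # relaxed (2,1)-cocoercivity
    assert inner <= 1e-9                       # <Tx - Ty, x - y> <= 0, so no alpha > 0 works in (3)
print("relaxed (2,1)-cocoercivity verified on random samples")
```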
The theory of variational inequalities, initially introduced by Stampacchia [1] in the context of obstacle problems in potential theory, provides a powerful framework for addressing a broad spectrum of problems in both pure and applied sciences. Stampacchia’s pioneering work revealed that the minimization of differentiable convex functions associated with such problems can be characterized by inequalities, thus establishing the foundation for variational inequality theory. The classical variational inequality problem (VIP) is commonly stated as follows:
Find $u\in H$ such that
$$\langle Tu,\,v-u\rangle\ge0\quad(\forall v\in H),\qquad(5)$$
where $T:\mathcal{H}\to\mathcal{H}$ is a given operator. The VI (5) and its solution set are denoted by $\mathrm{VI}(H,T)$ and $\Omega(H,T)=\{u\in H:\langle Tu,\,v-u\rangle\ge0,\ \forall v\in H\}$, respectively.
Lions and Stampacchia [2] further expanded this theory, demonstrating its deep connections to other classical mathematical results, including the Riesz–Fréchet representation theorem and the Lax–Milgram lemma. Over time, the scope of variational inequality theory has been extended and generalized, becoming an indispensable tool for the analysis of optimization problems, equilibrium systems, and dynamic processes in a variety of fields. The historical development of variational principles, with contributions from figures such as Euler, Lagrange, Newton, and the Bernoulli brothers, highlights their profound impact on the mathematical sciences. These principles serve as the foundation for solving maximum and minimum problems across diverse disciplines such as mechanics, game theory, economics, general relativity, transportation, and machine learning. Both classical and contemporary studies emphasize the importance of variational methods in solving differential equations, modeling physical phenomena, and formulating unified theories in elementary particle physics. The remarkable versatility of variational inequalities stems from their ability to provide a generalized framework for tackling a wide range of problems, thereby advancing both theoretical insights and computational techniques.
Consequently, the theory of variational inequalities has garnered significant attention over the past three decades, with substantial efforts directed towards its development in various directions [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. Building on this rich foundation, Noor [18] introduced a significant extension of variational inequalities known as the general nonlinear variational inequality (GNVI), formulated as follows:
Find $u\in\mathcal{H}$ such that
$$\langle Tu,\,g(v)-g(u)\rangle\ge0\quad(\forall v\in\mathcal{H}\ \text{with}\ g(v),g(u)\in H).\qquad(6)$$
The GNVI (6) and its solution set are denoted by $\mathrm{GNVI}(H,T,g)$ and $\Omega(H,T,g)=\{u\in\mathcal{H}:\langle Tu,\,g(v)-g(u)\rangle\ge0,\ \forall v\in\mathcal{H},\ g(v),g(u)\in H\}$, respectively. It has been shown in Ref. [18] that problem (6) reduces to $\mathrm{VI}(H,T)$ when $g\equiv I$ (the identity operator). Furthermore, the GNVI problem can be reformulated as a general nonlinear complementarity problem:
Find $u\in\mathcal{H}$ such that
$$\langle Tu,\,g(u)\rangle=0,\quad g(u)\in H,\quad Tu\in H^{*},\qquad(7)$$
where $H^{*}=\{u\in\mathcal{H}:\langle u,v\rangle\ge0\ \text{for each}\ v\in H\}$ is the dual cone of a convex cone $H$ in $\mathcal{H}$. For $g(u)=m(u)+H$, where $m$ is a point-to-point mapping, problem (7) corresponds to the implicit (quasi-)complementarity problem.
A wide range of problems arising in various branches of pure and applied sciences have been studied within the unified framework of the GNVI problem (6) (see Refs. [1,19,20,21]). As an illustration of its application in differential equation theory, Noor [22] successfully formulated and studied the following third-order implicit obstacle boundary value problem:
Find $u(x)$ such that, on $\Lambda=[0,1]$,
$$-u'''(x)\ge f(x),\qquad u(x)\ge\psi(x),\qquad\bigl[-u'''(x)-f(x)\bigr]\bigl[u(x)-\psi(x)\bigr]=0,$$
$$u(0)=0,\qquad u'(0)=0,\qquad u(1)=0,$$
where $\psi(x)$ is an obstacle function and $f(x)$ is a continuous function.
The projection operator technique enables the establishment of an equivalence between the variational inequality $\mathrm{VI}(H,T)$ and fixed-point problems, as follows:
Lemma 1.
Let $P_{H}:\mathcal{H}\to H$ be the projection onto $H$ (which is also nonexpansive). For given $z\in\mathcal{H}$, the condition
$$\langle u-z,\,v-u\rangle\ge0\quad(\forall v\in H)$$
is equivalent to $u=P_{H}[z]$. This implies that
$$u\in\mathrm{VI}(H,T)\iff u=P_{H}\bigl[u-\sigma Tu\bigr],$$
where $\sigma>0$ is a constant.
Applying this lemma to the GNVI problem (6), Noor [18] derived the following equivalence result, which establishes a connection between the GNVI and fixed-point problems:
Lemma 2.
Let $P_{H}:\mathcal{H}\to H$ be the projection onto $H$ (which is also nonexpansive). A function $u\in\mathcal{H}$ satisfies the GNVI problem (6) if and only if it satisfies the relation
$$g(u)=P_{H}\bigl[g(u)-\sigma Tu\bigr],\qquad(8)$$
where σ > 0 is a constant.
This equivalence has played a crucial role in the development of efficient methods for solving GNVI problems and related optimization problems. Noor [22] showed that the relation (8) can be rewritten as
$$u=u-g(u)+P_{H}\bigl[g(u)-\sigma Tu\bigr],$$
which implies that
$$u=Su=u-g(u)+P_{H}\bigl[g(u)-\sigma Tu\bigr]=S\bigl\{u-g(u)+P_{H}[g(u)-\sigma Tu]\bigr\},\qquad(9)$$
where $S:\mathcal{H}\to\mathcal{H}$ is a nonexpansive operator and $F(S)$ denotes the set of fixed points of $S$.
Numerous iterative methods have been proposed for solving variational inequalities and variational inclusions [22,23,24,25,26,27,28,29,30,31]. Among these, Noor [22] introduced an iterative algorithm based on the fixed-point formulation (9) to find a common solution to both the general nonlinear variational inequality GNVI and the fixed-point problem. The algorithm is described as follows:
$$\begin{aligned}x_{0}^{(1)}&\in\mathcal{H},\\z_{n}^{(1)}&=(1-c_{n})x_{n}^{(1)}+c_{n}S\bigl\{x_{n}^{(1)}-g(x_{n}^{(1)})+P_{H}[g(x_{n}^{(1)})-\sigma Tx_{n}^{(1)}]\bigr\},\\y_{n}^{(1)}&=(1-b_{n})x_{n}^{(1)}+b_{n}S\bigl\{z_{n}^{(1)}-g(z_{n}^{(1)})+P_{H}[g(z_{n}^{(1)})-\sigma Tz_{n}^{(1)}]\bigr\},\\x_{n+1}^{(1)}&=(1-a_{n})x_{n}^{(1)}+a_{n}S\bigl\{y_{n}^{(1)}-g(y_{n}^{(1)})+P_{H}[g(y_{n}^{(1)})-\sigma Ty_{n}^{(1)}]\bigr\},\end{aligned}\qquad(10)$$
where $\{a_{n}\}_{n=0}^{\infty},\{b_{n}\}_{n=0}^{\infty},\{c_{n}\}_{n=0}^{\infty}\subset[0,1]$.
The convergence of this algorithm was established in [22] under the following conditions:
Theorem 1.
Let $T:\mathcal{H}\to\mathcal{H}$ be a relaxed $(\gamma,r)$-cocoercive and $\lambda$-Lipschitzian mapping, $g:\mathcal{H}\to\mathcal{H}$ be a relaxed $(\gamma_{1},r_{1})$-cocoercive and $\lambda_{1}$-Lipschitzian mapping, and $S:\mathcal{H}\to\mathcal{H}$ be a nonexpansive mapping such that $F(S)\cap\Omega(H,T,g)\neq\emptyset$. Define $\{x_{n}^{(1)}\}_{n=0}^{\infty}$ as the sequence generated by the algorithm in (10), with real sequences $\{a_{n}\}_{n=0}^{\infty},\{b_{n}\}_{n=0}^{\infty},\{c_{n}\}_{n=0}^{\infty}\subset[0,1]$, where $\sum_{n=0}^{\infty}a_{n}=\infty$. Suppose the following conditions are satisfied
$$\Bigl|\sigma-\frac{r-\gamma\lambda^{2}}{\lambda^{2}}\Bigr|<\frac{\sqrt{(r-\gamma\lambda^{2})^{2}-\lambda^{2}L(2-L)}}{\lambda^{2}},\qquad r>\gamma\lambda^{2}+\lambda\sqrt{L(2-L)},\qquad L<1,$$
where
$$L=2\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}.$$
Then, $\{x_{n}^{(1)}\}_{n=0}^{\infty}$ converges strongly to a solution $s\in F(S)\cap\Omega(H,T,g)$.
Noor’s algorithm in (10) and its variants have been widely studied and applied to variational inclusions, variational inequalities, and related optimization problems. These algorithms are recognized for their efficiency and flexibility, contributing significantly to the field of variational inequalities. However, there remains considerable potential for developing more robust and broadly applicable iterative algorithms for solving GNVI problems. Motivated by the limitations of existing methods, we propose a novel Picard-type iterative algorithm designed to address general variational inequalities and nonexpansive mappings:
$$\begin{aligned}x_{0}&\in\mathcal{H},\\z_{n}&=(1-c_{n})x_{n}+c_{n}S\bigl\{x_{n}-g(x_{n})+P_{H}[g(x_{n})-\sigma Tx_{n}]\bigr\},\\y_{n}&=(1-b_{n})S\bigl\{x_{n}-g(x_{n})+P_{H}[g(x_{n})-\sigma Tx_{n}]\bigr\}+b_{n}S\bigl\{z_{n}-g(z_{n})+P_{H}[g(z_{n})-\sigma Tz_{n}]\bigr\},\\x_{n+1}&=S\bigl\{y_{n}-g(y_{n})+P_{H}[g(y_{n})-\sigma Ty_{n}]\bigr\},\end{aligned}\qquad(11)$$
where $\{b_{n}\}_{n=0}^{\infty},\{c_{n}\}_{n=0}^{\infty}\subset[0,1]$. Algorithm (11) cannot be directly derived from (10) because the update rule for $y_{n}$ differs fundamentally. In the proposed algorithm (11), $y_{n}$ is updated as
$$y_{n}=(1-b_{n})S\bigl\{x_{n}-g(x_{n})+P_{H}[g(x_{n})-\sigma Tx_{n}]\bigr\}+b_{n}S\bigl\{z_{n}-g(z_{n})+P_{H}[g(z_{n})-\sigma Tz_{n}]\bigr\},$$
whereas in Noor's algorithm (10), the update for $y_{n}^{(1)}$ is a direct convex combination of the previous iterate and a mapped point:
$$y_{n}^{(1)}=(1-b_{n})x_{n}^{(1)}+b_{n}S\bigl\{z_{n}^{(1)}-g(z_{n}^{(1)})+P_{H}[g(z_{n}^{(1)})-\sigma Tz_{n}^{(1)}]\bigr\}.$$
This structural difference in the $y_{n}$ update step leads to different iterative behaviors, making it impossible to derive (11) directly from (10).
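To make the structural difference concrete, here is a minimal Python sketch of both schemes; the operators T, g, S, the projection proj_H, the step size sigma, and the parameter sequences a, b, c are placeholders to be supplied by the user, so this is an illustrative template rather than the authors' implementation.

```python
def step_map(x, T, g, proj_H, S, sigma):
    """One application of u -> S(u - g(u) + P_H[g(u) - sigma*T(u)])."""
    return S(x - g(x) + proj_H(g(x) - sigma * T(x)))

def noor_iteration(x0, T, g, proj_H, S, sigma, a, b, c, n_iter):
    """Algorithm (10): every update keeps a convex-combination share of x_n."""
    x = x0
    for n in range(n_iter):
        z = (1 - c[n]) * x + c[n] * step_map(x, T, g, proj_H, S, sigma)
        y = (1 - b[n]) * x + b[n] * step_map(z, T, g, proj_H, S, sigma)
        x = (1 - a[n]) * x + a[n] * step_map(y, T, g, proj_H, S, sigma)
    return x

def picard_type_iteration(x0, T, g, proj_H, S, sigma, b, c, n_iter):
    """Algorithm (11): y_n mixes two mapped points; x_{n+1} is a pure Picard step."""
    x = x0
    for n in range(n_iter):
        z = (1 - c[n]) * x + c[n] * step_map(x, T, g, proj_H, S, sigma)
        y = (1 - b[n]) * step_map(x, T, g, proj_H, S, sigma) \
            + b[n] * step_map(z, T, g, proj_H, S, sigma)
        x = step_map(y, T, g, proj_H, S, sigma)
    return x
```

The only difference lies in the y-update and the final update: (10) always retains a $(1-a_{n})$ or $(1-b_{n})$ share of $x_{n}$, whereas (11) feeds already-mapped points into the convex combination and then applies the map once more without damping.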
Building on these methodological advances, recent research has significantly deepened our understanding of variational inequalities by offering innovative frameworks and solution techniques that address real-world challenges.
The literature has seen significant advancements in variational inequality theory through seminal contributions that extend its applicability to diverse practical problems. Nagurney [32] laid the groundwork by establishing a comprehensive framework for modeling complex network interactions, which has served as a cornerstone for subsequent research in optimization and equilibrium analysis. Also, it is interesting to see a collection of papers presented in the book [33], mainly from the 3rd International Conference on Dynamics of Disasters (Kalamata, Greece, 5–9 July 2017), offering valuable strategies for optimizing resource allocation under emergency conditions. More recently, Fargetta, Maugeri, and Scrimali [34] expanded the scope of variational inequality methods by formulating a stochastic Nash equilibrium framework to analyze competitive dynamics in medical supply chains, thereby addressing challenges in healthcare logistics.
These developments underscore the dynamic evolution of variational inequality research and its capacity to address complex, real-world problems. In this context, the new Picard-type iterative algorithm proposed in our study builds upon these advances by relaxing constraints on parameter sequences, ultimately providing a more flexible and efficient approach for solving general variational inequalities.
In Section 2, we establish a strong convergence result (Theorem 2) for the proposed algorithm. Unlike Noor’s algorithm, which requires specific conditions on the parametric sequences for convergence, our algorithm eliminates this requirement while maintaining strong convergence properties. Specifically, Theorem 2 refines the convergence criteria in Theorem 1, leading to broader applicability and enhanced theoretical robustness. Furthermore, Theorem 3 demonstrates the equivalence in convergence between the algorithms in (10) and (11), highlighting their inter-relationship and the efficiency of our approach. The introduction of the Collage–Anticollage Theorem 4 within the context of variational inequalities marks a significant innovation, offering a novel perspective on transformations related to the GNVI problem discussed in (6). To the best of our knowledge, this theorem is presented for the first time in this setting. Additionally, Theorems 5 and 6 explore the continuity of solutions to variational inequalities, a topic rarely addressed in the existing literature. These contributions extend the theoretical framework established by Noor [22], offering new insights into general nonlinear variational inequalities. Beyond theoretical advancements, we validate the practical utility of the proposed algorithm by applying it to convex optimization problems and real-world scenarios. Section 3 provides a modification of the algorithm for solving convex minimization problems, supported by numerical examples. In Section 4, we demonstrate the algorithm’s applicability in real-world contexts, including machine learning tasks such as classification and regression. Comparative analysis shows that our algorithm consistently converges to optimal solutions in fewer iterations than the algorithm in (10), highlighting its superior computational efficiency and practical advantages.
The development of the main results in this paper relies on the following lemmas:
Lemma 3
([35]). Let $\{\varphi_{n}^{(i)}\}_{n=0}^{\infty}$, $i=1,2$, be non-negative sequences of real numbers satisfying
$$\varphi_{n+1}^{(1)}\le\mu\varphi_{n}^{(1)}+\varphi_{n}^{(2)}\quad(\forall n\in\mathbb{N}),$$
where $\mu\in[0,1)$ and $\lim_{n\to\infty}\varphi_{n}^{(2)}=0$. Then, $\lim_{n\to\infty}\varphi_{n}^{(1)}=0$.
Lemma 4
([36]). Let $\{\varphi_{n}^{(i)}\}_{n=0}^{\infty}$, $i=1,2,3$, be non-negative real sequences satisfying the inequality
$$\varphi_{n+1}^{(1)}\le\bigl(1-\varphi_{n}^{(3)}\bigr)\varphi_{n}^{(1)}+\varphi_{n}^{(2)}\quad(\forall n\in\mathbb{N}),$$
where $\varphi_{n}^{(3)}\in[0,1]$ for all $n\ge0$, $\sum_{n=1}^{\infty}\varphi_{n}^{(3)}=\infty$, and $\varphi_{n}^{(2)}=o(\varphi_{n}^{(3)})$. Then, $\lim_{n\to\infty}\varphi_{n}^{(1)}=0$.

2. Main Results

Theorem 2.
Let $T:\mathcal{H}\to\mathcal{H}$ be a relaxed $(\gamma,r)$-cocoercive and $\lambda$-Lipschitz operator, $g:\mathcal{H}\to\mathcal{H}$ be a relaxed $(\gamma_{1},r_{1})$-cocoercive and $\lambda_{1}$-Lipschitz operator, and $S:\mathcal{H}\to\mathcal{H}$ be a nonexpansive mapping such that $F(S)\cap\Omega(H,T,g)\neq\emptyset$. Let $\{x_{n}\}_{n=0}^{\infty}$ be the iterative sequence defined by the algorithm in (11) with real sequences $\{b_{n}\}_{n=0}^{\infty},\{c_{n}\}_{n=0}^{\infty}\subset[0,1]$. Assume the following conditions hold
$$\Bigl|\sigma-\frac{r-\gamma\lambda^{2}}{\lambda^{2}}\Bigr|<\frac{\sqrt{(r-\gamma\lambda^{2})^{2}-4\lambda^{2}L(1-L)}}{\lambda^{2}},\qquad\frac{2\sqrt{L(1-L)}}{\lambda}<\Bigl|\gamma-\frac{r}{\lambda^{2}}\Bigr|<\frac{1}{\lambda},\qquad L<\frac{1}{2},\qquad(12)$$
where
$$L=\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}.\qquad(13)$$
Then, the sequence $\{x_{n}\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)\cap\Omega(H,T,g)$ with the following estimate for each $n\in\mathbb{N}$,
$$\|x_{n+1}-s\|\le(2L+\delta)^{2(n+1)}\prod_{k=0}^{n}\bigl[1-b_{k}c_{k}\bigl(1-(2L+\delta)\bigr)\bigr]\|x_{0}-s\|,$$
where
$$\delta=\sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}.\qquad(14)$$
Proof. 
Let $s\in\mathcal{H}$ be a solution, that is, $s\in F(S)\cap\Omega(H,T,g)$. Then,
$$s=S\bigl\{s-g(s)+P_{H}[g(s)-\sigma Ts]\bigr\}=(1-b_{n})S\bigl\{s-g(s)+P_{H}[g(s)-\sigma Ts]\bigr\}+b_{n}S\bigl\{s-g(s)+P_{H}[g(s)-\sigma Ts]\bigr\}=(1-c_{n})s+c_{n}S\bigl\{s-g(s)+P_{H}[g(s)-\sigma Ts]\bigr\}.\qquad(15)$$
Using (11), (15), and the assumption that $P_{H}$ and $S$ are nonexpansive operators, we obtain
$$\begin{aligned}\|x_{n+1}-s\|&=\bigl\|S\{y_{n}-g(y_{n})+P_{H}[g(y_{n})-\sigma Ty_{n}]\}-S\{s-g(s)+P_{H}[g(s)-\sigma Ts]\}\bigr\|\\&\le\|y_{n}-s-(g(y_{n})-g(s))\|+\|g(y_{n})-g(s)-\sigma(Ty_{n}-Ts)\|\\&\le2\|y_{n}-s-(g(y_{n})-g(s))\|+\|y_{n}-s-\sigma(Ty_{n}-Ts)\|.\end{aligned}\qquad(16)$$
Since $T$ is a relaxed $(\gamma,r)$-cocoercive and $\lambda$-Lipschitzian operator,
$$\begin{aligned}\|y_{n}-s-\sigma(Ty_{n}-Ts)\|^{2}&=\|y_{n}-s\|^{2}-2\sigma\langle y_{n}-s,\,Ty_{n}-Ts\rangle+\sigma^{2}\|Ty_{n}-Ts\|^{2}\\&\le\|y_{n}-s\|^{2}+2\sigma\gamma\|Ty_{n}-Ts\|^{2}-2\sigma r\|y_{n}-s\|^{2}+\sigma^{2}\|Ty_{n}-Ts\|^{2}\le\delta^{2}\|y_{n}-s\|^{2},\end{aligned}$$
or equivalently
$$\|y_{n}-s-\sigma(Ty_{n}-Ts)\|\le\delta\|y_{n}-s\|,\qquad(17)$$
where $\delta$ is defined by (14).
Since $g$ is a relaxed $(\gamma_{1},r_{1})$-cocoercive and $\lambda_{1}$-Lipschitzian operator,
$$\|y_{n}-s-(g(y_{n})-g(s))\|\le L\|y_{n}-s\|,\qquad(18)$$
where $L$ is defined by (13).
Combining (16), (17), and (18), we have
$$\|x_{n+1}-s\|\le(2L+\delta)\|y_{n}-s\|,\qquad(19)$$
and from (12) and (14), we know that $2L+\delta<1$ (see Appendix A).
It follows from (11), (15), and the nonexpansivity of the operators $S$ and $P_{H}$ that
$$\begin{aligned}\|y_{n}-s\|&\le(1-b_{n})\bigl\|x_{n}-g(x_{n})+P_{H}[g(x_{n})-\sigma Tx_{n}]-\bigl(s-g(s)+P_{H}[g(s)-\sigma Ts]\bigr)\bigr\|\\&\quad+b_{n}\bigl\|z_{n}-g(z_{n})+P_{H}[g(z_{n})-\sigma Tz_{n}]-\bigl(s-g(s)+P_{H}[g(s)-\sigma Ts]\bigr)\bigr\|\\&\le(1-b_{n})\|x_{n}-s-(g(x_{n})-g(s))\|+(1-b_{n})\|g(x_{n})-g(s)-\sigma(Tx_{n}-Ts)\|\\&\quad+b_{n}\|z_{n}-s-(g(z_{n})-g(s))\|+b_{n}\|g(z_{n})-g(s)-\sigma(Tz_{n}-Ts)\|\\&\le2(1-b_{n})\|x_{n}-s-(g(x_{n})-g(s))\|+(1-b_{n})\|x_{n}-s-\sigma(Tx_{n}-Ts)\|\\&\quad+2b_{n}\|z_{n}-s-(g(z_{n})-g(s))\|+b_{n}\|z_{n}-s-\sigma(Tz_{n}-Ts)\|.\end{aligned}\qquad(20)$$
Using the same arguments as above gives the following estimates
$$\|x_{n}-s-\sigma(Tx_{n}-Ts)\|\le\delta\|x_{n}-s\|,\quad\|x_{n}-s-(g(x_{n})-g(s))\|\le L\|x_{n}-s\|,\quad\|z_{n}-s-\sigma(Tz_{n}-Ts)\|\le\delta\|z_{n}-s\|,\quad\|z_{n}-s-(g(z_{n})-g(s))\|\le L\|z_{n}-s\|,\quad\|z_{n}-s\|\le d_{n}\|x_{n}-s\|,\qquad(21)$$
where $d_{n}=1-c_{n}\bigl(1-(2L+\delta)\bigr)$.
Combining (19)–(21), we obtain
$$\|x_{n+1}-s\|\le(2L+\delta)^{2}\bigl[1-b_{n}c_{n}\bigl(1-(2L+\delta)\bigr)\bigr]\|x_{n}-s\|.\qquad(22)$$
As $b_{n},c_{n}\in[0,1]$ for all $n\in\mathbb{N}$ and $2L+\delta<1$, we have $1-b_{n}c_{n}\bigl(1-(2L+\delta)\bigr)\le1$ for all $n\in\mathbb{N}$. Using this fact in (22), we obtain
$$\|x_{n+1}-s\|\le(2L+\delta)^{2}\|x_{n}-s\|,$$
which implies
$$\|x_{n+1}-s\|\le(2L+\delta)^{2(n+1)}\|x_{0}-s\|.$$
Taking the limit as $n\to\infty$, we conclude that $\lim_{n\to\infty}\|x_{n}-s\|=0$. □
Theorem 3.
Let $\mathcal{H}$, $H$, $T$, $g$, $S$, $L$, and $\delta$ be defined as in Theorem 2, and let the iterative sequences $\{x_{n}^{(1)}\}_{n=0}^{\infty}$ and $\{x_{n}\}_{n=0}^{\infty}$ be generated by (10) and (11), respectively. Assume the conditions in (12) hold, and $\{a_{n}\}_{n=0}^{\infty},\{b_{n}\}_{n=0}^{\infty},\{c_{n}\}_{n=0}^{\infty}\subset[0,1]$. Then, the following assertions are true:
(i) If $\{x_{n}^{(1)}\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)\cap\Omega(H,T,g)$, then $\{\|x_{n}^{(1)}-x_{n}\|\}_{n=0}^{\infty}$ also converges strongly to 0. Moreover, the following estimate holds for all $n\in\mathbb{N}$:
$$\|x_{n+1}^{(1)}-x_{n+1}\|\le(2L+\delta)^{2}\|x_{n}^{(1)}-x_{n}\|+\bigl[1+(1+2L+\delta)^{2}(1-a_{n}b_{n})\bigr]\|x_{n}^{(1)}-s\|.$$
Furthermore, the sequence $\{x_{n}\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)\cap\Omega(H,T,g)$.
(ii) If the sequence $\Bigl\{\dfrac{1-a_{n}b_{n}}{a_{n}\bigl(1-(2L+\delta)\bigr)}\Bigr\}_{n=0}^{\infty}$ is bounded and $\sum_{n=0}^{\infty}a_{n}=\infty$, then the sequence $\{\|x_{n}-x_{n}^{(1)}\|\}_{n=0}^{\infty}$ converges strongly to 0. Additionally, the following estimate holds for all $n\in\mathbb{N}$:
$$\|x_{n+1}-x_{n+1}^{(1)}\|\le\bigl[1-a_{n}\bigl(1-(2L+\delta)\bigr)\bigr]\|x_{n}-x_{n}^{(1)}\|+(1+2L+\delta)^{2}(1-a_{n}b_{n})\|x_{n}-s\|.$$
Moreover, the sequence $\{x_{n}^{(1)}\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)\cap\Omega(H,T,g)$.
Proof. 
(i) Suppose that $\{x_{n}^{(1)}\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)\cap\Omega(H,T,g)$. We aim to show that $\{\|x_{n}^{(1)}-x_{n}\|\}_{n=0}^{\infty}$ converges to 0. Using (1), (2), (4), (10), (11), and (15), we deduce the following inequalities
$$\begin{aligned}\|x_{n+1}^{(1)}-x_{n+1}\|&\le(1-a_{n})\|x_{n}^{(1)}-s\|+(1-a_{n})\bigl\|S\{s-g(s)+P_{H}[g(s)-\sigma Ts]\}-S\{y_{n}-g(y_{n})+P_{H}[g(y_{n})-\sigma Ty_{n}]\}\bigr\|\\&\quad+a_{n}\bigl\|S\{y_{n}^{(1)}-g(y_{n}^{(1)})+P_{H}[g(y_{n}^{(1)})-\sigma Ty_{n}^{(1)}]\}-S\{y_{n}-g(y_{n})+P_{H}[g(y_{n})-\sigma Ty_{n}]\}\bigr\|\\&\le(1-a_{n})\|x_{n}^{(1)}-s\|+2(1-a_{n})\|y_{n}-s-(g(y_{n})-g(s))\|+(1-a_{n})\|y_{n}-s-\sigma(Ty_{n}-Ts)\|\\&\quad+2a_{n}\|y_{n}^{(1)}-y_{n}-(g(y_{n}^{(1)})-g(y_{n}))\|+a_{n}\|y_{n}^{(1)}-y_{n}-\sigma(Ty_{n}^{(1)}-Ty_{n})\|\\&\le(1-a_{n})\|x_{n}^{(1)}-s\|+(1-a_{n})(2L+\delta)\|y_{n}^{(1)}-s\|+(2L+\delta)\|y_{n}^{(1)}-y_{n}\|,\end{aligned}$$
$$\begin{aligned}\|y_{n}^{(1)}-y_{n}\|&\le(1-b_{n})\|x_{n}^{(1)}-s\|+2(1-b_{n})\|x_{n}-s-(g(x_{n})-g(s))\|+(1-b_{n})\|x_{n}-s-\sigma(Tx_{n}-Ts)\|\\&\quad+2b_{n}\|z_{n}^{(1)}-z_{n}-(g(z_{n}^{(1)})-g(z_{n}))\|+b_{n}\|z_{n}^{(1)}-z_{n}-\sigma(Tz_{n}^{(1)}-Tz_{n})\|\\&\le(1-b_{n})(1+2L+\delta)\|x_{n}^{(1)}-s\|+(1-b_{n})(2L+\delta)\|x_{n}^{(1)}-x_{n}\|+b_{n}(2L+\delta)\|z_{n}^{(1)}-z_{n}\|,\end{aligned}$$
as well as
$$\|y_{n}^{(1)}-s\|\le(1-b_{n})\|x_{n}^{(1)}-s\|+b_{n}(2L+\delta)\|z_{n}^{(1)}-s\|,\qquad\|z_{n}^{(1)}-s\|\le\bigl[1-c_{n}\bigl(1-(2L+\delta)\bigr)\bigr]\|x_{n}^{(1)}-s\|,\qquad\|z_{n}^{(1)}-z_{n}\|\le\bigl[1-c_{n}\bigl(1-(2L+\delta)\bigr)\bigr]\|x_{n}^{(1)}-x_{n}\|.$$
Combining these inequalities, we get
$$\begin{aligned}\|x_{n+1}^{(1)}-x_{n+1}\|&\le(2L+\delta)^{2}\bigl[1-b_{n}c_{n}\bigl(1-(2L+\delta)\bigr)\bigr]\|x_{n}^{(1)}-x_{n}\|+(1-a_{n})\|x_{n}^{(1)}-s\|\\&\quad+(2L+\delta)(1-b_{n})\bigl[(1-a_{n})+1+2L+\delta\bigr]\|x_{n}^{(1)}-s\|+(1-a_{n})b_{n}(2L+\delta)^{2}\bigl[1-c_{n}\bigl(1-(2L+\delta)\bigr)\bigr]\|x_{n}^{(1)}-s\|.\end{aligned}\qquad(23)$$
Since $\{a_{n}\}_{n=0}^{\infty},\{b_{n}\}_{n=0}^{\infty},\{c_{n}\}_{n=0}^{\infty}\subset[0,1]$ and $2L+\delta<1$, for all $n\in\mathbb{N}$, we have
$$(2L+\delta)^{2}<1,\quad1-b_{n}c_{n}\bigl(1-(2L+\delta)\bigr)\le1,\quad1-c_{n}\bigl(1-(2L+\delta)\bigr)\le1,\quad(1-a_{n})b_{n}\le1-a_{n}b_{n},\quad1-a_{n}\le1-a_{n}b_{n},\quad1-b_{n}\le1-a_{n}b_{n},\quad1-a_{n}\le1.\qquad(24)$$
By applying the inequalities in (24) to (23), we derive the following result
$$\|x_{n+1}^{(1)}-x_{n+1}\|\le(2L+\delta)^{2}\|x_{n}^{(1)}-x_{n}\|+\bigl[1+(1+2L+\delta)^{2}(1-a_{n}b_{n})\bigr]\|x_{n}^{(1)}-s\|.\qquad(25)$$
Define $\varphi_{n}^{(1)}:=\|x_{n}^{(1)}-x_{n}\|$, $\varphi_{n}^{(2)}:=\bigl[1+(1+2L+\delta)^{2}(1-a_{n}b_{n})\bigr]\|x_{n}^{(1)}-s\|$, and $\mu:=(2L+\delta)^{2}\in[0,1)$, for all $n\in\mathbb{N}$. Given the assumption $\lim_{n\to\infty}\|x_{n}^{(1)}-s\|=0$, it follows that $\lim_{n\to\infty}\varphi_{n}^{(2)}=0$. It is straightforward to verify that (25) satisfies the conditions of Lemma 3. By applying the conclusion of Lemma 3, we obtain $\lim_{n\to\infty}\|x_{n}^{(1)}-x_{n}\|=0$. Furthermore, we note the following inequality for all $n\in\mathbb{N}$,
$$\|x_{n}-s\|\le\|x_{n}^{(1)}-x_{n}\|+\|x_{n}^{(1)}-s\|.$$
Taking the limit as $n\to\infty$, we conclude that $\lim_{n\to\infty}\|x_{n}-s\|=0$, since
$$\lim_{n\to\infty}\|x_{n}^{(1)}-s\|=\lim_{n\to\infty}\|x_{n}^{(1)}-x_{n}\|=0.$$
(ii) Let us assume that the sequence $\Bigl\{\dfrac{1-a_{n}b_{n}}{a_{n}\bigl(1-(2L+\delta)\bigr)}\Bigr\}_{n=0}^{\infty}$ is bounded and $\sum_{n=0}^{\infty}a_{n}=\infty$. By Theorem 2, it follows that $\lim_{n\to\infty}x_{n}=s$. We now demonstrate that the sequence $\{x_{n}^{(1)}\}_{n=0}^{\infty}$ converges strongly to $s$. Utilizing results from (1), (2), (4), (10), (11), and (15), we derive the following inequalities:
$$\|x_{n+1}-x_{n+1}^{(1)}\|\le(1-a_{n})\|x_{n}-x_{n}^{(1)}\|+(1-a_{n})\|x_{n}-s\|+(1-a_{n})(2L+\delta)\|y_{n}-s\|+a_{n}(2L+\delta)\|y_{n}-y_{n}^{(1)}\|,\qquad(26)$$
$$\|y_{n}-y_{n}^{(1)}\|\le(1-b_{n})\|x_{n}-x_{n}^{(1)}\|+(1-b_{n})(1+2L+\delta)\|x_{n}-s\|+b_{n}(2L+\delta)\|z_{n}-z_{n}^{(1)}\|,\qquad(27)$$
$$\|z_{n}-z_{n}^{(1)}\|\le\bigl[1-c_{n}\bigl(1-(2L+\delta)\bigr)\bigr]\|x_{n}-x_{n}^{(1)}\|.\qquad(28)$$
From the proof of Theorem 2, we know that
$$\|y_{n}-s\|\le(2L+\delta)\bigl[1-b_{n}c_{n}\bigl(1-(2L+\delta)\bigr)\bigr]\|x_{n}-s\|.\qquad(29)$$
Combining (26)–(29), we obtain
$$\begin{aligned}\|x_{n+1}-x_{n+1}^{(1)}\|&\le\Bigl\{(1-a_{n})+a_{n}(2L+\delta)(1-b_{n})+a_{n}b_{n}(2L+\delta)^{2}\bigl[1-c_{n}\bigl(1-(2L+\delta)\bigr)\bigr]\Bigr\}\|x_{n}-x_{n}^{(1)}\|\\&\quad+\Bigl\{(1-a_{n})+(1-a_{n})(2L+\delta)^{2}\bigl[1-b_{n}c_{n}\bigl(1-(2L+\delta)\bigr)\bigr]+a_{n}(2L+\delta)(1-b_{n})(1+2L+\delta)\Bigr\}\|x_{n}-s\|.\end{aligned}\qquad(30)$$
Since $a_{n},b_{n},c_{n}\in[0,1]$ and $2L+\delta\in(0,1)$, for all $n\in\mathbb{N}$, we have
$$(2L+\delta)^{2}<2L+\delta,\quad1-b_{n}c_{n}\bigl(1-(2L+\delta)\bigr)\le1,\quad a_{n}b_{n}\le a_{n},\quad1-c_{n}\bigl(1-(2L+\delta)\bigr)\le1.\qquad(31)$$
Applying the inequalities in (31) to (30) gives
$$\|x_{n+1}-x_{n+1}^{(1)}\|\le\bigl[1-a_{n}\bigl(1-(2L+\delta)\bigr)\bigr]\|x_{n}-x_{n}^{(1)}\|+(1+2L+\delta)^{2}(1-a_{n}b_{n})\|x_{n}-s\|.\qquad(32)$$
Now, we define the sequences $\varphi_{n}^{(1)}:=\|x_{n}-x_{n}^{(1)}\|$,
$$\varphi_{n}^{(2)}:=(1+2L+\delta)^{2}(1-a_{n}b_{n})\|x_{n}-s\|,\qquad\varphi_{n}^{(3)}:=a_{n}\bigl(1-(2L+\delta)\bigr)\in(0,1),\qquad q_{n}:=\frac{(1+2L+\delta)^{2}(1-a_{n}b_{n})}{a_{n}\bigl(1-(2L+\delta)\bigr)},$$
for all $n\in\mathbb{N}$. Note that $\sum_{n=0}^{\infty}a_{n}=\infty$ implies $\sum_{n=0}^{\infty}\varphi_{n}^{(3)}=\infty$.
Since the sequence $\{q_{n}\}_{n=0}^{\infty}$ is bounded, there exists $K>0$ such that $|q_{n}|<K$ for all $n\in\mathbb{N}$. For any $\varepsilon>0$, since $\theta_{n}:=\|x_{n}-s\|$ converges to 0 and $\varepsilon/K>0$, there exists $n_{0}\in\mathbb{N}$ such that $\theta_{n}<\varepsilon/K$ for all $n\ge n_{0}$. Consequently, $q_{n}\theta_{n}<\varepsilon$ for all $n\ge n_{0}$, which implies $\lim_{n\to\infty}\varphi_{n}^{(2)}/\varphi_{n}^{(3)}=0$, i.e., $\varphi_{n}^{(2)}=o(\varphi_{n}^{(3)})$. Thus, inequality (32) satisfies the requirements of Lemma 4, and by its conclusion, we deduce that $\lim_{n\to\infty}\|x_{n}-x_{n}^{(1)}\|=0$. Since $\lim_{n\to\infty}\|x_{n}-s\|=0$ and
$$\|x_{n}^{(1)}-s\|\le\|x_{n}-x_{n}^{(1)}\|+\|x_{n}-s\|,$$
it follows that $\lim_{n\to\infty}\|x_{n}^{(1)}-s\|=0$. □
After establishing the strong convergence properties of our proposed algorithm in Theorems 2 and 3, we now present additional results that further illustrate the robustness and practical applicability of our approach. In the following theorems, we first quantify the error estimate between an arbitrary point and the solution via the operator $\Phi$, and then we explore the relationship between $\Phi$ and its approximation $\tilde{\Phi}$.
Specifically, Theorem 4 provides rigorous bounds linking the error $\|\Phi(x)-x\|$ to the distance between any point $x\in\mathcal{H}$ and a solution $x^{*}$, thereby offering insights into the stability of the method. Building on this result, Theorem 5 establishes an upper bound on the distance between the fixed point of the exact operator $\Phi$ and that of its approximation $\tilde{\Phi}$. Finally, Theorem 6 delivers a direct error bound in terms of a prescribed tolerance $\varepsilon$, which is particularly useful for practical implementations.
Theorem 4.
Let $\mathcal{H}$, $H$, $T$, $g$, $S$, $L$, and $\delta$ be as defined in Theorem 2, and suppose the conditions in (12) are satisfied. Then, for any solution $x^{*}\in F(S)\cap\Omega(H,T,g)$ and for any $x\in\mathcal{H}$, the following inequalities hold:
$$\frac{1}{1+(2L+\delta)}\|\Phi(x)-x\|\le\|x-x^{*}\|\le\frac{1}{1-(2L+\delta)}\|\Phi(x)-x\|,\qquad(33)$$
where the operator $\Phi:\mathcal{H}\to\mathcal{H}$ is defined as $\Phi(x)=S\bigl\{x-g(x)+P_{H}[g(x)-\sigma Tx]\bigr\}$.
Proof. 
From equation (9), we know that $\Phi(x^{*})=x^{*}$. If $x=x^{*}$, inequality (33) is trivially satisfied. On the other hand, if $x\neq x^{*}$, we have
$$\begin{aligned}\|x-x^{*}\|&=\|\Phi(x^{*})-x\|\le\|\Phi(x^{*})-\Phi(x)\|+\|\Phi(x)-x\|\\&\le\bigl\|x^{*}-x-\bigl(g(x^{*})-g(x)\bigr)\bigr\|+\bigl\|P_{H}[g(x^{*})-\sigma Tx^{*}]-P_{H}[g(x)-\sigma Tx]\bigr\|+\|\Phi(x)-x\|\\&\le\bigl\|x^{*}-x-\bigl(g(x^{*})-g(x)\bigr)\bigr\|+\bigl\|g(x^{*})-g(x)-\sigma(Tx^{*}-Tx)\bigr\|+\|\Phi(x)-x\|\\&\le2\bigl\|x^{*}-x-\bigl(g(x^{*})-g(x)\bigr)\bigr\|+\bigl\|x^{*}-x-\sigma(Tx^{*}-Tx)\bigr\|+\|\Phi(x)-x\|,\end{aligned}\qquad(34)$$
as well as
$$\bigl\|x^{*}-x-\bigl(g(x^{*})-g(x)\bigr)\bigr\|\le\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}\;\|x^{*}-x\|,\qquad(35)$$
$$\bigl\|x^{*}-x-\sigma(Tx^{*}-Tx)\bigr\|\le\sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}\;\|x^{*}-x\|.\qquad(36)$$
Inserting (35) and (36) into (34), we obtain
$$\|x-x^{*}\|\le\Bigl(2\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}+\sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}\Bigr)\|x-x^{*}\|+\|\Phi(x)-x\|,$$
or equivalently
$$\|x-x^{*}\|\le\frac{\|\Phi(x)-x\|}{1-\Bigl(2\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}+\sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}\Bigr)}.$$
On the other hand, we have
$$\|\Phi(x)-x\|=\|\Phi(x)-x^{*}+x^{*}-x\|\le\|\Phi(x)-\Phi(x^{*})\|+\|x^{*}-x\|=\bigl\|S\{x-g(x)+P_{H}[g(x)-\sigma Tx]\}-S\{x^{*}-g(x^{*})+P_{H}[g(x^{*})-\sigma Tx^{*}]\}\bigr\|+\|x^{*}-x\|.$$
By employing similar arguments as in (34)–(36), we deduce
$$\|\Phi(x)-x\|\le\Bigl(2\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}+\sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}+1\Bigr)\|x-x^{*}\|,$$
or equivalently
$$\|\Phi(x)-x\|\le\bigl(1+2L+\delta\bigr)\|x-x^{*}\|.$$
Combining the bounds derived from (34)–(36), we finally arrive at
$$\frac{1}{1+(2L+\delta)}\|\Phi(x)-x\|\le\|x-x^{*}\|\le\frac{1}{1-(2L+\delta)}\|\Phi(x)-x\|,$$
which completes the proof. □
Transitioning from error estimates for the exact operator, Theorem 5 shifts the focus to the interplay between the original operator Φ and its approximation Φ ˜ . This theorem establishes an upper bound for the distance between their respective fixed points, thus providing a measure of how closely the approximation tracks the behavior of the exact operator. The theorem is stated as follows:
Theorem 5.
Let $T$, $g$, $S$, $\Phi$, $L$, and $\delta$ be as defined in Theorem 4. Assume that $\tilde{\Phi}:\mathcal{H}\to\mathcal{H}$ is a map with a fixed point $\tilde{x}\in\mathcal{H}$. Further, suppose the conditions in (12) are satisfied. Then, for a solution $x^{*}\in F(S)\cap\Omega(H,T,g)$, the following holds
$$\|x^{*}-\tilde{x}\|\le\frac{1}{1-(2L+\delta)}\sup_{x\in\mathcal{H}}\|\Phi(x)-\tilde{\Phi}(x)\|.\qquad(37)$$
Proof. 
By (9), we know that $\Phi(x^{*})=x^{*}$. If $x^{*}=\tilde{x}$, then inequality (37) is directly satisfied. If $x^{*}\neq\tilde{x}$, then using the same arguments as in the proof of Theorem 4, we obtain
$$\begin{aligned}\|x^{*}-\tilde{x}\|&=\|\Phi(x^{*})-\tilde{\Phi}(\tilde{x})\|\le\|\Phi(x^{*})-\Phi(\tilde{x})\|+\|\Phi(\tilde{x})-\tilde{\Phi}(\tilde{x})\|\\&\le\bigl\|S\{x^{*}-g(x^{*})+P_{H}[g(x^{*})-\sigma Tx^{*}]\}-S\{\tilde{x}-g(\tilde{x})+P_{H}[g(\tilde{x})-\sigma T\tilde{x}]\}\bigr\|+\sup_{x\in\mathcal{H}}\|\Phi(x)-\tilde{\Phi}(x)\|\\&\le2\bigl\|x^{*}-\tilde{x}-\bigl(g(x^{*})-g(\tilde{x})\bigr)\bigr\|+\bigl\|x^{*}-\tilde{x}-\sigma(Tx^{*}-T\tilde{x})\bigr\|+\sup_{x\in\mathcal{H}}\|\Phi(x)-\tilde{\Phi}(x)\|,\end{aligned}\qquad(38)$$
as well as
$$\bigl\|x^{*}-\tilde{x}-\bigl(g(x^{*})-g(\tilde{x})\bigr)\bigr\|\le\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}\;\|x^{*}-\tilde{x}\|,\qquad(39)$$
$$\bigl\|x^{*}-\tilde{x}-\sigma(Tx^{*}-T\tilde{x})\bigr\|\le\sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}\;\|x^{*}-\tilde{x}\|.\qquad(40)$$
Combining (38)–(40), we derive
$$\|x^{*}-\tilde{x}\|\le\Bigl(2\sqrt{1+2\gamma_{1}\lambda_{1}^{2}-2r_{1}+\lambda_{1}^{2}}+\sqrt{1+2\sigma\gamma\lambda^{2}-2\sigma r+\sigma^{2}\lambda^{2}}\Bigr)\|x^{*}-\tilde{x}\|+\sup_{x\in\mathcal{H}}\|\Phi(x)-\tilde{\Phi}(x)\|.$$
Simplifying further, this yields
$$\|x^{*}-\tilde{x}\|\le\frac{1}{1-(2L+\delta)}\sup_{x\in\mathcal{H}}\|\Phi(x)-\tilde{\Phi}(x)\|,$$
which completes the proof. □
Finally, Theorem 6 extends this analysis by providing a direct error bound in terms of a prescribed tolerance ε . This result is particularly valuable for practical implementations, as it offers a clear metric for the performance of the approximating operator in approximating the fixed point of Φ . The theorem is formulated as follows:
Theorem 6.
Let $T$, $g$, $S$, $\Phi$, $\tilde{\Phi}$, $L$, and $\delta$ be as defined in Theorem 5. Let $\tilde{\Phi}:\mathcal{H}\to\mathcal{H}$ be a map with a fixed point $\tilde{x}\in\mathcal{H}$. Suppose the conditions stated in (12) hold. Additionally, assume that
$$\sup_{x\in\mathcal{H}}\|\Phi(x)-\tilde{\Phi}(x)\|\le\varepsilon,\qquad(41)$$
for some fixed $\varepsilon>0$. Then, for a fixed point $\tilde{x}\in\mathcal{H}$ such that $\tilde{\Phi}(\tilde{x})=\tilde{x}$, the following inequality holds
$$\|\tilde{x}-\Phi(\tilde{x})\|\le\frac{1+2L+\delta}{1-(2L+\delta)}\,\varepsilon.$$
Proof. 
Let $\Phi(x^{*})=x^{*}$. From (38)–(40), we have
$$\|\Phi(x^{*})-\Phi(\tilde{x})\|\le(2L+\delta)\|x^{*}-\tilde{x}\|.$$
Then, using this inequality, as well as (37) and (41), we obtain
$$\|\tilde{x}-\Phi(\tilde{x})\|\le\|\tilde{x}-x^{*}\|+\|x^{*}-\Phi(\tilde{x})\|=\|\tilde{x}-x^{*}\|+\|\Phi(x^{*})-\Phi(\tilde{x})\|\le\frac{1+2L+\delta}{1-(2L+\delta)}\,\varepsilon,$$
which was to be proven. □

3. An Application to the Convex Minimization Problem

Let $\mathcal{H}$ be a Hilbert space, $H$ be a closed and convex subset of $\mathcal{H}$, and $f:H\to\mathbb{R}$ be a convex function. The problem of finding the minimizers of $f$ is referred to as the convex minimization problem, which is formulated as follows
$$\min_{x\in H}f(x).\qquad(42)$$
Denote the set of solutions to the minimization problem (42) by $\operatorname*{argmin}_{x\in H}f(x)$. The minimization problem (42) can equivalently be expressed as a fixed-point problem:
A point $x^{*}$ is a solution to the minimization problem if and only if $P_{H}(I-\sigma\nabla f)x^{*}=x^{*}$, where $P_{H}$ is the metric projection onto $H$, $\nabla f$ denotes the gradient of the Fréchet differentiable function $f$, and $\sigma>0$ is a constant.
Moreover, the minimization problem (42) can also be reformulated as a variational inequality problem:
A point $x^{*}$ is a solution to the minimization problem if and only if $x^{*}$ satisfies the variational inequality $\langle\nabla f(x^{*}),\,v-x^{*}\rangle\ge0$ for all $v\in H$.
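The two characterizations above are connected through Lemma 1: taking $z=x^{*}-\sigma\nabla f(x^{*})$ there gives, in outline (a sketch of the standard argument, not a new result),

```latex
% Equivalence of the projection fixed point and the variational inequality,
% obtained from Lemma 1 with z = x^* - sigma * grad f(x^*).
\begin{aligned}
x^{*}=P_{H}\bigl[x^{*}-\sigma\nabla f(x^{*})\bigr]
&\iff \bigl\langle x^{*}-\bigl(x^{*}-\sigma\nabla f(x^{*})\bigr),\, v-x^{*}\bigr\rangle\ge 0
      \quad (\forall v\in H)\\
&\iff \sigma\,\bigl\langle \nabla f(x^{*}),\, v-x^{*}\bigr\rangle\ge 0
      \quad (\forall v\in H)\\
&\iff \bigl\langle \nabla f(x^{*}),\, v-x^{*}\bigr\rangle\ge 0
      \quad (\forall v\in H),\qquad\text{since }\sigma>0.
\end{aligned}
```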
Now, let $S:\mathcal{H}\to\mathcal{H}$ be a nonexpansive operator, and let $F(S)$ represent the set of fixed points of $S$. If $x^{*}\in F(S)\cap\Omega(H,\nabla f,I)$, i.e., $x^{*}$ is both a solution to the problem (42) and a fixed point of $S$, then for any $\sigma>0$ the following holds:
$$P_{H}\bigl[x^{*}-\sigma\nabla f(x^{*})\bigr]=x^{*}=Sx^{*}=SP_{H}\bigl[x^{*}-\sigma\nabla f(x^{*})\bigr].$$
Based on these observations, if we set $g=I$ (the identity operator) and $T=\nabla f$ in the iterative algorithm (11), we derive the following algorithm, which converges to a point that is both a solution to the minimization problem (42) and a fixed point of $S$:
$$\begin{aligned}x_{0}&\in H,\\x_{n+1}&=S\bigl[P_{H}\bigl(y_{n}-\sigma\nabla f(y_{n})\bigr)\bigr],\\y_{n}&=(1-b_{n})S\bigl[P_{H}\bigl(x_{n}-\sigma\nabla f(x_{n})\bigr)\bigr]+b_{n}S\bigl[P_{H}\bigl(z_{n}-\sigma\nabla f(z_{n})\bigr)\bigr],\\z_{n}&=(1-c_{n})x_{n}+c_{n}S\bigl[P_{H}\bigl(x_{n}-\sigma\nabla f(x_{n})\bigr)\bigr],\end{aligned}\qquad(43)$$
where $\{b_{n}\}_{n=0}^{\infty},\{c_{n}\}_{n=0}^{\infty}\subset[0,1]$.
Theorem 7.
Let $S$, $L$, and $\delta$ be defined as in Theorem 2, with $F(S)\neq\emptyset$. Let $f:H\to\mathbb{R}$ be a convex mapping such that its gradient $\nabla f$ is a relaxed $(\gamma,r)$-cocoercive and $\lambda$-Lipschitz mapping from $H$ to $\mathcal{H}$. Assume that $F(S)\cap\Omega(H,\nabla f,I)\neq\emptyset$. Define the iterative sequence $\{x_{n}\}_{n=0}^{\infty}$ by the algorithm in (43) with real sequences $\{b_{n}\}_{n=0}^{\infty},\{c_{n}\}_{n=0}^{\infty}\subset[0,1]$. In addition to the condition (12) in Theorem 2, assume the following condition is satisfied
$$r_{1}-\gamma_{1}\le1.\qquad(44)$$
Then, the sequence $\{x_{n}\}_{n=0}^{\infty}$ converges strongly to $s\in F(S)$, and the following estimate holds
$$\|x_{n+1}-s\|\le\delta^{2(n+1)}\prod_{k=0}^{n}\bigl[1-b_{k}c_{k}(1-\delta)\bigr]\|x_{0}-s\|\quad(\forall n\in\mathbb{N}).$$
Proof. 
Set $g=I$ and $T=\nabla f$ in Theorem 2. The mapping $g$ is 1-Lipschitzian and relaxed $(\gamma_{1},r_{1})$-cocoercive for every $\gamma_{1},r_{1}>0$ satisfying the condition (44). Consequently, by Theorem 2, it follows that $\{x_{n}\}_{n=0}^{\infty}$ converges strongly to some $s\in F(S)\cap\Omega(H,\nabla f,I)=F(S)\cap\operatorname*{argmin}_{x\in H}f(x)$. □
Example 2.
Let
$$\mathcal{H}=\Bigl\{x=\{x_{k}\}_{k=0}^{\infty}:\Bigl(\sum_{k=0}^{\infty}x_{k}^{2}\Bigr)^{1/2}<\infty\ \text{and}\ x_{k}\in\mathbb{R}\ \text{for all}\ k\in\mathbb{N}_{0}\Bigr\}$$
denote a real Hilbert space equipped with the norm $\|x\|=\langle x,x\rangle^{1/2}$, where $\langle x,y\rangle=\sum_{k=0}^{\infty}x_{k}y_{k}$ for $x,y\in\mathcal{H}$. Additionally, the set $H=\{x=\{x_{k}\}_{k=0}^{\infty}\in\mathcal{H}:\|x\|\le1\}$ is a closed and convex subset of $\mathcal{H}$.
Now, we consider a function $f:H\to\mathbb{R}$ defined by $f(x)=\|x^{2}\|^{2}+\|x\|^{2}$, where $x^{2}=\{x_{k}^{2}\}_{k=0}^{\infty}$. The solution to the minimization problem (42) for $f$ is the zero vector $\mathbf{0}=\{0\}_{k=0}^{\infty}$.
From [37] (Theorem 2.4.1, p. 167), the Fréchet derivative of $f$ at a point $x$ is $\nabla f(x)=4x^{3}+2x=\{4x_{k}^{3}\}_{k=0}^{\infty}+\{2x_{k}\}_{k=0}^{\infty}$, which is unique. For $x_{k},y_{k}\in[-1,1]$, we have
$$\bigl(4x_{k}^{3}-4y_{k}^{3}+2(x_{k}-y_{k})\bigr)(x_{k}-y_{k})\ge-\frac{1}{784}\Bigl[\bigl(4(x_{k}^{2}+x_{k}y_{k}+y_{k}^{2})+2\bigr)(x_{k}-y_{k})\Bigr]^{2}+2(x_{k}-y_{k})^{2},$$
from which we deduce
$$\langle\nabla f(x)-\nabla f(y),\,x-y\rangle=\sum_{k=0}^{\infty}\bigl(4x_{k}^{3}-4y_{k}^{3}+2(x_{k}-y_{k})\bigr)(x_{k}-y_{k})\ge-\frac{1}{784}\|\nabla f(x)-\nabla f(y)\|^{2}+2\|x-y\|^{2}.$$
This means that $\nabla f$ is a relaxed $(1/784,\,2)$-cocoercive operator. Additionally, since
$$\|\nabla f(x)-\nabla f(y)\|^{2}=\sum_{k=0}^{\infty}\bigl(4x_{k}^{3}-4y_{k}^{3}+2(x_{k}-y_{k})\bigr)^{2}\le14^{2}\|x-y\|^{2},$$
$\nabla f$ is a 14-Lipschitz mapping.
Let $S:H\to H$ be defined by $Sx=\sin x=\{\sin x_{k}\}_{k=0}^{\infty}$. The operator $S$ is nonexpansive since
$$\|Sx-Sy\|^{2}=\sum_{k=0}^{\infty}\Bigl(2\sin\frac{x_{k}-y_{k}}{2}\cos\frac{x_{k}+y_{k}}{2}\Bigr)^{2}\le\sum_{k=0}^{\infty}(x_{k}-y_{k})^{2}=\|x-y\|^{2}.$$
Moreover, $F(S)=\{\mathbf{0}\}$ with $\mathbf{0}=\{0\}_{k=0}^{\infty}$. Based on assumptions (12) and (44), we set $\sigma=1/392$, $\gamma_{1}=1/2$, and $r_{1}=1.4999995$. Consequently, we calculate $\delta=0.99616612$ and $L=0.001$, which yields $2L+\delta=0.99816612<1$. It is evident that these parameter choices satisfy conditions (12) and (44).
Next, let $a_{n}=b_{n}=c_{n}=1/(n+1)$ for all $n$. To ensure clarity, we denote a sequence of elements of the Hilbert space $\mathcal{H}$ by $\{x_{n}\}_{n=0}^{\infty}$, where $x_{n}=(x_{n,0},x_{n,1},x_{n,2},\ldots)\in\mathcal{H}$. Under these notations, the iterative algorithms defined in (43) and (10) are reformulated as follows:
$$\begin{aligned}x_{0}&=\{x_{0,k}\}_{k=0}^{\infty}\in\mathcal{H},\\x_{n+1,k}&=\sin\Bigl(P_{H}\Bigl[\frac{390\,y_{n,k}-4(y_{n,k})^{3}}{392}\Bigr]\Bigr),\\y_{n,k}&=\Bigl(1-\frac{1}{n+1}\Bigr)\sin\Bigl(P_{H}\Bigl[\frac{390\,x_{n,k}-4(x_{n,k})^{3}}{392}\Bigr]\Bigr)+\frac{1}{n+1}\sin\Bigl(P_{H}\Bigl[\frac{390\,z_{n,k}-4(z_{n,k})^{3}}{392}\Bigr]\Bigr),\\z_{n,k}&=\Bigl(1-\frac{1}{n+1}\Bigr)x_{n,k}+\frac{1}{n+1}\sin\Bigl(P_{H}\Bigl[\frac{390\,x_{n,k}-4(x_{n,k})^{3}}{392}\Bigr]\Bigr),\end{aligned}\qquad(45)$$
and
$$\begin{aligned}x_{0}&=\{x_{0,k}\}_{k=0}^{\infty}\in\mathcal{H},\\x_{n+1,k}&=\Bigl(1-\frac{1}{n+1}\Bigr)x_{n,k}+\frac{1}{n+1}\sin\Bigl(P_{H}\Bigl[\frac{390\,y_{n,k}-4(y_{n,k})^{3}}{392}\Bigr]\Bigr),\\y_{n,k}&=\Bigl(1-\frac{1}{n+1}\Bigr)x_{n,k}+\frac{1}{n+1}\sin\Bigl(P_{H}\Bigl[\frac{390\,z_{n,k}-4(z_{n,k})^{3}}{392}\Bigr]\Bigr),\\z_{n,k}&=\Bigl(1-\frac{1}{n+1}\Bigr)x_{n,k}+\frac{1}{n+1}\sin\Bigl(P_{H}\Bigl[\frac{390\,x_{n,k}-4(x_{n,k})^{3}}{392}\Bigr]\Bigr),\end{aligned}\qquad(46)$$
where $P_{H}:\mathcal{H}\to H$ is defined by $P_{H}x=x$ when $x\in H$, and $P_{H}x=x/\|x\|$ when $x\notin H$.
Let the initial point for both iterative processes be the sequence $x_{0}=\{1/10^{k+1}\}_{k=0}^{\infty}\in\mathcal{H}$. From Table 1 and Table 2, as well as from Figure 1, it is evident that both algorithms (45) and (46) converge strongly to the point $\mathbf{0}=(0,0,0,\ldots)$. Furthermore, the algorithm in (45) exhibits faster convergence compared to the algorithm in (46).
As a prototype, consider the mappings $\Phi,\tilde{\Phi}:\mathcal{H}\to\mathcal{H}$ defined as
$$\Phi(x)=\Bigl\{\sin\Bigl(P_{H}\Bigl[\frac{390\,x_{k}-4(x_{k})^{3}}{392}\Bigr]\Bigr)\Bigr\}_{k=0}^{\infty}$$
and $\tilde{\Phi}(x)=\{x_{k}^{3}\}_{k=0}^{\infty}=(x_{0}^{3},x_{1}^{3},x_{2}^{3},x_{3}^{3},\ldots)$. With these definitions, the results of Theorems 4, 5, and 6 can be straightforwardly verified. All computations in this example were performed using Wolfram Mathematica 14.2.
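A truncated re-implementation of algorithm (45) for this example is straightforward. The Python sketch below is only an illustration (the authors' computations were performed in Wolfram Mathematica 14.2); it keeps the first m coordinates of the $\ell^{2}$ sequence, where the truncation length m and the iteration count are arbitrary choices made for the sketch.

```python
import numpy as np

m = 50                                   # truncation of the l^2 sequence (illustrative choice)
sigma = 1.0 / 392.0

def grad_f(x):
    # gradient of f(x) = ||x^2||^2 + ||x||^2, coordinate-wise 4x^3 + 2x
    return 4.0 * x**3 + 2.0 * x

def proj_H(x):
    # projection onto the closed unit ball of l^2
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def S(x):
    return np.sin(x)

def step(u):
    # u -> S(P_H[u - sigma * grad f(u)]), i.e. sin(P_H[(390u - 4u^3)/392])
    return S(proj_H(u - sigma * grad_f(u)))

x = np.array([10.0 ** -(k + 1) for k in range(m)])   # x_0 = {1/10^{k+1}}
for n in range(2000):
    c = b = 1.0 / (n + 1)
    z = (1 - c) * x + c * step(x)
    y = (1 - b) * step(x) + b * step(z)
    x = step(y)                                      # algorithm (45)
print(np.linalg.norm(x))                             # should be close to 0
```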

4. Numerical Experiments

In this section, we adapt and apply the iterative algorithm (11) within the context of machine learning to demonstrate the practical significance of the theoretical results derived in this study. By doing so, we highlight the real-world applicability of the proposed methods beyond their theoretical foundations. Furthermore, we compare the performance of algorithm (11) with algorithm (10), providing additional support for the validity of the theorems presented in previous sections.
Our focus is on the framework of loss minimization in machine learning, employing two novel projected gradient algorithms to solve related optimization problems. Specifically, we consider a regression/classification setup characterized by a dataset consisting of m samples and d attributes, represented as X R m × d , with corresponding outcomes (labels) Y R m . The optimization problem is formulated as follows:
$$\min F(w)=\min_{w\in\mathbb{R}^{d}}\frac{1}{2}\|Xw-Y\|_{2}^{2}.$$
Using the $\ell_{1}$-projection operator $P_{H}$ (onto the positive quadrant), $S=I$, $w\in\mathbb{R}^{d}$, $g=I$, and $T=\nabla F$, we define two iterative algorithms:
$$\begin{aligned}w_{0}&\in H,\\w_{n+1}&=P_{H}\bigl[u_{n}-\sigma\nabla F(u_{n})\bigr],\\u_{n}&=(1-b_{n})P_{H}\bigl[w_{n}-\sigma\nabla F(w_{n})\bigr]+b_{n}P_{H}\bigl[v_{n}-\sigma\nabla F(v_{n})\bigr],\\v_{n}&=(1-c_{n})w_{n}+c_{n}P_{H}\bigl[w_{n}-\sigma\nabla F(w_{n})\bigr],\end{aligned}\qquad(47)$$
and
$$\begin{aligned}w_{0}&\in H,\\w_{n+1}&=(1-a_{n})w_{n}+a_{n}P_{H}\bigl[u_{n}-\sigma\nabla F(u_{n})\bigr],\\u_{n}&=(1-b_{n})w_{n}+b_{n}P_{H}\bigl[v_{n}-\sigma\nabla F(v_{n})\bigr],\\v_{n}&=(1-c_{n})w_{n}+c_{n}P_{H}\bigl[w_{n}-\sigma\nabla F(w_{n})\bigr],\end{aligned}\qquad(48)$$
where $\{a_{n}\}_{n=0}^{\infty},\{b_{n}\}_{n=0}^{\infty},\{c_{n}\}_{n=0}^{\infty}\subset[0,1]$. To compute the optimal value of the step size $\sigma$, a backtracking algorithm is employed. All numerical implementations and simulations were carried out using Matlab, Ver. R2023b.
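For illustration, a minimal Python sketch of algorithm (47) for the least-squares objective is given below, with projection onto the positive quadrant and a simple backtracking rule for σ; the backtracking parameters, the zero initial point, and the stopping rule are assumptions made for this sketch and need not match the exact Matlab settings used in the reported experiments.

```python
import numpy as np

def F(w, X, Y):
    r = X @ w - Y
    return 0.5 * float(r @ r)

def grad_F(w, X, Y):
    return X.T @ (X @ w - Y)

def proj_pos(w):
    # projection onto the positive quadrant {w : w_i >= 0}
    return np.maximum(w, 0.0)

def backtrack_sigma(w, X, Y, sigma=1.0, beta=0.5, max_tries=50):
    # shrink sigma until the projected gradient step does not increase F (assumed rule)
    g = grad_F(w, X, Y)
    for _ in range(max_tries):
        if F(proj_pos(w - sigma * g), X, Y) <= F(w, X, Y):
            return sigma
        sigma *= beta
    return sigma

def algorithm_47(X, Y, b, c, n_iter=1000, tol=1e-5):
    # b, c: sequences in [0, 1] with length >= n_iter
    w = proj_pos(np.zeros(X.shape[1]))
    step = lambda u, s: proj_pos(u - s * grad_F(u, X, Y))
    for n in range(n_iter):
        sigma = backtrack_sigma(w, X, Y)
        v = (1 - c[n]) * w + c[n] * step(w, sigma)
        u = (1 - b[n]) * step(w, sigma) + b[n] * step(v, sigma)
        w_new = step(u, sigma)
        if abs(F(w_new, X, Y) - F(w, X, Y)) < tol:   # tolerance on successive function values
            return w_new
        w = w_new
    return w
```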
The real-world datasets used in this study are:
+ Aligned Dataset (in Swarm Behavior): Swarm behavior refers to the collective dynamics observed in groups of entities such as birds, insects (e.g., ants), fish, or animals moving cohesively in large masses. These entities exhibit synchronized motion at the same speed and direction while avoiding mutual interference. The Aligned dataset comprises pre-classified data relevant to swarm behavior, including 24,017 instances with 2400 attributes (see https://archive.ics.uci.edu/ml/index.php (accessed on 14 February 2025)).
+ COVID-19 Dataset: COVID-19, an ongoing viral epidemic, primarily causes mild to moderate respiratory infections but can lead to severe complications, particularly in elderly individuals and those with underlying conditions such as cardiovascular disease, diabetes, chronic respiratory illnesses, and cancer. The dataset is a digitized collection of patient records detailing symptoms, medical history, and risk classifications. It is designed to facilitate predictive modeling for patient risk assessment, resource allocation, and medical device planning. This dataset includes 1,048,576 instances with 21 attributes (see https://datos.gob.mx/busca/dataset/informacion-referente-a-casos-covid-19-en-mexico (accessed on 14 February 2025)).
+ Predict Diabetes Dataset: Provided by the National Institute of Diabetes and Digestive and Kidney Diseases (USA), this dataset contains diagnostic metrics for determining the presence of diabetes. It consists of 768 instances with 9 attributes, enabling the development of predictive models for diabetes diagnosis (see https://www.kaggle.com/datasets (accessed on 14 February 2025)).
+ Sobar Dataset: The Sobar dataset focuses on factors related to cervical cancer prevention and management. It includes both personal and social determinants, such as perception, motivation, empowerment, social support, norms, attitudes, and behaviors. The dataset comprises 72 instances with 20 attributes (see https://archive.ics.uci.edu/ml/index.php (accessed on 14 February 2025)).
The methodology for dataset analysis and model evaluation was carried out as follows:
All datasets were split into training ( 60 % ) and testing ( 40 % ) subsets. During the analysis, we set the tolerance value (i.e., the difference between two successive function values) to 10 5 and capped the maximum number of iterations at 10 5 . To evaluate the performance of algorithms on these datasets, we recorded the following metrics:
  • Function values F ( w n ) ;
  • The norm of the difference between the optimal function value and the function values at each iteration, i.e., $\|F(w_{n})-F(w^{*})\|$;
  • Computation times (in seconds);
  • Prediction and test accuracies, measured using root mean square error (rMSE).
The results and observations are as follows:
  • Function Values: In Figure 2, the function values F ( w n ) for the evaluated algorithms are presented.
  • Convergence Analysis: Figure 3 demonstrates the convergence performance of the algorithms in terms of $\|F(w_{n})-F(w^{*})\|$.
  • Prediction Accuracy: Figure 4 showcases the prediction accuracy (rMSE) achieved by the algorithms during the testing phase.
The results, as illustrated in Figure 2, Figure 3 and Figure 4 and summarized in Table 3, clearly indicate that algorithm (47) outperforms algorithm (48) in terms of efficiency and accuracy.
Table 3 clearly demonstrates that algorithm (47) yields significantly better results than algorithm (48) across various datasets. In terms of the number of iterations, (47) converges in far fewer steps (for example, 135 versus 2559 for the Aligned dataset and 116 versus 10,480 for the Diabetes dataset) and achieves the same or lower minimum F values, resulting in superior outcomes. Moreover, (47) shows slightly lower training errors (rMse) and, in most cases—with the exception of COVID-19, where (48) achieves a marginally better test error—comparable or improved test errors (rMse2). Most importantly, the training time for (47) is significantly shorter across all datasets (for instance, 6.28 s versus 118.14 s for the Aligned dataset and 0.022 s versus 2.83 s for the Diabetes dataset), which confers an advantage in computational efficiency. Overall, these results demonstrate that algorithm (47) not only converges faster but also delivers better performance in terms of both accuracy and computational cost compared to algorithm (48).

5. Conclusions

This study presents the development of a novel Picard-S hybrid iterative algorithm designed to address general variational inequalities and nonexpansive mappings within real Hilbert spaces. By relaxing the stringent constraints traditionally imposed on parametric sequences, the proposed algorithm achieves enhanced flexibility and broader applicability while retaining its strong convergence properties. This advancement not only bridges gaps in the existing theoretical framework but also establishes a robust equivalence between the new method and a previously established algorithm, demonstrating its consistency and efficacy. One of the key contributions of this work is the integration of the Collage–Anticollage Theorem, which provides an innovative perspective on transformations associated with general nonlinear variational inequalities (GNVI). This theorem, explored for the first time in this context, enriches the theoretical toolkit for analyzing and solving variational inequalities. The study also delves into the continuity properties of solutions to variational inequalities, addressing a rarely discussed yet crucial aspect of these problems, thereby offering a more holistic approach to their resolution. Numerical experiments conducted as part of this research validate the proposed algorithm’s superior performance. In comparison to an existing algorithm, the new algorithm consistently converges to optimal solutions with fewer iterations, underscoring its computational efficiency and practical advantages. Applications in areas such as convex optimization and machine learning further highlight its versatility. For example, the algorithm has shown promise in solving real-world problems related to classification, regression, and large-scale optimization tasks, solidifying its relevance in both theoretical and applied domains.

Author Contributions

Conceptualization, M.E., F.G. and G.V.M.; data curation, E.H. and M.E.; methodology, M.E., F.G. and G.V.M.; formal analysis, E.H., M.E., F.G. and G.V.M.; investigation, E.H., M.E., F.G. and G.V.M.; resources, E.H., M.E., F.G. and G.V.M.; writing—original draft preparation, M.E. and F.G.; writing—review and editing, E.H., M.E., F.G. and G.V.M.; visualization, E.H., M.E., F.G. and G.V.M.; supervision, F.G., M.E. and G.V.M.; project administration, G.V.M.; funding acquisition, E.H., M.E. and F.G. All authors have read and agreed to the published version of the manuscript.

Funding

The authors (E.H., M.E., and F.G.) acknowledge that their contribution to this work was partially supported by the Adiyaman University Scientific Research Projects Unit under Project No. FEFMAP/2025-0001, titled “A New Preconditional Forward-Backward Algorithm for Monotone Operators: Convergence Analysis and Applications”. The work of G.V.M. was supported in part by the Serbian Academy of Sciences and Arts (Φ-96).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Let $L$ and $\delta$ be given by (13) and (14), respectively (see Theorem 2), and let $q=\dfrac{r-\gamma\lambda^{2}}{\lambda^{2}}=\dfrac{r}{\lambda^{2}}-\gamma$. Then, the conditions (12) and (14) give
$$|\sigma-q|<\sqrt{q^{2}-\frac{4L(1-L)}{\lambda^{2}}},\qquad(A1)$$
$$\frac{2\sqrt{L(1-L)}}{\lambda}<|q|<\frac{1}{\lambda},\qquad L<\frac{1}{2},\qquad(A2)$$
and
$$\delta^{2}=1-2\sigma\bigl(r-\gamma\lambda^{2}\bigr)+\sigma^{2}\lambda^{2}=1-2\sigma\lambda^{2}q+\sigma^{2}\lambda^{2}=1+\lambda^{2}\bigl[(\sigma-q)^{2}-q^{2}\bigr],$$
respectively. The conditions (A2) ensure that $|\sigma-q|$ and $\delta$ are well defined.
Using (A1), the last expression for $\delta^{2}$ gives
$$\delta^{2}<1-4L(1-L)=(1-2L)^{2},$$
i.e., $2L+\delta<1$ under the condition $L<1/2$.
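The bound is easy to verify numerically for the parameter values used in Example 2 ($\gamma=1/784$, $r=2$, $\lambda=14$, $\sigma=1/392$, $\gamma_{1}=1/2$, $r_{1}=1.4999995$, $\lambda_{1}=1$); the following Python snippet is included only as an illustrative check.

```python
import math

gamma, r, lam, sigma = 1/784, 2.0, 14.0, 1/392       # parameters from Example 2
gamma1, r1, lam1 = 0.5, 1.4999995, 1.0

L = math.sqrt(1 + 2*gamma1*lam1**2 - 2*r1 + lam1**2)                        # (13)
delta = math.sqrt(1 + 2*sigma*gamma*lam**2 - 2*sigma*r + sigma**2*lam**2)   # (14)

q = (r - gamma*lam**2) / lam**2
print(abs(sigma - q) < math.sqrt(q**2 - 4*L*(1 - L)/lam**2))                # (A1)
print(2/lam*math.sqrt(L*(1 - L)) < abs(q) < 1/lam and L < 0.5)              # (A2)
print(L, delta, 2*L + delta)   # expect approx. 0.001, 0.99616612, 0.99816612 < 1
```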

References

  1. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 1964, 258, 4413–4416. [Google Scholar]
  2. Lions, J.; Stampacchia, G. Variational inequalities. Commun. Pure Appl. Math. 1967, 20, 493–519. [Google Scholar] [CrossRef]
  3. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Academic Press: New York, NY, USA, 1980. [Google Scholar]
  4. Glowinski, R.; Lions, J.L.; Trémolières, R. Numerical Analysis of Variational Inequalities; North-Holland: Amsterdam, The Netherlands, 1981. [Google Scholar]
  5. Giannessi, F.; Maugeri, A. (Eds.) Variational Inequalities and Network Equilibrium Problems; Springer: New York, NY, USA, 1995. [Google Scholar]
  6. Atalan, Y.; Hacıoğlu, E.; Ertürk, M.; Gürsoy, F.; Milovanović, G.V. Novel algorithms based on forward-backward splitting technique: Effective methods for regression and classification. J. Glob. Optim. 2024, 90, 869–890. [Google Scholar] [CrossRef]
  7. Gürsoy, F.; Hacıoğlu, E.; Karakaya, V.; Milovanović, G.V.; Uddin, I. Variational inequality problem involving multivalued nonexpansive mapping in CAT(0) Spaces. Results Math. 2022, 77, 131. [Google Scholar] [CrossRef]
  8. Keten Çopur, A.; Hacıoğlu, E.; Gürsoy, F.; Ertürk, M. An efficient inertial type iterative algorithm to approximate the solutions of quasi variational inequalities in real Hilbert spaces. J. Sci. Comput. 2021, 89, 50. [Google Scholar] [CrossRef]
  9. Gürsoy, F.; Ertürk, M.; Abbas, M. A Picard-type iterative algorithm for general variational inequalities and nonexpansive mappings. Numer. Algorithms 2020, 83, 867–883. [Google Scholar] [CrossRef]
  10. Atalan, Y. On a new fixed point iterative algorithm for general variational inequalities. J. Nonlinear Convex Anal. 2019, 20, 2371–2386. [Google Scholar]
  11. Maldar, S. Iterative algorithms of generalized nonexpansive mappings and monotone operators with application to convex minimization problem. Symmetry 2022, 14, 1841–1868. [Google Scholar] [CrossRef]
  12. Maldar, S. New parallel fixed point algorithms and their application to a system of variational inequalities. J. Appl. Math. Comput. 2022, 68, 1025. [Google Scholar] [CrossRef]
  13. Konnov, I.V. Combined relaxation methods for variational inequalities. In Lecture Notes in Mathematical Economics; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  14. Facchinei, F.; Pang, J.-S. Finite Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA; Berlin/Heidelberg, Germany, 2003; Volumes I and II. [Google Scholar]
  15. Giannessi, F.; Maugeri, A. (Eds.) Variational Analysis and Applications; Springer: New York, NY, USA, 2005. [Google Scholar]
  16. Ansari, Q.H. (Ed.) Topics in Nonlinear Analysis and Optimization; World Education: Delhi, India, 2012. [Google Scholar]
  17. Ansari, Q.H.; Lalitha, C.S.; Mehta, M. Generalized Convexity. In Nonsmooth Variational Inequalities and Nonsmooth Optimization; CRC Press: Boca Raton, FL, USA; London, UK; New York, NY, USA, 2014. [Google Scholar]
  18. Noor, M.A. General variational inequalities. Appl. Math. Lett. 1988, 1, 119–122. [Google Scholar] [CrossRef]
  19. Noor, M.A. Variational inequalities in physical oceanography. In Ocean Waves Engineering, Advances in Fluid Mechanics; Rahman, M., Ed.; WIT Press: Southampton, UK, 1994; Volume 2. [Google Scholar]
  20. Bnouhachem, A.; Liu, Z.B. Alternating direction method for maximum entropy subject to simple constraint sets. J. Math. Anal. Appl. 2004, 121, 259–277. [Google Scholar] [CrossRef]
  21. Kocvara, M.; Outrata, J.V. On implicit complementarity problems with application in mechanics. In Proceedings of the IFIP Conference on Numerical Analysis and Optimization, Rabat, Morocco, 15–17 December 1993. [Google Scholar]
  22. Noor, M.A. General variational inequalities and nonexpansive mappings. J. Math. Anal. Appl. 2007, 331, 810–822. [Google Scholar] [CrossRef]
  23. Ahmad, R.; Ansari, Q.H.; Irfan, S.S. Generalized variational inclusions and generalized resolvent equations in Banach spaces. Comput. Math. Appl. 2005, 29, 1825–1835. [Google Scholar] [CrossRef]
  24. Ahmad, R.; Ansari, Q.H. Generalized variational inclusions and H-resolvent equations with H-accretive operators. Taiwan. J. Math. 2007, 111, 703–716. [Google Scholar] [CrossRef]
  25. Ahmad, R.; Ansari, Q.H. An iterative algorithm for generalized nonlinear variational inclusions. Appl. Math. Lett. 2000, 13, 23–26. [Google Scholar] [CrossRef]
  26. Fang, Y.P.; Huang, N.J. H-Monotone operator and resolvent operator technique for variational inclusions. Appl. Math. Comput. 2003, 145, 795–803. [Google Scholar] [CrossRef]
  27. Huang, N.J.; Fang, Y.P. A new class of general variational inclusions involving maximal η-monotone mappings. Publ. Math. Debrecen 2003, 62, 83–98. [Google Scholar] [CrossRef]
  28. Huang, Z.; Noor, M.A. Equivalency of convergence between one-step iteration algorithm and two-step iteration algorithm of variational inclusions for H-monotone mappings. Comput. Math. Appl. 2007, 53, 1567–1571. [Google Scholar] [CrossRef]
  29. Noor, M.A.; Huang, Z. Some resolvent iterative methods for variational inclusions and nonexpansive mappings. Appl. Math. Comput. 2007, 194, 267–275. [Google Scholar] [CrossRef]
  30. Zeng, L.C.; Guu, S.M.; Yao, J.C. Characterization of H-monotone operators with applications to variational inclusions. Comput. Math. Appl. 2005, 50, 329–337. [Google Scholar] [CrossRef]
  31. Gürsoy, F.; Sahu, D.R.; Ansari, Q.H. S-iteration process for variational inclusions and its rate of convergence. J. Nonlinear Convex Anal. 2016, 17, 1753–1767. [Google Scholar]
  32. Nagurney, A. Network Economics: A Variational Inequality Approach; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  33. Kotsireas, I.S.; Nagurney, A.; Pardalos, P.M. (Eds.) Dynamics of Disasters–Algorithmic Approaches and Applications; Springer Optimization and Its Applications 140; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  34. Fargetta, G.; Maugeri, A.; Scrimali, L. A stochastic Nash equilibrium problem for medical supply competition. J. Optim. Theory Appl. 2022, 193, 354–380. [Google Scholar] [CrossRef]
  35. Qihou, L. A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings. J. Math. Anal. Appl. 1990, 146, 301–305. [Google Scholar] [CrossRef]
  36. Weng, X. Fixed point iteration for local strictly pseudocontractive mapping. Proc. Amer. Math. Soc. 1991, 113, 727–731. [Google Scholar] [CrossRef]
  37. Milovanović, G.V. Numerical Analysis and Approximation Theory—Introduction to Numerical Processes and Solving of Equations; Zavod za udžbenike: Beograd, Serbia, 2014. (In Serbian) [Google Scholar]
Figure 1. Graph in log–log scale showing the convergence behaviors of algorithms (45) (blue line) and (46) (red line) to $\mathbf{0}=\{0\}_{k=0}^{\infty}$.
Figure 2. Comparison of the efficiency of algorithms (47) (blue line) and (48) (red line) based on the reduction in function values $F(w_{n})$ at each step.
Figure 3. Comparison of the efficiency of algorithms (47) (blue line) and (48) (red line) based on $\|F(w_{n})-F(w^{*})\|$ at each step.
Figure 4. Comparison of the efficiency of algorithms (47) (blue line) and (48) (red line) based on rMSE at each step.
Table 1. Convergence behavior of algorithm (45).
n | $x_{n}=\{x_{n,k}\}_{k=0}^{\infty}$ | $\|x_{n}\|$
0 | $\{10^{-1},\,10^{-2},\,10^{-3},\,10^{-4},\ldots\}$ | $1.0050378\times10^{-1}$
1 | $\{9.7967792\times10^{-2},\,9.8472060\times10^{-3},\,9.8477132\times10^{-4},\,9.8477183\times10^{-5},\ldots\}$ | $9.8466417\times10^{-2}$
10 | $\{8.6656450\times10^{-2},\,8.9532943\times10^{-3},\,8.9563123\times10^{-4},\,8.9563425\times10^{-5},\ldots\}$ | $8.7122397\times10^{-2}$
100 | $\{3.1258552\times10^{-2},\,3.5598150\times10^{-3},\,3.5651069\times10^{-4},\,3.5651600\times10^{-5},\ldots\}$ | $3.1462641\times10^{-2}$
500 | $\{5.1356941\times10^{-4},\,5.9449351\times10^{-5},\,5.9550580\times10^{-6},\,5.9551595\times10^{-7},\ldots\}$ | $5.1703345\times10^{-4}$
1000 | $\{3.0841463\times10^{-6},\,3.5701370\times10^{-7},\,3.5762164\times10^{-8},\,3.5762773\times10^{-9},\ldots\}$ | $3.1049491\times10^{-6}$
2000 | $\{1.1122794\times10^{-10},\,1.2875491\times10^{-11},\,1.2897416\times10^{-12},\,1.2897635\times10^{-13},\ldots\}$ | $1.1197818\times10^{-10}$
$\infty$ | $\{0\}_{k=0}^{\infty}$ | 0
Table 2. Convergence behavior of algorithm (46).
n | $x_{n}=\{x_{n,k}\}_{k=0}^{\infty}$ | $\|x_{n}\|$
0 | $\{10^{-1},\,10^{-2},\,10^{-3},\,10^{-4},\ldots\}$ | $1.0050378\times10^{-1}$
1 | $\{9.79677792\times10^{-2},\,9.8472060\times10^{-3},\,9.8477132\times10^{-4},\,9.8477183\times10^{-5},\ldots\}$ | $9.8466417\times10^{-2}$
10 | $\{9.6217387\times10^{-2},\,9.7133063\times10^{-3},\,9.7142348\times10^{-4},\,9.7142441\times10^{-5},\ldots\}$ | $9.6711360\times10^{-2}$
100 | $\{9.4718081\times10^{-2},\,9.5972797\times10^{-3},\,9.5985596\times10^{-4},\,9.5985724\times10^{-5},\ldots\}$ | $9.5207948\times10^{-2}$
500 | $\{9.3707517\times10^{-2},\,9.5183569\times10^{-3},\,9.5198684\times10^{-4},\,9.5198835\times10^{-5},\ldots\}$ | $9.4194550\times10^{-2}$
1000 | $\{9.3277892\times10^{-2},\,9.4846274\times10^{-3},\,9.4862361\times10^{-4},\,9.4862522\times10^{-5},\ldots\}$ | $9.3763705\times10^{-2}$
2000 | $\{9.2851285\times10^{-2},\,9.4510301\times10^{-3},\,9.4527346\times10^{-4},\,9.4527516\times10^{-5},\ldots\}$ | $9.3335876\times10^{-2}$
$\infty$ | $\{0\}_{k=0}^{\infty}$ | 0
Table 3. Comparison of the efficiency of algorithms (47) and (48).
| | Aligned, Alg. (47) | Aligned, Alg. (48) | Diabetes, Alg. (47) | Diabetes, Alg. (48) |
| # of iterations | 135 | 2559 | 116 | 10,480 |
| Min F value | 633.5152581 | 633.5407101 | 46.3181283 | 247.514786 |
| rMse (Train.) | 0.355915454 | 0.355922962 | 0.345908859 | 0.3504142 |
| rMse2 (Test) | 0.255097007 | 0.254559006 | 0.272919936 | 0.2745554 |
| Train. time (s) | 6.282856 | 118.1402858 | 0.0217341 | 2.8294408 |
| | COVID-19, Alg. (47) | COVID-19, Alg. (48) | Sobar, Alg. (47) | Sobar, Alg. (48) |
| # of iterations | 64,173 | 100,000 | 542 | 2817 |
| Min F value | 449.786 | 490.524 | 4.2837252 | 4.668452 |
| rMse (Train.) | 0.2994 | 0.3132 | 0.3428808 | 0.358791 |
| rMse2 (Test) | 0.16943 | 0.15862 | 0.288059 | 0.295836 |
| Train. time (s) | 187.705 | 420.37 | 0.4419386 | 1.013948 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
