Article

New Trends in Applying LRM to Nonlinear Ill-Posed Equations

1 Department of Mathematical & Computational Sciences, National Institute of Technology Karnataka, Surathkal 575025, India
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(15), 2377; https://doi.org/10.3390/math12152377
Submission received: 21 June 2024 / Revised: 22 July 2024 / Accepted: 24 July 2024 / Published: 30 July 2024
(This article belongs to the Special Issue Numerical Analysis and Modeling)

Abstract

Tautenhahn (2002) studied the Lavrentiev regularization method (LRM) to approximate a stable solution of the ill-posed nonlinear equation κ(u) = v, where κ : D(κ) ⊆ X → X is a nonlinear monotone operator and X is a Hilbert space. The operator in the example used in Tautenhahn's paper was not a monotone operator. So, the following question arises: can we use LRM for ill-posed nonlinear equations when the involved operator is not monotone? This paper provides a sufficient condition for employing the Lavrentiev regularization technique for such equations whenever the operator involved is non-monotone. Under certain assumptions, the error analysis and an adaptive parameter choice strategy for the method are discussed. Moreover, the developed theory is applied to two well-known ill-posed problems: the inverse gravimetry and growth law problems.

1. Introduction

Consider the ill-posed nonlinear equation [1]
κ ( u ) = v ,
where κ : D(κ) ⊆ X → X and X is a Hilbert space. The norm and inner product on X are ‖·‖ and ⟨·,·⟩, respectively. Suppose there exists û ∈ D(κ) such that κ(û) = v, and that the available data v^δ ∈ X satisfy
‖v − v^δ‖ ≤ δ.
The standard regularization method for approximating û in (1) is Tikhonov regularization (see [2,3,4,5,6,7,8,9,10,11] for the latest work on regularization methods for (1)), wherein the regularized approximation u_α^δ is the minimizer of the Tikhonov functional
J_α(u) = ‖κ(u) − v^δ‖² + α‖u − u₀‖²,
where u₀ ∈ X is the initial guess and α > 0 is the regularization parameter.
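As a sanity check of the definitions above, the Tikhonov functional can be minimized directly for a toy scalar problem. The operator κ(u) = u³, the exact solution û = 1 and all parameter values below are our own illustrative choices, not taken from the paper:

```python
import numpy as np

# Toy scalar illustration (our own example): minimize the Tikhonov functional
# J_alpha(u) = |kappa(u) - v_delta|^2 + alpha*|u - u0|^2 for kappa(u) = u**3,
# exact solution u_hat = 1, by brute force on a fine grid.
kappa = lambda u: u**3
u_hat, u0 = 1.0, 0.0
delta, alpha = 1e-3, 1e-2
v_delta = kappa(u_hat) + delta                    # noisy data

grid = np.linspace(-2.0, 2.0, 400001)
J = (kappa(grid) - v_delta)**2 + alpha*(grid - u0)**2
u_ad = grid[np.argmin(J)]                         # regularized minimizer
print(abs(u_ad - u_hat))                          # small regularized error
```

The grid search stands in for a proper optimizer only because the problem is one-dimensional; for operator equations one would minimize J_α iteratively.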
It is known that if κ is monotone [1,12,13,14,15], i.e.,
⟨κ(u) − κ(v), u − v⟩ ≥ 0 for all u, v ∈ D(κ),
then one can use LRM to approximate the exact solution û. Recall [1,13,14,15] that in LRM for (1), the unique solution u_α of
κ(u) + α(u − u₀) = v
is considered an approximation for û, with u₀ an initial guess for û.
As (1) is ill-posed and, in most practical problems, the available data are the noisy data v^δ, one has to deal with the solution u_α^δ of
κ(u) + α(u − u₀) = v^δ.
This unique solution u α δ is observed to be a good approximation for u ^ , provided that the regularization parameter α is chosen appropriately. The existence and uniqueness of the solution are due to the Minty–Browder theorem [1,12].
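For a scalar monotone toy operator (again our own choice, κ(u) = u³), the regularized equation can be solved by bisection, since κ + αI is strictly increasing and therefore has exactly one root; the error shrinks as δ → 0 when α is tied to δ:

```python
import numpy as np

# Scalar sketch of the Lavrentiev equation (our own toy, kappa(u) = u**3):
# u -> u**3 + alpha*u is strictly increasing (monotone kappa plus alpha*I),
# so kappa(u) + alpha*(u - u0) = v_delta has exactly one root.
kappa = lambda u: u**3
u_hat, u0 = 1.0, 0.0

def lavrentiev(v_delta, alpha, lo=-10.0, hi=10.0):
    F = lambda u: kappa(u) + alpha*(u - u0) - v_delta
    for _ in range(200):                    # bisection: F(lo) < 0 < F(hi)
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
    return 0.5*(lo + hi)

for delta in (1e-1, 1e-2, 1e-4):
    alpha = np.sqrt(delta)                  # a-priori choice, alpha -> 0 with delta
    u_ad = lavrentiev(kappa(u_hat) + delta, alpha)
    print(delta, abs(u_ad - u_hat))         # error decreases with delta
```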
To obtain error bounds on the distance ‖u_α^δ − û‖, one needs some additional smoothness assumptions on û with respect to the operator κ′(û) or κ′(u₀). In the literature, various source conditions are used, including the well-known Hölder-type conditions [1,13,14,16]. Our study considers one such assumption, as given in Assumption 2.
The regularization theory for linear ill-posed problems is already well developed, thus directing our attention to the more interesting nonlinear case. One can see a detailed overview in [17] for the linear case.
The theoretical analysis of LRM for (1) with a monotone operator κ has been well studied by many authors [1,13,14,15,18,19,20,21]. However, the operators in the examples used in some of these papers are not monotone. For instance, in [1], Tautenhahn considered the following example to illustrate the theoretical results, assuming the operator in (1) was monotone, which it actually was not: it was proven in [22] that the operator κ involved is not monotone.
Example 1.
Consider the problem of identifying u ( t ) , t ( 0 , 1 ) in the initial value problem
dv/dt = u(t)v(t), v(0) = c, t ∈ (0, 1), c > 0,
when the given noisy data are v^δ(t) ∈ L²(0, 1). The problem can be reformulated as an ill-posed operator equation κ(u) = v with
[κ(u)](t) = c exp(∫₀ᵗ u(θ) dθ), u ∈ L²(0, 1), t ∈ (0, 1).
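A minimal discretization of this forward operator, using a cumulative trapezoidal rule (our own implementation sketch; the grid size is arbitrary), can be checked against the closed-form data for u(t) = t, for which v(t) = c·e^{t²/2}:

```python
import numpy as np

# Discretized forward map of Example 1: [kappa(u)](t) = c*exp(integral_0^t u),
# computed with a cumulative trapezoidal rule on a uniform grid.
def kappa(u, t, c=1.0):
    # running trapezoidal integral of u over t (starts at 0)
    I = np.concatenate(([0.0], np.cumsum(0.5*(u[1:] + u[:-1])*np.diff(t))))
    return c*np.exp(I)

t = np.linspace(0.0, 1.0, 2001)
v = kappa(t, t)                        # u(t) = t gives v(t) = e^{t^2/2}
err = np.max(np.abs(v - np.exp(t**2/2)))
print(err)                             # essentially machine precision here
```

The trapezoidal rule is exact for the linear integrand u(t) = t, so the discrepancy is only floating-point round-off.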
In [23], the following example was used to illustrate the theoretical results obtained for the Lavrentiev regularization method when the operator in (1) was monotone, but the operator κ in the following example (inverse gravimetry problem [24]) was also not monotone.
Example 2.
The operator equation
κ(w) ≡ ∫∫_{[0,m]×[0,m]} [(θ − θ′)² + (ϑ − ϑ′)² + w²(θ′, ϑ′)]^{−1/2} dθ′ dϑ′ = q(θ, ϑ),
where q(θ, ϑ) = Δg(θ, ϑ) + κ(H), Δg(θ, ϑ) is the anomalous gravitational field, H is the asymptotic horizontal plane, and κ : D(κ) ⊆ L²([0, m] × [0, m]) → L²([0, m] × [0, m]), where w₀(θ, ϑ) ≡ H is constant, is an ill-posed equation (details in [15,24]).
Note that Equations (2) and (3) have unique solutions when κ is monotone. So the question is: under what condition do Equations (2) and (3) have unique solutions when the operator κ in (1) is not monotone?
The goal of this paper is to provide the conditions under which one can utilize the Lavrentiev regularization method to solve (1), when the operator κ is not monotone.
Section 2 gives the main results, and Section 3 contains the convergence analysis. The adaptive parameter choice strategy is discussed in Section 4, the algorithm is given in Section 5, and numerical examples and conclusions are presented in Section 6 and Section 7, respectively.

2. Main Results

In the first theorem of this section, we prove that κ′(u₀) + αI is invertible. We use the notation B(u, ρ) = {v ∈ X : ‖v − u‖ < ρ} for u ∈ X and ρ > 0, and B̄(u, ρ) to denote the closure of B(u, ρ).
Theorem 1.
Suppose that for every ε_h > 0 there exists a positive self-adjoint operator A_h such that
‖κ′(u₀) − A_h‖ ≤ ε_h
for some u₀ ∈ B(û, r₀) and some r₀ > 0. Then, (κ′(u₀) + αI)^{−1} exists for fixed α ∈ (max{2ε_h, δ}, ᾱ), for some ᾱ > max{2ε_h, δ}. Furthermore,
‖α(κ′(u₀) + αI)^{−1}‖ ≤ 2
and
‖(κ′(u₀) + αI)^{−1}κ′(u₀)‖ ≤ 3.
Proof. 
As A_h is a positive self-adjoint operator, A_h + αI is invertible for any fixed α > 0.
Suppose α ∈ (max{2ε_h, δ}, ᾱ) for some ᾱ > max{2ε_h, δ}. Then,
‖(A_h + αI)^{−1}[(κ′(u₀) + αI) − (A_h + αI)]‖ = ‖(A_h + αI)^{−1}[κ′(u₀) − A_h]‖ ≤ ε_h/α ≤ ε_h/(2ε_h) = 1/2 < 1.
Therefore, κ′(u₀) + αI is invertible by the Banach lemma on invertible operators [25,26] and
‖(κ′(u₀) + αI)^{−1}(A_h + αI)‖ ≤ 1/(1 − 1/2) = 2
or
‖(κ′(u₀) + αI)^{−1}‖ ≤ 2‖(A_h + αI)^{−1}‖ ≤ 2/α.
Thus,
‖α(κ′(u₀) + αI)^{−1}‖ ≤ α · (2/α) = 2
and
‖(κ′(u₀) + αI)^{−1}κ′(u₀)‖ = ‖(κ′(u₀) + αI)^{−1}(κ′(u₀) + αI − αI)‖ ≤ ‖(κ′(u₀) + αI)^{−1}(κ′(u₀) + αI)‖ + ‖α(κ′(u₀) + αI)^{−1}‖ ≤ 1 + 2 = 3. □
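The two bounds of Theorem 1 can be observed numerically on matrices: take a positive semidefinite self-adjoint A_h, perturb it by a non-symmetric E with ‖E‖ = ε_h to play the role of κ′(u₀), and pick α > 2ε_h (dimensions and spectra below are arbitrary demo choices):

```python
import numpy as np

# Matrix illustration of Theorem 1 (a demo, not a proof): A_h positive
# semidefinite self-adjoint, K = "kappa'(u0)" with ||K - A_h|| = eps_h.
rng = np.random.default_rng(0)
n = 50
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A_h = Q @ np.diag(rng.uniform(0.0, 1.0, n)) @ Q.T      # A_h = A_h^T >= 0
E = rng.standard_normal((n, n))
eps_h = 1e-3
E *= eps_h / np.linalg.norm(E, 2)                      # spectral norm eps_h
K = A_h + E                                            # non-symmetric perturbation
alpha = 3*eps_h                                        # alpha > 2*eps_h
Inv = np.linalg.inv(K + alpha*np.eye(n))
b1 = alpha*np.linalg.norm(Inv, 2)                      # Theorem 1: <= 2
b2 = np.linalg.norm(Inv @ K, 2)                        # Theorem 1: <= 3
print(b1, b2)
```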
Remark 1.
Note that, as κ′(u₀) + αI is invertible for all α > 0 in Example 1 (since κ′(·) is of positive type, or sectorial [12,27]), one can apply the theory developed in this paper to Example 1. Observe that if κ′(·) is of positive type, then the condition α ∈ (max{2ε_h, δ}, ᾱ) is not required.
In what follows, we assume the following:
Assumption 1
([1,13,16]). There exist an element φ(u, u₀, v) ∈ X and a constant k₀ > 0 such that, for all u ∈ D(κ) and v ∈ X,
[κ′(u) − κ′(u₀)]v = κ′(u₀)φ(u, u₀, v), ‖φ(u, u₀, v)‖ ≤ k₀‖v‖‖u − u₀‖.

3. Approximating Sequence

For each n = 1, 2, …, define the sequence (u_{n,α}^δ) by
u_{0,α}^δ := u₀, u_{n+1,α}^δ = u_{n,α}^δ − (A_h + αI)^{−1}[κ(u_{n,α}^δ) + α(u_{n,α}^δ − u₀) − v^δ].
We require some parameters to prove that (u_{n,α}^δ) converges. Let k₀ ≤ 1/24,
r₀ = (√(9 + 12k₀(1/(24k₀) − 1)) − 3)/(3k₀),
γ = (3/4)k₀r₀² + (3/2)r₀ + 1,
r = (1/2 − √(1/4 − 6k₀γ))/(3k₀)
and
σ = (1 + 3k₀r)/2.
Remark 2.
For the above r₀, we have γ ≤ 1/(24k₀), σ < 1 and γ < γ/(1 − σ) ≤ r.
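Evaluating these constants at k₀ = 1/30 (the value used later in Section 6) reproduces r₀ ≈ 0.1662, γ = 1.25, r = 5 and σ = 0.75, and confirms the inequalities of Remark 2; a small script, assuming the reconstructed formulas above:

```python
import numpy as np

# Constants r0, gamma, r, sigma evaluated at k0 = 1/30; these reproduce the
# values reported in the numerical section of the paper.
k0 = 1/30
r0 = (np.sqrt(9 + 12*k0*(1/(24*k0) - 1)) - 3)/(3*k0)
gamma = 0.75*k0*r0**2 + 1.5*r0 + 1
disc = max(0.25 - 6*k0*gamma, 0.0)       # ~0 in floating point by construction
r = (0.5 - np.sqrt(disc))/(3*k0)
sigma = (1 + 3*k0*r)/2
print(round(r0, 4), round(gamma, 4), round(r, 4), round(sigma, 4))
# Remark 2: gamma <= 1/(24 k0), sigma < 1, gamma/(1 - sigma) <= r
```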
Theorem 2.
Let Assumption 1 be satisfied. Then, the sequence (u_{n,α}^δ) is well defined and u_{n,α}^δ ∈ B(u₀, r) for all n ≥ 0. Also, the sequence (u_{n,α}^δ) is Cauchy in B(u₀, r) and thus converges to some u_α^δ ∈ B̄(u₀, r) satisfying κ(u_α^δ) + α(u_α^δ − u₀) = v^δ.
Further,
‖u_{n,α}^δ − u_α^δ‖ ≤ (σⁿ/(1 − σ))γ.
Proof. 
We employ mathematical induction to prove that u_{n,α}^δ ∈ B(u₀, r) for all n ≥ 0.
From Assumption 1 and as κ(û) = v, we have
‖u_{1,α}^δ − u₀‖ = ‖(A_h + αI)^{−1}[κ(u₀) − v^δ]‖ = ‖(A_h + αI)^{−1}[κ(u₀) − κ(û) + v − v^δ]‖
≤ ‖(A_h + αI)^{−1} ∫₀¹ [κ′(û + t(u₀ − û)) − κ′(u₀)] dt (u₀ − û)‖ + ‖(A_h + αI)^{−1}[κ′(u₀) − A_h + A_h](u₀ − û)‖ + ‖(A_h + αI)^{−1}(v − v^δ)‖
= ‖(A_h + αI)^{−1}κ′(u₀) ∫₀¹ φ(û + t(u₀ − û), u₀, u₀ − û) dt‖ + ‖(A_h + αI)^{−1}[κ′(u₀) − A_h + A_h](u₀ − û)‖ + ‖(A_h + αI)^{−1}(v − v^δ)‖
≤ ‖(A_h + αI)^{−1}[κ′(u₀) − A_h + A_h]‖ ∫₀¹ k₀(1 − t)‖u₀ − û‖² dt + ‖(A_h + αI)^{−1}[κ′(u₀) − A_h + A_h]‖ ‖u₀ − û‖ + ‖(A_h + αI)^{−1}(v − v^δ)‖
≤ (ε_h/α + 1)(k₀/2)r₀² + (ε_h/α + 1)r₀ + δ/α ≤ (3/4)k₀r₀² + (3/2)r₀ + 1 = γ ≤ r.
Next, we assume that u_{k,α}^δ ∈ B(u₀, r) for some k. Then,
‖u_{k+1,α}^δ − u₀‖ ≤ ‖u_{k+1,α}^δ − u_{k,α}^δ‖ + ‖u_{k,α}^δ − u_{k−1,α}^δ‖ + ⋯ + ‖u_{1,α}^δ − u₀‖ ≤ (σᵏ + σᵏ⁻¹ + ⋯ + 1)γ ≤ γ/(1 − σ) ≤ r.
Therefore, u_{k+1,α}^δ ∈ B(u₀, r) and induction yields our assertion.
Suppose u_{n+1,α}^δ, u_{n,α}^δ ∈ B(u₀, r), n > 0. Then, by (9),
u_{n+1,α}^δ − u_{n,α}^δ = u_{n,α}^δ − u_{n−1,α}^δ − (A_h + αI)^{−1}[κ(u_{n,α}^δ) − κ(u_{n−1,α}^δ) + α(u_{n,α}^δ − u_{n−1,α}^δ)]
= (A_h + αI)^{−1}((A_h + αI)(u_{n,α}^δ − u_{n−1,α}^δ) − (κ(u_{n,α}^δ) − κ(u_{n−1,α}^δ) + α(u_{n,α}^δ − u_{n−1,α}^δ)))
= (A_h + αI)^{−1}(A_h − ∫₀¹ κ′(u_{n−1,α}^δ + t(u_{n,α}^δ − u_{n−1,α}^δ)) dt)(u_{n,α}^δ − u_{n−1,α}^δ)
= (A_h + αI)^{−1}(A_h − κ′(u₀) + ∫₀¹ [κ′(u₀) − κ′(u_{n−1,α}^δ + t(u_{n,α}^δ − u_{n−1,α}^δ))] dt)(u_{n,α}^δ − u_{n−1,α}^δ).
From Assumption 1, we obtain
‖u_{n+1,α}^δ − u_{n,α}^δ‖ ≤ (ε_h/α)‖u_{n,α}^δ − u_{n−1,α}^δ‖ + ‖(A_h + αI)^{−1}κ′(u₀) ∫₀¹ φ(u_{n−1,α}^δ + t(u_{n,α}^δ − u_{n−1,α}^δ), u₀, u_{n,α}^δ − u_{n−1,α}^δ) dt‖
≤ (ε_h/α)‖u_{n,α}^δ − u_{n−1,α}^δ‖ + ‖(A_h + αI)^{−1}[κ′(u₀) − A_h + A_h]‖ ∫₀¹ ‖φ(u_{n−1,α}^δ + t(u_{n,α}^δ − u_{n−1,α}^δ), u₀, u_{n,α}^δ − u_{n−1,α}^δ)‖ dt
≤ (ε_h/α)‖u_{n,α}^δ − u_{n−1,α}^δ‖ + (ε_h/α + 1)k₀r‖u_{n,α}^δ − u_{n−1,α}^δ‖ (because u_{n−1,α}^δ + t(u_{n,α}^δ − u_{n−1,α}^δ) ∈ B(u₀, r))
≤ ((1 + 3k₀r)/2)‖u_{n,α}^δ − u_{n−1,α}^δ‖ = σ‖u_{n,α}^δ − u_{n−1,α}^δ‖.
Let n, m ∈ ℕ. Consider
‖u_{n+m,α}^δ − u_{n,α}^δ‖ ≤ ‖u_{n+m,α}^δ − u_{n+m−1,α}^δ‖ + ‖u_{n+m−1,α}^δ − u_{n+m−2,α}^δ‖ + ⋯ + ‖u_{n+1,α}^δ − u_{n,α}^δ‖ ≤ (σ^{n+m−1} + σ^{n+m−2} + ⋯ + σⁿ)γ ≤ (σⁿ/(1 − σ))γ.
Thus, (u_{n,α}^δ) is Cauchy in B(u₀, r) and therefore converges to some u_α^δ ∈ B̄(u₀, r). Letting n → ∞ in (9), we obtain κ(u_α^δ) + α(u_α^δ − u₀) = v^δ.
Further, letting m → ∞ in (12), we obtain
‖u_{n,α}^δ − u_α^δ‖ ≤ (σⁿ/(1 − σ))γ. □
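Iteration (9) can be exercised on a scalar toy problem (our own example, κ(u) = u³ with A_h = κ′(u₀) frozen at the initial guess); after a few dozen steps, the iterate satisfies the regularized equation to high accuracy:

```python
import numpy as np

# Iteration (9) for a scalar toy operator kappa(u) = u**3 (our own
# illustration), with A_h = kappa'(u0) frozen at the initial guess.
kappa = lambda u: u**3
u_hat, u0 = 1.0, 0.8
delta, alpha = 1e-3, 1e-2
v_delta = kappa(u_hat) + delta
A_h = 3*u0**2                                  # frozen derivative kappa'(u0)

u = u0
for n in range(50):
    u -= (kappa(u) + alpha*(u - u0) - v_delta)/(A_h + alpha)
residual = abs(kappa(u) + alpha*(u - u0) - v_delta)
print(residual, abs(u - u_hat))
```

The fixed-point map is a contraction here because 3u² stays close to A_h along the iteration, mirroring the role of σ < 1 in Theorem 2.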
In the following, an additional assumption is given.
Assumption 2.
There exist a constant ρ > 0 and an element ξ ∈ D(κ) with ‖ξ‖ ≤ ρ such that u₀ − û = (κ′(u₀)*κ′(u₀))^ω ξ, 0 < ω ≤ 1.
We define some parameters to be used in the proof of the results that follow.
Hereafter, we assume
r₀ = (3 − √(8 + 6k₀))/(3k₀), k₀ < 1/6,
R₁ = ((1 − 3k₀r₀) − √(1 − 18k₀r₀ + 9k₀²r₀²))/(3k₀)
and
R₂ = ((1 − 3k₀r₀) − √(1 − 18k₀r₀ + 9k₀²r₀² − 6k₀))/(3k₀).
Then, one can see that R 1 < R 2 . Let R = max { R 2 , r + r 0 } .
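With k₀ = 1/30, these formulas give r₀ ≈ 1.3644, R₁ ≈ 4.1635 and R₂ ≈ 8.6356, the values reported in the numerical section; a quick check, assuming the reconstructed formulas above:

```python
import numpy as np

# Constants r0, R1, R2 evaluated at k0 = 1/30; this r0 makes the radicand
# in R2 vanish, so R2 = (1 - 3*k0*r0)/(3*k0).
k0 = 1/30
r0 = (3 - np.sqrt(8 + 6*k0))/(3*k0)
d1 = 1 - 18*k0*r0 + 9*(k0*r0)**2
R1 = ((1 - 3*k0*r0) - np.sqrt(d1))/(3*k0)
R2 = ((1 - 3*k0*r0) - np.sqrt(max(d1 - 6*k0, 0.0)))/(3*k0)   # radicand ~ 0
print(round(r0, 4), round(R1, 4), round(R2, 4))
```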
Remark 3.
When δ = 0, the sequence (u_{n,α}) obtained from (9) with v in place of v^δ converges to u_α, which satisfies κ(u_α) + α(u_α − u₀) = v. Further, one can observe that u_{n,α}^δ ∈ B(û, R) for all n = 0, 1, 2, ….
Theorem 3.
Let ᾱ > 0 be a fixed number such that ᾱ > max{2ε_h, δ + ε_h}. Let α ∈ (max{2ε_h, δ + ε_h}, ᾱ) and u₀ be such that
‖u₀ − û‖ = r₀.
Then, u α , u α δ B ( u ^ , R ) .
Proof. 
As
κ(u_α) + α(u_α − u₀) = v,
we have κ(u_α) − κ(û) + α(u_α − u₀) = 0, or
∫₀¹ κ′(û + t(u_α − û)) dt (u_α − û) + α(u_α − u₀) = 0.
That is,
(κ′(u₀) + αI)(u_α − û) = κ′(u₀)(u_α − û) − ∫₀¹ κ′(û + t(u_α − û)) dt (u_α − û) + α(u₀ − û)
and hence
u_α − û = (κ′(u₀) + αI)^{−1} ∫₀¹ [κ′(u₀) − κ′(û + t(u_α − û))] dt (u_α − û) + (κ′(u₀) + αI)^{−1}α(u₀ − û).
Thus, from Assumption 1, we obtain
‖u_α − û‖ ≤ 3k₀‖u_α − û‖(‖u_α − û‖/2 + ‖û − u₀‖) + 2‖u₀ − û‖ ≤ (3k₀/2)‖u_α − û‖² + 3k₀r₀‖u_α − û‖ + 2r₀.
This implies
‖u_α − û‖ ≤ R₁ < R,
so u_α ∈ B(û, R).
Next, note that κ(u_α^δ) + α(u_α^δ − u₀) = v^δ, or
κ(u_α^δ) − κ(û) + α(u_α^δ − u₀) = v^δ − v.
This implies
∫₀¹ κ′(û + t(u_α^δ − û)) dt (u_α^δ − û) + α(u_α^δ − û) = v^δ − v + α(u₀ − û),
so
(κ′(u₀) + αI)(u_α^δ − û) = κ′(u₀)(u_α^δ − û) − ∫₀¹ κ′(û + t(u_α^δ − û)) dt (u_α^δ − û) + α(u₀ − û) + (v^δ − v),
hence
u_α^δ − û = (κ′(u₀) + αI)^{−1} ∫₀¹ [κ′(u₀) − κ′(û + t(u_α^δ − û))] dt (u_α^δ − û) + (κ′(u₀) + αI)^{−1}α(u₀ − û) + (κ′(u₀) + αI)^{−1}(v^δ − v).
Thus, from Assumption 1, we obtain
‖u_α^δ − û‖ ≤ 3k₀‖u_α^δ − û‖(‖u_α^δ − û‖/2 + ‖û − u₀‖) + 2‖u₀ − û‖ + δ/(α − ε_h) ≤ (3k₀/2)‖u_α^δ − û‖² + 3k₀r₀‖u_α^δ − û‖ + 2r₀ + 1 (because δ/(α − ε_h) < 1).
This implies
‖u_α^δ − û‖ ≤ R₂ < R,
so u_α^δ ∈ B(û, R). □
The next theorem provides estimates for ‖û − u_α‖ and ‖û − u_α^δ‖. The following lemma is used in its proof.
Lemma 1
([28]). Suppose that A : X → X is a linear bounded operator with A = A* ≥ 0. If p > 0 and a > 0, then, for any linear bounded operator A_h : X → X with A_h = A_h* ≥ 0 and ‖A_h‖ ≤ a, we have
‖A^p − A_h^p‖ ≤ a_p‖A − A_h‖^{min{p,1}}.
Here, a_p = 4/π if p ≤ 1, and p ↦ a_p is bounded in (0, p₀] for any p₀ > 0.
Theorem 4.
Let ᾱ > 0 be a fixed number such that ᾱ > max{2ε_h, δ + ε_h}. Let α ∈ (max{2ε_h, δ + ε_h}, ᾱ) and u₀ be such that
‖u₀ − û‖ = r₀.
Let u_α^δ and u_α be the same as in Theorem 2 and Remark 3, respectively. Suppose Assumptions 1 and 2 hold and 6k₀R < 1. Then,
(a)
‖û − u_α‖ ≤ c₁α^ω,
where c₁ = (2ρ/(1 − 3k₀R))[(4/π)(‖κ′(u₀)‖ + ‖A_h‖)^ω + ‖A_h‖^ω].
(b)
‖u_α^δ − u_α‖ ≤ c₂δ/α,
where c₂ = 2/(1 − 6k₀R).
(c)
‖u_α^δ − û‖ ≤ max{c₁, c₂}(α^ω + δ/α).
In particular, for α = δ^{1/(ω+1)}, we have
‖u_α^δ − û‖ = O(δ^{ω/(ω+1)}).
Proof. 
By (15),
u_α − û = (κ′(u₀) + αI)^{−1} ∫₀¹ [κ′(u₀) − κ′(û + t(u_α − û))] dt (u_α − û) + (κ′(u₀) + αI)^{−1}α(u₀ − û),
hence, from Assumption 1, we have
‖u_α − û‖ ≤ ‖α(κ′(u₀) + αI)^{−1}(u₀ − û)‖ + 3k₀R‖u_α − û‖
or
(1 − 3k₀R)‖u_α − û‖ ≤ ‖α(κ′(u₀) + αI)^{−1}(u₀ − û)‖.
Further, note that
‖α(κ′(u₀) + αI)^{−1}(u₀ − û)‖ ≤ ‖α[(κ′(u₀) + αI)^{−1} − (A_h + αI)^{−1}](u₀ − û)‖ + ‖α(A_h + αI)^{−1}(u₀ − û)‖ = ‖α(κ′(u₀) + αI)^{−1}(A_h − κ′(u₀))(A_h + αI)^{−1}(u₀ − û)‖ + ‖α(A_h + αI)^{−1}(u₀ − û)‖ ≤ (ε_h/(α − ε_h) + 1)‖α(A_h + αI)^{−1}(u₀ − û)‖ ≤ 2‖α(A_h + αI)^{−1}(u₀ − û)‖.
Now, from Assumption 2 and Lemma 1, we have
‖α(A_h + αI)^{−1}(u₀ − û)‖ = ‖α(A_h + αI)^{−1}(κ′(u₀)*κ′(u₀))^ω ξ‖ ≤ ‖α(A_h + αI)^{−1}[(κ′(u₀)*κ′(u₀))^ω − (A_h*A_h)^ω]ξ‖ + ‖α(A_h + αI)^{−1}(A_h*A_h)^ω ξ‖ ≤ (4ρ/π)‖κ′(u₀)*κ′(u₀) − A_h*A_h‖^ω + ‖α(A_h + αI)^{−1}A_h^ω‖ ‖A_h^ω ξ‖ = (4ρ/π)‖κ′(u₀)*(κ′(u₀) − A_h) + (κ′(u₀)* − A_h*)A_h‖^ω + ‖α(A_h + αI)^{−1}A_h^ω‖ ‖A_h^ω ξ‖ ≤ ρ[(4/π)(‖κ′(u₀)‖ + ‖A_h‖)^ω ε_h^ω + ‖A_h‖^ω α^ω].
Thus, from (20), (21), (22) and the fact that ε_h < α, we have
‖u_α − û‖ ≤ (2ρ/(1 − 3k₀R))[(4/π)(‖κ′(u₀)‖ + ‖A_h‖)^ω + ‖A_h‖^ω]α^ω.
This completes the proof of (17).
Next, as
κ(u_α^δ) + α(u_α^δ − u₀) = v^δ,
by (14) and (23), we have
κ(u_α^δ) − κ(u_α) + α(u_α^δ − u_α) = v^δ − v.
So, from the mean value theorem,
[∫₀¹ κ′(u_α + t(u_α^δ − u_α)) dt + αI](u_α^δ − u_α) = v^δ − v
or
(κ′(u₀) + αI)(u_α^δ − u_α) = ∫₀¹ [κ′(u₀) − κ′(u_α + t(u_α^δ − u_α))] dt (u_α^δ − u_α) + v^δ − v.
Therefore, from Assumption 1, we have
‖u_α^δ − u_α‖ ≤ ‖∫₀¹ (κ′(u₀) + αI)^{−1}[κ′(u₀) − κ′(u_α + t(u_α^δ − u_α))] dt (u_α^δ − u_α)‖ + ‖(κ′(u₀) + αI)^{−1}(v^δ − v)‖ ≤ 3k₀‖u_α^δ − u_α‖(‖u_α − u₀‖ + (‖u_α^δ − u₀‖ + ‖u_α − u₀‖)/2) + δ/(α − ε_h) ≤ 6k₀R‖u_α^δ − u_α‖ + δ/(α − ε_h)
and hence
‖u_α^δ − u_α‖ ≤ (1/(1 − 6k₀R)) δ/(α − ε_h) ≤ (1/(1 − 6k₀R)) 2δ/α.
This proves (18); (c) and (19) follow from (17) and (18). □
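The rate (19) can be observed in the linear self-adjoint case κ(u) = Au, where the Lavrentiev approximation is explicit, u_α^δ = (A + αI)^{−1}v^δ. The diagonal test problem below (spectrum, ω and noise model are our own choices) satisfies Assumption 2 with u₀ = 0, and the error divided by δ^{ω/(ω+1)} stays bounded as δ decreases:

```python
import numpy as np

# Rate sanity check for the linear self-adjoint case kappa(u) = A u:
# source condition u_hat = (A*A)^omega xi, choice alpha = delta^(1/(omega+1)).
rng = np.random.default_rng(1)
n, omega = 400, 0.5
lam = np.logspace(-8, 0, n)              # spectrum of A (decays to 0: ill-posed)
xi = rng.standard_normal(n); xi /= np.linalg.norm(xi)
u_hat = lam**(2*omega) * xi              # Assumption 2 with u0 = 0
v = lam * u_hat                          # exact data
ratios = []
for delta in (1e-2, 1e-4, 1e-6):
    alpha = delta**(1/(omega + 1))
    noise = rng.standard_normal(n); noise *= delta/np.linalg.norm(noise)
    u_ad = (v + noise)/(lam + alpha)     # (A + alpha I)^(-1) v_delta
    ratios.append(np.linalg.norm(u_ad - u_hat)/delta**(omega/(omega + 1)))
print(ratios)                            # bounded: error = O(delta^(omega/(omega+1)))
```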
Theorem 2 and Theorem 4 lead to the following theorem.
Theorem 5.
Let ᾱ > 0 be a fixed number such that ᾱ > max{2ε_h, δ + ε_h}, let α ∈ (max{2ε_h, δ + ε_h}, ᾱ) and let u₀ be such that
‖u₀ − û‖ = r₀.
Let u_α^δ and u_α be the same as in Theorem 2 and Remark 3, respectively. Suppose Assumptions 1 and 2 hold and 6k₀R < 1. Let
n_{α,δ} := min{m ∈ ℕ : σ^m ≤ δ/α}.
Then,
‖u_{n_{α,δ},α}^δ − û‖ ≤ c̃(δ/α + α^ω),
where c̃ = max{c₁, c₂, γ/(1 − σ)}.
One should choose α such that α^{1+ω} = δ. But such a choice is not possible, as ω is unknown. Therefore, a strategy for the parameter choice has to be considered. Here, the adaptive parameter choice introduced by Pereverzyev and Schock in [29] is adopted.

4. Adaptive Choice and Stopping Rule

In our study, the regularization parameter α (> 0) is chosen, and the error bound is deduced, based on an adaptive scheme that is independent not only of ω but also of the regularization method involved. The scheme was introduced by Pereverzyev and Schock in [29] (also see [20]). Here, regularization parameters α_i are chosen from a finite set Δ = {α₀, α₁, α₂, …, α_n}, with α₀ < α₁ < α₂ < ⋯ < α_n, and the corresponding approximations u_{n_{α_i,δ},α_i}^δ are studied.
Theorem 6.
Assume the conditions in Theorem 5 hold. Further, assume there exists i ∈ {0, 1, 2, …, n} such that α_i^ω ≤ δ/α_i, and for some μ > 1,
α_i := μ^i α₀, i = 1, 2, …, n, where α₀ = Cδ
for some constant C. Let
n_{α_i,δ} = min{m : σ^m ≤ δ/α_i}, l = max{i : α_i^ω ≤ δ/α_i} < n
and
k = max{i : ‖u_{n_{α_i,δ},α_i}^δ − u_{n_{α_j,δ},α_j}^δ‖ ≤ 4c̃ δ/α_j, j = 0, 1, 2, …, i}.
Then, l ≤ k and
‖û − u_{n_{α_k,δ},α_k}^δ‖ ≤ 6c̃μ δ^{ω/(ω+1)} = O(δ^{ω/(ω+1)}), 0 < ω ≤ 1.
Proof. 
First, we assert that l ≤ k. To prove this, it is enough to show that, for each i = 1, 2, …, n, α_i^ω ≤ δ/α_i implies ‖u_{n_{α_i,δ},α_i}^δ − u_{n_{α_j,δ},α_j}^δ‖ ≤ 4c̃ δ/α_j for j = 0, 1, 2, …, i.
Let j ≤ i. Consider
‖u_{n_{α_i,δ},α_i}^δ − u_{n_{α_j,δ},α_j}^δ‖ ≤ ‖u_{n_{α_i,δ},α_i}^δ − û‖ + ‖u_{n_{α_j,δ},α_j}^δ − û‖ ≤ c̃(δ/α_i + α_i^ω) + c̃(δ/α_j + α_j^ω) ≤ c̃(δ/α_i + δ/α_i) + c̃(δ/α_j + δ/α_j) ≤ 4c̃ δ/α_j.
Hence the assertion.
Next, we fix α_k and, as l ≤ k, one can observe that
‖û − u_{n_{α_k,δ},α_k}^δ‖ ≤ ‖û − u_{n_{α_l,δ},α_l}^δ‖ + ‖u_{n_{α_l,δ},α_l}^δ − u_{n_{α_k,δ},α_k}^δ‖ ≤ c̃(α_l^ω + δ/α_l) + 4c̃ δ/α_l ≤ c̃(δ/α_l + δ/α_l) + 4c̃ δ/α_l ≤ 6c̃ δ/α_l.
Note that the quantity α^ω + δ/α attains its least value for the choice α := α_δ such that α_δ^ω = δ/α_δ.
We have α_δ ≤ α_{l+1} ≤ μα_l. This gives δ/α_l ≤ μδ/α_δ = μα_δ^ω = μδ^{ω/(ω+1)}.
Thus,
‖û − u_{n_{α_k,δ},α_k}^δ‖ ≤ 6c̃μ δ^{ω/(ω+1)} = O(δ^{ω/(ω+1)}), where 0 < ω ≤ 1. □

5. Algorithm

The algorithm associated with the adaptive parameter choice strategy in Theorem 6 is as follows:
  • Choose α₀ = Cδ and μ > 1.
  • Set α_i = μ^i α₀, i = 0, 1, 2, …, n.
(i)
Set i = 0.
(ii)
Choose the minimum n such that σⁿ ≤ δ/α_i (say n_{α_i,δ}).
(iii)
Compute u_{n_{α_i,δ},α_i}^δ using iteration (9).
(iv)
If ‖u_{n_{α_i,δ},α_i}^δ − u_{n_{α_j,δ},α_j}^δ‖ > 4c̃ δ/α_j for some j ≤ i, then take k = i − 1 and return u_{n_{α_k,δ},α_k}^δ.
(v)
Else, set i = i + 1 and go back to (ii).
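The algorithm can be sketched end-to-end for the scalar toy operator κ(u) = u³ used earlier; the constants c̃, σ, C and μ below are demo choices, not the values the theory would prescribe:

```python
import numpy as np

# Adaptive parameter choice, sketched for the scalar toy kappa(u) = u**3
# with A_h = kappa'(u0) (our own illustration; c_tilde, sigma, C, mu are
# demo choices, not derived constants).
kappa = lambda u: u**3
u_hat, u0 = 1.0, 0.8
delta = 1e-4
v_delta = kappa(u_hat) + delta
A_h = 3*u0**2
C, mu, sigma, c_tilde = 1.0, 1.5, 0.75, 1.0

def u_n(alpha):
    # iterate (9) until sigma**n <= delta/alpha
    n = int(np.ceil(np.log(delta/alpha)/np.log(sigma))) if alpha > delta else 1
    u = u0
    for _ in range(max(n, 1)):
        u -= (kappa(u) + alpha*(u - u0) - v_delta)/(A_h + alpha)
    return u

alphas, sols = [C*delta], [u_n(C*delta)]
k = 0
for i in range(1, 30):
    alphas.append(mu**i * C*delta)
    sols.append(u_n(alphas[-1]))
    # step (iv): stop once the discrepancy test fails for some j <= i
    if any(abs(sols[-1] - sols[j]) > 4*c_tilde*delta/alphas[j] for j in range(i + 1)):
        k = i - 1
        break
print(alphas[k], abs(sols[k] - u_hat))
```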

6. Numerical Examples

Example 3.
Returning to Example 2, the derivative of the operator κ at the point w₀(θ, ϑ) is
κ′(w₀)h = ∫∫_{[0,m]×[0,m]} w₀(θ′, ϑ′)h(θ′, ϑ′)[(θ − θ′)² + (ϑ − ϑ′)² + (w₀(θ′, ϑ′))²]^{−3/2} dθ′ dϑ′.
By applying the two-dimensional analogue of the rectangle rule with a uniform grid in each variable in the integral Equation (6), we obtain the following system of nonlinear equations:
∑_{i=1}^m ∑_{j=1}^m [(θ_k − θ_j)² + (ϑ_l − ϑ_i)² + w²(θ_j, ϑ_i)]^{−1/2} Δθ Δϑ = q(θ_k, ϑ_l)
(k, l = 1, 2, …, m) for the unknown vector {w_{j,i} = w(θ_j, ϑ_i), i, j = 1, 2, …, m}. In vector–matrix form, this system takes the form:
κ N ( w N ) = q N ,
where w N , q N are vectors of dimension N = m 2 .
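The double sum defining κ_N can be sketched directly (a naive implementation with a small grid m = 8 and Δθ = Δϑ = 1, purely for illustration; the production grid in the paper uses m = 35 or 40):

```python
import numpy as np

# Rectangle-rule discretization of the gravimetry forward operator on a
# small m x m grid (m = 8 just to keep the demo fast); w is the constant H.
m, H = 8, 5.0
theta = np.arange(m) + 0.5               # cell midpoints, Delta_theta = 1
T, Tp = np.meshgrid(theta, theta, indexing="ij")
w = np.full((m, m), H)

def kappa_N(w):
    q = np.empty((m, m))
    for k in range(m):
        for l in range(m):
            # double sum over all source cells (theta', vartheta')
            q[k, l] = np.sum(1.0/np.sqrt((theta[k] - T)**2
                                         + (theta[l] - Tp)**2 + w**2))
    return q

q = kappa_N(w)
print(q.shape, float(q.min()), float(q.max()))
```

By the symmetry of the midpoint grid, opposite corners of q agree, which is a cheap consistency check on the indexing.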
The discrete variant of the derivative κ′(w₀) has the form
{κ_n′(w₀)h_n}_{k,l} = ∑_{i=1}^m ∑_{j=1}^m Δθ Δϑ w₀(θ_j, ϑ_i)h(θ_j, ϑ_i)[(θ_k − θ_j)² + (ϑ_l − ϑ_i)² + w₀²(θ_j, ϑ_i)]^{−3/2},
where w₀(θ, ϑ) ≡ H is constant and N = m². In order to compute w_{n,α}^δ, we choose the orthonormal system of box functions Φ_i(t, τ) = ζ_k(t)ζ_l(τ), i = (k − 1)m + l, k, l = 1, 2, 3, …, m, i = 1, 2, …, N (= m²), where ζ_k(t), ζ_l(τ) are the L²-orthonormalized characteristic functions of the intervals [k − 1, k), [l − 1, l) [30], respectively, in [0, m] × [0, m].
Then, there exist λ₁ⁿ, λ₂ⁿ, …, λ_Nⁿ ∈ ℝ such that w_{n,α}^δ = ∑_{i=1}^N λ_iⁿ Φ_i(t, τ). Then, from (9), we have
(A_h + αI) ∑_{i=1}^N (λ_i^{n+1} − λ_iⁿ)Φ_i(t, τ) = ∑_{i=1}^N η_iΦ_i(t, τ) − ∑_{i=1}^N κ_iΦ_i(t, τ) + α ∑_{i=1}^N (W_{0,i} − λ_iⁿ)Φ_i(t, τ),
where κ_i = κ(w_{n,α}^δ)_{k,l}, η_i = q^δ(k, l) and W_{0,i} = w₀(k, l), i = (k − 1)m + l, k, l = 1, 2, …, m, i = 1, 2, …, N. Then, w_{n+1,α}^δ is a solution of (9) if and only if [λ^{n+1} − λⁿ] = [λ₁^{n+1} − λ₁ⁿ, λ₂^{n+1} − λ₂ⁿ, …, λ_N^{n+1} − λ_Nⁿ]^T is the unique solution of
[M_N + αB_N][λ^{n+1} − λⁿ] = B_N[η − κ_N + α(W₀ − λⁿ)],
where M_N = (⟨A_hΦ_i, Φ_j⟩), B_N = (⟨Φ_i, Φ_j⟩), i, j = 1, 2, …, N,
κ_N = [κ₁, κ₂, …, κ_N]^T, η = [η₁, η₂, …, η_N]^T,
W₀ = [W_{0,1}, W_{0,2}, …, W_{0,N}]^T and λⁿ = [λ₁ⁿ, λ₂ⁿ, …, λ_Nⁿ]^T.
For m = 40, κ_n′(w₀) is a positive symmetric matrix with minimal eigenvalue λ_min = 1.5471 × 10⁻¹⁸ and condition number Cond(κ_n′(w₀)) = 6.4636 × 10¹⁷.
Similarly, for m = 35, κ_n′(w₀) is a positive symmetric matrix with minimal eigenvalue λ_min = 6.2379 × 10⁻¹⁹ and condition number Cond(κ_n′(w₀)) = 1.6031 × 10¹⁸. We take
ŵ(θ, ϑ) = 5(exp(−((θ/10) − 2.5)² − ((ϑ/10) − 2.5)²) + 3 exp(−((θ/10) + 2.5)² − ((ϑ/10) + 2.5)²))/60,
where ŵ(θ, ϑ) is given on the domain D = {0 ≤ θ ≤ m, 0 ≤ ϑ ≤ m}. Let Δθ = Δϑ = 1, N = m², ε_h = 1/m², Δ = 0.25, w₀ ≡ H ≡ 5 [15,24,31,32]. We take q^δ = κ(ŵ(θ, ϑ)) + δ in our computations.
Here, w_{n,α_k}^δ is the numerical solution obtained by method (9); the relative error of the solution and the residual are
Δ₁ = ‖ŵ − w_{n,α_k}^δ‖/‖w_{n,α_k}^δ‖, Δ₂ = ‖κ_n(w_{n,α_k}^δ) − q^δ‖/‖q^δ‖,
respectively.
Taking k₀ = 1/30, we obtain r₀ = 0.1662, γ = 1.2500, r = 5.0000, σ = 0.7500 and, for the constants defined before Remark 3, r₀ = 1.3644, R₁ = 4.1635 and R₂ = 8.6356 = R. Table 1 gives the values of α_k, the relative error, and the residual for different δ values. Figure 1, Figure 2, Figure 3 and Figure 4 show the exact and computed solutions for different δ values. The work is further compared with the study in [33], giving the values of Δ₁, Δ₂ and computation time (t) in seconds in Table 1.
Example 4.
We next consider Example 1. The Fréchet derivative of κ is
[κ′(u)h](t) = [κ(u)](t) ∫₀ᵗ h(θ) dθ.
Then, κ is not monotone (proved in [22]), but κ′ is of positive type, or sectorial [12] (i.e., ‖(κ′(·) + αI)^{−1}‖ ≤ c/α for α > 0 and some c > 0), and the spectrum of κ′(u) is the singleton set {0}.
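This derivative formula can be verified against a finite difference of the forward map (our own discretization with c = 1; step sizes are arbitrary):

```python
import numpy as np

# Finite-difference check of [kappa'(u)h](t) = [kappa(u)](t) * integral_0^t h
# for kappa(u)(t) = exp(integral_0^t u) (c = 1, our own grid choices).
def cumint(f, t):
    # running trapezoidal integral (starts at 0)
    return np.concatenate(([0.0], np.cumsum(0.5*(f[1:] + f[:-1])*np.diff(t))))

t = np.linspace(0.0, 1.0, 1001)
kappa = lambda u: np.exp(cumint(u, t))
u, h, eps = np.sin(t), np.cos(t), 1e-6
fd = (kappa(u + eps*h) - kappa(u))/eps   # directional finite difference
deriv = kappa(u)*cumint(h, t)            # the derivative formula above
err = np.max(np.abs(fd - deriv))
print(err)                               # O(eps)
```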
In order to compute u_{n,α}^δ, we consider Φ_i(t), i = 1, 2, …, m + 1, the characteristic functions of the intervals [i − 1, i) [30] in [0, 1] (partitioned into m subintervals).
Then, there exist λ₁ⁿ, λ₂ⁿ, …, λ_{m+1}ⁿ ∈ ℝ such that u_{n,α}^δ = ∑_{i=1}^{m+1} λ_iⁿ Φ_i(t). Then, from (9), we have
(A_h + αI) ∑_{i=1}^{m+1} (λ_i^{n+1} − λ_iⁿ)Φ_i(t) = ∑_{i=1}^{m+1} η_iΦ_i(t) − ∑_{i=1}^{m+1} κ_iΦ_i(t) + α ∑_{i=1}^{m+1} (λ_i⁰ − λ_iⁿ)Φ_i(t),
where κ_i = κ(u_{n,α}^δ)(t_i) and η_i = v^δ(t_i), i = 1, 2, …, m + 1, and the t_i are the points of the partition. Then, u_{n+1,α}^δ is a solution of (9) if and only if [λ^{n+1} − λⁿ] = [λ₁^{n+1} − λ₁ⁿ, λ₂^{n+1} − λ₂ⁿ, …, λ_{m+1}^{n+1} − λ_{m+1}ⁿ]^T is the unique solution of
[M + αB][λ^{n+1} − λⁿ] = B[η − κ_{m+1} + α(λ⁰ − λⁿ)],
where M = (⟨A_hΦ_i, Φ_j⟩), B = (⟨Φ_i, Φ_j⟩), i, j = 1, 2, …, m + 1,
κ_{m+1} = [κ₁, κ₂, …, κ_{m+1}]^T, η = [η₁, η₂, …, η_{m+1}]^T,
λ⁰ = [λ₁⁰, λ₂⁰, …, λ_{m+1}⁰]^T and λⁿ = [λ₁ⁿ, λ₂ⁿ, …, λ_{m+1}ⁿ]^T.
For the computations, the exact solution is taken to be û(t) = t, t ∈ (0, 1), with u₀(t) = 0 and v(t) = e^{t²/2}. We apply random noise to v to obtain v^δ. Let u_{n,α_k}^δ be the numerical solution obtained by method (9). Taking k₀ = 1/30, we obtain r₀ = 0.1662, γ = 1.2500, r = 5.0000, σ = 0.7500 and, for the constants defined before Remark 3, r₀ = 1.3644, R₁ = 4.1635 and R₂ = 8.6356 = R. Table 2 gives the values of α_k, the relative error, and the residual,
Δ₁ = ‖û − u_{n,α_k}^δ‖/‖u_{n,α_k}^δ‖, Δ₂ = ‖κ_n(u_{n,α_k}^δ) − v^δ‖/‖v^δ‖,
respectively, for different δ and m values. Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 show the exact and noisy data with the corresponding exact and computed solutions (C.S.), for different δ and m values.

7. Conclusions

A sufficient condition for applying LRM to ill-posed nonlinear equations involving operators that are not monotone has been developed. With this theory, researchers can study a wider range of problems using Lavrentiev regularization, irrespective of the monotonicity of the operator involved. Error estimates, the convergence analysis, and an adaptive parameter choice strategy are studied. The paper also provides numerical examples.

Author Contributions

Conceptualization, S.G., R.S., J.P., A.K. and I.K.A.; methodology, S.G., R.S., J.P., A.K. and I.K.A.; software, S.G., R.S., J.P., A.K. and I.K.A.; validation, S.G., R.S., J.P., A.K. and I.K.A.; formal analysis, S.G., R.S., J.P., A.K. and I.K.A.; investigation, S.G., R.S., J.P., A.K. and I.K.A.; resources, S.G., R.S., J.P., A.K. and I.K.A.; data curation, S.G., R.S., J.P., A.K. and I.K.A.; writing—original draft preparation, S.G., R.S., J.P., A.K. and I.K.A.; writing—review and editing, S.G., R.S., J.P., A.K. and I.K.A.; visualization, S.G., R.S., J.P., A.K. and I.K.A.; supervision, S.G., R.S., J.P., A.K. and I.K.A.; project administration, S.G., R.S., J.P., A.K. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tautenhahn, U. On the method of Lavrentiev regularization for nonlinear ill-posed problems. Inverse Probl. 2002, 18, 191–207. [Google Scholar] [CrossRef]
  2. Bakushinsky, A.B.; Smirnova, A. A study of frozen iteratively regularized Gauss-Newton algorithm for nonlinear ill-posed problems under generalized normal solvability condition. J. Inverse Ill-Posed Probl. 2020, 28, 275–286. [Google Scholar] [CrossRef]
  3. Mahale, P.; Dixit, S. Simplified iteratively regularized Gauss-Newton method in Banach spaces under a general source condition. Comput. Methods Appl. Math. 2020, 20, 321–341. [Google Scholar] [CrossRef]
  4. Mahale, P.; Shaikh, F. Simplified Levenberg–Marquardt method in Banach spaces for nonlinear ill-posed operator equations. Appl. Anal. 2021, 102, 124–148. [Google Scholar] [CrossRef]
  5. Mittal, G.; Giri, A.K. Iteratively regularized Landweber iteration method: Convergence analysis via Holder stability. Appl. Math. Comput. 2021, 392, 125744. [Google Scholar] [CrossRef]
  6. Mittal, G.; Giri, A.K. Convergence rates for iteratively regularized Gauss-Newton method subject to stability constraints. J. Comput. Appl. Math. 2022, 400, 113744. [Google Scholar] [CrossRef]
  5. Mittal, G.; Giri, A.K. Iteratively regularized Landweber iteration method: Convergence analysis via Hölder stability. Appl. Math. Comput. 2021, 392, 125744. [Google Scholar] [CrossRef]
  8. Mittal, G.; Giri, A.K. Convergence analysis of iteratively regularized Gauss-Newton method with frozen derivative in Banach spaces. J. Inverse Ill-Posed Probl. 2022, 30, 857–876. [Google Scholar] [CrossRef]
  7. Mittal, G.; Giri, A.K. Nonstationary iterated Tikhonov regularization: Convergence analysis via Hölder stability. Inverse Probl. 2022, 38, 125008. [Google Scholar] [CrossRef]
  10. George, S.; Sabari, M. Numerical approximation of a Tikhonov type regularizer by a discretized frozen steepest descent method. J. Comput. Appl. Math. 2018, 330, 488–498. [Google Scholar] [CrossRef]
  11. Xia, Y.; Han, B.; Fu, Z. Convergence analysis of inexact Newton–Landweber iteration under Hölder stability. Inverse Probl. 2023, 39, 015004. [Google Scholar] [CrossRef]
  12. Alber, Y.; Ryazantseva, I. Nonlinear Ill-Posed Problems of Monotone Type; Springer: Dordrecht, The Netherlands, 2006. [Google Scholar]
  13. Mahale, P.; Nair, M.T. Iterated Lavrentiev regularization for nonlinear ill-posed problems. ANZIAM J. 2009, 51, 191–217. [Google Scholar] [CrossRef]
  14. Semenova, E. Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators. Comput. Methods Appl. Math. 2010, 10, 444–454. [Google Scholar] [CrossRef]
  15. Vasin, V.; George, S. An analysis of Lavrentiev regularization method and Newton type process for nonlinear ill-posed problems. Appl. Math. Comput. 2014, 230, 406–413. [Google Scholar] [CrossRef]
  16. Mahale, P.; Nair, M.T. Lavrentiev regularization of non-linear ill-posed equations under general source condition. J. Nonlinear Anal. Optim. 2013, 4, 193–204. [Google Scholar]
  17. Nair, M.T. Regularization of ill-posed operator equations: An overview. J. Anal. 2021, 29, 519–541. [Google Scholar] [CrossRef]
  18. George, S.; Kanagaraj, K. Derivative free regularization method for nonlinear ill-posed equations in Hilbert scales. Comput. Methods Appl. Math. 2019, 19, 765–778. [Google Scholar] [CrossRef]
  19. George, S.; Nair, M.T. A derivative-free iterative method for nonlinear ill-posed equations with monotone operators. J. Inverse Ill-Posed Probl. 2017, 25, 543–551. [Google Scholar] [CrossRef]
  20. George, S.; Nair, M.T. A modified Newton-Lavrentiev regularization for nonlinear ill-posed Hammerstein-type operator equations. J. Complex. 2008, 24, 228–240. [Google Scholar] [CrossRef]
  21. Hofmann, B.; Kaltenbacher, B.; Resmerita, E. Lavrentiev’s regularization method in Hilbert spaces revisited. arXiv 2015, arXiv:1506.01803. [Google Scholar] [CrossRef]
  22. Nair, M.T.; Ravishankar, P. Regularized versions of continuous Newton’s method and continuous modified Newton’s method under general source conditions. Numer. Funct. Anal. Optim. 2008, 29, 1140–1165. [Google Scholar] [CrossRef]
  23. Jidesh, P.; Shubha, V.S.; George, S. A quadratic convergence yielding iterative method for the implementation of Lavrentiev regularization method for ill-posed equations. Appl. Math. Comput. 2015, 254, 148–156. [Google Scholar] [CrossRef]
  24. Vasin, V.; Prutkin, I.L.; Yu Timerkhanova, L. Retrieval of a three-dimensional relief of geological boundary from gravity data. Izv. Phys. Solid Earth 1996, 11, 901–905. [Google Scholar]
  25. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press: Boca Raton, FL, USA; Taylor and Francis Group: Oxfordshire, UK, 2022. [Google Scholar]
  26. Argyros, C.I.; Regmi, S.; Argyros, I.K.; George, S. Contemporary Algorithms: Theory and Applications; NOVA Publishers: Hauppauge, NY, USA, 2023; Volume III. [Google Scholar]
  27. Kantorovich, L.V.; Akilov, G.P. Functional Analysis, 2nd ed.; Pergamon Press: Elmsford, NY, USA, 1982. [Google Scholar] [CrossRef]
  28. Plato, R.; Vainikko, G. On the regularization of projection methods for solving ill-posed problems. Numer. Math. 1990, 57, 63–79. [Google Scholar] [CrossRef]
  29. Pereverzyev, S.; Schock, E. On the adaptive selection of the parameter in regularization of ill-posed problems. SIAM J. Numer. Anal. 2005, 43, 2060–2076. [Google Scholar] [CrossRef]
  30. Lu, S.; Pereverzyev, S. Sparsity reconstruction by the standard Tikhonov method. RICAM Rep. 2008, 17. [Google Scholar]
  31. Vasin, V. Modified Newton-type processes generating Fejér approximations of regularized solutions to nonlinear equations. Tr. Instituta Mat. Mekhaniki UrO RAN 2013, 19, 85–97. [Google Scholar] [CrossRef]
  32. Vasin, V. Irregular nonlinear operator equations: Tikhonov’s regularization and iterative approximation. J. Inverse Ill-Posed Probl. 2013, 21, 109–123. [Google Scholar] [CrossRef]
  33. Shubha, V.S.; George, S.; Jidesh, P. Finite dimensional realization of a Tikhonov gradient type-method under weak conditions. Rend. Del Circ. Mat. Palermo Ser. 2016, 65, 395–410. [Google Scholar] [CrossRef]
Figure 1. Exact and Computed Solutions ( m = 40 ).
Figure 2. Computed Solution ( m = 40 ).
Figure 3. Exact and Computed Solutions ( m = 35 ).
Figure 4. Computed Solution ( m = 35 ).
Figure 5. Data (a), Solution (b) ( m = 1000 , δ = 0.01 ).
Figure 6. Data (a), Solution (b) ( m = 1000 , δ = 0.001 ).
Figure 7. Data (a), Solution (b) ( m = 1000 , δ = 0.0001 ).
Figure 8. Data (a), Solution (b) ( m = 800 , δ = 0.01 ).
Figure 9. Data (a), Solution (b) ( m = 800 , δ = 0.001 ).
Figure 10. Data (a), Solution (b) ( m = 800 , δ = 0.0001 ).
Table 1. Error and computation time.

| δ | α_k | m | Δ1 | Δ2 | CPU Time | Δ1 [33] | Δ2 [33] | CPU Time [33] |
|---|-----|---|----|----|----------|---------|---------|---------------|
| 0.01 | 0.0256 | 40 | 0.002089 | 0.000420 | 4.02 | 2.3448 × 10⁻⁴ | 7.0843 × 10⁻⁵ | 4.31 |
| 0.005 | 0.0128 | 40 | 0.002030 | 0.000407 | 3.84 | 2.3448 × 10⁻⁴ | 7.0871 × 10⁻⁵ | 4.19 |
| 0.002 | 0.0051 | 40 | 0.001875 | 0.000372 | 3.79 | 2.3448 × 10⁻⁴ | 7.0888 × 10⁻⁵ | 4.04 |
| 0.01 | 0.0256 | 35 | 0.002211 | 0.000491 | 2.44 | 3.0643 × 10⁻⁴ | 1.0248 × 10⁻⁴ | 2.92 |
| 0.005 | 0.0128 | 35 | 0.002151 | 0.000476 | 2.41 | 3.0643 × 10⁻⁴ | 1.0252 × 10⁻⁴ | 2.84 |
| 0.002 | 0.0051 | 35 | 0.001994 | 0.000438 | 2.37 | 3.0643 × 10⁻⁴ | 1.0255 × 10⁻⁴ | 2.65 |
Table 2. Relative error.

| δ | α_k | m | Δ1 | Δ2 |
|---|-----|---|----|----|
| 0.01 | 1.0201 × 10⁻⁴ | 1000 | 7.8823 × 10⁻³ | 5.7002 |
| 0.001 | 1.0201 × 10⁻⁶ | 1000 | 7.8077 × 10⁻³ | 5.6951 |
| 0.0001 | 1.0201 × 10⁻⁸ | 1000 | 7.8070 × 10⁻³ | 5.6947 |
| 0.01 | 1.0201 × 10⁻⁴ | 800 | 7.8808 × 10⁻³ | 5.6796 |
| 0.001 | 1.0201 × 10⁻⁶ | 800 | 7.8062 × 10⁻³ | 5.6931 |
| 0.0001 | 1.0201 × 10⁻⁸ | 800 | 7.8055 × 10⁻³ | 5.6936 |
George, S.; Sadananda, R.; Padikkal, J.; Kunnarath, A.; Argyros, I.K. New Trends in Applying LRM to Nonlinear Ill-Posed Equations. Mathematics 2024, 12, 2377. https://doi.org/10.3390/math12152377