Article

The Impact of the Discrepancy Principle on the Tikhonov-Regularized Solutions with Oversmoothing Penalties

Faculty of Mathematics, Chemnitz University of Technology, 09107 Chemnitz, Germany
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(3), 331; https://doi.org/10.3390/math8030331
Submission received: 7 February 2020 / Revised: 20 February 2020 / Accepted: 22 February 2020 / Published: 3 March 2020
(This article belongs to the Special Issue Numerical Analysis: Inverse Problems – Theory and Applications)

Abstract

This paper deals with the Tikhonov regularization for nonlinear ill-posed operator equations in Hilbert scales with oversmoothing penalties. One focus is on the application of the discrepancy principle for choosing the regularization parameter and on its consequences. Numerical case studies are performed in order to complement analytical results concerning the oversmoothing situation. For example, case studies are presented for exact solutions of Hölder type smoothness with a low Hölder exponent. Moreover, the regularization parameter choice using the discrepancy principle, for which rate results are proven in the oversmoothing case in reference (Hofmann, B.; Mathé, P. Inverse Probl. 2018, 34, 015007), is compared to Hölder type a priori choices. On the other hand, well-known analytical results on the existence and convergence of regularized solutions are summarized and partially augmented. In particular, a sketch for a novel proof to derive Hölder convergence rates in the case of oversmoothing penalties is given, extending ideas from reference (Hofmann, B.; Plato, R. ETNA 2020, 93).

1. Introduction

This paper tries to complement the theory and practice of Tikhonov regularization with oversmoothing penalties for the stable approximate solution of nonlinear ill-posed problems in a Hilbert scale setting. Thus, we consider the operator equation
$F(x) = y \qquad (1)$
with a nonlinear forward operator $F: D(F) \subseteq X \to Y$, possessing the domain $D(F)$ and mapping between the infinite-dimensional real Hilbert spaces X and Y. In this context, $\|\cdot\|_X$ and $\|\cdot\|_Y$ denote the norms in X and Y, respectively. Throughout the paper, let $x^\dagger \in D(F)$ be a solution to Equation (1) for a given right-hand side y. We restrict our considerations to problems that are locally ill-posed at $x^\dagger$. This means that the replacement of the exact right-hand side y by noisy data $y^\delta \in Y$, obeying the deterministic noise model
$\|y - y^\delta\|_Y \le \delta \qquad (2)$
with noise level $\delta > 0$, may lead to significant errors in the solution of Equation (1) measured by the X-norm, even if $\delta$ tends to zero (cf. [1], Def. 2 for details).
For finding approximate solutions, we apply a Hilbert scale setting, where the densely defined, unbounded, and self-adjoint linear operator $B: D(B) \subseteq X \to X$ with domain $D(B)$ generates the Hilbert scale. This operator is assumed to be strictly positive such that, for some $\underline{m} > 0$,
$\|Bx\|_X \ge \underline{m}\,\|x\|_X \quad \text{for all } x \in D(B). \qquad (3)$
In this sense, we exploit the Hilbert scale $\{X_\tau\}_{\tau \in \mathbb{R}}$ generated by B, with $X_0 = X$, $X_\tau = D(B^\tau)$, and with corresponding norms $\|x\|_\tau := \|B^\tau x\|_X$.
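For intuition, the following toy sketch (our own illustration, not part of the paper) realizes such a scale with a diagonal generator B on $\mathbb{R}^n$, whose eigenvalues are bounded away from zero as required by Equation (3); the $\tau$-norms then simply reweight the coefficients.

```python
# Toy Hilbert scale (assumed setup): B acts diagonally with eigenvalues
# b_k = k >= m_ = 1 on the standard basis, so ||x||_tau = ||B^tau x||.
import numpy as np

n = 100
b = np.arange(1, n + 1, dtype=float)      # eigenvalues of the generator B

def norm_tau(x, tau):
    """Hilbert-scale norm ||x||_tau = ||B^tau x||_X for the diagonal B."""
    return np.linalg.norm(b**tau * x)

x = 1.0 / b                                # coefficients decaying like 1/k
for tau in (-1.0, 0.0, 0.5, 1.0):
    print(f"tau = {tau:4.1f}:  ||x||_tau = {norm_tau(x, tau):.4f}")
# The tau-norms grow with tau; in the infinite-dimensional limit, x belongs
# to X_tau exactly as long as sum_k k^(2*tau - 2) converges, i.e., tau < 1/2.
```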
As approximate solutions to $x^\dagger$, we use Tikhonov-regularized solutions $x_\alpha^\delta \in D(F)$ that are minimizers of the extremal problem
$T_\alpha^\delta(x) := \|F(x) - y^\delta\|_Y^2 + \alpha\,\|Bx\|_X^2 \to \min, \quad \text{subject to } x \in \mathcal{D} := D(F) \cap D(B), \qquad (4)$
where $\alpha > 0$ is the regularization parameter and $\|F(x) - y^\delta\|_Y^2$ characterizes the misfit or fidelity term. The penalty functional $\|Bx\|_X^2 = \|x\|_1^2$ in the Tikhonov functional $T_\alpha^\delta$ is adjusted to level one of the Hilbert scale such that all regularized solutions have the property $x_\alpha^\delta \in D(B)$. A more general form of the penalty functional would be $\|B(x - \bar{x})\|_X^2$, where $\bar{x} \in \mathcal{D}$ denotes a given smooth reference element; $\bar{x}$ then plays the role of the origin (the point of central interest), which can be very different for nonlinear problems. Without loss of generality, we set $\bar{x} := 0$ in the sequel, which simplifies the formulas.
In our study, the discrepancy principle named after Morozov (cf. [2]), as the most prominent a posteriori choice of the regularization parameter $\alpha > 0$, plays a substantial role. On the one hand, the simplified version
$\alpha_{discr} := \alpha(\delta, y^\delta): \quad \|F(x_{\alpha_{discr}}^\delta) - y^\delta\|_Y = C\,\delta \qquad (5)$
of the discrepancy principle in equation form, with a prescribed constant C > 1, is important for theory (cf. [3]). However, it is well known that there are nonlinear problems where this version is problematic due to duality gaps that prevent the solvability of Equation (5). To overcome the remaining weaknesses of the parameter choice expressed in Equation (5), sequential versions of the discrepancy principle can be applied that approximate $\alpha_{discr}$; we refer to [4,5,6] for more details. Such an approach is used for performing the numerical case studies in Section 6.
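The following minimal Python sketch (our own reconstruction, with all concrete choices assumed) shows the structure of such a sequential version: the regularization parameter runs through a geometric grid $\alpha_j = \alpha_0 q^j$ until the discrepancy first falls below $C\delta$; the routine `solve_tikhonov` is a hypothetical solver returning a minimizer of the Tikhonov problem for the given α.

```python
# Sequential discrepancy principle (sketch; cf. [4,5,6] for the theory).
import numpy as np

def sequential_discrepancy(solve_tikhonov, F, y_delta, delta,
                           C=1.3, alpha0=1.0, q=0.5, j_max=60):
    alpha = alpha0
    for _ in range(j_max):
        x_alpha = solve_tikhonov(alpha, y_delta)     # hypothetical solver
        if np.linalg.norm(F(x_alpha) - y_delta) <= C * delta:
            return alpha, x_alpha   # first grid point with discrepancy <= C*delta
        alpha *= q                  # otherwise decrease alpha and repeat
    raise RuntimeError("discrepancy level C*delta not reached on the grid")
```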
Our focus is on oversmoothing penalties in the Tikhonov functional $T_\alpha^\delta$, where $x^\dagger \notin D(B) = X_1$ such that $T_\alpha^\delta(x^\dagger) = \infty$. In this case, the regularizing property $T_\alpha^\delta(x_\alpha^\delta) \le T_\alpha^\delta(x^\dagger)$ does not yield any information. This property, however, is the basic tool for obtaining error estimates and convergence assertions for the Tikhonov-regularized solutions in the standard case, where $T_\alpha^\delta(x^\dagger) < \infty$. We refer as an example to Chapter 10 of the monograph [7], which also deals with nonlinear operator equations, but adjusts the penalty functional to level zero. To derive error estimates in the oversmoothing case, the regularizing property must be replaced by inequalities of the form $T_\alpha^\delta(x_\alpha^\delta) \le T_\alpha^\delta(x_{aux})$, where $x_{aux} \in \mathcal{D}$ is an appropriately chosen auxiliary element.
The seminal paper on Tikhonov regularization in Hilbert scales that includes the oversmoothing case was written by F. Natterer in 1984 (cf. [8]) and was restricted to linear operator equations. Error estimates in the X-norm and convergence rates were proven under two-sided inequalities that characterize the degree of ill-posedness a > 0 of the problem. We follow this approach, adapt it to the case of nonlinear problems throughout the subsequent sections, and assume the inequality chain
$c_a\,\|x - x^\dagger\|_{-a} \le \|F(x) - F(x^\dagger)\|_Y \le C_a\,\|x - x^\dagger\|_{-a} \quad \text{for all } x \in \mathcal{D} \qquad (6)$
with constants $0 < c_a \le C_a < \infty$. The left-hand inequality in Equation (6) represents a conditional stability estimate and is substantial for obtaining stable regularized solutions, whereas the right-hand inequality in Equation (6) contributes to the determination of the nonlinearity structure of the forward operator F. Convergence and rate results for the Tikhonov regularization expressed in Equation (4) with oversmoothing penalties under the inequality chain expressed in Equation (6) were recently presented in [3,6,9] and complemented by case studies in [10]. The present paper continues this series of articles by addressing open questions with respect to the discrepancy principle for choosing the regularization parameter α and its comparison to a priori parameter choices. In this context, one of the examples from [10] is reused for performing new numerical experiments in order to obtain additional assertions that cannot be taken from analytical investigations.
The paper is organized as follows: We summarize in Section 2 basic properties of regularized solutions under assumptions that are typical for oversmoothing penalties and in Section 3 assertions concerning the convergence. In Section 4 we show that the error estimates derived in [6] for obtaining low order convergence rates are also applicable to obtain the order optimal Hölder convergence rates under the associated Hölder-type source conditions. Section 5 recalls a nonlinear inverse problem from an exponential growth model and an appropriate Hilbert scale, which can both be used for performing numerical experiments in the subsequent section. In that section (Section 6), the obtained numerical results are presented and interpreted based on a series of tables and figures.

2. Assumptions and Properties of Regularized Solutions

In this section, we formulate the standing assumptions concerning the forward operator F, the Tikhonov functional $T_\alpha^\delta$, and the solution $x^\dagger$ of Equation (1) in order to ensure the existence and stability of regularized solutions $x_\alpha^\delta$ for all regularization parameters $\alpha > 0$ and noisy data $y^\delta$.
Assumption 1.
(a)
The operator $F: D(F) \subseteq X \to Y$ mapping between the real Hilbert spaces X and Y is weakly sequentially continuous, and its domain $D(F)$ with $0 \in D(F)$ is a convex and closed subset of X.
(b)
The generator $B: D(B) \subseteq X \to X$ of the Hilbert scale is a densely defined, unbounded, and self-adjoint linear operator that satisfies the inequality expressed in Equation (3).
(c)
The solution $x^\dagger \in D(F)$ of Equation (1) is supposed to be an interior point of the domain $D(F)$.
(d)
To characterize the case of oversmoothing penalties, we assume that
$x^\dagger \notin D(B) = X_1. \qquad (7)$
(e)
There is a number $a > 0$, and there are constants $0 < c_a \le C_a < \infty$ such that the two-sided estimates expressed in Equation (6) hold.
As a specific impact of Item (d) of Assumption 1 on approximate solutions to $x^\dagger$, we have the following proposition that is of interest for the behavior of regularized solutions in the case of oversmoothing penalties.
Proposition 1.
Let a sequence $\{x_n\} \subset D(B) \subseteq X$ converge weakly in X to $x \notin D(B)$ as $n \to \infty$. We then have $\lim_{n \to \infty} \|B x_n\|_X = \infty$.
Proof. 
In order to construct a contradiction, let us assume that the sequence $\{x_n\} \subset D(B)$ (or some subsequence of it) is bounded in $X_1$, i.e., $\|B x_n\|_X \le K$ for all $n \in \mathbb{N}$. Thus, a subsequence of $\{B x_n\}$ converges weakly in X to some element $z \in X$, because bounded sets are weakly pre-compact in the Hilbert space X. Since the operator B is densely defined and self-adjoint, it is closed; i.e., the graph $\{(x, Bx): x \in D(B)\}$ is closed and, due to the convexity of this set, a weakly closed subset of $X \times X$. Hence, the operator B is weakly closed, which implies that $x \in D(B)$ and $Bx = z$. This, however, contradicts the assumed property $x \notin D(B)$ and proves the proposition. ☐
Remark 1.
As a consequence of Proposition 1, any sequence of regularized solutions $\{x_n = x_{\alpha_n}^{\delta_n}\}$ that is norm-convergent (and thus also weakly convergent) to $x^\dagger \notin D(B)$ for $\delta_n \to 0$ as $n \to \infty$ blows up to infinity with respect to the $X_1$-norm. In other words, we have the limit condition $\lim_{n \to \infty} \|B x_{\alpha_n}^{\delta_n}\|_X = \infty$.
Based on Lemma 1 below, we can formulate in Proposition 2 the existence of minimizers to the extremal problem expressed in Equation (4).
Lemma 1.
The non-negative penalty functional $\|B \cdot\|_X^2: D(B) \subseteq X \to \mathbb{R}$ as part of the Tikhonov functional $T_\alpha^\delta$ is a proper, convex, lower semi-continuous, and stabilizing functional.
Proof. 
The obviously convex penalty functional is proper, since it attains finite values for all $x \in D(B) = X_1$. It is also a stabilizing functional because, as a consequence of Equation (3), the sub-level sets $\{x \in D(B): \|Bx\|_X^2 \le c\}$ are weakly sequentially pre-compact subsets in X for all constants $c \ge 0$. Namely, all such non-empty sub-level sets are bounded in X and hence weakly pre-compact. For showing that the functional $\|B \cdot\|_X^2 = \|\cdot\|_1^2$ is lower semi-continuous, by taking into account Proposition 1 and its proof, it is enough to show that for a sequence $\{x_n\} \subset D(B)$ with $\|x_n\|_1 \le K < \infty$ for all $n \in \mathbb{N}$ that converges weakly in X to $\hat{x} \in X$, we have $\hat{x} \in D(B)$ and this sequence also converges weakly in $X_1$ to $\hat{x}$. The lower semi-continuity of the norm functional $\|\cdot\|_1$ then yields $\|B\hat{x}\|_X^2 \le \liminf_{n \to \infty} \|Bx_n\|_X^2$. Now note that any subsequence of $\{x_n\}$, being bounded in $X_1$, has a subsequence $\{x_{n_k}\}$ such that $\{Bx_{n_k}\}$ converges weakly in X to some element $z \in X$. Since the operator B is weakly closed, we then have $\hat{x} \in D(B)$ and $B\hat{x} = z$. Since z is the same for all such subsequences, this completes the proof. ☐
Proposition 2.
For all $\alpha > 0$ and $y^\delta \in Y$, there is a regularized solution $x_\alpha^\delta \in \mathcal{D}$ solving the extremal problem expressed in Equation (4).
Proof. 
Proposition 4.1 from [11], which coincides with our proposition, is immediately applicable, since the Assumptions 3.11 and 3.22 from [11] are satisfied due to Assumption 1 and Lemma 1 above. ☐
In addition to the existence assertion of Proposition 2, we also have, under the assumptions stated above, the stability of regularized solutions, which means that small changes in the data $y^\delta$ yield only small changes in $x_\alpha^\delta$. For a detailed description of this fact, see Proposition 4.2 from [11], which applies here under Assumption 1.
Remark 2.
From Assumption 1, we have that there are no solutions $x \in \mathcal{D} = D(F) \cap D(B)$ satisfying $F(x) = y$ for the operator equation (1), because this would contradict, with $F(x) = F(x^\dagger)$ and $\|x - x^\dagger\|_X > 0$, the left-hand inequality of Equation (6). Besides $x^\dagger$, however, other solutions with $x \notin D(B)$ may exist. The regularized solutions $x_\alpha^\delta$, for fixed $\alpha > 0$ and $y^\delta \in Y$, need not be uniquely determined since, though possessing the convex part $\|Bx\|_X^2$, the Tikhonov functional $T_\alpha^\delta(x)$ is not necessarily convex.

3. Convergence of Regularized Solutions in the Case of Oversmoothing Penalties

In this section, we discuss assertions about the X-norm convergence of regularized solutions with the Tikhonov functional $T_\alpha^\delta$ introduced in Equation (4). First, we recall the following lemma (from [6], Proposition 3.4).
Lemma 2.
Under Assumption 1, we have, for regularized solutions $x_\alpha^\delta \in \mathcal{D}$ solving the extremal problem expressed in Equation (4), a function $\varphi: [0, \infty) \to [0, \infty)$ satisfying the limit condition $\lim_{\alpha \to 0} \varphi(\alpha) = 0$ and a constant $\bar{K} > 0$ such that the error estimate
$\|x_\alpha^\delta - x^\dagger\|_X \le \varphi(\alpha) + \bar{K}\,\dfrac{\delta}{\alpha^{a/(2a+2)}} \qquad (8)$
is valid for all $\delta > 0$ and for sufficiently small $\alpha > 0$.
From Lemma 2, we directly obtain the following proposition (cf. [6], Theorem 4.1):
Theorem 1.
For any a priori parameter choice $\alpha = \alpha(\delta)$ and any a posteriori parameter choice $\alpha = \alpha(\delta, y^\delta)$, the regularized solutions $x_\alpha^\delta$ converge under Assumption 1 to the solution $x^\dagger$ of the operator equation (1) for $\delta \to 0$, i.e.,
$\lim_{\delta \to 0} \|x_\alpha^\delta - x^\dagger\|_X = 0, \qquad (9)$
whenever
$\alpha \to 0 \quad \text{and} \quad \dfrac{\delta^2}{\alpha^{a/(a+1)}} \to 0 \quad \text{as } \delta \to 0. \qquad (10)$
Remark 3.
By inspection of the corresponding proofs in [6], it becomes clear that the validity of Lemma 2, and consequently of Theorem 1, is not restricted to the case of oversmoothing penalties; it holds whenever Items (a), (b), (c), and (e) of Assumption 1 are fulfilled. This means that the solution $x^\dagger \in D(F)$ can possess arbitrary smoothness.
Example 1.
In this example, we consider, with respect to Theorem 1, the a priori parameter choice
$\alpha = \alpha(\delta) \sim \delta^\kappa \qquad (11)$
for varying exponents $\kappa > 0$. As the following proposition, a consequence of Theorem 1, indicates, there is a wide range of exponents κ yielding convergence.

Proposition 3.

For the a priori choice expressed in Equation (11) of the regularization parameter $\alpha > 0$, the condition expressed in Equation (10) in Theorem 1 holds if and only if $0 < \kappa < 2 + \frac{2}{a}$.
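For completeness, the short computation behind Proposition 3 (our own supplement): inserting $\alpha(\delta) \sim \delta^\kappa$ into the condition expressed in Equation (10) gives
\[
\frac{\delta^2}{\alpha^{a/(a+1)}} \sim \delta^{\,2 - \kappa a/(a+1)} \to 0
\iff 2 - \frac{\kappa a}{a+1} > 0
\iff \kappa < \frac{2(a+1)}{a} = 2 + \frac{2}{a},
\]
while $\alpha \to 0$ additionally requires $\kappa > 0$.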
However, the proof of the underlying Theorem 4.1 in [6] shows that the general verification of the basic estimate expressed in Equation (8), developed with a focus on the case of oversmoothing penalties, requires both the left-hand and the right-hand inequality in the nonlinearity condition expressed in Equation (6) and, moreover, that $x^\dagger$ is an interior point of $D(F)$. More can be said in that direction if we distinguish the following three κ-intervals:
(i):
$0 < \kappa < 2$ with
$\alpha \to 0 \quad \text{and} \quad \dfrac{\delta^2}{\alpha} \to 0 \quad \text{as } \delta \to 0. \qquad (12)$
(ii):
$\kappa = 2$ with two constants $\underline{c}$ and $\bar{c}$ such that
$\alpha \to 0 \quad \text{and} \quad 0 < \underline{c} \le \dfrac{\delta^2}{\alpha} \le \bar{c} < \infty \quad \text{for all } \delta > 0, \qquad (13)$
and
(iii):
$2 < \kappa < 2 + \frac{2}{a}$ with
$\alpha \to 0 \quad \text{and} \quad \dfrac{\delta^2}{\alpha} \to \infty \quad \text{as } \delta \to 0. \qquad (14)$
If we have $x^\dagger \in X_1$, in contrast to Item (d) of Assumption 1, then, for the convergence of regularized solutions to $x^\dagger$ in Case (i), the nonlinearity condition expressed in Equation (6) is not needed at all, provided that Items (a) and (b) of Assumption 1 are fulfilled. Item (c) is not necessary there either. However, to derive Equation (9), $x^\dagger$ must be the uniquely determined penalty-minimizing solution to Equation (1) (cf. [11], Sect. 4.1.2 or alternatively [12,13]). Note that, for $x^\dagger \in X_1$ and parameter choices according to (i), conditions of the type expressed in Equation (6) are only relevant for proving convergence rates.
If regularization parameters are chosen such that Equation (12) is violated, as in Cases (ii) and (iii), then even for $x^\dagger \in X_1$ the inequalities from the condition expressed in Equation (6) are important. Precisely, Case (iii) seems to require both inequalities of Equation (6) for deriving convergence of regularized solutions to $x^\dagger$. The parameter choice according to Case (ii) with $\alpha = \alpha(\delta) \sim \delta^2$ represents, for $x^\dagger \in X_1$, the typical conditional stability estimate situation introduced in the seminal paper [14]. There, only the left-hand inequality of the condition expressed in Equation (6) is needed for convergence, which then is a consequence of convergence rate results (cf. [15,16,17] and references therein). However, to derive Equation (9), $x^\dagger$ must be the uniquely determined solution to Equation (1). In the oversmoothing case $x^\dagger \notin X_1$, both inequalities in Equation (6) seem to be indispensable for obtaining convergence; moreover, for all suggested choices of the regularization parameter α, the convergence proofs published so far, all using auxiliary elements, are essentially based on the fact that $x^\dagger$ is an interior point of the domain $D(F)$. Determining the conditions under which convergence takes place if $\kappa \ge 2 + 2/a$ is chosen in Equation (11) remains an open problem.
Now we turn to convergence assertions, provided that the regularization parameter $\alpha > 0$ is selected according to the discrepancy principle expressed in Equation (5) with prescribed constant C > 1. The main ideas of the proof are outlined along the lines of [6], Theorem 4.9, where a sequential discrepancy principle has been considered.
Theorem 2.
Under Assumption 1, let there be, for a sequence $\{\delta_n\}$ of positive noise levels with $\lim_{n \to \infty} \delta_n = 0$ and all admissible noisy data $y^{\delta_n} \in Y$ obeying $\|y^{\delta_n} - y\|_Y \le \delta_n$, regularization parameters $\alpha_n := \alpha_{discr}(\delta_n, y^{\delta_n}) > 0$ satisfying the discrepancy principle
$\|F(x_{\alpha_n}^{\delta_n}) - y^{\delta_n}\|_Y = C\,\delta_n \qquad (15)$
for a prescribed constant C > 1. We then have
$\lim_{n \to \infty} \alpha_n = 0 \qquad (16)$
and convergence as
$\lim_{n \to \infty} \|x_{\alpha_n}^{\delta_n} - x^\dagger\|_X = 0. \qquad (17)$
Proof. 
First, we show that Equation (16) always takes place for oversmoothing penalties with $x^\dagger \notin D(B) = X_1$. To find a contradiction, we assume that $\liminf_{n \to \infty} \alpha_n > 0$. Since $0 \in \mathcal{D}$ as a consequence of Item (a) of Assumption 1, we have $T_{\alpha_n}^{\delta_n}(x_{\alpha_n}^{\delta_n}) \le T_{\alpha_n}^{\delta_n}(0)$ and thus $\alpha_n \|x_{\alpha_n}^{\delta_n}\|_1^2 \le \|F(0) - y^{\delta_n}\|_Y^2 \le 2\,(\|F(0) - y\|_Y^2 + \delta_n^2)$, which means that $\|x_{\alpha_n}^{\delta_n}\|_1 = \|B x_{\alpha_n}^{\delta_n}\|_X$ and, by Equation (3), $\|x_{\alpha_n}^{\delta_n}\|_X$ are uniformly bounded from above for all $n \in \mathbb{N}$. We then have, for a subsequence, the weak convergences in X as $x_{\alpha_{n_k}}^{\delta_{n_k}} \rightharpoonup \tilde{x} \in X$ and $B x_{\alpha_{n_k}}^{\delta_{n_k}} \rightharpoonup \tilde{\tilde{x}} \in X$ as $k \to \infty$. Since the operator B is weakly sequentially closed, we therefore obtain $\tilde{x} \in D(B)$, and for F weakly sequentially continuous (cf. Item (a) of Assumption 1) also $F(x_{\alpha_{n_k}}^{\delta_{n_k}}) \rightharpoonup F(\tilde{x})$. Now, by Equation (15), we easily derive that $\|F(x_{\alpha_{n_k}}^{\delta_{n_k}}) - F(x^\dagger)\|_Y \to 0$ as $k \to \infty$ and consequently $F(\tilde{x}) = F(x^\dagger)$. Thus, the left-hand inequality of Equation (6) (cf. Item (e) of Assumption 1) yields $\tilde{x} = x^\dagger$, which contradicts the assumption $x^\dagger \notin D(B)$ and proves the property expressed in Equation (16) of the regularization parameter choice.
Secondly, we prove the convergence property expressed in Equation (17). From [6], Lemma 3.2, we have that
$\|F(x_{\alpha_n}^{\delta_n}) - y^{\delta_n}\|_Y \le \psi(\alpha_n)\,\alpha_n^{a/(2a+2)} + \delta_n$
for some function $\psi: [0, \infty) \to [0, \infty)$ satisfying the limit condition $\lim_{\alpha \to 0} \psi(\alpha) = 0$, for sufficiently small $\alpha_n > 0$ and arbitrary $\delta_n > 0$. In combination with $\|F(x_{\alpha_n}^{\delta_n}) - y^{\delta_n}\|_Y = C\,\delta_n$, this yields, under the condition expressed in Equation (16), which implies $\lim_{n \to \infty} \psi(\alpha_n) = 0$, the estimate
$\dfrac{\delta_n}{\alpha_n^{a/(2a+2)}} \le \dfrac{\psi(\alpha_n)}{C - 1} \to 0 \quad \text{as } n \to \infty.$
Now Theorem 1 applies. This completes the proof. ☐
Remark 4.
In the case $x^\dagger \in X_1$ of non-oversmoothing penalties, the limit condition expressed in Equation (16) represents a canonical situation for regularized solutions, whereas the non-existence of $\alpha_{discr}$ from Equation (5) and the violation of Equation (16) only occur in exceptional cases. For the sequential variant of the discrepancy principle, the exceptional case $\liminf_{n \to \infty} \alpha_n > 0$ is discussed in [4] in the context of the exact penalization veto introduced there.

4. An Alternative Approach to Prove Hölder Convergence Rates in the Case of Oversmoothing Penalties for an A Priori Parameter Choice of the Regularization Parameter

In this section, we consider order optimal convergence rate results in the case of oversmoothing penalties for an a priori parameter choice of the regularization parameter $\alpha > 0$. Such results have been proven in the paper [9] under the condition $x^\dagger \in X_p = D(B^p) = \mathcal{R}(B^{-p})$ for $0 < p < 1$, which is a Hölder-type source condition. In that paper, the proof is formulated for the penalty $\|B(x - \bar{x})\|_X^2$ with reference element $\bar{x} \in \mathcal{D}$. This proof has been repeated in the appendix of the paper [10] in the simplified version with $\bar{x} = 0$ and penalty term $\|Bx\|_X^2$, which is also utilized in the present work.
In the following, we present the sketch of an alternative proof for the order optimal Hölder convergence rates under the Hölder-type source condition $x^\dagger \in X_p$ for $0 < p < 1$. This alternative approach is based on error estimates that have been verified in [6] for showing convergence of the regularized solutions $x_\alpha^\delta$ to $x^\dagger$ and for proving low order (e.g., logarithmic) convergence rates under corresponding low order source conditions. By one novel idea outlined below, the results from [6] can be extended to prove Hölder convergence rates, too.
For the subsequent investigations, we complement Assumption 1 with an assumption that specifies the smoothness of the solution $x^\dagger$:
Assumption 2.
There are $0 < p < 1$ and $w \in X$ such that
$x^\dagger = B^{-p} w \in X_p. \qquad (18)$
Theorem 3.
Under Assumptions 1 and 2, we have, for the a priori parameter choice
$\alpha = \alpha(\delta) \sim \delta^{\frac{2(a+1)}{a+p}}, \qquad (19)$
the convergence rate
$\|x_\alpha^\delta - x^\dagger\|_X = \mathcal{O}\!\left(\delta^{\frac{p}{a+p}}\right) \quad \text{as } \delta \to 0. \qquad (20)$
Proof. 
We give only a sketch of a proof for this theorem, presupposing the results of the recent paper [6]. Precisely, we outline only the points where we amend and complement the results of [6] in order to extend [6], Theorem 5.3, to the case of appropriate power-type functions φ.
Auxiliary elements $z_\alpha \in D(B)$, which are for all $\alpha > 0$ the uniquely determined minimizers of the artificial Tikhonov functional $T_{\alpha,a}(x) := \|x - x^\dagger\|_{-a}^2 + \alpha\,\|Bx\|_X^2$, represent, in combination with the moment inequality, the essential tool for the proof. By introducing the self-adjoint and positive semi-definite bounded linear operator $G := B^{-(2a+2)}: X \to X$, we can verify these elements in an explicit manner as
$z_\alpha = G\,(G + \alpha I)^{-1} x^\dagger = x^\dagger - \alpha\,(G + \alpha I)^{-1} x^\dagger,$
which implies that, for all $\alpha > 0$,
$z_\alpha - x^\dagger = -\alpha\,(G + \alpha I)^{-1} x^\dagger,$
$B^{-a}(z_\alpha - x^\dagger) = -G^{a/(2a+2)}\left[\alpha\,(G + \alpha I)^{-1} x^\dagger\right],$
and
$B z_\alpha = G^{(2a+1)/(2a+2)}\,(G + \alpha I)^{-1} x^\dagger.$
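For the reader's convenience, we add a one-line derivation sketch of this representation (our own supplement): the minimizer of the quadratic functional $T_{\alpha,a}$ solves the normal equation, and multiplication by $B^{-2}$ yields the stated form,
\[
B^{-2a}(z_\alpha - x^\dagger) + \alpha B^2 z_\alpha = 0
\;\Longrightarrow\;
\bigl(B^{-(2a+2)} + \alpha I\bigr) z_\alpha = B^{-(2a+2)} x^\dagger
\;\Longrightarrow\;
z_\alpha = G\,(G + \alpha I)^{-1} x^\dagger.
\]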
According to [6], Lemma 3.1, we then have the functions $f_1(\alpha) = o(1)$, $f_2(\alpha) = o(1)$, and $f_3(\alpha) = o(1)$ as $\alpha \to 0$ introduced there, which can be found in our notation from the representations $\|z_\alpha - x^\dagger\|_X = f_1(\alpha)$, $\|B^{-a}(z_\alpha - x^\dagger)\|_X = f_2(\alpha)\,\alpha^{a/(2a+2)}$, and $\|B z_\alpha\|_X = f_3(\alpha)\,\alpha^{-1/(2a+2)}$. Under the source condition expressed in Equation (18), which attains the form $x^\dagger = G^{p/(2a+2)} w$ with some source element $w \in X$, we derive in detail the formulas
$f_1(\alpha) = \left\|G^{p/(2a+2)}\left[\alpha\,(G + \alpha I)^{-1} w\right]\right\|_X = \mathcal{O}\!\left(\alpha^{\frac{p}{2a+2}}\right) \quad \text{as } \alpha \to 0, \qquad (21)$
$f_2(\alpha) = \alpha^{-a/(2a+2)}\left\|G^{(a+p)/(2a+2)}\left[\alpha\,(G + \alpha I)^{-1} w\right]\right\|_X = \mathcal{O}\!\left(\alpha^{\frac{p}{2a+2}}\right) \quad \text{as } \alpha \to 0, \qquad (22)$
and
$f_3(\alpha) = \alpha^{-(2a+1)/(2a+2)}\left\|G^{(2a+1+p)/(2a+2)}\left[\alpha\,(G + \alpha I)^{-1} w\right]\right\|_X = \mathcal{O}\!\left(\alpha^{\frac{p}{2a+2}}\right) \quad \text{as } \alpha \to 0. \qquad (23)$
The asymptotics $\mathcal{O}(\alpha^{p/(2a+2)})$ in Equations (21)–(23) is a consequence of the properties $\|G\,(G + \alpha I)^{-1}\| \le 1$ and $\|(G + \alpha I)^{-1}\| \le 1/\alpha$, which yield, by exploiting the moment inequality (cf. [7], Formula (2.49)),
$\|G^\theta (G + \alpha I)^{-1}\| \le \|G\,(G + \alpha I)^{-1}\|^\theta\,\|(G + \alpha I)^{-1}\|^{1-\theta} \le \alpha^{\theta - 1} \quad \text{for } \alpha > 0 \text{ and } 0 \le \theta \le 1 \qquad (24)$
for the self-adjoint and positive semi-definite operator G. Here, $\|\cdot\|$ denotes the operator norm in the space of bounded linear operators mapping in X. The inequality expressed in Equation (24) is applied with $\theta = p/(2a+2)$ in Equation (21), with $\theta = (a+p)/(2a+2)$ in Equation (22), and with $\theta = (2a+1+p)/(2a+2)$ in Equation (23), taking into account that all three θ-values are smaller than one. These are the new ideas of the present proof.
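Spelled out for Equation (21) (our own supplement; Equations (22) and (23) follow analogously with their respective θ-values):
\[
f_1(\alpha) = \bigl\|G^{\theta}\,\alpha (G+\alpha I)^{-1} w\bigr\|_X
\le \alpha\,\bigl\|G^{\theta}(G+\alpha I)^{-1}\bigr\|\,\|w\|_X
\le \alpha\,\alpha^{\theta-1}\,\|w\|_X
= \alpha^{\theta}\,\|w\|_X,
\qquad \theta = \tfrac{p}{2a+2}.
\]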
Because the function $f_9(\alpha)$ in [6], Formula (3.12), is found by linear combination and maximum-building of the functions $f_1(\alpha)$, $f_2(\alpha)$, and $f_3(\alpha)$, we derive here $f_9(\alpha) \sim \alpha^{p/(2a+2)}$ along the lines of Section 3 in [6] and consequently the error estimate
$\|x_\alpha^\delta - x^\dagger\|_X \le \underline{K}\,\alpha^{p/(2a+2)} + \bar{K}\,\dfrac{\delta}{\alpha^{a/(2a+2)}} \qquad (25)$
with constants $\underline{K}, \bar{K} > 0$, which is valid for all $\delta > 0$ and sufficiently small $\alpha > 0$. Such a restriction to sufficiently small $\alpha > 0$ is due to the fact that $z_\alpha$ has to belong to $D(F)$ in order to apply the inequality chain expressed in Equation (6); this is the case for small α, since $x^\dagger$ is assumed to be an interior point of $D(F)$. Under the a priori parameter choice expressed in Equation (19), we immediately obtain the convergence rate expressed in Equation (20) from the error estimate expressed in Equation (25). This completes the sketch of the proof of the theorem. ☐
Remark 5.
Obviously, the a priori parameter choice expressed in Equation (19) satisfies the sufficient condition expressed in Equation (10) for the convergence of regularized solutions from Theorem 1. More precisely, taking into account Example 1, the choice expressed in Equation (19) has the form of Equation (11) with $\kappa = \frac{2(a+1)}{a+p}$, which for $0 < p < 1$ yields $2 < \kappa < 2 + \frac{2}{a}$ and belongs to Case (iii), where the quotient $\delta^2/\alpha$ tends toward infinity as $\delta \to 0$. We mention that the choice expressed in Equation (19) coincides with the choice in [8] suggested by Natterer, who proved the order optimal convergence rate expressed in Equation (20) for linear ill-posed operator equations. For the nonlinear operator equation (1), the convergence rate expressed in Equation (20) has also been proven in [3] for the a posteriori parameter choice $\alpha_{discr} = \alpha(\delta, y^\delta)$ from Equation (5). However, by now, there are no analytical results about the $\alpha_{discr}$-asymptotics of the discrepancy principle as δ tends toward zero. The numerical experiments in the subsequent sections will provide some hints that the hypothesis $\alpha_{discr} \sim \delta^{\frac{2(a+1)}{a+p}}$ does not have to be rejected.

5. Model Problem and Appropriate Hilbert Scale

In the following, we introduce an example of a nonlinear operator equation (1) together with an appropriate Hilbert scale, for which we will investigate the analytic results from the previous sections numerically, following up on [10]. The well-known scale of Hilbert-type Sobolev spaces $H^p(0,1)$ with integer values $p \ge 0$ consists of functions whose p-th derivative is still in $L^2(0,1)$. For positive indices p, the spaces can be defined by using an interpolation argument, and for general real parameters $p \in \mathbb{R}$ the norms of $H^p(0,1)$ can be defined by using the Fourier transform $\hat{x}$ of the function x as
$\|x\|_{H^p(0,1)}^2 := \int_{\mathbb{R}} (1 + |\xi|^2)^p\,|\hat{x}(\xi)|^2\,d\xi \qquad (26)$
(cf. [18]). The Sobolev scale does not constitute a Hilbert scale in the strict sense, but for each $0 < \bar{p} < \infty$ there is an operator $B: L^2(0,1) \to L^2(0,1)$ such that $\{X_p\}_{0 \le p \le \bar{p}}$ is a Hilbert scale (see [19]). In order to form a full Hilbert scale for arbitrary real values of p, boundary value conditions need to be imposed.
Hilbert scale.
To generate a full Hilbert scale $\{X_\tau\}_{\tau \in \mathbb{R}}$, we exploit the simple integration operator
$[Jh](t) := \int_0^t h(\tau)\,d\tau \quad (0 \le t \le 1) \qquad (27)$
of Volterra type mapping in $X = Y = L^2(0,1)$, and set
$B := (J^* J)^{-1/2}. \qquad (28)$
Using the Riemann–Liouville fractional integral operator $J^p$ and its adjoint $(J^*)^p = (J^p)^*$ for $0 < p \le 1$, we obtain
$X_p = D(B^p) = \mathcal{R}\big((J^* J)^{p/2}\big) = \mathcal{R}\big((J^*)^p\big) \qquad (29)$
(cf. [20,21,22,23]); hence, by [20], Lemma 8, the explicit structure
$X_p = \begin{cases} H^p(0,1) & \text{for } 0 < p < \frac{1}{2}, \\ \left\{x \in H^{1/2}(0,1): \int_0^1 \frac{|x(t)|^2}{1-t}\,dt < \infty \right\} & \text{for } p = \frac{1}{2}, \\ \{x \in H^p(0,1): x(1) = 0\} & \text{for } \frac{1}{2} < p \le 1. \end{cases}$
Further boundary conditions have to be incorporated for higher Sobolev indices p.
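A minimal discrete sketch of this construction (our own, with assumed quadrature and grid choices): the integration operator J from Equation (27) becomes a lower triangular matrix, and fractional powers of $B = (J^*J)^{-1/2}$ are realized via an eigendecomposition of the symmetric matrix $J^T J$.

```python
# Discretized Volterra operator J and Hilbert-scale generator B (sketch).
import numpy as np

N = 200
h = 1.0 / N
# Simple quadrature of [Jx](t_i) = int_0^{t_i} x(tau) dtau (trapezoid-like):
J = h * np.tril(np.ones((N, N)), k=-1) + 0.5 * h * np.eye(N)

G0 = J.T @ J                                  # symmetric positive definite
lam, V = np.linalg.eigh(G0)                   # G0 = V diag(lam) V^T

def B_power(p):
    """Matrix representing B^p = (J^T J)^(-p/2) in the discrete setting."""
    return (V * lam**(-p / 2.0)) @ V.T

x = np.ones(N)
for p in (0.25, 0.5, 1.0):
    print(f"p = {p}: discrete ||x||_p ~ {np.linalg.norm(B_power(p) @ x):.3f}")
```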
Model problem.
The exponential growth model of this example was discussed in the early literature (cf., e.g., [24], Section 3.1). More details and properties can be found in [25] and in the appendix of [3]. To identify the time-dependent growth rate $x(t)\ (0 \le t \le T)$ of a population, we use observations $y(t)\ (0 \le t \le T)$ of the time-dependent size of the population with initial size $y(0) = y_0 > 0$, where the O.D.E. initial value problem
$y'(t) = x(t)\,y(t) \quad (0 < t \le T), \qquad y(0) = y_0$
is assumed to hold. For simplicity, we set T := 1 and consider the space setting $X = Y := L^2(0,1)$. Thus, we derive the nonlinear forward operator $F: x \mapsto y$ mapping in the real Hilbert space $L^2(0,1)$ as
$[F(x)](t) = y_0\,\exp\!\left(\int_0^t x(\tau)\,d\tau\right) \quad (0 \le t \le 1), \qquad (30)$
with full domain D ( F ) = L 2 ( 0 , 1 ) and with the Fréchet derivative
$[F'(x)h](t) = [F(x)](t) \int_0^t h(\tau)\,d\tau \quad (0 \le t \le 1,\ h \in X).$
It can be shown that there is some constant $\hat{K} > 0$ such that, for all $x \in X$, the inequality
$\|F(x) - F(x^\dagger) - F'(x^\dagger)(x - x^\dagger)\|_Y \le \hat{K}\,\|F(x) - F(x^\dagger)\|_Y\,\|x - x^\dagger\|_X$
is valid. This in turn guarantees that a tangential cone condition
$\|F(x) - F(x^\dagger) - F'(x^\dagger)(x - x^\dagger)\|_Y \le \eta\,\|F(x) - F(x^\dagger)\|_Y$
holds with some $0 < \eta < 1$ in $D(F) = \overline{B_r(x^\dagger)}$ for a sufficiently small $r > 0$ (cf. [3], Example A.2), where $\overline{B_r(x^\dagger)}$ denotes a closed ball around $x^\dagger$ with radius r. According to the construction of the Hilbert scale $\{X_\tau\}_{\tau \in \mathbb{R}}$ generated by the operator B in Equation (28), and due to $0 < \underline{c} \le [F(x)](t) \le \bar{c}$ as a consequence of Equation (30), we obtain from [3], Proposition A.4 that the inequality chain expressed in Equation (6) holds with a = 1 in this example.
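The forward operator itself is straightforward to discretize; the following sketch (our own implementation choices, with $y_0 = 1$ assumed) uses the trapezoidal rule for the inner integral, as described for the experiments in Section 6.

```python
# Discrete forward operator [F(x)](t) = y0 * exp(int_0^t x(tau) dtau) (sketch).
import numpy as np

N = 200
t = np.linspace(1e-6, 1.0, N)    # first grid point close to zero (cf. Section 6)
y0 = 1.0                         # assumed initial population size

def forward(x):
    """Trapezoidal cumulative integral of x on the grid t, then y0 * exp(.)."""
    increments = 0.5 * np.diff(t) * (x[:-1] + x[1:])
    return y0 * np.exp(np.concatenate(([0.0], np.cumsum(increments))))

# Example with the exact solutions used in Section 6: x*(t) = c * t^(-beta).
beta, c = 0.2, 1.0
x_star = c * t**(-beta)          # pole at t = 0 causes the low smoothness
y = forward(x_star)
```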

6. Numerical Case Studies

In this section, numerical evidence for the behavior of the regularized solutions $x_\alpha^\delta$ of the model problem introduced in Section 5 is provided. In Section 6.1, numerical experiments with a focus on the discrepancy principle are conducted using exact solutions of low order Hölder-type smoothness $x^\dagger \in X_p$ with $0 < p < 1/2$, while the focus of the recent paper [10] was on results for p = 1/2 and larger values of p. The essential point of Section 6.2 is the comparison of results obtained by the discrepancy principle with those calculated by a priori choices expressed in Equation (11) of the regularization parameter α.

6.1. Case Studies for Exact Solutions with Low Order Hölder Smoothness p < 1/2

In our first series of experiments, we investigated the interplay between the value $p \in (0, \frac{1}{2})$, the decay rates of the regularization parameter $\alpha_{discr}$ with respect to the noise level δ for different values of p as δ tends toward zero, and the corresponding rates of the error of regularized solutions $x_\alpha^\delta$. Therefore, we turn to exact solutions of the form $x^\dagger(t) = c\,t^{-\beta}\ (0 < t \le 1)$ with $\beta \in (0, 1/2)$. These functions $x^\dagger$ do not belong to the Sobolev space $H^p(0,1)$ with fractional order p if $1/2 - \beta < p$ (see, for example, [26], p. 422). This allows us to study the behavior of the regularized solutions for exact solutions with low order Hölder-type solution smoothness. For the numerical simulations, we therefore assume that $1/2 - \beta$ is at least approximately the smoothness of the exact solution $x^\dagger$.
To confirm our theoretical findings, we solve Equation (4) after discretization, using the trapezoidal rule for the integral and the MATLAB®-function fmincon. We would also like to point out the difficulties associated with the numerical treatment of functions of this particular type. Obviously, the pole occurring at zero is the source of the low smoothness of the exact solution and needs to be captured accordingly. After multiple different approaches, an equidistant discretization with the first discretization point very close to zero proved to be successful. Typically, a discretization level N = 200 is used. To the simulated data $y = F(x^\dagger)$, we added random noise for which we prescribe the relative error $\bar{\delta}$ such that $\|y - y^\delta\| = \bar{\delta}\,\|y\|$; i.e., we have Equation (2) with $\delta = \bar{\delta}\,\|y\|$. To obtain the $X_1$-norm in the penalty, we set $\|\cdot\|_1 = \|\cdot\|_{H^1(0,1)}$ and additionally enforce the boundary condition x(1) = 0. The regularization parameter α in this series of experiments is chosen as $\alpha_{discr} = \alpha(\delta, y^\delta)$ using, with some prescribed multiplier C > 1, the discrepancy principle
$\delta \le \|F(x_{\alpha_{discr}}^\delta) - y^\delta\|_Y \le C\,\delta, \qquad (33)$
which approximates $\alpha_{discr}$ from Equation (5). Unless otherwise noted, C = 1.3 was used. From the case studies in [10], we can conjecture, but have no rigorous proof, that the α-rate of the discrepancy principle does not systematically deviate from the a priori rate expressed in Equation (19), which for a = 1 attains the form
$\alpha(\delta) \sim \delta^{\frac{4}{p+1}}. \qquad (34)$
This α-rate already occurred in Natterer's paper [8] for linear problems, and it also occurs in the case of oversmoothing penalties. In our numerical experiments, we should thus be able to observe the order optimal convergence rate, which for a = 1 reads
$\|x_{\alpha(\delta)}^\delta - x^\dagger\|_X = \mathcal{O}\!\left(\delta^{\frac{p}{p+1}}\right) \quad \text{as } \delta \to 0. \qquad (35)$
This convergence rate was proven for the a priori parameter choice expressed in Equation (19) as well as for the discrepancy principle expressed in Equation (5) in [9] and [3], respectively.
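A hedged sketch of one Tikhonov solve from these experiments (the paper uses MATLAB's fmincon; the optimizer, the penalty enforcement of x(1) = 0, and all concrete choices below are our own assumptions):

```python
# Minimize the discretized functional (4) with an H^1(0,1) penalty (sketch).
import numpy as np
from scipy.optimize import minimize

def tikhonov_solve(alpha, y_delta, t, forward, x_init=None):
    h = t[1] - t[0]

    def functional(x):
        misfit = np.sum((forward(x) - y_delta)**2) * h      # ~ ||F(x)-y^d||^2
        dx = np.diff(x) / h
        penalty = (np.sum(x**2) + np.sum(dx**2)) * h        # ~ ||x||_{H^1}^2
        penalty += 1e6 * x[-1]**2        # crude enforcement of x(1) = 0
        return misfit + alpha * penalty

    x0 = np.zeros_like(t) if x_init is None else x_init
    return minimize(functional, x0, method="L-BFGS-B").x
```

Combined with the sequential discrepancy loop sketched in Section 1, this routine reproduces the overall structure of the experiments.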
As the exact solutions $x^\dagger$ are known, we can compute the regularization errors $\|x_\alpha^\delta - x^\dagger\|_X$. Interpreting these errors as a function of δ justifies a regression for the convergence rates according to
$\|x_\alpha^\delta - x^\dagger\|_X \approx c_x\,\delta^{\kappa_x}. \qquad (36)$
The α-rates are then computed in a similar fashion using
$\alpha = \alpha(\delta) = c_\alpha\,\delta^{\kappa_\alpha}. \qquad (37)$
Both exponents $\kappa_x$ and $\kappa_\alpha$ and the corresponding multipliers $c_x$ and $c_\alpha$, all obtained by a least squares regression based on samples for varying δ, are displayed for different values of β in Table 1. As we know the convergence rate expressed in Equation (35), we can estimate the smoothness p by the formula $p_{est} := \frac{\kappa_x}{1 - \kappa_x}$. The far right column of Table 1 displays the quotient $\frac{4}{p_{est} + 1}$, estimating the exponent in Equation (34), which can be compared with the $\kappa_\alpha$-values in the second column from the right, obtained by regression from a data sample. By comparing these two columns of Table 1, we can state that the asymptotics of $\alpha_{discr}$ as $\delta \to 0$ seems to be approximately the same as for the optimal a priori parameter choice expressed in Equation (34). Such an observation was already made for larger values of p in [10] for the same model problem.
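The rate regressions behind Equations (36) and (37) amount to a least squares fit on a log-log scale; a sketch with hypothetical sample data (our own reconstruction):

```python
# Fit errors ~ c * delta**kappa by linear regression in log-log coordinates.
import numpy as np

def fit_rate(deltas, values):
    """Return (c, kappa) from the least squares fit values ~ c * deltas**kappa."""
    kappa, log_c = np.polyfit(np.log(deltas), np.log(values), deg=1)
    return np.exp(log_c), kappa

rng = np.random.default_rng(0)
deltas = np.logspace(-4, -2, 10)                   # hypothetical noise levels
errors = 0.45 * deltas**0.24 * (1 + 0.05 * rng.standard_normal(10))
c_x, kappa_x = fit_rate(deltas, errors)
p_est = kappa_x / (1.0 - kappa_x)                  # from the rate in (35)
print(f"kappa_x = {kappa_x:.3f}, p_est = {p_est:.3f}, "
      f"4/(p_est + 1) = {4/(p_est + 1):.3f}")
```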
Figure 1 illustrates the results from Table 1 for $x^\dagger(t) = c\,t^{-\beta}\ (0 < t \le 1)$, characterizing with varying $\beta \in (0, 1/2)$ different smoothness levels of the solution. Since we have an oversmoothing penalty for all such β, the $\kappa_\alpha$-values lie between 2 and $2 + 2/a = 4$ (cf. the κ-interval (iii) in Example 1). Additionally, the border lines for $\kappa_\alpha = 2$ and $\kappa_\alpha = 2 + 2/a$ are displayed, taking into account that [6] guarantees convergence of the regularized solutions $x_\alpha^\delta$ to the exact solution $x^\dagger$ as $\delta \to 0$ for a priori choices in the sense of Equation (37) whenever $2 \le \kappa_\alpha < 2 + 2/a$. It becomes evident that the α-rates resulting from the discrepancy principle also lie between those bounds.
Figure 2 and Figure 3 give some more insight into the situation for the special case β = 0.2, which approximately corresponds to the smoothness $x^\dagger \in X_{0.31}$. In Figure 2 (left), the realized errors $\|x_\alpha^\delta - x^\dagger\|_X$ are visualized for a discrete set of noise levels and compared with the associated regression line on a double-logarithmic scale. It becomes evident that the approximation by Hölder rates is highly accurate. The right image of Figure 2 visualizes the behavior of $\delta^2/\alpha_{discr}$ for various noise levels on a logarithmic scale. The tendency that $\delta^2/\alpha_{discr} \to \infty$ as $\delta \to 0$ seems convincing. Figure 3 (left) displays the realized $\alpha_{discr}$-values for this particular situation together with the best approximating regression line according to Equation (37). We see again a very good fit for this type of approximation. The right subfigure shows the exact and regularized solutions for $\delta = 10^{-3.5}$. The excellent fit of the regularized solution confirms our confidence in the numerical implementation, especially considering the problems associated with this type of exact solution.

6.2. A Comparison with Results from A Priori Parameter Choices

It is of interest whether the a posteriori choice via the discrepancy principle or an appropriate a priori choice according to Equation (19) yields better results. In particular, the influence of the constant $c_\alpha$ when using the a priori choice according to Equation (37) remains unclear. To investigate this numerically, we remain in the setting of Section 6.1; i.e., we consider $x^\dagger(t) = c\,t^{-\beta}\ (0 < t \le 1)$ as the exact solution. Figure 4 illuminates this situation for β = 0.2. The error $\|x_\alpha^\delta - x^\dagger\|_X$ is plotted for various constants $c_\alpha$, where we use $\alpha = \alpha(\delta) = c_\alpha\,\delta^{\frac{4}{1+p}}$. The error curve shows a clear minimum, which is connected with smaller values of $\|x_\alpha^\delta - x^\dagger\|_X$ compared with those obtained by exploiting the discrepancy principle with C = 1.4 and C = 1.6. However, it is completely unclear how to find suitable multipliers $c_\alpha$ in practice, whereas the discrepancy principle can always be applied as a robust parameter choice rule in practical applications.
We complete our numerical experiments on a priori choices of the regularization parameter with Table 2 and Figure 5, where we list and illustrate the best regression exponents $\kappa_x$ according to the error norm estimate expressed in Equation (36) for different exponents $\kappa_\alpha$ in the a priori parameter choice expressed in Equation (37). In this case study, we used the exact solution $x^\dagger(t) = 1\ (0 \le t \le 1)$ with the higher smoothness p = 0.5. For the a priori parameter choice expressed in Equation (37) with varying exponents $\kappa_\alpha$, the factor $c_\alpha = 1$ was fixed. The discretization level N = 1000 was used.
As expected, Table 2 indicates that maximal error rates occur if $\kappa_\alpha$ is close to the optimal value $\frac{4}{1+p}$. These rates also correspond to the order optimal rates according to Equation (20). For smaller exponents $\kappa_\alpha$, the error rates fall, and for large exponents $\kappa_\alpha \ge 2 + 2/a = 4$, the convergence seems to degenerate. This is visualized in Figure 5: for $\kappa_\alpha = 3.5$, convergence still takes place, whereas for $\kappa_\alpha = 5.5$ convergence cannot be observed anymore.
Remark 6.
As an alternative a posteriori approach for choosing the regularization parameter α, one could also consider the balancing (Lepskiĭ) principle (cf., e.g., [27,28]). In [29], this principle is adapted to the Hilbert scale setting, but not with respect to oversmoothing penalties. In future work, we may discuss this missing facet and perform numerical experiments for the balancing principle in the case of oversmoothing penalties.

Author Contributions

Formal analysis, B.H.; Investigation, B.H. and C.H.; Software, C.H.; Supervision, B.H.; Visualization, C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Deutsche Forschungsgemeinschaft (grant HO 1454/12-1).

Acknowledgments

The authors thank Daniel Gerth (TU Chemnitz) for fruitful discussions and his kind support during the preparation of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hofmann, B.; Scherzer, O. Factors influencing the ill-posedness of nonlinear problems. Inverse Probl. 1994, 10, 1277–1297. [Google Scholar] [CrossRef]
  2. Morozov, V.A. Methods for Solving Incorrectly Posed Problems; Springer: New York, NY, USA, 1984. [Google Scholar]
  3. Hofmann, B.; Mathé, P. Tikhonov regularization with oversmoothing penalty for non-linear ill-posed problems in Hilbert scales. Inverse Probl. 2018, 34, 015007. [Google Scholar] [CrossRef] [Green Version]
  4. Anzengruber, S.W.; Hofmann, B.; Mathé, P. Regularization properties of the sequential discrepancy principle for Tikhonov regularization in Banach spaces. Appl. Anal. 2014, 93, 1382–1400. [Google Scholar] [CrossRef]
  5. Anzengruber, S.W.; Ramlau, R. Morozov’s discrepancy principle for Tikhonov-type functionals with nonlinear operators. Inverse Probl. 2010, 26, 025001. [Google Scholar] [CrossRef] [Green Version]
  6. Hofmann, B.; Plato, R. Convergence results and low order rates for nonlinear Tikhonov regularization with oversmoothing penalty term. Electron. Trans. Numer. Anal. 2020, 93. [Google Scholar]
  7. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Volume 375 of Mathematics and Its Applications; Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1996. [Google Scholar]
  8. Natterer, F. Error bounds for Tikhonov regularization in Hilbert scales. Appl. Anal. 1984, 18, 29–37. [Google Scholar] [CrossRef]
  9. Hofmann, B.; Mathé, P. A priori parameter choice in Tikhonov regularization with oversmoothing penalty for non-linear ill-posed problems. In Springer Proceedings in Mathematics & Statistics; Cheng, J., Lu, S., Yamamoto, M., Eds.; Springer: Singapore, Singapore, 2020; pp. 169–176. [Google Scholar]
  10. Gerth, D.; Hofmann, B.; Hofmann, C. Case studies and a pitfall for nonlinear variational regularization under conditional stability. In Springer Proceedings in Mathematics & Statistics; Cheng, J., Lu, S., Yamamoto, M., Eds.; Springer: Singapore, Singapore, 2020; pp. 177–203. [Google Scholar]
  11. Schuster, T.; Kaltenbacher, B.; Hofmann, B.; Kazimierski, K.S. Regularization Methods in Banach Spaces; Volume 10 of Radon Series on Computational and Applied Mathematics; Walter de Gruyter: Berlin, Germany; Boston, MA, USA, 2012. [Google Scholar]
  12. Scherzer, O.; Grasmair, M.; Grossauer, H.; Haltmeier, M.; Lenzen, F. Variational Methods in Imaging; Volume 167 of Applied Mathematical Sciences; Springer: New York, NY, USA, 2009. [Google Scholar]
  13. Tikhonov, A.N.; Leonov, A.S.; Yagola, A.G. Nonlinear Ill-Posed Problems; Chapman & Hall: London, UK; New York, NY, USA, 1998; Volume 1. [Google Scholar]
  14. Cheng, J.; Yamamoto, M. One new strategy for a priori choice of regularizing parameters in Tikhonov’s regularization. Inverse Probl. 2000, 16, L31–L38. [Google Scholar] [CrossRef]
  15. Egger, H.; Hofmann, B. Tikhonov regularization in Hilbert scales under conditional stability assumptions. Inverse Probl. 2018, 34, 115015. [Google Scholar] [CrossRef] [Green Version]
  16. Neubauer, A. Tikhonov regularization of nonlinear ill-posed problems in Hilbert scales. Appl. Anal. 1992, 46, 59–72. [Google Scholar] [CrossRef]
  17. Tautenhahn, U. On a general regularization scheme for nonlinear ill-posed problems II: Regularization in Hilbert scales. Inverse Probl. 1998, 14, 1607–1616. [Google Scholar] [CrossRef]
  18. Adams, R.A.; Fournier, J.F.J. Sobolev Spaces; Elsevier/Academic Press: Amsterdam, The Netherlands, 2003. [Google Scholar]
  19. Neubauer, A. When do Sobolev spaces form a Hilbert scale? Proc. Am. Math. Soc. 1988, 103, 557–562. [Google Scholar] [CrossRef]
  20. Gorenflo, R.; Yamamoto, M. Operator-theoretic treatment of linear Abel integral equations of first kind. Jpn. J. Ind. Appl. Math. 1999, 16, 137–161. [Google Scholar] [CrossRef]
  21. Gorenflo, R.; Luchko, Y.; Yamamoto, M. Time-fractional diffusion equation in the fractional Sobolev spaces. Fract. Calc. Appl. Anal. 2015, 18, 799–820. [Google Scholar] [CrossRef]
  22. Hofmann, B.; Kaltenbacher, B.; Resmerita, E. Lavrentiev’s regularization method in Hilbert spaces revisited. Inverse Probl. Imaging 2016, 10, 741–764. [Google Scholar] [CrossRef] [Green Version]
  23. Plato, R.; Hofmann, B.; Mathé, P. Optimal rates for Lavrentiev regularization with adjoint source conditions. Math. Comput. 2018, 87, 785–801. [Google Scholar] [CrossRef]
  24. Groetsch, C.W. Inverse Problems in the Mathematical Sciences. In Vieweg Mathematics for Scientists and Engineers; Vieweg+Teubner Verlag: Wiesbaden, Germany, 1993. [Google Scholar]
  25. Hofmann, B. A local stability analysis of nonlinear inverse problems. In Inverse Problems in Engineering—Theory and Practice; The American Society of Mechanical Engineers: New York, NY, USA, 1998; pp. 313–320. [Google Scholar]
  26. Fleischer, G.; Hofmann, B. On inversion rates for the autoconvolution equation. Inverse Probl. 1996, 12, 419–435. [Google Scholar] [CrossRef]
  27. Mathé, P. The Lepskiĭ principle revisited. Inverse Probl. 2006, 22, L11–L15. [Google Scholar] [CrossRef]
  28. Lu, S.; Pereverzev, S.V.; Ramlau, R. An analysis of Tikhonov regularization for nonlinear ill-posed problems under a general smoothness assumption. Inverse Probl. 2007, 23, 217–230. [Google Scholar] [CrossRef]
  29. Pricop-Jeckstadt, M. Nonlinear Tikhonov regularization in Hilbert scales with balancing principle tuning parameter in statistical inverse problems. Inverse Probl. Sci. Eng. 2019, 27, 205–236. [Google Scholar] [CrossRef]
Figure 1. Regression lines for decay rates of $\alpha_{discr}$ for $\delta \to 0$ and different values of β from Table 1 on a double-logarithmic scale.
Figure 2. Approximation error $\|x_\alpha^\delta - x^\dagger\|_X$ in red and approximate rate in blue/dashed (left) and $\delta^2/\alpha_{discr}$ (right), both depending on various δ on a log-log scale.
Figure 3. $\alpha_{discr}(\delta, y^\delta)$ in red for various δ and best approximating regression line in blue/dashed on a log-log scale (left). Regularized solution (red) compared with the exact solution (blue) for $\delta = 10^{-3.5}$ (right).
Figure 4. Regularization error $\|x_\alpha^\delta - x^\dagger\|_X$ using the a priori rate expressed in Equation (19), implemented in the sense of Equation (37), depending on various constants $c_\alpha$. Comparison with the error occurring for the discrepancy principle with C = 1.4 (orange) or C = 1.6 (red) and with noise level $\delta = 10^{-2.5}$.
Figure 5. Regularization error $\|x_\alpha^\delta - x^\dagger\|_X$ and regression line for different noise levels δ on a log-log scale. $x^\dagger(t) = 1$, a priori parameter choice according to Equation (37) with $\kappa_\alpha = 3.5$ (left) and $\kappa_\alpha = 5.5$ (right).
Table 1. Numerically computed results for the discrepancy principle expressed in Equation (33) and $x^\dagger(t) = c\,t^{-\beta}\ (0 < t \le 1)$, yielding by regression the multipliers and exponents of the regularization error expressed in Equation (36) and the α-rates expressed in Equation (37) for various values 0 < β < 0.5.

| β    | c_x    | κ_x    | p_est = κ_x/(1 − κ_x) | c_α      | κ_α    | 4/(p_est + 1) |
|------|--------|--------|-----------------------|----------|--------|---------------|
| 0.1  | 0.2014 | 0.2497 | 0.3328                | 111.5089 | 2.4738 | 3.0011        |
| 0.2  | 0.4504 | 0.2383 | 0.3129                | 43.9993  | 2.7924 | 3.0467        |
| 0.3  | 0.7428 | 0.1970 | 0.2453                | 19.2617  | 3.1172 | 3.2120        |
| 0.4  | 1.3294 | 0.1831 | 0.2241                | 2.8548   | 3.1619 | 3.2677        |
| 0.45 | 1.5829 | 0.1422 | 0.1657                | 5.1283   | 3.6665 | 3.4314        |
Table 2. Error rates $\kappa_x$ by regression for $x^\dagger \equiv 1$ and varying a priori exponents $\kappa_\alpha$.

| a priori α-rate κ_α  | 2      | 2.5    | 2.66   | 3      | 3.5    |
|----------------------|--------|--------|--------|--------|--------|
| convergence rate κ_x | 0.2763 | 0.3304 | 0.3479 | 0.3686 | 0.3114 |
