Abstract
The convergence analysis of the Landweber iteration for solving inverse problems in Banach spaces via Hölder stability estimates was studied by de Hoop et al. (Inverse Probl 28(4):045001, 2012) in the presence of unperturbed data. For real-life problems, it is important to carry out the convergence analysis in the presence of perturbed data. In this paper, we show that the convergence of the Landweber iteration can also be established via Hölder stability estimates when the data are perturbed. Furthermore, as a by-product, we obtain convergence rates for the Landweber iteration without any additional smoothness condition. This demonstrates the advantage of Hölder stability estimates over a tangential cone condition in the theory of inverse problems.
1 Introduction and main result
1.1 Background and main problem
Let \(F:D(F)\subset B_1\rightarrow B_2\) be a nonlinear operator between Banach spaces \(B_1\) and \(B_2,\) where D(F) denotes the domain of F. In this paper, we are concerned with the solution of the following equation:
$$\begin{aligned} F(u) = v. \end{aligned}$$(1.1)
In practical applications, the exact data v is usually not available. Instead, some perturbed data \(v^{\delta }\) fulfilling \(\Vert v^{\delta }-v\Vert \le \delta \) is available, where \(\delta >0\) is the noise level. Moreover, (1.1) is ill-posed, since the solution does not depend continuously on the data. We assume that (1.1) has a solution, denoted by \(u^{\dagger }.\) To approximately solve (1.1), a number of regularization methods are known in Hilbert as well as Banach spaces (cf. [2,3,4,5,6, 8, 10, 17]). A well-known and classical regularization method is the Landweber iteration [7, 17]:
$$\begin{aligned} u_{r+1} = J_{q}^*\big (J_p(u_r) - \mu F'(u_r)^* j_p(F(u_r) - v)\big ),\quad r=0, 1, \ldots . \end{aligned}$$(1.2)
Here \(F'(u_r)\) is the Fréchet derivative of F at \(u_r,\) \(F'(u_r)^*\) is the adjoint of \(F'(u_r),\) \(u_0\) is an initial guess for the exact solution \(u^{\dagger },\) \(p>1,\) p and q are conjugate exponents, and \(J_p : B_1 \rightarrow 2^{B_1^*}\) defined as \(J_p(u) := \{ u^* \in B_1^*\ | \ \langle u, u^*\rangle = \Vert u\Vert ^p,\Vert u^*\Vert = \Vert u\Vert ^{p-1}\}\) is the duality mapping of \(B_1\) with the gauge function \(s \rightarrow s^{p-1}.\) For the gauge function \(s\rightarrow s^{q-1},\) the corresponding duality mapping \(J_{q}^*:B_1^*\rightarrow B_1\) is the inverse of \(J_{p}.\) The convergence of (1.2) has been studied extensively by means of a tangential cone condition [4] in Hilbert as well as Banach spaces [10, 17]. In addition, convergence rates for this method have been obtained under source conditions and variational inequalities [10, 17]. Recently, de Hoop et al. [7] studied the convergence of (1.2) by utilizing the following Hölder-type stability estimate:
where \(\epsilon \in [0, 1],\) \(p>1,\) \({\mathfrak {A}}>0,\) and \(\Delta _p(u, {\bar{u}})\) is the Bregman distance of \({\bar{u}}\) from u, given by
$$\begin{aligned} \Delta _p(u, {\bar{u}}) = \frac{1}{p}\Vert {\bar{u}}\Vert ^p - \frac{1}{p}\Vert u\Vert ^p - \langle J_p(u), {\bar{u}} - u\rangle . \end{aligned}$$
Here, we assume that \({\mathcal {B}}_{\rho }(u^{\dagger }):=\{{\bar{u}}\in B_1:\Delta _p({\bar{u}}, u^{\dagger })\le \rho \}\subset D(F)\) for some \(\rho >0.\) However, the convergence analysis of the perturbed version of (1.2), i.e.,
$$\begin{aligned} u_{r+1}^{\delta } = J_{q}^*\big (J_p(u_{r}^{\delta }) - \mu F'(u_{r}^{\delta })^* j_p(F(u_{r}^{\delta }) - v^{\delta })\big ),\quad r=0, 1, \ldots , \end{aligned}$$(1.4)
has not yet been studied in the literature via the stability estimates (1.3). In this paper, we fill this important gap. The importance of studying the convergence of an iterative method via stability estimates is that they yield convergence rates without requiring any additional smoothness condition, in contrast to the standard analysis. We refer to [11,12,13,14,15] for the convergence analysis of several other regularization methods through stability estimates. Very recently, Jin [9] obtained convergence rates for the method (1.4) for linear ill-posed problems in Banach spaces with perturbed data.
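To fix ideas, here is a minimal numerical sketch of the perturbed iteration (1.4) combined with a discrepancy stopping rule of the form (1.5), in the Hilbert-space setting \(p=q=2,\) where the duality mappings reduce to the identity. The operator F below is a toy example of our own choosing, not one from the paper, and all parameter values are illustrative only:

```python
import numpy as np

# Toy nonlinear operator F: R^2 -> R^2 (illustrative only).
A = np.array([[2.0, 1.0], [1.0, 2.0]])

def F(u):
    return A @ u + 0.1 * u**3            # mildly nonlinear perturbation of a linear map

def F_prime(u):
    return A + 0.3 * np.diag(u**2)       # Frechet derivative (Jacobian) of F at u

u_true = np.array([1.0, 0.5])            # exact solution u^dagger
delta = 1e-3                             # noise level
v_delta = F(u_true) + delta * np.array([1.0, 0.0])   # perturbed data, ||v^delta - v|| = delta

tau, mu = 2.0, 0.05                      # discrepancy parameter tau > 1, step size mu
u = np.array([0.8, 0.3])                 # initial guess u_0
for r in range(10000):
    residual = F(u) - v_delta
    if np.linalg.norm(residual) <= tau * delta:      # discrepancy stopping rule
        break
    u = u - mu * F_prime(u).T @ residual             # Landweber step with J_p = j_p = Id
```

With these values the residual drops below \(\tau \delta \) after a moderate number of steps, and the final iterate lies within a few noise levels of \(u^{\dagger },\) consistent with the \(O(\delta )\) norm rate implied by a bound of the type in assertion (c) of Theorem 1.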
To this end, we recall some basic definitions and known results related to our work (see [11, 17] for more details). For \(u\in B_1\) and \(\zeta \in B_1^*,\) we write \(\langle \zeta , u\rangle =\zeta (u)\) for the duality pairing. In order to ensure the well-definedness of the method (1.4), we require the following definitions of the modulus of convexity \(\delta _{B_1}(\cdot )\) and the modulus of smoothness \(\rho _{B_1}(\cdot )\):
$$\begin{aligned} \delta _{B_1}(\epsilon )&= \inf \big \{1 - \tfrac{1}{2}\Vert u + {\bar{u}}\Vert \ : \ u, {\bar{u}} \in {\mathbb {S}},\ \Vert u - {\bar{u}}\Vert \ge \epsilon \big \},\quad \epsilon \in [0, 2],\\ \rho _{B_1}(\tau )&= \sup \big \{\tfrac{1}{2}\big (\Vert u + \tau {\bar{u}}\Vert + \Vert u - \tau {\bar{u}}\Vert \big ) - 1 \ : \ u, {\bar{u}} \in {\mathbb {S}}\big \},\quad \tau \ge 0. \end{aligned}$$
Here \({\mathbb {S}}\) denotes the unit sphere in \(B_1,\) i.e., the boundary of the closed unit ball. We say that \(B_1\) is p-convex if \(\delta _{B_1}(\epsilon ) \ge {\mathcal {C}}_1 \epsilon ^p\) for all \(\epsilon \in [0, 2],\) where \(p\ge 2\) and \({\mathcal {C}}_1>0.\) Further, we say that \(B_1\) is q-smooth if \(\rho _{B_1}(\tau ) \le {\mathcal {C}}_2 \tau ^q\) for all \(\tau \ge 0,\) where \(q>1\) and \({\mathcal {C}}_2>0.\) Also, \(B_1\) is uniformly convex if \(\delta _{B_1}(\epsilon )>0\) for every \(\epsilon \in (0, 2],\) and it is uniformly smooth if \(\lim _{\tau \rightarrow 0} \rho _{B_1}(\tau )\tau ^{-1}=0.\) We note that \(B_1\) is uniformly convex if and only if \(B_1^*\) is uniformly smooth. Moreover, every uniformly convex or uniformly smooth Banach space is reflexive. We emphasize that uniform smoothness of \(B_1\) guarantees that \(J_p(u)\) is single valued for all \(u\in B_1,\) so that the method (1.4) is well-defined.
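For orientation, we record the standard Hilbert-space instance of these notions (a well-known example, not stated in the paper): every Hilbert space H is 2-convex and 2-smooth, since, by the parallelogram law,

```latex
% Moduli of a Hilbert space H:
\delta_{H}(\epsilon) = 1 - \sqrt{1 - \epsilon^{2}/4} \;\ge\; \tfrac{1}{8}\,\epsilon^{2},
\qquad
\rho_{H}(\tau) = \sqrt{1 + \tau^{2}} - 1 \;\le\; \tfrac{1}{2}\,\tau^{2}.
% Moreover, for p = 2 the duality mapping is the identity, J_2 = I, and the
% Bregman distance reduces to \Delta_2(u, \bar{u}) = \tfrac{1}{2}\, \|u - \bar{u}\|^{2}.
```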
Finally, we recall a known result that will be utilized in our work.
Lemma 1
([7, 16]). Let \(B_1\) be a uniformly convex and uniformly smooth Banach space. Then, for all \(u, {\bar{u}} \in B_1\) and \(u^*, {\bar{u}}^*\in B_1^*,\) we have:
(1) \(\Delta _p(u, {\bar{u}}) \ge 0\) and \(\Delta _p(u, {\bar{u}}) = 0 \iff u = {\bar{u}}.\)
(2) If \(B_1\) is p-convex, then \(\Delta _p(u, {\bar{u}}) \ge {\mathcal {C}}_3 p^{-1}\Vert u-{\bar{u}}\Vert ^p,\) where \({\mathcal {C}}_3 > 0\) is a constant.
(3) If \(B_1^*\) is q-smooth, then \(\Delta _q(u^*, {\bar{u}}^*) \le {\mathcal {C}}_4 q^{-1}\Vert u^*-{\bar{u}}^*\Vert ^q,\) where \({\mathcal {C}}_4 > 0\) is a constant.
(4) The following are equivalent: (a) \(\lim _{r\rightarrow \infty }\Vert u_r-u\Vert =0.\) (b) \(\lim _{r\rightarrow \infty }\Delta _p(u_r, u)=0.\) (c) \(\lim _{r\rightarrow \infty }\Vert u_r\Vert =\Vert u\Vert \) and \(\lim _{r\rightarrow \infty }\langle J_p(u_r), u\rangle =\langle J_p(u), u\rangle .\)
(5) \(\Delta _p(u, {\tilde{u}}) =p^{-1}\Vert {\tilde{u}}\Vert ^p+q^{-1}\Vert u\Vert ^{p}-\langle J_p(u), {\tilde{u}}\rangle = p^{-1}\Vert {\tilde{u}}\Vert ^p-p^{-1}\Vert u\Vert ^{p}-\langle J_p(u), {\tilde{u}}\rangle +\Vert u\Vert ^p.\)
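The lemma can be checked numerically in the simplest uniformly convex and uniformly smooth setting, the Euclidean space \(({\mathbb {R}}^3, \Vert \cdot \Vert _2),\) where the duality mapping with gauge \(s\rightarrow s^{p-1}\) is \(J_p(u)=\Vert u\Vert ^{p-2}u.\) The following sketch (our own illustration, not from the paper) evaluates the Bregman distance through identity (5):

```python
import numpy as np

p = 3.0
q = p / (p - 1.0)                        # conjugate exponent, 1/p + 1/q = 1

def J_p(u):
    # Duality mapping of (R^n, ||.||_2) with gauge s -> s^{p-1}
    n = np.linalg.norm(u)
    return n**(p - 2) * u if n > 0 else np.zeros_like(u)

def bregman(u, u_bar):
    # Identity (5) of Lemma 1: Delta_p(u, u_bar)
    return (np.linalg.norm(u_bar)**p / p
            + np.linalg.norm(u)**p / q
            - J_p(u) @ u_bar)

rng = np.random.default_rng(0)
u, u_bar = rng.normal(size=3), rng.normal(size=3)
# Property (1) of Lemma 1: bregman(u, u_bar) >= 0 and bregman(u, u) == 0.
```

One can also verify the defining properties of the duality mapping, \(\langle u, J_p(u)\rangle = \Vert u\Vert ^p\) and \(\Vert J_p(u)\Vert = \Vert u\Vert ^{p-1},\) directly on random vectors.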
1.2 Main result
In order to formulate our main result, we discuss certain assumptions. With the gauge function \(s\rightarrow s^{p-1},\) let \(j_p\) denote a single valued selection of the duality mapping. For the method (1.4), we employ the following well-known discrepancy criterion:
$$\begin{aligned} \Vert F(u_{r_*}^{\delta }) - v^{\delta }\Vert \le \tau \delta < \Vert F(u_{r}^{\delta }) - v^{\delta }\Vert ,\quad 0\le r< r_*, \end{aligned}$$(1.5)
where \(\tau >1\) satisfies
and \(r_*=r_*(\delta , v^{\delta })\) is the stopping index. The criterion (1.5) determines \(r_*\) and hence \(u_{r_*}^{\delta },\) which is the required approximate solution. Our main result is as follows:
Theorem 1
Let \(B_1\) be p-convex and q-smooth with conjugate exponents \(1<p,q<\infty \) and let \(B_2\) be an arbitrary Banach space. Moreover, we assume:
(1) For all \(u, {\bar{u}}\in {\mathcal {B}}_{\rho }(u^{\dagger }),\) it holds that
$$\begin{aligned} \Vert F'(u)-F'({\bar{u}})\Vert \le {\mathcal {C}}_5\Vert u-{\bar{u}}\Vert ,\quad \text {where}\ {\mathcal {C}}_5>0. \end{aligned}$$(1.7)
(2) For all \(u\in {\mathcal {B}}_{\rho }(u^{\dagger }),\) it holds that \(\Vert F'(u)\Vert \le {\mathcal {C}}_6,\) where \({\mathcal {C}}_6>0.\)
(3) \(u^{\dagger }\) is a solution of (1.1) such that \( \Delta _p(u_0, u^{\dagger })\le \rho \) for
$$\begin{aligned} \rho ^{\frac{1}{p}}=2^{-p\epsilon }{\mathcal {C}}_6^{-1} ({\mathcal {C}}_3{\mathfrak {A}}^2)^{-\frac{1}{\epsilon }} (p^{-1}{\mathcal {C}}_3)^{(1+\frac{2}{\epsilon })\frac{1}{p}}. \end{aligned}$$(1.8)
(4) \(\mu \) in (1.4) is such that
$$\begin{aligned} \mu <\bigg (\frac{q}{2{\mathcal {C}}_4{\mathcal {C}}_6}\bigg )^{\frac{1}{q-1}}\ \text {and}\ 4{\mathcal {C}}_4 {\mathcal {C}}_6^qq^{-1} \mu ^{q-1} <1. \end{aligned}$$(1.9)
(5) The Hölder stability estimate (1.3) and the discrepancy criterion (1.5) hold with \(\tau \) as in (1.6).
Further, assume that
Then, we have:
(a) For \(0\le r<r_*,\) \( \Delta _p(u_{r+1}^{\delta }, u^{\dagger })\le \Delta _p(u_{r}^{\delta }, u^{\dagger }).\)
(b) The stopping index \(r_*\) is finite.
(c) Moreover, for a given \(\delta >0,\) if \(\rho >0\) is such that \(\rho \le {\mathcal {C}}_7\delta ^p\) for some \({\mathcal {C}}_7>0,\) then the following convergence rate can be derived:
$$\begin{aligned} \Delta _p(u_{r_*}^{\delta }, u^{\dagger })\le {\mathcal {C}}_8\delta ^p, \end{aligned}$$
where \({\mathcal {C}}_8={\mathcal {C}}_7-r_*\frac{\mu {\mathfrak {R}}}{2}.\)
Proof
We engage the fundamental theorem of the Fréchet derivative along with (1.7) to deduce that
It is known that \(\Delta _p(u_0, u^{\dagger })\le \rho .\) Suppose, as the induction hypothesis, that
We claim that \(\Delta _p(u_{r+1}^{\delta }, u^{\dagger })\le \rho .\) Using the induction hypothesis, the mean value inequality, assumption (2) of Theorem 1, and (2) of Lemma 1, we obtain
where \(s=0, 1, \ldots , r.\) Next, by taking \({\bar{u}}^*=J_p(u_{r+1}^{\delta })\) and \(u^*=J_p(u_{r}^{\delta })\) in (3) of Lemma 1, we derive that
After applying (5) of Lemma 1 and the result that \(J_p^{-1}(u^*)=J_{q}^*(u^*)\) along with the definition of the duality mapping, we note that
Plugging (1.13) in the last estimate we deduce that
Again we note from (5) of Lemma 1 that
By combining (1.14) and (1.15), we note that
We incorporate (1.4) and assumption (2) of Theorem 1 in the last inequality to derive that
where \({\mathcal {A}}_r^{\delta }=F(u_{r}^{\delta })-v^{\delta }\) and the last inequality holds due to (1.11) and the definition of the duality mapping. We plug the Hölder stability estimate (1.3) and (2) of Lemma 1 in (1.16) to further write it as
Inserting (1.12) in the last estimate, we derive that
Using (1.9) and the estimate
in (1.17), we get
By incorporating the discrepancy principle (1.5) and (1.9) in (1.19), we obtain
where \(r+1\le r_{*}.\) This, together with the choice of \(\tau \) mentioned in (1.6) and the induction hypothesis, guarantees that \( \Delta _p(u_{r+1}^{\delta }, u^{\dagger })<\rho .\) Therefore, our claim holds, which completes the proof of assertion (a).
Next, we show that the stopping index \(r_*<\infty .\) For this, we incorporate (1.9) and (1.20) to write
Summing this from \(r=0\) to \(r_*-1\), we deduce that
This, the choice of \(u_0,\) and (1.5) yield
Since \(\frac{2\rho }{\mu {\mathfrak {R}}}<\infty \) and \(\tau \delta >0,\) the stopping index \(r_*\) must be finite. This proves assertion (b).
Finally, we deduce the convergence rates for the method (1.4). It follows from the Hölder stability estimate (1.3) that
This with a slightly modified version of (1.18) (i.e., \((\alpha _1+\alpha _2)^p\le 2^{p}(\alpha _1^p+\alpha _2^p)\) for \(\alpha _1, \alpha _2\ge 0, \ p\ge 0\)) implies that
where \(p_1=\frac{p(1+\epsilon )}{2}.\) Inserting the last estimate in (1.20), we obtain
With some minor rearrangements, (1.21) leads to
Consequently, by the induction hypothesis, we derive that
From the last inequality, we can deduce the convergence rates in assertion (c), which completes the proof. \(\square \)
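As a direct consequence of assertion (c) and (2) of Lemma 1 (immediate, though not stated explicitly above), the rate can also be expressed in the norm of \(B_1\):

```latex
\frac{{\mathcal {C}}_3}{p}\,\Vert u_{r_*}^{\delta } - u^{\dagger }\Vert ^{p}
\;\le\; \Delta _p(u_{r_*}^{\delta }, u^{\dagger })
\;\le\; {\mathcal {C}}_8\,\delta ^{p}
\quad\Longrightarrow\quad
\Vert u_{r_*}^{\delta } - u^{\dagger }\Vert
\;\le\; \Big(\frac{p\,{\mathcal {C}}_8}{{\mathcal {C}}_3}\Big)^{1/p}\,\delta .
```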
Remark 1
The assumptions considered in our work are standard and similar to those in [7]. Consequently, our results are applicable to a severely ill-posed inverse conductivity problem arising in electrical impedance tomography (EIT) [1]. de Hoop et al. [7] showed that this inverse conductivity problem satisfies a Hölder stability estimate of the form (1.3) with \(p=2\) and \(\epsilon =1.\) In addition, it is known that the operator associated with this inverse conductivity problem satisfies assumptions (1) and (2) of Theorem 1. Therefore, by carefully choosing the remaining parameters, such as \(\tau \) and \(\mu ,\) one can apply our results to the inverse conductivity problem.
2 Conclusion and future scope
In this paper, we have shown that convergence rates for the Landweber iteration can be obtained through stability estimates in the presence of perturbed data, without any additional smoothness assumption. This fills an important gap in the literature: the convergence analysis of the Landweber method via stability estimates is now available for perturbed as well as unperturbed data. An important future task in this direction is the derivation of optimal convergence rates; the optimality results discussed in [4, 17] can serve as a reference.
References
Alessandrini, G., Vessella, S.: Lipschitz stability for the inverse conductivity problem. Adv. Appl. Math. 35(2), 207–241 (2005)
Argyros, I.K., George, S.: Unified convergence analysis of frozen Newton-like methods under generalized conditions. J. Comput. Appl. Math. 347, 95–107 (2019)
Bakushinsky, A.B., Kokurin, M.: Iterative Methods for Approximate Solution of Inverse Problems. Mathematics and its Applications (New York), 577. Springer, Dordrecht (2004)
Engl, H.W., Hanke, M., Neubauer, A.: Regularization of Inverse Problems. Springer, Berlin (2000)
George, S., Sabari, M.: Numerical approximation of a Tikhonov type regularizer by a discretized frozen steepest descent method. J. Comput. Appl. Math. 330, 488–498 (2018)
Hanke, M., Neubauer, A., Scherzer, O.: A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer. Math. 72, 21–37 (1995)
de Hoop, M.V., Qiu, L., Scherzer, O.: Local analysis of inverse problems: Hölder stability and iterative reconstruction. Inverse Problems 28(4), 045001 (2012)
Jin, Q.: Inexact Newton–Landweber iteration for solving nonlinear inverse problems in Banach spaces. Inverse Problems 28, 065002 (2012)
Jin, Q.: Convergence rate of a dual gradient method for constrained linear ill-posed problems. Numer. Math. 151, 841–877 (2022)
Kaltenbacher, B., Neubauer, A., Scherzer, O.: Iterative Regularization Methods for Nonlinear Ill-posed Problems. De Gruyter, Berlin (2008)
Mittal, G., Giri, A.K.: Novel multi-level projected iteration to solve inverse problems with nearly optimal accuracy. J. Optim. Theory Appl. 194, 643–680 (2022)
Mittal, G., Giri, A.K.: A novel two-point gradient method for regularization of inverse problems in Banach spaces. Appl. Anal. 101(18), 6596–6622 (2022)
Mittal, G., Giri, A.K.: On variational regularization: finite dimension and Hölder stability. J. Inverse Ill-Posed Probl. 29(2), 283–294 (2021)
Mittal, G., Giri, A.K.: Convergence rates for iteratively regularized Gauss–Newton method subject to stability constraints. J. Comput. Appl. Math. 400, 113744 (2022)
Mittal, G., Giri, A.K.: Nonstationary iterated Tikhonov regularization: convergence analysis via Hölder stability. Inverse Problems 38(12), 125008 (2022)
Schöpfer, F., Louis, A.K., Schuster, T.: Nonlinear iterative methods for linear ill-posed problems in Banach spaces. Inverse Problems 22, 311–329 (2006)
Schuster, T., Kaltenbacher, B., Hofmann, B., Kazimierski, K.S.: Regularization Methods in Banach Spaces. De Gruyter, Berlin (2012)
Mittal, G., Giri, A.K. Improved local convergence analysis of the Landweber iteration in Banach spaces. Arch. Math. 120, 195–202 (2023). https://doi.org/10.1007/s00013-022-01807-0