Abstract
The Levenberg–Marquardt algorithm is one of the most popular algorithms for solving nonlinear least-squares problems. Across its various modifications, the algorithm enjoys global convergence, a competitive worst-case iteration complexity rate, and a guaranteed rate of local convergence for both zero and nonzero small residual problems, under suitable assumptions. We introduce a novel Levenberg–Marquardt method that matches, simultaneously, the state of the art in all of these convergence properties with a single seamless algorithm. Numerical experiments confirm the theoretical behavior of our proposed algorithm.
1 Introduction
In this paper, we consider solving general nonlinear least-squares problems where one may or may not have a solution with a zero residual. Problems of this nature arise in several important practical contexts, including inverse problems for ill-posed nonlinear continuous systems [1] with applications such as data assimilation [2]. Usually, the resulting least-squares problems do not necessarily have a zero residual at any point, although the minimum residual may be small.
Recall that the Gauss–Newton method is an iterative procedure for solving nonlinear least-squares problems by iteratively solving a linearized least-squares subproblem. This subproblem may not be well-posed in the case of rank deficiency of the residual Jacobian function. Furthermore, the Gauss–Newton method may not be globally convergent. The Levenberg–Marquardt (LM) method [3,4,5] was developed to deal with the rank deficiency of the Jacobian matrix and also to provide a globalization strategy for Gauss–Newton. In this paper, we will present and analyze the global and local convergence results of a novel LM method for solving general nonlinear least-squares problems that carefully balances the opposing objectives of ensuring global convergence and stabilizing a fast local convergence regime.
In general, the goals of encouraging global and local convergence compete against each other. Namely, the regularization parameter appearing in the subproblem should be allowed to become arbitrarily large in order to encourage global convergence, by ensuring the local accuracy of the linearized subproblem, but the parameter must approach zero in order to function as a stabilizing regularization that encourages fast local convergence. In the original presentation of the LM method in [3, 4], the regularization parameter is not permitted to go to zero, and only global convergence is considered.
The strongest results for local convergence of LM are given in a series of papers beginning with [6] (followed by, e.g., [7] and [8]; see also [9]), wherein it is assumed that the residual function is zero at the solution. For global convergence, the algorithm considered there is a two-phase one: a quadratic decrease in the residual is tested at each step, and steps failing the test are globalized by a line-search procedure.
In the case of nonzero residuals, it has been shown that an LM method converges locally at a linear rate if the norm of the residual is sufficiently small and the regularization parameter goes to zero [10]. Our proof of linear convergence is simpler than that in [10]. Worst-case iteration complexity bounds for LM methods applied to nonlinear least-squares problems can be found in [13,14,15]. We show that our proposed algorithm has a complexity bound that matches these results, up to a logarithmic factor.
In this paper, we propose a method that successfully balances the multiple objectives in theoretical convergence properties, including (a) global convergence for exact and inexact solutions of the subproblem, (b) worst-case iteration complexity, and (c) local convergence for both zero and nonzero residual problems. Table 1 summarizes the literature on this class of methods; our proposed algorithm uniquely matches the state of the art in all of these properties.
The outline of this paper is as follows. In Sect. 2, we present the proposed LM algorithm and address the inexact solution of the linearized least-squares subproblem. Section 3 contains a worst-case complexity and global convergence analysis of the proposed method. In Sect. 4, we derive the local convergence theory. In Sect. 5, preliminary numerical experiments are presented to demonstrate the behavior of our algorithm. Finally, we conclude in Sect. 6.
2 A Novel Levenberg–Marquardt Algorithm
In this paper, we consider the following nonlinear least-squares problem \( \min _{x \in {\mathbb {R}}^n} f(x) := \frac{1}{2}\Vert F(x)\Vert ^2, \qquad \mathrm{(1)} \)
where \(F:{\mathbb {R}}^n \rightarrow {\mathbb {R}}^m\) is a (deterministic) vector-valued function, assumed twice continuously differentiable. Here and in the rest of the text, \(\Vert \cdot \Vert \) denotes the vector or matrix \(l_2\)-norm. At each iteration j, the LM method computes (approximately) a step \(s_j\) of the form \(- ( J_j^\top J_j + \gamma _j I )^{-1} J_j^\top F_j\), corresponding to the unique solution of \( \min _{s \in {\mathbb {R}}^n} m_j(s) := \frac{1}{2}\Vert F_j + J_j s\Vert ^2 + \frac{\gamma _j}{2}\Vert s\Vert ^2, \qquad \mathrm{(2)} \)
where \(\gamma _j>0\) is an appropriately chosen regularization parameter, \(F_j:=F(x_j)\) and \(J_j :=J(x_j)\) denotes the Jacobian of F at \(x_j\).
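For concreteness, the exact LM step can be obtained by solving the regularized normal equations or, equivalently, an augmented linear least-squares problem, which avoids forming \(J_j^\top J_j\) explicitly. The following fragment is an illustrative sketch of the latter formulation (the function name and interface are ours, not part of the algorithm's formal statement):

```python
import numpy as np

def lm_step(J, F, gamma):
    """Exact LM step s = -(J^T J + gamma I)^{-1} J^T F, computed via the
    equivalent augmented least-squares problem
        min_s || [J; sqrt(gamma) I] s + [F; 0] ||,
    which is numerically more stable than forming J^T J explicitly."""
    n = J.shape[1]
    A = np.vstack([J, np.sqrt(gamma) * np.eye(n)])
    b = np.concatenate([F, np.zeros(n)])
    s, *_ = np.linalg.lstsq(A, -b, rcond=None)
    return s
```

Since \(\gamma > 0\), the augmented matrix has full column rank and the least-squares solution is unique.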
In deciding whether to accept a step \(s_j\) generated by the subproblem (2), the LM method can be seen as a precursor of the trust-region method [12]. In fact, it seeks to determine when the Gauss–Newton step is applicable or when it should be replaced by a slower but safer steepest-descent step. One considers the ratio \(\rho _j\) between the actual reduction \(f(x_{j}) - f(x_j+s_j)\) attained in the objective function and the reduction \(m_j(0) - m_j(s_j)\) predicted by the model. Then, if \(\rho _j\) is sufficiently greater than zero, the step is accepted and \(\gamma _j\) is possibly decreased. Otherwise, the step is rejected and \(\gamma _j\) is increased.
In this paper, we use the standard choice of the regularization parameter \( \gamma _j := \mu _j \Vert F(x_j)\Vert ^2, \)
where \(\mu _j\) is updated according to the ratio \(\rho _j\). The considered LM algorithm using the above update strategy, as described in Algorithm 1, will be shown to be globally convergent with a complexity bound of order \(\epsilon ^{-2}\) and have strong local convergence properties.
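To make the update strategy concrete, the following sketch outlines one iteration of a scheme of this type. It assumes the regularization \(\gamma _j = \mu _j \Vert F_j\Vert ^2\) and a V1-style success update \(\mu _{j+1} = \max \{{\bar{\mu }}/\lambda , \mu _{\min }\}\) as described in Sect. 5; all names and constants are illustrative, not a definitive implementation of Algorithm 1:

```python
import numpy as np

def lm_iteration(F, J, x, mu, mu_bar, lam=5.0, eta=1e-2, mu_min=1e-16):
    """One illustrative LM iteration with a memory parameter mu_bar.
    F, J are callables returning the residual vector and the Jacobian.
    Returns (new x, new mu, new mu_bar, success flag)."""
    Fx, Jx = F(x), J(x)
    n = x.size
    gamma = mu * np.dot(Fx, Fx)                      # gamma_j = mu_j ||F_j||^2
    s = np.linalg.solve(Jx.T @ Jx + gamma * np.eye(n), -Jx.T @ Fx)
    Fs = F(x + s)
    ared = 0.5 * np.dot(Fx, Fx) - 0.5 * np.dot(Fs, Fs)   # actual reduction
    pred = 0.5 * np.dot(Fx, Fx) - 0.5 * (np.linalg.norm(Fx + Jx @ s) ** 2
                                         + gamma * np.dot(s, s))  # model reduction
    if ared / pred >= eta:
        # Successful: accept the step, relax mu, remember the last
        # successful parameter in mu_bar.
        return x + s, max(mu_bar / lam, mu_min), mu, True
    # Unsuccessful: reject the step and increase mu.
    return x, lam * mu, mu_bar, False
```

For dense problems one would replace the normal-equations solve by a more stable factorization; the logic of the accept/reject test is unchanged.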
This algorithm has one particularly novel feature among LM methods: an auxiliary parameter \({\bar{\mu }}\), which stores the last parameter value corresponding to a successful step, introduced to balance the requirements of global and local convergence. If the model is inaccurate, then \(\mu _j\) is driven higher; however, when we reach a region associated with the local convergence regime, the residual \(\Vert F(x_j)\Vert \) should ultimately dominate the behavior of \(\gamma _j\) for successful steps. Step 1 of Algorithm 1 requires the approximate solution of subproblem (2). As in trust-region methods, there are different techniques to approximate the solution of this subproblem that yield a globally convergent step. For that, it suffices to compute a step \(s_j\) that provides a reduction in the model at least as good as the one produced by the so-called Cauchy step (defined as the minimizer of the model along the negative gradient), which is given by \( s_j^{c} = -t_j^{c}\, \nabla f(x_j), \quad \text{with} \quad t_j^{c} = \frac{\Vert \nabla f(x_j)\Vert ^2}{\nabla f(x_j)^\top \left( J_j^\top J_j + \gamma _j I\right) \nabla f(x_j)}. \)
The Cauchy step is cheap to calculate as it does not require any system solve. Moreover, the LM method will be globally convergent if it uses a step that attains a reduction in the model as good as a multiple of the Cauchy decrease. Thus, we will impose the following assumption on the step calculation:
Assumption 2.1
There exists \(\theta _{fcd}>0\) such that for every iteration j: \( m_j(0) - m_j(s_j) \ge \theta _{fcd} \left( m_j(0) - m_j(s_j^{c})\right) , \) where \(s_j^{c}\) denotes the Cauchy step.
Despite providing a sufficient reduction in the model and being cheap to compute, the Cauchy step is merely a scaled steepest-descent step. In practice, a version of Algorithm 1 based solely on the Cauchy step would suffer from the same drawbacks as the steepest-descent algorithm on ill-conditioned problems.
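The Cauchy step only requires matrix–vector products, as the following minimal sketch shows (illustrative names; the gradient of the model at zero is \(J^\top F\)):

```python
import numpy as np

def cauchy_step(J, F, gamma):
    """Cauchy step: exact minimizer of the regularized model
    m(s) = 0.5||F + J s||^2 + 0.5*gamma*||s||^2 along the negative
    gradient g = J^T F.  No linear system solve is needed."""
    g = J.T @ F
    # Curvature of the model along g: g^T (J^T J + gamma I) g.
    curv = np.dot(J @ g, J @ g) + gamma * np.dot(g, g)
    t = np.dot(g, g) / curv
    return -t * g
```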
Since the Cauchy step is the first step of the conjugate gradient (CG) method when applied to the minimization of the quadratic \(s \rightarrow m_j(s)\), it is natural to run CG further and stop only when the residual becomes sufficiently small. Since CG generates iterates by minimizing the quadratic model over nested Krylov subspaces, and the first subspace is the one generated by \(\nabla f(x_j)\) (see, e.g., [16, Theorem 5.2]), every subsequent CG iterate attains at least the decrease obtained at the first CG iteration (i.e., by the Cauchy step). Thus, Assumption 2.1 holds for all the iterates \(s_j^{\text{cg}}\) generated by truncated CG whenever it is initialized with the null vector. The following lemma is similar to [17, Lemma 5.1] and will be useful for our worst-case complexity analysis.
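The truncated-CG step described above can be sketched as follows: CG is applied to the positive definite system \((J_j^\top J_j + \gamma _j I)s = -J_j^\top F_j\) from the zero starting point, so its first iterate coincides with the Cauchy step (the stopping rule and names below are illustrative):

```python
import numpy as np

def truncated_cg(J, F, gamma, tol=1e-8, max_iter=None):
    """Truncated CG on (J^T J + gamma I) s = -J^T F, started from s = 0,
    so the first iterate is the Cauchy step and every subsequent iterate
    retains at least the Cauchy decrease in the model."""
    n = J.shape[1]
    hess_vec = lambda v: J.T @ (J @ v) + gamma * v   # Hessian-vector product
    s = np.zeros(n)
    r = -(J.T @ F)           # residual of the system (= -gradient at s = 0)
    p = r.copy()
    for _ in range(max_iter if max_iter is not None else n):
        if np.linalg.norm(r) <= tol:
            break
        Hp = hess_vec(p)
        alpha = np.dot(r, r) / np.dot(p, Hp)
        s = s + alpha * p
        r_new = r - alpha * Hp
        beta = np.dot(r_new, r_new) / np.dot(r, r)
        p = r_new + beta * p
        r = r_new
    return s
```

Run to completion (at most n iterations in exact arithmetic), it recovers the exact LM step; stopped early, it still satisfies Assumption 2.1.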
Lemma 2.1
For the three steps proposed (exact, Cauchy, and truncated-CG), one has that \( m_j(0) - m_j(s_j) \;\ge \; \frac{\theta _{fcd}}{2}\, \frac{\Vert \nabla f(x_j)\Vert ^2}{\Vert J_j\Vert ^2 + \gamma _j}. \)
In what follows, we will call all iterations j for which \(\rho _j\ge \eta \) successful, and we denote the set of their indices by \({\mathcal {S}}\), i.e., \({\mathcal {S}} := \{j \in {\mathbb {N}}: \rho _j \ge \eta \}.\)
3 Worst-Case Iteration Complexity and Global Convergence
We now establish a worst-case complexity bound of Algorithm 1. Namely, given a tolerance \(\epsilon \in ]0,1[\), we aim at deriving the number of iterations, in the worst case, needed to reach an iterate \(x_j\) such that
where \({{\bar{x}}}_j \in {{\,\mathrm{argmin}\,}}_{\{x \in {\mathbb {R}}^n: \nabla f(x)=0\}} \Vert x_j - x \Vert \). Without loss of generality, we will assume that \(\Vert F({\bar{x}}_j)\Vert \) is unique and independent of \(x_j\); we denote it by \({{\bar{f}}}\), so that \({{\bar{f}}} := \sqrt{2f({\bar{x}}_j)}=\Vert F({\bar{x}}_j)\Vert \). If we drop this assumption, the same arguments in this section still show asymptotic global convergence, with \({{\bar{f}}}\) taken to be the value at the limit point of the sequence. We start by stating some classical assumptions.
Assumption 3.1
The function f is continuously differentiable in an open set containing \(L(x_0) := \{ x \in {\mathbb {R}}^n: f(x) \le f(x_0) \}\) with Lipschitz continuous gradient on \(L(x_0)\) with the constant \(\nu > 0\).
Assumption 3.2
The Jacobian J of F is uniformly bounded, i.e., there exists \(\kappa _{J} > 0\) such that \(\Vert J \Vert \le \kappa _{J}\) for all x.
We begin by obtaining a condition on the parameter \(\mu _j\) that is sufficient for an iteration to be successful. We omit the proof as it is standard, see for instance Lemma 5.2 in [17].
Lemma 3.1
Let Assumptions 2.1, 3.1, and 3.2 hold. Suppose that at the j-th iteration of Algorithm 1, one has
where
Then, the iteration is successful.
Our next result states that, when the gradient norm stays bounded away from zero, the parameter \(\mu _j\) cannot grow indefinitely. Without loss of generality, we assume that \(\epsilon \le \sqrt{\frac{\lambda \kappa }{\mu _0}}\), where \(\kappa \) is the same as in the previous lemma.
Lemma 3.2
Under Assumptions 2.1, 3.1, and 3.2, let j be a given iteration index such that for every \(l \le j\) it holds that \(\Vert F(x_l)\Vert \ge \max \left\{ \epsilon ,(1+\epsilon ){{\bar{f}}}\right\} \), where \(\epsilon \in ]0,1[\). Then, for every \(l \le j\), one also has
Proof
We prove this result by contradiction. Suppose that \(l \ge 1\) is the first index such that
By the updating rules on \(\mu _l\), either the iteration \(l-1\) is successful, in which case \(\mu _l \le \mu _{l-1} \le \frac{\lambda \kappa }{\max \left\{ \epsilon ^2,(1+\epsilon )^2{{\bar{f}}}^2\right\} }\) which contradicts (5), or the iteration \(l-1\) is unsuccessful, and thus,
therefore, using Lemma 3.1, the \((l-1)\)-th iteration is successful, which again leads to a contradiction. \(\square \)
Thanks to Lemma 3.2, we can now bound the number of successful iterations needed to drive the gradient norm below a given threshold.
Proposition 3.1
Let Assumptions 2.1, 3.1, and 3.2 hold. Let \(\epsilon \in ]0,1[\) and let \(j_{\epsilon }\) be the first iteration index such that (3) holds. Then, if \({\mathcal {S}}_{\epsilon }\) is the set of indices of successful iterations prior to \(j_{\epsilon }\), one has:
Proof
For any \(j \in {\mathcal {S}}_{\epsilon }\), one has
Hence, using the fact that \(\Vert \nabla f(x_j)\Vert \ge \epsilon \), \(\Vert F(x_j)\Vert \le \Vert F(x_0)\Vert \) and \(\mu _j \le \mu _{\max }\), we arrive at
Consequently, by summing on all iteration indices within \({\mathcal {S}}_{\epsilon }\),
we obtain
hence the result. \(\square \)
Lemma 3.3
Let Assumptions 2.1, 3.1, and 3.2 hold, and let \({\mathcal {U}}_{\epsilon }\) denote the set of unsuccessful iterations of index less than or equal to \(j_{\epsilon }\). Then,
Proof
Note that we necessarily have \(j_{\epsilon } \in {\mathcal {S}}_{\epsilon }\) (otherwise the definition of \(j_{\epsilon }\) would be contradicted). Our objective is to bound the number of unsuccessful iterations between two successful ones. Let \(\{j_0,\dots ,j_t=j_{\epsilon }\}\) be an ordering of \({\mathcal {S}}_{\epsilon }\), and let \(i \in \{0,\dots ,t-1\}\).
Due to the updating formulas for \(\mu _j\) on successful iterations, we have:
Moreover, we have \(\Vert F(x_{j_i+1})\Vert \ge \epsilon \) by assumption. By Lemma 3.1, for any unsuccessful iteration \(j \in \{j_i + 1,\dots , j_{i+1}-1\}\), we must have: \( \mu _j \le \frac{\kappa }{\epsilon ^2}\), since otherwise \(\mu _j > \frac{\kappa }{\epsilon ^2} \ge \frac{\kappa }{\Vert F(x_j)\Vert ^2}\) and the iteration would be successful.
Using the updating rules for \(\mu _j\) on unsuccessful iterations, we obtain:
Therefore, the number of unsuccessful iterations between \(j_i\) and \(j_{i+1}\), equal to \(j_{i+1}-j_i-1\), satisfies:
By considering (7) for \(i=0,\dots ,t-1\), we arrive at
What is left to bound is the number of possible unsuccessful iterations between the iteration of index 0 and the first successful iteration \(j_0\). Since \(\mu _0 \ge \mu _{\min }\), a similar reasoning as the one used to obtain (7) leads to
Putting (8) and (9) together yields the expected result. \(\square \)
By combining the results from Proposition 3.1 and Lemma 3.3, we thus get the following complexity estimate.
Theorem 3.1
Let Assumptions 2.1, 3.1, and 3.2 hold, and let \(\epsilon \in ]0,1[\). Then, the first index \(j_{\epsilon }\) for which \( \Vert \nabla f(x_{j_{\epsilon }+1})\Vert < \epsilon \ \text{or}\ \Vert F(x_{j_{\epsilon }+1})\Vert < \max \left\{ \epsilon ,(1+\epsilon ){{\bar{f}}}\right\} \) is bounded above by
where \({\mathcal {C}}\) is the constant defined in Proposition 3.1.
For the LM method proposed in this paper, we thus obtain an iteration complexity bound of \(\tilde{{\mathcal {O}}}\left( \epsilon ^{-2}\right) \), where the notation \(\tilde{{\mathcal {O}}}(\cdot )\) indicates the presence of logarithmic factors in \(\epsilon \). Note that the evaluation complexity bounds are of the same order.
We note that, by the definition of \({{\bar{f}}}\), \(\Vert F(x_j)\Vert ={{\bar{f}}}\) implies \(\nabla f(x_j)=0\). Thus, by letting \(\epsilon \rightarrow 0\), Theorem 3.1 implies that
In order to derive the global convergence result, we need to extend this to a limit result.
Theorem 3.2
Under Assumptions 2.1, 3.1, and 3.2, the sequence \(\{x_j\}\) generated by Algorithm 1 satisfies
Proof
Consider the case where \({\mathcal {S}}\), the set of successful iterations, is finite. Then there exists \(j_0\) such that \(\Vert \nabla f(x_j)\Vert = \Vert \nabla f(x_{j_0})\Vert \) for all \(j \ge j_0\); therefore, from the previous theorem, we conclude that in this case
Alternatively, assume \({\mathcal {S}}\) is infinite and let \(\epsilon >0\). Since \((f(x_j))\) is monotonically decreasing and bounded from below, one has \(\lim _{j\rightarrow \infty } f(x_j) - f(x_{j+1})=0\). Since \(\frac{\eta \theta _{fcd} \epsilon ^2 }{2(\kappa _J^2 + \mu _0 \Vert F(x_0)\Vert ^2 )}>0\), we conclude, for j sufficiently large, that
Thus, for a sufficiently large j, consider the case that \(j \in {\mathcal {S}}\), then \(\rho _j \ge \eta \) and by rearranging the terms and using Assumption 2.1 we conclude that
If \(j \notin {\mathcal {S}}\), then \( \Vert \nabla f(x_j)\Vert \; = \; \Vert \nabla f(x_{{\hat{j}}})\Vert \; \le \epsilon , \) where \({\hat{j}} \in {\mathcal {S}}\) is the last successful iteration before j. Hence, it must hold that \( \lim _{j\rightarrow \infty } \Vert \nabla f(x_j)\Vert \; = \; 0.\) \(\square \)
4 Local Convergence
In this section, we prove local convergence for the algorithm, showing a quadratic rate for zero residual problems and explicit linear rate for nonzero residuals. Since the problem is generally nonconvex and with possibly nonzero residuals, there can be multiple sets of stationary points with varying objective values. We consider a particular subset with a constant value of the objective.
Assumption 4.1
There exists a connected isolated set \(X^*\) composed of stationary points to (1) each with the same value \(f^*\) of \(f(\cdot )=\frac{1}{2}\Vert F(\cdot )\Vert ^2\) and Algorithm 1 generates a sequence with an accumulation point \(x^*\in X^*\).
We note that under Assumption 4.1, the value \(\Vert F({{{\bar{x}}}})\Vert \) is unique for all \({{\bar{x}}}\in X^*\), which may not be the case for the residual vector \(F({{{\bar{x}}}})\) itself. Thus we define \({{\bar{f}}} = \sqrt{2f({{\bar{x}}})}=\Vert F({{\bar{x}}})\Vert \) for \({{\bar{x}}}\in X^*\). Moreover, from the global convergence analysis, we can assume, without loss of generality, that there exists a subsequence of iterates approaching this \(X^*\). This subsequence need not be unique, i.e., there may be more than one subsequence converging to separate connected sets of stationary points. We shall see that eventually one of these sets "catches" the subsequence, resulting in direct convergence to the set of stationary points at a quadratic or linear rate, depending on \({{\bar{f}}}\).
In the sequel, \(N(x,\delta )\) denotes the closed ball with center x (a given vector) and radius \(\delta >0\) and \({{\,\mathrm{dist}\,}}(x,X^*)\) denotes the distance between the vector x and the set \(X^*\), i.e., \({{\,\mathrm{dist}\,}}(x,X^*)=\min _{y\in X^*}\Vert x-y\Vert ,\) and \({{{\bar{x}}}} \in {{\,\mathrm{argmin}\,}}_{y\in X^*}\Vert x-y\Vert \).
Assumption 4.2
It holds that F(x) is twice continuously differentiable around \(x^*\in X^*\) with \(x^*\) satisfying Assumption 4.1. In particular, this implies that there exists \(\delta _1>0\) such that for all \(x, ~y \in N(x^*,\delta _1)\),
where \(L_1\), \(L_2\), and \(L_3\) are positive constants.
From the triangle inequality and assuming (12), we get
We introduce the following additional assumption.
Assumption 4.3
There exist a \(\delta _3>0\) and \(M >0\) such that
As the function \(x \rightarrow F(x)- F({{\bar{x}}})\) has a zero residual, the proposed error bound assumption can be seen as a generalization of the zero residual case [6,7,8,9]. Thus, any ill-posed zero residual problem, as considered in this line of work on quadratic local convergence for LM methods, satisfies the assumptions. Our assumptions also cover a range of nonzero residual problems, for instance, standard data assimilation problems [2], as given by Example 4.1.
Example 4.1
Consider the following data assimilation problem \(F: {\mathbb {R}}^n \rightarrow {\mathbb {R}}^{m}\) defined, for a given \(x \in {\mathbb {R}}^n\), by \(F(x)= \left( (x - x_{\text{ b }})^{\top }, (H(x) - y)^{\top }\right) ^{\top }\), where \(x_{\text{ b }} \in {\mathbb {R}}^n\) is a background vector, \(y \in {\mathbb {R}}^{m-n}\) the vector of observations and \(H: {\mathbb {R}}^n \rightarrow {\mathbb {R}}^{m-n}\) is a smooth operator modeling the observations. For such problems, the set of stationary points \(X^{*}\) is a finite set of isolated points \(\{ {{\bar{x}}}\} \subset {\mathbb {R}}^n\), \({{\bar{F}}}=F({{\bar{x}}})\) and \( {{\,\mathrm{dist}\,}}(x,X^*)= \Vert x - {{\bar{x}}}\Vert \) for \({\bar{x}}\) closest to x. Clearly, one has
Thus, for these typical problems arising in data assimilation, Assumptions 4.2, 4.1 and 4.3 are satisfied.
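The error bound in Example 4.1 follows because the first block of \(F(x)-F({\bar{x}})\) equals \(x-{\bar{x}}\), so \(\Vert F(x)-F({\bar{x}})\Vert \ge \Vert x-{\bar{x}}\Vert \) and Assumption 4.3 holds with \(M=1\). A quick numerical check with a hypothetical smooth observation operator H (the inequality actually holds between any two points, in particular between x and \({\bar{x}}\)):

```python
import numpy as np

def F_da(x, x_b, y, H):
    """Data assimilation residual F(x) = ((x - x_b)^T, (H(x) - y)^T)^T."""
    return np.concatenate([x - x_b, H(x) - y])
```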
Example 4.2
Consider \(F: {\mathbb {R}}^3 \rightarrow {\mathbb {R}}^3\) defined, for a given \(x \in {\mathbb {R}}^3\), by
Clearly, F is Lipschitz smooth, hence Assumption 4.2 is satisfied. We note that every \({\bar{x}}\in X^*=\{x \in {\mathbb {R}}^3: x_1= x_2 ~\text{ and }~ x_3=0 \}\) satisfies \(F({{\bar{x}}})=(0, -1, 1)^{\top } \) and, for a given x, one has \({{\,\mathrm{dist}\,}}(x,X^*)= \sqrt{\frac{(x_1-x_2)^2}{2} + x_3^2}.\) Hence, for all \(x \in {\mathbb {R}}^3\) such that \(|x_1-x_2 |\le \frac{1}{2}\), one concludes that
Thus, for this example, Assumptions 4.1 and 4.3 are also satisfied.
From the global convergence results, we have established that there is a subsequence of successful iterations converging to the set of stationary points \(X^*\). In this section, we begin by considering the subsequence of iterations that immediately follow the successful iterations, i.e., the subsequence \({\mathcal {K}}=\{j+1 : \, j\in {\mathcal {S}}\}\). We shall present the results with a slight abuse of notation that simplifies the presentation without sacrificing accuracy or generality: every time we denote a quantity \(a_j\), the index j corresponds to an element of the subsequence \({\mathcal {K}}\) defined above; thus, when we say that a particular statement holds eventually, this means that it holds for all \(j\in {\mathcal {K}}\) with j sufficiently large. Let \({\hat{\mu }}\) be an upper bound for \( \mu _j\); note that this exists by the formulation of Algorithm 1. We also set \(\delta := \min (\delta _1,\delta _2,\delta _3)\), with the \(\delta _i\) as introduced above. In the proofs, we follow the structure of the local convergence proof in [6], with the addition that the step is accepted by the globalization procedure. We use \({{{\bar{F}}}_j}\) to denote \(F({{{\bar{x}}}_j})\). The first lemma is similar to [6, Lemma 2.1].
Lemma 4.1
Suppose that Assumptions 4.1 and 4.2 are satisfied.
If \(x_j\in N(x^*,\frac{\delta }{2})\), then the solution \(s_j\) to (2) satisfies
where \(C_1\) is a positive constant independent of j.
Proof
Let us assume that \({{\bar{f}}}>0\). Otherwise, the proof is the same as in [6, Lemma 2.1]. Without loss of generality, since \(f(x_j)\) is monotonically decreasing, we can consider that j is sufficiently large such that \(\Vert F_j\Vert \le 2 {{\bar{f}}}\). Hence, we get
which concludes the proof. \(\square \)
Lemma 4.2
Suppose that Assumptions 4.1 and 4.2 are satisfied.
If \(x_j\in N(x^*,\frac{\delta }{2})\), then the solution \(s_j\) to (2) satisfies
where \(C_2\) is a positive constant independent of j.
Proof
We assume that \(\Vert {{\bar{F}}}_j\Vert ={{\bar{f}}}>0\), otherwise the proof is the same as in [6, Lemma 2.1]. In this case, using the fact that \(\Vert F_j\Vert \ge {{\bar{f}}}\) and \(\mu _j \ge \mu _{\min }\), one has
which concludes the proof. \(\square \)
Lemma 4.3
Suppose that Assumptions 4.1, 4.2 and 4.3 are satisfied and that \({{\bar{f}}}=0\). Then, for j sufficiently large, one has \(\rho _j\ge \eta \).
Proof
In fact, for j sufficiently large, using Lemmas 4.1 and 4.2, one gets
On the other hand, using the fact that \(\Vert J_j\Vert \) is bounded (by \(\kappa _J>0\)), Lemma 4.2 and (since \({\bar{F}}_j=0\)) \(\frac{1}{M} \Vert {{\bar{x}}}_j-x_j\Vert \le \Vert F_j\Vert \le L_2\Vert {{\bar{x}}}_j-x_j\Vert \), one gets
Hence, for j sufficiently large
Thus, \(|1-\rho _j| \rightarrow 0\) as j goes to \(+\infty \). \(\square \)
For the nonzero residual case, we must consider a specific instance of the algorithm. In particular, we specify Step 3 of Algorithm 1 to be,
If \(\rho _j\ge \eta \), then set \(x_{j+1}=x_j+s_j\), \(\mu _{j+1}=\bar{\mu }\) and \(\bar{\mu }=\mu _{j}\).
Note that this is still consistent with the presentation of Algorithm 1.
Lemma 4.4
Suppose that Assumptions 4.1, 4.2 and 4.3 are satisfied and that \({{\bar{f}}}>0\). Then, for j sufficiently large, one has \(\rho _j\ge \eta \).
Proof
Indeed, by the new updating mechanism, the parameter \(\bar{\mu }\) is monotonically nondecreasing. In particular, if there were an infinite number of unsuccessful steps, then \({\mu _j}\rightarrow \infty \). This would imply that, for some \(j_0\) and all \(j\ge j_0\), \(\mu _j> \frac{\kappa }{{\bar{f}}^2} > \frac{\kappa }{\Vert F(x_j)\Vert ^2}\), which together with Lemma 3.1 leads to a contradiction. Thus, there are only finitely many unsuccessful steps, and every step is accepted for j sufficiently large. \(\square \)
Proposition 4.1
Suppose that Assumptions 4.1, 4.2, and 4.3 are satisfied. Let \(x_j\), \(x_{j+1} \in N(x^*,\delta /2)\). One has
where \(C_3:= M \sqrt{C_1^2 + 2L_3C_1 C_2^2 + L_3^2C_2^4}\) and \({{\hat{C}}}_3:=M \sqrt{ 2C_1 + 2L_3C_2^2}\).
Proof
Indeed, using Assumption 4.3, Lemma 4.1, the fact that the step is accepted for j sufficiently large, and \({{\bar{f}}} =\Vert {{{\bar{F}}}_{j+1}}\Vert \), one has
Therefore, using Lemma 4.2, one gets
where \(C_3 := M \sqrt{C_1^2 + 2L_3C_1 C_2^2 + L_3^2 C_2^4}\) and \( {{\hat{C}}}_3:=M \sqrt{ 2C_1 + 2L_3C_2^2}\) are positive constants. Moreover, by applying Taylor expansion to \(x \rightarrow F(x)^{\top } {{{\bar{F}}}}_{j+1}\) at the point \(x_{j+1}=x_j+s_j\) around \({{\bar{x}}}_{j+1}\), there exists \(R>0\) such that
Note that the Hessian of \(x \rightarrow F(x)^{\top } {{{\bar{F}}}}_{j+1}\) is equal to \(\sum _{i=1}^m {{\bar{F}}}(x_{j+1})_i \nabla ^2 F_i(x)\), and from Assumption 4.2 we have \(\nabla ^2 F_i(x)\) are bounded. Hence, the constant R is bounded as follows \(R \le \frac{L_1}{2} \sum _{i=1}^m |({\bar{F}}_{j+1})_i| \le \frac{\sqrt{m} L_1}{2} \Vert {{{\bar{F}}}}_{j+1}\Vert . \) Combining the obtained Taylor expansion and (16) gives
which completes this proof. \(\square \)
In the next lemma, we show that, once the iterates \(\{x_j\}_j\) lie sufficiently near their limit point \(x^*\), the sequence \(\{{{\,\mathrm{dist}\,}}(x_{j},X^*)\}_j\) converges to zero quadratically if the problem has a zero residual, or linearly when the residual is small.
Lemma 4.5
Suppose that Assumptions 4.1, 4.2, and 4.3 are satisfied. Let \(\{x_j\}_j\) be a sequence generated by the proposed Algorithm. Suppose that both \(x_j\) and \(x_{j+1}\) belong to \(N(x^*,\delta /2)\). If the problem has a zero residual, i.e., \({{\bar{f}}}=0\), then
where \(C_3\) is a constant defined according to Proposition 4.1.
Otherwise, if \({{\bar{f}}} < \min \left\{ \frac{1}{\sqrt{m} L_1 M^{2}},\frac{1-C^2_3\delta ^2}{{{{\hat{C}}}_3}^2 + \sqrt{m} L_1 M^{2}}\right\} ,\) then
where \(C_4\in ]0,1[\) is a positive constant independent of j.
Proof
Indeed, under the zero residual case, i.e., \({{\bar{f}}}=0\), then Proposition 4.1 is equivalent to \( {{\,\mathrm{dist}\,}}(x_{j+1},X^*) \le C_3 {{\,\mathrm{dist}\,}}(x_{j},X^*)^2.\)
If the problem has a small residual, i.e., \({{\bar{f}}} < \min \left\{ \frac{1}{\sqrt{m} L_1 M^{2}},\frac{1-C^2_3\delta ^2}{{{{\hat{C}}}_3}^2 + \sqrt{m} L_1 M^{2}}\right\} ,\) then Proposition 4.1 will be equivalent to
where \(C_4:=\sqrt{ \frac{C_3^2 \delta ^2 + {{{\hat{C}}}_3}^2 {{\bar{f}}} }{1 - \sqrt{m} L_1 M^{2} {{\bar{f}}} }} \in ]0,1[\). \(\square \)
The final theorem is standard (see, e.g., [6, Lemma 2.3]).
Theorem 4.1
Suppose that Assumptions 4.1, 4.2, and 4.3 are satisfied. If \({{\bar{f}}}=0\) then Algorithm 1 converges locally quadratically to \(X^*\). Otherwise, if the problem has a small nonzero residual as in Lemma 4.5, Algorithm 1 converges locally at a linear rate to \(X^*\).
5 Numerical Results
In this section, we report the results of some preliminary experiments performed to test the practical behavior of Algorithm 1. All procedures were implemented in MATLAB and run using MATLAB 2019a on a MacBook Pro with a 2.4 GHz Intel Core i5 and 4 GB RAM; the machine precision is \(\epsilon _m \sim 2\cdot 10^{-16}\).
We compare our proposed algorithm with the LM method proposed in [15]. The latter algorithm can be seen to be similar to Algorithm 1 except that \(\gamma _j := \mu _j \Vert \nabla f(x_j)\Vert \), where the parameter \(\mu _j\) is updated in the following way. Given constants \(c_0>1\), \( c_1= c_0(c_0 +1)^{-1}\) and \(0< \eta< \eta _1< \eta _2<1\): if the iteration is unsuccessful, then \(\mu _{j}\) is increased (i.e., \(\mu _{j+1}:= c_0 \mu _{j}\)); otherwise, if \(\Vert \nabla f(x_j)\Vert \mu _j < \eta _1\) then \(\mu _{j+1}= c_0 \mu _j\), if \(\Vert \nabla f(x_j)\Vert \mu _j>\eta _2\) then \(\mu _{j+1}=\max \{c_1 \mu _j, \mu _{\min } \} \), and \(\mu _j\) is kept unchanged otherwise. The LM method proposed in [15] was shown to be globally convergent with a favorable complexity bound, but its local behavior was not investigated. In our comparison, we refer to the implementation of this method as LM-(global), while Algorithm 1 is referred to as LM-(global and local), since it theoretically guarantees both global and local convergence properties. Both algorithms were written in MATLAB, and the subproblem was solved using the backslash operator.

For the LM-(global and local) method, two variants were tested. In the first one, named LM-(global and local)-V1, we set the parameter \(\mu _{j+1}\) equal to \(\max \{{{\bar{\mu }}}/\lambda , \mu _{\min }\}\) if the iteration is declared successful. In the second variant, named LM-(global and local)-V2, the parameter is set to \(\mu _{j+1}={\bar{\mu }}\) if the iteration j is successful. The initial parameters defining the implemented algorithms were set as \( \eta =10^{-2}, ~ \eta _1=0.1, ~ \eta _2=0.9, ~\lambda = c_0=5, ~\mu _{0}=1\) and \(\mu _{\min }=10^{-16}. \)

As a set of problems \({\mathcal {P}}\), we used the well-known 33 Moré/Garbow/Hillstrom problems [20]. All the tested problems are smooth and have a least-squares structure.
The residual function F and the Jacobian matrix for all the test problems [20] are implemented in MATLAB. Some of these problems have a nonzero value at the optimum and thus are consistent with the scope of the paper. To obtain a larger test set, we created a set of additional 14 optimization problems by varying the problem dimension n when this was possible. For all the tested problems, we used the proposed starting points \(x_0\) given in the problems’ original presentation [20]. All algorithms are stopped when \( \Vert \nabla f(x_j)\Vert \le \epsilon \) where \(\epsilon \) is the regarded accuracy level. If they did not converge within a maximum number of iterations \(j_{\max }:=10000\), then they were considered to have failed.
For our test comparison, we used the performance profiles proposed by Dolan and Moré [21] over the set of problems \({\mathcal {P}}\) (of cardinality \(|{\mathcal {P}}|\)). For a set of algorithms \({\mathcal {S}}\), the performance profile \(\rho _s(\tau )\) of an algorithm s is defined as the fraction of problems where the performance ratio \(r_{p,s}\) is at most \(\tau \), \( \rho _s(\tau ) \; = \; \frac{1}{|{\mathcal {P}}|} \text{ size } \{ p \in {\mathcal {P}}: r_{p,s} \le \tau \}.\) The performance ratio \(r_{p,s}\) is in turn defined by \(r_{p,s} \; = \; \frac{t_{p,s} }{\min \{t_{p,s}: s \in {\mathcal {S}}\}},\) where \(t_{p,s} > 0\) measures the performance of the algorithm s when solving problem p, seen here as the number of iterations. Better performance of the algorithm s, relatively to the other algorithms on the set of problems, is indicated by higher values of \(\rho _s(\tau )\). In particular, efficiency is measured by \(\rho _s(1)\) (the fraction of problems for which algorithm s performs the best) and robustness is measured by \(\rho _s(\tau )\) for \(\tau \) sufficiently large (the fraction of problems solved by s). For a better visualization, we plot the performance profiles in a \(\log _2\)-scale.
We present the obtained performance profiles, for two levels of accuracy, in Figure 1. For an accuracy level of \(10^{-3}\), the LM-(global and local) variants are more efficient than LM-(global): LM-(global and local)-V1 performed best on more than \(60 \%\) of the tested problems and LM-(global and local)-V2 on \(40 \%\), while LM-(global) was best on less than \(30 \%\). In terms of robustness, all the solvers exhibit good performance. At the higher accuracy level, the two variants of LM-(global and local) outperform LM-(global). The LM-(global and local)-V1 variant shows the best performance both in terms of efficiency and robustness.
To assess the local convergence rate, we computed the estimated order of convergence \( {\text {EOC}} := \frac{\log \Vert \nabla f(x_{j_f})\Vert }{\log \Vert \nabla f(x_{j_f-1})\Vert }, \)
where \(j_f\) is the index of the final computed iterate. When \( {\text {EOC}} \ge 1.8\), the algorithm is said to be quadratically convergent; when \( 1.8 > {\text {EOC}} \ge 1.1\), it is said to be superlinearly convergent; otherwise, it is linearly convergent or worse. The estimated orders of convergence (see Table 2) show the good local behavior of the LM-(global and local) variants compared to LM-(global): the LM-(global and local) variants converged quadratically or superlinearly on 38 (V1) and 33 (V2) problems, respectively, while LM-(global) showed quadratic or superlinear convergence on only 17 problems.
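The classification above can be sketched as follows. This is an illustrative Python snippet, assuming the EOC is taken as the ratio of the logarithms of the last two gradient norms of a run (the paper's exact estimator may differ in detail); the thresholds 1.8 and 1.1 are those stated in the text:

```python
import math

def estimated_order(grad_norms):
    """EOC estimate from the last two gradient norms of a run
    (assumed form: log ||g_{j_f}|| / log ||g_{j_f - 1}||)."""
    g_last, g_prev = grad_norms[-1], grad_norms[-2]
    return math.log(g_last) / math.log(g_prev)

def classify(eoc):
    """Map an EOC value to the convergence class used in Table 2."""
    if eoc >= 1.8:
        return "quadratic"
    if eoc >= 1.1:
        return "superlinear"
    return "linear or worse"

# A run whose gradient norm drops from 1e-2 to 1e-4 gives EOC = 2,
# hence is classified as quadratic.
eoc = estimated_order([1e-2, 1e-4])
label = classify(eoc)
```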
6 Conclusions
In this paper, we presented and analyzed a novel LM method for solving nonlinear least-squares problems. We were able to formulate a globally convergent LM method with strong worst-case iteration complexity bounds. The proposed method is locally convergent at a quadratic rate for zero residual problems and at a linear rate for small residuals. Preliminary numerical results confirmed the theoretical behavior. Future research directions include extending the method to problems with constraints as well as to problems with noisy data.
References
Tarantola, A.: Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM, Philadelphia (2005)
Trémolet, Y.: Model error estimation in 4D-Var. Q. J. R. Meteorol. Soc. 133, 1267–1280 (2007)
Levenberg, K.: A method for the solution of certain problems in least squares. Quart. Appl. Math. 2, 164–168 (1944)
Marquardt, D.: An algorithm for least-squares estimation of nonlinear parameters. SIAM J. Appl. Math. 11, 431–441 (1963)
Osborne, M.R.: Nonlinear least squares—the Levenberg algorithm revisited. J. Austral. Math. Soc. Ser. B 19, 343–357 (1976)
Yamashita, N., Fukushima, M.: On the rate of convergence of the Levenberg–Marquardt method. In: Topics in Numerical Analysis, pp. 239–249. Springer (2001)
Fan, J., Yuan, Y.: On the quadratic convergence of the Levenberg–Marquardt method without nonsingularity assumption. Computing 74, 23–39 (2005)
Dan, H., Yamashita, N., Fukushima, M.: Convergence properties of the inexact Levenberg–Marquardt method under local error bound conditions. Optim. Methods Softw. 17, 605–626 (2002)
Facchinei, F., Fischer, A., Herrich, M.: A family of Newton methods for nonsmooth constrained systems with nonisolated solutions. Math. Methods Oper. Res. 77, 433–443 (2013)
Ipsen, I.C.F., Kelley, C.T., Pope, S.R.: Rank-deficient nonlinear least squares problems and subset selection. SIAM J. Numer. Anal. 49, 1244–1266 (2011)
Fan, J.: Convergence rate of the trust region method for nonlinear equations under local error bound condition. Comput. Optim. Appl. 34, 215–227 (2006)
Conn, A.R., Gould, N.I.M., Toint, Ph.L.: Trust-Region Methods. SIAM, Philadelphia, PA, USA (2000)
Ueda, K., Yamashita, N.: On a global complexity bound of the Levenberg–Marquardt method. J. Optim. Theory Appl. 147, 443–453 (2010)
Ueda, K., Yamashita, N.: Global complexity bound analysis of the Levenberg–Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem. J. Optim. Theory Appl. 152, 450–467 (2012)
Zhao, R., Fan, J.: Global complexity bound of the Levenberg–Marquardt method. Optim. Methods Softw. 31, 805–814 (2016)
Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer, Berlin (2006)
Bergou, E., Gratton, S., Vicente, L.N.: Levenberg–Marquardt methods based on probabilistic gradient models and inexact subproblem solution, with application to data assimilation. SIAM/ASA J. Uncertain. Quantif. 4, 924–951 (2016)
Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, Berlin (2013)
Fischer, A., Shukla, P., Wang, M.: On the inexactness level of robust Levenberg–Marquardt methods. Optimization 59, 273–287 (2010)
Moré, J.J., Garbow, B.S., Hillstrom, K.E.: Testing unconstrained optimization software. ACM Trans. Math. Softw. 7, 17–41 (1981)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)
Acknowledgements
We would like to thank Clément Royer and the referees for their careful readings and corrections that helped us to improve our manuscript significantly. Support for Vyacheslav Kungurtsev was provided by the OP VVV Project CZ.02.1.01/0.0/0.0/16_019/0000765 “Research Center for Informatics”.
Communicated by Nobuo Yamashita.
Bergou, E.H., Diouane, Y. & Kungurtsev, V. Convergence and Complexity Analysis of a Levenberg–Marquardt Algorithm for Inverse Problems. J Optim Theory Appl 185, 927–944 (2020). https://doi.org/10.1007/s10957-020-01666-1