1 Introduction

In this paper, we consider solving general nonlinear least-squares problems in which a solution with zero residual may or may not exist. Problems of this nature arise in several important practical contexts, including inverse problems for ill-posed nonlinear continuous systems [1], with applications such as data assimilation [2]. Typically, the resulting least-squares problems do not have a zero residual at any point, although the minimum residual may be small.

Recall that the Gauss–Newton method solves nonlinear least-squares problems by iteratively solving a linearized least-squares subproblem. This subproblem may not be well-posed when the residual Jacobian is rank deficient. Furthermore, the Gauss–Newton method may not be globally convergent. The Levenberg–Marquardt (LM) method [3,4,5] was developed to deal with the rank deficiency of the Jacobian matrix and also to provide a globalization strategy for Gauss–Newton. In this paper, we present and analyze the global and local convergence results of a novel LM method for solving general nonlinear least-squares problems that carefully balances the opposing objectives of ensuring global convergence and enabling a fast local convergence regime.

In general, the goals of encouraging global and local convergence compete against each other. Namely, the regularization parameter appearing in the subproblem should be allowed to become arbitrarily large in order to encourage global convergence, by ensuring the local accuracy of the linearized subproblem, but the parameter must approach zero in order to function as a stabilizing regularization that encourages fast local convergence. In the original presentation of the LM method in [3, 4], the regularization parameter is not permitted to go to zero, and only global convergence is considered.

The strongest local convergence results for LM methods are given in a series of papers beginning with [6] (followed by, e.g., [7] and [8]; see also [9]), wherein it is assumed that the residual function is zero at the solution. For global convergence, the algorithm considered there is a two-phase method: each step is tested for quadratic decrease of the residual and is otherwise globalized by a line-search procedure.

In the case of nonzero residuals, it has been shown that an LM method converges locally at a linear rate, provided the norm of the residual is sufficiently small and the regularization parameter goes to zero [10]. Our proof of linear convergence is simpler than the one in [10]. Worst-case iteration complexity bounds for LM methods applied to nonlinear least-squares problems can be found in [13,14,15]. We show that our proposed algorithm has a complexity bound that matches these results, up to a logarithmic factor.

In this paper, we propose a method that successfully balances multiple theoretical convergence objectives, including (a) global convergence for exact and inexact solutions of the subproblem, (b) worst-case iteration complexity, and (c) local convergence for both zero and nonzero residual problems. Table 1 summarizes the literature on this class of methods; our proposed algorithm is the only one that matches the state of the art in all of these properties.

Table 1 Convergence properties of Levenberg–Marquardt algorithms

The outline of this paper is as follows. In Sect. 2, we present the proposed LM algorithm and address the inexact solution of the linearized least-squares subproblem. Section 3 contains a worst-case complexity and global convergence analysis of the proposed method. In Sect. 4, we derive the local convergence theory. In Sect. 5, preliminary numerical experiments are presented to demonstrate the behavior of our algorithm. Finally, we conclude in Sect. 6.

2 A Novel Levenberg–Marquardt Algorithm

In this paper, we consider the following nonlinear least-squares problem

$$\begin{aligned} \min _{x \in {\mathbb {R}}^n} \; f(x) := \frac{1}{2}\Vert F(x)\Vert ^2, \end{aligned}$$
(1)

where \(F:{\mathbb {R}}^n \rightarrow {\mathbb {R}}^m\) is a (deterministic) vector-valued function, assumed twice continuously differentiable. Here and in the rest of the text, \(\Vert \cdot \Vert \) denotes the vector or matrix \(l_2\)-norm. At each iteration j, the LM method computes (approximately) a step \(s_j\) of the form \(- ( J_j^\top J_j + \gamma _j I )^{-1} J_j^\top F_j\), corresponding to the unique solution of

$$\begin{aligned} \min _{s \in {\mathbb {R}}^n} \; m_j(s) := \frac{1}{2}\Vert F_j + J_j s\Vert ^2 + \frac{1}{2}\gamma _j \Vert s\Vert ^2, \end{aligned}$$
(2)

where \(\gamma _j>0\) is an appropriately chosen regularization parameter, \(F_j:=F(x_j)\) and \(J_j :=J(x_j)\) denotes the Jacobian of F at \(x_j\).
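For reference, this exact step amounts to a single regularized linear solve; below is a minimal MATLAB sketch (the function and variable names are ours and purely illustrative).

```matlab
% Minimal sketch: exact LM step, i.e., the unique minimizer of (2), obtained
% by solving the regularized normal equations (J'*J + gamma*I) s = -J'*F.
% All names (lm_step, F, J, gamma) are illustrative.
function s = lm_step(F, J, gamma)
    n = size(J, 2);
    s = -(J' * J + gamma * eye(n)) \ (J' * F);
end
```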

In deciding whether to accept a step \(s_j\) generated by the subproblem (2), the LM method can be seen as a precursor of the trust-region method [12]. In fact, it seeks to determine when the Gauss–Newton step is applicable and when it should be replaced by a slower but safer steepest-descent step. One considers the ratio \(\rho _j\) between the actual reduction \(f(x_{j}) - f(x_j+s_j)\) attained in the objective function and the reduction \(m_j(0) - m_j(s_j)\) predicted by the model. Then, if \(\rho _j\) is sufficiently greater than zero, the step is accepted and \(\gamma _j\) is possibly decreased. Otherwise, the step is rejected and \(\gamma _j\) is increased.

In this paper, we use the standard choice of the regularization parameter

$$\begin{aligned} \gamma _j := \mu _j \Vert F(x_j)\Vert ^2, \end{aligned}$$

where \(\mu _j\) is updated according to the ratio \(\rho _j\). The LM algorithm using the above update strategy, described in Algorithm 1, will be shown to be globally convergent with a complexity bound of order \(\epsilon ^{-2}\) and to have strong local convergence properties.

[Algorithm 1: the proposed Levenberg–Marquardt method]
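Since the pseudocode of Algorithm 1 is given only in the figure above, the following rough MATLAB sketch illustrates one possible realization of the iteration as described in the text; the stopping test, the exact subproblem solve, and the successful-step update (variant V1 of Sect. 5) are our assumptions, not the authors' reference implementation.

```matlab
% Rough MATLAB sketch of the iteration described in the text (successful-step
% update as in variant V1 of Sect. 5). All names, the stopping test, and the
% exact subproblem solve are illustrative assumptions.
function x = lm_global_local(Ffun, Jfun, x, mu_min, mu0, lambda, eta, tol, maxit)
    mu = mu0; mu_bar = mu0;                    % mu_bar: parameter of the last successful step
    for j = 1:maxit
        F = Ffun(x); J = Jfun(x); g = J' * F;
        if norm(g) <= tol, break; end
        gamma = mu * norm(F)^2;                % regularization parameter
        s = -(J' * J + gamma * eye(numel(x))) \ g;   % exact solve of (2)
        pred = 0.5*norm(F)^2 - (0.5*norm(F + J*s)^2 + 0.5*gamma*norm(s)^2);
        ared = 0.5*norm(F)^2 - 0.5*norm(Ffun(x + s))^2;
        rho = ared / pred;
        if rho >= eta                          % successful iteration
            x = x + s;
            mu_bar = mu;
            mu = max(mu_bar / lambda, mu_min); % V1 update on success
        else                                   % unsuccessful iteration
            mu = lambda * mu;
        end
    end
end
```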

This algorithm has one particularly novel feature among LM methods: an auxiliary parameter \({\bar{\mu }}\), which stores the parameter value of the last successful step and is introduced to balance the requirements of global and local convergence. If the model is inaccurate, then \(\mu _j\) is driven higher; however, once we reach a region associated with the local convergence regime, the residual \(\Vert F(x_j)\Vert \) should ultimately dominate the behavior of \(\gamma _j\) for successful steps.

Step 1 of Algorithm 1 requires the approximate solution of subproblem (2). As in trust-region methods, there are different techniques for approximating the solution of this subproblem that yield a globally convergent step. For that, it suffices to compute a step \(s_j\) that provides a reduction in the model at least as good as the one produced by the so-called Cauchy step (the minimizer of the model along the negative gradient), which is given by

$$\begin{aligned} s_j^{\text{ c }}\; := \; -\frac{\Vert \nabla f(x_j)\Vert ^2}{\nabla f(x_j)^\top (J_{j}^\top J_{j}+\gamma _j I) \nabla f(x_j)} \nabla f(x_j). \end{aligned}$$

The Cauchy step is cheap to calculate as it does not require any system solve. Moreover, the LM method will be globally convergent if it uses a step that attains a reduction in the model as good as a multiple of the Cauchy decrease. Thus, we will impose the following assumption on the step calculation:

Assumption 2.1

There exists \(\theta _{fcd}>0\) such that for every iteration j:

$$\begin{aligned} m_j(0) - m_j(s_j) \; \ge \; \frac{\theta _{fcd}}{2}\frac{\Vert \nabla f(x_j)\Vert ^2}{\Vert J_{j}\Vert ^2+\gamma _j}. \end{aligned}$$

Despite providing a sufficient reduction in the model and being cheap to compute, the Cauchy step is merely a scaled steepest-descent step. In practice, a version of Algorithm 1 based solely on the Cauchy step would suffer from the same drawbacks as the steepest-descent algorithm on ill-conditioned problems.
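For concreteness, the following MATLAB sketch computes the Cauchy step for a small illustrative residual (the Rosenbrock test function) and compares the resulting model decrease with the bound of Assumption 2.1, which the Cauchy step satisfies with \(\theta _{fcd}=1\); all names and data below are illustrative.

```matlab
% Sketch: Cauchy step for the model m_j(s) and the decrease bound of
% Assumption 2.1 (all data and names below are illustrative).
Ffun = @(x) [10 * (x(2) - x(1)^2); 1 - x(1)];   % Rosenbrock residuals (example)
Jfun = @(x) [-20 * x(1), 10; -1, 0];
x = [-1.2; 1]; mu = 1; theta_fcd = 1;
F = Ffun(x); J = Jfun(x); g = J' * F;           % gradient of f at x
gamma = mu * norm(F)^2;
Hg = J' * (J * g) + gamma * g;                  % (J'*J + gamma*I) * g
s_c = -(norm(g)^2 / (g' * Hg)) * g;             % Cauchy step
decrease = 0.5 * norm(F)^2 - (0.5 * norm(F + J * s_c)^2 + 0.5 * gamma * norm(s_c)^2);
bound = 0.5 * theta_fcd * norm(g)^2 / (norm(J)^2 + gamma);
fprintf('Cauchy decrease %.3e >= bound %.3e\n', decrease, bound);
```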

Since the Cauchy step is the first iterate of the conjugate gradient (CG) method applied to the minimization of the quadratic \(s \rightarrow m_j(s)\), it is natural to run CG further and stop only when the residual becomes sufficiently small. Since CG generates iterates by minimizing the quadratic model over nested Krylov subspaces, and the first subspace is the one generated by \(\nabla f(x_j)\) (see, e.g., [16, Theorem 5.2]), the decrease obtained at the first CG iteration (i.e., by the Cauchy step) is at least matched by all subsequent iterations. Thus, Assumption 2.1 holds for all iterates \( s_j^{\text{ cg }}\) generated by truncated CG whenever it is initialized with the null vector. The following lemma is similar to [17, Lemma 5.1] and will be useful for our worst-case complexity analysis.

Lemma 2.1

For the three steps proposed (exact, Cauchy, and truncated-CG), one has that

$$\begin{aligned} \Vert s_j\Vert \le \frac{\Vert \nabla f(x_j)\Vert }{\gamma _j}&~~\text{ and }~~&| s_j^\top ( \gamma _j s_j + \nabla f(x_j) ) | \le \frac{ \Vert J_{j}\Vert ^2 \Vert \nabla f(x_j)\Vert ^2 }{\gamma _j^{2}}. \end{aligned}$$

In what comes next, we will call all iterations j for which \(\rho _j\ge \eta \) successful, and we denote the set of their indices by the symbol \({\mathcal {S}}\), i.e., \({\mathcal {S}} := \{j \in {\mathbb {N}} ~|~ \rho _j \ge \eta \}.\)
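As a complement to the truncated-CG step discussed before Lemma 2.1, the following MATLAB sketch applies plain CG to the regularized normal equations of (2), started from the null vector and stopped once the linear-system residual is sufficiently small; the stopping tolerance and all names are illustrative assumptions.

```matlab
% Sketch of a truncated-CG solve of subproblem (2): plain CG applied to
% (J'*J + gamma*I) s = -J'*F, started from s = 0 and stopped when the
% residual of this linear system is small (tolerance and names illustrative).
function s = lm_step_tcg(F, J, gamma, tol, maxit)
    Aop = @(v) J' * (J * v) + gamma * v;   % product with J'*J + gamma*I (matrix-free)
    b = -(J' * F);
    s = zeros(size(b)); r = b; p = r;      % starting from s = 0, so the residual is b
    rs = r' * r;
    for k = 1:maxit
        Ap = Aop(p);
        alpha = rs / (p' * Ap);
        s = s + alpha * p;                 % the first iterate equals the Cauchy step
        r = r - alpha * Ap;
        rs_new = r' * r;
        if sqrt(rs_new) <= tol * norm(b)   % stop on a small relative residual
            break;
        end
        p = r + (rs_new / rs) * p;
        rs = rs_new;
    end
end
```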

3 Worst-Case Iteration Complexity and Global Convergence

We now establish a worst-case complexity bound of Algorithm 1. Namely, given a tolerance \(\epsilon \in ]0,1[\), we aim at deriving the number of iterations, in the worst case, needed to reach an iterate \(x_j\) such that

$$\begin{aligned} \left\| \nabla f(x_j) \right\|< \epsilon \text { or } \left\| F(x_j)\right\| < \max \left\{ \epsilon , (1+\epsilon )\Vert F({{\bar{x}}}_j)\Vert \right\} \end{aligned}$$
(3)

where \({{\bar{x}}}_j \in {{\,\mathrm{argmin}\,}}_{\{x \in {\mathbb {R}}^n: \nabla f(x)=0\}} \Vert x_j - x \Vert \). Without loss of generality, we will assume that \(\Vert F({\bar{x}}_j)\Vert \) is unique and independent of \(x_j\), and we denote it by \({{\bar{f}}}\). If this assumption is dropped, the same arguments in this section still yield asymptotic global convergence; in that case, \({{\bar{f}}} := \sqrt{2f({\bar{x}}_j)}=\Vert F({\bar{x}}_j)\Vert \) can simply be taken to be the value at the limit point of the sequence. We now state some classical assumptions.

Assumption 3.1

The function f is continuously differentiable in an open set containing \(L(x_0) := \{ x \in {\mathbb {R}}^n: f(x) \le f(x_0) \}\) with Lipschitz continuous gradient on \(L(x_0)\) with the constant \(\nu > 0\).

Assumption 3.2

The Jacobian J of F is uniformly bounded, i.e., there exists \(\kappa _{J} > 0\) such that \(\Vert J(x) \Vert \le \kappa _{J}\) for all x.

We begin by obtaining a condition on the parameter \(\mu _j\) that is sufficient for an iteration to be successful. We omit the proof as it is standard; see, for instance, Lemma 5.2 in [17].

Lemma 3.1

Let Assumptions 2.1, 3.1, and 3.2 hold. Suppose that at the j-th iteration of Algorithm 1, one has

$$\begin{aligned} \mu _j \; > \; \frac{\kappa }{\Vert F(x_j)\Vert ^2} \end{aligned}$$
(4)

where

$$\begin{aligned} \kappa := \frac{a+\sqrt{a^2+4a \kappa _J^2(1-\eta )}}{2(1-\eta )} \quad \text{ and } \quad a := \frac{\tfrac{\nu }{2}+2\kappa _J^2}{\theta _{fcd}}. \end{aligned}$$

Then, the iteration is successful.

Our next result states that, as long as the residual norm remains above the threshold appearing in (3), the parameter \(\mu _j\) cannot grow indefinitely. Without loss of generality, we assume that \(\epsilon \le \sqrt{\frac{\lambda \kappa }{\mu _0}}\), where \(\kappa \) is the same constant as in the previous lemma.

Lemma 3.2

Under Assumptions 2.1, 3.1, and 3.2, let j be a given iteration index such that for every \(l \le j\) it holds that \(\Vert F(x_l)\Vert \ge \max \left\{ \epsilon ,(1+\epsilon ){{\bar{f}}}\right\} \) where \(\epsilon \in ]0,1[\). Then, for every \(l \le j\), one also has

$$\begin{aligned} \mu _l \le \mu _{\max } := \frac{\lambda \kappa }{\max \left\{ \epsilon ^2,(1+\epsilon )^2{{\bar{f}}}^2\right\} }. \end{aligned}$$

Proof

We prove this result by contradiction. Suppose that \(l \ge 1\) is the first index such that

$$\begin{aligned} \mu _{l} > \frac{\lambda \kappa }{\max \left\{ \epsilon ^2,(1+\epsilon )^2{{\bar{f}}}^2\right\} }. \end{aligned}$$
(5)

By the updating rules on \(\mu _l\), either the iteration \(l-1\) is successful, in which case \(\mu _l \le \mu _{l-1} \le \frac{\lambda \kappa }{\max \left\{ \epsilon ^2,(1+\epsilon )^2{{\bar{f}}}^2\right\} }\) which contradicts (5), or the iteration \(l-1\) is unsuccessful, and thus,

$$\begin{aligned} \mu _{l} = \lambda \mu _{l-1} \quad \Rightarrow \quad \mu _{l-1} = \frac{\mu _l}{\lambda }> \frac{\kappa }{\max \left\{ \epsilon ^2,(1+\epsilon )^2{{\bar{f}}}^2\right\} } > \frac{\kappa }{\Vert F(x_l)\Vert ^2}; \end{aligned}$$

therefore, using Lemma 3.1 this implies that the \((l-1)\)-th iteration is successful which leads to a contradiction again. \(\square \)

Thanks to Lemma 3.2, we can now bound the number of successful iterations needed to reach an iterate satisfying the approximate criticality condition (3).

Proposition 3.1

Under Assumptions 2.1, 3.1, and 3.2, let \(\epsilon \in ]0,1[\) and let \(j_{\epsilon }\) be the first iteration index such that (3) holds. Then, if \({\mathcal {S}}_{\epsilon }\) is the set of indices of successful iterations prior to \(j_{\epsilon }\), one has:

$$\begin{aligned} \left| {\mathcal {S}}_{\epsilon } \right| \; \le \; {\mathcal {C}}\epsilon ^{-2} ~~\text{ with }~~{\mathcal {C}} := \frac{2 \left( \kappa _J^2+ \mu _{\max } \Vert F(x_0)\Vert ^2\right) }{\eta \theta _{fcd}}f(x_0). \end{aligned}$$

Proof

For any \(j \in {\mathcal {S}}_{\epsilon }\), one has

$$\begin{aligned} f(x_j) - f(x_{j+1})\ge & {} \eta \left( m_j(0) - m_j(s_{j})\right) \ge \eta \frac{\theta _{fcd}}{2}\frac{\Vert \nabla f(x_j)\Vert ^2}{\kappa _J^2+\mu _j\Vert F(x_j)\Vert ^2}. \end{aligned}$$

Hence, using the fact that \(\Vert \nabla f(x_j)\Vert \ge \epsilon \), \(\Vert F(x_j)\Vert \le \Vert F(x_0)\Vert \) and \(\mu _j \le \mu _{\max }\), we arrive at

$$\begin{aligned} f(x_j) - f(x_{j+1})\ge & {} \eta \frac{\theta _{fcd}}{2}\frac{\epsilon ^2}{\kappa _J^2+ \mu _{\max } \Vert F(x_0)\Vert ^2}. \end{aligned}$$

Consequently, by summing on all iteration indices within \({\mathcal {S}}_{\epsilon }\),

we obtain

$$\begin{aligned} f(x_0) - 0 \ge \sum _{j \in {\mathcal {S}}_{\epsilon }} f(x_j) - f(x_{j+1}) \ge |{\mathcal {S}}_{\epsilon }| \frac{\eta \theta _{fcd}}{2\left( {\kappa _J^2+ \mu _{\max } \Vert F(x_0)\Vert ^2}\right) } \epsilon ^2, \end{aligned}$$

hence the result. \(\square \)

Lemma 3.3

Under Assumptions 2.1, 3.1, and 3.2, let \({\mathcal {U}}_{\epsilon }\) denote the set of unsuccessful iterations of index less than or equal to \(j_{\epsilon }\). Then,

$$\begin{aligned} | {\mathcal {U}}_{\epsilon } | \; \le \; \log _{\lambda }\left( \frac{\kappa }{\mu _{\min }\epsilon ^2}\right) \, |{\mathcal {S}}_{\epsilon }|. \end{aligned}$$
(6)

Proof

Note that we necessarily have \(j_{\epsilon } \in {\mathcal {S}}_{\epsilon }\) (otherwise the definition of \(j_{\epsilon }\) would be contradicted). Our objective is to bound the number of unsuccessful iterations between two successive successful ones. Let \(\{j_0,\dots ,j_t=j_{\epsilon }\}\) be an ordering of \({\mathcal {S}}_{\epsilon }\) and let \(i \in \{0,\dots ,t-1\}\).

Due to the updating formulas for \(\mu _j\) on successful iterations, we have:

$$\begin{aligned} \mu _{j_i+1} \ge \max \{\mu _{\min },\bar{\mu }/\lambda \} \ge \mu _{\min }. \end{aligned}$$

Moreover, we have \(\Vert F(x_{j_i+1})\Vert \ge \epsilon \) by assumption. By Lemma 3.1, for any unsuccessful iteration \(j \in \{j_i + 1,\dots , j_{i+1}-1\}\), we must have: \( \mu _j \le \frac{\kappa }{\epsilon ^2}\), since otherwise \(\mu _j > \frac{\kappa }{\epsilon ^2} \ge \frac{\kappa }{\Vert F(x_j)\Vert ^2}\) and the iteration would be successful.

Using the updating rules for \(\mu _j\) on unsuccessful iterations, we obtain:

$$\begin{aligned} \forall j=j_i + 1,\dots , j_{i+1}-1, \qquad \mu _j = \lambda ^{j-j_i-1}\mu _{j_i+1} \ge \lambda ^{j-j_i-1}\mu _{\min }. \end{aligned}$$

Therefore, the number of unsuccessful iterations between \(j_i\) and \(j_{i+1}\), equal to \(j_{i+1}-j_i-1\), satisfies:

$$\begin{aligned} j_{i+1}-j_i - 1 \; \le \; \log _{\lambda }\left( \frac{\kappa }{\mu _{\min }\epsilon ^2}\right) . \end{aligned}$$
(7)

By considering (7) for \(i=0,\dots ,t-1\), we arrive at

$$\begin{aligned} \sum _{i=0}^{t-1} (j_{i+1}-j_i - 1) \; \le \; \log _{\lambda }\left( \frac{\kappa }{\mu _{\min }\epsilon ^2}\right) \left[ \left| {\mathcal {S}}_{\epsilon }\right| -1\right] . \end{aligned}$$
(8)

It remains to bound the number of possible unsuccessful iterations between the iteration of index 0 and the first successful iteration \(j_0\). Since \(\mu _0 \ge \mu _{\min }\), reasoning similar to that used to obtain (7) leads to

$$\begin{aligned} j_0-1 \le \log _{\lambda }\left( \frac{\kappa }{\mu _{\min }\epsilon ^2}\right) . \end{aligned}$$
(9)

Putting (8) and (9) together yields the expected result. \(\square \)

By combining the results from Proposition 3.1 and Lemma 3.3, we thus get the following complexity estimate.

Theorem 3.1

Under Assumptions 2.1, 3.1, and 3.2, let \(\epsilon \in ]0,1[\). Then, the first index \(j_{\epsilon }\) for which \( \Vert \nabla f(x_{j_{\epsilon }+1})\Vert< \epsilon ~~\text{ or }~~~\Vert F(x_{j_{\epsilon }+1})\Vert < \max \left\{ \epsilon ,(1+\epsilon ){{\bar{f}}}\right\} , \) is bounded above by

$$\begin{aligned} {\mathcal {C}}\left( 1+ \log _{\lambda }\left[ \frac{\kappa }{\mu _{\min }\epsilon ^2}\right] \right) \epsilon ^{-2}, \end{aligned}$$
(10)

where \({\mathcal {C}}\) is the constant defined in Proposition 3.1.

For the LM method proposed in this paper, we thus obtain an iteration complexity bound of \(\tilde{{\mathcal {O}}}\left( \epsilon ^{-2}\right) \), where the notation \(\tilde{{\mathcal {O}}}(\cdot )\) indicates the presence of logarithmic factors in \(\epsilon \). Note that the evaluation complexity bounds are of the same order.

We note that, by the definition of \({{\bar{f}}}\), \(\Vert F(x_j)\Vert ={{\bar{f}}}\) implies that \(\nabla f(x_j)=0\). Thus, by letting \(\epsilon \rightarrow 0\), Theorem 3.1 implies that

$$\begin{aligned} \liminf _{j\rightarrow \infty } \Vert \nabla f(x_j)\Vert \; = \; 0. \end{aligned}$$

In order to derive the global convergence result, we need to extend this to a limit result.

Theorem 3.2

Under Assumptions 2.1, 3.1, and 3.2, the sequence \(\{x_j\}\) generated by Algorithm 1 satisfies

$$\begin{aligned} \lim _{j\rightarrow \infty } \Vert \nabla f(x_j)\Vert \; = \; 0. \end{aligned}$$

Proof

Consider the case in which \({\mathcal {S}}\), the set of successful iterations, is finite. Then there exists \(j_0\) such that \(\Vert \nabla f(x_j)\Vert = \Vert \nabla f(x_{j_0})\Vert \) for all \(j \ge j_0\); therefore, from the previous theorem, we conclude that in this case

$$\begin{aligned} \Vert \nabla f(x_{j_0})\Vert \; = \; \lim _{j\rightarrow \infty } \Vert \nabla f(x_j)\Vert \; = \; \liminf _{j\rightarrow \infty } \Vert \nabla f(x_j)\Vert \; = \; 0. \end{aligned}$$

Alternatively, assume \({\mathcal {S}}\) is infinite and let \(\epsilon >0\). Since \((f(x_j))\) is monotonically decreasing and bounded from below, one has \(\lim _{j\rightarrow \infty } f(x_j) - f(x_{j+1})=0\). Since \(\frac{\eta \theta _{fcd} \epsilon ^2 }{2(\kappa _J^2 + \mu _0 \Vert F(x_0)\Vert ^2 )}>0\), we conclude, for j sufficiently large, that

$$\begin{aligned} 0 \le f(x_j) - f(x_{j+1}) \le \frac{\eta \theta _{fcd} \epsilon ^2 }{2(\kappa _J^2 + \mu _0 \Vert F(x_0)\Vert ^2 )}. \end{aligned}$$

Thus, for sufficiently large j, consider first the case \(j \in {\mathcal {S}}\). Then \(\rho _j \ge \eta \), and by rearranging the terms and using Assumption 2.1 we conclude that

$$\begin{aligned} \Vert \nabla f(x_j)\Vert\le & {} \sqrt{\frac{2(m_j(0)-m_j(s_j))(\kappa _J^2+\mu _0 \Vert F(x_0)\Vert ^2)}{\theta _{fcd}}} \\\le & {} \sqrt{\frac{2(\kappa _J^2 + \mu _0 \Vert F(x_0)\Vert ^2) \left( f(x_j) - f(x_{j+1})\right) }{\eta \theta _{fcd}}} \le \epsilon . \end{aligned}$$

If \(j \notin {\mathcal {S}}\), then \( \Vert \nabla f(x_j)\Vert \; = \; \Vert \nabla f(x_{{\hat{j}}})\Vert \; \le \epsilon , \) where \({\hat{j}} \in {\mathcal {S}}\) is the last successful iteration before j. Hence, it must hold that \( \lim _{j\rightarrow \infty } \Vert \nabla f(x_j)\Vert \; = \; 0.\)\(\square \)

4 Local Convergence

In this section, we prove local convergence of the algorithm, showing a quadratic rate for zero residual problems and an explicit linear rate for sufficiently small nonzero residuals. Since the problem is generally nonconvex and possibly has nonzero residuals, there can be multiple sets of stationary points with varying objective values. We consider a particular subset with a constant value of the objective.

Assumption 4.1

There exists a connected, isolated set \(X^*\) composed of stationary points of (1), each with the same value \(f^*\) of \(f(\cdot )=\frac{1}{2}\Vert F(\cdot )\Vert ^2\), and Algorithm 1 generates a sequence with an accumulation point \(x^*\in X^*\).

We note that under Assumption 4.1, the value \(\Vert F({{{\bar{x}}}})\Vert \) is unique for all \({{\bar{x}}}\in X^*\) which may not be the case for the residual vector \(F({{{\bar{x}}}})\). Thus we define \({{\bar{f}}} = \sqrt{2f({{\bar{x}}})}=\Vert F({{\bar{x}}})\Vert \) for \({{\bar{x}}}\in X^*\). Henceforth, from the global convergence analysis, we can assume, without loss of generality, that there exists a subsequence of iterates approaching this \(X^*\). This subsequence does not need to be unique, i.e., there may be more than one subsequence converging to separate connected sets of stationary points. We shall see that eventually, one of these sets shall “catch” the subsequence and result in direct convergence to the set of stationary points at a quadratic or linear rate, depending on \({{\bar{f}}}\).

In the sequel, \(N(x,\delta )\) denotes the closed ball with center x (a given vector) and radius \(\delta >0\) and \({{\,\mathrm{dist}\,}}(x,X^*)\) denotes the distance between the vector x and the set \(X^*\), i.e., \({{\,\mathrm{dist}\,}}(x,X^*)=\min _{y\in X^*}\Vert x-y\Vert ,\) and \({{{\bar{x}}}} \in {{\,\mathrm{argmin}\,}}_{y\in X^*}\Vert x-y\Vert \).

Assumption 4.2

It holds that F(x) is twice continuously differentiable around \(x^*\in X^*\) with \(x^*\) satisfying Assumption 4.1. In particular, this implies that there exists \(\delta _1>0\) such that for all \(x, ~y \in N(x^*,\delta _1)\),

$$\begin{aligned}&\Vert \nabla f(x)\Vert ^2 = \Vert J(x)^{\top }F(x)-J({{{\bar{x}}}})^{\top }{F({{\bar{x}}})}\Vert ^2 \le L_1 \Vert x-{{\bar{x}}}\Vert ^2, \end{aligned}$$
(11)
$$\begin{aligned}&\Vert F(x)-F(y)\Vert \le L_2 \Vert x-y\Vert , \end{aligned}$$
(12)
$$\begin{aligned}&\Vert F(y) -F(x) - J(x)(y-x)\Vert \le L_3 \Vert x-y\Vert ^2, \end{aligned}$$
(13)

where \(L_1\), \(L_2\), and \(L_3\) are positive constants.

From the triangle inequality and assuming (12), we get

$$\begin{aligned} \Vert F(x)\Vert -\Vert F(y)\Vert \le \Vert F(x)-F(y)\Vert \le L_2 \Vert x-y\Vert . \end{aligned}$$
(14)

We introduce the following additional assumption.

Assumption 4.3

There exist \(\delta _3>0\) and \(M >0\) such that

$$\begin{aligned} \forall x\in N(x^*,\delta _3),~~~~{{\,\mathrm{dist}\,}}(x,X^*)\le M \Vert F(x)- F({{\bar{x}}})\Vert . \end{aligned}$$

As the function \(x \rightarrow F(x)- F({{\bar{x}}})\) has a zero residual, the proposed error bound assumption can be seen as a generalization of the zero residual case [6,7,8,9]. Thus, any ill-posed zero residual problem, as considered in this line of work on quadratic local convergence for LM methods, satisfies our assumptions. Our assumptions also cover a range of nonzero residual problems, for instance, standard data assimilation problems [2], as illustrated by Example 4.1.

Example 4.1

Consider the following data assimilation problem \(F: {\mathbb {R}}^n \rightarrow {\mathbb {R}}^{m}\) defined, for a given \(x \in {\mathbb {R}}^n\), by \(F(x)= \left( (x - x_{\text{ b }})^{\top }, (H(x) - y)^{\top }\right) ^{\top }\), where \(x_{\text{ b }} \in {\mathbb {R}}^n\) is a background vector, \(y \in {\mathbb {R}}^{m-n}\) is the vector of observations, and \(H: {\mathbb {R}}^n \rightarrow {\mathbb {R}}^{m-n}\) is a smooth operator modeling the observations. For such problems, the stationary points are isolated, so each connected isolated set of stationary points is a singleton \(X^{*}=\{ {{\bar{x}}}\} \subset {\mathbb {R}}^n\); we write \({{\bar{F}}}=F({{\bar{x}}})\), and \( {{\,\mathrm{dist}\,}}(x,X^*)= \Vert x - {{\bar{x}}}\Vert \) for the \({\bar{x}}\) closest to x. Clearly, one has

$$\begin{aligned} \Vert F(x) - F({{\bar{x}}})\Vert ^2= & {} \Vert x - {{\bar{x}}}\Vert ^2 + \Vert H(x) - H({{\bar{x}}})\Vert ^2 \ge \Vert x - {{\bar{x}}}\Vert ^2 = \; {{\,\mathrm{dist}\,}}(x,X^*)^2. \end{aligned}$$

Thus, for these typical problems arising in data assimilation, Assumptions 4.1, 4.2, and 4.3 are satisfied.
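To make Example 4.1 concrete, here is a small MATLAB sketch of the corresponding residual and Jacobian; the observation operator H, the background \(x_{\text{ b }}\), and the data y below are hypothetical choices, not taken from the references.

```matlab
% Sketch of the residual of Example 4.1 with a simple hypothetical observation
% operator H(x) = sin(x); x_b and y are illustrative data (here m = 2n).
n = 3;
x_b = zeros(n, 1);                      % background vector
y = 0.1 * ones(n, 1);                   % observations
H = @(x) sin(x);                        % smooth observation operator (assumed)
Ffun = @(x) [x - x_b; H(x) - y];        % F(x) = ((x - x_b)', (H(x) - y)')'
Jfun = @(x) [eye(n); diag(cos(x))];     % Jacobian of F
x = randn(n, 1);
fprintf('f(x) = %.4f, ||grad f(x)|| = %.4f\n', ...
        0.5 * norm(Ffun(x))^2, norm(Jfun(x)' * Ffun(x)));
```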

Example 4.2

Consider \(F: {\mathbb {R}}^3 \rightarrow {\mathbb {R}}^3\) defined, for a given \(x \in {\mathbb {R}}^3\), by

$$\begin{aligned} F(x)= (\exp (x_1 - x_2) -1, x_3-1, x_3 +1)^{\top }. \end{aligned}$$

Clearly, F is Lipschitz smooth, hence Assumption 4.2 is satisfied. We note that every \({\bar{x}}\in X^*=\{x \in {\mathbb {R}}^3: x_1= x_2 ~\text{ and }~ x_3=0 \}\) satisfies \(F({{\bar{x}}})=(0, -1, 1)^{\top } \) and, for a given x, one has \({{\,\mathrm{dist}\,}}(x,X^*)= \sqrt{\frac{(x_1-x_2)^2}{2} + x_3^2}.\) Hence, for all \(x \in {\mathbb {R}}^3\) such that \(|x_1-x_2 |\le \frac{1}{2}\), one concludes that

$$\begin{aligned} \Vert F(x) - F({{\bar{x}}})\Vert= & {} \sqrt{(\exp (x_1 - x_2) - 1)^2 + 2 x^2_3} \ge \; {{\,\mathrm{dist}\,}}(x,X^*). \end{aligned}$$

Thus, for this example, Assumptions 4.1 and 4.3 are also satisfied.
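The inequality of Example 4.2 can be checked numerically; the following MATLAB sketch does so at a random point with \(|x_1-x_2|\le \frac{1}{2}\) (all variable names are ours).

```matlab
% Numerical check of the error bound of Example 4.2 at a random point
% with |x1 - x2| <= 1/2 (variable names are illustrative).
Ffun = @(x) [exp(x(1) - x(2)) - 1; x(3) - 1; x(3) + 1];
Fbar = [0; -1; 1];                                    % value of F on X*
dist_Xstar = @(x) sqrt((x(1) - x(2))^2 / 2 + x(3)^2); % distance to X*
x = randn(3, 1);
x(2) = x(1) - 0.5 * (2 * rand - 1);                   % enforce |x1 - x2| <= 1/2
fprintf('||F(x) - F(xbar)|| = %.4f >= dist(x, X*) = %.4f\n', ...
        norm(Ffun(x) - Fbar), dist_Xstar(x));
```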

From the global convergence results, we have established that there is a subsequence of successful iterations converging to the set of stationary points \(X^*\). In this section, we begin by considering the subsequence of iterations that immediately follow the successful iterations, i.e., the subsequence \({\mathcal {K}}=\{j+1 : \, j\in {\mathcal {S}}\}\). We shall present the results with a slight abuse of notation that simplifies the presentation without sacrificing accuracy or generality: every time we denote a quantity \(a_j\), the index j corresponds to an element of the subsequence \({\mathcal {K}}\) defined above; thus, when we say that a particular statement holds eventually, this means that it holds for all \(j\in {\mathcal {K}}\) with j sufficiently large. Let \({\hat{\mu }}\) be an upper bound for \( \mu _j\); note that such a bound exists by the formulation of Algorithm 1. We also set \(\delta := \min (\delta _1,\delta _2,\delta _3)\). In the proofs, we follow the structure of the local convergence analysis in [6], with the addition that the step is accepted by the globalization procedure. We use \({{{\bar{F}}}_j}\) to denote \(F({{{\bar{x}}}_j})\). The first lemma is similar to [6, Lemma 2.1].

Lemma 4.1

Suppose that Assumptions 4.1 and 4.2 are satisfied.

If \(x_j\in N(x^*,\frac{\delta }{2})\), then the solution \(s_j\) to (2) satisfies

$$\begin{aligned} \Vert J_js_j + F_j\Vert - {{\bar{f}}} \le C_1{{\,\mathrm{dist}\,}}(x_j,X^*)^2, \end{aligned}$$

where \(C_1\) is a positive constant independent of j.

Proof

Let us assume that \({{\bar{f}}}>0\). Otherwise, the proof is the same as in [6, Lemma 2.1]. Without loss of generality, since \(f(x_j)\) is monotonically decreasing, we can consider that j is sufficiently large such that \(\Vert F_j\Vert \le 2 {{\bar{f}}}\). Hence, we get

$$\begin{aligned} \Vert J_js_j+F_j\Vert ^2\le & {} 2 m_j({{\bar{x}}}_j-x_j) = \Vert J_j({{\bar{x}}}_j-x_j)+F_j\Vert ^2 +\mu _j\Vert F_j\Vert ^2\Vert x_j-{{\bar{x}}}_j\Vert ^2 \\\le & {} \left( \Vert {{{\bar{F}}}_j}\Vert + \Vert J_j({{\bar{x}}}_j-x_j)+F_j - {{{\bar{F}}}}_j \Vert \right) ^2 +\mu _j\Vert F_j\Vert ^2\Vert x_j-{{\bar{x}}}_j\Vert ^2 \\\le & {} \left( \Vert {{{\bar{F}}}_j}\Vert +L_3\Vert x_j-{{\bar{x}}}_j\Vert ^2\right) ^2+4\mu _j\Vert {{\bar{F}}}_j\Vert ^2\Vert x_j-{{\bar{x}}}_j\Vert ^2 \\= & {} L_3^2\Vert x_j-{{\bar{x}}}_j\Vert ^4+2\Vert {{{\bar{F}}}}_j\Vert (2 \mu _j \Vert {{{\bar{F}}}}_j\Vert +L_3 )\Vert x_j-{{\bar{x}}}_j\Vert ^2+\Vert {{{\bar{F}}}_j}\Vert ^2 \\\le & {} \left( \left( 2 {\hat{\mu }} {{\bar{f}}}+L_3 \right) \Vert x_j-{{\bar{x}}}_j\Vert ^2+{{\bar{f}}}\right) ^2, \end{aligned}$$

which concludes the proof. \(\square \)

Lemma 4.2

Suppose that Assumptions 4.1 and 4.2 are satisfied.

If \(x_j\in N(x^*,\frac{\delta }{2})\), then the solution \(s_j\) to (2) satisfies

$$\begin{aligned} \Vert s_j\Vert \le C_2 {{\,\mathrm{dist}\,}}(x_j,X^*), \end{aligned}$$
(15)

where \(C_2\) is a positive constant independent of j.

Proof

We assume that \(\Vert {{\bar{F}}}_j\Vert ={{\bar{f}}}>0\); otherwise, the proof is the same as in [6, Lemma 2.1]. In this case, using the fact that \(\Vert F_j\Vert \ge {{\bar{f}}}\) and \(\mu _j \ge \mu _{\min }\), one has

$$\begin{aligned} \Vert s_j\Vert \le \frac{\Vert \nabla f(x_j)\Vert }{\gamma _j}=\frac{\Vert \nabla f(x_j)\Vert }{\mu _j \Vert F_j\Vert ^2} \le \frac{\sqrt{L_1}}{\mu _{\min } {{\bar{f}}}^2} {{\,\mathrm{dist}\,}}(x_j,X^*), \end{aligned}$$

which concludes the proof. \(\square \)

Lemma 4.3

Suppose that Assumptions 4.1, 4.2, and 4.3 are satisfied and consider the case \({{\bar{f}}}=0\). Then, for j sufficiently large, one has \(\rho _j\ge \eta \).

Proof

In fact, for j sufficiently large, using Lemmas 4.1 and 4.2, one gets

$$\begin{aligned}&2 \left( m_j(0) - m_j(s_j)\right) = \Vert F_j\Vert ^2-\Vert F_j+J_j s_j\Vert ^2-\gamma _j\Vert s_j\Vert ^2 \\&\quad \ge \left( \Vert F_j\Vert +\Vert F_j+J_js_j\Vert \right) \left( \frac{1}{M}\Vert x_j-{{\bar{x}}}_j\Vert -C_1 \Vert x_j-{{\bar{x}}}_j\Vert ^2 \right) -C^2_2\gamma _j \Vert x_j-{{\bar{x}}}_j\Vert ^2 \\&\quad \ge \Vert F_j\Vert \left( \frac{1}{M}\Vert x_j-{{\bar{x}}}_j\Vert -C_1 \Vert x_j-{{\bar{x}}}_j\Vert ^2 \right) -C^2_2\mu _j \Vert F_j\Vert ^2\Vert x_j-{{\bar{x}}}_j\Vert ^2 \\&\quad \ge \frac{1}{M}\Vert x_j-{{\bar{x}}}_j\Vert \left( \frac{1}{M}\Vert x_j-{{\bar{x}}}_j\Vert -C_1 \Vert x_j-{{\bar{x}}}_j\Vert ^2 \right) -L^2_2 C^2_2 {{\hat{\mu }}} \Vert x_j-{{\bar{x}}}_j\Vert ^4 \\&\quad = \frac{1}{M^2}\Vert x_j-{{\bar{x}}}_j\Vert ^2- \frac{C_1}{M}\Vert x_j-{{\bar{x}}}_j\Vert ^3-L^2_2 C^2_2 {{\hat{\mu }}} \Vert x_j-{{\bar{x}}}_j\Vert ^4>0. \end{aligned}$$

On the other hand, using the fact that \(\Vert J_j\Vert \) is bounded (by \(\kappa _J>0\)), Lemma 4.2 and (since \({\bar{F}}_j=0\)) \(\frac{1}{M} \Vert {{\bar{x}}}_j-x_j\Vert \le \Vert F_j\Vert \le L_2\Vert {{\bar{x}}}_j-x_j\Vert \), one gets

$$\begin{aligned}&2\left| f(x_j+{s_j}) - m_j(s_j)\right| = \left| \Vert F(x_j+s_j)\Vert ^2-\Vert F_j+J_j s_j\Vert ^2-\gamma _j\Vert s_j\Vert ^2\right| \\&\quad = \left| \left( \Vert F(x_j+s_j)\Vert -\Vert F_j+J_j s_j\Vert \right) \left( \Vert F(x_j+s_j)\Vert +\Vert F_j+J_j s_j\Vert \right) -\gamma _j\Vert s_j\Vert ^2\right| \\&\quad \le L_3 \Vert s_j\Vert ^2\left( \Vert F(x_j+s_j)\Vert +\Vert F_j\Vert +\Vert J_j\Vert \Vert s_j\Vert \right) + \gamma _j\Vert s_j\Vert ^2 \\&\quad \le L_3 \Vert s_j\Vert ^2\left( L_2 \Vert x_j+s_j-{{\bar{x}}}_j\Vert + L_2 \Vert x_j-{{\bar{x}}}_j\Vert + \Vert J_j\Vert \Vert s_j\Vert \right) +\gamma _j\Vert s_j\Vert ^2\\&\quad \le (C_2 L_2+ 2L_2+C_2 \kappa _{J}) C_2^2L_3 \Vert x_j-{{\bar{x}}}_j\Vert ^3+L_2^2 C_2^2{\hat{\mu }} \Vert x_j-{{\bar{x}}}_j\Vert ^4. \end{aligned}$$

Hence, for j sufficiently large

$$\begin{aligned} |1-\rho _j|= & {} \left| \frac{m_j(0)-f(x_j) + f(x_j+{s_j}) - m_j(s_j)}{m_j(0)-m_j(s_j)} \right| \\\le & {} \frac{(C_2 L_2+ 2L_2+C_2 \kappa _{J}) C_2^2L_3 \Vert x_j-{{\bar{x}}}_j\Vert +L_2^2 C_2^2{\hat{\mu }} \Vert x_j-{{\bar{x}}}_j\Vert ^2}{\frac{1}{M^2}- \frac{C_1}{M}\Vert x_j-{{\bar{x}}}_j\Vert -L^2_2 C^2_2 {{\hat{\mu }}} \Vert x_j-{{\bar{x}}}_j\Vert ^2}. \end{aligned}$$

Thus, \(|1-\rho _j| \rightarrow 0\) as j goes to \(+\infty \). \(\square \)

For the nonzero residual case, we must consider a specific instance of the algorithm. In particular, we specify Step 3 of Algorithm 1 to be:

If \(\rho _j\ge \eta \), then set \(x_{j+1}=x_j+s_j\), \(\mu _{j+1}=\bar{\mu }\) and \(\bar{\mu }=\mu _{j}\).

Note that this modification is still consistent with the presentation of Algorithm 1.

Lemma 4.4

Suppose that Assumptions 4.1, 4.2, and 4.3 are satisfied and consider the case \({{\bar{f}}}>0\). Then, for j sufficiently large, one has \(\rho _j\ge \eta \).

Proof

Indeed, by the new updating mechanism, the parameter \(\bar{\mu }\) is monotonically nondecreasing. In particular, if there were an infinite number of unsuccessful steps, then \({\mu _j}\rightarrow \infty \), and hence there would exist \(j_0\) such that, for \(j\ge j_0\), \(\mu _j> \frac{\kappa }{{\bar{f}}^2} \ge \frac{\kappa }{\Vert F(x_j)\Vert ^2}\), which together with Lemma 3.1 yields a contradiction. Thus, there is only a finite number of unsuccessful steps, and every step is accepted for j sufficiently large. \(\square \)

Proposition 4.1

Suppose that Assumptions 4.1, 4.2, and 4.3 are satisfied. Let \(x_j\), \(x_{j+1} \in N(x^*,\delta /2)\). One has

$$\begin{aligned} \left( 1- \sqrt{m} L_1 M^{2} {{\bar{f}}}\right) {{\,\mathrm{dist}\,}}(x_{j+1},X^*)^2 \le C_3^2 {{\,\mathrm{dist}\,}}(x_{j},X^*)^4 + {{{\hat{C}}}_3}^2 {{\bar{f}}} {{\,\mathrm{dist}\,}}(x_{j},X^*)^2, \end{aligned}$$

where \(C_3:= M \sqrt{C_1^2 + 2L_3C_1 C_2^2 + L_3^2C_2^4}\) and \({{\hat{C}}}_3:=M \sqrt{ 2C_1 + 2L_3C_2^2}\).

Proof

Indeed, using Assumption 4.3, Lemma 4.1, the fact that the step is accepted for j sufficiently large, and \({{\bar{f}}} =\Vert {{{\bar{F}}}_{j+1}}\Vert \), one has

$$\begin{aligned}&\Vert x_{j+1}-{{\bar{x}}}_{j+1}\Vert ^2 \le M^{2}\Vert F(x_j+s_j)-{{{\bar{F}}}}_{j+1}\Vert ^2 \\&\quad \le M^{2} \left( \Vert F(x_j+s_j)\Vert ^2-2F(x_j+s_j)^{\top } {{{\bar{F}}}}_{j+1}+{{\bar{f}}}^2\right) \\&\quad \le M^{2} \left( \left( \Vert J(x_{j})s_{j}+F_j\Vert +L_3 \Vert s_{j}\Vert ^2\right) ^2 -2F(x_j+s_j)^{\top } {{{\bar{F}}}}_{j+1}+{{\bar{f}}}^2\right) \\&\quad \le M^{2} \left( \left\| J(x_{j})s_{j}+F_j \right\| ^2+2L_3\Vert J(x_{j})s_{j}+F_j\Vert \Vert s_{j}\Vert ^2 +L^2_3 \Vert s_{j}\Vert ^4 \right. \\&\quad \left. \quad -2F\left( x_j+s_j\right) ^{\top } {{{\bar{F}}}}_{j+1}+{{\bar{f}}}^2 \right) \\&\quad \le M^{2} \left( C_1^2\left\| x_j-{{\bar{x}}}_j\right\| ^4+2C_1\Vert x_j-{{\bar{x}}}_j\Vert ^2{{\bar{f}}}+{{\bar{f}}} ^2+2L_3C_1\Vert x_j-{{\bar{x}}}_j\Vert ^2\Vert s_{j}\Vert ^2\right. \\&\left. \qquad +\, 2L_3{{\bar{f}}} \Vert s_{j}\Vert ^2 +L^2_3 \Vert s_{j}\Vert ^4 -2F\left( x_j+s_j\right) ^{\top } {{{\bar{F}}}}_{j+1}+{{\bar{f}}}^2 \right) . \end{aligned}$$

Therefore, using Lemma  4.2, one gets

$$\begin{aligned} \Vert x_{j+1}-{{\bar{x}}}_{j+1}\Vert ^2 \le C_3^2 \Vert x_j-{{\bar{x}}}_j\Vert ^4 + {{{\hat{C}}}_3}^2 {{\bar{f}}} \Vert x_j-{{\bar{x}}}_j\Vert ^2 + 2 M^{2} |F(x_j+s_j)^{\top } {{{\bar{F}}}}_{j+1}-{{\bar{f}}}^2|,\nonumber \\ \end{aligned}$$
(16)

where \(C_3 := M \sqrt{C_1^2 + 2L_3C_1 C_2^2 + L_3^2 C_2^4}\) and \( {{\hat{C}}}_3:=M \sqrt{ 2C_1 + 2L_3C_2^2}\) are positive constants. Moreover, by applying a Taylor expansion to \(x \rightarrow F(x)^{\top } {{{\bar{F}}}}_{j+1}\) at the point \(x_{j+1}=x_j+s_j\) around \({{\bar{x}}}_{j+1}\), there exists \(R>0\) such that

$$\begin{aligned} |F_{j+1}^{\top } {{{\bar{F}}}}_{j+1} - {{\bar{f}}} ^2 |\le & {} |{(J({{\bar{x}}}_{j+1})^{\top }{{\bar{F}}}_{j+1})}^{\top } (x_{j+1}-{{\bar{x}}}_{j+1})|+ R \Vert x_{j+1}-{{\bar{x}}}_{j+1}\Vert ^2 \\= & {} |{\nabla f({{\bar{x}}}_{j+1})}^{\top } (x_{j+1}-{{\bar{x}}}_{j+1})| + R \Vert x_{j+1}-{{\bar{x}}}_{j+1}\Vert ^2 \\= & {} R \Vert x_{j+1}-{{\bar{x}}}_{j+1}\Vert ^2 ~~~~\text{ by } \text{ using } {{\bar{x}}}_{j+1} \in X^*. \end{aligned}$$

Note that the Hessian of \(x \rightarrow F(x)^{\top } {{{\bar{F}}}}_{j+1}\) is equal to \(\sum _{i=1}^m ({{\bar{F}}}_{j+1})_i \nabla ^2 F_i(x)\), and from Assumption 4.2 the \(\nabla ^2 F_i(x)\) are bounded. Hence, the constant R is bounded as follows: \(R \le \frac{L_1}{2} \sum _{i=1}^m |({\bar{F}}_{j+1})_i| \le \frac{\sqrt{m} L_1}{2} \Vert {{{\bar{F}}}}_{j+1}\Vert . \) Combining the obtained Taylor expansion and (16) gives

$$\begin{aligned} \Vert x_{j+1}-{{\bar{x}}}_{j+1}\Vert ^2 \le C_3^2 \Vert x_j-{{\bar{x}}}_j\Vert ^4 + {{{\hat{C}}}_3}^2 {{\bar{f}}} \Vert x_j-{{\bar{x}}}_j\Vert ^2 + \sqrt{m} L_1 M^{2} {{\bar{f}}} \Vert x_{j+1}-{{\bar{x}}}_{j+1}\Vert ^2, \end{aligned}$$

which completes this proof. \(\square \)

In the next lemma, we show that, once the iterates \(\{x_j\}_j\) lie sufficiently near their limit point \(x^*\), the sequence \(\{{{\,\mathrm{dist}\,}}(x_{j},X^*)\}_j\) converges to zero quadratically if the problem has a zero residual, or linearly when the residual is sufficiently small.

Lemma 4.5

Suppose that Assumptions 4.1, 4.2, and 4.3 are satisfied. Let \(\{x_j\}_j\) be a sequence generated by the proposed algorithm. Suppose that both \(x_j\) and \(x_{j+1}\) belong to \(N(x^*,\delta /2)\). If the problem has a zero residual, i.e., \({{\bar{f}}}=0\), then

$$\begin{aligned} {{\,\mathrm{dist}\,}}(x_{j+1},X^*)\le C_3{{\,\mathrm{dist}\,}}(x_{j},X^*)^2, \end{aligned}$$
(17)

where \(C_3\) is a constant defined according to Proposition 4.1.

Otherwise, if \({{\bar{f}}} < \min \left\{ \frac{1}{\sqrt{m} L_1 M^{2}},\frac{1-C^2_3\delta ^2}{{{{\hat{C}}}_3}^2 + \sqrt{m} L_1 M^{2}}\right\} ,\) then

$$\begin{aligned} {{\,\mathrm{dist}\,}}(x_{j+1},X^*)\le C_4{{\,\mathrm{dist}\,}}(x_{j},X^*), \end{aligned}$$
(18)

where \(C_4\in ]0,1[\) is a positive constant independent of j.

Proof

Indeed, in the zero residual case, i.e., \({{\bar{f}}}=0\), Proposition 4.1 reduces to \( {{\,\mathrm{dist}\,}}(x_{j+1},X^*) \le C_3 {{\,\mathrm{dist}\,}}(x_{j},X^*)^2.\)

If the problem has a small residual, i.e., \({{\bar{f}}} < \min \left\{ \frac{1}{\sqrt{m} L_1 M^{2}},\frac{1-C^2_3\delta ^2}{{{{\hat{C}}}_3}^2 + \sqrt{m} L_1 M^{2}}\right\} ,\) then Proposition 4.1 yields

$$\begin{aligned} {{\,\mathrm{dist}\,}}(x_{j+1},X^*)^2 \le \frac{C_3^2 \delta ^2 + {{{\hat{C}}}_3}^2 {{\bar{f}}} }{1 - \sqrt{m} L_1 M^{2} {{\bar{f}}}} {{\,\mathrm{dist}\,}}(x_{j},X^*)^2 = C^2_4{{\,\mathrm{dist}\,}}(x_{j},X^*)^2, \end{aligned}$$

where \(C_4:=\sqrt{ \frac{C_3^2 \delta ^2 + {{{\hat{C}}}_3}^2 {{\bar{f}}} }{1 - \sqrt{m} L_1 M^{2} {{\bar{f}}} }} \in ]0,1[\). \(\square \)

The final theorem is standard (see, e.g., [6, Lemma 2.3]).

Theorem 4.1

Suppose that Assumptions 4.1, 4.2, and 4.3 are satisfied. If \({{\bar{f}}}=0\), then Algorithm 1 converges locally quadratically to \(X^*\). Otherwise, if the problem has a small nonzero residual as in Lemma 4.5, Algorithm 1 converges locally at a linear rate to \(X^*\).

5 Numerical Results

In this section, we report the results of some preliminary experiments performed to test the practical behavior of Algorithm 1. All procedures were implemented in MATLAB and run using MATLAB 2019a on a MacBook Pro (2.4 GHz Intel Core i5, 4 GB RAM); the machine precision is \(\epsilon _m \sim 2\cdot 10^{-16}\).

We compare our proposed algorithm with the LM method proposed in [15]. The latter algorithm can be seen as similar to Algorithm 1, except that \(\gamma _j := \mu _j \Vert \nabla f(x_j)\Vert \), where the parameter \(\mu _j\) is updated in the following way. Given constants \(c_0>1\), \( c_1= c_0(c_0 +1)^{-1}\) and \(0< \eta< \eta _1< \eta _2<1\), if the iteration is unsuccessful, then \(\mu _{j}\) is increased (i.e., \(\mu _{j+1}:= c_0 \mu _{j}\)). Otherwise, if \(\Vert \nabla f(x_j)\Vert \mu _j < \eta _1\), then \(\mu _{j+1}= c_0 \mu _j\); if \(\Vert \nabla f(x_j)\Vert \mu _j>\eta _2\), then \(\mu _{j+1}=\max \{c_1 \mu _j, \mu _{\min } \} \); and \(\mu _j\) is kept unchanged otherwise. The LM method proposed in [15] was shown to be globally convergent with a favorable complexity bound, but its local behavior was not investigated. In our comparison, we refer to the implementation of this method as LM-(global), while Algorithm 1 is referred to as LM-(global and local), since it theoretically guarantees both global and local convergence properties.

Both algorithms were written in MATLAB, and the subproblem was solved using the backslash operator. For the LM-(global and local) method, two variants were tested. In the first one, named LM-(global and local)-V1, we set the parameter \(\mu _{j+1}\) equal to \(\max \{{{\bar{\mu }}}/\lambda , \mu _{\min }\}\) if the iteration is declared successful. In the second variant, named LM-(global and local)-V2, the parameter is set to \(\mu _{j+1}={\bar{\mu }}\) if the iteration j is successful. The parameters defining the implemented algorithms were set as \( \eta =10^{-2}, ~ \eta _1=0.1, ~ \eta _2=0.9, ~\lambda = c_0=5, ~\mu _{0}=1\) and \(\mu _{\min }=10^{-16}. \)

As a set of problems \({\mathcal {P}}\), we used the well-known 33 Moré/Garbow/Hillstrom problems [20]. All the tested problems are smooth and have a least-squares structure. The residual function F and the Jacobian matrix of each test problem [20] are implemented in MATLAB. Some of these problems have a nonzero value at the optimum and are thus consistent with the scope of the paper. To obtain a larger test set, we created an additional 14 problems by varying the problem dimension n whenever possible. For all the tested problems, we used the starting points \(x_0\) given in the problems' original presentation [20]. All algorithms were stopped when \( \Vert \nabla f(x_j)\Vert \le \epsilon \), where \(\epsilon \) is the required accuracy level. If an algorithm did not converge within a maximum number of iterations \(j_{\max }:=10000\), it was considered to have failed.
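As a reading aid, the following MATLAB sketch encodes the \(\mu \)-update of LM-(global) exactly as described above; the function name and argument list are our own illustrative choices.

```matlab
% Sketch of the mu-update of LM-(global) [15] as described above, where
% gamma_j = mu_j * ||grad f(x_j)||; the function name and arguments are ours.
function mu_next = update_mu_lm_global(mu, ngrad, success, c0, c1, eta1, eta2, mu_min)
    if ~success
        mu_next = c0 * mu;               % unsuccessful iteration: increase
    elseif ngrad * mu < eta1
        mu_next = c0 * mu;               % successful, ||grad f||*mu small: increase
    elseif ngrad * mu > eta2
        mu_next = max(c1 * mu, mu_min);  % successful, ||grad f||*mu large: decrease
    else
        mu_next = mu;                    % otherwise keep unchanged
    end
end
```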

For our comparison, we used the performance profiles proposed by Dolan and Moré [21] over the set of problems \({\mathcal {P}}\) (of cardinality \(|{\mathcal {P}}|\)). For a set of algorithms \({\mathcal {S}}\), the performance profile \(\rho _s(\tau )\) of an algorithm s is defined as the fraction of problems on which the performance ratio \(r_{p,s}\) is at most \(\tau \), \( \rho _s(\tau ) \; = \; \frac{1}{|{\mathcal {P}}|} \text{ size } \{ p \in {\mathcal {P}}: r_{p,s} \le \tau \}.\) The performance ratio \(r_{p,s}\) is in turn defined by \(r_{p,s} \; = \; \frac{t_{p,s} }{\min \{t_{p,s}: s \in {\mathcal {S}}\}},\) where \(t_{p,s} > 0\) measures the performance of algorithm s when solving problem p, here taken to be the number of iterations. Better performance of algorithm s, relative to the other algorithms on the set of problems, is indicated by higher values of \(\rho _s(\tau )\). In particular, efficiency is measured by \(\rho _s(1)\) (the fraction of problems for which algorithm s performs best), and robustness is measured by \(\rho _s(\tau )\) for \(\tau \) sufficiently large (the fraction of problems solved by s). For better visualization, we plot the performance profiles in a \(\log _2\)-scale.
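For completeness, the following MATLAB sketch shows how such profiles can be computed from a matrix of iteration counts; it is a hypothetical helper, not the script used to produce Fig. 1.

```matlab
% Sketch: performance profiles (Dolan and More) from a matrix T of iteration
% counts, T(p, s) = iterations of solver s on problem p (Inf for failures).
function perf_profiles(T, names)
    [nprob, nsolv] = size(T);
    r = T ./ min(T, [], 2);                       % performance ratios r_{p,s}
    tau = unique(r(isfinite(r)));                 % breakpoints of the profiles
    figure; hold on;
    for s = 1:nsolv
        rho = arrayfun(@(t) sum(r(:, s) <= t) / nprob, tau);
        stairs(log2(tau), rho, 'DisplayName', names{s});
    end
    xlabel('log_2(\tau)'); ylabel('\rho_s(\tau)'); legend('show'); hold off;
end
```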

Fig. 1: Obtained performance profiles for the two levels of accuracy, \(10^{-3}\) and \(10^{-5}\)

We present the performance profiles obtained for the two levels of accuracy in Fig. 1. For an accuracy level of \(10^{-3}\), the LM-(global and local) variants are more efficient than LM-(global): LM-(global and local)-V1 performed best on more than \(60 \%\) of the tested problems and LM-(global and local)-V2 on \(40 \%\), while LM-(global) was best on less than \(30 \%\). When it comes to robustness, all the solvers exhibit good performance. For the higher accuracy, the two variants of LM-(global and local) outperform LM-(global). The LM-(global and local)-V1 variant shows the best performance both in terms of efficiency and robustness.

To assess the local convergence rate, we estimated the order of convergence by

$$\begin{aligned} {\text {EOC}} := \log \left( \frac{\Vert \nabla f(x_{j_f})\Vert }{\max \{1,\Vert \nabla f(x_{0})\Vert \}} \right) / \log \left( \frac{\Vert \nabla f(x_{j_f-1})\Vert }{\max \{1,\Vert \nabla f(x_{0})\Vert \}} \right) , \end{aligned}$$

where \(j_f\) is the index of the final computed iterate. When \( {\text {EOC}} \ge 1.8\), the algorithm is said to be quadratically convergent. If \( 1.8 > {\text {EOC}} \ge 1.1\), the algorithm is regarded as superlinearly convergent. Otherwise, the algorithm is at best linearly convergent. The estimated orders of convergence (see Table 2) show the good local behavior of the LM-(global and local) variants compared to LM-(global). In fact, the LM-(global and local) variants converged quadratically or superlinearly on 38 (V1) and 33 (V2) problems, respectively, while LM-(global) showed quadratic or superlinear convergence on only 17 problems.
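For reference, a minimal MATLAB sketch of this EOC computation on a synthetic, quadratically convergent gradient-norm history (the data are purely illustrative):

```matlab
% Sketch: estimated order of convergence (EOC) on a synthetic history of
% gradient norms ||grad f(x_0)||, ..., ||grad f(x_jf)|| (illustrative data).
gnorm = [1e0, 1e-1, 1e-2, 1e-4, 1e-8];
eoc = log(gnorm(end) / max(1, gnorm(1))) / log(gnorm(end - 1) / max(1, gnorm(1)));
fprintf('EOC = %.2f\n', eoc);   % about 2, i.e., quadratic convergence
```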

Table 2 Order of convergence for problems in \({\mathcal {P}}\) with the accuracy level \(\epsilon =10^{-5}\)

6 Conclusions

In this paper, we presented and analyzed a novel LM method for solving nonlinear least-squares problems. We formulated a globally convergent LM method with a strong worst-case iteration complexity bound. The proposed method converges locally at a quadratic rate for zero residual problems and at a linear rate for small nonzero residuals. Preliminary numerical results confirmed the theoretical behavior. Future research directions include problems with constraints as well as problems with noisy data.