1 Introduction

We study the limited memory steepest descent method (LMSD), introduced by Fletcher [1], in the context of unconstrained optimization problems for a continuously differentiable function f:

$$ \min _{\textbf{x}\in \mathbb {R}^n} f(\textbf{x}). $$

The iteration for a steepest descent scheme reads

$$\begin{aligned} \textbf{x}_{k+1} = \textbf{x}_k - \beta _k \, \textbf{g}_k \ = \ \textbf{x}_k - \alpha _k^{-1} \, \textbf{g}_k, \end{aligned}$$

where \(\textbf{g}_k = \nabla f(\textbf{x}_k)\) is the gradient, \(\beta _k>0\) is the steplength, and its inverse \(\alpha _k = \beta _k^{-1}\) is usually chosen as an approximate eigenvalue of an (average) Hessian. We refer to [2, 3] for recent reviews on various steplength selection procedures.

The key idea of LMSD is to store the latest \(m > 1\) gradients, and to compute (at most) m new stepsizes for the following iterations of the gradient method. We first consider the strictly convex quadratic problem

$$\begin{aligned} \min _{\textbf{x}\in \mathbb {R}^n} \ \tfrac{1}{2} \, \textbf{x}^T\!\textbf{A}\textbf{x}-\textbf{b}^T\textbf{x}\end{aligned}$$
(1)

where \(\textbf{A}\) is a symmetric positive definite (SPD) matrix with eigenvalues \(0 < \lambda _1 \le \dots \le \lambda _n\), and \(\textbf{b}\in \mathbb {R}^n\). Fletcher points out that the m most recent gradients \(\textbf{G}= [\, \textbf{g}_1 \ \dots \ \textbf{g}_m\,]\) form a basis for an m-dimensional Krylov subspace of \(\textbf{A}\). (Although \(\textbf{G}\) will change during the iterations, for convenience, and without loss of generality, we label the first column as \(\textbf{g}_1\).) Then, m approximate eigenvalues of \(\textbf{A}\) (Ritz values) are computed from the low-dimensional representation of \(\textbf{A}\), a projected Hessian matrix, in the subspace spanned by the columns of \(\textbf{G}\), and used as m inverse stepsizes. For \(m = 1\), the proposed method reduces to the steepest descent method with Barzilai–Borwein stepsizes [4].

LMSD shares with L-BFGS (see, e.g., [5, Ch. 7]), the limited memory version of BFGS, the property that 2m past vectors are stored, of the form \(\textbf{s}_{k-1} = \textbf{x}_k-\textbf{x}_{k-1}\) and \(\textbf{y}_{k-1} = \textbf{g}_k-\textbf{g}_{k-1}\). While LMSD is a first-order method which incorporates some second-order information in its simplest form (the stepsize), L-BFGS is a quasi-Newton method, which exploits the \(\textbf{s}\)-vectors and \(\textbf{y}\)-vectors to provide an additive rank-m update of a tentative approximate inverse Hessian (typically a multiple of the identity matrix). In contrast to BFGS, at each iteration the L-BFGS method computes the action of the approximate inverse Hessian without storing the entire matrix, using \(\mathcal {O}(mn)\) operations (see, e.g., [5, Ch. 7]). As we will see in Section 4, the cost of m LMSD iterations is approximately \(\mathcal {O}(m^2n)\), so the costs of the two algorithms are comparable.

There are several potential benefits of LMSD. First, as shown in Fletcher [1], there are some problems for which LMSD performs better than L-BFGS. Secondly, to the best of our knowledge and as stated in [5, Sec. 6.4], there are no global convergence results for quasi-Newton methods applied to non-convex functions. Liu and Nocedal [6] have proved the global superlinear convergence of L-BFGS only for (twice continuously differentiable) uniformly convex functions. By contrast, as a gradient method endowed with a line search, LMSD converges globally for continuously differentiable functions (see [7, Thm. 2.1] for the convergence of gradient methods combined with a nonmonotone line search). Finally, and quite importantly, we note that the idea of LMSD can be readily extended to other types of problems: to name a few, it has been used in the scaled spectral projected gradient method [8] for constrained optimization problems, in a stochastic gradient method [9], and, more recently, in a projected gradient method for box-constrained optimization [10].

Summary of the state of the art and our contributions. In the quadratic case (1), the projected Hessian matrix can be computed from the Cholesky decomposition of \(\textbf{G}^T\textbf{G}\) (cf. [1, Eq. (19)] and Section 2) without involving any extra matrix-vector product with \(\textbf{A}\). Although this procedure is memory and time efficient, it is also known to be potentially numerically unstable (cf., e.g., the discussion in [11]) because of the computation of the Gramian matrix \(\textbf{G}^T\textbf{G}\), especially in our context of having an ill-conditioned \(\textbf{G}\). Therefore, we consider alternative ways to obtain the projected Hessian in Section 2.1; in particular, we propose to use the pivoted QR decomposition of \(\textbf{G}\) (see, e.g., [12, Algorithm 1]), or its SVD, and compare the three methods.

In addition, we show that, in the quadratic case, there is a least squares secant condition associated with LMSD. Indeed, in Section 2.4 we prove that the projected Hessian, obtained via one of these three decompositions, is similar to the solution to \(\min _\textbf{B}\, \Vert \textbf{Y}- \textbf{S}\textbf{B}\Vert \), where \(\Vert \cdot \Vert \) denotes the Frobenius norm of a matrix, and \(\textbf{S}= [\, \textbf{s}_1 \ \dots \ \textbf{s}_m\,]\) and \(\textbf{Y}= [\, \textbf{y}_1 \ \dots \ \textbf{y}_m\,]\) store the m most recent \(\textbf{s}\)-vectors and \(\textbf{y}\)-vectors, respectively.

Since \(\textbf{Y}= \textbf{A}\textbf{S}\) for quadratic functions, the obtained stepsizes are inverse eigenvalues of a projection of the Hessian matrix \(\textbf{A}\). In the general nonlinear case (i.e., for a non-quadratic function f), one can still reproduce the small matrix in [1, Eq. (19)], since the Hessian is not needed explicitly in its computation. However, there is generally not a clear interpretation of the stepsizes as approximate inverse eigenvalues of a certain Hessian matrix. Also, the obtained eigenvalues might even be complex.

To deal with this latter problem, Fletcher proposes a practical symmetrization of [1, Eq. (19)], but, so far, a clear theoretical justification for this approach seems to be lacking. To address this issue, we rely on Schnabel’s theorem [13, Thm. 3.1] to connect Fletcher’s symmetrization to a perturbation of the \(\textbf{Y}\) matrix, of the form \(\widetilde{\textbf{Y}} = \textbf{Y}+ \Delta \textbf{Y}\). This guarantees that the eigenvalues of the symmetrized matrix [1, Eq. (19)] correspond to a certain symmetric matrix \(\textbf{A}_+\) that satisfies multiple secant equations \(\widetilde{\textbf{Y}} = \textbf{A}_+ \textbf{S}\) as in the quadratic case. The matrix \(\textbf{A}_+\) can be interpreted as an approximate Hessian in the current iterate.

In the same line of thought, we also exploit one of the perturbations \(\widetilde{\textbf{Y}}\) proposed by Schnabel [13] in the LMSD context. Although the idea of testing different perturbations of \(\textbf{Y}\) is appealing, a good perturbation may be expensive to compute, compared to the task of getting m new stepsizes. Therefore, we explore a different approach based on the modification of the least squares secant condition of LMSD. The key idea is to add a symmetry constraint to the secant condition:

$$ \min _{\textbf{B}= \textbf{B}^T} \Vert \textbf{Y}- \textbf{S}\textbf{B}\Vert . $$

Interestingly, the solution to this problem corresponds to the solution of a Lyapunov equation (see, e.g., [14]). This secant condition provides a smooth transition from the strictly convex quadratic case to the general case, and its solution has real eigenvalues by construction.

Along with discussing both the quadratic and the general case, we study the computation of harmonic Ritz values, which are also considered by Fletcher [1] and Curtis and Guo [15, 16]. For the quadratic case, in Section 2.2, we show that there are some nice symmetries between the computation of the Ritz values of \(\textbf{A}\) by exploiting a basis for the matrix of gradients \(\textbf{G}\), and the computation of the inverse harmonic Ritz values of \(\textbf{A}\) by means of \(\textbf{Y}\). Our implementation is different from Fletcher’s, but the two approaches show similar performance in the quadratic experiments of Section 4.1. In general, LMSD with harmonic Ritz values appears to show a less favorable behavior than LMSD with Ritz values. Therefore, in Section 2.2, we present a way to improve the quality of the harmonic Ritz values, by taking an extra Rayleigh quotient of the harmonic Ritz vectors. This is based on the remarks in, e.g., [17, 18].

Outline. The rest of the paper is organized as follows. We first focus on the strictly convex quadratic problem (1) in Section 2. We review the LMSD method, as described by Fletcher [1], and present new ways to compute the approximate eigenvalues of the Hessian. We also give a secant condition for the low-dimensional Hessian of which we compute the eigenvalues. We move to the general unconstrained optimization problems in Section 3, where we give a theoretical foundation to Fletcher’s symmetrized matrix [1, Eq. (19)], and show how to compute new stepsizes from the secant equation for quadratics, by adding symmetry constraints. A third new approach based on [13] is also proposed. In both Sections 2 and 3, particular emphasis is put on the issue of (likely) numerical rank-deficiency of \(\textbf{G}\) (or \(\textbf{Y}\), when computing the harmonic Ritz values). Sections 2.3 and 3.4 report the LMSD algorithms for strictly convex quadratic problems, as in [1], and for general continuously differentiable functions, as in [2]. Related convergence results are also recalled. Finally, numerical experiments on both strictly convex quadratics and general unconstrained problems are presented in Section 4; conclusions are drawn in Section 5.

Throughout the paper, the Frobenius norm of a matrix is denoted by \(\Vert \cdot \Vert \). The eigenvalues of a symmetric matrix \(\textbf{A}\) are ordered increasingly \(\lambda _1\le \dots \le \lambda _n\), while its singular values are ordered decreasingly \(\sigma _1\ge \dots \ge \sigma _n\).

2 Limited memory BB1 and BB2 for quadratic problems

We review Fletcher’s limited memory approach [1] for strictly convex quadratic functions (1), and study some new theoretical and computational aspects. Common choices for the steplength in gradient methods for quadratic functions are the Barzilai–Borwein (BB) stepsizes [4]

$$\begin{aligned} \beta _{k}^\mathrm{{BB1}} = \frac{\textbf{g}_{k-1}^T \textbf{g}_{k-1}}{\textbf{g}_{k-1}^T \textbf{A}\, \textbf{g}_{k-1}}, \qquad \beta _{k}^\mathrm{{BB2}} = \frac{\textbf{g}_{k-1}^T \textbf{A}\, \textbf{g}_{k-1}}{\textbf{g}_{k-1}^T \textbf{A}^2\, \textbf{g}_{k-1}}. \end{aligned}$$
(2)

The inverse stepsizes \(\alpha _{k}^\mathrm{{BB1}} = (\beta _{k}^\mathrm{{BB1}})^{-1}\) and \(\alpha _{k}^\mathrm{{BB2}} = (\beta _{k}^\mathrm{{BB2}})^{-1}\) are the standard and the harmonic Rayleigh quotients of \(\textbf{A}\), evaluated at \(\textbf{g}_{k-1}\), respectively. Therefore, they provide estimates of the eigenvalues of \(\textbf{A}\). The key idea of LMSD is to produce \(m > 1\) approximate eigenvalues from an m-dimensional space simultaneously, hopefully capturing more information compared to that from a one-dimensional space. One hint about why considering \(m > 1\) may be favorable is provided by the well-known Courant–Fischer Theorem and Cauchy’s Interlace Theorem (see, e.g., [19, Thms. 10.2.1 and 10.1.1]). For two subspaces \(\mathcal {V}\), \(\mathcal {W}\) with \(\mathcal {V}\subseteq \mathcal {W}\), we have

$$ \max _{\textbf{z}\in \mathcal {V}, \, \Vert \textbf{z}\Vert =1} \textbf{z}^T\!\textbf{A}\textbf{z}\le \max _{\textbf{z}\in \mathcal {W}, \, \Vert \textbf{z}\Vert =1} \textbf{z}^T\!\textbf{A}\textbf{z}\le \max _{\Vert \textbf{z}\Vert =1} \textbf{z}^T\!\textbf{A}\textbf{z}= \lambda _n. $$

Therefore, a larger search space may result in better approximations to the largest eigenvalue of \(\textbf{A}\). Similarly, a larger subspace may better approximate the smallest eigenvalue, as well as the next-largest and the next-smallest values.

We now show why m consecutive gradients form a basis of a Krylov subspace of \(\textbf{A}\). It is easy to check that, given the stepsizes \(\beta _1,\dots ,\beta _{m}\) corresponding to the m most recent gradients, each gradient can be expressed as follows:

$$\begin{aligned} \textbf{g}_k = \prod _{i=1}^{k-1} \, (I - \beta _i\textbf{A})\, \textbf{g}_1, \quad k = 1, \dots , m. \end{aligned}$$
(3)

Therefore all m gradients belong to the Krylov subspace of degree m (and of dimension at most m)

$$ \textbf{g}_k \in \mathcal {K}_m(\textbf{A},\textbf{g}_1) = \text {span}\{\textbf{g}_1, \, \textbf{A}\, \textbf{g}_1, \, \dots , \, \textbf{A}^{m-1}\textbf{g}_1\}. $$

Moreover, under mild assumptions, the columns of \(\textbf{G}\) form a basis for \(\mathcal {K}_m(\textbf{A},\textbf{g}_1)\). This result is mentioned by Fletcher [1]; here we provide an explicit proof.

Proposition 1

Suppose the gradient \(\textbf{g}_1\) does not lie in an \(\ell \)-dimensional invariant subspace, with \(\ell < m\), of the SPD matrix \(\textbf{A}\). If \(\beta _k \ne 0\) for all \(k = 1,\dots ,m-1\), the vectors \(\textbf{g}_1,\dots ,\textbf{g}_m\) are linearly independent.

Proof

In view of the assumption, the set \(\{\textbf{g}_1, \textbf{A}\,\textbf{g}_1, \dots , \textbf{A}^{m-1}\textbf{g}_1\}\) is a basis for \(\mathcal {K}_m(\textbf{A},\textbf{g}_1)\). In fact, from (3),

$$\begin{aligned} [\,\textbf{g}_1\ \ \textbf{g}_2\ \dots \ \ \textbf{g}_m\,] = [\,\textbf{g}_1\ \ \textbf{A}\textbf{g}_1\ \dots \ \ \textbf{A}^{m-1}\textbf{g}_1\,] \left[ \begin{array}{ccccc} 1& \times & \times & \times & \times \\ & -\beta _1& \times & \times & \times \\ & & \beta _1\beta _2& \times & \times \\ & & & \ddots & \times \\ & & & & (-1)^{m-1}\prod _{i=1}^{m-1}\beta _i\\ \end{array}\right] . \end{aligned}$$
(4)

Up to a sign, the determinant of the rightmost matrix in this equation is \(\beta _1^{m-1}\beta _2^{m-2}\cdots \beta _{m-1}\), which is nonzero if and only if the stepsizes are nonzero. Therefore, \(\textbf{g}_1, \dots , \textbf{g}_m\) are linearly independent. \(\square \)

This result shows that m consecutive gradients of a quadratic function are linearly independent in general; in practice, this formula suggests that small \(\beta _i\) may quickly cause ill conditioning. Numerical rank-deficiency of \(\textbf{G}\) is an important issue in the LMSD method and will be considered in the computation of a basis for \(\text {span}(\textbf{G})\) in Section 2.1.
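
To illustrate Proposition 1 numerically, the following self-contained Python sketch (synthetic data; all parameter choices are ours) runs a few exact line search gradient iterations on a random quadratic and checks that the stored gradients span the same subspace as the power basis of \(\mathcal {K}_m(\textbf{A},\textbf{g}_1)\), while possibly being far from orthonormal.

```python
# Synthetic illustration of Proposition 1 (all data and sizes are arbitrary choices).
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
n, m = 100, 6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, 3, n)) @ Q.T      # SPD Hessian with condition number 1e3
b = rng.standard_normal(n)
x = rng.standard_normal(n)

grads = []
for _ in range(m):                               # exact line search (Cauchy) steps
    g = A @ x - b
    grads.append(g)
    x = x - (g @ g) / (g @ A @ g) * g
G = np.column_stack(grads)

K = np.empty((n, m))                             # power basis of K_m(A, g_1)
K[:, 0] = grads[0]
for j in range(1, m):
    K[:, j] = A @ K[:, j - 1]

print(np.max(subspace_angles(G, K)))             # close to zero: same subspace
print(np.linalg.cond(G))                         # G may be far from orthonormal
```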

For the following discussion, we also relate \(\textbf{S}\) and \(\textbf{Y}\) to the Krylov subspace \(\mathcal {K}_m(\textbf{A},\textbf{g}_1)\).

Proposition 2

If the columns of \(\textbf{G}\) form a basis for \(\mathcal {K}_m(\textbf{A},\textbf{g}_1)\), then

  1. (i)

    the columns of \(\textbf{S}\) also form a basis for \(\mathcal {K}_m(\textbf{A},\textbf{g}_1)\);

  2. (ii)

    the columns of \(\textbf{Y}\) form a basis for \(\textbf{A}\, \mathcal {K}_m(\textbf{A},\textbf{g}_1)\).

Proof

The thesis immediately follows from the relations

$$\begin{aligned} \textbf{S}= -\textbf{G}\textbf{D}^{-1}, \quad \textbf{Y}= -\textbf{A}\textbf{G}\textbf{D}^{-1}, \qquad \textbf{D}= \text {diag}(\alpha _1,\dots ,\alpha _m), \end{aligned}$$
(5)

where the \(\alpha _i = \beta _i^{-1}\) are the latest m inverse stepsizes, ordered from the oldest to the most recent. Note that \(\textbf{D}\) is nonsingular. \(\square \)

Given a basis for \(\mathcal {K}_m(\textbf{A},\textbf{g}_1)\) (or \(\textbf{A}\, \mathcal {K}_m(\textbf{A},\textbf{g}_1)\)), one can approximate some eigenpairs of \(\textbf{A}\) from this subspace. The procedure is known as the Rayleigh–Ritz extraction method (see, e.g., [19, Sec. 11.3]) and is recalled in the next section.

2.1 The Rayleigh–Ritz extraction

We formulate the standard and harmonic Rayleigh–Ritz extractions in the context of LMSD methods for strictly convex quadratic functions. Let \(\mathcal {S}\) be the subspace spanned by the columns of \(\textbf{S}\), and \(\mathcal {Y}\) be the subspace spanned by the columns of \(\textbf{Y}\). Fletcher’s main idea [1] is to exploit the Rayleigh–Ritz method on the subspace \(\mathcal {S}\). We will now review and extend this approach.

We attempt to extract m promising approximate eigenpairs from the subspace \(\mathcal {S}\). Therefore, such approximate eigenpairs can be represented as \((\theta _i, \textbf{S}\textbf{c}_i )\), with nonzero \(\textbf{c}_i \in \mathbb {R}^m\), for \(i = 1, \dots , m\). The (standard) Rayleigh–Ritz extraction imposes a Galerkin condition:

$$\begin{aligned} \textbf{A}\, \textbf{S}\, \textbf{c}- \theta \, \textbf{S}\, \textbf{c}\perp \mathcal {S}. \end{aligned}$$
(6)

This means that the pairs \((\theta _i, \textbf{c}_i )\) are the eigenpairs of the \(m \times m\) pencil \((\textbf{S}^T\textbf{Y}, \, \textbf{S}^T\textbf{S})\). The \(\theta _i\) are called Ritz values. In the LMSD method, we have \(\mathcal {S}= \mathcal {K}_m(\textbf{A},\textbf{g}_1)\) (see Proposition 2). Note that for \(m = 1\), the only approximate eigenvalue reduces to the Rayleigh quotient \(\alpha ^\textrm{BB1}\) (2). Ritz values are bounded by the extreme eigenvalues of \(\textbf{A}\), i.e., \(\theta _i \in [\lambda _1, \lambda _n]\). This follows from Cauchy’s Interlace Theorem [19, Thm. 10.1.1], by choosing an orthogonal basis for \(\mathcal {S}\). This inclusion is crucial to prove the global convergence of LMSD for quadratic functions [1].

Although the matrix of gradients \(\textbf{G}\) (or \(\textbf{S}\)) already provides a basis for \(\mathcal {S}\), from a numerical point of view it may not be ideal to exploit it to compute the Ritz values, since \(\textbf{G}\) is usually numerically ill conditioned. Therefore, we recall Fletcher’s approach [1] to compute a basis for \(\mathcal {S}\), and then propose two new variants: via a pivoted QR and via an SVD. Fletcher starts by a QR decomposition \(\textbf{G}= \textbf{Q}\textbf{R}\), discarding the oldest gradients whenever \(\textbf{R}\) is numerically singular. Then \(\textbf{Q}\) is an orthogonal basis for a possibly smaller space \(\mathcal {S}= \text {span}([\textbf{g}_{m-s+1},\dots ,\textbf{g}_{m}])\), with \(s \le m\). The product \(\textbf{A}\textbf{G}\) can be computed from the gradients without additional multiplications by \(\textbf{A}\), in view of

$$\begin{aligned} \textbf{A}\textbf{G}=-\textbf{Y}\textbf{D}=[\,{\textbf{G}} \ \ {\textbf{g}_{m+1}}\,]\,\textbf{J}, \qquad \text {where}\quad \textbf{J}=\begin{bmatrix} \alpha _1 & & \\ -\alpha _1 & \ddots & \\ & \ddots & \alpha _m\\ & & -\alpha _m\\ \end{bmatrix}. \end{aligned}$$
(7)

Here, the relation \(\textbf{y}_{k-1} = \textbf{g}_k-\textbf{g}_{k-1}\) is used. Then the \(s\times s\) low-dimensional representation of \(\textbf{A}\) can be written in terms of \(\textbf{R}\):

$$\begin{aligned} \textbf{T}:= \textbf{Q}^T\!\textbf{A}\textbf{Q}= [\,{\textbf{R}} \ \ {\textbf{r}}\,] \ \textbf{J}\, \textbf{R}^{-1}, \end{aligned}$$
(8)

where \(\textbf{r}= \textbf{Q}^T\textbf{g}_{m+1}\). It is clear that \(\textbf{T}\) is symmetric; it is also tridiagonal in view of the fact that it is associated with a Krylov relation for a symmetric matrix (see also Fletcher [1]). Since \(\textbf{r}\) is also the solution to \(\textbf{R}^T\textbf{r}= \textbf{G}^T\textbf{g}_{m+1}\), the matrix \(\textbf{Q}\) is in fact not needed to compute \(\textbf{r}\). For this reason, Fletcher concludes that the Cholesky decomposition \(\textbf{G}^T\textbf{G}= \textbf{R}^T\textbf{R}\) is sufficient to determine \(\textbf{T}\) and its eigenvalues. Standard routines raise an error when \(\textbf{G}^T\textbf{G}\) is numerically not SPD (numerically having a zero or tiny negative eigenvalue). If this happens, the oldest gradients are discarded (if necessary one by one in several steps), and the Cholesky decomposition is repeated.
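
For illustration, the following minimal Python sketch assembles \(\textbf{T}\) from (7)–(8) using only \(\textbf{G}\), \(\textbf{g}_{m+1}\) and the inverse stepsizes; it assumes that \(\textbf{G}\) has full numerical rank and that alphas contains the m inverse stepsizes, ordered from the oldest to the most recent. The function name and interface are ours, not Fletcher's.

```python
# Minimal sketch of Fletcher's Cholesky-based projected Hessian T = [R r] J R^{-1}, cf. (8).
import numpy as np
from scipy.linalg import cholesky, solve_triangular, eigh

def ritz_values_cholesky(G, g_next, alphas):
    m = G.shape[1]
    R = cholesky(G.T @ G)                               # upper triangular, G^T G = R^T R
    r = solve_triangular(R, G.T @ g_next, trans='T')    # R^T r = G^T g_{m+1}
    J = np.zeros((m + 1, m))                            # (7): alpha_i on the diagonal,
    J[np.arange(m), np.arange(m)] = alphas              #      -alpha_i on the subdiagonal
    J[np.arange(1, m + 1), np.arange(m)] = -np.asarray(alphas)
    T = np.column_stack([R, r]) @ J                     # [R r] J
    T = solve_triangular(R, T.T, trans='T').T           # ... R^{-1}
    T = 0.5 * (T + T.T)                                 # enforce symmetry (cf. Section 2.3)
    return eigh(T, eigvals_only=True)                   # Ritz values (inverse stepsizes)
```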

Instead of discarding the oldest gradients \(\textbf{g}_1, \dots , \textbf{g}_{m-s}\), we will now consider a new variant by selecting the gradients in the following way. We carry out a pivoted QR decomposition of \(\textbf{G}\), i.e., \(\textbf{G}\widehat{\varvec{\Pi }} = \widehat{\textbf{Q}}\widehat{\textbf{R}}\), where \(\widehat{\varvec{\Pi }}\) is a permutation matrix that iteratively picks the column with the maximal norm after each Gram–Schmidt step [12]. As a consequence, the diagonal entries of \(\widehat{\textbf{R}}\) are ordered nonincreasingly in magnitude. (In fact, we can always ensure that these entries are positive, but since standard routines may output negative values, we consider the magnitudes.)

The pivoted QR approach is also a rank-revealing factorization, although generally less accurate than the SVD (see, e.g., [12]). Let \(\widehat{\textbf{R}}_{G}\) be the first \(s\times s\) block of \(\widehat{\textbf{R}}\) for which \(\vert \widehat{r}_{i}\vert > \textsf{thresh}\cdot \vert \widehat{r}_{1}\vert \), where \(\widehat{r}_i\) is the ith diagonal element of \(\widehat{\textbf{R}}\) and \(\textsf{thresh} > 0\). A crude approximation to its condition number is \(\kappa (\widehat{\textbf{R}}_{G}) \approx \vert \widehat{r}_{1}\vert \,/\, \vert \widehat{r}_{s}\vert \). Although this approximation may be quite imprecise, the alternative to repeatedly compute \(\kappa (\widehat{\textbf{R}}_{G})\) by removing the last column and row of the matrix at each iteration might take up to \(\mathcal {O}(m^4)\) work, which, even for modest values of m, may be unwanted.

The approximation subspace for the eigenvectors of \(\textbf{A}\) is now \(\mathcal {S} = \text {span}(\widehat{\textbf{Q}}_G)\), with \(\textbf{G}\widehat{\varvec{\Pi }}_G = \widehat{\textbf{Q}}_G\widehat{\textbf{R}}_{G}\), where \(\widehat{\varvec{\Pi }}_G\) and \(\widehat{\textbf{Q}}_G\) are the first s columns of \(\widehat{\varvec{\Pi }}\) and \(\widehat{\textbf{Q}}\), respectively. The upper triangular \(\widehat{\textbf{R}}\) can be partitioned as follows:

$$\begin{aligned} \widehat{\textbf{R}} = \begin{bmatrix} \widehat{\textbf{R}}_{G} & \widehat{\textbf{R}}_{12}\\ \textbf{0} & \widehat{\textbf{R}}_{22} \end{bmatrix}. \end{aligned}$$
(9)

As in (8), we exploit (7) to compute the projected Hessian

$$\begin{aligned} \textbf{B}^\textrm{QR}&:= \widehat{\textbf{Q}}_G^T\, \textbf{A}\widehat{\textbf{Q}}_G = \widehat{\textbf{Q}}_G^T\, \textbf{A}\, \textbf{G}\, \widehat{\varvec{\Pi }}_G \widehat{\textbf{R}}_{G}^{-1} = \widehat{\textbf{Q}}_G^T\, [\,{\widehat{\textbf{Q}}\widehat{\textbf{R}}\widehat{\varvec{\Pi }}^{-1}} \ \ {\textbf{g}_{m+1}}\,]\, \textbf{J}\widehat{\varvec{\Pi }}_G \widehat{\textbf{R}}_{G}^{-1}\nonumber \\&= [\,{[\widehat{\textbf{R}}_{G}\; \widehat{\textbf{R}}_{12}]\, \widehat{\varvec{\Pi }}^{-1}} \ \ {\widehat{\textbf{Q}}_G^T\, \textbf{g}_{m+1}}\,]\, \textbf{J}\, \widehat{\varvec{\Pi }}_G \, \widehat{\textbf{R}}_{G}^{-1}. \end{aligned}$$
(10)

Note that, compared to Fletcher’s approach, this decomposition removes the unwanted gradients all at once, while in [1] the Cholesky decomposition is repeated every time the \(\textbf{R}\) matrix is numerically singular. Fletcher’s \(\textbf{T}\) (8) is a specific case of (10), where \(\widehat{\varvec{\Pi }} = \widehat{\varvec{\Pi }}_G\) is the identity matrix, and \(\widehat{\textbf{R}}_{G}\) is the whole \(\widehat{\textbf{R}}\), but where \(\textbf{G}\) only contains \([\textbf{g}_{m-s+1},\dots ,\textbf{g}_{m}]\).
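
A possible implementation of (10), based on SciPy's pivoted QR and with the same inputs as the previous sketch, could look as follows; the truncation mirrors the \(\textsf{thresh}\)-based criterion described above, and the interface is again an assumption of ours.

```python
# Minimal sketch of the pivoted-QR variant (10).
import numpy as np
from scipy.linalg import qr, solve_triangular, eigh

def ritz_values_pivoted_qr(G, g_next, alphas, thresh=1e-10):
    m = G.shape[1]
    Q, R, piv = qr(G, mode='economic', pivoting=True)    # G[:, piv] = Q R
    d = np.abs(np.diag(R))
    s = int(np.sum(d > thresh * d[0]))                    # retained columns
    Qs, Rs, piv_s = Q[:, :s], R[:s, :s], piv[:s]
    J = np.zeros((m + 1, m))                              # as in (7)
    J[np.arange(m), np.arange(m)] = alphas
    J[np.arange(1, m + 1), np.arange(m)] = -np.asarray(alphas)
    AG = np.column_stack([G, g_next]) @ J                 # = A G, without touching A
    B = Qs.T @ AG[:, piv_s]                               # Q_G^T A G Pi_G
    B = solve_triangular(Rs, B.T, trans='T').T            # ... R_G^{-1}
    B = 0.5 * (B + B.T)
    return eigh(B, eigvals_only=True)
```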

As the second new variant, we exploit an SVD decomposition \(\textbf{G}= \textbf{U}\, \varvec{\Sigma } \, \textbf{V}^T\), where \(\varvec{\Sigma }\) is \(m\times m\), to get a basis for \(\mathcal {S}\). An advantage of an SVD is that this provides a natural way to reduce the space by removing the singular vectors corresponding to singular values below a certain tolerance. We decide to retain the \(s\le m\) singular values for which \(\sigma _i \ge \textsf{thresh}\cdot \sigma _1\), where \(\sigma _1\) is the largest singular value of \(\textbf{G}\). Therefore we consider the truncated SVD \(\textbf{G}\approx \textbf{G}_1 = \textbf{U}_G \, \varvec{\Sigma }_G \, \textbf{V}_G^T\), where the matrices on the right-hand side are \(n \times s\), \(s \times s\), and \(s \times m\), respectively. Then the approximation subspace becomes \(\mathcal {S} = \text {span}(\textbf{U}_G)\), and we compute the corresponding \(s\times s\) representation of \(\textbf{A}\). Since \(\textbf{G}_1\textbf{V}_G = \textbf{G}\textbf{V}_G\), and \(\textbf{U}_G = \textbf{G}_1\textbf{V}_G\varvec{\Sigma }_G^{-1}\), we have, using the expression for \(\textbf{A}\textbf{G}\) (7),

$$\begin{aligned} \textbf{B}^\textrm{SVD}&= \textbf{U}_G^T\, \textbf{A}\textbf{U}_G = \textbf{U}_G^T\, \textbf{A}\textbf{G}\, \textbf{V}_G\, \varvec{\Sigma }_G^{-1} = \textbf{U}_G^T\, [\,{\textbf{U}\, \varvec{\Sigma } \, \textbf{V}^T} \ \ {\textbf{g}_{m+1}}\,]\ \textbf{J}\, \textbf{V}_G\, \varvec{\Sigma }_G^{-1}\nonumber \\&= [\,{\varvec{\Sigma }_G \textbf{V}_G^T} \ \ {\ \textbf{U}_G^T\,\textbf{g}_{m+1}}\,]\ \textbf{J}\, \textbf{V}_G\, \varvec{\Sigma }_G^{-1}. \end{aligned}$$
(11)

We remark that, by construction, both \(\textbf{B}^\textrm{SVD}\) and \(\textbf{B}^\textrm{QR}\) are SPD. Due to the truncation of the decompositions of \(\textbf{G}\) in both the pivoted QR and SVD techniques, the subspace \(\mathcal {S}\) will generally not be a Krylov subspace, in contrast to Fletcher’s method. An immediate consequence of this is that, in contrast with \(\textbf{T}\) in (8), the matrices \(\textbf{B}^\textrm{SVD}\) and \(\textbf{B}^\textrm{QR}\) are not tridiagonal. Still, of course, one can also expect to extract useful information from a non-Krylov subspace.
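
For completeness, a corresponding sketch of the truncated-SVD variant (11) follows; as before, the interface is ours.

```python
# Minimal sketch of the SVD variant (11): truncate G and project A G onto span(U_G).
import numpy as np
from scipy.linalg import svd, eigh

def ritz_values_svd(G, g_next, alphas, thresh=1e-10):
    m = G.shape[1]
    U, sig, Vt = svd(G, full_matrices=False)
    s = int(np.sum(sig >= thresh * sig[0]))               # retained singular triplets
    Us, sigs, Vts = U[:, :s], sig[:s], Vt[:s, :]
    J = np.zeros((m + 1, m))                               # as in (7)
    J[np.arange(m), np.arange(m)] = alphas
    J[np.arange(1, m + 1), np.arange(m)] = -np.asarray(alphas)
    AG = np.column_stack([G, g_next]) @ J                  # = A G, without touching A
    B = (Us.T @ AG) @ Vts.T / sigs                         # U_G^T A G V_G Sigma_G^{-1}
    B = 0.5 * (B + B.T)
    return eigh(B, eigvals_only=True)
```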

Since LMSD with Ritz values can be seen as an extension of a gradient method with BB1 stepsizes, it is reasonable to look for a limited memory extension of the gradient method with BB2 stepsizes. The harmonic Rayleigh–Ritz extraction is a suitable tool to achieve this goal.

2.2 The harmonic Rayleigh–Ritz extraction

The use of harmonic Ritz values in the context of LMSD has been mentioned by Fletcher [1, Sec. 7], and further studied by Curtis and Guo [15]. While the Rayleigh–Ritz extraction usually finds good approximations for exterior eigenvalues, the harmonic Rayleigh–Ritz extraction has originally been introduced to approximate eigenvalues close to a target value in the interior of the spectrum. A natural way to achieve this is to consider a Galerkin condition for \(\textbf{A}^{-1}\):

$$\begin{aligned} \textbf{A}^{-1}\textbf{Y}\widetilde{\textbf{c}} - \widetilde{\theta }^{-1} \textbf{Y}\widetilde{\textbf{c}} \perp \mathcal {Y}, \end{aligned}$$
(12)

which leads to the eigenpairs \((\widetilde{\theta }_i^{-1}, \widetilde{\textbf{c}}_i )\) of the pencil \((\textbf{Y}^T\textbf{S}, \, \textbf{Y}^T\textbf{Y})\). However, since \(\textbf{A}^{-1}\) is usually not explicitly available or too expensive to compute, one may choose a subspace of the form \(\mathcal {Y}= \textbf{A}\, \mathcal {S}\) (see, e.g., [17]). This simplifies the Galerkin condition:

$$ \textbf{A}\textbf{S}\, \widetilde{\textbf{c}} - \widetilde{\theta } \, \textbf{S}\, \widetilde{\textbf{c}} \perp \textbf{A}\, \mathcal {S}. $$

The eigenvalues \(\widetilde{\theta }_i\) from this condition are called harmonic Ritz values. In the limited memory extension of BB2 we set \(\mathcal {Y}= \textbf{A}\, \mathcal {K}_m(\textbf{A},\textbf{g}_1)\), and we know that \(\textbf{Y}\) is a basis for \(\mathcal {Y}\) from Proposition 2. Harmonic Ritz values are also bounded by the extreme eigenvalues of \(\textbf{A}\): \(\widetilde{\theta }_i \in [\lambda _1, \lambda _n]\); see, e.g., [20, Thm. 2.1]. It is easy to check that the (memory-less) case \(m = 1\) corresponds to the computation of the harmonic Rayleigh quotient \(\alpha ^\textrm{BB2}\).

We have just observed that the Galerkin condition for the harmonic Ritz values can be formulated either in terms of \(\textbf{Y}\) or \(\textbf{S}\). The latter way is presented in the references [1, 15], which again look for a basis of \(\mathcal {S}\) by means of a QR decomposition of \(\textbf{G}\). Following the line of [1], the aim is to find the eigenvalues of

$$\begin{aligned} (\textbf{Q}^T\!\textbf{A}\textbf{Q})^{-1}\, \textbf{Q}^T\!\textbf{A}^2\textbf{Q}=: \textbf{T}^{-1}\, \textbf{P}, \end{aligned}$$
(13)

where \(\textbf{G}= \textbf{Q}\textbf{R}\). Since \(\textbf{Q}^T\!\textbf{A}^2\textbf{Q}\) involves the product \([\,{\textbf{G}} \ \ {\textbf{g}_{m+1}}\,]^T [\,{\textbf{G}} \ \ {\textbf{g}_{m+1}}\,]\), we determine the Cholesky decomposition of this matrix, to write [1, Eq. (30)]

$$\begin{aligned} \textbf{P}= \textbf{R}^{-T}\textbf{J}^T\begin{bmatrix} \textbf{R}& \textbf{r}\\ \textbf{0} & \rho \end{bmatrix}^T \begin{bmatrix} \textbf{R}& \textbf{r}\\ \textbf{0} & \rho \end{bmatrix}\textbf{J}\textbf{R}^{-1}, \end{aligned}$$
(14)

where \(\big [{\begin{smallmatrix} \textbf{R}& \textbf{r}\\ \textbf{0} & \rho \end{smallmatrix}}\big ]\) is the Cholesky factor of \([\,{\textbf{G}} \ \ {\textbf{g}_{m+1}}\,]^T [\,{\textbf{G}} \ \ {\textbf{g}_{m+1}}\,]\), such that \(\textbf{R}\) is the Cholesky factor of \(\textbf{G}^T\textbf{G}\), \(\textbf{r}= \textbf{Q}^T\textbf{g}_{m+1}\) as in (8), while \(\rho \) is a scalar. Both \(\textbf{T}\) and \(\textbf{P}\) are symmetric; moreover, while \(\textbf{T}\) is tridiagonal, \(\textbf{P}\) is pentadiagonal. If \(\textbf{G}\) is rank deficient, the oldest gradients are discarded.

Given the similar roles of \(\textbf{S}\) for \(\textbf{A}\) in (6) and of \(\textbf{Y}\) for \(\textbf{A}^{-1}\) in (12), we now consider new alternative ways to find the harmonic Ritz values of \(\textbf{A}\), based on the decomposition of either \(\textbf{Y}\) or \(\textbf{Y}^T\textbf{Y}\). The aim is to get an \(s\times s\) representation of \(\textbf{A}^{-1}\), as we did for \(\textbf{A}\) in Section 2.1. In this context, we need the following (new) relation:

$$\begin{aligned} \textbf{A}^{-1}\textbf{Y}= -\textbf{G}\textbf{D}^{-1} = [\,{\textbf{Y}} \ \ {-\textbf{g}_{m+1}}\,]\, \widetilde{\textbf{J}},\quad \text {where}\quad \widetilde{\textbf{J}} = \begin{bmatrix} 1 & & \\ \vdots & \ddots & \\ 1 & \cdots & 1\\ 1 & \cdots & 1\\ \end{bmatrix} \!\textbf{D}^{-1}. \end{aligned}$$
(15)

As for (7), this follows from the definition \(\textbf{y}_{k-1} = \textbf{g}_k-\textbf{g}_{k-1}\).

We start with the pivoted QR of \(\textbf{Y}\), i.e., \(\textbf{Y}\check{\varvec{\Pi }} = \check{\textbf{Q}}\check{\textbf{R}}\). As in Section 2.1, we truncate the decomposition based on the diagonal values of \(\check{\textbf{R}}\), and obtain \(\textbf{Y}\check{\varvec{\Pi }}_Y = \check{\textbf{Q}}_Y\check{\textbf{R}}_Y\), with

$$\begin{aligned} \check{\textbf{R}} = \begin{bmatrix} \check{\textbf{R}}_{Y} & \check{\textbf{R}}_{12}\\ \textbf{0} & \check{\textbf{R}}_{22} \end{bmatrix}. \end{aligned}$$

Then we project \(\textbf{A}^{-1}\) onto \(\mathcal {Y}= \textrm{span}(\check{\textbf{Q}}_Y)\) to obtain

$$\begin{aligned} \textbf{H}^\textrm{QR}&= \check{\textbf{Q}}_Y^T\textbf{A}^{-1}\check{\textbf{Q}}_Y = \check{\textbf{Q}}_Y^T\textbf{A}^{-1}\, \textbf{Y}\, \check{\varvec{\Pi }}_Y \check{\textbf{R}}_Y^{-1} = \check{\textbf{Q}}_Y^T\,[\,{\textbf{Y}} \ \ {-\textbf{g}_{m+1}}\,]\, \widetilde{\textbf{J}} \, \check{\varvec{\Pi }}_Y \check{\textbf{R}}_Y^{-1}\nonumber \\&= [\,{[\check{\textbf{R}}_Y\; \check{\textbf{R}}_{12}]\, \check{\varvec{\Pi }}^{-1}} \ \ {-\check{\textbf{Q}}_Y^T\textbf{g}_{m+1}}\,]\, \widetilde{\textbf{J}} \,\check{\varvec{\Pi }}_Y \check{\textbf{R}}_Y^{-1}. \end{aligned}$$
(16)

The matrix \(\textbf{H}^\textrm{QR}\) is also symmetric and delivers the reciprocals of harmonic Ritz values; its expression is similar to (10). An approach based on the Cholesky decomposition of \(\textbf{Y}^T\textbf{Y}= \widetilde{\textbf{R}}^T\widetilde{\textbf{R}}\) may also be derived:

$$\begin{aligned} \textbf{H}^\textrm{CH} = [\,{\widetilde{\textbf{R}}} \ \ {\widetilde{\textbf{r}}}\,]\, \widetilde{\textbf{J}}\, \widetilde{\textbf{R}}^{-1}, \end{aligned}$$
(17)

with \(\widetilde{\textbf{r}}\) solution to \(\widetilde{\textbf{R}}^T\widetilde{\textbf{r}} = -\textbf{Y}^T\textbf{g}_{m+1}\).
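
A minimal sketch of (15) and (17), assuming that \(\textbf{Y}\) has full numerical rank and that betas contains the m stepsizes \(\beta _i = \alpha _i^{-1}\), ordered from the oldest to the most recent (the function name is ours):

```python
# Minimal sketch of (17): reciprocals of harmonic Ritz values from the Cholesky factor of Y^T Y.
import numpy as np
from scipy.linalg import cholesky, solve_triangular, eigh

def inverse_harmonic_ritz_cholesky(Y, g_next, betas):
    m = Y.shape[1]
    R = cholesky(Y.T @ Y)                                  # upper triangular, Y^T Y = R^T R
    r = solve_triangular(R, -Y.T @ g_next, trans='T')      # R^T r = -Y^T g_{m+1}
    # Jtilde of (15): column k has beta_k in rows k..m and in the last row
    Jt = np.vstack([np.tril(np.ones((m, m))), np.ones(m)]) * np.asarray(betas)
    H = np.column_stack([R, r]) @ Jt                       # [R r] Jtilde
    H = solve_triangular(R, H.T, trans='T').T              # ... R^{-1}
    H = 0.5 * (H + H.T)                                    # enforce symmetry
    return 1.0 / eigh(H, eigvals_only=True)                # harmonic Ritz values
```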

As for the Ritz values, SVD is another viable option. Consider the truncated SVD of \(\textbf{Y}\): \(\textbf{Y}_1 = \textbf{U}_Y \, \varvec{\Sigma }_Y \, \textbf{V}_Y^T\), where \(\varvec{\Sigma }_Y\) is \(s\times s\). Since \(\textbf{Y}_1\textbf{V}_Y = \textbf{Y}\textbf{V}_Y\), by using similar arguments as in the derivation of (11), we get the following low-dimensional representation of \(\textbf{A}^{-1}\):

$$\begin{aligned} \textbf{H}^\textrm{SVD}&= \textbf{U}_Y^T\textbf{A}^{-1}\textbf{U}_Y = \textbf{U}_Y^T\textbf{A}^{-1} \textbf{Y}\, \textbf{V}_Y\, \varvec{\Sigma }_Y^{-1} = \textbf{U}_Y^T\,[\,{\textbf{Y}} \ \ {-\textbf{g}_{m+1}}\,]\, \widetilde{\textbf{J}}\, \textbf{V}_Y\, \varvec{\Sigma }_Y^{-1} \nonumber \\&= [\,{\varvec{\Sigma }_Y \textbf{V}_Y^T} \ \ {-\textbf{U}_Y^T\textbf{g}_{m+1}}\,]\, \widetilde{\textbf{J}}\, \textbf{V}_Y\, \varvec{\Sigma }_Y^{-1}. \end{aligned}$$
(18)

Note that, in contrast to \(\textbf{T}^{-1}\textbf{P}\), the matrix \(\textbf{H}^\textrm{SVD}\) is symmetric and gives the reciprocals of harmonic Ritz values. In addition, the expression for \(\textbf{H}^\textrm{SVD}\) is similar to the one for \(\textbf{B}^\textrm{SVD}\) in (11).

To conclude the section, we mention the following technique, which is new in the context of LMSD. For the solution of eigenvalue problems, it has been observed (e.g., by Morgan [17]) that harmonic Ritz values sometimes do not approximate eigenvalues well, and it is recommended to use the Rayleigh quotients of harmonic Ritz vectors instead. This means that we use \(\textbf{S}\widetilde{\textbf{c}}_i\) as approximate eigenvectors, and their Rayleigh quotients \(\widetilde{\textbf{c}}_i^T\textbf{S}^T\!\textbf{A}\textbf{S}\widetilde{\textbf{c}}_i\) as approximate eigenvalues. This fits nicely with Fletcher’s approach: in fact, once we have the eigenvectors \(\widetilde{\textbf{c}}_i\) of \(\textbf{T}^{-1}\textbf{P}\) (13), we compute their corresponding Rayleigh quotients as \(\widetilde{\textbf{c}}_i^T\textbf{T}\widetilde{\textbf{c}}_i\). We remark that, in the one-dimensional case, this procedure reduces to the gradient method with BB1 stepsizes, instead of the BB2 ones.
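
A minimal sketch of this refinement, assuming that the \(m\times m\) matrices \(\textbf{T}\) and \(\textbf{P}\) of (8) and (14) are available (the function name is ours):

```python
# Minimal sketch: Rayleigh quotients (w.r.t. T) of the harmonic Ritz vectors of (13).
import numpy as np
from scipy.linalg import eig

def refined_harmonic_values(T, P):
    _, C = eig(P, T)                      # generalized problem P c = theta T c, cf. (13)
    C = np.real_if_close(C)
    # with Q orthonormal, the Rayleigh quotient of Q c is c^T T c / c^T c
    return np.array([np.real(c @ T @ c / (c @ c)) for c in C.T])
```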

In Section 4.1 we compare and comment on the different strategies to get both the standard and the harmonic Ritz values. We will see how the computation of the harmonic Rayleigh quotients can result in a lower number of iterations of LMSD, although computing extra Rayleigh quotients involves some additional work in the m-dimensional space.

2.3 An algorithm for strictly convex quadratic functions

In this section we present the LMSD method for strictly convex quadratic functions. As already mentioned, the key idea of the algorithm is to store the m most recent gradients or \(\textbf{y}\)-vectors, to compute up to \(s\le m\) new stepsizes, according to one of the procedures described in Sections 2.1 and 2.2. These stepsizes are then used in (up to) s consecutive iterations of a gradient method; this group of iterations is referred to as a sweep [1].

In Algorithm 1, we report the LMSD method for strictly convex quadratic functions as proposed in [1, “A Ritz sweep algorithm”]. This routine is a gradient method without line search. Particular attention is paid to the choice of the stepsize: whenever the function value increases compared to the function value at the start of the sweep, \(f_\textrm{ref}\), Fletcher resets the iterate and computes a new point by taking a Cauchy step (cf. Line 9). This ensures that the next function value will not be higher than the current \(f_\textrm{ref}\), since the Cauchy step solves the exact line search problem \(\min _\beta f(\textbf{x}_k-\beta \,\textbf{g}_k)\). Additionally, every time we take a Cauchy step, or the norm of the current gradient has increased compared to the previous iteration, we clear the stack of stepsizes and compute new (harmonic) Ritz values. At each iteration, a new gradient or \(\textbf{y}\)-vector is stored, depending on the method chosen to approximate the eigenvalues of \(\textbf{A}\) (cf. Sections 2.1 and 2.2).

Algorithm 1

LMSD for strictly convex quadratic functions [1].

It is possible to implement LMSD without monitoring the function values or the gradient norms of the iterates, as in [16], where Curtis and Guo also show the R-linear convergence of the method. However, in our experiments, we have noticed that this implementation converges more slowly than Fletcher's (for quadratic problems).

The stepsizes are plugged into the gradient method in increasing order, but there is no theoretical guarantee that this choice is optimal in some sense. From a theoretical viewpoint, the ordering of the stepsizes is irrelevant in a gradient method for strictly convex quadratic functions, as is apparent from (3). In practice, due to rounding errors and other additions to the implementation (such as, e.g., Lines 7–13 of Algorithm 1), the stepsize ordering is relevant for both the quadratic and the general nonlinear case, which will be discussed in Section 3.4. For the quadratic case, Fletcher [1] suggests that choosing the increasing order improves the chances of a monotone decrease in both the function value and the gradient norm. Nevertheless, his argument is based on the knowledge of s exact eigenvalues of \(\textbf{A}\) [21].

To the best of our knowledge, an aspect that has not been discussed yet is the presence of rounding errors in the low-dimensional representation of the Hessian. Except for (13), all the obtained matrices are symmetric, but their expressions are not. Therefore, in a numerical setting, it might happen that a representation of the Hessian is not symmetric. This may result in negative or complex eigenvalues; for this reason, we enforce symmetry by taking the symmetric part of the projected Hessian, i.e., \(\textbf{B}\leftarrow \frac{1}{2}(\textbf{B}+ \textbf{B}^T)\), which is the symmetric matrix nearest to \(\textbf{B}\). In the Cholesky decomposition, we replace the upper triangle of \(\textbf{T}\) with the transpose of its lower triangle, in agreement with Fletcher’s choice for the unconstrained case (cf. [1] and Section 3.1). In both situations, we discard negative eigenvalues, which may still arise.

In practice, we observe that the non-symmetry of a projected Hessian appears especially in problems with large \(\kappa (\textbf{A})\), for a relatively large choice of m (e.g., \(m = 10\)) and a small value of \(\textsf{thresh}\) (e.g., \(\textsf{thresh} = 10^{-10}\)). In this situation, the Cholesky decomposition seems to produce a non-symmetric projected Hessian more often than pivoted QR or SVD. This is likely related to the fact that the Cholesky decomposition of an ill-conditioned Gramian matrix leads to a more inaccurate \(\textbf{R}\) factor (cf. Section 1). In addition, the symmetrized \(\textbf{T}\) seems to generate negative eigenvalues more often than the Hessian representations obtained via pivoted QR and SVD. However, these aspects may not directly affect the performance of LMSD. As we will see in Section 4.1, the adoption of different decompositions does not seem to influence the speed of LMSD.

We finally note that for smaller values of m, such as \(m = 5\), the projected Hessian tends to be numerically symmetric even for a small \(\textsf{thresh}\). In fact, fewer gradients form a better conditioned matrix, as the following argument shows. First, we have \(\sigma _i^2(\textbf{G}) = \lambda _{m-i+1}(\textbf{G}^T\textbf{G})\), for \(i = 1,\dots ,m\). Since the Gramian matrix of the \(s \le m\) most recent gradients \([\textbf{g}_{m-s+1},\dots ,\textbf{g}_m]\) is a submatrix of \(\textbf{G}^T\textbf{G}\), Cauchy’s Interlace Theorem (see, e.g., [19, Thms. 10.2.1 and 10.1.1]) gives \(\sigma _{\min }(\textbf{G}) \le \sigma _{\min }([\textbf{g}_{m-s+1},\dots ,\textbf{g}_m])\) and \(\sigma _{\max }(\textbf{G}) \ge \sigma _{\max }([\textbf{g}_{m-s+1},\dots ,\textbf{g}_m])\). This proves that \(\kappa ([\textbf{g}_{m-s+1},\dots ,\textbf{g}_m]) \le \kappa (\textbf{G})\).

2.4 Secant conditions for LMSD

We finally show that the low-dimensional representations of the Hessian matrix \(\textbf{A}\) (or its inverse) satisfy a certain secant condition. This result is new in the context of LMSD, and will be the starting point of one of our extensions of LMSD to general unconstrained optimization problems in Section 3. Recall from [4] that the BB stepsizes (2) satisfy a secant condition each, in the least squares sense:

$$ \alpha ^\mathrm{{BB1}} = \mathop {\mathrm {argmin}}\limits _{\alpha }\,\, \Vert \textbf{y}- \alpha \, \textbf{s}\Vert , \qquad \alpha ^\mathrm{{BB2}} = \mathop {\mathrm {argmin}}\limits _{\alpha }\,\, \Vert \alpha ^{-1}\textbf{y}- \textbf{s}\Vert . $$

We now give a straightforward extension of these conditions to the limited memory variant of the steepest descent. We show that there exist \(m\times m\) matrices that satisfy a secant condition and share the same eigenvalues as the two pencils \((\textbf{S}^T\textbf{Y}, \, \textbf{S}^T\textbf{S})\), \((\textbf{Y}^T\textbf{S}, \, \textbf{Y}^T\textbf{Y})\). In the quadratic case, when \(\textbf{Y}= \textbf{A}\textbf{S}\), the following results correspond to [19, Thm. 11.4.2] and [22, Thm. 4.2], respectively.

Proposition 3

Let \(\textbf{S}\), \(\textbf{Y}\in \mathbb {R}^{n\times m}\) be full rank, with \(n \ge m\), and let \(\textbf{B}\), \(\textbf{H}\in \mathbb {R}^{m\times m}\).

  1. (i)

    The unique solution to \(\displaystyle \min _\textbf{B}\Vert \textbf{Y}-\textbf{S}\textbf{B}\Vert \) is \(\textbf{B}= (\textbf{S}^T\textbf{S})^{-1} \textbf{S}^T\textbf{Y}\).

  2. (ii)

    The unique solution to \(\displaystyle \min _\textbf{H}\Vert \textbf{Y}\textbf{H}-\textbf{S}\Vert \) is \(\textbf{H}= (\textbf{Y}^T\textbf{Y})^{-1} \textbf{Y}^T\textbf{S}\).

  3. (iii)

    In the quadratic case (1), the eigenvalues of \(\textbf{B}\) are the Ritz values of \(\textbf{A}\) and the eigenvalues of \(\textbf{H}\) are the inverse harmonic Ritz values of \(\textbf{A}\).

Proof

The stationarity conditions for the overdetermined least squares problem \(\min _\textbf{B}\ \Vert \textbf{Y}-\textbf{S}\textbf{B}\Vert \) are the normal equations \(\textbf{S}^T(\textbf{Y}- \textbf{S}\textbf{B}) = \textbf{0}\). Since \(\textbf{S}\) is of full rank, \(\textbf{S}^T\textbf{S}\) is nonsingular, and thus \(\textbf{B}= (\textbf{S}^T\textbf{S})^{-1} \, \textbf{S}^T\textbf{Y}\). Part (ii) follows similarly, by exchanging the role of \(\textbf{S}\) and \(\textbf{Y}\). Since \(\textbf{B}\) and \((\textbf{S}^T\textbf{Y}, \, \textbf{S}^T\textbf{S})\) have the same eigenvalues, part (iii) easily follows. The same relation holds for the eigenvalues of \(\textbf{H}\) and the eigenvalues of the pencil \((\textbf{Y}^T\textbf{S}, \, \textbf{Y}^T\textbf{Y})\). \(\square \)

Proposition 3 is a good starting point to extend LMSD for solving general unconstrained optimization problems.
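
As a small computational illustration of Proposition 3 (the function name is ours), both least squares solutions can be obtained with a standard solver; in the quadratic case, comparing the eigenvalues of the returned matrices with the (harmonic) Ritz values of Sections 2.1 and 2.2 provides a cheap consistency check of an implementation.

```python
# Minimal sketch of the least squares secant matrices of Proposition 3.
import numpy as np

def secant_ls_matrices(S, Y):
    B, *_ = np.linalg.lstsq(S, Y, rcond=None)   # argmin_B ||Y - S B|| = (S^T S)^{-1} S^T Y
    H, *_ = np.linalg.lstsq(Y, S, rcond=None)   # argmin_H ||Y H - S|| = (Y^T Y)^{-1} Y^T S
    return B, H
```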

3 General nonlinear functions

When the objective function f is a general continuously differentiable function, the Hessian is no longer constant through the iterations, and not necessarily positive definite. In general, there is no SPD approximate Hessian such that multiple secant equations hold (that is, an expression analogous to \(\textbf{Y}= \textbf{A}\textbf{S}\) in the quadratic case). This is clearly stated by Schnabel [13, Thm. 3.1].

Theorem 4

Let \(\textbf{S},\, \textbf{Y}\) be full rank. Then there exists a symmetric (positive definite) matrix \(\textbf{A}_{+}\) such that \(\textbf{Y}= \textbf{A}_{+}\, \textbf{S}\) if and only if \(\textbf{Y}^T\textbf{S}\) is symmetric (positive definite).

By inspecting all the expressions derived in Sections 2.1 and 2.2, we observe that only \(\textbf{G}\) and \(\textbf{Y}\) are needed to compute the \(m\times m\) matrices of interest for LMSD. However, given that \(\textbf{Y}^T\textbf{S}\) is in general not symmetric, Theorem 4 suggests that we cannot interpret these matrices as low-dimensional representations of some Hessian matrices.

We propose two ways to restore the connection with Hessian matrices. In Section 3.1, we exploit a technique proposed by Schnabel [13] for quasi-Newton methods. It consists of perturbing \(\textbf{Y}\) to make \(\textbf{Y}^T\textbf{S}\) symmetric. We show that Fletcher’s method can also be interpreted in this way. In Section 3.2, we introduce a second method which does not aim at satisfying multiple secant equations at the same time, but finds the solution to the least squares secant conditions of Proposition 3 by imposing symmetry constraints.

3.1 Perturbation of \(\textbf{Y}\) to solve multiple secant equations

In the context of quasi-Newton methods, Schnabel [13] proposes to perturb the matrix \(\textbf{Y}\) by a quantity \(\Delta \textbf{Y}= \widetilde{\textbf{Y}} - \textbf{Y}\) to obtain an SPD \(\widetilde{\textbf{Y}}^T\textbf{S}\). With this strategy, we implicitly obtain a certain SPD approximate Hessian \(\textbf{A}_{+}\) such that \(\widetilde{\textbf{Y}} = \textbf{A}_{+}\, \textbf{S}\). We then refer to Sections 2.1 and 2.2 to compute either the Ritz values or the harmonic Ritz values of the approximate Hessian \(\textbf{A}_{+}\). Although we only have \(\widetilde{\textbf{Y}}\) at our disposal, and not \(\textbf{A}_+\), this is all that is needed; the procedures in Section 2 do not require \(\textbf{A}_{+}\) explicitly. In addition, Proposition 3 is still valid, after replacing \(\textbf{Y}\) with \(\widetilde{\textbf{Y}}\). We remark that, for our purpose, a merely symmetric \(\widetilde{\textbf{Y}}^T\textbf{S}\) may already be sufficient, since we usually discard negative Ritz values.

In Section 4 we test one possible way of computing \(\Delta \textbf{Y}\), as proposed in [13], and the Ritz values of the associated low-dimensional representation of \(\textbf{A}_{+}\). This application is new in the context of LMSD. The perturbation is constructed as follows: first, let \(\textbf{L}\) be the strictly lower triangular matrix such that \(\textbf{Y}^T\textbf{S}- \textbf{S}^T\textbf{Y}= -\textbf{L}+ \textbf{L}^T\), and suppose \(\textbf{S}\) is of full rank. (If not, remove the oldest \(\textbf{s}\)-vectors until the condition is satisfied.) Then \(\textbf{Y}^T\textbf{S}+ \textbf{L}\) is symmetric. Schnabel [13] solves the underdetermined system \(\Delta \textbf{Y}^T \textbf{S}= \textbf{L}\), which has \(\Delta \textbf{Y}= \textbf{S}(\textbf{S}^T\textbf{S})^{-1}\textbf{L}^T\) as minimum norm solution. By Theorem 4, there exists a symmetric \(\textbf{A}_{+}\) such that \(\widetilde{\textbf{Y}} =\textbf{A}_{+}\, \textbf{S}\). Now consider the QR decomposition \(\textbf{G}= \textbf{Q}\textbf{R}\); the matrix \(\textbf{G}\) is of full rank since \(\textbf{S}\) is. Similar to (7), we know that \(\textbf{A}_{+}\textbf{G}= -\widetilde{\textbf{Y}} \textbf{D}\). Moreover, we recall that \(\textbf{S}= -\textbf{G}\textbf{D}^{-1}\), and that \(\textbf{Y}\textbf{D}= -[\,{\textbf{G}} \ \ {\textbf{g}_{m+1}}\,] \, \textbf{J}\) from (7). For any perturbation \(\Delta \textbf{Y}\) that symmetrizes \(\textbf{Y}^T\textbf{S}\), we obtain the following low-dimensional representation of \(\textbf{A}_{+}\):

$$\begin{aligned} \textbf{Q}^T\textbf{A}_{+}\textbf{Q}&= \textbf{Q}^T\textbf{A}_{+}\textbf{G}\textbf{R}^{-1} = -\textbf{Q}^T \widetilde{\textbf{Y}} \textbf{D}\textbf{R}^{-1} = -\textbf{Q}^T(\textbf{Y}+\Delta \textbf{Y}) \, \textbf{D}\textbf{R}^{-1} \nonumber \\&= \textbf{T}-\textbf{Q}^T\Delta \textbf{Y}\, \textbf{D}\textbf{R}^{-1}. \end{aligned}$$
(19)

where \(\textbf{T}= [\,{\textbf{R}} \ \ {\textbf{r}}\,]\, \textbf{J}\textbf{R}^{-1}\) as in (8). In particular, by replacing \(\Delta \textbf{Y}= \textbf{S}(\textbf{S}^T\textbf{S})^{-1}\textbf{L}^T\), we obtain

$$\begin{aligned} \textbf{Q}^T\textbf{A}_{+}\textbf{Q}= \textbf{T}+ \textbf{R}(\textbf{R}^T\textbf{R})^{-1}\textbf{D}\textbf{L}^T\textbf{D}\textbf{R}^{-1}. \end{aligned}$$
(20)

This means that (20) can be computed by means of the Cholesky decomposition of \(\textbf{G}^T\textbf{G}\) only; the factor \(\textbf{Q}\) is not needed.
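
A minimal sketch of this computation, where \(\textbf{T}\) and \(\textbf{R}\) are as in (8), alphas are the inverse stepsizes (the diagonal of \(\textbf{D}\)), and StY \(= \textbf{S}^T\textbf{Y}\); the interface is ours.

```python
# Minimal sketch of (20): projected approximate Hessian after Schnabel's perturbation of Y.
import numpy as np

def schnabel_projected_hessian(T, R, alphas, StY):
    D = np.diag(np.asarray(alphas, dtype=float))
    L = np.tril(StY - StY.T, k=-1)                  # Y^T S - S^T Y = -L + L^T
    M = np.linalg.solve(R.T, (D @ L.T @ D).T).T     # D L^T D R^{-1}
    return T + R @ np.linalg.solve(R.T @ R, M)      # T + R (R^T R)^{-1} D L^T D R^{-1}
```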

We now give a new interpretation of Fletcher’s extension of LMSD to general nonlinear problems [1, Sec. 4], in terms of a specific perturbation of \(\textbf{Y}\). Fletcher notices that the matrix \(\textbf{T}\) (8) is an upper Hessenberg matrix and can still be computed from the matrix of gradients, but, because of Theorem 4, there is no guarantee that \(\textbf{T}\) corresponds to a low-dimensional representation of a symmetric approximate Hessian matrix. Since the eigenvalues of \(\textbf{T}\) might be complex, Fletcher proposes to enforce \(\textbf{T}\) to be tridiagonal by replacing its strict upper triangular part with the transpose of its strict lower triangular part. We now show that this operation in fact corresponds to a perturbation of the \(\textbf{Y}\) matrix. To the best of our knowledge, this result is new.

Proposition 5

Let \(\textbf{T}\) be as in (8) and consider its decomposition \(\textbf{T}= \textbf{L}+ \varvec{\Lambda } + \textbf{U}\), where \(\textbf{L}\) (\(\textbf{U}\)) is strictly lower (upper) triangular and \(\varvec{\Lambda }\) is diagonal. Moreover, let \(\textbf{G}\) be full rank and \(\textbf{G}= \textbf{Q}\textbf{R}\) its QR decomposition. If

$$ \Delta \textbf{Y}= \textbf{Q}\,(\textbf{U}-\textbf{L}^T)\, \textbf{R}\textbf{D}^{-1}, $$

then \(\widetilde{\textbf{Y}}^T\textbf{S}= (\textbf{Y}+ \Delta \textbf{Y})^T\textbf{S}\) is symmetric and there exists a symmetric \(\textbf{A}_{+}\) such that \(\widetilde{\textbf{Y}} = \textbf{A}_{+}\textbf{S}\) and \(\textbf{Q}^T\textbf{A}_{+}\textbf{Q}= \textbf{L}+ \varvec{\Lambda } + \textbf{L}^T\).

Proof

First, we prove that \(\textbf{S}^T\widetilde{\textbf{Y}}\) is symmetric. By replacing the expression for \(\Delta \textbf{Y}\) and exploiting the QR decomposition of \(\textbf{G}\), we get

$$\begin{aligned} \textbf{S}^T\widetilde{\textbf{Y}}&= -\textbf{D}^{-1}\textbf{R}^T\big (-[\,{\textbf{R}} \ \ {\textbf{r}}\,]\, \textbf{J}\textbf{R}^{-1} + \textbf{U}- \textbf{L}^T\big )\, \textbf{R}\textbf{D}^{-1}\\&= -\textbf{D}^{-1}\textbf{R}^T\big (-(\textbf{L}+ \varvec{\Lambda } + \textbf{U}) + \textbf{U}- \textbf{L}^T\big )\, \textbf{R}\textbf{D}^{-1}\\&= \textbf{D}^{-1}\textbf{R}^T\big (\textbf{L}+ \varvec{\Lambda } + \textbf{L}^T\big )\, \textbf{R}\textbf{D}^{-1}. \end{aligned}$$

Therefore \(\textbf{S}^T\widetilde{\textbf{Y}}\) is symmetric; Theorem 4 implies that there exists a symmetric \(\textbf{A}_{+}\) such that \(\widetilde{\textbf{Y}} = \textbf{A}_{+}\textbf{S}\). From this secant equation, it follows from (19) that

$$\begin{aligned} \textbf{Q}^T\textbf{A}_{+}\textbf{Q}= \big (\textbf{L}+ \varvec{\Lambda } + \textbf{L}^T\big )\, \textbf{R}\textbf{D}^{-1}(\textbf{D}\textbf{R}^{-1}) = \textbf{L}+ \varvec{\Lambda } + \textbf{L}^T. \end{aligned}$$

\(\square \)

From this proposition, we are able to provide an upper bound for the spectral norm of the perturbation \(\Delta \textbf{Y}\):

$$ \Vert \Delta \textbf{Y}\Vert _2 \le \max _{i}\beta _i\cdot \Vert \textbf{R}\Vert _2 \cdot \Vert \textbf{T}- (\textbf{L}+ \varvec{\Lambda } + \textbf{L}^T)\Vert _2 , $$

where \(\max _{i}\beta _i\) is the largest stepsize among the latest m steps. This suggests that the size of the perturbation \(\Delta \textbf{Y}\) is determined not only by the distance between \(\textbf{T}\) and its symmetrization, as expected, but also by the spectral norm of \(\textbf{R}\): if this is large, the upper bound may also be large.

We would like to point out the following important open and intriguing question. While Schnabel solves \(\Delta \textbf{Y}^T \textbf{S}= \textbf{L}\) to symmetrize \(\textbf{Y}^T \textbf{S}\), and Fletcher’s update is described in Proposition 5, there may be other choices for the perturbation matrix \(\Delta \textbf{Y}\) that, e.g., have a smaller \(\Delta \textbf{Y}\) in a certain norm. However, obtaining these perturbations might be computationally demanding, compared to the task of getting m new stepsizes. In the cases we have analyzed, the lower-dimensional \(\textbf{A}_{+}\) can be obtained from the Cholesky decomposition of \(\textbf{G}^T\textbf{G}\) at negligible cost.

Given the generality of Schnabel’s Theorem 4, another possibility that may be explored is a perturbation of \(\textbf{S}\), rather than \(\textbf{Y}\), to symmetrize \(\textbf{S}^T\textbf{Y}\). This would be a natural choice for computing the harmonic Ritz values given a basis for \(\textbf{Y}\). In this situation, the matrix relating \(\textbf{S}\) and \(\textbf{Y}\) would play the role of an approximate inverse Hessian. A thorough investigation is out of the scope of this paper.

3.2 Symmetric solutions to the secant equations

In this subsection, we explore a second and alternative extension of LMSD. We start from the secant condition of Proposition 3 for a low-dimensional matrix \(\textbf{B}\). The key idea is to impose symmetry constraints to obtain real eigenvalues from the solutions to the least squares problems of Proposition 3. Even if the hypothesis of Theorem 4 is not met, this method still fulfills the purpose of obtaining new stepsizes for the LMSD iterations.

The following proposition gives the stationarity conditions to solve the two modified least squares problems. Denote the symmetric part of a matrix by \(\mathop {\textrm{sym}}\limits (\textbf{A}) := \frac{1}{2}(\textbf{A}+ \textbf{A}^T)\).

Proposition 6

Let \(\textbf{S}\), \(\textbf{Y}\in \mathbb {R}^{n\times m}\) be full rank, with \(n \ge m\), and \(\textbf{B}\), \(\textbf{H}\in \mathbb {R}^{m\times m}\).

  1. (i)

    The solution to \(\displaystyle \min _{\textbf{B}=\textbf{B}^T} \Vert \textbf{Y}-\textbf{S}\textbf{B}\Vert \) satisfies \(\mathop {\textrm{sym}}\limits (\textbf{S}^T\textbf{S}\, \textbf{B}- \textbf{S}^T\textbf{Y}) = \textbf{0}\).

  2. (ii)

    The solution to \(\displaystyle \min _{\textbf{H}=\textbf{H}^T} \Vert \textbf{Y}\textbf{H}-\textbf{S}\Vert \) satisfies \(\mathop {\textrm{sym}}\limits (\textbf{Y}^T\textbf{Y}\textbf{H}- \textbf{Y}^T\textbf{S}) = \textbf{0}\).

Proof

If \(\textbf{B}\) is symmetric, it holds that

$$ \Vert \textbf{Y}-\textbf{S}\textbf{B}\Vert ^2 = \mathop {\text{ tr }}(\textbf{B}\, \textbf{S}^T\textbf{S}\, \textbf{B}- 2\, \mathop {\textrm{sym}}\limits (\textbf{S}^T\textbf{Y})\, \textbf{B}+ \textbf{Y}^T\textbf{Y}), $$

where \(\mathop {\text{ tr }}(\cdot )\) denotes the trace of a matrix. Differentiation leads to the following stationarity condition for \(\textbf{B}\):

$$\begin{aligned} \textbf{S}^T\textbf{S}\, \textbf{B}+ \textbf{B}\, \textbf{S}^T\textbf{S}= 2\, \mathop {\textrm{sym}}\limits (\textbf{S}^T\textbf{Y}), \end{aligned}$$
(21)

which is a Lyapunov equation. Since \(\textbf{S}\) is of full rank, its Gramian matrix is positive definite. This implies that the spectra of \(\textbf{S}^T\textbf{S}\) and \(-\textbf{S}^T\textbf{S}\) are disjoint, and therefore the equation admits a unique solution (see, e.g., [14] for a review of the Lyapunov equation and properties of its solution). Part (ii) follows similarly. \(\square \)

It is easy to check that, for \(m=1\), \(\textbf{B}\) in part (i) reduces to \(\alpha ^\textrm{BB1}\) and \(\textbf{H}\) in part (ii) to \(\beta ^\textrm{BB2}\). Compared to Fletcher’s \(\textbf{T}\) matrix (8), the symmetric solutions \(\textbf{B}\) and \(\textbf{H}\) will generally give a larger residual (since they are suboptimal for the unconstrained secant conditions), but they enjoy the benefit that their eigenvalues are guaranteed to be real.
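
The Lyapunov equation (21) can be solved with standard software; the following minimal sketch is based on SciPy's Lyapunov solver (the function name and the post-processing of the eigenvalues are ours).

```python
# Minimal sketch of Proposition 6(i): solve the Lyapunov equation (21) for B.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigh

def lyapunov_secant_stepsizes(S, Y):
    StS = S.T @ S
    rhs = S.T @ Y + Y.T @ S                        # = 2 sym(S^T Y)
    B = solve_continuous_lyapunov(StS, rhs)        # S^T S B + B S^T S = 2 sym(S^T Y)
    theta = eigh(B, eigvals_only=True)
    return 1.0 / theta[theta > 0]                  # keep positive eigenvalues as inverse stepsizes
```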

We remark that symmetry constraints also appear in the secant conditions of the BFGS method, and in the symmetric rank-one update (see, e.g., [5, Chapter 6]). While in the BFGS method the approximate Hessians are SPD, provided that the initial approximation is SPD, in the rank-one update method it is possible to get negative eigenvalues. The fundamental difference between LMSD and these methods is that we do not attempt to find an approximate \(n\times n\) Hessian matrix.

Even though, unlike in the quadratic case of Section 2, we do not approximate the eigenvalues of some Hessian matrix, it is still possible to establish bounds for the extreme eigenvalues of the solutions to the Lyapunov equations of Proposition 6, provided that \(\mathop {\textrm{sym}}\limits (\textbf{S}^T\textbf{Y})\) is positive definite. The following result is a direct consequence of [23, Cor. 1].

Proposition 7

Given the solution \(\textbf{B}\) to (21), let \(\lambda _1(\textbf{B})\) (\(\lambda _n(\textbf{B})\)) be the smallest (largest) eigenvalue of \(\textbf{B}\). If \(\textbf{S}\) is of full rank and \(\mathop {\textrm{sym}}\limits (\textbf{S}^T\textbf{Y})\) is positive definite, then

$$ [\lambda _1(\textbf{B}), \, \lambda _n(\textbf{B})] \subseteq [\lambda _1((\textbf{S}^T\textbf{S})^{-1}\mathop {\textrm{sym}}\limits (\textbf{S}^T\textbf{Y})),\ \lambda _n((\textbf{S}^T\textbf{S})^{-1}\mathop {\textrm{sym}}\limits (\textbf{S}^T\textbf{Y}))]. $$

If there exists an SPD matrix \(\textbf{A}_{+}\) such that \(\textbf{Y}= \textbf{A}_{+}\textbf{S}\), then \([\lambda _1(\textbf{B}), \, \lambda _n(\textbf{B})] \subseteq [\lambda _1(\textbf{A}_{+}), \, \lambda _n(\textbf{A}_{+})]\).

Proof

The first statement directly follows from [23, Cor. 1]. From this we have

$$ \lambda _1(\textbf{B}) \ge -\lambda ^{-1}_1(-\textbf{S}^T\textbf{S}\,(\mathop {\textrm{sym}}\limits (\textbf{S}^T\textbf{Y}))^{-1}),\quad \lambda _n(\textbf{B}) \le -\lambda ^{-1}_n(-\textbf{S}^T\textbf{S}\, (\mathop {\textrm{sym}}\limits (\textbf{S}^T\textbf{Y}))^{-1}). $$

The claim then follows from the fact that, given a nonsingular matrix \(\varvec{\Lambda }\) with positive eigenvalues, the equality \(\lambda _i(-\varvec{\Lambda }^{-1}) = -1/\lambda _i(\varvec{\Lambda })\) holds for the extreme eigenvalues, i.e., for \(i =1,\,n\).

When \(\textbf{Y}= \textbf{A}_{+}\textbf{S}\), from Cauchy’s Interlace Theorem, the spectrum of \(\textbf{B}\) lies in \([\lambda _1(\textbf{A}_{+}), \lambda _n(\textbf{A}_{+})]\). \(\square \)

An analogous result can be provided for the matrix \(\textbf{H}\) of Proposition 6(ii).

3.3 Solving the Lyapunov equation while handling rank deficiency

The solution to the Lyapunov equation (21) is unique, provided that \(\textbf{S}\) is of full rank. In this section, we propose three options for the case where \(\textbf{S}\) is (close to) rank deficient. As in Section 2, we discuss approaches based on a Cholesky decomposition, a pivoted QR factorization, and a truncated SVD. By using the decompositions exploited in Section 2.1, we can either discard some \(\textbf{s}\)-vectors (and their corresponding \(\textbf{y}\)-vectors) or solve the Lyapunov equation projected onto the space spanned by the leading right singular vectors of \(\textbf{S}\).

In the Cholesky decomposition and the pivoted QR decomposition we remove some of the \(\textbf{s}\)-vectors and the corresponding \(\textbf{y}\)-vectors, if needed. As we have seen in Section 2.1, in the Cholesky decomposition we discard past gradients until the Cholesky factor \(\textbf{R}\) of \(\textbf{G}^T\textbf{G}\) is sufficiently far from singular. In this new context of Lyapunov equations, we additionally need the relation (cf. (7))

$$\begin{aligned} \textbf{Y}= [\,\textbf{G}\ \ \textbf{g}_{m+1}\,]\,\textbf{K}, \qquad \text {where}\quad \textbf{K}= \begin{bmatrix} -1 & & & \\ 1 & -1 & & \\ & \ddots & \ddots & \\ & & 1 & -1 \\ & & & 1 \end{bmatrix} \in \mathbb {R}^{(m+1)\times m}. \end{aligned}$$
(22)

Then we can easily compute the matrices present in (21):

$$\begin{aligned} \textbf{S}^T\textbf{S}= \textbf{D}^{-1}\textbf{R}^T\textbf{R}\textbf{D}^{-1},\qquad \textbf{S}^T\textbf{Y}= -\textbf{D}^{-1}\textbf{R}^T[\,{\textbf{R}} \ \ {\textbf{r}}\,]\, \textbf{K}. \end{aligned}$$
(23)
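
For illustration, the following minimal Python sketch assembles \(\textbf{K}\) and the small matrices in (23) from the Cholesky factor \(\textbf{R}\) of \(\textbf{G}^T\textbf{G}\), the vector \(\textbf{r}\), and the diagonal entries of \(\textbf{D}\) (the inverse stepsizes); the helper name and interface are illustrative and not part of the reference implementation.

```python
import numpy as np

def lyapunov_ingredients(R, r, d):
    """Sketch of (22)-(23): build K and the small matrices S^T S and S^T Y
    from the Cholesky factor R, the vector r, and the inverse stepsizes d."""
    m = R.shape[0]
    K = np.zeros((m + 1, m))
    K[np.arange(m), np.arange(m)] = -1.0        # -1 on the diagonal
    K[np.arange(1, m + 1), np.arange(m)] = 1.0  # +1 on the subdiagonal
    Dinv = np.diag(1.0 / d)
    StS = Dinv @ R.T @ R @ Dinv                 # S^T S as in (23)
    StY = -Dinv @ R.T @ np.column_stack([R, r]) @ K   # S^T Y as in (23)
    return K, StS, StY
```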

In the pivoted QR, we keep only the columns \(\textbf{G}\widehat{\varvec{\Pi }}_G\) and \(\textbf{Y}\widehat{\varvec{\Pi }}_G\), as in Section 2. Let \(\textbf{D}_G\) be the diagonal matrix that stores the inverse stepsizes corresponding to \(\textbf{G}\widehat{\varvec{\Pi }}_G\). Then

$$\begin{aligned} \widehat{\varvec{\Pi }}_G^T \, \textbf{S}^T\textbf{S}\, \widehat{\varvec{\Pi }}_G&= \textbf{D}_G^{-1} \, \widehat{\textbf{R}}_G^T\widehat{\textbf{R}}_G \, \textbf{D}_G^{-1}, \\ \widehat{\varvec{\Pi }}_G^T \, \textbf{S}^T\textbf{Y}\, \widehat{\varvec{\Pi }}_G&= -\textbf{D}_G^{-1}[\,{\widehat{\textbf{R}}_G^T \, [\widehat{\textbf{R}}_G \ \widehat{\textbf{R}}_{12}]\, \widehat{\varvec{\Pi }}^{-1}} \ \ {\widehat{\varvec{\Pi }}_G^T\textbf{G}^T\textbf{g}_{m+1}}\,]\, \textbf{K}\widehat{\varvec{\Pi }}_G, \end{aligned}$$

where \(\widehat{\textbf{R}}_G\) and \(\widehat{\textbf{R}}_{12}\) come from the block representation of \(\widehat{\textbf{R}}\) (9).

The third approach involves the SVD of \(\textbf{S}\), instead of the SVD of \(\textbf{G}\) as in Section 2.1. This is due to the fact that the solution to the Lyapunov equation (21) for \(\textbf{S}\) and \(\textbf{Y}\) is not directly related to the solution to the Lyapunov equation for \(\textbf{S}\varvec{\Lambda }\) and \(\textbf{Y}\varvec{\Lambda }\), for a nonsingular \(\varvec{\Lambda }\). From the SVD \(\textbf{S}= \widehat{\textbf{U}}\widehat{\varvec{\Sigma }} \widehat{\textbf{V}}^T\), we get

$$ \widehat{\textbf{V}}\widehat{\varvec{\Sigma }}^2 \widehat{\textbf{V}}^T\, \textbf{B}+ \textbf{B}\, \widehat{\textbf{V}}\widehat{\varvec{\Sigma }}^2 \widehat{\textbf{V}}^T = 2\, \mathop {\textrm{sym}}\limits (\textbf{S}^T\textbf{Y}). $$

To simplify this equation, consider the truncated SVD \(\textbf{S}_1 = \textbf{U}_S\varvec{\Sigma }_S\textbf{V}_S^T\), where \(\varvec{\Sigma }_S\) is \(s\times s\), and multiply by \(\textbf{V}_S^T\) on the left and by \(\textbf{V}_S\) on the right. Since \(\textbf{V}_S^T\widehat{\textbf{V}}\widehat{\varvec{\Sigma }}^2 \widehat{\textbf{V}}^T = \varvec{\Sigma }_S^2\textbf{V}_S^T\),

$$\begin{aligned} \varvec{\Sigma }_S^2 \, \textbf{B}_S + \textbf{B}_S \, \varvec{\Sigma }_S^2 = 2\, \mathop {\textrm{sym}}\limits (\varvec{\Sigma }_S \textbf{U}_S^T \textbf{Y}\textbf{V}_S), \end{aligned}$$
(24)

where \(\textbf{B}_S = \textbf{V}_S^T\textbf{B}\textbf{V}_S\) is the projection of \(\textbf{B}\) onto \(\textbf{V}_S\). Moreover, from (22) we get

$$ \textbf{U}_S^T\textbf{Y}= [\,{-\varvec{\Sigma }_S \textbf{V}_S^T\textbf{D}} \ \ {\textbf{U}_S^T\textbf{g}_{m+1}}\,]\, \textbf{K}. $$

We remark that it is appropriate to control the truncation of the SVD by the condition number of the coefficient matrix \(\textbf{S}^T\textbf{S}\), which is \(\kappa ^2(\textbf{S})\).

The previous discussion on the three decompositions can also be extended to the secant equation of Proposition 6(ii), to compute the matrix \(\textbf{H}\) and use its eigenvalues directly as stepsizes. Several possibilities may be explored by decomposing either \(\textbf{G}\) or \(\textbf{Y}\) as in Section 2.2 for the harmonic Ritz values. We will not discuss any further details regarding all these methods, but in the experiments in Section 4 we will present results obtained with the Cholesky factorization of \([\,{\textbf{G}} \ \ {\textbf{g}_{m+1}}\,]^T[\,{\textbf{G}} \ \ {\textbf{g}_{m+1}}\,]\) as expressed in (14). Then for the quantities in Proposition 6(ii) we have:

$$ \textbf{Y}^T\textbf{Y}= \textbf{K}^T\begin{bmatrix} \textbf{R}& \textbf{r}\\ \textbf{0} & \rho \end{bmatrix}^T \begin{bmatrix} \textbf{R}& \textbf{r}\\ \textbf{0} & \rho \end{bmatrix}\textbf{K}, $$

and the matrix \(\textbf{Y}^T\textbf{S}\) can be obtained from (23).

We note that all Lyapunov equations in this section are of the form \(\textbf{E}^T\textbf{E}\textbf{B}+ \textbf{B}\textbf{E}^T\textbf{E}= \textbf{F}\). We describe a practical solution approach. Consider the truncated SVD \(\textbf{E}\approx \textbf{U}_E\varvec{\Sigma }_E\textbf{V}_E^T\), where the retained singular values satisfy \(\sigma _i^2(\textbf{E}) \ge \textsf{thresh} \cdot \sigma _1^2(\textbf{E})\). In case we exploit the Cholesky decomposition or the pivoted QR, an extra truncated SVD might still be appropriate, since these two decompositions do not provide an accurate estimate of \(\kappa ^2(\textbf{E})\). By left and right multiplication by \(\textbf{V}_E\), we obtain an expression analogous to (24):

$$ \varvec{\Sigma }_E^2 \, \textbf{B}_E + \textbf{B}_E \, \varvec{\Sigma }_E^2 = \textbf{V}_E^T\textbf{F}\textbf{V}_E, $$

where \(\textbf{B}_E = \textbf{V}_E^T\textbf{B}\textbf{V}_E\). Since \(\varvec{\Sigma }_E\) is diagonal, the solution to this Lyapunov equation can be easily found by elementwise division (cf. [14, p. 388]):

$$ [\textbf{B}_E]_{ij} = [\textbf{V}_E^T\textbf{F}\textbf{V}_E]_{ij} \ / \ (\sigma ^2_i(\textbf{E}) + \sigma ^2_j(\textbf{E})). $$

We notice that, in the SVD approach (24), the solution can be found directly from this last step. In addition, we remark that, for the purposes of LMSD, it is not necessary to recover the solution \(\textbf{B}\) to the original Lyapunov equation, since the projected solution \(\textbf{B}_E\) already provides the required eigenvalues.
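
The following Python sketch illustrates this elementwise procedure for a generic pair \((\textbf{E}, \textbf{F})\); the helper name, the threshold handling, and the return of the eigenvalues of \(\textbf{B}_E\) as candidate inverse stepsizes are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def lyapunov_eigenvalues(E, F, thresh=1e-8):
    """Solve E^T E B + B E^T E = F on the dominant right singular subspace
    of E, and return the eigenvalues of the projected solution B_E."""
    U, sigma, Vt = np.linalg.svd(E, full_matrices=False)
    keep = sigma**2 >= thresh * sigma[0]**2          # truncate small singular values
    sigma, V = sigma[keep], Vt[keep].T
    rhs = V.T @ F @ V                                # project the right-hand side
    denom = sigma[:, None]**2 + sigma[None, :]**2    # sigma_i^2 + sigma_j^2
    B_E = rhs / denom                                # elementwise division
    return np.linalg.eigvalsh(0.5 * (B_E + B_E.T))   # eigenvalues of B_E
```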

3.4 An algorithm for general nonlinear functions

We now review the limited memory steepest descent method for general unconstrained optimization problems, as implemented in [2, Alg. 2] and reported in Algorithm 2. Compared to the gradient method for strictly convex quadratic functions, LMSD for general nonlinear functions involves additional complications. In Section 3 we have proposed two alternative ways to obtain a set of real eigenvalues to use as stepsizes. However, we may still get negative eigenvalues. This problem also occurs in classical gradient methods, when \(\textbf{s}_k^T\textbf{y}_k < 0\): in this case, the standard approach is to replace any negative stepsize with a positive one. In LMSD, we keep the \(s \le m\) positive eigenvalues and discard the negative ones. If all eigenvalues are negative, we restart from \(\beta _k = \max (\min (\Vert \textbf{g}_k\Vert ^{-1}, \, 10^{5}), \ 1)\) as in [7]. Moreover, as in [2], only the latest s gradients are kept. As an alternative to this strategy, we also mention the more elaborate approach of Curtis and Guo [15], which involves the simultaneous computation of Ritz and harmonic Ritz values.
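
As a minimal illustration of this safeguard (assuming the computed eigenvalues are inverse stepsizes, as for the Ritz values; the helper name is hypothetical):

```python
import numpy as np

def stepsizes_from_eigenvalues(eigs, g_norm):
    """Keep the positive eigenvalues (inverse stepsizes) and convert them to
    stepsizes; if none is positive, restart from the safeguarded value of [7]."""
    pos = eigs[eigs > 0]
    if pos.size == 0:
        return [max(min(1.0 / g_norm, 1e5), 1.0)]   # restart stepsize
    return sorted(1.0 / pos)                        # stack, smallest stepsize first
```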

The line search of LMSD in [2] is inspired by Algorithm 1 for quadratic functions, and shares many similarities with the routine proposed by Fletcher [1]. Once new stepsizes have been computed, at each sweep we produce new iterates starting from the smallest stepsize in the stack. The reference function value \(f_\textrm{ref}\) for the Armijo sufficient decrease condition is the function value at the beginning of the sweep, as in Algorithm 1. We note that this Armijo-type line search plays the role of the exact line search of Algorithm 1, i.e., the choice of the Cauchy stepsize when nonmonotone behavior (with respect to \(f_\textrm{ref}\)) is observed. The stack of stepsizes is cleared whenever the current steplength needs to be reduced to meet the sufficient decrease condition, or when the new gradient norm is larger than the previous one; this requirement is also present in Algorithm 1. Notice that, since we terminate the sweep whenever a backtracking step is performed, starting from the smallest stepsize decreases the likelihood of ending a sweep prematurely. In contrast with [2], we keep storing the past gradients even after clearing the stack; this choice turns out to be beneficial in the experiments of Section 4.2.
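
A minimal sketch of one sweep with this line search is given below, assuming the stack already contains the stepsizes in increasing order; the helper is hypothetical and omits the safeguards \(\beta _{\min }\), \(\beta _{\max }\) and the storage of past gradients.

```python
import numpy as np

def lmsd_sweep(f, grad, x, stack, c_ls=1e-4, sigma_ls=0.5):
    """One LMSD sweep: use the stored stepsizes starting from the smallest,
    backtrack when the Armijo condition w.r.t. f_ref fails, and clear the
    stack after a backtracking step or an increase of the gradient norm."""
    f_ref = f(x)                     # reference value at the start of the sweep
    g = grad(x)
    while stack:
        beta = stack.pop(0)          # smallest stepsize in the stack
        backtracked = False
        while f(x - beta * g) > f_ref - c_ls * beta * np.dot(g, g):
            beta *= sigma_ls         # reduce the steplength (backtracking)
            backtracked = True
        x_new = x - beta * g
        g_new = grad(x_new)
        if backtracked or np.linalg.norm(g_new) > np.linalg.norm(g):
            stack.clear()            # end the sweep prematurely
        x, g = x_new, g_new
    return x, g
```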

Algorithm 2 LMSD for general nonlinear functions [2]

We remark that, by construction, all new function values within a sweep are smaller than \(f_\textrm{ref}\). Therefore, the line search strategy adopted in [2] can be seen as a nonmonotone line search strategy [24]. Given the uniform bounds imposed on the sequence of stepsizes, the result of global convergence for a gradient method with nonmonotone line search [7, Thm. 2.1] also holds for Algorithm 2.

4 Numerical experiments

We explore several variants of LMSD, for both the quadratic and the general unconstrained case. We compare LMSD with two gradient methods which adaptively pick either BB1 or BB2 stepsizes [25, 26]. As claimed by Fletcher [1], we have observed that LMSD may indeed perform better than L-BFGS on some problems. However, in the majority of our test cases, L-BFGS, as implemented in [27], converges faster than LMSD, both in terms of the number of function (and gradient) evaluations and in terms of computational time. The comparison with another gradient method seems fairer to us than the comparison with a second-order method, and therefore we do not include L-BFGS in our study. Nevertheless, as discussed in Section 1, we recall the two main advantages of LMSD methods: the possibility to extend the idea to problems beyond unconstrained optimization (see, e.g., [8,9,10]), and the less stringent requirements on the objective function that guarantee the global convergence of the method.

4.1 Quadratic functions

The performance of the LMSD method may depend on several choices: the memory parameter m, whether we compute Ritz or harmonic Ritz values, and how we compute a basis for either \(\mathcal {S}\) or \(\mathcal {Y}\). This section studies how different choices affect the behavior of LMSD in the context of strictly convex quadratic problems (1).

We consider quadratic problems by taking the Hessian matrices from the SuiteSparse Matrix Collection [28]. These are 103 SPD matrices with a number of rows n between \(10^2\) and \(10^4\). From this collection we exclude only mhd1280b, nd3k, nos7. The vector \(\textbf{b}\) is chosen so that the solution of \(\textbf{A}\textbf{x}= \textbf{b}\) is \(\textbf{x}^*= \textbf{e}\), the vector of all ones. For all problems, the starting vector is \(\textbf{x}_0 = 10\, \textbf{e}\), and the initial stepsize is \(\beta _{0} = 1\). The algorithm stops when \(\Vert \textbf{g}_k\Vert \le \textsf{tol} \cdot \Vert \textbf{g}_0\Vert \) with \(\textsf{tol} = 10^{-6}\), or when \(5\cdot 10^4\) iterations are reached. We compare the performance of LMSD with memory parameters \(m = 3,\,5,\, 10\) with the \(\text {ABB}_{\text {min}}\) gradient method [25] and one of its variants, presented in [26], which we denote by \(\text {ABB}_{\text {bon}}\). The \(\text {ABB}_{\text {min}}\) stepsize is defined as

$$ \beta _{k}^\mathrm{{ABB_\textrm{min}}} = \left\{ \begin{array}{ll} \min \{\beta _{j}^\mathrm{{BB2}}\mid j = \max \{1, k-m\},\dots ,k\}, & \ \text {if} \ \beta _{k}^\mathrm{{BB2}} < \eta \, \beta _{k}^\mathrm{{BB1}}, \\[1.5mm] \beta _{k}^\mathrm{{BB1}}, & \ \text {otherwise,} \end{array} \right. $$

where \(m = 5\) and \(\eta = 0.8\). The \(\text {ABB}_{\text {bon}}\) stepsize is defined in the same way as \(\text {ABB}_{\text {min}}\), but with an adaptive threshold \(\eta \): starting from \(\eta _0 = 0.5\), this is updated as

$$ \eta _{k+1} = \left\{ \begin{array}{ll} 0.9\,\eta _k, & \ \text {if} \ \beta _{k}^\mathrm{{BB2}} < \eta _k \, \beta _{k}^\mathrm{{BB1}}, \\[0.5mm] 1.1\,\eta _k, & \ \text {otherwise.} \end{array} \right. $$

Since the performance of these methods depends less on the choice of m than that of LMSD, we only show \(m=5\) for both \(\text {ABB}_{\text {min}}\) and \(\text {ABB}_{\text {bon}}\). Among many possible stepsize choices, we compare LMSD with these gradient methods because they perform better than classical BB stepsizes on quadratic problems (see, e.g., [29]).
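
For illustration, a minimal sketch of both rules is given below; the helper and its interface are hypothetical, and the caller is assumed to maintain the history of BB2 stepsizes and the current threshold.

```python
import numpy as np

def abb_stepsize(s, y, bb2_history, eta, m=5, adaptive=False):
    """Sketch of the ABB_min rule (fixed eta) and of the ABB_bon variant
    (adaptive=True), following the definitions above."""
    bb1 = np.dot(s, s) / np.dot(s, y)          # beta^BB1
    bb2 = np.dot(s, y) / np.dot(y, y)          # beta^BB2
    bb2_history.append(bb2)
    if len(bb2_history) > m + 1:               # keep beta_j^BB2, j = k-m, ..., k
        bb2_history.pop(0)
    if bb2 < eta * bb1:
        beta = min(bb2_history)
        if adaptive:
            eta *= 0.9                         # ABB_bon threshold update
    else:
        beta = bb1
        if adaptive:
            eta *= 1.1
    return beta, eta
```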

We recall that one \(\text {ABB}_{\text {min}}\) (\(\text {ABB}_{\text {bon}}\)) step requires the computation of three inner products, of cost \(\mathcal {O}(n)\) each. An LMSD sweep is slightly more expensive, involving operations of order \(m^2n\) and (much less importantly) \(m^3\), but it is performed approximately once every m iterations. These costs correspond to the decomposition of either \(\textbf{G}\) or \(\textbf{Y}\), and to the computation of the projected Hessian matrices and of their eigenvalues. We also remark that, while pivoted QR and SVD require \(\mathcal {O}(m^2n)\) operations, the Cholesky decomposition costs \(\mathcal {O}(m^3)\), but it is preceded by the computation of a Gramian matrix, at a cost of \(\mathcal {O}(m^2n)\).

We consider two performance metrics: the number of gradient evaluations (NGE) and the computational time. The number of gradient evaluations also includes the iterations that had to be restarted with a Cauchy step (cf. Algorithm 1, Line 9). Our experience indicates that computational time may depend significantly on the chosen programming language, and therefore should not be the primary criterion when comparing the methods. Nevertheless, it is included as an indication, because it accounts for the different costs of an LMSD sweep and of m iterations of a gradient method.

The comparison of different methods is made by means of performance profiles [30], as implemented in the Python library perfprof. Briefly, the cost of each algorithm on each problem is normalized, so that the winning algorithm has cost 1; this quantity is called the performance ratio. We then plot the proportion of problems that have been solved within a certain performance ratio. An infinite cost is assigned whenever a method is not able to solve a problem to the prescribed tolerance within the maximum number of iterations.
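
A minimal sketch of how such a profile can be computed, independently of the perfprof package (array shapes and names are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(costs, labels, tau_max=3.0):
    """costs: (n_problems, n_solvers) array of costs, np.inf for failures."""
    ratios = costs / np.min(costs, axis=1, keepdims=True)   # performance ratios
    taus = np.linspace(1.0, tau_max, 200)
    for j, label in enumerate(labels):
        # proportion of problems solved within performance ratio tau
        rho = [(ratios[:, j] <= t).mean() for t in taus]
        plt.step(taus, rho, where="post", label=label)
    plt.xlabel("performance ratio")
    plt.ylabel("proportion of problems solved")
    plt.legend()
```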

We compare the performance of LMSD where the stepsizes are computed as summarized in Table 1. For simplicity, the first comparison only considers methods that involve a Cholesky decomposition. The Cholesky routine raises an error whenever the input matrix is not (numerically) SPD; therefore no tolerance \(\textsf{thresh}\) for discarding old gradients needs to be chosen. The performance profiles for this first experiment are shown in Fig. 1 for \(m \in \{3,5,10\}\), in the performance range \([1,\,3]\). As m increases, all methods improve, both in terms of gradient evaluations and computational time.

Table 1 Strategies to compute the new stack of stepsizes in LMSD methods for quadratic functions. RQ refers to the computation of Rayleigh quotients from the harmonic Ritz vectors; H stands for “harmonic”; the letters G (Y) indicate whether a decomposition has been used to implicitly compute a basis for \(\textbf{G}\) (\(\textbf{Y}\))
Fig. 1 Performance profile for strictly convex quadratic problems, based on the number of gradient evaluations (left) and computational time (right). Different line types indicate different values for m. Comparison between the computation of Ritz values or harmonic Ritz values

The method that performs best, both in terms of NGE and computational time, is LMSD-G for \(m = 10\). When \(m = 5\), LMSD-HG-RQ performs better than LMSD-G in terms of NGE, but it is more computationally demanding. This is reasonable, since LMSD-HG-RQ has to compute m extra Rayleigh quotients; this operation has an additional cost of \(\mathcal {O}(m^3)\), which can be relatively large for some problems in our collection, where \(m^3 \approx n\).

LMSD-HG and LMSD-HY perform similarly, since they are two different ways of computing the same harmonic Ritz values. They generally perform worse than the other two methods; in the case \(m = 10\), their performances are comparable with those of LMSD-G for \(m = 5\).

Fig. 2 Performance profile for strictly convex quadratic problems, based on the number of gradient evaluations (left) and computational time (right). Different line types indicate different values for m. Comparison between different decompositions for the matrix \(\textbf{G}\)

Fig. 3 Condition number of quadratic problems with Hessian matrix \(\textbf{A}\) and corresponding number of gradient evaluations. Different colors indicate different ways of computing the Ritz values of \(\textbf{A}\)

In Fig. 2 we compare LMSD with \(\text {ABB}_{\text {min}}\) and \(\text {ABB}_{\text {bon}}\). Based on the observations from Fig. 1, we choose to compute the Ritz values of the Hessian matrix, by decomposing \(\textbf{G}\) in different ways. Specifically, we compare LMSD-G, LMSD-G-QR and LMSD-G-SVD (cf. Table 1). The tolerance that determines the memory size \(s \le m\) is set to \(\textsf{thresh} = 10^{-8}\), for both pivoted QR and SVD. Once more, we clearly see that LMSD improves as the memory parameter increases, both in terms of gradient evaluations and computational time. For fixed m, the three methods to compute the basis for \(\mathcal {S}\) are almost equivalent. LMSD-G-SVD seems to be slightly faster than LMSD-G in terms of computational time, as long as the performance ratio is smaller than 1.5. In our implementation, LMSD-G-QR appears to be more expensive. Compared to \(\text {ABB}_{\text {min}}\) and \(\text {ABB}_{\text {bon}}\), all LMSD methods with \(m = 5,10\) perform better in terms of gradient evaluations. LMSD-G-SVD, for \(m=10\), appears to be faster than \(\text {ABB}_{\text {min}}\) and \(\text {ABB}_{\text {bon}}\) also in terms of computational time.

Figure 2 already suggests that different decompositions give approximately equivalent results. Moreover, for a given problem, it is difficult to recommend a specific decomposition strategy. We illustrate this point with the following example: consider a family of 15 problems with \(\textbf{A}= \text {diag}(1,\omega ,\omega ^2, \dots ,\omega ^{99})\), where \(\omega \) takes 15 equally spaced values in \([1.01,\,1.4]\). Geometric sequences of eigenvalues are quite frequent in the literature; see, e.g., [1, 2]. The starting vector is \(\textbf{x}_0 = \textbf{e}\), the associated linear system is \(\textbf{A}\textbf{x}= \textbf{0}\); the memory parameter is \(m = 5\), and each problem is scaled by the norm of the first gradient, so that \(\textsf{tol} = 10^{-7}/\Vert \textbf{g}_0\Vert \). The initial stepsize is \(\beta _0 = 0.5\). In Fig. 3, we plot the condition number of \(\textbf{A}\) against the number of gradient evaluations. The three methods already start to differ for \(\kappa (\textbf{A}) \approx 10^5\). For larger condition numbers, there is no clear winner in our experiments.
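
This test family can be generated as follows (a minimal sketch; variable names are illustrative):

```python
import numpy as np

# 15 diagonal quadratic problems with A = diag(1, w, ..., w^99) and b = 0,
# so that kappa(A) = w^99; the starting vector is the all-ones vector.
omegas = np.linspace(1.01, 1.4, 15)
problems = []
for w in omegas:
    diag_A = w ** np.arange(100)    # eigenvalues of A
    x0 = np.ones(100)               # starting vector e
    problems.append((diag_A, x0))
```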

To summarize, when the objective function is strictly convex quadratic, Ritz values seem preferable over harmonic Ritz values. This is emphasized by the improvement of LMSD-HG when taking Rayleigh quotients instead of harmonic Rayleigh quotients. Different decompositions of \(\textbf{G}\) result in mild differences in the performance of LMSD. Even though the Cholesky decomposition is the least numerically stable of the three, this does not seem to have a clear effect on the performance of LMSD. Finally, we observe that all LMSD variants seem to improve as the memory parameter m increases.

4.2 General unconstrained optimization problems

In this section we assess the performance of LMSD for general unconstrained optimization problems, for different ways of computing the stepsizes. These choices are summarized in Table 2. All the methods presented in Section 3 are considered, along with the extension of the harmonic Ritz value computation to the general unconstrained case, which is explained in [15] and indicated as LMSD-H-CHOL. In the quadratic case, the authors point out that the matrix \(\textbf{P}\) (14) can be expressed in terms of \(\textbf{T}\) as \(\textbf{P}= \textbf{T}^T \textbf{T}+ \varvec{\xi }\varvec{\xi }^T\), where \(\varvec{\xi }^T = [\textbf{0}^T\ \rho ]\,\textbf{J}\textbf{R}^{-1}\). Then, if \(\widetilde{\textbf{T}}\) is the tridiagonal symmetrization of \(\textbf{T}\) as in LMSD-CHOL, the new \(\textbf{P}\) is defined as \(\widetilde{\textbf{P}} = \widetilde{\textbf{T}}^T\widetilde{\textbf{T}} + \varvec{\xi }\varvec{\xi }^T\). The new stepsizes are the eigenvalues of \(\widetilde{\textbf{P}}^{-1}\widetilde{\textbf{T}}\), which are real since \(\widetilde{\textbf{P}}\) is generically SPD and \(\widetilde{\textbf{T}}\) is symmetric.

Table 2 Strategies to compute the new stack of stepsizes in LMSD methods for general nonlinear functions. H stands for “harmonic”

All the LMSD methods are tested against the gradient method with nonmonotone line search [7], where the stepsize is again either \(\text {ABB}_{\text {min}}\) or \(\text {ABB}_\text {bon}\) with \(m = 5\). The nonmonotone line search features a memory parameter \(M = 10\); negative stepsizes are replaced by \(\beta _k = \max (\min (\Vert \textbf{g}_k\Vert ^{-1}, \, 10^{5}), \ 1)\), as in [7]. In both algorithms, we set \(\beta _{\min } = 10^{-30}\), \(\beta _{\max } = 10^{30}\), \(c_\mathrm{{ls}} = 10^{-4}\), \(\sigma _\mathrm{{ls}} = \frac{1}{2}\), and \(\beta _{0} = \Vert \textbf{g}_0\Vert ^{-1}\). The routine stops when \(\Vert \textbf{g}_k\Vert \le \textsf{tol}\cdot \Vert \textbf{g}_0\Vert \), with \(\textsf{tol} = 10^{-6}\), or when \(10^5\) iterations are reached. In LMSD, the memory parameter is set to \(m \in \{3,5,7\}\).

We take 31 general differentiable functions from the CUTEst collection [31, 32], with the suggested starting points \(\textbf{x}_0\) therein. The problems are reported in Table 3. Since some test problems are non-convex, we checked that all gradient methods converged to the same stationary point. For the performance profiles, we consider three different costs: the number of function evaluations (NFE), the number of iterations, and the computational time. The number of iterations coincides with the number of gradient evaluations for LMSD, \(\text {ABB}_{\text {min}}\) and \(\text {ABB}_{\text {bon}}\).

Table 3 Problems from the CUTEst collection and their sizes

Before comparing the LMSD methods with the other gradient methods, we discuss the following two aspects of LMSD: the use of different decompositions of either \(\textbf{G}\) or \(\textbf{S}\) in LMSD-LYA, as presented in Section 3.3; and the number of stepsizes per sweep that are actually used by each LMSD method, in relation to the chosen memory parameter m.

Different decompositions in LMSD-LYA. In the quadratic case, we noticed little difference between the listed decompositions for computing a basis for \(\mathcal {S}\). We repeat this experiment with LMSD-LYA for general unconstrained problems, because the Hessian matrix is not constant during the iterations and therefore the way we discard the past gradients might be relevant. We recall that the Cholesky decomposition (LMSD-LYA) discards the oldest gradients first; pivoted QR (LMSD-LYA-QR) selects the gradients in a different order; SVD (LMSD-LYA-SVD) takes a linear combination of the available gradients. For the last two methods, the tolerance to detect linear dependence is set to \(\textsf{thresh} = 10^{-8}\).

Figure 4 shows the three decompositions for \(m = 5\). Memory parameters \(m=3,\,7\) are not reported, as the results are similar to the case \(m = 5\). The conclusion is the same as for the quadratic case: the decomposition method does not seem to have a large impact on the performance of LMSD. However, for the general case, we remark that, while LMSD-LYA solved all problems, LMSD-LYA-QR and LMSD-LYA-SVD each failed to solve one problem, for all the tested memory parameters. In addition, LMSD-LYA seems more computationally efficient than the other methods. For these two reasons, we continue our analysis by focusing on the Cholesky decomposition only.

Fig. 4 Performance profile for general unconstrained problems, based on the number of function evaluations, gradient evaluations, and computational time. Comparison between different decompositions for the matrix \(\textbf{G}\) (or \(\textbf{S}\)) and \(m = 5\)

Fig. 5 Cumulative distribution function of the number of iterations per sweep, i.e., the average number of stepsizes per sweep. Curves are based on the tested problems. Straight dashed lines indicate the uniform distribution over [1, m]

Average number of stepsizes per sweep. We quantify the efficiency of the various LMSD methods as follows. Ideally, each sweep should provide m new stepsizes, to be used in the next m iterations. However, because of the adopted algorithm, fewer than m stepsizes may actually be employed before the stack is cleared. For each problem and method, we compute the ratio between the number of iterations and the number of sweeps. This gives the average number of stepsizes that are used in each sweep. This value lies in [1, m], where the memory parameter m corresponds to the ideal situation in which all the stepsizes of a sweep are used. A method that often uses fewer than m stepsizes might be inefficient, since the effort of computing new stepsizes (approximately \(\mathcal {O}(m^2n)\) operations) is not fully repaid.

The number of iterations per sweep is shown in Fig. 5 as a distribution function over the tested problems. An ideal curve would be a step function with a jump at m. For example, when \(m = 3\), LMSD-CHOL, i.e., Fletcher’s method, uses 3 stepsizes on average for approximately \(80\%\) of the problems; this is close to the desired situation. When \(m = 5\), we notice that LMSD-H-LYA and LMSD-LYA behave similarly, but with an average smaller than 5. In all cases, LMSD-H-LYA shows the lowest average number of stepsizes per sweep. Another interesting behavior is that of LMSD-PERT, which, for some problems, approaches the largest value m, but, for many others, shows a lower average. For \(m=5,7\), more than \(50\%\) of the problems are solved using only half of the available stepsizes per sweep. This behavior is reflected in the performance profiles of LMSD-PERT: in going from \(m=5\) to \(m=7\), we observed an improvement in terms of the number of function evaluations, but a deterioration in computational time.

As m increases, the deterioration of the average number of stepsizes per sweep is also visible for the other methods. As already remarked in [1, 2], this suggests that choosing a large value of m does not improve the LMSD methods for general unconstrained problems, in contrast with what we have observed in the quadratic case.

Fig. 6 Performance profile for general unconstrained problems, based on the number of function evaluations, gradient evaluations, and computational time. Comparison between different ways to compute the new stepsizes of a sweep in LMSD, and the gradient method with nonmonotone line search and with either the \(\text {ABB}_{\text {min}}\) or the \(\text {ABB}_{\text {bon}}\) step

Comparison with gradient methods. In what follows, we consider only \(m=5\) for the comparison with \(\text {ABB}_{\text {min}}\) and \(\text {ABB}_{\text {bon}}\). For LMSD, we do not include \(m = 3\) because it showed poorer results than the simpler nonmonotone gradient methods. LMSD with \(m = 7\) is not considered either, since it performs similarly to \(m = 5\) but at a higher computational cost. Results are shown in Fig. 6. From the performance profiles related to the computational time, we see that \(\text {ABB}_{\text {bon}}\) solves a high proportion of problems with the minimum computational time.

Regarding the performance profiles for both NFE and NGE, we note that LMSD-PERT has the highest curve; LMSD-LYA and LMSD-CHOL almost overlap for a performance ratio smaller than 1.5; after that, the two curves split, and LMSD-CHOL reaches LMSD-PERT.

The LMSD-PERT method solves \(42\%\) of the problems with the minimum number of gradient evaluations. However, the performance profile for NFE shows that this is not matched by a correspondingly low number of function evaluations. Intuitively, this means that LMSD-PERT enters the backtracking procedure more often than the other methods, which reflects what we observed in the central plot of Fig. 5. Whenever the backtracking procedure is entered, the stack of stepsizes is cleared and the sweep is terminated; hence, the more backtracking is needed, the fewer stepsizes per sweep are used.

LMSD-H-CHOL and LMSD-H-LYA, the “harmonic” approaches, perform somewhat worse than the other methods: while LMSD-H-CHOL can still compete with \(\text {ABB}_{\text {min}}\) and \(\text {ABB}_{\text {bon}}\) in terms of NFE and NGE, it performs worse in terms of computational time. LMSD-H-LYA performs generally worse than the other techniques; Figure 5 already suggested the poorer quality of the LMSD-H-LYA stepsizes, which often require backtracking or lead to an increasing gradient norm.

To complete the picture, Table 4 reports two important quantities related to the performance profile: the proportion of problems solved by each method, and the proportion of problems solved with minimum cost, which is not always clearly visible from Fig. 6. We notice that \(\text {ABB}_{\text {min}}\) and LMSD-H-LYA fail to solve one of the 31 tested problems. The \(\text {ABB}_{\text {bon}}\) stepsize solves \(26\%\) of problems with minimum NFE and \(48\%\) of problems with minimum computational time. LMSD-PERT wins in terms of NGE. The proportion of problems solved with minimum NFE is the same for LMSD-CHOL, LMSD-LYA, and LMSD-H-CHOL.

Table 4 For each method, we report the proportion of solved problems and the proportion of problems solved at minimum cost (performance ratio equal to 1) for different performance measures. The memory parameter is \(m = 5\) for all the LMSD methods

5 Conclusions

We have reviewed the limited memory steepest descent method proposed by Fletcher [1], for both quadratic and general nonlinear unconstrained problems. In the context of strictly convex quadratic functions, we have explored pivoted QR and SVD as alternative ways to compute a basis for either the matrix \(\textbf{G}\) (Ritz values) or \(\textbf{Y}\) (harmonic Ritz values). We have also proposed to improve the harmonic Ritz values by computing the Rayleigh quotients of their corresponding harmonic Ritz vectors.

Experiments in Section 4.1 have shown that the type of decomposition has little influence on the number of iterations of LMSD. The choice between Cholesky decomposition, pivoted QR and SVD is problem dependent. These three methods may compete with the \(\text {ABB}_{\text {min}}\) and \(\text {ABB}_{\text {bon}}\) gradient methods.

The experiments also suggest that a larger memory parameter improves the performance of LMSD, and Ritz values seem to perform better than harmonic Ritz values. The modification of the harmonic Ritz values (Section 2.2) effectively improves the number of iterations, at the extra expense of (relatively cheap) \(\mathcal {O}(m^3)\) work.

In the context of general nonlinear functions, we have given a theoretical foundation to Fletcher’s idea [1] (LMSD-CHOL), by connecting the symmetrization of \(\textbf{T}\) (8) to a perturbation of \(\textbf{Y}\). We have proposed another LMSD method (LMSD-PERT) based on a different perturbation given by Schnabel [13] in the area of quasi-Newton methods. An additional modification of LMSD for general functions (LMSD-LYA) has been obtained by adding symmetry constraints to the secant condition of LMSD for quadratic functions. The solution to this problem coincides with the solution to a Lyapunov equation.

In Section 4.2, experiments on general unconstrained optimization problems have shown that, in contrast with the quadratic case, increasing the memory parameter does not necessarily improve the performance of LMSD. This may also be related to the choices made in Algorithm 2, such as the sufficient decrease condition or the criteria to keep or discard old gradients.

Given a certain memory parameter, the aforementioned LMSD methods seem to perform equally well in terms of the number of function evaluations and computational time. They all seem valid alternatives to the nonmonotone gradient method based on either \(\text {ABB}_{\text {min}}\) or \(\text {ABB}_{\text {bon}}\) stepsizes, with the caveat that LMSD-PERT and LMSD-LYA fail to exploit all the stepsizes computed in a sweep more often than LMSD-CHOL does.

A Python code for the LMSD methods and the nonmonotone \(\text {ABB}_{\text {min}}\) and \(\text {ABB}_{\text {bon}}\) is available at github.com/gferrandi/lmsdpy.