1 Introduction

The use and analysis of first-order methods in convex optimization has gained a considerable amount of attention in recent years. For many applications—such as LASSO regression, boosting/classification, matrix completion, and other machine learning problems—first-order methods are appealing for a number of reasons. First, these problems are often very high-dimensional and thus, without any special structural knowledge, interior-point methods or other polynomial-time methods are unappealing. Second, optimization models in many settings are dependent on data that can be noisy or otherwise limited, and it is therefore not necessary or even sensible to require very high-accuracy solutions. Thus the weaker rates of convergence of first-order methods are typically satisfactory for such applications. Finally, first-order methods are appealing in many applications due to the lower computational burden per iteration, and the structural implications thereof. Indeed, most first-order methods require, at each iteration, the computation of an exact, approximate, or stochastic (sub)gradient and the computation of a solution to a particular “simple” subproblem. These computations typically scale well with the dimension of the problem and are often amenable to parallelization, distributed architectures, efficient management of sparse data-structures, and the like.

Our interest herein is the Frank–Wolfe method, which is also referred to as the conditional gradient method. The original Frank–Wolfe method, developed for smooth convex optimization on a polytope, dates back to Frank and Wolfe [9], and was thereafter generalized to smooth convex objective functions over general bounded convex feasible regions, see for example Demyanov and Rubinov [3], Dunn and Harshbarger [8], Dunn [6, 7], also Levitin and Polyak [19] and Polyak [24]. More recently there has been renewed interest in the Frank–Wolfe method due to some of its properties that we will shortly discuss, see for example Clarkson [1], Hazan [13], Jaggi [15], Giesen et al. [11], and most recently Harchaoui et al. [12], Lan [18] and Temlyakov [25]. The Frank–Wolfe method is premised on being able to easily solve (at each iteration) linear optimization problems over the feasible region of interest. This is in contrast to other first-order methods, such as the accelerated methods of Nesterov [21, 22], which are premised on being able to easily solve (at each iteration) certain projection problems defined by a strongly convex prox function. In many applications, solving a linear optimization subproblem is much simpler than solving the relevant projection subproblem. Moreover, in many applications the solutions to the linear optimization subproblems are often highly structured and exhibit particular sparsity and/or low-rank properties, which the Frank–Wolfe method is able to take advantage of as follows. The Frank–Wolfe method solves one subproblem at each iteration and produces a sequence of feasible solutions that are each a convex combination of all previous subproblem solutions, for which one can derive an \(O(\frac{1}{k})\) rate of convergence for appropriately chosen step-sizes. Due to the structure of the subproblem solutions and the fact that iterates are convex combinations of subproblem solutions, the feasible solutions returned by the Frank–Wolfe method are also typically highly structured. For example, when the feasible region is the unit simplex \(\Delta _n := \{\lambda \in \mathbb {R}^n : e^T\lambda = 1, \lambda \ge 0\}\) and the linear optimization oracle always returns an extreme point, then the Frank–Wolfe method has the following sparsity property: the solution that the method produces at iteration \(k\) has at most \(k\) non-zero entries. (This observation generalizes to the matrix optimization setting: if the feasible region is a ball induced by the nuclear norm, then at iteration \(k\) the rank of the matrix produced by the method is at most \(k\)). In many applications, such structural properties are highly desirable, and in such cases the Frank–Wolfe method may be more attractive than the faster accelerated methods, even though the Frank–Wolfe method has a slower rate of convergence.

The first set of contributions in this paper concern computational guarantees for arbitrary step-size sequences. In Sect. 2, we present a new complexity analysis of the Frank–Wolfe method wherein we derive an exact functional dependence of the complexity bound at iteration \(k\) as a function of the step-size sequence \(\{\bar{\alpha }_k\}\). We derive bounds on the deviation from the optimal objective function value (and on the duality gap in the presence of minmax structure), and on the so-called FW gaps, which may be interpreted as specially structured duality gaps. In Sect. 3, we use the technical theorems developed in Sect. 2 to derive computational guarantees for a variety of simple step-size rules including the well-studied step-size rule \(\bar{\alpha }_k := \frac{2}{k+2}\), simple averaging, and constant step-sizes. Our analysis retains the well-known optimal \(O(\frac{1}{k})\) rate (optimal for linear optimization oracle-based methods [18], following also from [15]) when the step-size is either given by the rule \(\bar{\alpha }_k := \frac{2}{k+2}\) or is determined by a line-search. We also derive an \(O(\frac{\ln (k)}{k})\) rate both for the case when the step-size is given by simple averaging and for the case when the step-size is a suitably chosen constant.

The second set of contributions in this paper concern “warm-start” step-size rules and associated computational guarantees that reflect the quality of the given initial iterate. The \(O(\frac{1}{k})\) computational guarantees associated with the step-size sequence \(\bar{\alpha }_k := \frac{2}{k+2}\) are independent of the quality of the initial iterate. This is good if the objective function value of the initial iterate is very far from the optimal value, as the computational guarantee is independent of the poor quality of the initial iterate. But if the objective function value of the initial iterate is moderately close to the optimal value, one would want the Frank–Wolfe method, with an appropriate step-size sequence, to have computational guarantees that reflect the closeness to optimality of the initial objective function value. In Sect. 4, we introduce a modification of the \(\bar{\alpha }_k := \frac{2}{k+2}\) step-size rule that incorporates the quality of the initial iterate. Our new step-size rule maintains the \(O(\frac{1}{k})\) complexity bound but now the bound is enhanced by the quality of the initial iterate. We also introduce a dynamic version of this warm-start step-size rule, which dynamically incorporates all new bound information at each iteration. For the dynamic step-size rule, we also derive an \(O(\frac{1}{k})\) complexity bound that depends naturally on all of the bound information obtained throughout the course of the algorithm.

The third set of contributions concern computational guarantees in the presence of approximate computation of gradients and linear optimization subproblem solutions. In Sect. 5, we first consider a variation of the Frank–Wolfe method where the linear optimization subproblem at iteration \(k\) is solved approximately to an (additive) absolute accuracy of \(\delta _k\). We show that, independent of the choice of step-size sequence \(\{\bar{\alpha }_k\}\), the Frank–Wolfe method does not suffer from an accumulation of errors in the presence of approximate subproblem solutions. We extend the “technical” complexity theorems of Sect. 2, which imply, for instance, that when an optimal step-size such as \(\bar{\alpha }_k := \frac{2}{k+2}\) is used and the \(\{\delta _k\}\) accuracy sequence is a constant \(\delta \), then a solution with accuracy \(O(\frac{1}{k} + \delta )\) can be achieved in \(k\) iterations. We next examine variations of the Frank–Wolfe method where exact gradient computations are replaced with inexact gradient computations, under two different models of inexact gradient computations. We show that all of the complexity results under the previously examined approximate subproblem solution case (including, for instance, the non-accumulation of errors) directly apply to the case where exact gradient computations are replaced with the \(\delta \)-oracle approximate gradient model introduced by d’Aspremont [2]. We also examine replacing exact gradient computations with the \((\delta , L)\)-oracle model introduced by Devolder et al. [4]. In this case the Frank–Wolfe method suffers from an accumulation of errors under essentially any step-size sequence \(\{\bar{\alpha }_k\}\). These results provide some insight into the inherent tradeoffs faced in choosing among several first-order methods.

1.1 Notation

Let \(E\) be a finite-dimensional real vector space with dual vector space \(E^*\). For a given \(s \in E^*\) and a given \(\lambda \in E\), let \(s^T\lambda \) denote the evaluation of the linear functional \(s\) at \(\lambda \). For a norm \(\Vert \cdot \Vert \) on \(E\), let \(B(c,r) = \{\lambda \in E : \Vert \lambda - c\Vert \le r\}\). The dual norm \(\Vert \cdot \Vert _*\) on the space \(E^*\) is defined by \(\Vert s\Vert _*:= \max \nolimits _{\lambda \in B(0,1)} \{s^T\lambda \} \) for a given \(s \in E^*\). The notation “\(\tilde{v} \leftarrow \arg \max \nolimits _{v \in S} \{f(v)\}\)” denotes assigning \(\tilde{v}\) to be any optimal solution of the problem \(\max \nolimits _{v \in S} \{f(v)\}\).

2 The Frank–Wolfe method

We recall the Frank–Wolfe method for convex optimization, see Frank and Wolfe [9], also Demyanov and Rubinov [3], Levitin and Polyak [19], and Polyak [24], stated here for maximization problems:

$$\begin{aligned} \begin{array}{ll} \max \limits _{\lambda } &{} h(\lambda ) \\ \mathrm {s.t.} &{} \lambda \in Q, \end{array} \end{aligned}$$
(1)

where \(Q \subset E\) is convex and compact, and \(h(\cdot ) : Q \rightarrow \mathbb {R}\) is concave and differentiable on \(Q\). Let \(h^*\) denote the optimal objective function value of (1). The basic Frank–Wolfe method is presented in Method 1, where the main computational requirement at each iteration is to solve a linear optimization problem over \(Q\) in step (2) of the method. The step-size \(\bar{\alpha }_k\) in step (4) could be chosen by inexact or exact line-search, or by a pre-determined or dynamically determined step-size sequence \(\{\bar{\alpha }_k\}\). Also note that the version of the Frank–Wolfe method in Method 1 does not allow a (full) step-size \(\bar{\alpha }_k = 1\), the reasons for which will become apparent below.

[Method 1: the Frank–Wolfe method (algorithm figure)]
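
Since the algorithm figure is not reproduced here, the following is a minimal Python sketch of Method 1 as described in the surrounding text; grad_h, linear_opt_max, step_size, and B_other are hypothetical user-supplied callables (gradient oracle, exact linear optimization oracle over \(Q\), step-size rule, and optional other upper bound), not names from the paper, and step (3) is modeled simply as keeping the best upper bound seen so far.

```python
import numpy as np

def frank_wolfe(h, grad_h, linear_opt_max, lam1, step_size, num_iters, B_other=None):
    """Sketch of Method 1 for maximizing a concave h over a compact convex set Q.

    grad_h(lam)       -> gradient of h at lam
    linear_opt_max(c) -> an optimal solution of max_{lam in Q} c^T lam   (step (2))
    step_size(k)      -> step-size alpha_bar_k in [0, 1)                 (step (4))
    B_other(k, lam)   -> optional other upper bound B^o_k                (step (3))
    """
    lam, B, G = lam1, np.inf, None
    for k in range(1, num_iters + 1):
        g = grad_h(lam)
        lam_tilde = linear_opt_max(g)                 # step (2): linear optimization over Q
        Bw = h(lam) + g.dot(lam_tilde - lam)          # Wolfe upper bound, see (2)
        G = Bw - h(lam)                               # FW gap, see (3)
        B = min(B, Bw)                                # step (3): update the best upper bound
        if B_other is not None:
            B = min(B, B_other(k, lam))
        alpha_bar = step_size(k)                      # step (4): choose step-size
        lam = lam + alpha_bar * (lam_tilde - lam)     #           and move toward lam_tilde
    return lam, B, G
```

The step-size rules analyzed in Sects. 3 and 4 plug in through the step_size argument, for example step_size = lambda k: 2.0 / (k + 2) for the well-studied rule \(\bar{\alpha }_k = \frac{2}{k+2}\).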

As a consequence of solving the linear optimization problem in step (2) of the method, one conveniently obtains the following upper bound on the optimal value \(h^*\) of (1):

$$\begin{aligned} B^w_k := h(\lambda _k) + \nabla h (\lambda _k)^T(\tilde{\lambda }_k - \lambda _k), \end{aligned}$$
(2)

and it follows from the fact that the linearization of \(h(\cdot )\) at \(\lambda _k\) dominates \(h(\cdot )\) that \(B^w_k\) is a valid upper bound on \(h^*\). We also study the quantity \(G_k\):

$$\begin{aligned} G_k := B^w_k - h(\lambda _k) = \nabla h (\lambda _k)^T(\tilde{\lambda }_k - \lambda _k), \end{aligned}$$
(3)

which we refer to as the “FW gap” at iteration \(k\) for convenience. Note that \(G_k \ge h^*- h(\lambda _k) \ge 0\). The use of the upper bound \(B^w_k\) dates to the original 1956 paper of Frank and Wolfe [9]. As early as 1970, Demyanov and Rubinov [3] used the FW gap quantities extensively in their convergence proofs of the Frank–Wolfe method, and perhaps this quantity was used even earlier. In certain contexts, \(G_k\) is an important quantity by itself, see for example Hearn [14], Khachiyan [16] and Giesen et al. [11]. Indeed, Hearn [14] studies basic properties of the FW gaps independent of their use in any algorithmic schemes. For results concerning upper bound guarantees on \(G_k\) for specific and general problems see Khachiyan [16], Clarkson [1], Hazan [13], Jaggi [15], Giesen et al. [11], and Harchaoui et al. [12]. Both \(B^w_k\) and \(G_k\) are computed directly from the solution of the linear optimization problem in step (2) and are recorded therein for convenience.

In some of our analysis of the Frank–Wolfe method, the computational guarantees will depend on the quality of upper bounds on \(h^*\). In addition to the Wolfe bound \(B^w_k\), step (3) allows for an “optional other upper bound \(B^o_k\)” that also might be computed at iteration \(k\). Sometimes there is structural knowledge of an upper bound as a consequence of a dual problem associated with (1), as when \(h(\cdot )\) is conveyed with minmax structure, namely:

$$\begin{aligned} h(\lambda ) = \min _{x \in P} \ \phi (x,\lambda ), \end{aligned}$$
(4)

where \(P\) is a closed convex set and \(\phi (\cdot ,\cdot ) : P \times Q \rightarrow \mathbb {R}\) is a continuous function that is convex in the first variable \(x\) and concave in the second variable \(\lambda \). In this case define the convex function \(f(\cdot ): P \rightarrow \mathbb {R}\) given by \(f(x) := \max \nolimits _{\lambda \in Q} \ \phi (x,\lambda )\) and consider the following duality paired problems:

$$\begin{aligned} \hbox {(Primal):} \quad \min \limits _{x \in P}f(x) \quad \hbox {and} \quad \text{(Dual): } \quad \max \limits _{\lambda \in Q}h(\lambda ), \end{aligned}$$
(5)

where the dual problem corresponds to our problem of interest (1). Weak duality holds, namely \(h(\lambda ) \le h^* \le f(x)\) for all \(x \in P, \lambda \in Q\). At any iterate \(\lambda _k \in Q\) of the Frank–Wolfe method one can construct a “minmax” upper bound on \(h^*\) by considering the variable \(x\) in that structure:

$$\begin{aligned} B^m_k := f(x_k):=\max \limits _{\lambda \in Q} \{\phi (x_k,\lambda )\} \quad \hbox {where} \quad x_k \in \arg \min \limits _{x \in P} \{\phi (x,\lambda _k) \}, \end{aligned}$$
(6)

and it follows from weak duality that \(B^o_k := B^m_k\) is a valid upper bound for all \(k\). Notice that \(x_k\) defined above is the “optimal response” to \(\lambda _k\) in a minmax sense and hence is a natural choice of duality-paired variable associated with the variable \(\lambda _k\). Under certain regularity conditions, for instance when \(h(\cdot )\) is globally differentiable on \(E\), one can show that \(B^m_k\) is at least as tight a bound as Wolfe’s bound, namely \(B^m_k \le B^w_k\) for all \(k\) (see Proposition 7.1), and therefore the FW gap \(G_k\) conveniently bounds this minmax duality gap: \(B^m_k - h(\lambda _k) \le B^w_k - h(\lambda _k) = G_k\).

(Indeed, in the minmax setting notice that the optimal response \(x_k\) in (6) is a function of the current iterate \(\lambda _k\) and hence \(f(x_k) - h(\lambda _k)=B^m_k - h(\lambda _k)\) is not just any duality gap but rather is determined completely by the current iterate \(\lambda _k\). This special feature of the duality gap \(B^m_k - h(\lambda _k)\) is exploited in the application of the Frank–Wolfe method to rounding of polytopes [16], parametric optimization on the spectrahedron [11], and to regularized regression [10] (and perhaps elsewhere as well), where bounds on the FW gap \(G_k\) are used to bound \(B^m_k - h(\lambda _k)\) directly).

We also mention that in some applications there might be exact knowledge of the optimal value \(h^*\), such as in certain linear regression and/or machine learning applications where one knows a priori that the optimal value of the loss function is zero. In these situations one can set \(B^o_k \leftarrow h^*\).

Towards stating and proving complexity bounds for the Frank–Wolfe method, we use the following curvature constant \(C_{h, Q}\), which is defined to be the minimal value of \(C\) satisfying:

$$\begin{aligned} h(\lambda + \alpha (\bar{\lambda }- \lambda ))&\ge h(\lambda ) + \nabla h(\lambda )^T(\alpha (\bar{\lambda }- \lambda )) - \tfrac{1}{2}C \alpha ^2 \quad \hbox {for all }\nonumber \\&\quad \lambda , \bar{\lambda }\in Q \quad \hbox { and all } \quad \alpha \in [0,1]. \end{aligned}$$
(7)

(This notion of curvature was introduced by Clarkson [1] and extended in Jaggi [15]). For any choice of norm \(\Vert \cdot \Vert \) on \(E\), let \(\mathrm {Diam}_Q\) denote the diameter of \(Q\) measured with the norm \(\Vert \cdot \Vert \), namely \(\mathrm {Diam}_Q:=\max \nolimits _{\lambda , \bar{\lambda }\in Q} \{\Vert \lambda - \bar{\lambda }\Vert \}\) and let \(L_{h,Q}\) be the Lipschitz constant for \(\nabla h(\cdot )\) on \(Q\), namely \(L_{h,Q}\) is the smallest constant \(L\) for which it holds that:

$$\begin{aligned} \Vert \nabla h(\lambda ) -\nabla h(\bar{\lambda })\Vert _* \le L \Vert \lambda - \bar{\lambda }\Vert \quad \hbox {for all } \quad \lambda , \bar{\lambda }\in Q. \end{aligned}$$

It is straightforward to show that \(C_{h, Q}\) is bounded above in terms of the more classical quantities \(\mathrm {Diam}_Q\) and \(L_{h,Q}\), namely

$$\begin{aligned} C_{h, Q} \le L_{h,Q}(\mathrm {Diam}_Q)^2, \end{aligned}$$
(8)

see [15]; we present a short proof of this inequality in Proposition 7.2 for completeness. In contrast to other (proximal) first-order methods, the Frank–Wolfe method does not depend on a choice of norm. The norm invariant definition of \(C_{h,Q}\) and the fact that (8) holds for any norm are therefore particularly appealing properties of \(C_{h,Q}\) as a behavioral measure for the Frank–Wolfe method.
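
As a concrete instance (for illustration only), consider a concave quadratic \(h(\lambda ) = c^T\lambda - \tfrac{1}{2}\lambda ^TM\lambda \) with \(M\) symmetric positive semidefinite. Then \(h(\lambda + \alpha (\bar{\lambda }- \lambda )) = h(\lambda ) + \nabla h(\lambda )^T(\alpha (\bar{\lambda }- \lambda )) - \tfrac{1}{2}\alpha ^2(\bar{\lambda }- \lambda )^TM(\bar{\lambda }- \lambda )\) exactly, so the minimal constant in (7) has a closed form; since \(\lambda _{\max }(M)\) upper-bounds the Euclidean Lipschitz constant \(L_{h,Q}\), this also illustrates (8):

$$\begin{aligned} C_{h, Q} = \max _{\lambda , \bar{\lambda }\in Q} \ (\bar{\lambda }- \lambda )^TM(\bar{\lambda }- \lambda ) \ \le \ \lambda _{\max }(M)(\mathrm {Diam}_Q)^2 . \end{aligned}$$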

As a prelude to stating our main technical results, we define the following two auxiliary sequences, where \(\alpha _k\) and \(\beta _k\) are functions of the first \(k\) step-size sequence values, \(\bar{\alpha }_1, \ldots , \bar{\alpha }_k\), from the Frank–Wolfe method:

$$\begin{aligned} \beta _k = \frac{1}{\prod \nolimits _{j=1}^{k-1} (1-\bar{\alpha }_j)}, \quad \alpha _k = \frac{\beta _k \bar{\alpha }_k}{1- \bar{\alpha }_k}, \quad k \ge 1. \end{aligned}$$
(9)

(Here and in what follows we use the conventions: \(\prod _{j = 1}^0 \cdot = 1\) and \(\sum _{i = 1}^0 \cdot = 0\)).
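
As a small computational illustration of (9) and the conventions just stated, the following sketch (a hypothetical helper, not part of the paper) computes the dual averages sequences from a given step-size sequence with \(\bar{\alpha }_k < 1\):

```python
def dual_averages(alpha_bar):
    """Given step-sizes alpha_bar[1..K] (entry 0 unused), return the sequences
    alpha[k] and beta[k] of (9) for k = 1, ..., K."""
    K = len(alpha_bar) - 1
    alpha, beta = [None] * (K + 1), [None] * (K + 1)
    prod = 1.0                                  # empty-product convention for k = 1
    for k in range(1, K + 1):
        beta[k] = 1.0 / prod                    # beta_k = 1 / prod_{j=1}^{k-1}(1 - alpha_bar_j)
        alpha[k] = beta[k] * alpha_bar[k] / (1.0 - alpha_bar[k])
        prod *= 1.0 - alpha_bar[k]
    return alpha, beta

# For instance, with alpha_bar[k] = 2/(k+2) one recovers beta_k = k(k+1)/2 and
# alpha_k = k+1, the closed forms used in the proof of Bound 3.1 below.
```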

The following two theorems are our main technical constructs that will be used to develop the results herein. The first theorem concerns optimality gap bounds.

Theorem 2.1

Consider the iterate sequences of the Frank–Wolfe method (Method 1) \(\{\lambda _k\}\) and \(\{\tilde{\lambda }_k\}\) and the sequence of upper bounds \(\{B_k\}\) on \(h^*\), using the step-size sequence \(\{\bar{\alpha }_k\}\). For the auxiliary sequences \(\{\alpha _k\}\) and \(\{\beta _k\}\) given by (9), and for any \(k \ge 0\), the following inequality holds:

$$\begin{aligned} B_k - h(\lambda _{k+1}) \le \frac{B_k - h(\lambda _1)}{\beta _{k+1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^k\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}}. \end{aligned}$$
(10)

\(\square \)

(The summation expression in the rightmost term above appears also in the bound given for the dual averaging method of Nesterov [23]. Indeed, this is no coincidence as the sequences \(\{\alpha _k\}\) and \(\{\beta _k\}\) given by (9) arise precisely from a connection between the Frank–Wolfe method and the dual averaging method. If we define \(s_k := \lambda _0 + \sum _{i = 0}^{k-1}\alpha _i\tilde{\lambda }_i\), then one can interpret the sequence \(\{s_k\}\) as the sequence of dual variables in a particular instance of the dual averaging method. This connection underlies the proof of Theorem 2.1, and the careful reader will notice the similarities between the proof of Theorem 2.1 and the proof of Theorem 1 in [23]. For this reason we will henceforth refer to the sequences (9) as the “dual averages” sequences associated with \(\{\bar{\alpha }_k\}\)).

The second theorem concerns the FW gap values \(G_k\) from step (2) in particular.

Theorem 2.2

Consider the iterate sequences of the Frank–Wolfe method (Method 1) \(\{\lambda _k\}\) and \(\{\tilde{\lambda }_k\}\), the sequence of upper bounds \(\{B_k\}\) on \(h^*\), and the sequence of FW gaps \(\{G_k\}\) from step (2), using the step-size sequence \(\{\bar{\alpha }_k\}\). For the auxiliary sequences \(\{\alpha _k\}\) and \(\{\beta _k\}\) given by (9), and for any \(\ell \ge 0\) and \(k \ge \ell + 1\), the following inequality holds:

$$\begin{aligned} \min _{i \in \{\ell +1, \ldots , k\}}G_i&\le \frac{1}{\sum _{i = \ell +1}^k\bar{\alpha }_i}\left[ \frac{B_\ell - h(\lambda _1)}{\beta _{\ell +1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^\ell \frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{\ell +1}}\right] \nonumber \\&+\,\frac{\frac{1}{2}C_{h,Q}\sum _{i = \ell +1}^k\bar{\alpha }_i^2}{\sum _{i=\ell +1}^k\bar{\alpha }_i}. \end{aligned}$$
(11)

\(\square \)

Theorems 2.1 and 2.2 can be applied to yield specific complexity results for any specific step-size sequence \(\{\bar{\alpha }_k\}\) (satisfying the mild assumption that \(\bar{\alpha }_k < 1\)) through the use of the implied \(\{\alpha _k\}\) and \(\{\beta _k\}\) dual averages sequences. This is shown for several useful step-size sequences in the next section.

Proof of Theorem 2.1

We will show the slightly more general result for \(k \ge 0\):

$$\begin{aligned} \min \{B,B_k\} - h(\lambda _{k+1}) \le \frac{B - h(\lambda _1)}{\beta _{k+1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^k\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}} \quad \mathrm {for~any~} B, \end{aligned}$$
(12)

from which (10) follows by substituting \(B=B_k\) above.

For \(k=0\) the result follows trivially since \(\beta _1=1\) and the summation term on the right side of (12) is zero by the conventions for null products and summations stated earlier. For \(k \ge 1\), we begin by observing that the following equalities hold for the dual averages sequences (9):

$$\begin{aligned} \beta _{i+1} - \beta _i = \bar{\alpha }_i \beta _{i+1} = \alpha _i \quad \mathrm {and} \quad \beta _{i+1} \bar{\alpha }_i^2 = \frac{\alpha _i^2}{\beta _{i+1}} \quad \mathrm {for~} i \ge 1, \end{aligned}$$
(13)

and

$$\begin{aligned} 1+\sum _{i=1}^k \alpha _i = \beta _{k+1}\quad \mathrm {for~} k \ge 1. \end{aligned}$$
(14)

We then have for \(i \ge 1\):

$$\begin{aligned} \beta _{i+1} h(\lambda _{i+1})&\ge \beta _{i+1} \left[ h(\lambda _i) + \nabla h(\lambda _i)^T(\tilde{\lambda }_i-\lambda _i)\bar{\alpha }_i - \frac{1}{2}\bar{\alpha }_i^2C_{h,Q} \right] \\&= \beta _{i} h(\lambda _i) + (\beta _{i+1} - \beta _{i})h(\lambda _i) + \beta _{i+1}\bar{\alpha }_i\nabla h(\lambda _i)^T(\tilde{\lambda }_i-\lambda _i) - \frac{1}{2}\beta _{i+1}\bar{\alpha }_i^2C_{h,Q} \\&= \beta _{i} h(\lambda _i) + \alpha _{i}h(\lambda _i) + \alpha _i\nabla h(\lambda _i)^T(\tilde{\lambda }_i-\lambda _i) - \frac{1}{2}\frac{\alpha _i^2}{\beta _{i+1}}C_{h,Q} \\&= \beta _{i} h(\lambda _i) + \alpha _{i}\left[ h(\lambda _i) + \nabla h(\lambda _i)^T(\tilde{\lambda }_i-\lambda _i)\right] - \frac{1}{2}\frac{\alpha _i^2}{\beta _{i+1}}C_{h,Q} \\&= \beta _{i} h(\lambda _i) + \alpha _{i}B^w_i - \frac{1}{2}\frac{\alpha _i^2}{\beta _{i+1}}C_{h,Q}. \\ \end{aligned}$$

The inequality in the first line above follows from the definition of \(C_{h,Q}\) in (7) and \(\lambda _{i+1} -\lambda _i = \bar{\alpha }_i(\tilde{\lambda }_i - \lambda _i)\). The second equality above uses the identities (13), and the fourth equality uses the definition of the Wolfe upper bound (2). Rearranging and summing the above over \(i\), it follows that for any scalar \(B\):

$$\begin{aligned} B+\sum _{i=1}^k \alpha _i B^w_i \le B + \beta _{k+1} h(\lambda _{k+1}) - \beta _1 h(\lambda _1) + \frac{1}{2}\sum _{i=1}^k \frac{\alpha _i^2}{\beta _{i+1}}C_{h,Q}. \end{aligned}$$
(15)

Therefore

$$\begin{aligned} \min \{B,B_k\}\beta _{k+1}&= \min \{B,B_k\}\left( 1+\sum _{i=1}^k \alpha _i \right) \\&\le B + \sum _{i=1}^k \alpha _i B^w_i \\&\le B + \beta _{k+1} h(\lambda _{k+1}) - h(\lambda _1) + \frac{1}{2}\sum _{i=1}^k \frac{\alpha _i^2}{\beta _{i+1}}C_{h,Q}, \end{aligned}$$

where the first equality above uses identity (14), the first inequality uses the fact that \(B_k \le B^w_i\) for \(i \le k\), and the second inequality uses (15) and the fact that \(\beta _1=1\). The result then follows by dividing by \(\beta _{k+1}\) and rearranging terms. \(\square \)

Proof of Theorem 2.2

For \(i\ge 1\) we have:

$$\begin{aligned} h(\lambda _{i+1})&\ge h(\lambda _i) + \nabla h(\lambda _i)^T(\tilde{\lambda }_i-\lambda _i)\bar{\alpha }_i - \frac{1}{2}\bar{\alpha }_i^2C_{h,Q} \nonumber \\&= h(\lambda _i) + \bar{\alpha }_{i}G_i - \frac{1}{2}\bar{\alpha }_i^2C_{h,Q}, \end{aligned}$$
(16)

where the inequality follows from the definition of the curvature constant in (7), and the equality follows from the definition of the FW gap in (3). Summing the above over \(i \in \{\ell + 1, \ldots , k\}\) and rearranging yields:

$$\begin{aligned} \sum _{i=\ell +1}^k \bar{\alpha }_{i}G_i \le h(\lambda _{k+1}) - h(\lambda _{\ell +1}) + \sum _{i=\ell +1}^k\frac{1}{2}\bar{\alpha }_i^2C_{h,Q}. \end{aligned}$$
(17)

Combining (17) with Theorem 2.1 we obtain:

$$\begin{aligned} \sum _{i=\ell +1}^k \bar{\alpha }_{i}G_i \le h(\lambda _{k+1}) - B_\ell + \frac{B_\ell - h(\lambda _1)}{\beta _{\ell +1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^\ell \frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{\ell +1}} + \sum _{i=\ell +1}^k\frac{1}{2}\bar{\alpha }_i^2C_{h,Q}, \end{aligned}$$

and since \(B_\ell \ge h^*\ge h(\lambda _{k+1})\) we obtain:

$$\begin{aligned} \left( \min _{i \in \{\ell +1, \ldots , k\}}G_i\right) \left( \sum _{i=\ell +1}^k \bar{\alpha }_{i}\right)&\le \sum _{i=\ell +1}^k \bar{\alpha }_{i}G_i \le \frac{B_\ell - h(\lambda _1)}{\beta _{\ell +1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^\ell \frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{\ell +1}}\\&\quad + \sum _{i=\ell +1}^k\frac{1}{2}\bar{\alpha }_i^2C_{h,Q}, \end{aligned}$$

and dividing by \(\sum _{i=\ell +1}^k \bar{\alpha }_{i}\) yields the result. \(\square \)

3 Computational guarantees for specific step-size sequences

Herein we use Theorems 2.1 and 2.2 to derive computational guarantees for a variety of specific step-size sequences.

It will be useful to consider a version of the Frank–Wolfe method wherein there is a single “pre-start” step. In this case we are given some \(\lambda _0 \in Q\) and some upper bound \(B_{-1}\) on \(h^*\) (one can use \(B_{-1}=+\infty \) if no information is available) and we proceed like any other iteration except that in step (4) we set \(\lambda _{1} \leftarrow \tilde{\lambda }_0\), which is equivalent to setting \(\bar{\alpha }_0 := 1\). This is shown formally in the Pre-start Procedure 2.

[Procedure 2: pre-start step of the Frank–Wolfe method (algorithm figure)]
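
Since the procedure figure is not reproduced here, the following is a minimal sketch of the pre-start step in the same hypothetical notation as the sketch of Method 1 above:

```python
import numpy as np

def pre_start(h, grad_h, linear_opt_max, lam0, B_minus1=np.inf):
    """Sketch of Procedure 2: a single Frank-Wolfe step from lam0 with alpha_bar_0 = 1."""
    g = grad_h(lam0)
    lam_tilde0 = linear_opt_max(g)              # linear optimization step at lam0
    Bw0 = h(lam0) + g.dot(lam_tilde0 - lam0)    # Wolfe upper bound at lam0
    B0 = min(B_minus1, Bw0)                     # best upper bound after the pre-start step
    lam1 = lam_tilde0                           # full step: lam_1 <- lam_tilde_0
    return lam1, B0
```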

Before developing computational guarantees for specific step-sizes, we present a property of the pre-start step (Procedure 2) that has implications for such computational guarantees.

Proposition 3.1

Let \(\lambda _1\) and \(B_0\) be computed by the pre-start step Procedure 2. Then \(B_0 - h(\lambda _1) \le \frac{1}{2}C_{h,Q}\).

Proof

We have \(\lambda _1 = \tilde{\lambda }_0\) and \(B_0 \le B^w_0\), whereby from the definition of \(C_{h,Q}\) using \(\alpha = 1\) we have:

$$\begin{aligned} h(\lambda _1)&= h(\tilde{\lambda }_0) \ge h(\lambda _0) + \nabla h(\lambda _0)^T(\tilde{\lambda }_0 - \lambda _0) - \tfrac{1}{2}C_{h,Q}\\&= B^w_0 - \tfrac{1}{2}C_{h,Q} \ge B_0- \tfrac{1}{2}C_{h,Q}, \end{aligned}$$

and the result follows by rearranging terms.\(\square \)

3.1 A well-studied step-size sequence

Suppose we initiate the Frank–Wolfe method with the pre-start step Procedure 2 from a given value \(\lambda _0 \in Q\) (which by definition assigns the step-size \(\bar{\alpha }_0=1\) as discussed earlier), and then use the step-size \(\bar{\alpha }_i = 2/(i+2)\) for \(i \ge 1\). This can be written equivalently as:

$$\begin{aligned} \bar{\alpha }_i = \frac{2}{i+2} \quad \hbox {for } i \ge 0. \end{aligned}$$
(18)

Computational guarantees for this sequence appeared in Clarkson [1], Hazan [13] (with a corrected proof in Giesen et al. [11]), and Jaggi [15]. In unpublished correspondence with the first author in 2007, Nemirovski [20] presented a short inductive proof of convergence of the Frank–Wolfe method using this step-size rule.

We use the phrase “bound gap” to generically refer to the difference between an upper bound \(B\) on \(h^*\) and the value \(h(\lambda )\), namely \(B-h(\lambda )\). The following result describes guarantees on the bound gap \(B_k - h(\lambda _{k+1})\) and the FW gap \(G_k\) using the step-size sequence (18), that are applications of Theorems 2.1 and 2.2, and that are very minor improvements of existing results as discussed below.

Bound 3.1

Under the step-size sequence (18), the following inequalities hold for all \(k \ge 1\):

$$\begin{aligned} B_k - h(\lambda _{k+1}) \le \frac{2C_{h,Q}}{k+4} \end{aligned}$$
(19)

and

$$\begin{aligned} \min _{i \in \{1, \ldots , k\}}G_i \le \frac{4.5C_{h,Q}}{k}. \end{aligned}$$
(20)

The bound (19) is a very minor improvement over that in Hazan [13], Giesen et al. [11], Jaggi [15], and Harchaoui et al. [12], as the denominator is additively larger by \(1\) (after accounting for the pre-start step and the different indexing conventions). The bound (20) is a modification of the original bound in Jaggi [15], and is also a slight improvement of the bound in Harchaoui et al. [12] inasmuch as the denominator is additively larger by \(1\) and the bound is valid for all \(k \ge 1\).

Proof of Bound 3.1

Using (18) it is easy to show that the dual averages sequences (9) satisfy \(\beta _k = \frac{k(k+1)}{2}\) and \(\alpha _k = k+1\) for \(k\ge 1\). Utilizing Theorem 2.1, we have for \(k\ge 1\):

$$\begin{aligned} B_k - h(\lambda _{k+1})&\le \frac{B_k - h(\lambda _1)}{\beta _{k+1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^k\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}} \\&\le \frac{B_0 - h(\lambda _1)}{\beta _{k+1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^k\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}} \\&\le \frac{\frac{1}{2}C_{h,Q}}{\beta _{k+1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^k\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}} \\&= \frac{C_{h,Q}}{(k+1)(k+2)}\left[ 1+ \sum _{i = 1}^k\frac{2(i+1)^2}{(i+1)(i+2)}\right] \\&= \frac{C_{h,Q}}{(k+1)(k+2)}\left[ \sum _{i = 0}^k\frac{2(i+1)}{(i+2)}\right] \\&\le \frac{2C_{h,Q}}{k+4}, \end{aligned}$$

where the second inequality uses \(B_k \le B_0\), the third inequality uses Proposition 3.1, the first equality substitutes the dual averages sequence values, and the final inequality follows from Proposition 7.3. This proves (19).

To prove (20) we proceed as follows. First apply Theorem 2.2 with \(\ell = 0\) and \(k = 1\) to obtain:

$$\begin{aligned} G_1&\le \frac{1}{\bar{\alpha }_1}\left[ B_0 - h(\lambda _1)\right] + \frac{1}{2}C_{h,Q}\bar{\alpha }_1 \le \frac{1}{2}C_{h,Q}\left[ \frac{1}{\bar{\alpha }_1} + \bar{\alpha }_1\right] \\&= \frac{1}{2}C_{h,Q}\left[ \frac{3}{2} + \frac{2}{3}\right] = \frac{13}{12}C_{h,Q}, \end{aligned}$$

where the second inequality uses Proposition 3.1. Since \(\frac{13}{12} \le 4.5\) and \(\frac{13}{12} \le \frac{4.5}{2}\), this proves (20) for \(k = 1, 2\). Assume now that \(k \ge 3\). Let \(\ell = \lceil \frac{k}{2}\rceil - 2\) so that \(\ell \ge 0\). We have:

$$\begin{aligned} \sum _{i = \ell +1}^k\bar{\alpha }_i&= 2\sum _{i = \ell +1}^k\frac{1}{i + 2} \!=\! 2\sum _{i = \ell + 3}^{k+2}\frac{1}{i} \ge 2\ln \left( \frac{k+3}{\ell +3}\right) \nonumber \\&\ge 2\ln \left( \frac{k+3}{\frac{k}{2} + 1.5}\right) \!=\! 2\ln (2), \end{aligned}$$
(21)

where the first inequality uses Proposition 7.5 and the second inequality uses \(\lceil \frac{k}{2}\rceil \le \frac{k}{2} + \frac{1}{2}\). We also have:

$$\begin{aligned} \sum _{i = \ell +1}^k\bar{\alpha }_i^2&= 4\sum _{i = \ell +1}^k\frac{1}{(i+2)^2} = 4\sum _{i = \ell +3}^{k+2}\frac{1}{i^2} \le \frac{4(k - \ell )}{(\ell + 2)(k+2)}\nonumber \\&\le \frac{4\left( \frac{k}{2} + 2\right) }{\frac{k}{2}(k+2)} = \frac{4(k + 4)}{k(k+2)}, \end{aligned}$$
(22)

where the first inequality uses Proposition 7.5 and the second inequality uses \(\lceil \frac{k}{2}\rceil \ge \frac{k}{2}\). Applying Theorem 2.2 and using (21) and (22) yields:

$$\begin{aligned} \min _{i \in \{1, \ldots , k\}}G_i&\le \frac{1}{2\ln (2)}\left[ \frac{B_\ell - h(\lambda _1)}{\beta _{\ell +1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^\ell \frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{\ell +1}} + \frac{2C_{h,Q}(k + 4)}{k(k+2)} \right] \\&\le \frac{1}{2\ln (2)}\left[ \frac{2C_{h,Q}}{\ell + 4} + \frac{2C_{h,Q}(k + 4)}{k(k+2)} \right] \\&\le \frac{2C_{h,Q}}{2\ln (2)}\left[ \frac{2}{k + 4} + \frac{k + 4}{k(k+2)} \right] = \frac{2C_{h,Q}}{2\ln (2)}\left[ \frac{3k^2 + 12k + 16}{(k+4)(k+2)k} \right] \\&\le \frac{2C_{h,Q}}{2\ln (2)}\left( \frac{3}{k}\right) \le \frac{4.5C_{h,Q}}{k}, \end{aligned}$$

where the second inequality uses the chain of inequalities used to prove (19), the third inequality uses \(\ell + 4 \ge \frac{k}{2} + 2\), and the fourth inequality uses \(k^2 + 4k + \frac{16}{3} \le k^2 + 6k + 8 = (k+4)(k+2)\).\(\square \)

3.2 Simple averaging

Consider the following step-size sequence:

$$\begin{aligned} \bar{\alpha }_i = \frac{1}{i+1} \quad \hbox {for }\quad i \ge 0, \end{aligned}$$
(23)

where, as with the step-size sequence (18), we write \(\bar{\alpha }_0 = 1\) to indicate the use of the pre-start step Procedure 2. It follows from a simple inductive argument that, under the step-size sequence (23), \(\lambda _{k+1}\) is the simple average of \(\tilde{\lambda }_0, \tilde{\lambda }_1, \ldots , \tilde{\lambda }_k\), i.e., we have

$$\begin{aligned} \lambda _{k+1} = \frac{1}{k+1}\sum _{i = 0}^k\tilde{\lambda }_i \quad \hbox {for all }\quad k \ge 0. \end{aligned}$$
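
This can also be checked numerically; the short sketch below, using arbitrary points in place of the subproblem solutions \(\tilde{\lambda }_0, \ldots , \tilde{\lambda }_k\), confirms that the Frank–Wolfe update with the step-sizes (23) reproduces the running average.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_tilde = rng.standard_normal((6, 3))      # stand-ins for lam_tilde_0, ..., lam_tilde_5

lam = lam_tilde[0]                           # pre-start step: lam_1 = lam_tilde_0
for k in range(1, 6):
    alpha_bar = 1.0 / (k + 1)                # step-size rule (23)
    lam = lam + alpha_bar * (lam_tilde[k] - lam)
    assert np.allclose(lam, lam_tilde[: k + 1].mean(axis=0))   # lam_{k+1} is the simple average
```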

Bound 3.2

Under the step-size sequence (23), the following inequality holds for all \(k \ge 0\):

$$\begin{aligned} B_{k} - h(\lambda _{k+1}) \le \frac{\frac{1}{2}C_{h,Q}(1+\ln (k+1))}{k+1}, \end{aligned}$$
(24)

and the following inequality holds for all \(k \ge 2\):

$$\begin{aligned} \min _{i \in \{1, \ldots , k\}}G_i \le \frac{\frac{3}{4}C_{h,Q}\left( 2.3 + 2\ln (k)\right) }{k-1}. \end{aligned}$$
(25)

Proof of Bound 3.2

Using (23) it is easy to show that the dual averages sequences (9) are given by \(\beta _k = k\) and \(\alpha _k = 1\) for \(k\ge 1\). Utilizing Theorem 2.1 and Proposition 3.1, we have for \(k\ge 1\):

$$\begin{aligned} B_k - h(\lambda _{k+1})&\le \frac{\frac{1}{2}C_{h,Q}}{\beta _{k+1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^k\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}} \\&= \frac{\frac{1}{2}C_{h,Q}}{k+1}\left[ 1+ \sum _{i = 1}^k\frac{1}{i+1}\right] \\&\le \frac{\frac{1}{2}C_{h,Q}}{k+1}\left[ 1 + \ln (k+1)\right] , \end{aligned}$$

where the first equality substitutes the dual averages sequence values and the second inequality uses Proposition 7.5. This proves (24). To prove (25), we proceed as follows. Let \(\ell = \lfloor \frac{k}{2}\rfloor - 1\), whereby \(\ell \ge 0\) since \(k \ge 2\). We have:

$$\begin{aligned} \sum _{i = \ell +1}^k\bar{\alpha }_i = \sum _{i = \ell +1}^k\frac{1}{i + 1} = \sum _{i = \ell + 2}^{k+1}\frac{1}{i} \ge \ln \left( \frac{k+2}{\ell + 2}\right) \ge \ln \left( \frac{k+2}{\frac{k}{2} + 1}\right) = \ln (2), \end{aligned}$$
(26)

where the first inequality uses Proposition 7.5 and the second inequality uses \(\ell \le \frac{k}{2} - 1\). We also have:

$$\begin{aligned} \sum _{i = \ell +1}^k\bar{\alpha }_i^2&= \sum _{i = \ell +1}^k\frac{1}{(i+1)^2} = \sum _{i = \ell +2}^{k+1}\frac{1}{i^2} \le \frac{k - \ell }{(\ell + 1)(k + 1)} \nonumber \\&\le \frac{\frac{k}{2} + 1.5}{(\frac{k}{2} - \frac{1}{2})(k+1)} = \frac{k+3}{(k-1)(k+1)}, \end{aligned}$$
(27)

where the first inequality uses Proposition 7.5 and the second inequality uses \(\ell \ge \frac{k}{2} - 1.5\). Applying Theorem 2.2 and using (26) and (27) yields:

$$\begin{aligned} \min _{i \in \{1, \ldots , k\}}G_i&\le \frac{1}{\ln (2)}\left[ \frac{B_\ell - h(\lambda _1)}{\beta _{\ell +1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^\ell \frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{\ell +1}} + \frac{\frac{1}{2}C_{h,Q}(k+3)}{(k-1)(k+1)}\right] \\&\le \frac{1}{\ln (2)}\left[ \frac{\frac{1}{2}C_{h,Q}(1+\ln (\ell +1))}{\ell +1} + \frac{\frac{1}{2}C_{h,Q}(k+3)}{(k-1)(k+1)}\right] \\&\le \frac{\frac{1}{2}C_{h,Q}}{\ln (2)}\left[ \frac{1+\ln (\frac{k}{2})}{\frac{k}{2} - \frac{1}{2}} + \frac{k+3}{(k-1)(k+1)}\right] \\&\le \frac{\frac{1}{2}C_{h,Q}}{\ln (2)}\left[ \frac{2 + 2\ln (k) - 2\ln (2)}{k - 1} + \frac{\frac{5}{3}}{k-1}\right] \\&\le \frac{\frac{3}{4}C_{h,Q}\left( 2.3 + 2\ln (k)\right) }{k-1}, \end{aligned}$$

where the second inequality uses the bound that proves (24), the third inequality uses \(\frac{k}{2} - 1.5 \le \ell \le \frac{k}{2} - 1\) and the fourth inequality uses \(\frac{k+3}{k+1} \le \frac{5}{3}\) for \(k \ge 2\).\(\square \)

3.3 Constant step-size

Given \(\bar{\alpha }\in (0,1)\), consider using the following constant step-size rule:

$$\begin{aligned} \bar{\alpha }_i = \bar{\alpha } \quad \hbox {for }\quad i \ge 1. \end{aligned}$$
(28)

This step-size rule arises in the analysis of the Incremental Forward Stagewise Regression algorithm (\(\mathrm {FS}_\varepsilon \)), see [10], and perhaps elsewhere as well.

Bound 3.3

Under the step-size sequence (28), the following inequality holds for all \(k \ge 1\):

$$\begin{aligned} B_{k} - h(\lambda _{k+1}) \le \left( B_k - h(\lambda _1)\right) (1-\bar{\alpha })^{k}+ \tfrac{1}{2}C_{h,Q}\left[ \bar{\alpha }-\bar{\alpha }(1-\bar{\alpha })^{k} \right] . \end{aligned}$$
(29)

If the pre-start step Procedure 2 is used, then:

$$\begin{aligned} B_{k} - h(\lambda _{k+1}) \le \tfrac{1}{2}C_{h,Q}\left[ (1-\bar{\alpha })^{k+1} + \bar{\alpha }\right] . \end{aligned}$$
(30)

If we decide a priori to run the Frank–Wolfe method for \(k\) iterations after the pre-start step Procedure 2, then we can optimize the bound (30) with respect to \(\bar{\alpha }\). The optimized value of \(\bar{\alpha }\) in the bound (30) is easily derived to be:

$$\begin{aligned} \bar{\alpha }^{*} = 1 - \frac{1}{\root k \of {k+1}}. \end{aligned}$$
(31)

With \(\bar{\alpha }\) determined by (31), we obtain a simplified bound from (30) and also a guarantee for the FW gap sequence \(\{G_k\}\) if the method is continued with the same constant step-size (31) for an additional \(k + 1\) iterations.

Bound 3.4

If we use the pre-start step Procedure 2 and the constant step-size sequence (31) for all iterations, then after \(k\) iterations the following inequality holds:

$$\begin{aligned} B_{k} - h(\lambda _{k+1}) \le \frac{\frac{1}{2}C_{h,Q}\left( 1 + \ln (k+1)\right) }{k}. \end{aligned}$$
(32)

Furthermore, after \(2k + 1\) iterations the following inequality holds:

$$\begin{aligned} \min _{i \in \{1, \ldots , 2k+1\}} G_i \le \frac{\frac{1}{2}C_{h,Q}\left( 1 + 2\ln (k+1)\right) }{k} \end{aligned}$$
(33)

It is curious to note that the bounds (24) and (32) are almost identical, although (32) requires fixing a priori the number of iterations \(k\).

Proof of Bound 3.3

Under the step-size rule (28) it is straightforward to show that the dual averages sequences (9) are for \(i \ge 1\):

$$\begin{aligned} \beta _i = (1-\bar{\alpha })^{-i+1} \quad \mathrm {and} \quad \alpha _i = \bar{\alpha }(1-\bar{\alpha })^{-i}, \end{aligned}$$

whereby

$$\begin{aligned} \sum _{i=1}^k \frac{\alpha _i^2}{\beta _{i+1}} = \sum _{i=1}^k \bar{\alpha }^2(1-\bar{\alpha })^{-i} =\bar{\alpha }^2\left( \frac{\frac{1}{(1-\bar{\alpha })^k}-1}{\bar{\alpha }} \right) = \bar{\alpha }\left[ (1-\bar{\alpha })^{-k} -1 \right] . \end{aligned}$$

It therefore follows from Theorem 2.1 that:

$$\begin{aligned} B_k - h(\lambda _{k+1})&\le \frac{B_k - h(\lambda _1)}{\beta _{k+1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^k\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}} \nonumber \\&= \left( B_k - h(\lambda _1)\right) (1-\bar{\alpha })^{k} + \left( \frac{C_{h,Q}}{2}\right) \bar{\alpha }\left[ (1-\bar{\alpha })^{-k} -1 \right] (1-\bar{\alpha })^{k}\nonumber \\&= \left( B_k - h(\lambda _1)\right) (1-\bar{\alpha })^{k} + \left( \frac{C_{h,Q}}{2}\right) \left[ \bar{\alpha }- \bar{\alpha }(1-\bar{\alpha })^{k} \right] , \end{aligned}$$
(34)

which proves (29). If the pre-start step Procedure 2 is used, then using Proposition 3.1 it follows that \(B_k - h(\lambda _1) \le B_0 - h(\lambda _1) \le \frac{1}{2}C_{h,Q}\), whereby from (29) we obtain:

$$\begin{aligned} B_k - h(\lambda _{k+1})&\le \frac{1}{2}C_{h,Q}(1-\bar{\alpha })^{k} + \left( \frac{C_{h,Q}}{2}\right) \left[ \bar{\alpha }- \bar{\alpha }(1-\bar{\alpha })^{k} \right] \\&= \frac{1}{2}C_{h,Q} \left[ (1-\bar{\alpha })^{k+1} + \bar{\alpha }\right] , \end{aligned}$$

completing the proof. \(\square \)

Proof of Bound 3.4

Substituting the step-size (31) into (30) we obtain:

$$\begin{aligned} B_k - h(\lambda _{k+1})&\le \frac{1}{2}C_{h,Q} \left[ \left( \frac{1}{\root k \of {k+1}}\right) ^{k+1} + 1 - \frac{1}{\root k \of {k+1}} \right] \\&\le \frac{1}{2}C_{h,Q} \left[ \left( \frac{1}{\root k \of {k+1}}\right) ^{k+1} + \frac{\ln (k+1)}{k} \right] \\&\le \frac{1}{2}C_{h,Q} \left[ \frac{1}{k+1} + \frac{\ln (k+1)}{k} \right] \\&\le \frac{1}{2}C_{h,Q} \left[ \frac{1}{k} + \frac{\ln (k+1)}{k} \right] , \end{aligned}$$

where the second inequality follows from (i) of Proposition 7.4. This proves (32). To prove (33), notice that inequality (34) together with the subsequent chain of inequalities in the proofs of (29), (30), and (32) shows that:

$$\begin{aligned} \left[ \frac{B_k - h(\lambda _1)}{\beta _{k+1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^k\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}}\right] \le \frac{1}{2}C_{h,Q} \left( \frac{1+\ln (k+1)}{k} \right) . \end{aligned}$$
(35)

Using (35) and the substitution \(\sum _{i=k+1}^{2k+1}\bar{\alpha }_i = (k+1)\bar{\alpha }\) and \(\sum _{i=k+1}^{2k+1}\bar{\alpha }_i^2 = (k+1)\bar{\alpha }^2\) in Theorem 2.2 yields:

$$\begin{aligned} \min _{i \in \{1, \ldots , 2k+1\}}G_i&\le \frac{1}{(k+1)\bar{\alpha }}\left( \frac{\tfrac{1}{2}C_{h,Q}(1+\ln (k+1))}{k} \right) + \frac{\frac{1}{2}C_{h,Q}(k+1)\bar{\alpha }^2}{(k+1)\bar{\alpha }} \\&\le \tfrac{1}{2}C_{h,Q}\left( \frac{1+\ln (k+1)}{k} \right) + \tfrac{1}{2}C_{h,Q} \cdot \bar{\alpha }\\&\le \frac{\frac{1}{2}C_{h,Q}\left( 1 + 2\ln (k+1)\right) }{k}, \end{aligned}$$

where the second inequality uses (ii) of Proposition 7.4 and the third inequality uses (i) of Proposition 7.4.\(\square \)

3.4 Extensions using line-searches

The original method of Frank and Wolfe [9] utilized an exact line-search to determine the next iterate \(\lambda _{k+1}\) by assigning \(\hat{\alpha }_k \leftarrow \arg \max \nolimits _{\alpha \in [0,1]}\{h(\lambda _k + \alpha (\tilde{\lambda }_k - \lambda _k)) \}\) and \(\lambda _{k+1} \leftarrow \lambda _k + \hat{\alpha }_k(\tilde{\lambda }_k - \lambda _k)\). When \(h(\cdot )\) is a quadratic function and the dimension of the space \(E\) of variables \(\lambda \) is not huge, an exact line-search is easy to compute analytically. It is a straightforward extension of Theorem 2.1 to show that if an exact line-search is utilized at every iteration, then the bound (10) holds for any choice of step-size sequence \(\{\bar{\alpha }_k\}\), and not just the sequence \(\{\hat{\alpha }_k\}\) of line-search step-sizes. In particular, the \(O(\frac{1}{k})\) computational guarantee (19) holds, as does (24) and (29), as well as the bound (38) to be developed in Sect. 4. This observation generalizes as follows. At iteration \(k\) of the Frank–Wolfe method, let \(A_k \subseteq [0,1)\) be a closed set of potential step-sizes and suppose we select the next iterate \(\lambda _{k+1}\) using the exact line-search assignment \(\hat{\alpha }_k \leftarrow \arg \max \nolimits _{\alpha \in A_k}\{h(\lambda _k + \alpha (\tilde{\lambda }_k - \lambda _k)) \}\) and \(\lambda _{k+1} \leftarrow \lambda _k + \hat{\alpha }_k(\tilde{\lambda }_k - \lambda _k)\). Then after \(k\) iterations of the Frank–Wolfe method, we can apply the bound (10) for any choice of step-size sequence \(\{\bar{\alpha }_i\}_{i = 1}^k\) in the cross-product \(A_1 \times \cdots \times A_k\).
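
For instance, for a concave quadratic \(h(\lambda ) = c^T\lambda - \tfrac{1}{2}\lambda ^TM\lambda \) with \(M\) symmetric positive semidefinite (an illustrative example, not tied to a particular application in the paper), the exact line-search over \(\alpha \in [0,1]\) has the closed form sketched below.

```python
import numpy as np

def exact_linesearch_step_quadratic(M, c, lam, lam_tilde):
    """Exact line-search for h(lam) = c^T lam - 0.5 lam^T M lam along d = lam_tilde - lam.

    Along the segment, h(lam + a*d) = h(lam) + a * grad_h(lam)^T d - 0.5 * a^2 * d^T M d,
    which is maximized over a in [0, 1] in closed form."""
    d = lam_tilde - lam
    slope = (c - M.dot(lam)).dot(d)       # grad_h(lam)^T d
    curv = d.dot(M.dot(d))                # d^T M d >= 0 since M is PSD
    if curv <= 0.0:                       # h is linear along d: step to the better endpoint
        a = 1.0 if slope > 0.0 else 0.0
    else:
        a = min(1.0, max(0.0, slope / curv))
    return lam + a * d
```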

For inexact line-search methods, Dunn [7] analyzes versions of the Frank–Wolfe method with an Armijo line-search and also a Goldstein line-search rule. In addition to convergence and computational guarantees for convex problems, [7] also contains results for the case when the objective function is non-concave. In prior work, Dunn [6] presents convergence and computational guarantees for the case when the step-size \(\bar{\alpha }_k\) is determined from the structure of the lower quadratic approximation of \(h(\cdot )\) in (7), if the curvature constant \(C_{h,Q}\) is known or upper-bounded. In the case when no prior information about \(C_{h,Q}\) is given, [6] develops a clever recursion for determining a step-size that still accounts for the lower quadratic approximation without estimation of \(C_{h,Q}\).

4 Computational guarantees for a warm start

In the framework of this study, the well-studied step-size sequence (18) and associated computational guarantees (Bound 3.1) correspond to running the Frank–Wolfe method initiated with the pre-start step from the initial point \(\lambda _0\). One feature of the main computational guarantees as presented in the bounds (19) and (20) is their insensitivity to the quality of the initial point \(\lambda _0\). This is good if \(h(\lambda _0)\) is very far from the optimal value \(h^*\), as the poor quality of the initial point does not affect the computational guarantee. But if \(h(\lambda _0)\) is moderately close to the optimal value, one would want the Frank–Wolfe method, with an appropriate step-size sequence, to have computational guarantees that reflect the closeness to optimality of the initial objective function value \(h(\lambda _0)\). Let us see how this can be done.

We will consider running the Frank–Wolfe method without the pre-start step, starting at an initial point \(\lambda _1\), and we let \(C_1\) be a given estimate of the curvature constant \(C_{h,Q}\). Consider the following step-size sequence:

$$\begin{aligned} \bar{\alpha }_i = \displaystyle \frac{2}{\frac{2C_1}{B_1-h(\lambda _1)}+i+1} \quad \ \hbox {for } \quad i \ge 1. \end{aligned}$$
(36)

Comparing (36) to the well-studied step-size rule (18), one can think of the above step-size rule as acting “as if” the Frank–Wolfe method had run for \(\frac{2C_1}{B_1-h(\lambda _1)} \) iterations before arriving at \(\lambda _1\). The next result presents a computational guarantee associated with this step-size rule.
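
In code, the rule (36) amounts to a one-line modification of the step-size schedule passed to the method; a hypothetical sketch (the names are illustrative, not from the paper):

```python
def warm_start_step_size(C1, bound_gap_1):
    """Step-size rule (36); bound_gap_1 = B_1 - h(lam_1) and C1 estimates C_{h,Q}."""
    s = 2.0 * C1 / bound_gap_1            # "as if" s iterations had already been run
    return lambda i: 2.0 / (s + i + 1)    # alpha_bar_i for i >= 1
```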

Bound 4.1

Under the step-size sequence (36), the following inequality holds for all \(k \ge 1\):

$$\begin{aligned} B_{k} - h(\lambda _{k+1}) \le \displaystyle \frac{2\max \{C_1, C_{h,Q}\}}{\frac{2C_1}{B_1-h(\lambda _1)} \ + \ k}. \end{aligned}$$
(37)

Notice that in the case when \(C_1=C_{h,Q}\), the bound in (37) simplifies conveniently to:

$$\begin{aligned} B_{k} - h(\lambda _{k+1}) \le \displaystyle \frac{2C_{h,Q}}{\frac{2C_{h,Q}}{B_1-h(\lambda _1)} \ + \ k}.\end{aligned}$$
(38)

Also, as a function of the estimate \(C_1\) of the curvature constant, it is easily verified that the bound in (37) is optimized at \(C_1=C_{h,Q}\).

We remark that the bound (37) (or (38)) is small to the extent that the initial bound gap \(B_1-h(\lambda _1)\) is small, as one would want. However, to the extent that \(B_1-h(\lambda _1)\) is small, the incremental decrease in the bound due to an additional iteration is less. In other words, while the bound (37) is nicely sensitive to the initial bound gap, there is no longer rapid decrease in the bound in the early iterations. It is as if the algorithm had already run for \(\left( \frac{2C_1}{B_1 - h(\lambda _1)}\right) \) iterations to arrive at the initial iterate \(\lambda _1\), with a corresponding dampening in the marginal value of each iteration thereafter. This is a structural feature of the Frank–Wolfe method that is different from first-order methods that use prox functions and/or projections.

Proof of Bound 4.1

Define \(s = \frac{2C_1}{B_1 - h(\lambda _1)}\), whereby \(\bar{\alpha }_i = \frac{2}{s+1+i}\) for \(i \ge 1\). It then is straightforward to show that the dual averages sequences (9) are for \(i \ge 1\):

$$\begin{aligned} \beta _i = \prod \limits _{j=1}^{i-1} (1-\bar{\alpha }_j)^{-1} = \prod \limits _{j=1}^{i-1} \frac{s+j+1}{s+j-1} = \frac{(s+i-1)(s+i)}{s(s+1)}, \end{aligned}$$

and

$$\begin{aligned} \alpha _i = \frac{\beta _i \bar{\alpha }_i}{1-\bar{\alpha }_i} = \frac{2(s+i)(s+i-1)(s+i+1)}{s(s+1)(s+i+1)(s+i-1)} = \frac{2(s+i)}{s(s+1)}. \end{aligned}$$

Furthermore, we have:

$$\begin{aligned} \sum _{i=1}^k\frac{\alpha _i^2}{\beta _{i+1}}&= \sum _{i=1}^k\frac{4(s+i)^2(s)(s+1)}{s^2(s+1)^2(s+i)(s+i+1)}\nonumber \\&= \sum _{i=1}^k\frac{4(s+i)}{s(s+1)(s+i+1)} \le \frac{4k}{s(s+1)}. \end{aligned}$$
(39)

Utilizing Theorem 2.1 and (39), we have for \(k\ge 1\):

$$\begin{aligned} B_k - h(\lambda _{k+1})&\le \frac{B_k - h(\lambda _1)}{\beta _{k+1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^k\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}} \\&\le \frac{s(s+1)}{(s+k)(s+k+1)}\left( B_1 - h(\lambda _1)+ \frac{C_{h,Q}}{2}\cdot \frac{4k}{s(s+1)}\right) \\&= \frac{s(s+1)}{(s+k)(s+k+1)}\left( \frac{2C_1}{s}+ \frac{2 k C_{h,Q}}{s(s+1)}\right) \\&\le \frac{2\max \{C_1,C_{h,Q}\}}{(s+k)(s+k+1)}\left( s+1+k \right) \\&= \frac{2\max \{C_1,C_{h,Q}\}}{s+k}, \end{aligned}$$

which completes the proof. \(\square \)

4.1 A dynamic version of the warm-start step-size strategy

The step-size sequence (36) determines all step-sizes for the Frank–Wolfe method based on two pieces of information at the initial point \(\lambda _1\): (i) the initial bound gap \(B_1 - h(\lambda _1)\), and (ii) the given estimate \(C_1\) of the curvature constant. The step-size sequence (36) is a static warm-start strategy in that all step-sizes are determined by information that is available or computed at the first iterate. Let us see how we can improve the computational guarantee by treating every iterate as if it were the initial iterate, and hence dynamically determine the step-size sequence as a function of accumulated information about the bound gap and the curvature constant.

At the start of a given iteration \(k\) of the Frank–Wolfe method, we have the iterate value \(\lambda _k \in Q\) and an upper bound \(B_{k-1}\) on \(h^*\) from the previous iteration. We also will now assume that we have an estimate \(C_{k-1}\) of the curvature constant from the previous iteration as well. Steps (2) and (3) of the Frank–Wolfe method then perform the computation of \(\tilde{\lambda }_k\), \(B_k\) and \(G_k\). Instead of using a pre-set formula for the step-size \(\bar{\alpha }_k\), we will determine the value of \(\bar{\alpha }_k\) based on the current bound gap \(B_k-h(\lambda _k)\) as well as on a new estimate \(C_{k}\) of the curvature constant. (We will shortly discuss how \(C_k\) is computed). Assuming \(C_k\) has been computed, and mimicking the structure of the static warm-start step-size rule (36), we compute the current step-size as follows:

$$\begin{aligned} \bar{\alpha }_k:=\frac{2}{\frac{2C_k}{B_k-h(\lambda _k)}+2}, \end{aligned}$$
(40)

where we note that \(\bar{\alpha }_k\) depends explicitly on the value of \(C_k\). Comparing \(\bar{\alpha }_k\) in (40) with (18), we interpret \(\frac{2C_k}{B_k-h(\lambda _k)}\) to be “as if” the current iteration \(k\) was preceded by \(\frac{2C_k}{B_k-h(\lambda _k)}\) iterations of the Frank–Wolfe method using the standard step-size (18). This interpretation is also in concert with that of the static warm-start step-size rule (36).

We now discuss how we propose to compute the new estimate \(C_k\) of the curvature constant \(C_{h,Q}\) at iteration \(k\). Because \(C_k\) will be only an estimate of \(C_{h,Q}\), we will need to require that \(C_k\) (and the step-size \(\bar{\alpha }_k\) (40) that depends explicitly on \(C_k\)) satisfy:

$$\begin{aligned} h(\lambda _k+\bar{\alpha }_k(\tilde{\lambda }_k-\lambda _k)) \ge h(\lambda _k) + \bar{\alpha }_k(B_k-h(\lambda _k))-\frac{1}{2}C_k\bar{\alpha }_k^2. \end{aligned}$$
(41)

In order to find a value \(C_k \ge C_{k-1}\) for which (41) is satisfied, we first test if \(C_k := C_{k-1}\) satisfies (41), and if so we set \(C_k \leftarrow C_{k-1}\). If not, one can perform a standard doubling strategy, testing values \(C_k \leftarrow 2C_{k-1}, 4C_{k-1}, 8C_{k-1}, \ldots \), until (41) is satisfied. Since (41) will be satisfied whenever \(C_k \ge C_{h,Q}\) from the definition of \(C_{h,Q}\) in (7) and the inequality \(B_k-h(\lambda _k) \le B^w_k-h(\lambda _k)=\nabla h(\lambda _k)^T(\tilde{\lambda }_k - \lambda _k)\), it follows that the doubling strategy will guarantee \(C_k \le \max \{C_0, 2C_{h,Q}\}\). Of course, if an upper bound \(\bar{C} \ge C_{h,Q}\) is known, then \(C_k \leftarrow \bar{C}\) is a valid assignment for all \(k \ge 1\). Moreover, the structure of \(h(\cdot )\) may be sufficiently simple so that a value of \(C_k \ge C_{k-1}\) satisfying (41) can be determined analytically via closed-form calculation, as is the case if \(h(\cdot )\) is a quadratic function for example. The formal description of the Frank–Wolfe method with dynamic step-size strategy is presented in Method 3.

[Method 3: the Frank–Wolfe method with dynamic step-sizes (algorithm figure)]
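
Since the algorithm figure is not reproduced here, the following is a minimal sketch of one iteration of Method 3 in the same hypothetical notation as before; it updates the curvature estimate by the doubling strategy until (41) holds and then takes the step (40).

```python
def dynamic_fw_iteration(h, grad_h, linear_opt_max, lam, B_prev, C_prev):
    """Sketch of one iteration of the Frank-Wolfe method with dynamic step-sizes."""
    g = grad_h(lam)
    lam_tilde = linear_opt_max(g)                     # step (2): linear optimization over Q
    B = min(B_prev, h(lam) + g.dot(lam_tilde - lam))  # step (3): update the upper bound
    gap = B - h(lam)                                  # current bound gap B_k - h(lam_k)
    C = C_prev
    while True:                                       # doubling strategy for C_k
        alpha_bar = 2.0 / (2.0 * C / gap + 2.0)       # step-size rule (40)
        lam_next = lam + alpha_bar * (lam_tilde - lam)
        if h(lam_next) >= h(lam) + alpha_bar * gap - 0.5 * C * alpha_bar ** 2:  # test (41)
            break
        C = 2.0 * C
    return lam_next, B, C
```

Since (41) holds whenever \(C \ge C_{h,Q}\), the while-loop terminates after finitely many doublings, consistent with the bound \(C_k \le \max \{C_0, 2C_{h,Q}\}\) noted above.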

We have the following computational guarantees for the Frank–Wolfe method with dynamic step-sizes (Method 3):

Bound 4.2

The iterates of the Frank–Wolfe method with dynamic step-sizes (Method 3) satisfy the following for any \(k\ge 1\):

$$\begin{aligned} B_{k} - h(\lambda _{k}) \le \min _{\ell \in \{1, \ldots , k\}}\left\{ \displaystyle \frac{2C_{k}}{\frac{2C_{k}}{B_\ell -h(\lambda _\ell )} \ + \ k - \ell }\right\} . \end{aligned}$$
(42)

Furthermore, if the doubling strategy is used to update the estimates \(\{C_k\}\) of \(C_{h,Q}\), it holds that \(C_{k} \le \max \{C_0, 2C_{h,Q}\}\).

Notice that (42) naturally generalizes the static warm-start bound (37) (or (38)) to this more general dynamic case. Consider, for simplicity, the case where \(C_k = C_{h,Q}\) is the known curvature constant. In this case, (42) says that we may apply the bound (38) with any \(\ell \in \{1, \ldots , k\}\) as the starting iteration. That is, the computational guarantee for the dynamic case is at least as good as the computational guarantee for the static warm-start step-size (36) initialized at any iteration \(\ell \in \{1, \ldots , k\}\).

Proof of Bound 4.2

Let \(i \ge 1\). For convenience define \(A_i = \frac{2C_i}{B_i - h(\lambda _i)}\), and in this notation (40) is \(\bar{\alpha }_i = \frac{2}{A_i + 2}\). Applying (ii) in step (4) of Method 3 we have:

$$\begin{aligned} B_{i+1} - h(\lambda _{i+1})&\le B_{i+1} - h(\lambda _i) - \bar{\alpha }_i (B_i - h(\lambda _i)) +\tfrac{1}{2}\bar{\alpha }_i^2C_i \\&\le B_i - h(\lambda _i) \!- \!\bar{\alpha }_i (B_i \!-\! h(\lambda _i)) +\tfrac{1}{2}\bar{\alpha }_i^2C_i \\&= (B_i - h(\lambda _i))(1 - \bar{\alpha }_i ) +\tfrac{1}{2}\bar{\alpha }_i^2C_i \\&= \frac{2C_i}{A_i}\left( \frac{A_i}{A_i + 2}\right) + \frac{2C_i}{(A_i + 2)^2} \\&= 2C_i\left( \frac{A_i + 3}{(A_i + 2)^2}\right) \\&< \frac{2C_i}{A_i + 1}, \end{aligned}$$

where the last inequality follows from the fact that \((a+2)^2 > a^2 + 4a + 3 = (a+1)(a+3)\) for \(a \ge 0\). Therefore

$$\begin{aligned} A_{i+1} = \frac{2C_{i+1}}{B_{i+1} - h(\lambda _{i+1})} = \frac{C_{i+1}}{C_{i}}\left( \frac{2C_{i}}{B_{i+1} - h(\lambda _{i+1})}\right) > \frac{C_{i+1}}{C_{i}}\left( A_i + 1 \right) . \end{aligned}$$
(43)

We now show by reverse induction that for any \(\ell \in \{1, \ldots , k\}\) the following inequality is true:

$$\begin{aligned} A_{k} \ge \frac{C_{k}}{C_{\ell }}A_\ell + k - \ell . \end{aligned}$$
(44)

Clearly (44) holds for \(\ell =k\), so let us suppose (44) holds for some \(\ell + 1 \in \{2, \ldots , k\}\). Then

$$\begin{aligned} A_{k}&\ge \frac{C_{k}}{C_{\ell + 1}}A_{\ell + 1} + k - \ell - 1 \\&> \frac{C_{k}}{C_{\ell + 1}}\left( \frac{C_{\ell +1}}{C_{\ell }}\left( A_\ell + 1 \right) \right) + k - \ell - 1 \\&\ge \frac{C_{k}}{C_{\ell }}A_\ell + k - \ell , \end{aligned}$$

where the first inequality is the induction hypothesis, the second inequality uses (43), and the third inequality uses the monotonicity of the \(\{C_k\}\) sequence. This proves (44). Now for any \(\ell \in \{1, \ldots , k\}\) we have from (44) that:

$$\begin{aligned} B_{k} - h(\lambda _{k}) = \frac{2C_{k}}{A_{k}} \le \frac{2C_{k}}{\frac{C_{k}}{C_{\ell }}A_\ell + k - \ell } = \frac{2C_{k}}{\frac{2C_{k}}{B_{\ell } - h(\lambda _{\ell })} + k - \ell }, \end{aligned}$$

proving the result.\(\square \)

5 Analysis of the Frank–Wolfe method with inexact gradient computations and/or subproblem solutions

In this section we present and analyze extensions of the Frank–Wolfe method in the presence of inexact computation of gradients and/or subproblem solutions. We first consider the case when the linear optimization subproblem is solved approximately.

5.1 Frank–Wolfe method with inexact linear optimization subproblem solutions

Here we consider the case when the linear optimization subproblem is solved approximately, which arises especially in optimization over matrix variables. For example, consider instances of (1) where \(Q\) is the spectrahedron of symmetric matrices, namely \(Q=\{ \Lambda \in \mathbb {S}^{n \times n} : \Lambda \succeq 0, \ I \bullet \Lambda = 1\}\), where \(\mathbb {S}^{n \times n}\) is the space of symmetric matrices of order \(n\), “\(\succeq \)” is the Löwner ordering thereon, and “\(\cdot \bullet \cdot \)” denotes the trace inner product. For these instances solving the linear optimization subproblem corresponds to computing the leading eigenvector of a symmetric matrix, whose solution when \(n \gg 0\) is typically computed inexactly using iterative methods.
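
To make this example concrete: for a symmetric matrix \(S\), \(\max _{\Lambda \in Q} S \bullet \Lambda = \lambda _{\max }(S)\), and the maximum is attained at \(\Lambda = vv^T\) for a unit-norm leading eigenvector \(v\) of \(S\), so a few iterations of a (shifted) power method yield an inexact subproblem solution. The following is a minimal sketch of such an inexact oracle, offered only as an illustration and not as a recommendation of a particular eigensolver:

```python
import numpy as np

def approx_leading_eigvec_solution(S, num_iters=50, seed=0):
    """Inexactly solve max over the spectrahedron of S . Lambda via a shifted power method.

    Returns Lambda_tilde = v v^T for an approximate leading eigenvector v of S.  The shift
    makes S + shift*I positive semidefinite so that the power method targets the maximum
    (rather than the maximum-magnitude) eigenvalue of S."""
    n = S.shape[0]
    shift = np.abs(S).sum(axis=1).max()      # Gershgorin bound on the spectral radius of S
    M = S + shift * np.eye(n)
    v = np.random.default_rng(seed).standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        v = M.dot(v)
        v /= np.linalg.norm(v)
    return np.outer(v, v)                    # an inexact subproblem solution (delta-approximate
                                             # in the sense defined below)
```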

For \(\delta \ge 0\) an (absolute) \(\delta \)-approximate solution to the linear optimization subproblem \(\max \nolimits _{\lambda \in Q}\left\{ c^T\lambda \right\} \) is a vector \(\tilde{\lambda } \in Q\) satisfying:

$$\begin{aligned} c^T\tilde{\lambda } \ge \max \limits _{\lambda \in Q}\left\{ c^T\lambda \right\} - \delta , \end{aligned}$$
(45)

and we use the notation \(\tilde{\lambda } \leftarrow \text {approx}(\delta )_{\lambda \in Q}\left\{ c^T\lambda \right\} \) to denote assigning to \(\tilde{\lambda }\) any such \(\delta \)-approximate solution. The same additive linear optimization subproblem approximation model is considered in Dunn and Harshbarger [8] and Jaggi [15], a multiplicative approximation model is considered in Lacoste-Julien et al. [17], and a related approximation model is used in connection with a greedy coordinate descent method in Dudík et al. [5]. In Method 4 we present a version of the Frank–Wolfe method that uses approximate linear optimization subproblem solutions. Note that Method 4 allows the approximation quality \(\delta = \delta _k\) to depend on the iteration index \(k\). Note also that the definitions of the Wolfe upper bound \(B^w_k\) and the FW gap \(G_k\) in step (2.) are amended from the original Frank–Wolfe method (Method 1) by an additional term \(\delta _k\). It follows from (45) that:

$$\begin{aligned} B^w_k = h(\lambda _k) + \nabla h (\lambda _k)^T(\tilde{\lambda }_k - \lambda _k) + \delta _k \ge \max _{\lambda \in Q}\left\{ h(\lambda _k) + \nabla h (\lambda _k)^T(\lambda - \lambda _k)\right\} \ge h^*, \end{aligned}$$

which shows that \(B^w_k\) is a valid upper bound on \(h^*\); similar properties hold for \(G_k\). The following two theorems extend Theorems 2.1 and 2.2 to the case of approximate subproblem solutions. Analogous to the case of exact subproblem solutions, these two theorems can easily be used to derive suitable bounds for specific step-size rules such as those in Sects. 3 and 4.

[Method 4: Frank–Wolfe method with approximate subproblem solutions]
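
The following Python sketch gives one natural reading of Method 4's main loop, reconstructed from the surrounding text (step (2.) carries the extra \(+\delta _k\) term, and the iterate update is \(\lambda _{k+1} = \lambda _k + \bar{\alpha }_k(\tilde{\lambda }_k - \lambda _k)\)); it is not the paper's verbatim algorithm, and the oracle, step-size rule, and running-minimum bound update are assumptions.

```python
# Sketch of a Frank-Wolfe loop with delta_k-approximate linear optimization
# (assumed interfaces; see the lead-in above).
import numpy as np

def frank_wolfe_approx_subproblems(h, grad_h, approx_oracle, lam1, stepsize,
                                   deltas, num_iters):
    lam = lam1
    B = np.inf                      # running upper bound on h* (assumed update rule)
    for k in range(1, num_iters + 1):
        g = grad_h(lam)
        lam_tilde = approx_oracle(g, deltas[k - 1])   # delta_k-approximate, as in (45)
        gap = g @ (lam_tilde - lam) + deltas[k - 1]   # FW gap G_k with the +delta_k term
        Bw = h(lam) + gap                             # Wolfe upper bound B^w_k
        B = min(B, Bw)
        alpha = stepsize(k)                           # any step-size in [0, 1], e.g. 2/(k+2)
        lam = lam + alpha * (lam_tilde - lam)
    return lam, B
```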

Theorem 5.1

Consider the iterate sequences of the Frank–Wolfe method with approximate subproblem solutions (Method 4) \(\{\lambda _k\}\) and \(\{\tilde{\lambda }_k\}\) and the sequence of upper bounds \(\{B_k\}\) on \(h^*\), using the step-size sequence \(\{\bar{\alpha }_k\}\). For the auxiliary sequences \(\{\alpha _k\}\) and \(\{\beta _k\}\) given by (9), and for any \(k \ge 0\), the following inequality holds:

$$\begin{aligned} B_k - h(\lambda _{k+1}) \le \frac{B_k - h(\lambda _1)}{\beta _{k+1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^k\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}} + \frac{\sum _{i = 1}^k\alpha _i\delta _i}{\beta _{k+1}}. \end{aligned}$$
(46)

\(\square \)

Theorem 5.2

Consider the iterate sequences of the Frank–Wolfe method with approximate subproblem solutions (Method 4) \(\{\lambda _k\}\) and \(\{\tilde{\lambda }_k\}\), the sequence of upper bounds \(\{B_k\}\) on \(h^*\), and the sequence of FW gaps \(\{G_k\}\) from step (2.), using the step-size sequence \(\{\bar{\alpha }_k\}\). For the auxiliary sequences \(\{\alpha _k\}\) and \(\{\beta _k\}\) given by (9), and for any \(\ell \ge 0\) and \(k \ge \ell + 1\), the following inequality holds:

$$\begin{aligned} \min _{i \in \{\ell +1, \ldots , k\}}G_i&\le \frac{1}{\sum _{i = \ell +1}^k\bar{\alpha }_i}\left[ \frac{B_\ell - h(\lambda _1)}{\beta _{\ell +1}} + \frac{\frac{1}{2}C_{h,Q}\sum _{i = 1}^\ell \frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{\ell +1}} + \frac{\sum _{i = 1}^\ell \alpha _i\delta _i}{\beta _{\ell +1}}\right] \nonumber \\&\quad +\,\frac{\frac{1}{2}C_{h,Q}\sum _{i = \ell +1}^k\bar{\alpha }_i^2}{\sum _{i=\ell +1}^k\bar{\alpha }_i} + \frac{\sum _{i = \ell +1}^k\bar{\alpha }_i\delta _i}{\sum _{i=\ell +1}^k\bar{\alpha }_i}. \end{aligned}$$
(47)

\(\square \)

Remark 5.1

The pre-start step (Procedure 2) can also be generalized to the case of approximate solution of the linear optimization subproblem. Let \(\lambda _1\) and \(B_0\) be computed by the pre-start step with a \(\delta =\delta _0\)-approximate subproblem solution. Then Proposition 3.1 generalizes to:

$$\begin{aligned} B_0 - h(\lambda _1) \le \frac{1}{2}C_{h,Q} + \delta _0, \end{aligned}$$

and hence if the pre-start step is used (46) implies that:

$$\begin{aligned} B_k - h(\lambda _{k+1}) \le \frac{\frac{1}{2}C_{h,Q}\sum _{i = 0}^k\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}} + \frac{\sum _{i = 0}^k\alpha _i\delta _i}{\beta _{k+1}}, \end{aligned}$$
(48)

where \(\alpha _0 := 1\).

Let us now discuss implications of Theorems 5.1 and 5.2, and Remark 5.1. Observe that the bounds on the right-hand sides of (46) and (47) are composed of the exact terms which appear on the right-hand sides of (10) and (11), plus additional terms involving the solution accuracy sequence \(\delta _1, \ldots , \delta _k\). It follows from (14) that these latter terms are particular convex combinations of the \(\delta _i\) values and zero, and in (48) the last term is a convex combination of the \(\delta _i\) values, whereby they are trivially bounded above by \(\max \{\delta _1, \ldots , \delta _k\}\). When \(\delta _i := \delta \) is a constant, then this bound is simply \(\delta \), and we see that the errors due to the approximate computation of linear optimization subproblem solutions do not accumulate, independent of the choice of step-size sequence \(\{\bar{\alpha }_k\}\). In other words, Theorem 5.1 implies that if we are able to solve the linear optimization subproblems to an accuracy of \(\delta \), then the Frank–Wolfe method can solve (1) to an accuracy of \(\delta \) plus a function of the step-size sequence \(\{\bar{\alpha }_k\}\), the latter of which can be made to go to zero at an appropriate rate depending on the choice of step-sizes. Similar observations hold for the terms depending on \(\delta _1, \ldots , \delta _k\) that appear on the right-hand side of (47).
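
In symbols: since \(\sum _{i=1}^k \alpha _i \le \beta _{k+1}\) (this is the convex-combination property just noted, which follows from (14)), the constant case \(\delta _i := \delta \) gives

$$\begin{aligned} \frac{\sum _{i = 1}^k\alpha _i\delta _i}{\beta _{k+1}} = \delta \cdot \frac{\sum _{i = 1}^k\alpha _i}{\beta _{k+1}} \le \delta , \end{aligned}$$

independent of \(k\) and of the step-size sequence.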

Note that Jaggi [15] considers the case where \(\delta _i := \frac{1}{2}\delta \bar{\alpha }_iC_{h,Q}\) (for some fixed \(\delta \ge 0\)) and \(\bar{\alpha }_i := \frac{2}{i+2}\) for \(i \ge 0\) (or \(\bar{\alpha }_i\) is determined by a line-search), and shows that in this case Method 4 achieves \(O\left( \frac{1}{k}\right) \) convergence in terms of both the optimality gap and the FW gaps. These results can be recovered as a particular instantiation of Theorems 5.1 and 5.2, using reasoning similar to that in the proof of Bound 3.1.

Proof of Theorem 5.1

First recall the identities (13) and (14) for the dual averages sequences (9). Following the proof of Theorem 2.1, we then have for \(i \ge 1\):

$$\begin{aligned} \beta _{i+1} h(\lambda _{i+1})&\ge \beta _{i+1} \left[ h(\lambda _i) + \nabla h(\lambda _i)^T(\tilde{\lambda }_i-\lambda _i)\bar{\alpha }_i - \frac{1}{2}\bar{\alpha }_i^2C_{h,Q} \right] \\&= \beta _{i} h(\lambda _i) + (\beta _{i+1} - \beta _{i})h(\lambda _i) + \beta _{i+1}\bar{\alpha }_i\nabla h(\lambda _i)^T(\tilde{\lambda }_i-\lambda _i) - \frac{1}{2}\beta _{i+1}\bar{\alpha }_i^2C_{h,Q} \\&= \beta _{i} h(\lambda _i) + \alpha _{i}\left[ h(\lambda _i) + \nabla h(\lambda _i)^T(\tilde{\lambda }_i-\lambda _i)\right] - \frac{1}{2}\frac{\alpha _i^2}{\beta _{i+1}}C_{h,Q} \\&= \beta _{i} h(\lambda _i) + \alpha _{i}B^w_i - \alpha _i\delta _i - \frac{1}{2}\frac{\alpha _i^2}{\beta _{i+1}}C_{h,Q}, \end{aligned}$$

where the third equality above uses the definition of the Wolfe upper bound in step (2.) of Method 4. The rest of the proof follows exactly as in the proof of Theorem 2.1. \(\square \)

Proof of Theorem 5.2

For \(i\ge 1\) we have:

$$\begin{aligned} h(\lambda _{i+1})&\ge h(\lambda _i) + \nabla h(\lambda _i)^T(\tilde{\lambda }_i-\lambda _i)\bar{\alpha }_i - \frac{1}{2}\bar{\alpha }_i^2C_{h,Q} \\&= h(\lambda _i) + \bar{\alpha }_{i}G_i - \bar{\alpha }_i\delta _i - \frac{1}{2}\bar{\alpha }_i^2C_{h,Q}, \end{aligned}$$

where the equality above follows from the definition of the FW gap in Method 4. Summing the above over \(i \in \{\ell + 1, \ldots , k\}\) and rearranging yields:

$$\begin{aligned} \begin{array}{rl} \sum _{i=\ell +1}^k \bar{\alpha }_{i}G_i&\le h(\lambda _{k+1}) - h(\lambda _{\ell +1}) + \sum _{i=\ell +1}^k\frac{1}{2}\bar{\alpha }_i^2C_{h,Q} + \sum _{i = \ell +1}^k\bar{\alpha }_i\delta _i. \end{array} \end{aligned}$$
(49)

The rest of the proof follows by combining (49) with Theorem 5.1 and proceeding as in the proof of Theorem 2.2. \(\square \)

5.2 Frank–Wolfe method with inexact gradient computations

We now consider a version of the Frank–Wolfe method where the exact gradient computation is replaced with the computation of an approximate gradient, as was explored in Sect. 3 of Jaggi [15]. We analyze two different models of approximate gradients and derive computational guarantees for each model. We first consider the \(\delta \)-oracle model of d’Aspremont [2], which was developed in the context of accelerated first-order methods. For \(\delta \ge 0\), a \(\delta \)-oracle is a (possibly non-unique) mapping \(g_\delta (\cdot ): Q \rightarrow E^*\) that satisfies:

$$\begin{aligned} \left| (\nabla h(\bar{\lambda }) - g_\delta (\bar{\lambda }))^T(\lambda - \bar{\lambda })\right| \le \delta \quad \text { for all } \quad \lambda , \bar{\lambda } \in Q. \end{aligned}$$
(50)

Note that the definition of the \(\delta \)-oracle does not allow for inexact computation of function values. Depending on the choice of step-size sequence \(\{\bar{\alpha }_k\}\), the Frank–Wolfe method may or may not need to compute function values, so this assumption may be acceptable. (The warm-start step-size rule (40) does require computing function values, as does the computation of the upper bounds \(\{B_k^w\}\); in these cases a definition analogous to (50) for function values can be utilized.)
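
For intuition, a simple sufficient condition for (50) (not required by the definition itself) is closeness to the true gradient in the dual norm \(\Vert \cdot \Vert _*\): by the Hölder inequality,

$$\begin{aligned} \left| (\nabla h(\bar{\lambda }) - g_\delta (\bar{\lambda }))^T(\lambda - \bar{\lambda })\right| \le \Vert \nabla h(\bar{\lambda }) - g_\delta (\bar{\lambda })\Vert _*\Vert \lambda - \bar{\lambda }\Vert \le \Vert \nabla h(\bar{\lambda }) - g_\delta (\bar{\lambda })\Vert _*\,\mathrm {Diam}_Q , \end{aligned}$$

so any mapping satisfying \(\Vert \nabla h(\bar{\lambda }) - g_\delta (\bar{\lambda })\Vert _* \le \delta /\mathrm {Diam}_Q\) for all \(\bar{\lambda } \in Q\) is a \(\delta \)-oracle.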

The next proposition shows that if the linear optimization subproblem is solved exactly but with the \(\delta \)-oracle output used in place of the exact gradient, then the computed solution is suboptimal for the exact-gradient subproblem by at most \(2\delta \).

Proposition 5.1

For any \(\bar{\lambda } \in Q\) and any \(\delta \ge 0\), if \(\tilde{\lambda } \in \arg \max \nolimits _{\lambda \in Q}\left\{ g_\delta (\bar{\lambda })^T\lambda \right\} \), then \(\tilde{\lambda }\) is a \(2\delta \)-approximate solution to the linear optimization subproblem \(\max \nolimits _{\lambda \in Q}\left\{ \nabla h(\bar{\lambda })^T\lambda \right\} \).

Proof

Let \(\hat{\lambda } \in \arg \max \limits _{\lambda \in Q}\left\{ \nabla h(\bar{\lambda })^T\lambda \right\} \). Then we have:

$$\begin{aligned} \nabla h(\bar{\lambda })^T(\tilde{\lambda } - \bar{\lambda })&\ge g_\delta (\bar{\lambda })^T(\tilde{\lambda } - \bar{\lambda }) - \delta \\&\ge g_\delta (\bar{\lambda })^T(\hat{\lambda } - \bar{\lambda }) - \delta \\&\ge \nabla h(\bar{\lambda })^T(\hat{\lambda } - \bar{\lambda }) - 2\delta \\&= \max _{\lambda \in Q}\left\{ \nabla h(\bar{\lambda })^T\lambda \right\} - \nabla h(\bar{\lambda })^T\bar{\lambda } -2\delta , \end{aligned}$$

where the first and third inequalities use (50), the second inequality follows since \(\tilde{\lambda } \!\in \! \arg \max \nolimits _{\lambda \in Q}\left\{ g_\delta (\bar{\lambda })^T\lambda \right\} \), and the final equality follows since \(\hat{\lambda } \in \arg \max \nolimits _{\lambda \in Q}\left\{ \nabla h(\bar{\lambda })^T\lambda \right\} \). Rearranging terms then yields the result.\(\square \)

Now consider a version of the Frank–Wolfe method where the computation of \(\nabla h(\lambda _k)\) at step (1) is replaced with the computation of \(g_{\delta _k}(\lambda _k)\). Then Proposition 5.1 implies that such a version can be viewed simply as a special case of the version of the Frank–Wolfe method with approximate subproblem solutions (Method 4) of Sect. 5.1 with \(\delta _k\) replaced by \(2\delta _k\). Thus, we may readily apply Theorems 5.1 and 5.2 and Proposition 5.1 to this case. In particular, similar to the results in [2] regarding error non-accumulation for an accelerated first-order method, the results herein imply that there is no accumulation of errors for a version of the Frank–Wolfe method that computes approximate gradients with a \(\delta \)-oracle at each iteration. Furthermore, it is a simple extension to consider a version of the Frank–Wolfe method that computes both (i) approximate gradients with a \(\delta \)-oracle, and (ii) approximate linear optimization subproblem solutions.
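
The following small numerical illustration (our own, with hypothetical data; the unit simplex is used only because its linear optimization oracle is a simple coordinate argmax) checks the guarantee of Proposition 5.1: a gradient perturbation small enough to satisfy (50) leads the exact oracle to a vertex whose suboptimality with respect to the exact gradient is at most \(2\delta \).

```python
# Numerical check of Proposition 5.1 on the unit simplex (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)
n, delta = 50, 1e-2
grad = rng.normal(size=n)                     # plays the role of nabla h(lambda_bar)
# On the simplex, ||lambda - lambda_bar||_1 <= 2, so a perturbation bounded by
# delta/2 in the max-norm guarantees |(grad - g)^T(lambda - lambda_bar)| <= delta,
# i.e. g is a delta-oracle output in the sense of (50).
g = grad + rng.uniform(-delta / 2, delta / 2, size=n)

vertex_exact = np.argmax(grad)                # exact subproblem solution (a vertex)
vertex_from_oracle = np.argmax(g)             # vertex chosen using the delta-oracle
suboptimality = grad[vertex_exact] - grad[vertex_from_oracle]
assert suboptimality <= 2 * delta             # the 2*delta guarantee of Proposition 5.1
print(f"suboptimality = {suboptimality:.2e} <= 2*delta = {2 * delta:.2e}")
```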

5.2.1 Inexact gradient computation model via the \((\delta ,L)\)-oracle

The premise (50) underlying the \(\delta \)-oracle is quite strong and can be restrictive in many cases. For this reason among others, Devolder et al. [4] introduce the less restrictive \((\delta , L)\)-oracle model. For scalars \(\delta , L \ge 0\), a \((\delta , L)\)-oracle is a (possibly non-unique) mapping \(Q \rightarrow \mathbb {R} \times E^*\), written \(\bar{\lambda } \mapsto (h_{(\delta ,L)}(\bar{\lambda }), g_{(\delta , L)}(\bar{\lambda }))\), that satisfies:

$$\begin{aligned} h(\lambda )&\le h_{(\delta ,L)}(\bar{\lambda }) + g_{(\delta ,L)}(\bar{\lambda })^T(\lambda - \bar{\lambda }), \ \text { and }\end{aligned}$$
(51)
$$\begin{aligned} h(\lambda )&\ge h_{(\delta ,L)}(\bar{\lambda }) + g_{(\delta ,L)}(\bar{\lambda })^T(\lambda - \bar{\lambda }) - \frac{L}{2}\Vert \lambda - \bar{\lambda }\Vert ^2 - \delta \ \ \text { for all } \lambda , \bar{\lambda } \in Q ,\qquad \end{aligned}$$
(52)

where \(\Vert \cdot \Vert \) is a choice of norm on \(E\). Note that in contrast to the \(\delta \)-oracle model, the \((\delta , L)\)-oracle model does not assume that the function \(h(\cdot )\) is smooth or even concave—it simply assumes that there is an oracle returning the pair \((h_{(\delta ,L)}(\bar{\lambda }), g_{(\delta , L)}(\bar{\lambda }))\) satisfying (51) and (52).

In Method 5 we present a version of the Frank–Wolfe method that utilizes the \((\delta , L)\)-oracle. Note that we allow the parameters \(\delta \) and \(L\) of the \((\delta , L)\)-oracle to be a function of the iteration index \(k\). Inequality (51) in the definition of the \((\delta , L)\)-oracle immediately implies that \(B^w_k \ge h^*\). We now state the main technical complexity bound for Method 5, in terms of the sequence of bound gaps \(\{B_k - h(\lambda _{k+1})\}\). Recall from Sect. 2 the definition \(\mathrm {Diam}_Q := \max \nolimits _{\lambda , \bar{\lambda }\in Q} \{\Vert \lambda - \bar{\lambda }\Vert \}\), where the norm \(\Vert \cdot \Vert \) is the norm used in the definition of the \((\delta , L)\)-oracle (52).

[Method 5: Frank–Wolfe method with the \((\delta , L)\)-oracle]
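
The following Python sketch gives one natural reading of Method 5's main loop, reconstructed from the surrounding text and the proof of Theorem 5.3 (the oracle pair \((h_k, g_k)\) replaces the exact function value and gradient, and the Wolfe upper bound is formed from the oracle output, which is valid by (51)); it is not the paper's verbatim algorithm, and all interfaces are assumptions.

```python
# Sketch of a Frank-Wolfe loop driven by a (delta, L)-oracle (assumed interfaces).
def frank_wolfe_dL_oracle(dL_oracle, exact_oracle, lam1, stepsize, num_iters):
    lam = lam1                                    # iterates are numpy vectors
    B = float('inf')                              # running upper bound on h*
    for k in range(1, num_iters + 1):
        h_k, g_k = dL_oracle(lam)                 # pair satisfying (51)-(52) at lambda_k
        lam_tilde = exact_oracle(g_k)             # argmax over Q of g_k^T lambda
        Bw = h_k + g_k @ (lam_tilde - lam)        # upper bound on h*, by (51)
        B = min(B, Bw)
        alpha = stepsize(k)                       # any step-size in [0, 1]
        lam = lam + alpha * (lam_tilde - lam)
    return lam, B
```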

Theorem 5.3

Consider the iterate sequences of the Frank–Wolfe method with the \((\delta , L)\)-oracle (Method 5) \(\{\lambda _k\}\) and \(\{\tilde{\lambda }_k\}\) and the sequence of upper bounds \(\{B_k\}\) on \(h^*\), using the step-size sequence \(\{\bar{\alpha }_k\}\). For the auxiliary sequences \(\{\alpha _k\}\) and \(\{\beta _k\}\) given by (9), and for any \(k \ge 0\), the following inequality holds:

$$\begin{aligned} B_k - h(\lambda _{k+1}) \le \frac{B_k - h(\lambda _1)}{\beta _{k+1}} + \frac{\frac{1}{2}\mathrm {Diam}_Q^2\sum _{i = 1}^kL_i\frac{\alpha _i^2}{\beta _{i+1}}}{\beta _{k+1}} + \frac{\sum _{i = 1}^k\beta _{i+1}\delta _i}{\beta _{k+1}}. \end{aligned}$$
(53)

\(\square \)

As with Theorem 5.1, observe that the terms on the right-hand side of (53) are composed of the exact terms which appear on the right-hand side of (10), plus an additional term that is a function of \(\delta _1, \ldots , \delta _k\). Unfortunately, Theorem 5.3 implies an accumulation of errors for Method 5 under essentially any choice of step-size sequence \(\{\bar{\alpha }_k\}\). Indeed, suppose that \(\beta _i = O(i^\gamma )\) for some \(\gamma \ge 0\), then \(\sum _{i = 1}^k\beta _{i+1} = O(k^{\gamma + 1})\), and in the constant case where \(\delta _i := \delta \), we have \(\frac{\sum _{i = 1}^k\beta _{i+1}\delta _i}{\beta _{k+1}} = O(k\delta )\). Therefore in order to achieve an \(O\left( \frac{1}{k}\right) \) rate of convergence (for example with the step-size sequence (18)) we need \(\delta = O\left( \frac{1}{k^2}\right) \). This negative result nevertheless contributes to the understanding of the merits and demerits of different first-order methods as follows. Note that in [4] it is shown that the “classical” gradient methods (both primal and dual), which require solving a proximal projection problem at each iteration, achieve an \(O\left( \frac{1}{k} + \delta \right) \) accuracy under the \((\delta , L)\)-oracle model for constant \((\delta , L)\). On the other hand, it is also shown in [4] that all accelerated first-order methods (which also solve proximal projection problems at each iteration) generically achieve an \(O\left( \frac{1}{k^2}+ k\delta \right) \) accuracy and thus suffer from an accumulation of errors under the \((\delta , L)\)-oracle model. As discussed in the Introduction herein, the Frank–Wolfe method offers two possible advantages over these proximal methods: (i) the possibility that solving the linear optimization subproblem is easier than the projection-type problem in an iteration of a proximal method, and/or (ii) the possibility of greater structure (sparsity, low rank) of the iterates. In Table 1 we summarize the cogent properties of these three methods (or classes of methods) under exact gradient computation as well as with the \((\delta , L)\)-oracle model. As can be seen from Table 1, no single method dominates in the three categories of properties shown in the table; thus there are inherent tradeoffs among these methods/classes.
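
To make the accumulation explicit, the estimate just described can be carried out as follows (taking \(\beta _i = \Theta (i^\gamma )\) rather than only \(O(i^\gamma )\), so that the ratio is meaningful, and \(\delta _i := \delta \)):

$$\begin{aligned} \frac{\sum _{i = 1}^k\beta _{i+1}\delta _i}{\beta _{k+1}} = \delta \cdot \frac{\sum _{i = 1}^k\beta _{i+1}}{\beta _{k+1}} = \delta \cdot \frac{\Theta (k^{\gamma +1})}{\Theta (k^{\gamma })} = \Theta (k\delta ) , \end{aligned}$$

so the last term of (53) grows linearly in \(k\) for fixed \(\delta \), whence the requirement \(\delta = O\left( \frac{1}{k^2}\right) \) to retain an \(O\left( \frac{1}{k}\right) \) rate.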

Table 1 Properties of three (classes of) first-order methods after \(k\) iterations

Proof of Theorem 5.3

Note that (51) and (52) with \(\bar{\lambda } = \lambda \) imply that:

$$\begin{aligned} h(\lambda ) \le h_{(\delta ,L)}(\lambda ) \le h(\lambda ) + \delta \quad \text { for all }\quad \lambda \in Q. \end{aligned}$$
(54)

Recall properties (13) and (14) of the dual averages sequences (9). Following the proof of Theorem 2.1, we then have for \(i \ge 1\):

$$\begin{aligned} \beta _{i+1} h(\lambda _{i+1})&\ge \beta _{i+1} \left[ h_i+ g_i^T(\tilde{\lambda }_i-\lambda _i)\bar{\alpha }_i - \frac{1}{2}\bar{\alpha }_i^2L_i\mathrm {Diam}_Q^2 - \delta _i \right] \\&= \beta _{i} h_i + (\beta _{i+1} - \beta _{i})h_i + \beta _{i+1}\bar{\alpha }_ig_i^T(\tilde{\lambda }_i-\lambda _i) - \frac{1}{2}\beta _{i+1}\bar{\alpha }_i^2L_i\mathrm {Diam}_Q^2 - \beta _{i+1}\delta _i\\&= \beta _{i} h_i + \alpha _{i}\left[ h_i + g_i^T(\tilde{\lambda }_i-\lambda _i)\right] - \frac{1}{2}\frac{\alpha _i^2}{\beta _{i+1}}L_i\mathrm {Diam}_Q^2 - \beta _{i+1}\delta _i\\&\ge \beta _{i} h(\lambda _i) + \alpha _{i}B^w_i - \frac{1}{2}\frac{\alpha _i^2}{\beta _{i+1}}L_i\mathrm {Diam}_Q^2 - \beta _{i+1}\delta _i, \end{aligned}$$

where the first inequality uses (52), and the second inequality uses (54) and the definition of the Wolfe upper bound in Method 5. The rest of the proof follows as in the proof of Theorem 2.1. \(\square \)

6 Summary/conclusions

The Frank–Wolfe method is the subject of substantial renewed interest due to the relevance of applications (e.g., regularized regression, boosting/classification, matrix completion, image construction, other machine learning problems), the need in many applications for only moderately high accuracy solutions, the applicability of the method on truly large-scale problems, and the appeal of structural implications (sparsity, low-rank) induced by the method itself. The method requires (at each iteration) the solution of a linear optimization subproblem over the feasible region of interest, in contrast to most other first-order methods which require (at each iteration) the solution of a certain projection subproblem over the feasible region defined by a strongly convex prox function. As such, the Frank–Wolfe method is particularly efficient in various important application settings including matrix completion.

In this paper we have developed new analysis and results for the Frank–Wolfe method. Virtually all of our results are consequences and applications of Theorems 2.1 and 2.2, which present computational guarantees for optimality gaps (Theorem 2.1) and the “FW gaps” (Theorem 2.2) for arbitrary step-size sequences \(\{\bar{\alpha }_k\}\) of the Frank–Wolfe method. These technical theorems are applied to yield computational guarantees for the well-studied step-size rule \(\bar{\alpha }_k := \frac{2}{k+2}\) (Sect. 3.1), simple averaging (Sect. 3.2), and constant step-size rules (Sect. 3.3). The second set of contributions in the paper concern “warm start” step-size rules and computational guarantees that reflect the quality of the given initial iterate (Sect. 4) as well as the accumulated information about the optimality gap and the curvature constant over a sequence of iterations (Sect. 4.1). The third set of contributions concerns computational guarantees in the presence of an approximate solution of the linear optimization subproblem (Sect. 5.1) and approximate computation of gradients (Sect. 5.2).

We end with the following observation: the well-studied step-size rule \(\bar{\alpha }_k := \frac{2}{k+2}\) does not require any estimation of the curvature constant \(C_{h,Q}\) (which is generically not known), and is therefore, in essence, automatically scaled with respect to the curvature \(C_{h,Q}\). In contrast, the dynamic warm-start step-size rule (40), which incorporates accumulated information over a sequence of iterates, requires updating estimates of the curvature constant \(C_{h,Q}\) that satisfy certain conditions. It is an open challenge to develop a dynamic warm-start step-size strategy that is automatically scaled and so does not require computing or updating estimates of \(C_{h,Q}\).