Abstract
We present a unified theorem for the convergence analysis of stochastic gradient algorithms for minimizing a smooth and convex loss plus a convex regularizer. We do this by extending the unified analysis of Gorbunov et al. (in: AISTATS, 2020) and dropping the requirement that the loss function be strongly convex. Instead, we rely only on convexity of the loss function. Our unified analysis applies to a host of existing algorithms such as proximal SGD, variance reduced methods, quantization and some coordinate descent-type methods. For the variance reduced methods, we recover the best known convergence rates as special cases. For proximal SGD, the quantization and coordinate-type methods, we uncover new state-of-the-art convergence rates. Our analysis also includes any form of sampling or minibatching. As such, we are able to determine the minibatch size that optimizes the total complexity of variance reduced methods. We showcase this by obtaining a simple formula for the optimal minibatch size of two variance reduced methods (L-SVRG and SAGA). This optimal minibatch size not only improves the theoretical total complexity of the methods but also improves their convergence in practice, as we show in several experiments.
1 Introduction and Background
Consider the following composite convex optimization problem
where \(f: \mathbb {R}^{d} \rightarrow \mathbb {R}\) is smooth and convex and \(R: \mathbb {R}^{d} \rightarrow (-\infty , \infty ]\) is a proper closed and convex function with an easy-to-compute proximal term. This problem often arises in training machine learning models, where f is a loss function and R is a regularization term, e.g., \(\ell _1\)-regularized logistic regression [33], LASSO regression [41] and elastic net regression [47]. It also includes projected gradient descent, if R is an indicator on a convex set.
A natural algorithm which is well-suited for solving (1) is proximal gradient descent, which requires iteratively taking a proximal step in the direction of the steepest descent. Unfortunately, this method requires computing the gradient \(\nabla f\) at each iteration, which can be computationally expensive or even impossible in several settings. This has sparked interest in developing cheaper, practical methods that need only a stochastic unbiased estimate \(g_k \in \mathbb {R}^d\) of the gradient at each iteration. These methods can be written as
where \(\left( \gamma _k \right) _k\) is a sequence of step sizes. This estimate \(g_k\) can take on many different forms depending on the problem of interest. Here we list a few.
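Before detailing these settings, here is a minimal sketch of the template (2) in Python; the oracle names grad_estimate, prox and step_size are hypothetical placeholders, and the \(\ell _1\) prox (soft-thresholding) is included only as an example of an easy-to-compute proximal operator.

```python
import numpy as np

def prox_l1(x, thresh):
    # Soft-thresholding: the proximal operator of u -> thresh * ||u||_1.
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def prox_sgd(x0, grad_estimate, prox, step_size, num_iters):
    # Template (2): x_{k+1} = prox_{gamma_k R}(x_k - gamma_k g_k),
    # where g_k is any unbiased estimate of grad f(x_k).
    x = x0.copy()
    for k in range(num_iters):
        gamma = step_size(k)
        g = grad_estimate(x, k)
        x = prox(x - gamma * g, gamma)
    return x
```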
1.1 Stochastic Approximation
Most machine learning problems can be cast as minimizing the generalization error of some underlying model, where \(f_z(x)\) is the loss over a sample z and
Since \(\mathcal {D}\) is an unknown distribution, computing this expectation is impossible in general. However, by sampling \(z \sim \mathcal {D}\), we can compute a stochastic gradient \(\nabla f_z(x)\). Using Algorithm (2) with \(g_k = \nabla f_{z_k}(x_k)\) and \(R \equiv 0\) gives the simplest stochastic gradient method: stochastic gradient descent (SGD) [29, 35].
1.2 Finite-Sum Minimization
Since the expectation (3) cannot be computed in general, one well-studied solution to approximately solve this problem is to use a Monte Carlo estimator:
where n is the number of samples and \(f_i(x)\) is the loss at x on the ith drawn sample. When R is a regularization function, problem (1) with f defined in (4) is often referred to as regularized empirical risk minimization (R-ERM) [39]. For the approximation (4) to be accurate, we would like n to be as large as possible. This, in turn, makes computing the gradient extremely costly. In this setting, for low-precision problems, SGD scales very favorably compared to gradient descent, since an iteration of SGD requires \(\mathcal {O}(d)\) flops compared to \(\mathcal {O}(nd)\) for gradient descent. Moreover, several techniques applied to SGD such as importance sampling and minibatching [12, 21, 28, 46] have made SGD the preferred choice for solving Problem (1) + (4). However, one major drawback of SGD is that, using a fixed step size, SGD does not converge and instead oscillates in a neighborhood of a minimizer. To remedy this problem, variance reduced methods [3, 8, 19, 32, 36] were developed. These algorithms get the best of both worlds: the global convergence properties of gradient descent and the small iteration complexity of SGD. In the smooth case, they all share the distinguishing property that the variance of their stochastic gradients \(g_k\) converges to 0. This feature allows them to converge to a minimizer with a fixed step size at the cost of some extra storage or computations compared to SGD.
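For illustration, here is a minimal sketch of the minibatch gradient estimate used with (4); the helper name minibatch_gradient and the sampling scheme (uniform, without replacement) are our own choices, not a prescription from the text:

```python
import numpy as np

def minibatch_gradient(grad_fns, x, batch_size, rng):
    # Unbiased estimate of grad f(x) = (1/n) sum_i grad f_i(x):
    # average grad f_i over a minibatch drawn uniformly without replacement.
    n = len(grad_fns)
    batch = rng.choice(n, size=batch_size, replace=False)
    return sum(grad_fns[i](x) for i in batch) / batch_size
```

Since each index is included with probability b/n, the expectation of this estimate is exactly \(\nabla f(x)\).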
1.3 Distributed Optimization
Another setting where the exact gradient \(\nabla f\) is impossible to compute is in distributed optimization. The objective function in distributed optimization can be formulated exactly as (4), where each \(f_i\) is a loss on the data stored on the ith node. Each node computes the loss on its local data, then the losses are aggregated by the master node. When the number of nodes n is high, the bottleneck of the optimization becomes the cost of communicating the individual gradients. To remedy this issue, various compression techniques were proposed [1, 2, 15, 22, 38, 43, 45], most of which can be modeled as applying a random transformation \(Q: \mathbb {R}^d \mapsto \mathbb {R}^d\) to each gradient \(\nabla f_i(x_k)\) or to a noisy estimate of the gradient \(g_i^k\). Thus, many proximal quantized stochastic gradient methods fit the form (2) with
While quantized stochastic gradient methods have been widely used in machine learning applications, it was not until the DIANA algorithm [26, 27] that a distributed method was shown to converge to the neighborhood of a minimizer for strongly convex functions. Moreover, in the case where each \(f_i\) is itself a finite average of local functions, variance reduced versions of DIANA, called VR-DIANA [18], were recently developed and proved to converge sublinearly with a fixed step size for convex functions.
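As an illustration of one such compression operator, here is a sketch of random sparsification (rand-k) rescaled to be unbiased; this particular Q is only an example and is not necessarily the quantization operator used by DIANA or VR-DIANA:

```python
import numpy as np

def rand_k_sparsify(x, k, rng):
    # Keep k random coordinates and rescale by d/k so that E[Q(x)] = x.
    d = x.shape[0]
    idx = rng.choice(d, size=k, replace=False)
    q = np.zeros_like(x)
    q[idx] = (d / k) * x[idx]
    return q
```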
1.4 High-Dimensional Function Minimization
Lastly, regardless of the structure of f, if the dimension of the problem d is very high, it is sometimes impossible to compute or to store the gradient at any iteration. Instead, in some cases, one can efficiently compute some coordinates of the gradient, and perform a gradient descent step on the selected coordinates only. These methods are known as (randomized) coordinate descent (RCD) methods [30, 44]. These methods also fit the form (2), for example, with
where \(\left( e_{i} \right) _i\) is the canonical basis of \(\mathbb {R}^d\) and \(i_k \in [d]\) is sampled randomly at each iteration. Though RCD methods fit the form (2), their analysis is often very different from that of other stochastic gradient methods. One exception to this observation is SEGA [16], the first RCD method known to converge for strongly convex functions with nonseparable regularizers.
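For concreteness, one standard unbiased coordinate estimate of the gradient can be sketched as follows; the oracle partial_derivative, which returns the \(i\)th partial derivative of f, is a hypothetical placeholder:

```python
import numpy as np

def coordinate_gradient_estimate(partial_derivative, x, rng):
    # Sample i uniformly from [d] and return d * (partial f / partial x_i)(x) * e_i,
    # so that the expectation over i is grad f(x).
    d = x.shape[0]
    i = rng.integers(d)
    g = np.zeros_like(x)
    g[i] = d * partial_derivative(x, i)   # only one partial derivative is computed
    return g
```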
While all the methods presented above have been discovered and analyzed independently, most of them rely on the same assumptions and share a similar analysis. It is this observation and the results derived for strongly convex functions by [11] that motivate this work.
2 Contributions
We now summarize the key contributions of this paper.
2.1 Unified Analysis of Stochastic Gradient Algorithms
Under a unified assumption on the gradients \(g_k\), it was shown by [11] that stochastic gradient methods which fit the format (2) converge linearly to a neighborhood of the minimizer for quasi-strongly convex functions when using a fixed step size. We extend this line of work to the convex setting, and further generalize it by allowing for decreasing step sizes. As a result, for all the methods which verify our assumptions, we are able to prove either sublinear convergence to the neighborhood of a minimum with a fixed step size or exact convergence with a decreasing step size.
2.2 Analysis of SGD Without the Bounded Gradients Assumption
Most existing analyses of SGD assume a uniform bound on the second moments of the stochastic gradients or on their variance. Indeed, for the analysis of stochastic (sub)gradient descent, this is often necessary to apply the classical convergence proofs. However, for large classes of convex functions, it has been shown that these assumptions do not hold [20, 31]. As a result, there has been a recent surge in trying to avoid these assumptions on the stochastic gradients for several classes of smooth functions: strongly convex [14, 25, 31], convex [14, 25, 40, 42], or even nonconvex functions [20, 24, 25]. Surprisingly, a general analysis of SGD with proximal iterations for convex functions without these bounded gradient assumptions is still lacking. As a special case of our unified analysis, assuming only convexity and smoothness, we provide a general analysis of proximal SGD in the convex setting. Moreover, using the arbitrary sampling framework of [12], we are able to prove convergence rates for SGD under minibatching, importance sampling, or virtually any form of sampling.
2.3 Extension of the Analysis of Existing Algorithms to the Convex Case
As another special case of our analysis, we also provide the first convergence rates for the (variance reduced) stochastic coordinate descent method SEGA [16] and the distributed (variance reduced) compressed SGD method DIANA [26] in the convex setting. Our results can also be applied to all the recent methods developed in [11].
2.4 Optimal Minibatches for L-SVRG and SAGA in the Convex Setting
With a unifying convergence theory in hand, we can now ask sweeping questions across families of algorithms. We demonstrate this by answering the question
What is the optimal minibatch size for variance reduced methods?
Recently, precise estimates of the minibatch sizes which minimize the total complexity for SAGA [8] and SVRG [4, 19, 34] applied to strongly convex functions were derived by [9] and [37]. We showcase the flexibility of our unifying framework by deriving new optimal minibatch sizes for SAGA [8] and L-SVRG [17, 23] in the general convex setting. Unlike prior work in the strongly convex setting [9, 37], our resulting optimal minibatch sizes can be computed using only the smoothness constants. To verify the validity of our claims, we show through extensive experiments that our theoretically derived optimal minibatch sizes are competitive against a grid search.
3 Unified Analysis for Proximal Stochastic Gradient Methods
3.1 Notation
The Bregman divergence associated with f is the mapping
$$\begin{aligned} (x,y) \mapsto D_{f} (x, y) \overset{\text {def}}{=}f(x) - f(y) - \left\langle \nabla f(y), x - y \right\rangle , \end{aligned}$$
and the proximal operator of \(\gamma R\) is the function
$$\begin{aligned} x \mapsto \textrm{prox}_{\gamma R}(x) \overset{\text {def}}{=}\underset{u \in \mathbb {R}^d}{{{\,\mathrm{\arg \!\min }\,}}}\left\{ \gamma R(u) + \frac{1}{2}\left\Vert u - x\right\Vert ^2 \right\} . \end{aligned}$$
Let \([n] \overset{\text {def}}{=}\left\{ 1,\dots ,n\right\} \). We denote the expectation of a random variable X by \(\mathbb {E}_{} \left[ X \right] \) and the conditional expectation of X given a random variable Y by \(\mathbb {E}_{} \left[ X \mid Y\right] \).
[11] analyze stochastic gradient methods that fit the form (2) for smooth quasi-strongly convex functions. In this work, we extend these results to the general convex setting. We formalize our assumptions on f and R in the following.
Assumption 1
The function f is L–smooth and convex:
The function R is convex:
When f has the form (4), we assume that for all \(i \in [n]\), \(f_i\) is \(L_i\)-smooth and convex, and we denote \(L_{\max } \overset{\text {def}}{=}\underset{i\in [n]}{\max }\,L_i\).
The innovation introduced by [11] is the following unifying assumption on the stochastic gradients \(g_k\) used in (2), which allows us to simultaneously analyze classical SGD, variance reduced methods, quantized stochastic gradient methods, and some randomized coordinate descent methods.
Assumption 2
(Assumption 4.1 in [11]) Consider the iterates \(\left( x_k \right) _k\) and gradients \(\left( g_k \right) _k\) in (2).
1. The gradient estimates are conditionally unbiased:
   $$\begin{aligned} \mathbb {E}_{} \left[ g_k \mid x_k\right] = \nabla f(x_k). \end{aligned}$$ (7)
2. There exist constants \(A,B,C,D_1, D_2, \rho \ge 0\), and a sequence of random variables \(\sigma ^2_k \ge 0 \) such that for all possible minimizers \(x_{*}\) of F:
   $$\begin{aligned} \mathbb {E}_{} \left[ \left\Vert g_k - \nabla f(x_*)\right\Vert ^2 \mid x_k \right]&\le 2 A D_{f} (x_k, x_*) + B \sigma _k^2 + D_1, \end{aligned}$$ (8)
   $$\begin{aligned} \mathbb {E}_{} \left[ \sigma _{k+1}^2 \mid x_k \right]&\le \left( 1 - \rho \right) \sigma _k^2 + 2 C D_{f} (x_k, x_*) + D_2. \end{aligned}$$ (9)
Though we chose to present Eqs. (7), (8) and (9) as an assumption, we show throughout the main paper and in the appendix that for all the algorithms we consider (excluding DIANA), these equations all hold with known constants when Assumption 1 holds. An extensive yet nonexhaustive list of algorithms satisfying Assumption 2 and the corresponding constants can be found in [11, Table 2]. We report in Section B of the appendix these constants for five algorithms: SGD, two variance reduced methods L-SVRG and SAGA, a distributed method DIANA and a coordinate descent-type method SEGA.
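For instance, the unbiasedness condition (7) for uniform-sampling SGD on a finite sum can be checked numerically; the quadratic losses below are a toy example of ours, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10
A_mat = rng.standard_normal((n, d))
b_vec = rng.standard_normal(n)

# f_i(x) = 0.5 * (a_i^T x - b_i)^2, so grad f_i(x) = (a_i^T x - b_i) * a_i.
grad_i = lambda x, i: (A_mat[i] @ x - b_vec[i]) * A_mat[i]
full_grad = lambda x: A_mat.T @ (A_mat @ x - b_vec) / n

x = rng.standard_normal(d)
avg = np.mean([grad_i(x, rng.integers(n)) for _ in range(100000)], axis=0)
print(np.linalg.norm(avg - full_grad(x)))   # small: E[g_k | x_k] = grad f(x_k)
```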
We now state our main theorem.
Theorem 3.1
Suppose that Assumptions 1 and 2 hold. Let \(M \overset{\text {def}}{=}B/\rho \) and let \((\gamma _k)_{k\ge 0}\) be a decreasing, strictly positive sequence of step sizes chosen such that
The iterates given by (2) satisfy
where \(\bar{x}_t \overset{\text {def}}{=}\sum \nolimits _{k=0}^{t-1} \frac{\left( 1 - 2\gamma _k(A+MC) \right) \gamma _k}{\sum \nolimits _{i=0}^{t-1}\left( 1 - 2\gamma _i(A+MC) \right) \gamma _i}x_k\) and \({\delta _0 \overset{\text {def}}{=}F(x_0) - F(x_*)}\).
The proof of Theorem 3.1 is deferred to the appendix (Section C).
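The point \(\bar{x}_t\) reported in Theorem 3.1 is simply a weighted average of the iterates. Below is a minimal sketch of how it could be formed, assuming the constants A, M and C from Assumption 2 are known:

```python
import numpy as np

def weighted_average_iterates(iterates, step_sizes, A, M, C):
    # Weights proportional to (1 - 2 * gamma_k * (A + M*C)) * gamma_k, as in Theorem 3.1.
    w = np.array([(1.0 - 2.0 * g * (A + M * C)) * g for g in step_sizes])
    w = w / w.sum()
    return sum(wk * xk for wk, xk in zip(w, iterates))
```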
4 The Main Corollaries
In contrast to [11], our analysis allows both constant and decreasing step sizes. In this section, we will present two corollaries corresponding to these two choices of step sizes and discuss the resulting convergence rates depending on the constants obtained from Assumption 2. Then, we specialize our theorem to SGD, which allows us to recover the first analysis of proximal SGD without the bounded gradients or bounded gradient variance assumptions in the general convex setting. We apply the same analysis to DIANA and present convergence results for this algorithm in the convex setting.
First, we show that by using a constant step size the average of iterates of any stochastic gradient method of the form (2) satisfying Assumptions 1 and 2 converges sublinearly to the neighborhood of the minimum.
Corollary 4.1
Consider the setting of Theorem 3.1. Let \(M = B/\rho \). Choose stepsizes \(\gamma _k = \gamma > 0\) for all k, where \({\gamma \le \min \left\{ \frac{1}{4 (A + MC)}, \frac{1}{2\,L} \right\} }\); then substituting in the rate in (10), we have,
One can already see that to ensure convergence with a fixed step size, we need to have \(D_1 = D_2 = 0\). The only known stochastic gradient methods which satisfy this property are variance reduced methods, as we show in Sect. 5. When \(D_1 \ne 0\) or \(D_2 \ne 0\), which is the case for SGD and DIANA (See Section B), the solution to ensure anytime convergence is to use decreasing step sizes.
Corollary 4.2
Consider the setting of Theorem 3.1. Let \(M = B/\rho \). Choose stepsizes \(\gamma _k = \frac{\gamma }{\sqrt{k+1}}\) for all \(k \ge 0\), where \(\gamma \le \min \left\{ \frac{1}{4 (A + MC)}, \frac{1}{2\,L} \right\} \). Then substituting in the rate in (10), we have
4.1 SGD Without the Bounded Gradients Assumption
To better illustrate the significance of the convergence rates derived in Corollaries 4.1 and 4.2, consider the SGD method for the finite sum setting (4):
where \(i_k\) is sampled uniformly at random from [n]. Note that we consider the finite sum setting just for illustration, and our results continue to hold under the more general Monte Carlo setting.
Lemma 4.1
Assume that f has a finite sum structure (4) and that Assumption 1 holds. The iterates defined by (11) verify Assumption 2 with
where \(\sigma ^2 = \frac{1}{n}\underset{x_* \in X^*}{\sup }\sum \nolimits _{i=1}^n{\left\Vert \nabla f_i(x_*)\right\Vert }^2\), with \(X^{*}\) the set of minimizers of F, and \(L_{\max } = \underset{i\in [n]}{\max }\ \, L_i\).
Proof
See Lemma A.1 in [11]. \(\square \)
This analysis can be easily extended to include minibatching, importance sampling, and virtually all forms of sampling by using the constants given in (12), with the exception of \(L_{\max }\) which should be replaced by the expected smoothness constant [12]. Due to lack of space, we defer this general analysis of SGD to the appendix (Sections A and B). Using Theorem 3.1 and Lemma 4.1, we arrive at the following result.
Corollary 4.3
Let \((\gamma _k)_k\) be a decreasing sequence of step sizes such that \(0<\gamma _0 \le 1/(4L_{\max })\). Let Assumption 1 hold. The iterates of (11) verify
Moreover, as we did in Corollaries 4.1 and 4.2, we can show sublinear convergence to a neighborhood of the minimum if we use a fixed step size, or \(\mathcal {O}(\log (k)/\sqrt{k})\) convergence to the minimum using a step size \(\gamma _k = \frac{\gamma }{\sqrt{k+1}}\). Moreover, if we know the stopping time of the algorithm, we can derive a \(\mathcal {O}(1/\sqrt{k})\) upper bound as done in [29].
Corollary 4.3 fills a gap in the theory of SGD. Indeed, to the best of our knowledge, this is the first analysis of proximal SGD in the convex setting which assumes neither bounded gradients nor bounded variance (as done in, e.g., [10, 29]). Instead, it relies only on convexity and smoothness. The closest results to ours are Theorem 1.6 in [14] and Theorem 5 in [40], both of which are in the same setting as Lemma 4.1 but study more restrictive variants of proximal SGD: [14] studies SGD with projection onto closed convex sets and [40] studies vanilla SGD, without proximal or projection operators. Unfortunately, neither result extends easily to include proximal operators, and hence our results require a different approach. When specialized to the setting with no proximal term (i.e., \(R \equiv 0\)) and L-smoothness (\(A=L\), \(B=0\)), Corollary 4.1 gives us the following guarantee after T steps of SGD:
Observe that by the smoothness of f we have \(\delta _0 \le \frac{L}{2} {\left\Vert x_0 - x_{*}\right\Vert }^2\), therefore
Optimizing over \(\gamma \), we get that for \(\gamma = \min \left\{ \frac{1}{2\,L}, \frac{\left\Vert x_0 - x_{*}\right\Vert }{\sqrt{2 D_1 T}} \right\} \) we have the convergence rate
This matches the guarantees given in [14, 40] up to constants. Therefore, our analysis is more general and incurs no degradation in the convergence rate.
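For intuition, the balancing argument behind this step-size choice can be sketched as follows; the bound below uses a generic form \(\frac{\left\Vert x_0 - x_*\right\Vert ^2}{\gamma T} + \gamma D_1\) with illustrative constants, and is not the exact statement of Corollary 4.1:
$$\begin{aligned} \mathbb {E}_{} \left[ f(\bar{x}_T) - f(x_*)\right] \lesssim \frac{\left\Vert x_0 - x_*\right\Vert ^2}{\gamma T} + \gamma D_1 \quad \text{ and } \quad \gamma = \min \left\{ \frac{1}{2L}, \frac{\left\Vert x_0 - x_*\right\Vert }{\sqrt{2 D_1 T}} \right\} \;\Longrightarrow \; \mathbb {E}_{} \left[ f(\bar{x}_T) - f(x_*)\right] \lesssim \frac{L\left\Vert x_0 - x_*\right\Vert ^2}{T} + \frac{\left\Vert x_0 - x_*\right\Vert \sqrt{D_1}}{\sqrt{T}}. \end{aligned}$$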
4.2 Convergence of DIANA in the Convex Setting
DIANA was the first distributed quantized stochastic gradient method proven to converge to the minimizer in the strongly convex case and to a critical point in the nonconvex case [26]. See Section B.2 in the appendix for the definition of DIANA and its parameters.
Lemma 4.2
Assume that f has a finite sum structure and that Assumption 1 holds. The iterates of DIANA (Algorithm 4) satisfy Assumption 2 with constants:
where \(w > 0\) and \(\alpha \le \frac{1}{1+w}\) are parameters of Algorithm 4 and \(\sigma ^2\) is such that
Proof
See Lemma A.12 in [11]. \(\square \)
As yet another corollary of Theorem 3.1, we can extend the results of [26] to the convex case and show that DIANA converges sublinearly to the neighborhood of the minimum using a fixed step size, or to the minimum exactly using a decreasing step size.
Corollary 4.4
Assume that f has a finite sum structure (4) and that Assumption 1 holds. Let \((\gamma _k)_{k\ge 0}\) be a decreasing, strictly positive sequence of step sizes chosen such that
By Theorem 3.1 and Lemma 4.2, we have that the iterates given by Algorithm 4 verify
5 Optimal Minibatch Sizes for Variance Reduced Methods
Variance reduced methods are of particular interest because they do not require a decreasing step size in order to ensure convergence. This is because for variance reduced methods we have \(D_1 = D_2 = 0\), and thus, these methods converge sublinearly with a fixed step size.
Variance reduced methods were designed for solving (1) in the special case where f has a finite sum structure. In this case, in order to further improve the convergence properties of variance reduced methods, several techniques can be applied such as adding momentum [3] or using importance sampling [13], but the most popular of such techniques is by far minibatching. Minibatching has been used in conjunction with variance reduced methods since their inception [21], but it was not until [9, 37] that a theoretical justification for the effectiveness of minibatching was proved for SAGA [8] and SVRG [19] in the strongly convex setting. In this section, we show how our theory allows us to determine the optimal minibatch sizes which minimize the total complexity of any variance reduced method. This allows us to compute the first estimates of these minibatch sizes in the nonstrongly convex setting. For simplicity, in the remainder of this section, we will consider the special case where \(R = 0\). Hence, in this section
To derive a meaningful optimal minibatch size from our theory, we need to use the tightest possible upper bounds on the total complexity. When \(R = 0\), we can derive a slightly tighter upper bound than the one we obtained in Theorem 3.1 as follows.
Proposition 5.1
Let \(R = 0\) and \(M=B/(2\rho) \). Suppose that Assumption 2 holds with \(D_1 = D_2 = 0\). Let the step sizes be constant: \(\gamma _k = \gamma \le 1/(4(A+MC))\) for all \(k \in \mathbb {N}\). Then,
We can translate this upper bound into a convenient complexity result as follows.
Corollary 5.1
Assume that there exists a constant \(G \ge 0\) such that
Let \(\epsilon > 0\) and \(\gamma = \frac{1}{4(A+\frac{BC}{2\rho })}\). It follows that
Proof
The result follows from taking \(\gamma = \frac{1}{4(A+\frac{BC}{2\rho })}\) and upper bounding \(\sigma _0^2\) by \(G {\left\Vert x_0 - x_*\right\Vert }^2\) in (13). \(\square \)
In the same way we specialized the general convergence rate given in Theorem 3.1 to the cases of SGD and DIANA in Sect. 4, we can specialize the iteration complexity result (15) to any method which verifies \(D_1 = D_2 = 0\). Due to their popularity, we chose to analyze minibatch variants of SAGA [8] and L-SVRG [17, 23], a single-loop variant of the original SVRG algorithm [19]. The pseudocode for these algorithms is presented in Algorithms 1 and 2. We define for any subset \(B \subseteq [n]\) the minibatch average of f over B as \(f_B(x) = \frac{1}{b}\sum _{i \in B} f_i(x).\)
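A minimal sketch of the b-SAGA update (Algorithm 1) may help fix ideas; initializing the stored gradients at \(x_0\) and the exact bookkeeping below are our own choices and may differ in small details from Algorithm 1:

```python
import numpy as np

def minibatch_saga(grad_fns, x0, gamma, batch_size, num_iters, rng):
    # Keep a table J of the last gradient seen for each f_i and correct the
    # minibatch gradient with the table average, so the estimate stays unbiased.
    n = len(grad_fns)
    x = x0.copy()
    J = np.array([g(x0) for g in grad_fns])        # stored gradients, shape (n, d)
    J_avg = J.mean(axis=0)
    for _ in range(num_iters):
        B = rng.choice(n, size=batch_size, replace=False)
        new = np.array([grad_fns[i](x) for i in B])
        g = new.mean(axis=0) - J[B].mean(axis=0) + J_avg
        x = x - gamma * g
        J_avg = J_avg + (new.sum(axis=0) - J[B].sum(axis=0)) / n
        J[B] = new
    return x
```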
As we will show next, the iterates of Algorithms 1 and 2 satisfy Assumption 2 with constants which depend on the minibatch size b. These constants will depend on the following expected smoothness and expected residual constants \(\mathcal {L}(b)\) and \(\zeta (b)\) used in the analysis of SAGA and SVRG in [9, 37]:
5.1 Optimal Minibatch Size for b-SAGA
Consider the b-SAGA method in Algorithm 1. Define
and let \(\nabla H(x) \in \mathbb {R}^{d\times n}\) denote the Jacobian of H. Let \(J_k = [J_k^1, \ldots , J_k^n]\) be the current stochastic Jacobian.
Lemma 5.1
The iterates of Algorithm 1 satisfy Assumption 2 and Eq. (14) with
where for all \(Z \in \mathbb {R}^{d\times n},\) \(\left\Vert Z\right\Vert _{\textrm{Tr}}^2 = \textrm{tr}\left( {ZZ^\top } \right) \), and constants
Using Corollary 5.1, we can determine the iteration complexity of Algorithm 1.
Corollary 5.2
(Iteration complexity of \(b-\)SAGA) Consider the iterates of Algorithm 1. Let the stepsize used be \(\gamma = \frac{1}{4(2\mathcal {L}(b)+\zeta (b))}\). Given the constants obtained for Algorithm 1 in (19), by Corollary 5.1 we have that
We define the total complexity as the number of gradients computed per iteration (b) times the iteration complexity required to reach an \(\epsilon \)-approximate solution. Thus, multiplying by b the iteration complexity in Corollary 5.2 and plugging in (17), the total complexity for Algorithm 1 is upper bounded by
Minimizing this upper bound in the minibatch size b gives us an estimate of the optimal empirical minibatch size, which we verify in our experiments.
Proposition 5.2
Let \(b_{saga}^* = \underset{b \in [n]}{{{\,\mathrm{\arg \!\min }\,}}}\, K_{saga}(b)\), where \(K_{saga}(b)\) is defined in (20).
- If \(L_{\max } \le \frac{2nL}{3}\), then
  $$\begin{aligned} b_{saga}^* = \left\{ \begin{array}{ll} 1 &{} \text{ if } \quad \bar{b}< 2 \\ \left\lfloor b_1 \right\rfloor &{} \text{ if } \quad 2 \le \bar{b} < n \\ n &{} \text{ if } \quad \bar{b} \ge n, \end{array} \right. \end{aligned}$$ (21)
  where
  $$b_1 \, \overset{\text {def}}{=}\, \frac{n\left( (n-1)L\sqrt{L_{\max }} - 2\sqrt{2nL - 3L_{\max }}(3L_{\max } - 2\,L) \right) }{2(2nL - 3L_{\max })^{\frac{3}{2}}}.$$
- Otherwise, if \(L_{\max } > \frac{2nL}{3}\), then \(b_{saga}^* = n\).
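A small helper can evaluate the rule of Proposition 5.2 from the smoothness constants; note that the threshold \(\bar{b}\) in (21) is not reproduced here, so the sketch uses \(b_1\) itself in its place, which is an assumption made purely for illustration:

```python
import math

def optimal_saga_minibatch(n, L, L_max):
    # Sketch of Proposition 5.2 (b_1 is used in place of \bar{b}; an assumption).
    if L_max >= 2 * n * L / 3:          # >= also guards against division by zero below
        return n
    b1 = (n * ((n - 1) * L * math.sqrt(L_max)
               - 2 * math.sqrt(2 * n * L - 3 * L_max) * (3 * L_max - 2 * L))) \
         / (2 * (2 * n * L - 3 * L_max) ** 1.5)
    if b1 < 2:
        return 1
    return min(int(math.floor(b1)), n)
```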
5.2 Optimal Minibatch Size for b-L-SVRG
Since the analysis for Algorithm 2 is similar to that of Algorithm 1, we defer its details to the appendix and only present the total complexity and the optimal minibatch size. Indeed, as shown in Section E.3, an upper bound on the total complexity to find an \(\epsilon \)-approximate solution for Algorithm 2 is given by
Proposition 5.3
Let \(b_{svrg}^* = \underset{b \in [n]}{{{\,\mathrm{\arg \!\min }\,}}}\, K_{svrg}(b)\), where \(K_{svrg}(b)\) is defined in (44). Then,
6 Experiments
Here we test our new formula for the optimal minibatch size of SAGA given by (21) against the best minibatch size found over a grid search. We used logistic regression with no regularization (\(\lambda =0\)), to emphasize that our results hold for nonstrongly convex functions, with data sets taken from the LIBSVM collection [7]. For each data set, we ran minibatch SAGA with the stepsize given in Corollary 5.2 until a solution with
was reached.
In Fig. 1 we plot the total complexity (number of iterations times the minibatch size) to reach this tolerance for each minibatch size on the grid. We can see in Fig. 1 that for ijcnn and phishing the optimal minibatch size \(b^{*}_{\text {theory}} =b^{*}_{saga}\) (21) is remarkably close to the best minibatch size over the grid \(b^{*}_{\text {empirical}}\). Even when \(b^{*}_{\text {theory}}\) is not close to \(b^{*}_{\text {empirical}}\), such as on the YearPredictionMSD problem, the resulting total complexity is still very close to the total complexity of \(b^{*}_{\text {empirical}}\).
References
Alistarh, D., Grubic, D., Li, J., Tomioka, R., Vojnovic, M.: QSGD: communication-efficient SGD via gradient quantization and encoding. Adv. Neural Inf. Process. Syst. 30, 1709–1720 (2017)
Alistarh, D., Hoefler, T., Johansson, M., Konstantinov, N., Khirirat, S., Renggli, C.: The convergence of sparsified gradient methods. Adv. Neural Inf. Process. Syst. 31, 5977–5987 (2018)
Allen-Zhu, Z.: Katyusha: the first direct acceleration of stochastic gradient methods. In: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC, pp. 1200–1205 (2017)
Allen-Zhu, Z., Hazan, E.: Variance reduction for faster non-convex optimization. In: Proceedings of the 33nd International Conference on Machine Learning (2016)
Atchadé, Y.F., Fort, G., Moulines, E.: On perturbed proximal gradient algorithms. J. Mach. Learn. Res. 18(1), 310–342 (2017)
Beck, A.: First-Order Methods in Optimization. MOS-SIAM Series on Optimization, Society for Industrial and Applied Mathematics, Philadelphia (2017)
Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3), 1–27 (2011)
Defazio, A., Bach, F.R., Lacoste-Julien, S.: SAGA: a fast incremental gradient method with support for non-strongly convex composite objectives. Adv. Neural Inf. Process. Syst. 27, 1646–1654 (2014)
Gazagnadou, N., Gower, R.M., Salmon, J.: Optimal mini-batch and step sizes for SAGA. In: Proceedings of the 36th International Conference on Machine Learning, vol. 97, pp. 2142–2150 (2019)
Ghadimi, S., Lan, G.: Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM J. Optim. 23(4), 2341–2368 (2013)
Gorbunov, E., Hanzely, F., Richtárik, P.: A unified theory of SGD: variance reduction, sampling, quantization and coordinate descent. In: AISTATS (2020)
Gower, R.M., Loizou, N., Qian, X., Sailanbayev, A., Shulgin, E., Richtárik, P.: SGD: general analysis and improved rates. In: Proceedings of the 36th International Conference on Machine Learning, vol. 97, pp. 5200–5209 (2019)
Gower, R.M., Richtárik, P., Bach, F.: Stochastic quasi-gradient methods: variance reduction via Jacobian sketching. Math. Program. 188, 135–192 (2020)
Grimmer, B.: Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity. SIAM J. Optim. 29(2), 1350–1365 (2019)
Gupta, S., Agrawal, A., Gopalakrishnan, K., Narayanan, P.: Deep learning with limited numerical precision. In: Proceedings of the 32nd International Conference on Machine Learning, vol. 37, pp. 1737–1746 (2015)
Hanzely, F., Mishchenko, K., Richtárik, P.: SEGA: variance reduction via gradient sketching. Adv. Neural Inf. Process. Syst. 31, 2086–2097 (2018)
Hofmann, T., Lucchi, A., Lacoste-Julien, S., McWilliams, B.: Variance reduced stochastic gradient descent with neighbors. Adv. Neural Inf. Process. Syst. 28, 2305–2313 (2015)
Horváth, S., Kovalev, D., Mishchenko, K., Stich, S., Richtárik, P.: Stochastic distributed learning with gradient quantization and variance reduction. arXiv:1904.05115 (2019)
Johnson, R., Zhang, T.: Accelerating stochastic gradient descent using predictive variance reduction. Adv. Neural Inf. Process. Syst. 26, 315–323 (2013)
Khaled, A., Richtárik, P.: Better theory for SGD in the nonconvex world. arXiv:2002.03329 (2020)
Konečný, J., Liu, J., Richtárik, P., Takác, M.: Mini-batch semi-stochastic gradient descent in the proximal setting. J. Sel. Top. Signal Process. 10(2), 242–255 (2016)
Konečný, J., Richtárik, P.: Randomized distributed mean estimation: accuracy vs communication. arXiv:1611.07555 (2016)
Kovalev, D., Horváth, S., Richtárik, P.: Don’t jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop. In: Proceedings of the 31st International Conference on Algorithmic Learning Theory, vol. 117, pp. 451–467 (2020)
Lei, Y., Ting, H., Li, G., Tang, K.: Stochastic gradient descent for nonconvex learning without bounded gradient assumptions. IEEE Trans. Neural Netw. Learn. Syst. 31(10), 4394–4400 (2020)
Loizou, N., Vaswani, S., Laradji, I.H., Lacoste-Julien, S.: Stochastic polyak step-size for SGD: an adaptive learning rate for fast convergence. In: Banerjee, A., Fukumizu, K. (eds.) Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of Proceedings of Machine Learning Research, pp. 1306–1314. PMLR (2021)
Mishchenko, K., Gorbunov, E., Takáč, M., Richtárik, P.: Distributed learning with compressed gradient differences. arXiv:1901.09269 (2019)
Mishchenko, K., Hanzely, F., Richtárik, P.: 99% of worker-master communication in distributed optimization is not needed. In: Adams, R.P., Gogate, V. (eds.) Proceedings of the Thirty-Sixth Conference on Uncertainty in Artificial Intelligence, UAI 2020, virtual online, August 3–6, 2020, volume 124 of Proceedings of Machine Learning Research, pp. 979–988. AUAI Press (2020)
Needell, D., Srebro, N., Ward, R.: Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm. Math. Program. Ser. A 155(1), 549–573 (2016)
Nemirovski, A., Juditsky, A.B., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19(4), 1574–1609 (2009)
Nesterov, Y.E.: Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM J. Optim. 22(2), 341–362 (2012)
Nguyen, L., Nguyen, P.H., van Dijk, M., Richtárik, P., Scheinberg, K., Takáč, M.: SGD and Hogwild! Convergence without the bounded gradients assumption. In: Proceedings of the 35th International Conference on Machine Learning, vol. 80, pp. 3750–3758 (2018)
Nguyen, L.M., Liu, J., Scheinberg, K., Takáč, M.: SARAH: a novel method for machine learning problems using stochastic recursive gradient. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 2613–2621 (2017)
Ravikumar, P., Wainwright, M.J., Lafferty, J.D.: High-dimensional Ising model selection using \(\ell _1\)-regularized logistic regression. Ann. Stat. 38(3), 1287–1319 (2010)
Reddi, S.J., Hefny, A., Sra, S., Póczos, B., Smola, A.J.: Stochastic variance reduction for nonconvex optimization. In: Proceedings of the 33nd International Conference on Machine Learning, vol. 48, pp. 314–323 (2016)
Robbins, H., Monro, S.: A stochastic approximation method. Ann. Math. Stat. 22, 400–407 (1951)
Schmidt, M., Le Roux, N., Bach, F.: Minimizing finite sums with the stochastic average gradient. Math. Program. 162(1–2), 83–112 (2017)
Sebbouh, O., Gazagnadou, N., Jelassi, S., Bach, F., Gower, R.M.: Towards closing the gap between the theory and practice of SVRG. Adv. Neural Inf. Process. Syst. 32, 646–656 (2019)
Seide, F., Fu, H., Droppo, J., Li, G., Yu, D.: 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In: INTERSPEECH, 15th Annual Conference of the International Speech Communication Association, pp. 1058–1062 (2014)
Shalev-Shwartz, S., Ben-David, S.: Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, Cambridge (2014)
Stich, S.U.: Unified optimal analysis of the (stochastic) gradient method. arXiv:1907.04232 (2019)
Tibshirani, R.J.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58(1), 267–288 (1996)
Vaswani, S., Bach, F., Schmidt, M.: Fast and faster convergence of SGD for over-parameterized models and an accelerated perceptron. In: The 22nd International Conference on Artificial Intelligence and Statistics, vol. 89, pp. 1195–1204 (2019)
Wangni, J., Wang, J., Liu, J., Zhang, T.: Gradient sparsification for communication-efficient distributed optimization. Adv. Neural Inf. Process. Syst. 31, 1306–1316 (2018)
Wright, S.J.: Coordinate descent algorithms. Math. Program. 151(1), 3–34 (2015)
Zhang, H., Li, J., Kara, K., Alistarh, D., Liu, J., Zhang, C.: Zipml: training linear models with end-to-end low precision, and a little bit of deep learning. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 4035–4043 (2017)
Zhao, P., Zhang, T.: Stochastic optimization with importance sampling for regularized loss minimization. In: Proceedings of the 32nd International Conference on Machine Learning, vol. 37, pp. 1–9 (2015)
Zou, H., Hastie, T.: Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B Stat. Methodol. 67(2), 301–320 (2005)
Appendices
Outline of the Appendix
The appendix is organized as follows:
- Section A: we present the arbitrary sampling framework for stochastic gradient methods introduced by [13], which will be used for the analysis of SGD and L-SVRG.
- Section B: we present specializations of Theorem 3.1 to the algorithms we discuss: SGD, DIANA, L-SVRG, SAGA and SEGA.
- Section C: we present the proof of our main Theorem 3.1.
- Section D: we present the proof of Corollary 4.2.
- Section E: we present the proof of Proposition 5.1 and the detailed analysis of the optimal minibatch results for b-SAGA and b-L-SVRG, in addition to an analysis of the optimal miniblock size for b-SEGA.
- Section F: we present some technical lemmas which we use in our analysis.
Appendix A: Arbitrary Sampling
In this section, we recall the arbitrary sampling framework [12] which allows us to analyze our algorithms for minibatching, importance sampling and virtually all possible forms of sampling.
1.1 Appendix A.1: Stochastic reformulation
To see importance sampling and minibatch variants of stochastic gradient methods all through the same lens, we introduce a sampling vector which we will use to re-write (1).
Definition A.1
We say that a random element-wise positive vector \(v \in \mathbb {R}^n_+\) drawn from some distribution \(\mathcal {D}\) is a sampling vector if its expectation is the vector of all ones:
For a given distribution \(\mathcal {D}\), we introduce a stochastic reformulation of (1) as follows
By definition of the sampling vector, \(f_v (x)\) and \(\nabla f_v (x)\) are unbiased estimators of f(x) and \(\nabla f(x)\), respectively, and hence problem (23) is indeed a reformulation of (and thus equivalent to) the original problem (1). In the case of the gradient, for instance, we get
Reformulation (23) can be solved using proximal stochastic gradient descent via
where \(v_k \sim \mathcal {D}\) is sampled i.i.d. at each iteration and \(\gamma > 0\) is a stepsize. By substituting specific choices of \(\mathcal {D}\), we obtain specific variants of SGD for solving (1). We further show that (24) is a special case of (2) with a sequence of vectors \(g_k = \nabla f_{v_k} (x_k)\) and use the unified analysis in Theorem 3.1 to obtain convergence rates for (24).
1.2 Appendix A.2: Expected Smoothness and Gradient Noise
In order to analyze (24), we will make use of the following result, which characterizes the smoothness of the subsampled functions \(f_v\).
Lemma A.1
(Expected Smoothness) If for all \(i \in [n], f_i\) is convex and \(L_i-\)smooth, then there exists a constant \(\mathcal {L}\ge 0\) such that
for all \(x \in \mathbb {R}^d\) and where \(x_*\) is any minimizer of (1).
The proof of this result follows closely that of Lemma 1 in [9].
Proof
Since for all \(i \in [n]\), \(f_i\) is \(L_i\)-smooth and convex, we have that each realization \(f_v\) (defined in (23)) is \(L_v\)-smooth and convex. Thus, from Lemma F.1, we have that for all \(x \in \mathbb {R}^d\),
Taking expectation over the samplings,
\(\square \)
Next, we define the gradient noise.
Definition A.2
(Gradient Noise) The gradient noise \(\sigma ^2 = \sigma ^2(f, \mathcal {D})\) is defined by
1.3 Appendix A.3: Minibatching Elements Without Replacement
Since analyzing minibatching for variance reduced methods is one of the main focuses of our work, we present minibatching without replacement as an example of the use of arbitrary sampling.
First, we define samplings.
Definition A.3
(Sampling) A sampling \(S\subseteq [n]\) is a random set-valued map uniquely defined by the probabilities \(p_{B} \;\overset{\text {def}}{=}\; \mathbb {P}(S=B)\) for all \(B \subseteq [n]\), which satisfy \(\sum _{B \subseteq [n]} p_B =1\). A sampling S is called proper if for every \(i \in [n]\), we have that \(p_i \overset{\text {def}}{=}\mathbb {P}(i \in S) = \underset{C:i\in C}{\sum }p_C > 0\).
We can build a sampling vector using a sampling as follows.
Lemma A.2
(Sampling vector, Lemma 3.3 in [12]) Let S be a proper sampling. Let \(p_i \overset{\text {def}}{=}\mathbb {P}(i \in S)\) and \(\textbf{P} \overset{\text {def}}{=}\textrm{diag }\left( p_1,\dots ,p_n\right) \). Let \(v = v(S)\) be a random vector defined by
It follows that v is a sampling vector.
Proof
The ith coordinate of v(S) is \(v_i(S) = \mathbb {1}(i \in S) / p_i\) and thus
\(\square \)
Next, we define b-nice sampling, also known as minibatching without replacement.
Definition A.4
(b-nice sampling) S is a b-nice sampling if it is a sampling such that
To construct such a sampling vector based on the b–nice sampling, note that \(p_i = \tfrac{b}{n}\) for all \(i \in [n]\) and thus we have that \(v(S) = \tfrac{n}{b}\sum _{i\in S}e_i\) according to Lemma A.2. The resulting subsampled function is then \(f_v(x) = \tfrac{1}{|S|}\sum _{i\in S}f_i(x)\), which is simply the minibatch average over S.
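As a quick numerical check of Lemma A.2 for this sampling, one can verify that \(\mathbb {E}_{} \left[ v(S)\right] \) is the vector of all ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n, b, trials = 8, 3, 200000

v_mean = np.zeros(n)
for _ in range(trials):
    S = rng.choice(n, size=b, replace=False)   # b-nice sampling: p_i = b / n
    v = np.zeros(n)
    v[S] = n / b                               # v(S) = (n/b) * sum_{i in S} e_i
    v_mean += v / trials

print(v_mean)   # approximately the all-ones vector, so v is a sampling vector
```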
A remarkable result for b-nice sampling is that when all the functions \(f_i, i\in [n]\) are \(L_i\)-smooth and convex, then the expected smoothness constant (25) nicely interpolates between L, the smoothness constant of f, and \(L_{\max } = \underset{i\in [n]}{\max }\,L_i\).
Lemma A.3
(\(\mathcal {L}\) for \(b-\)nice sampling, Proposition 3.8 in [12]) Let v be a sampling vector based on the b-nice sampling defined in A.4. If for all \(i \in [n], f_i\) is convex and \(L_i\)-smooth, then (25) holds with
where L is the smoothness constant of f and \(L_{\max } = \underset{i\in [n]}{\max }\ \, L_i\).
Appendix B: Notable Corollaries of Theorem 3.1
In this section, we present corollaries of Theorem 3.1 for five algorithms:
- SGD with arbitrary sampling (Algorithm 3).
- DIANA (Algorithm 4).
- L-SVRG with arbitrary sampling (Algorithm 5), and minibatch L-SVRG as a special case (Algorithm 2).
- Minibatch SAGA (Algorithm 1).
- Miniblock SEGA (Algorithm 6).
This means that for each method, we will present the constants which satisfy Assumption 2 and specialize Theorem 3.1 using these constants.
1.1 Appendix B.1: SGD with Arbitrary Sampling
Lemma B.1
The iterates of Algorithm 3 satisfy Assumption 2 with
and constants:
where \(\mathcal {L}\) is defined in (25) and \(\sigma ^2\) in (26).
Proof
See Lemma A.2 in [11]. \(\square \)
Using the constants given in the above lemma, we have the following immediate corollary of Theorem 3.1.
Corollary B.1
Assume that f has a finite sum structure (4) and that Assumption 1 holds. Let \((\gamma _k)_{k\ge 0}\) be a decreasing, strictly positive sequence of step sizes chosen such that
Then, from Theorem 3.1 and Lemma B.1, we have that the iterates given by Algorithm 3 verify
where \(\bar{x}_t \overset{\text {def}}{=}\sum \nolimits _{k=0}^{t-1} \frac{\left( 1 - 4\gamma _k\mathcal {L} \right) \gamma _k}{\sum _{i=0}^{t-1}\left( 1 - 4\gamma _i\mathcal {L} \right) \gamma _i}x_k\).
1.2 Appendix B.2: DIANA
A complete description of the DIANA algorithm can be found in [26].
To analyze the DIANA algorithm (Algorithm 4), we introduce quantization operators.
Definition B.1
(w-quantization operator, Definition 4 in [26]) Let \(w > 0\). A random operator \(Q: \mathbb {R}^d \rightarrow \mathbb {R}^d\) with the properties:
for all \(x \in \mathbb {R}^d\) is called a w-quantization operator.
Several examples of quantization operators can be found in [26].
For convenience, we repeat the statement of Lemma 4.2 below.
Lemma B.2
Assume that f has a finite sum structure and that Assumption 1 holds. The iterates of DIANA (Algorithm 4) satisfy Assumption 2 with constants:
where \(w > 0\) and \(\alpha \le \frac{1}{1+w}\) are parameters of Algorithm 4 and \(\sigma ^2\) is such that
Proof
See Lemma A.12 in [11]. \(\square \)
Now using the constants given in the above lemma in Theorem 3.1 gives the following corollary.
Corollary B.2
Assume that f has a finite sum structure (4) and that Assumption 1 holds. Let \((\gamma _k)_{k\ge 0}\) be a decreasing, strictly positive sequence of step sizes chosen such that
Then, from Theorem 3.1 and Lemma B.2, we have that the iterates given by Algorithm 4 verify
where \(\eta \overset{\text {def}}{=}2(1+\frac{4w}{n})L_{\max }\), \(\bar{x}_t \overset{\text {def}}{=}\sum \nolimits _{k=0}^{t-1} \frac{\left( 1 - \gamma _k\eta \right) \gamma _k}{\sum _{i=0}^{t-1}\left( 1 - \gamma _i\eta \right) \gamma _i}x_k\) and \({\delta _0 \overset{\text {def}}{=}F(x_0) - F(x_*)}\).
1.3 Appendix B.3: L-SVRG with Arbitrary Sampling
Lemma B.3
If Assumption 1 holds then the iterates of Algorithm 5 satisfy
where
and \(\mathcal {L}\) is defined in (25).
Proof
By Lemma A.1 we have that (25) holds with \(\mathcal {L}>0.\) Furthermore
where we used in the inequality that for all \(a, b \in \mathbb {R}^d, {\left\Vert a + b\right\Vert }^2 \le 2{\left\Vert a\right\Vert }^2 + 2{\left\Vert b\right\Vert }^2\). Thus,
Moreover,
where we also used in the last inequality that \(\mathbb {E}_{} \left[ \left\Vert X - \mathbb {E}_{} \left[ X\right] \right\Vert ^2\right] = \mathbb {E}_{} \left[ \left\Vert X\right\Vert ^2\right] - \left\Vert \mathbb {E}_{} \left[ X\right] \right\Vert ^2 \le \mathbb {E}_{} \left[ \left\Vert X\right\Vert ^2\right] \). \(\square \)
We have the following immediate consequence of the previous lemma.
Lemma B.4
If Assumption 1 holds then the iterates of Algorithm 5 satisfy Assumption 2 with
and constants
where \(\mathcal {L}\) is defined in (25).
Using the constant derived in Lemma B.4 in Theorem 3.1 gives the following corollary.
Corollary B.3
Assume that f has a finite sum structure (4) and that Assumption 1 holds. Let \(\gamma _k = \gamma \) for all \(k \in \mathbb {N}\), where
Then, from Theorem 3.1 and Lemma B.4, we have that the iterates given by Algorithm 5 verify
where \(\bar{x}_t \overset{\text {def}}{=}\frac{1}{t}\sum \nolimits _{k=0}^{t-1} x_k\) and where \(\mathcal {L}\) is defined in (25).
1.3.1 Appendix B.3.1: b-L-SVRG
As we demonstrated in Section A.3, we can specialize the results derived for arbitrary sampling to minibatching without replacement by using a \(b-\)nice sampling defined in Definition A.4 and the corresponding sampling vector (27).
Indeed, using Algorithm 5 with b-nice sampling is equivalent to using Algorithm 2. Thus, we have the following lemma.
Corollary B.4
From Lemma B.4, we have that the iterates of Algorithm 2 satisfy Assumption 2 with constants:
where \(\mathcal {L}(b)\) is defined in (17).
A convergence result for Algorithm 2 can be easily concluded from Corollary B.3, with \(\mathcal {L}(b)\) in place of \(\mathcal {L}\).
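For concreteness, here is a minimal sketch of the b-L-SVRG update (Algorithm 2); the initialization and loop structure below are our own simplifications and may differ in small details from Algorithm 2:

```python
import numpy as np

def b_l_svrg(grad_fns, x0, gamma, batch_size, p, num_iters, rng):
    # Variance-reduced minibatch gradient anchored at a reference point w,
    # which is refreshed with probability p (in expectation every 1/p iterations).
    n = len(grad_fns)
    full_grad = lambda y: sum(g(y) for g in grad_fns) / n
    x, w = x0.copy(), x0.copy()
    gw = full_grad(w)
    for _ in range(num_iters):
        B = rng.choice(n, size=batch_size, replace=False)
        g = sum(grad_fns[i](x) - grad_fns[i](w) for i in B) / batch_size + gw
        x = x - gamma * g
        if rng.random() < p:
            w, gw = x.copy(), full_grad(x)
    return x
```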
1.4 Appendix B.4: b-SAGA
Lemma 5.1 in the main text is a consequence of the following lemma.
Lemma B.5
Consider the iterates of Algorithm 1. We have:
where:
with \(\left\Vert Z\right\Vert _{\textrm{Tr}}^2 = \textrm{tr}(Z^\top Z)\) for any \(Z \in \mathbb {R}^{d \times n}\).
Proof
The inequality (29) corresponds to Lemma 3.10 and (30) to Lemma 3.9 in [13]. \(\square \)
The previous Lemma gives us the constants for Assumption 2 for Algorithm 1.
Lemma B.6
The iterates of Algorithm 1 satisfy Assumption 2 with
and constants
Using the constant derived in Lemma B.6 in Theorem 3.1 gives the following corollary.
Corollary B.5
Assume that f has a finite sum structure (4) and that Assumption 1 holds. Choose for all \(k \in \mathbb {N}\) \(\gamma _k = \gamma \), where
Then, from Theorem 3.1 and Lemma B.6, we have that the iterates given by Algorithm 1 verify
where \(\bar{x}_t \overset{\text {def}}{=}\frac{1}{t}\sum \limits _{k=0}^{t-1} x_k\).
1.5 Appendix B.5: b-SEGA
Lemma B.7
Consider the iterates of Algorithm 6. We have:
where:
Proof
Let S be a random miniblock such that \(\mathbb {P}(S = B) = \frac{1}{\left( {\begin{array}{c}d\\ b\end{array}}\right) }\) for any \(B \subseteq [d]\) such that \(|B| = b\). Then, for any vector \(a = [a_1,\dots ,a_d] \in \mathbb {R}^d\), we have:
Indeed,
where we used that \(|\left\{ B \subseteq [d]: |B|=b \wedge i \in B\right\} | = \left( {\begin{array}{c}d-1\\ b-1\end{array}}\right) \). Moreover,
Thus,
We have
where we used in the first inequality that for all \(a, b \in \mathbb {R}^d, {\left\Vert a + b\right\Vert }^2 \le 2{\left\Vert a\right\Vert }^2 + 2{\left\Vert b\right\Vert }^2\). Thus, using the fact that f is L-smooth, we have
Moreover,
where we used in the last inequality the \(L-\)smoothness of f. \(\square \)
Lemma B.8
From Lemma B.7, we have that the iterates of Algorithm 6 satisfy Assumption 2 and Eq. (14) with
and constants:
Using the constant derived in Lemma B.8 in Theorem 3.1 gives the following corollary.
Corollary B.6
Assume that f satisfies Assumption 1. Choose for all \(k \in \mathbb {N}\), \(\gamma _k = \gamma \), where
Then, from Theorem 3.1 and Lemma B.8, we have that the iterates given by Algorithm 6 verify
where \(\bar{x}_t \overset{\text {def}}{=}\frac{1}{t}\sum \nolimits _{k=0}^{t-1} x_k\).
Appendix C: Proofs for Sect. 3
1.1 Appendix C.1: Proof of Theorem 3.1
Before proving Theorem 3.1, we present several useful lemmas.
Lemma C.1
(Bounding the gradient variance) Assuming that the \(g_k\) are unbiased and that Assumption 2 holds, we have
Proof
Starting from the left-hand side of (32), we have
where we used that \(\mathbb {E}_{} \left[ \left\Vert X - \mathbb {E}_{} \left[ X\right] \right\Vert ^2\right] = \mathbb {E}_{} \left[ \left\Vert X\right\Vert ^2\right] - \left\Vert \mathbb {E}_{} \left[ X\right] \right\Vert ^2 \le \mathbb {E}_{} \left[ \left\Vert X\right\Vert ^2\right] \) for any random variable X. \(\square \)
Lemma C.2
(Lemma 8 in [5]) Suppose that Assumption 1 holds and let \(\gamma \in \left( 0, \frac{1}{L} \right] \). Then for all \(x, y \in \mathbb {R}^d\) and \(p = \textrm{prox}_{\gamma R}(y)\) we have,
Proof
We leave the proof to Section F.3. \(\square \)
Lemma C.3
For any \(x \in \mathbb {R}^d\) and minimizer \(x_*\) of F, we have,
Proof
Because \(x_*\) is a minimizer of F, we have that \(- \nabla f(x_*) \in \partial R(x_*)\). By the definition of subgradients, we have
Rearranging gives
Adding \(f(x) - f(x_*)\) to both sides we have,
Now note that on the left-hand side we have the Bregman divergence \(D_{f} (x, x_*)\).
\(\square \)
Definition C.1
Given a stepsize \(\gamma > 0\), the prox-grad mapping is defined as:
For the ease of exposition, we restate Theorem 3.1.
Theorem C.1
Suppose that Assumptions 2 and 1 hold. Let \(M \overset{\text {def}}{=}B/\rho \) and let \((\gamma _k)_{k\ge 0}\) be a decreasing, strictly positive sequence of step sizes chosen such that
The iterates given by (2) converge according to
where \(\bar{x}_t \overset{\text {def}}{=}\sum \nolimits _{k=0}^{t-1} \frac{\left( 1 - \gamma _k \eta \right) \gamma _k}{\sum _{i=0}^{t-1}\left( 1 - \gamma _i\eta \right) \gamma _i}x_k\) and \(V_0 \overset{\text {def}}{=}{\left\Vert x_0 - x_*\right\Vert }^2 + 2 \gamma _0^2\,M \sigma _0^2\) and \({\delta _0 \overset{\text {def}}{=}F(x_0) - F(x_*)}\).
Proof
Let \(x_*\) be a minimizer of F. Using (33) from Lemma C.2 with \(y = x_k - \gamma _k g_k\), \(x = x_k\) and \(\gamma = \gamma _k\) gives
Multiplying both sides by \(-1\) results in
Now focusing on the last term in the above and consider the straightforward decomposition
By the Cauchy–Schwarz inequality, we have that
Now using the nonexpansivity of the proximal operator
Using this in (37), we have
Using (38) in (36) and taking expectation conditioned on \(x_k\), and using \(\mathbb {E}_{k} \left[ \cdot \right] \overset{\text {def}}{=}\mathbb {E}_{} \left[ \cdot \; | \; x_k\right] \) for shorthand, we have
Let \(r_k \overset{\text {def}}{=}x_k-x_*.\) Taking expectation conditioned on \(x_k\) in (35) and using (39), we have
Using (8) from Assumption 2, we have
Let \(V_k \overset{\text {def}}{=}{\left\Vert r_k\right\Vert }^2 + 2\,M \gamma _k^2 \sigma _{k}^2\) where \(M = \frac{B}{\rho }\), then
Since \(\gamma _{k+1} \le \gamma _k\), we have that
Using (9) from Assumption 2, we have
Let \(\eta \overset{\text {def}}{=}2A + 2 M C\). Using (34) in (42) we have,
Using the abbreviation \(\delta _k = F(x_k) - F(x_*)\) gives
Taking expectation,
summing over \(k =0,\ldots , t-1\) and using telescopic cancellation gives
Adding \(2\gamma _0\delta _0\) to both sides of the above inequality and rearranging,
where we also used that \(V_t \ge 0\) and \(\delta _t \ge 0.\) By the choice of \(\gamma _0\) we have \(1 - \gamma _0 \eta > 0\), and since \((\gamma _i)_i\) is a decreasing sequence, we have \(1 - \gamma _i \eta > 0\) for all i. Hence, dividing both sides by \(2\sum \nolimits _{i=0}^{t-1}\left( 1 - \gamma _i\eta \right) \gamma _i\), we have
where \(w_k \overset{\text {def}}{=}\frac{\left( 1 - \gamma _k \eta \right) \gamma _k}{\sum _{i=0}^{t-1}\left( 1 - \gamma _i\eta \right) \gamma _i}\) for all \(k \in \left\{ 0,\dots ,t-1\right\} \). Note that \(\sum _{k=0}^{t-1} w_k = 1\) and \(w_k \ge 0\) for all \(k \in \left\{ 0,\dots ,t-1\right\} \). Hence, since F is convex, we can use Jensen’s inequality to conclude
Writing out the definition of \(\delta _0\) yields the theorem’s statement. \(\square \)
Appendix D: Proofs for Sect. 4
1.1 Appendix D.2: Proof of Corollary 4.2
Proof
Note that, using the integral bound, we have:
Moreover, note that since \(\gamma _k \le \frac{1}{4\left( A+MC \right) }\), we have \(1 - 2\gamma _k(A+MC) \ge \frac{1}{2}\) for all \(k \in \mathbb {N}\). Thus
where \(\eta \overset{\text {def}}{=}2(A+MC)\). Corollary 4.2 follows from using these bounds in Equation (10). \(\square \)
Appendix E: Proofs for Sect. 5
1.1 Appendix E.1: Proof of Proposition 5.1
Proof
We start by expanding the square:
Thus, taking expectation conditioned on \(x_k\), and using \(\mathbb {E}_{k} \left[ \cdot \right] \overset{\text {def}}{=}\mathbb {E}_{} \left[ \cdot \; | \; x_k\right] \) for shorthand, we have
Thus, using (9),
Thus, rearranging and taking the expectation, we have:
Summing over \(k =0,\ldots , t-1\) and using telescopic cancellation gives
Ignoring the negative terms in the upper bound, and using Jensen’s inequality, we have
Moreover, notice that if \(\gamma \le \frac{1}{4(A+MC)}\), then \(2(1-2\gamma (A+MC)) \ge 1\), which gives (13). \(\square \)
1.2 Appendix E.2: Optimal Minibatch Size for b-SAGA (Algorithm 1)
In this section, we present the proofs for Sect. 5.1.
1.2.1 Appendix E.2.1: Proof of Lemma 5.1
Proof
For the constants \(A, B, \rho , C, D_1, D_2\), see Lemma B.5. Moreover,
Thus, (14) holds with \(G = \zeta (b)\). \(\square \)
1.2.2 Appendix E.2.2: Proof of Proposition 5.2
Proof
First, since \(\frac{{\left\Vert x_0 - x_*\right\Vert }^2}{\epsilon }\) does not depend on b, the variations of K(b) are the same as those of
Let’s determine the sign of \(Q^{'}(b)\). We have:
where
And we have:
Case 1 \(L_{\max } > \frac{2nL}{3}\). We have \(2nL - 3L_{\max } < 0\). Hence, \(W_2^2 - 4W_1W_2 < 0\).
Moreover, since \(W_1 < 0\), we have
Thus,
Case 2 \(L_{\max } \le \frac{2nL}{3}\). Then, \(W_2^2 - 4W_1W_2 \ge 0\) and \(K'(b) = 0\) has at least one solution. We are now going to examine whether or not K(b) is convex. We have:
Thus, K(b) is convex. \(K^{'}(b) = 0\) has two solutions:
But since \(b_2 \le 0\), we have that:
\(\square \)
1.3 Appendix E.3: Optimal Minibatch Size for b-L-SVRG (Algorithm 2)
In this section, we present a detailed analysis of the optimal minibatch size derived in Sect. 5.2.
Lemma E.1
We have that the iterates of Algorithm 2 satisfy Assumption 2 and Eq. (14) with
and constants
where \(\mathcal {L}(b)\) is defined in (17).
Proof
For the constants \(A, B, \rho , C, D_1, D_2\), see Lemma B.4 and Corollary B.4.
Moreover,
where we used in the first inequality that \(\mathbb {E}_{} \left[ \left\Vert X - \mathbb {E}_{} \left[ X\right] \right\Vert ^2\right] = \mathbb {E}_{} \left[ \left\Vert X\right\Vert ^2\right] - \left\Vert \mathbb {E}_{} \left[ X\right] \right\Vert ^2 \le \mathbb {E}_{} \left[ \left\Vert X\right\Vert ^2\right] \). Thus, (14) holds with \(G = \mathcal {L}(b)L\). \(\square \)
In the next corollary, we will give the iteration complexity for Algorithm 2 in the case where \(p = 1/n\), which is the usual choice for p in practice. A justification for this choice can be found in [23, 37].
Corollary E.1
(Iteration complexity of L-SVRG) Consider the iterates of Algorithm 2. Let \(p=1/n\) and \(\gamma = \frac{1}{12\mathcal {L}(b)}\). Given the constants obtained for Algorithm 2 in (43), we have, using Corollary 5.1, that if
then, \(\mathbb {E}_{} \left[ f(\bar{x}_k) - f(x_*)\right] \le \epsilon \).
The usual definition of the total complexity is the expected number of gradients computed per iteration times the iteration complexity required to reach an \(\epsilon \)-approximate solution in expectation. Since L-SVRG computes the full gradient every n iterations in expectation, we can say that L-SVRG computes roughly \(2b + 1\) gradients every iteration, so that after n iterations it will have computed \(n + 2bn\) gradients. Thus, the total complexity for L-SVRG is:
1.3.1 Appendix E.3.1: Proof of Proposition 5.3
Proof
Since the factor \(\frac{{\left\Vert x_0 - x_*\right\Vert }^2}{\epsilon }\) which appears in (44) does not depend on the minibatch size, minimizing the total complexity in the minibatch size corresponds to minimizing the following quantity:
We have
where \(\xi \) is a constant independent of b. Differentiating, we have:
Since \(L_{\max } \ge L\) and \(nL \ge L_{\max }\) (see for example Lemma A.6 in [37]), C(b) is a convex function of b. Thus, Q(b) is minimized when \(Q'(b) = 0\). Hence:
Since \(L_{\max }\) can take any value in the interval [L, nL], we have \(b^* \in [0, 6]\). \(\square \)
1.4 Appendix E.4: Optimal Miniblock Size for b-SEGA (Algorithm 6)
In this section, we define for any \(j \in [d]\) the matrix \(I_j \in \mathbb {R}^{d\times d}\) such that
and we consequently define for any subset \(B \subseteq [d]\),
Corollary E.2
From Lemma B.8, we have that the iterates of Algorithm 6 satisfy Assumption 2 and Eq. (14) with
and constants:
Proof
For the constants \(A, B, \rho , C, D_1, D_2\), see Lemma B.8. Moreover, in Algorithm 6, \(h_0 = 0\), so \(\sigma _0^2 = {\left\Vert h_0\right\Vert }^2 = 0\). Thus, (14) holds with \(G = 0\). \(\square \)
In the next corollary, we will give the iteration complexity for Algorithm 6.
Corollary E.3
(Iteration complexity of b-SEGA) Consider the iterates of Algorithm 6. Let \(\gamma = \frac{b}{4\left( 3d - b \right) L}\). Given the constants obtained for Algorithm 6 in (46), we have, using Corollary 5.1, that if
then, \(\mathbb {E}_{} \left[ F(\bar{x}_k) - F(x_*)\right] \le \epsilon \).
Here, we define the total complexity as the number of coordinates of the gradient that we sample at each iteration times the iteration complexity. Since at each iteration, we sample b coordinates of the gradient, the total complexity for Algorithm 6 to reach an \(\epsilon \)-approximate solution is
Thus, we immediately have the following proposition.
Proposition E.1
Let \(b^* = \underset{b \in [d]}{{{\,\mathrm{\arg \!\min }\,}}}\, K(b)\), where K(b) is defined in (47). Then,
The consequence of this proposition is that when using Algorithm 6, one should always use as big a miniblock as possible if the cost of a single iteration is proportional to the miniblock size.
Appendix F: Auxiliary Lemmas
1.1 Appendix F.1: Smoothness and Convexity Lemma
We now develop an immediate consequence of each \(f_i\) being convex and smooth based on the following lemma.
Lemma F.1
Let \(g: \mathbb {R}^d \mapsto \mathbb {R}\) be a convex function
and \(L_g\)-smooth
It follows that
Proof
Fix \(i\in \{1,\ldots , n\}\) and let
To prove (50), it follows that
Substituting this choice of z into (51) gives
\(\square \)
Lemma F.2
Suppose that for all \(i \in [n]\), \(f_i\) is convex and \(L_i\)-smooth, and let \(L_{\max } = \max _{i \in [n]} L_i\). Then
Proof
From (50), we have for all \(i \in [n]\),
Thus,
\(\square \)
1.2 Appendix F.2: Proximal Lemma
Lemma F.3
Let \(R: \mathbb {R}^d \mapsto \mathbb {R}\) be a convex lower semi-continuous function, let \(z,y \in \mathbb {R}^d\) and \(\gamma >0\). With \(p = \textrm{prox}_{\gamma R} (y)\), we have that
Proof
This is classic result, see, for example, the “Second Prox Theorem” in Section 6.5 in [6]. \(\square \)
1.3 Appendix F.3: Proof of Lemma C.2
This proof follows the proof of Lemma 8 in [5], and we reproduce it for completeness. Indeed, using the convexity of f
in combination with (53) where \(z = x_*\) gives
Now using smoothness
gives
Using that
in combination with (54) gives
Now it remains to multiply both sides by \(-2\gamma \) to arrive at (33).