Abstract
We estimate the Lebesgue constants for the weak thresholding greedy algorithm in a Banach space relative to a biorthogonal system. The estimates involve the weakness (relaxation) parameter of the algorithm, as well as properties of the basis, such as its quasi-greedy constant and democracy function.
1 Introduction
In this short note, we calculate the Lebesgue constants associated with the \(t\)-greedy and the Chebyshev \(t\)-greedy algorithms in Banach spaces (thus measuring the efficiency of these approximation methods, in the worst case).
Throughout this paper, \(X\) is a separable infinite dimensional Banach space. A family \((e_i, e_i^*)_{i \in \mathbb N} \subset X \times X^*\) is called a bounded biorthogonal system if:
1. \(X = \overline{\text{ span }}\,[e_i : i \in \mathbb N]\).
2. \(e_i^*(e_j) = 1\) if \(i=j\), \(e_i^*(e_j) = 0\) otherwise.
3. \(0 < \inf _i \min \{\Vert e_i\Vert , \Vert e_i^*\Vert \} \le \sup _i \max \{\Vert e_i\Vert , \Vert e_i^*\Vert \} < \infty \).
For brevity, we refer to \((e_i)\) as a basis. Note that Condition (3) is referred to as \((e_i)\) being seminormalized. In this note, only seminormalized bases are considered.
It is easy to see that, for any \(x \in X\), \(\lim _i e_i^*(x) = 0\), and \(\sup _i |e_i^*(x)| > 0\), unless \(x=0\).
Bases as above are quite common. It is known [7, Theorem 1.27] that, for any \(c > 1\), any separable Banach space has a bounded biorthogonal system (a Markushevitch basis) with \(1 \le \Vert e_i\Vert \), \(\Vert e_i^*\Vert \le c\), and \(X^* = \overline{\text{ span }\,}^{w^*}[e_i^* : i \in \mathbb N]\).
To consider the problem of approximating \(x \in X\) by finite linear combinations of \(e_i\)’s, introduce some notation. For \(x \in X\) set \(\mathop {\mathrm{supp}}x = \{ i \in \mathbb N: e_i^*(x) \ne 0\}\). For finite \(A \subset \mathbb N\), set \(P_A x = \sum _{i \in A} e_i^*(x) e_i\). If \(A^c = \mathbb N\backslash A\) is finite, write \(P_A x = x - P_{A^c} x\).
The best \(n\)-term approximation for \(x \in X\) is defined as
$$\begin{aligned} \sigma _n(x) = \inf \Big \{ \Big \Vert x - \sum _{i \in A} a_i e_i \Big \Vert : |A| \le n, \ (a_i) \text{ scalars} \Big \} , \end{aligned}$$
while the best \(n\)-term coordinate approximation is
$$\begin{aligned} \tilde{\sigma }_n(x) = \inf \big \{ \Vert x - P_A x\Vert : |A| \le n \big \} . \end{aligned}$$
It is easy to see that \(\lim _n \sigma _n(x) = 0\), and
$$\begin{aligned} \tilde{\sigma }_n(x) = \inf _{|A| \le n} \Vert x - P_A x\Vert = \inf _{|A| = n} \Vert x - P_A x\Vert \end{aligned}$$
(the second equality is due to the fact that \(\lim _i e_i^*(x) = 0\)).
We also consider the \(n\)-term residual approximation
$$\begin{aligned} \hat{\sigma }_n(x) = \Vert x - P_{[1,n]} x\Vert . \end{aligned}$$
We say that \((e_i)\) is a Schauder basis if \(\lim _n \hat{\sigma }_n(x) = 0\) for every \(x \in X\) (in this case, also \(\lim _n \tilde{\sigma }_n(x) = 0\)). Many commonly used bases (such as the Haar basis or the trigonometric basis in \(L_p\), for \(1 < p < \infty \)) are, in fact, Schauder bases.
Note that calculating \(\sigma _n(x)\) and \(\tilde{\sigma }_n(x)\) is next to impossible, since all coordinates of \(x\) are in play. Therefore, one can naively look for a good \(n\)-term approximant of \(x\) by considering the \(n\) largest (or “nearly largest”) coefficients. This is done using the weak greedy algorithm. To define this algorithm, fix the relaxation parameter \(t \in (0,1]\). Consider a non-zero \(x \in X\). A set \(A \subset \mathbb N\) is called \(t\)-greedy for \(x\) if \(\inf _{i \in A} |e_i^*(x)| \ge t \sup _{i \notin A} |e_i^*(x)|\) (by the above, \(A\) is finite). When there is no confusion about \(x\), we shorten this term to “\(t\)-greedy set”. Suppose \(\rho = \rho _x : \mathbb N\rightarrow \mathbb N\) is a \(t\)-greedy ordering, that is, \(\{\rho (1), \ldots , \rho (n)\}\) is \(t\)-greedy for every \(n\). In general, a \(t\)-greedy ordering is not unique. Note that \(\{\rho (n) : n \in \mathbb N\} = {\mathfrak {S}}_x := \{n \in \mathbb N: e_n^*(x) \ne 0\}\) if the set \({\mathfrak {S}}_x\) is infinite. On the other hand, if \(|{\mathfrak {S}}_x| = m < \infty \), then \(\{\rho (1), \ldots , \rho (m)\} = {\mathfrak {S}}_x\) while \(\rho (i) \notin {\mathfrak {S}}_x\) for \(i > m\).
An \(n\)-term \(t\)-greedy approximant of \(x\) is defined as \(\mathbf {G}_n^t(x) = P_{A_n} x\), where \(A_n = \{\rho (1), \ldots , \rho (n)\}\), and \(\rho \) is a \(t\)-greedy ordering for \(x\). We define an \(n\)-term Chebyshev \(t\)-greedy approximant \(\mathbf {CG}_n^t(x)\) as an element \(y \in \text{ span }\,[e_i : i \in A_n]\) for which \(\Vert x - y\Vert \) is minimal. We stress that these approximants are not unique, and a fortiori, the operators \(x \mapsto \mathbf {G}_n^t(x)\) and \(x \mapsto \mathbf {CG}_n^t(x)\) are not linear.
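For concreteness (our illustration, not part of the paper): for a vector with finitely many nonzero coordinates, a \(t\)-greedy set can be produced by sorting coefficients by modulus, since any \(1\)-greedy set is in particular \(t\)-greedy for every \(t \in (0,1]\). A minimal Python sketch, with hypothetical function names:

```python
import numpy as np

def t_greedy_set(coeffs, n, t):
    """Return an n-element t-greedy set for the (finitely supported)
    coefficient sequence coeffs, i.e. a set A with
    min_{i in A} |a_i| >= t * sup_{i not in A} |a_i|."""
    order = np.argsort(-np.abs(coeffs))  # decreasing modulus: a 1-greedy ordering
    A, rest = order[:n], order[n:]
    if rest.size:  # check the defining inequality (holds automatically here)
        assert np.abs(coeffs[A]).min() >= t * np.abs(coeffs[rest]).max()
    return A

def greedy_approximant(coeffs, n, t=1.0):
    """Coordinates of the n-term t-greedy approximant P_A x."""
    A = t_greedy_set(coeffs, n, t)
    out = np.zeros_like(coeffs)
    out[A] = coeffs[A]
    return out

x = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
print(greedy_approximant(x, 2, t=0.5))  # keeps the two largest coefficients
```

With a weaker parameter \(t < 1\), other realizations become admissible (any sufficiently large coefficient may be picked); the sketch above fixes one realization.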
For more information on greedy approximation algorithms, we refer the reader to the survey papers [13, 18], as well as to the recent monograph [14].
When \(t = 1\), we omit it, and use the terms “greedy set”, (“Chebyshev”) “greedy approximant”, as well as the notation \(\mathbf {G}_n(x)\) and \(\mathbf {CG}_n(x)\). A basis \((e_i)\) is called quasi-greedy if its quasi-greedy constant is finite:
$$\begin{aligned} \mathfrak {K}= \sup _{x \ne 0} \sup _n \frac{\Vert \mathbf {G}_n(x)\Vert }{\Vert x\Vert } , \end{aligned}$$
with the inner \(\sup \) taken over all realizations of \(\mathbf {G}_n(x)\). In [17] it was shown that a basis is quasi-greedy if and only if \(\lim _n \mathbf {G}_n(x) = x\) for any \(x \in X\), and any (equivalently, some) choice of the sequence \(\mathbf {G}_n(x)\). By [9], for a quasi-greedy basis we also have \(\lim _n \mathbf {G}_n^t(x) = x\) for any \(x \in X\), and any choice of the sequence \(\mathbf {G}_n^t(x)\).
The goal of this paper is to estimate the efficiency of the \(t\)-greedy and Chebyshev \(t\)-greedy methods (in the worst case), by comparing \(\Vert x - \mathbf {G}_n^t(x)\Vert \) and \(\Vert x - \mathbf {CG}_n^t(x)\Vert \) with the best \(n\)-term approximation \(\sigma _n(x)\), and similar quantities. This is done through estimating the Lebesgue constant and its relatives: the smallest constants \(\mathbf {L}(n,t)\), \({\mathbf {L}}_{\mathrm {ch}}(n,t)\), and \({\mathbf {L}}_{\mathrm {re}}(n,t)\) for which the inequalities
$$\begin{aligned} \Vert x - \mathbf {G}_n^t(x)\Vert \le \mathbf {L}(n,t) \, \sigma _n(x) , \quad \Vert x - \mathbf {CG}_n^t(x)\Vert \le {\mathbf {L}}_{\mathrm {ch}}(n,t) \, \sigma _n(x) , \quad \Vert x - \mathbf {G}_n^t(x)\Vert \le {\mathbf {L}}_{\mathrm {re}}(n,t) \, \hat{\sigma }_n(x) \end{aligned}$$
hold.
We stress that the suprema in the above inequalities are taken over all \(x \in X\), and all possible realizations of the (Chebyshev) weakly greedy algorithm. A basis is called greedy if \(\sup _n \mathbf {L}(n,1) < \infty \), and partially greedy if \(\sup _n {\mathbf {L}}_{\mathrm {re}}(n,1) < \infty \).
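As a toy illustration (ours, not from the paper): for the canonical basis of \(\ell _1\), the greedy approximant keeps the \(n\) largest coefficients, and its error coincides with the best \(n\)-term error, so the ratio defining \(\mathbf {L}(n,1)\) equals \(1\). A brute-force check in Python:

```python
from itertools import combinations
import numpy as np

x = np.array([4.0, -0.5, 2.5, 1.0, -3.0, 0.25])
n = 3
order = np.argsort(-np.abs(x))
greedy_error = np.abs(x[order[n:]]).sum()        # ||x - G_n(x)|| in l1
# sigma_n(x): for the l1 basis, the optimal coefficients on a support set A
# are the coordinates of x itself, so it suffices to minimize over supports.
sigma_n = min(np.abs(np.delete(x, A)).sum()
              for A in combinations(range(len(x)), n))
assert np.isclose(greedy_error, sigma_n)         # the ratio is exactly 1 here
```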
To estimate the Lebesgue constants, we quantify some properties of \((e_i)\). We use the left and right democracy functions \(\phi _l(k) = \inf _{|A|=k} \Vert \sum _{i \in A} e_i\Vert \) and \(\phi _r(k) = \sup _{|A|=k} \Vert \sum _{i \in A} e_i\Vert \) (sometimes, \(\phi _r\) is also referred to as the fundamental function). We define the democracy parameter
$$\begin{aligned} \varvec{\mu }(n) = \sup \Big \{ \frac{\Vert \sum _{i \in A} e_i\Vert }{\Vert \sum _{i \in B} e_i\Vert } : |A| = |B| \le n \Big \} . \end{aligned}$$
Following [12], define the disjoint democracy parameter
$$\begin{aligned} \varvec{\mu _d}(n) = \sup \Big \{ \frac{\Vert \sum _{i \in A} e_i\Vert }{\Vert \sum _{i \in B} e_i\Vert } : |A| = |B| \le n , \ A \cap B = \emptyset \Big \} . \end{aligned}$$
Clearly, \(\varvec{\mu _d}(n) \le \varvec{\mu }(n)\). By [10, Lemma 13], \(\varvec{\mu }(n) \le 2 \mathfrak {K}\varvec{\mu _d}(n)\). Related to the democracy parameter of a basis \((e_i)\) is its conservative parameter:
$$\begin{aligned} \mathbf {c}(n) = \sup \Big \{ \frac{\Vert \sum _{i \in A} e_i\Vert }{\Vert \sum _{i \in B} e_i\Vert } : |A| = |B| \le n , \ \max A < \min B \Big \} . \end{aligned}$$
Clearly \(\mathbf {c}(n) \le \varvec{\mu _d}(n)\). The norms of coordinate projections in a basis \((e_i)\) are quantified by the unconditionality parameter and complemented unconditionality parameter: \(\mathbf {k}(n) = \sup _{|A| \le n} \Vert P_A\Vert \), resp. \(\mathbf {k}_c(n) = \sup _{|A| \le n} \Vert I - P_A\Vert \) (clearly \(|\mathbf {k}(n) - \mathbf {k}_c(n)| \le 1\)).
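To make these parameters concrete (our example, not from the text): for the canonical basis of \(\ell _p\), \(\Vert \sum _{i \in A} e_i\Vert = |A|^{1/p}\) for every finite \(A\), so \(\phi _l(k) = \phi _r(k) = k^{1/p}\), and the basis is \(1\)-democratic. A quick numerical check:

```python
import numpy as np

p, n = 2.0, 16
# ||sum_{i in A} e_i||_p depends only on |A| = k for the canonical l_p basis:
phi = np.array([np.linalg.norm(np.ones(k), ord=p) for k in range(1, n + 1)])
assert np.allclose(phi, [k ** (1 / p) for k in range(1, n + 1)])
phi_l, phi_r = phi, phi     # inf and sup over |A| = k coincide
assert np.max(phi_r / phi_l) == 1.0   # democracy parameter mu(n) = 1
```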
The investigation of Lebesgue constants for greedy algorithms dates back to the earliest works on greedy algorithms, with some relevant ideas appearing already in [8]. In [12], the Lebesgue constants of the Haar basis in BMO and in dyadic BMO were computed. More recently, in [15, 16], the Lebesgue constants for tensor product bases in \(L_p\)-spaces (in particular, for the multi-Haar basis) were calculated. The Lebesgue constants for the trigonometric basis in \(L_p\) (which is not quasi-greedy) are also known, see e.g. [13, Section 1.7]. The recent paper [4] estimates the Lebesgue constants for bases in \(L_p\) spaces with specific properties (such as being uniformly bounded). Lebesgue constants for redundant dictionaries are studied in [14, Section 2.6].
This paper is structured as follows: in Sect. 2, we gather some preliminary facts about quasi-greedy bases. In Sect. 3, we estimate \(\mathbf {L}(n,t)\) in terms of \(\mathfrak {K}\), \(\varvec{\mu _d}(n)\), \(\mathbf {k}(n)\), and \(t\). For \(t=1\), related results were obtained in [5]. However, the Lebesgue constant was not explicitly calculated there. Retracing the computations, one obtains worse constants than those given by Theorem 3.1. Corollary 3.5 gives an upper estimate for the Lebesgue constant of quasi-greedy bases in Hilbert spaces, by combining Theorem 3.1 with the recent results of Garrigos and Wojtaszczyk [6]. Further, we estimate the Lebesgue constant for general (not necessarily quasi-greedy) systems in Proposition 3.6.
In Sect. 4, we estimate \({\mathbf {L}}_{\mathrm {ch}}(n,t)\). The estimates involve only \(t\), \(\mathfrak {K}\), and \(\varvec{\mu _d}(n)\). Finally, in Sect. 5, we provide upper and lower bounds for \({\mathbf {L}}_{\mathrm {re}}(n,t)\), involving \(t\), \(\mathfrak {K}\), and \(\mathbf {c}(n)\). The main results are given in Theorems 4.1 and 5.1, respectively.
Most of the work in this paper is done in the real case. In Sect. 6, we indicate that the complex versions of the results of this paper also hold, albeit perhaps with different numerical constants.
Remark 1.1
After the first version of this article was circulated, the referee brought the recent paper [10] to the attention of the authors. There, order-of-magnitude estimates for the Lebesgue constant, and the Chebyshevian Lebesgue constant (similar to our Theorems 3.1, 4.1) are given. Our results have the advantage of establishing the dependence of the Lebesgue constants not only on \(\varvec{\mu }_d(n)\) and \(\mathbf {k}(n)\), but also on \(\mathfrak {K}\) and \(t\).
2 Preliminary results
In this section we prove two lemmas, which will be needed throughout the paper, and may be of interest in their own right. First we sharpen some results from [9, Section 2].
Lemma 2.1
Suppose \((e_i) \subset X\) is a basis with a quasi-greedy constant \(\mathfrak {K}\). Consider \(x \in X\), and let \(A\) be a \(t\)-greedy set for \(x\). Then \(\Vert P_A x\Vert \le (1 + 4 t^{-1} \mathfrak {K}) \mathfrak {K}\Vert x\Vert \).
Proof
For the sake of brevity, set \(a_i = e_i^*(x)\). Let \(M = \min _{i \in A} |a_i|\), then \(|a_i| \le t^{-1} M\) for \(i \notin A\). Define \(B = \{i : |a_i| > t^{-1}M\}\) and \(C = \{i : |a_i| \ge M\}\). Then \(B \subset A \subset C\), and \(P_A x = P_B x + P_{A \backslash B} x\). By the definition of \(\mathfrak {K}\), \(\Vert P_B x\Vert \le \mathfrak {K}\Vert x\Vert \), and \(\Vert P_C x\Vert \le \mathfrak {K}\Vert x\Vert \). Write \(P_C x = \sum _{i \in C} a_i e_i\).
Now define the basis \((e_i^\prime )\) by setting
$$\begin{aligned} e_i^\prime = {\left\{ \begin{array}{ll} \text{ sign }\,(a_i) \, e_i &{}\quad \text{ if } a_i \ne 0 , \\ e_i &{}\quad \text{ otherwise. } \end{array}\right. } \end{aligned}$$
As this basis has the same quasi-greedy constant as \((e_i)\), Lemma 6.1(2) shows that \(M \Vert \sum _{i \in C} e_i^\prime \Vert \le 2 \mathfrak {K}\Vert x\Vert \). For \(i \in C\), set
$$\begin{aligned} b_i = {\left\{ \begin{array}{ll} |a_i| &{}\quad \text{ if } i \in A \backslash B , \\ 0 &{}\quad \text{ otherwise, } \end{array}\right. } \end{aligned}$$
so that \(P_{A \backslash B}\, x = \sum _{i \in C} b_i e_i^\prime \).
For any \(i\), \(|b_i| \le t^{-1} M\), hence, by Lemma 6.1(1),
$$\begin{aligned} \Vert P_{A \backslash B}\, x\Vert = \Big \Vert \sum _{i \in C} b_i e_i^\prime \Big \Vert \le 2 \mathfrak {K}\, t^{-1} M \Big \Vert \sum _{i \in C} e_i^\prime \Big \Vert \le 4 t^{-1} \mathfrak {K}^2 \Vert x\Vert . \end{aligned}$$
By the triangle inequality, \(\Vert P_A x\Vert \le \Vert P_B x\Vert + \Vert P_{A \backslash B} x\Vert \le \mathfrak {K}\Vert x\Vert + 4 t^{-1} \mathfrak {K}^2 \Vert x\Vert = (1 + 4 t^{-1} \mathfrak {K}) \mathfrak {K}\Vert x\Vert \). \(\square \)
Lemma 2.2
Suppose \((e_i)\) is a \(\mathfrak {K}\)-quasi-greedy basis in \(X\). Consider \(x \in X\), and let \(a_i = e_i^*(x)\), for \(i \in \mathbb N\). Suppose a finite set \(A \subset \mathbb N\) satisfies \(\min _{i \in A} |a_i| \ge M\). Then \(M \Vert \sum _{i \in A} \text{ sign }\,(a_i) e_i\Vert \le 2\mathfrak {K}^2 \Vert x\Vert \). Furthermore, \(M \Vert \sum _{i \in A} e_i\Vert \le 4\mathfrak {K}^2 \Vert x\Vert \).
Proof
Consider the set \(B = \{i : |a_i| \ge M\}\) (clearly \(A \subset B\)). By [5, Lemma 10.1], \(\Vert \sum _{i \in A} \text{ sign }\,(a_i) e_i\Vert \le \mathfrak {K}\Vert \sum _{i \in B} \text{ sign }\,(a_i) e_i\Vert \). By Lemma 6.1(2), \(\Vert \sum _{i \in B} \text{ sign }\,(a_i) e_i\Vert \) \(\le 2\mathfrak {K}\Vert x\Vert /M\). To establish the “furthermore” part, let \(A_+ = \{i \in A : \text{ sign }\,(a_i) = 1\}\), and \(A_- = \{i \in A : \text{ sign }\,(a_i) = -1\}\). By the above, \(M \Vert \sum _{i \in A_+} \text{ sign }\,(a_i) e_i\Vert \le 2\mathfrak {K}^2 \Vert x\Vert \), and the same holds for \(A_-\). Complete the proof using the triangle inequality. \(\square \)
We close this section with a brief discussion about the values of \(\varvec{\mu _d}(n)\), \(\mathbf {k}(n)\), and \(\mathbf {c}(n)\). It was shown in [2, 5] that, for a \(\mathfrak {K}\)-quasi-greedy basis, \(\mathbf {k}(n) \le C \log (en)\), where the constant \(C\) depends on the particular basis. For bases in \(L_p\) spaces, sharper estimates were obtained in [6]. It is easy to see that \(\mathbf {c}(n) \le \varvec{\mu _d}(n) \le C n\), where \(C\) depends on a basis. These estimates are optimal: indeed, an appropriate enumeration of the canonical (normalized and \(1\)-unconditional) basis in \(c_0 \oplus _2 \ell _1\) gives \(\mathbf {c}(n) \ge cn\).
3 The Lebesgue constant
In this section, we use some of the techniques of [5] to estimate the Lebesgue constants \(\mathbf {L}(n,t)\).
Theorem 3.1
For any \(\mathfrak {K}\)-quasi-greedy basis,
$$\begin{aligned} \max \big \{ \mathbf {k}_c(n) , \, t^{-1} \varvec{\mu _d}(n) \big \} \le \mathbf {L}(n,t) \le \mathbf {k}(n) + \mathbf {k}_c(n) + 8 t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n) . \end{aligned}$$
The proof of the theorem relies on several lemmas, whose proofs closely resemble those given in [5] (Lemma 3.4 yields better upper estimates).
Lemma 3.2
For any \(\mathfrak {K}\)-quasi-greedy basis, \(\mathbf {L}(n,t) \ge t^{-1} \varvec{\mu _d}(n)\).
Proof
Fix \(n \in \mathbb N\) and \(\varepsilon > 0\). Find \(A, B \subset \mathbb N\), so that \(A \cap B = \emptyset \), \(|A| = |B| = k \le n\), and
$$\begin{aligned} \Big \Vert \sum _{i \in A} e_i \Big \Vert \ge (\varvec{\mu _d}(n) - \varepsilon ) \Big \Vert \sum _{i \in B} e_i \Big \Vert . \end{aligned}$$
Pick a set \(C\), disjoint from \(A\) and \(B\), so that \(|C| = n-k\). Consider
$$\begin{aligned} x = \sum _{i \in A} e_i + (t + \varepsilon ) \sum _{i \in B \cup C} e_i . \end{aligned}$$
Then \((t + \varepsilon ) \sum _{i \in B \cup C} e_i\) is a \(t\)-greedy approximant of \(x\), for which \(\Vert x - \mathbf {G}_n^t(x)\Vert = \Vert \sum _{i \in A} e_i\Vert \). However, \(|A \cup C| = n\), hence
$$\begin{aligned} \sigma _n(x) \le \Vert x - P_{A \cup C}\, x\Vert = (t + \varepsilon ) \Big \Vert \sum _{i \in B} e_i \Big \Vert . \end{aligned}$$
Thus,
$$\begin{aligned} \mathbf {L}(n,t) \ge \frac{\Vert x - \mathbf {G}_n^t(x)\Vert }{\sigma _n(x)} \ge \frac{\Vert \sum _{i \in A} e_i\Vert }{(t + \varepsilon ) \Vert \sum _{i \in B} e_i\Vert } \ge \frac{\varvec{\mu _d}(n) - \varepsilon }{t + \varepsilon } . \end{aligned}$$
As \(\varepsilon \) can be arbitrarily small, the desired estimate follows. \(\square \)
Lemma 3.3
For any basis, \(\mathbf {L}(n,t) \ge \mathbf {k}_c(n)\).
Proof
Clearly \(\mathbf {L}(n,t) \ge \mathbf {L}(n,1)\). By [5, Proposition 3.3], \(\mathbf {L}(n,1) \ge \mathbf {k}_c(n)\). \(\square \)
Lemma 3.4
For any \(\mathfrak {K}\)-quasi-greedy basis, \(\mathbf {L}(n,t) \le \mathbf {k}(n) + \mathbf {k}_c(n) + 8 t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n)\).
Proof
For \(x \in X\), let \(a_i = e_i^*(x)\), and fix \(\varepsilon > 0\). Suppose \(A \subset \mathbb N\) is a \(t\)-greedy set for \(x\), of cardinality \(n\). Find \(z \in X\), supported on a set \(B\) of cardinality \(n\), so that \(\Vert x - z\Vert < \sigma _n(x) + \varepsilon \). Let \(M = \sup _{i \notin A} |a_i|\), then \(|a_i| \ge tM\) whenever \(i \in A\). By the triangle inequality,
$$\begin{aligned} \Vert x - \mathbf {G}_n^t(x)\Vert = \Vert x - P_A x\Vert \le \Vert (I - P_A)(x-z)\Vert + \Vert P_{B \backslash A} (x-z)\Vert + \Vert P_{B \backslash A}\, x\Vert . \end{aligned}$$
We have
$$\begin{aligned} \Vert (I - P_A)(x-z)\Vert \le \mathbf {k}_c(n) \Vert x - z\Vert , \end{aligned}$$
and
$$\begin{aligned} \Vert P_{B \backslash A}(x-z)\Vert \le \mathbf {k}(n) \Vert x - z\Vert . \end{aligned}$$
It remains to estimate the third summand, in the non-trivial case of \(|B \backslash A| = k > 0\). For \(i \in B \backslash A\), \(|a_i| \le M\), hence by Lemma 6.1(1) (see also [3, Lemma 2.1]),
$$\begin{aligned} \Vert P_{B \backslash A}\, x\Vert \le 2 \mathfrak {K}M \Big \Vert \sum _{i \in B \backslash A} e_i \Big \Vert . \end{aligned}$$
By Lemma 2.2, \(M \le 4 t^{-1} \mathfrak {K}^2 \Vert \sum _{i \in A \backslash B} e_i\Vert ^{-1} \Vert x-z\Vert \). Thus,
$$\begin{aligned} \Vert P_{B \backslash A}\, x\Vert \le 8 t^{-1} \mathfrak {K}^3 \frac{\Vert \sum _{i \in B \backslash A} e_i\Vert }{\Vert \sum _{i \in A \backslash B} e_i\Vert } \Vert x - z\Vert \le 8 t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n) \Vert x - z\Vert . \end{aligned}$$
As \(\Vert x-z\Vert \) can be arbitrarily close to \(\sigma _n(x)\), we are done. \(\square \)
We use Theorem 3.1 to estimate the Lebesgue constant for quasi-greedy bases in a Hilbert space. Recall that a basis \((e_i)\) is called hilbertian (besselian) if there exists a constant \(c\) so that, for every finite sequence of scalars \((\alpha _i)\), we have \(\sum _i |\alpha _i|^2 \ge c \Vert \sum _i \alpha _i e_i\Vert ^2\) (resp. \(\sum _i |\alpha _i|^2 \le c \Vert \sum _i \alpha _i e_i\Vert ^2\)).
Corollary 3.5
For any quasi-greedy basis in a Hilbert space, there exists \(\alpha \in (0,1)\) and \(C > 0\) so that, for any \(n \in \mathbb N\) and \(t \in (0,1)\), \(\mathbf {L}(n,t) \le C (t^{-1} + (\log (en) )^\alpha )\). If, moreover, the basis is either besselian or hilbertian, then there exists \(\alpha \in (0,1/2)\) with the above property.
Proof
By [6], there exists \(c_1 > 0\), and \(\alpha \) as above, so that \(\mathbf {k}(n) \le c_1 (\log (en) )^\alpha \). By [17, Theorem 3], \(\varvec{\mu }(n) \le c_2\), for some constant \(c_2\). To finish the proof, apply Theorem 3.1. \(\square \)
We conclude this section with an estimate for \(\mathbf {L}(n,t)\) for bounded Markushevitch bases which are not necessarily quasi-greedy. Let \(1 \le p \le q \le \infty \). We say that \((e_i)\) satisfies weak upper \(p\)- and lower \(q\)-estimates if there exists \(K>0\) such that, for all \(x \in X\) with \(a_n = e_n^*(x)\),
$$\begin{aligned} K^{-1} \Vert (a_n)\Vert _{q,\infty } \le \Vert x\Vert \le K \Vert (a_n)\Vert _{p,1} , \end{aligned}$$
where, letting \((a_n^*)\) denote the decreasing rearrangement of the sequence \((|a_n|)\),
$$\begin{aligned} \Vert (a_n)\Vert _{p,1} = \sum _n n^{1/p - 1} a_n^* \end{aligned}$$
and
$$\begin{aligned} \Vert (a_n)\Vert _{q,\infty } = \sup _n n^{1/q} a_n^* \end{aligned}$$
are the usual Lorentz sequence norms. Note that \(p=1\) and \(q=\infty \) are just the \(\ell _1\) and \(c_0\) norms, respectively.
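These norms are easy to compute for finitely supported sequences; the following Python helpers (names ours) also confirm the \(p=1\) and \(q=\infty \) endpoint cases mentioned above:

```python
import numpy as np

def lorentz_p1(a, p):
    """||a||_{p,1} = sum_n n^{1/p-1} a_n^*, with (a_n^*) the decreasing
    rearrangement of (|a_n|)."""
    s = np.sort(np.abs(np.asarray(a, dtype=float)))[::-1]
    n = np.arange(1, s.size + 1)
    return float(np.sum(n ** (1 / p - 1) * s))

def lorentz_q_inf(a, q):
    """||a||_{q,inf} = sup_n n^{1/q} a_n^* (q = inf gives n^0 = 1, the sup norm)."""
    s = np.sort(np.abs(np.asarray(a, dtype=float)))[::-1]
    if s.size == 0:
        return 0.0
    n = np.arange(1, s.size + 1)
    return float(np.max(n ** (1 / q) * s))

a = [3.0, -1.0, 2.0]
assert np.isclose(lorentz_p1(a, 1.0), 6.0)          # p = 1: the l1 norm
assert np.isclose(lorentz_q_inf(a, np.inf), 3.0)    # q = inf: the sup norm
```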
The following result slightly extends [17, Theorem 5] by incorporating the weakness parameter \(t\) and replacing upper \(\ell _p\)-and lower \(\ell _q\)-estimates by weaker Lorentz sequence space estimates.
Proposition 3.6
Suppose \((e_i)\) satisfies weak upper \(p\)- and lower \(q\)-estimates. Then there exists \(D := D(p,q,K)\) such that
Proof
First suppose \(q > p\). Let \(x \in X\) and set \(a_i := e_i^*(x)\). Let \(A\) be a \(t\)-greedy set for \(x\), with \(|A|=n\), and let \(\mathbf {G}^t_n(x) := \sum _{i \in A} a_i e_i\). Given \(\varepsilon >0\), choose \(B \subset \mathbb {N}\), with \(|B| =n\), such that \(\Vert x - \sum _{i \in B}b_ie_i\Vert \le \sigma _n(x) + \varepsilon \). For convenience, set \(b_i = 0\) if \(i \notin B\). By the triangle inequality,
Setting \(C = C(p,q) := (1/p - 1/q)^{1/q - 1/p}\), we obtain:
Similarly,
We clearly have \(|A\setminus B| = |B \setminus A|\). As \(A\) is \(t\)-greedy set for \(x\), we have \(\min _{A \backslash B} |a_i| \ge t \max _{B \backslash A}|a_i|\). Therefore,
Since \(\varepsilon >0\) is arbitrary, combining (3.1)–(3.4) gives
and hence \(\mathbf {L}(n,t) \le \Big (1 + 2K^2C + \frac{K^2C}{t}\Big ) n^{1/p - 1/q}\). The case \(p=q\) is similar except \(Cn^{1/p-1/q}\) is replaced by \(1 + \log n\) throughout. \(\square \)
Corollary 3.7
Let \(1 \le p < \infty \) and let \((e_i)\) be a bounded Markushevitch basis such that \(\phi _r(k) \le C k^{1/p}\) for some \(C>0\). Then \(\mathbf {L}(n,t) \le C^\prime n^{1/p}/t\), for some constant \(C^\prime \).
Proof
Any basis satisfies the lower \(\infty \)-estimate. In order to apply Proposition 3.6, we need to show that \((e_i)\) has a weak upper \(p\)-estimate.
By the triangle inequality, \(\Vert \sum _{i \in A} \pm e_i\Vert \le 2C n^{1/p}\) for all \(A\subset \mathbb {N}\) with \(|A|=n\). Suppose, for \(x \in X\), the sequence \(a_n = e_n^*(x)\) satisfies \(\sum _n n^{1/p-1} a_n^* = \gamma \). Let \((n_i)\) be an enumeration of \(\mathbb N\) such that \(|a_{n_i}| = a_i^*\) for every \(i\). Set \(\varepsilon _i = \text{ sign }\,(a_{n_i})\), \(c_i = a_i^* - a_{i+1}^*\), and \(y_i = \sum _{j=1}^i \varepsilon _j e_{n_j}\). Note that, for every \(i\), \(i^{1/p} - (i-1)^{1/p} \le i^{1/p-1}\), hence
$$\begin{aligned} \sum _i c_i \Vert y_i\Vert \le 2C \sum _i c_i \, i^{1/p} = 2C \sum _i a_i^* \big ( i^{1/p} - (i-1)^{1/p} \big ) \le 2C \sum _i a_i^* \, i^{1/p-1} = 2C \gamma . \end{aligned}$$
Consequently, \(\sum _i c_i y_i\) converges in \(X\). For every \(k\), we have \(e_k^*(\sum _i c_i y_i) = e_k^*(x)\), hence \(\sum _i c_i y_i = x\). By the above, \(\Vert x\Vert \le 2C \gamma \). \(\square \)
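The Abel-summation step in this proof can be checked numerically; a small sketch (ours), for a finitely supported decreasing sequence and a fixed \(p > 1\):

```python
import numpy as np

p = 1.5
a = np.array([5.0, 3.0, 2.0, 0.5, 0.0])      # a_i^*: decreasing, finitely supported
i = np.arange(1, a.size + 1)
c = a - np.append(a[1:], 0.0)                # c_i = a_i^* - a_{i+1}^*
# Abel summation: sum_i c_i i^{1/p} = sum_i a_i^* (i^{1/p} - (i-1)^{1/p})
lhs = np.sum(c * i ** (1 / p))
mid = np.sum(a * (i ** (1 / p) - (i - 1) ** (1 / p)))
assert np.isclose(lhs, mid)
# ... and i^{1/p} - (i-1)^{1/p} <= i^{1/p-1} bounds it by the quantity gamma:
assert mid <= np.sum(a * i ** (1 / p - 1)) + 1e-12
```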
Remark 3.8
The estimates of Proposition 3.6 and Corollary 3.7 are sharp, even for unconditional (hence quasi-greedy) bases. For \(q > p\), consider the canonical basis of \(\ell _q \oplus _q \ell _p\) (\(c_0 \oplus _\infty \ell _p\) if \(q = \infty \)). This basis clearly possesses the lower \(q\)- and upper \(p\)-estimates, with constant \(1\). Denote the bases of \(\ell _q\) and \(\ell _p\) by \((e_i)\) and \((f_i)\) respectively. Fix \(c > 1\), and let \(x = \sum _{i=1}^n \big ( ct e_i + f_i \big )\). One possible realization of the \(t\)-greedy algorithm gives \(\mathbf {G}_n^t(x) = ct \sum _{i=1}^n e_i\), hence \(\Vert x - \mathbf {G}_n^t(x)\Vert = n^{1/p}\). On the other hand, \(\sigma _n(x) \le \tilde{\sigma }_n(x) \le \Vert ct \sum _{i=1}^n e_i\Vert = ct n^{1/q}\). As \(c\) can be arbitrarily close to \(1\), we obtain \(\mathbf {L}(n,t) \ge n^{1/p-1/q}/t\), showing the optimality of Proposition 3.6. Note that \(\phi _r(k) = k^{1/p}\), hence, for \(q = \infty \), we witness the optimality of Corollary 3.7.
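The arithmetic of this example can be traced numerically; a sketch (ours, under the stated setup) with \(q = 2\), \(p = 1\), using the canonical bases of \(\ell _2\) and \(\ell _1\):

```python
import numpy as np

n, t, c = 100, 0.5, 1.001
e_coeffs = c * t * np.ones(n)    # coefficients on the l2 block
f_coeffs = np.ones(n)            # coefficients on the l1 block
# A t-greedy realization may keep the e-block and discard the f-block:
assert e_coeffs.min() >= t * f_coeffs.max()
residual = np.abs(f_coeffs).sum()          # ||x - G_n^t(x)|| = n^{1/p} = n
best = np.linalg.norm(e_coeffs)            # sigma_n(x) <= c t n^{1/q}
# Worst-case ratio is at least n^{1/p - 1/q} / (c t):
assert residual / best >= n ** 0.5 / (c * t) - 1e-9
```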
We can also show the optimality of Proposition 3.6 for \(p=q=2\), once more for a quasi-greedy basis. By [6, Theorem 3.1 and Corollary 3.11], there exists a quasi-greedy democratic basis in \(c_0 \oplus \ell _1 \oplus \ell _2\), so that \(\phi _r(n) \sim \phi _l(n) \sim \sqrt{n}\). The weak upper \(2\)-estimate follows from the proof of Corollary 3.7, whereas the weak lower \(2\)-estimate follows from Lemma 6.1(2). Furthermore, [6, Corollary 3.11] gives \(\mathbf {k}(n) \ge c \log n\) for this basis (\(c\) is a constant). By Theorem 3.1, \(\mathbf {L}(n,t) \ge \mathbf {k}(n) - 1\).
Remark 3.9
We also present two examples of sharpness of Proposition 3.6 for bases which are not quasi-greedy. Throughout, we use some well-known facts about Lorentz spaces, see e.g. the survey [1].
First pick \(p \in (1,2)\). Set \(q = p/(p-1)\) and \(\gamma = 2/p - 1\) (so \(1/p = (1+\gamma )/2\), and \(1/q = 1 - 1/p = (1-\gamma )/2\)). Consider the measures \(\mu \) and \(\nu \) on \([-\pi ,\pi ]\), defined by \(d \mu = |t|^{-\gamma } \, dt\) and \(d \nu = |t|^\gamma \, dt\). The trigonometric system forms a non-quasi-greedy Schauder basis in both \(L_2(\mu )\) and \(L_2(\nu )\), see e.g. [11]. Denote by \(e_1, e_2, \ldots \) (\(f_1, f_2, \ldots \)) the trigonometric basis in \(L_2(\mu )\) (resp. \(L_2(\nu )\)), enumerated as \(1, e^{i t}, e^{-i t}, e^{2i t}, e^{-2i t}, \ldots \).
First concentrate on the basis \((e_i)\) in \(L_2(\mu )\). Clearly this basis satisfies the lower \(2\)-estimate:
$$\begin{aligned} \Big \Vert \sum _i a_i e_i \Big \Vert _{L_2(\mu )} \ge c \Big ( \sum _i |a_i|^2 \Big )^{1/2} . \end{aligned}$$
Next show that \(\phi _r(n) \sim n^{1/p}\) (once this is established, the weak upper \(p\)-estimate will follow, as in the proof of Corollary 3.7). The lower estimate on \(\phi _r\) is proved in [6, Lemma 3.7]. For the upper estimate, recall the well-known fact that \(\int |\phi \psi | \le \int \phi ^* \psi ^*\) (\(\phi ^*\) and \(\psi ^*\) are decreasing rearrangements of \(\phi \) and \(\psi \) respectively). Consequently, if \(f\) is a function on \([0,\pi ]\) with \(0 \le f \le n^2\), and \(\int f(t) \,dt = n\), then \(\int f(t) t^{-\gamma } \, dt \le n^{1+\gamma }/(1-\gamma )\) (the equality is attained when \(f(t) = n^2 \mathbf {1}_{[0,1/n]}\)). Now suppose \(A \subset \mathbb N\) has cardinality \(n\). Applying our observation to \(f = |\sum _{j \in A} e_j|^2\), we obtain \(\Vert \sum _{j \in A} e_j\Vert _{L_2(\mu )} \prec n^{(1+\gamma )/2} = n^{1/p}\).
Use [6, Lemma 3.7] to find \(\varepsilon _1, \ldots , \varepsilon _{2n+1} \in \{-1,1\}\) so that \(\Vert \sum _i \varepsilon _i e_i \Vert _{L_2(\mu )} \sim \sqrt{n}\), while \(\Vert \sum _i e_i \Vert _{L_2(\mu )} \sim n^{1/p}\). Let \(B = \{i : \varepsilon _i = 1 \}\) and \(C = \{i : \varepsilon _i = -1 \}\). For \(\varepsilon > 0\) set \(x = (1+\varepsilon ) \sum _{i \in B} e_i - \sum _{i \in C} e_i\). For \(\varepsilon < 1/n\) we have \(\Vert x\Vert \sim \sqrt{n}\), yet \(\Vert x - \mathbf {G}_{|B|}(x)\Vert = \Vert \sum _{i \in C} e_i\Vert \sim n^{1/p}\). Consequently, \(\mathbf {L}(|B|,1) \succ |B|^{1/p-1/2}\). By the above, \(|B| \sim n\). Thus, the estimates on \(\mathbf {L}(n,t)\) obtained in Proposition 3.6 are optimal for this basis.
In the second example the optimality of these estimates is shown for a basis with a weak upper \(p\)-estimate, and a weak lower \(q\)-estimate. Following [6, Section 3], define the Schauder basis \((g_j)\) in \(L_2(\mu )\, \oplus _2\, L_2(\nu )\) by setting, for \(k \in \mathbb N\), \(g_{2k-1} = (e_k + f_k)/\sqrt{2}\) and \(g_{2k} = (e_k - f_k)/\sqrt{2}\). By the proof of [6, Proposition 3.10], for any odd \(n\) we can have \(\Vert \sum _{k=1}^{2n} g_k\Vert \sim n^{1/q}\), yet \(\Vert \sum _{k=1}^n g_{2k-1}\Vert \sim n^{1/p}\). As in the previous paragraph, we conclude that \(\mathbf {L}(n,1) \succ n^{1/p-1/q}\).
Next show that \((g_j)\) satisfies the weak upper \(p\)-estimate, and the weak lower \(q\)-estimate. Consider
We have to show that
Start by recalling that, for any sequence \((\gamma _i)\),
The basis \((f_k)\) satisfies the upper \(2\)-estimate:
Thus, by (3.7), (3.5), and the triangle inequality for \(\Vert \cdot \Vert _{p,1}\),
yielding the right hand side of (3.6).
Next note that \((f_i)\) satisfies the weak lower \(q\)-estimate. Indeed, the functions \(f_i^\prime (t) = e_i(t) |t|^\gamma \) are biorthogonal to \((e_i)\) in \(L_2(\mu )\). By duality, the sequence \((f_i^\prime )\) satisfies the weak lower \(q\)-estimate. Now observe that \(U : L_2(\mu ) \rightarrow L_2(\nu ) : f_i^\prime \rightarrow f_i\) is an isometry. Moreover, \((e_i)\) satisfies the lower \(2\)-estimate, hence the weak lower \(q\)-estimate as well. As \(\Vert \cdot \Vert _{q,\infty }\) is a quasi-norm, we obtain
This yields the left hand side of (3.6).
4 The Chebyshevian Lebesgue constant
Theorem 4.1
For any \(\mathfrak {K}\)-quasi-greedy basis,
$$\begin{aligned} \frac{\varvec{\mu _d}(n)}{2 t \mathfrak {K}} \le {\mathbf {L}}_{\mathrm {ch}}(n,t) \le \frac{20 \, \mathfrak {K}^3 \varvec{\mu _d}(n)}{t} . \end{aligned}$$
As a corollary, we recover a result from [2].
Corollary 4.2
Any almost greedy basis is semi-greedy.
Recall that \((e_i)\) is almost greedy if there exists a constant \(C\) so that \(\Vert x - \mathbf {G}_n(x)\Vert \le C \tilde{\sigma }_n(x)\) for any \(n \in \mathbb N\) and \(x \in X\), and semi-greedy if there exists a constant \(C\) so that \(\Vert x - \mathbf {CG}_n(x)\Vert \le C \sigma _n(x)\), for any \(n\) and \(x\).
Proof
By [2], a basis is almost greedy if and only if it is quasi-greedy and democratic (that is, \(\sup _n \varvec{\mu }(n) < \infty \)). In this case \(\sup _n {\mathbf {L}}_{\mathrm {ch}}(n,1) < \infty \), hence the basis is semi-greedy. \(\square \)
Below, we shall use the “truncation function”
$$\begin{aligned} {\mathbf {F}}\!_M(a) = {\left\{ \begin{array}{ll} a &{}\quad \text{ if } |a| \le M , \\ M \,\text{ sign }\,(a) &{}\quad \text{ if } |a| > M . \end{array}\right. } \end{aligned}$$
Abusing the notation slightly, we shall write
$$\begin{aligned} {\mathbf {F}}\!_M(x) = x - \sum _i \big ( e_i^*(x) - {\mathbf {F}}\!_M(e_i^*(x)) \big ) e_i . \end{aligned}$$
The sum above converges, since the set \(\{i \in \mathbb N: |e_i^*(x)| > M\}\) is finite. Moreover, \({\mathbf {F}}\!_M(x)\) is the only element \(y \in X\) with the property that, for every \(i\), \(e_i^*(y) = {\mathbf {F}}\!_M(e_i^*(x))\). By [2, Proposition 3.1], \(\Vert {\mathbf {F}}\!_M(x)\Vert \le (1+3\mathfrak {K})\Vert x\Vert \).
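A minimal sketch (ours) of the scalar truncation in the real case: coefficients with modulus at most \(M\) are left alone, and larger ones are clipped to modulus \(M\) while keeping their sign:

```python
def truncate(a: float, M: float) -> float:
    """Real truncation F_M: clip a to the interval [-M, M]."""
    return a if abs(a) <= M else (M if a > 0 else -M)

coeffs = [0.3, -2.0, 1.0, 5.0]
assert [truncate(a, 1.0) for a in coeffs] == [0.3, -1.0, 1.0, 1.0]
```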
Proof
(The upper estimate in Theorem 4.1) For \(x \in X\) let \(a_i = e_i^*(x)\), and fix \(\varepsilon > 0\). Suppose a set \(A \subset \mathbb N\) of cardinality \(n\) is \(t\)-greedy for \(x\). Let \(M = \max _{i \notin A} |a_i|\), then \(\min _{i \in A} |a_i| \ge tM\). We have to show that there exists \(w \in X\) so that \(\mathop {\mathrm{supp}}(x-w) \subset A\), and \(\Vert w\Vert \le 20 t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n) (\sigma _n(x) + \varepsilon )\).
Pick \(z = \sum _{i \in B} b_i e_i\), where \(|B| \le n\), and \(\Vert x - z\Vert < \sigma _n(x) + \varepsilon \). Set \(y = x-z\), and let \(y_i = e_i^*(y)\) for every \(i\).
We claim that \(w = P_A {\mathbf {F}}\!_M(y) + P_{A^c} x\) has the desired properties. Indeed, \(x-w\) is supported on \(A\). To estimate \(\Vert w\Vert \), note that, for \(i \notin B\), \(y_i = a_i\). For \(i \notin A\), \({\mathbf {F}}\!_M(a_i) = a_i\), hence, for \(i \notin A \cup B\), \(a_i = {\mathbf {F}}\!_M(y_i)\). Thus,
$$\begin{aligned} w = {\mathbf {F}}\!_M(y) + \sum _{i \in B \backslash A} \big ( a_i - {\mathbf {F}}\!_M(y_i) \big ) e_i . \end{aligned}$$
We use [2, Proposition 3.1] to estimate the first summand:
$$\begin{aligned} \Vert {\mathbf {F}}\!_M(y)\Vert \le (1 + 3 \mathfrak {K}) \Vert y\Vert . \end{aligned}$$
To handle the second summand, set \(k = |B \backslash A|\). For \(i \in B \backslash A\), \(|a_i| \le M\), hence \(|a_i - {\mathbf {F}}\!_M(y_i)| \le 2M\). By Lemma 6.1(1),
$$\begin{aligned} \Big \Vert \sum _{i \in B \backslash A} \big ( a_i - {\mathbf {F}}\!_M(y_i) \big ) e_i \Big \Vert \le 4 \mathfrak {K}M \Big \Vert \sum _{i \in B \backslash A} e_i \Big \Vert . \end{aligned}$$
On the other hand, for \(i \in A \backslash B\), \(a_i = y_i\), and \(|a_i| \ge tM\), hence by Lemma 2.2,
$$\begin{aligned} M \le \frac{4 \mathfrak {K}^2 \Vert y\Vert }{t \, \Vert \sum _{i \in A \backslash B} e_i\Vert } . \end{aligned}$$
Plugging this into (4.3), we get:
$$\begin{aligned} \Big \Vert \sum _{i \in B \backslash A} \big ( a_i - {\mathbf {F}}\!_M(y_i) \big ) e_i \Big \Vert \le 16 t^{-1} \mathfrak {K}^3 \frac{\Vert \sum _{i \in B \backslash A} e_i\Vert }{\Vert \sum _{i \in A \backslash B} e_i\Vert } \Vert y\Vert \le 16 t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n) \Vert y\Vert . \end{aligned}$$
Together with (4.2), we obtain:
$$\begin{aligned} \Vert w\Vert \le \big ( 1 + 3 \mathfrak {K}+ 16 t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n) \big ) \Vert y\Vert \le 20 t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n) \Vert y\Vert . \end{aligned}$$
As \(\varepsilon \) can be arbitrarily close to \(0\), we are done. \(\square \)
Proof
(The lower estimate in Theorem 4.1) Fix \(n \in \mathbb N\) and \(\varepsilon > 0\). Find \(A, B \subset \mathbb N\), so that \(A \cap B = \emptyset \), \(|A| = |B| = k \le n\), and
$$\begin{aligned} \Big \Vert \sum _{i \in A} e_i \Big \Vert \ge (\varvec{\mu _d}(n) - \varepsilon ) \Big \Vert \sum _{i \in B} e_i \Big \Vert . \end{aligned}$$
Pick a set \(C\), disjoint from \(A\) and \(B\), so that \(|C| = n-k\). Consider
$$\begin{aligned} x = \sum _{i \in A} e_i + (t + \varepsilon ) \sum _{i \in B \cup C} e_i . \end{aligned}$$
We can find a Chebyshev \(t\)-greedy approximant \(\mathbf {CG}_n^t(x)\) supported on \(B \cup C\), and then \(y = x - \mathbf {CG}_n^t(x) = \sum _{i \in A} e_i + \sum _{i \in B \cup C} y_i e_i\). Let \(D = \{i \in B \cup C : |y_i| \ge 1\}\). Both \(\sum _{i \in A} e_i + \sum _{i \in D} y_i e_i\) and \(\sum _{i \in D} y_i e_i\) are greedy approximants of \(y\), hence
$$\begin{aligned} \max \Big \{ \Big \Vert \sum _{i \in A} e_i + \sum _{i \in D} y_i e_i \Big \Vert , \, \Big \Vert \sum _{i \in D} y_i e_i \Big \Vert \Big \} \le \mathfrak {K}\Vert y\Vert . \end{aligned}$$
By the triangle inequality, \(\Vert \sum _{i \in A} e_i\Vert \le 2\mathfrak {K}\Vert y\Vert \). Thus,
$$\begin{aligned} \frac{\Vert x - \mathbf {CG}_n^t(x)\Vert }{\sigma _n(x)} = \frac{\Vert y\Vert }{\sigma _n(x)} \ge \frac{\Vert \sum _{i \in A} e_i\Vert }{2 \mathfrak {K}\, (t + \varepsilon ) \Vert \sum _{i \in B} e_i\Vert } \ge \frac{\varvec{\mu _d}(n) - \varepsilon }{2 \mathfrak {K}(t + \varepsilon )} \end{aligned}$$
(since \(|A \cup C| = n\), we have \(\sigma _n(x) \le \Vert x - P_{A \cup C}\, x\Vert = (t + \varepsilon ) \Vert \sum _{i \in B} e_i\Vert \)). As \(\varepsilon \) can be arbitrarily small, we are done. \(\square \)
5 The residual Lebesgue constant
Theorem 5.1
For any \(\mathfrak {K}\)-quasi-greedy basis,
$$\begin{aligned} t^{-1} \mathbf {c}(n) \le {\mathbf {L}}_{\mathrm {re}}(n,t) \le 1 + 4 \mathfrak {K}^2 + 8 t^{-1} \mathfrak {K}^3 \mathbf {c}(n) . \end{aligned}$$
Proof
(The upper estimate in Theorem 5.1) For \(x \in X\) set \(a_i = e_i^*(x)\). Suppose \(A\) is a \(t\)-greedy subset of \(\mathbb N\), of cardinality \(n\), and set \(B = [1,n]\). Let \(M = \min _{i \in A} |a_i|\), then \(|a_i| \le t^{-1} M\) for \(i \notin A\). By the triangle inequality,
$$\begin{aligned} \Vert x - \mathbf {G}_n^t(x)\Vert = \Vert x - P_A x\Vert \le \Vert x - P_B x\Vert + \Vert P_{A \backslash B}\, x\Vert + \Vert P_{B \backslash A}\, x\Vert . \end{aligned}$$
Let \(y = P_{B^c} x\), then \(\Vert y\Vert = \hat{\sigma }_n(x)\). For \(i \in A \backslash B\), we have \(|e_i^*(y)| \ge M\), hence by Lemma 2.2, \(M \Vert \sum _{i \in A \backslash B} e_i\Vert \le 4 \mathfrak {K}^2 \Vert y\Vert \). By Lemmas 2.2 and 6.1(1),
$$\begin{aligned} \Vert P_{B \backslash A}\, x\Vert \le 2 t^{-1} \mathfrak {K}M \Big \Vert \sum _{i \in B \backslash A} e_i \Big \Vert \le 8 t^{-1} \mathfrak {K}^3 \mathbf {c}(n) \Vert y\Vert . \end{aligned}$$
Plug the above results into (5.1) to obtain the upper estimate for \({\mathbf {L}}_{\mathrm {re}}(n,t)\). \(\square \)
Proof
(The lower estimate in Theorem 5.1) Fix \(\varepsilon > 0\), and find sets \(A \subset [1,n]\) and \(B \subset [n+1,\infty )\) so that \(|A| = k = |B|\), and
$$\begin{aligned} \Big \Vert \sum _{i \in A} e_i \Big \Vert \ge (\mathbf {c}(n) - \varepsilon ) \Big \Vert \sum _{i \in B} e_i \Big \Vert . \end{aligned}$$
Consider \(x = \sum _{i=1}^n e_i + (t + \varepsilon ) \sum _{i \in B} e_i\). Then \(B \cup ([1,n] \backslash A)\) is a \(t\)-greedy set for \(x\), hence one can run the \(t\)-greedy algorithm in such a way that \(\Vert x - \mathbf {G}_n^t(x)\Vert = \Vert \sum _{i \in A} e_i\Vert \). On the other hand, \(\hat{\sigma }_n(x) = \Vert P_{[n+1,\infty )} x\Vert = (t + \varepsilon ) \Vert \sum _{i \in B} e_i\Vert \). The lower estimate follows from comparing these two quantities. \(\square \)
6 Appendix: The complex case
The results above are stated for the real case. The complex case is similar, but the constants are different. As customary, we set
$$\begin{aligned} \text{ sign }\,(a) = {\left\{ \begin{array}{ll} a/|a| &{}\quad \text{ if } a \ne 0 , \\ 1 &{}\quad \text{ if } a = 0 . \end{array}\right. } \end{aligned}$$
The following result is present (implicitly or explicitly) in [5, Appendix] (the better-known real case is in [3, Lemmas 2.1 and 2.2]):
Lemma 6.1
Suppose \((e_i)\) is a \(\mathfrak {K}\)-quasi-greedy basis in a Banach space \(X\).
1. If \(A\) is a finite set, then \(\Vert \sum _{i \in A} a_i e_i\Vert \le 4 \sqrt{2} \mathfrak {K}\max _i |a_i| \Vert \sum _{i \in A} e_i\Vert \). Moreover, if the \(a_i\)’s are real, then \(\Vert \sum _{i \in A} a_i e_i\Vert \le 2 \mathfrak {K}\max _i |a_i| \Vert \sum _{i \in A} e_i\Vert \).
2. Suppose \(A\) is a greedy set for \(x \in X\). Let \(M = \min _{i \in A} |e_i^*(x)|\). Then
$$\begin{aligned} \frac{M}{8 \sqrt{2} \mathfrak {K}^2} \Vert \sum _{i \in A} e_i\Vert \le \frac{M}{2 \mathfrak {K}} \Vert \sum _{i \in A} \text{ sign }\,\big ( e_i^*(x) \big ) e_i\Vert \le \Vert x\Vert . \end{aligned}$$
For \(M > 0\), define
$$\begin{aligned} {\mathbf {F}}\!_M(a) = {\left\{ \begin{array}{ll} a &{}\quad \text{ if } |a| \le M , \\ M \,\text{ sign }\,(a) &{}\quad \text{ if } |a| > M . \end{array}\right. } \end{aligned}$$
For \(x \in X\), we set \({\mathbf {F}}\!_M(x) = x - \sum _i \big (e_i^*(x) - {\mathbf {F}}\!_M(e_i^*(x))\big ) e_i\) (the sum converges, and \(e_i^*({\mathbf {F}}\!_M(x)) = {\mathbf {F}}\!_M(e_i^*(x))\) for every \(i\)). As in [2, Proposition 3.1], one can prove:
Lemma 6.2
In the above notation, \(\Vert {\mathbf {F}}\!_M(x)\Vert \le (1 + 3 \mathfrak {K}) \Vert x\Vert \).
As in Sect. 2, we obtain:
Lemma 6.3
Suppose \((e_i) \subset X\) is a basis with a quasi-greedy constant \(\mathfrak {K}\), and a set \(A\) is \(t\)-greedy for \(x \in X\). Then \(\Vert P_A x\Vert \le (1 + 8 \sqrt{2} t^{-1} \mathfrak {K}) \mathfrak {K}\Vert x\Vert \).
Lemma 6.4
Suppose \((e_i)\) is a \(\mathfrak {K}\)-quasi-greedy basis in \(X\). Consider \(x \in X\), and let \(a_i = e_i^*(x)\), for \(i \in \mathbb N\). Suppose a finite set \(A \subset \mathbb N\) satisfies \(\min _{i \in A} |a_i| \ge M\). Then \(M \Vert \sum _{i \in A} \text{ sign }\,(a_i) e_i\Vert \le 2\mathfrak {K}^2 \Vert x\Vert \). Furthermore, \(M \Vert \sum _{i \in A} e_i\Vert \le 8\mathfrak {K}^2 \Vert x\Vert \).
Proof
Consider \(C = \{i : |a_i| \ge M\}\) (note that \(A \subset C\)). For the brevity of notation, let \(e_i^\prime = \text{ sign }\,(a_i) e_i\) (if \(a_i = 0\), let \(e_i^\prime = e_i\)). Clearly the basis \((e_i^\prime )\) is \(\mathfrak {K}\)-quasi-greedy. Set \(y = \sum _{i \in C} e_i^\prime \). By Lemma 6.1(2), \(M \Vert y\Vert \le 2\mathfrak {K}\Vert x\Vert \). For \(\varepsilon > 0\), let
$$\begin{aligned} y_\varepsilon = \sum _{i \in A} e_i^\prime + (1 - \varepsilon ) \sum _{i \in C \backslash A} e_i^\prime . \end{aligned}$$
By the triangle inequality, \(\Vert y_\varepsilon \Vert \le \Vert y\Vert + \varepsilon \sum _{i \in C \backslash A} \Vert e_i\Vert \). Furthermore, \(\Vert \sum _{i \in A} e_i^\prime \Vert \le \mathfrak {K}\Vert y_\varepsilon \Vert \). As \(\varepsilon \) is arbitrary, we establish the first statement of the lemma.
The reasoning above also shows that \(M \Vert \sum _{i \in B} e_i^\prime \Vert \le 2 \mathfrak {K}^2 \Vert x\Vert \) for any \(B \subset A\). Let \(S\) be the absolute convex hull of the elements \(\sum _{i \in B} e_i^\prime \) (\(B \subset A\)), that is,
$$\begin{aligned} S = \Big \{ \sum _{B \subset A} \lambda _B \sum _{i \in B} e_i^\prime \, : \, \sum _{B \subset A} |\lambda _B| \le 1 \Big \}. \end{aligned}$$
We claim that \(\sum _{i \in A} e_i = \sum _{i \in A} \omega _i e_i^\prime \in 4 S\), where \(|\omega _i|=1\). Otherwise, by the Hahn–Banach Separation Theorem, there exists a sequence \((b_i)_{i \in A} \in {\mathbb {C}}^{|A|}\) so that \(|\sum _{i \in B} b_i| < 1\) whenever \(B \subset A\), yet \(|\sum _{i \in A} \omega _i b_i| > 4\). Let \(B_+ = \{i \in A : \mathfrak {R}b_i \ge 0\}\) and \(B_- = \{i \in A : \mathfrak {R}b_i < 0\}\). Then
$$\begin{aligned} \sum _{i \in B_+} \mathfrak {R}b_i \le \Big | \sum _{i \in B_+} b_i \Big | \le 1, \end{aligned}$$
and similarly, \(\sum _{i \in B_-} (-\mathfrak {R}b_i) \le 1\). Therefore,
$$\begin{aligned} \sum _{i \in A} |\mathfrak {R}b_i| \le 2. \end{aligned}$$
The same way, we show that \(\sum _{i \in A} |\mathfrak {I}b_i| \le 2\). Consequently,
$$\begin{aligned} \Big | \sum _{i \in A} \omega _i b_i \Big | \le \sum _{i \in A} |b_i| \le \sum _{i \in A} \big ( |\mathfrak {R}b_i| + |\mathfrak {I}b_i| \big ) \le 4, \end{aligned}$$
yielding a contradiction. Thus \(\sum _{i \in A} e_i \in 4S\); since every element of \(S\) has norm at most \(2 \mathfrak {K}^2 M^{-1} \Vert x\Vert \), we conclude that \(M \Vert \sum _{i \in A} e_i \Vert \le 8 \mathfrak {K}^2 \Vert x\Vert \). This establishes the second statement of our lemma. \(\square \)
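The combinatorial heart of the \(4S\) claim is that any real vector with entries in \([-1,1]\) is an absolutely convex combination of indicator sums with total weight at most \(2\) (via level sets of its positive and negative parts); applying this to the real and imaginary parts separately gives the constant \(4\). A small numerical sketch in Python (our own illustration, not from the paper):

```python
def indicator_decomposition(v):
    """Write a real vector with entries in [-1, 1] as sum_k c_k * 1_{B_k}
    with sum_k |c_k| <= 2, using level sets of the positive and negative
    parts separately. Returns a list of (coefficient, index list) pairs."""
    terms = []
    for sgn in (1.0, -1.0):
        # Coordinates where sgn * v_i is strictly positive, with their sizes.
        part = {i: sgn * x for i, x in enumerate(v) if sgn * x > 0}
        if not part:
            continue
        levels = sorted(set(part.values()), reverse=True) + [0.0]
        for hi, lo in zip(levels, levels[1:]):
            B = [i for i, x in part.items() if x >= hi]
            terms.append((sgn * (hi - lo), B))
    return terms
```

Summing the coefficients over each index reconstructs `v` exactly, and the total absolute weight never exceeds \(2\), matching the bound \(\sum _{i \in A} |\mathfrak {R}b_i| \le 2\) dual to it in the proof.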
These results allow us to emulate the proofs of previous sections, and to estimate the Lebesgue constants:
Theorem 6.5
Suppose \((e_i)\) is a \(\mathfrak {K}\)-quasi-greedy basis in a complex Banach space \(X\). Then:
-
1.
$$\begin{aligned} \max \big \{\mathbf {k}_c(n), t^{-1} \varvec{\mu _d}(n)\big \} \le \mathbf {L}(n,t) \le 1 + 2\mathbf {k}(n) + 32 \sqrt{2} t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n). \end{aligned}$$
-
2.
$$\begin{aligned} \frac{\varvec{\mu _d}(n)}{2 t \mathfrak {K}} \le {\mathbf {L}}_{\mathrm {ch}}(n,t) \le \frac{100 \mathfrak {K}^3 \varvec{\mu _d}(n)}{t}. \end{aligned}$$
-
3.
$$\begin{aligned} t^{-1} \mathbf {c}(n) \le {\mathbf {L}}_{\mathrm {re}}(n,t) \le 1 + 8 \mathfrak {K}^2 + 32 \sqrt{2} t^{-1} \mathfrak {K}^3 \mathbf {c}(n). \end{aligned}$$
References
Dilworth, S.: Special Banach lattices and their applications. In: Johnson, W.B., Lindenstrauss, J. (eds.) Handbook of the Geometry of Banach Spaces, pp. 497–532. North-Holland, Amsterdam (2001)
Dilworth, S., Kalton, N., Kutzarova, D.: On the existence of almost greedy bases in Banach spaces. Stud. Math. 159, 67–101 (2003)
Dilworth, S., Kalton, N., Kutzarova, D., Temlyakov, V.: The thresholding greedy algorithm, greedy bases, and duality. Constr. Approx. 19, 575–597 (2003)
Dilworth, S., Soto-Bajo, M., Temlyakov, V.: Quasi-greedy bases and Lebesgue-type inequalities. Stud. Math. 211, 41–69 (2012)
Garrigos, G., Hernandez, E., Oikhberg, T.: Lebesgue-type inequalities for quasi-greedy bases. Constr. Approx. 38, 447–470 (2013)
Garrigos, G., Wojtaszczyk, P.: Conditional quasi-greedy bases in Hilbert and Banach spaces. Indiana Univ. Math. J. 63, 1017–1036 (2014)
Hajek, P., Montesinos Santalucia, V., Vanderwerff, J., Zizler, V.: Biorthogonal Systems in Banach Spaces. Springer, New York (2008)
Konyagin, S., Temlyakov, V.: A remark on greedy approximation in Banach spaces. East J. Approx. 5, 365–379 (1999)
Konyagin, S., Temlyakov, V.: Greedy approximation with regard to bases and general minimal systems. Serdica Math. J. 28, 305–328 (2002)
Long, J., Ye, P.: Weak greedy algorithms for nonlinear approximation with quasi-greedy bases. WSEAS Trans. Math. 13, 525–534 (2014)
Nielsen, M.: Trigonometric quasi-greedy bases for \(L_p({\mathbf{T}}, w)\). Rocky Mt. J. Math. 39, 1267–1278 (2009)
Oswald, P.: Greedy algorithms and best m-term approximation with respect to biorthogonal systems. J. Fourier Anal. Appl. 7, 325–341 (2001)
Temlyakov, V.: Greedy approximation. Acta Numer. 17, 235–409 (2008)
Temlyakov, V.: Greedy Approximation. Cambridge University Press, Cambridge (2011)
Temlyakov, V., Yang, M., Ye, P.: Greedy approximation with regard to non-greedy bases. Adv. Comput. Math. 34, 319–337 (2011)
Temlyakov, V., Yang, M., Ye, P.: Lebesgue-type inequalities for greedy approximation with respect to quasi-greedy bases. East J. Approx. 17, 203–214 (2011)
Wojtaszczyk, P.: Greedy algorithm for general biorthogonal systems. J. Approx. Theory 107, 293–314 (2000)
Wojtaszczyk, P.: Greedy type bases in Banach spaces. In: Bojanov, B.D. (ed.) Constructive Theory of Functions. Proceedings of the International Conference Dedicated to the 70th Anniversary of Blagovest Sendov Held in Varna, June 19–23, 2002, pp. 136–155. DARBA, Sofia (2003)
Acknowledgments
We are grateful to the anonymous referees for their careful reading of our paper, and for valuable suggestions which led to numerous improvements. We would like to thank them for bringing [10] to our attention.
The first author was partially supported by NSF Grants DMS1101490 and DMS 1361461. The third author was partially supported by the Simons Foundation travel award 210060.
Dilworth, S.J., Kutzarova, D. & Oikhberg, T. Lebesgue constants for the weak greedy algorithm. Rev Mat Complut 28, 393–409 (2015). https://doi.org/10.1007/s13163-014-0163-5