1 Introduction

In this short note, we calculate the Lebesgue constants associated with the \(t\)-greedy and the Chebyshev \(t\)-greedy algorithms in Banach spaces (thus measuring the worst-case efficiency of these approximation methods).

Throughout this paper, \(X\) is a separable infinite dimensional Banach space. A family \((e_i, e_i^*)_{i \in \mathbb N} \subset X \times X^*\) is called a bounded biorthogonal system if:

  1.

    \(X = \overline{\text{ span }\,}[e_i : i \in \mathbb N]\).

  2.

    \(e_i^*(e_j) = 1\) if \(i=j\), \(e_i^*(e_j) = 0\) otherwise.

  3.

    \(0 < \inf _i \min \{\Vert e_i\Vert , \Vert e_i^*\Vert \} \le \sup _i \max \{\Vert e_i\Vert , \Vert e_i^*\Vert \} < \infty \).

For brevity, we refer to \((e_i)\) as a basis. Condition (3) is often expressed by saying that \((e_i)\) is seminormalized. In this note, only seminormalized bases are considered.

It is easy to see that, for any \(x \in X\), \(\lim _i e_i^*(x) = 0\), and that \(\sup _i |e_i^*(x)| > 0\) unless \(x=0\).

Bases as above are quite common. It is known [7, Theorem 1.27] that, for any \(c > 1\), any separable Banach space has a bounded biorthogonal system (a Markushevitch basis) with \(1 \le \Vert e_i\Vert \), \(\Vert e_i^*\Vert \le c\), and \(X^* = \overline{\text{ span }\,}^{w^*}[e_i^* : i \in \mathbb N]\).

To consider the problem of approximating \(x \in X\) by finite linear combinations of the \(e_i\)’s, we introduce some notation. For \(x \in X\) set \(\mathop {\mathrm{supp}}x = \{ i \in \mathbb N: e_i^*(x) \ne 0\}\). For finite \(A \subset \mathbb N\), set \(P_A x = \sum _{i \in A} e_i^*(x) e_i\). If \(A^c = \mathbb N\backslash A\) is finite, write \(P_A x = x - P_{A^c} x\).

The best \(n\)-term approximation for \(x \in X\) is defined as

$$\begin{aligned} \sigma _n(x) = \inf _{|\mathop {\mathrm{supp}}y| \le n} \Vert x-y\Vert , \end{aligned}$$

while the best \(n\)-term coordinate approximation is

$$\begin{aligned} \tilde{\sigma }_n(x) = \inf _{|B| \le n} \Vert x - P_B x\Vert . \end{aligned}$$

It is easy to see that \(\lim _n \sigma _n(x) = 0\), and

$$\begin{aligned} \sigma _n(x) = \inf _{|\mathop {\mathrm{supp}}y| = n} \Vert x-y\Vert \, \quad \mathrm{ and } \, \quad \tilde{\sigma }_n(x) = \inf _{|B| = n} \Vert x - P_B x\Vert \end{aligned}$$

(the second equality is due to the fact that \(\lim _i e_i^*(x) = 0\)).

We also consider the \(n\)-term residual approximation

$$\begin{aligned} \hat{\sigma }_n(x) = \Vert x - P_{[1,n]} x\Vert . \end{aligned}$$

We say that \((e_i)\) is a Schauder basis if \(\lim _n \hat{\sigma }_n(x) = 0\) for every \(x \in X\) (in this case, also \(\lim _n \tilde{\sigma }_n(x) = 0\)). Many commonly used bases (such as the Haar basis or the trigonometric basis in \(L_p\), for \(1 < p < \infty \)) are, in fact, Schauder bases.
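When \(x\) is finitely supported and the norm is computable, the coordinate quantities \(\tilde{\sigma }_n(x)\) and \(\hat{\sigma }_n(x)\) can be evaluated directly from the definitions (unlike \(\sigma _n(x)\), whose infimum runs over arbitrary coefficients). The following brute-force sketch is our own illustration for the canonical \(\ell _p\) basis; the function names and the restriction to finitely supported vectors are ours.

```python
from itertools import combinations

def lp_norm(coeffs, p=2.0):
    """l_p norm of a finitely supported coefficient vector (a dict index -> value)."""
    return sum(abs(c) ** p for c in coeffs.values()) ** (1.0 / p)

def sigma_tilde(x, n, p=2.0):
    """Best n-term coordinate approximation: inf over |B| <= n of ||x - P_B x||,
    by brute force over coordinate sets B inside supp(x)."""
    supp = [i for i, c in x.items() if c != 0]
    best = lp_norm(x, p)  # B = empty set
    for B in combinations(supp, min(n, len(supp))):
        best = min(best, lp_norm({i: c for i, c in x.items() if i not in B}, p))
    return best

def sigma_hat(x, n, p=2.0):
    """Residual approximation: ||x - P_{[1,n]} x||."""
    return lp_norm({i: c for i, c in x.items() if i > n}, p)

x = {1: 0.5, 2: -2.0, 3: 1.0, 7: 0.25}
print(sigma_tilde(x, 2))  # for l_2 the optimal B keeps the two largest coefficients
print(sigma_hat(x, 2))    # norm of the coordinates beyond position 2
```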

Note that calculating \(\sigma _n(x)\) and \(\tilde{\sigma }_n(x)\) is next to impossible, since all coordinates of \(x\) are in play. Therefore, one can naively look for a good \(n\)-term approximant of \(x\) by considering the \(n\) largest (or “nearly largest”) coefficients. This is done using the weak greedy algorithm. To define this algorithm, fix the relaxation parameter \(t \in (0,1]\). Consider a non-zero \(x \in X\). A set \(A \subset \mathbb N\) is called \(t\)-greedy for \(x\) if \(\inf _{i \in A} |e_i^*(x)| \ge t \sup _{i \notin A} |e_i^*(x)|\) (by the above, \(A\) is finite). When there is no confusion about \(x\), we shorten this term to \(t\)-greedy set. Suppose \(\rho = \rho _x : \mathbb N\rightarrow \mathbb N\) is a \(t\)-greedy ordering—that is, \(\{\rho (1), \ldots , \rho (n)\}\) is \(t\)-greedy for every \(n\). In general, a \(t\)-greedy ordering is not unique. Note that \(\{\rho (n) : n \in \mathbb N\} = {\mathfrak {S}}_x := \{n \in \mathbb N: e_n^*(x) \ne 0\}\) if the set \({\mathfrak {S}}_x\) is infinite. On the other hand, if \(|{\mathfrak {S}}_x| = m < \infty \), then \(\{\rho (1), \ldots , \rho (m)\} = {\mathfrak {S}}_x\) while \(\rho (i) \notin {\mathfrak {S}}_x\) for \(i > m\).
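Concretely, for a finitely supported \(x\) a \(t\)-greedy set can be produced by sorting the coefficients by modulus: sorting yields a \(1\)-greedy set, which is in particular \(t\)-greedy for every \(t \in (0,1]\). A minimal sketch (again our own illustration):

```python
def t_greedy_set(x, n, t=1.0):
    """Return one t-greedy set of cardinality n for the coefficient dict x.
    The n largest coefficients in modulus form a 1-greedy set, hence a t-greedy
    set for every t in (0, 1]; other realizations may exist when t < 1."""
    order = sorted(x, key=lambda i: abs(x[i]), reverse=True)
    A = set(order[:n])
    # sanity check of the defining inequality: min over A >= t * sup outside A
    inside = min((abs(x[i]) for i in A), default=float("inf"))
    outside = max((abs(x[i]) for i in x if i not in A), default=0.0)
    assert inside >= t * outside
    return A

x = {1: 0.5, 2: -2.0, 3: 1.0, 7: 0.25}
print(t_greedy_set(x, 2, t=0.5))  # {2, 3}: the two largest coefficients in modulus
```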

An \(n\)-term \(t\)-greedy approximant of \(x\) is defined as \(\mathbf {G}_n^t(x) = P_{A_n} x\), where \(A_n = \{\rho (1), \ldots , \rho (n)\}\), and \(\rho \) is a \(t\)-greedy ordering for \(x\). We define an \(n\)-term Chebyshev \(t\)-greedy approximant \(\mathbf {CG}_n^t(x)\) as \(y \in \text{ span }\,[e_i : i \in A_n]\) so that \(\Vert x - y\Vert \) is minimal. We stress that these approximants are not unique, and a fortiori, the operators \(x \mapsto \mathbf {G}_n^t(x)\) and \(x \mapsto \mathbf {CG}_n^t(x)\) are not linear.
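The sketch below (ours; it reuses lp_norm and t_greedy_set from the snippets above) realizes both approximants, computing \(\mathbf {CG}_n^t(x)\) by generic numerical minimization over coefficients supported on \(A_n\). For the \(1\)-unconditional \(\ell _p\) norms the minimum is attained at \(P_{A_n} x\), so the two approximants coincide there; the Chebyshev variant only differs for less symmetric norms, which can be substituted for the objective below.

```python
import numpy as np
from scipy.optimize import minimize

def greedy_approximant(x, n, t=1.0):
    """One realization of G_n^t(x): coordinate projection onto a t-greedy set."""
    return {i: x[i] for i in t_greedy_set(x, n, t)}

def chebyshev_greedy_approximant(x, n, t=1.0, p=2.0):
    """One realization of CG_n^t(x): minimize ||x - y|| over y supported on a
    t-greedy set A, here via a derivative-free optimizer on the coefficients."""
    A = sorted(t_greedy_set(x, n, t))
    def residual(c):
        y = dict(zip(A, c))
        return lp_norm({i: x[i] - y.get(i, 0.0) for i in x}, p)
    start = np.array([x[i] for i in A], dtype=float)
    return dict(zip(A, minimize(residual, start, method="Nelder-Mead").x))
```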

For more information on greedy approximation algorithms, we refer the reader to the survey papers [13, 18], as well as to the recent monograph [14].

When \(t = 1\), we omit it, and use the terms “greedy set” and “(Chebyshev) greedy approximant”, as well as the notation \(\mathbf {G}_n(x)\) and \(\mathbf {CG}_n(x)\). A basis \((e_i)\) is called quasi-greedy if its quasi-greedy constant is finite:

$$\begin{aligned} \mathfrak {K}= \sup _{\Vert x\Vert =1} \sup _{n \in \mathbb N} \sup \Vert \mathbf {G}_n(x)\Vert < \infty , \end{aligned}$$

with the inner \(\sup \) taken over all realizations of \(\mathbf {G}_n(x)\). In [17] it was shown that a basis is quasi-greedy if and only if \(\lim _n \mathbf {G}_n(x) = x\) for any \(x \in X\), and any (equivalently, some) choice of the sequence \(\mathbf {G}_n(x)\). By [9], for a quasi-greedy basis we also have \(\lim _n \mathbf {G}_n^t(x) = x\) for any \(x \in X\), and any choice of the sequence \(\mathbf {G}_n^t(x)\).

The goal of this paper is to estimate the efficiency of the \(t\)-greedy and \(t\)-Chebyshev greedy methods (in the worst case), by comparing \(\Vert x - \mathbf {G}_n^t(x)\Vert \) and \(\Vert x - \mathbf {CG}_n^t(x)\Vert \) with the best \(n\)-term approximation \(\sigma _n(x)\), and similar quantities. This is done through estimating the Lebesgue constant and its relatives:

$$\begin{aligned} \displaystyle {\textit{The Lebesgue constant }} \, \mathbf {L}(n,t)&= \sup _{x \in X, \sigma _n(x) \ne 0}\frac{\Vert x - \mathbf {G}_n^t(x)\Vert }{\sigma _n(x)}.\\ \displaystyle {\textit{The Chebyshevian Lebesgue constant }} \, {\mathbf {L}}_{\mathrm {ch}}(n,t)&= \sup _{x \in X, \sigma _n(x) \ne 0}\frac{\Vert x - \mathbf {CG}_n^t(x)\Vert }{\sigma _n(x)}.\\ \displaystyle {\textit{The residual Lebesgue constant }} \, {\mathbf {L}}_{\mathrm {re}}(n,t)&= \sup _{x \in X, \hat{\sigma }_n(x) \ne 0}\frac{\Vert x - \mathbf {G}_n^t(x)\Vert }{\hat{\sigma }_n(x)}. \end{aligned}$$

We stress that the suprema in the above definitions are taken over all \(x \in X\), and all possible realizations of the (Chebyshev) weakly greedy algorithm. A basis is called greedy if \(\sup _n \mathbf {L}(n,1) < \infty \), and partially greedy if \(\sup _n {\mathbf {L}}_{\mathrm {re}}(n,1) < \infty \).
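Since \(\sigma _n(x) \le \tilde{\sigma }_n(x)\), every concrete vector \(x\) and every realization of \(\mathbf {G}_n^t\) yield the lower bound \(\Vert x - \mathbf {G}_n^t(x)\Vert / \tilde{\sigma }_n(x) \le \mathbf {L}(n,t)\). The toy random-search sketch below (ours, reusing the helpers above) illustrates this in \(\ell _p\); it merely unwinds the definition and is no substitute for the estimates proved in the following sections.

```python
import random

def random_t_greedy_set(x, n, t):
    """Sample n-element sets until one satisfies the t-greedy condition; the
    top-n set always qualifies, so this terminates with probability one."""
    keys = list(x)
    while True:
        A = set(random.sample(keys, n))
        inside = min(abs(x[i]) for i in A)
        outside = max((abs(x[i]) for i in keys if i not in A), default=0.0)
        if inside >= t * outside:
            return A

def lebesgue_lower_bound(n, t, p, trials=200, m=8):
    """Monte-Carlo lower bound on L(n,t) for the canonical l_p basis: the max,
    over random x and random t-greedy realizations, of the ratio
    ||x - G_n^t(x)|| / sigma_tilde_n(x)."""
    best = 1.0
    for _ in range(trials):
        x = {i: random.uniform(-1.0, 1.0) for i in range(1, m + 1)}
        A = random_t_greedy_set(x, n, t)
        resid = lp_norm({i: c for i, c in x.items() if i not in A}, p)
        best = max(best, resid / sigma_tilde(x, n, p))
    return best

print(lebesgue_lower_bound(n=2, t=0.5, p=1.0))  # a data-dependent lower bound, >= 1
```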

To estimate the Lebesgue constants, we quantify some properties of \((e_i)\). We use the left and right democracy functions \(\phi _l(k) = \inf _{|A|=k} \Vert \sum _{i \in A} e_i\Vert \) and \(\phi _r(k) = \sup _{|A|=k} \Vert \sum _{i \in A} e_i\Vert \) (sometimes, \(\phi _r\) is also referred to as the fundamental function). We define the democracy parameter

$$\begin{aligned} \varvec{\mu }(n) = \max _{k \le n} \frac{\phi _r(k)}{\phi _l(k)} =\sup _{|A| = |B| \le n} \frac{\Vert \sum _{i \in A} e_i\Vert }{\Vert \sum _{i \in B} e_i\Vert }. \end{aligned}$$

Following [12], define the disjoint democracy parameter

$$\begin{aligned} \varvec{\mu _d}(n) = \sup _{|A| = |B| \le n, A \cap B = \emptyset }\frac{\Vert \sum _{i \in A} e_i\Vert }{\Vert \sum _{i \in B} e_i\Vert }. \end{aligned}$$

Clearly, \(\varvec{\mu _d}(n) \le \varvec{\mu }(n)\). By [10, Lemma 13], \(\varvec{\mu }(n) \le 2 \mathfrak {K}\varvec{\mu _d}(n)\). Related to the democracy parameter of a basis \((e_i)\) is its conservative parameter:

$$\begin{aligned} \mathbf {c}(n) = \sup \Bigg \{ \frac{\Vert \sum _{i \in A} e_i\Vert }{\Vert \sum _{i \in B} e_i\Vert } :\max A \le n < \min B, |A| = |B| \Bigg \}. \end{aligned}$$

Clearly \(\mathbf {c}(n) \le \varvec{\mu _d}(n)\). The norms of coordinate projections in a basis \((e_i)\) are quantified by the unconditionality parameter and complemented unconditionality parameter: \(\mathbf {k}(n) = \sup _{|A| \le n} \Vert P_A\Vert \), resp. \(\mathbf {k}_c(n) = \sup _{|A| \le n} \Vert I - P_A\Vert \) (clearly \(|\mathbf {k}(n) - \mathbf {k}_c(n)| \le 1\)).
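For a concrete norm and small \(n\), the set-function parameters can be computed by exhaustive search (the operator norms behind \(\mathbf {k}(n)\) and \(\mathbf {k}_c(n)\) need more work and are omitted). A brute-force sketch, ours and exponential in the number of coordinates, using \(0\)-based coordinates:

```python
from itertools import combinations

def indicator_norm(A, norm):
    """||sum_{i in A} e_i|| for a user-supplied norm on coefficient dicts."""
    return norm({i: 1.0 for i in A})

def mu_d(n, m, norm):
    """Disjoint democracy parameter, searched over the first m coordinates."""
    best = 1.0
    for k in range(1, n + 1):
        for A in combinations(range(m), k):
            rest = [i for i in range(m) if i not in A]
            for B in combinations(rest, k):
                best = max(best, indicator_norm(A, norm) / indicator_norm(B, norm))
    return best

def c_param(n, m, norm):
    """Conservative parameter: A inside [0, n), B inside [n, m), |A| = |B|."""
    best = 1.0
    for k in range(1, min(n, m - n) + 1):
        for A in combinations(range(n), k):
            for B in combinations(range(n, m), k):
                best = max(best, indicator_norm(A, norm) / indicator_norm(B, norm))
    return best

l1 = lambda c: sum(abs(v) for v in c.values())
print(mu_d(2, 6, l1), c_param(2, 6, l1))  # both 1.0: the l_1 basis is democratic
```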

The investigation of Lebesgue constants for greedy algorithms dates back to the earliest works on greedy algorithms, with some relevant ideas appearing already in [8]. In [12], the Lebesgue constants of the Haar basis in BMO and in dyadic BMO were computed. More recently, in [15, 16], the Lebesgue constants for tensor product bases in \(L_p\)-spaces (in particular, for the multi-Haar basis) were calculated. The Lebesgue constants for the trigonometric basis in \(L_p\) (which is not quasi-greedy) are also known, see e.g. [13, Section 1.7]. The recent paper [4] estimates the Lebesgue constants for bases in \(L_p\) spaces with specific properties (such as being uniformly bounded). Lebesgue constants for redundant dictionaries are studied in [14, Section 2.6].

This paper is structured as follows: in Sect. 2, we gather some preliminary facts about quasi-greedy bases. In Sect. 3, we estimate \(\mathbf {L}(n,t)\) in terms of \(\mathfrak {K}\), \(\varvec{\mu _d}(n)\), \(\mathbf {k}(n)\), and \(t\). For \(t=1\), related results were obtained in [5]. However, the Lebesgue constant was not explicitly calculated there. Retracing the computations, one obtains worse constants than those given by Theorem 3.1. Corollary 3.5 gives an upper estimate for the Lebesgue constant of quasi-greedy bases in Hilbert spaces, by combining Theorem 3.1 with the recent results of Garrigos and Wojtaszczyk [6]. Further, we estimate the Lebesgue constant for general (not necessarily quasi-greedy) systems in Proposition 3.6.

In Sect. 4, we estimate \({\mathbf {L}}_{\mathrm {ch}}(n,t)\). The estimates involve only \(t\), \(\mathfrak {K}\), and \(\varvec{\mu _d}(n)\). Finally, in Sect. 5, we provide upper and lower bounds for \({\mathbf {L}}_{\mathrm {re}}(n,t)\), involving \(t\), \(\mathfrak {K}\), and \(\mathbf {c}(n)\). The main results are given in Theorems 4.1 and 5.1, respectively.

Most of the work in this paper is done in the real case. In Sect. 6, we indicate that the complex versions of the results of this paper also hold, albeit perhaps with different numerical constants.

Remark 1.1

After the first version of this article was circulated, the referee brought the recent paper [10] to the attention of the authors. There, order-of-magnitude estimates for the Lebesgue constant and the Chebyshevian Lebesgue constant (similar to our Theorems 3.1, 4.1) are given. Our results have the advantage of establishing the dependence of the Lebesgue constants not only on \(\varvec{\mu }_d(n)\) and \(\mathbf {k}(n)\), but also on \(\mathfrak {K}\) and \(t\).

2 Preliminary results

In this section we prove two lemmas, which will be needed throughout the paper, and may be of interest in their own right. First we sharpen some results from [9, Section 2].

Lemma 2.1

Suppose \((e_i) \subset X\) is a basis with a quasi-greedy constant \(\mathfrak {K}\). Consider \(x \in X\), and let \(A\) be a \(t\)-greedy set for \(x\). Then \(\Vert P_A x\Vert \le (1 + 4 t^{-1} \mathfrak {K}) \mathfrak {K}\Vert x\Vert \).

Proof

For the sake of brevity, set \(a_i = e_i^*(x)\). Let \(M = \min _{i \in A} |a_i|\), then \(|a_i| \le t^{-1} M\) for \(i \notin A\). Define \(B = \{i : |a_i| > t^{-1}M\}\) and \(C = \{i : |a_i| \ge M\}\). Then \(B \subset A \subset C\), and \(P_A x = P_B x + P_{A \backslash B} x\). By the definition of \(\mathfrak {K}\), \(\Vert P_B x\Vert \le \mathfrak {K}\Vert x\Vert \), and \(\Vert P_C x\Vert \le \mathfrak {K}\Vert x\Vert \). Write \(P_C x = \sum _{i \in C} a_i e_i\).

Now define the basis \((e_i^\prime )\) by setting

$$\begin{aligned} e_i^\prime = \left\{ \begin{array}{ll} \text{ sign }\,(a_i) e_i &{} \quad i \in C \\ e_i &{} \quad {\mathrm {otherwise.}} \end{array}\right. \end{aligned}$$

As this basis has the same quasi-greedy constant as \((e_i)\), Lemma 6.1(2) shows that \(M \Vert \sum _{i \in C} e_i^\prime \Vert \le 2 \mathfrak {K}\Vert x\Vert \). For \(i \in C\), set

$$\begin{aligned} b_i = \left\{ \begin{array}{ll} |a_i| &{} \quad i \in A \backslash B \\ 0 &{} \quad {\mathrm {otherwise}} \end{array} \right. . \end{aligned}$$

For any \(i\), \(|b_i| \le t^{-1} M\), hence, by Lemma 6.1(1)

$$\begin{aligned} \left\| \sum _{i \in A \backslash B} a_i e_i\right\| = \left\| \sum _{i \in C} b_i e_i^\prime \right\| \le 2 t^{-1} M \mathfrak {K}\left\| \sum _{i \in C} e_i^\prime \right\| \le 4 t^{-1} \mathfrak {K}^2 \Vert x\Vert . \end{aligned}$$

By the triangle inequality, \(\Vert P_A x\Vert \le \Vert P_B x\Vert + \Vert P_{A \backslash B} x\Vert \le \mathfrak {K}\Vert x\Vert + 4 t^{-1} \mathfrak {K}^2 \Vert x\Vert = (1 + 4 t^{-1} \mathfrak {K}) \mathfrak {K}\Vert x\Vert \). \(\square \)

Lemma 2.2

Suppose \((e_i)\) is a \(\mathfrak {K}\)-quasi-greedy basis in \(X\). Consider \(x \in X\), and let \(a_i = e_i^*(x)\), for \(i \in \mathbb N\). Suppose a finite set \(A \subset \mathbb N\) satisfies \(\min _{i \in A} |a_i| \ge M\). Then \(M \Vert \sum _{i \in A} \text{ sign }\,(a_i) e_i\Vert \le 2\mathfrak {K}^2 \Vert x\Vert \). Furthermore, \(M \Vert \sum _{i \in A} e_i\Vert \le 4\mathfrak {K}^2 \Vert x\Vert \).

Proof

Consider the set \(B = \{i : |a_i| \ge M\}\) (clearly \(A \subset B\)). By [5, Lemma 10.1], \(\Vert \sum _{i \in A} \text{ sign }\,(a_i) e_i\Vert \le \mathfrak {K}\Vert \sum _{i \in B} \text{ sign }\,(a_i) e_i\Vert \). By Lemma 6.1(2), \(\Vert \sum _{i \in B} \text{ sign }\,(a_i) e_i\Vert \le 2\mathfrak {K}\Vert x\Vert /M\). To establish the “furthermore” part, let \(A_+ = \{i \in A : \text{ sign }\,(a_i) = 1\}\), and \(A_- = \{i \in A : \text{ sign }\,(a_i) = -1\}\). By the above, \(M \Vert \sum _{i \in A_+} \text{ sign }\,(a_i) e_i\Vert \le 2\mathfrak {K}^2 \Vert x\Vert \), and the same holds for \(A_-\). Complete the proof using the triangle inequality. \(\square \)

We close this section with a brief discussion about the values of \(\varvec{\mu _d}(n)\), \(\mathbf {k}(n)\), and \(\mathbf {c}(n)\). It was shown in [2, 5] that, for a \(\mathfrak {K}\)-quasi-greedy basis, \(\mathbf {k}(n) \le C \log (en)\), where the constant \(C\) depends on the particular basis. For bases in \(L_p\) spaces, sharper estimates were obtained in [6]. It is easy to see that \(\mathbf {c}(n) \le \varvec{\mu _d}(n) \le C n\), where \(C\) depends on the basis. These estimates are optimal: indeed, an appropriate enumeration of the canonical (normalized and \(1\)-unconditional) basis in \(c_0 \oplus _2 \ell _1\) gives \(\mathbf {c}(n) \ge cn\), as the computation below shows.
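To see the last claim, one concrete choice (among many) is to interleave the two canonical bases, placing the \(\ell _1\) basis vectors at the odd positions and the \(c_0\) basis vectors at the even positions. Taking \(A\) to be the \(n\) odd positions in \([1,2n]\) and \(B\) to be any \(n\) even positions beyond \(2n\), we get

$$\begin{aligned} \Bigg \Vert \sum _{i \in A} e_i\Bigg \Vert = n \quad \mathrm{ and } \quad \Bigg \Vert \sum _{i \in B} e_i\Bigg \Vert = 1, \end{aligned}$$

so \(\mathbf {c}(2n) \ge n\), and hence \(\mathbf {c}(n) \ge n/2\) for even \(n\).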

3 The Lebesgue constant

In this section, we use some of the techniques of [5] to estimate the Lebesgue constants \(\mathbf {L}(n,t)\).

Theorem 3.1

For any \(\mathfrak {K}\)-quasi-greedy basis,

$$\begin{aligned} \max \big \{\mathbf {k}_c(n), t^{-1} \varvec{\mu _d}(n)\big \} \le \mathbf {L}(n,t)\le 1 + 2\mathbf {k}(n) + 8 t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n). \end{aligned}$$

The proof of the theorem relies on several lemmas, whose proofs closely resemble those given in [5] (Lemma 3.4 yields better upper estimates).

Lemma 3.2

For any \(\mathfrak {K}\)-quasi-greedy basis, \(\mathbf {L}(n,t) \ge t^{-1} \varvec{\mu _d}(n)\).

Proof

Fix \(n \in \mathbb N\) and \(\varepsilon > 0\). Find \(A, B \subset \mathbb N\), so that \(A \cap B = \emptyset \), \(|A| = |B| = k \le n\), and

$$\begin{aligned} \Bigg \Vert \sum _{i \in A} e_i \Bigg \Vert \ge (\varvec{\mu _d}(n) - \varepsilon ) \Bigg \Vert \sum _{i \in B} e_i \Bigg \Vert . \end{aligned}$$

Pick a set \(C\), disjoint from \(A\) and \(B\), so that \(|C| = n-k\). Consider

$$\begin{aligned} x = (t + \varepsilon ) \sum _{i \in B \cup C} e_i + \sum _{i \in A} e_i. \end{aligned}$$

Then \((t + \varepsilon ) \sum _{i \in B \cup C} e_i\) is a \(t\)-greedy approximant of \(x\), for which \(\Vert x - \mathbf {G}_n^t(x)\Vert = \Vert \sum _{i \in A} e_i\Vert \). However, \(|A \cup C| = n\), hence

$$\begin{aligned} \sigma _n(x) \le \tilde{\sigma }_n(x) \le \Vert x - P_{A \cup C} x \Vert =(t + \varepsilon ) \Big \Vert \sum _{i \in B} e_i \Big \Vert . \end{aligned}$$

Thus,

$$\begin{aligned} \mathbf {L}(n,t) \ge \frac{\Vert x - \mathbf {G}_n^t(x)\Vert }{\sigma _n(x)} =(t + \varepsilon )^{-1} \frac{\Vert \sum _{i \in A} e_i\Vert }{\Vert \sum _{i \in B} e_i\Vert } \ge \frac{\varvec{\mu _d}(n) - \varepsilon }{t + \varepsilon }. \end{aligned}$$

As \(\varepsilon \) can be arbitrarily small, the desired estimate follows. \(\square \)

Lemma 3.3

For any basis, \(\mathbf {L}(n,t) \ge \mathbf {k}_c(n)\).

Proof

Clearly \(\mathbf {L}(n,t) \ge \mathbf {L}(n,1)\). By [5, Proposition 3.3], \(\mathbf {L}(n,1) \ge \mathbf {k}_c(n)\). \(\square \)

Lemma 3.4

For any \(\mathfrak {K}\)-quasi-greedy basis, \(\mathbf {L}(n,t) \le \mathbf {k}(n) + \mathbf {k}_c(n) + 8 t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n)\).

Proof

For \(x \in X\), let \(a_i = e_i^*(x)\), and fix \(\varepsilon > 0\). Suppose \(A \subset \mathbb N\) is a \(t\)-greedy set for \(x\), of cardinality \(n\). Find \(z \in X\), supported on a set \(B\) of cardinality \(n\), so that \(\Vert x - z\Vert < \sigma _n(x) + \varepsilon \). Let \(M = \sup _{i \notin A} |a_i|\), then \(|a_i| \ge tM\) whenever \(i \in A\). By the triangle inequality,

$$\begin{aligned} \Vert x - P_A x\Vert \le \Vert x - P_B x\Vert + \Vert P_{A \backslash B} x\Vert + \Vert P_{B \backslash A} x\Vert . \end{aligned}$$

We have

$$\begin{aligned} \Vert P_{A \backslash B} x\Vert = \Vert P_{A \backslash B} (x-z)\Vert \le \mathbf {k}(n) \Vert x-z\Vert , \end{aligned}$$

and

$$\begin{aligned} \Vert x - P_B x\Vert = \Vert x - P_B x + z - P_B z\Vert = \Vert (I - P_B) (x-z)\Vert \le \mathbf {k}_c(n) \Vert x-z\Vert . \end{aligned}$$

It remains to estimate the third summand, in the non-trivial case of \(|B \backslash A| = k > 0\). For \(i \in B \backslash A\), \(|a_i| \le M\), hence by Lemma 6.1(1) (see also [3, Lemma 2.1]),

$$\begin{aligned} \Vert P_{B \backslash A} x\Vert = \Big \Vert \sum _{i \in B \backslash A} a_i e_i\Big \Vert \le 2 M \mathfrak {K}\Vert \sum _{i \in B \backslash A} e_i\Vert . \end{aligned}$$

By Lemma 2.2, \(M \le 4 t^{-1} \mathfrak {K}^2 \Vert \sum _{i \in A \backslash B} e_i\Vert ^{-1} \Vert x-z\Vert \). Thus,

$$\begin{aligned} \Vert P_{B \backslash A} x\Vert&\le 2 M \mathfrak {K}\Big \Vert \sum _{i \in B \backslash A} e_i\Big \Vert \le 8 t^{-1} \mathfrak {K}^3\frac{\Vert \sum _{i \in B \backslash A} e_i\Vert }{\Vert \sum _{i \in A \backslash B} e_i\Vert }\Vert x-z\Vert \\&\le 8 t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n) \Vert x-z\Vert . \end{aligned}$$

As \(\Vert x-z\Vert \) can be arbitrarily close to \(\sigma _n(x)\), we are done. \(\square \)

We use Theorem 3.1 to estimate the Lebesgue constant for quasi-greedy bases in Hilbert spaces. Recall that a basis \((e_i)\) is called hilbertian (besselian) if there exists a constant \(c\) so that, for every finite sequence of scalars \((\alpha _i)\), we have \(\sum _i |\alpha _i|^2 \ge c \Vert \sum _i \alpha _i e_i\Vert ^2\) (resp. \(\sum _i |\alpha _i|^2 \le c \Vert \sum _i \alpha _i e_i\Vert ^2\)).

Corollary 3.5

For any quasi-greedy basis in a Hilbert space, there exists \(\alpha \in (0,1)\) and \(C > 0\) so that, for any \(n \in \mathbb N\) and \(t \in (0,1)\), \(\mathbf {L}(n,t) \le C (t^{-1} + (\log (en) )^\alpha )\). If, moreover, the basis is either besselian or hilbertian, then there exists \(\alpha \in (0,1/2)\) with the above property.

Proof

By [6], there exists \(c_1 > 0\), and \(\alpha \) as above, so that \(\mathbf {k}(n) \le c_1 (\log (en) )^\alpha \). By [17, Theorem 3], \(\varvec{\mu }(n) \le c_2\), for some constant \(c_2\). To finish the proof, apply Theorem 3.1. \(\square \)

We conclude this section with an estimate for \(\mathbf {L}(n,t)\) for bounded Markushevitch bases which are not necessarily quasi-greedy. Let \(1 \le p \le q \le \infty \). We say that \((e_i)\) satisfies weak upper \(p\)- and lower \(q\)-estimates if there exists \(K>0\) such that for all \(x \in X\),

$$\begin{aligned} \frac{1}{K} \Vert (e_i^*(x))\Vert _{q,\infty } \le \Vert x\Vert \le K\Vert (e_i^*(x))\Vert _{p,1}, \end{aligned}$$

where, letting \((a_n^*)\) denote the decreasing rearrangement of the sequence \((|a_n|)\),

$$\begin{aligned} \Vert (a_n)\Vert _{q,\infty } := \sup _{n\ge 1} n^{1/q}a_n^* \end{aligned}$$

and

$$\begin{aligned} \Vert (a_n)\Vert _{p,1} := \sum _{n \ge 1} n^{1/p - 1}a_n^* \end{aligned}$$

are the usual Lorentz sequence norms. Note that for \(p=1\) and \(q=\infty \) these are just the \(\ell _1\) and \(c_0\) norms, respectively.
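Both norms are easy to evaluate from the definitions; a small sketch (our own illustration):

```python
def lorentz_q_inf(a, q):
    """||a||_{q,infty} = sup_n n^(1/q) a_n^*, with a_n^* the decreasing rearrangement."""
    astar = sorted((abs(v) for v in a), reverse=True)
    return max(((n + 1) ** (1.0 / q)) * v for n, v in enumerate(astar))

def lorentz_p_1(a, p):
    """||a||_{p,1} = sum_n n^(1/p - 1) a_n^*."""
    astar = sorted((abs(v) for v in a), reverse=True)
    return sum(((n + 1) ** (1.0 / p - 1.0)) * v for n, v in enumerate(astar))

a = [3.0, -1.0, 0.5, 0.5]
print(lorentz_p_1(a, 1.0))             # p = 1 recovers the l_1 norm: 5.0
print(lorentz_q_inf(a, float("inf")))  # q = infinity recovers the sup norm: 3.0
```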

The following result slightly extends [17, Theorem 5] by incorporating the weakness parameter \(t\) and replacing upper \(\ell _p\)- and lower \(\ell _q\)-estimates by weaker Lorentz sequence space estimates.

Proposition 3.6

Suppose \((e_i)\) satisfies weak upper \(p\)- and lower \(q\)-estimates. Then there exists \(D := D(p,q,K)\) such that

$$\begin{aligned} \mathbf {L}(n,t) \le {\left\{ \begin{array}{ll} Dn^{1/p-1/q}/t,&{}\quad p \ne q\\ D\log n/t, &{}\quad p = q. \end{array}\right. } \end{aligned}$$

Proof

First suppose \(q > p\). Let \(x \in X\) and set \(a_i := e_i^*(x)\). Let \(A\) be a \(t\)-greedy set for \(x\), with \(|A|=n\), and let \(\mathbf {G}^t_n(x) := \sum _{i \in A} a_i e_i\). Given \(\varepsilon >0\), choose \(B \subset \mathbb {N}\), with \(|B| =n\), such that \(\Vert x - \sum _{i \in B}b_ie_i\Vert \le \sigma _n(x) + \varepsilon \). For convenience, set \(b_i = 0\) if \(i \notin B\). By the triangle inequality,

$$\begin{aligned} \Vert x - \mathbf {G}^t_n(x)\Vert&\le \Vert x - \sum _{i \in B} b_i e_i\Vert + \Vert \sum _{i \in B} b_i e_i - \sum _{i \in A} a_i e_i\Vert \nonumber \\&\le \sigma _n(x) + \varepsilon + \Vert \sum _{i \in B} b_i e_i - \sum _{i \in A} a_i e_i\Vert . \end{aligned}$$
(3.1)

Setting \(C = C(p,q) := (1/p - 1/q)^{1/q - 1/p}\), we obtain:

$$\begin{aligned} \Big \Vert \sum _{i \in A} (b_i - a_i)e_i\Big \Vert&\le K \Vert (b_i - a_i)_{i \in A}\Vert _{p,1}\nonumber \\&\le KC n^{1/p - 1/q}\Vert (b_i-a_i)_{i \in A}\Vert _{q,\infty }\nonumber \\&\le K^2C n^{1/p - 1/q} \Vert x - \sum _{i \in B} b_i e_i\Vert \nonumber \\&\le K^2C n^{1/p - 1/q}(\sigma _n(x) + \varepsilon ). \end{aligned}$$
(3.2)
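The middle inequality in (3.2) is the standard comparison of the two Lorentz norms on sequences with at most \(n\) nonzero entries: since \(a_k^* \le k^{-1/q} \Vert (a_i)\Vert _{q,\infty }\) for every \(k\),

$$\begin{aligned} \Vert (a_i)\Vert _{p,1} = \sum _{k=1}^n k^{1/p-1} a_k^* \le \Vert (a_i)\Vert _{q,\infty } \sum _{k=1}^n k^{1/p-1/q-1} \le C n^{1/p - 1/q} \Vert (a_i)\Vert _{q,\infty }, \end{aligned}$$

with the constant \(C = C(p,q)\) absorbing the value of the last sum.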

Similarly,

$$\begin{aligned} \Big \Vert \sum _{i \in B \setminus A} b_i e_i\Big \Vert&\le \Big \Vert \sum _{i \in B \setminus A} (b_i-a_i) e_i\Big \Vert + \Big \Vert \sum _{i \in B \setminus A} a_i e_i\Big \Vert \nonumber \\&\le K^2C n^{1/p-1/q} (\sigma _n(x) + \varepsilon ) + \Big \Vert \sum _{i \in B \setminus A} a_i e_i\Big \Vert . \end{aligned}$$
(3.3)

We clearly have \(|A\setminus B| = |B \setminus A|\). As \(A\) is a \(t\)-greedy set for \(x\), we have \(\min _{i \in A \backslash B} |a_i| \ge t \max _{i \in B \backslash A}|a_i|\). Therefore,

$$\begin{aligned} \Big \Vert \sum _{i \in B \setminus A} a_i e_i\Big \Vert&\le KC n^{1/p-1/q}\Vert (a_i)_{i \in B \setminus A}\Vert _{q,\infty }\nonumber \\&\le \frac{KC n^{1/p-1/q}}{t}\Vert (a_i)_{i \in A \setminus B}\Vert _{q,\infty }\nonumber \\&\le \frac{K^2C n^{1/p-1/q}}{t} \Vert x - \sum _{i \in B} b_i e_i\Vert \nonumber \\&\le \frac{K^2C n^{1/p-1/q}}{t} (\sigma _n(x) + \varepsilon ). \end{aligned}$$
(3.4)

Since \(\varepsilon >0\) is arbitrary, combining (3.1)–(3.4) gives

$$\begin{aligned} \Vert x - \mathbf {G}^t_n(x)\Vert \le \Big (1 + 2K^2C + \frac{K^2C}{t}\Big ) n^{1/p - 1/q} \sigma _n(x), \end{aligned}$$

and hence \(\mathbf {L}(n,t) \le \Big (1 + 2K^2C + \frac{K^2C}{t}\Big ) n^{1/p - 1/q}\). The case \(p=q\) is similar except \(Cn^{1/p-1/q}\) is replaced by \(1 + \log n\) throughout. \(\square \)

Corollary 3.7

Let \(1 \le p < \infty \) and let \((e_i)\) be a bounded Markushevitch basis such that \(\phi _r(k) \le C k^{1/p}\) for some \(C>0\). Then \(\mathbf {L}(n,t) \le C^\prime n^{1/p}/t\), for some constant \(C^\prime \).

Proof

Any basis satisfies the lower \(\infty \)-estimate. In order to apply Proposition 3.6, we need to show that \((e_i)\) has a weak upper \(p\)-estimate.

By the triangle inequality, \(\Vert \sum _{i \in A} \pm e_i\Vert \le 2C n^{1/p}\) for all \(A\subset \mathbb {N}\) with \(|A|=n\). Suppose, for \(x \in X\), the sequence \(a_n = e_n^*(x)\) satisfies \(\sum _n n^{1/p-1} a_n^* = \gamma \). Let \((n_i)\) be an enumeration of \(\mathbb N\) for which \((|a_{n_i}|)\) is non-increasing, that is, \(|a_{n_i}| = a_i^*\) for every \(i\). Set \(\varepsilon _i = \text{ sign }\,(a_{n_i})\), \(c_i = a_i^* - a_{i+1}^*\), and \(y_i = \sum _{j=1}^i \varepsilon _j e_{n_j}\). Note that, for every \(i\), \(i^{1/p} - (i-1)^{1/p} \le i^{1/p-1}\), hence

$$\begin{aligned} (2C)^{-1} \sum _i |c_i| \Vert y_i\Vert \le \sum _i (a_i^* - a_{i+1}^*) i^{1/p} =\sum _i a_i^*(i^{1/p} - (i-1)^{1/p}) \le \gamma . \end{aligned}$$

Consequently, \(\sum _i c_i y_i\) converges in \(X\). For every \(j\), we have \(e_j^*(\sum _i c_i y_i) = e_j^*(x)\), hence \(\sum _i c_i y_i = x\). By the above, \(\Vert x\Vert \le 2C \gamma \). \(\square \)

Remark 3.8

The estimates of Proposition 3.6 and Corollary 3.7 are sharp, even for unconditional (hence quasi-greedy) bases. For \(q > p\), consider the canonical basis of \(\ell _q \oplus _q \ell _p\) (\(c_0 \oplus _\infty \ell _p\) if \(q = \infty \)). This basis clearly possesses the lower \(q\)- and upper \(p\)-estimates, with constant \(1\). Denote the bases of \(\ell _q\) and \(\ell _p\) by \((e_i)\) and \((f_i)\), respectively. Fix \(c > 1\), and let \(x = \sum _{i=1}^n \big ( ct e_i + f_i \big )\). One possible realization of the \(t\)-greedy algorithm gives \(\mathbf {G}_n^t(x) = ct \sum _{i=1}^n e_i\), hence \(\Vert x - \mathbf {G}_n^t(x)\Vert = n^{1/p}\). On the other hand, \(\sigma _n(x) \le \tilde{\sigma }_n(x) \le \Vert ct \sum _{i=1}^n e_i\Vert = ct n^{1/q}\). As \(c\) can be arbitrarily close to \(1\), we obtain \(\mathbf {L}(n,t) \ge n^{1/p-1/q}/t\), showing the optimality of Proposition 3.6. Note that \(\phi _r(k) = k^{1/p}\), hence, for \(q = \infty \), we witness the optimality of Corollary 3.7.

We can also show the optimality of Proposition 3.6 for \(p=q=2\), once more for a quasi-greedy basis. By [6, Theorem 3.1 and Corollary 3.11], there exists a quasi-greedy democratic basis in \(c_0 \oplus \ell _1 \oplus \ell _2\), so that \(\phi _r(n) \sim \phi _l(n) \sim \sqrt{n}\). The weak upper \(2\)-estimate follows from the proof of Corollary 3.7, whereas the weak lower \(2\)-estimate follows from Lemma 6.1(2). Furthermore, [6, Corollary 3.11] gives \(\mathbf {k}(n) \ge c \log n\) for this basis (\(c\) is a constant). By Theorem 3.1, \(\mathbf {L}(n,t) \ge \mathbf {k}(n) - 1\).

Remark 3.9

We also present two examples of sharpness of Proposition 3.6 for bases which are not quasi-greedy. Throughout, we use some well-known facts about Lorentz spaces, see e.g. the survey [1].

First pick \(p \in (1,2)\). Set \(q = p/(p-1)\) and \(\gamma = 2/p - 1\) (so \(1/p = (1+\gamma )/2\), and \(1/q = 1 - 1/p = (1-\gamma )/2\)). Define the measures \(\mu \) and \(\nu \) on \([-\pi ,\pi ]\) by setting \(d \mu = |t|^{-\gamma } \, dt\) and \(d \nu = |t|^\gamma \, dt\). The trigonometric system forms a non-quasi-greedy Schauder basis in both \(L_2(\mu )\) and \(L_2(\nu )\), see e.g. [11]. Denote by \(e_1, e_2, \ldots \) (\(f_1, f_2, \ldots \)) the trigonometric basis in \(L_2(\mu )\) (resp. \(L_2(\nu )\)), enumerated as \(1, e^{i t}, e^{-i t}, e^{2i t}, e^{-2i t}, \ldots \).

First concentrate on the basis \((e_i)\) in \(L_2(\mu )\). Clearly this basis satisfies the lower \(2\)-estimate:

$$\begin{aligned} \Big \Vert \sum _i \alpha _i e_i\Big \Vert _{L_2(\mu )} \ge \pi ^{-\gamma } \left( \int _{-\pi }^\pi \big | \sum _i \alpha _i e_i \big |^2 \, dt \right) ^{1/2} = \sqrt{2} \pi ^{1/2-\gamma } \left( \sum _i |\alpha _i|^2 \right) ^{1/2}. \end{aligned}$$

Next show that \(\phi _r(n) \sim n^{1/p}\) (once this is established, the weak upper \(p\)-estimate will follow, as in the proof of Corollary 3.7). The lower estimate on \(\phi _r\) is proved in [6, Lemma 3.7]. For the upper estimate, recall the well-known fact that \(\int |\phi \psi | \le \int \phi ^* \psi ^*\) (\(\phi ^*\) and \(\psi ^*\) are the decreasing rearrangements of \(\phi \) and \(\psi \), respectively). Consequently, if \(f\) is a function on \([0,\pi ]\) with \(0 \le f \le n^2\), and \(\int f(t) \,dt = n\), then \(\int f(t) t^{-\gamma } \, dt \le n^{1+\gamma }/(1-\gamma )\) (equality is attained when \(f(t) = n^2 \mathbf {1}_{[0,1/n]}\)). Now suppose \(A \subset \mathbb N\) has cardinality \(n\). Applying our observation to \(f = |\sum _{j \in A} e_j|^2\), we obtain \(\Vert \sum _{j \in A} e_j\Vert _{L_2(\mu )} \prec n^{(1+\gamma )/2} = n^{1/p}\).

Use [6, Lemma 3.7] to find \(\varepsilon _1, \ldots , \varepsilon _{2n+1} \in \{-1,1\}\) so that \(\Vert \sum _i \varepsilon _i e_i \Vert _{L_2(\mu )} \sim \sqrt{n}\), while \(\Vert \sum _i e_i \Vert _{L_2(\mu )} \sim n^{1/p}\). Let \(B = \{i : \varepsilon _i = 1 \}\) and \(C = \{i : \varepsilon _i = -1 \}\). For \(\varepsilon > 0\) set \(x = (1+\varepsilon ) \sum _{i \in B} e_i - \sum _{i \in C} e_i\). For \(\varepsilon < 1/n\) we have \(\Vert x\Vert \sim \sqrt{n}\), yet \(\Vert x - \mathbf {G}_{|B|}(x)\Vert = \Vert \sum _{i \in C} e_i\Vert \sim n^{1/p}\). Consequently, \(\mathbf {L}(|B|,1) \succ |B|^{1/p-1/2}\). By the above, \(|B| \sim n\). Thus, the estimates on \(\mathbf {L}(n,t)\) obtained in Proposition 3.6 are optimal for this basis.

In the second example the optimality of these estimates is shown for a basis with a weak upper \(p\)-estimate, and a weak lower \(q\)-estimate. Following [6, Section 3], define the Schauder basis \((g_j)\) in \(L_2(\mu )\, \oplus _2\, L_2(\nu )\) by setting, for \(k \in \mathbb N\), \(g_{2k-1} = (e_k + f_k)/\sqrt{2}\) and \(g_{2k} = (e_k - f_k)/\sqrt{2}\). By the proof of [6, Proposition 3.10], for any odd \(n\) we can have \(\Vert \sum _{k=1}^{2n} g_k\Vert \sim n^{1/q}\), yet \(\Vert \sum _{k=1}^n g_{2k-1}\Vert \sim n^{1/p}\). As in the previous paragraph, we conclude that \(\mathbf {L}(n,1) \succ n^{1/p-1/q}\).

Next show that \((g_j)\) satisfies the weak upper \(p\)-estimate, and the weak lower \(q\)-estimate. Consider

$$\begin{aligned} x = \sum _k \big ( \alpha _k g_{2k-1} + \beta _k g_{2k} \big ) = \frac{1}{\sqrt{2}} \Bigg ( \sum _k (\alpha _k + \beta _k) e_k \Bigg ) \oplus _2 \Bigg ( \sum _k (\alpha _k - \beta _k) f_k \Bigg ).\quad \end{aligned}$$
(3.5)

We have to show that

$$\begin{aligned} \Vert (\alpha _1, \beta _1, \alpha _2, \beta _2, \ldots )\Vert _{q,\infty } \prec \Vert x\Vert \prec \Vert (\alpha _1, \beta _1, \alpha _2, \beta _2, \ldots )\Vert _{p,1}. \end{aligned}$$
(3.6)

Start by recalling that, for any sequence \((\gamma _i)\),

$$\begin{aligned} \Vert (\gamma _i)\Vert _{q,\infty } \prec \Bigg (\sum _i |\gamma _i|^2\Bigg )^{1/2} = \Vert (\gamma _i)\Vert _2 \prec \Vert (\gamma _i)\Vert _{p,1}. \end{aligned}$$
(3.7)

The basis \((f_k)\) satisfies the upper \(2\)-estimate:

$$\begin{aligned} \Big \Vert \sum _i \alpha _i f_i\Big \Vert _{L_2(\nu )} \le \pi ^{\gamma } \Bigg ( \int _{-\pi }^\pi \big | \sum _i \alpha _i f_i \big |^2 \, dt \Bigg )^{1/2} = \sqrt{2} \pi ^{1/2+\gamma } \Bigg ( \sum _i |\alpha _i|^2 \Bigg )^{1/2}. \end{aligned}$$

Thus, by (3.7), (3.5), and the triangle inequality for \(\Vert \cdot \Vert _{p,1}\),

$$\begin{aligned} \Vert x\Vert&\prec \Vert (\alpha _k + \beta _k)\Vert _{p,1} + \Vert (\alpha _k - \beta _k)\Vert _2 \prec \Vert (\alpha _k + \beta _k)\Vert _{p,1} + \Vert (\alpha _k - \beta _k)\Vert _{p,1} \\&\sim \Vert (\alpha _k)\Vert _{p,1} + \Vert (\beta _k)\Vert _{p,1} \prec \Vert (\alpha _1, \beta _1, \alpha _2, \beta _2, \ldots )\Vert _{p,1}, \end{aligned}$$

yielding the right hand side of (3.6).

Next note that \((f_i)\) satisfies the weak lower \(q\)-estimate. Indeed, the functions \(f_i^\prime (t) = e_i(t) |t|^\gamma \) are biorthogonal to \((e_i)\) in \(L_2(\mu )\). By duality, the sequence \((f_i^\prime )\) satisfies the weak lower \(q\)-estimate. Now observe that \(U : L_2(\mu ) \rightarrow L_2(\nu ) : f_i^\prime \rightarrow f_i\) is an isometry. Moreover, \((e_i)\) satisfies the lower \(2\)-estimate, hence the weak lower \(q\)-estimate as well. As \(\Vert \cdot \Vert _{q,\infty }\) is a quasi-norm, we obtain

$$\begin{aligned} \Vert x\Vert&\succ \Vert (\alpha _k + \beta _k)\Vert _2 + \Vert (\alpha _k - \beta _k)\Vert _{q,\infty } \succ \Vert (\alpha _k + \beta _k)\Vert _{q,\infty } + \Vert (\alpha _k - \beta _k)\Vert _{q,\infty } \sim \\&\Vert (\alpha _k)\Vert _{q,\infty } + \Vert (\beta _k)\Vert _{q,\infty } \succ \Vert (\alpha _1, \beta _1, \alpha _2, \beta _2, \ldots )\Vert _{q,\infty }. \end{aligned}$$

This yields the left hand side of (3.6).

4 The Chebyshevian Lebesgue constant

Theorem 4.1

For any \(\mathfrak {K}\)-quasi-greedy basis,

$$\begin{aligned} \frac{\varvec{\mu _d}(n)}{2 t \mathfrak {K}} \le {\mathbf {L}}_{\mathrm {ch}}(n,t) \le \frac{20 \mathfrak {K}^3 \varvec{\mu _d}(n)}{t}. \end{aligned}$$

As a corollary, we recover a result from [2].

Corollary 4.2

Any almost greedy basis is semi-greedy.

Recall that \((e_i)\) is almost greedy if there exists a constant \(C\) so that \(\Vert x - \mathbf {G}_n(x)\Vert \le C \tilde{\sigma }_n(x)\) for any \(n \in \mathbb N\) and \(x \in X\), and semi-greedy if there exists a constant \(C\) so that \(\Vert x - \mathbf {CG}_n(x)\Vert \le C \sigma _n(x)\), for any \(n\) and \(x\).

Proof

By [2], a basis is almost greedy if and only if it is quasi-greedy and democratic (that is, \(\sup _n \varvec{\mu }(n) < \infty \)). In this case \(\sup _n {\mathbf {L}}_{\mathrm {ch}}(n,1) < \infty \), hence the basis is semi-greedy. \(\square \)

Below, we shall use the “truncation function”

$$\begin{aligned} {\mathbf {F}}\!_M : \mathbb R\rightarrow \mathbb R: t \mapsto \left\{ \begin{array}{ll} -M &{} \quad t < -M \\ t &{}\quad -M \le t \le M \\ M &{}\quad t > M \end{array} \right. . \end{aligned}$$

Abusing the notation slightly, we shall write

$$\begin{aligned} {\mathbf {F}}\!_M(x) = x - \sum _i \Big (e_i^*(x) - {\mathbf {F}}\!_M\big (e_i^*(x)\big )\Big ) e_i. \end{aligned}$$

The sum above converges, since the set \(\{i \in \mathbb N: |e_i^*(x)| > M\}\) is finite. Moreover, \({\mathbf {F}}\!_M(x)\) is the only element \(y \in X\) with the property that, for every \(i\), \(e_i^*(y) = {\mathbf {F}}\!_M(e_i^*(x))\). By [2, Proposition 3.1], \(\Vert {\mathbf {F}}\!_M(x)\Vert \le (1+3\mathfrak {K})\Vert x\Vert \).
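On the coefficient level the truncation is just a clamp; a one-line sketch (ours; real scalars, finitely supported \(x\)):

```python
def truncate(x, M):
    """Apply F_M to each coefficient: clamp real values to the interval [-M, M]."""
    return {i: max(-M, min(M, c)) for i, c in x.items()}

x = {1: 2.5, 2: -0.3, 5: -4.0}
print(truncate(x, 1.0))  # {1: 1.0, 2: -0.3, 5: -1.0}
```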

Proof

(The upper estimate in Theorem 4.1) For \(x \in X\) let \(a_i = e_i^*(x)\), and fix \(\varepsilon > 0\). Suppose a set \(A \subset \mathbb N\) of cardinality \(n\) is \(t\)-greedy for \(x\). Let \(M = \max _{i \notin A} |a_i|\), then \(\min _{i \in A} |a_i| \ge tM\). We have to show that there exists \(w \in X\) so that \(\mathop {\mathrm{supp}}(x-w) \subset A\), and \(\Vert w\Vert \le 20 t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n) (\sigma _n(x) + \varepsilon )\). This suffices: \(x - w\) then belongs to \(\text{ span }\,[e_i : i \in A]\), hence \(\Vert x - \mathbf {CG}_n^t(x)\Vert \le \Vert x - (x-w)\Vert = \Vert w\Vert \).

Pick \(z = \sum _{i \in B} b_i e_i\), where \(|B| \le n\), and \(\Vert x - z\Vert < \sigma _n(x) + \varepsilon \). Set \(y = x-z\) and

$$\begin{aligned} y_i = e_i^*(y) = \left\{ \begin{array}{ll} a_i - b_i &{}\quad i \in B \\ a_i &{}\quad i \notin B \end{array} \right. . \end{aligned}$$

We claim that \(w = P_A {\mathbf {F}}\!_M(y) + P_{A^c} x\) has the desired properties. Indeed, \(x-w\) is supported on \(A\). To estimate \(\Vert w\Vert \), note that, for \(i \notin B\), \(y_i = a_i\). For \(i \notin A\), \({\mathbf {F}}\!_M(a_i) = a_i\), hence, for \(i \notin A \cup B\), \(a_i = {\mathbf {F}}\!_M(y_i)\). Thus,

$$\begin{aligned} w = {\mathbf {F}}\!_M(y) + \sum _{i \in B \backslash A} (a_i - {\mathbf {F}}\!_M(y_i)) e_i. \end{aligned}$$
(4.1)

We use [2, Proposition 3.1] to estimate the first summand:

$$\begin{aligned} \Vert {\mathbf {F}}\!_M(y)\Vert \le (1+3\mathfrak {K}) \Vert y\Vert = (1+3\mathfrak {K}) \Vert x-z\Vert . \end{aligned}$$
(4.2)

To handle the second summand, set \(k = |B \backslash A|\). For \(i \in B \backslash A\), \(|a_i| \le M\), hence \(|a_i - {\mathbf {F}}\!_M(y_i)| \le 2M\). By Lemma 6.1(1),

$$\begin{aligned} \left\| \sum _{i \in B \backslash A} (a_i - {\mathbf {F}}\!_M(y_i)) e_i\right\| \le 4M \mathfrak {K}\left\| \sum _{i \in B \backslash A} e_i\right\| . \end{aligned}$$
(4.3)

On the other hand, for \(i \in A \backslash B\), \(a_i = y_i\), and \(|a_i| \ge tM\), hence by Lemma 2.2,

$$\begin{aligned} M \le t^{-1} \frac{4\mathfrak {K}^2 \Vert x-z\Vert }{\Vert \sum _{i \in A \backslash B} e_i\Vert }. \end{aligned}$$

Plugging this into (4.3), we get:

$$\begin{aligned} \left\| \sum _{i \in B \backslash A} (a_i - {\mathbf {F}}\!_M(y_i)) e_i\right\| \le \frac{16}{t} \frac{\Vert \sum _{i \in B \backslash A} e_i\Vert }{\Vert \sum _{i \in A \backslash B} e_i\Vert } \mathfrak {K}^3 \Vert x-z\Vert \le \frac{16}{t} \varvec{\mu _d}(n) \mathfrak {K}^3 \Vert x-z\Vert . \end{aligned}$$

Together with (4.2), we obtain:

$$\begin{aligned} \Vert w\Vert \le \Big ( \frac{16}{t} \varvec{\mu _d}(n) \mathfrak {K}^3 + 1 + 3\mathfrak {K}\Big ) \Vert x-z\Vert \le \frac{20 \mathfrak {K}^3 \varvec{\mu _d}(n)}{t} (\sigma _n(x) + \varepsilon ). \end{aligned}$$

As \(\varepsilon \) can be arbitrarily close to \(0\), we are done. \(\square \)

Proof

(The lower estimate in Theorem 4.1) Fix \(n \in \mathbb N\) and \(\varepsilon > 0\). Find \(A, B \subset \mathbb N\), so that \(A \cap B = \emptyset \), \(|A| = |B| = k \le n\), and

$$\begin{aligned} \Bigg \Vert \sum _{i \in A} e_i\Bigg \Vert \ge (\varvec{\mu _d}(n) - \varepsilon ) \Bigg \Vert \sum _{i \in B} e_i\Bigg \Vert . \end{aligned}$$

Pick a set \(C\), disjoint from \(A\) and \(B\), so that \(|C| = n-k\). Consider

$$\begin{aligned} x = (t + \varepsilon ) \sum _{i \in B \cup C} e_i + \sum _{i \in A} e_i. \end{aligned}$$

We can find a Chebyshev \(t\)-greedy approximant \(\mathbf {CG}_n^t(x)\) supported on \(B \cup C\), and then \(y = x - \mathbf {CG}_n^t(x) = \sum _{i \in A} e_i + \sum _{i \in B \cup C} y_i e_i\). Let \(D = \{i \in B \cup C : |y_i| \ge 1\}\). Both \(\sum _{i \in A} e_i + \sum _{i \in D} y_i e_i\) and \(\sum _{i \in D} y_i e_i\) are greedy approximants of \(y\), hence

$$\begin{aligned} \max \Bigg \{ \Bigg \Vert \sum _{i \in A} e_i + \sum _{i \in D} y_i e_i\Bigg \Vert , \Bigg \Vert \sum _{i \in D} y_i e_i\Bigg \Vert \Bigg \} \le \mathfrak {K}\Vert y\Vert . \end{aligned}$$

By the triangle inequality, \(\Vert \sum _{i \in A} e_i\Vert \le 2\mathfrak {K}\Vert y\Vert \). Thus,

$$\begin{aligned} \Vert x - \mathbf {CG}_n^t(x)\Vert&\ge \frac{1}{2\mathfrak {K}} \Bigg \Vert \sum _{i \in A} e_i\Bigg \Vert \ge \frac{\varvec{\mu _d}(n)-\varepsilon }{2(t+\varepsilon )\mathfrak {K}} \Bigg \Vert (t + \varepsilon ) \sum _{i \in B} e_i\Bigg \Vert \\&= \frac{\varvec{\mu _d}(n)-\varepsilon }{2(t+\varepsilon )\mathfrak {K}} \Vert x - P_{A \cup C} x\Vert \ge \frac{\varvec{\mu _d}(n)-\varepsilon }{2(t+\varepsilon )\mathfrak {K}} \tilde{\sigma }_n(x) \ge \frac{\varvec{\mu _d}(n)-\varepsilon }{2(t+\varepsilon )\mathfrak {K}} \sigma _n(x) \end{aligned}$$

(since \(|A \cup C| = n\)). As \(\varepsilon \) can be arbitrarily small, we are done. \(\square \)

5 The residual Lebesgue constant

Theorem 5.1

For any \(\mathfrak {K}\)-quasi-greedy basis,

$$\begin{aligned} t^{-1} \mathbf {c}(n) \le {\mathbf {L}}_{\mathrm {re}}(n,t) \le 1 + 4 \mathfrak {K}^2 + 8 t^{-1} \mathfrak {K}^3 \mathbf {c}(n). \end{aligned}$$

Proof

(The upper estimate in Theorem 5.1) For \(x \in X\) set \(a_i = e_i^*(x)\). Suppose \(A\) is a \(t\)-greedy set for \(x\), of cardinality \(n\), and set \(B = [1,n]\). Let \(M = \min _{i \in A} |a_i|\), then \(|a_i| \le t^{-1} M\) for \(i \notin A\). By the triangle inequality,

$$\begin{aligned} \Vert x - \mathbf {G}_n^t(x)\Vert = \Vert P_{A^c} x\Vert \le \Vert x - P_B x\Vert + \Vert P_{A \backslash B} x\Vert + \Vert P_{B \backslash A} x\Vert . \end{aligned}$$
(5.1)

Let \(y = P_{B^c} x\), then \(\Vert y\Vert = \hat{\sigma }_n(x)\). For \(i \in A \backslash B\), we have \(|e_i^*(y)| \ge M\), hence by Lemma 2.2, \(M \Vert \sum _{i \in A \backslash B} e_i\Vert \le 4 \mathfrak {K}^2 \Vert y\Vert \). By Lemmas 2.2 and 6.1(1),

$$\begin{aligned} \Vert P_{B \backslash A} x\Vert \le 2 t^{-1} M \mathfrak {K}\Bigg \Vert \sum _{i \in B \backslash A} e_i\Bigg \Vert \le 2 t^{-1} M \mathfrak {K}\mathbf {c}(n) \Bigg \Vert \sum _{i \in A \backslash B} e_i\Bigg \Vert \le 8 t^{-1} \mathfrak {K}^3 \mathbf {c}(n) \Vert y\Vert . \end{aligned}$$

Plug the above results into (5.1) to obtain the upper estimate for \({\mathbf {L}}_{\mathrm {re}}(n,t)\). \(\square \)

Proof

(The lower estimate in Theorem 5.1) Fix \(\varepsilon > 0\), and find sets \(A \subset [1,n]\) and \(B \subset [n+1,\infty )\) so that \(|A| = k = |B|\), and

$$\begin{aligned} \mathbf {c}(n) - \varepsilon < \frac{\Vert \sum _{i \in A} e_i\Vert }{\Vert \sum _{i \in B} e_i\Vert }. \end{aligned}$$

Consider \(x = \sum _{i=1}^n e_i + (t + \varepsilon ) \sum _{i \in B} e_i\). Then \(B \cup ([1,n] \backslash A)\) is a \(t\)-greedy set for \(x\), hence one can run the \(t\)-greedy algorithm in such a way that \(\Vert x - \mathbf {G}_n^t(x)\Vert = \Vert \sum _{i \in A} e_i\Vert \). On the other hand, \(\hat{\sigma }_n(x) = \Vert P_{[n+1,\infty )} x\Vert = (t + \varepsilon ) \Vert \sum _{i \in B} e_i\Vert \). The lower estimate follows from comparing these two quantities. \(\square \)

6 Appendix: The complex case

The results above are stated for the real case. The complex case is similar, but the constants are different. As customary, we set

$$\begin{aligned} \text{ sign }\,z = \left\{ \begin{array}{ll} z/|z| &{}\quad z \ne 0 \\ 0 &{}\quad z = 0 \end{array} \right. . \end{aligned}$$

The following result is present (implicitly or explicitly) in [5, Appendix] (the better-known real case is in [3, Lemmas 2.1 and 2.2]):

Lemma 6.1

Suppose \((e_i)\) is a \(\mathfrak {K}\)-quasi-greedy basis in a Banach space \(X\).

  1.

    If \(A\) is a finite set, then \(\Vert \sum _{i \in A} a_i e_i\Vert \le 4 \sqrt{2} \mathfrak {K}\max _i |a_i| \Vert \sum _{i \in A} e_i\Vert \). Moreover, if the \(a_i\)’s are real, then \(\Vert \sum _{i \in A} a_i e_i\Vert \le 2 \mathfrak {K}\max _i |a_i| \Vert \sum _{i \in A} e_i\Vert \).

  2.

    Suppose \(A\) is a greedy set for \(x \in X\). Let \(M = \min _{i \in A} |e_i^*(x)|\). Then

    $$\begin{aligned} \frac{M}{8 \sqrt{2} \mathfrak {K}^2} \Vert \sum _{i \in A} e_i\Vert \le \frac{M}{2 \mathfrak {K}} \Vert \sum _{i \in A} \text{ sign }\,\big ( e_i^*(x) \big ) e_i\Vert \le \Vert x\Vert . \end{aligned}$$

For \(M > 0\), define

$$\begin{aligned} {\mathbf {F}}\!_M : {\mathbb {C}}\rightarrow {\mathbb {C}}: z \mapsto \left\{ \begin{array}{ll} \text{ sign }\,(z) M &{}\quad |z| > M \\ z &{}\quad |z| \le M \end{array} \right. . \end{aligned}$$

For \(x \in X\), we set \({\mathbf {F}}\!_M(x) = x - \sum _i \big (e_i^*(x) - {\mathbf {F}}\!_M(e_i^*(x))\big ) e_i\) (the sum converges, and \(e_i^*({\mathbf {F}}\!_M(x)) = {\mathbf {F}}\!_M(e_i^*(x))\) for every \(i\)). As in [2, Proposition 3.1], one can prove:

Lemma 6.2

In the above notation, \(\Vert {\mathbf {F}}\!_M(x)\Vert \le (1 + 3 \mathfrak {K}) \Vert x\Vert \).

As in Sect. 2, we obtain:

Lemma 6.3

Suppose \((e_i) \subset X\) is a basis with a quasi-greedy constant \(\mathfrak {K}\), and a set \(A\) is \(t\)-greedy for \(x \in X\). Then \(\Vert P_A x\Vert \le (1 + 8 \sqrt{2} t^{-1} \mathfrak {K}) \mathfrak {K}\Vert x\Vert \).

Lemma 6.4

Suppose \((e_i)\) is a \(\mathfrak {K}\)-quasi-greedy basis in \(X\). Consider \(x \in X\), and let \(a_i = e_i^*(x)\), for \(i \in \mathbb N\). Suppose a finite set \(A \subset \mathbb N\) satisfies \(\min _{i \in A} |a_i| \ge M\). Then \(M \Vert \sum _{i \in A} \text{ sign }\,(a_i) e_i\Vert \le 2\mathfrak {K}^2 \Vert x\Vert \). Furthermore, \(M \Vert \sum _{i \in A} e_i\Vert \le 8\mathfrak {K}^2 \Vert x\Vert \).

Proof

Consider \(C = \{i : |a_i| \ge M\}\) (note that \(A \subset C\)). For brevity of notation, let \(e_i^\prime = \text{ sign }\,(a_i) e_i\) (if \(a_i = 0\), let \(e_i^\prime = e_i\)). Clearly the basis \((e_i^\prime )\) is \(\mathfrak {K}\)-quasi-greedy. Set \(y = \sum _{i \in C} e_i^\prime \). By Lemma 6.1(2), \(M \Vert y\Vert \le 2\mathfrak {K}\Vert x\Vert \). For \(\varepsilon > 0\), let

$$\begin{aligned} y_\varepsilon = \sum _{i \in A} e_i^\prime + (1+\varepsilon ) \sum _{i \in C \backslash A} e_i^\prime = \sum _{i \in C} e_i^\prime + \varepsilon \sum _{i \in C \backslash A} e_i^\prime . \end{aligned}$$

By the triangle inequality, \(\Vert y_\varepsilon \Vert \le \Vert y\Vert + \varepsilon \sum _{i \in C \backslash A} \Vert e_i\Vert \). Furthermore, \(\Vert \sum _{i \in A} e_i^\prime \Vert \le \mathfrak {K}\Vert y_\varepsilon \Vert \). As \(\varepsilon \) is arbitrary, we establish the first statement of the lemma.

The reasoning above also shows that \(M \Vert \sum _{i \in B} e_i^\prime \Vert \le 2 \mathfrak {K}^2 \Vert x\Vert \) for any \(B \subset A\). Let \(S\) be the absolute convex hull of the elements \(\sum _{i \in B} e_i^\prime \)—that is,

$$\begin{aligned} S = \Bigg \{ \sum _{B \subset A} t_B \sum _{i \in B} e_i^\prime : \sum _{B \subset A} |t_B| \le 1 \Bigg \}. \end{aligned}$$

We claim that \(\sum _{i \in A} e_i = \sum _{i \in A} \omega _i e_i^\prime \in 4 S\), where \(|\omega _i|=1\). Otherwise, by the Hahn–Banach separation theorem, there exists a sequence \((b_i)_{i \in A} \in {\mathbb {C}}^{|A|}\) so that \(|\sum _{i \in B} b_i| < 1\) whenever \(B \subset A\), yet \(|\sum _{i \in A} \omega _i b_i| > 4\). Let \(B_+ = \{i \in A : \mathfrak {R}b_i \ge 0\}\) and \(B_- = \{i \in A : \mathfrak {R}b_i < 0\}\). Then

$$\begin{aligned} \sum _{i \in B_+} \mathfrak {R}b_i \le \Bigg | \sum _{i \in B_+} b_i \Bigg | \le 1, \end{aligned}$$

and similarly, \(\sum _{i \in B_-} (-\mathfrak {R}b_i) \le 1\). Therefore,

$$\begin{aligned} \sum _{i \in A} |\mathfrak {R}b_i| = \sum _{i \in B_+} |\mathfrak {R}b_i| + \sum _{i \in B_-} |\mathfrak {R}b_i| \le 2. \end{aligned}$$

The same way, we show that \(\sum _{i \in A} |\mathfrak {I}b_i| \le 2\). Consequently,

$$\begin{aligned} \Bigg |\sum _{i \in A} \omega _i b_i\Bigg | \le \sum _{i \in A} |b_i| \le \sum _{i \in A} \big (|\mathfrak {R}b_i| + |\mathfrak {I}b_i|\big ) \le 4, \end{aligned}$$

yielding a contradiction. This establishes the second statement of our lemma. \(\square \)

These results allow us to emulate the proofs of previous sections, and to estimate the Lebesgue constants:

Theorem 6.5

Suppose \((e_i)\) is a \(\mathfrak {K}\)-quasi-greedy basis in a complex Banach space \(X\). Then:

  1.
    $$\begin{aligned} \max \big \{\mathbf {k}_c(n), t^{-1} \varvec{\mu _d}(n)\big \} \le \mathbf {L}(n,t) \le 1 + 2\mathbf {k}(n) + 32 \sqrt{2} t^{-1} \mathfrak {K}^3 \varvec{\mu _d}(n). \end{aligned}$$
  2.
    $$\begin{aligned} \frac{\varvec{\mu _d}(n)}{2 t \mathfrak {K}} \le {\mathbf {L}}_{\mathrm {ch}}(n,t) \le \frac{100 \mathfrak {K}^3 \varvec{\mu _d}(n)}{t}. \end{aligned}$$
  3.
    $$\begin{aligned} t^{-1} \mathbf {c}(n) \le {\mathbf {L}}_{\mathrm {re}}(n,t) \le 1 + 8 \mathfrak {K}^2 + 32 \sqrt{2} t^{-1} \mathfrak {K}^3 \mathbf {c}(n). \end{aligned}$$