
Introduction

We recall that a non-trivial Radon measure ν on \(\mathbf{R}^{d}\) is said to be \(A_{\infty }\) (in symbols: \(\nu \in A_{\infty }\)) if, for every ε > 0, there is a δ > 0 such that, for every cube \(Q \subset \mathbf{R}^{d}\) and every measurable E ⊂ Q, having | E | ∕ | Q | < δ implies ν(E) ≤ ε ν(Q); where, here and in the future, we use | ⋅ | to denote a set’s Lebesgue measure. A non-trivial Radon measure ν on \(\mathbf{R}^{d}\) is said to be doubling if there is a finite C so that, for all cubes \(Q \subset \mathbf{R}^{d}\), ν(2Q) ≤ C ν(Q), where 2Q denotes Q’s concentric double. It is easy to see that \(\nu \in A_{\infty }\) implies that ν is doubling; it is not so easy (but classical) that the converse fails. If \(\nu \in A_{\infty }\) then dν = v dx for some non-negative \(v \in L_{loc}^{1}(\mathbf{R}^{d})\). In such a case we say that \(v \in A_{\infty }\). It is well known that \(v \in A_{\infty }\) if and only if there is a p > 1 and a finite \(K_{p}\) such that, for all cubes Q,

$$\displaystyle{ \left ( \frac{1} {\vert Q\vert }\int _{Q}v^{p}\,dx\right )^{1/p} \leq \frac{K_{p}} {\vert Q\vert } \int _{Q}v\,dx, }$$
(1)

which is the so-called “reverse-Hölder inequality”.
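A standard example (not from the original) may help fix ideas: in dimension one, the power weight \(v(x) =\vert x\vert ^{a}\) with a > −1 satisfies (1) on intervals of the form Q = (0,r) for any p > 1 with ap > −1, since

$$\displaystyle{\left (\frac{1} {r}\int _{0}^{r}x^{ap}\,dx\right )^{1/p} = \frac{r^{a}} {(ap + 1)^{1/p}} = \frac{a + 1} {(ap + 1)^{1/p}}\,\cdot \,\frac{1} {r}\int _{0}^{r}x^{a}\,dx;}$$

so (1) holds on these intervals with \(K_{p} = (a + 1)(ap + 1)^{-1/p}\). Handling general intervals takes only a little more work, and in fact \(\vert x\vert ^{a} \in A_{\infty }(\mathbf{R})\) for every a > −1.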

In a recent paper [9] the author proved that, if \(\mu \in A_{\infty }\), then, in a precise sense to be explained shortly, \(L^{2}(\mu )\) and ordinary, Lebesgue-measure \(L^{2}\) have the same almost-orthogonal systems; where we say that a collection of functions \(\{\psi _{k}\}_{k}\) is almost-orthogonal in \(L^{2}(\nu )\) if there is a finite R so that, for all finite linear sums \(\sum \lambda _{k}\psi _{k}\),

$$\displaystyle{ \int \left \vert \sum \lambda _{k}\psi _{k}\right \vert ^{2}\,d\nu \leq R\sum \vert \lambda _{ k}\vert ^{2}. }$$
(2)

He also proved that if μ is a doubling measure and \(L^{2}\) and \(L^{2}(\mu )\) have (in a precise sense) the same almost-orthogonal systems, then μ must be \(A_{\infty }\).

Let us explain what this “precise sense” is.

If \(z = (t,y) \in \mathbf{R}_{+}^{d+1} \equiv \mathbf{R}^{d} \times (0,\infty )\) and \(f: \mathbf{R}^{d} \rightarrow \mathbf{C}\), we define \(f_{z}(x)\) to be \(f((x - t)/y)\). This is the function f dilated and translated relative to the ball B(t; y), but without any measure-based normalization. If 0 < α ≤ 1 we say that \(\phi \in \mathcal{C}_{\alpha }\) if \(\phi: \mathbf{R}^{d} \rightarrow \mathbf{C}\) has support contained in B(0; 1) and, for all x and \(x^{{\prime}}\) in \(\mathbf{R}^{d}\), \(\vert \phi (x) -\phi (x^{{\prime}})\vert \leq \vert x - x^{{\prime}}\vert ^{\alpha }\). We write \(\mathcal{C}_{\alpha,0}\) to mean the subspace of ϕ’s in \(\mathcal{C}_{\alpha }\) satisfying ∫ ϕ dx = 0. We call a cube Q dyadic if \(Q = [j_{1}2^{k},(j_{1} + 1)2^{k}) \times \cdots \times [j_{d}2^{k},(j_{d} + 1)2^{k})\) for some integers \(j_{1},\ldots,j_{d}\), and k, and we write ℓ(Q) for Q’s sidelength (which is \(2^{k}\)). We call the set of all dyadic cubes \(\mathcal{D}\). If \(Q \in \mathcal{D}\) we put \(z_{Q} \equiv (x_{Q},\ell(Q)) \in \mathbf{R}_{+}^{d+1}\), where \(x_{Q}\) is Q’s center. If \(\{\phi ^{(Q)}\}_{\mathcal{D}}\subset \mathcal{C}_{\alpha }\), then

$$\displaystyle{ \left \{\frac{\phi _{z_{Q}}^{(Q)}} {\sqrt{\vert Q\vert }} \right \}_{Q\in \mathcal{D}} }$$
(3)

is a family of Hölder-smooth functions, indexed over \(\mathcal{D}\), with each one dilated, translated, and (Lebesgue) measure-normalized to “fit” a dyadic cube Q. If each \(\phi ^{(Q)} \in \mathcal{C}_{\alpha,0}\) then it is easy to see that (3) is almost-orthogonal in \(L^{2}\), with an R (as in (2)) that only depends on α and d. If each \(\phi ^{(Q)}\) equals a fixed \(\phi \in \mathcal{C}_{\alpha,0}\) (a “mother wavelet”) then (3) is sometimes called a wavelet system [2].
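The key estimate behind the “easy to see” claim is standard; we sketch it in our notation (it is not spelled out in the original). Using the cancelation of \(\phi ^{(Q^{{\prime}})}\) and the Hölder continuity of \(\phi ^{(Q)}\), one checks that, when \(\ell(Q^{{\prime}}) \leq \ell (Q)\),

$$\displaystyle{\left \vert \left \langle \frac{\phi _{z_{Q}}^{(Q)}} {\sqrt{\vert Q\vert }}, \frac{\phi _{z_{Q^{{\prime}}}}^{(Q^{{\prime}})}} {\sqrt{\vert Q^{{\prime}}\vert }}\right \rangle \right \vert \leq C(d)\left (\frac{\ell(Q^{{\prime}})} {\ell(Q)} \right )^{\alpha +d/2},}$$

while the inner product vanishes unless \(B(x_{Q};\ell(Q))\) and \(B(x_{Q^{{\prime}}};\ell(Q^{{\prime}}))\) intersect. Summing these bounds by a (weighted) Schur-test argument yields (2) with R = R(α,d).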

We could also consider the collection

$$\displaystyle{ \left \{ \frac{\phi _{z_{Q}}^{(Q)}} {\sqrt{\mu (Q)}}\right \}_{Q\in \mathcal{D}}. }$$
(4)

In [9] the author showed that, if \(\mu \in A_{\infty }\) then, for every family \(\{\phi ^{(Q)}\}_{\mathcal{D}}\subset \mathcal{C}_{\alpha }\), the set (3) is almost-orthogonal in \(L^{2}\) if and only if (4) is almost-orthogonal in \(L^{2}(\mu )\). He showed that this result has a partial converse: if μ is a doubling measure and it is the case that, for every \(\{\phi ^{(Q)}\}_{\mathcal{D}} \subset \mathcal{C}_{\alpha }\), the \(L^{2}\) almost-orthogonality of (3) implies the \(L^{2}(\mu )\) almost-orthogonality of (4), then \(\mu \in A_{\infty }\).

In a later paper [10] the author strengthened the converse. We define a T-sequence to be a function ζ mapping from \(\mathcal{D}\) into \(\mathbf{R}_{+}^{d+1}\) such that \(\zeta (Q) \in \overline{T(Q)}\) for all \(Q \in \mathcal{D}\), where \(T(Q) \equiv Q \times [\ell(Q)/2,\ell(Q))\) is the top half of the Carleson box over Q. In [10] the author proved that if μ is doubling, and ϕ is any non-trivial, real, radial function in \(\mathcal{C}_{\alpha,0}\) such that, for all T-sequences ζ, the family

$$\displaystyle{ \left \{ \frac{\phi _{\zeta (Q)}} {\sqrt{\mu (Q)}}\right \}_{Q\in \mathcal{D}} }$$
(5)

is almost-orthogonal in \(L^{2}(\mu )\), then \(\mu \in A_{\infty }\).

The hypotheses that ϕ be real and radial are unnecessary. The “real” assumption is a computational convenience. The “radial” hypothesis (combined with non-triviality) simply ensures that \(\widehat{\phi }\) (the Fourier transform of ϕ) does not vanish identically on any ray emanating from the origin. It turns out that smoothness and cancelation are also red herrings, at least for showing necessity of \(\mu \in A_{\infty }\). In the current work we replace these hypotheses with a non-degeneracy condition that can be applied to subsets of \(L^{\infty }(B(0;1))\) (bounded functions with supports contained in B(0; 1)). This condition allows individual functions in the set to have Fourier transforms with bad directions. It only requires that no direction be bad for all of them. Precisely, we say that \(\{\phi _{k}\}_{1}^{n} \subset L^{\infty }(B(0;1))\) satisfies the collective non-degeneracy condition (CNDC) if there is no ray from the origin on which every \(\widehat{\phi }_{k}\) is identically 0.

Our main result is:

Theorem 1.1

Let μ be a doubling measure on \(\mathbf{R}^{d}\) and let \(\{\phi _{k}\}_{1}^{n} \subset L^{\infty }(B(0;1))\) satisfy CNDC. If, for every 1 ≤ k ≤ n and every T-sequence ζ, the set

$$\displaystyle{ \left \{\frac{(\phi _{k})_{\zeta (Q)}} {\sqrt{\mu (Q)}}\right \}_{Q\in \mathcal{D}} }$$
(6)

is almost-orthogonal in \(L^{2}(\mu )\), then \(\mu \in A_{\infty }\).

The meaning of the theorem seems to be: If μ is doubling and \(L^{2}(\mu )\) has a reasonable wavelet basis (one given by normalized translates/dilates of a finite set of mother wavelets), then μ must be \(A_{\infty }\).

The proof uses a slightly non-standard characterization of \(A_{\infty }\); or, to be more precise, dyadic \(A_{\infty }\). We recall that a measure ν belongs to dyadic \(A_{\infty }\) (in symbols: \(\nu \in A_{\infty }^{d}\)) if, for every ε > 0, there is a δ > 0 so that, for all dyadic cubes Q and all measurable E ⊂ Q, | E | ∕ | Q | < δ implies ν(E) ≤ ε ν(Q). Obviously \(A_{\infty }\subset A_{\infty }^{d}\). It is not hard to show that if \(\nu \in A_{\infty }^{d}\) and ν is doubling then \(\nu \in A_{\infty }\). To prove Theorem 1.1, it suffices to show that its hypotheses imply \(\mu \in A_{\infty }^{d}\).

We will call \(\{c_{Q}\}_{\mathcal{D}}\), a sequence of non-negative numbers indexed over \(\mathcal{D}\), a Carleson sequence if, for all \(Q^{{\prime}}\in \mathcal{D}\),

$$\displaystyle{ \sum _{{ Q\in \mathcal{D} \atop Q\subset Q^{{\prime}}} }c_{Q}\vert Q\vert \leq \vert Q^{{\prime}}\vert. }$$
(7)

This is the same as saying that, for every \(Q^{{\prime}}\in \mathcal{D}\),

$$\displaystyle{\int _{Q^{{\prime}}}\left (\sum _{{ Q\in \mathcal{D} \atop Q\subset Q^{{\prime}}} }c_{Q}\chi _{Q}\right )\,dx \leq \vert Q^{{\prime}}\vert.}$$

In section “The One-Dimensional, Dyadic Case” we show that \(\nu \in A_{\infty }^{d}\) if and only if there is a finite R so that, for all Carleson sequences \(\{c_{Q}\}_{\mathcal{D}}\) and all \(Q^{{\prime}}\in \mathcal{D}\),

$$\displaystyle{ \sum _{{ Q\in \mathcal{D} \atop Q\subset Q^{{\prime}}} }c_{Q}\nu (Q) \leq R\nu (Q^{{\prime}}); }$$
(8)

which, the reader will note, is the same as

$$\displaystyle{\int _{Q^{{\prime}}}\left (\sum _{{ Q\in \mathcal{D} \atop Q\subset Q^{{\prime}}} }c_{Q}\chi _{Q}\right )\,d\nu \leq R\nu (Q^{{\prime}}).}$$

We prove Theorem 1.1 by showing that, given its hypotheses, μ must satisfy (8), for some fixed R, for all \(Q^{{\prime}}\in \mathcal{D}\) and all Carleson sequences.

Aside from some technical lemmas, the proof turns on a simple observation. Suppose that \((\Omega,\mathcal{M},\nu )\) is a measure space, and \(f: \Omega \rightarrow \mathbf{C}\) satisfies

$$\displaystyle{ \int _{\Omega }\vert f\vert ^{2}\,d\nu \leq R\int _{ \Omega }\vert f\vert \,d\nu < \infty }$$
(9)

for some finite R. Then the Cauchy-Schwarz inequality implies

$$\displaystyle{ \int _{\Omega }\vert f\vert \,d\nu \leq R\nu (\Omega ). }$$
(10)
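Indeed, writing the two lines out (and noting exactly where the finiteness in (9) enters):

$$\displaystyle{\int _{\Omega }\vert f\vert \,d\nu \leq \left (\int _{\Omega }\vert f\vert ^{2}\,d\nu \right )^{1/2}\nu (\Omega )^{1/2} \leq \left (R\int _{\Omega }\vert f\vert \,d\nu \right )^{1/2}\nu (\Omega )^{1/2};}$$

if \(0 <\int _{\Omega }\vert f\vert \,d\nu < \infty \), dividing by \((\int _{\Omega }\vert f\vert \,d\nu )^{1/2}\) and squaring gives (10), and if \(\int _{\Omega }\vert f\vert \,d\nu = 0\) there is nothing to prove.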

(We need the ‘ < ’ in (9): consider f(x) = 1∕x on (0, 1) with Lebesgue measure.) In the proof of Theorem 1.1, \(\Omega \) will be a certain “nearly optimal” \(Q^{{\prime}}\in \mathcal{D}\) and f will essentially be a function of the form

$$\displaystyle{\sum _{{ Q\in \mathcal{D} \atop Q\subset Q^{{\prime}}} }c_{Q}\chi _{Q},}$$

with \(\{c_{Q}\}_{\mathcal{D}}\) a “nearly optimal” Carleson sequence, carefully defined to have the second inequality in (9). After some work, Theorem 1.1’s almost-orthogonality hypothesis will yield the first inequality in (9), giving us (10) (and (8)).

What seems to be going on here is a sneaky version of the self-improving (“John-Nirenberg”) property of BMO. Recall that \(f \in L_{loc}^{1}(\mathbf{R}^{d})\) is said to belong to BMO if

$$\displaystyle{ \sup _{{ Q\subset \mathbf{R}^{d} \atop Q\text{ a cube}} } \frac{1} {\vert Q\vert }\int _{Q}\vert f - f_{Q}\vert \,dx \equiv \Vert f\Vert _{{\ast}} < \infty, }$$
(11)

where \(f_{Q}\) denotes \(\frac{1} {\vert Q\vert }\int _{Q}f\,dx\), f’s average over Q. The John-Nirenberg theorem ([4], p. 144) states that there are positive constants \(c_{1}(d)\) and \(c_{2}(d)\) such that, if f ∈ BMO, then for all cubes Q and all numbers λ > 0,

$$\displaystyle{\left \vert \{x \in Q:\ \vert f(x) - f_{Q}\vert >\lambda \}\right \vert \leq c_{1}(d)\exp (-c_{2}(d)\lambda /\Vert f\Vert _{{\ast}})\vert Q\vert.}$$

This implies that if (11) holds then

$$\displaystyle{\sup _{{ Q\subset \mathbf{R}^{d} \atop Q\text{ a cube}} } \frac{1} {\vert Q\vert }\int _{Q}\vert f - f_{Q}\vert ^{2}\,dx \leq C\Vert f\Vert _{ {\ast}}^{2}}$$

for some C depending only on d. In other words,

$$\displaystyle{\sup _{{ Q\subset \mathbf{R}^{d} \atop Q\text{ a cube}} } \frac{1} {\vert Q\vert }\int _{Q}\vert f - f_{Q}\vert ^{2}\,dx \leq C\left (\sup _{{ Q\subset \mathbf{R}^{d} \atop Q\text{ a cube}} } \frac{1} {\vert Q\vert }\int _{Q}\vert f - f_{Q}\vert \,dx\right )^{2}:}$$

“the \(L^{1}\) norm controls the \(L^{2}\) norm.”
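The deduction is the usual layer-cake computation (recalled here for the reader’s convenience):

$$\displaystyle{\int _{Q}\vert f - f_{Q}\vert ^{2}\,dx =\int _{0}^{\infty }2\lambda \,\left \vert \{x \in Q:\ \vert f(x) - f_{Q}\vert >\lambda \}\right \vert \,d\lambda \leq c_{1}(d)\vert Q\vert \int _{0}^{\infty }2\lambda \,e^{-c_{2}(d)\lambda /\Vert f\Vert _{{\ast}}}\,d\lambda = \frac{2c_{1}(d)} {c_{2}(d)^{2}} \Vert f\Vert _{{\ast}}^{2}\vert Q\vert.}$$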

Because we will need it later, we recall that \(f \in L_{loc}^{1}(\mathbf{R}^{d})\) is said to belong to dyadic BMO (“\(f \in \mathrm{BMO}_{d}\)”) if the inequality (11) holds when the supremum is taken over all dyadic cubes. We write the resulting (finite) supremum as \(\Vert f\Vert _{{\ast},d}\). The analogous John-Nirenberg properties also hold for \(f \in \mathrm{BMO}_{d}\), with the cubes now required to belong to \(\mathcal{D}\).

In section “The One-Dimensional, Dyadic Case” we state and prove a dyadic version of our main result, hoping it will illuminate the main ideas in the proof of Theorem 1.1.

In section “Technical Lemmas” we prove some technical lemmas.

In section “Proof of Theorem 1.1.” we prove Theorem 1.1 and give, as a corollary, an application to wavelet representations of linear operators.

Notations. If A and B are positive quantities depending on some parameters, we write ‘A ∼ B’ (“A and B are comparable”) to mean that there are positive numbers c 1 and c 2 (“comparability constants”) so that

$$\displaystyle{ c_{1}A \leq B \leq c_{2}A; }$$
(12)

and, if c 1 and c 2 depend on parameters, they do not do so in a way that makes (12) trivial. We often use ‘C’ to denote a constant that might change from occurrence to occurrence; we will not always say how C changes or what it depends on. If E and F are sets, we write E ⊂ F to express E ⊆ F.

We will refer to “finite linear sums” of the form \(\sum _{\gamma \in \Gamma }\lambda _{\gamma }g_{\gamma }(x)\), where \(\{\lambda _{\gamma }\}_{\Gamma }\) is a set of numbers and \(\{g_{\gamma }\}_{\Gamma }\) is a set of functions, both indexed over an infinite set \(\Gamma \) (typically \(\mathcal{D}\)). “Finite linear sum” will mean a sum in which all but finitely many of the λ γ ’s are 0. Similarly, a “finite sequence” \(\{\lambda _{\gamma }\}_{\Gamma }\) indexed over \(\Gamma \) will be one in which all but finitely many λ γ ’s are 0.

We indicate the end of a proof with the symbol .

The One-Dimensional, Dyadic Case

First we prove our characterization (8) of \(A_{\infty }^{d}\) (see [7] and [11] for its original form).

Lemma 2.1

A Radon measure μ belongs to \(A_{\infty }^{d}\) if and only if there is a finite R so that (8) holds for all Carleson sequences \(\{c_{Q}\}_{\mathcal{D}}\) and all \(Q^{{\prime}}\in \mathcal{D}\).

Proof of Lemma 2.1

Suppose that \(\mu \in A_{\infty }^{d}\). Then μ is absolutely continuous, and we can write dμ = v dx, with \(v \in A_{\infty }^{d}\). Classical arguments (see [1]) show that v satisfies (1) with respect to dyadic cubes, for some p > 1. Let \(M_{d}(\cdot )\) denote the dyadic Hardy-Littlewood maximal operator:

$$\displaystyle{M_{d}(g)(x) \equiv \sup _{x\in Q\in \mathcal{D}}\frac{1} {\vert Q\vert }\int _{Q}\vert g(t)\vert \,dt.}$$

The \(L^{p}\)-boundedness of \(M_{d}(\cdot )\) and Hölder’s inequality imply, for any \(Q^{{\prime}}\in \mathcal{D}\),

$$\displaystyle\begin{array}{rcl} \frac{1} {\vert Q^{{\prime}}\vert }\int _{Q^{{\prime}}}M_{d}(\chi _{Q^{{\prime}}}v)\,dx& \leq & \left ( \frac{1} {\vert Q^{{\prime}}\vert }\int _{Q^{{\prime}}}(M_{d}(\chi _{Q^{{\prime}}}v))^{p}\,dx\right )^{1/p} {}\\ & \leq & C_{p}\left ( \frac{1} {\vert Q^{{\prime}}\vert }\int _{Q^{{\prime}}}(v(x))^{p}\,dx\right )^{1/p} {}\\ & \leq & \frac{C_{p}K_{p}} {\vert Q^{{\prime}}\vert } \int _{Q^{{\prime}}}v(x)\,dx; {}\\ \end{array}$$

i.e.,

$$\displaystyle{\int _{Q^{{\prime}}}M_{d}(\chi _{Q^{{\prime}}}v)\,dx \leq Cv(Q^{{\prime}})}$$

for all \(Q^{{\prime}}\in \mathcal{D}\). Now let \(\{c_{Q}\}_{\mathcal{D}}\) be a Carleson sequence. If \(Q^{{\prime}}\in \mathcal{D}\) then

$$\displaystyle{\sum _{Q\subset Q^{{\prime}}}c_{Q}v(Q) =\sum _{Q\subset Q^{{\prime}}}c_{Q}\vert Q\vert \left ( \frac{1} {\vert Q\vert }v(Q)\right ) \leq \int _{Q^{{\prime}}}M_{d}(\chi _{Q^{{\prime}}}v)\,dx,}$$

by standard tent-space arguments (see, e.g., Theorem 2 on page 59 of [4]). Therefore \(\mu \in A_{\infty }^{d}\) implies (8).

Suppose (8) holds. First we will show that μ is absolutely continuous with respect to Lebesgue measure. Then we will finish the lemma’s proof.

Suppose E is measurable, | E | = 0 and, without loss of generality, \(E \subset Q_{0} \in \mathcal{D}\). Cover E with countably many disjoint dyadic cubes \(Q_{1}^{j} \subset Q_{0}\) such that

$$\displaystyle{ \sum _{j}\vert Q_{1}^{j}\vert \leq (1/2)\vert Q_{ 0}\vert. }$$

Now, having chosen the cubes \(\{Q_{k}^{j}\}_{j}\), let \(\{Q_{k+1}^{j^{{\prime}} }\}_{j^{{\prime}}}\) be a family of disjoint dyadic cubes such that: a) \(E \subset \cup _{j^{{\prime}}}Q_{k+1}^{j^{{\prime}} }\); b) each \(Q_{k+1}^{j^{{\prime}} }\) is a subset of some \(Q_{k}^{j}\); c) for all \(Q_{k}^{j}\),

$$\displaystyle{ \sum _{Q_{ k+1}^{j^{{\prime}}}\subset Q_{k}^{j}}\vert Q_{k+1}^{j^{{\prime}} }\vert \leq (1/2)\vert Q_{k}^{j}\vert. }$$
(13)

We can do this for all k because | E |  = 0. Inequality (13) implies that, for any \(Q \in \mathcal{D}\),

$$\displaystyle{ \sum _{Q_{k}^{j}\subset Q}\vert Q_{k}^{j}\vert \leq 2\vert Q\vert. }$$
(14)

We give the quick (and well known) proof of (14). By induction, for any Q k j and any n ≥ 0,

$$\displaystyle{\sum _{Q_{ k+n}^{j^{{\prime}}}\subset Q_{k}^{j}}\vert Q_{k+n}^{j^{{\prime}} }\vert \leq 2^{-n}\vert Q_{ k}^{j}\vert,}$$

which implies that

$$\displaystyle{\sum _{Q_{ k^{{\prime}}}^{j^{{\prime}}}\subset Q_{k}^{j}}\vert Q_{k^{{\prime}}}^{j^{{\prime}} }\vert \leq 2\vert Q_{k}^{j}\vert }$$

for every \(Q_{k}^{j}\). If Q is arbitrary, let \(\{Q_{k^{{\ast}}}^{j^{{\ast}} }\}_{j^{{\ast}},k^{{\ast}}}\) be the maximal \(Q_{k}^{j}\)’s contained in Q. The cubes \(Q_{k^{{\ast}}}^{j^{{\ast}} }\) are disjoint. Therefore

$$\displaystyle{\sum _{Q_{k}^{j}\subset Q}\vert Q_{k}^{j}\vert =\sum _{ j^{{\ast}},k^{{\ast}}}\sum _{Q_{j}^{k}\subset Q_{k^{{\ast}}}^{j^{{\ast}}}}\vert Q_{j}^{k}\vert \leq 2\sum _{j^{{\ast}},k^{{\ast}}}\vert Q_{k^{{\ast}}}^{j^{{\ast}} }\vert \leq 2\vert Q\vert,}$$

proving (14).

Define:

$$\displaystyle{c_{Q} = \left \{\begin{array}{@{}l@{\quad }l@{}} 1/2\quad &\text{if}\ Q \in \{ Q_{k}^{j}\}_{ j,k}; \\ 0 \quad &\text{otherwise}. \end{array} \right.}$$

Inequalities (13) and (14) imply that \(\{c_{Q}\}_{\mathcal{D}}\) is Carleson. Therefore there is a finite R such that

$$\displaystyle{\sum _{j,k}(1/2)\mu (Q_{k}^{j}) \leq R\mu (Q_{ 0}) < \infty.}$$

But, because of a), for all N,

$$\displaystyle{N\mu (E) \leq \sum _{k=1}^{N}\sum _{ j}\mu (Q_{k}^{j}) \leq 2R\mu (Q_{ 0}),}$$

forcing μ(E) = 0.

The rest of the proof that \(\mu \in A_{\infty }^{d}\) is like what we just saw, only more careful. Let \(Q_{0} \in \mathcal{D}\), \(E \subset Q_{0}\), and \(\vert E\vert /\vert Q_{0}\vert <\eta \ll 1\). For k ≥ 1, let \(\{Q_{k}^{j}\}_{j}\) be the maximal dyadic subcubes of \(Q_{0}\) such that

$$\displaystyle{\frac{\vert E \cap Q_{k}^{j}\vert } {\vert Q_{k}^{j}\vert } > 2^{(d+1)k}\eta.}$$

These are the Calderón-Zygmund cubes, taken at “height” \(2^{(d+1)k}\eta\), of \(\chi _{E}\) relative to \(Q_{0}\). Because of their maximality, for each \(Q_{k}^{j}\),

$$\displaystyle{\frac{\vert E \cap Q_{k}^{j}\vert } {\vert Q_{k}^{j}\vert } \leq 2^{d}2^{(d+1)k}\eta = (1/2)2^{(d+1)(k+1)}\eta,}$$

which implies that every cube \(Q_{k+1}^{j^{{\prime}} }\) is contained in some Q k j, and that, for every Q k j,

$$\displaystyle{\sum _{Q_{ k+1}^{j^{{\prime}}}\subset Q_{k}^{j}}\vert Q_{k+1}^{j^{{\prime}} }\vert \leq (1/2)\vert Q_{k}^{j}\vert,}$$

which is the condition (13) we saw earlier. The same reasoning as before implies that, for all \(Q \in \mathcal{D}\),

$$\displaystyle{\sum _{Q_{j}^{k}\subset Q}\vert Q_{j}^{k}\vert \leq 2\vert Q\vert.}$$

Almost every point of E is a point of density. Therefore we will keep getting cubes \(Q_{k}^{j}\) as long as \(2^{(d+1)k}\eta\) is less than 1: there is a \(K_{0} \sim \log (1/\eta )\) such that, for all \(1 \leq k \leq K_{0}\), \(\vert E\setminus \cup _{j}Q_{k}^{j}\vert = 0\), and hence \(\mu (E\setminus \cup _{j}Q_{k}^{j}) = 0\). (The union \(\cup _{j}Q_{k}^{j}\) “almost contains” E.) Define:

$$\displaystyle{c_{Q} = \left \{\begin{array}{@{}l@{\quad }l@{}} 1/2\quad &\text{if}\ Q \in \{ Q_{j}^{k}\}_{ j,k}; \\ 0 \quad &\text{otherwise}. \end{array} \right.}$$

The sequence \(\{c_{Q}\}_{\mathcal{D}}\) is Carleson; therefore

$$\displaystyle{\sum _{Q\subset Q_{0}}c_{Q}\mu (Q) \leq R\mu (Q_{0}).}$$

But

$$\displaystyle{\sum _{Q\subset Q_{0}}c_{Q}\mu (Q) = (1/2)\sum _{j,k}\mu (Q_{j}^{k}) \geq (1/2)\sum _{ k=1}^{K_{0} }\sum _{j}\mu (Q_{k}^{j}) \geq (1/2)K_{ 0}\mu (E),}$$

because, for each k ≤ K 0, the part of E outside ∪ j Q k j has μ-measure 0. Thus,

$$\displaystyle{\mu (E) \leq \frac{2R} {K_{0}}\mu (Q_{0}),}$$

and \(2R/K_{0} \rightarrow 0\) as \(\eta \rightarrow 0^{+}\): \(\mu \in A_{\infty }^{d}\).

If \(I = [j2^{k},(j + 1)2^{k}) \subset \mathbf{R}\) is a dyadic interval, define \(I^{+} \equiv [2j2^{k-1},(2j + 1)2^{k-1})\) (I’s left half) and \(I^{-}\equiv [(2j + 1)2^{k-1},(2j + 2)2^{k-1})\) (I’s right half), and set

$$\displaystyle{h_{(I)} \equiv \chi _{I^{+}} -\chi _{I^{-}}.}$$

The functions \(\{h_{(I)}/\vert I\vert ^{1/2}\}_{I\in \mathcal{D}}\) are known as the Haar functions, which comprise an orthonormal basis for \(L^{2}(\mathbf{R})\).
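In particular, when μ is Lebesgue measure, the system (15) below is precisely the Haar basis, and orthonormality gives, for every finite sequence \(\{\lambda _{I}\}_{\mathcal{D}}\),

$$\displaystyle{\int \left \vert \sum _{\mathcal{D}}\lambda _{I} \frac{h_{(I)}} {\sqrt{\vert I\vert }}\right \vert ^{2}\,dx =\sum _{\mathcal{D}}\vert \lambda _{I}\vert ^{2};}$$

so in that case (15) is trivially almost-orthogonal, with R = 1. Theorem 2.2 is a converse to this observation.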

The dyadic analogue of Theorem 1.1 is

Theorem 2.2

Let μ be a non-trivial Radon measure on R . If

$$\displaystyle{ \left \{ \frac{h_{(I)}} {\sqrt{\mu (I)}}\right \}_{I\in \mathcal{D}} }$$
(15)

is almost-orthogonal in \(L^{2}(\mu )\) then \(\mu \in A_{\infty }^{d}\).

Proof of Theorem 2.2.

The reader might want to look back at (9) and (10).

Fix \(I_{0} \in \mathcal{D}\) and \(0 <\epsilon \ll \ell (I_{0})\). Let \(\mathcal{F}(I_{0},\epsilon )\) be the family of Carleson sequences \(\{c_{I}\}_{\mathcal{D}}\) such that \(c_{I} = 0\) if \(I\not\subset I_{0}\) or \(\ell(I) <\epsilon\). By compactness, there is a Carleson sequence \(\{\tilde{c}_{I}\}_{\mathcal{D}}\in \mathcal{F}(I_{0},\epsilon )\) such that

$$\displaystyle{\sum _{\mathcal{D}}\tilde{c}_{I}\mu (I) =\sup \left \{\sum _{\mathcal{D}}c_{I}\mu (I):\ \{ c_{I}\}_{\mathcal{D}}\in \mathcal{F}(I_{0},\epsilon )\right \} < \infty.}$$

Call the supremum L. Define

$$\displaystyle{f(x) \equiv \sum _{\mathcal{D}}\tilde{c}_{I}\chi _{I}(x) -\left (\sum _{\mathcal{D}}\tilde{c}_{I}\vert I\vert \right )\frac{\chi _{I_{0}}(x)} {\vert I_{0}\vert }.}$$

Notice that, because \(\{\tilde{c}_{I}\}_{\mathcal{D}}\) is Carleson,

$$\displaystyle{ \frac{1} {\vert I_{0}\vert }\left (\sum _{\mathcal{D}}\tilde{c}_{I}\vert I\vert \right ) \leq 1.}$$

The function f is supported on \(I_{0}\) and satisfies ∫ f dx = 0. Also, f belongs to \(\mathrm{BMO}_{d}\), with \(\Vert f\Vert _{{\ast},d} \leq 2\). Let us prove this fact. Take \(J \in \mathcal{D}\). If \(J \cap I_{0} =\emptyset\) we have nothing to prove. If \(I_{0} \subset J\) then \(f_{J} = 0\) and

$$\displaystyle{\int _{J}\vert f - f_{J}\vert \,dx \leq 2\sum _{\mathcal{D}}\tilde{c}_{I}\vert I\vert \leq 2\vert I_{0}\vert \leq 2\vert J\vert.}$$

If J ⊂ I 0 then

$$\displaystyle{\int _{J}\vert f - f_{J}\vert \,dx \leq 2\sum _{I\in \mathcal{D}:I\subset J}\tilde{c}_{I}\vert I\vert \leq 2\vert J\vert.}$$

By the John-Nirenberg theorem, there exists an absolute constant (which we call C) so that, for all \(J \in \mathcal{D}\),

$$\displaystyle{ \int _{J}\vert f - f_{J}\vert ^{2}\,dx =\sum _{ I\in \mathcal{D}:\ I\subset J}\frac{\vert \langle f,h_{(I)}\rangle \vert ^{2}} {\vert I\vert } \leq C\vert J\vert, }$$
(16)

where 〈⋅ , ⋅ 〉 denotes the usual (Lebesgue) \(L^{2}\) inner product. Because of how we defined f, the inner products \(\langle f,h_{(I)}\rangle = 0\) if \(I\not\subset I_{0}\) or \(\ell(I) <\epsilon\). Therefore the sequence defined by

$$\displaystyle{\alpha _{I} \equiv \frac{\vert \langle f,h_{(I)}\rangle \vert ^{2}} {\vert I\vert ^{2}} }$$

is a bounded multiple of a sequence from \(\mathcal{F}(I_{0},\epsilon )\), implying

$$\displaystyle{\sum _{\mathcal{D}}\frac{\vert \langle f,h_{(I)}\rangle \vert ^{2}} {\vert I\vert ^{2}} \mu (I) \leq CL,}$$

with C an absolute constant.

We can write

$$\displaystyle{f =\sum _{\mathcal{D}}\frac{\langle f,h_{(I)}\rangle } {\vert I\vert } h_{(I)},}$$

and this is an exact, finite sum, because of f’s special form. We rewrite it as

$$\displaystyle{f =\sum _{\mathcal{D}}\gamma _{I} \frac{h_{(I)}} {\sqrt{\mu (I)}},}$$

where

$$\displaystyle{\gamma _{I} =\langle f,h_{(I)}\rangle \frac{\sqrt{\mu (I)}} {\vert I\vert }.}$$

The L 2(μ) almost-orthogonality of (15) implies that

$$\displaystyle\begin{array}{rcl} \int \vert f\vert ^{2}\,d\mu \leq R\sum _{ \mathcal{D}}\vert \gamma _{I}\vert ^{2}& =& R\sum \vert \langle f,h_{ (I)}\rangle \vert ^{2}\frac{\mu (I)} {\vert I\vert ^{2}} {}\\ & =& R\sum _{\mathcal{D}}\frac{\vert \langle f,h_{(I)}\rangle \vert ^{2}} {\vert I\vert ^{2}} \mu (I) {}\\ & \leq & RCL. {}\\ \end{array}$$

But

$$\displaystyle{L =\sum _{\mathcal{D}}\tilde{c}_{I}\mu (I) =\int _{I_{0}}(f + c_{0})\,d\mu,}$$

where

$$\displaystyle{c_{0} = \frac{1} {\vert I_{0}\vert }\sum _{\mathcal{D}}\tilde{c}_{I}\vert I\vert \leq 1.}$$

Therefore

$$\displaystyle{\int \vert f\vert ^{2}\,d\mu \leq RC\left (\int \vert f\vert \,d\mu +\mu (I_{ 0})\right ),}$$

which implies

$$\displaystyle{\int \vert f\vert \,d\mu \leq C^{{\prime}}\mu (I_{ 0}),}$$

and

$$\displaystyle{ \sum _{\mathcal{D}}\tilde{c}_{I}\mu (I) \leq C^{{\prime\prime}}\left (\int \vert f\vert \,d\mu +\mu (I_{ 0})\right ) \leq \tilde{ C}\mu (I_{0}). }$$
(17)

The sequence \(\{\tilde{c}_{I}\}_{\mathcal{D}}\) is optimal for sequences from \(\mathcal{F}(I_{0},\epsilon )\). Therefore (17) holds for every sequence in \(\mathcal{F}(I_{0},\epsilon )\). But the bound holds independent of \(I_{0}\) and ε; therefore, by an obvious limiting argument, it holds for all Carleson sequences \(\{c_{I}\}_{\mathcal{D}}\). By Lemma 2.1, the measure μ belongs to \(A_{\infty }^{d}\).

Remark

We ask the reader to note how, in the interaction between (16) and (17), the John-Nirenberg theorem lets us bound an \(L^{2}\) norm by an \(L^{1}\) norm, which is the heart of the proof.

Technical Lemmas

The first lemma in this section says that, if every family of the form (6) is almost-orthogonal in L 2(μ), then these families must be, in an obvious sense, uniformly almost-orthogonal.

Lemma 3.1

Let \(\psi \in L^{\infty }(B(0;1))\). Suppose that, for every T-sequence ζ, there is a finite R = R(ζ,μ,ψ) such that, for all finite linear sums

$$\displaystyle{\sum _{\mathcal{D}}\lambda _{Q} \frac{\psi _{\zeta (Q)}} {\sqrt{\mu (Q)}},}$$

we have

$$\displaystyle{ \int \left \vert \sum _{\mathcal{D}}\lambda _{Q} \frac{\psi _{\zeta (Q)}} {\sqrt{\mu (Q)}}\right \vert ^{2}\,d\mu \leq R\sum _{ \mathcal{D}}\vert \lambda _{Q}\vert ^{2}. }$$
(18)

Then there is a finite \(\tilde{R} =\tilde{ R}(\mu,\psi )\) such that  (18) holds for all T-sequences ζ.

Proof of Lemma 3.1

For every T-sequence ζ, we can define a linear map \(L_{\zeta }:\ell ^{2}(\mathcal{D}) \rightarrow L^{2}(\mu )\) by

$$\displaystyle{ L_{\zeta }(\{\lambda _{Q}\}_{\mathcal{D}}) \equiv \sum _{\mathcal{D}}\lambda _{Q} \frac{\psi _{\zeta (Q)}} {\sqrt{\mu (Q)}}. }$$
(19)

Inequality (18) shows that the series in (19) converges unconditionally to an f ∈ L 2(μ), and that \(\int \vert f\vert ^{2}\,d\mu \leq R\sum _{\mathcal{D}}\vert \lambda _{Q}\vert ^{2}\). By the Uniform Boundedness Principle, if no universal \(\tilde{R}\) exists, then there is a sequence \(\{\lambda _{Q}\}_{\mathcal{D}}\in \ell^{2}(\mathcal{D})\) such that \(\sum _{\mathcal{D}}\vert \lambda _{Q}\vert ^{2} \leq 1\), and there is a sequence of T-sequences ζ k , such that

$$\displaystyle{ \int \left \vert \sum _{\mathcal{D}}\lambda _{Q} \frac{\psi _{\zeta _{k}(Q)}} {\sqrt{\mu (Q)}}\right \vert ^{2}\,d\mu \rightarrow \infty. }$$
(20)

We will patch together a T-sequence \(\tilde{\zeta }\) such that \(\{ \frac{\psi _{\tilde{\zeta }(Q)}} {\sqrt{\mu (Q)}}\}_{\mathcal{D}}\) is not almost-orthogonal. Fix the sequence \(\{\lambda _{Q}\}_{\mathcal{D}}\). If \(\mathcal{F}\subset \mathcal{D}\) is finite, there is an \(N = N(\mathcal{F})\) (such an N exists because each \(\psi _{\zeta (Q)}\), \(Q \in \mathcal{F}\), is supported in a fixed dilate of Q, so that \(\int \vert \psi _{\zeta (Q)}\vert ^{2}\,d\mu \leq \Vert \psi \Vert _{\infty }^{2}\,\mu (3Q)\) uniformly in ζ, while \(\mathcal{F}\) is finite) such that

$$\displaystyle{\int \left \vert \sum _{Q\in \mathcal{F}}\lambda _{Q} \frac{\psi _{\zeta (Q)}} {\sqrt{\mu (Q)}}\right \vert ^{2}\,d\mu \leq N}$$

for all T-sequences ζ. Thus, because of (20), we know that, if \(\mathcal{F}_{0} \subset \mathcal{D}\) is finite and R is any large number, there is a finite subset \(\mathcal{F}_{1} \subset \mathcal{D}\), disjoint from \(\mathcal{F}_{0}\), and there is a T-sequence ζ 1, such that

$$\displaystyle{\int \left \vert \sum _{Q\in \mathcal{F}_{1}}\lambda _{Q} \frac{\psi _{\zeta _{1}(Q)}} {\sqrt{\mu (Q)}}\right \vert ^{2}\,d\mu > R.}$$

Let \(R_{k} \rightarrow \infty \). Let \(\mathcal{F}_{1} \subset \mathcal{D}\) be a finite subset and \(\zeta _{1}\) a T-sequence such that

$$\displaystyle{\int \left \vert \sum _{Q\in \mathcal{F}_{1}}\lambda _{Q} \frac{\psi _{\zeta _{1}(Q)}} {\sqrt{\mu (Q)}}\right \vert ^{2}\,d\mu > R_{ 1}.}$$

Having defined \(\mathcal{F}_{1}\), \(\mathcal{F}_{2}\), … , \(\mathcal{F}_{n}\), let \(\mathcal{F}_{n+1} \subset \mathcal{D}\) be a finite subset disjoint from \(\cup _{1}^{n}\mathcal{F}_{k}\), and ζ n+1 a T-sequence such that

$$\displaystyle{\int \left \vert \sum _{Q\in \mathcal{F}_{n+1}}\lambda _{Q}\frac{\psi _{\zeta _{n+1}(Q)}} {\sqrt{\mu (Q)}}\right \vert ^{2}\,d\mu > R_{ n+1}.}$$

Define \(\tilde{\zeta }: \mathcal{D}\rightarrow \mathbf{R}_{+}^{d+1}\) by

$$\displaystyle{\tilde{\zeta }(Q) = \left \{\begin{array}{@{}l@{\quad }l@{}} \zeta _{k}(Q)\quad &\text{if}\ Q \in \mathcal{F}_{k}; \\ z_{Q} \quad &\text{if}\ Q\notin \cup _{k}\mathcal{F}_{k}. \end{array} \right.}$$

Then \(\tilde{\zeta }\) is a T-sequence for which (18) fails.

The proof of Theorem 1.1 uses a general form of the Calderón reproducing formula. Our approach is based on ideas and methods of Frazier, Jawerth, and Weiss [3]. We gratefully acknowledge their influence and inspiration.

Recall that if \(\psi \in \mathcal{C}_{\alpha,0}\) is real, radial, non-trivial, and normalized so that

$$\displaystyle{\int _{0}^{\infty }\vert \widehat{\psi }(y\xi )\vert ^{2}\,\frac{dy} {y} = 1}$$

for all ξ ≠ 0, then, if \(f \in \cup _{1<p<\infty }L^{p}(\mathbf{R}^{d})\), we have

$$\displaystyle{f(x) =\int _{\mathbf{R}_{+}^{d+1}}(f {\ast} y^{-d}\psi _{ (0,y)}(t))\,y^{-d}\psi _{ (0,y)}(x - t)\,\frac{dt\,dy} {y} }$$

in various senses [8, 11]. To be consistent with the notation in the introduction, we have written “\(y^{-d}\psi _{(0,y)}\)” in place of the more traditional “\(\psi _{y}\)”. We will continue to follow this convention.

We define \(\Phi (x)\) to be the inverse Fourier transform of \(\exp (-\vert \xi \vert ^{2} -\vert \xi \vert ^{-2})\). We notice that \(\Phi \) belongs to the Schwartz class \(\mathcal{S}\), and that \(\widehat{\Phi }(\xi )\) and all of \(\widehat{\Phi }\)’s derivatives vanish to infinite order at the origin.

It is important that \(\widehat{\Phi }(\xi ) > 0\) on all of \(\mathbf{R}^{d}\setminus \{0\}\).

Lemma 3.2

Suppose that \(\{\phi _{k}\}_{1}^{n} \subset L^{\infty }(B(0;1))\) satisfies CNDC. For \(\xi \in \mathbf{R}^{d}\setminus \{0\}\) define

$$\displaystyle{ G(\xi ) \equiv \int _{0}^{\infty }\widehat{\Phi }(y\xi )\left (\sum _{ 1}^{n}\vert \widehat{\phi }_{ k}(y\xi )\vert ^{2}\right )\,\frac{dy} {y}. }$$
(21)

The function G(ξ) is infinitely differentiable on \(\mathbf{R}^{d}\setminus \{0\}\) and homogeneous of degree 0: G(tξ) = G(ξ) for all t > 0. There are positive numbers \(c_{1}\) and \(c_{2}\) such that \(c_{1} \leq G(\xi ) \leq c_{2}\) for all ξ ≠ 0.

Proof of lemma.

The homogeneity is obvious. Every \(\widehat{\phi }_{k}\) is infinitely differentiable, and \(D^{\alpha }\widehat{\phi }_{k} \in L^{\infty }\) for every k and multi-index α. The function \(\widehat{\Phi }\) is also infinitely differentiable, and, for all α, \(D^{\alpha }\widehat{\Phi }\) vanishes rapidly at 0 and infinity. These imply that G is infinitely differentiable. The CNDC implies that G(ξ) never vanishes on \(S^{d-1} \equiv \{\xi:\ \vert \xi \vert = 1\}\): if \(G(\xi _{0}) = 0\), then, since the integrand in (21) is non-negative and \(\widehat{\Phi }(y\xi _{0}) > 0\) for every y > 0, each \(\widehat{\phi }_{k}\) would (by continuity) vanish identically on the ray \(\{y\xi _{0}:\ y > 0\}\), contradicting CNDC. The smoothness of G and the compactness of \(S^{d-1}\) imply that G lies between two positive constants there, hence on all of \(\mathbf{R}^{d}\setminus \{0\}\).

Now, given \(\{\phi _{k}\}_{1}^{n} \subset L^{\infty }(B(0;1))\) satisfying CNDC, and G as defined by (21), we set

$$\displaystyle{ m(\xi ) \equiv \frac{1} {G(\xi )} }$$
(22)

for ξ ≠ 0, and undefined at the origin. By standard arguments ([4], p. 26), the Fourier multiplier operators given by

$$\displaystyle{\widehat{T_{G}f}(\xi ) \equiv G(\xi )\widehat{f}(\xi )}$$

and

$$\displaystyle{\widehat{T_{m}f}(\xi ) \equiv m(\xi )\widehat{f}(\xi ),}$$

initially defined for \(f \in \mathcal{C}_{0}^{\infty }(\mathbf{R}^{d})\), extend to bounded operators on \(L^{p}(\mathbf{R}^{d})\) for every \(1 < p < \infty \). On these domains they are inverses of each other: \(T_{G}T_{m} = T_{m}T_{G} = I\), the identity.

For each \(\phi _{k}\), define \(\tilde{\phi }_{k}(x) \equiv \overline{\phi _{k}(-x)}\), and recall that \(\widehat{\tilde{\phi }_{k}}(\xi ) = \overline{\widehat{\phi }_{k}}(\xi )\). If \(f \in L^{2}(\mathbf{R}^{d})\) then

$$\displaystyle{T_{G}f =\sum _{ 1}^{n}\int _{ \mathbf{R}_{+}^{d+1}}(f {\ast} y^{-d}\Phi _{ (0,y)} {\ast} (y^{-d}\tilde{\phi }_{ k})_{(0,y)}(t))\,(y^{-d}\phi _{ k})_{(0,y)}(x - t)\,\frac{dt\,dy} {y},}$$

where we interpret each integral as

$$\displaystyle{\lim _{{ \epsilon \searrow 0 \atop R\nearrow \infty } }\int _{\epsilon }^{R}\left (\int _{\mathbf{ R}^{d}}(f {\ast} y^{-d}\Phi _{ (0,y)} {\ast} (y^{-d}\tilde{\phi }_{ k})_{(0,y)}(t))\,(y^{-d}\phi _{ k})_{(0,y)}(x - t)\,dt\right )\,\frac{dy} {y},}$$

with the limit existing in \(L^{2}\). As we shall see, if \(f \in \mathcal{C}_{0}^{\infty }(\mathbf{R}^{d})\), the limit also exists pointwise in x, with the integral being, in a natural sense, absolutely convergent.
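The formula for \(T_{G}f\) amounts to taking Fourier transforms in x; the routine computation (recorded here for convenience), using \(\widehat{y^{-d}g_{(0,y)}}(\xi ) =\widehat{ g}(y\xi )\) and \(\widehat{\tilde{\phi }_{k}} = \overline{\widehat{\phi }_{k}}\), is

$$\displaystyle{\widehat{T_{G}f}(\xi ) =\widehat{ f}(\xi )\int _{0}^{\infty }\widehat{\Phi }(y\xi )\left (\sum _{1}^{n}\vert \widehat{\phi }_{ k}(y\xi )\vert ^{2}\right )\,\frac{dy} {y} = G(\xi )\widehat{f}(\xi ).}$$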

Because T m and T G are inverses of each other, if \(f \in \mathcal{C}_{0}^{\infty }(\mathbf{R}^{d})\),

$$\displaystyle{f =\sum _{ 1}^{n}\int _{ \mathbf{R}_{+}^{d+1}}(f {\ast} T_{m}(y^{-d}\Phi _{ (0,y)}) {\ast} (y^{-d}\tilde{\phi }_{ k})_{(0,y)}(t))\,(y^{-d}\phi _{ k})_{(0,y)}(x - t)\,\frac{dt\,dy} {y},}$$

where the integrals converge (in the above sense) in L 2. Let us define

$$\displaystyle{\Psi (x) \equiv T_{m}(\Phi )(x).}$$

With this notation, we can rewrite the preceding integral formula as

$$\displaystyle{f =\sum _{ 1}^{n}\int _{ \mathbf{R}_{+}^{d+1}}(f {\ast} y^{-d}\Psi _{ (0,y)} {\ast} (y^{-d}\tilde{\phi }_{ k})_{(0,y)}(t))\,(y^{-d}\phi _{ k})_{(0,y)}(x - t)\,\frac{dt\,dy} {y}.}$$

(We have used the dilation-invariance of T m .)

A look at \(\Psi \)’s Fourier transform shows that \(\Psi \in \mathcal{S}\) and \(\int \Psi \,dx = 0\). The same are true of \(\Psi _{k}\), which we define as

$$\displaystyle{\Psi _{k}(x) \equiv \Psi {\ast}\tilde{\phi }_{k}(x).}$$

With this convention we can compress our integral formula to

$$\displaystyle{ f =\sum _{ 1}^{n}\int _{ \mathbf{R}_{+}^{d+1}}(f {\ast} y^{-d}(\Psi _{ k})_{(0,y)}(t))\,(y^{-d}\phi _{ k})_{(0,y)}(x - t)\,\frac{dt\,dy} {y}. }$$
(23)

We now prove two lemmas relating to (23).

Lemma 3.3

Suppose that \(\Gamma \in \mathcal{S}\) , \(\int \Gamma \,dx = 0\) , and \(\gamma \in L^{\infty }(B(0;1))\). There is a \(C = C(\Gamma,\gamma )\) such that, if \(f \in \mathcal{C}_{0}^{\infty }(\mathbf{R}^{d})\) satisfies \(\vert \nabla f\vert \leq A\) pointwise and B is any positive number, then

$$\displaystyle{\int _{0}^{B}\left (\int _{\mathbf{ R}^{d}}\left \vert (f {\ast} y^{-d}\Gamma _{ (0,y)}(t))\,(y^{-d}\gamma )_{ (0,y)}(x - t)\right \vert \,dt\right )\,\frac{dy} {y} \leq CAB.}$$

Remark

In our applications of Lemma 3.3, \(\Gamma = \Psi _{k}\), γ = ϕ k , and AB ∼ 1.

Proof of Lemma 3.3

The function \(\Gamma \) satisfies

$$\displaystyle\begin{array}{rcl} \vert \Gamma (x)\vert & \leq & C(1 +\vert x\vert )^{-d-2} {}\\ \vert \nabla \Gamma (x)\vert & \leq & C(1 +\vert x\vert )^{-d-3} {}\\ \int \Gamma (x)\,dx& =& 0, {}\\ \end{array}$$

for a fixed constant C. A lemma of Uchiyama [6] says that we can decompose \(\Gamma \) into a rapidly converging sum of dilates of smooth, compactly supported functions, with integrals equal to 0. Precisely:

$$\displaystyle{\Gamma (x) = C\sum _{j=0}^{\infty }2^{-j(d+2)}(F_{ j})_{(0,2^{j})}(x),}$$

for an appropriate C, where each F j has support contained in B(0; 1) and satisfies

$$\displaystyle\begin{array}{rcl} \Vert F_{j}\Vert _{\infty }& \leq & C {}\\ \int F_{j}\,dx& =& 0. {}\\ \end{array}$$

(Uchiyama’s lemma actually yields \(\Vert \nabla F_{j}\Vert _{\infty }\leq C\), but we don’t need that.) The function \((F_{j})_{(0,2^{j})}\) has support contained in \(B(0;2^{j})\) and the function \(((F_{j})_{(0,2^{j})})_{(0,y)}\) has support contained in \(B(0;2^{j}y)\). The smoothness of f and the cancelation in \(F_{j}\) (write \(f {\ast} k(t) =\int (f(t - s) - f(t))\,k(s)\,ds\) whenever \(\int k\,dx = 0\)) imply that

$$\displaystyle\begin{array}{rcl} \vert f {\ast} y^{-d}((F_{ j})_{(0,2^{j})})_{(0,y)}(t)\vert & \leq & CA2^{j}y\Vert y^{-d}((F_{ j})_{(0,2^{j})})_{(0,y)}\Vert _{1} {}\\ & \leq & CA2^{j}y2^{jd} = CA2^{j(d+1)}y {}\\ \end{array}$$

for any t, and therefore

$$\displaystyle\begin{array}{rcl} \vert f {\ast} y^{-d}\Gamma _{ (0,y)}(t)\vert & \leq & CA\sum _{j=0}^{\infty }2^{-j(d+2)}2^{j(d+1)}y {}\\ & =& CAy. {}\\ \end{array}$$

Since ∥ γ ∥ 1 ≤ C(γ),

$$\displaystyle{\int _{\mathbf{R}^{d}}\left \vert (f {\ast} y^{-d}\Gamma _{ (0,y)}(t))\,y^{-d}\gamma _{ (0,y)}(x - t)\right \vert \,dt \leq CAy,}$$

implying

$$\displaystyle\begin{array}{rcl} \int _{0}^{B}\left (\int _{\mathbf{ R}^{d}}\left \vert (f {\ast} y^{-d}\Gamma _{ (0,y)}(t))\,y^{-d}\gamma _{ (0,y)}(x - t)\right \vert \,dt\right )\,\frac{dy} {y} & \leq & \int _{0}^{B}(CAy)\,\frac{dy} {y} {}\\ & =& CAB, {}\\ \end{array}$$

proving the lemma.

The next lemma uses a standard definition and one derived from it.

Definition 3.4

If Q ⊂ R d is a cube then we set \(\widehat{Q} \equiv Q \times (0,\ell(Q)) \subset \mathbf{R}_{+}^{d+1}\) (sometimes called the “Carleson box” above Q) and \(R(Q) \equiv \{ (t,y) \in \mathbf{R}_{+}^{d+1}:\ d((t,y),\widehat{Q}) \geq \ell (Q)\}\), where d(⋅ , ⋅ ) denotes the usual Euclidean distance to a set in R + d+1.

Lemma 3.5

Let \(\Gamma \in \mathcal{S}\) and \(\gamma \in L^{\infty }(B(0;1))\). There is a constant \(C = C(\Gamma,\gamma )\) such that if \(f \in L^{1}(\mathbf{R}^{d})\) and the support of f is contained in a cube Q then

$$\displaystyle{\int _{R(Q)}\left \vert (f {\ast} y^{-d}\Gamma _{ (0,y)}(t))\,y^{-d}\gamma _{ (0,y)}(x - t)\right \vert \,\frac{dt\,dy} {y} \leq \frac{C} {\vert Q\vert }\int \vert f\vert \,dt}$$

for all x ∈ Q.

Proof of Lemma 3.5.

For \(j = 0,1,2,\ldots \), define \(R_{j}(Q) \equiv \{ (t,y) \in R(Q):\ 2^{j}\ell(Q) \leq d((t,y),\widehat{Q}) < 2^{j+1}\ell(Q)\}\), and observe that \(R(Q) = \cup _{0}^{\infty }R_{j}(Q)\). Since γ has its support contained in B(0; 1), \(\gamma _{(0,y)}(x - t) =\gamma (\frac{x-t} {y} )\) can be non-zero only if \(\vert x - t\vert < y\). Therefore there is a positive c = c(d) such that, if x ∈ Q and \((t,y) \in R_{j}(Q)\), \(\gamma _{(0,y)}(x - t)\) will be zero unless \(y > c2^{j}\ell(Q)\). If \(y > c2^{j}\ell(Q)\), Hölder’s inequality implies

$$\displaystyle{\vert f {\ast} y^{-d}\Gamma _{ (0,y)}(t)\vert \leq C(2^{j}\ell(Q))^{-d}\Vert f\Vert _{ 1}}$$

and

$$\displaystyle{\int _{\mathbf{R}^{d}}\left \vert (f {\ast} y^{-d}\Gamma _{ (0,y)}(t))\,y^{-d}\gamma _{ (0,y)}(x - t)\right \vert \,dt \leq C(2^{j}\ell(Q))^{-d}\Vert f\Vert _{ 1}.}$$

If \((t,y) \in R_{j}(Q)\) then \(y < 2^{j+2}\ell(Q)\). Therefore:

$$\displaystyle\begin{array}{rcl} & \int _{R_{j}(Q)}\left \vert (f {\ast} y^{-d}\Gamma _{(0,y)}(t))\,y^{-d}\gamma _{(0,y)}(x - t)\right \vert \,\frac{dt\,dy} {y} & {}\\ & \leq C\int _{c2^{j}\ell(Q)}^{2^{j+2}\ell(Q) }\left (\int _{\mathbf{R}^{d}}\left \vert (f {\ast} y^{-d}\Gamma _{(0,y)}(t))\,y^{-d}\gamma _{(0,y)}(x - t)\right \vert \,dt\right )\,\frac{dy} {y} & {}\\ & \leq C(2^{j}\ell(Q))^{-d}\Vert f\Vert _{1}. & {}\\ \end{array}$$

Summing over j finishes the proof.

Proof of Theorem 1.1.

For the rest of this section, μ will be a fixed doubling measure.

The proof of Theorem 1.1 works by rewriting each of the n summands in (23) as an average of sums of the form

$$\displaystyle{\sum _{\mathcal{D}}\lambda _{Q}\frac{(\phi _{k})_{\zeta (Q)}} {\sqrt{\mu (Q)}}}$$

where ζ is a T-sequence. We now describe how this rewriting will go. If \(Q = [j_{1}2^{k},(j_{1} + 1)2^{k}) \times \cdots \times [j_{d}2^{k},(j_{d} + 1)2^{k}) \in \mathcal{D}\) we set \(t_{Q} \equiv (j_{1}2^{k},j_{2}2^{k},\ldots,j_{d}2^{k})\), the “left-most corner” of Q. Define \(V _{0} \equiv [0,1)^{d}\), the “unit” dyadic cube. If \(Q \in \mathcal{D}\), we define a bijective mapping \(\sigma (Q,\cdot,\cdot ): T(V _{0}) \rightarrow T(Q)\) by

$$\displaystyle{\sigma (Q,\tau,\eta ) \equiv (t_{Q} +\ell (Q)\tau,\ell(Q)\eta ).}$$

We point out some properties of this mapping. If \(g: T(Q) \rightarrow \mathbf{C}\) is measurable we can define \(h: T(V _{0}) \rightarrow \mathbf{C}\) by h(τ, η) ≡ g(σ(Q, τ, η)). By the change-of-variables formula (note that \(dt =\vert Q\vert \,d\tau \) and \(dy/y = d\eta /\eta \)),

$$\displaystyle{ \int _{T(Q)}g(t,y)\,\frac{dt\,dy} {y} =\vert Q\vert \int _{T(V _{0})}h(\tau,\eta )\,\frac{d\tau \,d\eta } {\eta }. }$$
(24)

We can write

$$\displaystyle{\int _{T(Q)}(f {\ast} y^{-d}(\Psi _{ k})_{(0,y)}(t))\,y^{-d}(\phi _{ k})_{(0,y)}(x - t)\,\frac{dt\,dy} {y} }$$

as

$$\displaystyle{\int _{T(Q)}y^{-2d}\langle f,\overline{(\Psi _{ k})_{(t,y)}}\rangle \,(\phi _{k})_{(t,y)}(x)\,\frac{dt\,dy} {y},}$$

where 〈⋅ , ⋅ 〉 is the ordinary L 2 inner product. Because of (24), this is equal to

$$\displaystyle\begin{array}{rcl} & \vert Q\vert \int _{T(V _{0})}(\ell(Q)\eta )^{-2d}\langle f,\overline{(\Psi _{k})_{\sigma (Q,\tau,\eta )}}\rangle \,(\phi _{k})_{\sigma (Q,\tau,\eta )}(x)\,\frac{d\tau \,d\eta } {\eta } & {}\\ & =\vert Q\vert ^{-1}\int _{T(V _{0})}\eta ^{-2d}\langle f,\overline{(\Psi _{k})_{\sigma (Q,\tau,\eta )}}\rangle \,(\phi _{k})_{\sigma (Q,\tau,\eta )}(x)\,\frac{d\tau \,d\eta } {\eta }.& {}\\ \end{array}$$

Therefore, we can formally rewrite the integral in (23) as:

$$\displaystyle\begin{array}{rcl} & \sum _{\mathcal{D}}\int _{T(Q)}y^{-2d}\langle f,\overline{(\Psi _{k})_{(t,y)}}\rangle \,(\phi _{k})_{(t,y)}(x)\,\frac{dt\,dy} {y} & \\ & =\int _{T(V _{0})}\left (\sum _{\mathcal{D}}\vert Q\vert ^{-1}\langle f,\overline{(\Psi _{k})_{\sigma (Q,\tau,\eta )}}\rangle \,(\phi _{k})_{\sigma (Q,\tau,\eta )}(x)\right )\eta ^{-2d}\,\frac{d\tau \,d\eta } {\eta }.&{}\end{array}$$
(25)

Of course, if the summation only runs over a finite set of Q’s (as it will for us), the equality is literal.

In proving Theorem 1.1, it will be more convenient to write (25) as

$$\displaystyle{\int _{T(V _{0})}\sum _{\mathcal{D}}\left [\left (\vert Q\vert ^{-1}\langle f,\overline{(\Psi _{ k})_{\sigma (Q,\tau,\eta )}}\rangle \sqrt{\mu (Q)}\right )\,\frac{(\phi _{k})_{\sigma (Q,\tau,\eta )}(x)} {\sqrt{\mu (Q)}} \right ]\eta ^{-2d}\,\frac{d\tau \,d\eta } {\eta }.}$$

Proof of Theorem 1.1

We fix, once and for all, a function \(b \in \mathcal{C}_{0}^{\infty }(\mathbf{R}^{d})\) that is non-negative, has support contained in B(0; 1∕2), and satisfies ∫ b dx = 1. Recall our definition of \(z_{Q} \equiv (x_{Q},\ell(Q))\), where \(x_{Q}\) is Q’s center and ℓ(Q) is Q’s sidelength. If \(Q \subset \mathbf{R}^{d}\) is any cube then \(b_{z_{Q}}\) is supported in Q and satisfies \(\int b_{z_{Q}}\,dx =\vert Q\vert\). If ν is any doubling measure then

$$\displaystyle{ \int b_{z_{Q}}\,d\nu \sim \nu (Q), }$$
(26)

with comparability constants depending on b and ν. If \(Q_{0} \in \mathcal{D}\) and \(2^{j} \ll 1\), we define \(\mathcal{F}(Q_{0},2^{j})\) to be the family of Carleson sequences \(\{c_{Q}\}_{\mathcal{D}}\) such that \(c_{Q} = 0\) if \(Q\not\subset Q_{0}\) or \(\ell(Q) < 2^{j}\ell(Q_{0})\). It is clear that the set of numbers

$$\displaystyle{ \left \{\mu (Q_{0})^{-1}\sum _{ \mathcal{D}}c_{Q}\mu (Q):\ Q_{0} \in \mathcal{D},\ \{c_{Q}\}_{\mathcal{D}}\in \mathcal{F}(Q_{0},2^{j})\right \} }$$
(27)

is bounded above by \(1 +\vert j\vert \). Call the actual supremum L(j). Theorem 1.1 will follow once we show that \(\sup _{j}L(j) < \infty \).
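The bound \(1 +\vert j\vert \) is elementary: taking \(Q^{{\prime}} = Q\) in (7) shows that every \(c_{Q} \leq 1\), and only the \(\vert j\vert + 1\) generations of dyadic subcubes of \(Q_{0}\) with \(2^{j}\ell(Q_{0}) \leq \ell (Q) \leq \ell (Q_{0})\) can carry non-zero coefficients, so that

$$\displaystyle{\sum _{\mathcal{D}}c_{Q}\mu (Q) \leq \sum _{i=0}^{\vert j\vert }\ \sum _{{ Q\subset Q_{0} \atop \ell(Q)=2^{-i}\ell(Q_{0})} }\mu (Q) = (1 +\vert j\vert )\mu (Q_{0}).}$$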

Fix j. There exist a \(Q_{0} \in \mathcal{D}\) and a Carleson sequence \(\{\tilde{c}_{Q}\}_{\mathcal{D}}\in \mathcal{F}(Q_{0},2^{j})\) such that

$$\displaystyle{\mu (Q_{0})^{-1}\sum _{ \mathcal{D}}\tilde{c}_{Q}\mu (Q) \geq (1/2)L(j).}$$

Fix Q 0 and \(\{\tilde{c}_{Q}\}\). Theorem 1.1 will follow if we show that \(\mu (Q_{0})^{-1}\sum _{\mathcal{D}}\tilde{c}_{Q}\mu (Q)\) is bounded by a number independent of Q 0 and j.

Define

$$\displaystyle{f(x) \equiv \sum _{\mathcal{D}}\tilde{c}_{Q}b_{z_{Q}}(x).}$$

Because of (26),

$$\displaystyle{ \int f\,d\mu \sim \sum _{\mathcal{D}}\tilde{c}_{Q}\mu (Q) \sim L(j)\mu (Q_{0}). }$$
(28)

As with Theorem 2.2, the “game” now is to show that

$$\displaystyle{ \int \vert f\vert ^{2}\,d\mu \leq C\int \vert f\vert \,d\mu, }$$
(29)

for some \(C < \infty \) independent of \(Q_{0}\) and j; because, as we have seen, the Cauchy-Schwarz inequality will imply

$$\displaystyle{\int \vert f\vert \,d\mu \leq C\mu (Q_{0});}$$

which, with (28), will yield

$$\displaystyle{L(j) \leq C,}$$

for some absolute C independent of Q 0 and j.

Because of (28), (29) will follow from

$$\displaystyle{\int \vert f\vert ^{2}\,d\mu \leq CL(j)\mu (Q_{ 0}).}$$

It is obvious that f is supported inside \(Q_{0}\) and satisfies \(\int \vert f\vert \,dx \leq \vert Q_{0}\vert \). It will be important to us that f ∈ BMO, with a BMO norm bounded by a constant depending only on b and d; so let us prove this. Write \(f =\sum _{k}f_{k}\), where

$$\displaystyle{f_{k}(x) =\sum _{Q:\ \ell(Q)=2^{k}}\tilde{c}_{Q}b_{z_{Q}}(x).}$$

Each \(f_{k}\) is infinitely differentiable and satisfies: (i) \(\Vert f_{k}\Vert _{\infty }\leq C\); and (ii) \(\Vert \nabla f_{k}\Vert _{\infty }\leq C2^{-k}\), with C depending only on b. We note that inequality (ii) implies \(\vert \nabla f\vert \leq C(2^{j}\ell(Q_{0}))^{-1}\) pointwise, because only the \(f_{k}\) with \(2^{k} \geq 2^{j}\ell(Q_{0})\) occur in the sum.

Let \(Q^{{\prime}}\) be a cube and write

$$\displaystyle{f =\sum _{k:\,2^{k}\geq \ell(Q^{{\prime}})}f_{k} +\sum _{k:\,2^{k}<\ell(Q^{{\prime}})}f_{k} \equiv F_{1} + F_{2}.}$$

We can cover \(Q^{{\prime}}\) with C(d) congruent dyadic cubes \(\{Q_{j}^{{\ast}}\}_{1}^{C(d)}\) such that \((1/2)\ell(Q^{{\prime}}) \leq \ell (Q_{j}^{{\ast}}) < \ell (Q^{{\prime}})\), which implies that, if \(Q \in \mathcal{D}\) and \(\ell(Q) < \ell (Q^{{\prime}})\), then \(\ell(Q) \leq \ell (Q_{j}^{{\ast}})\) for every j; hence, if \(Q \cap Q^{{\prime}}\not =\emptyset\) then \(Q \subset Q_{j}^{{\ast}}\) for some j. Then:

$$\displaystyle\begin{array}{rcl} \int _{Q^{{\prime}}}\vert F_{2}(x)\vert \,dx& =& \int _{Q^{{\prime}}}\left (\sum _{Q:\ell(Q)<\ell(Q^{{\prime}})}\tilde{c}_{Q}b_{z_{Q}}(x)\right )\,dx {}\\ & \leq & \sum _{j=1}^{C(d)}\int _{ Q_{j}^{{\ast}}}\left (\sum _{Q:Q\subset Q_{j}^{{\ast}}}\tilde{c}_{Q}b_{z_{Q}}(x)\right )\,dx {}\\ & \leq & \sum _{j=1}^{C(d)}\sum _{ Q:Q\subset Q_{j}^{{\ast}}}\tilde{c}_{Q}\vert Q\vert {}\\ & \leq & \sum _{1}^{C(d)}\vert Q_{ j}^{{\ast}}\vert {}\\ &\leq & C\vert Q^{{\prime}}\vert. {}\\ \end{array}$$

On the other hand, \(\vert \nabla F_{1}(x)\vert \leq C/\ell(Q^{{\prime}})\), implying that

$$\displaystyle{\int _{Q^{{\prime}}}\vert F_{1}(x) - (F_{1})_{Q^{{\prime}}}\vert \,dx \leq C\vert Q^{{\prime}}\vert.}$$

Therefore f belongs to BMO, with a norm ≤ C.

We invoke a standard fact about BMO ([4], p. 159): If h ∈ BMO, \(\Gamma \in \mathcal{S}\), and \(\int \Gamma \,dx = 0\), then, for all cubes Q ⊂ R d,

$$\displaystyle{ \frac{1} {\vert Q\vert }\int _{\widehat{Q}}\vert h {\ast} y^{-d}\Gamma _{ (0,y)}(t)\vert ^{2}\,\frac{dt\,dy} {y} \leq C\Vert h\Vert _{{\ast}}^{2},}$$

where the constant C only depends on \(\Gamma \). This implies that, for h ∈ BMO, the sequence of numbers \(\{c_{Q}\}_{\mathcal{D}}\) defined by

$$\displaystyle{c_{Q} \equiv \frac{1} {\vert Q\vert }\int _{T(Q)}\vert h {\ast} y^{-d}\Gamma _{ (0,y)}(t)\vert ^{2}\,\frac{dt\,dy} {y} }$$

is a bounded multiple of a Carleson sequence.
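The Carleson property is immediate here: the regions T(Q) (the top halves of the Carleson boxes), \(Q \in \mathcal{D}\), \(Q \subset Q^{{\prime}}\), tile \(\widehat{Q^{{\prime}}}\), so that

$$\displaystyle{\sum _{{ Q\in \mathcal{D} \atop Q\subset Q^{{\prime}}} }c_{Q}\vert Q\vert =\int _{\widehat{Q^{{\prime}}}}\vert h {\ast} y^{-d}\Gamma _{ (0,y)}(t)\vert ^{2}\,\frac{dt\,dy} {y} \leq C\Vert h\Vert _{{\ast}}^{2}\vert Q^{{\prime}}\vert.}$$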

We can write f = g 1 + g 2 + g 3 + g 4, where

$$\displaystyle\begin{array}{rcl} g_{1}(x)& \equiv & \sum _{1}^{n}\int _{ \{(t,y):\ y<2^{j-1}\ell(Q_{0})\}}(f {\ast} y^{-d}(\Psi _{ k})_{(0,y)}(t))\,y^{-d}(\phi _{ k})_{(0,y)}(x - t)\,\frac{dt\,dy} {y} {}\\ g_{2}(x)& \equiv & \sum _{1}^{n}\int _{ \{(t,y):\ 2^{j-1}\ell(Q_{0})\leq y<\ell(Q_{0})\}}(f {\ast} y^{-d}(\Psi _{ k})_{(0,y)}(t))\,y^{-d}(\phi _{ k})_{(0,y)}(x - t)\,\frac{dt\,dy} {y} {}\\ g_{3}(x)& \equiv & \sum _{1}^{n}\int _{ \{(t,y):\ \ell(Q_{0})\leq y\leq 3\ell(Q_{0})\}}(f {\ast} y^{-d}(\Psi _{ k})_{(0,y)}(t))\,y^{-d}(\phi _{ k})_{(0,y)}(x - t)\,\frac{dt\,dy} {y} {}\\ g_{4}(x)& \equiv & \sum _{1}^{n}\int _{ \{(t,y):\ y>3\ell(Q_{0})\}}(f {\ast} y^{-d}(\Psi _{ k})_{(0,y)}(t))\,y^{-d}(\phi _{ k})_{(0,y)}(x - t)\,\frac{dt\,dy} {y}. {}\\ \end{array}$$

Lemmas 3.3 and 3.5 imply that the integrals on the right-hand sides all converge absolutely. By Lemma 3.5, \(g_{4}\) is pointwise bounded by \(C\vert Q_{0}\vert ^{-1}\int \vert f\vert \,dx \leq C\) for \(x \in Q_{0}\), and it is easy to see that the same bound holds for \(g_{3}\). Since \(f \in \mathcal{C}_{0}^{\infty }(\mathbf{R}^{d})\) and \(\vert \nabla f\vert \leq C(2^{j}\ell(Q_{0}))^{-1}\) pointwise, Lemma 3.3 implies that \(\vert g_{1}\vert \) is bounded by an absolute constant in \(Q_{0}\). Thus, for \(x \in Q_{0}\), we may write \(f = g_{2} + G\), where | G | ≤ C, and C does not depend on \(Q_{0}\) or j.

By Lemma 3.1, there is an R such that, for every 1 ≤ k ≤ n, every T-sequence ζ, and every finite sequence \(\{\lambda _{Q}\}_{\mathcal{D}}\subset \mathbf{C}\),

$$\displaystyle{\int \left \vert \sum _{\mathcal{D}}\lambda _{Q}\frac{(\phi _{k})_{\zeta (Q)}} {\sqrt{\mu (Q)}}\right \vert ^{2}\,d\mu \leq R\sum _{ \mathcal{D}}\vert \lambda _{Q}\vert ^{2}.}$$

We claim that

$$\displaystyle{ \int _{Q_{0}}\vert g_{2}\vert ^{2}\,d\mu \leq CRL(j)\mu (Q_{ 0}) }$$
(30)

for a constant C depending on μ and d, but not on Q 0 or j. Since \(\int _{Q_{0}}\vert G\vert ^{2}\,d\mu \leq C\mu (Q_{0})\), proving (30) will finish the proof.

There exist N = N(d) dyadic cubes \(\{Q_{i}\}_{1}^{N}\), congruent to \(Q_{0}\), such that \(\overline{Q_{i}} \cap \overline{Q_{0}}\not =\emptyset\). If \(x \in Q_{0}\) then the support restriction on the \(\phi _{k}\)’s implies that

$$\displaystyle{g_{2}(x) =\sum _{ k=1}^{n}\int _{ \{(t,y):\ t\in \cup _{0}^{N}Q_{i},\ 2^{j-1}\ell(Q_{0})\leq y<\ell(Q_{0})\}}(f{\ast}y^{-d}(\Psi _{ k})_{(0,y)}(t))\,y^{-d}(\phi _{ k})_{(0,y)}(x-t)\,\frac{dt\,dy} {y}.}$$

For each 0 ≤ i ≤ N and 1 ≤ k ≤ n, define

$$\displaystyle{\gamma _{i,k}(x) \equiv \int _{\{(t,y):\ t\in Q_{i},\ 2^{j-1}\ell(Q_{0})\leq y<\ell(Q_{0})\}}(f {\ast} y^{-d}(\Psi _{ k})_{(0,y)}(t))\,y^{-d}(\phi _{ k})_{(0,y)}(x - t)\,\frac{dt\,dy} {y}.}$$

Inequality (30) will follow once we show

$$\displaystyle{ \int \vert \gamma _{i,k}\vert ^{2}\,d\mu \leq CRL(j)\mu (Q_{ i}), }$$
(31)

because μ’s doubling property implies μ(Q i ) ≤ C μ(Q 0).

For 0 ≤ i ≤ N, we define \(\mathcal{F}_{i}\) to be the (finite!) family of dyadic subcubes Q of \(Q_{i}\) such that \(2^{j}\ell(Q_{i}) \leq \ell (Q) \leq \ell (Q_{i})\). We can then write:

$$\displaystyle{\gamma _{i,k}(x) =\sum _{Q\in \mathcal{F}_{i}}\int _{T(Q)}(f {\ast} y^{-d}(\Psi _{ k})_{(0,y)}(t))\,y^{-d}(\phi _{ k})_{(0,y)}(x - t)\,\frac{dt\,dy} {y}.}$$

We rewrite the last equation as

$$\displaystyle{\gamma _{i,k}(x) =\int _{T(V _{0})}\sum _{Q\in \mathcal{F}_{i}}\left [\left (\vert Q\vert ^{-1}\langle f,\overline{(\Psi _{ k})_{\sigma (Q,\tau,\eta )}}\rangle \sqrt{\mu (Q)}\right )\,\frac{(\phi _{k})_{\sigma (Q,\tau,\eta )}(x)} {\sqrt{\mu (Q)}} \right ]\eta ^{-2d}\,\frac{d\tau \,d\eta } {\eta }.}$$

For each (τ, η) ∈ T(V 0),

$$\displaystyle{\int \left \vert \sum _{Q\in \mathcal{F}_{i}}\left [\vert Q\vert ^{-1}\langle f,\overline{(\Psi _{ k})_{\sigma (Q,\tau,\eta )}}\rangle \sqrt{\mu (Q)}\right ]\,\frac{(\phi _{k})_{\sigma (Q,\tau,\eta )}(x)} {\sqrt{\mu (Q)}} \right \vert ^{2}\,d\mu }$$

is less than or equal to R times

$$\displaystyle{\sum _{Q\in \mathcal{F}_{i}}\left \vert \vert Q\vert ^{-1}\langle f,\overline{(\Psi _{ k})_{\sigma (Q,\tau,\eta )}}\rangle \sqrt{\mu (Q)}\right \vert ^{2} =\sum _{ Q\in \mathcal{F}_{i}}\left (\frac{\vert \langle f,(\Psi _{k})_{\sigma (Q,\tau,\eta )}\rangle \vert ^{2}} {\vert Q\vert ^{2}} \right )\mu (Q).}$$

Thus, by the generalized Minkowski inequality,

$$\displaystyle{\left (\int \vert \gamma _{i,k}\vert ^{2}\,d\mu \right )^{1/2} \leq R^{1/2}\int _{ T(V _{0})}\left (\sum _{Q\in \mathcal{F}_{i}}\left (\frac{\vert \langle f,\overline{(\Psi _{k})_{\sigma (Q,\tau,\eta )}}\rangle \vert ^{2}} {\vert Q\vert ^{2}} \right )\mu (Q)\right )^{1/2}\,\eta ^{-2d}\,\frac{d\tau \,d\eta } {\eta }.}$$

But \((T(V _{0}),\eta ^{-2d}\,\frac{d\tau \,d\eta } {\eta } )\) is a finite measure space (with a total measure only depending on d); therefore,

$$\displaystyle{\int _{T(V _{0})}\left (\sum _{Q\in \mathcal{F}_{i}}\left (\frac{\vert \langle f,\overline{(\Psi _{k})_{\sigma (Q,\tau,\eta )}}\rangle \vert ^{2}} {\vert Q\vert ^{2}} \right )\mu (Q)\right )^{1/2}\,\eta ^{-2d}\,\frac{d\tau \,d\eta } {\eta } }$$

is less than or equal to a dimensional constant times

$$\displaystyle{\left (\sum _{Q\in \mathcal{F}_{i}}\int _{T(V _{0})}\left [\left (\frac{\vert \langle f,\overline{(\Psi _{k})_{\sigma (Q,\tau,\eta )}}\rangle \vert ^{2}} {\vert Q\vert ^{2}} \right )\mu (Q)\right ]\,\eta ^{-2d}\,\frac{d\tau \,d\eta } {\eta } \right )^{1/2};}$$

which implies that

$$\displaystyle\begin{array}{rcl} \int \vert \gamma _{i,k}\vert ^{2}\,d\mu & \leq & CR\sum _{ Q\in \mathcal{F}_{i}}\int _{T(V _{0})}\left [\left (\frac{\vert \langle f,\overline{(\Psi _{k})_{\sigma (Q,\tau,\eta )}}\rangle \vert ^{2}} {\vert Q\vert ^{2}} \right )\mu (Q)\right ]\,\eta ^{-2d}\,\frac{d\tau \,d\eta } {\eta } {}\\ & =& CR\sum _{Q\in \mathcal{F}_{i}}\left (\int _{T(V _{0})}\left (\frac{\vert \langle f,\overline{(\Psi _{k})_{\sigma (Q,\tau,\eta )}}\rangle \vert ^{2}} {\vert Q\vert ^{2}} \right )\,\eta ^{-2d}\,\frac{d\tau \,d\eta } {\eta } \right )\mu (Q). {}\\ \end{array}$$

But, for each \(Q \in \mathcal{F}_{i}\), by the change of variables formula (24),

$$\displaystyle{\int _{T(V _{0})}\left (\frac{\vert \langle f,\overline{(\Psi _{k})_{\sigma (Q,\tau,\eta )}}\rangle \vert ^{2}} {\vert Q\vert ^{2}} \right )\,\eta ^{-2d}\,\frac{d\tau \,d\eta } {\eta } =\vert Q\vert ^{-1}\int _{ T(Q)}\vert f {\ast} y^{-d}(\Psi _{ k})_{(0,y)}(t)\vert ^{2}\,\frac{dt\,dy} {y};}$$

and, because f ∈ BMO, with \(\Vert f\Vert _{{\ast}}\leq C\), the sequence defined by

$$\displaystyle{c_{Q,i} \equiv \vert Q\vert ^{-1}\int _{ T(Q)}\vert f {\ast} y^{-d}(\Psi _{ k})_{(0,y)}(t)\vert ^{2}\,\frac{dt\,dy} {y} }$$

is a bounded multiple of a Carleson sequence. By our definition of L(j),

$$\displaystyle{\sum _{Q\in \mathcal{F}_{i}}c_{Q,i}\mu (Q) \leq CL(j)\mu (Q_{i})}$$

(because all of the Q’s occurring in the sum satisfy \(\ell(Q) \geq 2^{j}\ell(Q_{i})\) and are contained in \(Q_{i}\)). Therefore

$$\displaystyle\begin{array}{rcl} \int \vert \gamma _{i,k}\vert ^{2}\,d\mu & \leq & CR\sum _{ Q\in \mathcal{F}_{i}}\left (\vert Q\vert ^{-1}\int _{ T(Q)}\vert f {\ast} y^{-d}(\Psi _{ k})_{(0,y)}(t)\vert ^{2}\,\frac{dt\,dy} {y} \right )\mu (Q) {}\\ & =& CR\sum _{Q\in \mathcal{F}_{i}}c_{Q,i}\mu (Q) {}\\ & \leq & CRL(j)\mu (Q_{i}), {}\\ \end{array}$$

finishing the proof of Theorem 1.1.

We present an easy corollary of Theorem 1.1. We first note that, by duality, if \(\{\psi _{k}\}_{k} \subset L^{2}(\nu )\) satisfies (2), then, for all \(f \in L^{2}(\nu )\),

$$\displaystyle{ \sum _{k}\vert \langle f,\psi _{k}\rangle _{\nu }\vert ^{2} \leq R\int \vert f\vert ^{2}\,d\nu }$$
(32)

(where we use \(\langle \cdot,\cdot \rangle _{\nu }\) to denote the inner product in \(L^{2}(\nu )\)); and, conversely, if \(\{\psi _{k}\}_{k} \subset L^{2}(\nu )\) satisfies (32), it satisfies (2).
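For completeness, one direction of this duality is the usual computation (the suprema below are over finite sequences \(\{\lambda _{k}\}\) with \(\sum \vert \lambda _{k}\vert ^{2} \leq 1\)):

$$\displaystyle{\left (\sum _{k}\vert \langle f,\psi _{k}\rangle _{\nu }\vert ^{2}\right )^{1/2} =\sup \left \vert \left \langle f,\sum _{k}\overline{\lambda _{k}}\,\psi _{k}\right \rangle _{\nu }\right \vert \leq \Vert f\Vert _{L^{2}(\nu )}\,\sup \left \Vert \sum _{k}\overline{\lambda _{k}}\,\psi _{k}\right \Vert _{L^{2}(\nu )} \leq R^{1/2}\left (\int \vert f\vert ^{2}\,d\nu \right )^{1/2};}$$

squaring gives (32), and the converse implication is proved the same way.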

In [9] the author looked at linear operators of the form

$$\displaystyle{ \sum _{\mathcal{D}}\frac{\langle f,\psi _{\zeta (Q)}^{(Q)}\rangle _{\nu }} {\nu (Q)} \phi _{\zeta ^{{\prime}}(Q)}^{(Q)}(x), }$$
(33)

for a doubling measure ν, sequences of functions \(\{\psi ^{(Q)}\}_{\mathcal{D}}\) and \(\{\phi ^{(Q)}\}_{\mathcal{D}}\) in \(\mathcal{C}_{\alpha }\), and T-sequences ζ and \(\zeta ^{{\prime}}\). One can think of (33) as a simple model for a wavelet representation of a Calderón-Zygmund singular integral operator (see [5] and references cited there). By Littlewood-Paley theory, if the \(\psi ^{(Q)}\)’s and \(\phi ^{(Q)}\)’s lie in \(\mathcal{C}_{\alpha,0}\) and \(\nu \in A_{\infty }\) then (33) defines a bounded linear operator on \(L^{2}(\nu )\) in the following sense: If \(\mathcal{F}_{1} \subset \mathcal{F}_{2} \subset \mathcal{F}_{3} \subset \cdots \) is any increasing sequence of finite subsets of \(\mathcal{D}\) such that \(\mathcal{D} = \cup _{i}\mathcal{F}_{i}\) then, for all \(f \in L^{2}(\nu )\),

$$\displaystyle{ T(f)(x) \equiv \lim _{i\rightarrow \infty }\sum _{Q\in \mathcal{F}_{i}}\frac{\langle f,\psi _{\zeta (Q)}^{(Q)}\rangle _{\nu }} {\nu (Q)} \phi _{\zeta ^{{\prime}}(Q)}^{(Q)}(x) }$$
(34)

exists in \(L^{2}(\nu )\) and \(\Vert T(f)\Vert _{L^{2}(\nu )} \leq C(\nu,\alpha )\Vert f\Vert _{L^{2}(\nu )}\). We present a partial converse:

Corollary 4.1

Suppose that μ is doubling. Let \(\{\phi _{k}\}_{1}^{n} \subset L^{\infty }(B(0;1))\) satisfy CNDC and suppose that, for each 1 ≤ k ≤ n and each T-sequence ζ, the series

$$\displaystyle{ \sum _{\mathcal{D}}\frac{\langle f,(\phi _{k})_{\zeta (Q)}\rangle _{\mu }} {\mu (Q)} (\phi _{k})_{\zeta (Q)}(x), }$$
(35)

defined as in ( 34 ), yields an \(L^{2}(\mu )\) bounded linear operator. Then \(\mu \in A_{\infty }\).

Proof of Corollary 4.1

Call the operator defined by (35) T. If T is \(L^{2}(\mu )\) bounded then \(\vert \int T(f)\,\overline{f}\,d\mu \vert \leq C\int \vert f\vert ^{2}\,d\mu\) for all \(f \in L^{2}(\mu )\). But

$$\displaystyle{\int T(f)\,\overline{f}\,d\mu =\sum _{\mathcal{D}}\frac{\vert \langle f,(\phi _{k})_{\zeta (Q)}\rangle _{\mu }\vert ^{2}} {\mu (Q)}.}$$

Therefore, by the converse to (32), (6) is almost-orthogonal in \(L^{2}(\mu )\) for every 1 ≤ k ≤ n and every T-sequence ζ, and Theorem 1.1 implies \(\mu \in A_{\infty }\). QED.

Remark

We believe the most natural application of Corollary 4.1 is this. Let \(\psi \in \mathcal{C}_{\alpha,0}\) be real, radial, and non-trivial. If μ is doubling and the series

$$\displaystyle{\sum _{\mathcal{D}}\frac{\langle f,\psi _{\zeta (Q)}\rangle _{\mu }} {\mu (Q)} \psi _{\zeta (Q)}(x)}$$

(with the sum defined as above) gives an \(L^{2}(\mu )\) bounded operator for every T-sequence ζ, then \(\mu \in A_{\infty }\).