1 Introduction

On \(\mathbb {R}^n,\) operators of the form \(\widehat{Tf}(\xi )=m_{\theta , \beta }(\xi ) \widehat{f}(\xi ),\) where \(m_{\theta , \beta }(\xi )=|\xi |^{-\frac{\theta \beta }{2}} e^{i|\xi |^{\theta }}\chi _{\{|\xi |>1\}},\) are known as oscillating multipliers. They have been studied extensively, starting with the pioneering works of Hardy, Hirschman [26], and Wainger [50]. Charles Fefferman proved the crucial weak type (1, 1) estimates in [22], and the sharp range of \(L^p\) estimates was obtained by Fefferman and Stein in [23]. These operators were also studied by Hörmander [27]. We also refer to the articles [42, 43, 48] for results in the context of the wave operators, that is, \(\theta =1\). Weighted estimates for oscillating multipliers on \(\mathbb {R}^n\) were initiated by Chanillo [13] and extended in [15]. In [16], weighted end-point estimates were obtained by Chanillo, Kurtz, and Sampson. In this article, we confine ourselves to weighted \(L^p\) estimates for these operators on stratified Lie groups. To that end, let us recall the following preliminaries.

Let \({\mathfrak g}\) be a d-dimensional, graded nilpotent Lie algebra so that

$$\begin{aligned} {\mathfrak g} \ = \ \bigoplus \limits _{i=1}^s \, {\mathfrak g}_i \end{aligned}$$

as a vector space and \([{\mathfrak g}_i, {\mathfrak g}_j] \subset {\mathfrak g}_{i+j}\) for all \(i, j\). Suppose that \({\mathfrak g}_1\) generates \({\mathfrak g}\) as a Lie algebra. The associated connected, simply connected Lie group G is called a stratified Lie group. The homogeneous dimension of G is defined as \(Q \ = \ \sum _j j \, \textrm{dim}({\mathfrak g}_j).\) Consider the sublaplacian \({\mathcal {L}} = - \sum _k X_k^2\) on G, where \(\{X_k\}\) is a basis for \({\mathfrak g}_1\). For any Borel measurable function m on \({\mathbb {R}}_{+} = [0,\infty )\), we can define the spectral multiplier operator

$$\begin{aligned} m(\sqrt{{\mathcal {L}}}) \ = \ \int _0^{\infty } m(\lambda ) \, d E_{\lambda } \end{aligned}$$

where \(\{E_{\lambda }\}_{\lambda \ge 0}\) is the spectral resolution of \(\sqrt{{\mathcal {L}}}\). Since the exponential map is a global diffeomorphism, the Haar measure on G can be identified with the d-dimensional Lebesgue measure. In this setting, an analogue of the classical Hörmander–Mikhlin multiplier theorem was established in the seminal work of Christ [18], and fundamental end-point estimates were obtained by Mauceri–Meda in [41]; see also [44,45,46] for other influential works. In recent times there have been many important works in this context; we refer to [3, 7,8,9,10, 14, 38,39,40, 49]. We are inspired by the recent work [14], where the authors introduced a general class of multipliers covering oscillating multipliers and obtained important end-point estimates. On more general graded groups, Fourier multiplier operators have been studied in [11, 12, 24, 25, 32] and the references therein. Throughout this article, for any Borel measurable set R and \(1\le p<\infty ,\) \(\left\langle f \right\rangle _{p, R}\) denotes \((\frac{1}{|R|}\int _{R}|f|^p)^{1/p}.\) Also, a family of sets \(\mathcal {S}\) is called \(\eta \)-sparse if for each \(R\in \mathcal {S}\) there exists \(E_{R}\subset R\) such that \(|E_R|\ge \eta |R|\) and the sets \(\{E_R\}_{R\in \mathcal {S}}\) are pairwise disjoint. Now we state our main result.
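Before doing so, let us record the simplest nonabelian example of a stratified group for orientation (a standard illustration, not needed in the sequel). The Heisenberg group \(\mathbb {H}^1\) is the stratified group associated with

$$\begin{aligned} {\mathfrak g}={\mathfrak g}_1\oplus {\mathfrak g}_2, \qquad {\mathfrak g}_1=\textrm{span}\{X, Y\}, \qquad {\mathfrak g}_2=\textrm{span}\{T\}, \qquad [X, Y]=T, \end{aligned}$$

so that \(d=3,\) the sublaplacian is \({\mathcal {L}}=-(X^2+Y^2),\) and the homogeneous dimension is \(Q=1\cdot 2+2\cdot 1=4>3=d.\)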

1.1 Statement of main results

Motivated by [14], we introduce the following class of multipliers. Let \(\nu >1\) be a number which will be specified later. Also, let \(\phi \) be a smooth function on \((0, \infty ),\) supported on \(\{\nu ^{-1}\le \lambda \le \nu \}\) and satisfying \(\sum _{j} \phi (\nu ^{-j}\lambda )=1\) for all \(\lambda >0.\) Define \(m^j(\lambda ):= m(\nu ^j \lambda ) \phi (\lambda ).\)

Definition 1.1

Let \(\theta \in \mathbb {R}\setminus \{0\}\) and \(\beta \ge 0.\) We say \(m\in \mathscr {M}(\theta , \beta )\) if m is supported in the set \(\{\lambda \in \mathbb {R}_{+}: \lambda ^{\theta }\ge 1\}\) and

$$\begin{aligned}&\sup _{j \theta >0} \nu ^{j\theta \beta /2}\Vert m^{j}\Vert _{L^{\infty }(\mathbb {R}_{+})}<\infty ,\end{aligned}$$
(1)
$$\begin{aligned} \text {and}\ \ {}&\sup _{j \theta >0} \nu ^{-j\theta (2s-\beta )/2}\Vert m^{j}\Vert _{L^2_{s}(\mathbb {R}_{+})}<\infty \ \ \ \text {for all}\ \ s\in \mathbb {N}. \end{aligned}$$
(2)

Example 1.2

Let \(\theta \in \mathbb {R}\setminus \{0\}.\) Define \( m_{\theta , \beta }(\lambda ):=e^{i\lambda ^{\theta }}{\lambda ^{-\frac{\theta \beta }{2}}}\chi _{\{\lambda \in [0, \infty ): \lambda ^{\theta }\ge 1\}}.\) Then it is easy to see that \(m_{\theta , \beta }\in \mathscr {M}(\theta , \beta ).\)
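Indeed, for \(j\theta >0\) a direct computation (sketched here for the reader's convenience) gives

$$\begin{aligned} m_{\theta , \beta }^{j}(\lambda )=m_{\theta , \beta }(\nu ^{j}\lambda )\phi (\lambda )=e^{i\nu ^{j\theta }\lambda ^{\theta }}\, \nu ^{-\frac{j\theta \beta }{2}}\lambda ^{-\frac{\theta \beta }{2}}\, \phi (\lambda )\, \chi _{\{(\nu ^{j}\lambda )^{\theta }\ge 1\}}. \end{aligned}$$

Since \(\lambda \simeq 1\) on the support of \(\phi ,\) this yields \(\Vert m^{j}_{\theta , \beta }\Vert _{L^{\infty }(\mathbb {R}_{+})}\lesssim \nu ^{-j\theta \beta /2},\) which is (1). Moreover, each derivative in \(\lambda \) of the exponential factor produces at most a factor of order \(\nu ^{j\theta },\) so \(\Vert m^{j}_{\theta , \beta }\Vert _{L^2_{s}(\mathbb {R}_{+})}\lesssim \nu ^{j\theta s}\, \nu ^{-j\theta \beta /2}=\nu ^{j\theta (2s-\beta )/2},\) which is (2).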

Now we state our main sparse domination principle for the multiplier class \(\mathscr {M}(\theta , \beta ).\)

Theorem 1.3

Let \(\theta \in \mathbb {R}\setminus \{0\},\) \(\beta \ge 0,\) and \(m\in \mathscr {M}(\theta , \beta ).\) Then there exist sparse families \(\mathcal {S}\) and \(\mathcal {S}'\) such that for all compactly supported bounded functions \(f, g\) we have

$$\begin{aligned} |\langle m(\sqrt{\mathcal {L}})f, g \rangle |\lesssim _{\theta , \beta , r_{1}, r_{2}}\sum _{R\in \mathcal {S}} |R|\langle f \rangle _{r_1, R}\left\langle g\right\rangle _{r_2', R}\\ \text {and}\,\, |\langle m(\sqrt{\mathcal {L}})f, g \rangle |\lesssim _{\theta , \beta , r_{1}, r_{2}}\sum _{R\in \mathcal {S'}} |R|\langle f \rangle _{r_2', R}\left\langle g\right\rangle _{r_1, R}, \end{aligned}$$

where \(r_1, r_2\) satisfy

$$\begin{aligned}&\left( \frac{1}{r_1}-\frac{1}{2}\right) <\frac{\beta }{2Q},\ \ \ 1\le r_1\le r_2\le 2, \end{aligned}$$
(3)
$$\begin{aligned}&\text {or}\ \left( \frac{1}{r_1}-\frac{1}{r_2}\right) <\frac{\beta }{2Q},\ \ \ 1\le r_1\le 2\le r_2\le r'_{1}. \end{aligned}$$
(4)

As a special case, we obtain the following corollary.

Corollary 1.4

Let \(\theta \in \mathbb {R}\setminus \{0\}\) and \(\beta \ge 0.\) Then there exist sparse families \(\mathcal {S}\) and \(\mathcal {S}'\) such that for all compactly supported bounded functions \(f, g\) we have

$$\begin{aligned} |\langle m_{\theta , \beta }(\sqrt{\mathcal {L}})f, g \rangle |\lesssim _{\theta , \beta , r_{1}, r_{2}}\sum _{R\in \mathcal {S}} |R|\langle f \rangle _{r_1, R}\left\langle g\right\rangle _{r_2', R}\\ \text {and}\,\, |\langle m_{\theta , \beta }(\sqrt{\mathcal {L}})f, g \rangle |\lesssim _{\theta , \beta , r_{1}, r_{2}}\sum _{R\in \mathcal {S'}} |R|\langle f \rangle _{r_2', R}\left\langle g\right\rangle _{r_1, R}, \end{aligned}$$

where \(r_1, r_2\) satisfy either condition (3) or (4).

The motivation for proving such an estimate arises from recent works [2, 4, 5, 19, 20, 30, 31, 33,34,35,36], where sparse domination is achieved for several classical operators in harmonic analysis in various settings. The key importance of such an estimate lies in the fact that one can deduce a range of quantitative weighted estimates depending on the decay parameter \(\beta \); we state them below. The Muckenhoupt classes of weights (\(A_p\)) and the reverse Hölder classes (\(RH_{q}\)) are defined in detail in Sect. 4.
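Let us indicate schematically how weighted bounds follow from sparse forms; this deduction is standard in the sparse domination literature cited above. Roughly speaking, for an operator T one has

$$\begin{aligned} |\langle Tf, g \rangle |\lesssim \sum _{R\in \mathcal {S}} |R|\langle f \rangle _{r_1, R}\left\langle g\right\rangle _{r_2', R} \ \ \Longrightarrow \ \ T: L^p(\omega )\rightarrow L^p(\omega )\ \ \text {for}\ \ r_1<p<r_2,\ \ \omega \in A_{p/r_1}\cap RH_{(r_2/p)'}, \end{aligned}$$

and the second (dual) sparse form in Theorem 1.3 covers the dual range of p. The exponents in the results below then arise from optimizing the choice of \(r_1, r_2\) within conditions (3) and (4).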

Theorem 1.5

Let \(\theta \in \mathbb {R}\setminus \{0\}.\) We have the following results:

  1. i)

    Let \(m\in \mathscr {M}(\theta , 2Q).\) Then \(m(\sqrt{\mathcal {L}})\) maps \(L^p(\omega )\) to \(L^p(\omega )\) for all \(1<p<\infty \) and \(\omega \in A_{p}.\)

  2. ii)

    Let \(m\in \mathscr {M}(\theta , \beta )\) with \(Q\le \beta < 2Q.\) Then \(m(\sqrt{\mathcal {L}})\) maps \(L^p(\omega )\) to \(L^p(\omega )\) for \(p_{\beta }<p<\infty \) and \(\omega \in A_{p/p_{\beta }},\) where \(p_{\beta }:=\frac{2Q}{\beta }.\)

  3. iii)

    Let \(m\in \mathscr {M}(\theta , \beta )\) with \(0<\beta <Q.\) Then \(m(\sqrt{\mathcal {L}}): L^p(\omega )\rightarrow L^p(\omega )\) for all \(2<p<s_{\beta },\ \ \omega \in A_{p/2}\cap RH_{(s_{\beta }/p)'},\) where \(\frac{1}{s_{\beta }}:=\frac{1}{2}-\frac{\beta }{2Q}.\)

We believe these results are completely new in the setting of stratified Lie groups. The article is organized as follows. In the next section we recall the necessary preliminaries; Sect. 3 contains the proof of Theorem 1.3. In Sect. 4, we prove Theorem 1.5 and give further applications of Theorem 1.3 to Riesz means and dispersive equations.

2 Preliminaries

Let \(\{\delta _r\}_{r>0}\) be the group of dilations associated to G and let \(|\cdot |\) be a homogeneous quasi-norm, i.e., \(|x| = 0\) if and only if \(x=0\) where 0 denotes the group identity, and \(|\delta _r x | = r |x|\) for all \(r>0\) and \(x\in G\). Moreover, the right convolution kernel of the operator \(m(\sqrt{\mathcal {L}})\) will be denoted by \(K_{m},\) that is,

$$\begin{aligned} m(\sqrt{\mathcal {L}})f(x)=\ \int _G f(x\cdot y^{-1}) \, K_m (y) \, dy. \end{aligned}$$

In general, \(K_{m}\) is just a distribution but whenever m is compactly supported, \(K_m\) can be identified with an \(L^2\) function on G. See [24] for more details regarding analysis on these groups. The following estimates are well known.

Theorem 2.1

[49] For any function h and \(M>0\), we denote \(h_{M}(t):=h(tM).\)

  1. i)

    The following Plancherel-type identity holds

    $$\begin{aligned} \Vert K_{h}\Vert ^{2}_{L^2(G)}=\int _{0}^{\infty } |h(t)|^2 t^{Q-1}\, dt. \end{aligned}$$
    (5)

    In particular, if the multiplier is supported on [0, M] then \(\Vert K_{h}\Vert ^{2}_{L^2(G)}\le M^{Q}\Vert h_{M}\Vert _{2}^{2}.\)

  2. ii)

    For any compactly supported multiplier h

    $$\begin{aligned} \int _G |K_{h}(x)|^2 (1+|x|^s)^2 \, dx \ \lesssim \ \Vert h\Vert ^2_{L^2_s(\mathbb {R}_{+})} \end{aligned}$$
    (6)

holds for any \(s > 0\). As a consequence, \(\Vert K_{h}\Vert _{L^1}\lesssim \Vert h\Vert _{L^2_{s}(\mathbb {R}_{+})}\) for \(s>\frac{Q}{2}.\)

We also need the following notion of dyadic grids in spaces of homogeneous type. We refer to [17] and [29, 37] for details. Let \(0<c_1\le C_1<\infty \) and \(\mu \in (0,1)\). By a general dyadic grid \(\mathscr {D}=\bigcup _{k\in \mathbb {Z}}\mathscr {D}_k\) on G, we mean a countable collection of sets \({R}_k^{\alpha }\) for \(k\in \mathbb {Z}\), each associated with a point \(z_k^\alpha \), \(\alpha \) coming from a countable index set, with the following properties:

  • \(G=\bigcup \limits _{\alpha }{R}_k^{\alpha }\) for every \(k\in \mathbb {Z}\).

  • If \(l\ge k\), then either \(R_l^{\beta }\subset R_k^{\alpha }\) or \(R_l^{\beta }\cap R_k^{\alpha }=\emptyset \).

  • For the constants \(c_1, C_1>0\) we have \(B(R_k^\alpha ):=B(z_k^\alpha ,c_1\mu ^k)\subset R_k^\alpha \subset B(z_k^\alpha ,C_1\mu ^k)=:C_{1} B(R_k^\alpha )\).

  • If \(l\ge k\) and \(R_l^{\beta }\subset R_k^{\alpha }\), then \(C_1 B(R_l^{\beta })\subset C_1 B(R_k^{\alpha })\).

For sufficiently small \(\mu \in (0, 1),\) Hytönen and Kairema [29, Theorem 4.1] proved the existence of a finite collection of dyadic grids \(\mathscr {D}^{n},\) \(n=1,2,\ldots ,\mathfrak {N},\) such that for every ball \(B(z,r)\subset G\) with \(\mu ^{k+2}\le r<\mu ^{k+1}\), there exists some \(n\in \{1,2,\ldots ,\mathfrak {N}\}\) and \(R_k^\alpha \in \mathscr {D}^n\) such that \(B(z,r)\subset R_k^\alpha \) and \(\text {diam}\,(R_k^\alpha )\le C\,r\), where C depends on \(\mu \). For the purposes of this article, the number \(\mu \in (0, 1)\) is now considered fixed and \(\nu \) will denote \(\frac{1}{\mu }\).
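In the Euclidean model case \(G=\mathbb {R}^n\) (with \(\mu =\frac{1}{2}\)), this construction reduces to a familiar picture, recorded here only for orientation:

$$\begin{aligned} \mathscr {D}_k=\left\{ 2^{-k}\big ([0,1)^n+m\big ): m\in \mathbb {Z}^n\right\} ,\ \ \ k\in \mathbb {Z}, \end{aligned}$$

together with finitely many translated copies of this grid, chosen so that every ball is contained in a dyadic cube of comparable diameter from one of the translates. The Hytönen–Kairema theorem quoted above extends precisely this statement to spaces of homogeneous type.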

Remark 2.2

We remark that the sparse families \(\mathcal {S}, \mathcal {S'}\) in Theorem 1.3 consist of elements from the dyadic grids \(\mathscr {D}^n,\) \(n=1,2,\ldots ,\mathfrak {N}.\)

3 Proof of Theorems

We shall prove Theorem 1.3 in the case \(\theta >0;\) the other case can be treated with suitable modifications, see Remark 3.7. Let us fix \(\theta >0,\) \(\beta \ge 0,\) and \(m\in \mathscr {M}(\theta , \beta ).\) Recall that \(m^j(\lambda ):= m(\nu ^j \lambda ) \phi (\lambda )\) for \(j\ge 0,\) where \(\phi \) is a smooth function on \((0, \infty ),\) supported on \(\{\nu ^{-1}\le \lambda \le \nu \}\) and satisfying \(\sum _{j} \phi (\nu ^{-j}\lambda )=1.\) Then \(m^j\) satisfies the following

$$\begin{aligned}&\Vert m^j\Vert _{L^{\infty }({\mathbb {R}}_{+})}\lesssim \nu ^{-j \theta \beta /2}\ \ \ \text {for}\ \ j\ge 0,\nonumber \\ \text {and}\ \ {}&\Vert m^j\Vert _{L^2_s({\mathbb {R}}_{+})}\lesssim \nu ^{ j \theta (2s - \beta ) /2}\ \ \ \text {for}\ \ j\ge 0, \end{aligned}$$
(7)

where the implicit constants are independent of j. We also introduce the notation \(m_{j}(\lambda ):=m(\lambda ) \phi (\nu ^{-j}\lambda ).\) Since \(\sum _{j} \phi (\nu ^{-j}\lambda )=1\) and m is supported in \(\{\lambda \ge 1\}\) (as \(\theta >0\)), we have \(m(\lambda )=\sum _{j\ge 0} m_{j}(\lambda ).\) Moreover, \(m_{j}(\lambda )=m^{j}(\nu ^{-j}\lambda ).\) Then we have the following decomposition

$$\begin{aligned} m(\sqrt{\mathcal {L}})=\sum _{j\ge 0} T_{j}, \ \ \ \ \text {where}\ \ \ \ T_{j}f=f* K_{m_{j}}. \end{aligned}$$
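For later use, let us record the elementary scaling identity underlying this decomposition. Since \(m_{j}(\lambda )=m^{j}(\nu ^{-j}\lambda )\) and \({\mathcal {L}}(f\circ \delta _{r})=r^{2}({\mathcal {L}}f)\circ \delta _{r},\) the kernel of \(m_{j}(\sqrt{\mathcal {L}})=m^{j}(\nu ^{-j}\sqrt{\mathcal {L}})\) satisfies

$$\begin{aligned} K_{m_{j}}(x)=\nu ^{jQ} K_{m^{j}}(\delta _{\nu ^{j}}x),\ \ \ x\in G, \end{aligned}$$

an identity used repeatedly in the estimates below.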

It is easy to see by homogeneity that \(K_{m_{j}}=K_{m}* \big (\nu ^{jQ}K_{\phi }(\delta _{\nu ^j}\cdot )\big ).\) Motivated by [4], we make a further decomposition in the space variable, namely

$$\begin{aligned} T^{l}_{j}f(x)=\int f(z)K_{m_{j}}(z^{-1}x) \phi (\nu ^{-l+j(1-\theta )}|z^{-1}x|) \ dz,\ \ \ l\in \mathbb {Z}. \end{aligned}$$
(8)

Then \(T_{j}f=\sum _{l\in \mathbb {Z}}T^{l}_{j}f\) and consequently \(m(\sqrt{\mathcal {L}})=\sum _{j\ge 0}\sum _{l\in \mathbb {Z}} T^{l}_{j}.\) Now we shall focus on proving certain crucial estimates and for some \(\epsilon >0\) we group the terms according to their spatial scale, i.e.,

$$\begin{aligned} T_{j}f=\sum _{l\le j\epsilon } T^{l}_{j}f+ \sum _{l>j \epsilon } T^{l}_{j}f. \end{aligned}$$

Let us start by proving \(L^2-L^2\) estimates for the pieces \(T^{l}_{j}.\) Let \(l>j\epsilon \) and denote \(g:=K_{m_{j}}(\cdot ) \phi (\nu ^{-l+j(1-\theta )}|\cdot |).\) By Young’s inequality we have

$$\begin{aligned} \Vert T_{j}^{l}f\Vert _{2}\le \Vert f\Vert _{2}\Vert g\Vert _{1}&\le \Vert f\Vert _{2}\left( \int _{|x|\simeq \nu ^{l-j(1-\theta )}}|K_{m_{j}}(x)|\, dx \right) \\&= \Vert f\Vert _{2}\left( \int _{|x|\simeq \nu ^{l-j(1-\theta )}} \nu ^{jQ}|K_{m^{j}}(\delta _{\nu ^{j}}x)|\, dx \right) \\&\le \Vert f\Vert _{2}\left( \int _{|x|\simeq \nu ^{l+j\theta }} |K_{m^{j}}(x)|\, dx \right) \\&=\Vert f\Vert _{2}\left( \int _{|x|\simeq \nu ^{l+j\theta }}(1+|x|^s) (1+|x|^s)^{-1}|K_{m^{j}}(x)|\, dx \right) \\&\le \Vert f\Vert _{2}\left( \int _{|x|\simeq \nu ^{l+j\theta }}(1+|x|^s)^{2} |K_{m^{j}}(x)|^{2}\, dx \right) ^{1/2} \nu ^{(l+j\theta )(\frac{Q}{2}-s)}\\&\lesssim \nu ^{(l+j\theta )(\frac{Q}{2}-s)} \Vert m^j\Vert _{L^2_{s}} \Vert f\Vert _{2}\lesssim \nu ^{(l+j\theta )(\frac{Q}{2}-s)}\nu ^{j\theta (s-\frac{\beta }{2})} \Vert f\Vert _{2}\\&\lesssim \nu ^{-\frac{j\theta \beta }{2}} \nu ^{l(\frac{Q}{2}-\frac{s}{2})} \nu ^{\frac{j\theta Q}{2}} \nu ^{-l\frac{s}{2}}\Vert f\Vert _{2}\lesssim \nu ^{l(\frac{Q}{2}-\frac{s}{4})} \nu ^{-\frac{ls}{4}} \nu ^{\frac{j\theta Q}{2}} \nu ^{-l\frac{s}{2}}\Vert f\Vert _{2}. \end{aligned}$$

Observe that the term \(\nu ^{l(\frac{Q}{2}-\frac{s}{4})}\ll 1\) if s is chosen large enough. Moreover, as \(l>j\epsilon ,\) we have \(\nu ^{-\frac{ls}{4}} \nu ^{\frac{j\theta Q}{2}}\le \nu ^{\frac{j\theta Q}{2}} \nu ^{-\frac{j\epsilon s}{4}}\ll 1\) provided \(s\gg \frac{Q\theta }{\epsilon }.\) Finally, choose s so large that \(\nu ^{-l\frac{s}{4}}\le \nu ^{-Q(Q+\frac{\theta \beta }{2})l}\) as well as \(\nu ^{-l\frac{s}{4}}\le \nu ^{-j\epsilon \frac{s}{4}}\le \nu ^{-Q(Q+\frac{\theta \beta }{2})j}.\) Therefore, we obtain

$$\begin{aligned} \Vert T^{l}_{j}f\Vert _{2}\le c_{\epsilon } \nu ^{- Q(Q+\frac{\theta \beta }{2})(j+l)}\Vert f\Vert _{2} \ \ \ \text {for} \ \ l> j\epsilon . \end{aligned}$$
(9)

At this point we remark that one can in fact improve the bound for \(T^{l}_{j}, l> j\epsilon ,\) to \(\Vert T^{l}_{j}f\Vert _{2}\lesssim _{\epsilon } \nu ^{-C Q(Q+\frac{\theta \beta }{2})(j+l)}\Vert f\Vert _{2}\) for any large constant C, simply by choosing s even larger. Moreover, \(\Vert T_{j}f\Vert _{2}\lesssim \nu ^{-\frac{j\theta \beta }{2}}\Vert f\Vert _{2}\) by the spectral theorem, since \(\Vert m_{j}\Vert _{L^{\infty }}=\Vert m^{j}\Vert _{L^{\infty }}\lesssim \nu ^{-\frac{j\theta \beta }{2}}\) by (7). Combining this with (9), we obtain the following lemma:

Lemma 3.1

We obtain the following estimates:

  1. i)
    $$\begin{aligned} \Vert T^{l}_{j}f\Vert _{2}\lesssim _{\epsilon } \nu ^{-C Q(Q+\frac{\theta \beta }{2})(j+l)}\Vert f\Vert _{2} \ \ \ \text {for} \ \ l> j\epsilon . \end{aligned}$$
  2. ii)
    $$\begin{aligned} \left\| \sum _{l\le j\epsilon }T^{l}_{j}f\right\| _{2}\lesssim _{\epsilon } \nu ^{-\frac{j\theta \beta }{2}} \Vert f\Vert _{2}. \end{aligned}$$

Now we shall prove \(L^1-L^1\) estimates for the pieces \(T^{l}_{j}.\) Recall that \(g=K_{m_{j}}(\cdot ) \phi (\nu ^{-l+j(1-\theta )}|\cdot |).\) The previous argument shows that for any \(l> j\epsilon \)

$$\begin{aligned} \Vert T^{l}_{j}f\Vert _{1}\le \Vert f\Vert _{1} \Vert g\Vert _{1}\lesssim _{\epsilon } \nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)}\Vert f\Vert _{1}. \end{aligned}$$
(10)

Moreover, Young's inequality, the homogeneity identity \(\Vert K_{m_j}\Vert _{1}=\Vert K_{m^j}\Vert _{1},\) and (6) yield the following for any \(s>\frac{Q}{2}\)

$$\begin{aligned} \Vert T_{j}f\Vert _{1}\le \Vert f\Vert _{1} \Vert K_{m_j}\Vert _{1}\le \Vert f\Vert _{1} \Vert K_{m^j}\Vert _{1}\lesssim \nu ^{j\theta (s-\frac{\beta }{2})}\Vert f\Vert _{1}. \end{aligned}$$
(11)

Consequently, choosing \(s=\frac{Q}{2}+\frac{\varepsilon }{\theta }\) in (11),

$$\begin{aligned} \Vert T_{j}f\Vert _{1}\lesssim \nu ^{j(-\frac{\theta \beta }{2}+\frac{\theta Q}{2}+\varepsilon )}\Vert f\Vert _{1}\ \ \ \text {for any}\ \ \varepsilon >0. \end{aligned}$$

Moreover, summing (10) over l, we obtain

$$\begin{aligned} \left\| \sum _{l\le j\epsilon }T^{l}_{j}f\right\| _{1}=\left\| T_{j}f-\sum _{l> j\epsilon }T^{l}_{j}f\right\| _{1}\lesssim \nu ^{j(-\frac{\theta \beta }{2}+\frac{\theta Q}{2}+\varepsilon )}\Vert f\Vert _{1}. \end{aligned}$$

Lemma 3.2

Combining (10) with the above discussion, we have the following estimates:

  1. i)

    For any \(\varepsilon >0\)

    $$\begin{aligned} \left\| \sum _{l\le j\epsilon }T^{l}_{j}f\right\| _{1}\lesssim _{\epsilon } \nu ^{j(-\frac{\theta \beta }{2}+\frac{\theta Q}{2}+\varepsilon )}\Vert f\Vert _{1}. \end{aligned}$$
  2. ii)

    For any \(l> j\epsilon ,\) we have

    $$\begin{aligned} \Vert T^{l}_{j}f\Vert _{1}\lesssim _{\epsilon } \nu ^{-C Q(Q+\frac{\theta \beta }{2})(j+l)}\Vert f\Vert _{1}. \end{aligned}$$

Finally, we need \(L^1-L^{\infty }\) estimates for the operators \(T^{l}_{j}.\) To this end, we first prove pointwise estimates for the kernel \(K_{h}\) of the operator \(h(\sqrt{\mathcal {L}}),\) where h is supported on [M/4, M] for some \(M>0.\) Let \(p_t\) denote the convolution kernel associated with the heat operator \(e^{-t\mathcal {L}}.\) Recall the following Gaussian estimate (see e.g. [6])

$$\begin{aligned} |p_{t}(x)|\le \frac{C}{t^{Q/2}} e^{-\frac{|x|^2}{c\,t}}. \end{aligned}$$
(12)

We only sketch the proof; see [9] for details. Denote \(H(\lambda )=e^{\frac{\lambda ^2}{M^2}}h(\lambda ),\) so that \(h(\lambda )=e^{-\frac{\lambda ^2}{M^2}}H(\lambda ).\) Also, \(\Vert H_{M}\Vert _{2}\simeq \Vert h_{M}\Vert _2\) as h is supported on [M/4, M]. Therefore, \(K_{h}(y^{-1}x)=\int _{G} p_{\frac{1}{M^2}}(z^{-1} x) K_{H}(y^{-1}z)\, dz.\) Hölder's inequality and (5) imply

$$\begin{aligned} |K_{h}(y^{-1}x)|\le \Vert p_{1/M^2}(z^{-1}x)\Vert _{L^2(dz, G)}\Vert K_{H}(y^{-1}z)\Vert _{L^2(dz, G)}\lesssim M^{Q} \Vert H_{M}\Vert _{2}\lesssim M^{Q} \Vert h_{M}\Vert _{2}. \end{aligned}$$
(13)

Observe that a factor of \(M^{Q/2}\) appears from (5) and another from \(\Vert p_{1/M^2}(z^{-1}x)\Vert _{L^2(dz, G)}.\) Using Fourier inversion, we can write

$$\begin{aligned} K_{h}(y^{-1}x)=\frac{1}{2\pi }\int _{\mathbb {R}} \widehat{G}(t)p_{(1-it)/M^2}(y^{-1}x)\, dt, \end{aligned}$$
(14)

where \(G(\lambda ):=h(M\sqrt{\lambda })e^{\lambda }.\) Note that \(\textrm{supp}(G)\) is contained in [0, 1] due to the support condition on h, so \(e^\lambda \) and its derivatives are bounded there. At this point we use the following estimate from [47]

$$\begin{aligned} |p_{(1-it)/M^2}(y^{-1}x)|\le C M^{Q}e^{-\frac{M^2 |y^{-1}x|^2}{(1+t^2)}} \le C M^{Q} (1+M|y^{-1}x|)^{-s}(1+|t|)^s. \end{aligned}$$

Therefore, from the above bound with (14), we have for any \(s>0\)

$$\begin{aligned}&|K_{h}(y^{-1}x)|\le C M^{Q}(1+M |y^{-1}x|)^{-s}\int |\widehat{G}(t)|(1+|t|)^{s}\, dt\nonumber \\&\quad \lesssim M^{Q}(1+M|y^{-1}x|)^{-s} \Vert G\Vert _{L^{2}_{s+\varkappa +\frac{1}{2}}}\nonumber \\&\quad \lesssim M^{Q}(1+M|y^{-1}x|)^{-s} \Vert h_{M}\Vert _{L^{2}_{s+\varkappa +\frac{1}{2}}}, \end{aligned}$$
(15)

for any small \(\varkappa >0.\) Using complex interpolation between (13) and (15), as in [21], we remove the extra \(\frac{1}{2}\) in the Sobolev exponent and obtain

$$\begin{aligned} |K_{h}(y^{-1}x)|\lesssim M^{Q}(1+M|y^{-1}x|)^{-s} \Vert h_{M}\Vert _{L^{2}_{s+\varkappa }} \end{aligned}$$
(16)

for any \(s>0\) and any arbitrarily small \(\varkappa >0.\) Now we prove the following lemma regarding \(L^1-L^{\infty }\) estimates for \(T^{l}_{j}.\)

Lemma 3.3

  1. i)

    For \(l> j\epsilon \)

    $$\begin{aligned} \Vert T^{l}_{j}f\Vert _{L^{\infty }}\lesssim _{\epsilon } \nu ^{-C Q(Q+\frac{\theta \beta }{2})(j+l)} \Vert f\Vert _{1}. \end{aligned}$$
  2. ii)

    We also have

$$\begin{aligned} \left\| \sum _{l\le j\epsilon }T^{l}_{j}f\right\| _{\infty }\lesssim \nu ^{jQ} \nu ^{-\frac{j\theta \beta }{2}} \Vert f\Vert _{1}. \end{aligned}$$

Proof

Let \(l>j\epsilon .\) Then we have

$$\begin{aligned} |T^{l}_{j}f(x)|&\le \left( \sup _{\{y: |y^{-1}x|\simeq \nu ^{l-j(1-\theta )}\}}|K_{m_j}(y^{-1}x)|\right) \int |f(y)|\, dy\\&\le \left( \sup _{\{y: |y^{-1}x|\simeq \nu ^{l-j(1-\theta )}\}}\nu ^{jQ}|K_{m^j}(\delta _{\nu ^j}(y^{-1}x))|\right) \Vert f\Vert _{1}\\&\le \nu ^{jQ} \left( \sup _{\{y: |y^{-1}x|\simeq \nu ^{l+j\theta }\}}|K_{m^j}(y^{-1}x)|\right) \Vert f\Vert _{1}\\&\lesssim \nu ^{jQ} \left( \sup _{\{y: |y^{-1}x|\simeq \nu ^{l+j\theta }\}}(1+|y^{-1}x|)^{-s} \Vert m^{j}\Vert _{L^{2}_{s+\varkappa }}\right) \Vert f\Vert _{1}\ \ (\text {using } (16))\\&\lesssim \nu ^{jQ} \nu ^{-s(l+j\theta )} \nu ^{j\theta (s+\varkappa -\frac{\beta }{2})}\Vert f\Vert _{1}, \end{aligned}$$

for any \(s>Q\) and the fixed small \(\varkappa >0\) from (16). Since \(l> j\epsilon ,\) we may choose s sufficiently large, as in the proof of Lemma 3.1, depending on \(Q, \theta , \epsilon , \varkappa ,\) such that

$$\begin{aligned} \Vert T^{l}_{j}f\Vert _{L^{\infty }}\lesssim _{\epsilon } \nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)} \Vert f\Vert _{1}. \end{aligned}$$

For the second part, observe that

$$\begin{aligned} \left| \sum _{l\le j\epsilon }T^{l}_{j}f(x)\right|&=\Big | \int f(z) K_{m_{j}}(z^{-1}x) \sum _{l\le j\epsilon } \phi (\nu ^{-l+j(1-\theta )}|z^{-1}x|)\, dz\Big |\\&\le \int |f(z)| |K_{m_{j}}(z^{-1}x)|\, dz\\&\overset{\text {using}\,\,(13)}{\lesssim }\nu ^{jQ}\Vert m^j\Vert _{L^{\infty }}\Vert f\Vert _{1}\lesssim \nu ^{jQ} \nu ^{-\frac{j\theta \beta }{2}} \Vert f\Vert _{1}, \end{aligned}$$

completing the proof. \(\square \)

Lemma 3.4

(\(L^{r_1}-L^{r_1}\) estimates) Let \(1\le r_1\le 2.\) Interpolating Lemmas 3.1 and 3.2, we obtain the following:

  1. i)

    For any \(\varepsilon >0\)

    $$\begin{aligned} \left\| \sum _{l\le j\epsilon } T^{l}_{j}f\right\| _{r_1}\lesssim _{\epsilon }\nu ^{-\frac{j\theta \beta }{2}} \nu ^{j(\frac{\theta Q}{2}+\varepsilon )(\frac{2}{r_1}-1)}\Vert f\Vert _{r_1}. \end{aligned}$$
    (17)
  2. ii)

    For any \(l>j\epsilon \)

    $$\begin{aligned} \Vert T^{l}_{j}f\Vert _{r_1}\lesssim _{\epsilon } \nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)}\Vert f\Vert _{r_1}. \end{aligned}$$
    (18)

The next lemma concerns the key estimate which is required for our sparse domination estimates.

Lemma 3.5

Let \(1\le r_1\le r_2\le 2.\) Then we have the following estimates:

  1. i)

    For \(l> j\epsilon \)

    $$\begin{aligned} \Vert T^{l}_{j}f\Vert _{r_2}\lesssim _{\epsilon }\nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)} \nu ^{jQ\left( \frac{1}{r_1}-\frac{1}{r_2}\right) } \Vert f\Vert _{r_1}. \end{aligned}$$
    (19)
  2. ii)

    For any \(\varepsilon >0\)

    $$\begin{aligned} \left\| \sum _{l\le j\epsilon } T^{l}_{j}f\right\| _{r_2}\lesssim _{\epsilon }\nu ^{-\frac{j\theta \beta }{2}} \nu ^{j(\frac{\theta Q}{2}+\varepsilon )(\frac{2}{r_2}-1)} \nu ^{jQ\left( \frac{1}{r_1}-\frac{1}{r_2}\right) }\Vert f\Vert _{r_1}. \end{aligned}$$
    (20)

Proof

Let us introduce a smooth cutoff function \(\psi \) such that \(\psi =1\) on the support of \(\phi .\) Then \(m_{j}(\lambda )=m(\lambda )\,\phi (\nu ^{-j}\lambda )=m(\lambda )\,\phi (\nu ^{-j}\lambda ) \psi (\nu ^{-j}\lambda )=m_{j}(\lambda )\,\psi (\nu ^{-j}\lambda ).\) Let \(K_{\psi }\) be the kernel of \(\psi (\sqrt{\mathcal {L}}).\) Therefore, for \(l> j\epsilon ,\) using Lemma 3.4 and Young’s inequality, we obtain

$$\begin{aligned} \Vert T^{l}_{j}f\Vert _{r_2}\lesssim _{\epsilon }\nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)}\Vert f*K_{\psi ({\nu ^{-j}\cdot })}\Vert _{r_2}\lesssim _{\epsilon }\nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)}\Vert f\Vert _{r_1} \Vert K_{\psi ({\nu ^{-j}\cdot })}\Vert _{t}, \end{aligned}$$
(21)

where \(\frac{1}{t}=\frac{1}{r_2}+1-\frac{1}{r_1}.\) Next recall from (16) and homogeneity that

$$\begin{aligned} |K_{\psi (\nu ^{-j}\cdot )}(x)|=\nu ^{jQ}|K_{\psi }(\delta _{\nu ^j}x)|\le \nu ^{jQ}\frac{C}{(1+|\delta _{\nu ^j}x|)^N}\le \nu ^{jQ}\frac{C}{(1+\nu ^{j}|x|)^N} \end{aligned}$$

for any \(N>0.\) From (21) and the above pointwise estimate we obtain the following

$$\begin{aligned} \Vert T^{l}_{j}f\Vert _{r_2}&\lesssim _{\epsilon }\nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)}\Vert f*K_{\psi ({\nu ^{-j}\cdot })}\Vert _{r_2}\\&\lesssim _{\epsilon }\nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)}\Vert f\Vert _{r_1} \Vert K_{\psi ({\nu ^{-j}\cdot })}\Vert _{t}\lesssim _{\epsilon }\nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)} \nu ^{jQ\left( \frac{1}{r_1}-\frac{1}{r_2}\right) } \Vert f\Vert _{r_1}, \end{aligned}$$

where we have used that \(\Vert K_{\psi ({\nu ^{-j}\cdot })}\Vert _{t}\lesssim \nu ^{jQ(1-\frac{1}{t})}\) by choosing N sufficiently large. Similarly, modifying the above arguments together with (17), we have

$$\begin{aligned} \left\| \sum _{l\le j\epsilon } T^{l}_{j}f\right\| _{r_2}\lesssim _{\epsilon }\nu ^{-\frac{j\theta \beta }{2}} \nu ^{j(\frac{\theta Q}{2}+\varepsilon )(\frac{2}{r_2}-1)} \nu ^{jQ\left( \frac{1}{r_1}-\frac{1}{r_2}\right) } \Vert f\Vert _{r_1} \end{aligned}$$

for any \(\varepsilon >0.\) \(\square \)
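For completeness, we indicate how the bound \(\Vert K_{\psi ({\nu ^{-j}\cdot })}\Vert _{t}\lesssim \nu ^{jQ(1-\frac{1}{t})}\) used above follows from the pointwise estimate: choosing \(N>\frac{Q}{t}\) and substituting \(y=\delta _{\nu ^{j}}x\) (so that \(dx=\nu ^{-jQ}\, dy\)), we get

$$\begin{aligned} \Vert K_{\psi (\nu ^{-j}\cdot )}\Vert _{t}^{t}\lesssim \int _{G}\frac{\nu ^{jQt}}{(1+\nu ^{j}|x|)^{Nt}}\, dx=\nu ^{jQ(t-1)}\int _{G}\frac{dy}{(1+|y|)^{Nt}}\lesssim \nu ^{jQ(t-1)}. \end{aligned}$$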

We also need the following \(L^p\) improving estimate.

Lemma 3.6

Let \(1\le r_1\le 2\le r_2\le r_{1}'.\) Then the following estimates hold true:

  1. i)

    For \(l> j\epsilon \)

    $$\begin{aligned} \Vert T^{l}_{j}f\Vert _{r_2}\lesssim _{\epsilon } \nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)} \nu ^{jQ(\frac{1}{r_1}-\frac{1}{r_2'})}\Vert f\Vert _{r_1}. \end{aligned}$$
    (22)
  2. ii)

    We also have

    $$\begin{aligned} \left\| \sum _{l\le j\epsilon } T^{l}_{j}f\right\| _{r_2}\lesssim _{\epsilon } \nu ^{-\frac{j\theta \beta }{2}}\nu ^{jQ(\frac{1}{r_1}-\frac{1}{r_2})}\Vert f\Vert _{r_1}. \end{aligned}$$
    (23)

Proof

Interpolating Lemmas 3.1 and 3.3 we obtain that for any \(1\le r_1\le 2\)

$$\begin{aligned}&\Vert T^{l}_{j}f\Vert _{r_1'}\lesssim _{\epsilon } \nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)}\Vert f\Vert _{r_1}\ \ \text {for}\ \ l> j\epsilon ,\end{aligned}$$
(24)
$$\begin{aligned} \text {and}\ \ \ {}&\left\| \sum _{l\le j\epsilon }T^{l}_{j}f\right\| _{r_1'}\lesssim _{\epsilon } \nu ^{-\frac{j\theta \beta }{2}}\nu ^{jQ(1-\frac{2}{r'_1})}\Vert f\Vert _{r_1}. \end{aligned}$$
(25)

Recall from the previous lemma that \(m_{j}(\lambda )=m_{j}(\lambda )\psi (\nu ^{-j}\lambda ),\) where \(\psi \) was introduced in the proof of Lemma 3.5. Also recall

$$\begin{aligned} |K_{\psi (\nu ^{-j}\cdot )}(x)|\le \nu ^{jQ}\frac{C}{(1+\nu ^j|x|)^N}\ \ \text {for any}\ \ N>0. \end{aligned}$$
(26)

For \(l> j\epsilon ,\) employing Young’s inequality we obtain

$$\begin{aligned} \Vert T^l_{j}f\Vert _{2}\lesssim _{\epsilon } \nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)}\Vert f*K_{\psi (\nu ^{-j}\cdot )}\Vert _{2}\le \nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)}\Vert K_{\psi (\nu ^{-j}\cdot )}\Vert _{t}\Vert f\Vert _{r_1}, \end{aligned}$$

where \(\frac{1}{t}=\frac{1}{2}+\frac{1}{r_1'}.\) Therefore, combining the above with (26) yields the following for \(l> j\epsilon \)

$$\begin{aligned} \Vert T^{l}_{j}f\Vert _{2}\lesssim _{\epsilon } \nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)} \nu ^{jQ(\frac{1}{r_1}-\frac{1}{2})}\Vert f\Vert _{r_1}. \end{aligned}$$
(27)

Arguing similarly we obtain

$$\begin{aligned} \left\| \sum _{l\le j\epsilon } T^{l}_{j}f\right\| _{2}\lesssim _{\epsilon } \nu ^{-\frac{j\theta \beta }{2}}\nu ^{jQ(\frac{1}{r_1}-\frac{1}{2})}\Vert f\Vert _{r_1}. \end{aligned}$$
(28)

Interpolating (24) and (27) we obtain that

$$\begin{aligned}&\Vert T^{l}_{j}f\Vert _{r_2}\lesssim _{\epsilon } \nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)} \nu ^{jQ(\frac{1}{r_1}-\frac{1}{r_2'})}\Vert f\Vert _{r_1},\ \text {holds for all}\ \ \ l> j\epsilon . \end{aligned}$$

Similarly, interpolating (25) and (28), we get

$$\begin{aligned} \left\| \sum _{l\le j\epsilon } T^{l}_{j}f\right\| _{r_2}\lesssim _{\epsilon } \nu ^{-\frac{j\theta \beta }{2}}\nu ^{jQ(\frac{1}{r_1}-\frac{1}{r_2})}\Vert f\Vert _{r_1}. \end{aligned}$$

This completes the proof. \(\square \)

We are now in a position to prove our main Theorem 1.3 in the case \(\theta >0.\) Having the key unweighted estimates at hand, the proof of sparse domination is quite standard, and we provide a brief sketch; for more details we refer to [4].

Proof of Theorem 1.3

Recall the dyadic families \(\mathscr {D}^n,\) \(n=1, \cdots , \mathfrak {N},\) where \(\mathscr {D}^n=\bigcup _{k\in \mathbb {Z}} \mathscr {D}^{n}_{k}.\) Let us define the operators

$$\begin{aligned}&T^{l}_{j, n}f:=\sum _{\begin{array}{c} R\in \mathscr {D}^n:\\ R\in \mathscr {D}^n_{\lfloor j(1-\theta )-j\epsilon +10\rfloor } \end{array}} T^{l}_{j}(f\chi _{c_{G}B(R)}) \ \ \text {for}\ \ l\le j\epsilon ,\\&T^{l}_{j, n}f:=\sum _{\begin{array}{c} R\in \mathscr {D}^n:\\ R\in \mathscr {D}^n_{\lfloor j(1-\theta )-l+10\rfloor } \end{array}} T^{l}_{j}(f\chi _{c_{G}B(R)}) \ \ \text {for}\ \ l> j\epsilon , \end{aligned}$$

where the universal constant \(c_{G}\) is chosen sufficiently small (by rescaling the metric) to ensure that the support of \(T^{l}_{j}(f\chi _{c_{G}B(R)})\) is contained in R. Therefore, it is enough to obtain sparse domination for

$$\begin{aligned} \mathcal {T}_{n}:=\sum \limits _{\begin{array}{c} j\ge 0\\ l\in \mathbb {Z} \end{array}} T^{l}_{j, n},\,\, \text {for}\ \ n=1, \cdots , \mathfrak {N}. \end{aligned}$$

Hence, we only prove sparse domination for one of the \(\mathcal {T}_n\) and suppress the index n for simplicity. By localisation and Lemma 3.5, we obtain for \(1\le r_1\le r_2\le 2\)

$$\begin{aligned}&\left| \left\langle \sum _{l\le j\epsilon } T^{l}_{j}f, g\right\rangle \right| \nonumber \\&\quad \lesssim \sum _{j\ge 0} \sum _{\begin{array}{c} R\in \mathscr {D}:\\ R\in \mathscr {D}_{\lfloor j(1-\theta )-j\epsilon +10\rfloor } \end{array}} \left\| \left( \sum _{l\le j\epsilon } T^{l}_{j}(f\chi _{c_{G}B(R)})\right) \chi _{R} \right\| _{r_{2}}\Vert g \chi _{R}\Vert _{r'_2} \nonumber \\&\quad \overset{(20)}{\lesssim }_{\epsilon } \sum _{j\ge 0} \sum _{\begin{array}{c} R\in \mathscr {D}:\\ R\in \mathscr {D}_{\lfloor j(1-\theta )-j\epsilon +10\rfloor } \end{array}} \nu ^{-\frac{j\theta \beta }{2}} \nu ^{j(\frac{\theta Q}{2}+\varepsilon )(\frac{2}{r_2}-1)} \nu ^{jQ\left( \frac{1}{r_1}-\frac{1}{r_2}\right) }\Vert f \chi _{R}\Vert _{r_1} \Vert g \chi _{R}\Vert _{r'_2}\nonumber \\&\quad \lesssim _{\epsilon } \sum _{j\ge 0} \sum _{\begin{array}{c} R\in \mathscr {D}:\\ R\in \mathscr {D}_{\lfloor j(1-\theta )-j\epsilon +10\rfloor } \end{array}} |R|^{\frac{1}{r_1}+\frac{1}{r'_2}-1}\nu ^{-\frac{j\theta \beta }{2}} \nu ^{j(\frac{\theta Q}{2}+\varepsilon )(\frac{2}{r_2}-1)} \nu ^{jQ\left( \frac{1}{r_1}-\frac{1}{r_2}\right) }|R|\left\langle f \right\rangle _{r_1, R} \left\langle g\right\rangle _{r'_2, R}\nonumber \\&\quad \lesssim _{\epsilon } \sum _{j\ge 0} \sum _{\begin{array}{c} R\in \mathscr {D}:\\ R\in \mathscr {D}_{\lfloor j(1-\theta )-j\epsilon +10\rfloor } \end{array}} \nu ^{Q(j\epsilon -j(1-\theta ))\big (\frac{1}{r_1}+\frac{1}{r'_2}-1\big )}\nu ^{-\frac{j\theta \beta }{2}} \nu ^{j(\frac{\theta Q}{2}+\varepsilon )(\frac{2}{r_2}-1)} \nu ^{jQ\left( \frac{1}{r_1}-\frac{1}{r_2}\right) } |R|\left\langle f \right\rangle _{r_1, R} \left\langle g\right\rangle _{r'_2, R}\nonumber \\&\quad \lesssim _{\epsilon } \sum _{j\ge 0} \nu ^{j\big (\theta Q(\frac{1}{r_1}-\frac{1}{r_2})-\frac{\theta \beta }{2}+\frac{\theta Q}{2}\big (\frac{2}{r_2}-1\big )+\varepsilon \big (\frac{2}{r_2}-1\big )+\epsilon Q(\frac{1}{r_1}-\frac{1}{r_2})\big )} \sum _{\begin{array}{c} R\in \mathscr {D}:\\ R\in \mathscr {D}_{\lfloor j(1-\theta )-j\epsilon +10\rfloor } \end{array}}|R| \left\langle f \right\rangle _{r_1, R} \left\langle g\right\rangle _{r'_2, R}, \end{aligned}$$
(29)

Since \(\epsilon \) and \(\varepsilon \) are sufficiently small, the above gives a geometrically decaying sparse collection if

$$\begin{aligned} \textstyle {\theta Q\left( \frac{1}{r_1}-\frac{1}{r_2}\right) -\frac{\theta \beta }{2}+\frac{\theta Q}{2}\left( \frac{2}{r_2}-1\right)<0\iff \frac{1}{r_1}-\frac{1}{2}<\frac{\beta }{2Q}.} \end{aligned}$$
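For the reader's convenience, the equivalence above is elementary: ignoring the harmless \(\varepsilon \)- and \(\epsilon \)-terms, the exponent of \(\nu \) in (29) collects as

$$\begin{aligned} \theta Q\Big (\frac{1}{r_1}-\frac{1}{r_2}\Big )-\frac{\theta \beta }{2}+\frac{\theta Q}{2}\Big (\frac{2}{r_2}-1\Big )=\theta Q\Big (\frac{1}{r_1}-\frac{1}{r_2}\Big )+\theta Q\Big (\frac{1}{r_2}-\frac{1}{2}\Big )-\frac{\theta \beta }{2}=\theta Q\Big (\frac{1}{r_1}-\frac{1}{2}\Big )-\frac{\theta \beta }{2}, \end{aligned}$$

and dividing by \(\theta Q>0\) (recall that \(\theta >0\) in the present case) gives the stated condition.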

A similar argument for \(T^{l}_{j}, \ l> j\epsilon ,\) with Lemma 3.5, yields the following

$$\begin{aligned}&\left| \left\langle \sum _{j\ge 0}\sum _{l>j\epsilon } T^{l}_{j}(f\chi _{c_{G}B(R)}), g\right\rangle \right| \nonumber \\&\quad \lesssim \sum _{j\ge 0} \sum _{l>j\epsilon }\sum _{\begin{array}{c} R\in \mathscr {D}:\\ R\in \mathscr {D}_{\lfloor j(1-\theta )-l+10\rfloor } \end{array}} \Vert T^{l}_{j}(f\chi _{c_{G}B(R)}) \chi _{R} \Vert _{r_{2}}\Vert g \chi _{R}\Vert _{r'_2}\end{aligned}$$
(30)
$$\begin{aligned}&\qquad \overset{(19)}{\lesssim }\sum _{j\ge 0} \sum _{l>j\epsilon }\sum _{\begin{array}{c} R\in \mathscr {D}:\\ R\in \mathscr {D}_{\lfloor j(1-\theta )-l+10\rfloor } \end{array}} \nu ^{Q(l-j(1-\theta ))\big (\frac{1}{r_1}-\frac{1}{r_2}\big )} \nu ^{-CQ(Q+\frac{\theta \beta }{2})(j+l)} \nu ^{jQ\left( \frac{1}{r_1}-\frac{1}{r_2}\right) } |R| \left\langle f \right\rangle _{r_1, R} \left\langle g\right\rangle _{r'_2, R}\nonumber \\&\quad \lesssim \sum _{j\ge 0} \nu ^{jQ\big (\theta \big (\frac{1}{r_1}-\frac{1}{r_2}\big )-C\big (Q+\frac{\theta \beta }{2}\big )\big )}\sum _{l>j\epsilon } \nu ^{Ql\big (\big (\frac{1}{r_1}-\frac{1}{r_2}\big )-C\big (Q+\frac{\theta \beta }{2}\big )\big )} \sum _{\begin{array}{c} R\in \mathscr {D}:\\ R\in \mathscr {D}_{\lfloor j(1-\theta )-l+10\rfloor } \end{array}} |R| \left\langle f \right\rangle _{r_1, R} \left\langle g\right\rangle _{r'_2, R}. \end{aligned}$$
(31)

As mentioned earlier, the constant C can be chosen sufficiently large; indeed, we can always ensure that \(\big (\theta \big (\frac{1}{r_1}-\frac{1}{r_2}\big )-C\big (Q+\frac{\theta \beta }{2}\big )\big )<0\) and \(\big (\big (\frac{1}{r_1}-\frac{1}{r_2}\big )-C\big (Q+\frac{\theta \beta }{2}\big )\big )<0.\) Therefore, we again obtain geometrically decaying \((r_1, r'_2)\) sparse domination. Hence, combining (29) and (31), we obtain \((r_1, r_2')\) sparse domination for \(1\le r_1\le r_2\le 2,\) provided \(\frac{1}{r_1}-\frac{1}{2}<\frac{\beta }{2Q}.\)

Arguing similarly in the case \(1\le r_1\le 2\le r_2\le r_{1}'\) with Lemma 3.6 yields

$$\begin{aligned}&\left| \left\langle \sum _{l\le j\epsilon } T^{l}_{j}(f\chi _{c_{G}B(R)}), g\right\rangle \right| \nonumber \\&\quad \lesssim \sum _{j\ge 0} \sum _{\begin{array}{c} R\in \mathscr {D}:\\ R\in \mathscr {D}_{\lfloor j(1-\theta )-j\epsilon +10\rfloor } \end{array}} \Vert \textstyle {\left( \sum _{{l\le j\epsilon }} T^{l}_{j}(f\chi _{c_{G}B(R)})\right) } \chi _{R} \Vert _{r_{2}}\Vert g \chi _{R}\Vert _{r'_2} \nonumber \\&\quad \overset{(23)}{\lesssim }_{\epsilon } \sum _{j\ge 0} \sum _{\begin{array}{c} R\in \mathscr {D}:\\ R\in \mathscr {D}_{\lfloor j(1-\theta )-j\epsilon +10\rfloor } \end{array}} \nu ^{Q(j\epsilon -j(1-\theta ))\big (\frac{1}{r_1}-\frac{1}{r_2}\big )}\nu ^{-\frac{j\theta \beta }{2}}\nu ^{jQ(\frac{1}{r_1}-\frac{1}{r_2})} |R|\left\langle f \right\rangle _{r_1, R} \left\langle g\right\rangle _{r'_2, R}\nonumber \\&\quad \lesssim _{\epsilon } \sum _{j\ge 0} \nu ^{j\big (-Q(1-\theta )\big (\frac{1}{r_1}-\frac{1}{r_2}\big )-\frac{\theta \beta }{2}+Q\big (\frac{1}{r_1}-\frac{1}{r_2}\big )+\varepsilon Q\big )}\sum _{\begin{array}{c} R\in \mathscr {D}:\\ R\in \mathscr {D}_{\lfloor j(1-\theta )-j\epsilon +10\rfloor } \end{array}} |R| \left\langle f \right\rangle _{r_1, R} \left\langle g\right\rangle _{r'_2, R}, \end{aligned}$$
(32)

Since \(\epsilon >0\) is sufficiently small, we have a geometrically decaying \((r_1, r'_2)\) sparse domination for \(1\le r_1\le 2\le r_2\le r_{1}'\) if

$$\begin{aligned} -Q(1-\theta )\big (\frac{1}{r_1}-\frac{1}{r_2}\big )-\frac{\theta \beta }{2}+Q\big (\frac{1}{r_1}-\frac{1}{r_2}\big )<0\iff \frac{1}{r_1}-\frac{1}{r_2}<\frac{\beta }{2Q}. \end{aligned}$$
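Indeed, the first and third terms on the left-hand side combine, since \(-Q(1-\theta )+Q=\theta Q\):

$$\begin{aligned} -Q(1-\theta )\Big (\frac{1}{r_1}-\frac{1}{r_2}\Big )-\frac{\theta \beta }{2}+Q\Big (\frac{1}{r_1}-\frac{1}{r_2}\Big )=\theta Q\Big (\frac{1}{r_1}-\frac{1}{r_2}\Big )-\frac{\theta \beta }{2}, \end{aligned}$$

which is negative precisely when \(\frac{1}{r_1}-\frac{1}{r_2}<\frac{\beta }{2Q},\) as \(\theta Q>0.\)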

A similar argument also produces \((r_1, r'_2)\) sparse domination for the pieces \(T^{l}_{j},\ l> j\epsilon ,\) in the range \(1\le r_1\le 2\le r_2\le r_{1}'.\) Our proof only produces geometrically decaying sparse domination, since cubes in the dyadic scale \(\mathscr {D}_{\lfloor j(1-\theta )-j\epsilon +10\rfloor }\) are disjoint; however, a true sparse bound can be obtained by an argument similar to the one in [4], see also [35]. Since \(m(\sqrt{\mathcal {L}})^*=\overline{m}(\sqrt{\mathcal {L}}),\) the \((r_1, r'_2)\) sparse domination implies \((r_2, r'_1)\) sparse domination. This completes the proof of Theorem 1.3 for the case \(\theta > 0.\) \(\square \)

Remark 3.7

Let \(\theta <0\) and \(m\in \mathscr {M}(\theta , \beta ).\) The case \(\theta <0\) corresponds to low frequencies; hence we decompose the multiplier as in [4, 14]. Therefore,

$$\begin{aligned} m(\lambda ) = \sum _{j: j \le 0} m_j (\lambda ), \end{aligned}$$

where \(m_{j}(\lambda )=m(\lambda )\phi (\nu ^{-j}\lambda ).\) We can rewrite the above as \(m(\lambda ) = \sum _{k: k \ge 0} m_{-k} (\lambda ),\) where \(m_{-k}(\lambda )=m(\lambda )\phi (\nu ^{k}\lambda )\) for \(k\ge 0.\) Also set \(m^{-k}(\lambda ):=m(\nu ^{-k}\lambda )\phi (\lambda ).\) Then the facts that \(\Vert m^{-k}\Vert _{L^{\infty }}\le C\nu ^{k\theta \beta /2}\) and \(\Vert m^{-k}\Vert _{L^2_{s}}\le C \nu ^{-k\theta (2s-\beta )/2}\) for all \(k\ge 0\) and \(s\in \mathbb {N}\) yield, as in Lemma 3.1, the following estimates upon choosing \(s\gg \lfloor -\frac{\theta \beta }{2\epsilon }\rfloor \):

$$\begin{aligned} \Vert T^l_{k}f\Vert _{2}&\lesssim _{\epsilon } \nu ^{Q(\frac{\theta \beta }{2}-Q)(k+l)}\Vert f\Vert _{2} \quad \text {for} \ \ l>k\epsilon ,\\ \Vert T_{k}f\Vert _{2}&\lesssim \nu ^{\frac{k\theta \beta }{2}}\Vert f\Vert _{2} \quad \text {for} \ \ k\ge 0. \end{aligned}$$
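For instance, assuming (consistently with the notation above) that \(T_{k}\) denotes \(m_{-k}(\sqrt{\mathcal {L}}),\) the second estimate is immediate from the spectral theorem and the dilation invariance of the \(L^{\infty }\) norm:

$$\begin{aligned} \Vert T_{k}f\Vert _{2}=\Vert m_{-k}(\sqrt{\mathcal {L}})f\Vert _{2}\le \Vert m_{-k}\Vert _{L^{\infty }}\Vert f\Vert _{2}=\Vert m^{-k}\Vert _{L^{\infty }}\Vert f\Vert _{2}\le C\, \nu ^{\frac{k\theta \beta }{2}}\Vert f\Vert _{2}, \end{aligned}$$

since \(m_{-k}(\lambda )=m^{-k}(\nu ^{k}\lambda ).\)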

Now one can modify Lemmas 3.2 and 3.3 appropriately to obtain similar results in this case.

4 Applications

4.1 Quantitative estimates

In this subsection, we obtain several weighted estimates for the oscillating multipliers \(m(\sqrt{\mathcal {L}}).\) Recall the following notion of Muckenhoupt weights on homogeneous spaces from [2]. For \(1<p<\infty ,\) we say \(\omega \in A_p\) if

$$\begin{aligned}{}[\omega ]_{A_p}:=\sup _{B}\left( \frac{1}{|B|}\int _{B} \omega \right) \left( \frac{1}{|B|} \int _{B}\omega ^{1-p'}\right) ^{p-1}<\infty , \end{aligned}$$
(33)

where the supremum is over all balls. Also, we say \(\omega \in RH_{q}\) for \(1<q<\infty ,\) if \([\omega ]_{RH_{q}}:=\sup \limits _{B}\left\langle \omega \right\rangle _{1, B}^{-1}\left\langle \omega \right\rangle _{q, B}<\infty .\) Corresponding to a sparse family \(\mathcal {S},\) and \(1\le r_1, r_2\le \infty ,\) let \(\Lambda _{\mathcal {S}, r_1, r'_{2}}\) denote the following bilinear form

$$\begin{aligned} \Lambda _{\mathcal {S}, r_1, r'_{2}}(f, g):=\sum _{R\in \mathcal {S}}|R| \left\langle f \right\rangle _{r_1, R} \left\langle g \right\rangle _{r'_2, R}. \end{aligned}$$

The following quantitative estimate was proved in [2].

Lemma 4.2

[2] For any \(r_1<p<r_2,\) and \(\omega \in A_{p/r_1}\cap RH_{(r_2/p)'},\) we have

$$\begin{aligned} \Lambda _{\mathcal {S}, r_1, r'_{2}}(f, g)\lesssim _{r_1, r_2, p, \mathcal {S}} \big ([\omega ]_{A_{p/r_1}}[\omega ]_{RH_{(r_2/p)'}}\big )^{\max \{\frac{1}{p-r_1}, \frac{r_2-1}{r_2-p}\}}\Vert f\Vert _{L^p(\omega )}\Vert g\Vert _{L^{p'}(\omega ^{1-p'})}.\end{aligned}$$

As a consequence of Theorem 1.3, we now prove Theorem 1.5 concerning weighted estimates for \(m(\sqrt{\mathcal {L}}).\)

Proof of Theorem 1.5

We first prove parts (i) and (ii). Assume \(m\in \mathscr {M}(\theta , \beta )\) with \(\theta \in \mathbb {R}\setminus \{0\}\) and \(Q\le \beta \le 2Q.\) The proof follows from Theorem 1.3 and the reverse Hölder property of \(A_p\) weights, see [28]. It is easy to observe from Theorem 1.3 that we have \((r_1, 1)\) sparse domination for all \(r_1\) such that \(0<\frac{1}{r_1}<\frac{1}{p_{\beta }}.\) Let \(p_{\beta }<p<\infty \) and \(\omega \in A_{p/p_{\beta }}.\) We can always choose \(\varepsilon >0\) such that \(\frac{1}{p}<\frac{1}{p_{\beta }}-\frac{\varepsilon }{p}.\) Set \(\frac{1}{r_1}:=\frac{1}{p_{\beta }}-\frac{\varepsilon }{p}.\) Moreover, the reverse Hölder inequality ensures that \(\varepsilon \) can be chosen so that \(\omega \in A_{\frac{p}{r_{1}}}.\) Theorem 1.3 and Lemma 4.2 now imply that for any compactly supported f and g, there exists a sparse family \(\mathcal {S}\) such that

$$\begin{aligned} \big |\left\langle m(\sqrt{\mathcal {L}})f, g\right\rangle \big |\lesssim \Lambda _{\mathcal {S}, r_1, 1}(f, g)\lesssim C([\omega ]_{A_{p/p_{\beta }}}) \Vert f\Vert _{L^p(\omega )}\Vert g\Vert _{L^{p'}(\omega ^{1-p'})}. \end{aligned}$$

Now duality concludes the proof.

Let us now prove part (iii). Let \(m\in \mathscr {M}(\theta , \beta )\) with \(0<\beta <Q.\) Theorem 1.3 implies that we have \((2, s')\) sparse domination for all \(s'\) such that \(\frac{1}{2}\le \frac{1}{s'}<\frac{1}{2}+\frac{\beta }{2Q}.\) Let \(2<p<s_{\beta }\) and \(\omega \in A_{p/2}\cap RH_{(s_{\beta }/p)'}.\) By the self-improvement property of reverse Hölder classes, \(\omega \in RH_{(s_{\beta }/p)'(1+\delta )}\) for sufficiently small \(\delta >0.\) We can therefore choose s with \(2<p<s<s_{\beta }\) such that \(\frac{1}{2}<\frac{1}{s'}<\frac{1}{2}+\frac{\beta }{2Q}=\frac{1}{s'_{\beta }}\) and \(\omega \in RH_{(s/p)'}\) hold simultaneously. Therefore,

$$\begin{aligned} \big |\left\langle m(\sqrt{\mathcal {L}})f, g\right\rangle \big |\lesssim \Lambda _{\mathcal {S}, 2, s'}(f, g)\lesssim C([\omega ]_{A_{p/2}}, [\omega ]_{RH_{(s_{\beta }/p)'}}) \Vert f\Vert _{L^p(\omega )}\Vert g\Vert _{L^{p'}(\omega ^{1-p'})}. \end{aligned}$$

Now the proof follows from duality. \(\square \)

4.2 Riesz means

For \(k, \alpha , t>0,\) we define the Riesz means

$$\begin{aligned} I_{k, \alpha , t}(\mathcal {L}):=k t^{-k}\int _{0}^{t}(t-s)^{k-1}e^{i s (\sqrt{\mathcal {L}})^{\alpha }}\, ds. \end{aligned}$$
(34)

Without loss of generality, let us assume that \(t=1\) and simply denote \(I_{k, \alpha , 1}(\mathcal {L})\) by \(I_{k, \alpha }(\mathcal {L}).\) It is well known that the operator \(I_{k, \alpha }(\mathcal {L})\) can be written as \(\sigma ((\sqrt{\mathcal {L}})^{\alpha }),\) where the spectral multiplier \(\sigma \) decomposes as \(\sigma (\lambda )=c_{k}\psi (\lambda ) \lambda ^{-k} e^{i\lambda }+\sigma _{1}(\lambda ),\) with \(\sigma _{1}\) a smooth function satisfying the Mikhlin–Hörmander condition, and \(\psi \) a \(C^{\infty }\) function such that \(\psi =0\) for \(0\le \lambda \le 1\) and \(\psi \equiv 1\) for \(\lambda \ge 2.\) We refer to [1, 8, 42, 43] and references therein. As \(\sigma _{1}((\sqrt{\mathcal {L}})^\alpha )\) always satisfies (1, 1) sparse domination, the following sparse domination follows from Corollary 1.4:

$$\begin{aligned} |\langle I_{k,\alpha }(\mathcal {L})f, g \rangle |\lesssim _{k, \alpha , r_{1}, r_{2}}\sum _{R\in \mathcal {S}} |R|\langle f \rangle _{r_1, R}\left\langle g\right\rangle _{r_2', R}\\ \text {and}\,\, |\langle I_{k,\alpha }(\mathcal {L})f, g \rangle |\lesssim _{k, \alpha , r_{1}, r_{2}}\sum _{R\in \mathcal {S'}} |R|\langle f \rangle _{r_2', R}\left\langle g\right\rangle _{r_1, R}, \end{aligned}$$

where \(r_1, r_2\) satisfy

$$\begin{aligned} \left( \frac{1}{r_1}-\frac{1}{2}\right)<\frac{k}{Q},\quad 1\le r_1\le r_2\le 2,\quad \text {or}\quad \left( \frac{1}{r_1}-\frac{1}{r_2}\right) <\frac{k}{Q},\quad 1\le r_1\le 2\le r_2\le r'_{1}. \end{aligned}$$
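The threshold \(k/Q\) can be read off by matching exponents: the oscillating part of \(\sigma ((\sqrt{\mathcal {L}})^{\alpha })\) corresponds to the multiplier

$$\begin{aligned} \psi (\lambda ^{\alpha })(\lambda ^{\alpha })^{-k}e^{i\lambda ^{\alpha }}=\psi (\lambda ^{\alpha })\lambda ^{-\frac{\theta \beta }{2}}e^{i\lambda ^{\theta }}\quad \text {with}\ \theta =\alpha ,\ \tfrac{\theta \beta }{2}=\alpha k,\ \text {i.e.,}\ \beta =2k, \end{aligned}$$

so that \(\frac{\beta }{2Q}=\frac{k}{Q}\) in Corollary 1.4.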

The above sparse domination and Theorem 1.5 yield the following weighted estimates.

(i) Let \(k\ge Q.\) Then \(I_{k, \alpha }({\mathcal {L}})\) maps \(L^p(\omega )\) to \(L^p(\omega )\) for all \(1<p<\infty \) and \(\omega \in A_{p}.\)

(ii) Let \(\frac{Q}{2}\le k<Q.\) Then \(I_{k, \alpha }({\mathcal {L}})\) maps \(L^p(\omega )\) to \(L^p(\omega )\) for \(p_{k}<p<\infty \) and \(\omega \in A_{p/p_{k}},\) where \(p_{k}:=\frac{Q}{k}.\)

(iii) Let \(0<k<\frac{Q}{2}.\) Then \(I_{k, \alpha }({\mathcal {L}}): L^p(\omega )\rightarrow L^p(\omega )\) for all \(2<p<s_{k},\ \ \omega \in A_{p/2}\cap RH_{(s_{k}/p)'},\) where \(\frac{1}{s_{k}}:=\frac{1}{2}-\frac{k}{Q}.\)

4.3 Dispersive equations

Let \(f\in C^{\infty }_{0}(G)\) and \(\alpha \in \mathbb {N}.\) Consider the dispersive equation

$$\begin{aligned} i\, \partial _{t} u+ (\sqrt{\mathcal {L}})^{\alpha }\ u=0,\,\, u(\cdot , 0)=f. \end{aligned}$$

Then \(u(x, t)=e^{it (\sqrt{\mathcal {L}})^{\alpha }}f(x).\) For a fixed time t, replacing the operator \(\sqrt{\mathcal {L}}\) by \(t^{1/\alpha }\sqrt{\mathcal {L}},\) one can prove the following as a consequence of Corollary 1.4:

$$\begin{aligned} |\langle u(\cdot , t) , g \rangle |\lesssim _{\beta , \alpha , r_1, r_2, t} \sum _{R\in \mathcal {S}} |R|\langle (I+\sqrt{\mathcal {L}})^{\beta }f \rangle _{r_1, R}\left\langle g\right\rangle _{r_2', R}\ \ \end{aligned}$$

whenever \(\left( \frac{1}{r_1}-\frac{1}{2}\right) <\frac{\beta }{\alpha Q},\ 1\le r_1\le r_2\le 2\) or \(\left( \frac{1}{r_1}-\frac{1}{r_2}\right) <\frac{\beta }{\alpha Q},\ 1\le r_1\le 2\le r_2\le r'_{1}.\) Let \(W^{s, p}_{\omega }\) denote the non-homogeneous weighted Sobolev space \(W^{s, p}_{\omega }=\{f: \Vert f\Vert _{W^{s, p}_{\omega }}:=\Vert (I+\sqrt{\mathcal {L}})^{s}f\Vert _{L^p(\omega )}<\infty \}.\) As an application of Theorem 1.5, we can derive the following weighted estimates:

(i) Let \(1<p<\infty \) and \(\omega \in A_{p}.\) Then \(\Vert u(\cdot , t)\Vert _{L^p(\omega )}\lesssim \Vert f\Vert _{W^{\beta , p}_{\omega }}\) provided \(\beta \ge \alpha \, Q.\)

(ii) Let \(\frac{\alpha \,Q}{2}\le \beta <\alpha Q.\) Then \(\Vert u(\cdot , t)\Vert _{L^p(\omega )}\lesssim \Vert f\Vert _{W^{\beta , p}_{\omega }}\) holds for all \(p_{\alpha , \beta }<p<\infty \) and \(\omega \in A_{p/p_{\alpha , \beta }},\) where \(p_{\alpha , \beta }:=\frac{Q\,\alpha }{\beta }.\)

(iii) Finally, let \(0<\beta <\frac{\alpha \,Q}{2}.\) Then \(\Vert u(\cdot , t)\Vert _{L^p(\omega )}\lesssim \Vert f\Vert _{W^{\beta , p}_{\omega }}\) holds for all \(2<p<s_{\alpha , \beta },\ \ \omega \in A_{p/2}\cap RH_{(s_{\alpha , \beta }/p)'},\) where \(\frac{1}{s_{\alpha , \beta }}:=\frac{1}{2}-\frac{\beta }{\alpha Q}.\)
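The exponents in (i)–(iii) arise just as for the Riesz means: the sparse bound is applied to the multiplier \(e^{i\lambda ^{\alpha }}(1+\lambda )^{-\beta },\) whose oscillating part behaves, for large \(\lambda ,\) like a member of \(\mathscr {M}(\theta , \beta ')\) with

$$\begin{aligned} \lambda ^{-\beta }e^{i\lambda ^{\alpha }}=\lambda ^{-\frac{\theta \beta '}{2}}e^{i\lambda ^{\theta }}\quad \text {with}\ \theta =\alpha ,\ \tfrac{\theta \beta '}{2}=\beta ,\ \text {i.e.,}\ \beta '=\frac{2\beta }{\alpha }, \end{aligned}$$

so that \(\frac{\beta '}{2Q}=\frac{\beta }{\alpha Q}.\) In particular, \(p_{\alpha , \beta }=\frac{Q\,\alpha }{\beta }\) and \(\frac{1}{s_{\alpha , \beta }}=\frac{1}{2}-\frac{\beta '}{2Q},\) and the ranges \(\frac{\alpha \,Q}{2}\le \beta <\alpha Q\) and \(0<\beta <\frac{\alpha \,Q}{2}\) correspond to \(Q\le \beta '<2Q\) and \(0<\beta '<Q,\) respectively.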