1 Introduction

Let \(\mu \) be a positive Borel measure with compact support in \(\mathbb R^d\). For \(0< \alpha < d\), the \(\alpha \)-dimensional energy of \(\mu \) is given by

$$\begin{aligned} I_{\alpha }(\mu ) = \iint |x-y|^{-\alpha } d\mu (x) d\mu (y). \end{aligned}$$

The energy \(I_{\alpha }\) has been widely used in various studies, especially in geometric measure theory, to describe regularity properties of measures. In fact, it is well known that finiteness of the energy determines the Hausdorff dimension of the support of \(\mu \). Finiteness of \(I_{\alpha }(\mu )\) and \(L^{2}\) averaged decay estimates of \(\widehat{\mu }\) over the ball B(0, 1) are closely related. Here B(x, r) denotes the ball centered at x with radius r. Indeed, by the identity

$$\begin{aligned} \iint |x-y|^{-\alpha } d\mu (x) d\mu (y) = C_{\alpha ,d} \int |\widehat{\mu }(\xi )|^{2} |\xi |^{\alpha - d} d\xi \end{aligned}$$

it follows that \(I_{\alpha }(\mu ) < \infty \) for \(\alpha < \delta \) provided that \( \int _{B(0,1)} |\widehat{\mu }(\lambda \xi )|^{2} d\xi \le C \lambda ^{-\delta } \) for a positive constant \(\delta \). Conversely, if \(I_{\alpha }(\mu ) < \infty \), it follows that \( \int _{B(0,1)} |\widehat{\mu }(\lambda \xi )|^{2} d\xi \lesssim \lambda ^{-\alpha } I_{\alpha }(\mu ). \) (See Chapter 8 in [22] and Chapter 12 in [18] for further details.)

If B(0, 1) is replaced by a smooth submanifold of lower dimension, the decay rate is expected to get worse. In connection with problems in geometric measure theory there have been attempts to characterize the averaged decay over smooth manifolds. As is well understood from problems such as the Fourier restriction problem, the curvature properties of the underlying submanifolds become important.

Let \(\Sigma \) be a smooth compact submanifold with measure \(d\nu \). Let us consider the estimate, for \(\lambda >1\),

$$\begin{aligned} \int _{\Sigma } |\widehat{\mu }(\lambda \xi )|^{2} d\nu (\xi ) \le C \lambda ^{-\zeta } I_{\alpha }(\mu ). \end{aligned}$$
(1)

In addition to \(I_\alpha (\mu )<\infty \) the estimate (1) has been studied under the assumption that

$$\begin{aligned} |\widehat{\nu }(\xi )| \lesssim |\xi |^{-a}, \quad \nu (B(x,\rho )) \lesssim \rho ^{b}. \end{aligned}$$

The following can be found in [12]: if \(0<a,b<d\) and a compactly supported probability measure \(\nu \) satisfies the above conditions, then (1) holds with \(\zeta =\max (\min (\alpha ,a),\, \alpha -d +b)\).

In particular, in relation to the Falconer distance set problem (cf. [12, 18, 22]) the case where \(\Sigma \) is the unit sphere \(\mathbb S^{d-1}\) and \(\nu \) is the usual surface measure was studied extensively after Mattila’s contribution [17] to the Falconer distance set problem. An extension of Mattila’s estimate in [17] was later obtained by Sjölin [19]. The results in [17, 19] were based on a rather straightforward \(L^2\) argument. They were subsequently improved by Bourgain, Wolff and Erdoğan [5, 13, 14, 21]. These improvements were based on sophisticated methods developed in the study of the Fourier restriction problem (and the Bochner–Riesz conjecture). Especially in \(\mathbb R^2\), for \(\Sigma =\mathbb S^1\) the sharp estimates were established by Mattila [17] and Wolff [21]. (See also Erdoğan [12–14].) In fact, it is proved in [17, 19] that (1) holds with \(\zeta \le \max ( \min (\alpha ,1/2),\alpha -1)\) and that \(\zeta \) must be smaller than or equal to \(\max (\min (\alpha ,1/2),\alpha /2)\). Later Wolff proved that (1) holds for every \(\zeta < \alpha /2\) when \(0< \alpha <2\). Recently a related result was obtained by Erdoğan and Oberlin [15], who replaced the circle with a certain class of general curves in \(\mathbb R^2\).

In this paper, we are concerned with the average of \(\widehat{\mu }\) over space curves in \(\mathbb {R}^{d}\), \(d \ge 3\). Let \(\gamma : I = [0,1] \rightarrow \mathbb {R}^{d}\) be a \(C^{d+1}\) curve satisfying

$$\begin{aligned} \det (\gamma '(t), \gamma ''(t), \ldots , \gamma ^{(d)}(t)) \ne 0 \text { for }\, t \in I. \end{aligned}$$
(2)

As will be seen later, the averaged estimates over curves are closely related to the restriction estimates for curves, which have been studied by various authors. We refer the reader to [1–4, 7–11, 16, 20] and references therein.

For a nonnegative number x, let us denote by \( [x], \langle x\rangle \) the integer part and the fractional part of x, respectively. The following is our first result.

Theorem 1.1

Let \(0< \alpha < d\), let \(\mu \) be a positive Borel measure supported in B(0, 1), and let \(\gamma \in C^{d+1}([0,1])\) be a space curve satisfying (2). Suppose \(I_\alpha (\mu )=1\). Then, for each \(\delta <\delta (\alpha )\), there exists a constant \(C >0\) such that, for \(\lambda >1\),

$$\begin{aligned} \int _0^1 |\widehat{\mu }(\lambda \gamma (t))|^2 dt \le C \lambda ^{-\delta }, \end{aligned}$$
(3)

where \(\delta (\alpha )=\frac{\alpha -d+2}{2}\) if \(d-1 \le \alpha < d\), and \(\delta (\alpha )=\max \Big (\frac{1-\langle d-\alpha \rangle }{[d-\alpha ]+1}, \frac{2-\langle d-\alpha \rangle }{([d-\alpha ]+1)(2-\langle d-\alpha \rangle ) +1}\Big )\) otherwise.

For the case \(d-1\le \alpha <d\) the estimate is sharp except for the endpoint. For the other case, however, there is a gap between the bound (3) and the upper bounds obtained by considering specific test examples. When \(0 <\alpha \le 1\) we see from Theorem 1 in [12] that (3) holds with \(\delta \le \delta (\alpha )= \min (\alpha , 1/d)\) and this is optimal. (See Proposition 4.1 for the upper bounds of \(\delta \).)
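
For concreteness, the piecewise formula for \(\delta (\alpha )\) can be evaluated exactly; the following Python sketch (the helper name `delta` is ours, not part of the paper) implements it with rational arithmetic and records a few sample values.

```python
import math
from fractions import Fraction as F

def delta(d, alpha):
    """delta(alpha) from Theorem 1.1; pass alpha as a Fraction for exact values."""
    assert 0 < alpha < d
    if alpha >= d - 1:
        return (alpha - d + 2) / 2
    m = math.floor(d - alpha)      # [d - alpha], the integer part
    f = d - alpha - m              # <d - alpha>, the fractional part
    return max((1 - f) / (m + 1),
               (2 - f) / ((m + 1) * (2 - f) + 1))

# sample values: d = 3, alpha = 5/2 gives (alpha - d + 2)/2 = 3/4,
# while d = 4, alpha = 3/2 gives max(1/6, 3/11) = 3/11
assert delta(3, F(5, 2)) == F(3, 4)
assert delta(4, F(3, 2)) == F(3, 11)

# at the transition alpha = d - 1 the formula gives 1/2
# (both branches agree there)
for d in range(3, 8):
    assert delta(d, F(d - 1)) == F(1, 2)
```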

In order to prove (3), instead of assuming finiteness of the \(\alpha \)-dimensional energy \(I_\alpha (\mu )\), it is convenient to work with a growth condition on \(\mu \): we assume that there exists a constant \(C_\mu \), independent of x and r, such that

$$\begin{aligned} \mu (B(x,r)) \le C_{\mu } r^{\alpha } \quad \text {for all } x \in \mathbb {R}^{d} \text { and } r > 0. \end{aligned}$$
(4)

It is clear that (4) implies \(I_{\alpha -\epsilon }(\mu ) < \infty \) for any \(\epsilon >0\). The converse is essentially true up to a logarithmic loss (see, for example, Lemma 3.4). For \(\mu \) satisfying (4) we set

$$\begin{aligned} \langle \mu \rangle _\alpha = \sup _{(x,r)\in \mathbb R^d\times \mathbb R_+} {r^{-\alpha }} \mu (B(x,r)). \end{aligned}$$
(5)

It does not seem easy to exploit the geometric features of the curve \(\gamma \) directly in the integral on the left-hand side of (3). So we consider a dual form which resembles a Fourier restriction estimate. In fact, (3) is equivalent to the estimate

$$\begin{aligned} |\int \widehat{g}d\mu | \le C\lambda ^{\frac{1}{2}(1 - \delta )}\Vert g\Vert _2 \end{aligned}$$

when g is supported in \(\lambda \gamma (I)+O(1)\), the O(1)-neighborhood of the curve \(\lambda \gamma \). This can be generalized further by allowing different orders of integrability. We investigate the exponents \(\kappa =\kappa (q)\) for which

$$\begin{aligned} \begin{aligned} \Vert \widehat{g} \Vert _{L^{q}(d\mu )} \le C \lambda ^{\kappa } \Vert g\Vert _{L^2} \end{aligned} \end{aligned}$$
(6)

holds for some \(C>0\). This is also of independent interest; for the case of the circle the optimal results were obtained by Erdoğan [12].

Now, to facilitate the statement of our results, we introduce some notation. For \(j= 1,\ldots , d \) and \(0<\alpha \le j\) we set

$$\begin{aligned} \beta _j(\alpha )=([j-\alpha ]+1)\alpha +\frac{(j-1-[j-\alpha ])(j-[j-\alpha ] )}{2}. \end{aligned}$$
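
As a quick sanity check on this definition (the Python helper below is ours), \(\beta _d(d)\) recovers the classical exponent \(d(d+1)/2\) for nondegenerate curves:

```python
import math
from fractions import Fraction as F

def beta(j, alpha):
    """beta_j(alpha) as defined above; [x] denotes the integer part."""
    n = math.floor(j - alpha)
    return (n + 1) * alpha + F((j - 1 - n) * (j - n), 2)

# beta_d(d) = d(d+1)/2, the exponent appearing in restriction
# estimates for space curves
for d in range(2, 10):
    assert beta(d, F(d)) == F(d * (d + 1), 2)
```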

For a fixed \(0<\alpha \le d\), we define the closed intervals \(J(\ell )\), \(\ell = -1,0,1,\dots ,d-1-[d-\alpha ]\), by setting

$$\begin{aligned} J(\ell ) = {\left\{ \begin{array}{ll} \, [\, 2\beta _d(\alpha ), \,\infty \, ], &{} \text {if } \, \ell =-1,\\ \, [\, 2 \beta _{d-\ell -1}(\alpha -\ell -1), \,2 \beta _{d-\ell }(\alpha -\ell )\, ], &{}\text {if }\, 0 \le \ell \le d-3-[d-\alpha ], \\ \, [\, 2 ([d-\alpha ]+1), \,2 \beta _{d-\ell }(\alpha - \ell )\,], &{}\text {if }\, \ell = d-2-[d-\alpha ],\\ \, [\,1,\, 2([d-\alpha ]+1)\,], &{}\text {if }\, \ell = d-1-[d-\alpha ]. \end{array}\right. } \end{aligned}$$

Note that \(\beta _{d-\ell }(\alpha -\ell )\) decreases as \(\ell \) increases. For each \(\ell = -1,0,1,\dots ,d-1-[d-\alpha ]\) and \(q\in J(\ell )\), we also set

$$\begin{aligned} \kappa (\alpha ,q,\ell )= {\left\{ \begin{array}{ll} \, \frac{1}{2} - \frac{\alpha }{q}, &{}\text {if }\,\, \ell =-1, \\ \, \frac{1}{2} - \frac{\alpha -\ell }{q} + \frac{1}{d-\ell } \big ( \frac{\beta _{d-\ell }(\alpha -\ell )}{q} - \frac{1}{2} \big ), &{}\text {if }\,\, 0\le \ell \le d-3-[d-\alpha ], \\ \, \frac{1}{2} - \frac{\alpha -\ell }{q} + \frac{1}{\mathfrak J_\ell } \big ( \frac{\beta _{d-\ell }(\alpha -\ell )}{q} - \frac{1}{2} \big ), &{}\text {if }\,\, \ell = d-2-[d-\alpha ], \\ \, \min \big ( \frac{d-\alpha }{4}, \frac{d-\alpha }{2([d-\alpha ]+1)}\big ) , &{}\text {if }\,\, \ell =d-1-[d-\alpha ], \end{array}\right. } \end{aligned}$$

where \(\mathfrak J_\ell = d-\ell = 2\) if \([d-\alpha ]=0\), and \(\mathfrak J_\ell =|J(d-2-[d-\alpha ])|/2\) if \([d-\alpha ] \ge 1\). Here \(|J(\ell )|\) denotes the length of \(J(\ell )\). It should be noted that, for given \(\alpha \) and \(\ell \), \(\kappa (\alpha ,q,\ell )\) is defined only for \(q\in J(\ell )\). (See Fig. 1.)
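
The following Python sketch (helper names ours) tabulates the endpoints of the intervals \(J(\ell )\) for a sample choice of d and \(\alpha \), and checks that \(\beta _{d-\ell }(\alpha -\ell )\) decreases in \(\ell \) and that consecutive intervals abut, so that together they cover \([1,\infty )\):

```python
import math
from fractions import Fraction as F

def beta(j, alpha):
    n = math.floor(j - alpha)                  # [j - alpha]
    return (n + 1) * alpha + F((j - 1 - n) * (j - n), 2)

def J(d, alpha, l):
    """Endpoints of the interval J(l), following the case distinction above."""
    m = math.floor(d - alpha)                  # [d - alpha]
    if l == -1:
        return (2 * beta(d, alpha), math.inf)
    if l <= d - 3 - m:
        return (2 * beta(d - l - 1, alpha - l - 1), 2 * beta(d - l, alpha - l))
    if l == d - 2 - m:
        return (2 * (m + 1), 2 * beta(d - l, alpha - l))
    return (F(1), 2 * (m + 1))                 # l = d - 1 - [d - alpha]

d, alpha = 5, F(7, 2)                          # sample values, [d - alpha] = 1
m = math.floor(d - alpha)

# beta_{d-l}(alpha - l) decreases as l increases
bs = [beta(d - l, alpha - l) for l in range(0, d - 1 - m)]
assert all(x > y for x, y in zip(bs, bs[1:]))

# the right endpoint of J(l) is the left endpoint of J(l-1)
for l in range(0, d - m):
    assert J(d, alpha, l)[1] == J(d, alpha, l - 1)[0]
```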

Fig. 1

The solid lines represent the value of \(\kappa (\alpha ,q,\ell )\) as a function of \(1/q\) for specific values of \(\alpha \), namely \(\alpha =d,\,d-1,\,d-j,\,d-j-1\), where j is an integer with \(1\le j<d-1\). For integer \(\alpha \), \(\kappa (\alpha ,q,\ell )\) decreases as q does. The dotted graphs \(L_1,L_2\) give the cases of non-integer \(\alpha \) satisfying \(d-j-1<\alpha <d-j\) with \(1 \le j = [d-\alpha ]\). If \(\alpha <d-j-1 + ({j+1})/({j+2})\), \(\kappa (\alpha ,q,\ell )\) may increase, so \(\kappa (\alpha ,q,\ell )\) may exceed \(\kappa (d-j-1,q,\ell )\) at \(A_1\) (see \(L_1\)). However, if \(\alpha \) is close enough to \(d-j\), a line of the shape of \(L_2\) appears. The dotted graph \(L_3\) shows the case of non-integer \(\alpha \in (d-1,\,d)\). In this case, \(\kappa (\alpha ,q,\ell )\) always decreases in q. Except for \(A_1,A_2,B,\dots ,F\), every marked dot is given by \((\frac{1}{q},\kappa (\alpha ,q,\ell )) =\) \((\frac{1}{2\beta _{d-\ell }(\alpha -\ell )}, \frac{1}{2} - \frac{\alpha -\ell }{2\beta _{d-\ell }(\alpha -\ell )}).\)

Our second result reads as follows; Theorem 1.1 will be deduced from it later.

Theorem 1.2

Let \(0 < \alpha \le d\), and let \(\gamma \) be given as in Theorem 1.1. Suppose that \(\mu \) is supported in B(0, 1) and satisfies (4). Then

$$\begin{aligned} \Vert \widehat{g}\Vert _{L^{q}(d\mu )} \le C\langle \mu \rangle _\alpha ^{\,\,\frac{1}{q}} \lambda ^{\kappa (\alpha ,q,\ell )+\epsilon } \Vert g\Vert _{L^2} \end{aligned}$$

holds for any \(\epsilon >0\) and for \(q\in J(\ell )\), \(\ell = -1,0,1,\dots ,d-1-[d-\alpha ]\).

For a given \(\alpha \), the results of Theorem 1.2 are sharp for \(q \in J(\ell )\), \(\ell \le d-3-[d-\alpha ]\), in the sense that the value \(\kappa \) cannot generally be made smaller, except for the \(\epsilon \)-loss. For \(q\in J(\ell )\), \(\ell \ge d-2-[d-\alpha ]\), the results are sharp only when \([d-\alpha ]=0\). In this case \(\kappa (\alpha ,q,d-2) = \frac{1}{4} + \frac{d-\alpha -1}{2q}\) for \(q \in J(d-2)\), which is obtained by adapting the bilinear argument due to Erdoğan [13]. (See Theorem 3.2.) It follows by Hölder’s inequality that \(\kappa (\alpha ,q,d-1)=\frac{d-\alpha }{4}\) for \(q \in J(d-1)\). When \([d-\alpha ]\ge 1\) and \(\alpha \) is an integer, i.e. \(\alpha = d-[d-\alpha ]\), we have \(\mathfrak J_\ell = |J(d-2-[d-\alpha ])|/2 = d-\ell \). For this case, \(\kappa (\alpha ,q,d-2-[d-\alpha ])\) is sharp. In general, \(\mathfrak J_\ell = |J(d-2-[d-\alpha ])|/2 \le d-\ell \) for \([d-\alpha ]\ge 1\). (See Proposition 4.2.)

Remark 1.3

If \(\ell \le d-3-[d-\alpha ]\), \(\kappa (\alpha ,q,\ell )\) decreases as q does. However, \(\kappa (\alpha ,q,d-2-[d-\alpha ])\) may increase as q decreases, except in the case \([d-\alpha ] =0\).

As shown in Section 3, the decay rate \(\delta \) in Theorem 1.1 is determined by the minimum of \(\kappa (\alpha ,q,\ell )\), which is given by \( \frac{d-\alpha }{4}\) if \([d-\alpha ]=0\), or \(\min _{q\in J(d-2-[d-\alpha ])} \kappa (\alpha ,q, \ell )\) if \([d-\alpha ]\ge 1\). (See Fig. 1.)
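
This minimization can be verified by direct computation. Since \(\kappa (\alpha ,q,\ell )\) is affine in \(1/q\), its minimum over \(J(d-2-[d-\alpha ])\) is attained at an endpoint; the Python sketch below (helper names ours) checks that, for several sample values with \([d-\alpha ]\ge 1\), the resulting decay rate \(1-2\min \kappa \) agrees with \(\delta (\alpha )\) from Theorem 1.1.

```python
import math
from fractions import Fraction as F

def beta(j, alpha):
    n = math.floor(j - alpha)                  # [j - alpha]
    return (n + 1) * alpha + F((j - 1 - n) * (j - n), 2)

def delta(d, alpha):
    """delta(alpha) as in Theorem 1.1."""
    if alpha >= d - 1:
        return (alpha - d + 2) / 2
    m = math.floor(d - alpha)
    f = d - alpha - m
    return max((1 - f) / (m + 1), (2 - f) / ((m + 1) * (2 - f) + 1))

def min_kappa(d, alpha):
    """min of kappa(alpha, q, l) over q in J(l) for l = d - 2 - [d - alpha];
    kappa is affine in 1/q, so only the endpoints of J(l) matter."""
    m = math.floor(d - alpha)
    l = d - 2 - m
    b = beta(d - l, alpha - l)                 # beta_{[d-a]+2}(2 - <d-a>)
    Jl = 2 if m == 0 else b - (m + 1)          # the quantity fraktur-J_l
    def kappa(q):
        return F(1, 2) - (alpha - l) / q + (b / q - F(1, 2)) / Jl
    return min(kappa(F(2 * (m + 1))), kappa(2 * b))

for d, alpha in [(3, F(3, 2)), (4, F(3, 2)), (4, F(2)), (5, F(7, 3))]:
    assert 1 - 2 * min_kappa(d, alpha) == delta(d, alpha)
```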

Although this notation may seem complicated, most of it arises naturally from the scaling structure of curves. For example, \(\beta _j(\alpha )\) generalizes the number \( \beta _d(d)=d(d+1)/2 \) which appears in studies of restriction estimates for space curves (e.g. [1–4, 7, 8, 10, 11]). We need the intervals \(J(\ell )\) in order to extend the estimate (7) beyond the known range given by (9) with \(p=2\). Except for the case \(\ell = d-1-[d-\alpha ]\), the number \(\kappa (\alpha ,q,\ell )\) is obtained by interpolating the estimates for q at the endpoints of \(J(\ell )\).

The paper is organized as follows. In Section 2, we present various \(L^p\rightarrow L^q\) estimates for the related oscillatory integral operators (Theorem 2.1). In Section 3, Theorem 1.2 is deduced from the estimates in Section 2 and we prove Theorem 1.1. In Section 4, we discuss the upper bounds on \(\delta \) and the lower bounds on \(\kappa \) which appear in Theorems 1.1 and 1.2, respectively. In Section 5, we provide proofs of the estimates in Section 2 by making use of the multilinear argument in [16]. Finally, Theorem 3.2 is proved in Section 6 by adapting the bilinear argument due to Erdoğan [12].

Throughout the paper the constant C may vary from line to line. In addition to \(\,\widehat{}\,\,\), we also use \(\mathcal F\) to denote the Fourier transform.

2 Oscillatory Integral Operators

For \(\lambda \ge 1\) let us consider an oscillatory integral operator defined by

$$\begin{aligned} \mathcal E^{\gamma }_{\lambda }f(x) = a(x) \int _{I} e^{i\lambda x \cdot \gamma (t)} f(t) dt , \end{aligned}$$

where a is a bounded function supported in B(0, 1) with \(\Vert a\Vert _\infty \le 1\). The estimate (6) can be deduced from the estimate

$$\begin{aligned} \Vert \mathcal E^{\gamma }_{\lambda } f\Vert _{L^{q}(d\mu )} \lesssim \lambda ^{-\vartheta } \Vert f\Vert _{L^{2}(I)}. \end{aligned}$$
(7)

In fact, \(\lambda \gamma (I) +O(1)\) can be foliated into a set of O(1)-translates of the curve \(\lambda \gamma \). Then a simple change of variables, Minkowski’s inequality, and Plancherel’s theorem together with (7) give (6) with \(\kappa =\frac{1}{2}-\vartheta \). The converse can also be shown by making use of the uncertainty principle. See Lemma 3.1 for the details.

In the recent paper [16], two of the authors proved that if \(\mu \) and \(\gamma \) satisfy (4) and (2), respectively, then

$$\begin{aligned} \Vert \mathcal E^{\gamma }_{\lambda } f\Vert _{L^{q}(d\mu )} \lesssim \lambda ^{-\frac{\alpha }{q}} \Vert f\Vert _{L^{p}(I)} \end{aligned}$$
(8)

holds for \(1 \le p,q \le \infty \) satisfying \(d/q \le 1 - 1/p\), \(q \ge 2 d\) and

$$\begin{aligned} \frac{\beta _d(\alpha )}{q}+\frac{1}{p} < 1, \quad q > \beta _d( \alpha ) +1. \end{aligned}$$
(9)

We refer to [16] and references therein for further discussion of this estimate and related results. From Lemma 3.1 it then follows that (6) holds with \(\kappa =\frac{1}{2}-\frac{\alpha }{q}\) if \(q > \max (2\beta _d(\alpha ), 2d)\) and \(\lambda >1\). However, this is not enough to obtain the estimate (6) for the other values of q. Hence we are led to investigate the estimates for (p, q) which do not satisfy (9). It is natural to expect that the decay gets worse as \((1/p, 1/q)\) moves away from the range (9). If \(\alpha =d\), then by the Lebesgue–Radon–Nikodym theorem we have \(d\mu =f(x) dx\), and by the Lebesgue differentiation theorem and (4) it follows that f is a bounded function. Hence, by a projection argument, it is not difficult to see that, for \(k=0, \dots , d\),

$$\begin{aligned} \Vert \mathcal E_\lambda ^\gamma f\Vert _{L^q(d\mu )}\le C\lambda ^{-\frac{k}{q}}\Vert f\Vert _p \end{aligned}$$

whenever \(\frac{\beta _k(k)}{q}+\frac{1}{p} \le 1 \). But this argument readily fails for a general measure \(\mu \). To get around this difficulty we make use of an induction argument based on multilinear estimates (see [16] and [6]).

The following is an extension of the earlier result in [16].

Theorem 2.1

Let \(\gamma \) and \(\mu \) be given as in Theorem 1.2. For each integer \(\ell = 0, 1,\dots , d-1-[d-\alpha ]\), there exists a constant \(C_\ell \) such that

$$\begin{aligned} \Vert \mathcal E_\lambda ^{\gamma } f\Vert _{L^q(d\mu )}\le C_\ell \,\langle \mu \rangle _\alpha ^{\,\,\frac{1}{q}}\, \lambda ^{-\frac{\alpha -\ell }{q}}\Vert f\Vert _{L^{p}(I)} \end{aligned}$$
(10)

holds for \(f \in L^{p}(I)\) and \(\lambda \ge 1\) whenever \((d-\ell )/q + 1/p \le 1 \), \(q \ge 2(d-\ell )\) and

$$\begin{aligned} \frac{\beta _{d-\ell }(\alpha -\ell )}{q}+\frac{1}{p} < 1, \quad q > \beta _{d-\ell }(\alpha -\ell ) +1 . \end{aligned}$$
(11)

Theorem 2.1 is proved by a routine adaptation of the argument in [16]. Compared to [16], the main difference here is that we utilize various multilinear estimates of different degrees of multilinearity. For completeness we provide a proof of Theorem 2.1 in Section 5.

Remark 2.2

It is easy to check that among the four conditions on (p, q) above, the first two become redundant for some \(\ell \). In fact, since \(\beta _{d-\ell }(\alpha -\ell )>d-\ell \) if and only if \(\alpha -\ell >1\), and \(\beta _{d-\ell }(\alpha -\ell )+1>2(d-\ell )\) if and only if \(\alpha -\ell >2\), the estimate (10) holds whenever

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\beta _{d-\ell }(\alpha -\ell )}{q}+\frac{1}{p}< 1,\, q > \beta _{d-\ell }(\alpha -\ell ) +1, &{} \text {if }\, 2<\alpha -\ell (\text {i.e. }\ell \le d-3-[d-\alpha ]) , \\ \frac{\beta _{d-\ell }(\alpha -\ell )}{q}+\frac{1}{p}< 1,\, q \ge 2(d-\ell ), &{} \text {if }\, 1<\alpha -\ell \le 2\,\,\,\,\, (\text {i.e. } \ell =d-2-[d-\alpha ]) , \\ \frac{ d-\ell }{q} + \frac{1}{p} \le 1 ,\, q \ge 2(d-\ell ), &{} \text {if }\, 0 < \alpha -\ell \le 1\,\,\,\,\, (\text {i.e. } \ell = d-1-[d-\alpha ]) . \end{array}\right. } \end{aligned}$$
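
The two equivalences used above can be confirmed by exact computation; a small Python check (helper name ours) over a rational grid:

```python
import math
from fractions import Fraction as F

def beta(j, alpha):
    n = math.floor(j - alpha)                  # [j - alpha]
    return (n + 1) * alpha + F((j - 1 - n) * (j - n), 2)

# beta_j(a) > j iff a > 1, and beta_j(a) + 1 > 2j iff a > 2,
# checked for 2 <= j <= 6 and rational 0 < a <= j
for j in range(2, 7):
    for k in range(1, 4 * j + 1):
        a = F(k, 4)
        assert (beta(j, a) > j) == (a > 1)
        assert (beta(j, a) + 1 > 2 * j) == (a > 2)
```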

If \([d-\alpha ] \ge 1\), we also have estimates for (p, q) satisfying \(([d-\alpha ] +1)/q +1/p >1\) and \(q <2 ([d-\alpha ] + 1)\), which are given as follows.

Theorem 2.3

Suppose that \(\gamma ,\mu \) are given as in Theorem 2.1. Then, there exists a positive constant C such that

$$\begin{aligned} \Vert \mathcal E^{\gamma }_{\lambda }f\Vert _{L^{q}(d\mu )} \le C \langle \mu \rangle _\alpha ^{\,\,\frac{1}{q}}\lambda ^{-\big ( \frac{\alpha - d}{q} + 1- \frac{1}{p} \big )} \Vert f\Vert _{L^{p}(I)}, \end{aligned}$$

whenever \( ([d-\alpha ]+1)^{-1}(1 -\frac{1}{p} )\le \frac{ 1}{q} \le 1- \frac{1}{p} \), \(q \ge [d-\alpha ]+1\) and \(\frac{\beta _{[d-\alpha ] +1}(1-\langle d-\alpha \rangle )}{q} + \frac{1}{p} < 1\).

There is no reason to believe these estimates are sharp. In particular, if \([d-\alpha ] =0\) (in which case the conditions reduce to \(1/p + 1/q \le 1\) and \(q \ge 2\)), Theorem 2.3 coincides with Theorem 2.1 for \(\ell = d-1\). See Section 5 for a proof, which is based on the generalized Hausdorff–Young inequality.

By interpolating the estimates (10) for which \((1/p, 1/q)\) is near the critical line, one can improve the bound. To state this, we introduce some notation; in addition, we assume \(p \le 2\) for simplicity. For each \(\alpha \) let \(\mathscr {A}(\ell ) \) be the set of \((\frac{1}{p},\frac{1}{q})\) such that \(1\le p \le 2\) and

$$\begin{aligned} {\left\{ \begin{array}{ll} \, \frac{\beta _{d}(\alpha )}{q} +\frac{1}{p}< 1 , &{}\text { if } \ell = -1,\\ \, \frac{\beta _{d-\ell -1}(\alpha -\ell -1)}{q} +\frac{1}{p} < 1 \le \frac{\beta _{d-\ell }(\alpha -\ell )}{q} +\frac{1}{p}, &{}\text { if } \ell =0, \dots , d-3-[d-\alpha ], \\ \, \frac{[d-\alpha ]+1}{q} +\frac{1}{p} \le 1 \le \frac{\beta _{[d-\alpha ]+2}(2-\langle d-\alpha \rangle )}{q} +\frac{1}{p}, &{}\text { if } \ell = d-2-[d-\alpha ]. \end{array}\right. } \end{aligned}$$

Let us also denote by \(\mathscr {A}(d-1-[d-\alpha ])\) the set of \((\frac{1}{p},\frac{1}{q})\) satisfying the condition given in Theorem 2.3 and \(1\le p \le 2\). Note that \(\mathscr {A}(d-1)\) when \([d-\alpha ]=0\) is the line segment \(1/q + 1/p=1\), \(1\le p \le 2\).

By interpolating the estimates in Theorem 2.1 and Theorem 2.3, we obtain the following.

Corollary 2.4

Let \(\gamma \) and \(\mu \) be defined as in Theorem 2.1. Suppose (10) holds. Then, for \(1\le p\le 2\), there exists a constant \(C>0\) such that, for any \(\epsilon > 0\),

$$\begin{aligned} \Vert \mathcal E^{\gamma }_{\lambda }f\Vert _{L^{q}(d\mu )} \le C \langle \mu \rangle _{\alpha }^{\,\,\frac{1}{q}}\lambda ^{-\eta (\alpha ,p,q,\ell )+\epsilon } \Vert f\Vert _{L^{p}(I)}, \end{aligned}$$
(12)

where

$$\begin{aligned} \eta (\alpha ,p,q,\ell ) = {\left\{ \begin{array}{ll} \, \frac{\alpha }{q}, &{}\text {if }\, (\frac{1}{p},\frac{1}{q}) \in \mathscr {A}(-1), \\ \, \frac{\alpha -\ell }{q} - \frac{2}{|J(\ell )|} \left( \frac{\beta _{d-\ell }(\alpha -\ell )}{q} + \frac{1}{p}-1 \right) , &{}\text {if }\, (\frac{1}{p},\frac{1}{q}) \in \mathscr {A}(\ell ) ,\, 0 \le \ell \le d-2-[d-\alpha ], \\ \, \frac{\alpha - d}{q} + 1- \frac{1}{p}, &{}\text {if }\, (\frac{1}{p},\frac{1}{q}) \in \mathscr {A}(d-1-[d-\alpha ]). \end{array}\right. } \end{aligned}$$

Note that if \(0 < \alpha \le 1\), only \(\ell = 0\) occurs, and in this case there is nothing to interpolate. The results in Corollary 2.4 are sharp for \(-1 \le \ell \le d-3-[d-\alpha ]\) except for the \(\epsilon \)-loss. This can be shown by the same examples that are used for the proof of Proposition 4.2.

3 Proof of Theorems 1.1 and 1.2

As mentioned in the previous section, we apply the decay estimate for the related oscillatory integral operator to obtain (6). In this section we may assume that \(\gamma \) is close to \(\gamma _\circ ^d\), in the sense that \(\Vert \gamma -\gamma _\circ ^d\Vert _{C^{d+1}(I)} \le \epsilon \) for any given \(\epsilon >0\). Here \(\gamma _\circ ^d\) is defined by (27). This can be justified easily by decomposing the curve \(\gamma \) into a finite union of (sub)curves, rescaling, and using Lemma 5.1.

We start by observing that (7) is equivalent to the estimate (6).

Lemma 3.1

Let \(d \ge 2\) and \(0 < \alpha \le d\). Suppose that \(\gamma \in C^{d+1}(I)\) satisfies (2) and \(\mu \) is a positive Borel measure supported in B(0, 1) satisfying (4). The estimate (7) holds with \(\vartheta =\frac{1}{2} -\kappa \) if and only if the estimate (6) holds whenever g is supported in \(\lambda \gamma (I)+O(1)\).

Proof

First we show that (7) implies (6). Let g be a function which is supported in \(\lambda \gamma (I)+O(1)\). By the change of variables \(\xi \rightarrow \lambda \xi \), we may write

$$\begin{aligned} \widehat{g}(x) = \int _{\gamma (I)+O(\lambda ^{-1})} e^{i\lambda x \cdot \xi } \lambda ^{d} g(\lambda \xi ) d\xi . \end{aligned}$$

Let us consider a nondegenerate curve \(\gamma _*\) which is given by extending \(\gamma \) to the interval \(I_*:= [-\frac{C}{\lambda }, 1 + \frac{C}{\lambda }]\) such that \(\gamma _{*} = \gamma \) on I and \(\Vert \gamma _* -\gamma _\circ ^d\Vert _{C^{d+1}( I_* )} \le \epsilon \) for a sufficiently small \(\epsilon >0\). Then it follows that, for a sufficiently large constant C,

$$\begin{aligned} \gamma (I) + O(\lambda ^{-1}) \subset \{\gamma _{*}(s) + (0,\mathbf {v}) : s \in {I_*}, \mathbf {v} \in \mathbb {R}^{d-1} \text { satisfying } |\mathbf {v}| \le C \lambda ^{-1}\} . \end{aligned}$$

Let us define a map \(\Gamma : I_*\times \mathbb {R}^{d-1} \rightarrow \mathbb {R}^{d}\) by \(\Gamma (s,\mathbf {v}) = \gamma _{*}(s) + (0,\mathbf {v}) \). Then \(|\det \frac{\partial \Gamma }{\partial (s,\mathbf {v})}|\ge c> 0\). Thus we have

$$\begin{aligned} \widehat{g} (x) = C \int _{|\mathbf v|\lesssim \lambda ^{-1}} \int _{ I_*} e^{i \lambda x \cdot (\gamma _{*}(s) + (0,\mathbf {v}))} \lambda ^{d} \widetilde{g}(\lambda (\gamma _{*}(s) + (0,\mathbf {v}))) ds d\mathbf {v} \end{aligned}$$

with \(|\widetilde{g}|\lesssim |g| \). By setting \(\widetilde{\gamma }(t) = \gamma _{*} ( ( 1 + 2C/{\lambda } )t - C/{\lambda } )\), we have a nondegenerate curve \(\widetilde{\gamma }\) defined on I which is still close to \(\gamma _\circ ^d\). Then, it follows that

$$\begin{aligned} |\widehat{g}(x)| \le C \int _{|\mathbf v|\lesssim \lambda ^{-1}} \Big | \int _{I} e^{i \lambda x \cdot \widetilde{\gamma }(t)} \lambda ^{d} \widetilde{g}(\lambda (\widetilde{\gamma }(t) + (0,\mathbf {v}))) dt \Big | d\mathbf {v} . \end{aligned}$$

After Minkowski’s inequality, we apply (7) by freezing \(\mathbf v\) to see that

$$\begin{aligned} \Vert \widehat{g}\Vert _{L^{q}(d\mu )} \le C \int _{|\mathbf v|\lesssim \lambda ^{-1}} \lambda ^{-\vartheta } \Vert f_{\mathbf {v}}\Vert _{L^{2}(I)} d\mathbf {v} , \end{aligned}$$

where \(f_{\mathbf {v}}(t) := \lambda ^{d} \widetilde{g}(\lambda (\widetilde{\gamma }(t)+(0,\mathbf {v})))\). By the Cauchy–Schwarz inequality, we get

$$\begin{aligned} \Vert \widehat{g} \Vert _{L^{q}(d\mu )}\le & {} C \lambda ^{-\vartheta -\frac{d-1}{2}} \Big ( \int _{|\mathbf v|\lesssim \lambda ^{-1}} \int _{I} |f_{\mathbf {v}}(t)|^{2} dt d\mathbf {v} \Big )^{\frac{1}{2}}\\\le & {} \,C \lambda ^{\frac{1}{2}-\vartheta } \Big (\int _{\lambda \gamma (I) + O(1)} |g( \xi )|^{2} d\xi \Big )^{\frac{1}{2}} , \end{aligned}$$

which implies (6).

Conversely, let us show that (6) implies (7). For \(\mathbf {v} = (v_{2},\cdots ,v_{d}) \in \mathbb {R}^{d-1}\) as above, one easily sees that

$$\begin{aligned} \Big | \int _{I} e^{i\lambda x \cdot \gamma (t)} a(x) f(t) dt \Big | = \lambda ^{d-1} \Big | \int _{|\mathbf v|\lesssim \lambda ^{-1}} \int _{I} e^{i \lambda x \cdot (\gamma (t) + (0,\mathbf {v}))} a(x) f(t) dt \, e^{-i \lambda x \cdot (0,\mathbf {v})} d\mathbf {v} \Big |. \end{aligned}$$
(13)

By expanding into power series we write \( e^{-i \lambda x\cdot (0,\mathbf {v})} = \sum _{\eta , \eta '} c_{\eta , \eta '} x^{\eta } (0,\lambda \mathbf {v})^{\eta '}, \) where \(\eta , \eta '\) denote multi-indices. Then it is easy to see \( \sum _{\eta , \eta '} |c_{\eta ,\eta '}| R^{|\eta |+|\eta '|}\lesssim e^{dR^2} \). Since \(\mu \) is supported in B(0, 1), setting \(G_{\eta '}(t,\mathbf {v}) := f(t) (0,\lambda \mathbf {v})^{\eta '} \) gives

$$\begin{aligned} (13)&\lesssim \sum _{\eta ,\eta '} |c_{\eta ,\eta '}| \lambda ^{d-1} \Big | \int _{I} \int _{|\mathbf v|\lesssim \lambda ^{-1}} e^{i \lambda x \cdot (\gamma (t) + (0,\mathbf {v}))} G_{\eta '}(t,\mathbf {v}) d\mathbf {v} dt \Big |. \end{aligned}$$

By the change of variables \(\xi = \gamma (t) + (0,\mathbf {v})\), we obtain

$$\begin{aligned} \Big | \int _{I} \int _{|\mathbf v|\lesssim \lambda ^{-1}} e^{i \lambda x \cdot (\gamma (t) + (0,\mathbf {v}))} G_{\eta '}(t,\mathbf {v}) d\mathbf {v} dt \Big | = \lambda ^{-d}\Big | \int _{\lambda \gamma (I) + O(1)} e^{i x \cdot \xi } g_{\eta '}(\lambda ^{-1}\xi ) d\xi \Big | \end{aligned}$$

where \(g_{\eta '}(\xi ) = G_{\eta '}(t(\xi ),\mathbf v(\xi ))\). Hence, using (6) and Minkowski’s inequality and reversing the change of variables we see

$$\begin{aligned} \Big \Vert \int _{I} e^{i\lambda x \cdot \gamma (t)} a(x) f(t) dt \Big \Vert _{L^q(d\mu )}\lesssim & {} \lambda ^{-1 + \kappa } \sum _{\eta ,\eta '} |c_{\eta ,\eta '}| \, \Vert g_{\eta '}(\lambda ^{-1} \cdot ) \Vert _{L^2(\mathbb R^d)} \\\lesssim & {} \lambda ^{-1+\frac{d}{2} + \kappa } \sum _{\eta ,\eta '} |c_{\eta ,\eta '}| \Big (\int _{|\mathbf v|\lesssim \lambda ^{-1}}\int _{I} | f(t) (0,\lambda \mathbf {v})^{\eta '} |^{2} dt d\mathbf {v} \Big )^\frac{1}{2}\\\lesssim & {} \lambda ^{-\frac{1}{2} +\kappa } \sum _{\eta ,\eta '} |c_{\eta ,\eta '}| C^{|\eta '|} {\Vert f \Vert _{L^2(I)}} \lesssim \lambda ^{\kappa - \frac{1}{2}} \Vert f\Vert _{L^2(I)}. \end{aligned}$$

The third inequality follows from \(|(0,\lambda \mathbf {v})| \lesssim 1\). This completes the proof.

Now we prove Theorem 1.2.

Proof of Theorem 1.2

By Lemma 3.1 and Corollary 2.4 with \(p=2\), it follows that the estimate (6) holds with

$$\begin{aligned} \kappa = \frac{1}{2} - \eta (\alpha ,2,q,\ell ) + \epsilon \end{aligned}$$
(14)

for \(\epsilon >0\) and \(-1 \le \ell \le d-2-[d-\alpha ]\). For these \(\ell \), \((\frac{1}{2},\frac{1}{q})\in \mathscr {A}(\ell )\) if and only if \(q\in J(\ell )\).

Now we consider the case \(q \in J(d-1-[d-\alpha ])\), i.e. \(\ell =d-1-[d-\alpha ]\). By the same argument as above, using Lemma 3.1 and Corollary 2.4 with \(p=2\), we get (6) with \(\kappa =(d-\alpha )/q+\epsilon \ge (d-\alpha )/(2([d-\alpha ]+1)) +\epsilon \) for \(\epsilon >0\) and \([d-\alpha ]+1 \le q \le 2([d-\alpha ]+1)\) (which coincides with \(\mathscr {A}(d-1-[d-\alpha ])\) for \(p=2\)). Since \(\mu \) is a finite measure, the range of q can be extended by Hölder’s inequality. Thus we obtain (6) with

$$\begin{aligned} \kappa = \frac{d-\alpha }{2([d-\alpha ]+1)} +\epsilon \end{aligned}$$
(15)

for \(q \in J(d-1-[d-\alpha ]) = [1,2([d-\alpha ]+1)]\). Hence (14) and (15) correspond to \(\kappa (\alpha ,q,\ell ) +\epsilon \) for \([d-\alpha ]\ge 1\) because \(|J(\ell )| = 2(d-\ell )\) for \(\ell \le d-3-[d-\alpha ]\).

As mentioned above, for \([d-\alpha ]=0\) (and \(\ell =d-2-[d-\alpha ], d-1-[d-\alpha ]\)), a better estimate is possible by making use of the bilinear approach (see Erdoğan [12]). The following is proved in Section 6.

Theorem 3.2

Suppose that \(d \ge 2\) and \(d-1\le \alpha \le d\). Let \(\gamma \), \(\mu \), and g be given as in Theorem 1.2. Then, for \(\lambda > 1\), \(q \ge 2\) and \(\epsilon > 0\), there exists a constant \(C >0\) such that

$$\begin{aligned} \Vert \widehat{g}\Vert _{L^{q}(d\mu )} \le C\langle \mu \rangle _\alpha ^{\,\,\frac{1}{q}} \lambda ^{\kappa +\epsilon } \Vert g\Vert _{L^2} \end{aligned}$$

for \( \kappa =\max (\frac{1}{4} + \frac{ d -\alpha -1}{2q},\,\frac{1}{2} + \frac{d-\alpha -2}{q}) \).

This gives

$$\begin{aligned} \kappa = {\left\{ \begin{array}{ll} \, \frac{1}{4} + \frac{d-\alpha -1}{2q}, &{}\text { if } 2 \le q \le 2(\alpha -d +3),\\ \, \frac{1}{2} + \frac{d-\alpha -2}{q}, &{}\text { if } 2(\alpha -d +3) \le q. \end{array}\right. } \end{aligned}$$

Since \([2, \, 2(\alpha -d+3) ]= J(d-2)\) if \([d-\alpha ]=0\), (6) holds with \(\kappa =\frac{1}{4} + \frac{d-\alpha -1}{2q}+\epsilon = \kappa (\alpha ,q,d-2) +\epsilon \) for \(q\in J(d-2)\), \(\epsilon >0\). By taking \(q=2\) and using Hölder’s inequality, we get \(\kappa = \frac{d-\alpha }{4} +\epsilon = \kappa (\alpha ,q,d-1) +\epsilon \) for \(q \in J(d-1) = [1,2]\). This completes the proof. \(\square \)
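
The crossover between the two expressions at \(q=2(\alpha -d+3)\) is easy to check numerically; below is a small Python sketch (the names and the sample point \((d,\alpha )=(3,5/2)\) are ours, chosen for illustration):

```python
from fractions import Fraction as F

def kappa_bilinear(d, alpha, q):
    """kappa from Theorem 3.2: the larger of the two expressions."""
    return max(F(1, 4) + (d - alpha - 1) / (2 * q),
               F(1, 2) + (d - alpha - 2) / q)

d, alpha = 3, F(5, 2)
q0 = 2 * (alpha - d + 3)                       # crossover, here q0 = 5
for q in [F(2), F(3), F(4), q0, F(6), F(8), F(12)]:
    first = F(1, 4) + (d - alpha - 1) / (2 * q)
    second = F(1, 2) + (d - alpha - 2) / q
    # the first expression dominates up to q0, the second one beyond
    assert kappa_bilinear(d, alpha, q) == (first if q <= q0 else second)
```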

Now we turn to the proof of Theorem 1.1 for which we need the following lemma.

Lemma 3.3

Let \(\mu \) be a finite measure which is supported in B(0, 1). Suppose that the estimate

$$\begin{aligned} \Big |\int \widehat{g}(x) d\mu (x)\Big | \lesssim {\sqrt{I_\alpha (\mu )}} \lambda ^{\kappa } \Vert g\Vert _2 \end{aligned}$$
(16)

holds whenever g is supported in \(\lambda \gamma (I) + O(1)\). Then (3) holds with \(\delta =1-2\kappa \).

Proof

The proof is a simple modification of the argument in [21] (see also [15]). By the assumption (16) and duality, we have

$$\begin{aligned} \int _{\lambda \gamma (I) + O(1)} |\widehat{\mu }(\xi )|^{2} d\xi \lesssim I_\alpha (\mu ) \lambda ^{2\kappa }. \end{aligned}$$
(17)

Let \(\psi \) be a Schwartz function which is equal to 1 on the support of \(\mu \). Then

$$\begin{aligned} \int _{I} |\widehat{\mu }(\lambda \gamma (t))|^{2} dt = \int _{I} |\widehat{\psi } *\widehat{\mu }(\lambda \gamma (t))|^{2} dt \lesssim \int _{\mathbb R^d} \int _{I} |\widehat{\psi }(\lambda \gamma (t)-\xi )| dt |\widehat{\mu }(\xi )|^{2} d\xi . \end{aligned}$$

By rapid decay of \(\widehat{\psi }\), \(\int _{I} |\widehat{\psi }(\lambda \gamma (t)-\xi )| dt \lesssim \lambda ^{-1}{(1+{{\mathrm{dist}}}(\lambda \gamma (I),\xi ))^{-N}}\) for a sufficiently large \(N \ge d\). Hence, it follows that

$$\begin{aligned} \int _{I} |\widehat{\mu }(\lambda \gamma (t))|^{2} dt \lesssim \frac{1}{\lambda } \int \frac{|\widehat{\mu }(\xi )|^{2}}{(1+{{\mathrm{dist}}}(\lambda \gamma (I),\xi ))^{N}} d\xi . \end{aligned}$$
(18)

By dyadic decomposition along the distance between \(\xi \) and \(\lambda \gamma (I)\), we see

$$\begin{aligned} \int \frac{|\widehat{\mu }(\xi )|^{2}}{(1+{{\mathrm{dist}}}(\lambda \gamma (I),\xi ))^{N}} d\xi\lesssim & {} \int _{\lambda \gamma (I)+O(1)} |\widehat{\mu }(\xi )|^{2} d\xi \\&+\, \sum ^{\infty }_{ j=1} 2^{-N j} \int _{\lambda \gamma (I)+O(2^{j})} |\widehat{\mu }(\xi )|^{2} d\xi \\\lesssim & {} I_\alpha (\mu )\lambda ^{2\kappa } {+} I_\alpha (\mu ) \sum ^{\infty }_{j=1} 2^{-Nj} 2^{(d-1)j} \lambda ^{2\kappa } {\lesssim } I_\alpha (\mu )\lambda ^{2\kappa }. \end{aligned}$$

The second inequality follows from the fact that \(\lambda \gamma (I) + O(2^j)\) is covered by \(O(2^{(d-1)j})\) translates of \(\lambda \gamma (I) +O(1)\). Consequently, by combining this and (18) we obtain (3) with \(\delta =1-2\kappa \). \(\square \)

We also need the following lemma due to Wolff [21, Lemma 1.5]. In [21] the proof of this lemma is given only for \(d=2\), but the argument works in any dimension.

Lemma 3.4

Let \(\mu \) be a positive Borel measure supported in B(0, 1). Then, for \(R>1\), \(\mu \) can be written as \( \mu =\sum _{1\le j\le O(\log R)} \mu _j\) such that \(\mu _j\) is a positive Borel measure supported in B(0, 1) and, for each j,

$$\begin{aligned} \mu _j(\mathbb R^d) \sup _{(x,r)\in \mathbb R^d\times [R^{-1}, \infty ) } r^{-\alpha } \mu _j(B(x,r))\lesssim {I_\alpha (\mu )}. \end{aligned}$$
(19)

Proof of Theorem 1.1

By Lemma 3.3, for (3) we need to show (16). Now, by Lemma 3.4 with \(R=\lambda \) there are at most \(O(\log \lambda )\) measures. Ignoring the logarithmic loss, we may consider only one such measure \(\mu \) satisfying (19), and we need to show that, for \(\kappa > (1-\delta (\alpha ))/2\),

$$\begin{aligned} \Big |\int \widehat{g}(x) d\mu (x)\Big | \le C\mu (\mathbb R^d)^\frac{1}{2}\langle \mu \rangle _\alpha ^{\,\,\frac{1}{2}} \lambda ^{\kappa } \Vert g\Vert _2 \end{aligned}$$
(20)

holds whenever g is supported in \(\lambda \gamma (I) + O(1)\) and \(\mu \) is a positive Borel measure supported in B(0, 1) satisfying (19). However, we may assume that the stronger condition \(\mu (\mathbb R^d)\langle \mu \rangle _\alpha \le {I_\alpha (\mu )}\) holds. In fact, since g is supported in \(\lambda \gamma (I)+O(1)\), the estimate we need to show is equivalent to

$$\begin{aligned}\Big |\int \mathcal F({\psi (\cdot / \lambda )g})(x) d\mu (x)\Big | \le C\mu ( {\mathbb R^d})^\frac{1}{2}\langle \mu \rangle _\alpha ^{\,\,\frac{1}{2}} \lambda ^{\kappa } \Vert g\Vert _2,\end{aligned}$$

where \(\psi \) is a Schwartz function with \(\psi \sim 1\) on the ball B(0, Cd) and with \(\widehat{\psi }\) supported in B(0, 1). Since \(\mathcal F({\psi (\cdot / \lambda )g})=\lambda ^d\widehat{\psi }(\lambda \,\cdot )*\widehat{g}\), we may replace \(d\mu \) with \(\lambda ^d d\mu *|\widehat{\psi }|(\lambda \,\cdot )\). Then it is easy to see that \(\lambda ^d d\mu *|\widehat{\psi }|(\lambda \,\cdot )(\mathbb R^d)\lesssim \mu (\mathbb R^d)\) and \(\lambda ^d d\mu *|\widehat{\psi }|(\lambda \,\cdot ) (B(x,r))\lesssim r^{\alpha } \sup _{(y,s)\in \mathbb R^d\times [\lambda ^{-1}, \infty ) } s^{-\alpha } \mu (B(y,s)) \) for \(r>0\).

Since \(\mu \) is supported in B(0, 1) with \(\mu (\mathbb {R}^d)\langle \mu \rangle _\alpha \le I_\alpha (\mu )\) and \(q\ge 2\), by Hölder’s inequality and Theorem 1.2 we get, for \(\kappa > \kappa (\alpha ,q,\ell )\),

$$\begin{aligned} \int |\widehat{g}(x)| d\mu (x) \le \Vert \widehat{g}\Vert _{L^q(d\mu )}\mu (\mathbb R^d)^{1-\frac{1}{q}} \lesssim \mu (\mathbb R^d)^{1-\frac{1}{q}}\langle \mu \rangle _\alpha ^{\,\,\frac{1}{q}} \lambda ^{\kappa } \Vert g\Vert _{2}.\end{aligned}$$

Clearly, \(\mu (\mathbb R^d)\lesssim \langle \mu \rangle _\alpha \) because \(\mu \) is supported in B(0, 1). Hence we have (20) whenever \(\kappa > \kappa (\alpha ,q,\ell )\) with \(q\ge 2\). Therefore it only remains to find the minimum of \(\kappa (\alpha ,q,\ell )\) over \(q\in J(\ell )\), which depends on \(\alpha \).

First we consider the case \(d-1\le \alpha < d\). It is easy to see that \(\min _{\ell } \min _{q\in J(\ell )\cap [2,\infty )} \kappa (\alpha ,q,\ell ) = \kappa (\alpha ,2,d-2)= \frac{d-\alpha }{4}\). Thus we obtain the first part of Theorem 1.1.
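Indeed, taking \(q=2\) in the formula \(\kappa (\alpha ,q,d-2) = \frac{1}{4} + \frac{d-\alpha -1}{2q}\), which was obtained in the proof of Theorem 1.2 above, gives

$$\begin{aligned} \kappa (\alpha ,2,d-2) = \frac{1}{4} + \frac{d-\alpha -1}{4} = \frac{d-\alpha }{4}. \end{aligned}$$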

For the case \([d-\alpha ] \ge 1\), finding the minimum of \( \min _{q\in J(\ell )\cap [2,\infty )} \kappa (\alpha ,q,\ell )\) is less obvious. As mentioned in Remark 1.3, the minimum occurs when \(q \in J(d-2-[d-\alpha ])\). In fact, \(\min _{q \in J(d-2-[d-\alpha ])} \kappa (\alpha ,q,d-2-[d-\alpha ])\) is given by

$$\begin{aligned} \frac{1}{2} - \frac{\alpha - d+ 2+[ d-\alpha ] }{2 \beta _{[d-\alpha ]+ 2}(\alpha -d+ 2 +[d-\alpha ])} = \frac{1}{2} - \frac{2-\langle d-\alpha \rangle }{2 ([d-\alpha ]+1)(2-\langle d-\alpha \rangle ) +2} \end{aligned}$$

with \(q = 2 \beta _{[d-\alpha ]+2}(2-\langle d-\alpha \rangle )\), or \(\frac{d-\alpha }{2([d-\alpha ]+1)}\) with \(q = 2([d-\alpha ]+1)\). Combining these two gives the other part of Theorem 1.1. This completes the proof.

4 Upper Bound for \(\delta \) and Lower Bound for \(\kappa \)

In this section we consider the upper bound for \(\delta \) and the lower bound for \(\kappa \) which limit the values of \(\delta \) and \(\kappa \) in the estimates (3) and (6). As mentioned before, for the former there is a gap between our result and the plausible upper bound stated in Proposition 4.1. For the latter, the bounds we obtain here turn out to be sharp in various cases.

Proposition 4.1

Let \(0< \alpha <d\) and \(\gamma \) be given as in Theorem 1.1. Suppose (3) holds uniformly whenever \(I_\alpha (\mu )=1\). Then

$$\begin{aligned} \delta \le {\left\{ \begin{array}{ll} \, 1-\frac{d-\alpha }{2}, &{}\text { if } [d-\alpha ]=0,\\ \, \min \big \{ \frac{1}{[d-\alpha ]+1},\, 1-\frac{d-\alpha }{[d-\alpha ]+2} \big \}, &{}\text { if } [d-\alpha ] = 1, \cdots , d-2,\\ \, \min \big \{ \alpha ,\, \frac{1}{d} \big \}, &{}\text { if } [d-\alpha ]=d-1. \end{array}\right. } \end{aligned}$$
(21a), (21b), (21c)

Thus we see that (3) is sharp when \(d-1 \le \alpha < d\). As mentioned before, Theorem 1 in [12] shows that (21c) is sufficient for (3) to hold when \(\alpha \in (0,1]\).

Proof of Proposition 4.1

For a given \([d-\alpha ]\), let us fix an integer \(\ell \) such that \(0\le \ell \le d-[d-\alpha ]-1\).

Let \(\psi \) be a Schwartz function supported in B(0, 2) with \(\Vert \psi \Vert _{L^1}=1\). We also set

$$\begin{aligned} \psi _{\lambda ,\ell } (x) = \lambda ^{-\frac{1-d+\ell }{2}} \psi (\lambda ^{1-\frac{1}{d-\ell }} x_1, \dots , \lambda ^{1-\frac{d-\ell }{d-\ell }} x_{d-\ell },x_{d-\ell +1},\dots ,x_d), \end{aligned}$$

so that \(\Vert \psi _{\lambda ,\ell } \Vert _{L^1} =1\). Then there exists a rectangle \(S_\ell \) such that \( |\widehat{\psi _{\lambda ,\ell }}| \sim 1\) on \(S_\ell \), where \(S_\ell \) is a d-dimensional rectangle defined by

$$\begin{aligned} S_\ell =\Big \{x\in \mathbb {R}^{d} : |x_{1}|\lesssim & {} \lambda ^{1-\frac{1}{d-\ell }}, |x_{2}|\lesssim \lambda ^{1-\frac{2}{d-\ell }}, \cdots ,\\ |x_{d-\ell }|\lesssim & {} \lambda ^{1-\frac{d-\ell }{d-\ell }}=1,\cdots , |x_{d}|\lesssim 1\Big \}. \end{aligned}$$

By Taylor’s expansion, we have

$$\begin{aligned} \gamma (t) - \gamma (0)= & {} \gamma '(0)t + \gamma ''(0)\frac{t^{2}}{2!} + \cdots + \gamma ^{(d)}(0)\frac{t^{d}}{d!} + \mathbf e(t)\nonumber \\=: & {} M^{\gamma ,d}_{0}\gamma _{\circ }^d(t) + \mathbf e(t), \end{aligned}$$
(22)

where \(M_0^{\gamma ,d}\) is a nonsingular matrix given by (28) and \(|\mathbf e(t)| \lesssim t^{d+1}\). Clearly, we may also assume that \(\gamma (0)=0\).

Let \(d\mu (x) = |\det (M^{\gamma ,d}_{0})^{t} |\, \psi _{\lambda ,\ell }( (M^{\gamma ,d}_{0})^{t} x )dx\). Then we have \(\widehat{\mu }(\lambda \gamma (t)) = \widehat{\psi _{\lambda ,\ell }} (\lambda (\gamma _\circ ^d(t) + (M_0^{\gamma ,d})^{-1} \mathbf e(t))) \) by (22). If \(t < c \,\lambda ^{-1/(d-\ell )}\) for a sufficiently small c, then \(\lambda (\gamma _\circ ^d(t) + (M_0^{\gamma ,d})^{-1} \mathbf e(t)) \in S_\ell \). Hence, it follows that

$$\begin{aligned} \int _0^1 |\widehat{\mu }(\lambda \gamma (t))|^{2} dt \ge \int _0^{c \, \lambda ^{-\frac{1}{d-\ell }}}\Big |\widehat{\psi _{\lambda ,\ell }} \Big (\lambda (\gamma _\circ ^d(t) + (M_0^{\gamma ,d})^{-1} \mathbf e(t))\Big )\Big |^{2} dt \gtrsim \,\lambda ^{-\frac{1}{d-\ell }} . \end{aligned}$$

On the other hand, \( I_{\alpha }(\mu ) = \int |\widehat{\psi _{\lambda ,\ell }}((M^{\gamma ,d}_0)^{-1}\xi )|^{2}|\xi |^{\alpha -d}d\xi \lesssim \int _{ M^{\gamma ,d}_0 S_\ell } |\xi |^{\alpha -d} d\xi \) by the rapid decay of \(\widehat{\psi _{\lambda ,\ell }}\) outside \(S_\ell \). Hence, we see

$$\begin{aligned}&I_{\alpha }(\mu ) \le C\int _{|\xi |\lesssim 1} |\xi |^{\alpha -d} d\xi + C \sum _{k=0}^{d-\ell -2} \int _{\{\lambda ^{1-\frac{k+2}{d-\ell }} \lesssim |\xi |\lesssim \lambda ^{1-\frac{k+1}{d-\ell }}\}\cap M_0^{\gamma ,d} S_{\ell }} |\xi |^{\alpha -d} d\xi . \end{aligned}$$

Using spherical coordinates,

$$\begin{aligned}&\int _{\{\lambda ^{1-\frac{k+2}{d-\ell }} \lesssim |\xi |\lesssim \lambda ^{1-\frac{k+1}{d-\ell }}\}\cap M_0^{\gamma ,d} S_{\ell }} |\xi |^{\alpha -d} d\xi \lesssim \left( {\int ^{\lambda ^{-1+\frac{k+1}{d-\ell }}}_{0}\cdots \int ^{\lambda ^{-1+\frac{k+1}{d-\ell }}}_{0}} d\theta _{d-1}\dots d\theta _{d-\ell -1}\right) \\&\quad \times \int ^{\lambda ^{-\frac{d-\ell -k-2}{d-\ell }}}_{0} \cdots \int ^{\lambda ^{-\frac{2}{d-\ell }}}_{0} \int ^{\lambda ^{-\frac{1}{d-\ell }}}_{0} \Bigg ({\int ^{1}_{0} \cdots \int ^{1}_{0}} \int ^{\lambda ^{1-\frac{k+1}{d-\ell }}}_{\lambda ^{1-\frac{k+2}{d-\ell }}} r^{\alpha -1} dr d\theta _{d-\ell -2} \cdots d\theta _{d-k-\ell -1} \Bigg ) \cdots d\theta _{1}. \end{aligned}$$

Hence, evaluating the integrals we get

$$\begin{aligned} I_\alpha (\mu ) \lesssim \sum _{k=0}^{d-\ell -1} \lambda ^{h(k)}, \end{aligned}$$

where

$$\begin{aligned} h(k)={(\alpha -\ell -1) - \frac{1}{d-\ell }\left( (k+1)(\alpha -\ell -1) + \frac{(d-\ell -k-2)(d-\ell -k-1)}{2} \right) } . \end{aligned}$$

Clearly, (3) implies \(\lambda ^{-\frac{1}{d-\ell }}\lesssim \lambda ^{-\delta }\sum _{k=0}^{d-\ell -1} \lambda ^{h(k)}\). Letting \(\lambda \rightarrow \infty \) we get

$$\begin{aligned} \delta \le \frac{1}{d-\ell } + \max _{0\le k \le d-\ell -1} h(k). \end{aligned}$$

Since h(x) attains the maximum at \(x= d-\alpha -1/2\), it is easy to see that \( \max _{0 \le k \le d-\ell -1 } h(k) = h([d-\alpha ]) . \) Since \(d-\ell -1 \ge [d-\alpha ]\), we now consider the cases \(d-\ell -1 = [d-\alpha ]\) and \(d-\ell -1 > [d-\alpha ]\), separately. When \(d-\ell -1 = [d-\alpha ]\), we have

$$\begin{aligned} \delta \le \frac{1}{d-\ell } + h(d-\ell -1) = \frac{1}{[d-\alpha ]+1}. \end{aligned}$$
(23)
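Here the identity \(h(d-\ell -1)=0\) can be checked directly: with \(k = d-\ell -1\) the last term of h vanishes, since \((d-\ell -k-2)(d-\ell -k-1) = (-1)\cdot 0 = 0\), and so

$$\begin{aligned} h(d-\ell -1) = (\alpha -\ell -1) - \frac{1}{d-\ell }\, (d-\ell )(\alpha -\ell -1) = 0, \end{aligned}$$

which leaves only the term \(\frac{1}{d-\ell } = \frac{1}{[d-\alpha ]+1}\).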

When \(d-\ell -1 >[d-\alpha ]\), we examine the value of \((d-\ell )^{-1} + h([d-\alpha ])\) for \(\ell = 0,\dots , d-[d-\alpha ]-2\). Since \((d-\ell )^{-1}+ h([d-\alpha ])\) is minimized at \(\ell = d-[d-\alpha ]-2\), we get

$$\begin{aligned} \delta \le \frac{1}{d-(d-[d-\alpha ]-2)} + h([d-\alpha ]) = 1- \frac{d-\alpha }{[d-\alpha ]+2}. \end{aligned}$$
(24)
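To verify the identity in (24), note that with \(\ell = d-[d-\alpha ]-2\) we have \(d-\ell = [d-\alpha ]+2\), \(\alpha -\ell -1 = 1-\langle d-\alpha \rangle \), and the last term of \(h([d-\alpha ])\) vanishes, so that \(h([d-\alpha ]) = (1-\langle d-\alpha \rangle )\big (1- \frac{[d-\alpha ]+1}{[d-\alpha ]+2}\big ) = \frac{1-\langle d-\alpha \rangle }{[d-\alpha ]+2}\). Hence

$$\begin{aligned} \frac{1}{d-\ell } + h([d-\alpha ]) = \frac{1}{[d-\alpha ]+2} + \frac{1-\langle d-\alpha \rangle }{[d-\alpha ]+2} = \frac{2-\langle d-\alpha \rangle }{[d-\alpha ]+2} = 1- \frac{d-\alpha }{[d-\alpha ]+2}, \end{aligned}$$

where the last equality uses \(d-\alpha = [d-\alpha ]+\langle d-\alpha \rangle \).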

Thus we conclude that \(\delta \) is bounded above by the smaller of (23) and (24) when \(d-[d-\alpha ]-1 <\alpha \le d- [d-\alpha ]\), which gives (21b). In particular, for \([d-\alpha ]=0\), i.e. \(d-1<\alpha < d\), the resulting bound is \(1-\frac{d-\alpha }{2}\), which is (21a).

Finally, we show (21c). In this case, \([d-\alpha ] = d-1\), i.e. \(0<\alpha \le 1\). Repeating the same argument, we see that (23) implies \(\delta \le \frac{1}{d}\). Hence it suffices to show \(\delta \le \alpha \) for \(\alpha \in (0,d)\). To obtain this, let \(\alpha _*\in (\alpha , d)\) and consider \(d\mu (x)=|x|^{-d+\frac{\alpha _{*}}{2}}\psi (x)dx\) with \(\psi \) a Schwartz function as above. It is easy to see that \( \widehat{\mu }(\xi ) = C\, |\cdot |^{-\frac{\alpha _{*}}{2}}*\widehat{\psi }(\xi ) \approx (1+|\xi |)^{-\frac{\alpha _{*}}{2}}\). So we get

$$\begin{aligned} \int _0^1 |\widehat{\mu }(\lambda \gamma (t))|^2 dt \gtrsim \lambda ^{-\alpha _*}. \end{aligned}$$

Since \(\alpha < \alpha _*\), \( I_\alpha (\mu ) = \int |\widehat{\mu }(\xi )|^2 |\xi |^{\alpha -d} d\xi \le \int _{|\xi |>1}|\xi |^{\alpha -d-\alpha _*} d\xi + \int _{|\xi |<1} |\xi |^{\alpha -d} d\xi \lesssim 1. \) Hence, (3) implies \(\delta \le \alpha _*\) for any \(\alpha _*\in (\alpha ,d)\), which gives \(\delta \le \alpha \) as desired. \(\square \)

Now we consider the lower bounds for \(\kappa \) in Theorem 1.2. We define the intervals \(J_\circ (\ell )\subset [1,\infty )\) by

$$\begin{aligned} J_\circ (\ell ) = {\left\{ \begin{array}{ll} \, J(\ell ), &{}\text {for }\, \ell = -1,0,\cdots , d-3-[d-\alpha ],\\ \, [\,2 \beta _{d-\ell -1}(\alpha -\ell -1),\, 2 \beta _{d-\ell }(\alpha -\ell )\,], &{}\text {for } \ell = d-2-[d-\alpha ],\\ \, [\, 1,\, 2 \beta _{[d-\alpha ]+1}(1-\langle d-\alpha \rangle ) \,], &{}\text {for } \ell = d-1-[d-\alpha ]. \end{array}\right. } \end{aligned}$$

For each \(q \in J_\circ (\ell )\) we also define \(\kappa _\circ (\alpha ,q,\ell )\) given by

$$\begin{aligned}\kappa _\circ (\alpha ,q,\ell ) = {\left\{ \begin{array}{ll} \tfrac{1}{2}-\tfrac{\alpha }{q}, &{} \text {if } q\in J_\circ (-1), \\ \tfrac{1}{2} - \tfrac{\alpha -\ell }{q} + \tfrac{1}{d-\ell } \big ( \tfrac{\beta _{d-\ell }(\alpha -\ell )}{q} - \tfrac{1}{2} \big ), &{} \text {if } q \in J_\circ (\ell ), \end{array}\right. } \end{aligned}$$

for \(0 \le \ell \le d-1-[d-\alpha ]\). Then \(\kappa _\circ (\alpha ,q,\ell ) = \kappa (\alpha , q,\ell )\) for \(q \in J(\ell )\), \(-1 \le \ell \le d-3-[d-\alpha ]\). Also, for given \(\alpha \) and \(\ell \), \(\kappa _\circ (\alpha ,q,\ell )\) is defined only for \(q\in J_\circ (\ell )\). It is easy to see that \(\kappa _\circ (\alpha ,q,\ell )\) continuously decreases as \(\ell \) increases.

Proposition 4.2

Suppose (6) holds with \(\mu \), \(\gamma \), and g given as in Theorem 1.2. Then, for \(q \in J_\circ (\ell )\),

$$\begin{aligned} \kappa \ge \kappa _\circ (\alpha ,q,\ell ). \end{aligned}$$
(25)

In addition, \(\kappa \ge (d-\alpha )/4\) when \(d-1\le \alpha \le d\).

Proof of Proposition 4.2

We show (25) first. Fix \(\alpha \) and consider the measure \(\mu _\circ \) given by

$$\begin{aligned} d\mu _\circ (x) = \psi (x) \prod ^{[d-\alpha ]}_{j=1} d\delta (x_{j}) |x_{[d-\alpha ]+1}|^{-\langle d-\alpha \rangle } dx_{[d-\alpha ]+1} \cdots dx_{d} , \end{aligned}$$
(26)

where \(\psi \) is a smooth function supported in B(0, 1) and \(\delta \) is the delta measure. When \([d-\alpha ]=0\), we write \(d\mu _\circ (x) = \psi (x) |x_1|^{-\langle d-\alpha \rangle } d x_1 dx_2\cdots d x_d\). Then, as can easily be checked, \(\mu _\circ \) satisfies (4).
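Indeed, since \(\alpha = d-[d-\alpha ]-\langle d-\alpha \rangle \), for a ball centered at the origin (which is the worst case) we have, for \(0<r\le 1\),

$$\begin{aligned} \mu _\circ (B(0,r)) \lesssim \Big ( \int _0^r s^{-\langle d-\alpha \rangle } ds \Big )\, r^{\,d-[d-\alpha ]-1} \sim r^{\,1-\langle d-\alpha \rangle }\, r^{\,d-[d-\alpha ]-1} = r^{\alpha }, \end{aligned}$$

and balls centered elsewhere carry at most comparable mass since the density of \(\mu _\circ \) is largest near \(\{x_{[d-\alpha ]+1}=0\}\).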

Let \(g(y):=\lambda ^{-\frac{1}{2}}\chi _{\lambda \gamma (I)+O(1)}(y)\). Then \( |\widehat{g}(x) | = \lambda ^{-\frac{1}{2}} \big | \int _{\lambda \gamma (I) + O(1)} e^{i x\cdot y }dy \big | \gtrsim \lambda ^{\frac{1}{2}} \) whenever \(x\in B(0,c\lambda ^{-1})\) for a sufficiently small \(c>0\). It follows that

$$\begin{aligned} \Vert \widehat{g}\Vert _{L^{q}(d\mu )} \gtrsim \lambda ^{\frac{1}{2}}\mu (B(0,c\lambda ^{-1}))^{\frac{1}{q}} \sim \lambda ^{\frac{1}{2} - \frac{\alpha }{q}}. \end{aligned}$$

Since \(\Vert g\Vert _{L^{2}(\mathbb {R}^{d})}\sim 1\), (6) and letting \(\lambda \rightarrow \infty \) gives \(\kappa \ge 1/2- \alpha / q\).

Now let \(\ell \) be an integer such that \(0 \le \ell \le d-1-[d-\alpha ]\). Let us consider the measure \(\mu \) defined by \(\int F(x) d\mu = \int F( (M_0^{\gamma ,d})^{-t}x) d\mu _\circ (x)\). Note that \(d\mu \) is a compactly supported positive Borel measure satisfying (4). Let \(J = [0,\lambda ^{-\frac{1}{d-\ell }}]\) and set \(g(y) = \chi _{\lambda \gamma (J)+O(1)}(y).\) Then \(\Vert \widehat{g}\Vert _{L^q(d \mu )}^q = \int |\widehat{ g} ( (M^{\gamma ,d}_{0})^{-t}x) |^q d\mu _\circ (x)\) and

$$\begin{aligned} |\widehat{g}((M^{\gamma ,d}_{0})^{-t} x)| = \Big |\int _{\lambda \gamma (J)+O(1)} e^{i x \cdot (M^{\gamma ,d}_{0})^{-1}(y-\lambda \gamma (0))} dy \Big |. \end{aligned}$$

Using Taylor’s expansion in (22) we see that \((M^{\gamma ,d}_{0})^{-1}(y-\lambda \gamma (0))\) is contained in \(\lambda \gamma _{\circ }^d(J) + O(1)\). Hence,

$$\begin{aligned} |\widehat{g}((M^{\gamma ,d}_{0})^{-t} x)| \gtrsim \lambda ^{1-\frac{1}{d-\ell }} \chi _{P_{\ell }}(x), \end{aligned}$$

where \( P_{\ell } = [0, c\lambda ^{\frac{1}{d-\ell }-1}]\times [0, c\lambda ^{\frac{2}{d-\ell }-1}]\times \cdots \times [0, c\lambda ^{\frac{d-\ell }{d-\ell }-1}]\times [0, c]\times \cdots \times [0, c], \) for a small \(c>0\). Since \(\mu _\circ (P_{\ell })\sim \lambda ^{-(\alpha -\ell )+\frac{\beta _{d-\ell }(\alpha -\ell )}{d-\ell }}\), we get

$$\begin{aligned} \Vert \widehat{g}\Vert _{L^q(d\mu )} \gtrsim \lambda ^{1-\frac{1}{d-\ell }}\Big (\int \chi _{P_\ell }(x)d\mu _\circ (x)\Big )^{\frac{1}{q}} \sim \lambda ^{1-\frac{\alpha -\ell }{q} + \frac{1}{d-\ell }(\frac{\beta _{d-\ell }(\alpha -\ell )}{q}-1)} . \end{aligned}$$

Combining this with \(\Vert g \Vert _{L^{2}} \sim \lambda ^{\frac{1}{2}-\frac{1}{2(d-\ell )}}\), we see that (6) gives, for \(0 \le \ell \le d-1-[d-\alpha ]\),

$$\begin{aligned} \kappa \ge \frac{1}{2} - \frac{ \alpha - \ell }{q} + \frac{1}{d-\ell } \left( \frac{\beta _{d-\ell }(\alpha -\ell )}{q} -\frac{1}{2} \right) . \end{aligned}$$

Considering the maximum along \(\ell \) and the lower bound \(\kappa \ge \frac{1}{2} -\frac{\alpha }{q}\), we can see that \(\kappa \ge \frac{1}{2} -\frac{\alpha }{q}\) for \(q \in J_\circ (-1)\), i.e. \(q \ge 2 \beta _d(\alpha )\). When \(2 \beta _{d-1}(\alpha -1) \le q \le 2 \beta _{d}(\alpha )\), i.e. \(q \in J_\circ (0)\), we get \(\kappa \ge \frac{1}{2} - \frac{ \alpha }{q} + \frac{1}{d } \big ( \frac{\beta _{d }(\alpha )}{q} -\frac{1}{2} \big )\). Similarly for each \(\ell \), we conclude that \(\kappa \ge \kappa _\circ (\alpha ,q,\ell )\) for \(q \in J_\circ (\ell )\).

We now show that \(\kappa \ge (d-\alpha )/4\) when \(d-1\le \alpha \le d\). For this, we adapt the argument in [12]. Let \(G_1\) be a Schwartz function supported in \( D := [0,\lambda ^{\frac{1}{2}}]\times [0,1]\times \cdots \times [0,1]\) \(\subset \lambda \gamma _{\circ }^{d}(I) + O(1)\) such that \(\Vert G_1\Vert _{L^2} =1\) and \(|\widehat{G_{1}}(x)| > \lambda ^{\frac{1}{4}}/100\) on a rectangle \(D^*\) of dimension \(\lambda ^{-\frac{1}{2}}\times 1\times \cdots \times 1\).

For a fixed \(\lambda \ge 1\), we set \(T = [\lambda ^{\frac{\alpha -(d-1)}{2}}]\), the integer part of \(\lambda ^{\frac{\alpha -(d-1)}{2}}\), and define a Schwartz function \(G_2\) by

$$\begin{aligned} \widehat{G_{2}}(x) := T^{-\frac{1}{2}} \sum ^{T-1}_{k=0} \widehat{G_{1}}(x-\frac{k}{T}e_{1}), \end{aligned}$$

where \(e_1 = (1,0,\dots ,0)\in \mathbb R^d\). Then \(|\widehat{G_{2}}|\gtrsim T^{-\frac{1}{2}}\lambda ^{\frac{1}{4}}\) on the set \(S:=\bigcup ^{T-1}_{k=0}(D^* + \frac{k}{T}e_{1})\) and \(\Vert G_{2}\Vert _{2}^2 = T^{-1} \sum ^{T-1}_{k=0} \Vert \widehat{G_{1}}(\cdot -\frac{k}{T}e_{1})\Vert _2^2 =1\). Moreover \(G_{2}\) is supported in D. Hence, if we set

$$\begin{aligned}G_{3}(x) := |\det M^{\gamma ,d}_{0}|^{-\frac{1}{2}} G_{2} ((M^{\gamma ,d}_{0})^{-1}x),\end{aligned}$$

then \(G_{3}\) is supported in \(M_{0}^{\gamma ,d}D \subset \lambda \gamma (I) +O(1)\) and \(\Vert {G_{3}}\Vert _{L^{2}} = 1\).

Let us set \( d\mu _\circ (x) = \lambda ^{\frac{d-\alpha }{2}} \chi _{S}(x) dx\). It is not difficult to verify that \(\mu _\circ \) satisfies (4). In fact, if \( \lambda ^{-\frac{1}{2}} \le \rho < 1 \), there exists an integer j such that \(j/T \le \rho \le (j+1)/T\) by the definition of S. Hence, for any \(x \in \mathbb R^d\), we have

$$\begin{aligned} \mu _\circ (B(x ,\rho ))&= \lambda ^{\frac{d-\alpha }{2}} |S\cap B(x ,\rho )| \lesssim \lambda ^{\frac{d-\alpha }{2}} (j+1)\lambda ^{-\frac{1}{2}} \rho ^{d-1} \\&\lesssim \lambda ^{-\frac{\alpha -(d-1)}{2}} T \rho ^d \le \rho ^d \le \rho ^\alpha . \end{aligned}$$

The other cases \(0< \rho < \lambda ^{-\frac{1}{2}}\) and \(\rho \ge 1\) can be handled similarly. So, by Lemma 5.2 the measure \(\mu \) defined by

$$\begin{aligned}\int F(x) d\mu = \int F( (M_0^{\gamma ,d})^{-t}x) d\mu _\circ (x)\end{aligned}$$

also satisfies (4). Since \(T \le \lambda ^{\frac{\alpha -(d-1)}{2}} < T+1\), it follows that

$$\begin{aligned} \Vert {\widehat{G_{3}}}\Vert _{L^{q}(d \mu )}^q= & {} \int |\widehat{G_{3}}((M^{\gamma ,d}_{0})^{-t}x)|^{q} d\mu _\circ (x) \sim \int |\widehat{G_{2}}(x)|^{q} d\mu _\circ (x)\\\gtrsim & {} T^{-\frac{q}{2}}\lambda ^{\frac{q}{4}} \lambda ^{\frac{d-\alpha }{2 }}|S|\\\gtrsim & {} \lambda ^{\frac{q(d-\alpha )}{4}}. \end{aligned}$$
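The last bound follows from \(|S| \sim T \lambda ^{-\frac{1}{2}}\) and \(T \sim \lambda ^{\frac{\alpha -(d-1)}{2}}\); indeed,

$$\begin{aligned} T^{-\frac{q}{2}}\lambda ^{\frac{q}{4}} \lambda ^{\frac{d-\alpha }{2}} |S| \sim T^{\,1-\frac{q}{2}}\, \lambda ^{\frac{q}{4} + \frac{d-1-\alpha }{2}} \sim \lambda ^{(1-\frac{q}{2})\frac{\alpha -d+1}{2} + \frac{q}{4} - \frac{\alpha -d+1}{2}} = \lambda ^{\frac{q(d-\alpha )}{4}}. \end{aligned}$$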

Hence we see \(\kappa \ge (d-\alpha )/4\) by letting \(\lambda \rightarrow \infty \). \(\square \)

5 Proof of Theorems 2.1 and 2.3

For a given \(\alpha \), let \(\ell \) be an integer in \([0, d-1-[d-\alpha ]]\). As \(\ell \) increases, the oscillatory decay in (10) gets worse while the range (11) gets wider. The case of \(\ell =0\) is already established in [16]. To show Theorem 2.1 for the other cases, we consider the collection \(\Gamma (k,\epsilon )\) of curves which is given by

$$\begin{aligned} \Gamma (k,\epsilon )=\Big \{\gamma \in C^{d+1}(I) : \Vert \gamma -\gamma _\circ ^k \Vert _{ C^{k+1}(I)}\le \epsilon \Big \}, \end{aligned}$$

where

$$\begin{aligned} \gamma _\circ ^k(t)=\Big (t,\,{t^2}/{2!},\,\dots ,\, {t^k}/{k!},\,0,\,\dots ,\, 0 \Big ),\,\, 1 \le k \le d.\end{aligned}$$
(27)

The curves in \(\Gamma (k,\epsilon )\) are nondegenerate in \(\mathbb R^k\) when they are projected to \(\mathbb R^k\times \{0\}\). Viewing these curves as nondegenerate curves in \(\mathbb R^k\) provides various multilinear estimates under a separation condition between functions (see Lemma 5.3). From these multilinear estimates we can obtain the linear estimate by adapting the argument in [16]. The difference here is that we run the induction-on-scales argument on each of the k-linear estimates, which was not exploited before. This requires controlling the rescaling of measures when \(d-k = \ell \) variables are fixed.

5.1 Normalization of Curves

In Lemma 5.1 we show that any nondegenerate curve defined in a sufficiently small interval can be made arbitrarily close to \(\gamma _\circ ^k\). This can be shown by Taylor expansion of \(\gamma \) of degree k and rescaling. It is worth noting that the condition (2) does not guarantee \(| \det M_\tau ^{\gamma , k} | \ge c >0\) for some c, where

$$\begin{aligned} M_\tau ^{\gamma , k}= ( \gamma '(\tau ),\gamma ''(\tau ), \ldots , \gamma ^{(k)}(\tau ),e_{k+1}, \ldots , e_d ) \end{aligned}$$
(28)

and \(e_j\)’s are the unit vectors whose j-th component is 1. However, by Lemma 2.1 in [16], we may assume that (after a finite number of decompositions and rescaling) any non-degenerate curve \(\gamma \) is close to \(\gamma _\circ ^d\) in a small interval. Using this we can see that \( M_\tau ^{\gamma , k} \) is invertible and there is a constant \(B>0\) such that \(\Vert (M_\tau ^{\gamma , k})^{-1} \Vert \le B\) uniformly for \(\tau \in I\). (Here \(\Vert M\Vert \) denotes the usual matrix norm such that \(\Vert M\Vert = \max _{|x| = 1} |Mx|\).) In fact, if \(\gamma \in \Gamma (d, \epsilon )\), we have \(\gamma = \gamma _\circ ^{d} + \mathbf e_d \) such that \(\Vert \mathbf e_d \Vert _{C^{d+1}(I)} < \epsilon \). Then \(\det M_\tau ^{\gamma , k} = \det (\gamma '_\circ ,\gamma ''_\circ ,\dots ,\gamma ^{(k)}_\circ ,e_{k+1},\dots ,e_d) + \textit{error terms}\). For sufficiently small \(\epsilon \), it follows that \(\det M_\tau ^{\gamma , k} \ge \frac{1}{2}\). (Note that \(\det (\gamma '_\circ ,\gamma ''_\circ ,\dots ,\gamma ^{(k)}_\circ ,e_{k+1},\dots ,e_d) =1\).)

For \(a,b\in \mathbb R\), \(a\ne b\), let us set

$$\begin{aligned} |[a,b]|={\left\{ \begin{array}{ll} \,\, [a,b] \,\, \text { if } a<b,\\ \,\, [b,a] \,\, \text { if } b<a. \end{array}\right. } \end{aligned}$$

We define the normalized curve by setting

$$\begin{aligned} \gamma _\tau ^h(t)= (M_\tau ^{\gamma ,k}\,D_h^k)^{-1}(\gamma (ht+\tau )-\gamma (\tau )), \end{aligned}$$
(29)

where \(D_h^k\) is the diagonal matrix given by \(D_h^k =(he_1, h^2e_2, \dots , h^{k} e_k, e_{k+1},\dots , e_d )\). Then \(\gamma _\tau ^h\) is close to \(\gamma _\circ ^k\) when h is sufficiently small, as the following lemma shows.

Lemma 5.1

Let \(\tau \in I\) and \(\gamma \in \Gamma ( d,\epsilon )\) for some \(\epsilon >0\). Then, there is a constant \(\delta >0\) such that \(\gamma _{\tau }^h\in \Gamma (k,\epsilon )\) whenever \(|[\tau ,\tau +h]| \subset I\), \(0<|h|\le \delta \).

Proof

We may assume that \(h >0\), i.e. \( |[\tau ,\tau +h]| = [\tau ,\tau +h]\). The case that \(h <0\) can be shown in the same manner. By Taylor’s expansion, we have

$$\begin{aligned} \gamma (ht +\tau ) - \gamma (\tau )&= \gamma '(\tau ) ht + \gamma ''(\tau ) \frac{(ht)^2}{2!} + \cdots + \gamma ^{(k)}(\tau ) \frac{(ht)^k}{k!} + \mathbf e (\tau , h, t) \\&= M_\tau ^{\gamma , k} D_h^k \gamma _\circ ^k(t) + \mathbf e (\tau , h, t), \end{aligned}$$

where \(\Vert \mathbf e (\tau , h, t) \Vert _{C^{k+1}(I)} \le C h^{k+1}\) for some constant \(C>0\) independent of \(\tau \). Hence we obtain \(\Vert \gamma _\tau ^h - \gamma _\circ ^k \Vert _{C^{k+1}(I)} = \Vert (M_\tau ^{\gamma , k} D_h^k)^{-1} \mathbf e(\tau , h,\cdot )\Vert _{C^{k+1}(I)} \lesssim h\), which implies that \(\gamma _\tau ^h \in \Gamma (k,\epsilon )\) if \(\delta \) is taken sufficiently small relative to \(\epsilon \). \(\square \)
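The last estimate uses \(\Vert (D_h^k)^{-1}\Vert = |h|^{-k}\) for \(0<|h|<1\) together with the uniform bound \(\Vert (M_\tau ^{\gamma ,k})^{-1}\Vert \le B\) noted in the previous subsection:

$$\begin{aligned} \Vert (M_\tau ^{\gamma , k} D_h^k)^{-1} \mathbf e(\tau ,h,\cdot )\Vert _{C^{k+1}(I)} \le \Vert (D_h^k)^{-1}\Vert \, \Vert (M_\tau ^{\gamma ,k})^{-1}\Vert \, \Vert \mathbf e(\tau ,h,\cdot )\Vert _{C^{k+1}(I)} \le B\, |h|^{-k}\, C |h|^{k+1} = CB\, |h|. \end{aligned}$$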

5.2 Rescaling of Measure

For \(M > 0\), we denote by \(\mathfrak M(\alpha ,M)\) the set of compactly supported positive Borel measures satisfying \(0<\langle \mu \rangle _\alpha \le M\). Let \(\mu \in \mathfrak M (\alpha ,M)\), and let A be a non-singular matrix. Let us now define a measure \(\mu ^k_{A,h}\) by setting

$$\begin{aligned} \int F(x) d \mu ^k_{A,h} (x) =\int F( D_h^k A x) d \mu (x) \end{aligned}$$
(30)

for any compactly supported continuous function F and \(0< |h| < 1\). By the Riesz representation theorem we see that \(\mu ^k_{A,h}\) is the unique measure given by (30).

Lemma 5.2

Let \(\mu \) and A be given as above, \(\ell = 0,1,\dots , d-1-[d-\alpha ]\). Set \(k = d-\ell \). Then, \(\mu ^k_{A,h}\) is also a Borel measure satisfying

$$\begin{aligned} \langle \mu ^k_{A,h} \rangle _\alpha \le C \langle \mu \rangle _\alpha \Vert A^{-1} \Vert ^\alpha |h|^{-\beta _k(\alpha -d+k)}. \end{aligned}$$
(31)

Here C is independent of hA.

Proof

By the proof of Lemma 2.3 in [16], it suffices to show that

$$\begin{aligned} \mu _h^k (B(0,\rho )) \le C \langle \mu \rangle _\alpha |h|^{-\beta _k(\alpha -d+k)} \rho ^\alpha , \end{aligned}$$

where \(\mu _h^k : = \mu _{I_d,h}^k\) and \(I_d\) is the \(d\times d \) identity matrix. It is clear that \(\mu _h^k(B(0,\rho )) = \mu ( (D_h^k)^{-1}B(0,\rho ))\le \mu ( \mathcal R)\), where \(\mathcal R\) is a rectangle of dimension \(|h|^{-1} \rho \times \cdots \times |h|^{-k}\rho \times \rho \times \cdots \times \rho \). If we denote by \(\widetilde{\mathcal R}\) a larger rectangle of dimension \(|h|^{-1} \rho \times \cdots \times |h|^{-k}\rho \times |h|^{-([d-\alpha ]+1)}\rho \times \cdots \times |h|^{-([d-\alpha ]+1)}\rho \) which contains \(\mathcal R\), then it follows that \(\mu (\widetilde{\mathcal R}) \sim |h|^{-([d-\alpha ]+1)(d-k)} \mu (\mathcal R)\). Since \(1 \le [d-\alpha ]+1 \le k\), \(\widetilde{\mathcal R}\) is covered by cubes \(Q_1,\dots , Q_N\) of side length \(|h|^{-([d-\alpha ]+1)} \rho \) with \(N \lesssim |h|^{-(k-1-[d-\alpha ])(k-[d-\alpha ])/2}\). Since \(\mu (Q_i)\le \langle \mu \rangle _\alpha |h|^{-\alpha ([d-\alpha ]+1)} \rho ^\alpha \), we get

$$\begin{aligned} \mu _h^k(B(0,\rho ))\lesssim & {} |h|^{([d-\alpha ]+1)(d-k)} \mu (\widetilde{\mathcal R})\\\le & {} |h|^{([d-\alpha ]+1)(d-k)} \sum _{i=1}^N \mu (Q_i)\\\le & {} \langle \mu \rangle _\alpha |h|^{-\beta _k(\alpha - d + k)} \rho ^\alpha . \end{aligned}$$

This completes the proof.

5.3 Multilinear (k-Linear) Estimates

Let us set, for \(\lambda \ge 1\),

$$\begin{aligned} \mathcal E^{\gamma }_{\lambda }f(x) = a(x) \int _{I} e^{i\lambda x \cdot \gamma (t)} f(t) dt , \end{aligned}$$

where a is a bounded function supported in B(0, 1) with \(\Vert a\Vert _\infty \le 1\). As mentioned above, we need to prove k-linear estimates for \(\mathcal E^{\gamma }_{\lambda }\) for \(\gamma \in \Gamma (k,\epsilon )\). This can be achieved simply by freezing the other \(d-k\) variables. By applying Lemma 2.5 in [16] and Plancherel’s theorem, we obtain a k-linear \(L^2\rightarrow L^2\) estimate.

Lemma 5.3

Let \(\gamma \in \Gamma (k,\epsilon )\) and \(\mathcal I_1,\dots , \mathcal I_k\) be closed intervals contained in I which satisfy \(\min _{i\ne j}{{\mathrm{dist}}}(\mathcal I_i, \mathcal I_j)\ge L\). If \(\epsilon >0\) is sufficiently small, then there is a constant C, independent of \(\gamma \), such that

$$\begin{aligned} \Big \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Big \Vert _{L^2(\mathbb R^d)}\le CL^{-\frac{k^2-k}{4}}\lambda ^{-\frac{k}{2}} \prod _{i=1}^k \Vert f_i \Vert _{L^2(\mathbb R)} \end{aligned}$$
(32)

whenever \(f_i\) is supported in \(\mathcal I_i\), \(i=1,2,\dots , k\).

Proof

For the proof, it suffices to show that for a constant vector \(\mathbf c \in \mathbb R^{d-k}\),

$$\begin{aligned} \int \Big | \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i (x_1,\dots ,x_k,\mathbf c) \Big |^2 d x_1 \cdots d x_k \le C L^{-\frac{k^2-k}{2}}\lambda ^{- k } \prod _{i=1}^k \Vert f_i \Vert ^2_{L^2}. \end{aligned}$$
(33)

Then (32) follows by integrating along \(\mathbf c\) since a is supported in B(0, 1) and \(\Vert a\Vert _\infty \le 1\). To prove this, let us set \(\gamma (t) = (\gamma _{\star }(t),\gamma _{\mathbf c}(t))\) where \(\gamma _{\star }(t)\) is the first k components of \(\gamma (t)\) and the rest of the components are denoted by \(\gamma _{\mathbf c}(t)\). Also let us set

$$\begin{aligned} F(\mathbf t) = e^{i \lambda \mathbf c \cdot \sum _{i=1}^k \gamma _{\mathbf c}(t_i)} \prod _{i=1}^k f_i(t_i) \end{aligned}$$

where \(\mathbf t = (t_1,\dots ,t_k)\). Then we have

$$\begin{aligned} \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i (x_1,\dots ,x_k,\mathbf c) = \int _{I^k} e^{ i \lambda (x_1,\dots ,x_k)\cdot \sum _{i=1}^k\gamma _{\star }(t_i) } F(\mathbf t) d \mathbf t. \end{aligned}$$

Since \(\gamma \in \Gamma (k, \epsilon )\), we have \(\gamma _\star (t) = (t, t^2/2!,\dots ,t^k/k!) + \mathbf e\) with \(\Vert \mathbf e\Vert _{C^{k+1}(I)} \le \epsilon \). Then we can apply Lemma 2.5 in [16], that is, the k-linear estimates in \(\mathbb R^k\), or argue more directly via a change of variables and Plancherel’s theorem. Since \(\Vert F\Vert ^2_{L^2} = \prod _{i=1}^k \Vert f_i \Vert ^2_{L^2}\), we get (33). \(\square \)
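For the model curve \(\gamma _\circ ^k\), the change of variables here is explicit: the map \(\Phi (\mathbf t) = \sum _{i=1}^k \gamma _\star (t_i)\) has Jacobian

$$\begin{aligned} |\det D\Phi (\mathbf t)| = |\det (\gamma _\star '(t_1), \dots , \gamma _\star '(t_k))| = c_k \prod _{1\le i<j\le k} |t_j - t_i| \gtrsim L^{\frac{k^2-k}{2}} \end{aligned}$$

on \(\mathcal I_1\times \cdots \times \mathcal I_k\) by the separation hypothesis, the determinant being of Vandermonde type since \(\gamma _\star '(t) = (1, t, t^2/2!, \dots , t^{k-1}/(k-1)!)\). Hence the substitution \(\xi = \lambda \Phi (\mathbf t)\) together with Plancherel’s theorem produces the factor \(L^{-\frac{k^2-k}{2}}\lambda ^{-k}\) in (33); for \(\gamma \in \Gamma (k,\epsilon )\) with \(\epsilon \) small the Jacobian is comparable to that of \(\gamma _\circ ^k\).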

Now we obtain an \(L^p \rightarrow L^q(d\mu )\) estimate by interpolating (32) with the trivial \(L^1 \rightarrow L^\infty (d\mu ) \) estimate.

Proposition 5.4

Let \(\mathcal I_1,\dots , \mathcal I_k\), and \(\gamma \in \Gamma (k,\epsilon )\) be given as in Lemma 5.3. Suppose \(\mu \in \mathfrak M(\alpha ,1)\). If \(\epsilon >0\) is sufficiently small, then for \(1/p+1/q\le 1\) and \(q\ge 2\) there is a constant C, independent of \(\gamma \), such that

$$\begin{aligned} \Big \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Big \Vert _{L^q(d\mu )}\le C \langle \mu \rangle _\alpha ^{\,\,\frac{1}{q}} L^{-\frac{k^2-k}{2q}}\lambda ^{-\frac{\alpha -d+k}{q}}\prod _{i=1}^k\Vert f_i \Vert _p \end{aligned}$$

whenever \(f_i\) is supported in \(\mathcal I_i\), \(i=1,2,\dots , k\).

Proof

Since we have the trivial estimate \(\Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Vert _{L^\infty (d\mu )} \le \prod _{i=1}^k \Vert f_i\Vert _{L^1}\), in view of interpolation it suffices to show that

$$\begin{aligned} \Big \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Big \Vert _{L^2(d\mu )} \le C \langle \mu \rangle _\alpha ^{\,\,\frac{1}{2}} L^{-\frac{k^2 -k}{4}} \lambda ^{- \frac{\alpha - d +k}{2}} \prod _{i=1}^k \Vert f_i\Vert _{L^2}. \end{aligned}$$

Since the Fourier transform of \( \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i\) is supported in a ball of radius \(C \sqrt{2k} \lambda \) for some constant \(C> 0\), we observe that \( \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i = ( \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i) *\phi _\lambda , \) where \(\phi _\lambda (x) = \lambda ^d \phi (\lambda x)\) and \(\phi \) is a Schwartz function such that \(\widehat{\phi }=0 \) if \(|\xi | \ge 2 C \sqrt{2k}\), and \(\widehat{\phi }=1\) if \(|\xi |\le C \sqrt{2k}\). Note that \( |\phi _\lambda | *\mu (x) \le C \langle \mu \rangle _\alpha \lambda ^{d-\alpha }\). By Lemma 5.3, it follows that

$$\begin{aligned} \Big \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Big \Vert _{L^2(d\mu )}\le & {} \Big \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Big \Vert _{L^2(\mathbb R^d)} \Vert | \phi _\lambda | *\mu \Vert ^{\frac{1}{2}}_{\infty }\\\le & {} C \langle \mu \rangle _\alpha ^{\,\,\frac{1}{2}} L^{-\frac{k^2 - k}{4}} \lambda ^{-\frac{\alpha - d +k}{2}} \prod _{i=1}^k \Vert f_i \Vert _{L^2} \end{aligned}$$

as desired.
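As a check on the exponents (a routine verification, not part of the argument itself), the interpolation step is an application of multilinear Riesz–Thorin interpolation with \(\theta = 2/q\):

$$\begin{aligned} \frac{1}{q}=\frac{\theta }{2},\qquad \frac{1}{p}=(1-\theta )+\frac{\theta }{2}=1-\frac{1}{q}, \end{aligned}$$

so the constant of the \(L^2\) estimate is raised to the power \(\theta =2/q\), which yields \(\langle \mu \rangle _\alpha ^{\,\,\frac{1}{q}} L^{-\frac{k^2-k}{2q}}\lambda ^{-\frac{\alpha -d+k}{q}}\), exactly as stated. The remaining range \(1/p+1/q<1\) follows since each \(f_i\) is supported in an interval of length at most 1, so \(\Vert f_i\Vert _{\tilde{p}}\le \Vert f_i\Vert _p\) for \(\tilde p\le p\) by Hölder’s inequality.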

5.4 The Induction Quantity

For \(\lambda \ge 1\), \(1\le p, q\le \infty \), and \(\epsilon >0\), we define \(Q_\lambda =Q_\lambda (p, q,\epsilon )\) by setting

$$\begin{aligned} Q_\lambda = \sup \{\, \Vert \mathcal E^\gamma _\lambda f\Vert _{L^q(d\mu )}: \mu \in \mathfrak M(\alpha ,1),\, \gamma \in \Gamma (k,\epsilon ), \, {\Vert f\Vert _{L^p(I)}\le 1},\, a \in \mathfrak A \}, \end{aligned}$$
(34)

where \(\mathfrak {A}\) is the set of measurable functions a supported in B(0, 1) with \(\Vert a\Vert _\infty \le 1\). It is clear that \(Q_\lambda \) is finite for any \(\lambda >0\).

Lemma 5.5

Let \(\gamma \in \Gamma (k,\epsilon )\), \(\mu \in \mathfrak M(\alpha ,1)\), and let \(\lambda \ge 1\), \(0<|h|<1\). Suppose that f is supported in an interval \([\tau ,\tau +h]\subset [0,1]\). Then, if \(\epsilon >0\) is sufficiently small, there is a constant \(\delta >0\), independent of \(\gamma \), such that, whenever \(0<|h|\le \delta \),

$$\begin{aligned} \Vert \mathcal E^\gamma _\lambda f \Vert _{L^q(d\mu )}\le C \langle \mu \rangle _\alpha ^{\,\,\frac{1}{q}} \,|h|^{1-\frac{1}{p}-\frac{\beta _{k}(\alpha -d+k)}{q}} Q_{\lambda } \Vert f\Vert _p. \end{aligned}$$
(35)

Proof

Let us denote \(f_h(t) = h f(ht +\tau )\).

Recalling (29) we have

$$\begin{aligned} | \mathcal E_\lambda ^\gamma f(x) | {=} \Big |\int _I e^{ i \lambda x \cdot (\gamma (ht + \tau ) - \gamma (\tau ))} a(x) f_h(t) dt \Big | {=} \Big |\int _I e^{i \lambda (M_\tau ^{\gamma , k} D_h^k)^t x \cdot \gamma _\tau ^h(t)} a(x) f_h(t) dt \Big |. \end{aligned}$$

Let us set \(\mu _{\tau ,h}^k := \mu _{(M_\tau ^{\gamma , k})^t,h}^k\) which is given by (30).

Assuming that \(\langle \mu \rangle _\alpha \ne 0\), we set

$$\begin{aligned} d \widetilde{\mu }(x) = \frac{ |h|^{\beta _k( \alpha -d+k)}}{C \Vert (M_\tau ^{\gamma , k})^t \Vert ^\alpha \langle \mu \rangle _\alpha } d \mu _{\tau ,h}^k(x). \end{aligned}$$

Then \(\langle \widetilde{\mu }\rangle _\alpha \le 1\), i.e. \(\widetilde{\mu }\in \mathfrak M(\alpha ,1)\), by Lemma 5.2. A routine change of variables gives

$$\begin{aligned} \Vert \mathcal E_\lambda ^\gamma f \Vert _{L^q(d\mu )}^q&\le \int \Big | a_{\tau ,h}^k (x) \int _I e^{ i \lambda x \cdot \gamma _\tau ^h(t)} f_h(t) dt \Big |^q d \mu _{\tau ,h}^k (x) \\&= \frac{ C \Vert (M_\tau ^{\gamma , k})^t \Vert ^\alpha \langle \mu \rangle _\alpha }{ |h|^{ \beta _k(\alpha -d+k)} } \int \Big | a_{\tau ,h}^k (x) \int _I e^{ i \lambda x \cdot \gamma _\tau ^h(t)} f_h(t) dt \Big |^q d \widetilde{\mu }(x) , \end{aligned}$$

where \(a_{\tau ,h}^k (x) = a ((M_\tau ^{\gamma , k} D_h^k)^{-t}x)\). If \(\epsilon >0\) is sufficiently small, then \(\Vert (M_\tau ^{\gamma , k})^{-t} \Vert \le c \) uniformly for \(\gamma \in \Gamma (k, \epsilon )\). Then \(\gamma _\tau ^h(t) \in \Gamma (k, c |h|\epsilon ) \subset \Gamma (k,\epsilon )\) if \(0 < |h| \le \delta \) for small \(\delta =\delta (\epsilon )\). In addition, \(a_{\tau ,h}^k \in \mathfrak A\) since \({{\mathrm{supp}}}a_{\tau ,h}^k = D_h^k (M_\tau ^{\gamma , k})^t {{\mathrm{supp}}}a\). By the definition of \(Q_\lambda \), it follows that

$$\begin{aligned} \int | \mathcal E_\lambda ^\gamma f |^q d \mu (x)\lesssim & {} \langle \mu \rangle _\alpha |h|^{-\beta _k( \alpha -d +k)} \int |\mathcal E_\lambda ^{\gamma _\tau ^h} f_h |^q d \widetilde{\mu }(x)\\\le & {} C \langle \mu \rangle _\alpha |h|^{-\beta _k( \alpha -d +k)} (Q_\lambda \Vert f_h\Vert _p)^q, \end{aligned}$$

which implies (35) since \(\Vert f_h \Vert _p = |h|^{1-1/p} \Vert f\Vert _p\).
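For completeness, the scaling identity used in the last step follows from the change of variables \(u=ht+\tau \):

$$\begin{aligned} \Vert f_h\Vert _p^p=\int |h|^p |f(ht+\tau )|^p\, dt=|h|^{p-1}\int |f(u)|^p\, du, \quad \text {so}\quad \Vert f_h\Vert _p=|h|^{1-\frac{1}{p}}\Vert f\Vert _p. \end{aligned}$$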

5.5 Proof of Theorem 2.1

Let \(\ell \) be a fixed integer such that \(1 \le \ell \le d-1-[d-\alpha ]\) and let \(k = d-\ell \). We choose \(\epsilon >0\) small enough that \(\det (M_\tau ^{\gamma , k}) \ge \frac{1}{2}\) if \(\gamma \in \Gamma (d,\epsilon )\), and that Lemmas 5.3 and 5.4 hold whenever \(\gamma \in \Gamma (k,\epsilon )\). Suppose we are given a curve \(\gamma \in C^{d+1}([0,1])\) satisfying (2). By Lemma 5.1, there exists \(\delta >0 \) such that \(\gamma _\tau ^h \in \Gamma (k,\epsilon )\) for \(|h|<\delta \). Then Lemma 5.5 also holds for such \(\gamma _\tau ^h \in \Gamma (k,\epsilon )\). Thus, after decomposing the interval I into a finite union of intervals of length less than \(\delta \), by rescaling we may assume that \(\gamma \in \Gamma (k, \epsilon )\) and \(\mu \in \mathfrak M (\alpha , 1)\).

In fact, we decompose \(I = \bigcup _{j=0}^{n-1} [\frac{j}{n}, \frac{j+1}{n} ] =: \bigcup _{j=0}^{n-1} I_j\) with \(h:= 1/n<\delta \). Then we have

$$\begin{aligned} \Vert \mathcal E_\lambda ^\gamma f\Vert _{L^q(d\mu )} \le \sum _{j=0}^{n-1} \Vert \mathcal E_\lambda ^\gamma f\chi _{[jh, jh+h ]} \Vert _{L^q(d\mu )} = \sum _{j=0}^{n-1} (C_{\gamma ,j,h})^{\frac{1}{q}} \Vert \mathcal E_\lambda ^{\gamma _j} f_j \Vert _{L^q(d\mu _j)}, \end{aligned}$$

where \(f_j(t) = h f(ht +j h)\chi _I(t)\), \(\gamma _j = \gamma _{jh}^h\), and \( \mu _j = \frac{1}{C_{\gamma ,j,h}} \mu _{jh}^h\) with \(C_{\gamma ,j,h} = C \Vert (M_{jh}^{\gamma ,k} )^{-t} \Vert ^\alpha \langle \mu \rangle _\alpha h^{-\beta _{d-\ell }(\alpha -\ell )}\). Hence, it is enough to obtain the desired estimate for each \(\Vert \mathcal E_\lambda ^{\gamma _j} f_j \Vert _{L^q(d\mu _j)}\). Clearly, from Lemmas 5.1 and 5.2 it follows that \(\gamma _j \in \Gamma (k, \epsilon )\) and \(\mu _j \in \mathfrak M(\alpha ,1)\). Therefore we are reduced to showing (10) for \(\gamma \in \Gamma (k,\epsilon )\), \(\mu \in \mathfrak M(\alpha ,1)\).

Let \(q \ge p \ge 1\) be numbers satisfying the conditions in Theorem 2.1; the remaining case \(1\le q <p\) then follows by Hölder’s inequality. Also let \(Q_\lambda =Q_\lambda (p, q,\epsilon )\) be defined by (34). Then, for the proof of Theorem 2.1 we need to show

$$\begin{aligned} Q_\lambda \lesssim \lambda ^{-\frac{\alpha -\ell }{q}}. \end{aligned}$$
(36)

Let \(\gamma \in \Gamma (k,\epsilon )\), \(\mu \in \mathfrak M(\alpha ,1)\) be given, and f be a function supported in I with \(\Vert f\Vert _{L^p(I)}=1\) such that

$$\begin{aligned} Q_\lambda =Q_\lambda ( p,q,\epsilon ) \le 2 \Vert \mathcal E_\lambda ^\gamma f\Vert _{L^q(d\mu )}. \end{aligned}$$
(37)

Let \(A_1,\dots , A_{k-1}\) be dyadic numbers such that

$$\begin{aligned} 1=A_0\gg A_1\gg A_2 \gg \cdots \gg A_{k-1}.\end{aligned}$$

These numbers will be chosen later. For \(i=1, \dots , k-1\), let \( \{ \mathcal I^i\}\) denote the collection of closed dyadic intervals of length \(A_i\) which are contained in [0, 1], and set \(f_{\mathcal I^i}=\chi _{\mathcal I^i} f\), so that, for each \(i=1, \dots , k-1,\) \( f=\sum _{\mathcal I^i} f_{\mathcal I^i}\) almost everywhere whenever f is supported in I. Hence, it follows that

$$\begin{aligned} \mathcal E^\gamma _\lambda f =\sum _{\mathcal I^i} \mathcal E^\gamma _\lambda f_{\mathcal I^i}, \,\, i=1, \dots , k-1. \end{aligned}$$
(38)

We now recall the multilinear decomposition from [16] (Lemma 2.8).

Lemma 5.6

Let \(\gamma :I\rightarrow \mathbb R^d\) be a smooth curve. Let \(A_0,A_1,\dots , A_{k-1}\) and \(\{\mathcal I^i\}\), \(i=1, \dots , k-1\), be defined as above. Then, for any \(x\in \mathbb R^d\), there is a constant C, independent of \(\gamma \), x, and \(A_0,A_1,\dots , A_{k-1}\), such that

$$\begin{aligned} |\mathcal E^\gamma _\lambda&f(x)|\le C \sum _{i=1}^{k-1} A_{i-1}^{-2(i-1)} \max _{\mathcal I^{i}} |\mathcal E^\gamma _\lambda f_{\mathcal I^{i}}(x)|\nonumber \\&+ CA_{k-1}^{-2(k-1)}\max _{\begin{array}{c} \mathcal I^{k-1}_{1},\mathcal I^{k-1}_{2}, \dots ,\mathcal I^{k-1}_{k};\\ \Delta (\mathcal I^{k-1}_{1}, \mathcal I^{k-1}_{2}, \dots ,\mathcal I^{k-1}_{k})\ge A_{k-1} \end{array}} |\prod _{i=1}^k \mathcal E^\gamma _\lambda f_{\mathcal I^{k-1}_{i}} (x)|^\frac{1}{k}. \end{aligned}$$
(39)

Here \(\mathcal I^{j}_{i}\) denotes the element in \(\{\mathcal I^i\}\) and \( \Delta (\mathcal I^{k-1}_{1}, \dots ,\mathcal I^{k-1}_{k})=\min _{1\le j<m \le k} {{\mathrm{dist}}}(\mathcal I^{k-1}_{j}, \mathcal I^{k-1}_{m}). \)

We consider the linear and multilinear terms in (39) separately. For the linear term, using Lemma 5.5 we see that

$$\begin{aligned} \Big \Vert \max _{\mathcal I^{i}} |\mathcal E^\gamma _\lambda f_{\mathcal I^{i}}|\Big \Vert _{L^q(d\mu )}\le & {} \Big (\sum _{\mathcal I^{i}} \Big \Vert \mathcal E^\gamma _\lambda f_{\mathcal I^{i}}\Big \Vert _{L^q(d\mu )}^q\Big )^\frac{1}{q}\\\le & {} {A_i}^{1-\frac{1}{p}-\frac{\beta _{d-\ell }(\alpha -\ell )}{q}} Q_{ \lambda } \Big ( \sum _{\mathcal I^{i}} \Vert f_{\mathcal I^{i}}\Vert _p^q\Big )^\frac{1}{q} \\\le & {} {A_i}^{1-\frac{1}{p}-\frac{\beta _{d-\ell }(\alpha -\ell )}{q}} Q_{ \lambda } \Big ( \sum _{\mathcal I^{i}} \Vert f_{\mathcal I^{i}}\Vert _p^p\Big )^\frac{1}{p}\\= & {} {A_i}^{1-\frac{1}{p}-\frac{\beta _{d-\ell }(\alpha -\ell )}{q}} Q_{ \lambda } \Vert f\Vert _p, \end{aligned}$$

because \(\ell ^p \subset \ell ^q\) for \(q \ge p\). Applying Proposition 5.4 to the multilinear term, we obtain

$$\begin{aligned} \left\| \max _{\begin{array}{c} \mathcal I^{d-\ell -1}_{1},\mathcal I^{d-\ell -1}_{2}, \dots ,\mathcal I^{d-\ell -1}_{d-\ell };\\ \Delta (\mathcal I^{d-\ell -1}_{1}, \mathcal I^{d-\ell -1}_{2}, \dots ,\mathcal I^{d-\ell -1}_{d-\ell })\ge A_{d-\ell -1} \end{array}} |\prod _{i=1}^{d-\ell } \mathcal E^\gamma _\lambda f_{\mathcal I^{d-\ell -1}_{i}} (x)|^\frac{1}{d-\ell } \right\| _{L^q(d\mu )} \le C A^{-C}_{d-\ell -1}\lambda ^{-\frac{\alpha -\ell }{q}} \Vert f\Vert _p. \end{aligned}$$

By (39), (37) and these two estimates, we get

$$\begin{aligned} Q_\lambda \le C \sum _{i=1}^{d-\ell -1} A_{i-1}^{-C}{A_i}^{1-\frac{1}{p} -\frac{{\beta _{d-\ell }(\alpha -\ell )}}{q}} Q_{\lambda } + C A^{-C}_{d-\ell -1}\lambda ^{-\frac{\alpha -\ell }{q}}. \end{aligned}$$

Since \(1-\frac{1}{p}-\frac{\beta _{d-\ell }(\alpha -\ell )}{q}>0\), we can choose \(A_1, \dots , A_{d-\ell -1}\), successively, so that \( CA_{i-1}^{-C}\) \({A_i}^{1-\frac{1}{p}-\frac{\beta _{d-\ell }(\alpha -\ell )}{q}}<\frac{1}{2(d-\ell )}\) for \(i=1,\dots , d-\ell -1\). Therefore, we obtain \( Q_\lambda \le \frac{1}{2} Q_\lambda +C\lambda ^{-\frac{\alpha -\ell }{q}}\), which implies (36).
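To spell out the final absorption (a routine step): since \(Q_\lambda \) is finite, and the \(A_i\) were chosen depending only on p, q, d, \(\ell \), \(\alpha \) but not on \(\lambda \),

$$\begin{aligned} Q_\lambda \le \frac{1}{2} Q_\lambda + C A_{d-\ell -1}^{-C}\lambda ^{-\frac{\alpha -\ell }{q}} \quad \Longrightarrow \quad Q_\lambda \le 2 C A_{d-\ell -1}^{-C}\lambda ^{-\frac{\alpha -\ell }{q}}\lesssim \lambda ^{-\frac{\alpha -\ell }{q}}. \end{aligned}$$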

5.6 Proof of Theorem 2.3

To prove the k-linear estimate for \(p, q\) satisfying \(\frac{1}{k} (1 - \frac{1}{p} )>\frac{1}{q}\) we no longer make use of Plancherel’s theorem, but we may still use the linear oscillatory integral estimate, which is 1-dimensional in nature. The following is essentially an interpolation between the k-linear and linear estimates.

Proposition 5.7

Let \(\mathcal I_1,\dots , \mathcal I_k\), \(\gamma \), and \(\mu \) be given as in Proposition 5.4. If \(\epsilon >0\) is sufficiently small, then for \(p, q\) satisfying \(q \ge k\) and \( \frac{1}{k} (1 - \frac{1}{p} ) \le \frac{1}{q} \le 1- \frac{1}{p} \), there is a constant C, independent of \(\gamma \), such that

$$\begin{aligned} \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Vert _{L^{\frac{q}{k}} (d\mu )} \le C \langle \mu \rangle _\alpha ^{\,\,\frac{k}{q}} \lambda ^{-k(1-\frac{1}{p} - \frac{d-\alpha }{q})}\prod _{i=1}^k\Vert f_i \Vert _p \end{aligned}$$
(40)

whenever \(f_i\) is supported in \(\mathcal I_i\), \(i=1,2,\dots , k\).

Proof

For \(q\ge k\), arguing as in the proof of Proposition 5.4 (with the same \(\phi _\lambda \) and \( \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i = (\prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i) *\phi _\lambda \)), Hölder’s inequality gives

$$\begin{aligned} \Big \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Big \Vert _{L^{\frac{q}{k}} (d\mu )}\le & {} \Big \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Big \Vert _{L^{\frac{q}{k}} (\mathbb R^d)} \Vert | \phi _\lambda | *\mu \Vert ^{\frac{k}{q}}_{\infty }\\\le & {} C \langle \mu \rangle _\alpha ^{\,\,\frac{k}{q}} \lambda ^{\frac{k(d-\alpha ) }{q}} \Big \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Big \Vert _{L^{\frac{q}{k}}(\mathbb R^d)}. \end{aligned}$$

Thus it suffices to show that for \(\frac{1}{k} (1 - \frac{1}{p} ) \le \frac{1}{q} \le 1- \frac{1}{p} \),

$$\begin{aligned} \Big \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Big \Vert _{ L^{\frac{q}{k}}(\mathbb R^d)} \le \lambda ^{-k(1-\frac{1}{p}) } \prod _{i=1}^k \Vert f_i \Vert _{L^p(I)}. \end{aligned}$$
(41)

For \(\frac{k}{q}= 1-\frac{1}{p}\), \(1\le p\le 2\), the estimate (41) follows by interpolation between the \(L^2\rightarrow L^2\) estimate (32) and the trivial estimate \( \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Vert _{L^\infty } \le \prod _{i=1}^k \Vert f_i\Vert _{L^1}\), provided \(f_i\) is supported in \(\mathcal I_i\), \(i=1,2,\dots , k\). On the other hand, since \(|\partial _{x_1}\partial _t (x\cdot \gamma (t))|\sim 1\), Hörmander’s generalization of the Hausdorff–Young theorem yields \(\Vert \mathcal E_\lambda ^\gamma f \Vert _{L^q}\le C\lambda ^{-(1-\frac{1}{p})}\Vert f\Vert _p.\) By Hölder’s inequality we obtain (41) for \(\frac{1}{q}= 1-\frac{1}{p}\), \(1\le p\le 2\). Therefore, interpolating the estimates for \(\frac{k}{q}= 1-\frac{1}{p}\) and \(\frac{1}{q}= 1-\frac{1}{p}\), we obtain (41) for \(p, q\) satisfying \(q \ge k\) and \( \frac{1}{k} (1 - \frac{1}{p} ) \le \frac{1}{q} \le 1- \frac{1}{p} \).
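To make the last interpolation explicit (a routine verification): for fixed p, both endpoint estimates carry the same bound, so writing \(\frac{k}{q_\theta }=\frac{(1-\theta )k}{q_0}+\frac{\theta k}{q_1}\) with \(\frac{k}{q_0}=1-\frac{1}{p}\) and \(\frac{1}{q_1}=1-\frac{1}{p}\), interpolation gives

$$\begin{aligned} \Big \Vert \prod _{i=1}^k \mathcal E_\lambda ^\gamma f_i \Big \Vert _{L^{\frac{q_\theta }{k}}(\mathbb R^d)} \le \big (\lambda ^{-k(1-\frac{1}{p})}\big )^{1-\theta }\big (\lambda ^{-k(1-\frac{1}{p})}\big )^{\theta }\prod _{i=1}^k \Vert f_i\Vert _p = \lambda ^{-k(1-\frac{1}{p})}\prod _{i=1}^k \Vert f_i\Vert _p, \end{aligned}$$

and as \(\theta \) ranges over [0, 1], \(1/q_\theta \) covers the interval \([\frac{1}{k}(1-\frac{1}{p}), 1-\frac{1}{p}]\).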

Once Proposition 5.7 is obtained, one can prove Theorem 2.3 by following the same line of argument as in the proof of Theorem 2.1, so we shall be brief. We use (40) with \(k = [d-\alpha ]+1\) to estimate the multilinear terms in (39), and we apply Lemma 5.5 to the linear terms. Thus, we have

$$\begin{aligned} Q_\lambda \le C \langle \mu \rangle _\alpha ^{\,\,\frac{1}{q}}\sum _{i=1}^{[d-\alpha ]} A_{i-1}^{-C}{A_i}^{1 - \frac{1}{p} -\frac{\beta _{[d-\alpha ]+1}(1-\langle d-\alpha \rangle )}{q} } Q_{\lambda } + C \langle \mu \rangle _\alpha ^{\,\,\frac{1}{q}} A^{-C}_{[d-\alpha ]}\lambda ^{-(1 -\frac{1}{p} - \frac{d-\alpha }{q})} \end{aligned}$$

provided that \(q \ge [d-\alpha ]+1\) and \((1-1/p)/([d-\alpha ]+1) \le 1/q \le 1-1/p\). Therefore, we obtain \(Q_\lambda \lesssim \langle \mu \rangle _\alpha ^{\,\,\frac{1}{q}}\lambda ^{- (1 -\frac{1}{p} - \frac{d-\alpha }{q})}\) whenever \(1 - \frac{1}{p} -\frac{\beta _{[d-\alpha ]+1}(1-\langle d-\alpha \rangle )}{q} >0 \). This completes the proof.

6 Proof of Theorem 3.2

The proof of Theorem 3.2 is based on an adaptation of Erdoğan’s argument in [12] (see also [15]). The following is essentially a 2-dimensional result in that we only need to assume that \(\gamma '\) and \(\gamma ''\) are linearly independent. To begin with, by finite decomposition, translation and scaling we may assume, as before, that \(\gamma \) is close to \(\gamma _\circ ^d\) in the sense that \(\Vert \gamma -\gamma _\circ ^d\Vert _{C^N(I)}\lesssim \epsilon _0\) for sufficiently large N and small enough \(\epsilon _0\).

6.1 Geometric Observations

To estimate the integrals on the right hand side of (48), we begin with some geometric observations regarding the curves.

Lemma 6.1

Let \(\mathcal I=[\tau _{1},\tau _{2}] \subset [0,1]\) be an interval of length \(L \gtrsim \lambda ^{-\frac{1}{2}}\). Then \(\lambda \gamma (\mathcal I) + O(1)\) is contained in a parallelotope \(\lambda M^{\gamma ,d}_{\tau _1} {R_{L}}+\lambda \gamma (\tau _1)\), where \({R_{L}}\) is a rectangle of dimensions \(C L \times CL^{2} \times \cdots \times C L^{2}\) centered at the origin.

Proof

To see this it is sufficient to show that \( \gamma (\mathcal I) + O(\lambda ^{-1})\) is contained in \(M^{\gamma ,d}_{\tau _1}{R_{L}}+\gamma (\tau _1)\). For any \(t \in [\tau _{1},\tau _{2}]\), by Taylor’s expansion, we have

$$\begin{aligned} \gamma (t) - \gamma (\tau _{1}) = M^{\gamma ,d}_{\tau _1}D_{L}^d{\gamma _{\circ }^{d}}(\frac{t-\tau _{1}}{L}) + \mathbf e(t,\tau _{1},L) \end{aligned}$$

where \(|\mathbf e(t,\tau _{1},L)| \lesssim L^{d+1}\). So \(\gamma (\mathcal I)-\gamma (\tau _1)\) is contained in \(M^{\gamma ,d}_{\tau _1} R\), where R is a rectangle of dimensions \(\sim L\times L^2\times \dots \times L^d\) centered at the origin. Since \(\lambda ^{-1}\lesssim L^2\), it is clear that \( \gamma (\mathcal I) + O(\lambda ^{-1})\) is contained in \(M^{\gamma ,d}_{\tau _1}{R_{L}}+\gamma (\tau _1)\).

The following concerns the size of intersection of tubular neighborhoods of curves.

Lemma 6.2

Let \({\mathcal I}, {\mathcal J} \subset [0,1]\) be intervals satisfying \(|{\mathcal I}|, |\mathcal J| \sim 2^{-n} \) and \( {{\mathrm{dist}}}({\mathcal I}, {\mathcal J}) \sim 2^{-n}\) with \( 2^{n} \le \lambda ^{\frac{1}{2}}\). Then, for \(y \in \mathbb {R}^d\),

$$\begin{aligned} \big | (y + \lambda \gamma ({\mathcal I}) + B(0,C)) \cap (\lambda \gamma ({\mathcal J}) + B(0,C) )\big | \lesssim 2^n. \end{aligned}$$
(42)

Proof

As before, by a change of variables it is sufficient to show that

$$\begin{aligned} \big | (y + \gamma ({\mathcal I}) + B(0,C\lambda ^{-1})) \cap (\gamma ({\mathcal J}) + B(0,C\lambda ^{-1}) )\big | \lesssim 2^n\lambda ^{-d}. \end{aligned}$$

Let V be the subspace spanned by \(\gamma '(0)\) and \(\gamma ''(0)\), and let \(P_V\) be the orthogonal projection onto V. Since both sets are contained in \(O(\lambda ^{-1})\)-neighborhoods of arcs, it suffices to show that

$$\begin{aligned} \big | P_V(y + \gamma ({\mathcal I}) + B(0,C\lambda ^{-1})) \cap P_V(\gamma ({\mathcal J}) + B(0,C\lambda ^{-1}) )\big | \lesssim 2^n\lambda ^{-2}. \end{aligned}$$

The sets \(P_V(y + \gamma ({\mathcal I}) + B(0,C\lambda ^{-1})) \) and \(P_V(\gamma ({\mathcal J}) + B(0,C\lambda ^{-1}) )\) are contained in \(P_V(y) + P_V\gamma ({\mathcal I}) + O(\lambda ^{-1})\) and \( P_V\gamma ({\mathcal J}) + O(\lambda ^{-1})\), respectively. Since \(\gamma \) is close to \(\gamma _\circ ^d\), \(P_V\gamma \) is close to \(t\mapsto (t,t^2/2)\). So \(P_V(y) + P_V\gamma ({\mathcal I}) + O(\lambda ^{-1})\) and \( P_V\gamma ({\mathcal J}) + O(\lambda ^{-1})\) are contained in neighborhoods of curves of the form \((t,t^2/2)+O(\lambda ^{-1})\) for \(t\in \mathcal I\) and \(t\in \mathcal J\) respectively, and the angle between their tangent directions is \(\sim 2^{-n}\). Hence we get the desired bound.
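The last step can be quantified as follows (a standard transversality computation, recorded for convenience): the two sets lie in \(C\lambda ^{-1}\)-neighborhoods of arcs whose tangent directions differ by an angle \(\theta \sim 2^{-n}\), so their intersection is contained in a parallelogram with side lengths \(\lesssim \lambda ^{-1}/\theta \) and \(\lesssim \lambda ^{-1}\), whence

$$\begin{aligned} \big | P_V(y + \gamma ({\mathcal I}) + B(0,C\lambda ^{-1})) \cap P_V(\gamma ({\mathcal J}) + B(0,C\lambda ^{-1}) )\big | \lesssim \frac{\lambda ^{-1}}{2^{-n}}\cdot \lambda ^{-1} = 2^n \lambda ^{-2}. \end{aligned}$$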

Lemma 6.3

Let \(\lambda \gtrsim \delta ^{-2} \). Let \({\mathcal I}\) be an interval contained in \([\tau -\delta , \tau +\delta ]\) and let \(R^*\) be the rectangle \(R^*=\{x: |\langle x,\frac{\gamma '(\tau )}{|\gamma '(\tau )|} \rangle |\le 1/\delta , |\langle v_2,x \rangle | \le 1, \dots , |\langle v_d,x \rangle | \le 1 \}\), where \(v_2, \dots , v_d\) form an orthonormal basis for the orthogonal complement of \({{\mathrm{span}}}\{\gamma '(\tau )\}\). Then, for a sufficiently large \(C>0\),

$$\begin{aligned} \lambda \gamma (\mathcal I)+ R^*+O(1) \subset \lambda \gamma (\mathcal I)+ O(C). \end{aligned}$$

Proof

The above is equivalent to

$$\begin{aligned} \gamma (\mathcal I)+ \lambda ^{-1}R^*+O(\lambda ^{-1}) \subset \gamma (\mathcal I)+ O(C\lambda ^{-1}). \end{aligned}$$

Any element of the set on the left-hand side can be written as \(\gamma (t)+ s\gamma '(\tau )+ O(\lambda ^{-1}) \) for some s with \(|s|\lesssim (\lambda \delta )^{-1}\). Hence we only need to show

$$\begin{aligned} |\gamma (t)+ s\gamma '(\tau )-\gamma (t+s)|\le C\lambda ^{-1}\end{aligned}$$

whenever \(|s|\lesssim (\lambda \delta )^{-1}\). This is clear because \(\gamma (t)+ s\gamma '(\tau )-\gamma (t+s)=\int _{t+s}^t(\gamma '(u)-\gamma '(\tau ))\, du=O(\lambda ^{-1})\).
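Explicitly, since \(\lambda \gtrsim \delta ^{-2}\) gives \(|s|\lesssim (\lambda \delta )^{-1}\lesssim \delta \), and \(\Vert \gamma ''\Vert _\infty \lesssim 1\) on I,

$$\begin{aligned} \Big | \int _{t+s}^{t} (\gamma '(u)-\gamma '(\tau ))\, du \Big | \le |s| \sup _{|u-\tau |\lesssim \delta } |\gamma '(u)-\gamma '(\tau )| \lesssim (\lambda \delta )^{-1}\cdot \delta = \lambda ^{-1}. \end{aligned}$$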

Let \(\varphi \) be a fixed Schwartz function which is equal to 1 on a unit cube Q centered at the origin and vanishes outside 2Q. Moreover, \(\widehat{\varphi }\) satisfies

$$\begin{aligned} | \widehat{\varphi }(x) | \le C_M \sum ^{\infty }_{j=1} 2^{-Mj} \chi _{2^j Q} (x), \quad \text {for each} \,\, x \in \mathbb {R}^d, \, M \in \mathbb {Z}^+. \end{aligned}$$
(43)

For a rectangle \(R \subset \mathbb {R}^d\), we write \(\varphi _R = \varphi \circ a_R^{-1}\), where \(a_R\) is an affine mapping which takes Q onto R. The following lemma is a slight generalization of Lemma 3.1 in [13].

Lemma 6.4

Let \(\lambda _d \le \cdots \le \lambda _2 \le \lambda _1 \lesssim \lambda \) and let \(\mu \) be a positive Borel measure supported in B(0, 1) satisfying (4). Let R be a rectangle of dimensions \(\lambda _1 \times \lambda _2 \times \cdots \times \lambda _d\), \(R^*\) be the dual set of R centered at the origin, and A be a nonsingular matrix. Then

  1. (i)

    \(\Vert \mu *|\mathcal F(\varphi _R\circ A^{-1})| \Vert _{\infty } \lesssim \langle \mu \rangle _\alpha |\det A| \Vert A^{-t}\Vert ^{\alpha } \lambda _{1}^{d-\alpha }\),

  2. (ii)

    \(\int _{K A^{-t} R^*} \mu *|\mathcal F(\varphi _R\circ A^{-1})| ( x + y ) dy \lesssim \langle \mu \rangle _\alpha K^{\alpha } \Vert A^{-t}\Vert ^{\alpha } \lambda _{1}^{d-\alpha } \prod _{k=1}^{d}\lambda _k^{-1}\), for \(K \gtrsim 1\) and \(x \in \mathbb {R}^d\).

Proof

Fixing a large enough M, by (43) and a change of variables we have

$$\begin{aligned} | \widehat{\varphi _R}(x) |&\le C_M \prod _{k=1}^d \lambda _k \sum ^{\infty }_{j=1} 2^{-Mj} \chi _{2^j R^*} (x), \end{aligned}$$

which gives

$$\begin{aligned} \int |\widehat{\varphi _{R}}(A^{t}(x-y))| d\mu (y)&\lesssim \prod _{k=1}^d \lambda _k \sum ^{\infty }_{j=1} 2^{-Mj} \int \chi _{2^j R^*} (A^{t}(x-y)) d\mu (y). \end{aligned}$$
(44)

Since \(2^{j}R^*\) is covered by as many as \(\sim \lambda _{1}^{d}/\prod _{k=1}^d \lambda _k\) cubes with side-length \(2^{j}\lambda ^{-1}_{1}\), applying (4) and (5) to each of the cubes gives \( \int \chi _{2^j R^*} (A^{t}(x-y)) d\mu (y) \lesssim \langle \mu \rangle _\alpha (\Vert A^{-t}\Vert 2^{j} \lambda _{1}^{-1})^{\alpha } \frac{\lambda _{1}^{d}}{\prod _{k=1}^d \lambda _k}. \) Since \(\mu *|\mathcal F(\varphi _{R}\circ A^{-1})|(x) = |\det A| \int |\widehat{\varphi _{R}}(A^{t}(x-y))| d\mu (y)\), by combining the inequalities we get

$$\begin{aligned} \mu *|\mathcal F(\varphi _{R}\circ A^{-1})|(x)\lesssim & {} |\det A| \sum ^{\infty }_{j=1} 2^{-Mj} \langle \mu \rangle _\alpha (\Vert A^{-t}\Vert 2^{j}\lambda _{1}^{-1})^{\alpha } {\lambda _{1}^{d}}\\\lesssim & {} \langle \mu \rangle _\alpha |\det A| \Vert A^{-t}\Vert ^{\alpha } \lambda ^{d-\alpha }_{1}. \end{aligned}$$

This proves (i).

We now turn to (ii). Without loss of generality we may assume \( x = 0 \). By (44),

$$\begin{aligned}&\int _{K A^{-t} R^*} \mu *|\mathcal F(\varphi _{R}\circ A^{-1})|(y) dy \nonumber \\&\quad \lesssim |\det A| \prod ^d_{k=1}\lambda _{k} \sum ^{\infty }_{j=1} 2^{-Mj} \iint \chi _{K A^{-t} R^*}(y) \chi _{2^{j}A^{-t}R^*}(y-u) d\mu (u) dy. \end{aligned}$$
(45)

Since \(u-y \in 2^{j} A^{-t} R^*\) in the last integrand, \( \chi _{K A^{-t} R^*}(y) \lesssim \chi _{(K+2^{j}) A^{-t} R^*}(u)\). So we have

$$\begin{aligned}&\iint \chi _{K A^{-t} R^*}(y) \chi _{2^{j}A^{-t}R^*}(y-u) d\mu (u)dy\\&\quad \le |\det A^{-t}|\frac{2^{jd}}{\prod ^d_{k=1}\lambda _{k}} \int \chi _{(K+2^{j}) A^{-t} R^*}(u) d\mu (u) \\&\quad \lesssim |\det A^{-t}|\frac{2^{jd}}{\prod ^d_{k=1}\lambda _{k}} \langle \mu \rangle _\alpha (\Vert A^{-t}\Vert (K+2^{j})\lambda _{1}^{-1})^{\alpha } \frac{\lambda _{1}^{d}}{\prod ^d_{k=1}\lambda _{k}}. \end{aligned}$$

For the last inequality we cover \((K+2^{j})R^*\) with \(O(\frac{\lambda _{1}^{d}}{\prod ^d_{k=1}\lambda _{k}})\) cubes of side length \((K+2^{j})\lambda _{1}^{-1}\) and use (4) and (5). By combining this and (45), we get (ii). \(\square \)

6.2 Whitney Type Decomposition

By a Whitney type decomposition we may write

$$\begin{aligned}{}[0,1] \times [0,1] = \left[ \bigcup _{4 \le 2^n \le \lambda ^{\frac{1}{2}}} \Big [ \bigcup _{\begin{array}{c} |\mathcal I|=|\mathcal J|=2^{-n} \\ \mathcal I \sim \mathcal J \end{array}} (\mathcal I \times \mathcal J) \Big ] \right] \bigcup D \end{aligned}$$
(46)

where \(\mathcal I\), \(\mathcal J\) are dyadic intervals, D is a union of finitely overlapping boxes of side length \(\approx \lambda ^{-\frac{1}{2}}\), and D is contained in the \(C\lambda ^{-\frac{1}{2}}\)-neighborhood of the diagonal \(\{ (x,x): x \in [0,1]\}\). Here, we say \(\mathcal I \sim \mathcal J\) to mean that \(\mathcal I\), \(\mathcal J\) are not adjacent but have adjacent parent intervals.

For \(\mathcal I=[a, b]\) we set

$$\begin{aligned} \Omega _{ \lambda , \mathcal I } = {\left\{ \begin{array}{ll} \,\, \{ x \in \lambda \gamma (I) + O(1) : \gamma '(b)\cdot {(x-\lambda \gamma (b))}\le 0\le \gamma '(a) \cdot {(x-\lambda \gamma (a))} \} &{}\text { if }a\ne 0, b\ne 1, \\ \,\, \{ x \in \lambda \gamma (I) + O(1) : \gamma '(b)\cdot {(x-\lambda \gamma (b))}\le 0\} &{}\text { if }a=0,\\ \,\, \{ x \in \lambda \gamma (I) + O(1) : 0\le \gamma '(a) \cdot {(x-\lambda \gamma (a))} \} &{}\text { if }b= 1, \end{array}\right. } \end{aligned}$$

and set

$$\begin{aligned} g_{\mathcal I} = g \cdot \chi _{\Omega _{\lambda ,\mathcal I}}. \end{aligned}$$
(47)

For distinct dyadic intervals \(\mathcal I, \mathcal J \subset [0,1]\), the intersection of \(\Omega _{\lambda , \mathcal I}\) and \(\Omega _{\lambda , \mathcal J}\) has Lebesgue measure zero in \(\mathbb {R}^{d}\) because \(2^{-n}\ge \lambda ^{-1/2}\). This leads to

$$\begin{aligned} | \widehat{g}(x) |^2 \le \sum _{4 \le 2^n \le \lambda ^{\frac{1}{2}}} \sum _{\begin{array}{c} |\mathcal I|=|\mathcal J|=2^{-n} \\ \mathcal I \sim \mathcal J \end{array}} | \widehat{g_{\mathcal I}}(x) \widehat{g_{\mathcal J}}(x) | + 2\sum _{\mathcal I \in \mathfrak I_{E}} | \widehat{g_{\mathcal I}}(x) |^{2} \end{aligned}$$

where \(\mathfrak I_{E}\) is a finitely overlapping collection of dyadic intervals \(\mathcal I\) with \(|\mathcal I| \approx \lambda ^{-\frac{1}{2}}\). Using the above inequality, we have, for any \(q \ge 2\),

$$\begin{aligned} \Vert \widehat{g} \Vert ^2_{L^q(d\mu )} \le \sum _{4 \le 2^n \le \lambda ^{\frac{1}{2}}} \sum _{\begin{array}{c} |\mathcal I|=|\mathcal J|=2^{-n} \\ \mathcal I \sim \mathcal J \end{array}} \Vert \widehat{g_{\mathcal I}} \widehat{g_{\mathcal J}} \Vert _{L^{\frac{q}{2}}(d\mu )} + \sum _{\mathcal I \in \mathfrak I_{E}} \Vert \widehat{g_{\mathcal I}} \Vert ^2_{L^q(d\mu )}. \end{aligned}$$
(48)

6.2.1 Estimate for \(g_{\mathcal I}\), \(\mathcal I \in \mathfrak I_E\)

For \(\mathcal I = [\tau _1,\tau _2] \in \mathfrak I_E\), we have \(2^{-n} \approx \lambda ^{-1/2}\). By Lemma 6.1 the support of \(g_{\mathcal I}\), i.e. \(\Omega _{\lambda ,\mathcal I}\), is contained in a parallelotope \(M^{\gamma ,d}_{\tau _{1}}R\), where R is a rectangle of dimensions \(C\lambda ^{\frac{1}{2}} \times C \times \cdots \times C\). Hence \(\widehat{g_{\mathcal I}} = \widehat{g_{\mathcal I}} *\mathcal F(\varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1})\). Since \(\Vert \mathcal F(\varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1}) \Vert _{1}\lesssim 1\), by Hölder’s inequality we get

$$\begin{aligned} | \widehat{g_{\mathcal I}} | \lesssim ( | \widehat{g_{\mathcal I}} |^q *| \mathcal F(\varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1}) | )^{\frac{1}{q}}. \end{aligned}$$

So, we have

$$\begin{aligned} \Vert \widehat{g_{\mathcal I}} \Vert ^{q}_{L^{q}(d\mu )} \lesssim \int (|\widehat{g_{\mathcal I}}|^{q} *| \mathcal F(\varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1}) |)(x) d\mu (x) \lesssim \langle \mu \rangle _\alpha \lambda ^{\frac{d-\alpha }{2}} \Vert \widehat{g_{\mathcal I}} \Vert ^{q}_{q}. \end{aligned}$$
(49)

The last inequality follows from (i) in Lemma 6.4 and the fact that R has dimensions \(C\lambda ^{\frac{1}{2}} \times C \times \cdots \times C\). Since \(q \ge 2\), by the Hausdorff–Young and Hölder inequalities, we have

$$\begin{aligned} \Vert \widehat{g_{\mathcal I}}\Vert _{q} \le \Vert g_{\mathcal I}\Vert _{q'} \le \Vert g_{\mathcal I}\Vert _{2} |\Omega _{\lambda , \mathcal I}|^{(\frac{1}{2} -\frac{1}{q})} \lesssim \Vert g_{\mathcal I} \Vert _{2} (\lambda ^{\frac{1}{2}})^{\frac{1}{2}-\frac{1}{q}}. \end{aligned}$$

Thus, combining this with (49),

$$\begin{aligned} \sum _{\mathcal I \in \mathfrak I_E} \Vert \widehat{g_{\mathcal I}} \Vert ^2_{L^q(d\mu )} \lesssim \langle \mu \rangle _\alpha ^{\,\,\frac{2}{q}}\lambda ^{\frac{d-\alpha }{q}} \lambda ^{\frac{1}{2}-\frac{1}{q}} \sum _{\mathcal I \in \mathfrak I_E} \Vert g_{\mathcal I}\Vert ^2_2 \lesssim \langle \mu \rangle _\alpha ^{\,\,\frac{2}{q}}\lambda ^{\frac{1}{2} + \frac{(d-1)-\alpha }{q}} \Vert g\Vert ^2_2. \end{aligned}$$
(50)
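The exponent in (50) is obtained by combining the two previous displays (a bookkeeping step, recorded for convenience):

$$\begin{aligned} \langle \mu \rangle _\alpha ^{\,\,\frac{2}{q}} \lambda ^{\frac{d-\alpha }{q}} \cdot \lambda ^{\frac{1}{2}-\frac{1}{q}} = \langle \mu \rangle _\alpha ^{\,\,\frac{2}{q}} \lambda ^{\frac{1}{2}+\frac{(d-1)-\alpha }{q}}, \end{aligned}$$

while \(\sum _{\mathcal I \in \mathfrak I_E} \Vert g_{\mathcal I}\Vert _2^2 \lesssim \Vert g\Vert _2^2\) by the finite overlap of the supports \(\Omega _{\lambda ,\mathcal I}\).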

6.2.2 Bilinear Term Estimate

First we treat the case \(q=2\). Fix n with \(4 \le 2^n \le \lambda ^{1/2}\) and a pair \(\mathcal I = [\tau _{1},\tau _{2}], \mathcal J = [\tau _{3},\tau _{4}]\) of dyadic intervals with \(|\mathcal I| = |\mathcal J| = 2^{-n}\) and \(\mathcal I \sim \mathcal J\). Since \(\mathcal I \sim \mathcal J\), the support of \( g_{\mathcal I} *g_{\mathcal J} \) is contained in a parallelotope \(M_{\tau _{1}}^{\gamma ,d}R\), where R is a rectangle with dimensions \( 2C\lambda 2^{-n} \times 2C\lambda 2^{-2n} \times \cdots \times 2C\lambda 2^{-2n} \). Using \( g_{\mathcal I} *g_{\mathcal J} = ( g_{\mathcal I} *g_{\mathcal J} ) (\varphi _{R}\circ (M_{\tau _{1}}^{\gamma ,d})^{-1}) \), we obtain

$$\begin{aligned} \int | \widehat{g_{\mathcal I}}(x) \widehat{g_{\mathcal J}}(x) | d\mu (x) \lesssim \int | \widehat{g_{\mathcal I}}(x) \widehat{g_{\mathcal J}}(x) | ( \mu *| \mathcal F( \varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1} ) | )(x) dx. \end{aligned}$$
(51)

Consider a tiling of \(\mathbb {R}^d\) with rectangles T of dimensions \(C2^{-n} \times C \times \cdots \times C\). Note that each T is contained in a rectangle \(x_{T} + C\lambda 2^{-2n}R^*\) for some \(x_{T} \in \mathbb {R}^d\). Also let \(\phi \) be a fixed non-negative Schwartz function satisfying \(\phi > 1/2\) on Q, \( {{\mathrm{supp}}}\widehat{\phi } \subseteq Q\), and the inequality (43). Using the properties of \(\phi \), we obtain \( 1 \lesssim \sum _T \phi ^3_T \lesssim \sum _T \phi ^2_T \lesssim 1\), where \(\phi _T := \phi \circ a^{-1}_T\).

Set \(\widehat{g_{\mathcal I, T}} := \widehat{g_{\mathcal I}} \cdot (\phi _{T}\circ (M^{\gamma ,d}_{\tau _{1}})^{t}).\) By \(1 \lesssim \sum _T \phi ^3_T\) and Cauchy–Schwarz inequality, we get

$$\begin{aligned}&\int | \widehat{g_{\mathcal I}}(x) \widehat{g_{\mathcal J}}(x) | (\mu *| \mathcal F( \varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1}) |)(x) dx \nonumber \\&\quad \lesssim \sum _{T} \int | \widehat{g_{\mathcal I, T}}(x) \widehat{g_{\mathcal J, T}}(x) | (\mu *| \mathcal F( \varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1}) |)(x) (\phi _{T}\circ (M^{\gamma ,d}_{\tau _{1}})^{t}) (x) dx \nonumber \\&\quad \lesssim \sum _{T} \Vert \widehat{g_{\mathcal I, T}} \widehat{g_{\mathcal J, T}} \Vert _2 \,\Vert \mu *| \mathcal F( \varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1}) | (\phi _{T}\circ (M^{\gamma ,d}_{\tau _{1}})^{t}) \Vert _2. \end{aligned}$$
(52)

By a standard argument

$$\begin{aligned}&\int | \widehat{g_{\mathcal I, T}}(x) \widehat{g_{\mathcal J, T}}(x)|^2 dx = \int | \widetilde{g_{\mathcal I, T}} *\overline{g_{\mathcal J, T}}(y) |^2 dy \\& \le \sup _{y} | (y + {{\mathrm{supp}}}(g_{\mathcal I, T})) \cap {{\mathrm{supp}}}(g_{\mathcal J, T}) | \Vert g_{\mathcal I, T} \Vert ^2_2 \Vert g_{\mathcal J, T} \Vert ^2_2. \end{aligned}$$

By Lemma 6.3, \(y + {{\mathrm{supp}}}(g_{\mathcal I, T})\), \({{\mathrm{supp}}}(g_{\mathcal J, T})\) are contained in \(y + \lambda \gamma ({\mathcal I}) + B(0,C)\), \(\lambda \gamma ({\mathcal J}) + B(0,C)\), respectively. Thus, Lemma 6.2 implies \( \sup _{y} | (y + {{\mathrm{supp}}}(g_{\mathcal I, T})) \cap {{\mathrm{supp}}}(g_{\mathcal J, T}) | \lesssim 2^n\). So, we get

$$\begin{aligned} \int | \widehat{g_{\mathcal I, T}}(x) \widehat{g_{\mathcal J, T}}(x)|^2 dx \lesssim 2^n \Vert g_{\mathcal I, T} \Vert ^2_2 \Vert g_{\mathcal J, T} \Vert ^2_2. \end{aligned}$$
(53)

Now we show

$$\begin{aligned} \Vert \mu *| \mathcal F( \varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1}) | (\phi _{T}\circ (M^{\gamma ,d}_{\tau _{1}})^{t}) \Vert _2^2 \lesssim \langle \mu \rangle _\alpha ^{\,\,2} \lambda ^{d-\alpha } 2^{-n}. \end{aligned}$$
(54)

First we note that by (i) in Lemma 6.4,

$$\begin{aligned} \Vert \mu *| \mathcal F( \varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1}) | \Vert _{\infty } \lesssim \langle \mu \rangle _\alpha \lambda ^{ d - \alpha } (2^{-n})^{ d - \alpha }. \end{aligned}$$
(55)

Using (43) for \(\phi _T\) and (ii) in Lemma 6.4, and recalling that T is contained in \(x_T + C \lambda 2^{-2n} R^*\) for some \(x_T \in \mathbb {R}^d\), we have

$$\begin{aligned}&\int (\mu *| \mathcal F( \varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1}) |)(x) (\phi _{T}\circ (M^{\gamma ,d}_{\tau _{1}})^{t}) (x) dx \\&\quad \lesssim \sum ^{\infty }_{j=1} 2^{-Mj} \int _{2^{j}\lambda 2^{-2n} (M^{\gamma ,d}_{\tau _{1}})^{-t} R^*} \mu *| \mathcal F( \varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1}) |(x - 2^j (M^{\gamma ,d}_{\tau _{1}})^{-t}x_{T}) dx \\&\quad \lesssim \langle \mu \rangle _\alpha (2^{n})^{d-1-\alpha }. \end{aligned}$$

Since \(\phi _T (x) \lesssim 1\), combining this with (55) gives (54).
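
Indeed, since \(\Vert f \Vert _2^2 \le \Vert f \Vert _{\infty } \Vert f \Vert _1\), multiplying the \(L^{\infty }\) bound (55) by the \(L^1\) bound above gives

$$\begin{aligned} \Vert \mu *| \mathcal F( \varphi _R \circ (M^{\gamma ,d}_{\tau _{1}})^{-1}) | (\phi _{T}\circ (M^{\gamma ,d}_{\tau _{1}})^{t}) \Vert _2^2 \lesssim \langle \mu \rangle _\alpha \lambda ^{d-\alpha } (2^{-n})^{d-\alpha } \cdot \langle \mu \rangle _\alpha (2^{n})^{d-1-\alpha } = \langle \mu \rangle _\alpha ^{\,\,2} \lambda ^{d-\alpha } 2^{-n}. \end{aligned}$$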

By the inequalities (51), (52), (53), and (54), together with the fact that \(\sum _{T} \phi ^{2}_{T} \lesssim 1\), we obtain

$$\begin{aligned} \Vert \widehat{g_{\mathcal I}} \widehat{g_{\mathcal J}} \Vert _{L^1(d\mu )} \lesssim \langle \mu \rangle _\alpha \lambda ^{\frac{d-\alpha }{2}} \sum _T \Vert g_{\mathcal I, T} \Vert _2 \Vert g_{\mathcal J, T} \Vert _2 \lesssim \langle \mu \rangle _\alpha \lambda ^{\frac{d-\alpha }{2}} \Vert g_{\mathcal I} \Vert _2 \Vert g_{\mathcal J} \Vert _2. \end{aligned}$$
(56)

For the last inequality, we used the Cauchy–Schwarz inequality and Plancherel’s theorem.
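
Explicitly, since the functions \(g_{\mathcal I, T}\) arise from the cutoffs \(\phi _{T}\) with \(\sum _{T} \phi ^{2}_{T} \lesssim 1\), the Cauchy–Schwarz inequality in \(T\) and Plancherel's theorem give

$$\begin{aligned} \sum _T \Vert g_{\mathcal I, T} \Vert _2 \Vert g_{\mathcal J, T} \Vert _2 \le \Big ( \sum _T \Vert g_{\mathcal I, T} \Vert ^2_2 \Big )^{\frac{1}{2}} \Big ( \sum _T \Vert g_{\mathcal J, T} \Vert ^2_2 \Big )^{\frac{1}{2}} \lesssim \Vert g_{\mathcal I} \Vert _2 \Vert g_{\mathcal J} \Vert _2. \end{aligned}$$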

By (56), we have

$$\begin{aligned}&\sum _{ 4 \le 2^n \le \lambda ^{1/2} } \sum _{ \begin{array}{c} |\mathcal I| = |\mathcal J| = 2^{-n} \\ \mathcal I \sim \mathcal J \end{array}} \Vert \widehat{g_{\mathcal I}} \widehat{g_{\mathcal J}} \Vert _{L^1(d\mu )} \lesssim \langle \mu \rangle _\alpha \lambda ^{\frac{d-\alpha }{2}} \sum _{ 4 \le 2^n \le \lambda ^{1/2} } \sum _{|\mathcal I|=2^{-n}} \sum _{\mathcal I \sim \mathcal J} \Vert g_{\mathcal I} \Vert _2 \Vert g_{\mathcal J} \Vert _2 \\&\quad \lesssim \langle \mu \rangle _\alpha \lambda ^{\frac{d-\alpha }{2}} \sum _{ 4 \le 2^n \le \lambda ^{1/2} } \Big ( \sum _{|\mathcal I|=2^{-n}} \Vert g_{\mathcal I}\Vert ^2_2 \Big )^{\frac{1}{2}} \Big ( \sum _{|\mathcal I|=2^{-n}} \Vert g_{\mathcal I}\Vert ^2_2 \Big )^{\frac{1}{2}} \lesssim \langle \mu \rangle _\alpha \lambda ^{\frac{d-\alpha }{2} }\log \lambda \Vert g \Vert _2^2. \end{aligned}$$

For the second inequality we use the fact that, for each dyadic interval \(\mathcal I\), there are only \(O(1)\) intervals \(\mathcal J\) with \(\mathcal I \sim \mathcal J\); the last inequality holds since \(\sum _{|\mathcal I| = 2^{-n}} \Vert g_{\mathcal I} \Vert ^2_2 \lesssim \Vert g \Vert ^2_2\) and the sum over n contains \(O(\log \lambda )\) terms. Thus we get the required bound in the case \(q=2\).

Now we assume \(q \ge 4\). Let \(\mathcal I\) and \(\mathcal J\) with \(\mathcal I\sim \mathcal J\), and let R be as before. Using \( g_{\mathcal I} *g_{\mathcal J} = ( g_{\mathcal I} *g_{\mathcal J} ) (\varphi _{R}\circ (M_{\tau _{1}}^{\gamma ,d})^{-1}) \), Hölder’s inequality, and (55), we have

$$\begin{aligned} \Vert \widehat{g_{\mathcal I}}\widehat{g_{\mathcal J}}\Vert ^{\frac{q}{2}}_{L^{\frac{q}{2}}(d\mu )}&\lesssim \langle \mu \rangle _\alpha \lambda ^{ d - \alpha } (2^{-n})^{ d - \alpha } \Vert \widehat{g_{\mathcal I}}\widehat{g_{\mathcal J}}\Vert ^{\frac{q}{2}-2}_{\infty } \int |\widehat{g_{\mathcal I}}(x)\widehat{g_{\mathcal J}}(x)|^2 dx. \end{aligned}$$

Repeating the argument for (53) and using Lemma 6.2, we have \(\int |\widehat{g_{\mathcal I}}(x)\widehat{g_{\mathcal J}}(x)|^2 dx \lesssim 2^n \Vert g_{\mathcal I}\Vert ^2_2 \Vert g_{\mathcal J}\Vert ^2_2. \) Also, by Young’s inequality and the Cauchy–Schwarz inequality, \(\Vert \widehat{g_{\mathcal I}} \widehat{g_{\mathcal J}}\Vert _{\infty } \lesssim \lambda 2^{-n} \Vert g_{\mathcal I}\Vert _2 \Vert g_{\mathcal J}\Vert _2\). Hence, we get

$$\begin{aligned} \Vert \widehat{g_{\mathcal I}}\widehat{g_{\mathcal J}}\Vert ^{\frac{q}{2}}_{L^{\frac{q}{2}}(d\mu )} \lesssim \langle \mu \rangle _\alpha \lambda ^{d-\alpha +\frac{q}{2}-2} (2^{n})^{-d+\alpha +3-\frac{q}{2}}\Vert g_{\mathcal I}\Vert ^{\frac{q}{2}}_2 \Vert g_{\mathcal J}\Vert ^{\frac{q}{2}}_2. \end{aligned}$$
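
Here the bound \(\Vert \widehat{g_{\mathcal I}} \widehat{g_{\mathcal J}}\Vert _{\infty } \lesssim \lambda 2^{-n} \Vert g_{\mathcal I}\Vert _2 \Vert g_{\mathcal J}\Vert _2\) used above can be verified as follows: since \({{\mathrm{supp}}}(g_{\mathcal I})\) is contained in \(\lambda \gamma (\mathcal I) + B(0,C)\), a set of measure \(\lesssim \lambda 2^{-n}\), the Cauchy–Schwarz inequality gives \(\Vert g_{\mathcal I}\Vert _1 \lesssim (\lambda 2^{-n})^{\frac{1}{2}} \Vert g_{\mathcal I}\Vert _2\), and likewise for \(g_{\mathcal J}\). Hence

$$\begin{aligned} \Vert \widehat{g_{\mathcal I}} \widehat{g_{\mathcal J}}\Vert _{\infty } = \Vert \mathcal F( g_{\mathcal I} *g_{\mathcal J} ) \Vert _{\infty } \le \Vert g_{\mathcal I} *g_{\mathcal J} \Vert _1 \le \Vert g_{\mathcal I}\Vert _1 \Vert g_{\mathcal J}\Vert _1 \lesssim \lambda 2^{-n} \Vert g_{\mathcal I}\Vert _2 \Vert g_{\mathcal J}\Vert _2. \end{aligned}$$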

If \(-d + \alpha + 3 - \frac{q}{2} \ge 0 \), then since \(2^n \le \lambda ^{\frac{1}{2}}\), we have \(\lambda ^{d-\alpha +\frac{q}{2}-2} (2^{n})^{-d+\alpha +3-\frac{q}{2}} \le \lambda ^{\frac{q}{4}+\frac{(d-1)-\alpha }{2}}\). Otherwise, since \(2^n \ge 4\), \(\lambda ^{d-\alpha +\frac{q}{2}-2} (2^{n})^{-d+\alpha +3-\frac{q}{2}} < \lambda ^{d-\alpha +\frac{q}{2}-2}\). Hence

$$\begin{aligned} \Vert \widehat{g_{\mathcal I}}\widehat{g_{\mathcal J}}\Vert _{L^{\frac{q}{2}}(d\mu )} \lesssim \langle \mu \rangle _\alpha ^{\,\,\frac{2}{q}} \lambda ^{\max (\frac{1}{2}+\frac{(d-1)-\alpha }{q}, 1-\frac{2\alpha }{q}+\frac{2(d-2)}{q})} \Vert g_{\mathcal I}\Vert _2 \Vert g_{\mathcal J}\Vert _2. \end{aligned}$$
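
For completeness, we check the exponents. In the first case, since \(2^{n} \le \lambda ^{\frac{1}{2}}\) and \(-d+\alpha +3-\frac{q}{2} \ge 0\),

$$\begin{aligned} \lambda ^{d-\alpha +\frac{q}{2}-2} (2^{n})^{-d+\alpha +3-\frac{q}{2}} \le \lambda ^{d-\alpha +\frac{q}{2}-2+\frac{1}{2}(-d+\alpha +3-\frac{q}{2})} = \lambda ^{\frac{q}{4}+\frac{(d-1)-\alpha }{2}}, \end{aligned}$$

and taking \(\frac{2}{q}\)-th powers of the two bounds yields the exponents \(\frac{1}{2}+\frac{(d-1)-\alpha }{q}\) and \(\frac{2}{q}\big (d-\alpha +\frac{q}{2}-2\big ) = 1-\frac{2\alpha }{q}+\frac{2(d-2)}{q}\), respectively.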

Thus, by the same argument as before, summing over n, \(\mathcal I\), and \(\mathcal J\) gives

$$\begin{aligned} \sum _{ 4 \le 2^n \le \lambda ^{1/2} } \sum _{ \begin{array}{c} |\mathcal I| = |\mathcal J| = 2^{-n} \\ \mathcal I \sim \mathcal J \end{array}} \Vert \widehat{g_{\mathcal I}}\widehat{g_{\mathcal J}}\Vert _{L^{\frac{q}{2}}(d\mu )} \lesssim \langle \mu \rangle _\alpha ^{\,\,\frac{2}{q}} \lambda ^{\max (\frac{1}{2}+\frac{(d-1)-\alpha }{q}, 1-\frac{2\alpha }{q}+\frac{2(d-2)}{q}) + \epsilon } \Vert g \Vert _2^2. \end{aligned}$$

Since the intermediate cases follow by interpolation, this completes the proof. \(\square \)