Abstract
We introduce and prove small cap square function estimates for the unit parabola and the truncated light cone. More precisely, we study inequalities of the form
where \(\Gamma _\alpha (R^{-1})\) is the set of small caps of width \(R^{-\alpha }\). We find sharp upper and lower bounds of the constant \(C_{\alpha ,p}(R)\).
1 Introduction
In this paper, we study square function estimates. We begin with the most general setting. Let \(\Omega \subset \mathbb {R}^n\) be a set in the frequency space, and suppose we are given a partition of \(\Omega \) into subsets \(\Sigma =\{\sigma \}\):
We will only consider the case when \(\sigma \) are morally rectangles. For any function f, we define \(f_\sigma =(\psi _\sigma \widehat{f})^\vee \), where \(\psi _\sigma \) is a smooth bump function adapted to \(\sigma \). We will also assume \(supp \widehat{f}\subset \Omega \) in the following discussions. The inequality we are interested in is of the following form:
Square Function Estimate:
The goal is to find the best constant \(C_{p,\Sigma }\) that works for all test functions f.
This type of estimate is of great interest in harmonic analysis. We briefly review some well-known results.
When \(\Omega \) is the \(R^{-1}\)-neighborhood of the unit parabola \(\mathcal {P}=\{(\xi ,\xi ^2)\in \mathbb {R}^2: |\xi |\le 1\}\) and \(\Sigma =\{\sigma \}\) is the set of \(\sim R^{-1/2}\times R^{-1}\)-caps that form a partition of \(\Omega \), then an argument of Córdoba–Fefferman (see also [1, Proposition 3.3]) gives
(Throughout this article, we suppress the \(\sim \) symbol for simplicity when the precise scale is unimportant.)
When \(\Omega \) is the \(R^{-1}\)-neighborhood of the unit cone \(\mathcal {C}=\{(\xi _1,\xi _2,\xi _3)\in \mathbb {R}^3: \xi _3=\sqrt{\xi _1^2+\xi _2^2}, 1/2\le \xi _3\le 1\}\) and \(\Sigma =\{\sigma \}\) are \(1\times R^{-1/2}\times R^{-1}\)-caps that form a partition of \(\Omega \), then the sharp \(L^4\) square function estimate was proved by Guth–Wang–Zhang [6]:
Here, \(A\lessapprox B\) means \(A\lesssim _\epsilon R^\epsilon B\) for any \(\epsilon >0\).
When \(\Omega \) is a certain neighborhood of the moment curve, square function estimates were studied by Gressman, Guo, Pierce, Roos and Yung [3]. The sharp \(L^7\) estimate was obtained by Maldague [7]. There are some other related results (see [4, 8]).
In the discussion above, we see that the size of caps in the partition of the parabola is \(R^{-1/2}\times R^{-1}\), and the size of caps in the partition of the cone is \(1\times R^{-1/2}\times R^{-1}\). We usually call these the canonical partitions. Besides the canonical partitions of the parabola and the cone, Demeter, Guth and Wang [2] introduced “small cap decoupling”, which is the decoupling inequality for a partition finer than the canonical one. One can similarly ask about small cap square function estimates.
The goal of this paper is to prove the sharp square function estimates for the small caps of parabola and cone. We will first define the small caps. Then we will introduce and study examples which give sharp lower bounds of the constants. Finally, we will prove the sharp bounds of the constants.
1.1 Small Caps
1.1.1 Small Caps for Parabola
Let \(\mathcal {P}:=\{(\xi ,\xi ^2):\xi \in \mathbb {R},|\xi |\le 1\}\) be the unit parabola, and \(N_{R^{-1}}(\mathcal {P})\) be its \(R^{-1}\)-neighborhood. For \(1/2\le \alpha \le 1\), let \(\Gamma _\alpha (R^{-1})\) be the partition of \(N_{R^{-1}}(\mathcal {P})\) into rectangular boxes of dimensions \(R^{-\alpha }\times R^{-1}\). More precisely, each \(\gamma \in \Gamma _\alpha (R^{-1})\) is of the form
where \(I\subset [-1,1]\) is an interval of length \(R^{-\alpha }\). Note that we have \(\#\Gamma _\alpha (R^{-1})\sim R^\alpha \). Our square function estimate is
Theorem 1
For \(supp \widehat{f}\subset N_{R^{-1}}(\mathcal {P})\), we have
where
Remark
We remark that \(p\ge 4\alpha +2\) is equivalent to \(\alpha (\frac{1}{2}-\frac{2}{p})\ge (\alpha -\frac{1}{2})(\frac{1}{2}-\frac{1}{p})\). Therefore, (2) is equivalent to (up to constant) \(C_{\alpha ,p}(R)\sim R^{\alpha (\frac{1}{2}-\frac{2}{p})}+ R^{(\alpha -\frac{1}{2})(\frac{1}{2}-\frac{1}{p})}.\)
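This equivalence is elementary arithmetic; as a sanity check (purely illustrative, not part of the proof), the following snippet verifies exactly, for sample values of \(\alpha \) and p, that the difference of the two exponents equals \((p-4\alpha -2)/(4p)\), so the first term dominates precisely when \(p\ge 4\alpha +2\):

```python
from fractions import Fraction

def exp_concentrated(alpha, p):
    # exponent of R in the lower bound from the concentrated example
    return alpha * (Fraction(1, 2) - 2 / p)

def exp_flat(alpha, p):
    # exponent of R in the lower bound from the flat example
    return (alpha - Fraction(1, 2)) * (Fraction(1, 2) - 1 / p)

for alpha in [Fraction(1, 2), Fraction(2, 3), Fraction(3, 4), Fraction(1)]:
    for p in [Fraction(2), Fraction(3), Fraction(4), Fraction(5), Fraction(6), Fraction(8)]:
        diff = exp_concentrated(alpha, p) - exp_flat(alpha, p)
        assert diff == (p - 4 * alpha - 2) / (4 * p)   # exact identity
        assert (diff >= 0) == (p >= 4 * alpha + 2)
```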
1.1.2 Small Caps for Cone
Denote the truncated cone in \(\mathbb {R}^3\) by
For \(1/2\le \beta \le 1\), let \(\Gamma _{\beta }(R^{-1})\) be the partition of \(N_{R^{-1}}(\mathcal {C})\) into caps of dimensions \(1\times R^{-\beta }\times R^{-1}\). More precisely, we first choose a partition of \(\mathbb {S}^1\) into \(R^{-\beta }\)-arcs: \(\mathbb {S}^1=\sqcup \sigma \). For each arc \(\sigma \), consider the \(R^{-1}\)-neighborhood of
which is a cap of dimensions \(1\times R^{-\beta }\times R^{-1}\). \(\Gamma _{\beta }(R^{-1})\) is the set of caps constructed in this way (see Fig. 1). Note that \(\#\Gamma _{\beta }(R^{-1})\sim R^{\beta }\). Our square function estimate is
Theorem 2
For \(supp \widehat{f}\subset N_{R^{-1}}(\mathcal {C})\), we have
where
Remark
We remark that there is no interpolation argument in the proof of the square function estimate. This is because we cannot rewrite our square function estimate in the form of
where X, Y are some normed vector spaces and T is a linear operator. Another way to see that interpolation is prohibited is to look at the numerology in (4). We draw the graph of \((\frac{1}{p},\log _R C_{\beta ,p}(R))\), where we ignore the \(C_\epsilon R^\epsilon \) factor in \(C_{\beta ,p}(R)\) (see Fig. 2). We see that the critical exponent \(p=8\) corresponds to a concave point \((\frac{1}{8},\frac{\beta }{2})\) in the graph. But if an interpolation argument worked, the graph would be convex, which is a contradiction. The unavailability of interpolation is the main difficulty in the proof: we need to prove the estimate for all p, not only the critical p. Let us consider the case \(\beta =1/2\). One critical exponent, \(p=4\), was proved by Guth–Wang–Zhang [6]. The result for the other critical exponent \(p=8\), and hence for \(p\in (4,8)\), is not included in [6]. We also remark that
1.2 Elementary Tools
We briefly introduce the notion of dual rectangle and local orthogonality.
Definition 1
Let R be a rectangle of dimensions \(a\times b\times c\). Then the dual rectangle of R, denoted by \(R^*\), is the rectangle centered at the origin of dimensions \(a^{-1}\times b^{-1}\times c^{-1}\). In other words, \(R^*\) is obtained from R by replacing the length of each edge with its reciprocal.
From our definition, we see that if \(R_2\) is a translated copy of \(R_1\), then \(R_1^*=R_2^*\). The motivation for defining the dual rectangle is the following result.
Lemma 1
For any rectangle R, there exists a smooth function \(\omega _R\) which satisfies \(\frac{1}{10}\cdot \textbf{1}_R(x)\le \omega _R(x)\le 10\cdot \textbf{1}_R(x)\) for \(x\in R\), and \(\omega _R\) decays rapidly outside R. Also, \(supp \widehat{\omega }_R \subset R^*\).
This lemma is very standard, so we omit the proof. The next result is the local orthogonality property.
Lemma 2
Let R be a rectangle and let \(\{f_i\}\) be a set of functions. If the sets \(\{ supp \widehat{f}_i+R^* \}\) are finitely overlapping, then
Proof
Note that \(\widehat{f_i\omega _R}=\widehat{f_i}*\widehat{\omega _R}\) is supported in \(supp \widehat{f}_i+R^*\). By the finitely overlapping property, we see the above is bounded by
\(\square \)
Remark
Note that \(\omega _R\) is essentially \(\textbf{1}_R\) by ignoring the rapidly decaying tail. It turns out that the tail is always harmless. Therefore, to get rid of some irrelevant technicalities, we will just ignore the rapidly decaying tail, and write (5) as
We also need the notion of comparability. Given two rectangles \(R_1, R_2\), we say \(R_1\) is essentially contained in \(R_2\) if there exists a universal constant C (say \(C=100\)) such that
We say \(R_1\) and \(R_2\) are comparable if \(R_1\) is essentially contained in \(R_2\) and vice versa, i.e.,
Throughout this paper, we will just ignore the unimportant constant C, and just write \(R_1\subset R_2\) to denote that \(R_1\) is essentially contained in \(R_2\).
2 Small Cap Square Function Estimate for Parabola
We prove Theorem 1 in this section. We begin with the sharp examples.
2.1 Sharp Examples
There are two types of examples: the concentrated example and the flat example.
\(\boxed {\hbox {Case 1: }p\ge 4\alpha +2}\)
We introduce the concentrated example. Choose f such that \(\widehat{f}(\xi )=\psi _{N_{R^{-1}}(\mathcal {P})}(\xi )\), where \(\psi _{N_{R^{-1}}(\mathcal {P})}\) is a smooth bump function supported in \(N_{R^{-1}}(\mathcal {P})\). We see that \(f(0)=\int \widehat{f}(\xi )d\xi \sim R^{-1}\). Since \(\widehat{f}\) is supported in the unit ball centered at the origin, f is locally constant in B(0, 1). Therefore,
We consider the right hand side of (1). By definition, for each \(\gamma \in \Gamma _\alpha (R^{-1})\), \(\widehat{f}_\gamma \) is roughly a bump function supported in \(2\gamma \). Let \(\gamma ^*\) be the dual rectangle of \(\gamma \) which has dimensions \(R^\alpha \times R\) and is centered at the origin. By an application of integration by parts and by ignoring the tails, we assume
Here, “\(\approx \)” means up to a \(C_\epsilon R^\epsilon \) factor for any \(\epsilon >0\). We will use the same notation throughout the paper.
We see that
We evaluate the integral above. There are two extreme regions: \(B(0,R^\alpha )\) where all the \(\{\gamma ^*\}\) overlap; \(B(0,R)\setminus B(0,R/2)\) where \(\{\gamma ^*\}\) is \( O(R^{2\alpha -1})\)-overlapping. For the intermediate region \(B(0,r)\setminus B(0,r/2)\) (\(R^\alpha \le r\le R\)), we see that \(\{\gamma ^*\}\) is \( O(r^{-1}R^{2\alpha })\)-overlapping. We may find a dyadic radius r such that
Since \(p\ge 4\alpha +2\ge 4\), the expression above is maximized when \(r=R^\alpha \). Plugging in, we obtain
Plugging into (1), we have
which gives
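The dyadic computation above can be checked numerically. The sketch below is illustrative only: it encodes the overlap model from the text, namely that the annulus \(|x|\sim R^\rho \) (\(\alpha \le \rho \le 1\)) contributes area \(\sim R^{2\rho }\) with overlap \(\sim R^{2\alpha -\rho }\), and confirms that for \(p\ge 4\) the contribution is maximized at \(r=R^\alpha \), giving total exponent \(\alpha (2+p/2)\):

```python
from fractions import Fraction as F

def annulus_exponent(alpha, p, rho):
    # contribution (as an exponent of R) of the annulus |x| ~ R^rho to
    # \int (\sum_gamma 1_{gamma*})^{p/2}: area ~ R^{2 rho}, overlap ~ R^{2 alpha - rho}
    return 2 * rho + (2 * alpha - rho) * p / 2

alpha, p = F(3, 4), F(6)  # sample point with p >= 4*alpha + 2 = 5 (hypothetical values)
rhos = [alpha + (1 - alpha) * F(k, 100) for k in range(101)]  # rho ranging over [alpha, 1]
best = max(annulus_exponent(alpha, p, rho) for rho in rhos)
# for p >= 4 the contribution decreases in rho, so it is maximized at r = R^alpha,
# giving the total exponent alpha * (2 + p/2)
assert best == annulus_exponent(alpha, p, alpha) == alpha * (2 + p / 2)
```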
\(\boxed {\hbox {Case 2:} 2\le p\le 4\alpha +2}\)
We introduce the flat example. Let \(\theta \subset N_{R^{-1}}(\mathcal {P})\) be an \(R^{-1/2}\times R^{-1}\)-cap. Choose f such that \(\widehat{f}(\xi )=\psi _\theta (\xi )\), where \(\psi _\theta \) is a smooth bump function supported in \(\theta \). Let \(\theta ^*\) be the dual rectangle of \(\theta \) which has dimensions \(R^{1/2}\times R\) and is centered at the origin. By the locally constant property, f is an \(L^1\)-normalized function essentially supported in \(\theta ^*\). By ignoring the tails, we assume
We see that
We consider the right hand side of (1). By the same reasoning as in Case 1, for each \(\gamma \in \Gamma _\alpha (R^{-1})\) with \(\gamma \subset \theta \), we know that \(\widehat{f}_\gamma \) is roughly a bump function supported in \(2\gamma \). Therefore, we can assume
We also note that \(\gamma _1^*\) and \(\gamma _2^*\) are comparable when \(\gamma _1,\gamma _2\subset \theta \). We have
Plugging into (1), we have
which gives
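The exponent bookkeeping in the flat example can be recorded in a short script (illustrative; it encodes the model described above, \(|f|\sim R^{-3/2}\) on \(\theta ^*\) with \(|\theta ^*|=R^{3/2}\) and \(|f_\gamma |\sim R^{-\alpha -1}\) on \(\gamma ^*\) with \(|\gamma ^*|=R^{\alpha +1}\)) and checked against the claimed lower bound \(R^{(\alpha -\frac{1}{2})(\frac{1}{2}-\frac{1}{p})}\):

```python
from fractions import Fraction as F

def flat_example_exponent(alpha, p):
    # all quantities are exponents of R in the model described in the text
    lhs = -F(3, 2) + F(3, 2) / p            # log_R ||f||_p
    n_caps = alpha - F(1, 2)                # #{gamma in theta} = R^{alpha - 1/2}
    rhs = n_caps / 2 - (alpha + 1) + (alpha + 1) / p  # log_R ||(sum |f_gamma|^2)^{1/2}||_p
    return lhs - rhs                        # forced lower bound on log_R C_{alpha,p}(R)

for alpha in [F(1, 2), F(2, 3), F(1)]:
    for p in [F(2), F(3), F(4), F(6)]:
        assert flat_example_exponent(alpha, p) == (alpha - F(1, 2)) * (F(1, 2) - 1 / p)
```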
2.2 Proof of Theorem 1
By the standard localization argument, it suffices to prove
We introduce some notation. Throughout the proof, we use \(\gamma \) to denote caps of dimensions \(R^{-\alpha }\times R^{-1}\). For \(R^{-1/2}\le \Delta \le 1\), we will consider caps \(\tau \) of length \(\Delta \) and thickness \(R^{-1}\). We write \(|\tau |=\Delta \) to indicate the length of \(\tau \). We will also partition the region \(B_R\) into rectangles of dimensions \(R^\alpha \times R\). For simplicity, we denote these rectangles by \(B_{R^\alpha \times R}.\) The longest direction of \(B_{R^\alpha \times R}\) will be specified in the proof.
Let \(K\sim \log R\) and let \(m\in \mathbb {N}\) be such that \(K^m=R^{1/2}\). By doing the broad-narrow reduction as in [2, Section 5.1], we have
Note that \(C^mK^C\lesssim _\epsilon R^\epsilon \), for each \(\epsilon >0\).
We first estimate the right hand side of (6).
Lemma 3
Let \(\theta \) be a cap of length \(R^{-1/2}\). Then,
Proof
We partition \(B_R\) into \(B_{R^\alpha \times R}\), where each \(B_{R^\alpha \times R}\) is a translation of \(\gamma ^*\) for \(\gamma \subset \theta \) (note that for all \(\gamma \subset \theta \), \(\gamma ^*\)’s are comparable). It suffices to prove for any \(B_{R^\alpha \times R}\),
Here, \(\omega _{B_{R^\alpha \times R}}\) is a weight which \(=1\) on \(B_{R^\alpha \times R}\) and decays rapidly outside \(B_{R^\alpha \times R}\), and \(\Vert g\Vert _{L^p(\omega )}\) is defined to be \((\int |g|^p\omega )^{1/p}\). The reason we use \(\omega _{B_{R^\alpha \times R}}\) instead of \(\textbf{1}_{B_{R^\alpha \times R}}\) is to make the local orthogonality and locally constant properties rigorous. As this technicality is well-known (see for example [1]), we will just pretend \(\omega _{B_{R^\alpha \times R}}=\textbf{1}_{B_{R^\alpha \times R}}\) for convenience.
We further do the partition
where each \(B_{R^{1/2}\times R}\) is a translation of \(\theta ^*\). Since \(f_\theta \) is locally constant on each \(B_{R^{1/2}\times R}\), we have
By local orthogonality, Hölder’s inequality and noting \(p\ge 2\), we have
Combining the inequalities, we finish the proof of (8). \(\square \)
By Lemma 3, the right hand side of (6) is bounded by
Next, we estimate (7). For any summand in (7), we will show that
This will imply (7)\(^{\frac{1}{p}}\) \(\lessapprox C_{\alpha ,p}(R)\Big \Vert \Big (\sum \limits _{\gamma }|f_\gamma |^2\Big )^{1/2}\Big \Vert _p\), which finishes the proof of Theorem 1. It remains to prove (9).
Fix a \(\Delta \in [R^{-1/2},1]\) and a \(\tau \) with \(|\tau |=\Delta \). We first consider \(\bigcap _{\gamma \subset \tau } \gamma ^*\). It is easy to see \(\bigcap _{\gamma \subset \tau } \gamma ^*\) is an \(R^\alpha \times R^\alpha \Delta ^{-1}\)-rectangle when \(\Delta \ge R^{\alpha -1}\); \(\bigcap _{\gamma \subset \tau } \gamma ^*\) is an \(R^\alpha \times R\)-rectangle when \(\Delta \le R^{\alpha -1}\). We consider these two cases separately.
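The dimension count for \(\bigcap _{\gamma \subset \tau }\gamma ^*\) comes from the fact that tubes of width \(R^\alpha \) whose directions are spread over an angle \(\Delta \) stay within one width of each other for distance \(\sim R^\alpha \Delta ^{-1}\), capped at the tube length R. A small numerical sketch (hypothetical parameter values, not part of the proof):

```python
import math

def core_length(R, alpha, Delta):
    # long side of the intersection of the duals gamma* over gamma in a cap tau of
    # length Delta: tubes of width R^alpha with directions spread over angle Delta
    # stay within one width of each other for distance ~ R^alpha / Delta, capped at R
    return min(R, R**alpha / Delta)

R, alpha = 10**6, 0.7   # hypothetical values with 1/2 <= alpha <= 1
# Delta >= R^{alpha-1}: the intersection is an R^alpha x (R^alpha / Delta) rectangle
big = 10 * R**(alpha - 1)
assert math.isclose(core_length(R, alpha, big), R**alpha / big)
# Delta <= R^{alpha-1}: the intersection is an R^alpha x R rectangle
small = R**(alpha - 1) / 10
assert core_length(R, alpha, small) == R
```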
\(\boxed {\hbox {Case 1:} \Delta \ge R^{\alpha -1}}\)
We choose a partition \(B_R=\bigsqcup B_{R^{\alpha }\times R^\alpha \Delta ^{-1}}\), where each \(B_{R^{\alpha }\times R^\alpha \Delta ^{-1}}\) is a translation of \(\bigcap _{\gamma \subset \tau }\gamma ^*\). We just need to show
Since each \(|f_\gamma |\) is locally constant on \(B_{R^{\alpha }\times R^\alpha \Delta ^{-1}}\) when \(\gamma \subset \tau \), we have
Since \(\{f_\gamma \}_{\gamma \subset \tau }\) are locally orthogonal on \(B_{R^{\alpha }\times R^\alpha \Delta ^{-1}}\), we have
Therefore, (10) is reduced to
Next, we apply the parabolic rescaling. Recall that \(\tau \) is a cap of length \(\Delta \). We dilate by factor \(\Delta ^{-1}\) in the tangent direction of \(\tau \) and dilate by factor \(\Delta ^{-2}\) in the normal direction of \(\tau \). Under the rescaling, we see that: \(\tau \) becomes the \(R^{-1}\Delta ^{-2}\)-neighborhood of \(\mathcal {P}\); \(\tau _1\) and \(\tau _2\) become \(K^{-1}\)-separated caps with length \(K^{-1}\) and thickness \(R^{-1}\Delta ^{-2}\); the rectangle \(B_{R^{\alpha }\times R^\alpha \Delta ^{-1}}\) in the physical space becomes \(B_{R^\alpha \Delta }\). Let \(g,g_1,g_2\) be the rescaled version of \(f_\tau ,f_{\tau _1},f_{\tau _2}\) respectively. The inequality (11) becomes
We recall the following bilinear restriction estimate (see for example in [9]).
Lemma 4
Let \(r>1\), \(K>1\). Suppose \(g_1, g_2\) satisfy \(supp \widehat{g}_1, supp \widehat{g}_2\subset N_{r^{-2}}(\mathcal {P})\) and \(dist (supp \widehat{g}_1, supp \widehat{g}_2)>K^{-1}\). Then for \(p\ge 2\) and \(r'\ge r\) we have
Proof
It suffices to prove the case \(r'=r\). When \(p=2\), this is trivial. When \(p=4\), this is the bilinear restriction estimate. When \(p=\infty \), we note that
The second-last inequality is by Hölder and the condition on the support of \(\widehat{g}_1,\widehat{g}_2\). The last inequality is by Plancherel. For other p, the proof is by using Hölder to interpolate between \(p=2,4,\infty \). \(\square \)
We return to (12). Noting that \(R^\alpha \Delta \ge (R\Delta ^2)^{1/2}\), we apply the lemma above to bound the left hand side of (12) by \((R^\alpha \Delta )^{\frac{2}{p}-1}\Vert g\Vert _{L^2(B_{R^\alpha \Delta })}\). It suffices to prove
When \(p\ge 4\), we use \(C_{\alpha ,p}(R)\gtrsim R^{\alpha (\frac{1}{2}-\frac{2}{p})}\). Then (14) boils down to
which is equivalent to
which is true since \(R\ge 1\).
When \(p\le 4\), we use \(C_{\alpha ,p}(R)\gtrsim R^{(\alpha -\frac{1}{2})(\frac{1}{2}-\frac{1}{p})}\). Then (14) boils down to
which is equivalent to
which is true since \(\alpha \ge 1/2.\)
\(\boxed {\hbox {Case 2:} \Delta \le R^{\alpha -1}}\)
We choose a partition \(B_R=\bigsqcup B_{R^{\alpha }\times R}\), where each \(B_{R^{\alpha }\times R}\) is a translation of \(\bigcap _{\gamma \subset \tau }\gamma ^*\). We just need to show
Since each \(|f_\gamma |\) is locally constant on \(B_{R^{\alpha }\times R}\) when \(\gamma \subset \tau \), we have
Since \(\{f_\gamma \}_{\gamma \subset \tau }\) are locally orthogonal on \(B_{R^{\alpha }\times R}\), we have
Therefore, (17) is reduced to
Next, we do the same parabolic rescaling as above. The rectangle \(B_{R^{\alpha }\times R}\) in the physical space becomes \(B_{R^\alpha \Delta \times R\Delta ^2}\). Let \(g,g_1,g_2\) be the rescaled version of \(f_\tau ,f_{\tau _1},f_{\tau _2}\) respectively. The inequality (18) becomes
To apply Lemma 4, we do the partition \(B_{R^\alpha \Delta \times R\Delta ^2}=\bigsqcup B_{R\Delta ^2}\). So, (19) is reduced to
By Lemma 4,
It suffices to prove
When \(p\ge 4\alpha +2\), we use \(C_{\alpha ,p}(R)\gtrsim R^{\alpha (\frac{1}{2}-\frac{2}{p})}\). Then (21) boils down to
which is equivalent to
Using \(\Delta \ge R^{-\frac{1}{2}}\), we just need to prove
The last inequality is equivalent to \(\frac{1}{4}-\frac{1}{2p}-\frac{\alpha }{p}\ge 0\), which is further equivalent to \(p\ge 4\alpha +2\). We also remark that this is the place where the critical exponent \(p=4\alpha +2\) appears.
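One can double-check where the critical exponent enters: the margin \(\frac{1}{4}-\frac{1}{2p}-\frac{\alpha }{p}\) equals \((p-4\alpha -2)/(4p)\), so it is nonnegative exactly when \(p\ge 4\alpha +2\). A quick verification (illustrative, not part of the proof):

```python
from fractions import Fraction as F

def margin(alpha, p):
    # the exponent 1/4 - 1/(2p) - alpha/p from the computation above
    return F(1, 4) - F(1, 2) / p - alpha / p

for alpha in [F(1, 2), F(2, 3), F(1)]:
    for p in [F(2), F(4), F(9, 2), F(5), F(6), F(8), F(10)]:
        assert margin(alpha, p) == (p - 4 * alpha - 2) / (4 * p)   # exact identity
        assert (margin(alpha, p) >= 0) == (p >= 4 * alpha + 2)     # criticality at p = 4*alpha + 2
```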
When \(2\le p\le 4\alpha +2\), we use \(C_{\alpha ,p}(R)\gtrsim R^{(\alpha -\frac{1}{2})(\frac{1}{2}-\frac{1}{p})}\). Then (21) boils down to
which is equivalent to
which is true since \(\Delta ^{-1}\le R^{1/2}.\)
The proof of Theorem 1 is finished.
3 Small Cap Square Function Estimate for Cone
We prove Theorem 2 in this section. We begin with the sharp examples.
3.1 Sharp Examples
Choose f such that \(\widehat{f}=\psi _{N_{R^{-1}}(\mathcal {C})}(\xi )\), where \(\psi _{N_{R^{-1}}(\mathcal {C})}(\xi )\) is a smooth bump function supported in \(N_{R^{-1}}(\mathcal {C})\). We are going to calculate the lower bound of \(\Vert f\Vert _{p}\), which is the left hand side of (3). We see that \(f(0)=\int \widehat{f}(\xi ) d\xi \sim R^{-1}\). Since \(\widehat{f}\) is supported in the unit ball centered at the origin, f is locally constant in B(0, 1). Therefore,
We also estimate the integral of f in the region \(\{|x|\sim R\}\). We first do a canonical partition of \(N_{R^{-1}}(\mathcal {C})\) into \(1\times R^{-1/2}\times R^{-1}\)-planks, denoted by
Then we can write \(f=\sum _\theta f_\theta \), such that each \(\widehat{f}_\theta \) is a smooth bump function on \(\theta \). Let \(\theta ^*\) be the dual rectangle of \(\theta \), so \(\theta ^*\) has size \(1\times R^{1/2}\times R\) and is centered at the origin. By an application of integration by parts, we can assume
Now the key observation is that \(\{\theta ^*\}\) are disjoint in \(B(0,R)\setminus B(0,\frac{9}{10}R)\), so we see that
Combining with (24), we see
The threshold at which these two lower bounds are equal is \(p=4\).
For this same f, we will estimate the upper bound of the right hand side of (3). Recall that \(\gamma \) is a \(1\times R^{-\beta }\times R^{-1}\)-cap contained in \(N_{R^{-1}}(\mathcal {C})\), and by definition \(\widehat{f}_\gamma =\psi _\gamma \widehat{f}\). Therefore, \(\widehat{f}_\gamma \) is a smooth bump function adapted to \(\gamma \). By an application of integration by parts, we can assume
Here, the dual rectangle \(\gamma ^*\) is centered at the origin with size \(1\times R^\beta \times R\). See Fig. 3: the rectangle on the left hand side is \(\gamma \); the rectangle on the right hand side is \(\gamma ^*\).
Therefore, we can write
Note that each \(\gamma ^*\) is contained in B(0, R), so we rewrite
We are going to calculate \(\int _{\{x_3=r\}}\Big (\sum _\gamma \textbf{1}_{\gamma ^*}\Big )^{p/2}\). Here is the result:
Proposition 1
For \(p\ge 2\), we have
Proof
Fix the plane \(\{x_3=r\}\). For each \(\gamma ^*\), we set
\(\gamma ^*_r\) is a rectangle of size \(1\times R^\beta \) in the plane \(\{x_3=r\}\). Denote the center of \(\gamma ^*_r\) by \(C(\gamma ^*_r)\). We see that \(C(\gamma ^*_r)\) lies on the circle
and the long direction of \(\gamma ^*_r\) is tangent to \(S_r\) (see Fig. 4). We can rewrite the left hand side of (27) as
We also notice two useful facts: (1) \(\#\{\gamma ^*_r\}\sim R^\beta \); (2) \(\{C(\gamma ^*_r)\}\) are roughly \(rR^{-\beta }\)-separated on the circle \(S_r\).
\(\boxed {\hbox {Case 1:} 0\le r\le 10}\)
In this case, we see that \(\{\gamma ^*_r\}\) essentially form a bush centered at the origin. Evaluating the concentrated part and spread-out part, we have
\(\boxed {\hbox {Case 2:} 10\le r\le R^\beta }\)
For any point \(P\in \bigcup \gamma ^*_r\), we are going to estimate \(\sum _\gamma \textbf{1}_{\gamma ^*_r}(P)\). Define
We see that any \(P\in \bigcup \gamma ^*_r\) satisfies \(d(P)\lesssim R^\beta \), and if \(P\in \bigcup \gamma ^*_r\) lies inside \(S_r\) then \(d(P)=0\). For simplicity, we write \(d=d(P)\). We consider several cases:
-
(1)
\(d\le 10\). In this case, P lies in the 10-neighborhood of \(S_r\). Therefore,
$$\begin{aligned} \sum _\gamma \textbf{1}_{\gamma ^*_r}(P)=\sum _\gamma \textbf{1}_{\gamma ^*_r\cap N_{10}(S_r)}(P) \end{aligned}$$Noting that \(\gamma ^*_r\cap N_{10}(S_r)\) is essentially a \(1\times r^{1/2}\)-rectangle centered at \(C(\gamma ^*_r)(\in S_r)\) and noting that \(\{C(\gamma ^*_r)\}\) are \(rR^{-\beta }\) separated, we have
$$\begin{aligned} \sum _\gamma \textbf{1}_{\gamma ^*_r\cap N_{10}(S_r)}(P)\sim \frac{r^{1/2}}{rR^{-\beta }}=r^{-1/2}R^\beta . \end{aligned}$$ -
(2)
\(10\le d\le r\). We claim in this case
$$\begin{aligned} \sum _\gamma \textbf{1}_{\gamma ^*_r}(P)\sim R^\beta (rd)^{-1/2}. \end{aligned}$$See Fig. 5. By translation and rotation, we may assume \(S_r\) is centered at \((-r,0)\) and P lies on the \(x_2\)-axis. By the Pythagorean theorem, the coordinate of P is \((0, \sqrt{d(d+2r)})\). Since \(d\le r\), we may ignore some constant factor and write the coordinate of P as
$$\begin{aligned} P=(0,(dr)^{1/2}). \end{aligned}$$(28) The next step is to find the number of \(\gamma ^*_r\) that pass through P. Suppose \(P\in \gamma ^*_r\). Since the center of \(\gamma ^*_r\) lies on \(S_r\), we may denote its coordinate by \(C(\gamma ^*_r)=(-r+r\cos \theta ,r\sin \theta )\). Let \(\ell \) be the line passing through \(C(\gamma ^*_r)\) and tangent to \(S_r\) (which is also the core line of \(\gamma ^*_r\)):
$$\begin{aligned} \ell : y-r\sin \theta =-\frac{\cos \theta }{\sin \theta }(x+r-r\cos \theta ). \end{aligned}$$Since \((dr)^{1/2}\le R^\beta \), we see that \(P\in \gamma ^*_r\) is equivalent to \(dist (\ell ,P)\le \frac{1}{2}\). By some calculation,
$$\begin{aligned} dist (\ell ,P)&=\frac{|(dr)^{1/2}-r\sin \theta +\frac{\cos \theta }{\sin \theta }r(1-\cos \theta )|}{\sqrt{1+\frac{\cos ^2\theta }{\sin ^2\theta }}}\\&=|\sin \theta (dr)^{1/2}-r(1-\cos \theta )|\\&=2|(dr)^{1/2}\sin \frac{\theta }{2}\cos \frac{\theta }{2}-r\sin ^2\frac{\theta }{2}|. \end{aligned}$$We just need to find the number of \(\theta \) such that \(dist (\ell ,P)\le 1/2\). By symmetry, we just compute the positive solutions \(\theta \) that are close to 0. In this case, the inequality becomes
$$\begin{aligned} (dr)^{1/2}\sin \frac{\theta }{2}\cos \frac{\theta }{2}-r\sin ^2\frac{\theta }{2}\le 1/4. \end{aligned}$$The meaningful solutions will be
$$\begin{aligned} \sin \frac{\theta }{2}&\le \frac{(dr)^{1/2}\cos \frac{\theta }{2}-\sqrt{dr\cos ^2\frac{\theta }{2}-r}}{2r}\\&=\frac{1}{2}\frac{1}{(dr)^{1/2}\cos \frac{\theta }{2}+\sqrt{dr\cos ^2\frac{\theta }{2}-r}}\\&\sim (dr)^{-1/2}. \end{aligned}$$In the last step, we use \(\cos \frac{\theta }{2}\sim 1\). Therefore, \(0\le \theta \lesssim (dr)^{-1/2}\). Since \(\{C(\gamma ^*_r)\}\) have angle separation \(\sim R^{-\beta }\), we see the number of \(\gamma ^*_r\) that contains P is \(\sim R^\beta (dr)^{-1/2}\).
-
(3)
\(r\le d\le R^{\beta }\). We claim in this case
$$\begin{aligned} \sum _\gamma \textbf{1}_{\gamma ^*_r}(P)\sim R^\beta d^{-1}. \end{aligned}$$The calculation is exactly the same as above, with the only modification that we replace (28) by \(P=(0,d)\).
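The counting in scenario (2) can be tested by direct simulation (all parameter values below are hypothetical, not from the paper): place tangent lines to the circle of radius r at \(\sim R^{-\beta }\)-separated angles and count how many pass within distance 1/2 of a point at distance d outside the circle; the count should be within a constant factor of \(R^\beta (rd)^{-1/2}\):

```python
import math

# Simulation of scenario (2): tangent lines to the circle of radius r centered at
# the origin, one tangency angle per R^{-beta} arc, and a point P = (r + d, 0)
# lying at distance d outside the circle.
R, beta = 10**6, 0.8
r, d = 2000.0, 100.0            # chosen so that 10 <= d <= r and (dr)^{1/2} <= R^beta
N = int(2 * math.pi * R**beta)  # number of R^{-beta}-separated tangency angles

count = 0
for k in range(N):
    t = 2 * math.pi * k / N
    # distance from P to the tangent line at angle t is |(r + d) cos t - r|;
    # the width-1 rectangle gamma*_r contains P when this is at most 1/2
    if abs((r + d) * math.cos(t) - r) <= 0.5:
        count += 1

predicted = R**beta * (r * d) ** -0.5
assert predicted / 4 <= count <= 4 * predicted   # agreement up to a constant factor
```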
Combining the three scenarios (1), (2), (3), we can estimate
In the last line, we use \(\approx \) because when \(p=4\) the summation runs over \(\sim \log R\) equal numbers instead of a geometric series.
\(\boxed {\hbox {Case 3:} R^\beta \le r\le R}\)
This is almost the same as \(\boxed {\hbox {Case 2}}\). Actually, it is even simpler, since we only have scenarios (1) and (2) (with the range in (2) replaced by \(10\le d\le r^{-1}R^{2\beta }\) and noting \(r^{-1}R^{2\beta }\le r\)). The same argument will give
\(\square \)
With (27), we can finally estimate
The last step follows from \(R^{\beta (2+\frac{p}{4})}\le R^{\frac{p\beta }{2}}+R^{(2-\frac{p}{4})+\frac{p\beta }{2}}\).
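In terms of exponents of R, this inequality says \(\beta (2+\frac{p}{4})\le \max \{\frac{p\beta }{2},\, (2-\frac{p}{4})+\frac{p\beta }{2}\}\): the first exponent on the right dominates for \(p\ge 8\) and the second for \(p\le 8\) (using \(\beta \le 1\)). A quick check (illustrative, not part of the proof):

```python
from fractions import Fraction as F

def lhs_exp(beta, p):
    # exponent of R on the left-hand side
    return beta * (2 + p / 4)

def rhs_exp(beta, p):
    # larger of the two exponents of R on the right-hand side
    return max(p * beta / 2, (2 - p / 4) + p * beta / 2)

for beta in [F(1, 2), F(3, 4), F(1)]:
    for p in [F(2), F(4), F(6), F(8), F(10), F(12)]:
        # the first right-hand exponent dominates for p >= 8, the second for p <= 8
        assert lhs_exp(beta, p) <= rhs_exp(beta, p)
```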
Combining (25), (26) and plugging into (3), we obtain
Considering the three cases \(2\le p\le 4\), \(4\le p\le 8\) and \(p\ge 8\) will give us that the right hand side of (4) is indeed a lower bound for \(C_{\beta ,p}(R)\) (up to an \(R^\epsilon \) factor).
3.2 Proof of Theorem 2
The difficult part of the proof will be in the range \(4\le p\le 8\). Recall from the Remark in Section 1.1.2 that we need to prove the estimate for all p, not only the endpoint p, since there is no interpolation argument. The main tool we are going to use is the amplitude dependent wave envelope estimate of Guth–Maldague [5]. Before giving the proof, we introduce some notation from [5, 6].
Recall \(\mathcal {C}\) is the truncated cone in \(\mathbb {R}^3\):
We have the canonical partition of \(N_{R^{-1}}(\mathcal {C})\) into \(1\times R^{-1/2}\times R^{-1}\)-planks \(\Theta =\{\theta \}\):
More generally, for any dyadic \(s\in [R^{-1/2},1]\), we can partition the \(s^2\)-neighborhood of \(\mathcal {C}\) into \(1\times s\times s^2\)-planks \({\textbf {S}}_s=\{\tau _s\}\):
Note in particular \({\textbf {S}}_{R^{-1/2}}=\Theta \). For each s and a frequency plank \(\tau _s\in {\textbf {S}}_s\), we define the box \(U_{\tau _s}\) in the physical space to be a rectangle centered at the origin of dimensions \(Rs^2\times Rs\times R\) whose edge of length \(Rs^2\) (respectively Rs, R) is parallel to the edge of \(\tau _s\) with length 1 (respectively s, \(s^2\)). Note that for any \(\theta \in \Theta \), \(U_\theta \) is just \(\theta ^*\) (the dual rectangle of \(\theta \)). Also, \(U_{\tau _s}\) is the convex hull of \(\cup _{\theta \subset \tau _s}U_{\theta }\).
We make a useful observation, which will be used later. For any \(\theta \subset \tau _s\), we see that \(\theta ^*\) is a \(1\times R^{1/2}\times R\)-plank. Define \(U_{\theta ,s}\) to be the \(Rs^2\times Rs\times R\)-plank which is made by dilating the corresponding edges of \(\theta ^*\). Our observation is that \(U_{\tau _s}\) and \(U_{\theta ,s}\) are comparable:
This is not hard to see by noting that the second longest edge of \(\theta ^*\) forms an angle \(\lesssim s\) with the \(Rs\times R\)-face of \(U_{\tau _s}\). We omit the proof.
We cover \(\mathbb {R}^3\) by translated copies of \(U_{\tau _s}\). We will use \(U\parallel U_{\tau _s}\) to indicate U is one of the translated copies. If \(U\parallel U_{\tau _s}\), then we define \(S_U f\) by
We can think of \(S_U f\) as the wave envelope of f localized in U in the physical space and localized in \(\tau _s\) in the frequency space. We have the following inequality of Guth, Wang and Zhang (see [6, Theorem 1.5]):
Theorem 3
[Wave envelope estimate] Suppose \(supp \widehat{f}\subset N_{R^{-1}}(\mathcal {C})\). Then
for any \(\epsilon >0\).
There is a refined version of the wave envelope estimate proved by Guth and Maldague (See [5, Theorem 2]):
Theorem 4
[Amplitude dependent wave envelope estimate] Suppose \(supp \widehat{f}\subset N_{R^{-1}}(\mathcal {C})\). Then for any \(\alpha >0\),
for any \(\epsilon >0\). Here, \(\mathcal {G}_{\tau _s}(\alpha )=\Big \{ U\parallel U_{\tau _s}:|U|^{-1}\Vert S_U f\Vert _2^2\gtrsim |\log R|^{-1} \frac{\alpha ^2}{(\#{\textbf {S}}_s)^2} \Big \}\).
Remark
In the original paper [5], their definition for \(\mathcal {G}_{\tau _s}(\alpha )\) is
where \(\#\tau _s=\#\{\tau _s\in {\textbf {S}}_s:f_{\tau _s}\not \equiv 0\}\). Noting that \(\#\tau _s\le \#{\textbf {S}}_s\), we see that our \(\mathcal {G}_{\tau _s}(\alpha )\) is a bigger set, and hence our (32) is weaker than the original version ([5, Theorem 2]).
Proof of Theorem 2
\(\boxed {\hbox {Case 1:} p\ge 8}\) This follows from the Cauchy–Schwarz inequality, since \(\#\Gamma _\beta (R^{-1})\sim R^{\beta }.\)
\(\boxed {\hbox {Case 2:} 2\le p\le 4}\)
We have (31). By dyadic pigeonholing on s, we can find s such that
We fix this s. Denote \({\textbf {U}}:=\{U: U\parallel U_\tau \text { for some } \tau \in {\textbf {S}}_s\}\). Then the inequality above can be written as
We remind readers that each \(U\in {\textbf {U}}\) has size \(Rs^2\times Rs\times R\). We also have the following \(L^2\) estimate:
We provide a quick proof for (35). We have
Noting that \(\{f_\theta : \theta \subset \tau \}\) are locally orthogonal on any translation of \(U_\tau \) and recalling (30), we have
Next, we will do dyadic pigeonholing on \(\Vert S_U f\Vert _2^2\). (Actually, we only need to prove a local version of the inequality, so we just care about those U that intersect \(B_R\). There are in total \(R^{O(1)}\) of them.) We can find a number \(W>0\) and set \({\textbf {U}}'=\{U\in {\textbf {U}}: \Vert S_Uf\Vert _2^2\sim W\}\), so that
Since every \(U\in {\textbf {U}}\) has the same measure \(R^3s^2\), there is no ambiguity in writing \(|U|^{-1}\) in (36).
Let \(\alpha \) be such that \(\frac{1}{p}=\frac{\alpha }{4}+\frac{1-\alpha }{2}\). Then \(\alpha =4(\frac{1}{2}-\frac{1}{p})\). Applying Hölder’s inequality gives
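The weight \(\alpha =4(\frac{1}{2}-\frac{1}{p})\) is simply the Hölder interpolation weight splitting \(\frac{1}{p}\) between \(\frac{1}{4}\) and \(\frac{1}{2}\); a one-line verification (illustrative):

```python
from fractions import Fraction as F

# alpha = 4(1/2 - 1/p) solves 1/p = alpha/4 + (1 - alpha)/2, and is a valid
# Holder weight (between 0 and 1) exactly in the range 2 <= p <= 4
for p in [F(2), F(5, 2), F(3), F(7, 2), F(4)]:
    a = 4 * (F(1, 2) - 1 / p)
    assert a / 4 + (1 - a) / 2 == 1 / p
    assert 0 <= a <= 1
```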
Next we are going to exploit more orthogonality for \(S_U f\). Suppose \(U\parallel U_\tau \). By definition
We remind readers that \(\{\tau \}\) are \(1\times s\times s^2\)-caps; \(\{\theta \}\) are \(1\times R^{-1/2}\times R^{-1}\)-caps; \(\{\gamma \}\) are \(1\times R^{-\beta }\times R^{-1}\)-caps. Since U is too small for \(\{f_\gamma :\gamma \subset \theta \}\) to be orthogonal on U, we need to find a larger rectangle. First, let us look at the rectangles \(\{\gamma : \gamma \subset \theta \}\). We want to find a rectangle \(\nu _\theta \) as big as possible, such that \(\{\gamma +\nu _\theta : \gamma \subset \theta \}\) are finitely overlapping. Actually, we can choose \(\nu _\theta \) to be of size \(R^{1/2-\beta }\times R^{-\beta }\times R^{-1}\) (here the edge of \(\nu _\theta \) with length \(R^{1/2-\beta }\) (respectively \(R^{-\beta }\), \(R^{-1}\)) is parallel to the edge of \(\theta \) with length 1 (respectively \(R^{-1/2}\), \(R^{-1}\))). See Fig. 6: the left hand side is \(\theta \) and \(\{\gamma :\gamma \subset \theta \}\); the right hand side is our \(\nu _\theta \). It is not hard to see that \(\{\gamma +\nu _\theta : \gamma \subset \theta \}\) are finitely overlapping. Let \(\nu _\theta ^*\) be the dual of \(\nu _\theta \) in the physical space; then \(\nu _\theta ^*\) has size \(R^{\beta -\frac{1}{2}}\times R^\beta \times R\) and we have the local orthogonality (we just ignore the rapidly decaying tail for simplicity):
Define
which is a rectangle of size
We tile \(\mathbb {R}^3\) with translated copies of \(V_\theta \), and we write \(V\parallel V_\theta \) if V is one of the tiles. Noting that \(\frac{R^{\beta -\frac{1}{2}}}{Rs^2}\le \frac{R^\beta }{Rs}\), we will discuss three scenarios: 1. \(\frac{R^{\beta -\frac{1}{2}}}{Rs^2}\le \frac{R^\beta }{Rs}\le 1\); 2. \(\frac{R^{\beta -\frac{1}{2}}}{Rs^2}\le 1\le \frac{R^\beta }{Rs}\); 3. \(1\le \frac{R^{\beta -\frac{1}{2}}}{Rs^2}\le \frac{R^\beta }{Rs}\).
- If \(\frac{R^{\beta -\frac{1}{2}}}{Rs^2}\le \frac{R^\beta }{Rs}\le 1\), then \(V_\theta \) is essentially \(U_\tau \). In this case, we already have the orthogonality of \(\{f_\gamma : \gamma \subset \theta \}\) on \(U(\parallel U_\tau )\). Therefore,
$$\begin{aligned} \Vert f\Vert _p^p&\lessapprox |U|^{-p(\frac{1}{2}-\frac{1}{p})}\sum _{\tau \in {\textbf {S}}_s}\sum _{U\parallel U_\tau }\bigg (\int _U \sum _{\theta \subset \tau }|\sum _{\gamma \subset \theta }f_\gamma |^2\bigg )^{\frac{p}{2}}\\&\sim |U|^{-p(\frac{1}{2}-\frac{1}{p})}\sum _{\tau \in {\textbf {S}}_s}\sum _{U\parallel U_\tau }\bigg (\int _U \sum _{\gamma \subset \tau }|f_\gamma |^2\bigg )^{\frac{p}{2}}\\&\le \sum _{\tau \in {\textbf {S}}_s}\sum _{U\parallel U_\tau }\int _U\Big (\sum _{\gamma \subset \tau }|f_\gamma |^2\Big )^{\frac{p}{2}}\\&=\sum _{\tau \in {\textbf {S}}_s}\int _{\mathbb {R}^3}\Big (\sum _{\gamma \subset \tau }|f_\gamma |^2\Big )^{\frac{p}{2}}\\&\le \int _{\mathbb {R}^3}\Big (\sum _{\gamma \in \Gamma _{\beta }(R^{-1})}|f_\gamma |^2\Big )^{p/2}. \end{aligned}$$
- In the other two scenarios, we proceed as follows.
$$\begin{aligned} \Vert f\Vert _p^p&\lessapprox |U|^{-p(\frac{1}{2}-\frac{1}{p})}\sum _{\tau \in {\textbf {S}}_s} \sum _{U\parallel U_\tau } \bigg (\int _U \sum _{\theta \subset \tau }|f_\theta |^2\bigg )^{p/2}\\&\le |U|^{-p(\frac{1}{2}-\frac{1}{p})}\sum _{\tau \in {\textbf {S}}_s} \sum _{U\parallel U_\tau } \#\{\theta \subset \tau \}^{\frac{p}{2}-1}\sum _{\theta \subset \tau }\bigg (\int _U |f_\theta |^2\bigg )^{p/2}\\&\le |U|^{-p(\frac{1}{2}-\frac{1}{p})}\#\{\theta \subset \tau \}^{\frac{p}{2}-1}\sum _{\tau \in {\textbf {S}}_s} \sum _{\theta \subset \tau } \sum _{V\parallel V_\theta } \bigg (\int _V |f_\theta |^2\bigg )^{p/2}\\ (\text {By~orthogonality})&\sim |U|^{-p(\frac{1}{2}-\frac{1}{p})}\#\{\theta \subset \tau \}^{\frac{p}{2}-1}\sum _{\tau \in {\textbf {S}}_s} \sum _{\theta \subset \tau } \sum _{V\parallel V_\theta } \bigg (\int _V \sum _{\gamma \subset \theta }|f_\gamma |^2\bigg )^{p/2}\\ (\text {H}\ddot{\textrm{o}}\text {lder})&\le |U|^{-p(\frac{1}{2}-\frac{1}{p})}\#\{\theta \subset \tau \}^{\frac{p}{2}-1}\sum _{\tau \in {\textbf {S}}_s}\\&\qquad \sum _{\theta \subset \tau } \sum _{V\parallel V_\theta } |V|^{p(\frac{1}{2}-\frac{1}{p})}\int _V \Big (\sum _{\gamma \subset \theta }|f_\gamma |^2\Big )^{p/2}\\&\le \bigg (\frac{|V|}{|U|}\bigg )^{p(\frac{1}{2}-\frac{1}{p})}\#\{\theta \subset \tau \}^{\frac{p}{2}-1} \Big \Vert \Big (\sum _{\gamma \in \Gamma _\beta (R^{-1})}|f_\gamma |^2\Big )^{\frac{1}{2}}\Big \Vert _p^p\\&= \bigg (\max \Big \{\frac{R^\beta }{Rs},1\Big \}\max \Big \{\frac{R^{\beta -\frac{1}{2}}}{Rs^2},1\Big \}\bigg )^{p(\frac{1}{2}-\frac{1}{p})} (s R^{\frac{1}{2}})^{\frac{p}{2}-1}\\&\qquad \Big \Vert \Big (\sum _{\gamma \in \Gamma _\beta (R^{-1})}|f_\gamma |^2\Big )^{\frac{1}{2}}\Big \Vert _p^p. \end{aligned}$$
We just need to check
$$\begin{aligned} \bigg (\max \Big \{\frac{R^\beta }{Rs},1\Big \}\max \Big \{\frac{R^{\beta -\frac{1}{2}}}{Rs^2},1\Big \}\bigg )^{p(\frac{1}{2}-\frac{1}{p})} (s R^{\frac{1}{2}})^{\frac{p}{2}-1}\le R^{(\beta -\frac{1}{2})(p-2)}. \end{aligned}$$(39)
\(*\) If \(\frac{R^{\beta -\frac{1}{2}}}{Rs^2}\le 1\le \frac{R^\beta }{Rs}\), then the left hand side of (39) equals \(R^{(\beta -\frac{1}{2})(\frac{p}{2}-1)}\), which is \(\le \) the right hand side of (39).
\(*\) If \(1\le \frac{R^{\beta -\frac{1}{2}}}{Rs^2}\le \frac{R^\beta }{Rs}\), then the left hand side of (39) equals
$$\begin{aligned} (R^{\beta -1}s^{-1})^{p-2}, \end{aligned}$$
which is less than the right hand side of (39) since \(s^{-1}\le R^{1/2}\).
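The exponent arithmetic in the two scenario checks above can be verified numerically. The following sketch tests the factor produced by the chain of inequalities against \(R^{(\beta -\frac{1}{2})(p-2)}\) (our reading of the right hand side of (39)), over the range \(R^{-1/2}\le s\le R^{\beta -1}\) where scenarios 2 and 3 occur; the sampled parameter values are illustrative only:

```python
import math

def factor(R, beta, s, p):
    """The constant produced by the Case 2 chain of inequalities."""
    r1 = max(R ** beta / (R * s), 1.0)
    r2 = max(R ** (beta - 0.5) / (R * s ** 2), 1.0)
    return (r1 * r2) ** (p * (0.5 - 1.0 / p)) * (s * math.sqrt(R)) ** (p / 2 - 1)

def target(R, beta, p):
    # the bound R^((beta - 1/2)(p - 2)), as read off from the scenario checks
    return R ** ((beta - 0.5) * (p - 2))

for R in [1e4, 1e8]:
    for beta in [0.5, 0.6, 0.8, 1.0]:
        for p in [2.0, 3.0, 4.0]:
            lo, hi = R ** -0.5, R ** (beta - 1.0)  # scenario 2/3 range of s
            for t in [0.0, 0.5, 1.0]:
                s = lo * (hi / lo) ** t
                assert factor(R, beta, s, p) <= target(R, beta, p) * (1 + 1e-6)
```

At the endpoint \(s=R^{-1/2}\) the two sides agree exactly, matching the sharpness of the scenario 3 computation.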
\(\boxed {\hbox {Case 3:} 4\le p\le 8}\)
Note that
$$\begin{aligned} \Vert f\Vert _p^p\sim \sum _{\alpha \,\mathrm {dyadic}}\alpha ^p|\{x\in \mathbb {R}^3: \alpha <|f(x)|\le 2\alpha \}|. \end{aligned}$$
We may assume the range of \(\alpha \) is \(R^{-100}\Vert f\Vert _\infty \le \alpha \le \Vert f\Vert _\infty \); the contribution from the remaining \(\alpha \) is negligible.
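The dyadic pigeonholing used below is elementary: with \(\sim \log R\) dyadic levels of \(\alpha \), one level must capture the \(L^p\) norm up to a logarithmic factor. A toy numerical sketch (with made-up sample values, purely illustrative):

```python
import random

random.seed(0)
p = 6.0
# toy "function": values of |f| on unit cells
values = [random.uniform(1e-6, 1.0) for _ in range(10 ** 4)]
norm_p = sum(v ** p for v in values)

# dyadic levels alpha = 2^-k * max|f|, down to a negligible threshold
vmax = max(values)
levels = [vmax * 2.0 ** -k for k in range(40)]
contrib = [a ** p * sum(1 for v in values if a < v <= 2 * a) for a in levels]

# a single dyadic alpha captures the norm up to the number of levels
best = max(contrib)
assert norm_p <= 2 ** p * len(levels) * best
```

The point is only that the loss is the number of dyadic levels, which is \(O(\log R)\) once \(\alpha \le R^{-100}\Vert f\Vert _\infty \) is discarded; this is exactly the loss absorbed by the \(\lessapprox \) notation.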
By dyadic pigeonholing, we can find \(\alpha >0\) such that
$$\begin{aligned} \Vert f\Vert _p^p\lessapprox \alpha ^p|\{x\in \mathbb {R}^3:|f(x)|> \alpha \}|. \end{aligned}$$
We just need to fix this \(\alpha \), and prove an upper bound for \(\alpha ^p |\{x\in \mathbb {R}^3:|f(x)|> \alpha \}|\). By (32), we have
By pigeonholing again, we can find s such that
We fix this s. We also remind readers of the definition of \(\mathcal {G}_\tau (\alpha )\):
since \(\#{\textbf {S}}_s\sim s^{-1}\). Continuing the estimate in (40), we have
Moving the power of \(\alpha \) to the left hand side, we obtain
Our final goal is to prove that the right hand side above is
$$\begin{aligned} \lessapprox R^{\frac{\beta p}{2}+\frac{p}{4}-2}\Big \Vert \Big (\sum _{\gamma \in \Gamma _\beta (R^{-1})}|f_\gamma |^2\Big )^{\frac{1}{2}}\Big \Vert _p^p. \end{aligned}$$
To do that, we again need to exploit the orthogonality of \(\{f_\gamma :\gamma \subset \theta \}\). The argument is different from that in \(\boxed {\hbox {Case 2:} 2\le p\le 4}\): there we expanded the integration domain U to a bigger rectangle V to get orthogonality, whereas here we are going to use the Cauchy–Schwarz inequality.
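The Cauchy–Schwarz step in question is the elementary inequality \(|\sum _{i=1}^N a_i|^2\le N\sum _{i=1}^N|a_i|^2\); a minimal illustration with made-up complex data:

```python
import random

random.seed(1)
N = 50
a = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]

# |sum a_i|^2 <= N * sum |a_i|^2
lhs = abs(sum(a)) ** 2
rhs = N * sum(abs(z) ** 2 for z in a)
assert lhs <= rhs + 1e-9
```

In the proof, \(N\) will be the number of caps \(\gamma \) inside a plank \(\pi \), which is where the cap-counting factor below comes from.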
We discuss the geometry of these caps. Fix a \(\tau \in {\textbf {S}}_s\). By definition, \(U_\tau \) is an \(Rs^2\times Rs\times R\)-rectangle in the physical space, so \(U_\tau ^*\) is an \(R^{-1}s^{-2}\times R^{-1}s^{-1}\times R^{-1}\)-rectangle. We make the following observation: for each \(\theta \subset \tau \), we can show that \(U^*_\tau \) is comparable to another rectangle of the same size whose edges are parallel to the corresponding edges of \(\theta \). We explain this in more detail. Let \(U_{\theta ,s}\) be the \(Rs^2\times Rs\times R\)-rectangle obtained from the \(1\times R^{1/2}\times R\)-rectangle \(\theta ^*\) by dilating the corresponding edges. Then \(U_{\theta ,s}^*\) is an \(R^{-1}s^{-2}\times R^{-1}s^{-1}\times R^{-1}\)-rectangle whose edges are parallel to the corresponding edges of the \(1\times R^{-1/2}\times R^{-1}\)-rectangle \(\theta \). We want to show that \(U_\tau ^*\) and \(U_{\theta ,s}^*\) are comparable. This is equivalent to showing that \(U_\tau \) and \(U_{\theta ,s}\) are comparable, which is the observation we made at (29). Therefore, for any \(\theta \subset \tau \), we can assume the edges of \(U_\tau ^*\) are parallel to the corresponding edges of \(\theta \).
Fix a \(U\parallel U_\tau \); then \(U^*=U_\tau ^*\). See Fig. 7: on the left are \(\theta \) and \(\{\gamma : \gamma \subset \theta \}\); in the middle is our \(U^*\). We will discuss two scenarios, depending on whether \(R^{-\beta }\) (the width of \(\gamma \)) is bigger than \(R^{-1}s^{-1}\) (the width of \(U^*\)).
- If \(R^{-\beta }\ge R^{-1}s^{-1}\), then we see that \(\{ \gamma +U^*: \gamma \subset \theta \}\) are finitely overlapping. This means that \(\{f_\gamma : \gamma \subset \theta \}\) are locally orthogonal on U:
$$\begin{aligned} \int _U \Big |\sum _{\gamma \subset \theta }f_\gamma \Big |^2\lesssim \int _U \sum _{\gamma \subset \theta }|f_\gamma |^2. \end{aligned}$$Therefore,
$$\begin{aligned} \Vert f\Vert _p^p&\lessapprox |U|^{1-\frac{p}{2}}\sum _{\tau \in {\textbf {S}}_s}\sum _{U\parallel U_\tau }\bigg (\int _U \sum _{\theta \subset \tau }|\sum _{\gamma \subset \theta }f_\gamma |^2\bigg )^{\frac{p}{2}}s^{4-p}\\&\lesssim |U|^{1-\frac{p}{2}}\sum _{\tau \in {\textbf {S}}_s}\sum _{U\parallel U_\tau }\bigg (\int _U \sum _{\gamma \subset \tau }|f_\gamma |^2\bigg )^{\frac{p}{2}}s^{4-p}\\ (\text {H}\ddot{\textrm{o}}\text {lder})&\le s^{4-p}\sum _{\tau \in {\textbf {S}}_s}\sum _{U\parallel U_\tau }\int _U\Big (\sum _{\gamma \subset \tau }|f_\gamma |^2\Big )^{\frac{p}{2}}\\&=s^{4-p}\sum _{\tau \in {\textbf {S}}_s}\int _{\mathbb {R}^3}\Big (\sum _{\gamma \subset \tau }|f_\gamma |^2\Big )^{\frac{p}{2}}\\&\le s^{4-p}\int _{\mathbb {R}^3}\Big (\sum _{\gamma \in \Gamma _{\beta }(R^{-1})}|f_\gamma |^2\Big )^{p/2}. \end{aligned}$$We just need to check
$$\begin{aligned} s^{4-p}\le R^{\frac{\beta p}{2}+\frac{p}{4}-2}.\end{aligned}$$Plugging in \(s^{-1}\le R^{1/2}\), the inequality above reduces to
$$\begin{aligned} R^{p/4}\le R^{\frac{\beta p}{2}}, \end{aligned}$$which is true since \(\beta \ge 1/2\).
- If \(R^{-\beta }\le R^{-1}s^{-1}\), we will define a set of new planks, which we call \(\pi \). See the right hand side of Fig. 7. We partition \(\theta \) into a set of \(1\times R^{-1}s^{-1}\times R^{-1}\)-planks, which we denote by \(\{\pi : \pi \subset \theta \}\). If the partition is well chosen (the sizes of the caps may vary within a constant multiple), we can assume each \(\gamma \) fits into one \(\pi \), so we define
$$\begin{aligned} f_\pi :=\sum _{\gamma \subset \pi }f_\gamma . \end{aligned}$$Now, our key observation is that \(\{\pi +U^*: \pi \subset \theta \}\) are finitely overlapping. This is true by noting that the widths of \(U^*\) and \(\pi \) are both \(R^{-1}s^{-1}\), that the angle between the longest edges of \(\pi \) and \(U^*\) is less than \(R^{-1/2}\), and that \(R^{-1}s^{-2}\cdot R^{-1/2}\le R^{-1}s^{-1}\). Therefore, \(\{f_\pi : \pi \subset \theta \}\) are locally orthogonal on U, i.e.,
$$\begin{aligned} \int _U \Big |\sum _{\pi \subset \theta }f_\pi \Big |^2\lesssim \int _U \sum _{\pi \subset \theta }|f_\pi |^2. \end{aligned}$$(43)Another step of Cauchy–Schwarz will give
$$\begin{aligned} \int _U \sum _{\pi \subset \theta }|f_\pi |^2&=\int _U \sum _{\pi \subset \theta }\Big |\sum _{\gamma \subset \pi }f_\gamma \Big |^2\le \#\{\gamma \subset \pi \} \int _U \sum _{\gamma \subset \theta }|f_\gamma |^2\nonumber \\&=R^\beta R^{-1}s^{-1} \int _U \sum _{\gamma \subset \theta }|f_\gamma |^2. \end{aligned}$$(44)As a result, we obtain
$$\begin{aligned} \int _U |f_\theta |^2\lesssim R^\beta R^{-1}s^{-1}\int _U \sum _{\gamma \subset \theta }|f_\gamma |^2. \end{aligned}$$Summing over \(\theta \subset \tau \), we obtain
$$\begin{aligned} \int _U \sum _{\theta \subset \tau }|f_\theta |^2\lesssim R^\beta R^{-1}s^{-1}\int _U \sum _{\gamma \subset \tau }|f_\gamma |^2. \end{aligned}$$Therefore,
$$\begin{aligned} \Vert f\Vert _p^p&\lessapprox |U|^{1-\frac{p}{2}}\sum _{\tau \in {\textbf {S}}_s} \sum _{U\parallel U_\tau } \bigg (\int _U \sum _{\theta \subset \tau }|f_\theta |^2\bigg )^{p/2}s^{4-p}\\&\lesssim |U|^{1-\frac{p}{2}}(R^\beta R^{-1}s^{-1})^{\frac{p}{2}} \sum _{\tau \in {\textbf {S}}_s} \sum _{U\parallel U_\tau } \bigg (\int _U \sum _{\gamma \subset \tau }|f_\gamma |^2\bigg )^{p/2}s^{4-p}\\ (\text {H}\ddot{\textrm{o}}\text {lder})&\le s^{4-p}(R^\beta R^{-1}s^{-1})^{\frac{p}{2}}\sum _{\tau \in {\textbf {S}}_s} \sum _{U\parallel U_\tau } \int _U \Big (\sum _{\gamma \subset \tau }|f_\gamma |^2\Big )^{p/2}\\&\le s^{4-p}(R^\beta R^{-1}s^{-1})^{\frac{p}{2}}\Big \Vert \Big (\sum _{\gamma \in \Gamma _\beta (R^{-1})}|f_\gamma |^2\Big )^{\frac{1}{2}}\Big \Vert _p^p. \end{aligned}$$
We just need to check
$$\begin{aligned} s^{4-p}(R^\beta R^{-1}s^{-1})^{\frac{p}{2}}\le R^{\frac{\beta p}{2}+\frac{p}{4}-2}, \end{aligned}$$
which is equivalent to
$$\begin{aligned} s^{4-\frac{3p}{2}}\le R^{\frac{3p}{4}-2}. \end{aligned}$$
Plugging in \(s^{-1}\le R^{1/2}\) and noting that \(4-\frac{3p}{2}<0\), we prove the result. \(\square \)
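As a final sanity check on the exponent bookkeeping in both scenarios of Case 3, here is a small numerical sketch; the sampled parameter values are for illustration only and are not part of the proof:

```python
def target(R, beta, p):
    """The desired power of R: R^(beta*p/2 + p/4 - 2)."""
    return R ** (beta * p / 2 + p / 4 - 2)

def factor_wide(s, p):
    """Scenario R^-beta >= R^-1 s^-1: the factor s^(4-p)."""
    return s ** (4 - p)

def factor_narrow(R, beta, s, p):
    """Scenario R^-beta <= R^-1 s^-1: the factor s^(4-p) (R^beta R^-1 s^-1)^(p/2)."""
    return s ** (4 - p) * (R ** (beta - 1) / s) ** (p / 2)

# both factors stay below the target whenever R^(-1/2) <= s <= 1,
# beta >= 1/2 and 4 <= p <= 8
for R in [1e4, 1e6]:
    for beta in [0.5, 0.75, 1.0]:
        for p in [4.0, 6.0, 8.0]:
            for s in [R ** -0.5, R ** -0.25, 1.0]:
                tol = 1 + 1e-9
                assert factor_wide(s, p) <= target(R, beta, p) * tol
                assert factor_narrow(R, beta, s, p) <= target(R, beta, p) * tol
```

At \(s=R^{-1/2}\), \(\beta =1/2\), \(p=8\) the two sides agree exactly, which reflects the sharpness of the endpoint cases in the proof.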
References
Demeter, C.: Fourier Restriction, Decoupling and Applications, vol. 184. Cambridge University Press, Cambridge (2020)
Demeter, C., Guth, L., Wang, H.: Small cap decouplings. Geom. Funct. Anal. 30(4), 989–1062 (2020)
Gressman, P.T., Guo, S., Pierce, L.B., Roos, J., Yung, P.-L.: Reversing a philosophy: from counting to square functions and decoupling. J. Geom. Anal. 31(7), 7075–7095 (2021)
Gressman, P., Pierce, L., Roos, J., Yung, P.-L.: A new type of superorthogonality. Proc. Am. Math. Soc. 152(2), 665–675 (2024)
Guth, L., Maldague, D.: Amplitude dependent wave envelope estimates for the cone in \(\mathbb{R}^3\). arXiv:2206.01093 (2022)
Guth, L., Wang, H., Zhang, R.: A sharp square function estimate for the cone in \(\mathbb{R}^3\). Ann. Math. 192(2), 551–581 (2020)
Maldague, D.: A sharp square function estimate for the moment curve in \(\mathbb{R}^3\). arXiv:2210.17436 (2022)
Pierce, L.B.: On superorthogonality. J. Geom. Anal. 31, 7096–7183 (2021)
Tao, T.: A sharp bilinear restriction estimate for paraboloids. Geom. Funct. Anal. 13, 1359–1384 (2003)
Acknowledgements
The author would like to thank Prof. Larry Guth and Dominique Maldague for helpful discussions. The author also wants to thank Changkeun Oh for discussions on Theorem 1, and the referees for carefully reading the manuscript and for many helpful suggestions.
Communicated by Dave Walnut.
Gan, S. Small Cap Square Function Estimates. J Fourier Anal Appl 30, 36 (2024). https://doi.org/10.1007/s00041-024-10095-x