Abstract
For \(p \in [2,\infty )\), we consider the \(L^p \rightarrow L^p\) boundedness of a Nikodym maximal function associated to a one-parameter family of tubes in \({\mathbb {R}}^{d+1}\) whose directions are determined by a non-degenerate curve \(\gamma \) in \({\mathbb {R}}^d\). These operators arise in the analysis of maximal averages over space curves. The main theorem generalises the known results for \(d = 2\) and \(d = 3\) to general dimensions. The key ingredient is an induction scheme motivated by recent work of Ko-Lee-Oh.
1 Introduction
Consider a \(C^{\infty }\) non-degenerate curve \(\gamma : I := [-1,1] \rightarrow {\mathbb {R}}^d\). In other words,
$$\begin{aligned} \det \big ( \gamma ^{(1)}(s) \ \cdots \ \gamma ^{(d)}(s) \big ) \ne 0 \qquad \text { for all } s \in I. \end{aligned}$$
The curve \(\gamma \) defines a one-parameter family of directions in \({\mathbb {R}}^{d+1}\). For \( 0< \delta < 1\) and \(s \in I\), consider a \(\delta \)-tube in \({\mathbb {R}}^{d+1}\) in the direction of \((\gamma (s) \ 1)^{\top }\), defined as
and the corresponding averaging operator
whenever \(g \in L^1_{\textrm{loc}}({\mathbb {R}}^{d+1})\). Our goal is to investigate the \(L^p\) boundedness properties of the Nikodym maximal function
The main result is as follows.
Theorem 1
Let \(\gamma : I \rightarrow {\mathbb {R}}^d\) be a non-degenerate curve. There exists \(C_{d,\gamma } > 0\) such that
By interpolating with the trivial bound at \(L^{\infty }\), we estimate the \(L^p\) operator norm for the maximal function as \(O((\log \delta ^{-1})^{{d}/{p}})\) for \(2 \le p \le \infty \). This is sharp in the sense that the \(L^p\) operator norm has polynomial blowup in \(\delta ^{-1}\) for \(1 \le p < 2\) (see Sect. 5). The result is new for \(d \ge 4\). The theorem also slightly strengthens the known estimates for \(d = 2\) and \(d = 3\) (see [7, Lemma 1.4] and [1, Proposition 5.5], respectively) by improving the dependence on \(\delta ^{-1}\).
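For the reader's convenience, the interpolation step reads as follows (a routine Riesz–Thorin type computation after linearising the supremum, assuming the \(L^2\) bound \(O((\log \delta ^{-1})^{d/2})\) of Theorem 1):
$$\begin{aligned} \Vert \mathcal{{N}_{\delta }^{\gamma }}\Vert _{L^p \rightarrow L^p} \le \Vert \mathcal{{N}_{\delta }^{\gamma }}\Vert _{L^2 \rightarrow L^2}^{{2}/{p}} \, \Vert \mathcal{{N}_{\delta }^{\gamma }}\Vert _{L^{\infty } \rightarrow L^{\infty }}^{1 - {2}/{p}} \lesssim \big ( (\log \delta ^{-1})^{{d}/{2}} \big )^{{2}/{p}} = (\log \delta ^{-1})^{{d}/{p}}. \end{aligned}$$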
The operator \(\mathcal{{N}_{\delta }^{\gamma }}\) is a variant of the classical Nikodym maximal function considered in [2]. The main difference lies in the dimensional setup of the problem: by the above definition, \(\mathcal{{N}_{\delta }^{\gamma }}\) maps functions on \({\mathbb {R}}^{d+1}\) to functions on \({\mathbb {R}}^d\), whereas the classical operator considered in [2] is a mapping between functions on the same Euclidean space.
Maximal functions of the form (2) naturally arise in the study of local smoothing problems for averaging operators associated to \(\gamma \), as first observed in Mockenhaupt-Seeger-Sogge [7]. In [7, Lemma 1.4] estimates for \(\mathcal{{N}_{\delta }^{\gamma }}\) were obtained for \(d = 2\). The \(d=3\) case was later considered in [1, Proposition 5.5], in relation to the problem of bounding the helical maximal function. The averages \(\mathcal{{A}_{\delta }^{\gamma }}\) are also closely related to the restricted X-ray transforms considered in [4, 5, 8].
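As a toy illustration (not taken from the paper): for the model curve \(\gamma (s) = (s, s^2/2)\) in \({\mathbb {R}}^2\) (an assumed example with \(d = 2\)), the tubes point in directions \((\gamma (s) \ 1)^{\top }\) in \({\mathbb {R}}^3\). A crude zero-width caricature of the averaging and maximal operators can be sketched as:

```python
import numpy as np

# Toy discretization (illustration only): for the model curve
# gamma(s) = (s, s^2/2) in R^2, approximate the tube average of g by an
# average along the central line {(x + t*gamma(s), t): t in [-1, 1]} of the
# tube based at (x, 0) in R^3 (the delta -> 0 caricature of A_delta g(x, s)),
# and the Nikodym maximal function by a sup over a grid of s in [-1, 1].

def gamma(s):
    return np.array([s, 0.5 * s ** 2])

def tube_average(g, x, s, n_t=201):
    ts = np.linspace(-1.0, 1.0, n_t)
    vals = [g(np.append(x + t * gamma(s), t)) for t in ts]
    return float(np.mean(vals))

def nikodym_max(g, x, n_s=101):
    return max(tube_average(g, x, s) for s in np.linspace(-1.0, 1.0, n_s))

# example input: indicator of the unit ball in R^3; the central line for
# s = 0 through x = 0 stays inside the ball, so the maximal value is 1
g = lambda p: 1.0 if np.linalg.norm(p) <= 1.0 else 0.0
val = nikodym_max(g, np.zeros(2))
print(val)  # 1.0
```

Averages of an indicator function always lie in \([0,1]\), which the sketch respects; the supremum in \(s\) is realised here at the direction whose tube stays inside the support of \(g\).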
The proof scheme is outlined as follows: First, oscillations are introduced into the problem, followed by a fractional Sobolev embedding to dominate the maximal function by a Fourier integral operator (see Proposition 2). This allows us to fully access orthogonality in the subsequent decomposition. While the application of Sobolev embedding in this context is standard, the use of the fractional variant introduced here constitutes a novel element compared to the previous works [1, 7].
Next, the desired Fourier integral estimates are established through an induction scheme based on a parameter N, measuring the degree of non-degeneracy of the curve. This induction method conceals the intricacies of the root analysis detailed in [1], marking a significant departure from the previous cases of \(d = 2\) and \(d = 3\). The motivation for this induction approach stems from [6], where a (more complex) induction argument is employed to investigate the local smoothing problem associated to averages along curves in \({\mathbb {R}}^d\).
The base case for induction is essentially straightforward. In the induction step, the operator is divided into two parts: one where the induction hypothesis with parameter \(N - 1\) can be applied, and the other where the support of the resulting symbol ensures the existence of precisely one root \(s = \sigma (\xi )\) for the map \(s \mapsto \langle \gamma ^{(N-1)}(s), \xi \rangle \). This root generates a degenerate cone \(\Gamma \), and now a decomposition is performed with respect to the distance to \(\Gamma \). The most singular component is directly bounded in \(L^2\). To effectively bound the remaining segments, a further decomposition on the curve is performed, followed by rescaling arguments, and a final verification that the symbols we end up with are amenable to another application of the induction hypothesis.
1.1 Outline of the Paper
This paper is structured as follows:
2 Notational Conventions
For a set \(E \subseteq {\mathbb {R}}^n\), we denote its characteristic function by \(\chi _{E}\). Given \(f \in L^1({\mathbb {R}}^n)\) we let either \({\hat{f}}\) or \({\mathcal {F}}(f)\) denote its Fourier transform and \({\check{f}}\) or \({\mathcal {F}}^{-1}(f)\) denote its inverse Fourier transform, which are normalised as follows:
For \(m \in L^{\infty }({\mathbb {R}}^n) \), we denote by \(m(\tfrac{1}{i}\partial _{x})\) the Fourier multiplier operator defined by its action on \(g \in \mathcal{{S}}({\mathbb {R}}^{n})\) as
Finally, given two non-negative real numbers A, B, and a list of parameters \(M_1, \ldots , M_n\), the notation \(A \lesssim _{M_1, \ldots , M_n} B\) or \(A = O_{M_1, \ldots , M_n}(B)\) signifies that \(A \le C B\) for some constant \(C = C_{M_1, \ldots , M_n} > 0\) depending only on the parameters \(M_1, \ldots , M_n\). In addition, \(A \sim _{M_1, \ldots , M_n} B\) is used to signify that both \(A \lesssim _{M_1, \ldots , M_n} B\) and \(B \lesssim _{M_1, \ldots , M_n} A\) hold.
3 Initial Reductions and Sobolev Embedding
3.1 Initial Reductions
Let \(I:= [-1,1]\) and \(\gamma :I \rightarrow {\mathbb {R}}^d\) be a non-degenerate curve, as in Sect. 1. We begin by replacing the classical averaging operators by Fourier integral operators. Given \({a \in L^{\infty }( {\mathbb {R}}^{d} \times I \times I)}\), consider
where \(\mathcal{{F}}_{x}(g)(\xi ,t)\) denotes \(\mathcal{{F}}_{x}(g(\,\cdot ,t))(\xi )\), the Fourier transform of g in x only. Define the associated maximal operator
Choose a function \(\psi \in C_{c}^{\infty }({\mathbb {R}})\) with \(\mathrm{{supp}} \ \psi \subseteq [-1,1]\) such that its inverse Fourier transform \({\check{\psi }}\) is non-negative and \({\check{\psi }}(y) \gtrsim 1\) whenever \(|y|\le 1\). Let \({\tilde{\chi }}_{I}\) be a non-negative smooth function that satisfies \({\tilde{\chi }}_{I}(x) = 1\) for all \(x \in I\) and \({\tilde{\chi }}_{I}(x) = 0\) when \(x \notin [-2,2]\). Define
Let \(K_{\delta }\) denote the kernel of the averaging operator \(\mathcal{{A}_{\delta }^{\gamma }}\) defined in (1). In particular,
By the integral formula for the inverse Fourier transform and a change of variables,
Thus, the pointwise estimate
holds. It is therefore enough to bound the operator \({\mathcal {N}}[a_{\delta },\gamma ]\).
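The existence of a function \(\psi \) with the properties required above can be checked numerically. The following sketch uses the standard construction \(\psi = \varphi * \varphi (-\,\cdot \,)\) for a bump \(\varphi \) supported in \([-1/2,1/2]\) (our choice; the argument only needs existence), so that \({\check{\psi }} = |{\check{\varphi }}|^2 \ge 0\), and \({\check{\psi }}(y) \gtrsim 1\) for \(|y| \le 1\) since \({\check{\varphi }}\) does not vanish there:

```python
import numpy as np

# A standard construction: take a real, even bump phi in C_c^infty with
# supp phi ⊆ [-1/2, 1/2] and set psi(xi) = (phi * phi(-.))(xi), i.e. the
# autocorrelation of phi.  Then supp psi ⊆ [-1, 1] and the inverse Fourier
# transform factors as psi-check = |phi-check|^2 >= 0.

def phi(r):
    out = np.zeros_like(r)
    m = np.abs(r) < 0.5
    out[m] = np.exp(-1.0 / (0.25 - r[m] ** 2))
    return out

u = np.linspace(-0.5, 0.5, 2001)
du = u[1] - u[0]
phi_u = phi(u)

xi = np.linspace(-1.0, 1.0, 4001)
dxi = xi[1] - xi[0]
# psi(xi) = ∫ phi(t) phi(t - xi) dt  (autocorrelation), supp psi ⊆ [-1, 1]
psi = np.array([np.sum(phi_u * phi(u - s)) * du for s in xi])

# psi-check(y) = (2 pi)^{-1} ∫ psi(xi) e^{i xi y} d xi, by direct quadrature
# (psi is even, so the cosine part suffices)
ys = np.linspace(-3.0, 3.0, 121)
psi_check = np.array(
    [np.sum(psi * np.cos(xi * y)) * dxi for y in ys]) / (2.0 * np.pi)

print(psi_check.min() >= -1e-12)                  # non-negativity
print(psi_check[np.abs(ys) <= 1.0].min() > 0.0)   # bounded below on |y| <= 1
```

The quadrature errors are negligible here because all integrands are smooth and compactly supported, so the computed values reproduce the continuum identity \({\check{\psi }} = |{\check{\varphi }}|^2\) up to round-off.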
We now perform an endpoint Sobolev embedding to replace the \(L^{\infty }_s\) norm in the maximal function with an \(L^2_{s}\) norm. Here we write
where a and \(\gamma \) are as above and \(\big (1+ \sqrt{-\partial _{s}^{2}}\big )^{1/2}\) is the fractional differential operator in s with multiplier \((1+|\sigma |)^{1/2}\).
Proposition 2
For a nondegenerate curve \(\gamma : I \rightarrow {\mathbb {R}}^d\), \(0< \delta < 1\) and \(a_{\delta }\) as defined in (4), we have
for all \(g \in \mathcal{{S}}({\mathbb {R}}^{d+1})\).
Proof
Let \({\tilde{\chi }}: {\mathbb {R}} \rightarrow [0,1]\) satisfy \({\tilde{\chi }}(\sigma ) = 1\) for all \(\sigma \in (-C\delta ^{-1}, C\delta ^{-1})\) and \({\tilde{\chi }}(\sigma ) = 0\) when \(\sigma \notin (-2C\delta ^{-1},2C\delta ^{-1})\). The constant C is chosen large enough to satisfy the requirements of the forthcoming argument. Defining
where the multiplier operator \({\tilde{\chi }}\big (\tfrac{1}{i} \partial _s\big )\) is defined in Sect. 2, it suffices to prove
for all \(g \in \mathcal{{S}}({\mathbb {R}}^{d+1})\).
To prove (5), fix \(g \in \mathcal{{S}}({\mathbb {R}}^{d+1})\) and write
where \({\tilde{\chi }}_1(\sigma ):= (1+|\sigma |)^{-1/2}{\tilde{\chi }}(\sigma )\). Temporarily fix \(x \in {\mathbb {R}}^d\). The above expression can be written as a convolution product in the s variable between \(\mathcal{{F}}_{s}^{-1}({\tilde{\chi }}_1)\) and \({\mathfrak {D}}_s\mathcal{{A}}[a_{\delta },\gamma ]g(x, \, \cdot \,).\) Using Young’s inequality, Plancherel’s theorem and the fact that the \(L^2\) norm of \({\tilde{\chi }}_1\) is \(O(|\log \delta |^{1/2})\), we obtain
Combining Fubini’s theorem with the above estimate for each \(x \in {\mathbb {R}}^d\), we obtain (5).
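The logarithmic growth of \(\Vert {\tilde{\chi }}_1\Vert _{L^2}\) quoted above can also be verified directly; a quick numerical check under the support assumption \(\mathrm{{supp}} \ {\tilde{\chi }} \subseteq (-2C\delta ^{-1}, 2C\delta ^{-1})\) (the value \(C = 10\) below is an arbitrary placeholder):

```python
import numpy as np

# With chi-tilde supported in (-2C/delta, 2C/delta) and 0 <= chi-tilde <= 1,
#   ||chi_1||_{L^2}^2 <= ∫_{|sigma| <= 2C/delta} (1 + |sigma|)^{-1} d sigma
#                      = 2 log(1 + 2C/delta) = O(log(1/delta)).
C = 10.0
for delta in (1e-2, 1e-3, 1e-4):
    R = 2.0 * C / delta
    sigma = np.linspace(0.0, R, 1_000_001)
    f = 1.0 / (1.0 + sigma)
    h = sigma[1] - sigma[0]
    numeric = 2.0 * h * (f.sum() - 0.5 * f[0] - 0.5 * f[-1])  # trapezoid rule
    exact = 2.0 * np.log(1.0 + R)                             # closed form
    print(abs(numeric - exact) / exact < 1e-3)
```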
To prove (6), write
where
for \(\sigma \in {\mathbb {R}}\). Note that \((1+ |\sigma |)^{-1}(1-{\tilde{\chi }}(\sigma ))^{1/2}\) has uniformly bounded \(L^2\) norm (in \(\delta \)). Thus, an application of Young’s convolution inequality gives
Integrating in x using Fubini’s theorem,
By Plancherel’s theorem, the quantity on the right can be estimated from above by the \(L^2\) norm of the function \({\mathcal {B}}_{\text {err}}[a_{\delta }, \gamma , {\tilde{\chi }}_{3}]g\), where
for
By Minkowski’s integral inequality, Plancherel’s theorem and Cauchy–Schwarz in the t variable,
Thus, the proof of (6) boils down to the estimate \(\Vert b_{\delta }(\xi , \, \cdot \,, t)\Vert _{{L^2_{\sigma }({\mathbb {R}})}} \lesssim 1\) uniformly in \((\xi ,t) \in {\mathbb {R}}^{d} \times I\). Since \(|\xi | \lesssim \delta ^{-1}\) and C is large,
Noting the easy estimate \(| \partial _{s}^{\beta }a_{\delta }(\xi ,s,t) | \lesssim _{\beta } 1\) for all \(\beta \in {\mathbb {N}}\), we apply integration by parts to estimate the oscillatory integral in (7). In particular,
It is evident that the required \(L^2\) estimate for \(b_{\delta }\) follows from this rapid decay, completing the proof of (6). \(\square \)
Proposition 2 reduces the analysis to estimating the operator \({\mathfrak {D}}_s\mathcal{{A}}[a_{\delta },\gamma ]\). We begin by dyadically decomposing the frequency space. Suppose \(\eta , \beta \in C_{c}^{\infty }({\mathbb {R}})\) are the classical Littlewood–Paley functions such that
and
For \(\lambda \in \{0\} \cup 2^{{\mathbb {N}}}\), introduce the dyadic symbols
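One standard realisation of such a Littlewood–Paley pair \(\eta , \beta \) (our construction; the precise normalisation in (8) is not reproduced here) can be checked numerically via the telescoping identity \(\eta (r) + \sum _{k \ge 1} \beta (r/2^k) = 1\):

```python
import numpy as np

# One standard realisation (an assumption about the lost display (8)):
# eta = 1 on [-1, 1], supp eta ⊆ [-2, 2], and beta(r) = eta(r) - eta(2r),
# so beta is supported on 1/2 <= |r| <= 2 and the dyadic sum telescopes.

def smoothstep(t):
    # C^infty transition: 0 for t <= 0, 1 for t >= 1
    h = lambda s: np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)
    return h(t) / (h(t) + h(1.0 - t))

def eta(r):
    return 1.0 - smoothstep(np.abs(r) - 1.0)   # 1 on [-1,1], 0 off [-2,2]

def beta(r):
    return eta(r) - eta(2.0 * r)               # supported on 1/2 <= |r| <= 2

r = np.linspace(-100.0, 100.0, 20001)
total = eta(r) + sum(beta(r / 2.0 ** k) for k in range(1, 12))
print(np.abs(total - 1.0).max() < 1e-12)       # partition of unity on |r|<=100
```

The sum over \(k\) telescopes to \(\eta (r/2^{K})\), which equals 1 once \(2^{K}\) exceeds the range of \(r\) considered.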
Theorem 1 is a consequence of the following result.
Proposition 3
Let \(\lambda \in \{0\} \cup 2^{{\mathbb {N}}}\) and \(0< \delta < 1\). Then,
Proof
(Proposition 3\(\implies \) Theorem 1) Let \({\tilde{\eta }}, {\tilde{\beta }} \in C_c^{\infty }({\mathbb {R}})\) be two non-negative functions such that \({\tilde{\eta }}(r) = 1\) for \(r \in \mathrm{{supp}} \ \eta \), \( {\tilde{\beta }}(r) = 1\) for \(r \in \mathrm{{supp}} \ \beta \) and
For \(g \in \mathcal{{S}}({\mathbb {R}}^{d+1})\), define
It is clear from the definitions that \({\mathfrak {D}}_s\mathcal{{A}}[a_{\delta }^{\lambda },\gamma ]g = {\mathfrak {D}}_s\mathcal{{A}}[a^{\lambda }_{\delta },\gamma ]g^{\lambda }\). By Plancherel’s theorem and the support properties of the \(a^{\lambda }_{\delta }\), we have
Applying Proposition 3 for each \(\lambda \) and observing that \(a_{\delta }^{\lambda } = 0\) when \(\delta ^{-1} \lesssim _{d} \lambda \), we obtain
Combining the above inequality with Proposition 2, we deduce Theorem 1. \(\square \)
The multiplier associated to \({\mathfrak {D}}_s\mathcal{{A}}[a^{0}_{\delta },\gamma ]\) is a bounded function and so the \(\lambda = 0\) case of Proposition 3 is immediate. More interesting cases arise when \(\lambda \in 2^{{\mathbb {N}}}\).
4 The Proof of Proposition 3
4.1 Setting Up the Induction Scheme
Fix \(\lambda \in 2^{{\mathbb {N}}}\). We begin with a few basic definitions.
Definition 4
Let \(1 \le L \le d\). Define \({\mathfrak {S}}(B,L)\) to be the collection of all curves \(\gamma : I \rightarrow {\mathbb {R}}^d\) such that for all \(s \in I\), we have
where the determinant is interpreted as the square root of the sum of squares of its \(L \times L\) minors.
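By the Cauchy–Binet formula, this generalised determinant of a \(d \times L\) matrix M equals \(\sqrt{\det (M^{\top }M)}\); a quick numerical check of the equivalence (with an arbitrary illustrative matrix):

```python
import numpy as np
from itertools import combinations

# "Determinant" of a d x L matrix M in the sense of Definition 4: the square
# root of the sum of squares of its L x L minors.  By Cauchy-Binet this
# equals sqrt(det(M^T M)).

def gen_det(M):
    d, L = M.shape
    return np.sqrt(sum(np.linalg.det(M[list(rows), :]) ** 2
                       for rows in combinations(range(d), L)))

M = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # d = 3, L = 2
via_minors = gen_det(M)
via_gram = np.sqrt(np.linalg.det(M.T @ M))
print(np.isclose(via_minors, via_gram))   # both equal sqrt(24)
```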
Definition 5
Let \(1 \le L \le d\) and \(\gamma \in {\mathfrak {S}}(B,L)\). A symbol \(a \in C^{3d}( {\mathbb {R}}^d \times I \times I)\) is said to be of type \((\lambda , A,L)\) with respect to \(\gamma \) if the following hold:
1. There exists a constant \(C = C(A,B) > 1\), independent of \(\lambda \), such that
$$\begin{aligned} \mathrm{{supp}}_{\xi } \ a \subseteq \{ \xi \in {\mathbb {R}}^d: C\lambda \le |\xi | \le 2C\lambda \}. \end{aligned}$$
2. \(|\partial _{s}^{\beta }a(\xi ,s,t)| \lesssim _{\beta ,A} 1 \) for \( 0 \le \beta \le 3d\) and \((\xi ,s,t) \in \mathrm{{supp}} \ a\).
3. The inner product estimate
$$\begin{aligned} A^{-1}|\xi | \le \sum _{i = 1}^L | \langle \gamma ^{(i)}(s),\xi \rangle | \le A |\xi | \qquad \text { holds for all }(\xi ,s) \in \mathrm{{supp}}_{\xi ,s} \ a. \end{aligned}$$(10)
Proposition 3 is a consequence of the following result.
Proposition 6
Fix \(1 \le L \le d\), \(\gamma \in {\mathfrak {S}}(B,L)\) and let a be a symbol of type \((\lambda ,A,L)\) with respect to \(\gamma \). Then,
In view of (10), it is clear that Proposition 3 corresponds to the case \(L = d\) of Proposition 6.
Proposition 6 is proved by inducting on L. Given an arbitrary symbol \(a \in C^{3d}({\mathbb {R}}^d \times I \times I)\) and a smooth curve \(\gamma \), we present here a general argument which will be used repeatedly through the induction process in order to obtain favourable norm bounds for the Fourier integral operator \({\mathfrak {D}}_s\mathcal{{A}}[a,\gamma ]\). For \(g \in \mathcal{{S}}({\mathbb {R}}^{d})\), we aim for the estimate
By applying Plancherel’s theorem and the Cauchy–Schwarz inequality,
Since the Hilbert transform is bounded on \(L^2\),
Thus, to prove (11), it suffices to show that there exists \(\Lambda > 1\) such that
Applying Plancherel’s theorem and the Cauchy–Schwarz inequality,
where \({\mathcal {B}}[a]\) is the operator that integrates (in the \(t'\) variable) functions against the kernel
At this point, note that \(\partial _{s}\mathcal{{A}}[a,\gamma ]g\) can be expressed as \(\mathcal{{A}}[{\mathfrak {d}}_sa,\gamma ]g\), with a symbol
Applying Schur’s test, we see that (11) is a consequence of the estimates
completing the discussion.
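Schur's test bounds the \(L^2\) operator norm of an integral operator by the geometric mean of the suprema of the two kernel integrals; a discrete sanity check (with an arbitrary illustrative matrix, not the kernel of \({\mathcal {B}}[a]\)):

```python
import numpy as np

# Schur's test: for an integral operator with kernel K,
#   ||T||_{L^2 -> L^2} <= sqrt( sup_t ∫|K(t,t')| dt' * sup_t' ∫|K(t,t')| dt ).
# Discrete analogue: spectral norm vs sqrt(max row sum * max column sum).

K = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0]])
schur_bound = np.sqrt(np.abs(K).sum(axis=1).max() * np.abs(K).sum(axis=0).max())
op_norm = np.linalg.norm(K, 2)
print(op_norm <= schur_bound + 1e-12)   # True: Schur bound dominates
```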
The first application of this reduction is the following lemma.
Lemma 7
(Base case) Proposition 6 holds when \(L = 1\).
Proof
Choose a curve \(\gamma \) and a symbol a that satisfies the assumptions of Proposition 6 with \(L = 1\). In particular, a is of type \((\lambda , A,1)\) with respect to \(\gamma \) and as a consequence,
Following the previous discussion, we wish to obtain good decay estimates for the function \(K[{\mathfrak {d}}_s^{\iota }a]\) with \(\iota = 0,1\). Integrating by parts in (12) and using Definition 5 ii), we have
Clearly, these decay estimates imply the required bounds (13) with \(\Lambda = \lambda \). Consequently, we obtain (11) with the implicit constant depending only on A, B and d. \(\square \)
Lemma 7 addresses the base case of Proposition 6. It remains to establish the inductive step.
Proposition 8
Suppose the statement of Proposition 6 is true for \(L = N-1\). Then it is also true for \(L = N\).
Proposition 6, and therefore Theorem 1, follow from Proposition 8 and Lemma 7. For the remainder of the section we present the proof of Proposition 8, which is broken into steps.
4.2 Initial Decomposition
To begin the proof of Proposition 8, let \(\gamma \) and a be chosen to satisfy the assumptions of Proposition 6 with \(L = N\). We apply a natural division of the symbol a. Let \(H: {\mathbb {R}}^{d+1} \rightarrow {\mathbb {R}}\) be defined as the product
where \(A'\) is a large constant which will be chosen depending only on A, B and N. Here \(\eta \) is as defined in (8). Note that
Furthermore, (10) holds for the pair \((\gamma , a(1-H))\) with A replaced with \(A'\) and \(L = N-1\). Thus, \(a(1-H)\) is a symbol of type \((\lambda ,A',N - 1)\) with respect to \(\gamma \). Applying the induction hypothesis, we deduce the desired estimate for the part of the operator corresponding to the symbol \(a(1-H)\).
Since (10) holds with \(L = N\) in \(\mathrm{{supp}} \ a\) by assumption, the inequalities
also hold for all \((\xi ,s) \in \mathrm{{supp}}_{\xi ,s} \ aH\), provided \(A'\) is chosen large enough depending on N and A. Henceforth, for simplicity, we write a in place of aH and therefore work with the stronger assumptions (14) and (15) on the support of a. An application of the implicit function theorem now shows that for any \(\xi \in \mathrm{{supp}}_{\xi } \ a\), there exists \(\sigma (\xi ) \in I\) with
The strategy now involves a decomposition of the symbol away from the most degenerate regions in \({\mathbb {R}}^{d+1}\). Set
where the constant \(\varepsilon _{0}= \varepsilon _{0}(A,B)\) will be chosen small enough to satisfy the forthcoming requirements of the proof. The function G should be interpreted as measuring the distance of \((\xi ,s)\) from the co-dimension N surface
We now decompose the \((\xi ,s)\)-space dyadically away from \(\Gamma \). Suppose \(\eta _1, \beta _1 \in C_{c}^{\infty }({\mathbb {R}})\) are chosen such that
and
Set
where \(\varepsilon _{1}\) will be chosen small enough (depending on \(\varepsilon _{0}\)) to satisfy the forthcoming requirements of the proof. Observe that \(a = a^0 + \sum _{n \in {\mathbb {N}}} a^n\) and this automatically induces a similar decomposition for the Fourier integral operator \({\mathfrak {D}}_s\mathcal{{A}}[a,\gamma ]\).
Since
the symbols \(a^n\) are trivially zero except for \(O_{A,B}(\log \lambda )\) many values of n. Thus, by Plancherel’s theorem,
In light of the above, it remains to bound the fractional operator \({{\mathfrak {D}}_s\mathcal{{A}}[a^n,\gamma ]}\) for different values of n. The case of \(n = 0\) is dealt with by the following lemma.
Lemma 9
The next lemma addresses the remaining values of n.
Lemma 10
For any \(n \ge 1\), we have
Assuming Lemmas 9 and 10 for now, we plug (21), (22) into (20) and obtain
This concludes the proof of Proposition 8.
The rest of the section is dedicated to the proofs of the two key lemmas (Lemmas 9 and 10).
4.3 Proof of Lemma 9
To prove Lemma 9, we do not appeal to the induction hypothesis but directly estimate the operator.
Proof of Lemma 9
In view of discussions around (11) and (13), it suffices to show
Indeed, (23) implies (13) with \(a = a^{0}\) and \(\Lambda = \lambda ^{1/N}\), which in turn gives (21) as discussed above.
The estimate (23) for \(\iota = 0\) is immediate from (12) as the \(\mathrm{{supp}}_{s} \ a^{0}(\xi ,\cdot ,\cdot )\) is contained in an interval of length \(O_{A,B}(\lambda ^{-1/N})\) for any fixed \(\xi \in {\mathbb {R}}^d\). By (12) again, the case \(\iota = 1\) becomes evident once we verify the estimates
It is easy to see that \(|\partial _{s} (a^{0})(\xi ,s,t)|\lesssim _{A,B} \lambda ^{1/N}\). To estimate the remaining term, note that for any \(1 \le i \le N\), we have
for \((\xi ,s) \in \mathrm{{supp}}_{\xi ,s} \ a^{0}\). Using Taylor’s theorem,
for \((\xi ,s) \in \mathrm{{supp}}_{\xi ,s} \ a^{0}\), as required. Thus, we obtain (23) and consequently (21). \(\square \)
4.4 Further Decomposition
In order to prove Lemma 10 we must introduce a further decomposition of the symbol. Let \(\zeta \in C_{c}^{\infty }({\mathbb {R}})\) be chosen such that \(\mathrm{{supp}} \ \zeta \subseteq ~[-1,1]\) and \(\sum _{\nu \in {\mathbb {Z}}} \zeta (\,\cdot \, - \nu ) = 1\). For \(n \in {\mathbb {N}}\) and \(\nu \in {\mathbb {Z}}\), consider the symbol
where \(s_{n,\nu }:= 2^n\lambda ^{-{1}/{N}}\nu \). Observe that the original symbol is recovered as the sum
where C is a constant that depends only on A, B. The following lemma records a basic property of the localised symbols, which is useful later in the proof.
Lemma 11
Let \(n \ge 1, \nu \in {\mathbb {Z}}\) and \(\rho := 2^n\lambda ^{-{1}/{N}}\). For any \((\xi ,s) \in \mathrm{{supp}}_{\xi ,s} \ a^{n,\nu }\), we have
Proof
The upper bound in (26) is easier to prove than the lower bound and follows from a similar argument. Consequently, we will focus only on the lower bound.
Fix \(n \ge 1\) and \(\nu \in {\mathbb {Z}}\). Recall from the definitions that
for all \((\xi ,s) \in \mathrm{{supp}}_{\xi ,s} \ a^{n,\nu }\). Fixing \(\xi \), we now consider two cases depending on which terms of the above sum dominate.
Case 1 Suppose \((\varepsilon _{0}\varepsilon _{1}^{-1}\rho )/4 \le |s - \sigma (\xi )|\). By the Mean Value Theorem, there exists \(s_{*} \in I\) between s and \(\sigma (\xi )\) such that
Combining this with (14) and (16), we deduce that \(|\langle \gamma ^{(N-1)}(s), \xi \rangle |\gtrsim _{A} \lambda (\varepsilon _{0}\varepsilon _{1}^{-1}\rho )\). This gives the lower bound in (26).
Case 2 Suppose Case 1 fails. Using (27), we can find \(1 \le i_0 \le N-2\) such that
with \(c_N:= (4N)^{-N}\), whilst \(|s - \sigma (\xi )| \le \varepsilon _{0}\varepsilon _{1}^{-1}\rho \) and
By Taylor’s theorem,
provided the constant \(\varepsilon _{0}\) is chosen small enough depending on B and N. Combining (28) and (29), we deduce that
which implies the lower bound in (26). \(\square \)
In view of (25), we restrict our attention to \({\mathfrak {D}}_s\mathcal{{A}}[a^{n,\nu },\gamma ]\) for fixed \(n \in {\mathbb {N}}\) and \(\nu \in {\mathbb {Z}}\). Before proceeding to its analysis, we make the following elementary observation about the size of \(\rho := 2^{n}\lambda ^{-1/N}\). From the definition (17) of \(\beta _1\), note that
Combining this with (19), we deduce that \(\rho = O_{B,d}( \varepsilon _{1}\varepsilon _{0}^{-1}).\) Thus, by choosing \(\varepsilon _{1}\) small enough (depending on \(\varepsilon _{0},B,d\)), we can assume that
In the following subsections, the norm bounds for the operator \({\mathfrak {D}}_s\mathcal{{A}}[a^{n,\nu },\gamma ]\) are obtained using the induction hypothesis via a method of rescaling.
4.5 Rescaling for the Curve
In this subsection, we introduce the rescaling map in a generic setup and describe its basic properties which will play a crucial role in the proof of Lemma 10.
For \(\gamma \in {\mathfrak {S}}(B,N)\) and \(s_{\circ } \in I\), let
Using (9), note that \(\dim V_{s_{\circ }}^N = N\). For \(0< \rho < 1,\) define a linear operator \(T_{s_{\circ }, \rho }^N\) such that
and
It is clear that \(T_{s_{\circ }, \rho }^N\) is a well-defined map such that
Supposing \([s_{\circ } - \rho , s_{\circ } + \rho ] \subseteq I\), we define the rescaled curve
For simplicity, we introduce the notation
The following lemma verifies nondegeneracy assumptions for the rescaled curve.
Lemma 12
For \(0 < \rho \le B^{-2d}\) and \(\gamma \in {\mathfrak {S}}(B,N)\), the rescaled curve \({\tilde{\gamma }}\) as in (33) lies in \({\mathfrak {S}}(B_1,N-1)\) where \(B_1\) depends only on B and N.
A key feature of Lemma 12 is that the parameter \(B_1\) is independent of \(\rho \).
Proof of Lemma 12
We begin by verifying the first part of (9) for the curve \({\widetilde{\gamma }}\). From the definition, we see that \({\tilde{\gamma }}^{(i)}(s) = \rho ^{i} T^{-1} \gamma ^{(i)}(s_{\circ } + \rho s)\) for any \(i \in {\mathbb {N}}\). Combining this identity with (9) and (32), we deduce that
Let \(1 \le i \le N\). By Taylor’s theorem, (31) and (32), we have
Combining (35) with (9), we obtain uniform size estimates for \({\tilde{\gamma }}^{(i)}(s)\) when \(1 \le i \le N\). Together with (34), this implies
It remains to verify the second part in (9) for the curve \({\tilde{\gamma }}\) and \(L = N-1\). In view of (36), it suffices to obtain a lower bound for the determinant of the \(d \times N\) matrix whose column vectors are formed by \(({\tilde{\gamma }}^{(i)}(s))_{1\le i \le N}\) for \(s \in I\). Observe that using the multilinearity of the determinant and elementary column operations, (35) gives
By the hypothesis of the lemma, \(\rho \) is small enough so that the above identity combined with (9) gives the estimate
Now, an application of (36) (in particular, \(|{\tilde{\gamma }}^{(N)}(s)|\lesssim _{B} 1\)) completes the proof of (9) for \(\gamma = {\tilde{\gamma }}\), \(L = N-1\) and B replaced with a new constant \(B_1\). \(\square \)
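The \(\rho \)-uniformity in Lemma 12 can be seen concretely in a toy model, under the assumption (ours, made only for illustration) that T acts as the anisotropic dilation \(\mathrm{diag}(\rho , \rho ^2, \rho ^3)\) on the span of the derivatives. For the moment curve \(\gamma (s) = (s, s^2/2, s^3/6)\) at \(s_{\circ } = 0\), the rescaled curve \(T^{-1}\gamma (\rho s)\) is again the moment curve, so its non-degeneracy constant is independent of \(\rho \):

```python
import numpy as np

# Toy model of the rescaling in Lemma 12 for the moment curve in R^3
# (d = N = 3, s_o = 0), with T taken to be diag(rho, rho^2, rho^3).

def derivs(s):
    # columns: gamma'(s), gamma''(s), gamma'''(s) for gamma = (s, s^2/2, s^3/6)
    return np.array([[1.0, 0.0, 0.0],
                     [s, 1.0, 0.0],
                     [0.5 * s ** 2, s, 1.0]])

def rescaled_derivs(s, rho):
    # d^i/ds^i of T^{-1} gamma(rho s) equals rho^i T^{-1} gamma^{(i)}(rho s)
    T_inv = np.diag([1.0 / rho, 1.0 / rho ** 2, 1.0 / rho ** 3])
    D = derivs(rho * s)
    return np.column_stack([rho ** (i + 1) * T_inv @ D[:, i] for i in range(3)])

for rho in (0.5, 1e-2, 1e-4):
    dets = [np.linalg.det(rescaled_derivs(s, rho)) for s in (-1.0, 0.0, 0.7)]
    print(np.allclose(dets, 1.0))   # non-degeneracy uniform in rho
```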
The rescaling map \(T^{N}_{s_{\circ },\rho }\) can be used to introduce a rescaling for the operators we are interested in. This is done in the next subsection.
4.6 Rescaling for the Operator
To introduce the operator rescaling, we begin by considering a Schwartz function \(u: {\mathbb {R}} \rightarrow {\mathbb {R}}\). Let \(s_{\circ } \in I\) and \(0< \rho < 1\). Direct computations give
where \({\tilde{u}}(s):= u(s_{\circ } + \rho s)\). Thus,
For an arbitrary symbol \(a \in C^{2d}({\mathbb {R}}^d \times I \times I)\) and \(\gamma \in {\mathfrak {S}}(B,N)\), recall the definition of \(\mathcal{{A}}[a,\gamma ]\) from (3). Temporarily fixing \(x \in {\mathbb {R}}^d\), set
By combining (37) for each \(x \in {\mathbb {R}}^d\) with Fubini’s theorem,
We claim that for \((x,s) \in {\mathbb {R}}^{d+1}\), the identity
holds with T, \({\tilde{\gamma }}\) as in (33), symbol
and input function \({\tilde{g}}\) defined by
Verifying (40) is just a matter of unwinding the definitions. First, we expand \(\tilde{\mathcal{{A}}}[a,\gamma ]g(x,s)\) using (38) as the oscillatory integral
Applying the change of variables \(\xi \rightarrow T^* \xi \), the above expression can be written as
proving the claim (40).
Fix \(n \in {\mathbb {N}}\), \(\nu \in {\mathbb {Z}}\) and recall the definitions of \(a^{n,\nu }\) and \(s_{n,\nu }\) from Sect. 4.4. Consider the rescaling map T as defined in Sect. 4.5 for
Furthermore, consider the operator rescaling as in (40) for \(a = a^{n,\nu }\). In this setup, we record some of the basic properties of how \(T^*\) (as defined in (33)) interacts with \({\tilde{a}}\).
Lemma 13
The rescaling map \(T^*\) satisfies the estimate
Proof
Fix \(1 \le i \le N\). From the definition of T, we have
Fix \(\xi \in \mathrm{{supp}}_{\xi } \ {\tilde{a}}\) so that, by the definition of the rescaled symbol, \(T^* \, \xi \in \mathrm{{supp}}_{\xi } \ a^{n,\nu }\). Using Lemma 11 when \(i \le N-1\) and the Cauchy–Schwarz inequality (or (14)) when \(i = N\), we obtain
Combining this with (43), we deduce that
On the other hand, if \( v \in (V^{N}_{s_{n,\nu }})^{\perp }\) is a unit vector, one can argue as in (43) to have
where the fact \(\rho < 1\) has been used. Combining (44), (45) and (9), we obtain the lower bound in (42). The upper bound follows from (32). \(\square \)
The following lemma now verifies how rescaling improves the type condition of the symbol.
Lemma 14
The rescaled symbol \({\tilde{a}}\) is of type \((\rho ^{N}\lambda , A_1, N-1)\) with respect to \({\widetilde{\gamma }}\), where \(A_1\) depends only on A, B and N.
Proof
By Lemma 13, it is clear that
Since \(0< \rho < 1\) by (30), the estimates \(|\partial _{s}^{\beta }{\tilde{a}}(\xi ,s,t)|\lesssim _{\beta } 1\) for \((\xi ,s,t) \in \mathrm{{supp}} \ {\tilde{a}}\) follow from the analogous derivative estimates for a. Thus, it remains to verify that (10) holds in the rescaled setup with \(L = N-1\), \(\gamma = {\widetilde{\gamma }}\) and \(a = {\tilde{a}}\). Explicitly, we wish to show
To this end, we recall from Lemma 11 that
However, by unwinding the definition,
which is the required estimate. \(\square \)
4.7 Proof of Lemma 10
With all the tools now in place, the operator \({\mathfrak {D}}_s\mathcal{{A}}[a^{n},\gamma ]\) can be estimated easily for \(n \ge 1\).
Proof of Lemma 10
Fix \(n \ge 1\). Temporarily fix \(\nu \in {\mathbb {Z}}\). In view of (39) and (40) for \(a = a^{n,\nu }\), we have
Suppose \({\tilde{\zeta }} \in C_c^{\infty }({\mathbb {R}})\) is chosen such that \(\mathrm{{supp}} \ {\tilde{\zeta }} \subseteq [-4,4]\), \({\tilde{\zeta }}(r) = 1\) when \(| r | \le 3\) and
In view of the support properties of \(a^{n,\nu }\) (in particular, (18) and (24)), we have
Consequently, recalling the integral expression (41), it is clear that one can replace \({\tilde{g}}\) with \({\tilde{g}}^{n,\nu }\) in (47) where
Now, Lemmas 12 and 14 ensure that the rescaled pair \(({\tilde{a}}, {\tilde{\gamma }})\) satisfies the assumptions of Proposition 6 with \(L = N-1\) (note that (30) ensures that \(\rho \) is of the right size, so Lemma 12 applies). Thus, the statement of the proposition applies and we obtain
Thus, the proof of Lemma 10 reduces to summing the above estimates in \(\nu \) without further loss in \(\lambda \). Using (25), Plancherel’s theorem and the support properties of symbols \(a^{n,\nu }\), we combine (48) for different values of \(\nu \) to deduce that
After a change of variable, it is evident that \(\Vert {\tilde{g}}^{n,\nu }\Vert _{L^2({\mathbb {R}}^{d+1})} = \Vert g^{n,\nu }\Vert _{L^2({\mathbb {R}}^{d+1})}\), where
Thus, by another application of Plancherel’s theorem,
concluding the proof. \(\square \)
In the next section, we discuss the sharpness of the main theorem.
5 Sharpness of Theorem 1
By testing the maximal function on standard examples, we discuss here the sharpness of Theorem 1 in two directions: the range of p and the dependence of the operator norm on \(\log \delta ^{-1}\).
5.1 Sharpness of the Range of p
Fix \(p \in [1,\infty )\) and assume that given any \(\epsilon > 0\), we have
Temporarily fix \(\epsilon \) and \(\delta \). Set \(g_{\delta }:= \chi _{B(0,\delta )}\). It is easy to show that the \(\delta \)-neighbourhood of the curve \(-\gamma \) is a subset of the super-level set
Applying Chebyshev’s inequality and using (49), we have
Letting \(\delta \rightarrow 0\), we see that \(p \ge 2 - \epsilon \). Letting \(\epsilon \rightarrow 0\), we conclude that \(p \ge 2\). Thus, the \(L^p\) operator norm of \(\mathcal{{N}_{\delta }^{\gamma }}\) has polynomial blowup in \(\delta ^{-1}\) for \(p \in [1,2)\).
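To make the Chebyshev step explicit, note the normalisations \(\Vert g_{\delta }\Vert _{L^p({\mathbb {R}}^{d+1})}^{p} \sim \delta ^{d+1}\) and \(|E| \gtrsim \delta ^{d-1}\) for the \(\delta \)-neighbourhood E of \(-\gamma (I)\), on which \(\mathcal{{N}_{\delta }^{\gamma }} g_{\delta } \gtrsim \delta \) (these exponents are our reading of the super-level set description above). Then (49) gives
$$\begin{aligned} \delta ^{d-1} \lesssim |E| \lesssim \delta ^{-p} \Vert \mathcal{{N}_{\delta }^{\gamma }} g_{\delta }\Vert _{L^p({\mathbb {R}}^{d})}^{p} \lesssim _{\epsilon } \delta ^{-p(1+\epsilon )} \Vert g_{\delta }\Vert _{L^p({\mathbb {R}}^{d+1})}^{p} \sim \delta ^{d+1-p(1+\epsilon )}, \end{aligned}$$
and comparing exponents as \(\delta \rightarrow 0\) forces \(p(1+\epsilon ) \ge 2\), which recovers \(p \ge 2\) in the limit \(\epsilon \rightarrow 0\) (matching the conclusion above up to a relabelling of \(\epsilon \)).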
5.2 Sharpness of the Operator Norm
Fix \(0< \delta < 1\). Consider two vectors \({\varvec{w}}:= (x,0), \ {\varvec{z}}:= (y,0) \in {\mathbb {R}}^{d+1}\). It follows from the definition that
if and only if there exists a \(t \in [-1,1]\) such that
Assuming \(|\gamma (s)|\sim 1\) for all \(s \in [-1,1]\), it is also not hard to see that
whenever (50) holds.
Fixing \((x,r) \in {\mathbb {R}}^d \times I\), set \(f_{\delta }:= \chi _{ {\varvec{w}} + T_{10\delta }(r)}\) and note that \(\Vert f_{\delta }\Vert _{L^2({\mathbb {R}}^{d+1})} \sim \delta ^{d/2}\). Fix \(0 \le k \le \lfloor \log (\delta ^{-1}) \rfloor \) and define
We claim that
Indeed, in view of (51), \(A_{k}\) contains all points \(y \in {\mathbb {R}}^d\) for which there exist \(s,r \in [-1,1]\) such that (50) holds and \(|\gamma (s) - \gamma (r)|\sim 2^{k}\delta \). The latter condition ensures that the admissible directions \(\gamma (s)\) belong to a portion of the curve which is contained inside a ball of radius \( \sim 2^{k}\delta \). Moreover, for a fixed direction \(\gamma (s)\), any \(y \in {\mathbb {R}}^d\) that lies in the \(\delta \)-neighbourhood of the tube \(x + \{ t(\gamma (r) - \gamma (s)): t \in [-1,1] \}\) satisfies (50). Therefore, \(A_k\) contains the \(\delta \)-neighbourhood of a two-dimensional cone in \({\mathbb {R}}^d\) of diameter \( \sim 2^k\delta \), justifying our claim. Thus,
Consequently, we see that
In view of the above, we may conjecture that \((\log \delta ^{-1})^{{1}/{2}}\) is the sharp \(L^2\) operator norm of \(\mathcal{{N}_{\delta }^{\gamma }}\). In other words, it is possible that Theorem 1 gives only a partial result in this direction.
In the next section, we discuss a generalisation of Theorem 1.
6 Further Extensions
As observed in [1], a stronger version of Theorem 1 which deals with families of anisotropic tubes is used in actual applications to the proofs of certain geometric maximal estimates (such as that of the helical maximal function). In this section, we state the anisotropic extension of Theorem 1 with a brief discussion on how the argument presented in the article can be adapted for the more general setup.
We begin by introducing the anisotropic tubes using the Frenet frame coordinate system. For \(s \in I\), let \(\{e_1(s),\ldots ,e_d(s)\}\) denote the collection of Frenet frame basis vectors, formed by applying the Gram–Schmidt process to the set \(\{\gamma ^{(1)}(s), \ldots , \gamma ^{(d)}(s)\}\). For \({\textbf{r}}= (r_1,\ldots ,r_d) \in (0,1)^{d}\), we consider a tube in \({\mathbb {R}}^{d+1}\) in the direction of \(\gamma (s)\), defined as
As before, we can introduce the corresponding averaging and maximal operators as
and
whenever \(g \in L^1_{\textrm{loc}}({\mathbb {R}}^{d+1})\).
By modifying the argument presented in Sects. 3 and 4, the \(L^p\) boundedness problem for \({\mathcal {N}}_{{\textbf{r}}}^{\gamma }\) can be resolved under mild hypotheses on \({\textbf{r}}\). Our result [3] is as follows.
Theorem 15
Let \({\textbf{r}}:= (r_1,\ldots ,r_d) \in (0,1)^{d}\) be chosen such that
hold for all \(1 \le i \le j \le k \le d\). Then there exists \(C_{d,\gamma } > 1\) such that
Two cases where Theorem 15 applies are of particular interest: \({\textbf{r}}= {\textbf{r}}_{\text {iso}}:= (\delta ,\ldots , \delta )\) and \({\textbf{r}}= (\delta , \delta ^{2},\ldots , \delta ^{d})\) for \(0< \delta < 1\). In both cases, it is clear that \({\textbf{r}}\) satisfies (53). By applying Theorem 15 in the first case, we recover Theorem 1 as a consequence.
Discussion on the proof of Theorem 15
In the following discussion, we highlight the major changes from the arguments presented in this article. A detailed proof can be found in [3].
From (4), we recall the definition
In view of (52), the anisotropic version of \(a_{\delta }\) is defined by the formula
whenever \({\textbf{r}}\in (0,1)^{d}\).
By arguing along the lines of Sect. 3, we can reduce the proof of Theorem 15 to establishing operator norm estimates for the Fourier integral operator \({\mathfrak {D}}_s\mathcal{{A}}[a_{{\textbf{r}}},\gamma ]\). In particular, it suffices to show that
In Sect. 3, we obtained an equivalent version of (54) for \({\textbf{r}}= {\textbf{r}}_{\text {iso}}\) by first dyadically decomposing the operator and then applying Proposition 3 to each part. The proposition, in turn, was proved using an induction argument (in particular, via Proposition 6). In the same way, we can reduce the proof of (54) to a variant of Proposition 6. The modifications required in the proposition to adapt it to the anisotropic setup do not alter the core argument of the proof. The key steps involving the decompositions described in Sects. 4.2 and 4.4, and the rescalings described in Sects. 4.5 and 4.6, remain intact. The differences mainly arise in the description of the class of symbols of interest.
Recall how the derivative bounds
were explicitly used to estimate parts of the operator directly in \(L^2\) at many stages of the proof of Proposition 6 (see, in particular, the proofs of Lemmas 7 and 9). Consequently, the operator norm of \({\mathfrak {D}}_s\mathcal{{A}}[a_{\delta },\gamma ]\) depends on the upper bound in (55). Since rescaling of symbols preserves (55), these estimates could be carried unchanged throughout the induction process (see Definition 5). The situation differs in the anisotropic setup for two reasons.
Firstly, in contrast to (55), the best attainable \(L^{\infty }\) bounds for the derivatives of the anisotropic symbol are
Note that the expression on the right depends on \({\textbf{r}}\) and can be extremely large. However, after applying the decompositions described in Sects. 4.2 and 4.4, improved \(L^{\infty }\) bounds can be obtained for the derivatives of each part of the symbol. In view of this, rather than assuming uniform control over the \(C^{3d}\) norm of the symbol, the anisotropic variant of Proposition 6 includes pointwise bounds for the derivatives of the symbol, expressed in a form that is sensitive to the various decompositions in the argument.
Secondly, the action of the rescaling map on the symbol \(a_{{\textbf{r}}}\) significantly alters its derivative estimates. Thus, the properties listed in Definition 5, which describe the rescaling-invariant class of symbols containing \(a_{\delta }\), must be modified to accommodate all symbols encountered at the various stages of the argument in the anisotropic setup.
Apart from the modifications to the symbol class mentioned above, we also require additional control over the coefficients \(r_i\) in order to establish acceptable bounds at the stages of direct \(L^2\) estimation. The mild conditions (53) are introduced for this purpose. The author does not know whether these conditions are necessary for obtaining the maximal estimate, but they fit naturally into the induction argument. By combining (53) with the modified description of the symbols, we prove the anisotropic variant of Proposition 6, completing the proof of Theorem 15. \(\square \)
References
Beltran, D., Guo, S., Hickman, J., Seeger, A.: Sharp \(L^p\) bounds for the helical maximal function. To appear in Am. J. Math.
Cordoba, A.: The Kakeya maximal function and the spherical summation multipliers. Am. J. Math. 99(1), 1–22 (1977). https://doi.org/10.2307/2374006
Govindan Sheri, A.: On certain geometric maximal functions in harmonic analysis. PhD thesis, University of Edinburgh
Greenleaf, A., Uhlmann, G.: Composition of some singular Fourier integral operators and estimates for restricted X-ray transforms. Ann. Inst. Fourier 40(2), 443–466 (1990)
Ko, H., Lee, S., Oh, S.: Sharp Sobolev regularity of restricted x-ray transforms. Proc. R. Soc. Edinb. Sect. A (2023). https://doi.org/10.1017/prm.2023.82
Ko, H., Lee, S., Oh, S.: Sharp smoothing properties of averages over curves. Forum Math. Pi 11, 4–33 (2023). https://doi.org/10.1017/fmp.2023.2
Mockenhaupt, G., Seeger, A., Sogge, C.D.: Wave front sets, local smoothing and Bourgain’s circular maximal theorem. Ann. Math. 136(1), 207–218 (1992). https://doi.org/10.2307/2946549
Pramanik, M., Seeger, A.: \(L^p\) Sobolev regularity of a restricted X-ray transform in \(\mathbb{R}^3\). Harmonic Anal. Appl., 47–64 (2006)
Acknowledgements
The author is indebted to Jonathan Hickman for both suggesting the problem and the constant support provided throughout the development of this paper. The author thanks David Beltran for a variety of helpful comments and suggestions regarding this project. The author also thanks the referees for all the valuable suggestions for an earlier manuscript. The author was supported by The Maxwell Institute Graduate School in Analysis and its Applications, a Centre for Doctoral Training funded by the UK Engineering and Physical Sciences Research Council (Grant EP/L016508/01), Heriot-Watt University, and the University of Edinburgh.
Communicated by Hans G. Feichtinger.
Govindan Sheri, A. \(L^2\) Estimates for a Nikodym Maximal Function Associated to Space Curves. J Fourier Anal Appl 30, 4 (2024). https://doi.org/10.1007/s00041-023-10062-y