1 Introduction

The study of restriction phenomena for the Fourier transform in \({\mathbb {R}}^n\) has been an active research topic in harmonic analysis over the last decades. Its most common instance, a Fourier restriction estimate, is an inequality of the following form, valid for every Schwartz function \(f \in {\mathcal {S}}({\mathbb {R}}^n)\):

$$\begin{aligned} \left\Vert {\widehat{f}_{|_S}}\right\Vert _{L^q(S,\nu )} \le C(p,q,S,\nu ) \left\Vert {f}\right\Vert _{L^p({\mathbb {R}}^n)}, \end{aligned}$$

where \(\widehat{f}\) is the Fourier transform of f, S is a hypersurface with appropriate curvature properties, \(\nu \) is a suitable measure on S, the exponents p and q vary in an appropriate range, and the constant \(C(p,q,S,\nu )\) is independent of f. The a priori estimate in the previous display guarantees the existence of a bounded restriction operator \({\mathcal {R}} :L^p({\mathbb {R}}^n) \rightarrow L^q(S,\nu )\) such that \({\mathcal {R}} f = \widehat{f}\) on S when \(f \in {\mathcal {S}}({\mathbb {R}}^n)\). Fourier restriction estimates of this kind were first studied by Fefferman and Stein, who proved a result in every dimension ([11], p. 28). This result was later improved by the celebrated Stein-Tomas method [26, 31], which focuses on the case \(q=2\). Since then, a huge mathematical effort has been devoted to the study of the Fourier restriction phenomenon, leading to the development of many new techniques. Despite that, many problems in any arbitrary dimension \(n \ge 3\) are still open, for example the question of sufficient conditions on the exponents p and q for a Fourier restriction estimate to hold true.

In fact, standard examples (constant functions, Knapp examples) in the case of the sphere \(S={\mathbb {S}}^{n-1}\), with the induced Borel measure \(\sigma \), provide necessary conditions on the exponents p and q for the inequality in the previous display to hold true, namely

$$\begin{aligned} 1 \le p < \frac{2n}{n+1}, \qquad q \le \frac{n-1}{n+1} p', \end{aligned}$$

where \(\frac{1}{p}+\frac{1}{p'}=1\). The main conjecture in the theory of Fourier restriction is that these conditions are also sufficient. We refer to the exposition of Tao in [30] for a description of the aforementioned standard examples, as well as for a more thorough introduction to the research topic of Fourier restriction and an overview of the results up to 2004.
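Specializing the conditions in the previous display to the planar case \(n=2\), which is the setting of the present paper, gives

$$\begin{aligned} 1 \le p < \frac{4}{3}, \qquad q \le \frac{p'}{3}, \end{aligned}$$

which, at the endpoint \(q = p'/3\), are the exponents appearing in Theorem 1.3 and Corollary 1.4 below.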

In the case of a \({\mathcal {C}}^2\) convex curve \(\Gamma \) in the plane \({\mathbb {R}}^2\) the conditions on the exponents are also sufficient. Sharp estimates were proved first for the circle \({\mathbb {S}}^1\) by Zygmund in [33], and for more general curves by Carleson and Sjölin in [5] and by Sjölin in [25]. In fact, in [25] Sjölin proved a uniform Fourier restriction result for such curves upon the choice of a specific measure \(\nu = \nu (\Gamma )\) on each curve. This is the so-called affine arclength measure, which encodes the curvature properties of \(\Gamma \). In the case of the circle it coincides with the induced Borel measure \(\sigma \), which proves the sharpness of Sjölin's result.

In [18] Müller, Ricci, and Wright addressed a different feature of the Fourier restriction phenomenon, namely the pointwise relation between \({\mathcal {R}}f\) and \(\widehat{f}\) for an arbitrary function \(f \in L^p({\mathbb {R}}^n)\). In the case of a \({\mathcal {C}}^2\) convex curve and a function \(f \in L^p({\mathbb {R}}^2)\) with \(1 \le p < 8/7\), they proved that \(\nu \)-almost every point of the curve is a Lebesgue point for \(\widehat{f}\). Moreover, they showed that the regularized value of \(\widehat{f}\) coincides with that of \({\mathcal {R}}f\) at \(\nu \)-almost every point of the curve. The main ingredient in their proof is a set of estimates for a certain maximal Fourier restriction operator \({\mathcal {M}}\) defined as follows. For every Schwartz function \(f \in {\mathcal {S}}({\mathbb {R}}^2)\) we define

$$\begin{aligned} {\mathcal {M}} \widehat{f} (x) := \sup _{R} \left|{\int _{{\mathbb {R}}^2} \widehat{f} (x-y) \chi _R(y) {{\,\textrm{d}\,}}y}\right|, \end{aligned}$$
(1.1)

where \(\chi _R\) is a bump function adapted to R normalized in \(L^1({\mathbb {R}}^2)\), and the supremum is taken over all rectangles R centred at the origin with sides parallel to the axes. Next, they used the estimate

$$\begin{aligned} M \widehat{f} \le ({\mathcal {M}} \widehat{h})^{\frac{1}{2}}, \end{aligned}$$

where M is the classical two-parameter maximal operator and h is defined by \(\widehat{h} = \left|{\widehat{f}}\right|^2\). To obtain the desired result about Lebesgue points for \(f \in L^p({\mathbb {R}}^2)\), they needed to bound the norms of h in terms of those of f. This forces the additional condition \(p < 8/7\) on the exponent.
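For orientation, with the concrete choice \(\chi _R = \left|{R}\right|^{-1} 1_R\) in (1.1), the estimate in the previous display is a direct consequence of the Cauchy-Schwarz inequality applied to the averages:

$$\begin{aligned} \left|{R}\right|^{-1} \int _R \left|{\widehat{f}(x-y)}\right| {{\,\textrm{d}\,}}y \le \Big ( \left|{R}\right|^{-1} \int _R \left|{\widehat{f}(x-y)}\right|^2 {{\,\textrm{d}\,}}y \Big )^{\frac{1}{2}} = \Big ( \left|{R}\right|^{-1} \int _R \widehat{h}(x-y) {{\,\textrm{d}\,}}y \Big )^{\frac{1}{2}} \le ({\mathcal {M}} \widehat{h} (x))^{\frac{1}{2}}, \end{aligned}$$

and taking the supremum over R on the left-hand side gives \(M \widehat{f} \le ({\mathcal {M}} \widehat{h})^{\frac{1}{2}}\).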

In [22] Ramos extended their result to the full range \(1 \le p < 4/3\) in the case of the circle \({\mathbb {S}}^1\). The improvement relies on the estimates he proved for a more general class of maximal Fourier restriction operators

$$\begin{aligned} \{ {\mathcal {M}}_g :\left\Vert {g}\right\Vert _{L^\infty ({\mathbb {R}}^2)} = 1 \}, \end{aligned}$$

where for every function g normalized in \(L^{\infty }({\mathbb {R}}^2)\) we define \({\mathcal {M}}_g\) as follows. For every Schwartz function \(f \in {\mathcal {S}}({\mathbb {R}}^2)\) we define

$$\begin{aligned} {\mathcal {M}}_g \widehat{f} (x) := \sup _{R} \left|{ \int _{{\mathbb {R}}^2} \widehat{f} (x-y) g(x-y) \left|{R}\right|^{-1} 1_R(y) {{\,\textrm{d}\,}}y}\right|, \end{aligned}$$
(1.2)

where the supremum is taken over all rectangles R centred at the origin with sides parallel to the axes. In particular, the freedom in the choice of g allows Ramos to bring the absolute value inside the integral defining the averages, thus bypassing the artificial limitation arising in the argument of Müller, Ricci, and Wright.

The line of investigation into the boundedness properties of maximal Fourier restriction operators initiated by Müller, Ricci, and Wright has been developed further in a series of subsequent papers. In [32] Vitturi studied estimates for a maximal Fourier restriction operator in the case of the sphere \({\mathbb {S}}^{n-1}\) in \({\mathbb {R}}^n\) for any arbitrary dimension \(n \ge 3\). The operator considered is of the form described in (1.1) with the supremum taken over averages on balls. Vitturi used the estimates for this operator to prove the analogue of the Lebesgue point property of \(\widehat{f}\) for every function \(f \in L^p({\mathbb {R}}^n)\) with \(1 \le p \le 8/7\). The range of exponents was later improved by Ramos in [22] to \(1 \le p \le 4/3\), considering maximal Fourier restriction operators of the form described in (1.2) with the supremum taken over averages on balls. It is worth noting that in the case of dimension \(n \ge 3\), due to the range of the Stein-Tomas estimates, the endpoint \(p = 4/3\) is recovered, as opposed to the case of dimension \(n=2\).

In parallel, in [16] Kovač studied estimates for certain variational Fourier restriction operators in any arbitrary dimension \(n \ge 2\). These operators are defined by variation norms, rather than the \(L^\infty \) norm, of averages of the form appearing on the right-hand side of (1.1), computed with respect to isotropic rescalings of an arbitrary measure \(\mu \). He developed an abstract method to upgrade Fourier restriction estimates with \(p<q\) to estimates for the variational Fourier restriction operators with the same exponents. As a consequence, he obtained a quantitative version of the qualitative result about the convergence of averages at Lebesgue points. Kovač provided sufficient conditions for the method to work, expressed in terms of certain decay estimates on the gradient of \(\widehat{\mu }\). Together with Oliveira e Silva, he later relaxed these sufficient conditions in [17].

Next, in [23] Ramos studied estimates for certain maximal Fourier restriction operators associated with an arbitrary measure \(\mu \) in the case of dimension \(n=2\) and \(n=3\). Once again, he considered operators of the form described in (1.2) with the supremum taken over averages computed with respect to isotropic rescaling of \(\mu \). Ramos provided sufficient conditions on the measure \(\mu \) to obtain estimates for the maximal Fourier restriction operators. These conditions are expressed in terms of the boundedness properties close to \(L^2({\mathbb {R}}^n)\) of the maximal function associated with \(\mu \). In particular, he recovered the case of the spherical measures that, in dimension \(n=2\) and \(n=3\), do not satisfy the sufficient conditions stated in [16, 17]. Since Kovač and Oliveira e Silva use stronger norms but weaker averages than Ramos, the results in [16, 17] and those in [22, 23] are not comparable, and we refer to those papers for an exposition of the connections between their results.

Finally, in [15] Jesurum studied estimates for a maximal Fourier restriction operator in the case of the moment curve \(\{ (t,\frac{1}{2}t^2, \dots , \frac{1}{n} t^n) :t \in {\mathbb {R}}\}\) in \({\mathbb {R}}^n\) for any arbitrary dimension \(n \ge 3\). The operator considered is of the form described in (1.2) with the supremum taken over averages on balls. Jesurum followed the argument of Drury in [10], who proved Fourier restriction estimates for the moment curve in the full range \(1 \le p < (n^2+n+2)/(n^2+n)\), \(q = 2p'/(n^2+n)\). In particular, Jesurum recovered the analogue of the Lebesgue point property of \(\widehat{f}\) for every function \(f \in L^p({\mathbb {R}}^n)\) with p in the same range of exponents.

In fact, both Ramos in [23] and Jesurum in [15] also considered stronger maximal Fourier restriction operators. In particular, in the definition of these operators they replaced the supremum over \(L^1\) averages on balls with a supremum over \(L^r\) averages for arbitrary \(r \ge 1\). By Hölder's inequality, the operators are increasing in r. We refer to those papers for details about the estimates for these maximal Fourier restriction operators, as well as for the analysis of the threshold values of \(r \ge 1\) in relation to such estimates.
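Explicitly, for every ball B, every \(1 \le r_1 \le r_2\), and every locally integrable function F, Hölder's inequality for the normalized measure \(\left|{B}\right|^{-1} 1_B \, {{\,\textrm{d}\,}}y\) gives

$$\begin{aligned} \Big ( \left|{B}\right|^{-1} \int _B \left|{F}\right|^{r_1} \Big )^{\frac{1}{r_1}} \le \Big ( \left|{B}\right|^{-1} \int _B \left|{F}\right|^{r_2} \Big )^{\frac{1}{r_2}}, \end{aligned}$$

so the corresponding maximal operators are pointwise nondecreasing in r.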

In this paper, we are concerned with extending the results of Müller, Ricci, and Wright in [18] and of Ramos in [22] to the case of arbitrary convex curves in the plane, uniformly in the curve. Such curves are the boundaries of non-empty open convex sets in \({\mathbb {R}}^2\). Passing from the case of the circle \({\mathbb {S}}^1\) to the case of an arbitrary \({\mathcal {C}}^2\) convex curve \(\Gamma \) is straightforward upon the choice of the affine arclength measure on \(\Gamma \), which we introduce in a moment. The main point of the paper is the removal of the \({\mathcal {C}}^2\) regularity condition on the curve. This is achieved through the choice of a suitable extension of the affine arclength measure, suggested by an affine invariant construction described by Oberlin in [21]. The desired extension of the results then follows the line of proof by Ramos, up to the appropriate modifications.

We now turn to the description of two measures on an arbitrary convex curve \(\Gamma \) in the plane; we elaborate in more detail in Sect. 2 and Appendix A. A first measure \(\nu \) is built from the arclength parametrization that such a curve always admits,

$$\begin{aligned} z :J \rightarrow \Gamma \subseteq {\mathbb {R}}^2, \end{aligned}$$

where J is an interval in \({\mathbb {R}}\), possibly unbounded. Let m be the Lebesgue measure on J. The first and second derivatives \(z'\) and \(z''\) with respect to m are functions well-defined pointwise m-almost everywhere on J. We define a measure \(\nu \) on J by

$$\begin{aligned} {{\,\textrm{d}\,}}\nu (t) = \root 3 \of {\det \begin{pmatrix} z'(t)&z''(t) \end{pmatrix}} {{\,\textrm{d}\,}}t. \end{aligned}$$

With a slight abuse of notation we denote by \(\nu \) also its push-forward to \(\Gamma \) via the arclength parametrization z. In particular, when \(\Gamma \) is \({\mathcal {C}}^2\) the argument of the cubic root is well-defined everywhere in J, and the measure \(\nu \) on \(\Gamma \) is called the affine arclength measure. We extend the term to the measure \(\nu \) in the general case of arbitrary convex curves.
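For orientation (this example is not needed in the sequel), consider the circle of radius \(\rho > 0\) centred at the origin, with counterclockwise arclength parametrization \(z(t) = \rho (\cos (t/\rho ), \sin (t/\rho ))\) for \(t \in [0, 2 \pi \rho )\). Then

$$\begin{aligned} \det \begin{pmatrix} z'(t)&z''(t) \end{pmatrix} = \frac{1}{\rho }, \qquad {{\,\textrm{d}\,}}\nu (t) = \rho ^{-\frac{1}{3}} {{\,\textrm{d}\,}}t, \qquad \nu (\Gamma ) = 2 \pi \rho ^{\frac{2}{3}}, \end{aligned}$$

so that for the unit circle \(\nu \) coincides with the arclength measure, in agreement with the discussion of Sjölin's result above.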

We define a second measure \(\mu \) on \(\Gamma \) following Oberlin. Oberlin's construction of the affine measures \(\{\mu _{n,\alpha } :\alpha \ge 0 \}\) on \({\mathbb {R}}^n\) is analogous to that of the Hausdorff measures. The only difference is that in the former we use rectangular parallelepipeds in \({\mathbb {R}}^n\) to cover sets, while in the latter we use balls. This change guarantees the affine invariance of \(\mu _{n,\alpha }\) and allows \(\mu _{n,\alpha }\) to be sensitive to the curvature properties of the set on which it is evaluated. A general definition of \(\mu _{n,\alpha }\) can be found in [21]. Here, we restrict ourselves to the case \(n=2\), \(\alpha =2/3\), and we drop the subscripts from the notation of \(\mu \).

Definition 1.1

(Affine measure \(\mu \) on \({\mathbb {R}}^2\)) For every \(\delta > 0\) and every subset \(E \subseteq {\mathbb {R}}^2\) we define

$$\begin{aligned} \mu ^\delta (E) := \inf \Big \{ \sum _{R \in {\mathcal {R}}'} \left|{R}\right|^{\frac{1}{3}} :{\mathcal {R}}' \subseteq {\mathcal {R}}^\delta , E \subseteq \bigcup _{R \in {\mathcal {R}}'} R\Big \}, \end{aligned}$$

where \(\left|{R}\right|\) is the Lebesgue measure of the rectangle R and \({\mathcal {R}}^\delta \) is the collection of all rectangles in \({\mathbb {R}}^2\) with diameter smaller than or equal to \(\delta \). Next, we define

$$\begin{aligned} \mu ^*(E) := \lim _{\delta \rightarrow 0} \mu ^\delta (E). \end{aligned}$$

Finally, we define the affine measure \(\mu \) on \({\mathbb {R}}^2\) to be the restriction of the outer measure \(\mu ^*\) to the Carathéodory measurable subsets of \({\mathbb {R}}^2\).

With a slight abuse of notation we denote by \(\mu \) also its restriction to the convex curve \(\Gamma \), as well as its push-forward to J via the inverse of the bijective function given by an arclength parametrization z for \(\Gamma \).
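As a purely numerical illustration of Definition 1.1, not part of the argument, the following Python sketch estimates the covering sums \(\sum _R \left|{R}\right|^{\frac{1}{3}}\) for a circular arc, covering it by the rectangles over equal parameter subintervals described later in Definition 3.2, and compares them with the affine arclength measure \(\nu \) of the arc; the helper names rect_over_arc and covering_sum are ours.

```python
import numpy as np

def rect_over_arc(z, a, b, samples=64):
    # Approximate area of the rectangle over the parameter interval [a, b]
    # (cf. Definition 3.2): base along the chord z(a)z(b), height the maximal
    # distance of the sampled arc points from that chord.
    ts = np.linspace(a, b, samples)
    pts = np.array([z(t) for t in ts])
    chord = pts[-1] - pts[0]
    base = np.linalg.norm(chord)
    normal = np.array([-chord[1], chord[0]]) / base
    height = np.max(np.abs((pts - pts[0]) @ normal))
    return base * height

def covering_sum(z, t0, t1, pieces):
    # Sum of |R|^(1/3) over the covering of the arc by rectangles over
    # "pieces" equal parameter subintervals; an upper bound for mu^delta.
    ts = np.linspace(t0, t1, pieces + 1)
    return sum(rect_over_arc(z, ts[i], ts[i + 1]) ** (1.0 / 3.0) for i in range(pieces))

# Circular arc of radius rho, parametrized by arclength.
rho = 2.0
z = lambda t: rho * np.array([np.cos(t / rho), np.sin(t / rho)])

# Affine arclength measure of the arc: kappa = 1/rho, so nu = rho^(-1/3) * length.
t0, t1 = 0.0, rho * np.pi / 3
nu = (t1 - t0) * rho ** (-1.0 / 3.0)

for pieces in (4, 16, 64, 256):
    print(pieces, covering_sum(z, t0, t1, pieces) / nu)
```

As the subdivision is refined, the printed ratio stabilizes near 1/2; since coverings only provide upper bounds for \(\mu ^\delta \), this is consistent with, though of course no substitute for, the two-sided comparability stated in Theorem 1.2 below.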

In [21] Oberlin proved that if the curve \(\Gamma \) is \({\mathcal {C}}^2\), then the affine measure \(\mu \) and the affine arclength measure \(\nu \) are comparable up to multiplicative constants uniform in the curve. The first observation of this paper is the extension of this property to the case of arbitrary convex curves.

Theorem 1.2

There exist constants \(0< A \le B < \infty \) such that for every convex curve \(\Gamma \) we have

$$\begin{aligned} A \nu \le \mu \le B \nu , \end{aligned}$$

where \(\mu , \nu \) are the measures on \(\Gamma \) defined above.

The second observation of this paper is the uniform extension of the boundedness properties of the maximal Fourier restriction operator defined in (1.2) to the case of arbitrary convex curves.

Theorem 1.3

Let \(1 \le p < 4/3\), \(q = p'/3\). There exists a constant \(C = C(p) < \infty \) such that for every function g normalized in \(L^{\infty }({\mathbb {R}}^2)\), every convex curve \(\Gamma \), and every Schwartz function \(f \in {\mathcal {S}}({\mathbb {R}}^2)\) we have

$$\begin{aligned} \left\Vert {{\mathcal {M}}_g \widehat{f}}\right\Vert _{L^q(\Gamma ,\nu )} \le C \left\Vert {f}\right\Vert _{L^p({\mathbb {R}}^2)}, \end{aligned}$$

where \(\nu \) is the measure on \(\Gamma \) defined above.

We have two straightforward corollaries. The first is a uniform Fourier restriction result for arbitrary convex curves.

Corollary 1.4

Let \(1 \le p < 4/3\), \(q = p'/3\). There exists a constant \(C = C(p) < \infty \) such that for every convex curve \(\Gamma \) and every Schwartz function \(f \in {\mathcal {S}}({\mathbb {R}}^2)\) we have

$$\begin{aligned} \left\Vert {\widehat{f}}\right\Vert _{L^q(\Gamma ,\nu )} \le C \left\Vert {f}\right\Vert _{L^p({\mathbb {R}}^2)}, \end{aligned}$$

where \(\nu \) is the measure on \(\Gamma \) defined above.

The second is the extension of the result on Lebesgue points of \(\widehat{f}\) on the curve to the case of arbitrary convex curves.

Corollary 1.5

Let \(1 \le p < 4/3\). Let \(\Gamma \) be a convex curve and \(\nu \) the measure on \(\Gamma \) defined above. If \(f \in L^p({\mathbb {R}}^2)\), then \(\nu \)-almost every point of \(\Gamma \) is a Lebesgue point for \(\widehat{f}\). Moreover, the regularized value of \(\widehat{f}\) coincides with that of \({\mathcal {R}}f\) at \(\nu \)-almost every point of \(\Gamma \).

The results stated in Theorem 1.3 and the corollaries highlight a close relation between the following objects. On the one hand, the affine arclength and Oberlin's affine measures, which are sensitive to the curvature properties of the sets on which they are defined. On the other hand, uniform estimates for classical operators involving smooth enough submanifolds in \({\mathbb {R}}^n\), where the curvature properties of the submanifold play a significant role. Beyond Fourier restriction operators, this is the case for convolution operators, X-ray transforms, and Radon transforms. We conclude the Introduction by briefly mentioning previous works pointing at the aforementioned relation in the analysis of all these operators [1,2,3,4, 6,7,8,9, 14, 19, 20, 28, 29]. We refer to these papers and the references therein for a more thorough exposition of the relation. Finally, we point out the work of Gressman in [12] on the generalization of the affine arclength measure to smooth enough submanifolds of any arbitrary dimension d in \({\mathbb {R}}^n\).

1.1 Guide to the paper

In Sect. 2 we introduce some notation, definitions, and previous results, whose proofs we provide in Appendix A. In Sect. 3 we prove Theorem 1.2. In Sect. 4 we prove Theorem 1.3 and the corollaries.

2 Preliminaries

2.1 Notation

We introduce the following notations.

For every interval \(I \subseteq {\mathbb {R}}\) we denote by \(\Delta (I)\) the lower triangle associated with I defined by

$$\begin{aligned} \Delta (I) := \{(s,t)\in I\times I: t<s\} . \end{aligned}$$

For all vectors \(a,b \in {\mathbb {R}}^2 \setminus \{ (0,0) \}\) we denote by \(\theta (a,b) \in [0,2 \pi )\) the counterclockwise angle from a to b.
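For concreteness, since the orientation convention matters in the definition of the measure \(\sigma \) below, a possible numerical implementation of \(\theta (a,b)\) is the following sketch; the function name ccw_angle is ours.

```python
import math

def ccw_angle(a, b):
    # Counterclockwise angle theta(a, b) in [0, 2*pi) from the nonzero
    # vector a to the nonzero vector b.
    ang = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])
    return ang % (2.0 * math.pi)
```

For example, ccw_angle((1, 0), (0, 1)) returns \(\pi /2\), while ccw_angle((0, 1), (1, 0)) returns \(3\pi /2\).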

2.2 Real analysis

We recall a result about the metric density associated with the absolutely continuous part of a measure with respect to the Lebesgue measure.

Definition 2.1

Let \(x \in {\mathbb {R}}^n\). We say that a sequence \(\{ E_\varepsilon :\varepsilon > 0 \}\) shrinks to x nicely if it is a sequence of Borel sets in \({\mathbb {R}}^n\) and there is a number \(\alpha >0\) satisfying the following property. There is a sequence of balls \(\{ B(x,r_\varepsilon ) :\varepsilon > 0 \}\) with \(\lim _{\varepsilon \rightarrow 0} r_\varepsilon =0\), such that for every \(\varepsilon > 0\) we have \(E_\varepsilon \subseteq B(x,r_\varepsilon )\) and

$$\begin{aligned} m(E_\varepsilon ) \ge \alpha m(B(x,r_\varepsilon )). \end{aligned}$$

Theorem 2.2

(Rudin [24], Theorem 7.14) For every \(x \in {\mathbb {R}}^n\) let \(\{ E_\varepsilon (x) :\varepsilon > 0 \}\) be a sequence that shrinks to x nicely. Let \(\mu \) be a Borel measure on \({\mathbb {R}}^n\). Let

$$\begin{aligned} {{\,\textrm{d}\,}}\mu = \mu ' {{\,\textrm{d}\,}}m + {{\,\textrm{d}\,}}\mu _s, \end{aligned}$$

be the decomposition of \(\mu \) into the absolutely continuous and singular parts with respect to the Lebesgue measure m in \({\mathbb {R}}^n\). Then, for m-almost every \(x \in {\mathbb {R}}^n\) we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \frac{\mu (E_\varepsilon (x))}{m(E_\varepsilon (x))}= \mu '(x). \end{aligned}$$

2.3 Convex curves

We introduce some auxiliary notations and definitions, and we recall some observations and properties of convex curves in the plane. They provide a rigorous formulation of the definition of the affine arclength measure \(\nu \) given in the Introduction. These properties are standard, but we were not able to find any clear reference for them. Therefore, for the sake of completeness we include the required proofs in Appendix A.

Definition 2.3

A set \(K \subseteq {\mathbb {R}}^n\) is convex if for all \(x,y \in K\), \(0 \le \lambda \le 1\) we have

$$\begin{aligned} \lambda x + (1-\lambda ) y \in K. \end{aligned}$$

A convex curve \(\Gamma \subseteq {\mathbb {R}}^2\) is the boundary \(\partial K\) of a non-empty open convex set \(K \subseteq {\mathbb {R}}^2\).

From now on, we restrict ourselves to compact convex curves. We extend the definitions and results to every non-compact convex curve \(\Gamma \) by considering the sequence of compact convex curves

$$\begin{aligned} \{\Gamma _N := \partial ( K \cap [-N,N]^2 ) :N \in {\mathbb {N}}\} . \end{aligned}$$

Theorem 2.4

Every compact convex curve \(\Gamma \) is rectifiable.

Therefore, a compact convex curve \(\Gamma \) admits an arclength parametrization

$$\begin{aligned} z :J = [0, \ell (\Gamma )) \rightarrow \Gamma \subseteq {\mathbb {R}}^2, \end{aligned}$$

where \(\ell (\Gamma )\) is the length of the curve \(\Gamma \). Without loss of generality, we assume the parametrization to be counterclockwise. Moreover, we have an almost identical arclength parametrization defined by

$$\begin{aligned}&\widetilde{z} :\widetilde{J} = (0, \ell (\Gamma )] \rightarrow \Gamma \subseteq {\mathbb {R}}^2, \\&\widetilde{z}(\ell (\Gamma )) := z(0), \qquad \forall t \in (0, \ell (\Gamma )), \widetilde{z}(t) := z(t). \end{aligned}$$

With a slight abuse of notation, we denote both of the arclength parametrizations by z. The identification is harmless and involves a single point. At any time it will be made clear by the context which one is the appropriate choice of the arclength parametrization we are considering. A first instance of the feature just described appears in the following statement about the existence of well-defined left and right derivatives of the function z. Strictly speaking, we should define the left derivative \(\widetilde{z}'_l\) of \(\widetilde{z}\) on \(\widetilde{J}\), and the right derivative \(z'_r\) of z on J.

Theorem 2.5

The left and right derivatives \(z'_l\) and \(z'_r\) of z with respect to the Lebesgue measure m on J are well-defined functions from J to \({\mathbb {S}}^1\), and they coincide m-almost everywhere.

In fact, the functions \(z'_l\) and \(z'_r\) admit well-defined derivatives m-almost everywhere.

Theorem 2.6

The derivatives \(z''_l\) and \(z''_r\) of \(z'_l\) and \(z'_r\) with respect to the Lebesgue measure m on J are well-defined m-almost everywhere. They are functions from J to \({\mathbb {R}}^2\) and coincide m-almost everywhere.

Next, we define the Borel measure \(\sigma \) on J as follows. For all \(a,b \in J\), \(a \le b\) we define

$$\begin{aligned}&\sigma ((a,b)) := \max \{ 0, \theta (z'_r(a),z'_l(b)) \}, \qquad \sigma ((a,b]) := \theta (z'_r(a),z'_r(b)), \nonumber \\&\sigma ([a,b)) := \theta (z'_l(a),z'_l(b)), \qquad \sigma ([a,b]) := \theta (z'_l(a),z'_r(b)). \end{aligned}$$
(2.1)

We denote by \(\kappa \) the metric density associated with the absolutely continuous part of \(\sigma \) with respect to the Lebesgue measure m on J.

Theorem 2.7

The measure \(\sigma \) is positive. The function \(\kappa \) coincides m-almost everywhere with the functions \(\det \begin{pmatrix} z_l'&z_l'' \end{pmatrix}\) and \(\det \begin{pmatrix} z_r'&z_r'' \end{pmatrix}\).

Finally, we define the affine arclength measure \(\nu \) on J by

$$\begin{aligned} {{\,\textrm{d}\,}}\nu (t) = \root 3 \of {\kappa (t)} {{\,\textrm{d}\,}}t. \end{aligned}$$

With a slight abuse of notation, we denote by \(\nu \) also its push-forward to \(\Gamma \) via the arclength parametrization z.
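As a simple illustration of the roles of the singular and absolutely continuous parts of \(\sigma \), not needed in the sequel, consider a convex polygon whose vertices correspond to the parameters \(t_1< \dots < t_k\) in J, with exterior angles \(\omega _1, \dots , \omega _k\). Then \(z'_l\) and \(z'_r\) are piecewise constant and

$$\begin{aligned} \sigma = \sum _{j=1}^{k} \omega _j \delta _{t_j}, \qquad \kappa = 0 \ m\text {-almost everywhere}, \qquad \nu = 0, \end{aligned}$$

where \(\delta _{t_j}\) denotes the Dirac mass at \(t_j\). This is consistent with Theorem 1.2, since the sides of the polygon can be covered by arbitrarily thin rectangles, so that the affine measure \(\mu \) of the polygon vanishes as well.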

3 Proof of Theorem 1.2

We begin by stating and proving an auxiliary lemma about the qualitative relation between the affine measure \(\mu \) and the Lebesgue measure m on J.

Lemma 3.1

The measure \(\mu \) is absolutely continuous with respect to the Lebesgue measure m on J, namely for every subset \(E \subseteq J\) we have

$$\begin{aligned} m(E) = 0 \Rightarrow \mu (E) = 0. \end{aligned}$$

In its proof, we need the following auxiliary definition.

Definition 3.2

Let \(I \subseteq J\) be an interval. Let c and d be in the closure \(\bar{J}\) of J such that \(\bar{I}= [c,d]\). Assume that \(\sigma ((c,d)) \le \pi /2\). We define the rectangle R(I) over I to be the minimal rectangle containing z(I) as follows.

If \(z'\) is constant on the interior of I, then z(I) is a segment. The affine measure \(\mu \) of z(I) is zero, as z(I) can be covered by arbitrarily thin rectangles. We define R(I) to be the segment z(I) itself.

If \(z'\) is not constant on the interior of I, then we define R(I) to be the rectangle with two adjacent vertices at z(c) and z(d) and minimal width h(R(I)), see Fig. 1. The condition on \(z'\) guarantees that \(h(R(I)) > 0\). Moreover, let \(b(R(I)) = \left|{z(d)-z(c)}\right|\). Furthermore, let z(e) be a point in the intersection of z(I) with the side of the rectangle opposite to the one connecting z(c) to z(d). Finally, let \(\phi \) and \(\psi \) be the angles defined by

$$\begin{aligned} \phi := \theta ( z(e) - z(c), z(d) - z(c) ), \qquad \psi := \theta ( z(c) - z(d), z(e) - z(d) ). \end{aligned}$$
Fig. 1 The rectangle \(R(I)\) over the interval \(I\)

Proof of Lemma 3.1

Let \(E \subseteq J\) be such that \(m(E)=0\). We want to show that \(\mu (z(E)) = 0\), namely that for all \(\delta , \rho > 0\) there exists a covering of z(E) by a collection of rectangles of diameter at most \(\delta \) such that the sum of the cube roots of their areas is bounded by \(\rho \).

By assumption, E has 1-dimensional Hausdorff measure zero. Therefore, for every \(\varepsilon >0\) there exists a covering of E by disjoint intervals \(\{ I_n = [c_n,d_n) :n \in {\mathbb {N}}\}\) of bounded lengths \(\ell _n=m(I_n) = \left|{d_n - c_n}\right|\) such that

$$\begin{aligned} \sum _{n \in {\mathbb {N}}} \ell _n \le \varepsilon . \end{aligned}$$
(3.1)

Without loss of generality, up to splitting every interval into four subintervals, we can assume \(\sigma (I_n) \le \pi /2\).

The set z(E) can be covered by the family \(\{R_n :n \in {\mathbb {N}}\}\) of rectangles, where for every \(n \in {\mathbb {N}}\) we define \(R_n = R(I_n)\) to be the rectangle over the interval \(I_n\) as in Definition 3.2. The diameter of \(R_n\) is bounded from above by

$$\begin{aligned} \left|{z(e_n) - z(c_n)}\right| + \left|{z(d_n) - z(e_n)}\right|. \end{aligned}$$

By the definition of the length of a curve, see Definition A.6 in the Appendix, the sum in the previous display is bounded from above by \(\ell (z(I_n))\). Finally, since z is an arclength parametrization, we have \(\ell (z(I_n)) = \ell _n\). Therefore, for every \(n \in {\mathbb {N}}\) the diameter of \(R_n\) is bounded from above by \(\ell _n \le \varepsilon \).

Moreover, we claim that for every \(n \in {\mathbb {N}}\) we have

$$\begin{aligned} \frac{h_n}{\ell _n} \le \sigma (I_n), \end{aligned}$$
(3.2)

where \(h_n = h(R_n)\). In fact, for \(e_n\), \(\phi _n\), and \(\psi _n\) as in Definition 3.2 and Fig. 1 we have

$$\begin{aligned} \frac{h_n}{\ell _n}&\le \frac{h_n}{\left|{c_n-e_n}\right|} +\frac{h_n}{\left|{d_n-e_n}\right|} \\&\le \frac{h_n}{\left|{z(c_n)-z(e_n)}\right|} +\frac{h_n}{\left|{z(d_n)-z(e_n)}\right|} = \sin \phi _n + \sin \psi _n \le \phi _n + \psi _n \le \sigma (I_n). \end{aligned}$$

Therefore, we have

$$\begin{aligned} \sum _{n \in {\mathbb {N}}} \left|{R_n}\right|^{\frac{1}{3}}&= \sum _{n \in {\mathbb {N}}} (b_n h_n)^{\frac{1}{3}} \le \sum _{n \in {\mathbb {N}}} \ell _n^{\frac{2}{3}} \Big (\frac{h_n}{\ell _n}\Big )^{\frac{1}{3}} \\&\le \sum _{n \in {\mathbb {N}}} \ell _n^{\frac{2}{3}} \sigma (I_n)^{\frac{1}{3}} \\&\le \Big ( \sum _{n \in {\mathbb {N}}} \ell _n \Big )^{\frac{2}{3}} \Big ( \sum _{n \in {\mathbb {N}}} \sigma (I_n) \Big )^{\frac{1}{3}} \\&\le \varepsilon ^{\frac{2}{3}} (2 \pi )^{\frac{1}{3}} , \end{aligned}$$

where we used the definition of the length of a curve to dominate \(b_n = b(R_n)\) by \(\ell _n\) in the first inequality, the inequality in (3.2) in the second, Hölder’s inequality with the pair of exponents (3/2, 3) in the third, and the inequality in (3.1), the disjointness of \(I_n\), and the definition of \(\sigma \) in the fourth.

By choosing \(\varepsilon \le \delta \) arbitrarily small, we obtain the desired result. \(\square \)

Next, we prove the quantitative relation between the affine measure \(\mu \) and the affine arclength measure \(\nu \) stated in Theorem 1.2.

Proof of Theorem 1.2

Without loss of generality, up to splitting J into eight disjoint subintervals, we can assume \(\sigma (J) \le \pi /4\). It is enough to prove the desired comparability for every subset \(E \subseteq J\).

Part I: \(A \nu \le \mu \). Let R be a closed rectangle such that

$$\begin{aligned} R \cap z(J) = z([c,d]), \end{aligned}$$

where \([c,d] \subseteq J\). Let \(\Phi :\Delta ([c,d]) \rightarrow R+R\) be the function defined by

$$\begin{aligned} \Phi (s,t) = z(s)+z(t). \end{aligned}$$

The determinant of its Jacobian is defined m-almost everywhere and, in absolute value, it equals

$$\begin{aligned} \det \begin{pmatrix} z'(t)&z'(s) \end{pmatrix}. \end{aligned}$$

Since the area of the subset \(R+R\) is \(4\left|{R}\right|\), we have

$$\begin{aligned} 4 \left|{R}\right|&\ge \int _{\Delta ([c,d])} \det \begin{pmatrix} z'(t)&z'(s) \end{pmatrix} {{\,\textrm{d}\,}}s {{\,\textrm{d}\,}}t \nonumber \\&= \int _{\Delta ([c,d])} \Big ( \int _{[t,s]} \det \begin{pmatrix} z'(t)&{{\,\textrm{d}\,}}z' \end{pmatrix} \Big ) {{\,\textrm{d}\,}}s {{\,\textrm{d}\,}}t \nonumber \\&\ge \int _{\Delta ([c,d])} \int _t^s \det \begin{pmatrix} z'(t)&z''(u) \end{pmatrix} {{\,\textrm{d}\,}}u {{\,\textrm{d}\,}}s {{\,\textrm{d}\,}}t, \end{aligned}$$
(3.3)

where \({{\,\textrm{d}\,}}z'\) is the distributional derivative of \(z'\), and \(z''\) is a function coinciding m-almost everywhere with \(z_l''\) and \(z_r''\).

For m-almost all \(t,u \in J\), \(t \le u\) we have

$$\begin{aligned} \det \begin{pmatrix} z'(t)&z''(u) \end{pmatrix}&= \left|{z''(u)}\right| \sin ( \theta ( z'(t),z'(u)) + \theta ( z'(u),z''(u)) ) \nonumber \\&= \left|{z''(u)}\right| \cos ( \theta ( z'(t),z'(u)) ) \nonumber \\&\ge \frac{1}{2} \left|{z''(u)}\right| \sin ( \theta ( z'(u),z''(u)) ) \nonumber \\&= \frac{1}{2} \det \begin{pmatrix} z'(u)&z''(u) \end{pmatrix}, \end{aligned}$$
(3.4)

where in the second and third equalities, as well as in the inequality, we used

$$\begin{aligned} \theta ( z'(u),z''(u)) = \frac{\pi }{2}, \end{aligned}$$

and in the inequality we also used

$$\begin{aligned} 0 \le \theta ( z'(t),z'(u)) \le \sigma (J) \le \frac{\pi }{4}. \end{aligned}$$

Therefore, there exists a constant \( C < \infty \) such that we have

$$\begin{aligned} \nu ([c,d])&= \int _c^d \root 3 \of {\kappa (u)} {{\,\textrm{d}\,}}u \\&= \int _c^d ((d-u)(u-c))^{-\frac{1}{3}} ((d-u)(u-c))^{\frac{1}{3}} \root 3 \of {\kappa (u)} {{\,\textrm{d}\,}}u \\&\le \Big ( \int _c^d ((d-u)(u-c))^{-\frac{1}{2}} {{\,\textrm{d}\,}}u \Big )^{\frac{2}{3}} \Big ( \int _c^d (d-u)(u-c) \kappa (u) {{\,\textrm{d}\,}}u \Big )^{\frac{1}{3}} \\&\le C \Big ( \int _c^d \int _c^u \int _u^d \kappa (u) {{\,\textrm{d}\,}}s {{\,\textrm{d}\,}}t {{\,\textrm{d}\,}}u \Big )^{\frac{1}{3}} \\&\le C \Big ( \int _{\Delta ([c,d])} \int _{t}^{s} \kappa (u) {{\,\textrm{d}\,}}u {{\,\textrm{d}\,}}s {{\,\textrm{d}\,}}t \Big )^{\frac{1}{3}} \\&\le 2 C \left|{R}\right|^{\frac{1}{3}}, \end{aligned}$$

where we used the definition of \(\nu \) in the first equality, inserted the factor \(((d-u)(u-c))^{-\frac{1}{3}} ((d-u)(u-c))^{\frac{1}{3}}\) in the second equality, used Hölder's inequality with the pair of exponents (3/2, 3) in the first inequality, evaluated the first factor, which is independent of c and d, and rewrote \((d-u)(u-c)\) as a double integral in the second inequality, used Fubini in the third inequality, and used Theorem 2.7 together with the chains of inequalities in (3.4) and (3.3) in the fourth inequality.

Now, let \(\{ R_n :n \in {\mathbb {N}}\}\) be a set of rectangles covering z(E) and define \(E_n \subseteq E\) by

$$\begin{aligned} z(E_n)= z(E) \cap R_n. \end{aligned}$$

Then \(\{ E_n :n \in {\mathbb {N}}\}\) is a covering of E, and we have

$$\begin{aligned} \sum _{n \in {\mathbb {N}}} \left|{R_n}\right|^{\frac{1}{3}} \ge (2C)^{-1} \sum _{n \in {\mathbb {N}}} \nu (E_n) \ge (2C)^{-1} \nu (E). \end{aligned}$$

By taking the infimum over all the possible coverings and letting the diameter bound tend to zero, we obtain the desired inequality with \(A = (2C)^{-1}\).

Part II: \(\mu \le B \nu \). By Lemma 3.1, there exists a function \(\mu ' :J \rightarrow [0, \infty )\) defined m-almost everywhere such that for every measurable subset \(E \subseteq J\) we have

$$\begin{aligned} \mu (E) = \int _E \mu '(t) {{\,\textrm{d}\,}}t. \end{aligned}$$

By Theorem 2.2, for m-almost every \(t \in J\) we have

$$\begin{aligned} \mu '(t)=\lim _{\varepsilon \rightarrow 0} \frac{ \mu ([s,s+\varepsilon ])}{\varepsilon }, \qquad \text { where}\ t \in [s,s+\varepsilon ]. \end{aligned}$$

As in the proof of Lemma 3.1, the limit is bounded from above by

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \frac{ \varepsilon ^{\frac{2}{3}} (\sigma ([s,s+\varepsilon ]))^{\frac{1}{3}}}{\varepsilon } = \Big ( \lim _{\varepsilon \rightarrow 0} \frac{ \sigma ([s,s+\varepsilon ])}{\varepsilon } \Big )^{\frac{1}{3}}. \end{aligned}$$

By Theorem 2.2 and Theorem 2.7, the limit in the previous display equals \(\root 3 \of {\kappa (t)}\) for m-almost every \(t \in J\), hence \(\mu ' \le \root 3 \of {\kappa }\) m-almost everywhere and we obtain the desired inequality. \(\square \)

4 Proofs of Theorem 1.3 and the corollaries

We begin with an auxiliary definition.

Definition 4.1

A measurable function a in \({\mathbb {R}}^n\) is a bump function if there exists a rectangular parallelepiped R centred at the origin with sides parallel to the axes such that

$$\begin{aligned} \left|{a}\right| \le \left|{R}\right|^{-1}1_R. \end{aligned}$$

We denote by \({\mathcal {A}}_n\) the collection of bump functions on \({\mathbb {R}}^n\).

The convolution with such bump functions is pointwise bounded by the strong Hardy-Littlewood maximal function, uniformly in the rectangle.
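Explicitly, spelled out for convenience, for every \(a \in {\mathcal {A}}_n\) with associated rectangular parallelepiped R, every locally integrable function f, and every \(x \in {\mathbb {R}}^n\) we have

$$\begin{aligned} \left|{(f * a) (x)}\right| \le \left|{R}\right|^{-1} \int _R \left|{f(x-y)}\right| {{\,\textrm{d}\,}}y \le \sup _{R'} \left|{R'}\right|^{-1} \int _{R'} \left|{f(x-y)}\right| {{\,\textrm{d}\,}}y, \end{aligned}$$

where the supremum is taken over all rectangular parallelepipeds \(R'\) centred at the origin with sides parallel to the axes, so that the right-hand side is the strong Hardy-Littlewood maximal function of f evaluated at x.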

Next, we state and prove an auxiliary lemma about the boundedness properties of the adjoint operator of a certain linearised maximal Fourier restriction operator.

Lemma 4.2

Let \(1\le r< 2\). There exists a constant \(C=C(r) < \infty \) such that the following property holds true.

For every convex curve \(\Gamma \) parametrized by arclength \(z: J \rightarrow \Gamma \subseteq {\mathbb {R}}^2\) and every collection \(\{ a_{z(t)} :t \in J \} \subseteq {\mathcal {A}}_2\) of bump functions such that, as a function in (tx),

$$\begin{aligned} a_{z(t)}(x) \in L^{\infty }({{\,\textrm{d}\,}}\nu (t); L^1({{\,\textrm{d}\,}}x)), \end{aligned}$$

let \(S = S(\Gamma ,\{ a \})\) be the operator defined as follows. For every function \(f \in L^4(J,\nu )\) we define

$$\begin{aligned} Sf(\xi )=\int _{J} \widehat{a}_{z(t)}(\xi ) e^{2\pi i \xi \cdot z(t)} f(t ) {{\,\textrm{d}\,}}\nu (t) . \end{aligned}$$

Then, we have

$$\begin{aligned} \left\Vert {Sf}\right\Vert _{L^{2r'}({\mathbb {R}}^2)} \le C \left\Vert {f}\right\Vert _{L^{\frac{2r}{3-r}}(J,\nu )} . \end{aligned}$$

Its proof relies on a lemma, proved by Ramos, about the boundedness properties of the adjoint of a linearised maximal operator composed with the Fourier transform.

Lemma 4.3

(Ramos [22], Lemma 1) Let \(n,k \ge 1\). There exists a constant \(C=C(n,k) < \infty \) such that the following property holds true.

For every collection

$$\begin{aligned} \Big \{ b_x :\widehat{b}_x = \prod _{i=1}^k \widehat{b}_{x,i} , b_{x,i} \in {\mathcal {A}}_n, x \in {\mathbb {R}}^n \Big \}, \end{aligned}$$

of convolution products of k bump functions such that, as a function in (x,y),

$$\begin{aligned} {b_x}(y) \in L^\infty ( {{\,\textrm{d}\,}}x; L^1( {{\,\textrm{d}\,}}y)), \end{aligned}$$

let \(T = T(\{b_x \})\) be the operator defined as follows. For every function \(f \in L^2({\mathbb {R}}^n) \cap L^1({\mathbb {R}}^n)\) we define

$$\begin{aligned} Tf(\xi )=\int _{{\mathbb {R}}^n} \widehat{b}_x(\xi ) e^{2\pi i x \cdot \xi } f(x){{\,\textrm{d}\,}}x . \end{aligned}$$

Then, we have

$$\begin{aligned} \left\Vert { Tf }\right\Vert _{L^2({\mathbb {R}}^n)} \le C \left\Vert {f}\right\Vert _{L^2({\mathbb {R}}^n)} . \end{aligned}$$

Proof of Lemma 4.2

Without loss of generality, by the definition of \(\nu \), we restrict our attention to \(I \subseteq J\) where \(z'_l\) and \(z'_r\) coincide, and \(\kappa (t)\) is well-defined and strictly positive.

Following the idea of Carleson and Sjölin in [5] and Sjölin in [25], we rewrite the square of Sf via a two-dimensional integral

$$\begin{aligned} Sf(\xi )^2&=\int _{I\times I} \widehat{a}_{z(t)}(\xi ) {\widehat{a}_{z(s)}(\xi )} e^{2\pi i \xi \cdot (z(t)+z(s))} f(t ) {f(s)} {{\,\textrm{d}\,}}\nu (t) {{\,\textrm{d}\,}}\nu (s) \nonumber \\&= 2 \int _{\Delta (I)} \widehat{a}_{z(t)}(\xi ) {\widehat{a}_{z(s)}(\xi )} e^{2\pi i \xi \cdot (z(t)+z(s))} f(t ) {f(s)} {{\,\textrm{d}\,}}\nu (t) {{\,\textrm{d}\,}}\nu (s). \end{aligned}$$

We change variables via the bijective function \(\Phi :\Delta (I) \rightarrow \Omega \subseteq {\mathbb {R}}^2\) defined by

$$\begin{aligned} \Phi (s,t) = z(s)+z(t), \end{aligned}$$

and for \((s,t) \in \Delta (I)\) we define

$$\begin{aligned} \widehat{b}_{z(s)+z(t)}&:= \widehat{a}_{z(s)} {\widehat{a}_{z(t)}} , \\ F(z(s)+z(t))&:= f(s) {f(t)}\left|{\det \begin{pmatrix} z'(s)&z'(t) \end{pmatrix}}\right|^{-1} \root 3 \of {\kappa (t)} \root 3 \of {\kappa (s)}. \end{aligned}$$

By the definition of \(\nu \) and \(\Phi \), we obtain

$$\begin{aligned} Sf(\xi )^2= 2 \int _\Omega \widehat{b}_{x}(\xi ) e^{2\pi i \xi \cdot x} F(x){{\,\textrm{d}\,}}x . \end{aligned}$$

Next, we prove by interpolation that for every \(1\le r\le 2\) there exists a constant \(C=C(r) < \infty \) such that we have

$$\begin{aligned} \left\Vert {Sf}\right\Vert _{L^{2r'}({\mathbb {R}}^2)}^{2r}=\left\Vert {(Sf)^2}\right\Vert _{L^{r'}({\mathbb {R}}^2)}^r \le C \left\Vert {F}\right\Vert _{L^r({\mathbb {R}}^2)}^r . \end{aligned}$$

The case \(r=1\) follows from \(\left\Vert {\widehat{b}_x}\right\Vert _{L^\infty ({\mathbb {R}}^2)} \le C\). The case \(r=2\) follows from Lemma 4.3.

After that, to estimate the \(L^r({\mathbb {R}}^2)\) norm of F for \(1\le r<2\), we invert the change of variables \(\Phi \),

$$\begin{aligned} \int _{\Omega } \left|{F(x)}\right|^r dx&= \int _{\Delta (I)} \left|{f(t )f(s)}\right|^{r} {\kappa (t)^{\frac{r}{3}}} {\kappa (s)^{\frac{r}{3}}} \left|{\det \begin{pmatrix} z'(t)&z'(s) \end{pmatrix}}\right|^{1-r} {{\,\textrm{d}\,}}t {{\,\textrm{d}\,}}s \nonumber \\&= \int _{\Delta (I)} \left|{f(t )f(s)}\right|^{r} {\kappa (t)^{\frac{r}{3}}} {\kappa (s)^{\frac{r}{3}}} \left|{\sin (\theta (t) -\theta (s))}\right|^{1-r} {{\,\textrm{d}\,}}t {{\,\textrm{d}\,}}s, \end{aligned}$$
(4.1)

where we define \(\theta :I \rightarrow [0,2 \pi )\) by requiring

$$\begin{aligned} \begin{pmatrix} \cos \theta (t) \\ \sin \theta (t) \end{pmatrix} = z'(t). \end{aligned}$$

We split \(\Delta (I)\) in the four subsets defined as follows. For \(j \in \{1,2,3,4\}\) we define

$$\begin{aligned} \Delta _{j} := \Big \{ (s,t) \in \Delta (I) :\theta (s) -\theta (t) \in \Big [ \frac{(j-1) \pi }{2} , \frac{j \pi }{2} \Big ) \Big \} , \end{aligned}$$

and we observe that

$$\begin{aligned}&\text {for}\ (s,t) \in \Delta _{1}, \qquad \sin (\theta (s)-\theta (t)) \ge \frac{1}{2} (\theta (s)-\theta (t)) \ge 0, \\&\text {for}\ (s,t) \in \Delta _{2}, \qquad \sin (\theta (s)-\theta (t)) \ge \frac{1}{2} (\pi + \theta (t)-\theta (s)) \ge 0 , \\&\text {for}\ (s,t) \in \Delta _{3}, \qquad \sin (\theta (t)-\theta (s)) \ge \frac{1}{2} (\theta (s)-\theta (t)- \pi ) \ge 0, \\&\text {for}\ (s,t) \in \Delta _{4}, \qquad \sin (\theta (t)-\theta (s)) \ge \frac{1}{2} (2 \pi + \theta (t)-\theta (s)) \ge 0. \end{aligned}$$
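All four inequalities follow from the elementary bound \(\sin y \ge \frac{2}{\pi } y \ge \frac{1}{2} y\), valid for \(y \in [0, \pi /2]\). For instance, for \((s,t) \in \Delta _{2}\) we have \(\pi - (\theta (s)-\theta (t)) \in (0, \pi /2]\), hence

$$\begin{aligned} \sin (\theta (s)-\theta (t)) = \sin \big ( \pi - (\theta (s)-\theta (t)) \big ) \ge \frac{1}{2} (\pi + \theta (t)-\theta (s)) \ge 0, \end{aligned}$$

and the other three cases are analogous.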

We obtain the desired estimate by controlling the portions of the integral in (4.1) in the corresponding subsets separately.

Case I: \((s,t) \in \Delta _{1}\). We have

$$\begin{aligned} \theta (s)-\theta (t) \ge \int _t^s \kappa (u) {{\,\textrm{d}\,}}u \ge 0. \end{aligned}$$
(4.2)

By the assumption on I made at the beginning of the proof, the function \(\Psi _1 :\Delta _{1} \rightarrow \widetilde{\Delta }_{1} \subseteq [0, 2 \pi )^2\) defined by

$$\begin{aligned}&\Psi _1 (s,t) = (\alpha (s), \beta (t)) , \nonumber \\&\alpha (s)=\int _{0}^s \kappa (u) {{\,\textrm{d}\,}}u , \qquad \beta (t)=\int _{0}^t \kappa (u) {{\,\textrm{d}\,}}u, \end{aligned}$$
(4.3)

is bijective. Together with the change of variables via the function \(\Psi _1\), the inequality in (4.2) yields that the portion of the integral in (4.1) on \(\Delta _{1}\) is bounded from above by

$$\begin{aligned} \int _{\widetilde{\Delta }_{1}} \left|{f(s (\alpha ) )}\right|^{r} \left|{f(t(\beta ))}\right|^{r} {\kappa (s(\alpha ))^{\frac{r}{3}-1}} {\kappa (t(\beta ))^{\frac{r}{3}-1}} \left|{\alpha - \beta }\right|^{1-r} {{\,\textrm{d}\,}}\alpha {{\,\textrm{d}\,}}\beta . \end{aligned}$$

By Hardy-Littlewood-Sobolev inequality, up to a multiplicative constant, the previous display is bounded from above by

$$\begin{aligned} \left\Vert {\left|{f \circ s}\right|^r ( \kappa \circ s )^{\frac{r}{3}-1}}\right\Vert ^2_{L^{\frac{2}{3-r}} (\widetilde{I}, {{\,\textrm{d}\,}}\alpha )}, \end{aligned}$$

where \(\widetilde{I} = \alpha (I)\) and the norm is taken with respect to the Lebesgue measure \({{\,\textrm{d}\,}}\alpha \) on \(\widetilde{I}\). We change variables via the inverse of the bijective function \(\alpha \) defined in (4.3). Up to a multiplicative constant, we obtain the desired estimate for the portion of the integral in (4.1) on \(\Delta _{1}\) by

$$\begin{aligned} \left\Vert {f}\right\Vert ^{2r}_{L^{\frac{2r}{3-r}} (I,\nu )}. \end{aligned}$$

Case II: \((s,t) \in \Delta _{2}\). For \(S_2(t)\) defined by

$$\begin{aligned} S_2(t) := \sup \Big \{ u \in J :\theta (u) \le \theta (t)+\pi \Big \} \ge s, \end{aligned}$$

we have

$$\begin{aligned} \pi + \theta (t)-\theta (s) \ge \int _s^{S_2(t)} \kappa (u) {{\,\textrm{d}\,}}u \ge 0. \end{aligned}$$
(4.4)

By the assumption on I made at the beginning of the proof, the function \(\Psi _2 :\Delta _{2} \rightarrow \widetilde{\Delta }_{2} \subseteq [0, 2 \pi )^2\) defined by

$$\begin{aligned}&\Psi _2 (s,t) = (\alpha (s,t), \beta (t)) , \nonumber \\&\alpha (s,t)=\int _s^{S_2(t)} \kappa (u) {{\,\textrm{d}\,}}u +\int _{0}^t \kappa (u) {{\,\textrm{d}\,}}u + \pi , \qquad \beta (t)=\int _{0}^t \kappa (u) {{\,\textrm{d}\,}}u, \end{aligned}$$
(4.5)

is bijective. Since the function \(S_2\) is increasing, it is differentiable almost everywhere. Therefore, the function \(\Psi _2\) is approximately totally differentiable almost everywhere in its domain, see Theorem 1 and the following Example in [13]. Together with the change of variables via the function \(\Psi _2\), the inequality in (4.4) yields that the portion of the integral in (4.1) on \(\Delta _{2}\) is bounded from above by

$$\begin{aligned} \int _{\widetilde{\Delta }_{2}} \left|{f(s (\alpha ))}\right|^{r} \left|{f(t(\beta ))}\right|^{r} {\kappa (s(\alpha ))^{\frac{r}{3}-1}} {\kappa (t(\beta ))^{\frac{r}{3}-1}} \left|{\alpha - \beta - \pi }\right|^{1-r} {{\,\textrm{d}\,}}\alpha {{\,\textrm{d}\,}}\beta , \end{aligned}$$

where we used the result stated in Theorem 2 in [13] for changes of variables that are approximately totally differentiable almost everywhere.

As in Case I, we conclude by Hardy-Littlewood-Sobolev inequality and the change of variables via the inverse of the bijective function defined in (4.5).

Case III: \((s,t) \in \Delta _{3}\). For \(S_3(t)\) defined by

$$\begin{aligned} S_3(t) := \inf \Big \{ u \in J :\theta (u) \ge \theta (t)+\pi \Big \} \le s, \end{aligned}$$

we have

$$\begin{aligned} \theta (s) - \theta (t) - \pi \ge \int _{S_3(t)}^s \kappa (u) {{\,\textrm{d}\,}}u \ge 0. \end{aligned}$$

We conclude as in Case II, with the change of variables via the bijective function \(\Psi _3 :\Delta _{3} \rightarrow \widetilde{\Delta }_{3} \subseteq [0, 2 \pi )^2\) defined by

$$\begin{aligned}&\Psi _3 (s,t) = (\alpha (s,t), \beta (t)) , \\&\alpha (s,t)=\int _{S_3(t)}^s \kappa (u) {{\,\textrm{d}\,}}u +\int _{0}^t \kappa (u) {{\,\textrm{d}\,}}u + \pi , \qquad \beta (t)=\int _{0}^t \kappa (u) {{\,\textrm{d}\,}}u. \end{aligned}$$

Case IV: \((s,t) \in \Delta _{4}\). We have

$$\begin{aligned} 2 \pi + \theta (t) - \theta (s) \ge \int _s^{\ell (\Gamma )} \kappa (u) {{\,\textrm{d}\,}}u + \int _0^{t} \kappa (u) {{\,\textrm{d}\,}}u \ge 0. \end{aligned}$$

We conclude as in Case I, with the change of variables via the bijective function \(\Psi _4 :\Delta _{4} \rightarrow \widetilde{\Delta }_{4} \subseteq [0, 2 \pi )^2\) defined by

$$\begin{aligned}&\Psi _4 (s,t) = (\alpha (s), \beta (t)) , \\&\alpha (s) = 2 \pi - \int _s^{\ell (\Gamma )} \kappa (u) {{\,\textrm{d}\,}}u, \qquad \beta (t)=\int _{0}^t \kappa (u) {{\,\textrm{d}\,}}u. \end{aligned}$$

\(\square \)

Next, we prove the boundedness of the maximal Fourier restriction operator uniformly in the convex curve stated in Theorem 1.3.

Proof of Theorem 1.3

The proof follows a standard argument that we repeat for the sake of completeness. Let \(g \in L^\infty ({\mathbb {R}}^2)\) be a function normalized in \(L^\infty ({\mathbb {R}}^2)\). Let R be a measurable function associating to each point of \(\Gamma \) a rectangle centred at the origin with sides parallel to the axes. We consider the linearised maximal Fourier restriction operator \({\mathcal {M}}_{g,R}\) defined as follows. For every Schwartz function \(f \in {\mathcal {S}}({\mathbb {R}}^2)\) we define

$$\begin{aligned} {\mathcal {M}}_{g,R} \widehat{f}(t)=\int _{{\mathbb {R}}^2} \widehat{f}(z(t) - y) g(z(t) - y) \left|{R(z(t))}\right|^{-1} 1_{R(z(t))}(y) {{\,\textrm{d}\,}}y. \end{aligned}$$

We aim to prove boundedness properties for \({\mathcal {M}}_{g,R}\) with constants independent of the linearising function R.

The operator is bounded from \(L^1({\mathbb {R}}^2)\) to \(L^\infty (J, \nu )\). To prove its boundedness properties near \(L^{4/3}({\mathbb {R}}^2)\), we introduce the bump function

$$\begin{aligned} a_x(y) := \left|{R(x)}\right|^{-1} 1_{R(x)}(y)\overline{g(x-y)}, \end{aligned}$$

and, by Plancherel, we rewrite

$$\begin{aligned} {\mathcal {M}}_{g,R} \widehat{f} (t) = \int _{{\mathbb {R}}^2} \overline{\widehat{a}_{z(t)} (\xi )} e^{2\pi i \xi \cdot z(t)}f(\xi ) {{\,\textrm{d}\,}}\xi . \end{aligned}$$

The adjoint operator \({\mathcal {M}}_{g,R}^*\) with respect to the \(L^1(J, \nu )\)-pairing is defined by

$$\begin{aligned} {\mathcal {M}}_{g,R}^*h(\xi )=\int _{J} \widehat{a}_{z(t)} (\xi )e^{-2\pi i \xi \cdot z(t)}h(t) {{\,\textrm{d}\,}}\nu (t). \end{aligned}$$

By Lemma 4.2, for \(1 \le r < 2\) we have

$$\begin{aligned} {\mathcal {M}}_{g,R}^*:L^{\frac{2r}{3-r}}(J, \nu ) \rightarrow L^{2r'}({\mathbb {R}}^2), \qquad \left\Vert {{\mathcal {M}}_{g,R}^*}\right\Vert _{{{\,\textrm{op}\,}}} < \infty , \end{aligned}$$

hence, for \(1 \le p < 4/3\) we have the desired result

$$\begin{aligned} {\mathcal {M}}_{g,R} :L^{p}({\mathbb {R}}^2) \rightarrow L^{\frac{p'}{3}}(J, \nu ), \qquad \left\Vert {{\mathcal {M}}_{g,R}}\right\Vert _{{{\,\textrm{op}\,}}} < \infty , \end{aligned}$$

where \(\left\Vert {\cdot }\right\Vert _{{{\,\textrm{op}\,}}}\) stands for the norm of the operator, \(p' = 2r'\), and \(\big ( \frac{2r}{3-r} \big )' = \frac{2r'}{3} = \frac{p'}{3}\), so that p ranges over \([1,4/3)\) as r ranges over \([1,2)\). \(\square \)

Finally, we prove the corollaries.

Proof of Corollary 1.4

For every function \(f \in {\mathcal {S}}({\mathbb {R}}^2)\) we define the function g by

$$\begin{aligned} g(\xi ) ={\left\{ \begin{array}{ll} \displaystyle \frac{\left|{\widehat{f}(\xi )}\right|}{\widehat{f}(\xi )}, \qquad &{}\text {if}\ \widehat{f}(\xi ) \ne 0, \\ 1, \qquad &{}\text {if}\ \widehat{f}(\xi ) =0. \end{array}\right. } \end{aligned}$$

In particular, we have

$$\begin{aligned} \left\Vert {g}\right\Vert _\infty =1, \qquad \widehat{f} g = \left|{\widehat{f}}\right|. \end{aligned}$$

Since \(\widehat{f}\) is continuous, the averages of \(\widehat{f} g = \left|{\widehat{f}}\right|\) over rectangles shrinking to a point converge to the value of \(\left|{\widehat{f}}\right|\) at that point. Therefore, the function \({\mathcal {M}}_g \widehat{f}\) dominates the function \(\left|{\widehat{f}}\right|\) pointwise, and the desired result follows from Theorem 1.3. \(\square \)

Proof of Corollary 1.5

The desired result holds true for every function \(f \in {\mathcal {S}}({\mathbb {R}}^2)\), since in this case \(\widehat{f}\) is continuous.

For \(1 \le p < 4/3\), the desired result for every function \(f \in L^p({\mathbb {R}}^2)\) follows from a standard approximation argument and the boundedness properties of the maximal operator stated in Theorem 1.3. \(\square \)