1 Fourier Multipliers, the Mikhlin Multiplier Theorem, and Hörmander’s Extension

The Fourier transform of an integrable function f on \({\mathbb {R}}^n\) is defined by \({\widehat{f}}(\xi )= \int _{{\mathbb {R}}^n}f(x) {e}^{-2\pi i x \cdot \xi }\mathrm{d}x\) and its inverse Fourier transform by \(f^{\vee }(x) ={\widehat{f}}(-x)= \int _{{\mathbb {R}}^n}f(\xi ) {e}^{2\pi i x \cdot \xi }\mathrm{d}\xi \). Schwartz functions are infinitely differentiable functions which, together with all their derivatives, decay rapidly at infinity. The Fourier transform is an injective map from the Schwartz class onto itself, whose inverse is the inverse Fourier transform.

Multiplying Schwartz functions by a fixed bounded function yields \(L^1\) functions, on which the inverse Fourier transform is well defined. For a fixed bounded function \(\sigma \) on \({\mathbb {R}}^n\), we consider the operator acting on Schwartz functions f on \({\mathbb {R}}^n\) via the well-defined action

$$\begin{aligned} T_{\sigma }(f)(x)= ({\widehat{f}} \sigma )^{\vee }(x) =\int _{{\mathbb {R}}^n} {\widehat{f}}(\xi ) \sigma (\xi ) {e}^{2\pi i x \cdot \xi }\mathrm{d}\xi . \end{aligned}$$

Such operators are called multiplier operators, as distinguished from multiplication operators: they multiply the Fourier transform (the frequency side) of a function instead of the function itself. A long-standing problem in harmonic analysis is to find optimal conditions on \(\sigma \) such that \(T_{\sigma }\) is a bounded operator on \(L^p({\mathbb {R}}^n)\) for a given p in \((1,\infty )\). If this is the case, then \(\sigma \) is called an \(L^p\) Fourier multiplier.
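A classical one-dimensional example may help fix ideas: the choice \(\sigma (\xi )=-i\,\mathrm{sgn}(\xi )\) on \({\mathbb {R}}\) gives the Hilbert transform,

$$\begin{aligned} T_{\sigma }(f)(x)= \big ( -i\,\mathrm{sgn}(\xi )\, {\widehat{f}}(\xi ) \big )^{\vee }(x) = \frac{1}{\pi }\, \mathrm{p.v.}\! \int _{{\mathbb {R}}} \frac{f(x-y)}{y}\, \mathrm{d}y , \end{aligned}$$

which is bounded on \(L^p({\mathbb {R}})\) exactly for \(1<p<\infty \).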

We begin by providing a short historical account of this topic. Let us denote by \(\partial _k\) differentiation in the k-th variable on \({\mathbb {R}}^n\). Mikhlin’s theorem [19] states that if a function \(\sigma \) defined on \({\mathbb {R}}^n\setminus \{0\}\) has the property that for all multi-indices \(\alpha =(\alpha _1, \dots , \alpha _n)\in ({\mathbb {Z}}^+\cup \{0\})^n\) with \(|\alpha | = \alpha _1+\cdots +\alpha _n \le n\) there is a constant \(C_\alpha <\infty \) such that

$$\begin{aligned} |\xi |^{ |\alpha |} \Big | \partial ^{\alpha _1}_1 \cdots \partial ^{\alpha _n}_n \sigma (\xi ) \Big | \le C_\alpha , \end{aligned}$$
(1.1)

then \(\sigma \) is an \(L^p({\mathbb {R}}^n)\) Fourier multiplier. Actually, in the original formulation of Mikhlin’s theorem [19] it was assumed that at most one derivative is taken in each variable, i.e., each \(\alpha _j\) in (1.1) is either 0 or 1. It is known nowadays that the condition \(\alpha _j\in \{0,1\}\) for all j in (1.1) can be replaced simply by \(|\alpha |\le [\frac{n}{2}]+1\); see for instance Stein’s book [26, p. 96]. This formulation in Stein’s book is more or less the classical version of Mikhlin’s theorem known today. The fact that the total number of differentiations can be taken to be essentially “half the dimension” first appeared in Hörmander’s work [16].

Precisely, Hörmander [16] provided an improvement of Mikhlin’s theorem by replacing condition (1.1) by

$$\begin{aligned} \sup _{k\in {\mathbb {Z}}} 2^{-kn+2k|\alpha |} \int _{2^k<|\xi |<2^{k+1}} |\partial ^\alpha \sigma (\xi )|^2 \mathrm{d}\xi <\infty , \end{aligned}$$
(1.2)

where \(\partial ^\alpha = \partial _1^{\alpha _1}\cdots \partial _n^{\alpha _n}\). To compare conditions (1.1) and (1.2), we note that (1.1) requires that the function \(M_\alpha (\xi )=|\partial ^\alpha \sigma (\xi )| |\xi |^{|\alpha |}\) be bounded on \({\mathbb {R}}^n\setminus \{0\}\) for all \(\alpha \) with \(|\alpha | \le [n/2]+1\), while (1.2) relaxes this assumption to the requirement that the \(L^2\) averages of \(M_\alpha \) over the dyadic annuli \(\{2^k<|\xi |<2^{k+1}\}\) be uniformly bounded. Additionally, it is useful to observe that Hörmander’s condition (1.2) can be rewritten in the form

$$\begin{aligned} \sup _{k\in {\mathbb {Z}}} \Vert \partial ^\alpha [\sigma (2^k \cdot )]\Vert _{L^2(A)} <\infty , \end{aligned}$$
(1.3)

where \(A=\{\xi \in {\mathbb {R}}^n: 1<|\xi |<2\}\) denotes the unit annulus in \({\mathbb {R}}^n\).
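Indeed, the equivalence of (1.2) and (1.3) follows from the change of variables \(\xi = 2^k \eta \): since \(\partial ^\alpha [\sigma (2^k\cdot )](\eta ) = 2^{k|\alpha |} (\partial ^\alpha \sigma )(2^k\eta )\), we have

$$\begin{aligned} \big \Vert \partial ^\alpha [\sigma (2^k \cdot )]\big \Vert _{L^2(A)}^2 = \int _A 2^{2k|\alpha |} \big |(\partial ^\alpha \sigma )(2^k\eta )\big |^2 \, \mathrm{d}\eta = 2^{-kn+2k|\alpha |} \int _{2^k<|\xi |<2^{k+1}} |\partial ^\alpha \sigma (\xi )|^2 \, \mathrm{d}\xi , \end{aligned}$$

so the suprema in (1.2) and (1.3) are finite simultaneously.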

To obtain a sharper version of the Hörmander multiplier theorem, we introduce derivatives of fractional order. Let \(\Delta =\sum _{j=1}^n \partial _j^2\) be the Laplacian on \({\mathbb {R}}^n\). We denote by \((I-\Delta )^{s/2} \) the operator given by multiplication by \((1+4\pi ^2 |\xi |^2)^{s/2}\) on the Fourier transform; this operator corresponds to a “combined derivative” of all orders from 0 up to and including the fractional order s. A variant of Hörmander’s result involving fractional derivatives can be formulated as follows: Let \(s>0\) and let \(\Psi \) be a Schwartz function whose Fourier transform is supported in the annulus \(\{\xi : 1/2< |\xi |< 2\}\) and which satisfies \(\sum _{j\in {\mathbb {Z}}} \widehat{\Psi }(2^{-j}\xi )=1\) for all \(\xi \ne 0\). If for some \(1\le r\le 2\) with \(s>n/r\) the function \(\sigma \) satisfies

$$\begin{aligned} \sup _{k\in {\mathbb {Z}}} \big \Vert (I-\Delta )^{s/2} \big [ \widehat{\Psi }\sigma (2^k \cdot )\big ] \big \Vert _{L^r({\mathbb {R}}^n) }<\infty , \end{aligned}$$
(1.4)

then \(T_\sigma \) admits a bounded extension from \(L^p({\mathbb {R}}^n)\) to itself for all \(1<p<\infty \). We would like to point out that in the special case when s is a positive integer and \(r=2\), the present version of the Hörmander multiplier theorem is equivalent to the one expressed in terms of (1.2) (equivalently (1.3)), as one can verify using the classical equivalence of Sobolev norms

$$\begin{aligned} \big \Vert (I-\Delta )^{s/2} f \big \Vert _{L^2} \approx \sum _{|\alpha | \le s} \big \Vert \partial ^\alpha f \big \Vert _{L^2} . \end{aligned}$$

It is natural to ask whether condition (1.4) still implies that \(\sigma \) is an \(L^p\) Fourier multiplier for some \(p\in (1,\infty )\) if \(s\le \frac{n}{r}\). As no derivatives are required of \(\sigma \) in order for it to be an \(L^2\) Fourier multiplier, one expects that the closer p is to 2, the fewer derivatives are needed in condition (1.4). Calderón and Torchinsky [1, Theorem 4.7] first addressed this point by showing, via an interpolation argument, that if condition (1.4) holds with \(\big | \frac{1}{p} -\frac{1}{2} \big | = \frac{1}{r} \), then \(T_\sigma \) is bounded from \(L^p({\mathbb {R}}^n)\) to itself for all p satisfying \(\big | \frac{1}{p} -\frac{1}{2} \big | <\frac{s}{n}\); in other words, boundedness holds when \(\big | \frac{1}{p} -\frac{1}{2} \big | <\frac{s}{n}\) and \(rs>n\). Counterexamples provided by Grafakos, He, Honzík, and Nguyen [13] show that boundedness may fail when \(\big | \frac{1}{p} -\frac{1}{2} \big | >\frac{s}{n}\). Moreover, Slavíková [24] obtained an example indicating that \(L^p({\mathbb {R}}^n)\) boundedness may also fail on the line segments \(|\frac{1}{p}-\frac{1}{2}|=\frac{s}{n}\). Prior to this, positive endpoint results on \(L^p\) and on \(H^1\) involving Besov spaces were given by Seeger [21,22,23].

We note that boundedness from \(L^p\) to \(L^p\) may indeed hold on the critical line; for instance the multiplier \(m_{a,b}(\xi ) = \psi (\xi ) |\xi |^{- b} e^{i| \xi |^a}\) on \({\mathbb {R}}^{ n}\), where \(a > 0\), \(a \ne 1\), \(b > 0\), and \(\psi \) is a smooth function on \({\mathbb {R}}^{ n}\) which vanishes in a neighborhood of the origin and is equal to 1 in a neighborhood of infinity, satisfies (1.4) with \(s = b/a\) and any \(r> n/s\). The range of p’s for which \(T_{m_{a,b}}\) is a bounded operator on \(L^p({\mathbb {R}}^{ n})\) is completely described by the inequality \( |\frac{1}{p}-\frac{1}{2}|\le \frac{b/a}{ n} \) (see Hirschman [15, comments after Theorem 3c], Wainger [29, Part II], and Miyachi [20, Theorem 3]).

Recently, Grafakos and Slavíková [12] provided a version of the Hörmander multiplier theorem of [11] in which condition (1.4) (for \(1\le r \le 2\) and \(rs>n\)) is replaced by the weaker assumption

$$\begin{aligned} \sup _{k \in {\mathbb {Z}}}\left\Vert {(I- \Delta )^{\frac{s}{2}} [{\widehat{\Psi }} \sigma (2^k\cdot )]} \right\Vert _{{L}^{\frac{n}{s},1}({\mathbb {R}}^n)} < \infty , \end{aligned}$$
(1.5)

proving that (1.5) implies boundedness for \(T_{\sigma }\) from \(L^p \) to itself for \(|\frac{1}{p}-\frac{1}{2}|<\frac{s}{n}\). Here \(L^{\frac{n}{s},1}\) is the Lorentz space with indices n/s and 1 defined in terms of the norm

$$\begin{aligned} \Vert f\Vert _{L^{q,1}({\mathbb {R}}^n)}=\int _0^\infty f^*(r)r^{\frac{1}{q}}\,\frac{ \mathrm{d}r}{r} , \quad 0<q<\infty , \end{aligned}$$

where \(f^*\) is the nonincreasing rearrangement of the function f, namely, the unique nonincreasing left-continuous function on \((0,\infty )\) that is equimeasurable with f and is given explicitly by

$$\begin{aligned} f^*(t)= \inf \big \{r\ge 0:\,\, |\{y\in {\mathbb {R}}^n:\,\, |f(y)|>r\}| < t \big \}\, . \end{aligned}$$
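As a simple illustration of these definitions, if \(f=c\,\chi _E\) for a constant \(c>0\) and a measurable set E of finite measure, then \(f^* = c\, \chi _{(0, |E|]}\), and hence

$$\begin{aligned} \Vert c\,\chi _E\Vert _{L^{q,1}({\mathbb {R}}^n)} = \int _0^{|E|} c\, r^{\frac{1}{q}}\, \frac{\mathrm{d}r}{r} = q\, c\, |E|^{\frac{1}{q}} . \end{aligned}$$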

The Lorentz space \(L^{\frac{n}{s},1}({\mathbb {R}}^n)\) is known to be, at least for integer values of s, locally the largest rearrangement-invariant function space such that membership of \((I-\Delta )^{\frac{s}{2}}f\) in this space forces f to be bounded; see [5, 27]. So in this sense, condition (1.5) is optimal for the Hörmander multiplier theorem.

2 The Marcinkiewicz Multiplier Theorem

The Marcinkiewicz multiplier theorem predates Mikhlin’s theorem and was first formulated in the context of Fourier series. Although it is essentially equivalent to Mikhlin’s theorem for one-dimensional Fourier series, the Marcinkiewicz multiplier theorem presents differences in two and higher dimensions. We consider the double Fourier series transformation

$$\begin{aligned} f(x,y)= \sum _{m,n=1}^\infty A_{m,n} (x,y) \mapsto \sum _{m,n=1}^\infty \lambda _{m,n}A_{m,n} (x,y), \end{aligned}$$
(2.1)

where

$$\begin{aligned} A_{m,n} (x,y)&= a_{m,n} \cos (mx) \cos (ny) + b_{m,n} \cos (mx) \sin (ny) \\&\quad + c_{m,n} \sin (mx) \cos (ny) + d_{m,n} \sin (mx) \sin (ny) \end{aligned}$$

and \(a_{m,n}\), \(b_{m,n}\), \(c_{m,n}\), and \(d_{m,n}\) are the Fourier coefficients of the periodic function f on \([-\pi , \pi )^2\). Marcinkiewicz’s theorem [18] states that if

$$\begin{aligned} K_{\alpha ,\beta } =&\sum _{\mu =2^\alpha }^{2^{\alpha +1}-2}\,\,\,\sum _{\nu =2^\beta }^{2^{\beta +1}-2} \big |\lambda _{\mu +1,\nu }-\lambda _{\mu ,\nu } -\lambda _{\mu +1,\nu +1} +\lambda _{\mu ,\nu +1}\big | \\&+ \sum _{\mu =2^\alpha }^{2^{\alpha +1}-2} \big |\lambda _{\mu +1,2^{\beta +1} }-\lambda _{\mu ,2^{\beta +1}} \big | \\&+ \sum _{\nu =2^\beta }^{2^{\beta +1}-2} \big | \lambda _{2^\alpha ,\nu +1} -\lambda _{2^\alpha ,\nu } \big | \\&+ \big | \lambda _{2^{\alpha +1}-1,\, 2^{\beta +1}-1} \big | \end{aligned}$$

satisfies

$$\begin{aligned} \sup \{ K_{\alpha ,\beta }:\,\, \alpha ,\beta =0,1,2,\dots \} =K<\infty , \end{aligned}$$

then the transformation in (2.1) is bounded from \(L^p([-\pi , \pi )^2)\) to itself, when \(1<p<\infty \). As pointed out by Marcinkiewicz himself, there are natural extensions of this result for Fourier series to higher dimensions.

The condition about the finiteness of \(K_{\alpha ,\beta }\) concerns the summability of the discrete first partial derivatives and of the discrete mixed derivative of the multiplier sequence \((\lambda _{m,n})\) over dyadic intervals \(([2^\alpha , 2^{\alpha +1}]\cap \mathbb Z) \times ([2^\beta , 2^{\beta +1}]\cap {\mathbb {Z}})\), and can be recast in the nonperiodic setting as well. Here is the formulation of the Marcinkiewicz multiplier theorem on Euclidean spaces: Let \(I_j= (-2^{j+1}, -2^j]\cup [2^j, 2^{j+1})\) for \(j\in {\mathbb {Z}}\). Let \(\sigma \) be a bounded function on \({\mathbb {R}}^n\) such that for all \(\alpha =(\alpha _1,\dots , \alpha _n)\) with \(0\le \alpha _1, \dots ,\alpha _n\le 1\) the derivatives \(\partial ^\alpha \sigma \) are continuous up to the boundary of any rectangle \(I_{j_1}\times \cdots \times I_{j_n}\) in \({\mathbb {R}}^n\). Assume that there is a constant \(K<\infty \) such that for all partitions \( \{s_1,\dots , s_k\}\cup \{r_1,\dots , r_\ell \}=\{1,2,\dots , n\}\) with \(n=k+\ell \) we have

$$\begin{aligned} \sup _{\xi _{r_1}\in I_{j_{r_1}}}\!\cdots \! \sup _{\xi _{r_\ell }\in I_{j_{r_\ell }}} \int _{I_{j_{s_1}}} \!\! \cdots \! \int _{I_{j_{s_k}} } \!\! \!\big | (\partial _{ {s_1}} \cdots \partial _{ {s_k}} \sigma ) (\xi _1, \dots , \xi _n)\big | \, \mathrm{d}\xi _{ {s_k}}\cdots \mathrm{d}\xi _{ {s_1}} \le K \end{aligned}$$
(2.2)

for all \( (j_1,\dots , j_n)\in {\mathbb {Z}}^n\). Then \(\sigma \) is an \(L^p\) Fourier multiplier on \({\mathbb {R}}^n\) for \(1<p<\infty \), and there is a constant \(C_{p, n}<\infty \) such that

$$\begin{aligned} \Vert T_\sigma \Vert _{L^p\rightarrow L^p}\le C_{p, n}\big (K+ \Vert \sigma \Vert _{L^\infty }\big ) . \end{aligned}$$
(2.3)

This result constitutes the classical version of the Marcinkiewicz multiplier theorem and its proof can be found in harmonic analysis textbooks, such as in Stein’s book [26, p. 109].

Although condition (2.2) is a bit hard to verify, it is a straightforward consequence of the seemingly easier property

$$\begin{aligned} \Big | \partial ^{\alpha _1}_1 \cdots \partial ^{\alpha _n}_n\sigma (\xi ) \Big | \le C_\alpha |\xi _1|^{ -\alpha _1}\cdots |\xi _n|^{ -\alpha _n} , \end{aligned}$$
(2.4)

whenever all \(\alpha _j \in \{0,1\}\) and \(\xi _j\ne 0\) for all j for which \(\alpha _j=1\).
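To see the implication, note that for each differentiated variable the bound \(|\xi _{s_i}|^{-1}\) in (2.4) integrates to a fixed constant over the dyadic set \(I_{j_{s_i}}\):

$$\begin{aligned} \int _{I_{j_{s_i}}} \frac{\mathrm{d}\xi _{s_i}}{|\xi _{s_i}|} = 2\int _{2^{j_{s_i}}}^{2^{j_{s_i}+1}} \frac{\mathrm{d}t}{t} = 2\log 2 , \end{aligned}$$

while the variables on which no derivative falls are simply estimated by the suprema in (2.2); thus (2.2) holds with K a constant multiple of \(\max _{\alpha } C_\alpha \).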

While (1.1) is often a consequence of the property that a smooth function \(\sigma \) on \({\mathbb {R}}^n\setminus \{0\}\) is homogeneous of degree zero, i.e.,

$$\begin{aligned} \sigma (\lambda \xi ) = \sigma (\xi ) ,\quad \quad \text {for all}\, \xi \in {\mathbb {R}}^n\, \text {and all}\, \lambda >0, \end{aligned}$$
(2.5)

condition (2.4) follows from a mixed (or anisotropic) homogeneity property

$$\begin{aligned} \sigma (\lambda ^{k_1} \xi _1,\dots , \lambda ^{k_n} \xi _n) = \sigma (\xi _1,\dots , \xi _n) ,\quad \quad \text {for all}\, \xi \in {\mathbb {R}}^n\, \text {and all}\, \lambda >0, \end{aligned}$$
(2.6)

for some fixed positive reals \(k_1, \dots , k_n\). Indeed, (2.5) yields

$$\begin{aligned} \partial ^\alpha [\sigma ( \xi ) ]= \partial ^\alpha [\sigma (\lambda \xi ) ] = \lambda ^{|\alpha |} \partial ^\alpha \sigma (\lambda \xi ) \end{aligned}$$

and so taking \(\lambda =|\xi |^{-1}\) yields (1.1) with \(C_\alpha \) the maximum of \(\partial ^\alpha \sigma \) on \({\mathbb {S}}^{n-1}\). Analogously, under hypothesis (2.6), we obtain

$$\begin{aligned} \partial ^\alpha [\sigma ( \xi ) ]= \partial ^\alpha [\sigma (\lambda ^{k_1} \xi _1,\dots , \lambda ^{k_n} \xi _n) ] = \lambda ^{\alpha _1k_1+\cdots +\alpha _nk_n} \partial ^\alpha \sigma (\lambda ^{k_1} \xi _1,\dots , \lambda ^{k_n} \xi _n) , \end{aligned}$$

and picking the unique \(\lambda >0\) such that \((\lambda ^{k_1}\xi _1, \dots , \lambda ^{k_n}\xi _n)\in {\mathbb {S}}^{n-1}\) for a given \(\xi \) with all \(\xi _j\ne 0\), we deduce (2.4) with \(C_\alpha =\Vert \partial ^\alpha \sigma \Vert _{L^\infty ({\mathbb {S}}^{n-1})}\), as \(\lambda ^{k_j\alpha _j}\le |\xi _j|^{-\alpha _j}\).

Example 2.1

For \(i,j\in \{1,\dots , n\}\), the smooth function

$$\begin{aligned} \sigma _1(\xi _1, \dots , \xi _n) = \frac{\xi _i \xi _j }{\xi _1^2+\xi _2^2+\cdots +\xi _n^2} \end{aligned}$$

on \({\mathbb {R}}^n\setminus \{0\}\) satisfies (1.1) while, for fixed \(m_k\in 2n {\mathbb {Z}}^+\), the smooth function

$$\begin{aligned} \sigma _2(\xi _1, \dots , \xi _n) = \frac{ ( \xi _1^{m_1} \xi _2^{m_2}\cdots \xi _n^{m_n})^{1/n}}{\xi _1^{m_1}+\xi _2^{m_2}+\cdots +\xi _n^{m_n}}, \end{aligned}$$

also defined on \({\mathbb {R}}^n\setminus \{0\}\), satisfies (2.4).
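In fact, \(\sigma _2\) satisfies the anisotropic homogeneity (2.6) with \(k_j=1/m_j\): replacing \(\xi _j\) by \(\lambda ^{1/m_j}\xi _j\) multiplies each power \(\xi _j^{m_j}\) by \(\lambda \), so that

$$\begin{aligned} \sigma _2(\lambda ^{1/m_1}\xi _1, \dots , \lambda ^{1/m_n}\xi _n) = \frac{ (\lambda ^n\, \xi _1^{m_1}\cdots \xi _n^{m_n})^{1/n}}{\lambda \, (\xi _1^{m_1}+\cdots +\xi _n^{m_n})} = \sigma _2(\xi _1, \dots , \xi _n) , \end{aligned}$$

and (2.4) then follows from the discussion preceding the example.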

Just as fractional differentiation provided optimal results in the case of Hörmander’s extension of Mikhlin’s theorem, one may wonder whether something analogous is possible in the case of Marcinkiewicz’s theorem. To avoid cumbersome notation, for complex numbers \(s_j\) with nonnegative real part, we define the differential operator (acting on functions on \({\mathbb {R}}^n\))

$$\begin{aligned} \Gamma (s_1,\dots , s_n):= (I-\partial _1^2)^{s_1/2}\cdots (I-\partial _n^2)^{s_n/2} \end{aligned}$$

given by multiplication by \((1+4\pi ^2\xi _1^2)^{s_1/2}\cdots (1+4\pi ^2\xi _n^2)^{s_n/2}\) on the Fourier transform side. We also define the anisotropic dilation of a function f by

$$\begin{aligned} D_{k_1,\dots , k_n} f (x_1,\dots , x_n) = f(2^{k_1}x_1, \dots , 2^{k_n} x_n) \end{aligned}$$

where \(k_1,\dots , k_n\) are integers. Finally, we define the tensor product of functions \(f_k\) on the line by

$$\begin{aligned} (f_1\otimes \cdots \otimes f_n)(x_1,\dots , x_n) = f_1(x_1) \dots f_n(x_n) ,\quad \quad x_j\in {\mathbb {R}}. \end{aligned}$$

This is a function on \({\mathbb {R}}^n\).

There is a version of the Marcinkiewicz multiplier theorem analogous to the version of the Hörmander multiplier theorem related to condition (1.4): Let \(\psi \) be a Schwartz function on the line whose Fourier transform is supported in \([-2,-1/2]\cup [1/2,2]\) and which satisfies \(\sum _{j\in {\mathbb {Z}}} {\widehat{\psi }}(2^{-j}\xi )=1\) for all \(\xi \ne 0\). If

$$\begin{aligned} \sup _{j_1,\dots , j_n \in {\mathbb {Z}}}\left\Vert { \Gamma (s_1,\dots , s_n)\Big [ ( \underbrace{ {\widehat{\psi }} \otimes \cdots \otimes {\widehat{\psi }} }_{n\, times } ) D_{j_1,\dots , j_n} \sigma \Big ]} \right\Vert _{{L}^{r}({\mathbb {R}}^n)} < \infty , \end{aligned}$$
(2.7)

where \(1\le r<\infty \), \(s_i>0\), \( \min (s_1, \dots , s_n)>1/r\), then \(T_{\sigma }\) maps \({L}^p({\mathbb {R}}^n)\) to \({L}^p({\mathbb {R}}^n)\) when \( | \frac{1}{p} -\frac{1}{2} | < \min (s_1, \dots , s_n)\).

A proof of this result can be found in Grafakos and Slavíková [11]. Earlier versions were given by Carbery [2], who considered the case in which the multiplier lies in a product-type \(L^2\)-based Sobolev space, and Carbery and Seeger [3, Remark after Prop. 6.1], who considered the case \(s_1=\cdots = s_n > | \frac{1}{p} -\frac{1}{2} |=\frac{1}{r}\). The positive direction of Carbery and Seeger’s result in the range \( | \frac{1}{p} -\frac{1}{2} | < \frac{1}{r}\) also appeared in [4, Condition (1.4)]; notice that the range is expressed in terms of the integrability of the multiplier and not in terms of its smoothness. Another extension of the Marcinkiewicz multiplier theorem to general Banach spaces was obtained by Hytönen [17].

We have recently obtained a two-dimensional version of the Marcinkiewicz multiplier theorem analogous to the version of Hörmander’s theorem associated with condition (1.5) in [12]. This provides a condition weaker than (2.7), hence a stronger theorem.

Theorem 2.2

Let \(0<s_1<s_2 <1\). Let \(\psi \) be a Schwartz function on the real line whose Fourier transform is supported in \(\{\xi : 1/2<|\xi | <2 \}\) and satisfies \(\sum _{j \in {\mathbb {Z}}} {\widehat{\psi }}\left( 2^{-j} \xi \right) =1\) for all \(\xi \in {\mathbb {R}}\setminus \{0\}\). If a bounded function \(\sigma \) on \({\mathbb {R}}^2\) satisfies

$$\begin{aligned} \sup _{j_1, j_2\in {\mathbb {Z}} } \left\Vert {\Gamma \left( s_1 , s_2 \right) \left[ ({\widehat{\psi }} \otimes {\widehat{\psi }} ) D_{j_1, j_2} \sigma \right] } \right\Vert _{ {L}^{\frac{1}{s_1},1}({\mathbb {R}}^2)} < \infty , \end{aligned}$$
(2.8)

then \(T_{\sigma }\) extends to a bounded operator from \( L ^p({\mathbb {R}}^2)\) to itself for all \(1<p<\infty \) satisfying

$$\begin{aligned} \left| \frac{1}{p}-\frac{1}{2} \right| < s_1. \end{aligned}$$

Moreover, boundedness may fail on the line \( | \frac{1}{p}-\frac{1}{2} | = s_1\).

An example contained in [24], taken in dimension \(n=1\), yields a function \(\sigma _1\) on the line satisfying \(\sup _{j\in \mathbb Z}\Vert (I-\partial ^2)^{s_1/2} [ \widehat{\psi } \sigma _1(2^j \cdot ) ] \Vert _{L^{1/s_1}} <\infty \) for some \(s_1<1/2\) but \(T_{\sigma _1}\) is unbounded on \(L^p({\mathbb {R}}) \) when \( | \frac{1}{p}-\frac{1}{2} | = s_1\). In two dimensions we set \(\sigma (\xi _1,\xi _2) =\sigma _1(\xi _1) \psi (\xi _2)\), to obtain unboundedness on the line \( | \frac{1}{p}-\frac{1}{2} | = s_1\).

3 Comparison of the Mikhlin–Hörmander and Marcinkiewicz Theorems

In order to properly assess the differences between the Mikhlin–Hörmander and Marcinkiewicz theorems, we focus attention on pairs of analogous versions. In the list below we provide comparisons for each such pair.

  1. 1.

    Let \(\sigma \) be a function defined on \({\mathbb {R}}^n\) off the hyperplanes on which one coordinate vanishes. Then assumption

    $$\begin{aligned} \Big | \partial ^{\alpha _1}_1 \cdots \partial ^{\alpha _n}_n \sigma (\xi ) \Big | \le C_\alpha |\xi |^{- |\alpha |}, \end{aligned}$$
    (3.1)

    assumed for all \(\alpha _j \in \{0,1\}\) is stronger than

    $$\begin{aligned} \Big | \partial ^{\alpha _1}_1 \cdots \partial ^{\alpha _n}_n \sigma (\xi ) \Big | \le C_\alpha |\xi _1|^{-\alpha _1}\cdots |\xi _n|^{-\alpha _n}, \end{aligned}$$
    (3.2)

    assumed for all \(\alpha _j \in \{0,1\}\). Hence (2.2) is also weaker than (3.1), and thus the Marcinkiewicz theorem provides a stronger result in this case.

  2. 2.

    Condition (3.2) assumed for all \(\alpha _j \in \{0,1\}\) does not relate to condition (3.1) assumed for all \(|\alpha |\le [\frac{n}{2}]+1\). This is because the former condition requires \(\sigma \) to have one derivative per variable (n derivatives in total), while the latter requires only \([\frac{n}{2}]+1\) derivatives in total, although more than one of them may fall on the same variable. Thus the Marcinkiewicz and Mikhlin theorems do not compare in this case.

  3. 3.

    For any \(1<r<\infty \), condition (1.4) assumed for \(rs>n \) is stronger than (2.7) assumed for \(r \min (s_1,\dots , s_n)>1\) when \(s=s_1+\cdots +s_n\). This is a consequence of the inequality

    $$\begin{aligned} \begin{aligned}&\sup _{j_1,\dots ,j_n\in {\mathbb {Z}}} \left\| \Gamma (s_1,\dots , s_n) \big [ ({\widehat{\psi }} \otimes \cdots \otimes {\widehat{\psi }} ) D_{j_1,\dots , j_n} \sigma \big ]\right\| _{L^r}\\&\quad \quad \le C\sup _{j\in {\mathbb {Z}}} \left\| (I-\Delta )^{\frac{s}{2}} \Big [\sigma (2^{j}\cdot ) \widehat{\Phi } \Big ]\right\| _{L^r } , \end{aligned} \end{aligned}$$
    (3.3)

    where \(1<r<\infty \), \(0<1/r< s_1\le s_2\le \dots \le s_n\), and \(s=s_{1}+\dots +s_n\), proved at the end of this section (see also [11]). Thus the Marcinkiewicz theorem provides a stronger result in this case as well.

  4. 4.

    The expressions in (1.5) and (2.8) do not compare. Thus, the Lorentz-space versions of the Hörmander and Marcinkiewicz theorems do not compare.

To prove (3.3), we first fix some notation. Let \(\psi \) be a Schwartz function on the line whose Fourier transform is supported in the set \(\{\xi : \frac{1}{2}\le |\xi |\le 2\}\) and which satisfies \(\sum _{k\in {\mathbb {Z}}} \widehat{\psi }(2^k \xi ) =1\) for every \(\xi \ne 0\). Also, let \(\Phi \) be a Schwartz function on \({\mathbb {R}}^n\) having analogous properties.

We begin by noting that if \(k\in {\mathbb {Z}}\), \(1< r < \infty \) and \( s>1/r\), then

$$\begin{aligned} \big \Vert (-\partial ^2)^{\frac{s}{2}}\big [f(2^{k}\cdot )\widehat{\psi }\, \big ]\big \Vert _{L^r({\mathbb {R}})} \le C \big \Vert (I-\partial ^2)^{\frac{s}{2}} f\big \Vert _{L^r({\mathbb {R}})} . \end{aligned}$$
(3.4)

To prove this inequality we will make use of the following form of the Kato–Ponce inequality [10, Theorem 1]

$$\begin{aligned} \big \Vert (-\partial ^2)^{\frac{s}{2}} (fg) \big \Vert _{L^r} \le C \big \Vert (-\partial ^2)^{\frac{s}{2}} f \big \Vert _{L^r}\big \Vert g \big \Vert _{L^\infty }+C \big \Vert f \big \Vert _{L^\infty }\big \Vert (-\partial ^2)^{\frac{s}{2}} g \big \Vert _{L^r} \end{aligned}$$

to obtain the estimate

$$\begin{aligned}&\big \Vert (-\partial ^2)^{\frac{s}{2}}\big [f(2^{k}\cdot )\widehat{\psi }\,\big ]\big \Vert _{L^r({\mathbb {R}})}\\&\quad \le C\left( \big \Vert (-\partial ^2)^{\frac{s}{2}}\big [f(2^{k}\cdot )\big ]\big \Vert _{L^r({\mathbb {R}})} \big \Vert \widehat{\psi }\big \Vert _{L^\infty ({\mathbb {R}})} +\big \Vert f(2^{k}\cdot )\big \Vert _{L^\infty ({\mathbb {R}})}\big \Vert (-\partial ^2)^{\frac{s}{2}}\widehat{\psi }\big \Vert _{L^r({\mathbb {R}})}\right) \\&\quad \le C\left( \big \Vert (-\partial ^2)^{\frac{s}{2}}\big [f(2^{k}\cdot )\big ]\big \Vert _{L^r({\mathbb {R}})}+ \big \Vert (I-\partial ^2)^{\frac{s}{2}}f\big \Vert _{L^r({\mathbb {R}})}\right) \\&\quad =C\left( 2^{k(s-\frac{1}{r})} \big \Vert (-\partial ^2)^{\frac{s}{2}}f\big \Vert _{L^r({\mathbb {R}})}+ \big \Vert (I-\partial ^2)^{\frac{s}{2}}f\big \Vert _{L^r({\mathbb {R}})}\right) \\&\quad \le C\big (2^{k(s-\frac{1}{r})}+1\big ) \big \Vert (I-\partial ^2)^{\frac{s}{2}}f\big \Vert _{L^r({\mathbb {R}})}, \end{aligned}$$

having used the one-dimensional Sobolev embedding theorem as \(rs>1\).

We now return to the proof of (3.3). Set \(F(\xi )=\sum _{a=-n}^n \widehat{\Phi }(2^a \xi )\), \(\xi \in {\mathbb {R}}^n\). Then \(F(\xi )=1\) for any \(\xi \) satisfying \(\frac{1}{2^n}\le |\xi | \le 2^n\). Therefore, if \(j_1,\dots ,j_n\) are integers and \(j:=\max \{j_1,\dots ,j_n\}\), then \(F(2^{j_1-j}\xi _1,\dots ,2^{j_n-j}\xi _n)=1\) on \(\{(\xi _1,\dots ,\xi _n): \frac{1}{2}\le |\xi _1| \le 2,\dots , \frac{1}{2} \le |\xi _n| \le 2\}\). Consequently,

$$\begin{aligned} \prod _{\ell =1}^n \widehat{\psi }(\xi _\ell ) = F(2^{j_1-j}\xi _1,\dots ,2^{j_n-j}\xi _n) \prod _{\ell =1}^n \widehat{\psi }(\xi _\ell ). \end{aligned}$$

In view of this identity we can write

$$\begin{aligned}&\left\| \prod _{\rho =1}^n(I-\partial _\rho ^2)^{\frac{s_\rho }{2}} \bigg [ \big ( D_{j_1,\dots , j_n} \sigma \big ) \big ( \widehat{\psi } \otimes \cdots \otimes \widehat{\psi }\,\big ) \bigg ]\right\| _{L^r}\\&\quad =\left\| \prod _{\rho =1}^n (I-\partial _\rho ^2)^{\frac{s_\rho }{2}} \bigg [ \big ( D_{j_1,\dots , j_n} \sigma \big ) \big ( \widehat{\psi } \otimes \cdots \otimes \widehat{\psi }\,\big ) \big (D_{j_1-j,\dots , j_n-j} F\big ) \bigg ]\right\| _{L^r}\\&\quad \le C\sum _{a=-n}^n\sum _{\{i_1,\dots ,i_k\} \subseteq \{1,\dots ,n\}} \\&\quad \left\| \prod _{\tau =1}^k (-\partial _{i_\tau }^2)^{\frac{s_{i_\tau }}{2}} \bigg [ \big ( \widehat{\psi } \otimes \cdots \otimes \widehat{\psi }\,\big ) \big ( D_{j_1,\dots , j_n} \sigma \big ) \big (D_{j_1-j+a,\dots , j_n-j+a} \widehat{\Phi }\big ) \bigg ]\right\| _{L^r}. \end{aligned}$$

Using (3.4) in the variables \(i_1,\dots ,i_k\) and the Sobolev embedding in the remaining variables, we estimate the corresponding term in the last expression by a constant multiple of

$$\begin{aligned}&\bigg [ \prod _{\rho =1}^n \big (1+2^{(j_{i_\rho }-j+a)(s_{i_\rho }-\frac{1}{r})}\big ) \bigg ] \left\| (I-\partial _{1}^2)^{\frac{s_{1}}{2}} \cdots (I-\partial _{n}^2)^{\frac{s_{n}}{2}} \bigg [\sigma (2^{j-a}\cdot ) \widehat{\Phi } \bigg ]\right\| _{L^r}\\&\quad \le C\big (1+2^{n\max _{\ell =1,\dots ,n}(s_\ell - \frac{1}{r})}\big )^n \left\| (I-\Delta )^{\frac{s_{1}+\dots +s_n}{2}} \bigg [\sigma (2^{j-a}\cdot ) \widehat{\Phi } \bigg ]\right\| _{L^r}\\&\quad \le C \sup _{m\in {\mathbb {Z}}} \left\| (I-\Delta )^{\frac{s}{2}} \bigg [\sigma (2^{m}\cdot ) \widehat{\Phi } \bigg ]\right\| _{L^r}. \end{aligned}$$

This implies (3.3).

4 A Key Lemma

In this section, we prove a key lemma needed in the proof of Theorem 2.2. We denote by \({\mathcal {M}}\) the strong maximal operator, defined at a point x as the supremum of the averages of \(|f|\) over all rectangles with sides parallel to the axes that contain x. It is a well-known fact that \({\mathcal {M}}\) is bounded on \(L^p\) for all \(p>1\) but fails to be of weak type (1, 1).

In the next section, we prove the main theorem when \(n=2\) and \(s_1>1/2\).

Lemma 4.1

Let \(0<1/q<s_1<s_2<1\). Then there is a constant C depending on these parameters such that for all measurable functions f on \({\mathbb {R}}^2\), all \(x=(x_1, x_2)\in {\mathbb {R}}^2\), and all \(j_1, j_2\in {\mathbb {Z}}\) we have

$$\begin{aligned} \left\Vert {\dfrac{f(x_1+2^{-j_1}y_1, x_2+2^{-j_2}y_2)}{(1+|y_1|)^{s_1} (1+|y_2|)^{s_2}}} \right\Vert _{ {L}^{\frac{1}{s_1},\infty }(\mathrm{d}y_1 \mathrm{d}y_2)} \le C {\mathcal {M}} (|f|^q)(x)^{\frac{1}{q}}. \end{aligned}$$
(4.1)

Proof

By a translation and a dilation we may assume that \(j_1 =j_2=0\) and \(x=(x_1, x_2)=0\). So we only need to prove that

$$\begin{aligned} \left\Vert {\dfrac{g(y_1, y_2)}{(1+|y_1|)^{s_1} (1+|y_2|)^{s_2}}} \right\Vert _{ {L}^{\frac{1}{s_1},\infty }({\mathbb {R}}^2, \mathrm{d}y_1\mathrm{d}y_2)} \le C {\mathcal {M}} (|g|^q)(0)^{\frac{1}{q}}. \end{aligned}$$
(4.2)

Obviously, we may assume that \({\mathcal {M}}_{ } (|g|^q)(0) = 1 \). For \(j_1, j_2\in {\mathbb {Z}}^+\cup \{0\}\) define the following rectangles that tile \({\mathbb {R}}^2\)

$$\begin{aligned} R_{j_1, j_2} =\Bigg \{y=(y_1, y_2)\in {\mathbb {R}}^2:\,\, {\left\{ \begin{array}{ll} 2^{j_i}< |y_i|\le 2^{j_i+1} &{}\hbox { if}\ j_i\ge 1 \\ |y_i|\le 2 &{}\hbox { if}\ j_i=0 \end{array}\right. } \quad i=1,2 \Bigg \}. \end{aligned}$$

For \(a>0\) and \(j_1, j_2 \) nonnegative integers we have the estimate

$$\begin{aligned} |\{y \in R_{j_1 , j_2}:\,\, |g(y ) |>a\} |\le \dfrac{1}{a^q} \int _{R_{j_1, j_2}} |g(y )|^q \mathrm{d}y \le \dfrac{2^{j_1+ j_2+4}}{a^q} , \end{aligned}$$

as \({\mathcal {M}} (|g|^q)(0) = 1 \). Using \(|R_{j_1, j_2}|\le 2^{j_1+ j_2+4}\) we obtain

$$\begin{aligned} |\{y \in R_{j_1, j_2}:\,\, |g(y ) |>a\} |\le 2^{j_1+ j_2 +4} \min \big (1, a^{-q} \big ), \end{aligned}$$

hence for all \(j_1, j_2\ge 0\) we deduce

$$\begin{aligned} \left|\left\{ y\in R_{j_1, j_2} :\,\, \dfrac{ |g(y )|}{(1+|y_1|)^{s_1} (1+|y_2|)^{s_2}} >a \right\} \right|\le 2^{j_1+ j_2+4} \min \left( 1,\frac{1}{(a 2^{ j_1s_1+ j_2s_2})^q}\right) . \end{aligned}$$

Let us write \(g=g_0+g_1\), where \(g_0=g\chi _{R_{0,0}}\). It will suffice to obtain (4.2) for each one of \(g_0\) and \(g_1\). We begin with \(g_0\). We have

$$\begin{aligned} \begin{aligned}&\left\Vert {\frac{g_0(y )}{(1+|y_1|)^{s_1} (1+|y_2|)^{s_2}}} \right\Vert _{ L^{\frac{1}{s_1},\infty }} \\&\quad = \sup _{a>0} a \left|\left\{ y\in R_{0,0} : \,\, \frac{|g(y )|}{(1+|y_1|)^{s_1} (1+|y_2|)^{s_2}}>a \right\} \right|^{s_1}\\&\quad \le \sup _{a>0} a |\{ y\in R_{0, 0}:\,\, |g(y )|>a \}|^{s_1} \\&\quad = \left\Vert {g} \right\Vert _{L^{1/s_1,\infty }(R_{0, 0})} \\&\quad \le C\left\Vert {g} \right\Vert _{L^{q}(R_{0 ,0})} \\&\quad \le C' {\mathcal {M}}_{ } (|g|^q)(0)^{\frac{1}{q}} \\&\quad = C' , \end{aligned} \end{aligned}$$

as \(L^q(R_{0, 0})\) embeds in \(L^{1/s_1,\infty }(R_{0, 0})\) when \(q>1/s_1\). This proves (4.2) for \(g_0\) in place of g. Now for \(g_1\) we argue as follows:

$$\begin{aligned}&\left|\left\{ y\in {\mathbb {R}}^2: \,\, \dfrac{ |g_1(y )|}{(1+|y_1|)^{s_1} (1+|y_2|)^{s_2}}>a \right\} \right|\\&\quad \le \quad \sum _{\begin{array}{c} j_1, j_2=0\\ j_1+ j_2>0 \end{array}}^{\infty } \left|\left\{ y\in R_{j_1, j_2} : \,\, \dfrac{ |g(y )|}{(1+|y_1|)^{s_1} (1+|y_2|)^{s_2}}>a \right\} \right|\\&\quad \le \sum _{\begin{array}{c} j_1, j_2=0\\ j_1+ j_2>0 \end{array}}^{\infty } 2^{j_1+ j_2+4} \min \left( 1,\frac{1}{a^q (2^{ j_1s_1+ j_2s_2})^q}\right) \\&\quad \le 16\sum _{\begin{array}{c} j_1,j_2\ge 0\\ j_1+j_2>0 \\ 2^{j_1s_1+j_2s_2}<\frac{1}{a} \end{array}} 2^{j_1 + j_2} + 16\sum _{\begin{array}{c} j_1,j_2\ge 0\\ j_1+j_2>0 \\ 2^{j_1s_1+j_2s_2}\ge \frac{1}{a} \end{array}} 2^{j_1+j_2}\frac{1}{a^q2^{qj_1s_1+qj_2s_2}} \\&\quad = 16\, ( I+II). \end{aligned}$$

Notice that I is zero if \(a>1\). So, in estimating I we may assume that \(a\le 1\). We have

$$\begin{aligned} I \le \sum _{\begin{array}{c} j_1,j_2\ge 0\\ 2^{j_1s_1+j_2s_2}<\frac{1}{a} \end{array} }2^{j_1+j_2}&\le \sum _{\begin{array}{c} j_1 \ge 0\\ 2^{j_1}<a^{-\frac{1}{s_1}} \end{array}} 2^{j_1} \sum _{\begin{array}{c} j_2\ge 0\\ 2^{j_2}<a^{-\frac{1}{s_2}}2^{-\frac{j_1s_1}{s_2}} \end{array} } 2^{j_2} \\&\le c \sum _{\begin{array}{c} j_1\ge 0 \\ 2^{j_1}<a^{-\frac{1}{s_1}} \end{array}} 2^{j_1} a^{-\frac{1}{s_2}}2^{-\frac{j_1s_1}{s_2}} \\&= c \sum _{\begin{array}{c} j_1\ge 0 \\ 2^{j_1}<a^{-\frac{1}{s_1}} \end{array}}2^{j_1(1-\frac{s_1}{s_2})} a^{-\frac{1}{s_2}} \\&\le c' a^{-\frac{1}{s_1}(1-\frac{s_1}{s_2})}a^{-\frac{1}{s_2}}\\&= c'a^{-\frac{1}{s_1}}. \end{aligned}$$

For term II we note that if \(a>1\), then all indices \(j_1, j_2\) in \({\mathbb {Z}}^+\cup \{0\}\) appear in the sum. In this case, we obtain

$$\begin{aligned} II \le \frac{ \chi _{a>1} }{a^q} \sum _{j_1\ge 0 } 2^{j_1(1-qs_1)}\sum _{j_2\ge 0 } 2^{j_2(1-qs_2)} = C \frac{ \chi _{a>1} }{a^q} \le C \frac{ \chi _{a>1} }{a^{1/s_1}} , \end{aligned}$$

since \(q>\frac{1}{s_1}\). We may therefore assume in what follows that \(a\le 1\). We have

$$\begin{aligned} II&\le \sum _{\begin{array}{c} j_1,j_2 \ge 0 \\ j_1+j_2>0\\ 2^{j_1s_1+j_2s_2}>\frac{1}{a} \end{array} }2^{j_1+j_2}\frac{1}{a^q2^{qj_1s_1+qj_2s_2}} \\&\le \frac{ 1 }{a^q} \sum _{\begin{array}{c} j_1\ge 0\\ 2^{j_1}<a^{-\frac{1}{s_1}} \end{array}} 2^{j_1(1-qs_1)} \sum _{\begin{array}{c} j_2\ge 0\\ 2^{j_2}>a^{-\frac{1}{s_2}}2^{-j_1\frac{s_1}{s_2}} \end{array}}2^{j_2(1-qs_2)} \\&\quad + \frac{1}{a^q} \sum _{\begin{array}{c} j_1\ge 0 \\ 2^{j_1}\ge a^{-\frac{1}{s_1}} \end{array}} 2^{j_1(1-qs_1)} \sum _{j_2\ge 0 } 2^{j_2(1-qs_2)} \\&\le \frac{ 1 }{a^q} \sum _{\begin{array}{c} j_1\ge 0\\ 2^{j_1}<a^{-\frac{1}{s_1}} \end{array}} 2^{j_1(1-qs_1)} \big ( C \, a^{-\frac{1}{s_2}+q } 2^{j_1 (qs_1-\frac{s_1}{s_2}) } \big ) + \frac{1}{a^q} \sum _{ 2^{j_1}\ge a^{-\frac{1}{s_1}}} 2^{j_1(1-qs_1)} C \\&\le c\, a^{-\frac{1}{s_1}} . \end{aligned}$$

These estimates provide the required conclusion for \(g_1\), hence (4.2) holds for both \(g_0\) and \(g_1\), thus it holds for g. \(\square \)

5 The Proof of Theorem 2.2 When \(s_1>1/2\)

We begin by considering the simplest case where \(s_1>1/2\). We will need the following well-known lemma, whose proof is omitted as it can be obtained via the off-diagonal Marcinkiewicz interpolation theorem [7, 28] using the classical Hausdorff–Young inequality \(\Vert {\widehat{f}} \Vert _{L^{p_i'} }\le \Vert f\Vert _{L^{p_i}}\) with \(1<p_1<p<p_2< 2\), \(i=1,2\).

Lemma 5.1

The Hausdorff–Young inequality holds for Lorentz spaces \(L^{p,1}\); precisely, we have \(\Vert {\widehat{f}} \Vert _{L^{p',1} ({\mathbb {R}}^2) }\le C_{p } \Vert f\Vert _{L^{p,1}({\mathbb {R}}^2)}\) when \(1<p<2\).

Next we present the proof of Theorem 2.2 when \(n=2\) and \(s_1>1/2\).

Proof

Let \(\psi \) be the Schwartz function in the statement of the theorem. Define a Schwartz function \(\theta \) on \({\mathbb {R}}\) by setting

$$\begin{aligned} {\widehat{\theta }}(\xi )= {\widehat{\psi }}(\xi /2)+{\widehat{\psi }}(\xi )+{\widehat{\psi }}(2\xi ) . \end{aligned}$$
(5.1)

Then \({\widehat{\theta }}\) is supported in the annulus \(1/4<|\xi | <4 \) and \({\widehat{\theta }}=1 \) on the support of \({\widehat{\psi }}\). We define Schwartz functions \(\Psi \) and \(\Theta \) by setting \({\widehat{\Psi }}(\xi _1,\xi _2) = {\widehat{\psi }}(\xi _1){\widehat{\psi }}(\xi _2)\) and \({\widehat{\Theta }}(\xi _1,\xi _2) = {\widehat{\theta }}(\xi _1){\widehat{\theta }}(\xi _2)\). For \(j\in {\mathbb {Z}}\) we define the Littlewood–Paley operators in each variable (associated with the bump \(\psi \)) by

$$\begin{aligned} \Delta _{j}^{\psi , 1}(f)(x)=\int _{{\mathbb {R}}} f\left( x_{1}-y, x_{2} \right) 2^{j} \psi \left( 2^{j} y\right) \mathrm{d} y \\ \Delta _{j}^{\psi , 2}(f)(x)=\int _{{\mathbb {R}}} f\left( x_{1}, x_{2}-y \right) 2^{j} \psi \left( 2^{j} y\right) \mathrm{d} y, \end{aligned}$$

and analogously we define \(\Delta _{j}^{\theta , 1}\) and \(\Delta _{j}^{\theta , 2}\) with \(\theta \) in place of \(\psi \). Also, for \(k_1,k_2\) in \({\mathbb {Z}}\), recall that \( D_{k_1,k_2} f(x_1,x_2) = f(2^{k_1} x_1 , 2^{k_2} x_2)\) denotes the anisotropic dilation of f associated with the parameters \(2^{k_1}\) and \(2^{k_2}\).

Since \({\widehat{\theta }}=1\) on the support of \({\widehat{\psi }}\), \(D_{-j_1,-j_2} {\widehat{\Theta }} \) equals 1 on the support of \(D_{-j_1,-j_2} {\widehat{\Psi }} \). So for any \(j_1,j_2 \in {\mathbb {Z}}\) we write

$$\begin{aligned}&\Delta _{j_1}^{\psi ,1} \Delta _{j_2}^{\psi ,2} T_{\sigma }(f)\left( x_{1},x_2\right) \\&\quad =\int _{{\mathbb {R}}}\int _{{\mathbb {R}}} {\widehat{f}}(\xi _1,\xi _2) {\widehat{\Psi }}\left( 2^{-j_1} \xi _1,2^{-j_2} \xi _2\right) \sigma (\xi _1,\xi _2) e^{2 \pi i (x_1 \xi _1+ x_2 \xi _2)} \mathrm{d} \xi _1\mathrm{d} \xi _2\\&\quad =\int _{{\mathbb {R}}}\int _{{\mathbb {R}}} (\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2}f)\;\widehat{\;}\,(\xi _1,\xi _2) {\widehat{\Psi }}\left( 2^{-j_1} \xi _1,2^{-j_2} \xi _2\right) \sigma (\xi _1,\xi _2) e^{2 \pi i (x_1 \xi _1+ x_2 \xi _2)} \mathrm{d} \xi _1\mathrm{d} \xi _2\\&\quad =\int _{{\mathbb {R}}}\int _{{\mathbb {R}}} 2^{j_1+j_2} (\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2}f)\, \widehat{\,\,\,}\, (2^{j_1}\xi '_1,2^{j_2}\xi '_2)\\&\qquad {\widehat{\Psi }}\left( \xi '_1, \xi '_2\right) \sigma (2^{j_1}\xi '_1,2^{j_2}\xi '_2) e^{2 \pi i (2^{j_1}x_1 \xi '_1+ 2^{j_2} x_2 \xi '_2)} \mathrm{d} \xi '_1 \mathrm{d} \xi '_2\\&\quad = \int _{{\mathbb {R}}}\int _{{\mathbb {R}}} (\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2}f)(2^{-j_1}y'_1,2^{-j_2}y'_2) [{\widehat{\Psi }} D_{j_1,j_2} \sigma ] \, \widehat{\,\,\,}\,(y'_1-2^{j_1}x_1, y'_2-2^{j_2}x_2 ) \mathrm{d} y'_1 \mathrm{d} y'_2\\&\quad =\int _{{\mathbb {R}}}\int _{{\mathbb {R}}} (\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2}f)(2^{-j_1}y_1+x_1,2^{-j_2}y_2+x_2) [{\widehat{\Psi }} D_{j_1,j_2} \sigma ] \, \widehat{\,\,\,}\,(y_1, y_2 ) \mathrm{d} y_1 \mathrm{d} y_2\\&\quad =\int _{{\mathbb {R}}}\int _{{\mathbb {R}}} \dfrac{\big (\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2}f\big )(2^{-j_1}y_1+x_1,2^{-j_2}y_2+x_2)}{(1+|y_1|)^{s_1} (1+|y_2|)^{s_2}} \\&\qquad (1+|y_1|)^{s_1} (1+|y_2|)^{s_2} [{\widehat{\Psi }} D_{j_1,j_2} \sigma ] \, \widehat{\,\,\,}\,(y_1, y_2 ) \mathrm{d} y_1 \mathrm{d} y_2 . \end{aligned}$$

Using Hölder’s inequality for Lorentz spaces, we obtain that the last displayed expression is bounded by the product of norms

$$\begin{aligned} \begin{aligned}&\left\Vert {\dfrac{\big (\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2}f\big )(2^{-j_1}y_1+x_1,2^{-j_2}y_2+x_2)}{(1+|y_1|)^{s_1} (1+|y_2|)^{s_2}}} \right\Vert _{ {L^{\frac{1}{s_1},\infty }}({\mathbb {R}}^2, \mathrm{d}y_1\mathrm{d}y_2)}\\&\quad \cdot \left\Vert {(1+|y_1|)^{s_1} (1+|y_2|)^{s_2} [{\widehat{\Psi }} D_{j_1,j_2} \sigma ]\, \widehat{\,\,\,}(y_1, y_2 )} \right\Vert _{ {L^{ (\frac{1}{s_1} )^\prime ,1}({\mathbb {R}}^2, \mathrm{d}y_1\mathrm{d}y_2)}}. \end{aligned} \end{aligned}$$

For the second factor, in view of the Hausdorff–Young inequality for Lorentz spaces (Lemma 5.1), we write

$$\begin{aligned}&\left\Vert {(1+|y_1|)^{s_1} (1+|y_2|)^{s_2} [ {\widehat{\Psi }} D_{j_1,j_2} \sigma ]\; \widehat{\,\,\,}\,(y_1, y_2 )} \right\Vert _{{L^{ (\frac{1}{s_1} )^\prime ,1}({\mathbb {R}}^2,\mathrm{d}y_1\mathrm{d}y_2)}} \nonumber \\&\quad \le C\left\Vert {(1+4\pi ^2|y_1|^2)^{\frac{s_1}{2}} (1+4\pi ^2 |y_2|^2)^{\frac{s_2}{2}} [{\widehat{\Psi }} D_{j_1,j_2} \sigma ] \;\widehat{\,\,\,}\, (y_1,y_2)} \right\Vert _{ {L^{ (\frac{1}{s_1} )^\prime ,1}({\mathbb {R}}^2,\mathrm{d}y_1\mathrm{d}y_2)}} \nonumber \\&\quad \le C \left\Vert {\Gamma \left( s_1,s_2\right) [ {\widehat{\Psi }} D_{j_1,j_2} \sigma ]} \right\Vert _{{L}^{\frac{1}{s_1},1}({\mathbb {R}}^2)} \nonumber \\&\quad \le CK. \end{aligned}$$
(5.2)

Pick a q such that \(\frac{1}{s_1}<q<2\). Then by Lemma 4.1, for fixed \(x_1,x_2\), we have

$$\begin{aligned} \begin{aligned}&\left\Vert {\dfrac{\big (\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2}f \big )(2^{-j_1}y_1+x_1,2^{-j_2}y_2+x_2)}{(1+|y_1|)^{s_1} (1+|y_2|)^{s_2}}} \right\Vert _{ {L^{\frac{1}{s_1},\infty }}({\mathbb {R}}^2, \mathrm{d}y_1\mathrm{d}y_2)} \\&\quad \le C {\mathcal {M}} \big (|\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2} f |^q \big )(x_1,x_2)^{\frac{1}{q}}. \end{aligned} \end{aligned}$$
(5.3)

Combining estimates (5.2) and (5.3), for \((x_1,x_2)\in {\mathbb {R}}^2\), we obtain that

$$\begin{aligned} |\Delta _{j_1}^{\psi ,1} \Delta _{j_2}^{\psi ,2} T_{\sigma }(f)\left( x_{1},x_2\right) |\le C K {\mathcal {M}} \big (|\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2} f|^q\big ) (x_1,x_2)^{\frac{1}{q}} . \end{aligned}$$

We may assume that \(p \ge 2\) as the case \(1<p<2\) follows by duality. By the lower inequality in the product-type Littlewood–Paley theorem and the Fefferman–Stein [6] inequality applied to \(L^{p/q}(\ell ^{2/q})\) (the indices satisfy \(1<2/q \le p/q <\infty \)) we write

$$\begin{aligned} \left\| T_{\sigma }(f)\right\| _{L^{p} \left( {\mathbb {R}}^{2}\right) }&\le C\bigg \Vert \bigg (\sum _{j_1,j_2 \in {\mathbb {Z}}}\left| \Delta _{j_1}^{\psi ,1} \Delta _{j_2}^{\psi ,2} T_{\sigma }(f)\right| ^{2}\bigg )^{\frac{1}{2}}\bigg \Vert _{L^{p}\left( {\mathbb {R}}^{2}\right) }\\&\le CK\bigg \Vert \bigg (\sum _{j_1,j_2 \in {\mathbb {Z}}} \left( {\mathcal {M}} (|\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2} f |^{q})\right) ^{\frac{2}{q}}\bigg )^{\frac{1}{2}}\bigg \Vert _{L^{p}\left( {\mathbb {R}}^{2}\right) }\\&\le CK\bigg \Vert \bigg (\sum _{j_1,j_2 \in {\mathbb {Z}}} \left( {\mathcal {M}} ( |\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2} f |^{q}) \right) ^{\frac{2}{q}}\bigg )^{\frac{q}{2}}\bigg \Vert _{L^{\frac{p}{q}}\left( {\mathbb {R}}^{2}\right) }^\frac{1}{q}\\&\le C'K\bigg \Vert \bigg (\sum _{j_1,j_2 \in {\mathbb {Z}}} |\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2} f |^{q\cdot \frac{2}{q}}\bigg )^{\frac{q}{2}}\bigg \Vert _{L^{\frac{p}{q}}\left( {\mathbb {R}}^{2}\right) }^\frac{1}{q}\\&\le C''K\bigg \Vert \bigg (\sum _{j_1,j_2 \in {\mathbb {Z}}} |\Delta _{j_1}^{\theta ,1} \Delta _{j_2}^{\theta ,2} f |^{2}\bigg )^{\frac{1}{2}}\bigg \Vert _{L^{p}\left( {\mathbb {R}}^{2}\right) }\\&\le C'''K \left\Vert {f} \right\Vert _{ {L}^p({\mathbb {R}}^2)} . \end{aligned}$$

The last line uses the upper inequality in the product-type Littlewood–Paley theorem. This completes the proof of Theorem 2.2 in the case \(s_1>1/2\). \(\square \)

6 Interpolation

Let us denote by \({\mathscr {C}}_0^{\infty } \) the space of smooth functions with compact support. We state an interpolation result that will be needed to complete the proof of Theorem 2.2.

Proposition 6.1

Suppose that \(1<p_0,p_1<\infty \), \(0<s^0_1, s^0_2, s^{1}_1, s^1_2 < \infty \) and \(s^k_1< s^k_2\) for \(k=0,1\). Let \(\Psi \) be a Schwartz function whose Fourier transform is supported in the square \([1/2,2]^2\) in \({\mathbb {R}}^2\). Suppose that for all \(f \in {\mathscr {C}}_0^{\infty } ({\mathbb {R}}^2)\) we have

$$\begin{aligned} \left\Vert {T_{\sigma } (f) } \right\Vert _{{L}^{p_0}({\mathbb {R}}^2)} \le C_0 \sup _{ j_1, j_2 \in {\mathbb {Z}} }\left\Vert { \Gamma \left( s^{0}_1 , s^{0}_2 \right) [ {\widehat{\Psi }} D_{j_1,j_2}\sigma ] } \right\Vert _{{L}^{\frac{1}{s^{0}_1 },1 }({\mathbb {R}}^2) } \left\Vert {f} \right\Vert _{{L}^{p_{0}}({\mathbb {R}}^2)} \end{aligned}$$

and

$$\begin{aligned} \left\Vert {T_{\sigma } (f) } \right\Vert _{{L}^{p_1}({\mathbb {R}}^2)} \le C_1 \sup _{ j_1, j_2 \in {\mathbb {Z}} }\left\Vert { \Gamma \left( s^{1}_1 , s^{1}_2 \right) [ {\widehat{\Psi }} D_{j_1,j_2}\sigma ] } \right\Vert _{{L}^{\frac{1}{s^{1}_1 },1 }({\mathbb {R}}^2) } \left\Vert {f} \right\Vert _{{L}^{p_{1}}({\mathbb {R}}^2)}. \end{aligned}$$

Then for all \(f \in {\mathscr {C}}_0^{\infty }({\mathbb {R}}^2)\) we have

$$\begin{aligned} \left\Vert {T_{\sigma } (f) } \right\Vert _{{L}^{p }({\mathbb {R}}^2)} \le C_* \sup _{ j_1, j_2 \in {\mathbb {Z}} }\left\Vert { \Gamma \left( s_1 , s_2 \right) [ {\widehat{\Psi }} D_{j_1,j_2}\sigma ] } \right\Vert _{{L}^{\frac{1}{s_1 },1 }({\mathbb {R}}^2) } \left\Vert {f} \right\Vert _{{L}^{p}({\mathbb {R}}^2)}, \end{aligned}$$

where \(C_* = C_{p_0,p_1,s_1^0,s_2^0,s_1^1,s_2^1,\theta } \,\,C_0^{1-\theta } C_1^\theta \), \(0<\theta <1\), and

$$\begin{aligned} \frac{1}{p}= \frac{1-\theta }{p_0}+\frac{\theta }{p_1},\;\; s_1=(1-\theta )s_1^0+ \theta s_1^1, \; s_2=(1-\theta )s_2^0+ \theta s_2^1 . \end{aligned}$$

The proof of this proposition is modeled after the proof of Theorem 3.1 in [12] and is omitted. Assuming the validity of Proposition 6.1, we conclude the proof of Theorem 2.2 by considering the case where \(s_1\le \frac{1}{2}\).

Proof of Theorem 2.2

Let us fix \(0<s_1\le \frac{1}{2}\). The idea of the proof is to interpolate between the points \(p_0=2\) and \(p_1=1+\varepsilon \) using Proposition 6.1. By duality we may consider only \(p \in (1,2)\). Let \(\psi \) be as in Theorem 2.2. When \(p=2\), by Plancherel’s theorem, for \(0<s_1^0<s_2^0<\infty \), we write

$$\begin{aligned} \begin{aligned}&\left\Vert {T_{\sigma }(f)} \right\Vert _{L^2({\mathbb {R}}^2)} \\&\quad = \bigg (\iint _{{\mathbb {R}}^2}\bigg |\sum _{j_1, j_2 \in {\mathbb {Z}}}{\widehat{\psi }}(2^{-j_1}\xi _1) {\widehat{\psi }}(2^{-j_2}\xi _2) \sigma (\xi _1,\xi _2 ) {\widehat{f}}(\xi _1,\xi _2 )\bigg |^2 \mathrm{d}\xi _1\mathrm{d}\xi _2 \bigg )^{\frac{1}{2}}\\&\quad \le C \sup _{j_1, j_2\in {\mathbb {Z}} }\left\Vert { ( {\widehat{\psi }} \otimes {\widehat{\psi }}\,) D_{-j_1,-j_2}\sigma } \right\Vert _{{L}^{\infty }({\mathbb {R}}^2)} \left\Vert {f} \right\Vert _{{L}^2({\mathbb {R}}^2)} \\&\quad = C \sup _{j_1 , j_2\in {\mathbb {Z}} }\left\Vert {\Gamma \left( -s_1^0 , - s_2^0 \right) \Gamma \left( s_1^0 , s_2^0 \right) \left[ ( {\widehat{\psi }} \otimes {\widehat{\psi }}\,) D_{-j_1,-j_2}\sigma \right] } \right\Vert _{L^{ \infty } ({\mathbb {R}}^2)} \left\Vert {f} \right\Vert _{{L}^2({\mathbb {R}}^2)}\\&\quad \le C \sup _{j_1 , j_2\in {\mathbb {Z}} }\left\Vert { \Gamma \left( s_1^0 , s_2^0 \right) \left[ ( {\widehat{\psi }} \otimes {\widehat{\psi }}\,) D_{-j_1,-j_2}\sigma \right] } \right\Vert _{{L}^{\frac{1}{s_1^0},1} ({\mathbb {R}}^2)} \left\Vert {f} \right\Vert _{{L}^2({\mathbb {R}}^2)}. \end{aligned} \end{aligned}$$

The first inequality is justified by the fact that the double sum has at most 9 nonzero terms, while the second inequality is a consequence of Lemma 6.2, stated and proved at the end of this section. This inequality holds for any small positive numbers \(s_1^0<s_2^0\). Now, given \(p \in (1,2)\) with \(\frac{1}{p}-\frac{1}{2}= | \frac{1}{p}-\frac{1}{2} |<s_1\), there exists a \(\tau \in (0,1)\) such that

$$\begin{aligned} \frac{1}{p}-\frac{1}{2} < \tau s_1. \end{aligned}$$
(6.1)

Set \(p_1=\frac{2}{\tau +1}\), \(s_1^1=\frac{1}{2}+\epsilon _1 <s_2^1=\frac{1}{2}+\epsilon _2\), \(\epsilon _2> \epsilon _1 >0\) to be specified later. Since \(p_1>1\) and \(\frac{1}{2}< s_1^1<s_2^1 <1\), Proposition 6.1 gives

$$\begin{aligned} \left\Vert {T_{\sigma } (f) } \right\Vert _{{L}^{p_1} ({\mathbb {R}}^2)} \le C \sup _{j_1, j_2\in {\mathbb {Z}} }\left\Vert { \Gamma \left( s^{1}_1 , s^{1}_2 \right) [ ( {\widehat{\psi }} \otimes {\widehat{\psi }}\,) D_{j_1,j_2} \sigma ] } \right\Vert _{{L}^{\frac{1}{s^{1}_1 },1 } ({\mathbb {R}}^2) } \left\Vert {f} \right\Vert _{{L}^{p_1} ({\mathbb {R}}^2)} . \end{aligned}$$

Set \(p_0=2\) and pick \(\theta \) such that

$$\begin{aligned} \frac{1}{p}= \frac{1-\theta }{2}+\frac{\theta }{p_1}. \end{aligned}$$

In our case \(\theta = \frac{2}{\tau }\left( \frac{1}{p}-\frac{1}{2}\right) \in (0,1)\) by (6.1). Now pick \(s_1^0, s_2^0\) such that

$$\begin{aligned} s_1=(1-\theta )s_1^0+ \theta s_1^1, \quad s_2=(1-\theta )s_2^0+ \theta s_2^1. \end{aligned}$$

Picking \(0<\epsilon _1<\epsilon _2\) small enough, we notice that the numbers \(s_1^0, s_2^0\) satisfy \(0<s_1^0<s_2^0\) and \(s_1^0<s_1\), \(s_2^0<s_2\). Applying Proposition 6.1 yields the desired result. \(\square \)
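For the record, the admissibility of these choices can be checked directly for the first index (the remaining constraints are verified in the same manner). Under (6.1) we have \(\frac{\theta }{2}=\frac{1}{\tau }\big (\frac{1}{p}-\frac{1}{2}\big )<s_1\), hence

```latex
s_1^0 \;=\; \frac{s_1-\theta s_1^1}{1-\theta }\;>\;0
\qquad \text{since} \qquad
\theta s_1^1 \;=\; \frac{\theta }{2}+\theta \epsilon _1 \;<\; s_1
\quad \text{for } \epsilon _1 \text{ small},
```

while \(s_1^0<s_1\) is equivalent to \(s_1<s_1^1\), which holds because \(s_1\le \frac{1}{2}<s_1^1\).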

We end this section with the lemma promised earlier.

Lemma 6.2

For \(0< s_1< s_2 <1\) we have

$$\begin{aligned} \left\Vert {\Gamma \left( -s_1 , - s_2 \right) f } \right\Vert _{{L}^{\infty }({\mathbb {R}}^2)} \le C_{s_1,s_2} \left\Vert {f} \right\Vert _{{L}^{\frac{1}{s_1} ,1 } ({\mathbb {R}}^2)}. \end{aligned}$$
(6.2)

Proof of Lemma 6.2

Recall that for \(0<s<1\) the one-dimensional kernel \(G_s\) of \((I-\partial ^2)^{-s/2}\) (called the Bessel potential) satisfies

$$\begin{aligned} 0< G_s(x) \le C_s {\left\{ \begin{array}{ll} e^{-\frac{|x|}{2}} &{}\text { if } |x|>2\\ |x|^{-1+s} &{}\text { if } |x|\le 2 \end{array}\right. } \end{aligned}$$

(see [8, 26]). It follows that \(G_s\) lies in \(L^{(1/s)',\infty }({\mathbb {R}})\). By Hölder’s inequality for Lorentz spaces we have

$$\begin{aligned} \left| \Gamma \left( - s_1 , - s_2 \right) f(x ) \right| \le C \left\Vert {G_{s_1}\otimes G_{s_2} } \right\Vert _{{L}^{(\frac{1}{s_1})',\infty } ({\mathbb {R}}^2) } \left\Vert {f} \right\Vert _{{L}^{\frac{1}{s_1},1} ({\mathbb {R}}^2) }. \end{aligned}$$

It will suffice to show that

$$\begin{aligned} \begin{aligned} \sup _{\lambda>0} \lambda \left| \{(y_1,y_2)\in {\mathbb {R}}^2: \, \, G_{s_1}(y_1) G_{s_2}(y_2) >\lambda \} \right| ^{1-s_1} \le C_{s_1,s_2}' <\infty . \end{aligned} \end{aligned}$$
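Here we use that the conjugate exponent satisfies \(\big (\frac{1}{s_1}\big )'=\frac{1}{1-s_1}\), so the displayed quantity is precisely the weak Lorentz quasi-norm appearing in the Hölder estimate above:

```latex
\big\| G_{s_1}\otimes G_{s_2} \big\|_{L^{(\frac{1}{s_1})',\infty}({\mathbb{R}}^2)}
= \sup_{\lambda>0} \lambda \left|\left\{ (y_1,y_2)\in {\mathbb{R}}^2 :\;
G_{s_1}(y_1)\, G_{s_2}(y_2)>\lambda \right\}\right|^{1-s_1} .
```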

To prove this we write

$$\begin{aligned}&\left| \{ (y_1,y_2)\in {\mathbb {R}}^2: \, G_{s_1}(y_1) G_{s_2}(y_2)>\lambda \} \right| \\&\quad \quad = \!\! \int \limits _{y_2\in {\mathbb {R}}} \left| \bigg \{ y_1 \in {\mathbb {R}} : \,\, G_{s_1}(y_1)> \frac{ \lambda }{G_{s_2}(y_2) }\bigg \} \right| \mathrm{d}y_2 \\&\quad \quad \le \frac{C}{\lambda ^{\frac{1}{1-s_1}}} \int \limits _{y_2\in {\mathbb {R}}} G_{s_2}(y_2)^{\frac{1}{1-s_1}} \mathrm{d}y_2 \\&\quad \quad = \frac{C'}{\lambda ^{\frac{1}{1-s_1}}} , \end{aligned}$$

since \(G_{s_2} (y ) \le C_{s_2} |y |^{-1+s_2} \) for \(|y |\le 2\) and has exponential decay at infinity. \(\square \)
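In more detail, the finiteness of the last integral is a routine check based on the two regimes of the kernel estimate:

```latex
\int_{\mathbb{R}} G_{s_2}(y_2)^{\frac{1}{1-s_1}}\, \mathrm{d}y_2
\;\le\; C \int_{|y_2|\le 2} |y_2|^{-\frac{1-s_2}{1-s_1}}\, \mathrm{d}y_2
\;+\; C \int_{|y_2|> 2} e^{-\frac{|y_2|}{2(1-s_1)}}\, \mathrm{d}y_2
\;<\;\infty ,
```

since \(\frac{1-s_2}{1-s_1}<1\), which is exactly the hypothesis \(s_1<s_2\).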

7 Final Remarks

We indicate why hypothesis (2.8) is indeed weaker than (2.7). Picking a smooth, compactly supported function \({\widehat{\Theta }} \) that equals 1 on the support of \({\widehat{\psi }}\otimes {\widehat{\psi }} \), matters reduce to showing the inequality

$$\begin{aligned} \big \Vert \Gamma (s_1, s_2) [ {\widehat{\Theta }} g ] \big \Vert _{L^{q,1}} \le C_{q,r} \big \Vert \Gamma (s_1, s_2) g \big \Vert _{L^{r}}, \end{aligned}$$
(7.1)

whenever \(s_1, s_2>0\) and \(1<q<r<\infty \). Note that (7.1) is quite easy if all \(s_j\) are even integers, as \(L^r (K)\) embeds in \(L^{q,1}(K)\) when K has compact support. For other values of \(s_j\) we write (7.1) in the equivalent form

$$\begin{aligned} \Big \Vert \Gamma (s_1, s_2) \big [ {\widehat{\Theta }} \, \Gamma (-s_1, -s_2)g \big ] \Big \Vert _{L^{q,1}} \le C_{q,r} \big \Vert g \big \Vert _{L^{r}}. \end{aligned}$$
(7.2)

We obtain (7.2) via complex interpolation. Let N be an even integer larger than \(\max (s_1, s_2)\). We will need the following lemma.

Lemma 7.1

Let \(1<p <\infty \). Then for any \(t_1, t_2 \in {\mathbb {R}}\) we have

$$\begin{aligned} \left\Vert {\Gamma \left( it_1, it_2\right) f} \right\Vert _{{L}^{p,1}({\mathbb {R}}^2)} \le C(p )(1+|t_1|) (1+|t_2|) \left\Vert {f} \right\Vert _{{L}^{p,1} ({\mathbb {R}}^2) }. \end{aligned}$$
(7.3)

Proof

We pick \(p_0\) and \(p_1\) such that \(p_0<p<p_1\). Then (7.3) holds with \(L^{p_0}\) and \({L}^{p_1}\) in place of \(L^{p,1}\) in view of the classical version of the Marcinkiewicz multiplier theorem. We then appeal to the off-diagonal version of the Marcinkiewicz interpolation theorem [7, 28] to conclude the proof. \(\square \)

We now replace \(s_1\) and \(s_2\) by complex numbers \(z_1\), \(z_2\), respectively, whose real parts lie in [0, N]. Using Lemma 7.1 we obtain the validity of (7.2) when \(\hbox {Re}\, z_1 =0\) or \(\hbox {Re}\, z_2 =0\). Moreover, in view of the embedding of \(L^r (K)\) into \(L^{q,1}(K)\) (when K has compact support) and of Lemma 7.1, (7.2) also holds when \(\hbox {Re}\, z_1 =N\) or \(\hbox {Re}\, z_2 =N\), and then we deduce (7.2) by a twofold application of the interpolation theorem for analytic families of operators [25]. We note that, in view of Lemma 7.1, the initial interpolating estimates have at most polynomial growth in the imaginary parts of the \(z_j\). This makes the interpolation theorem for analytic families [25] applicable.

We end by providing an example showing that there exist functions that satisfy condition (2.8) but not (2.7).

Example 7.2

[12] Let \(\beta <0\), let \(\phi \) be a smooth function supported in \([1/2,2]\cup [-2,-1/2]\), let \(a_k\in (1/2,2)\cup (-2,-1/2)\), \(k\in {\mathbb {Z}}\), and let s be a positive integer. Then the function

$$\begin{aligned} \sigma (\xi )=\sum _{k\in {\mathbb {Z}}} \phi (2^{-k}\xi ) \left( \log \frac{4e }{|2^{-k}\xi -a_k| }\right) ^\beta \end{aligned}$$
(7.4)

does not satisfy (2.7) for any \(r>1\), but it does satisfy (2.8), and hence it is an \(L^p \) Fourier multiplier on the line for any \(p\in (1,\infty )\).

Let \(\widehat{\Psi }\) be a smooth function supported in \([1/2,2]\cup [-2,-1/2]\). We fix a positive integer s and observe that for any \(j\in {\mathbb {Z}}\),

$$\begin{aligned}&\Vert (I-\partial ^2)^{\frac{s}{2}}[\widehat{\Psi }\sigma (2^j\cdot )]\Vert _{L^{\frac{1}{s},1}(\mathbb R)} \le \left\| (I-\partial ^2)^{\frac{s}{2}}\left[ \widehat{\Psi } \phi \left( \log \frac{4e }{|\cdot -a_j| }\right) ^\beta \right] \right\| _{L^{\frac{1}{s},1}({\mathbb {R}})}\\&\quad +\left\| (I-\partial ^2)^{\frac{s}{2}}\left[ \widehat{\Psi } \phi (2 (\cdot ) ) \left( \log \frac{4 e }{|2(\cdot ) -a_{j-1}| }\right) ^\beta \right] \right\| _{L^{\frac{1}{s},1}({\mathbb {R}})}\\&\quad +\left\| (I-\partial ^2)^{\frac{s}{2}}\left[ \widehat{\Psi } \phi \big (\frac{\cdot }{2}\big ) \left( \log \frac{4e }{|\frac{(\cdot ) }{2}-a_{j+1}| }\right) ^\beta \right] \right\| _{L^{\frac{1}{s},1}({\mathbb {R}})} . \end{aligned}$$

In what follows, let us deal with the first term only, since the last two terms can be estimated in a similar way.

For \(j\in {\mathbb {Z}}\) we define the function

$$\begin{aligned} f_j(\xi )=\widehat{\Psi }(\xi ) \phi (\xi ) \left( \log \frac{4e }{|\xi -a_j| }\right) ^\beta . \end{aligned}$$

Each \(f_j\) is bounded by a constant independent of j and is supported in a set of measure at most 3, hence

$$\begin{aligned} \sup _{j} \Vert f_j\Vert _{L^{\frac{1}{s},1}({\mathbb {R}})}\le C<\infty . \end{aligned}$$
(7.5)
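Indeed, for any function h bounded by M and supported in a set of measure at most 3, the Lorentz norm is controlled directly from the definition (applied here with \(p=1/s\), \(h=f_j\), and M independent of j):

```latex
\|h\|_{L^{p,1}({\mathbb{R}})}
= \int_0^{\infty} h^*(t)\, t^{\frac{1}{p}}\, \frac{\mathrm{d}t}{t}
\;\le\; M \int_0^{3} t^{\frac{1}{p}-1}\, \mathrm{d}t
\;=\; M\, p\, 3^{\frac{1}{p}} \;<\;\infty .
```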

An easy computation shows that for all \(m\in {\mathbb {Z}}^+\) we have

$$\begin{aligned} \left| \frac{\mathrm{d}^m}{\mathrm{d} \xi ^m} f_j(\xi )\right| \le C_m \left( \log \frac{4e }{|\xi -a_j| }\right) ^{\beta -1} \frac{\chi _{[1/2,2]\cup [-2,-1/2]}(\xi )}{ |\xi -a_j|^{m}} \end{aligned}$$
(7.6)

and in fact,

$$\begin{aligned} \left| \frac{\mathrm{d}^m}{\mathrm{d} \xi ^m} f_j(\xi )\right| \approx \left( \log \frac{4e }{|\xi -a_j| }\right) ^{\beta -1} \frac{1}{ |\xi -a_j|^{m}} \end{aligned}$$

for \(\xi \) near \(a_j\). This observation shows that \(f_j^{(s)}\) does not belong to \(L^r\) for any \(r>1/s\). This implies that (2.7) fails for any \(r>1\).

We now turn to the validity of (2.8). Since the support of \(f_j\) has measure at most 3, (7.6) implies that for \(m\le s\) we have

$$\begin{aligned} \left( f_j^{(m)}\right) ^*(t)\le C_m \chi _{(0,3 )}(t) \left( \log \frac{8e}{t}\right) ^{\beta -1}\frac{1}{ (t/2)^{ m}}, \end{aligned}$$

where the constant \(C_m\) is independent of j. Then for \(m\le s\) we have

$$\begin{aligned} \left( f_j^{(m)} \right) ^*(t)\le C_m' \chi _{(0,3 )}(t) \left( \log \frac{8e}{t}\right) ^{\beta -1} \frac{1}{t^{s}}. \end{aligned}$$

Consequently,

$$\begin{aligned} \sup _{1\le m\le s} \left\| \frac{\mathrm{d}^m}{\mathrm{d} x^m} f_j\right\| _{L^{\frac{1}{s},1}({\mathbb {R}})} \le C \int _0^{3}\bigg [ \left( \log \frac{8e}{t}\right) ^{\beta -1} \frac{1}{t^{s}}\bigg ] t^{s} \,\, \frac{\mathrm{d}t}{t}<\infty , \end{aligned}$$
(7.7)

since \(\beta <0\). It remains to observe that

$$\begin{aligned} \Vert (I-\partial ^2)^{\frac{s}{2}}f_j\Vert _{L^{\frac{1}{s},1}({\mathbb {R}})}\approx \sum _{m=0} ^s\left\| \frac{\mathrm{d}^m}{\mathrm{d} x^m} f_j\right\| _{L^{\frac{1}{s},1}({\mathbb {R}})}. \end{aligned}$$

This can be proved in exactly the same way as for Lebesgue spaces, see, e.g., [26, Theorem 3, Chapter 5]. Therefore, using (7.5) and (7.7) we deduce that

$$\begin{aligned} \sup _{j\in {\mathbb {Z}}} \Vert (I-\Delta )^{\frac{s}{2}}[\widehat{\Psi }\sigma (2^j\cdot )]\Vert _{L^{\frac{1}{s},1}(\mathbb R)}<\infty \end{aligned}$$

for any positive integer s. Thus condition (2.8) holds and Theorem 2.2 now yields that \(\sigma \) is an \(L^p\) Fourier multiplier for any \(p\in (1,\infty )\).
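For completeness, the convergence of the integral in (7.7) can be evaluated explicitly: the power factors in the integrand cancel, and the substitution \(u=\log \frac{8e}{t}\) gives

```latex
\int_0^{3} \Big(\log \frac{8e}{t}\Big)^{\beta-1}\, \frac{\mathrm{d}t}{t}
= \int_{\log \frac{8e}{3}}^{\infty} u^{\beta-1}\, \mathrm{d}u
= \frac{1}{-\beta}\Big(\log \frac{8e}{3}\Big)^{\beta} < \infty ,
```

precisely because \(\beta <0\).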

After this paper was written, the n-dimensional case of Theorem 2.2 was obtained in [9], and the case where some \(s_j\) may coincide was settled in [14].

I would like to express my gratitude to Eli Stein for his support and encouragement throughout the many years I have known him. His pioneering role in the development of the subject, his significant mathematical contributions, and the great legacy he left will be dearly remembered by many generations of harmonic analysts.