1 Introduction

Let \(d \ge 2\). Let \(\gamma : I \rightarrow {\mathbb {R}}^d\) be a \(C^d\) curve defined on an interval I. The restriction of the Fourier transform of f to \(\gamma \) is given by

$$\begin{aligned} \hat{f}(\gamma (t)) = \int _{{\mathbb {R}}^d} e^{-i \langle x,\gamma (t) \rangle }f(x) dx \end{aligned}$$

for Schwartz functions \(f \in \mathcal {S}({\mathbb {R}}^d)\). We are interested in the \(L^p-L^q\) estimate of the restriction of the Fourier transform:

$$\begin{aligned} \bigg ( \int _I \left| \widehat{f}(\gamma (t)) \right| ^{q} ~dt \bigg )^{1/q} \le C \Vert f \Vert _{L^{p} ({\mathbb {R}}^d)}, \end{aligned}$$
(1)

and ask for which range of p and q the estimate holds. The trivial estimate is the \(L^1-L^\infty \) estimate. The critical line for the \((p,q)\) range is \(\frac{1}{q} = \frac{d(d+1)}{2} \frac{1}{p'}\), \(q > \frac{d^2+d+2}{d^2+d}\), where \(p'\) is the Hölder conjugate exponent of p. (See [1].)
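For instance, in the plane case \(d=2\), the exponents on the critical line work out as follows (simple arithmetic from the formulas above):

```latex
% d = 2: critical line 1/q = 3/p', with q > 4/3.
% At p = (d^2+d+2)/(d^2+d) = 8/6 = 4/3 the conjugate is p' = 4,
% so 1/q = 3/4 . 1 = 3/4, i.e. q = 4/3 = p.
\frac{1}{q} \;=\; \frac{d(d+1)}{2}\,\frac{1}{p'}
\;\stackrel{d=2}{=}\; \frac{3}{p'},
\qquad p = \frac{4}{3} \;\Longrightarrow\; p' = 4,\; q = \frac{4}{3}.
```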

We are also interested in conditions on \(\gamma \) that allow the \(L^p-L^q\) estimate to hold on the critical line. The simplest case is \(\gamma (t) = (t, \frac{t^2}{2!}, \ldots , \frac{t^d}{d!})\). Zygmund [18] and Hörmander [13] showed that (1) holds on the critical line for \(d=2\), and Drury [11] showed the corresponding result for \(d\ge 3\). Christ [8] proved partial results for more general curves, and Bak et al. [4] showed that the estimate (1) holds if \(\gamma \) is nondegenerate. Now consider a curve of simple type, of the form \(\gamma (t) = (t, {\frac{t^2}{2!}}, \ldots , {\frac{t^{d-1}}{(d-1)!}}, \phi (t))\) where \(\phi \) is a \(C^d\) function. In this case, (1) may fail if \(\gamma \) is degenerate, unless we replace the Euclidean arclength measure by the affine arclength measure. Let w(t) be the weight function defined by

$$\begin{aligned} w(t) = \vert \tau _{\gamma }(t)\vert ^{\frac{2}{d^2+d}} \end{aligned}$$

where \(\tau _{\gamma } = \det (\gamma ' ~ \gamma '' ~ \ldots ~ \gamma ^{(d)})\) is the torsion of \(\gamma \). The affine arclength measure is given by w(t)dt. Thus, we will replace the estimate (1) by

$$\begin{aligned} \bigg ( \int _I \left| \hat{f}(\gamma (t)) \right| ^{q} w(t) ~dt \bigg )^{1/q} \le C \Vert f \Vert _{L^{p} ({\mathbb {R}}^d)}. \end{aligned}$$
(2)
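For a curve of simple type as above, the torsion takes a simple closed form: the matrix \((\gamma ' ~ \gamma '' ~ \ldots ~ \gamma ^{(d)})\) is lower triangular with unit diagonal in its first \(d-1\) rows, so expanding the determinant along the last column gives

```latex
\tau_{\gamma}(t)
= \det\begin{pmatrix}
1        & 0         & \cdots & 0 \\
t        & 1         & \cdots & 0 \\
\vdots   &           & \ddots & \vdots \\
\phi'(t) & \phi''(t) & \cdots & \phi^{(d)}(t)
\end{pmatrix}
= \phi^{(d)}(t),
\qquad\text{so}\qquad
w(t) = \bigl|\phi^{(d)}(t)\bigr|^{\frac{2}{d^2+d}}.
```

This identity links the hypotheses on \(\phi ^{(d)}\) in Theorem 1.1 below directly to the weight w.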

Furthermore, even though (2) fails at the endpoint \(p=q=\frac{d^2+d+2}{d^2+d}\), the restricted strong type \((p,q)\) estimate may hold:

$$\begin{aligned} \bigg ( \int _I \left| \widehat{f}(\gamma (t))\right| ^{q} w(t) ~dt \bigg )^{1/q} \le C \Vert f \Vert _{L^{p,1} (\mathbb {R}^d)}. \end{aligned}$$
(3)

Bak et al. [3] showed that (2) holds on the critical line for curves satisfying certain conditions, and in [5], they showed that the endpoint estimate (3) holds when \(\phi \) is any polynomial, with a constant \(C=C_N\) depending only on an upper bound N on the degree of the polynomial. Also, Bak and Ham [2] showed the corresponding endpoint estimate for certain complex curves \(\gamma (z) \in {\mathbb {C}}^d\) of simple type. For further results, see also [10, 16] and [17].

In this paper, we extend the result in [3] to the endpoint: we show that (3) holds for curves satisfying hypotheses that involve a certain log-concavity condition.

Theorem 1.1

Suppose \(d \ge 2\). Let \(\gamma \in C^d(I)\) be of the form

$$\begin{aligned} \gamma (t) = \left( t, \frac{t^2}{2!}, \ldots , \frac{t^{d-1}}{(d-1)!}, \phi (t)\right) \end{aligned}$$

defined on \(I=(0,1)\). Suppose that \(\phi ^{(d)}\) is positive and increasing on I. Suppose that there exists \(\delta >0\) such that \(\phi ^{(d)}\) is log-concave on \((0,\delta )\), i.e.,

$$\begin{aligned} \phi ^{(d)}(\lambda x_1 + (1-\lambda )x_2) \ge [\phi ^{(d)}(x_1)]^\lambda [\phi ^{(d)}(x_2)]^{1-\lambda } \end{aligned}$$
(4)

for all \(\lambda \in [0,1]\) and \(x_1,x_2 \in (0,\delta )\). Then, for \(p_d = (d^2 + d + 2)/(d^2 + d)\), there is a constant \(C < \infty \), depending only on d, such that for all \(f \in L^{p_d,1} (\mathbb {R}^d)\),

$$\begin{aligned} \bigg ( \int _I \left| \widehat{f}(\gamma (t))\right| ^{p_d} w(t)~ dt \bigg )^{1/p_d} \le C\Vert f \Vert _{L^{p_d,1} (\mathbb {R}^d)}. \end{aligned}$$
(5)
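As an informal numerical sanity check (not part of the proof), the following Python sketch verifies that \(p_d\) and \(q_d = (d^2+d+2)/2\) are Hölder conjugates and that the sample choice \(\phi ^{(d)}(t) = \sqrt{t}\) (an assumption for illustration only) satisfies the log-concavity inequality (4) on a grid in \((0,1)\):

```python
from fractions import Fraction

def p_d(d):
    # endpoint exponent p_d = (d^2 + d + 2) / (d^2 + d)
    return Fraction(d * d + d + 2, d * d + d)

def conjugate(p):
    # Hoelder conjugate exponent: 1/p + 1/p' = 1
    return p / (p - 1)

# q_d = p_d' = (d^2 + d + 2) / 2 for every d >= 2
for d in range(2, 12):
    assert conjugate(p_d(d)) == Fraction(d * d + d + 2, 2)

# log-concavity (4) for the sample choice phi_d(t) = sqrt(t) on a grid in (0, 1)
phi_d = lambda t: t ** 0.5
grid = [k / 20 for k in range(1, 20)]
for x1 in grid:
    for x2 in grid:
        for lam in (0.25, 0.5, 0.75):
            lhs = phi_d(lam * x1 + (1 - lam) * x2)
            rhs = phi_d(x1) ** lam * phi_d(x2) ** (1 - lam)
            assert lhs >= rhs - 1e-12
```

The choice \(\sqrt{t}\) works because \(\log \sqrt{t} = \frac{1}{2}\log t\) is concave; Sect. 5 discusses genuine examples.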

The paper is organized as follows. In Sect. 2, we establish a lower bound for a Jacobian related to an offspring curve. In Sect. 3, we collect some useful results on interpolation spaces. Section 4 is devoted to the proof of Theorem 1.1. In Sect. 5, we provide some relevant examples.

We will use the notation \(A \lesssim B\) to mean that \(A \le CB\) for some constant C depending only on d. And \(A \approx B\) means \(A \lesssim B\) and \(B \lesssim A\).

2 A Lower Bound for a Certain Jacobian

In this section, we establish a lower bound for a certain Jacobian, which plays an important role in the proof of Theorem 1.1. Before formulating the crucial proposition, we introduce some notation.

For \(d \ge 2\) and \(x =(x_1,\ldots ,x_d) \in {\mathbb {R}}^d\), let \(V_d(x)\) denote the determinant of the Vandermonde matrix:

$$\begin{aligned} V_d(x) = \prod \limits _{1 \le i < j \le d} (x_j - x_i).\end{aligned}$$
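As a quick numerical aside (not used in the proofs), the product formula for \(V_d\) agrees with the determinant of the Vandermonde matrix with rows \((1, x_i, \ldots , x_i^{d-1})\); the determinant is computed here from the Leibniz formula:

```python
from itertools import permutations

def vandermonde_product(x):
    # V_d(x) = prod over 1 <= i < j <= d of (x_j - x_i)
    out = 1.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            out *= x[j] - x[i]
    return out

def det_leibniz(m):
    # determinant via the Leibniz formula; fine for the small d used here
    n = len(m)
    total = 0.0
    for perm in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])
        prod = 1.0
        for row in range(n):
            prod *= m[row][perm[row]]
        total += (-1) ** inv * prod
    return total

x = [0.1, 0.4, 0.7, 1.3]
V = [[xi ** j for j in range(len(x))] for xi in x]  # row i: (1, x_i, x_i^2, x_i^3)
assert abs(det_leibniz(V) - vandermonde_product(x)) < 1e-12
```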

For \(0 \le t=t_1 \le \cdots \le t_d\), let \(h_i = t_{i} - t_1\). Then, \(0 = h_1 \le \cdots \le h_d\) and \(t_i = t + h_i\). Also, define

$$\begin{aligned} v(h) = V_d(h) = \prod \limits _{1 \le i < j \le d} (h_j - h_i). \end{aligned}$$

If \(\gamma : [0,1] \rightarrow {\mathbb {R}}^d\) and if \(0< t < 1-h_d\), define

$$\begin{aligned} \Gamma (t,h) = \sum \limits _{i=1}^{d} \gamma (t+h_i), \end{aligned}$$

which is called an offspring curve of \(\gamma \) for each fixed h. Let \(J_{\phi }(t,h)\) be the Jacobian determinant of \(\Gamma \):

$$\begin{aligned} J_{\phi }(t,h) = \det \left( \frac{\partial \Gamma }{\partial t}, \frac{\partial \Gamma }{\partial h_2}, \ldots , \frac{\partial \Gamma }{\partial h_d}\right) . \end{aligned}$$

Now we formulate the following proposition, which provides a lower bound for the Jacobian of the offspring curve. (See also Proposition 2.1 in [3] and Proposition 3.5 in [9].)

Proposition 2.1

Let \(J_\phi (t,h)\) be defined as above, where

\(\gamma (t) = (t,\frac{t^2}{2!}, \ldots , \frac{t^{d-1}}{(d-1)!}, \phi (t))\) satisfies the conditions in Theorem 1.1. Then, for \(t \in [0, \delta )\), \(h \in (0,\delta )^{d-1}\), and \(t+h_d<\delta \),

$$\begin{aligned} J_\phi (t,h) \ge C_d~v(h) \bigg [\prod \limits _{i=1}^{d} \phi ^{(d)} (t + h_i) \bigg ]^{1/d} \end{aligned}$$
(6)

for some constant \(C_d\) which depends only on d.

Before embarking on the proof of Proposition 2.1, we need some definitions and lemmas from [3].

Lemma 2.2

(Lemma 2.2 in [3]) Fix \(\lambda \in (0,1)\). Let \((a_i , b_i)\), \(i=1,\ldots ,N\), be intervals satisfying

$$\begin{aligned} a_i < b_i \quad \text {for } i=1, \ldots , N \quad \text {and} \quad b_i \le a_{i+1} \quad \text {for } i=1, \ldots , N-1.\end{aligned}$$

Suppose also that for \(m=1, \ldots , M\), and for \(s \in {\mathbb {R}}^N\), \(v_m(s)\) is a function having one of the three following forms:

$$\begin{aligned} v_m(s) = {\left\{ \begin{array}{ll} s_j - s_i &{} \text{ for } \text{ some } 1 \le i < j \le N, \\ d_i - s_i &{} \text{ for } \text{ some } d_i \ge b_i, \\ s_i - c_i &{} \text{ for } \text{ some } c_i \le a_i. \end{array}\right. } \end{aligned}$$

Suppose that \(\lambda _n \in (0,1)\) and \(\lambda _n \le \lambda \) for \(n=1,\ldots ,N\). Let \(\mathcal {R}_N(a,b,\lambda )\) be the region of all \(s=(s_1,\ldots ,s_N) \in {\mathbb {R}}^N\) satisfying \((1-\lambda _n)a_n + \lambda _n b_n \le s_n \le b_n\) for \(n=1,\ldots ,N\). Then

$$\begin{aligned} \int _{\mathcal {R}_N(a,b,\lambda )} \prod \limits _{m=1}^{M} v_m(s) ~ds_N \ldots ds_1 \ge C(M,\lambda )^N \int _{a_1}^{b_1} \ldots \int _{a_N}^{b_N} \prod \limits _{m=1}^{M} v_m(s) ~ds_N \ldots ds_1.\nonumber \\ \end{aligned}$$
(7)

Now, we define a function \(\zeta _d(t;h)\) recursively:

$$\begin{aligned} \zeta _2(t;h_2) = \chi _{[0,h_2]}(t) \end{aligned}$$
(8)

For \(d \ge 3\) and \(t \le h_d\), define

$$\begin{aligned} \begin{aligned} \mathfrak {R}_{d-1}(t,h) = \{x \in {\mathbb {R}}^{d-1} : \,&0 \le x_1 \le \min (t,h_2),\\&h_i \le x_i \le h_{i+1}, \, i=2,\ldots ,d-2 \\&\max (t,h_{d-1}) \le x_{d-1} \le h_d \}, \end{aligned} \end{aligned}$$
(9)

and define

$$\begin{aligned} \zeta _d(t;h) = \int _{\mathfrak {R}_{d-1}(t,h)} \zeta _{d-1}(t-u_1;u_2,\ldots ,u_{d-1}) ~du_1 \ldots du_{d-1} \end{aligned}$$
(10)

if \(t \le h_{d}\), and \(\zeta _d(t;h) = 0\) if \(t > h_{d}\).

Consider a function \(\widetilde{J}_{\phi }^d(s) : {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned} \widetilde{J}_{\phi }^d(s) = \det (\gamma '(s_1) \ldots \gamma '(s_d)). \end{aligned}$$
(11)

Notice that \(\gamma '(s_i) = (1, s_i, \ldots , (s_i)^{d-2}/(d-2)!, \phi '(s_i))\).

Observe that by simple calculation,

$$\begin{aligned} \widetilde{J}_{\phi }^d(t,t+h_2,\ldots ,t+h_d) = J_{\phi }(t,h). \end{aligned}$$

Lemma 2.3

(Lemma 2.3 in [3]) Let \(\zeta _d\) and \(\widetilde{J}_{\phi }^d(s)\) be defined by (8), (10), and (11) with \(s_1 \le \cdots \le s_d\). Then

$$\begin{aligned} \widetilde{J}_{\phi }^d(s) = \int _{s_1}^{s_d} \zeta _d(u-s_1;s_2-s_1,\ldots ,s_d-s_{d-1})\phi ^{(d)}(u)~du. \end{aligned}$$

Lemma 2.4

Suppose that \(\phi ^{(d)}\) is log-concave on \((0,\delta )\) and \(0=h_1 \le h_2 \le \cdots \le h_d\). Then,

$$\begin{aligned} \bigg [\prod \limits _{i=1}^d \phi ^{(d)}(t+h_i)\bigg ]^{1/d} \le \phi ^{(d)}(H_d(t,h)) \end{aligned}$$
(12)

where \(t+h_i \in (0,\delta )\) for \(i=1,\ldots ,d\) and \(H_d(t,h) = \frac{1}{d} \sum _{i=1}^{d}(t+h_i) \in [t,t+h_d]\).

Proof

Let \(\beta (t) = -\log [\phi ^{(d)}(t)]\). Then, \(\beta \) is convex on \((0,\delta )\). Therefore, by Jensen’s inequality,

$$\begin{aligned} \frac{1}{d} \sum \limits _{i=1}^d \beta (t_i) \ge \beta \left( \frac{1}{d} \sum \limits _{i=1}^d t_i \right) \end{aligned}$$

where \(t_i \in (0,\delta )\) for \(i=1,\ldots ,d\). It follows that

$$\begin{aligned} \exp \bigg [ \frac{1}{d} \sum \limits _{i=1}^d \beta (t_i) \bigg ] \ge \exp \bigg [\beta \left( \frac{1}{d} \sum \limits _{i=1}^d t_i \right) \bigg ], \end{aligned}$$

which implies

$$\begin{aligned} \prod \limits _{i=1}^d \bigg [ \exp \bigg ( \beta (t_i) \bigg ) \bigg ]^{1/d} \ge \exp \bigg [ \beta \left( \frac{1}{d} \sum \limits _{i=1}^d t_i \right) \bigg ]. \end{aligned}$$

Namely,

$$\begin{aligned} \prod \limits _{i=1}^d \bigg [ \exp \bigg ( -\log [ \phi ^{(d)}(t_i) ] \bigg ) \bigg ]^{1/d} \ge \exp \bigg ( -\log \left[ \phi ^{(d)} \left( \frac{1}{d} \sum \limits _{i=1}^d t_i \right) \right] \bigg ), \end{aligned}$$

which implies

$$\begin{aligned} \prod \limits _{i=1}^d [ \phi ^{(d)}(t_i) ]^{1/d} \le \phi ^{(d)} \left( \frac{1}{d} \sum \limits _{i=1}^d t_i \right) . \end{aligned}$$

Putting \(t_i=t+h_i\) for \(i=1,\ldots ,d\) (so that \(t_1=t\), since \(h_1=0\)), we get

$$\begin{aligned} \bigg [\prod \limits _{i=1}^d \phi ^{(d)}(t+h_i)\bigg ]^{1/d} \le \phi ^{(d)}(H_d(t,h)). \end{aligned}$$

\(\square \)
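The inequality (12) is easy to test numerically. The sketch below checks it for the sample log-concave choice \(\phi ^{(d)}(t) = \sqrt{t}\) (an assumption for illustration) with \(d=4\) on a grid of offsets \(h\):

```python
import itertools, math

phi_d = lambda t: math.sqrt(t)   # sample log-concave choice (illustration only)

d = 4
t = 0.1
grid = [k / 10 for k in range(5)]            # offsets in [0, 0.4]
for h in itertools.product(grid, repeat=d - 1):
    hs = (0.0,) + tuple(sorted(h))           # 0 = h_1 <= h_2 <= ... <= h_d
    pts = [t + hi for hi in hs]              # all points stay inside (0, 1)
    geo_mean = math.prod(phi_d(s) for s in pts) ** (1.0 / d)
    H = sum(pts) / d                         # H_d(t, h), the average of the t + h_i
    assert geo_mean <= phi_d(H) + 1e-12      # inequality (12)
```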

Proof of Proposition 2.1

We adapt the proof of Proposition 2.1 in [3].

We will use both notations \(t_i\) and \(t+h_i\), where \(t_i = t+h_i\) for \(0=h_1 \le h_2 \le \cdots \le h_d\).

$$\begin{aligned} J_{\phi }(t,h)&= \widetilde{J}_{\phi }^d(t,t+h_2,\ldots ,t+h_d) \\&= \int _t^{t+h_d} \zeta _d(u-t;h)~\phi ^{(d)}(u)~du \nonumber \\&\ge \int _{H_d(t,h)}^{t+h_d}\zeta _d(u-t;h)~\phi ^{(d)}(u)~du. \end{aligned}$$

The equality follows from Lemma 2.3, and the inequality follows from the nonnegativity of the integrand. Since \(\phi ^{(d)}\) is increasing,

$$\begin{aligned} J_{\phi }(t,h) \ge \phi ^{(d)}(H_d(t,h)) \int _{H_d(t,h)}^{t+h_d}\zeta _d(u-t;h)~du. \end{aligned}$$
(13)

We will show that

$$\begin{aligned} \int _{H_d(t,h)}^{t+h_d}\zeta _d(u-t;h)~du \ge c_d~v(h). \end{aligned}$$
(14)

To show (14), we will use induction on \(d\ge 2\).

It is easy to verify that (14) holds in the case \(d=2\) with \(c_2=1/2\). Suppose that (14) holds for \(d-1\ge 2\). Consider a function \(\pi \) such that

$$\begin{aligned} \pi ^{(d)}(u) = \chi _{\{u\ge \bar{t} \}}(u) \end{aligned}$$

where \(\bar{t}=\frac{1}{d}(t_1+\cdots +t_d)\). Observe that

$$\begin{aligned} \partial _{t_1} \ldots \partial _{t_{d-1}} \widetilde{J}_\phi ^d (t_1, \ldots , t_d)&= (-1)^{d+1} \det ( \gamma ''(t_1) \ldots \gamma ''(t_{d-1}) ) \\&= (-1)^{d+1}\widetilde{J}_{\phi '}^{d-1} (t_1, \ldots , t_{d-1}). \end{aligned}$$

Since \(\widetilde{J}_\phi ^d(t) = 0\) if \(t_i = t_{i+1}\), we get

$$\begin{aligned}&\widetilde{J}_\phi ^d(t_1, \ldots , t_d) \nonumber \\&= (-1)^{d-1} \int _{t_1} ^{t_2} \ldots \int _{t_{d-1}} ^{t_d} \partial _{s_1} \ldots \partial _{s_{d-1}} \widetilde{J}_\phi ^d (s_1, \ldots , s_{d-1}, t_d) ~ds_{d-1} \ldots ds_1 \nonumber \\&= \int _{t_1} ^{t_2} \ldots \int _{t_{d-1}} ^{t_d} \widetilde{J}_{\phi '}^{d-1} (s_1, \ldots , s_{d-1}) ~ds_{d-1} \ldots ds_1. \end{aligned}$$
(15)

By applying (15) and Lemma 2.3, we get

$$\begin{aligned}&\int _{H_d(t,h)}^{t+h_d}\zeta _d(u-t;h)~du = \widetilde{J}_\pi ^d(t_1,\ldots ,t_d) \\&\quad =\int _{t_1}^{t_2} \ldots \int _{t_{d-1}}^{t_d} \int _{s_1}^{s_{d-1}} \chi _{\{u \ge \bar{t} \} }(u)\\&\qquad \times \zeta _{d-1}(u-s_1;s_2-s_1,\ldots ,s_{d-1}-s_{d-2})~du~ds_{d-1} \ldots ds_1. \end{aligned}$$

Let \(\lambda _i = \frac{d-i}{d}\). Note that if \(s_i \ge \lambda _it_i + (1-\lambda _i)t_{i+1}\), then \(\bar{s}=\frac{1}{d-1}(s_1 + \cdots + s_{d-1}) \ge \frac{1}{d}(t_1+\cdots +t_d)=\bar{t},\) so \(\chi _{\{u \ge \bar{t} \} }(u) \ge \chi _{\{u \ge \bar{s} \} }(u)\). Therefore,

$$\begin{aligned}&\int _{H_d(t,h)}^{t+h_d}\zeta _d(u-t;h)~du \nonumber \\ {}&\ge \int _{\lambda _1t_1 + (1-\lambda _1)t_2 }^{t_2} \ldots \int _{\lambda _{d-1}t_{d-1} + (1-\lambda _{d-1})t_d}^{t_d} \int _{s_1}^{s_{d-1}} \chi _{\{u \ge \bar{s} \} }(u)\nonumber \\&\quad \times \zeta _{d-1}(u-s_1;s_2-s_1,\ldots ,s_{d-1}-s_{d-2}) ~du~ ds_{d-1} \ldots ds_1. \end{aligned}$$
(16)

By the induction hypotheses, we get the inequality

$$\begin{aligned}&\int _{s_1}^{s_{d-1}} \chi _{\{u \ge \bar{s} \} }(u) \zeta _{d-1} (u-s_1;s_2-s_1,\ldots ,s_{d-1}-s_{d-2})~du \nonumber \\&\quad \ge c_{d-1}V_{d-1}(s_1,\ldots ,s_{d-1}). \end{aligned}$$
(17)

By (16) and (17), we have

$$\begin{aligned}&\int _{H_d(t,h)}^{t+h_d}\zeta _d(u-t;h)~du \\&\quad \ge c_{d-1} \int _{\lambda _1t_1 + (1-\lambda _1)t_2 }^{t_2} \ldots \int _{\lambda _{d-1}t_{d-1} + (1-\lambda _{d-1})t_d}^{t_d} V_{d-1}(s_1,\ldots ,s_{d-1})~ds_{d-1}\ldots ds_1. \end{aligned}$$

Using the fact that \(V_{d-1}\) is of the form \(\prod v_m(s)\) in Lemma 2.2, and

$$\begin{aligned} V_d(t_1,\ldots ,t_d)= (d-1)!\int _{t_1}^{t_2} \ldots \int _{t_{d-1}}^{t_d} V_{d-1}(s_1,\ldots ,s_{d-1})~ds_{d-1} \ldots ds_1, \end{aligned}$$

we get the inequality (14) (see [3, p. 9]). If we apply (12) and (14) to (13), we obtain (6). \(\square \)
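The Vandermonde integral identity used in the last step can be checked numerically for \(d=3\), where \(V_3(t_1,t_2,t_3) = 2!\int _{t_1}^{t_2}\int _{t_2}^{t_3} (s_2-s_1)~ds_2 ds_1\). The sketch below evaluates the iterated integral by a midpoint rule (exact here up to rounding, since the integrand is linear in each variable):

```python
def iterated_integral(t1, t2, t3, n=200):
    # midpoint rule for int_{t1}^{t2} int_{t2}^{t3} (s2 - s1) ds2 ds1
    d1, d2 = (t2 - t1) / n, (t3 - t2) / n
    total = 0.0
    for i in range(n):
        s1 = t1 + (i + 0.5) * d1
        for j in range(n):
            s2 = t2 + (j + 0.5) * d2
            total += s2 - s1
    return total * d1 * d2

t1, t2, t3 = 0.1, 0.5, 0.9
V3 = (t2 - t1) * (t3 - t1) * (t3 - t2)        # V_3(t1, t2, t3)
assert abs(2 * iterated_integral(t1, t2, t3) - V3) < 1e-9
```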

3 Preliminaries on Interpolation Spaces

In this section, we provide some definitions and lemmas established in [5], which are needed to prove Theorem 1.1. Let \(\bar{X} = (X_0, X_1)\) be a compatible couple of quasi-normed spaces \(X_0\) and \(X_1\), i.e., both \(X_0\) and \(X_1\) are continuously embedded in the same topological vector space. We can define both the K-functional on \(X_0 + X_1\), given by

$$\begin{aligned} K(f,t,\bar{X}) = \inf _{f=f_0 + f_1} ( \Vert f_0 \Vert _{X_0} + t \Vert f_1 \Vert _{X_1} ), \end{aligned}$$

and the J-functional on \(X_0 \cap X_1\), given by

$$\begin{aligned} J(f,t,\bar{X}) = \max (\Vert f \Vert _{X_0}, t\Vert f \Vert _{X_1}). \end{aligned}$$

For \(0<\theta <1\), let the interpolation space \(\bar{X}_{\theta , q}\) be a subspace of \(X_0 + X_1\), where

$$\begin{aligned} \Vert f \Vert _{\bar{X}_{\theta , q}} = {\left\{ \begin{array}{ll} \bigg (\sum \limits _{n \in {\mathbb {Z}}} [2^{-n\theta } K(f,2^n,\bar{X})]^q \bigg )^{1/q} &{} { 1 \le q < \infty ,} \\ \sup \limits _{n \in {\mathbb {Z}}} 2^{-n\theta } K(f,2^n,\bar{X}) &{} {q= \infty } \end{array}\right. } \end{aligned}$$

is finite. Then, \(X_0 \cap X_1\) is dense in \(\bar{X} _{\theta ,q}\) when \(1 \le q < \infty \), so we can give an equivalent norm \(\Vert \cdot \Vert _{\bar{X}_{\theta , q; J}}\) on \(\bar{X} _{\theta ,q}\) by

$$\begin{aligned} \Vert f \Vert _{\bar{X}_{\theta , q; J}} = \inf \bigg (\sum \limits _{n \in {\mathbb {Z}}} [2^{-n\theta } J(f_n,2^n,\bar{X})]^q \bigg )^{1/q}, \end{aligned}$$

where the infimum is taken over \(f=\sum f_n\) and \(f_n \in X_0 \cap X_1\), with convergence in \(X_0+X_1\). Note that \(\Vert \cdot \Vert _{\bar{X}_{\theta , q}}\) and \(\Vert \cdot \Vert _{\bar{X}_{\theta , q; J}}\) are equivalent when \(0< \theta < 1\). (For details, see Theorem 3.11.3 in [6].)

To present some lemmas, we introduce some definitions. Let \(0<r\le 1\). A quasi-normed space X is called \(r\)-convex if there exists a constant \(C>0\) such that

$$\begin{aligned} \Big \Vert \sum \limits _{i=1}^{n} x_i \Big \Vert _X \le C \left( \sum \limits _{i=1}^{n}\Vert x_i \Vert _X ^r \right) ^{1/r} \end{aligned}$$

for any finite collection \(x_1,\ldots ,x_n \in X\). Kalton [14] and Stein et al. [15] showed that the Lorentz space \(L^{r,\infty }\) is \(r\)-convex for \(0<r<1\).

For a quasi-normed space X, let \(\ell _s ^p (X)\) be a sequence space whose element \(\{f_n\}\) is X-valued and satisfies

$$\begin{aligned} \left( \sum \limits _{n \in {\mathbb {Z}}} 2^{nsp} \Vert f_n \Vert _X ^p \right) ^{1/p} < \infty . \end{aligned}$$

We can also define a function space \(b_s^p(X;dw)\), where w is a weight function and X is Lorentz space on an interval I, such that \(f \in b_s^p(X;dw)\) implies \(\{ \chi _{\mathcal {W}_{w,n}} f \}_{n \in {\mathbb {Z}}} \in \ell _s ^p(X)\), i.e.,

$$\begin{aligned} \Vert f \Vert _{b_s^p(X;dw)} = \left( \sum \limits _{n \in {\mathbb {Z}}} 2^{nsp} \Vert \chi _{\mathcal {W}_{w,n}} f \Vert _X ^p \right) ^{1/p} < \infty , \end{aligned}$$

where \(\mathcal {W}_{w,n} = \{ t \in I:2^n \le w(t) < 2^{n+1} \}\).

Then, by definition, \(b_{1/p}^p(L^p;dw)=L^p(I;dw)\) with equivalent norms.
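The identification \(b_{1/p}^p(L^p;dw)=L^p(I;dw)\) can be seen concretely: since \(2^n \le w < 2^{n+1}\) on \(\mathcal {W}_{w,n}\), the sum \(\sum _n 2^n \int _{\mathcal {W}_{w,n}} |f|^p~dt\) lies within a factor 2 of \(\int _I |f|^p w~dt\). The following Python sketch (with the sample choices \(w(t)=t^2\) and \(f(t)=\sin 3t + 1.1\), assumptions for illustration) confirms this numerically:

```python
import math

w = lambda t: t * t                  # sample weight on I = (0, 1)
f = lambda t: math.sin(3 * t) + 1.1  # sample function, bounded away from 0

N = 20000
dt = 1.0 / N
p = 4.0 / 3.0                        # p_2 for d = 2
mids = [i * dt + dt / 2 for i in range(N)]

# L^p(I; dw) norm (p-th power), by a midpoint Riemann sum
lp_w = sum(abs(f(t)) ** p * w(t) for t in mids) * dt

# b_{1/p}^p(L^p; dw) norm (p-th power): on W_{w,n} = {t : 2^n <= w(t) < 2^(n+1)}
# the weight w is replaced by the dyadic level 2^n
b_pow = sum(2.0 ** math.floor(math.log2(w(t))) * abs(f(t)) ** p * dt for t in mids)

# 2^n <= w < 2^(n+1) on each level set, so the two quantities agree within a factor 2
assert b_pow <= lp_w <= 2 * b_pow
```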

Now, we state some lemmas that will be helpful in proving Theorem 1.1.

Lemma 3.1

(Lemma A.3 in [5]) Let \(0<r \le 1\) and let V be an \(r\)-convex space. For \(i=1,\ldots ,n\), let

$$\begin{aligned} \bar{X}^i = (X_0^i, X_1^i) \end{aligned}$$

be couples of compatible quasi-normed spaces and let \(\mathcal {M}\) be an n-linear operator defined on \(\prod _{i=1}^n (X_0^i \cap X_1^i)\) with values in V. Suppose that

$$\begin{aligned} \Vert \mathcal {M}(f_1,\ldots ,f_n) \Vert _V \le \prod \limits _{i=1}^n \Vert f_i \Vert _{X_0^i}^{1-\theta _i} \Vert f_i \Vert _{X_1^i}^{\theta _i} \end{aligned}$$

for \(0<\theta _i <1\) for all i. Then there is \(C>0\) such that for all \((f_1,\ldots ,f_n) \in \prod _{i=1}^n (X_0^i \cap X_1^i)\),

$$\begin{aligned} \Vert \mathcal {M}(f_1,\ldots ,f_n) \Vert _V \le \prod \limits _{i=1}^n \Vert f_i \Vert _{\bar{X}_{\theta _i,r}^i} \end{aligned}$$

and \(\mathcal {M}\) extends to a bounded operator on \(\prod _{i=1}^n \bar{X}_{\theta _i,r}^i\).

Lemma 3.2

(Theorem 1.3 in [5]) Let \(c_1,\ldots ,c_n \in {\mathbb {R}}\) with \(c_1 \ne c_i\) for \(i = 2, \ldots , n\). Let \(0 < r \le 1\), and let \(\bar{X} = (X_0,X_1)\) be a compatible couple of complete quasi-normed spaces. Let V be an \(r\)-convex space, let \(\mathcal {M}\) be an n-linear operator defined on \(X_0+X_1\), and let w be a weight function. Suppose

$$\begin{aligned} \Vert \mathcal {M}[f_1, \ldots , f_n] \Vert _V \le \Vert f_1 \Vert _{b_{c_1}^r(X_1;dw)} \prod \limits _{i =2} ^n \Vert f_i \Vert _{b_{c_i}^r (X_0;dw)}. \end{aligned}$$

Then,

$$\begin{aligned} \Vert \mathcal {M}[f_1, \ldots , f_n] \Vert _V \lesssim \prod \limits _{i=1}^n \Vert f_i \Vert _{b_{c}^{nr} \big (\bar{X}_{\frac{1}{n}, nr};dw\big )} \end{aligned}$$

where \(c = \frac{1}{n} \sum _{i=1}^n c_i\).

Lemma 3.3

(Lemma A.4 in [5]) Let \(0<p \le \infty \), \(s_0,s_1 \in {\mathbb {R}}\), and \(0< \theta < 1\). Let \((X_0,X_1)\) be a compatible couple of quasi-normed spaces. If \(p \le q \le \infty \), then there is the continuous embedding

$$\begin{aligned} \ell _s^p ((X_0,X_1)_{\theta ,q}) \hookrightarrow (\ell _{s_0}^p(X_0), \ell _{s_1}^p(X_1))_{\theta ,q} \end{aligned}$$

for \(s=(1-\theta )s_0 + \theta s_1\).

In fact, \(b_s^p(X)\) is a retract of \(\ell _s^p(X)\). Define \(r : \ell _s^p(X) \rightarrow b_s^p(X)\) by \(r(\{f_n\})= \sum _{n \in {\mathbb {Z}}} \chi _{\mathcal {W}_{w,n}} f_n\) and \(i : b_s^p(X) \rightarrow \ell _s^p(X)\) by \([i(f)]_n = \chi _{\mathcal {W}_{w,n}} f \). Then, \(r \circ i\) is the identity operator on \(b_s^p(X)\). Therefore, Lemma 3.3 implies that there is the continuous embedding

$$\begin{aligned} b_s^p ((X_0,X_1)_{\theta ,q}) \hookrightarrow (b_{s_0}^p(X_0), b_{s_1}^p(X_1))_{\theta ,q} \end{aligned}$$

under the hypotheses of Lemma 3.3.

4 Proof of Theorem 1.1

The interval \(I = (0,1)\) can be decomposed into \((0,\delta ) \cup [\delta , 1)\). Since \(\phi ^{(d)}\) is positive and increasing on I, \(\gamma (t)\) is nondegenerate if \(t \in [\delta ,1)\) for any \(0<\delta <1\). Then, by Theorem 1.4 in [4], Theorem 1.1 holds on \([\delta ,1)\). Therefore, it is enough to show that Theorem 1.1 holds on \((0,\delta )\), if \(\gamma \) satisfies the log-concavity property (4) for some \(\delta > 0\) and \(\phi ^{(d)}\) is positive and increasing on \((0,\delta )\). Let \(q_d=p_d'= \frac{d^2+d+2}{2}\) and \(I=(0,\delta )\).

Definition 4.1

Let \(\mathfrak {C}\) be the class of curves \(\gamma \) on I of the form \(\gamma (t)=(t, {\frac{t^2}{2!}}, \ldots , {\frac{t^{d-1}}{(d-1)!}}, \phi (t))\), where \(\phi \in C^d(I)\) and \(\phi ^{(d)}\) is positive, increasing, and log-concave on I.

Consider the adjoint operator \(T_w\) given by

$$\begin{aligned} T_w g(x) = \int _I e^{-i\langle x,\gamma (t)\rangle }g(t)w(t)~dt, \end{aligned}$$

and define \(\mathcal {C}\) by

$$\begin{aligned} \mathcal {C} = ~\sup _{\gamma \in \mathfrak {C}} ~\sup _{\Vert g \Vert _{L^{q_d}(I;dw)} \le 1} \Vert T_w g \Vert _{L^{q_d, \infty }} ^{**} \end{aligned}$$
(18)

where \(\Vert f \Vert _{L^{q_d, \infty }} ^{**} = \sup _{t>0} t^{1/q_d} f^{**}(t)\) and \(f^{**}\) is the maximal function of the nonincreasing rearrangement of f.

The proof is an adaptation of the proof of Theorem 4.2 in [5]. We will prove an \(L^2\)-estimate and an \((L^{q_d},~L^{q_d,\infty })\)-estimate for a d-linear operator \(\mathcal {M}\) constructed from \(T_w\); using a technique introduced in [7] together with these two estimates, we will obtain a suitable bound for the \(L^{q_d/d,\infty }\) norm of \(\mathcal {M}\). Then, we can derive an estimate for a multilinear operator \(\widetilde{\mathcal {M}}\) using Lemmas 3.1–3.3, and show that \(\mathcal {C}\) is bounded by a constant depending only on d.

Define a d-linear operator \(\mathcal {M}\) by

$$\begin{aligned} \mathcal {M}[g_1,\ldots ,g_d](x)&= \prod \limits _{i=1}^d T_w g_i (x) \\&= \int _{I^d} e^{-i\Big \langle x, \sum \limits _{i=1}^d \gamma (t_i) \Big \rangle } \prod \limits _{i=1}^d[g_i(t_i)w(t_i)]~dt_1 \ldots dt_d. \end{aligned}$$

Let \(I^d = \bigcup E_{\pi }\) where

$$\begin{aligned} E_{\pi } = \{ (t_1, \ldots , t_d) \in I^d : t_{\pi (1)} \le \cdots \le t_{\pi (d)} \} \end{aligned}$$

and \(\pi \) ranges over the permutations of \(\{1,\ldots ,d\}\). Then, without loss of generality, we can assume \(t_1 \le \cdots \le t_d\), so that the operator \(\mathcal {M}\) is defined on \(E = E_1 := \{ (t_1, \ldots , t_d) \in I^d : t_1 \le \cdots \le t_d \}\). Therefore, redefine the operator \(\mathcal {M}\) by

$$\begin{aligned} \mathcal {M}[g_1,\ldots ,g_d](x)=\int _{E} e^{-i\langle x, \Gamma (t,h)\rangle } G(t,h)W(t,h)~dt dh \end{aligned}$$

where \(G(t,h) = \prod _{i=1}^d g_i(t+h_i)\), \(W(t,h) = \prod _{i=1}^d w(t+h_i)\), \(h \in I^{d-1}\), and \(t+h_d < \delta \). Divide E into \(F_k, k \in {\mathbb {Z}}\), where

$$\begin{aligned} F_k = \{ (t, t+h_2, \ldots , t+h_d) \in E : 2^{-(k+1)} < v(h) \le 2^{-k} \}, \end{aligned}$$

and define

$$\begin{aligned} \mathcal {M}_k [g_1, \ldots , g_d](x) = \int _{F_k} e^{-i\langle x, \Gamma (t,h)\rangle } G(t,h)W(t,h)~dt dh. \end{aligned}$$
(19)

We will obtain an upper bound for \(\mathcal {M}_k\).

\(\mathbf {L^2-estimate}\)    By the change of variables \(\Gamma (t,h) \rightarrow y\), Plancherel’s theorem, and the change of variables \(y \rightarrow \Gamma (t,h)\), we get

$$\begin{aligned} \Vert \mathcal {M}_k [g_1, \ldots , g_d] \Vert _2^2 \lesssim \int _{F_k} \vert G(t,h)W(t,h)\vert ^2 J_\phi (t,h)^{-1} ~dt dh. \end{aligned}$$

Observe that \(J_\phi (t,h)\) is nonzero on \(F_k\). Moreover, by [9], the change of variables \(\Gamma (t,h) \rightarrow y\) is at most d!-to-one, so the change of variables is justified.

Since \(\gamma \in \mathfrak {C}\), Proposition 2.1 holds, so we get the inequality

$$\begin{aligned} J_\phi (t,h) \ge C_d~v(h) \bigg [\prod \limits _{i=1}^{d} \phi ^{(d)} (t + h_i) \bigg ]^{1/d} \end{aligned}$$
(20)

for some \(C_d>0\), which depends only on d. By (20) and the definition of w, we get

$$\begin{aligned} \Vert \mathcal {M}_k [g_1, \ldots , g_d] \Vert _2^2 \lesssim \int _{F_k} \vert G(t,h)W(t,h)\vert ^2 v(h)^{-1}W(t,h)^{-(d+1)/2} ~dt ~dh. \end{aligned}$$

It is known (Lemma 1 of [12]) that the sublevel set estimate for v(h) is

$$\begin{aligned} \vert \{h \in {\mathbb {R}}^{d-1} : v(h) \le c \}\vert \lesssim c^{2/d}. \end{aligned}$$

Taking \(c=2^{-k}\), we get

$$\begin{aligned} \Vert \mathcal {M}_k [g_1, \ldots , g_d] \Vert _2^2 \lesssim 2^{k \frac{d-2}{d}}\int _{F_k} \vert G(t,h)[W(t,h)]^\frac{(3-d)}{4}\vert ^2 ~dt ~dh. \end{aligned}$$
(21)

Also, we can get the following inequality,

$$\begin{aligned} \bigg [ \int _{F_k} \vert G(t,h)[W(t,h)]^\frac{(3-d)}{4}\vert ^2 ~dt dh \bigg ]^{1/2} \le \Vert g_j w^{\frac{3-d}{4}} \Vert _2 \prod \limits _{i\ne j} \Vert g_iw^{\frac{3-d}{4}} \Vert _{\infty } \end{aligned}$$

for any \(j=1,\ldots ,d\). Complex interpolation and (21) lead to

$$\begin{aligned} \Vert \mathcal {M}_k [g_1, \ldots , g_d] \Vert _2 \lesssim 2^{k \frac{d-2}{2d}} \prod \limits _{i=1}^d \Vert g_i w^{\frac{3-d}{4}} \Vert _{r_i} \end{aligned}$$

with \(\sum _{i=1}^d r_i^{-1} = \frac{1}{2}\). Finally, putting \(r_i = 2d\), we obtain

$$\begin{aligned} \Vert \mathcal {M}_k [g_1, \ldots , g_d] \Vert _2 \lesssim 2^{k \frac{d-2}{2d}} \prod \limits _{i=1}^d \Vert g_i w^{\frac{3-d}{4}} \Vert _{2d}. \end{aligned}$$
(22)

\(\mathbf {(L^{q_d}, L^{q_d,\infty })-estimate}\)    Fix h and let \(I_h = (0,\delta - h_d)\). Observe that \(\Gamma (\cdot ,h) \in \mathfrak {C}\). Then,

$$\begin{aligned} \bigg \Vert \int _{I_h} e^{-i\langle \cdot ,\Gamma _h(t)\rangle }g(t)w_{\Gamma }(t) ~dt \bigg \Vert _{L^{q_d,\infty }} \le \mathcal {C} \Vert g \Vert _{L^{q_d}(I_h;dw_{\Gamma })} \end{aligned}$$

with \(w_{\Gamma }(t) = \vert \tau _{\Gamma }(t)\vert ^{\frac{2}{d^2+d}}\). Furthermore, observe that if \(w_\epsilon (t) \le w(t)\), then we can write \(w_\epsilon (t) = \epsilon (t) w(t)\) with \(0 \le \epsilon \le 1\) and

$$\begin{aligned} \bigg \Vert \int _{I_h} e^{-i\langle \cdot ,\gamma (t)\rangle }g(t)w_\epsilon (t) ~dt \bigg \Vert _{L^{q_d,\infty }}&= \bigg \Vert \int _{I_h} e^{-i\langle \cdot ,\gamma (t)\rangle }g(t)\epsilon (t)w(t) ~dt \bigg \Vert _{L^{q_d,\infty }} \nonumber \\&\lesssim \mathcal {C} \bigg [ \int _{I_h} \vert g(t) \epsilon (t) \vert ^{q_d} w(t) ~dt \bigg ]^{1/q_d} \nonumber \\&\le \mathcal {C} \bigg [ \int _{I_h} \vert g(t)\vert ^{q_d} \epsilon (t)w(t) ~dt \bigg ]^{1/q_d}\nonumber \\&=\mathcal {C} \bigg [ \int _{I_h} \vert g(t)\vert ^{q_d} w_{\epsilon }(t) ~dt \bigg ]^{1/q_d}. \end{aligned}$$
(23)

Also, for \(\sum _{i=1}^d \epsilon _i = 1\) with \(\epsilon _i \ge 0\), let \(w_{\epsilon ,h}(t) = \prod _{i=1}^d w(t+h_i)^{\epsilon _i}\). Then, by the positivity of \(\phi ^{(d)}\) and Jensen’s inequality for the convex function \(-\log \),

$$\begin{aligned} -\log \left( \sum \limits _{i=1}^d \phi ^{(d)}(t+h_i) \right)&\le -\log \bigg ( \frac{\sum \epsilon _i \phi ^{(d)}(t+h_i)}{\sum \epsilon _i} \bigg ) \nonumber \\ {}&\le \sum \limits _{i=1}^d \epsilon _i \bigg (-\log ( \phi ^{(d)}(t+h_i) ) \bigg ) \nonumber \\&= -\log \bigg ( \prod \limits _{i=1}^d \phi ^{(d)}(t+h_i) ^{\epsilon _i} \bigg ), \end{aligned}$$
(24)

so, exponentiating and raising to the power \(\frac{2}{d^2+d}\), we get \(w_{\epsilon ,h}\le w_{\Gamma }\).

By (23) and (24), we get

$$\begin{aligned} \bigg \Vert \int _{I_h} e^{-i\langle \cdot ,\Gamma _h(t)\rangle }g(t)w_{\epsilon ,h}(t) ~dt \bigg \Vert _{L^{q_d,\infty }} \lesssim \mathcal {C} \bigg [ \int _{I_h} \vert g(t)\vert ^{q_d} w_{\epsilon ,h}(t) ~dt \bigg ]^{1/q_d}. \end{aligned}$$

Substituting \(G(t,h) \frac{W(t,h)}{w_{\epsilon ,h}(t)}\) for g(t), we obtain

$$\begin{aligned} \begin{aligned}&\left\| \int _{I_h} e^{-i\langle \cdot ,\Gamma _h(t)\rangle }G(t,h)W(t,h) ~dt \right\| _{L^{q_d,\infty }} \\&\qquad \lesssim \mathcal {C} \bigg [ \int _{I_h} \left| G(t,h)\frac{W(t,h)}{w_{\epsilon ,h}(t)} \right| ^{q_d}w_{\epsilon ,h}(t) ~dt \bigg ]^{1/q_d}. \end{aligned} \end{aligned}$$
(25)

So we have

$$\begin{aligned} \bigg \Vert \mathcal {M}_k[g_1,\ldots ,g_d] \bigg \Vert _{L^{q_d,\infty }}&= \bigg \Vert \int _{F_k} e^{-i\langle x, \Gamma (t,h)\rangle } G(t,h)W(t,h)~dt dh \bigg \Vert _{L^{q_d,\infty }} \\ {}&\le \int _H \bigg \Vert \int _{I_h} e^{-i\langle x, \Gamma (t,h)\rangle } G(t,h)W(t,h)~dt \bigg \Vert _{L^{q_d,\infty }} ~dh \\&\lesssim \mathcal {C} \int _H \bigg [ \int _{I_h} \bigg \vert G(t,h)\frac{W(t,h)}{w_{\epsilon ,h}(t)}\bigg \vert ^{q_d}w_{\epsilon ,h}(t) ~dt \bigg ]^{1/q_d} ~dh \end{aligned}$$

where \(H=\{(h_1,\ldots ,h_d) \in I^d: 0=h_1 \le h_2 \le \cdots \le h_d,~2^{-(k+1)} < v(h) \le 2^{-k} \}\) and the last expression is bounded by

$$\begin{aligned} \mathcal {C}&\int _H \bigg [ \int _{I_h} \bigg \vert g_1(t)w(t)^{1-\frac{\epsilon _1}{p_d}} \prod \limits _{i=2}^d g_i(t+h_i) w(t+h_i)^{1-\frac{\epsilon _i}{p_d}} \bigg \vert ^{q_d} ~dt \bigg ]^{1/q_d} ~dh \end{aligned}$$

where \(p_d'=q_d\). Since \(H \subset \{h \in {\mathbb {R}}^{d-1} : v(h) \le 2^{-k}\}\), the sublevel set estimate for v(h) gives \(\vert H \vert \lesssim 2^{-2k/d}\). Since \(q_d' = p_d\), we get

$$\begin{aligned} \bigg \Vert \mathcal {M}_k[g_1,\ldots ,g_d] \bigg \Vert _{L^{q_d,\infty }}&\lesssim \mathcal {C} \int _H \bigg [ \int _{I_h} \bigg \vert g_1(t)w(t)^{1-\frac{\epsilon _1}{p_d}} \bigg \vert ^{q_d} dt \bigg ]^{1/q_d} \\&\quad \quad \times \prod \limits _{i=2}^d \Vert g_i(\cdot +h_i) w(\cdot +h_i)^{1-\frac{\epsilon _i}{p_d}} \Vert _{\infty } ~dh \\&\lesssim 2^{-2k/d} \mathcal {C} \Vert g_1w^{1-\frac{\epsilon _1}{p_d}} \Vert _{q_d} \prod \limits _{i=2}^d \Vert g_i w^{1-\frac{\epsilon _i}{p_d}} \Vert _{\infty }. \end{aligned}$$

By symmetry,

$$\begin{aligned} \bigg \Vert \mathcal {M}_k[g_1,\ldots ,g_d] \bigg \Vert _{L^{q_d,\infty }} \lesssim 2^{-2k/d} \mathcal {C} \prod \limits _{i=1}^d \Vert g_i w^{1-\frac{\epsilon _i}{p_d}} \Vert _{s_i} \end{aligned}$$
(26)

where \(\sum _{i=1}^d \epsilon _i = 1\) and \(\sum _{i=1}^d \frac{1}{s_i} = \frac{1}{q_d}\).

\(\mathbf {Estimate~on~the~L^{q_d/d,\infty }~norm~of~\mathcal {M}}\)    Fix \(y>0\) and define \(G_y = \{ x : \vert \mathcal {M}[g_1,\ldots ,g_d](x)\vert > 2y \}\). Then, for any constant K,

$$\begin{aligned} \vert G_y \vert \le y^{-2} \bigg \Vert \sum \limits _{2^k \le K} \mathcal {M}_k[g_1,\ldots ,g_d] \bigg \Vert _2 ^2 + y^{-q_d} \bigg \Vert \sum \limits _{2^k > K} \mathcal {M}_k[g_1,\ldots ,g_d] \bigg \Vert _{q_d,\infty } ^{q_d}. \end{aligned}$$

By (22) and (26), we obtain

$$\begin{aligned} \vert G_y\vert \lesssim y^{-2} K^{(d-2)/d} \prod \limits _{i=1}^d \Vert g_i w^{\frac{3-d}{4}} \Vert _{2d} ^2 + y^{-q_d} K^{-2q_d/d} \mathcal {C}^{q_d} \prod \limits _{i=1}^d \Vert g_i w^{1-\frac{\epsilon _i}{p_d}} \Vert _{s_i} ^{q_d}. \end{aligned}$$

If we choose K appropriately so that

$$\begin{aligned} y^{-2} K^{(d-2)/d} \prod \limits _{i=1}^d \Vert g_i w^{\frac{3-d}{4}} \Vert _{2d}^2 = y^{-q_d} K^{-2q_d/d} \mathcal {C}^{q_d} \prod \limits _{i=1}^d \Vert g_i w^{1-\frac{\epsilon _i}{p_d}} \Vert _{s_i}^{q_d}, \end{aligned}$$

which means

$$\begin{aligned} K^{\frac{d-2+2q_d}{d}} = y^{2-q_d} \mathcal {C}^{q_d} \prod \limits _{i=1}^d \Vert g_i w^{1-\frac{\epsilon _i}{p_d}} \Vert _{s_i}^{q_d} \prod \limits _{i=1}^d \Vert g_i w^{\frac{3-d}{4}} \Vert _{2d}^{-2}, \end{aligned}$$

then we obtain

$$\begin{aligned} y \vert G_y\vert ^{\frac{d-2+2q_d}{(d+2)q_d}} \lesssim \mathcal {C}^{\frac{d-2}{d+2}} \prod \limits _{i=1}^d \Vert g_i w^{\frac{3-d}{4}} \Vert _{2d}^{\frac{4}{d+2}} \prod \limits _{i=1}^d \Vert g_i w^{1-\frac{\epsilon _i}{p_d}} \Vert _{s_i}^{\frac{d-2}{d+2}}. \end{aligned}$$

Since \(\frac{d-2+2q_d}{(d+2)q_d} = \frac{d}{q_d}\), we get

$$\begin{aligned} \Vert \mathcal {M} [g_1,\ldots ,g_d] \Vert _{\frac{q_d}{d}, \infty } \lesssim \mathcal {C}^{\frac{d-2}{d+2}} \prod \limits _{i=1}^d \Vert g_i w^{\frac{3-d}{4}} \Vert _{2d}^{\frac{4}{d+2}} \prod \limits _{i=1}^d \Vert g_i w^{1-\frac{\epsilon _i}{p_d}} \Vert _{s_i}^{\frac{d-2}{d+2}}. \end{aligned}$$
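As a sanity check (not part of the proof), the balancing of K and the exponent arithmetic above can be verified numerically. In the sketch below, \(q_d = \frac{d^2+d+2}{2}\) is an assumption read off from the identity \(\frac{d-2+2q_d}{(d+2)q_d} = \frac{d}{q_d}\), and A, B, C stand in for the two norm products and the constant \(\mathcal {C}\).

```python
import random
from fractions import Fraction

# Assumption: q_d = (d^2 + d + 2)/2, the value for which the identity
# (d - 2 + 2 q_d)/((d + 2) q_d) = d/q_d holds.
for d in range(2, 12):
    qd = Fraction(d * d + d + 2, 2)
    assert (d - 2 + 2 * qd) / ((d + 2) * qd) == d / qd

# Spot check: with K balancing the two terms of |G_y|, the bound
# y |G_y|^{(d-2+2q_d)/((d+2)q_d)} <~ C^{(d-2)/(d+2)} B^{4/(d+2)} A^{(d-2)/(d+2)}
# comes out exactly (A, B abbreviate the two norm products).
random.seed(0)
for d in (3, 4, 5):
    qd = (d * d + d + 2) / 2
    for _ in range(20):
        y, A, B, C = (random.uniform(0.5, 3.0) for _ in range(4))
        # K chosen so that y^{-2} K^{(d-2)/d} B^2 = y^{-qd} K^{-2qd/d} C^qd A^qd
        K = (y ** (2 - qd) * C ** qd * A ** qd / B ** 2) ** (d / (d - 2 + 2 * qd))
        term1 = y ** (-2) * K ** ((d - 2) / d) * B ** 2
        term2 = y ** (-qd) * K ** (-2 * qd / d) * C ** qd * A ** qd
        assert abs(term1 - term2) <= 1e-9 * term1   # K balances both terms
        val = y * term1 ** ((d - 2 + 2 * qd) / ((d + 2) * qd))
        bound = (C ** ((d - 2) / (d + 2)) * B ** (4 / (d + 2))
                 * A ** ((d - 2) / (d + 2)))
        assert abs(val - bound) <= 1e-9 * bound
```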

Observe that \(\Vert g_i w^{\frac{3-d}{4}} \Vert _{2d} \lesssim \sum _{k \in {\mathbb {Z}}}2^{k\frac{3-d}{4}} \Vert \chi _{\mathcal {W}_{w,k}} g_i \Vert _{2d}\) and

\(\Vert g_i w^{1-\frac{\epsilon _i}{p_d}} \Vert _{s_i} \lesssim \sum _{k \in {\mathbb {Z}}} 2^{k(1-\frac{\epsilon _i}{p_d})} \Vert \chi _{\mathcal {W}_{w,k}} g_i \Vert _{s_i}\), so we can write

$$\begin{aligned} \Vert \mathcal {M} [g_1,\ldots ,g_d] \Vert _{\frac{q_d}{d}, \infty } \lesssim \mathcal {C}^{\frac{d-2}{d+2}} \prod \limits _{i=1}^d \Vert g_i \Vert _{b_{\frac{3-d}{4}}^1(L^{2d};dw)}^{\frac{4}{d+2}} \prod \limits _{i=1}^d \Vert g_i \Vert _{b_{1-\frac{\epsilon _i}{p_d}}^1(L^{s_i};dw)} ^{\frac{d-2}{d+2}}. \end{aligned}$$
(27)

Then, by Lemma 3.1 and (27),

$$\begin{aligned} \Vert \mathcal {M} [g_1,\ldots ,g_d] \Vert _{\frac{q_d}{d}, \infty } \lesssim \mathcal {C}^{\frac{d-2}{d+2}} \prod \limits _{i=1}^d \Vert g_i \Vert _ {\bar{X}^i_{\frac{d-2}{d+2},1}} \end{aligned}$$
(28)

where \(\bar{X}^i_{\frac{d-2}{d+2},1} = \big (b_{\frac{3-d}{4}}^1(L^{2d};dw), b_{1-\frac{\epsilon _i}{p_d}}^1(L^{s_i};dw) \big )_{\frac{d-2}{d+2},1}\).

Also, we can find the continuous embedding

$$\begin{aligned} b_{\frac{4}{d+2}\frac{3-d}{4} + \frac{d-2}{d+2}(1-\frac{\epsilon _i}{p_d})}^1 \bigg ( (L^{2d},L^{s_i})_{\frac{d-2}{d+2}, 1};dw \bigg ) \\ \hookrightarrow \bigg (b_{\frac{3-d}{4}}^1(L^{2d};dw), b_{1-\frac{\epsilon _i}{p_d}}^1(L^{s_i};dw) \bigg )_{\frac{d-2}{d+2},1} \end{aligned}$$

by Lemma 3.3 with \(b_s^p\) instead of \(l_s^p\). Therefore, if we define

$$\begin{aligned} a_i=\frac{3-d}{d+2} + \frac{d-2}{d+2}\bigg (1-\frac{\epsilon _i}{p_d}\bigg ) \end{aligned}$$

and

$$\begin{aligned} \frac{1}{b_i}=\frac{4}{d+2} \cdot \frac{1}{2d} +\frac{d-2}{d+2} \cdot \frac{1}{s_i}, \end{aligned}$$

we get \((L^{2d},L^{s_i})_{\frac{d-2}{d+2}, 1} = L^{b_i, 1}\) and

$$\begin{aligned} \Vert \mathcal {M} [g_1,\ldots ,g_d] \Vert _{\frac{q_d}{d}, \infty } \lesssim \mathcal {C}^{\frac{d-2}{d+2}} \prod \limits _{i=1}^d \Vert g_i \Vert _ {b_{a_i}^1(L^{b_i,1};dw)}, \end{aligned}$$
(29)

where \(\sum _{i=1}^d a_i = \sum _{i=1}^d \frac{1}{b_i} = \frac{d}{q_d}\).
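The claim \(\sum _{i=1}^d \frac{1}{b_i} = \frac{d}{q_d}\) follows directly from the definition of \(b_i\) and \(\sum _{i=1}^d \frac{1}{s_i} = \frac{1}{q_d}\); a short verification in exact arithmetic (assuming \(q_d = \frac{d^2+d+2}{2}\), the value this section's arithmetic forces) follows.

```python
from fractions import Fraction

# Assumption: q_d = (d^2 + d + 2)/2.  The rule
#     1/b_i = (4/(d+2)) * (1/(2d)) + ((d-2)/(d+2)) * (1/s_i)
# sends any exponents with sum_i 1/s_i = 1/q_d to exponents with
# sum_i 1/b_i = d/q_d.
for d in range(3, 10):
    qd = Fraction(d * d + d + 2, 2)
    inv_s = [1 / (d * qd)] * d          # equal split of 1/q_d
    inv_s[0] += Fraction(d - 1, 10**6)  # perturb without changing the sum
    for i in range(1, d):
        inv_s[i] -= Fraction(1, 10**6)
    assert sum(inv_s) == 1 / qd
    inv_b = [Fraction(4, d + 2) * Fraction(1, 2 * d)
             + Fraction(d - 2, d + 2) * s for s in inv_s]
    assert sum(inv_b) == d / qd
```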

Now, define a multi-linear operator \(\widetilde{\mathcal {M}}\) by

$$\begin{aligned} \widetilde{\mathcal {M}}[g_1,\ldots ,g_n] = \prod \limits _{i=1}^n T_w g_i(x) \end{aligned}$$

for an integer \(n > q_d\). Let \(r=\frac{q_d}{n} < 1\). Then, as we stated in Sect. 4, \(L^{r,\infty }\) is an \(r\)-convex space. We may write

$$\begin{aligned} \widetilde{\mathcal {M}}[g_1,\ldots ,g_n] = \mathcal {M}[g_1,\ldots ,g_d]\prod \limits _{i=d+1}^n T_w g_i(x) \end{aligned}$$

and by Hölder’s inequality, it follows that

$$\begin{aligned} \Vert \widetilde{\mathcal {M}}[g_1,\ldots ,g_n] \Vert _{L^{r,\infty }} \lesssim \Vert \mathcal {M}[g_1,\ldots ,g_d] \Vert _{L^{q_d/d,\infty }} \prod \limits _{i=d+1}^n \Vert T_w g_i(x) \Vert _{L^{q_d,\infty }}. \end{aligned}$$
(30)

Observe that if we put \(g_i = g\) and \(a_i=\frac{1}{b_i}=\frac{1}{q_d}\) for all \(i=1,\ldots ,d\) in (29), then taking the d-th root of both sides gives

$$\begin{aligned} \Vert T_w g \Vert _{L^{q_d,\infty }} \lesssim \mathcal {C}^{\frac{d-2}{d^2+2d}} \Vert g \Vert _{b_{1/q_d}^1 (L^{q_d,1};dw)}. \end{aligned}$$
(31)

By applying (29) and (31) to (30), and by using the generalized geometric means inequality, we get

$$\begin{aligned}&\Vert \widetilde{\mathcal {M}}[g_1,\ldots ,g_n] \Vert _{L^{r,\infty }} \nonumber \\&\lesssim \mathcal {C}^{\frac{d-2}{d+2}} \prod \limits _{i=1}^d \Vert g_i \Vert _ {b_{a_i}^1(L^{b_i,1};dw)} \prod \limits _{i=d+1}^n \mathcal {C}^{\frac{d-2}{d^2+2d}} \Vert g_i \Vert _{b_{1/q_d}^1 (L^{q_d,1};dw)} \nonumber \\&= \mathcal {C}^{\frac{d-2}{d+2}+\frac{d-2}{d^2+2d}(n-d)} \Vert g_1 \Vert _ {b_{a_1}^1(L^{b_1,1};dw)} \Vert g_2 \Vert _ {b_{a_2}^1(L^{b_2,1};dw)}\nonumber \\&\qquad \times \prod \limits _{i=3}^d \Vert g_i \Vert _ {b_{a_i}^1(L^{b_i,1};dw)} \prod \limits _{i=d+1}^n \Vert g_i \Vert _{b_{1/q_d}^1 (L^{q_d,1};dw)} \nonumber \\&\lesssim \mathcal {C}^{\frac{(d-2)n}{d^2+2d}} \Vert g_1 \Vert _ {b_{a_1}^1(L^{b_1,1};dw)} \Vert g_2 \Vert _ {b_{a_2}^1(L^{b_2,1};dw)} \nonumber \\&\qquad \times \prod \limits _{i=3}^n \Vert g_i \Vert _{b_{a_i}^1(L^{b_i,1};dw)} ^{\frac{d-2}{n-2}} \Vert g_i \Vert _{b_{1/q_d}^1 (L^{q_d,1};dw)} ^{\frac{n-d}{n-2}}, \end{aligned}$$
(32)

where \(\sum _{i=1}^d a_i = \sum _{i=1}^d \frac{1}{b_i} = \frac{d}{q_d}\).

We will choose \(a_i\) and \(b_i\) appropriately to get an upper bound of \(\widetilde{\mathcal {M}}\). Recall that \(a_i\) depends on \(\epsilon _i\) and \(b_i\) depends on \(s_i\). Let \(\eta >0\) be small enough and let

$$\begin{aligned} \frac{1}{s_i} = {\left\{ \begin{array}{ll} \frac{1}{dq_d} - \eta (d+2)\frac{n-1}{n-2}, &{} { i=1,} \\ \frac{1}{dq_d} + \eta \frac{d+2}{n-2}, &{} { i=2,}\\ \frac{1}{dq_d} + \eta \frac{d+2}{d-2}, &{} { 3 \le i \le d }. \end{array}\right. } \end{aligned}$$

Then,

$$\begin{aligned} \frac{1}{b_i} = {\left\{ \begin{array}{ll} \frac{1}{q_d} - \eta (d-2)\frac{n-1}{n-2}, &{} { i=1,} \\ \frac{1}{q_d} + \eta \frac{d-2}{n-2}, &{} { i=2,}\\ \frac{1}{q_d} + \eta , &{} { 3 \le i \le d} \end{array}\right. } \end{aligned}$$

and it is easy to check that \(\sum _{i=1}^d \frac{1}{b_i} = \frac{d}{q_d}\). Moreover, we get

$$\begin{aligned} \frac{1}{b_2} = \frac{d-2}{n-2}\frac{1}{b_3} + \frac{n-d}{n-2}\frac{1}{q_d}. \end{aligned}$$
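These identities are elementary but fiddly, so a verification in exact arithmetic may be reassuring (again, \(q_d = \frac{d^2+d+2}{2}\) is an assumption consistent with the exponent arithmetic of this section).

```python
from fractions import Fraction

# Assumption: q_d = (d^2 + d + 2)/2.  Check that the displayed choice of
# 1/s_i yields exactly the stated 1/b_i, that sum_i 1/b_i = d/q_d, and that
#     1/b_2 = ((d-2)/(n-2)) * 1/b_3 + ((n-d)/(n-2)) * 1/q_d.
eta = Fraction(1, 10**6)
for d in range(3, 8):
    qd = Fraction(d * d + d + 2, 2)          # always an integer value
    for n in range(int(qd) + 1, int(qd) + 5):    # any n > q_d
        inv_s = ([1 / (d * qd) - eta * (d + 2) * Fraction(n - 1, n - 2),
                  1 / (d * qd) + eta * Fraction(d + 2, n - 2)]
                 + [1 / (d * qd) + eta * Fraction(d + 2, d - 2)] * (d - 2))
        # 1/b_i = 4/(d+2) * 1/(2d) + (d-2)/(d+2) * 1/s_i;  note 4/(2d(d+2)) = 2/(d(d+2))
        inv_b = [Fraction(2, d * (d + 2)) + Fraction(d - 2, d + 2) * s
                 for s in inv_s]
        assert inv_b[0] == 1 / qd - eta * (d - 2) * Fraction(n - 1, n - 2)
        assert inv_b[1] == 1 / qd + eta * Fraction(d - 2, n - 2)
        assert all(b == 1 / qd + eta for b in inv_b[2:])
        assert sum(inv_b) == d / qd
        assert inv_b[1] == (Fraction(d - 2, n - 2) * inv_b[2]
                            + Fraction(n - d, n - 2) / qd)
```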

Therefore, applying Lemma 3.1 to (32) gives

$$\begin{aligned} \Vert \widetilde{\mathcal {M}}[g_1,\ldots ,g_n] \Vert _{L^{r,\infty }}&\lesssim \mathcal {C}^{\frac{(d-2)n}{d^2+2d}} \Vert g_1 \Vert _ {b_{a_1}^1(L^{b_1,1};dw)} \Vert g_2 \Vert _ {b_{a_2}^1(L^{b_2,1};dw)} \\&\qquad \times \prod \limits _{i=3}^n \Vert g_i \Vert _{\bar{Y}_{\frac{n-d}{n-2}, 1}} \end{aligned}$$

where \(\bar{Y}_{\frac{n-d}{n-2}, 1} = \big ( {b_{a_3}^1(L^{b_3,1};dw)}, b_{1/q_d}^1(L^{q_d,1};dw) \big )_{\frac{n-d}{n-2}, 1}\). By Lemma 3.3, there is a continuous embedding

$$\begin{aligned} b_{c_3}^1 \bigg ((L^{b_3,1},L^{q_d,1})_{\frac{n-d}{n-2}, 1} ;dw \bigg ) = b_{c_3}^1(L^{b_2,1};dw)\hookrightarrow \bar{Y}_{\frac{n-d}{n-2}, 1} \end{aligned}$$

where \(c_3 = \frac{d-2}{n-2} a_3 + \frac{n-d}{n-2} \frac{1}{q_d}\). We put \(c_1=a_1\) and \(c_2=a_2\) and choose \(\epsilon _1\), \(\epsilon _2\), and \(\epsilon _3\) properly so that \(c_1\), \(c_2\), and \(c_3\) are all different. Then,

$$\begin{aligned} \Vert \widetilde{\mathcal {M}}[g_1,\ldots ,g_n] \Vert _{L^{r,\infty }}&\lesssim \mathcal {C}^{\frac{(d-2)n}{d^2+2d}} \Vert g_1 \Vert _ {b_{c_1}^1(L^{b_1,1};dw)} \Vert g_2 \Vert _ {b_{c_2}^1(L^{b_2,1};dw)} \\&\qquad \times \prod \limits _{i=3}^n \Vert g_i \Vert _{b_{c_3}^1(L^{b_2,1};dw)} \\&\lesssim \mathcal {C}^{\frac{(d-2)n}{d^2+2d}} \Vert g_1 \Vert _ {b_{c_1}^r(L^{b_1,r};dw)} \Vert g_2 \Vert _ {b_{c_2}^r(L^{b_2,r};dw)}\\&\qquad \times \prod \limits _{i=3}^n \Vert g_i \Vert _{b_{c_3}^r(L^{b_2,r};dw)}. \end{aligned}$$

Note that the last inequality comes from the trivial embedding. If we apply Lemma 3.2 to the last expression, we get

$$\begin{aligned} \Vert \widetilde{\mathcal {M}}[g_1,\ldots ,g_n] \Vert _{L^{r,\infty }} \lesssim \mathcal {C}^{\frac{(d-2)n}{d^2+2d}} \prod \limits _{i=1}^n \Vert g_i \Vert _{b_c ^{nr}(\bar{Z}_{\frac{1}{n},nr};dw)} \end{aligned}$$

where \(c=\frac{1}{n}\sum _{i=1}^n c_i\) and \(\bar{Z}_{\frac{1}{n},nr} = (L^{b_2,r},L^{b_1,r})_{\frac{1}{n},nr}\).

A simple calculation gives \(c = \frac{1}{q_d}\) and

$$\begin{aligned} \bar{Z}_{\frac{1}{n},nr} = (L^{b_2,r},L^{b_1,r})_{\frac{1}{n},nr} = L^{q_d}, \end{aligned}$$

since \(\frac{1}{n}\frac{1}{b_1} + \frac{n-1}{n}\frac{1}{b_2} = \frac{1}{q_d}\). Therefore, \({b_c ^{nr}\big (\bar{Z}_{\frac{1}{n},nr};dw\big )} = b_{1/q_d}^{q_d}(L^{q_d};dw) = L^{q_d}(dw)\) and we obtain

$$\begin{aligned} \Vert \widetilde{\mathcal {M}}[g_1,\ldots ,g_n] \Vert _{L^{r,\infty }} \lesssim \mathcal {C}^{\frac{(d-2)n}{d^2+2d}} \prod \limits _{i=1}^n \Vert g_i \Vert _{L^{q_d}(dw)}. \end{aligned}$$
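The interpolation exponent used in the last step can be checked the same way (with the same assumed value \(q_d = \frac{d^2+d+2}{2}\)):

```python
from fractions import Fraction

# Assumption: q_d = (d^2 + d + 2)/2.  Check that
# (1/n)*(1/b_1) + ((n-1)/n)*(1/b_2) = 1/q_d for the chosen b_1, b_2,
# so that (L^{b_2,r}, L^{b_1,r})_{1/n, nr} = L^{q_d}, and that n*r = q_d.
eta = Fraction(1, 10**6)
for d in range(3, 8):
    qd = Fraction(d * d + d + 2, 2)
    for n in range(int(qd) + 1, int(qd) + 5):
        inv_b1 = 1 / qd - eta * (d - 2) * Fraction(n - 1, n - 2)
        inv_b2 = 1 / qd + eta * Fraction(d - 2, n - 2)
        assert Fraction(1, n) * inv_b1 + Fraction(n - 1, n) * inv_b2 == 1 / qd
        r = qd / n
        assert n * r == qd
```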

If we put \(g_i = g\) for all \(i=1,\ldots ,n\), we get

$$\begin{aligned} \Vert \widetilde{\mathcal {M}}[g_1,\ldots ,g_n] \Vert _{L^{r,\infty }} \approx \Vert T_w g \Vert _{L^{q_d,\infty }} ^n \lesssim \mathcal {C}^{\frac{(d-2)n}{d^2+2d}} \Vert g \Vert _{L^{q_d}(dw)} ^ n. \end{aligned}$$

By the definition (18) of \(\mathcal {C}\), this leads to \(\mathcal {C} \lesssim \mathcal {C}^{\frac{d-2}{d^2+2d}}\). Since \(\frac{d-2}{d^2+2d} < 1\), this implies that \(\mathcal {C}\) is bounded by some constant depending only on d.

\(\square \)

5 Some Examples

Now we provide some examples that satisfy the hypotheses of Theorem 1.1. For a given function \(\phi ^{(d)} : (0,\delta ) \rightarrow {\mathbb {R}}^+\), define \(\psi : (\delta ^{-1},\infty ) \rightarrow {\mathbb {R}}\) by \(\psi (x) = \frac{1}{\phi ^{(d)}(1/x)}\). If \(\psi \) is increasing and log-convex, then \(\phi ^{(d)}\) is log-concave. Indeed, if \(\psi \) is log-convex, then

$$\begin{aligned} \psi (\lambda x_1 + (1-\lambda ) x_2) \le [\psi (x_1)] ^{\lambda } [\psi (x_2)]^{1-\lambda }. \end{aligned}$$

It follows that

$$\begin{aligned} \psi (\lambda /t_1 + (1-\lambda )/t_2)^{-1} \ge [\phi ^{(d)}(t_1)] ^\lambda [\phi ^{(d)}(t_2)] ^{1-\lambda } \end{aligned}$$

where \(t_1 = 1/x_1\) and \(t_2=1/x_2\). Since the function 1/x is convex and \(\psi ^{-1} = 1/\psi \) is decreasing (as \(\psi \) is positive and increasing), we have

$$\begin{aligned} \phi ^{(d)}(\lambda t_1 + (1-\lambda )t_2) \ge [\phi ^{(d)}(t_1)] ^\lambda [\phi ^{(d)}(t_2)] ^{1-\lambda } \end{aligned}$$

so \(\phi ^{(d)}\) is log-concave. Therefore, if \(\psi (x) = \psi _{\phi ^{(d)}}(x) = \frac{1}{\phi ^{(d)}(1/x)}\) is positive, increasing, and log-convex on \((\delta ^{-1},\infty )\), then \(\phi ^{(d)}\) satisfies the hypotheses of Theorem 1.1. Moreover, for the following examples, proving that \(\psi \) is log-convex is easier than proving that \(\phi ^{(d)}\) is log-concave, so we will instead prove that \(\psi \) is positive, increasing, and log-convex.
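Before turning to the examples, here is a small numerical illustration of this lemma with the concrete (purely illustrative) choice \(\psi (x) = e^{x^2}\), for which \(1/\psi (1/t) = e^{-1/t^2}\); the midpoint inequalities below check log-convexity of \(\psi \) and log-concavity of the transformed function.

```python
# Illustrative check of the lemma with psi(x) = exp(x^2): log psi = x^2 is
# convex and increasing for x > 0, so phi(t) := 1/psi(1/t) = exp(-1/t^2)
# should be log-concave on (0, 1).  Midpoint inequalities on a grid:
def log_psi(x):
    return x * x            # log of psi(x) = exp(x^2)

def log_phi(t):
    return -1.0 / (t * t)   # log of phi(t) = 1/psi(1/t) = exp(-1/t^2)

xs = [1.0 + 0.25 * k for k in range(40)]
for x1 in xs:
    for x2 in xs:           # log-convexity of psi
        assert 2 * log_psi((x1 + x2) / 2) <= log_psi(x1) + log_psi(x2) + 1e-9

ts = [0.05 * k for k in range(1, 20)]
for t1 in ts:
    for t2 in ts:           # log-concavity of phi
        assert 2 * log_phi((t1 + t2) / 2) >= log_phi(t1) + log_phi(t2) - 1e-9
```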

1. Let \(\phi (t) = e^{-1/t}\) for \(t \in (0,\delta )\), where \(\delta \) will be chosen later. Then,

$$\begin{aligned} \phi ^{(d)}(t) = e^{-1/t} \bigg ( \frac{a_{1,d}}{t^{d+1}} + \cdots + \frac{a_{d,d}}{t^{2d}} \bigg ) \end{aligned}$$

where

$$\begin{aligned} a_{i,d} = {\left\{ \begin{array}{ll} (-1)^{d+1}d! &{} { i=1 } \\ a_{i-1,d-1} - (d+i-1)a_{i,d-1} &{} { 1< i < d } \\ 1 &{}{ i=d } \end{array}\right. }. \end{aligned}$$

Then, \(\psi _{\phi ^{(d)}}(x) = e^x \big (\sum _{i=1}^d a_{i,d} x^{d+i}\big )^{-1} \). Let \(P(x) = \sum _{i=1}^d a_{i,d} x^{d+i}\). The leading coefficients of P, \(P'\), and \(P''\) are 1, 2d, and \(2d(2d-1)\), respectively. Therefore, if we take \(\delta \) small enough, which means taking x large enough, then \(P>0\) and \(PP'' \le (P')^2\), which implies that P is log-concave and \(P^{-1}\) is log-convex. Hence \(\psi _{\phi ^{(d)}}(x)\) is log-convex, positive, and increasing for \(x \in (\delta ^ {-1}, \infty )\).
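The recursion for \(a_{i,d}\) can be verified mechanically: writing \(u = 1/t\), one t-derivative sends \(e^{-1/t}Q(1/t)\) to \(e^{-1/t}u^2\big (Q(u)-Q'(u)\big )\). A sketch (coefficients stored as a power-to-coefficient map) that also spot-checks \(P>0\) and \(PP'' \le (P')^2\) at a large x:

```python
from math import factorial

def step(q):
    """One t-derivative of e^{-1/t} Q(1/t): with u = 1/t, Q -> u^2 (Q - Q')."""
    out = {}
    for p, c in q.items():
        out[p + 2] = out.get(p + 2, 0) + c          # u^2 Q
        if p > 0:
            out[p + 1] = out.get(p + 1, 0) - p * c  # -u^2 Q'
    return out

def coeffs(d):
    """a_{i,d} for i = 1..d, where phi^(d)(t) = e^{-1/t} sum_i a_{i,d}/t^{d+i}."""
    q = {0: 1}
    for _ in range(d):
        q = step(q)
    return {i: q.get(d + i, 0) for i in range(1, d + 1)}

for d in range(2, 9):
    a, a_prev = coeffs(d), coeffs(d - 1)
    assert a[1] == (-1) ** (d + 1) * factorial(d)
    assert a[d] == 1
    for i in range(2, d):
        assert a[i] == a_prev[i - 1] - (d + i - 1) * a_prev[i]

def polyval(c, x):
    return sum(v * x ** p for p, v in c.items())

for d in (3, 4, 5):                               # P > 0 and P P'' <= (P')^2
    P = {d + i: v for i, v in coeffs(d).items()}  # P(x) = sum_i a_{i,d} x^{d+i}
    P1 = {p - 1: p * v for p, v in P.items() if p > 0}
    P2 = {p - 1: p * v for p, v in P1.items() if p > 0}
    x = 100.0
    assert polyval(P, x) > 0
    assert polyval(P, x) * polyval(P2, x) <= polyval(P1, x) ** 2
```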

Likewise, for \(\phi (t) = e^{-{1/t^m}}\) with \(m \in {\mathbb {N}}\),

$$\begin{aligned} \psi _{\phi ^{(d)}}(x) = e^{x^m}\left( \sum \limits _{i=0} ^{(d-1)m} a_i x^{m+d+i} \right) ^{-1}, \end{aligned}$$

where the leading coefficient is \(a_{(d-1)m} = m^d\) and the remaining \(a_i\), \(i=0, \ldots ,(d-1)m-1\), are determined by d and m. Therefore \(\psi _{\phi ^{(d)}}(x)\) is log-convex, positive, and increasing for \(x \in (\delta ^ {-1}, \infty )\).

2. Let \(\phi _2(t) = \exp (-e^{1/t})\). Then,

$$\begin{aligned} \phi _2^{(d)}(t) = \exp (-e^{1/t}) \bigg [e^{1/t} \frac{P_{d-1}(t)}{t^{2d}} + \cdots + e^{(d-1)/t} \frac{P_{1}(t)}{t^{2d}} + e^{d/t} \frac{1}{t^{2d}} \bigg ] \end{aligned}$$

where the \(P_i(t)\) are certain polynomials with degree \(\le i\). Therefore,

$$\begin{aligned} \psi _{\phi _2^{(d)}}(x) = e^{e^x} \bigg [ e^x \tilde{P}_{d-1}(x) + \cdots + e^{(d-1)x} \tilde{P}_1(x) + e^{dx} x^{2d}\bigg ] ^{-1} \end{aligned}$$

where the degree of \(\tilde{P}_i\) is at most 2d. Let \(P(x) = e^x \tilde{P}_{d-1}(x) + \cdots + e^{(d-1)x} \tilde{P}_1(x) + e^{dx} x^{2d}\). If x is large enough, then \(P>0\) and \(PP'' \le (P')^2\) (for large x, P behaves like \(e^{dx}x^{2d}\)). Therefore, \(\psi _{\phi _2^{(d)}}(x)\) is log-convex, positive, and increasing if x is large enough.
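For instance, for \(d=2\) a direct computation (our own, not displayed above) gives \(\phi _2''(t) = \exp (-e^{1/t})\big [e^{2/t}/t^4 - e^{1/t}(1/t^4+2/t^3)\big ]\), so that \(P(x) = e^{2x}x^4 - e^x(x^4+2x^3)\); the eventual log-concavity of this P can be spot-checked numerically via the midpoint inequality, which for smooth P is equivalent to \(PP'' \le (P')^2\) on the range tested.

```python
import math

# d = 2 case: P(x) = e^{2x} x^4 - e^x (x^4 + 2 x^3)  (hand computation).
def log_P(x):
    # log P, computed stably: P(x) = e^{2x} x^4 * (1 - e^{-x} (1 + 2/x))
    return 2 * x + 4 * math.log(x) + math.log(1 - math.exp(-x) * (1 + 2 / x))

xs = [10.0 + 0.5 * k for k in range(60)]
for x1 in xs:
    for x2 in xs:   # midpoint form of log-concavity for large x
        assert 2 * log_P((x1 + x2) / 2) >= log_P(x1) + log_P(x2) - 1e-9
```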

Likewise, for the iterated exponential \(\phi _n (t) = \exp (-\exp ( \cdots (\exp (1/t)) \cdots ))\), \(\psi _{\phi _n^{(d)}}(x)\) is log-convex for x large enough as well.