1 Introduction

By \(\Omega \), we denote the space of all real- or complex-valued double sequences, which is a vector space under coordinatewise addition and scalar multiplication. The space \(\mathcal {L}_p\) of double sequences [3] is defined by

$$\begin{aligned} \mathcal {L}_p = \left\{ \left( {{x_{n,m}}} \right) \in \Omega :\,\,\sum \limits _{n= 0}^\infty {\sum \limits _{m = 0}^\infty {{{\left| {{x_{n,m}}} \right| }^p} < \infty } } \right\} , \end{aligned}$$

where \(1\le p<\infty .\) This is a complete space with the norm

$$\begin{aligned} {\left\| x \right\| _{\mathcal {L}_p}}:= \left( \sum \limits _{n= 0}^\infty {\sum \limits _{m = 0}^\infty {{{\left| {{x_{n,m}}} \right| }^p}} }\right) ^{1/p}. \end{aligned}$$

For more information on the normed spaces of double sequences and domain of triangle matrices in normed/paranormed sequence spaces, and the matrix transformations and summability theory, we refer the readers to the textbook [1] and the recent papers [2, 4, 7,8,9, 14, 17] and [21,22,23,24,25,26,27,28,29].

Let X and Y be two double sequence spaces and \(\mathsf {H}=(h_{nmjk})\) be a four-dimensional infinite matrix of real or complex numbers. Then, we say that \(\mathsf {H}\) defines a matrix mapping from X into Y,  and we denote it by writing \(\mathsf {H}:X\rightarrow Y,\) if for every double sequence \(x=(x_{n,m})\in X\) the double sequence \(\mathsf {H}x=\{(\mathsf {H}x)_{n,m}\}\), the \(\mathsf {H}\)-transform of x, exists and is in Y,  where

$$\begin{aligned} \left( {\mathsf {H}x} \right) _{n,m} =\sum \limits _{j= 0}^\infty {\sum \limits _{k = 0}^\infty {h_{nmjk} x_{j,k} },\quad (n,m=0,1,\ldots ).} \end{aligned}$$

The purpose of this paper is to establish the lower bound and the operator norm of factorizable four-dimensional matrices as operators on the double sequence space \(\mathcal {L}_p\). For \(p\in \mathbb {R} \backslash \{0\}\), the lower bound considered here is the number \(L_{p}(\mathsf {H})\), defined as the supremum of those \(\ell \) satisfying the inequality

$$\begin{aligned} \left\| {\mathsf {H}x} \right\| _{\mathcal {L}_p} \ge \ell \left\| x \right\| _{\mathcal {L}_p } , \end{aligned}$$

where \(x\ge 0,~x\in \mathcal {L}_p\) and \(\mathsf {H}=(h_{nmjk})\) is a nonnegative four-dimensional matrix. We also consider upper bounds \(k>0\) of the form

$$\begin{aligned} \left\| {\mathsf {H}x} \right\| _{\mathcal {L}_p} \le k\left\| x \right\| _{\mathcal {L}_p } , \end{aligned}$$

for all nonnegative sequences x. The constant k does not depend on x. We seek the smallest possible value of k and denote this best upper bound by \(\left\| {\mathsf {H}} \right\| _{\mathcal {L}_p} \), the operator norm of \(\mathsf {H}\) on \(\mathcal {L}_p .\) When we deal with two-dimensional matrices, we use, respectively, the notation \( L_p(\mathsf {A})\) and \(\left\| {\mathsf {A}} \right\| _{\ell _p} \) for the lower bound and the operator norm of the matrix \(\mathsf {A}\) on \(\ell _p\), where \(\ell _p\) is the space of all real or complex \(p\)-absolutely summable sequences.
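For intuition, both quantities can be probed numerically on finite truncations: for a nonnegative matrix and a nonnegative test vector x, the ratio \(\left\| \mathsf {A}x \right\| _{\ell _p}/\left\| x \right\| _{\ell _p}\) lies between \(L_p(\mathsf {A})\) and \(\left\| \mathsf {A} \right\| _{\ell _p}\). A minimal pure-Python sketch (the truncation size, the test vector and the function names are illustrative choices, not from the paper):

```python
def lp_norm(v, p):
    # the quantity ||v||_{ell_p} from the text; a genuine norm only for p >= 1
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

def ratio(A, x, p):
    # ||Ax||_p / ||x||_p for a nonnegative test vector x; for the full
    # matrix this ratio lies between L_p(A) and ||A||_{ell_p}
    Ax = [sum(a * t for a, t in zip(row, x)) for row in A]
    return lp_norm(Ax, p) / lp_norm(x, p)

# Illustrative example: an N x N truncation of the Cesaro matrix C(1);
# Hardy's inequality gives ||C(1)||_{ell_p} = p/(p-1), i.e. 2 for p = 2,
# so every such ratio must stay below 2.
N, p = 200, 2.0
C = [[1.0 / (n + 1) if j <= n else 0.0 for j in range(N)] for n in range(N)]
x = [(j + 1.0) ** -0.6 for j in range(N)]  # a sample near-extremal vector
print(ratio(C, x, p))  # strictly below the norm bound 2
```

Larger truncations and test vectors \(x_j=(j+1)^{-s}\) with \(s\) close to \(1/p\) push the ratio toward the norm.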

2 Main Results

Our main results are as follows.

Theorem 2.1

Let \(\mathsf {A}=(a_{nj})\) and \(\mathsf {B}=(b_{mk})\) be two nonnegative infinite matrices with the lower bounds \(L_p(\mathsf {A})\) and \(L_p(\mathsf {B})\), respectively. Let \(\mathsf {H}=(a_{nj}b_{mk})\) be the four-dimensional matrix constructed from \(\mathsf {A}\) and \(\mathsf {B}.\) Then,

$$\begin{aligned} L_p(\mathsf {H}) = L_p(\mathsf {A})L_p(\mathsf {B}). \end{aligned}$$

Proof

Let \(x=(x_{jk})\) be a nonnegative double sequence in \({\mathcal {L}_p}\). Then,

$$\begin{aligned} \sum \limits _{n = 0}^\infty \sum \limits _{m = 0}^\infty \left( \sum \limits _{j = 0}^\infty \sum \limits _{k = 0}^\infty a_{nj} b_{mk} x_{jk} \right) ^p&= \sum \limits _{n = 0}^\infty \sum \limits _{m = 0}^\infty \left( \sum \limits _{j = 0}^\infty a_{nj} \sum \limits _{k = 0}^\infty b_{mk} x_{jk} \right) ^p \\&= \sum \limits _{n = 0}^\infty \sum \limits _{m = 0}^\infty \left( \sum \limits _{j = 0}^\infty a_{nj} y_{mj} \right) ^p \quad \left( y_{mj} := \sum \limits _{k = 0}^\infty b_{mk} x_{jk} \right) \\&\ge \left( L_p(\mathsf {A}) \right) ^p \sum \limits _{m = 0}^\infty \sum \limits _{j = 0}^\infty y_{mj}^p \\&= \left( L_p(\mathsf {A}) \right) ^p \sum \limits _{j = 0}^\infty \sum \limits _{m = 0}^\infty \left( \sum \limits _{k = 0}^\infty b_{mk} x_{jk} \right) ^p \\&\ge \left( L_p(\mathsf {A}) \right) ^p \left( L_p(\mathsf {B}) \right) ^p \sum \limits _{j = 0}^\infty \sum \limits _{k = 0}^\infty x_{jk}^p . \end{aligned}$$

This implies that

$$\begin{aligned} {L_p}\left( \mathsf {H} \right) \ge {L_p}\left( \mathsf {A} \right) {L_p}\left( \mathsf {B} \right) . \end{aligned}$$

To see that equality actually holds, consider double sequences of the form

$$\begin{aligned} {x_{jk}} = {\varsigma _j}{\eta _k}. \end{aligned}$$

Then, one has that

$$\begin{aligned} \sum \limits _{n = 0}^\infty {\sum \limits _{m = 0}^\infty {{{\left( {\sum \limits _{j = 0}^\infty {\sum \limits _{k = 0}^\infty {{a_{nj}}{b_{mk}}{x_{jk}}} } } \right) }^p}} } = \sum \limits _{n = 0}^\infty {{{\left( {\sum \limits _{j = 0}^\infty {{a_{nj}}{\varsigma _j}} } \right) }^p}\sum \limits _{m = 0}^\infty {{{\left( {\sum \limits _{k = 0}^\infty {{b_{mk}}{\eta _k}} } \right) }^p}} } .\, \end{aligned}$$

Now let \(\alpha > {L_p}\left( \mathsf {A}\right) \) and \(\beta > {L_p}\left( \mathsf {B}\right) \). Then, there exist nonnegative sequences \((\varsigma _j)\) and \(({\eta _k})\) such that

$$\begin{aligned} \sum \limits _{n = 0}^\infty {{{\left( {\sum \limits _{j = 0}^\infty {{a_{nj}}{\varsigma _j}} } \right) }^p}} < {\alpha ^p}\sum \limits _{j = 0}^\infty {\varsigma _j^p} , \end{aligned}$$

and

$$\begin{aligned} \sum \limits _{m = 0}^\infty {{{\left( {\sum \limits _{k = 0}^\infty {{b_{mk}}{\eta _k}} } \right) }^p}} < {\beta ^p}\sum \limits _{k= 0}^\infty {\eta _k^p} . \end{aligned}$$

Then,

$$\begin{aligned} \sum \limits _{n = 0}^\infty {\sum \limits _{m = 0}^\infty {{{\left( {\sum \limits _{j = 0}^\infty {\sum \limits _{k = 0}^\infty {{a_{nj}}{b_{mk}}{x_{jk}}} } } \right) }^p}} } < {\left( {\alpha \beta } \right) ^p}\sum \limits _{j = 0}^\infty {\varsigma _j^p} \sum \limits _{k = 0}^\infty {\eta _k^p} . \end{aligned}$$

This implies that

$$\begin{aligned} {L_p}\left( \mathsf {H} \right) \le \alpha \beta . \end{aligned}$$

Consequently,

$$\begin{aligned} {L_p}\left( \mathsf {H} \right) \le {L_p}\left( \mathsf {A} \right) {L_p}\left( \mathsf {B} \right) . \end{aligned}$$

\(\square \)
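The separation identity at the heart of the proof, namely that for \(x_{jk} = \varsigma _j \eta _k\) the double sum factors into a product of two single sums, can be confirmed on finite matrices. A small illustrative sketch (the matrices, vectors and names are arbitrary choices, not from the paper):

```python
# Finite-dimensional check: for x_{jk} = s_j * e_k, the quantity
# sum_{n,m} ((Hx)_{nm})^p with h_{nmjk} = a_{nj} b_{mk} factors as
# (sum_n ((As)_n)^p) * (sum_m ((Be)_m)^p).
p = 1.5
A = [[1.0, 2.0], [0.5, 1.0]]   # plays the role of (a_nj)
B = [[2.0, 0.0], [1.0, 3.0]]   # plays the role of (b_mk)
s = [1.0, 2.0]                 # (varsigma_j)
e = [3.0, 1.0]                 # (eta_k)

def transform(v, M):
    # matrix-vector product (Mv)_i
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

lhs = sum(
    sum(A[n][j] * B[m][k] * s[j] * e[k]
        for j in range(2) for k in range(2)) ** p
    for n in range(2) for m in range(2)
)
rhs = sum(t ** p for t in transform(s, A)) * sum(t ** p for t in transform(e, B))
print(abs(lhs - rhs))  # vanishes up to rounding
```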

Theorem 2.2

Let \(\mathsf {A}=(a_{nj})\) and \(\mathsf {B}=(b_{mk})\) be two nonnegative infinite matrices with the norms \({\left\| \mathsf {A} \right\| _{{\ell _p}}}\) and \({\left\| \mathsf {B} \right\| _{{\ell _p}}}\), respectively. Let \(\mathsf {H}=(a_{nj}b_{mk})\) be the four-dimensional matrix constructed from \(\mathsf {A}\) and \(\mathsf {B}.\) Then,

$$\begin{aligned} {\left\| \mathsf {H} \right\| _{\mathcal {L}_p } } = {\left\| \mathsf {A} \right\| _{{\ell _p}}}{\left\| \mathsf {B} \right\| _{{\ell _p}}}. \end{aligned}$$

Proof

The proof can be easily adapted from the one of Theorem 2.1 and so is omitted. \(\square \)

Some applications of the above theorems are provided in the next two sections.

3 Extension of Hardy’s and Copson’s Inequalities

In this section, using Theorems 2.1 and 2.2, we provide an extension of Hardy's and Copson's inequalities to double series. First, consider Hardy's inequality [13]

$$\begin{aligned} \sum \limits _{n = 0}^\infty {{{\left( {\sum \limits _{k = 0}^n {\frac{{{x_k}}}{n+1}} } \right) }^p}} \le {\left( {\frac{p}{{p - 1}}} \right) ^p}\sum \limits _{k = 0}^\infty {x_k^p} \,\,\,\,\left( {1< p < \infty } \right) , \end{aligned}$$
(3.1)

where \(x_k\ge 0\) for all \(k\in \mathbb {N}\) and the constant \(\left( \frac{p}{p-1}\right) ^p\) is the best possible. Inequality (3.1) can be rewritten as \({\left\| C(1)\right\| _{\ell _p} } = \frac{p}{p-1},\) where \(C(1)=(c_{nk})_{n,k\ge 0}\) is the Cesàro matrix of order 1, defined by

$$\begin{aligned} {c_{nk}} = \left\{ \begin{array}{ll} \frac{1}{{n +1}}&{}\quad 0 \le k \le n,\\ 0&{}\quad \hbox {otherwise}. \end{array} \right. \end{aligned}$$
(3.2)

Now, consider the four-dimensional Cesàro matrix of order 1 and 1, \(\mathsf {C(1,1)}=\left( h_{nmjk}\right) \) defined by

$$\begin{aligned}{h_{nmjk}} = \left\{ \begin{array}{ll} \frac{1}{{\left( {n + 1} \right) \left( {m + 1} \right) }}&{}\quad \left( {0 \le j \le n,\,\,0 \le k \le m} \right) ,\\ 0&{}\quad \hbox {otherwise}. \end{array} \right. \end{aligned}$$

Obviously, this matrix can be factorized as \(h_{nmjk}=c_{nj}c_{mk}\) where \({c_{nj}}\) and \({c_{mk}}\) are defined by (3.2). Applying Theorem 2.2, we have

$$\begin{aligned} {\left\| \mathsf {C(1,1)} \right\| _{\mathcal {L}_p} }={\left\| C(1)\right\| _{\ell _p} }{\left\| C(1)\right\| _{\ell _p} } = \left( \frac{p}{p-1}\right) ^2. \end{aligned}$$

This leads us to the following generalization of Hardy’s inequality.

Corollary 3.1

Let \({1< p < \infty } \) and \(x=\left( x_{nm}\right) \) be a nonnegative double sequence of real numbers in \({\mathcal {L}_p}\). Then,

$$\begin{aligned} \sum \limits _{n = 0}^\infty {\sum \limits _{m = 0}^\infty {{{\left( {\sum \limits _{j = 0}^n {\sum \limits _{k = 0}^m {\frac{{{x_{jk}}}}{{\left( {n + 1} \right) \left( {m + 1} \right) }}} } } \right) }^p}} } \le {\left( {\frac{p}{{p - 1}}} \right) ^{2p}}\sum \limits _{n = 0}^\infty {\sum \limits _{m = 0}^\infty {x_{nm}^p} }. \end{aligned}$$
(3.3)

The constant \({\left( {\frac{p}{{p - 1}}} \right) ^{2p}}\) in (3.3) is the best possible.

Inequality (3.3) is an extension of Hardy’s inequality to double series. It can be extended to multiple series [16].
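Inequality (3.3) can be checked numerically on a finite truncation: dropping outer terms only decreases the left-hand side, so the truncated inequality must still hold. A short sketch (the data x and the size N are illustrative choices, not from the paper):

```python
# Truncated check of the double Hardy inequality (3.3) with p = 2, where
# the constant is (p/(p-1))^(2p) = 16.
p, N = 2.0, 30
x = [[1.0 / (j + k + 1.0) for k in range(N)] for j in range(N)]  # sample data

lhs = sum(
    (sum(x[j][k] for j in range(n + 1) for k in range(m + 1))
     / ((n + 1.0) * (m + 1.0))) ** p
    for n in range(N) for m in range(N)
)
rhs = (p / (p - 1.0)) ** (2 * p) * sum(t ** p for row in x for t in row)
print(lhs <= rhs)  # True: the inequality survives truncation
```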

Next, consider Copson's inequality [11] (see also [13, Theorem 344])

$$\begin{aligned} \sum \limits _{n = 0}^\infty {{{\left( {\sum \limits _{k = n}^\infty {\frac{{{x_k}}}{{k + 1}}} } \right) }^p}} \ge {p^p}\sum \limits _{k = 0}^\infty {x_k^p} \,\,\,\,\,\,\,\left( {0 < p \le 1} \right) , \end{aligned}$$
(3.4)

where \(x_k\ge 0\) for all \(k\in \mathbb {N}\). The inequality is reversed when \(p>1\), and the constant \(p^p\) is the best possible. Again, inequality (3.4) can be rewritten as \(L_p\left( C(1)^t\right) =p,\) where \(C(1)^t\) denotes the transpose of C(1),  and C(1) is the Cesàro matrix of order 1, defined by (3.2). The transpose of the Cesàro matrix is called the Copson matrix. Now, consider the four-dimensional Copson matrix \(\mathsf {{C}}^{\mathsf {t}}({\mathsf {1,1}})=\left( h_{nmjk}\right) \) defined by

$$\begin{aligned} {h_{nmjk}} = \left\{ \begin{array}{ll} \frac{1}{{\left( {j + 1} \right) \left( {k + 1} \right) }}&{}\quad \left( {j \ge n\,,\,k \ge m\,} \right) ,\\ 0&{}\quad \hbox {otherwise}. \end{array} \right. \end{aligned}$$

Since the four-dimensional Copson matrix \({\mathsf {C}}^{\mathsf {t}}({\mathsf {1,1}})\) can be factorized as \({h_{nmjk}} = {c^t_{nj}}{c^t_{mk}}\), where \({c_{nj}}\) and \({c_{mk}}\) are defined by (3.2), applying Theorem 2.1 we have

$$\begin{aligned} {L_p}\left( {\mathsf {C}}^{\mathsf {t}}{\mathsf {(1,1)}}\right) =L_p\left( C(1)^t\right) L_p\left( C(1)^t\right) = p^2, \end{aligned}$$

whenever \(0<p\le 1.\) Further, since \({\left\| C(1)^t\right\| _{\ell _p} } = p,\) by Theorem  2.2, we deduce that

$$\begin{aligned} {\left\| {\mathsf {C}}^{\mathsf {t}}{\mathsf {(1,1)}} \right\| _{\mathcal {L}_p} } ={\left\| C(1)^t\right\| _{\ell _p} }{\left\| C(1)^t\right\| _{\ell _p} }= p^2, \end{aligned}$$

for \(1<p<\infty .\) These lead us to the following generalization of Copson’s inequality.

Corollary 3.2

Let \(x=\left( x_{nm}\right) \) be a nonnegative double sequence of real numbers. Then

$$\begin{aligned} \sum \limits _{n = 0}^\infty {\sum \limits _{m = 0}^\infty {{{\left( {\sum \limits _{j = n}^\infty {\sum \limits _{k = m}^\infty {\frac{{{x_{jk}}}}{\left( j+ 1\right) \left( k+1\right) }} } } \right) }^p}} } \ge {p^{2p}}\sum \limits _{n = 0}^\infty {\sum \limits _{m = 0}^\infty {x_{nm}^p} } \,\,\,\,\,\,\,\left( {0 < p \le 1} \right) . \end{aligned}$$
(3.5)

The inequality is reversed when \(p>1\), and the constant \(p^{2p}\) is the best possible.

Inequality (3.5) is an extension of Copson’s inequality to double series. It can be extended to multiple series [18].
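For finitely supported data all sums in (3.5) are exact, so the inequality can be checked directly. For \(p=1\) the two sides agree, since the double sum telescopes and the constant is 1; for \(0<p<1\) the inequality is strict in general. A short sketch (the data are an illustrative choice, not from the paper):

```python
# Exact check of the double Copson inequality (3.5) for finitely
# supported nonnegative data x (an N x N array, zero elsewhere).
def copson_lhs(x, p):
    N = len(x)
    return sum(
        sum(x[j][k] / ((j + 1.0) * (k + 1.0))
            for j in range(n, N) for k in range(m, N)) ** p
        for n in range(N) for m in range(N)
    )

def copson_rhs(x, p):
    return p ** (2 * p) * sum(t ** p for row in x for t in row)

x = [[(j + 1.0) * (k + 2.0) for k in range(8)] for j in range(8)]
print(copson_lhs(x, 1.0) - copson_rhs(x, 1.0))  # ~0: equality at p = 1
print(copson_lhs(x, 0.5) >= copson_rhs(x, 0.5))  # True
```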

4 Complementary Results for Four-Dimensional Hausdorff Matrices

Let \(\hbox {d}\mu \) and \(\hbox {d}\lambda \) be two Borel probability measures on [0,1] and \(\mathsf {H}_{\mu \times \lambda }=(h_{nmjk})\) be the four-dimensional Hausdorff matrix defined by [15]

$$\begin{aligned} {h_{nmjk}} = \left\{ \begin{array}{ll} \displaystyle \int _0^1 {\int _0^1 {\binom{n}{j}\binom{m}{k}{\alpha ^j}{\beta ^k}{{(1 - \alpha )}^{n - j}}{{(1 - \beta )}^{m - k}}\hbox {d}\mu (\alpha ) \times \hbox {d}\lambda (\beta ),}}&{}\quad 0 {\le j \le n,\,0 \le k \le m,} \\ 0&{}\quad \hbox {otherwise},\\ \end{array} \right. \end{aligned}$$

for all \(n,m,j,k\in \mathbb {N}\). Clearly, we have

$$\begin{aligned} {h_{nmjk}} = \binom{n}{j}\binom{m}{k}\Delta _1^{n - j}\Delta _2^{m - k}{{\mu }_{j,k}}, \end{aligned}$$

where

$$\begin{aligned} {\mu _{j,k}} := \int _0^1 {\int _0^1 {{\alpha ^j}{\beta ^k}\hbox {d}\mu (\alpha ) \times \hbox {d}\lambda (\beta )\,} }, \,\,\,\,\,\,\,(j,k = 0,1,\ldots ) \end{aligned}$$

and

$$\begin{aligned} \Delta _1^{n - j}\Delta _2^{m - k}{\mu _{j,k}} = \sum \limits _{s = 0}^{n - j} {\sum \limits _{t = 0}^{m - k} {{{\left( { - 1} \right) }^{s + t}}\binom{n-j}{s}\binom{m-k}{t}{\mu _{j + s,k + t}}} } . \end{aligned}$$
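As a sanity check of this finite-difference representation, take \(\hbox {d}\mu = \hbox {d}\lambda = \) Lebesgue measure on [0, 1], so that \(\mu _{j,k}=1/((j+1)(k+1))\); the formula must then reproduce the four-dimensional Cesàro matrix \(\mathsf {C(1,1)}\) of the previous section. A small illustrative sketch (not part of the paper):

```python
from math import comb

# Moments of the product Lebesgue measure: mu_{j,k} = 1/((j+1)(k+1)),
# which generates the four-dimensional Cesaro matrix C(1,1).
def mu(j, k):
    return 1.0 / ((j + 1) * (k + 1))

def h(n, m, j, k):
    # binom(n,j) * binom(m,k) * Delta_1^{n-j} Delta_2^{m-k} mu_{j,k}
    diff = sum(
        (-1) ** (s + t) * comb(n - j, s) * comb(m - k, t) * mu(j + s, k + t)
        for s in range(n - j + 1) for t in range(m - k + 1)
    )
    return comb(n, j) * comb(m, k) * diff

print(h(4, 3, 2, 1))  # ~ 1/((4+1)(3+1)) = 0.05, the C(1,1) entry
```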

The four-dimensional Hausdorff matrix contains some famous classes of matrices. These classes are as follows:

  1.

    The choices \(\hbox {d}\mu (\alpha ) = \eta (1 - \alpha )^{\eta - 1} \hbox {d}\alpha \) and \(\hbox {d}\lambda (\beta ) = \gamma (1 - \beta )^{\gamma - 1} \hbox {d}\beta \) give the four-dimensional Cesàro matrix of order \(\eta \) and \(\gamma \), which is denoted by \(\mathsf {C}(\eta , \gamma )\).

  2.

    The choices \(\hbox {d}\mu (\alpha ) =\) point evaluation at \(\alpha =\eta \) and \(\hbox {d}\lambda (\beta ) =\) point evaluation at \(\beta =\gamma \) give the four-dimensional Euler matrix of order \(\eta \) and \(\gamma \), which is denoted by \(\mathsf {E}(\eta , \gamma )\).

  3.

    The choices \( \hbox {d}\mu (\alpha ) = \left| {\log \alpha } \right| ^{\eta - 1} /\Gamma (\eta )\hbox {d}\alpha \) and \(\hbox {d}\lambda (\beta ) = \left| {\log \beta } \right| ^{\gamma - 1} /\Gamma (\gamma )\hbox {d}\beta \) give the four-dimensional Hölder matrix of order \(\eta \) and \(\gamma \), which is denoted by \(\mathsf {H}(\eta , \gamma )\).

  4.

    The choices \(\hbox {d}\mu (\alpha ) = \eta \alpha ^{\eta - 1} \hbox {d}\alpha \) and \(\hbox {d}\lambda (\beta ) = \gamma \beta ^{\gamma - 1} \hbox {d}\beta \) give the four-dimensional Gamma matrix of order \(\eta \) and \(\gamma \), which is denoted by \({{\Gamma }}\left( {\eta ,\gamma } \right) \).

The four-dimensional Cesàro, Hölder and Gamma matrices have nonnegative entries whenever \(\eta >0\) and \(\gamma >0\); the four-dimensional Euler matrices have nonnegative entries when \(0<\eta <1\) and \(0<\gamma <1\).

The study of the boundedness problem for four-dimensional Hausdorff matrices goes back to some recent works of the author. For example, it is proved in ([19], Theorem 3.1) that

$$\begin{aligned} {\left\| {\mathsf {H}_{\mu \times \lambda }} \right\| _{\mathcal {L}_p}} = \int _0^1 {\int _0^1 {{{\left( {\alpha \beta } \right) }^{-\frac{{1}}{p}}}\hbox {d}\mu \left( \alpha \right) \times \hbox {d}\lambda \left( \beta \right) }}\,\,\,\, (1< p<\infty ). \end{aligned}$$
(4.1)

Further, it is proved in ([20], pp. 7–8) that

$$\begin{aligned} L_p\left( {\mathsf {H}_{\mu \times \lambda }^t} \right) = \int _0^1 {\int _0^1 {{{\left( {\alpha \beta } \right) }^{\frac{{1 - p}}{p}}}\hbox {d}\mu \left( \alpha \right) \times \hbox {d}\lambda \left( \beta \right) } }\,\,\,\, (0<p\le 1), \end{aligned}$$
(4.2)

and

$$\begin{aligned} L_p\left( {\mathsf {H}_{\mu \times \lambda }} \right) \ge \int _0^1 {\int _0^1 {{{\left( {\alpha \beta } \right) }^{-\frac{{1}}{p}}}\hbox {d}\mu \left( \alpha \right) \times \hbox {d}\lambda \left( \beta \right) } }\,\,\,\, (0<p\le 1). \end{aligned}$$
(4.3)

According to the Hellinger–Toeplitz theorems ([6], Propositions 7.2 and 7.3), (4.1), (4.2) and (4.3), respectively, give

$$\begin{aligned} {\left\| {\mathsf {H}_{\mu \times \lambda }^t} \right\| _{\mathcal {L}_p}} = \int _0^1 {\int _0^1 {{{\left( {\alpha \beta } \right) }^{\frac{{1 - p}}{p}}}\hbox {d}\mu \left( \alpha \right) \times \hbox {d}\lambda \left( \beta \right) } }\,\,\,\, (1< p<\infty ), \end{aligned}$$
(4.4)

and

$$\begin{aligned} L_p\left( {\mathsf {H}_{\mu \times \lambda }} \right) = \int _0^1 {\int _0^1 {{{\left( {\alpha \beta } \right) }^{-\frac{{1}}{p}}}\hbox {d}\mu \left( \alpha \right) \times \hbox {d}\lambda \left( \beta \right) } }\,\,\,\, (-\infty<p<0), \end{aligned}$$
(4.5)

and

$$\begin{aligned} L_p\left( {\mathsf {H}_{\mu \times \lambda }^t} \right) \ge \int _0^1 {\int _0^1 {{{\left( {\alpha \beta } \right) }^{\frac{{1 - p}}{p}}}\hbox {d}\mu \left( \alpha \right) \times \hbox {d}\lambda \left( \beta \right) } }\,\,\,\, (-\infty<p<0). \end{aligned}$$
(4.6)

The proofs of (4.1), (4.2) and (4.3) in those papers are all based on a special version of the four-dimensional Euler matrix. On the other hand, the four-dimensional Hausdorff matrix can be factorized as

$$\begin{aligned} {h_{nmjk}} = h_{nj}^{(\mu )} h_{mk}^{(\lambda )}, \end{aligned}$$
(4.7)

where \({\mathsf {H}}_{{\mu }} = {( {h_{nj}^{(\mu )} })_{nj}}\) is the classical (two-dimensional) Hausdorff matrix [5] corresponding to the Borel probability measure \(\hbox {d}\mu \) (and similarly for \({\mathsf {H}}_{\lambda }\)). A similar factorization holds for the transpose \(\mathsf {H}_{\mu \times \lambda }^t.\) Therefore, (4.1), (4.2) and (4.3) can be obtained in a different way. In fact, they are all special cases of Theorems 2.1 and 2.2, via the classical results on the lower bounds and norms of (two-dimensional) Hausdorff matrices in [6, 10, 12]. For example, to obtain (4.4), consider the transposes of the four-dimensional Hausdorff matrices as operators mapping the space \({\mathcal {L}_p}\) into itself. Applying Theorem 2.2, we have the following result.

Theorem 4.1

Let p be fixed, \(1<p<\infty \) and \(\mathsf {H}_{\mu \times \lambda }\) be the four-dimensional Hausdorff matrix associated with the measures \(\mathrm{d}\mu \) and \(\mathrm{d}\lambda .\) Then, the transpose \(\mathsf {H}_{\mu \times \lambda }^t\) is bounded on \({\mathcal {L}_p}\) if and only if both \(\int _0^1 {{\alpha ^{\frac{{1 - p}}{p}}}\mathrm{d}\mu ( \alpha )} < \infty \) and \(\int _0^1 {{\beta ^{\frac{{1 - p}}{p}}}\mathrm{d}\lambda \left( \beta \right) } < \infty ,\) and we have

$$\begin{aligned} {\left\| {\mathsf {H}_{\mu \times \lambda }^t} \right\| _{\mathcal {L}_p}} = \int _0^1 {\int _0^1 {{{\left( {\alpha \beta } \right) }^{\frac{{1 - p}}{p}}}\mathrm{d}\mu \left( \alpha \right) \times \mathrm{d}\lambda \left( \beta \right) } }. \end{aligned}$$

Proof

For the classical (two-dimensional) Hausdorff matrix \(\mathsf {H}_{\mu },\) it is proved by Bennett [5] that \(\mathsf {H}^\mathsf {t}_{\mu }\) is bounded on \(\ell _p\) if and only if \( \int _0^1 {{\alpha ^{\frac{{1 - p}}{p}}}\hbox {d}\mu (\alpha )}<\infty , \) and that \({\left\| {{\mathsf {H}^\mathsf {t}_{\mu }}} \right\| _{{\ell _p}}} = \int _0^1 {{\alpha ^{\frac{{1 - p}}{p}}}\hbox {d}\mu (\alpha )}.\) The result of the theorem is now a consequence of Theorem 2.2. \(\square \)
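Bennett's one-dimensional formula invoked in the proof can also be checked numerically for the Cesàro measure \(\hbox {d}\mu = \eta (1-\alpha )^{\eta -1}\hbox {d}\alpha \), whose integral has the closed form appearing in Corollary 4.2(2) below (one factor per variable). The sketch below is illustrative (function names and the quadrature scheme are our choices); the substitution \(\alpha = u^p\) removes the integrable singularity at \(\alpha = 0\):

```python
from math import gamma

# int_0^1 a^{(1-p)/p} * eta*(1-a)^{eta-1} da, computed by substituting
# a = u^p, which gives eta * p * int_0^1 (1 - u^p)^(eta-1) du, and then
# applying the midpoint rule on a uniform grid.
def cesaro_transpose_norm(eta, p, steps=200000):
    h = 1.0 / steps
    s = sum((1.0 - ((i + 0.5) * h) ** p) ** (eta - 1.0) for i in range(steps))
    return eta * p * s * h

# Closed form: Gamma(eta+1) * Gamma(1/p) / Gamma(eta + 1/p)
def closed_form(eta, p):
    return gamma(eta + 1.0) * gamma(1.0 / p) / gamma(eta + 1.0 / p)

print(cesaro_transpose_norm(2.0, 2.0), closed_form(2.0, 2.0))  # both near 8/3
```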

Let \(\mathsf {E}(\eta ,\gamma )\), \(\mathsf {C}(\eta , \gamma )\), \(\mathsf {H}(\eta , \gamma )\) and \({\Gamma }\left( {\eta ,\gamma } \right) \) be the four-dimensional Euler matrices, Cesàro matrices, Hölder matrices and Gamma matrices, respectively. Applying Theorem 4.1 to these matrices, we have the following corollary.

Corollary 4.2

Let p be fixed, \(1<p<\infty \) and \(\eta ,\gamma >0\). Then,

  1.

    \(\left\| \mathsf {E}^\mathsf {t}(\eta ,\gamma ) \right\| _{\mathcal {L}_p} =\left( \eta \gamma \right) ^{\frac{1-p}{p}}, \eta \le 1, \gamma \le 1.\)

  2.

    \(\left\| \mathsf {C}^\mathsf {t}(\eta , \gamma )\right\| _{\mathcal {L}_p}=\frac{{\Gamma \left( {\eta + 1} \right) {\Gamma ^2}\left( {\frac{1}{p}} \right) \Gamma \left( {\gamma + 1} \right) }}{{\Gamma \left( {\eta + \frac{1}{p}} \right) \Gamma \left( {\gamma + \frac{1}{p}} \right) }}.\)

  3.

    \(\left\| \mathsf {H}^\mathsf {t}(\eta , \gamma ) \right\| _{\mathcal {L}_p}=\frac{1}{{\Gamma \left( \eta \right) \Gamma \left( \gamma \right) }}\int _0^1 {\int _0^1 {{{\left( {\alpha \beta } \right) }^{\frac{1-p}{p}}}|\log \alpha {|^{\eta - 1}}|\log \beta {|^{\gamma - 1}}\mathrm{d}\alpha \times \mathrm{d}\beta } } .\)

  4.

    \(\left\| {\Gamma }^\mathsf {t}\left( {\eta ,\gamma } \right) \right\| _{\mathcal {L}_p} =\frac{{{p^2}\eta \gamma }}{{\left( {p\eta - p + 1} \right) \left( {p\gamma - p + 1} \right) }},\,\,\, {\eta> \frac{p-1}{p},\gamma > \frac{p-1}{p}} .\)

Putting \(\eta =\gamma =1,\) the second part of the above corollary implies

$$\begin{aligned} \left\| \mathsf {C}^\mathsf {t}(1, 1)\right\| _{\mathcal {L}_p}=\frac{{\Gamma \left( {2} \right) {\Gamma ^2}\left( {\frac{1}{p}} \right) \Gamma \left( {2} \right) }}{{\Gamma \left( {1 + \frac{1}{p}} \right) \Gamma \left( {1 + \frac{1}{p}} \right) }}=p^2. \end{aligned}$$

This shows that the second part of Corollary 4.2 is a generalization of Copson's inequality (see Corollary 3.2).
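The collapse of the Gamma-function expression to \(p^2\) rests on the identity \(\Gamma (1+\frac{1}{p}) = \frac{1}{p}\,\Gamma (\frac{1}{p})\). A quick numerical confirmation (illustrative):

```python
from math import gamma

# Gamma(2) * Gamma(1/p)^2 * Gamma(2) / Gamma(1 + 1/p)^2, which reduces
# to p^2 since Gamma(1 + 1/p) = (1/p) * Gamma(1/p) and Gamma(2) = 1.
def copson_double_norm(p):
    return gamma(2.0) * gamma(1.0 / p) ** 2 * gamma(2.0) / gamma(1.0 + 1.0 / p) ** 2

for p in (1.5, 2.0, 3.0):
    print(p, copson_double_norm(p))  # second column equals p**2 up to rounding
```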

To the best of our knowledge, the exact values of \(L_p\left( {\mathsf {H}}_{\mu \times \lambda } \right) \) and \(L_p\left( {\mathsf {H}}_{\mu \times \lambda }^t \right) \) for \(1<p<\infty \) have not been found yet. Keeping in mind factorization (4.7) and the results obtained by Chen and Wang in ([10], Theorem 2.2) for \(L_p(\mathsf {H}_{\mu })\) and \(L_p(\mathsf {H}^\mathsf {t}_{\mu })\), where \(p>1\) and \(\mathsf {H}_{\mu }\) is the classical (two-dimensional) Hausdorff matrix, we are now able to fill this gap by means of Theorem 2.1.

Theorem 4.3

Let p be fixed, \(1<p<\infty \) and \(\mathsf {H}_{\mu \times \lambda }\) be the four-dimensional Hausdorff matrix associated with the measures \(\hbox {d}\mu \) and \(\hbox {d}\lambda \). Then,

$$\begin{aligned} L_p\left( {\mathsf {H}_{\mu \times \lambda }} \right) =\mu \left( \{1\}\right) \times \lambda \left( \{1\}\right) , \end{aligned}$$

and

$$\begin{aligned} L_p\left( {\mathsf {H}_{\mu \times \lambda }^t} \right) ={\left( {{{\left( {\mu \left( {\{ 0\} } \right) } \right) }^p} + {{\left( {\mu \left( {\{ 1\} } \right) } \right) }^p}} \right) ^{\frac{1}{p}}}{\left( {{{\left( {\lambda \left( {\{ 0\} } \right) } \right) }^p} + {{\left( {\lambda \left( {\{ 1\} } \right) } \right) }^p}} \right) ^{\frac{1}{p}}}. \end{aligned}$$

We refer the readers to the paper [18], in which the operator norm and lower bound of some non-factorizable matrices are obtained.